Metadata-Version: 2.4
Name: github-copilot-sdk
Version: 0.2.0
Summary: Python SDK for GitHub Copilot CLI
Author-email: GitHub <opensource@github.com>
License-Expression: MIT AND LicenseRef-Copilot-CLI
Project-URL: Homepage, https://github.com/github/copilot-sdk
Project-URL: Repository, https://github.com/github/copilot-sdk
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
License-File: CLI-LICENSE.md
Requires-Dist: python-dateutil>=2.9.0.post0
Requires-Dist: pydantic>=2.0
Provides-Extra: dev
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: ty>=0.0.2; extra == "dev"
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: pytest-timeout>=2.0.0; extra == "dev"
Requires-Dist: httpx>=0.24.0; extra == "dev"
Provides-Extra: telemetry
Requires-Dist: opentelemetry-api>=1.0.0; extra == "telemetry"
Dynamic: license-file

# Copilot Python SDK

Python SDK for programmatic control of GitHub Copilot CLI via JSON-RPC.

> **Note:** This SDK is in technical preview and may change in breaking ways.

## Installation

```bash
pip install -e ".[telemetry,dev]"
# or
uv pip install -e ".[telemetry,dev]"
```

## Run the Sample

Try the interactive chat sample (from the repo root):

```bash
cd python/samples
python chat.py
```

## Quick Start

```python
import asyncio
from copilot import CopilotClient, PermissionHandler

async def main():
    # Create and start client
    client = CopilotClient()
    await client.start()

    # Create a session (on_permission_request is required)
    session = await client.create_session(on_permission_request=PermissionHandler.approve_all, model="gpt-5")

    # Wait for response using session.idle event
    done = asyncio.Event()

    def on_event(event):
        if event.type.value == "assistant.message":
            print(event.data.content)
        elif event.type.value == "session.idle":
            done.set()

    session.on(on_event)

    # Send a message and wait for completion
    await session.send("What is 2+2?")
    await done.wait()

    # Clean up
    await session.disconnect()
    await client.stop()

asyncio.run(main())
```

Sessions also support the `async with` context manager pattern for automatic cleanup:

```python
async with await client.create_session(on_permission_request=PermissionHandler.approve_all, model="gpt-5") as session:
    await session.send("What is 2+2?")
    # session is automatically disconnected when leaving the block
```

## Features

- ✅ Full JSON-RPC protocol support
- ✅ stdio and TCP transports
- ✅ Real-time streaming events
- ✅ Session history with `get_messages()`
- ✅ Type hints throughout
- ✅ Async/await native

## API Reference

### CopilotClient

```python
from copilot import CopilotClient, SubprocessConfig

# Spawn a local CLI process (default)
client = CopilotClient()  # uses bundled CLI, stdio transport
await client.start()

session = await client.create_session(on_permission_request=PermissionHandler.approve_all, model="gpt-5")

def on_event(event):
    print(f"Event: {event.type.value}")

session.on(on_event)
await session.send("Hello!")

# ... wait for events ...

await session.disconnect()
await client.stop()
```

```python
from copilot import CopilotClient, ExternalServerConfig

# Connect to an existing CLI server
client = CopilotClient(ExternalServerConfig(url="localhost:3000"))
```

**CopilotClient Constructor:**

```python
CopilotClient(
    config=None,        # SubprocessConfig | ExternalServerConfig | None
    *,
    auto_start=True,    # auto-start server on first use
    on_list_models=None, # custom handler for list_models()
)
```

**SubprocessConfig** — spawn a local CLI process:

- `cli_path` (str | None): Path to CLI executable (default: bundled binary)
- `cli_args` (list[str]): Extra arguments for the CLI executable
- `cwd` (str | None): Working directory for CLI process (default: current dir)
- `use_stdio` (bool): Use stdio transport instead of TCP (default: True)
- `port` (int): Server port for TCP mode (default: 0 for random)
- `log_level` (str): Log level (default: "info")
- `env` (dict | None): Environment variables for the CLI process
- `github_token` (str | None): GitHub token for authentication. When provided, takes priority over other auth methods.
- `use_logged_in_user` (bool | None): Whether to use logged-in user for authentication (default: True, but False when `github_token` is provided).
- `telemetry` (dict | None): OpenTelemetry configuration for the CLI process. Providing this enables telemetry — no separate flag needed. See [Telemetry](#telemetry) below.

**ExternalServerConfig** — connect to an existing CLI server:

- `url` (str): Server URL (e.g., `"localhost:8080"`, `"http://127.0.0.1:9000"`, or just `"8080"`).

**`create_session` Parameters:**

All parameters are keyword-only:

- `on_permission_request` (callable): **Required.** Handler called before each tool execution to approve or deny it. Use `PermissionHandler.approve_all` to allow everything, or provide a custom function for fine-grained control. See [Permission Handling](#permission-handling) section.
- `model` (str): Model to use ("gpt-5", "claude-sonnet-4.5", etc.).
- `session_id` (str): Custom session ID for resuming or identifying sessions.
- `client_name` (str): Client name to identify the application using the SDK. Included in the User-Agent header for API requests.
- `reasoning_effort` (str): Reasoning effort level for models that support it ("low", "medium", "high", "xhigh"). Use `list_models()` to check which models support this option.
- `tools` (list): Custom tools exposed to the CLI.
- `system_message` (dict): System message configuration. Supports three modes:
  - **append** (default): Appends `content` after the SDK-managed prompt
  - **replace**: Replaces the entire prompt with `content`
  - **customize**: Selectively override individual sections via `sections` dict (keys: `"identity"`, `"tone"`, `"tool_efficiency"`, `"environment_context"`, `"code_change_rules"`, `"guidelines"`, `"safety"`, `"tool_instructions"`, `"custom_instructions"`, `"last_instructions"`; values: `SectionOverride` with `action` and optional `content`)
- `available_tools` (list[str]): List of tool names to allow. Takes precedence over `excluded_tools`.
- `excluded_tools` (list[str]): List of tool names to disable. Ignored if `available_tools` is set.
- `on_user_input_request` (callable): Handler for user input requests from the agent (enables ask_user tool). See [User Input Requests](#user-input-requests) section.
- `hooks` (dict): Hook handlers for session lifecycle events. See [Session Hooks](#session-hooks) section.
- `working_directory` (str): Working directory for the session. Tool operations will be relative to this directory.
- `provider` (dict): Custom API provider configuration (BYOK). See [Custom Providers](#custom-providers) section.
- `streaming` (bool): Enable streaming delta events.
- `mcp_servers` (dict): MCP server configurations for the session.
- `custom_agents` (list): Custom agent configurations for the session.
- `config_dir` (str): Override the default configuration directory location.
- `skill_directories` (list[str]): Directories to load skills from.
- `disabled_skills` (list[str]): List of skill names to disable.
- `infinite_sessions` (dict): Automatic context compaction configuration.
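
The precedence between `available_tools` and `excluded_tools` can be illustrated with a small helper. This mirrors the documented semantics only, not SDK internals:

```python
def effective_tools(all_tools, available_tools=None, excluded_tools=None):
    """Documented precedence: when available_tools is set it acts as an
    allow-list and excluded_tools is ignored; otherwise excluded_tools
    filters the set; with neither, everything is allowed."""
    if available_tools is not None:
        allowed = set(available_tools)
        return [t for t in all_tools if t in allowed]
    if excluded_tools:
        blocked = set(excluded_tools)
        return [t for t in all_tools if t not in blocked]
    return list(all_tools)

tools = ["read_file", "edit_file", "shell"]
print(effective_tools(tools, excluded_tools=["shell"]))  # ['read_file', 'edit_file']
# available_tools wins even if the same name appears in excluded_tools:
print(effective_tools(tools, available_tools=["read_file"], excluded_tools=["read_file"]))  # ['read_file']
```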

**`resume_session` Parameters:**

- `session_id` (str): **Required.** The ID of the session to resume.

The parameters below are keyword-only:

- `on_permission_request` (callable): **Required.** Handler called before each tool execution to approve or deny it. Use `PermissionHandler.approve_all` to allow everything, or provide a custom function for fine-grained control. See [Permission Handling](#permission-handling) section.
- `model` (str): Model to use (can change the model when resuming).
- `client_name` (str): Client name to identify the application using the SDK.
- `reasoning_effort` (str): Reasoning effort level ("low", "medium", "high", "xhigh").
- `tools` (list): Custom tools exposed to the CLI.
- `system_message` (dict): System message configuration.
- `available_tools` (list[str]): List of tool names to allow. Takes precedence over `excluded_tools`.
- `excluded_tools` (list[str]): List of tool names to disable. Ignored if `available_tools` is set.
- `on_user_input_request` (callable): Handler for user input requests from the agent (enables ask_user tool).
- `hooks` (dict): Hook handlers for session lifecycle events.
- `working_directory` (str): Working directory for the session.
- `provider` (dict): Custom API provider configuration (BYOK).
- `streaming` (bool): Enable streaming delta events.
- `mcp_servers` (dict): MCP server configurations for the session.
- `custom_agents` (list): Custom agent configurations for the session.
- `agent` (str): Name of the custom agent to activate when the session starts.
- `config_dir` (str): Override the default configuration directory location.
- `skill_directories` (list[str]): Directories to load skills from.
- `disabled_skills` (list[str]): List of skill names to disable.
- `infinite_sessions` (dict): Automatic context compaction configuration.
- `disable_resume` (bool): Skip emitting the session.resume event (default: False).
- `on_event` (callable): Event handler registered before the session.resume RPC.

**Session Lifecycle Methods:**

```python
# Get the session currently displayed in TUI (TUI+server mode only)
session_id = await client.get_foreground_session_id()

# Request TUI to display a specific session (TUI+server mode only)
await client.set_foreground_session_id("session-123")

# Subscribe to all lifecycle events
def on_lifecycle(event):
    print(f"{event.type}: {event.sessionId}")

unsubscribe = client.on(on_lifecycle)

# Subscribe to specific event type
unsubscribe = client.on("session.foreground", lambda e: print(f"Foreground: {e.sessionId}"))

# Later, to stop receiving events:
unsubscribe()
```

**Lifecycle Event Types:**
- `session.created` - A new session was created
- `session.deleted` - A session was deleted
- `session.updated` - A session was updated
- `session.foreground` - A session became the foreground session in TUI
- `session.background` - A session is no longer the foreground session

### System Message Customization

Control the system prompt using `system_message` in session config:

```python
session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    system_message={
        "content": "Always check for security vulnerabilities before suggesting changes."
    },
)
```

The SDK auto-injects environment context, tool instructions, and security guardrails. The default CLI persona is preserved, and your `content` is appended after SDK-managed sections. To change the persona or fully redefine the prompt, use `mode: "replace"` or `mode: "customize"`.
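
For example, a replace-mode configuration might look like the following. The prompt text is illustrative, not a recommended persona:

```python
# Hedged sketch: a replace-mode system_message config. With "replace",
# the SDK-managed sections are dropped and this content becomes the
# entire system prompt.
system_message = {
    "mode": "replace",
    "content": "You are a release-notes assistant. "
               "Summarize changes concisely and never modify files.",
}
# Pass it as create_session(..., system_message=system_message).
```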

#### Customize Mode

Use `mode: "customize"` to selectively override individual sections of the prompt while preserving the rest:

```python
from copilot import SYSTEM_PROMPT_SECTIONS

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    system_message={
        "mode": "customize",
        "sections": {
            # Replace the tone/style section
            "tone": {"action": "replace", "content": "Respond in a warm, professional tone. Be thorough in explanations."},
            # Remove coding-specific rules
            "code_change_rules": {"action": "remove"},
            # Append to existing guidelines
            "guidelines": {"action": "append", "content": "\n* Always cite data sources"},
        },
        # Additional instructions appended after all sections
        "content": "Focus on financial analysis and reporting.",
    }
)
```

Available section IDs: `"identity"`, `"tone"`, `"tool_efficiency"`, `"environment_context"`, `"code_change_rules"`, `"guidelines"`, `"safety"`, `"tool_instructions"`, `"custom_instructions"`, `"last_instructions"`. Use the `SYSTEM_PROMPT_SECTIONS` dict for descriptions of each section.

Each section override supports four actions:
- **`replace`** — Replace the section content entirely
- **`remove`** — Remove the section from the prompt
- **`append`** — Add content after the existing section
- **`prepend`** — Add content before the existing section

Unknown section IDs are handled gracefully: content from `replace`/`append`/`prepend` overrides is appended to additional instructions, and `remove` overrides are silently ignored.

### Tools

Define tools with automatic JSON schema generation using the `@define_tool` decorator and Pydantic models:

```python
from pydantic import BaseModel, Field
from copilot import CopilotClient, define_tool, PermissionHandler

class LookupIssueParams(BaseModel):
    id: str = Field(description="Issue identifier")

@define_tool(description="Fetch issue details from our tracker")
async def lookup_issue(params: LookupIssueParams) -> str:
    issue = await fetch_issue(params.id)
    return issue.summary

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    tools=[lookup_issue],
)
```

> **Note:** When using `from __future__ import annotations`, define Pydantic models at module level (not inside functions).

**Low-level API (without Pydantic):**

For users who prefer manual schema definition:

```python
from copilot import CopilotClient, Tool, PermissionHandler

async def lookup_issue(invocation):
    issue_id = invocation["arguments"]["id"]
    issue = await fetch_issue(issue_id)
    return {
        "textResultForLlm": issue.summary,
        "resultType": "success",
        "sessionLog": f"Fetched issue {issue_id}",
    }

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    tools=[
        Tool(
            name="lookup_issue",
            description="Fetch issue details from our tracker",
            parameters={
                "type": "object",
                "properties": {
                    "id": {"type": "string", "description": "Issue identifier"},
                },
                "required": ["id"],
            },
            handler=lookup_issue,
        )
    ],
)
```

The SDK automatically handles `tool.call`, executes your handler (sync or async), and responds with the final result when the tool completes.

#### Overriding Built-in Tools

If you register a tool with the same name as a built-in CLI tool (e.g. `edit_file`, `read_file`), the SDK will throw an error unless you explicitly opt in by setting `overrides_built_in_tool=True`. This flag signals that you intend to replace the built-in tool with your custom implementation.

```python
class EditFileParams(BaseModel):
    path: str = Field(description="File path")
    content: str = Field(description="New file content")

@define_tool(name="edit_file", description="Custom file editor with project-specific validation", overrides_built_in_tool=True)
async def edit_file(params: EditFileParams) -> str:
    # your logic
    ...
```

#### Skipping Permission Prompts

Set `skip_permission=True` on a tool definition to allow it to execute without triggering a permission prompt:

```python
@define_tool(name="safe_lookup", description="A read-only lookup that needs no confirmation", skip_permission=True)
async def safe_lookup(params: LookupParams) -> str:
    # your logic
    ...
```

## Image Support

The SDK supports image attachments via the `attachments` parameter. You can attach images by providing their file path, or by passing base64-encoded data directly using a blob attachment:

```python
# File attachment — runtime reads from disk
await session.send(
    "What's in this image?",
    attachments=[
        {
            "type": "file",
            "path": "/path/to/image.jpg",
        }
    ],
)

# Blob attachment — provide base64 data directly
await session.send(
    "What's in this image?",
    attachments=[
        {
            "type": "blob",
            "data": base64_image_data,
            "mimeType": "image/png",
        }
    ],
)
```

Supported image formats include JPG, PNG, GIF, and other common image types. The agent's `view` tool can also read images directly from the filesystem, so you can ask questions like:

```python
await session.send("What does the most recent jpg in this directory portray?")
```

## Streaming

Enable streaming to receive assistant response chunks as they're generated:

```python
import asyncio
from copilot import CopilotClient, PermissionHandler

async def main():
    client = CopilotClient()
    await client.start()

    session = await client.create_session(
        on_permission_request=PermissionHandler.approve_all,
        model="gpt-5",
        streaming=True,
    )

    # Use asyncio.Event to wait for completion
    done = asyncio.Event()

    def on_event(event):
        if event.type.value == "assistant.message_delta":
            # Streaming message chunk - print incrementally
            delta = event.data.delta_content or ""
            print(delta, end="", flush=True)
        elif event.type.value == "assistant.reasoning_delta":
            # Streaming reasoning chunk (if model supports reasoning)
            delta = event.data.delta_content or ""
            print(delta, end="", flush=True)
        elif event.type.value == "assistant.message":
            # Final message - complete content
            print("\n--- Final message ---")
            print(event.data.content)
        elif event.type.value == "assistant.reasoning":
            # Final reasoning content (if model supports reasoning)
            print("--- Reasoning ---")
            print(event.data.content)
        elif event.type.value == "session.idle":
            # Session finished processing
            done.set()

    session.on(on_event)
    await session.send("Tell me a short story")
    await done.wait()  # Wait for streaming to complete

    await session.disconnect()
    await client.stop()

asyncio.run(main())
```

When `streaming=True`:

- `assistant.message_delta` events are sent with `delta_content` containing incremental text
- `assistant.reasoning_delta` events are sent with `delta_content` for reasoning/chain-of-thought (model-dependent)
- Accumulate `delta_content` values to build the full response progressively
- The final `assistant.message` and `assistant.reasoning` events contain the complete content

Note: `assistant.message` and `assistant.reasoning` (final events) are always sent regardless of streaming setting.
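
The accumulation step can be sketched with plain strings. The chunks below are simulated; in a real handler you would read `event.data.delta_content` as shown above:

```python
def accumulate(deltas):
    """Join incremental delta_content chunks into the full message text."""
    # delta_content may be None for some events, so coalesce to "".
    return "".join(d or "" for d in deltas)

chunks = ["Once ", "upon ", None, "a time."]
print(accumulate(chunks))  # Once upon a time.
```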

## Infinite Sessions

By default, sessions use **infinite sessions**, which automatically manage context window limits through background compaction and persist state to a workspace directory.

```python
# Default: infinite sessions enabled with default thresholds
session = await client.create_session(on_permission_request=PermissionHandler.approve_all, model="gpt-5")

# Access the workspace path for checkpoints and files
print(session.workspace_path)
# => ~/.copilot/session-state/{session_id}/

# Custom thresholds
session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    infinite_sessions={
        "enabled": True,
        "background_compaction_threshold": 0.80,  # Start compacting at 80% context usage
        "buffer_exhaustion_threshold": 0.95,  # Block at 95% until compaction completes
    },
)

# Disable infinite sessions
session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    infinite_sessions={"enabled": False},
)
```

When enabled, sessions emit compaction events:

- `session.compaction_start` - Background compaction started
- `session.compaction_complete` - Compaction finished (includes token counts)

## Custom Providers

The SDK supports custom OpenAI-compatible API providers (BYOK - Bring Your Own Key), including local providers like Ollama. When using a custom provider, you must specify the `model` explicitly.

**ProviderConfig fields:**

- `type` (str): Provider type - `"openai"`, `"azure"`, or `"anthropic"` (default: `"openai"`)
- `base_url` (str): API endpoint URL (required)
- `api_key` (str): API key (optional for local providers like Ollama)
- `bearer_token` (str): Bearer token for authentication (takes precedence over `api_key`)
- `wire_api` (str): API format for OpenAI/Azure - `"completions"` or `"responses"` (default: `"completions"`)
- `azure` (dict): Azure-specific options with `api_version` (default: `"2024-10-21"`)

**Example with Ollama:**

```python
session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="deepseek-coder-v2:16b",  # Model to use with the custom provider
    provider={
        "type": "openai",
        "base_url": "http://localhost:11434/v1",  # Ollama endpoint
        # api_key not required for Ollama
    },
)

await session.send("Hello!")
```

**Example with custom OpenAI-compatible API:**

```python
import os

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-4",
    provider={
        "type": "openai",
        "base_url": "https://my-api.example.com/v1",
        "api_key": os.environ["MY_API_KEY"],
    },
)
```

**Example with Azure OpenAI:**

```python
import os

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-4",
    provider={
        "type": "azure",  # Must be "azure" for Azure endpoints, NOT "openai"
        "base_url": "https://my-resource.openai.azure.com",  # Just the host, no path
        "api_key": os.environ["AZURE_OPENAI_KEY"],
        "azure": {
            "api_version": "2024-10-21",
        },
    },
)
```

> **Important notes:**
> - For Azure OpenAI endpoints (`*.openai.azure.com`), you **must** use `type: "azure"`, not `type: "openai"`.
> - The `base_url` should be just the host (e.g., `https://my-resource.openai.azure.com`). Do **not** include `/openai/v1` in the URL - the SDK handles path construction automatically.

## Telemetry

The SDK supports OpenTelemetry for distributed tracing. Provide a `telemetry` config to enable trace export and automatic W3C Trace Context propagation.

```python
from copilot import CopilotClient, SubprocessConfig

client = CopilotClient(SubprocessConfig(
    telemetry={
        "otlp_endpoint": "http://localhost:4318",
    },
))
```

**TelemetryConfig options:**

- `otlp_endpoint` (str): OTLP HTTP endpoint URL
- `file_path` (str): File path for JSON-lines trace output
- `exporter_type` (str): `"otlp-http"` or `"file"`
- `source_name` (str): Instrumentation scope name
- `capture_content` (bool): Whether to capture message content
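
For example, to write traces to a local JSON-lines file instead of an OTLP endpoint (the path here is illustrative):

```python
# Hedged sketch: a file-exporter telemetry config, built from the
# options listed above.
telemetry = {
    "exporter_type": "file",
    "file_path": "./copilot-traces.jsonl",  # illustrative path
    "capture_content": False,  # omit message content from spans
}
# Pass it as SubprocessConfig(telemetry=telemetry).
```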

Trace context (`traceparent`/`tracestate`) is automatically propagated between the SDK and CLI on `create_session`, `resume_session`, and `send` calls, and inbound when the CLI invokes tool handlers.

Install with telemetry extras: `pip install "github-copilot-sdk[telemetry]"` (provides `opentelemetry-api`).

## Permission Handling

An `on_permission_request` handler is **required** whenever you create or resume a session. The handler is called before the agent executes each tool (file writes, shell commands, custom tools, etc.) and must return a decision.

### Approve All (simplest)

Use the built-in `PermissionHandler.approve_all` helper to allow every tool call without any checks:

```python
from copilot import CopilotClient, PermissionHandler

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
)
```

### Custom Permission Handler

Provide your own function to inspect each request and apply custom logic (sync or async):

```python
from copilot import PermissionRequest, PermissionRequestResult

def on_permission_request(request: PermissionRequest, invocation: dict) -> PermissionRequestResult:
    # request.kind — what type of operation is being requested:
    #   "shell"       — executing a shell command
    #   "write"       — writing or editing a file
    #   "read"        — reading a file
    #   "mcp"         — calling an MCP tool
    #   "custom-tool" — calling one of your registered tools
    #   "url"         — fetching a URL
    #   "memory"      — accessing or updating session/workspace memory
    #   "hook"        — invoking a registered hook
    # request.tool_call_id  — the tool call that triggered this request
    # request.tool_name     — name of the tool (for custom-tool / mcp)
    # request.file_name     — file being written (for write)
    # request.full_command_text — full shell command (for shell)

    if request.kind.value == "shell":
        # Deny shell commands
        return PermissionRequestResult(kind="denied-interactively-by-user")

    return PermissionRequestResult(kind="approved")

session = await client.create_session(
    on_permission_request=on_permission_request,
    model="gpt-5",
)
```

Async handlers are also supported:

```python
async def on_permission_request(request: PermissionRequest, invocation: dict) -> PermissionRequestResult:
    # Simulate an async approval check (e.g., prompting a user over a network)
    await asyncio.sleep(0)
    return PermissionRequestResult(kind="approved")
```

### Permission Result Kinds

| `kind` value | Meaning |
|---|---------|
| `"approved"` | Allow the tool to run |
| `"denied-interactively-by-user"` | User explicitly denied the request |
| `"denied-no-approval-rule-and-could-not-request-from-user"` | No approval rule matched and user could not be asked (default when no kind is specified) |
| `"denied-by-rules"` | Denied by a policy rule |
| `"denied-by-content-exclusion-policy"` | Denied due to a content exclusion policy |
| `"no-result"` | Leave the request unanswered (not allowed for protocol v2 permission requests) |

### Resuming Sessions

Pass `on_permission_request` when resuming a session too — it is required:

```python
session = await client.resume_session(
    "session-id",
    on_permission_request=PermissionHandler.approve_all,
)
```

### Per-Tool Skip Permission

To let a specific custom tool bypass the permission prompt entirely, set `skip_permission=True` on the tool definition. See [Skipping Permission Prompts](#skipping-permission-prompts) under Tools.

## User Input Requests

Enable the agent to ask questions to the user using the `ask_user` tool by providing an `on_user_input_request` handler:

```python
async def handle_user_input(request, invocation):
    # request["question"] - The question to ask
    # request.get("choices") - Optional list of choices for multiple choice
    # request.get("allowFreeform", True) - Whether freeform input is allowed

    print(f"Agent asks: {request['question']}")
    if request.get("choices"):
        print(f"Choices: {', '.join(request['choices'])}")

    # Return the user's response
    return {
        "answer": "User's answer here",
        "wasFreeform": True,  # Whether the answer was freeform (not from choices)
    }

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    on_user_input_request=handle_user_input,
)
```

## Session Hooks

Hook into session lifecycle events by providing handlers in the `hooks` configuration:

```python
async def on_pre_tool_use(input, invocation):
    print(f"About to run tool: {input['toolName']}")
    # Return permission decision and optionally modify args
    return {
        "permissionDecision": "allow",  # "allow", "deny", or "ask"
        "modifiedArgs": input.get("toolArgs"),  # Optionally modify tool arguments
        "additionalContext": "Extra context for the model",
    }

async def on_post_tool_use(input, invocation):
    print(f"Tool {input['toolName']} completed")
    return {
        "additionalContext": "Post-execution notes",
    }

async def on_user_prompt_submitted(input, invocation):
    print(f"User prompt: {input['prompt']}")
    return {
        "modifiedPrompt": input["prompt"],  # Optionally modify the prompt
    }

async def on_session_start(input, invocation):
    print(f"Session started from: {input['source']}")  # "startup", "resume", "new"
    return {
        "additionalContext": "Session initialization context",
    }

async def on_session_end(input, invocation):
    print(f"Session ended: {input['reason']}")

async def on_error_occurred(input, invocation):
    print(f"Error in {input['errorContext']}: {input['error']}")
    return {
        "errorHandling": "retry",  # "retry", "skip", or "abort"
    }

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    hooks={
        "on_pre_tool_use": on_pre_tool_use,
        "on_post_tool_use": on_post_tool_use,
        "on_user_prompt_submitted": on_user_prompt_submitted,
        "on_session_start": on_session_start,
        "on_session_end": on_session_end,
        "on_error_occurred": on_error_occurred,
    },
)
```

**Available hooks:**

- `on_pre_tool_use` - Intercept tool calls before execution. Can allow/deny or modify arguments.
- `on_post_tool_use` - Process tool results after execution. Can modify results or add context.
- `on_user_prompt_submitted` - Intercept user prompts. Can modify the prompt before processing.
- `on_session_start` - Run logic when a session starts or resumes.
- `on_session_end` - Cleanup or logging when session ends.
- `on_error_occurred` - Handle errors with retry/skip/abort strategies.
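
As a small example, a pre-tool-use hook can gate tools against a deny-list. The set below is illustrative; the input and return shapes follow the example above:

```python
import asyncio

BLOCKED_TOOLS = {"shell"}  # illustrative deny-list

async def deny_listed_tools(input, invocation=None):
    """Pre-tool-use hook: deny any tool whose name is on the deny-list,
    allow everything else."""
    decision = "deny" if input["toolName"] in BLOCKED_TOOLS else "allow"
    return {"permissionDecision": decision}

print(asyncio.run(deny_listed_tools({"toolName": "shell"})))  # {'permissionDecision': 'deny'}
```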

## Requirements

- Python 3.11+
- GitHub Copilot CLI installed and accessible
