Metadata-Version: 2.4
Name: github-copilot-sdk
Version: 0.2.2rc0
Summary: Python SDK for GitHub Copilot CLI
Author-email: GitHub <opensource@github.com>
License-Expression: MIT AND LicenseRef-Copilot-CLI
Project-URL: Homepage, https://github.com/github/copilot-sdk
Project-URL: Repository, https://github.com/github/copilot-sdk
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
License-File: CLI-LICENSE.md
Requires-Dist: python-dateutil>=2.9.0.post0
Requires-Dist: pydantic>=2.0
Provides-Extra: dev
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: ty<0.0.25,>=0.0.2; extra == "dev"
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: pytest-timeout>=2.0.0; extra == "dev"
Requires-Dist: httpx>=0.24.0; extra == "dev"
Provides-Extra: telemetry
Requires-Dist: opentelemetry-api>=1.0.0; extra == "telemetry"
Dynamic: license-file

# Copilot Python SDK

Python SDK for programmatic control of GitHub Copilot CLI via JSON-RPC.

> **Note:** This SDK is in public preview and may change in breaking ways.

## Installation

```bash
pip install -e ".[telemetry,dev]"
# or
uv pip install -e ".[telemetry,dev]"
```

## Run the Sample

Try the interactive chat sample (from the repo root):

```bash
cd python/samples
python chat.py
```

## Quick Start

```python
import asyncio
from copilot import CopilotClient
from copilot.session import PermissionHandler

async def main():
    # Client automatically starts on enter and cleans up on exit
    async with CopilotClient() as client:
        # Create a session with automatic cleanup
        async with await client.create_session(
            on_permission_request=PermissionHandler.approve_all,
            model="gpt-5",
        ) as session:
            # Wait for response using session.idle event
            done = asyncio.Event()

            def on_event(event):
                if event.type.value == "assistant.message":
                    print(event.data.content)
                elif event.type.value == "session.idle":
                    done.set()

            session.on(on_event)

            # Send a message and wait for completion
            await session.send("What is 2+2?")
            await done.wait()

asyncio.run(main())
```

### Manual Resource Management

If you need more control over the lifecycle, you can call `start()`, `stop()`, and `disconnect()` manually:

```python
import asyncio
from copilot import CopilotClient
from copilot.session import PermissionHandler

async def main():
    client = CopilotClient()
    await client.start()

    # Create a session (on_permission_request is required)
    session = await client.create_session(
        on_permission_request=PermissionHandler.approve_all,
        model="gpt-5",
    )

    done = asyncio.Event()

    def on_event(event):
        if event.type.value == "assistant.message":
            print(event.data.content)
        elif event.type.value == "session.idle":
            done.set()

    session.on(on_event)
    await session.send("What is 2+2?")
    await done.wait()

    # Clean up manually
    await session.disconnect()
    await client.stop()

asyncio.run(main())
```

## Features

- ✅ Full JSON-RPC protocol support
- ✅ stdio and TCP transports
- ✅ Real-time streaming events
- ✅ Session history with `get_messages()`
- ✅ Type hints throughout
- ✅ Async/await native
- ✅ Async context manager support for automatic resource cleanup

## API Reference

### CopilotClient

```python
from copilot import CopilotClient
from copilot.session import PermissionHandler

async with CopilotClient() as client:
    async with await client.create_session(
        on_permission_request=PermissionHandler.approve_all,
        model="gpt-5",
    ) as session:
        def on_event(event):
            print(f"Event: {event.type}")

        session.on(on_event)
        await session.send("Hello!")

        # ... wait for events ...
```

> **Note:** For manual lifecycle management, see [Manual Resource Management](#manual-resource-management) above.

```python
from copilot import CopilotClient, ExternalServerConfig

# Connect to an existing CLI server
client = CopilotClient(ExternalServerConfig(url="localhost:3000"))
```

**CopilotClient Constructor:**

```python
CopilotClient(
    config=None,        # SubprocessConfig | ExternalServerConfig | None
    *,
    auto_start=True,    # auto-start server on first use
    on_list_models=None, # custom handler for list_models()
)
```

**SubprocessConfig** — spawn a local CLI process:

- `cli_path` (str | None): Path to CLI executable (default: `COPILOT_CLI_PATH` env var, or bundled binary)
- `cli_args` (list[str]): Extra arguments for the CLI executable
- `cwd` (str | None): Working directory for CLI process (default: current dir)
- `use_stdio` (bool): Use stdio transport instead of TCP (default: True)
- `port` (int): Server port for TCP mode (default: 0 for random)
- `log_level` (str): Log level (default: "info")
- `env` (dict | None): Environment variables for the CLI process
- `github_token` (str | None): GitHub token for authentication. When provided, takes priority over other auth methods.
- `use_logged_in_user` (bool | None): Whether to use logged-in user for authentication (default: True, but False when `github_token` is provided).
- `telemetry` (dict | None): OpenTelemetry configuration for the CLI process. Providing this enables telemetry — no separate flag needed. See [Telemetry](#telemetry) below.
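
The fields above can be combined into a client like this — a minimal sketch with illustrative values (the `cwd` path and `env` entries are assumptions, not defaults):

```python
import os

from copilot import CopilotClient, SubprocessConfig

# Spawn the CLI over stdio with a few of the options listed above.
client = CopilotClient(SubprocessConfig(
    cli_path=os.environ.get("COPILOT_CLI_PATH"),  # None falls back to the bundled binary
    cwd="/path/to/project",  # hypothetical working directory
    use_stdio=True,          # default transport
    log_level="debug",
    env={"NO_COLOR": "1"},   # hypothetical extra environment variable
))
```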

**ExternalServerConfig** — connect to an existing CLI server:

- `url` (str): Server URL (e.g., `"localhost:8080"`, `"http://127.0.0.1:9000"`, or just `"8080"`).

**`CopilotClient.create_session()`:**

These are passed as keyword arguments to `create_session()`:

- `model` (str): Model to use ("gpt-5", "claude-sonnet-4.5", etc.). **Required when using a custom provider.**
- `reasoning_effort` (str): Reasoning effort level for models that support it ("low", "medium", "high", "xhigh"). Use `list_models()` to check which models support this option.
- `session_id` (str): Custom session ID
- `tools` (list): Custom tools exposed to the CLI
- `system_message` (SystemMessageConfig): System message configuration
- `streaming` (bool): Enable streaming delta events
- `provider` (ProviderConfig): Custom API provider configuration (BYOK). See [Custom Providers](#custom-providers) section.
- `infinite_sessions` (InfiniteSessionConfig): Automatic context compaction configuration
- `on_permission_request` (callable): **Required.** Handler called before each tool execution to approve or deny it. Use `PermissionHandler.approve_all` to allow everything, or provide a custom function for fine-grained control. See [Permission Handling](#permission-handling) section.
- `on_user_input_request` (callable): Handler for user input requests from the agent (enables ask_user tool). See [User Input Requests](#user-input-requests) section.
- `hooks` (SessionHooks): Hook handlers for session lifecycle events. See [Session Hooks](#session-hooks) section.

**Session Lifecycle Methods:**

```python
# Get the session currently displayed in TUI (TUI+server mode only)
session_id = await client.get_foreground_session_id()

# Request TUI to display a specific session (TUI+server mode only)
await client.set_foreground_session_id("session-123")

# Subscribe to all lifecycle events
def on_lifecycle(event):
    print(f"{event.type}: {event.sessionId}")

unsubscribe = client.on(on_lifecycle)

# Subscribe to specific event type
unsubscribe = client.on("session.foreground", lambda e: print(f"Foreground: {e.sessionId}"))

# Later, to stop receiving events:
unsubscribe()
```

**Lifecycle Event Types:**

- `session.created` - A new session was created
- `session.deleted` - A session was deleted
- `session.updated` - A session was updated
- `session.foreground` - A session became the foreground session in TUI
- `session.background` - A session is no longer the foreground session

### Tools

Define tools with automatic JSON schema generation using the `@define_tool` decorator and Pydantic models:

```python
from pydantic import BaseModel, Field
from copilot import CopilotClient, define_tool

class LookupIssueParams(BaseModel):
    id: str = Field(description="Issue identifier")

@define_tool(description="Fetch issue details from our tracker")
async def lookup_issue(params: LookupIssueParams) -> str:
    issue = await fetch_issue(params.id)
    return issue.summary

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    tools=[lookup_issue],
) as session:
    ...
```

> **Note:** When using `from __future__ import annotations`, define Pydantic models at module level (not inside functions).

**Low-level API (without Pydantic):**

For users who prefer manual schema definition:

```python
from copilot import CopilotClient
from copilot.tools import Tool, ToolInvocation, ToolResult
from copilot.session import PermissionHandler

async def lookup_issue(invocation: ToolInvocation) -> ToolResult:
    issue_id = invocation.arguments["id"]
    issue = await fetch_issue(issue_id)
    return ToolResult(
        text_result_for_llm=issue.summary,
        result_type="success",
        session_log=f"Fetched issue {issue_id}",
    )

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    tools=[
        Tool(
            name="lookup_issue",
            description="Fetch issue details from our tracker",
            parameters={
                "type": "object",
                "properties": {
                    "id": {"type": "string", "description": "Issue identifier"},
                },
                "required": ["id"],
            },
            handler=lookup_issue,
        )
    ],
) as session:
    ...
```

The SDK automatically handles `tool.call`, executes your handler (sync or async), and responds with the final result when the tool completes.

#### Overriding Built-in Tools

If you register a tool with the same name as a built-in CLI tool (e.g. `edit_file`, `read_file`), the SDK raises an error unless you explicitly opt in by setting `overrides_built_in_tool=True`. This flag signals that you intend to replace the built-in tool with your custom implementation.

```python
class EditFileParams(BaseModel):
    path: str = Field(description="File path")
    content: str = Field(description="New file content")

@define_tool(name="edit_file", description="Custom file editor with project-specific validation", overrides_built_in_tool=True)
async def edit_file(params: EditFileParams) -> str:
    ...  # your logic
```

#### Skipping Permission Prompts

Set `skip_permission=True` on a tool definition to allow it to execute without triggering a permission prompt:

```python
@define_tool(name="safe_lookup", description="A read-only lookup that needs no confirmation", skip_permission=True)
async def safe_lookup(params: LookupParams) -> str:
    ...  # your logic
```

## Image Support

The SDK supports image attachments via the `attachments` parameter. You can attach images by providing their file path, or by passing base64-encoded data directly using a blob attachment:

```python
# File attachment — runtime reads from disk
await session.send(
    "What's in this image?",
    attachments=[
        {
            "type": "file",
            "path": "/path/to/image.jpg",
        }
    ],
)

# Blob attachment — provide base64 data directly
await session.send(
    "What's in this image?",
    attachments=[
        {
            "type": "blob",
            "data": base64_image_data,
            "mimeType": "image/png",
        }
    ],
)
```

Supported image formats include JPG, PNG, GIF, and other common image types. The agent's `view` tool can also read images directly from the filesystem, so you can ask questions like:

```python
await session.send("What does the most recent jpg in this directory portray?")
```

## Streaming

Enable streaming to receive assistant response chunks as they're generated:

```python
import asyncio
from copilot import CopilotClient
from copilot.session import PermissionHandler

async def main():
    async with CopilotClient() as client:
        async with await client.create_session(
            on_permission_request=PermissionHandler.approve_all,
            model="gpt-5",
            streaming=True,
        ) as session:
            # Use asyncio.Event to wait for completion
            done = asyncio.Event()

            def on_event(event):
                match event.type.value:
                    case "assistant.message_delta":
                        # Streaming message chunk - print incrementally
                        delta = event.data.delta_content or ""
                        print(delta, end="", flush=True)
                    case "assistant.reasoning_delta":
                        # Streaming reasoning chunk (if model supports reasoning)
                        delta = event.data.delta_content or ""
                        print(delta, end="", flush=True)
                    case "assistant.message":
                        # Final message - complete content
                        print("\n--- Final message ---")
                        print(event.data.content)
                    case "assistant.reasoning":
                        # Final reasoning content (if model supports reasoning)
                        print("--- Reasoning ---")
                        print(event.data.content)
                    case "session.idle":
                        # Session finished processing
                        done.set()

            session.on(on_event)
            await session.send("Tell me a short story")
            await done.wait()  # Wait for streaming to complete

asyncio.run(main())
```

When `streaming=True`:

- `assistant.message_delta` events are sent with `delta_content` containing incremental text
- `assistant.reasoning_delta` events are sent with `delta_content` for reasoning/chain-of-thought (model-dependent)
- Accumulate `delta_content` values to build the full response progressively
- The final `assistant.message` and `assistant.reasoning` events contain the complete content

Note: `assistant.message` and `assistant.reasoning` (final events) are always sent regardless of streaming setting.
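
The accumulation step can be sketched independently of the SDK; the event dicts below are simplified stand-ins for the typed events above:

```python
# Pure-Python sketch of delta accumulation; event dicts are simplified
# stand-ins for the SDK's typed streaming events.
def accumulate_deltas(events: list[dict]) -> str:
    """Concatenate assistant.message_delta chunks into the full response."""
    parts = [
        e.get("delta_content") or ""
        for e in events
        if e["type"] == "assistant.message_delta"
    ]
    return "".join(parts)

events = [
    {"type": "assistant.message_delta", "delta_content": "Hello, "},
    {"type": "assistant.message_delta", "delta_content": "world!"},
    {"type": "assistant.message", "content": "Hello, world!"},
]

assert accumulate_deltas(events) == "Hello, world!"  # matches the final event's content
```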

## Infinite Sessions

By default, sessions use **infinite sessions** which automatically manage context window limits through background compaction and persist state to a workspace directory.

```python
# Default: infinite sessions enabled with default thresholds
async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
) as session:
    # Access the workspace path for checkpoints and files
    print(session.workspace_path)
    # => ~/.copilot/session-state/{session_id}/

# Custom thresholds
async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    infinite_sessions={
        "enabled": True,
        "background_compaction_threshold": 0.80,  # Start compacting at 80% context usage
        "buffer_exhaustion_threshold": 0.95,  # Block at 95% until compaction completes
    },
) as session:
    ...

# Disable infinite sessions
async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    infinite_sessions={"enabled": False},
) as session:
    ...
```

When enabled, sessions emit compaction events:

- `session.compaction_start` - Background compaction started
- `session.compaction_complete` - Compaction finished (includes token counts)

## Custom Providers

The SDK supports custom OpenAI-compatible API providers (BYOK - Bring Your Own Key), including local providers like Ollama. When using a custom provider, you must specify the `model` explicitly.

**ProviderConfig fields:**

- `type` (str): Provider type - `"openai"`, `"azure"`, or `"anthropic"` (default: `"openai"`)
- `base_url` (str): API endpoint URL (required)
- `api_key` (str): API key (optional for local providers like Ollama)
- `bearer_token` (str): Bearer token for authentication (takes precedence over `api_key`)
- `wire_api` (str): API format for OpenAI/Azure - `"completions"` or `"responses"` (default: `"completions"`)
- `azure` (dict): Azure-specific options with `api_version` (default: `"2024-10-21"`)

**Example with Ollama:**

```python
async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="deepseek-coder-v2:16b",  # Required when using custom provider
    provider={
        "type": "openai",
        "base_url": "http://localhost:11434/v1",  # Ollama endpoint
        # api_key not required for Ollama
    },
) as session:
    await session.send("Hello!")
```

**Example with custom OpenAI-compatible API:**

```python
import os

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-4",
    provider={
        "type": "openai",
        "base_url": "https://my-api.example.com/v1",
        "api_key": os.environ["MY_API_KEY"],
    },
) as session:
    ...
```

**Example with Azure OpenAI:**

```python
import os

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-4",
    provider={
        "type": "azure",  # Must be "azure" for Azure endpoints, NOT "openai"
        "base_url": "https://my-resource.openai.azure.com",  # Just the host, no path
        "api_key": os.environ["AZURE_OPENAI_KEY"],
        "azure": {
            "api_version": "2024-10-21",
        },
    },
) as session:
    ...
```

> **Important notes:**
>
> - When using a custom provider, the `model` parameter is **required**. The SDK raises an error if no model is specified.
> - For Azure OpenAI endpoints (`*.openai.azure.com`), you **must** use `type: "azure"`, not `type: "openai"`.
> - The `base_url` should be just the host (e.g., `https://my-resource.openai.azure.com`). Do **not** include `/openai/v1` in the URL - the SDK handles path construction automatically.

## Telemetry

The SDK supports OpenTelemetry for distributed tracing. Provide a `telemetry` config to enable trace export and automatic W3C Trace Context propagation.

```python
from copilot import CopilotClient, SubprocessConfig

client = CopilotClient(SubprocessConfig(
    telemetry={
        "otlp_endpoint": "http://localhost:4318",
    },
))
```

**TelemetryConfig options:**

- `otlp_endpoint` (str): OTLP HTTP endpoint URL
- `file_path` (str): File path for JSON-lines trace output
- `exporter_type` (str): `"otlp-http"` or `"file"`
- `source_name` (str): Instrumentation scope name
- `capture_content` (bool): Whether to capture message content
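
For example, the file exporter can be selected like this — a sketch using only the fields listed above (the output path is illustrative):

```python
from copilot import CopilotClient, SubprocessConfig

# Write JSON-lines traces to a local file instead of an OTLP endpoint.
client = CopilotClient(SubprocessConfig(
    telemetry={
        "exporter_type": "file",
        "file_path": "/tmp/copilot-traces.jsonl",  # hypothetical path
        "capture_content": False,  # keep message content out of spans
    },
))
```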

Trace context (`traceparent`/`tracestate`) is automatically propagated between the SDK and CLI on `create_session`, `resume_session`, and `send` calls, and inbound when the CLI invokes tool handlers.

Install with telemetry extras: `pip install "github-copilot-sdk[telemetry]"` (provides `opentelemetry-api`)

## Permission Handling

An `on_permission_request` handler is **required** whenever you create or resume a session. The handler is called before the agent executes each tool (file writes, shell commands, custom tools, etc.) and must return a decision.

### Approve All (simplest)

Use the built-in `PermissionHandler.approve_all` helper to allow every tool call without any checks:

```python
from copilot import CopilotClient
from copilot.session import PermissionHandler

session = await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
)
```

### Custom Permission Handler

Provide your own function to inspect each request and apply custom logic (sync or async):

```python
from copilot.session import PermissionRequestResult
from copilot.generated.session_events import PermissionRequest

def on_permission_request(request: PermissionRequest, invocation: dict) -> PermissionRequestResult:
    # request.kind — what type of operation is being requested:
    #   "shell"       — executing a shell command
    #   "write"       — writing or editing a file
    #   "read"        — reading a file
    #   "mcp"         — calling an MCP tool
    #   "custom-tool" — calling one of your registered tools
    #   "url"         — fetching a URL
    #   "memory"      — accessing or updating session/workspace memory
    #   "hook"        — invoking a registered hook
    # request.tool_call_id  — the tool call that triggered this request
    # request.tool_name     — name of the tool (for custom-tool / mcp)
    # request.file_name     — file being written (for write)
    # request.full_command_text — full shell command (for shell)

    if request.kind.value == "shell":
        # Deny shell commands
        return PermissionRequestResult(kind="denied-interactively-by-user")

    return PermissionRequestResult(kind="approved")

session = await client.create_session(
    on_permission_request=on_permission_request,
    model="gpt-5",
)
```

Async handlers are also supported:

```python
async def on_permission_request(request: PermissionRequest, invocation: dict) -> PermissionRequestResult:
    # Simulate an async approval check (e.g., prompting a user over a network)
    await asyncio.sleep(0)
    return PermissionRequestResult(kind="approved")
```

### Permission Result Kinds

| `kind` value                                                | Meaning                                                                                  |
| ----------------------------------------------------------- | ---------------------------------------------------------------------------------------- |
| `"approved"`                                                | Allow the tool to run                                                                    |
| `"denied-interactively-by-user"`                            | User explicitly denied the request                                                       |
| `"denied-no-approval-rule-and-could-not-request-from-user"` | No approval rule matched and user could not be asked (default when no kind is specified) |
| `"denied-by-rules"`                                         | Denied by a policy rule                                                                  |
| `"denied-by-content-exclusion-policy"`                      | Denied due to a content exclusion policy                                                 |
| `"no-result"`                                               | Leave the request unanswered (not allowed for protocol v2 permission requests)           |

### Resuming Sessions

Pass `on_permission_request` when resuming a session too — it is required:

```python
session = await client.resume_session(
    "session-id",
    on_permission_request=PermissionHandler.approve_all,
)
```

### Per-Tool Skip Permission

To let a specific custom tool bypass the permission prompt entirely, set `skip_permission=True` on the tool definition. See [Skipping Permission Prompts](#skipping-permission-prompts) under Tools.

## User Input Requests

Enable the agent to ask questions to the user using the `ask_user` tool by providing an `on_user_input_request` handler:

```python
async def handle_user_input(request, invocation):
    # request["question"] - The question to ask
    # request.get("choices") - Optional list of choices for multiple choice
    # request.get("allowFreeform", True) - Whether freeform input is allowed

    print(f"Agent asks: {request['question']}")
    if request.get("choices"):
        print(f"Choices: {', '.join(request['choices'])}")

    # Return the user's response
    return {
        "answer": "User's answer here",
        "wasFreeform": True,  # Whether the answer was freeform (not from choices)
    }

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    on_user_input_request=handle_user_input,
) as session:
    ...
```

## Session Hooks

Hook into session lifecycle events by providing handlers in the `hooks` configuration:

```python
async def on_pre_tool_use(input, invocation):
    print(f"About to run tool: {input['toolName']}")
    # Return permission decision and optionally modify args
    return {
        "permissionDecision": "allow",  # "allow", "deny", or "ask"
        "modifiedArgs": input.get("toolArgs"),  # Optionally modify tool arguments
        "additionalContext": "Extra context for the model",
    }

async def on_post_tool_use(input, invocation):
    print(f"Tool {input['toolName']} completed")
    return {
        "additionalContext": "Post-execution notes",
    }

async def on_user_prompt_submitted(input, invocation):
    print(f"User prompt: {input['prompt']}")
    return {
        "modifiedPrompt": input["prompt"],  # Optionally modify the prompt
    }

async def on_session_start(input, invocation):
    print(f"Session started from: {input['source']}")  # "startup", "resume", "new"
    return {
        "additionalContext": "Session initialization context",
    }

async def on_session_end(input, invocation):
    print(f"Session ended: {input['reason']}")

async def on_error_occurred(input, invocation):
    print(f"Error in {input['errorContext']}: {input['error']}")
    return {
        "errorHandling": "retry",  # "retry", "skip", or "abort"
    }

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    model="gpt-5",
    hooks={
        "on_pre_tool_use": on_pre_tool_use,
        "on_post_tool_use": on_post_tool_use,
        "on_user_prompt_submitted": on_user_prompt_submitted,
        "on_session_start": on_session_start,
        "on_session_end": on_session_end,
        "on_error_occurred": on_error_occurred,
    },
) as session:
    ...
```

**Available hooks:**

- `on_pre_tool_use` - Intercept tool calls before execution. Can allow/deny or modify arguments.
- `on_post_tool_use` - Process tool results after execution. Can modify results or add context.
- `on_user_prompt_submitted` - Intercept user prompts. Can modify the prompt before processing.
- `on_session_start` - Run logic when a session starts or resumes.
- `on_session_end` - Cleanup or logging when session ends.
- `on_error_occurred` - Handle errors with retry/skip/abort strategies.

## Commands

Register slash commands that users can invoke from the CLI TUI. When the user types `/commandName`, the SDK dispatches the event to your handler.

```python
from copilot.session import CommandDefinition, CommandContext, PermissionHandler

async def handle_deploy(ctx: CommandContext) -> None:
    print(f"Deploying with args: {ctx.args}")
    # ctx.session_id   — the session where the command was invoked
    # ctx.command      — full command text (e.g. "/deploy production")
    # ctx.command_name — command name without leading / (e.g. "deploy")
    # ctx.args         — raw argument string (e.g. "production")

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    commands=[
        CommandDefinition(
            name="deploy",
            description="Deploy the app",
            handler=handle_deploy,
        ),
        CommandDefinition(
            name="rollback",
            description="Rollback to previous version",
            handler=lambda ctx: print("Rolling back..."),
        ),
    ],
) as session:
    ...
```

Commands can also be provided when resuming a session via `resume_session(commands=[...])`.
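
For instance, a sketch reusing the `handle_deploy` handler above (the session ID is illustrative):

```python
session = await client.resume_session(
    "session-id",
    on_permission_request=PermissionHandler.approve_all,
    commands=[
        CommandDefinition(
            name="deploy",
            description="Deploy the app",
            handler=handle_deploy,
        ),
    ],
)
```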

## UI Elicitation

The `session.ui` API provides convenience methods for asking the user questions through interactive dialogs. These methods are only available when the CLI host supports elicitation — check `session.capabilities` before calling.

### Capability Check

```python
ui_caps = session.capabilities.get("ui", {})
if ui_caps.get("elicitation"):
    # Safe to call session.ui methods
    ...
```

### Confirm

Shows a yes/no confirmation dialog:

```python
ok = await session.ui.confirm("Deploy to production?")
if ok:
    print("Deploying...")
```

### Select

Shows a selection dialog with a list of options:

```python
env = await session.ui.select("Choose environment:", ["staging", "production", "dev"])
if env:
    print(f"Selected: {env}")
```

### Input

Shows a text input dialog with optional constraints:

```python
name = await session.ui.input("Enter your name:")

# With options
email = await session.ui.input("Enter email:", {
    "title": "Email Address",
    "description": "We'll use this for notifications",
    "format": "email",
})
```

### Custom Elicitation

For full control, use the `elicitation()` method with a custom JSON schema:

```python
result = await session.ui.elicitation({
    "message": "Configure deployment",
    "requestedSchema": {
        "type": "object",
        "properties": {
            "region": {"type": "string", "enum": ["us-east-1", "eu-west-1"]},
            "replicas": {"type": "number", "minimum": 1, "maximum": 10},
        },
        "required": ["region"],
    },
})

if result["action"] == "accept":
    region = result["content"]["region"]
    replicas = result["content"].get("replicas", 1)
```

## Elicitation Request Handler

When the server (or an MCP tool) needs to ask the end-user a question, it sends an `elicitation.requested` event. Provide an `on_elicitation_request` handler to respond:

```python
from copilot.session import ElicitationContext, ElicitationResult, PermissionHandler

async def handle_elicitation(
    context: ElicitationContext,
) -> ElicitationResult:
    # context["session_id"]           — the session ID
    # context["message"]              — what the server is asking
    # context.get("requestedSchema")  — optional JSON schema for form fields
    # context.get("mode")             — "form" or "url"

    print(f"Server asks: {context['message']}")

    # Return the user's response
    return {
        "action": "accept",  # or "decline" or "cancel"
        "content": {"answer": "yes"},
    }

async with await client.create_session(
    on_permission_request=PermissionHandler.approve_all,
    on_elicitation_request=handle_elicitation,
) as session:
    ...
```

When `on_elicitation_request` is provided, the SDK automatically:

- Sends `requestElicitation: true` to the server during session creation/resumption
- Reports the `elicitation` capability on the session
- Dispatches `elicitation.requested` events to your handler
- Auto-cancels if your handler throws an error (so the server doesn't hang)

## Requirements

- Python 3.11+
- GitHub Copilot CLI installed and accessible
