Describe the bug
When launching a background agent via the task tool with an explicit model parameter, the agent silently substitutes
a different model and completes without any indication of the substitution — until you inspect the model: field in
the result metadata.
Environment: GitHub Copilot CLI v1.0.44, Windows
Affected version
GitHub Copilot CLI 1.0.44
Steps to reproduce the behavior
- With claude-sonnet as the active model, launch background agents with model: "claude-opus-4.7", model: "gpt-5.5", and model: "gpt-5.3-codex" by prompting "do a tri-model review using claude opus 4.7, gpt 5.5 and gpt 5.3 codex".
- Each agent completes with status: idle.
- Read the result — model: claude-sonnet-4.6 appears in the metadata, not the requested model.
Actual behavior:
The agent silently downgrades to claude-sonnet-4.6 and returns a result. There is no warning, no error, and nothing in the output indicating that a substitution occurred. The substitution is only discoverable by reading the raw model: metadata line in the agent result.
Why this matters:
Workflows that depend on specific model capabilities (e.g., multi-model review loops requiring claude-opus-4.7,
gpt-5.5, and gpt-5.3-codex for independent perspectives) are silently invalidated. The caller believes they got
three independent reviews when in fact two were the same model. gpt-5.3-codex appeared to work correctly; it was
claude-opus-4.7 and gpt-5.5 that substituted.
Expected behavior
Either:
- Use the requested model, or
- Fail with a clear error, e.g. "Model 'claude-opus-4.7' is not available in this environment", so the caller can decide how to proceed
Additional context
No response