Preview native LLM runtime stack #27114
Draft
kitlangton wants to merge 8 commits into
Summary
- `@opencode-ai/llm` `LLMEvent` shape consumed by the session processor.
- `LLM.Service` -> native `LLMClient` -> real `RequestExecutor.layer` path with only the HTTP client swapped for `HttpRecorder.recordingLayer(...)`.

Beta Opt-In
Use your normal console login and model config. Then start opencode with:
That is the intended beta path.
The flag is safe to use globally because native is only attempted for supported requests. Unsupported requests fall back to AI SDK.
Runtime Selection
The default runtime remains AI SDK.
Native is only attempted when `experimentalNativeLlm` is enabled through `RuntimeFlags`. Native execution currently runs only when all of these are true:

- the provider is `openai` or console-managed `opencode`/Zen
- the model uses `@ai-sdk/openai`

If any condition is not met, opencode logs the reason and falls back to AI SDK for that request.
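A minimal sketch of this gating logic, assuming a simplified `RuntimeFlags` shape and a hypothetical `selectRuntime` helper (illustrative names, not the real API):

```typescript
// Simplified stand-in for the runtime flags described above.
interface RuntimeFlags {
  experimentalNativeLlm: boolean;
}

type Selection =
  | { runtime: "native" }
  | { runtime: "ai-sdk"; reason: string };

// Decide per request whether the native runtime may be attempted.
// Any unmet condition falls back to AI SDK with a loggable reason.
function selectRuntime(
  flags: RuntimeFlags,
  providerID: string,
  npmPackage: string,
): Selection {
  if (!flags.experimentalNativeLlm)
    return { runtime: "ai-sdk", reason: "experimentalNativeLlm disabled" };
  const supportedProvider = providerID === "openai" || providerID === "opencode";
  if (!supportedProvider)
    return { runtime: "ai-sdk", reason: `unsupported provider: ${providerID}` };
  if (npmPackage !== "@ai-sdk/openai")
    return { runtime: "ai-sdk", reason: `unsupported model package: ${npmPackage}` };
  return { runtime: "native" };
}
```

Because the fallback carries a reason, enabling the flag globally only changes behavior for requests that pass every check.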
Fallback cases include:
Console/Zen Support
Console-managed Zen config travels through the existing account/config/provider path:
- `Account.config(accountID, orgID)` returns the remote provider config
- `Account.token(accountID)` populates `OPENCODE_CONSOLE_TOKEN` / `provider.opencode.options.apiKey`
- `provider.opencode.options.headers["x-org-id"]` is preserved and passed through the native request
- `provider.opencode.api` / model provider API points at the console proxy URL

The native runtime supports this path when the selected Zen model uses `@ai-sdk/openai`, so the request goes through OpenAI Responses semantics while authenticating/routing through the console-managed Zen provider.

Module Boundaries
`packages/opencode/src/session/llm.ts` remains the opencode session LLM service boundary; it owns opencode concerns.

The runtime adapters live under `packages/opencode/src/session/llm/`:

- `ai-sdk.ts` converts AI SDK `fullStream` parts into native `LLMEvent`s. This is used by the default runtime path.
- `native-request.ts` converts normalized opencode session input into a native `@opencode-ai/llm` `LLMRequest`. It does not execute requests.
- `native-runtime.ts` owns the native support gate, native request execution, provider option header passthrough, and the opencode tool -> native executable tool bridge.
- `README.md` documents the boundaries and safety rules for this folder.

`packages/llm` stays generic. Session lifecycle, permissions, plugins, provider discovery, and opencode model selection stay in `packages/opencode`.

Native Path
When native is supported for a request:
- `native-request.ts` lowers them into a native `LLMRequest`
- `native-runtime.ts` bridges opencode tools into `@opencode-ai/llm` tools
- provider headers such as `x-org-id` are carried into the native request
- `LLMClient.stream(...)` runs the OpenAI Responses route through the normal `RequestExecutor`
- the session processor consumes the resulting `LLMEvent`s

Local tool execution remains opencode-owned.
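As a rough illustration of the tool-bridging step, here is a sketch with simplified stand-in shapes for both tool types (the real types live in `native-runtime.ts` and `@opencode-ai/llm`; these are assumptions for illustration only):

```typescript
// Hypothetical, simplified tool shapes -- not the real opencode or
// @opencode-ai/llm types.
interface OpencodeTool {
  id: string;
  description: string;
  execute: (args: unknown) => Promise<string>;
}

interface NativeExecutableTool {
  name: string;
  description: string;
  run: (input: unknown) => Promise<{ output: string }>;
}

// Wrap an opencode tool as a native executable tool. The native runtime
// only invokes the wrapper; execution itself stays opencode-owned, since
// run() delegates straight back to the opencode tool.
function bridgeTool(tool: OpencodeTool): NativeExecutableTool {
  return {
    name: tool.id,
    description: tool.description,
    run: async (input) => ({ output: await tool.execute(input) }),
  };
}
```

The point of the bridge is that the native client sees an executable tool, while permissioning and actual execution never leave the opencode side.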
Event Semantics
The package event model uses:
- `step-start`/`step-finish` for each provider/model call
- `finish` exactly once for the top-level stream/generate operation

Usage is attached to `step-finish` for per-step usage and `finish` for aggregate usage when known.

Current Support
Supported in this preview:
- opencode/Zen models using `@ai-sdk/openai`
- OpenAI models using `@ai-sdk/openai`
- console provider header passthrough (`x-org-id`)
- tool calling bridged through `@opencode-ai/llm`

Not enabled in opencode native runtime yet:
Those paths continue using AI SDK.
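Stepping back to the event semantics above: aggregate usage on `finish` is, when known, the sum of the per-step usage reported on each `step-finish`. A minimal sketch of that accumulation, using simplified stand-in event shapes and a hypothetical `aggregateUsage` helper (not the package's real types):

```typescript
// Simplified stand-ins for the package's event model.
interface Usage {
  inputTokens: number;
  outputTokens: number;
}

type LLMEvent =
  | { type: "step-start" }
  | { type: "step-finish"; usage: Usage }
  | { type: "finish"; usage: Usage };

// Sum per-step usage from step-finish events into the aggregate usage
// that the single top-level finish event would carry.
function aggregateUsage(events: LLMEvent[]): Usage {
  return events
    .filter((e): e is Extract<LLMEvent, { type: "step-finish" }> => e.type === "step-finish")
    .reduce(
      (acc, e) => ({
        inputTokens: acc.inputTokens + e.usage.inputTokens,
        outputTokens: acc.outputTokens + e.usage.outputTokens,
      }),
      { inputTokens: 0, outputTokens: 0 },
    );
}
```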
Smoke Tests
With your normal console login:
Then:
opencode/Zen model.

Verification
- `cd packages/opencode && bun typecheck`
- `cd packages/opencode && bun run test -- test/session/llm-native-recorded.test.ts test/session/llm-native.test.ts test/session/llm.test.ts --test-name-pattern native`
- `cd packages/opencode && bun run test -- test/session/compaction.test.ts`
- `packages/opencode/test/fixtures/recordings/session/native-openai-tool-call.json`
- `packages/opencode/test/fixtures/recordings/session/native-zen-tool-call.json`
- `git diff --check`
- `bun turbo typecheck`