
Preview native LLM runtime stack #27114

Draft

kitlangton wants to merge 8 commits into dev from llm-native-combined-preview

Conversation

kitlangton (Contributor) commented May 12, 2026

Summary

  • Adds an opt-in native LLM runtime preview for opencode sessions while keeping AI SDK as the default runtime.
  • Converts both AI SDK and native provider streams into the shared @opencode-ai/llm LLMEvent shape consumed by the session processor.
  • Adds OpenAI Responses native request compilation, streaming, local tool execution, and tool-call context preservation.
  • Adds HTTP recorder integration tests that run the real opencode LLM.Service -> native LLMClient -> real RequestExecutor.layer path with only the HTTP client swapped for HttpRecorder.recordingLayer(...).

Beta Opt-In

Use your normal console login and model config. Then start opencode with:

OPENCODE_EXPERIMENTAL_NATIVE_LLM=true bun run dev

That is the intended beta path.

The flag is safe to use globally because native is only attempted for supported requests. Unsupported requests fall back to AI SDK.

Runtime Selection

The default runtime remains AI SDK.

Native is only attempted when experimentalNativeLlm is enabled through RuntimeFlags.

Native execution currently runs only when all of these are true:

  • provider id is openai or console-managed opencode/Zen
  • provider package for the selected model is @ai-sdk/openai
  • auth/config resolves an API key

If any condition is not met, opencode logs the reason and falls back to AI SDK for that request.

Fallback cases include:

  • Anthropic, Gemini, Bedrock, OpenRouter, generic OpenAI-compatible, GitHub Copilot, and every other non-OpenAI/OpenCode-Zen provider
  • OpenAI OAuth
  • missing API key
  • unsupported provider package
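For intuition, the gate reduces to a check like the following hedged sketch (all names here are illustrative, not the actual native-runtime.ts API):

```ts
// Illustrative sketch only. Returns a fallback reason, or undefined when
// native execution is supported for the request.
interface GateInput {
  providerID: string      // e.g. "openai" or the console-managed opencode/Zen id
  providerPackage: string // provider package backing the selected model
  apiKey?: string         // resolved from auth/config
}

function nativeFallbackReason(input: GateInput): string | undefined {
  if (input.providerID !== "openai" && input.providerID !== "opencode")
    return "unsupported provider" // Anthropic, Gemini, Bedrock, OpenRouter, ...
  if (input.providerPackage !== "@ai-sdk/openai")
    return "unsupported provider package"
  if (!input.apiKey)
    return "missing API key" // e.g. OpenAI OAuth resolves no key
  return undefined // all checks passed: attempt native
}
```

When a reason comes back, opencode logs it and routes that request through AI SDK as before.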

Console/Zen Support

Console-managed Zen config travels through the existing account/config/provider path:

  • Account.config(accountID, orgID) returns the remote provider config
  • Account.token(accountID) populates OPENCODE_CONSOLE_TOKEN
  • config env-template substitution resolves provider.opencode.options.apiKey
  • provider.opencode.options.headers["x-org-id"] is preserved and passed through the native request
  • provider.opencode.api / model provider API points at the console proxy URL

The native runtime supports this path when the selected Zen model uses @ai-sdk/openai, so the request goes through OpenAI Responses semantics while authenticating/routing through the console-managed Zen provider.
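As a rough illustration, the resolved provider config looks something like this (only the field paths mirror the bullets above; the URL, org id, and template syntax are placeholders, not the real schema):

```ts
// Placeholder values throughout; field paths follow the description above.
const provider = {
  opencode: {
    api: "https://<console-proxy-host>/v1", // console proxy URL
    options: {
      apiKey: "{env:OPENCODE_CONSOLE_TOKEN}", // resolved by env-template substitution
      headers: { "x-org-id": "<org-id>" },    // preserved into the native request
    },
  },
}
```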

Module Boundaries

packages/opencode/src/session/llm.ts remains the opencode session LLM service boundary. It owns opencode concerns:

  • auth/config/provider resolution
  • plugin hooks
  • permission filtering
  • telemetry and request headers
  • AI SDK versus native runtime selection

The runtime adapters live under packages/opencode/src/session/llm/:

  • ai-sdk.ts converts AI SDK fullStream parts into native LLMEvents. This is used by the default runtime path.
  • native-request.ts converts normalized opencode session input into a native @opencode-ai/llm LLMRequest. It does not execute requests.
  • native-runtime.ts owns the native support gate, native request execution, provider option header passthrough, and opencode tool -> native executable tool bridge.
  • README.md documents the boundaries and safety rules for this folder.

packages/llm stays generic. Session lifecycle, permissions, plugins, provider discovery, and opencode model selection stay in packages/opencode.
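To make the ai-sdk.ts boundary concrete, here is a hedged sketch of the conversion idea (the event and part shapes are illustrative; the real types come from @opencode-ai/llm and the AI SDK):

```ts
// Illustrative shapes only; the real LLMEvent type lives in @opencode-ai/llm.
type LLMEventSketch =
  | { type: "step-start" }
  | { type: "text-delta"; text: string }
  | { type: "step-finish" }
  | { type: "finish" }

// Map one AI SDK fullStream part into the shared event shape.
// Part names are representative, not an exhaustive mapping.
function toLLMEvent(part: { type: string; textDelta?: string }): LLMEventSketch | undefined {
  switch (part.type) {
    case "step-start":
      return { type: "step-start" }
    case "text-delta":
      return { type: "text-delta", text: part.textDelta ?? "" }
    case "step-finish":
      return { type: "step-finish" }
    case "finish":
      return { type: "finish" }
    default:
      return undefined // parts without a shared equivalent are handled separately
  }
}
```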

Native Path

When native is supported for a request:

  • opencode builds the usual normalized session inputs
  • native-request.ts lowers them into a native LLMRequest
  • native-runtime.ts bridges opencode tools into @opencode-ai/llm tools
  • provider-level auth headers such as Zen x-org-id are carried into the native request
  • LLMClient.stream(...) runs the OpenAI Responses route through the normal RequestExecutor
  • the session processor consumes common LLMEvents

Local tool execution remains opencode-owned.
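The tool bridge amounts to adapting one tool interface to the other. A minimal sketch, assuming simplified shapes on both sides (none of these names are the real APIs):

```ts
// Hypothetical shapes for illustration only.
interface OpencodeToolSketch {
  name: string
  description: string
  parameters: object // JSON Schema for arguments
  run(args: unknown): Promise<string>
}

interface NativeToolSketch {
  name: string
  description: string
  inputSchema: object
  execute(input: unknown): Promise<string>
}

function bridgeTool(tool: OpencodeToolSketch): NativeToolSketch {
  return {
    name: tool.name,
    description: tool.description,
    inputSchema: tool.parameters,
    // Execution stays in opencode; the native runtime only invokes the bridge.
    execute: (input) => tool.run(input),
  }
}
```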

Event Semantics

The package event model uses:

  • step-start / step-finish for each provider/model call
  • finish exactly once for the top-level stream/generate operation

Per-step usage is attached to step-finish; aggregate usage, when known, is attached to finish.
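For example, a run with one tool call and then a final answer should emit events in roughly this order (only the step-start / step-finish / finish names come from the semantics above; the rest is illustrative):

```ts
const expectedOrder = [
  "step-start",  // first provider/model call
  "tool-call",   // assumed event name for the native tool invocation
  "step-finish", // carries per-step usage
  "step-start",  // second call, with the tool result in context
  "text-delta",  // streamed answer text
  "step-finish", // per-step usage for the second call
  "finish",      // exactly once, with aggregate usage when known
]
```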

Current Support

Supported in this preview:

  • console-managed opencode/Zen models using @ai-sdk/openai
  • direct OpenAI API-key auth via @ai-sdk/openai
  • OpenAI Responses native route
  • text streaming
  • local opencode tool execution
  • native tool call id/name context propagation
  • provider option header passthrough for Zen (x-org-id)
  • multi-step native tool loops through @opencode-ai/llm
  • usage/cost propagation in focused tests

Not enabled in opencode native runtime yet:

  • Anthropic
  • Gemini
  • Bedrock
  • OpenRouter
  • generic OpenAI-compatible providers
  • GitHub Copilot
  • OpenAI OAuth
  • GitLab workflow model path
  • broad provider-native hosted tool parity

Those paths continue using AI SDK.

Smoke Tests

With your normal console login:

OPENCODE_EXPERIMENTAL_NATIVE_LLM=true bun run dev

Then:

  • Pick a normal opencode/Zen model.
  • Ask for a short plain response.
  • Ask it to call one local tool and then answer.
  • Switch to a non-supported provider and verify it still works through AI SDK fallback.

Verification

  • cd packages/opencode && bun typecheck
  • cd packages/opencode && bun run test -- test/session/llm-native-recorded.test.ts test/session/llm-native.test.ts test/session/llm.test.ts --test-name-pattern native
  • cd packages/opencode && bun run test -- test/session/compaction.test.ts
  • HTTP recorder replay for packages/opencode/test/fixtures/recordings/session/native-openai-tool-call.json
  • HTTP recorder replay for packages/opencode/test/fixtures/recordings/session/native-zen-tool-call.json
  • Prettier check on changed opencode files
  • git diff --check
  • pre-push root bun turbo typecheck

kitlangton force-pushed the llm-native-combined-preview branch from 9085865 to 8b3a19f on May 13, 2026.
