
fix: correct prompt_cache_retention Literal from "in-memory" to "in_memory"#2971

Open
alvinttang wants to merge 1 commit into openai:main from alvinttang:fix/prompt-cache-retention-typo

Conversation

@alvinttang

Summary

  • Fixes the prompt_cache_retention Literal type from "in-memory" (hyphen) to "in_memory" (underscore) across all 5 affected type definition files
  • The API rejects "in-memory" with a 400 error and only accepts "in_memory", so the SDK-typed value was unusable without a type: ignore comment

Fixes #2883
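To illustrate the mismatch this PR fixes, here is a minimal, hypothetical sketch of the corrected Literal (the real definitions live in the files listed below; the second member "24h" and the validate_retention helper are assumptions for illustration, not the SDK's actual API). Before the fix, the Literal contained "in-memory" (hyphen), which the server rejects with a 400:

```python
from typing import Literal, get_args

# Hypothetical sketch: the corrected Literal uses an underscore,
# matching what the API actually accepts. The "24h" member is an
# assumed additional value, not confirmed by this PR.
PromptCacheRetention = Literal["in_memory", "24h"]


def validate_retention(value: str) -> str:
    """Mimic the server-side check: only values in the Literal pass."""
    if value not in get_args(PromptCacheRetention):
        # The real API responds with HTTP 400 for "in-memory"
        raise ValueError(f"invalid prompt_cache_retention: {value!r}")
    return value
```

With the old hyphenated spelling, any value the type checker accepted was exactly the value the API rejected, forcing callers to pass "in_memory" with a type: ignore.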

Files changed

  1. src/openai/types/chat/completion_create_params.py
  2. src/openai/types/responses/response_create_params.py
  3. src/openai/types/responses/response.py
  4. src/openai/types/responses/responses_client_event_param.py
  5. src/openai/types/responses/responses_client_event.py

Note on code generation

These type files appear to be auto-generated from an OpenAPI spec (per .stats.yml). The upstream OpenAPI spec should also be updated to use "in_memory" to prevent this from regressing on the next codegen run.

🤖 Generated with Claude Code

fix: correct prompt_cache_retention Literal from "in-memory" to "in_memory"

The API expects "in_memory" (underscore) but the type definitions
declared "in-memory" (hyphen), causing 400 errors when using the
SDK-typed value.

Fixes openai#2883

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@alvinttang alvinttang requested a review from a team as a code owner March 14, 2026 03:41


Development

Successfully merging this pull request may close these issues.

prompt_cache_retention type declares "in-memory" but API expects "in_memory"

1 participant