
[skyrl] Pass through profiler_config to vLLM engine#1619

Closed
pcmoritz wants to merge 1 commit into NovaSky-AI:main from pcmoritz:vllm-profiler-config
Conversation

@pcmoritz
Collaborator

@pcmoritz pcmoritz commented May 4, 2026

Can be used like

"generator":{"inference_engine":{"tensor_parallel_size":4, "profiler_config":"{\"profiler\":\"torch\",\"torch_profiler_dir\":\"/mnt/shared_storage/vllm_profile\"}"}}}'

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request adds a profiler_config field to the InferenceEngineConfig class to pass profiling configurations to the vLLM engine. The review feedback points out that this addition is redundant because engine_init_kwargs already exists for this purpose. Additionally, the current implementation is likely to cause a TypeError at runtime because vLLM expects profiling parameters as top-level arguments rather than a nested dictionary, and there is a type mismatch between the defined Dict type hint and the JSON string usage.

Comment on lines +505 to +506
profiler_config: Dict[str, Any] = field(default_factory=dict)
"""Pass through profiler config to vLLM engine."""
Contributor


Severity: high

The addition of profiler_config appears redundant because engine_init_kwargs (line 487) is already designed to pass arbitrary arguments to the vLLM engine.

Furthermore, this implementation is likely to cause runtime issues:

  1. vLLM Compatibility: vLLM expects profiling parameters (such as profiler and torch_profiler_dir) as top-level arguments. If profiler_config is passed as a dictionary key to vllm.LLM or vllm.AsyncEngineArgs (via **kwargs in the engine initialization), it will trigger a TypeError for an unexpected keyword argument.
  2. Type Mismatch: The PR description shows this field being populated with a JSON string, but the type hint is Dict[str, Any]. This will lead to type errors during configuration parsing or when the code expects a dictionary.

Recommendation: Instead of adding a new field, use the existing engine_init_kwargs. If a dedicated field is strictly required for grouping, logic must be added to the engine initialization (e.g., in vllm_engine.py) to unpack these values into the top-level engine arguments.
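One way to implement the reviewer's second suggestion is to flatten the nested dict into the top-level engine arguments before constructing the engine. The helper name and call sites below are hypothetical (not from the PR); this is only a sketch of the unpacking logic the review describes:

```python
from typing import Any, Dict


def build_engine_kwargs(
    engine_init_kwargs: Dict[str, Any],
    profiler_config: Dict[str, Any],
) -> Dict[str, Any]:
    """Flatten profiler_config into top-level engine kwargs.

    Per the review, vLLM expects keys such as `profiler` and
    `torch_profiler_dir` as top-level arguments, so a nested
    profiler_config dict must be unpacked before **kwargs is
    forwarded to the engine constructor.
    """
    kwargs = dict(engine_init_kwargs)
    kwargs.update(profiler_config)  # promote profiler keys to top level
    return kwargs


# Example: the profiler keys end up alongside the other engine args.
kwargs = build_engine_kwargs(
    {"tensor_parallel_size": 4},
    {"profiler": "torch", "torch_profiler_dir": "/mnt/shared_storage/vllm_profile"},
)
```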

@pcmoritz
Collaborator Author

pcmoritz commented May 5, 2026

Replaced by #1622

@pcmoritz pcmoritz closed this May 5, 2026
