[skyrl] Pass through profiler_config to vLLM engine#1619
pcmoritz wants to merge 1 commit into NovaSky-AI:main from
Conversation
Code Review
This pull request adds a profiler_config field to the InferenceEngineConfig class to pass profiling configurations to the vLLM engine. The review feedback points out that this addition is redundant because engine_init_kwargs already exists for this purpose. Additionally, the current implementation is likely to cause a TypeError at runtime because vLLM expects profiling parameters as top-level arguments rather than a nested dictionary, and there is a type mismatch between the defined Dict type hint and the JSON string usage.
profiler_config: Dict[str, Any] = field(default_factory=dict)
"""Pass through profiler config to vLLM engine."""
The addition of `profiler_config` appears redundant because `engine_init_kwargs` (line 487) is already designed to pass arbitrary arguments to the vLLM engine.
Furthermore, this implementation is likely to cause runtime issues:
- vLLM Compatibility: vLLM expects profiling parameters (such as `profiler` and `torch_profiler_dir`) as top-level arguments. If `profiler_config` is passed as a dictionary key to `vllm.LLM` or `vllm.AsyncEngineArgs` (via `**kwargs` in the engine initialization), it will trigger a `TypeError` for an unexpected keyword argument.
- Type Mismatch: The PR description shows this field being populated with a JSON string, but the type hint is `Dict[str, Any]`. This will lead to type errors during configuration parsing or when the code expects a dictionary.
Recommendation: Instead of adding a new field, use the existing `engine_init_kwargs`. If a dedicated field is strictly required for grouping, logic must be added during engine initialization (e.g., in vllm_engine.py) to unpack these values into the top-level engine arguments.
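The recommendation above can be sketched roughly as follows. This is a minimal, self-contained illustration, not the actual SkyRL code: the `InferenceEngineConfig` stand-in and `build_engine_args` helper are hypothetical, and the `torch_profiler_dir` key is only an example of a profiling parameter that should end up as a top-level argument rather than nested under a `profiler_config` key.

```python
from dataclasses import dataclass, field
from typing import Any, Dict


# Hypothetical stand-in for the SkyRL config; the real class lives in
# the repository and has many more fields. The point is that
# engine_init_kwargs already exists for arbitrary pass-through args.
@dataclass
class InferenceEngineConfig:
    model: str = "facebook/opt-125m"
    engine_init_kwargs: Dict[str, Any] = field(default_factory=dict)


def build_engine_args(cfg: InferenceEngineConfig) -> Dict[str, Any]:
    # Unpack engine_init_kwargs into top-level engine arguments,
    # rather than forwarding a nested profiler_config dict that the
    # engine constructor would reject as an unexpected keyword.
    return {"model": cfg.model, **cfg.engine_init_kwargs}


cfg = InferenceEngineConfig(
    engine_init_kwargs={"torch_profiler_dir": "/tmp/vllm_profile"},
)
args = build_engine_args(cfg)
print(args["torch_profiler_dir"])  # arrives as a top-level argument
```

The design choice being argued for: profiling options flow through the existing generic pass-through mechanism, so no new config field (and no nested-dict unpacking logic) is needed.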
Replaced by #1622
Can be used like