
Bugfix - Fix DeepSpeed BF16 config validation error#796

Open
polarG wants to merge 1 commit into main from
dev/hongtaozhang/fix-deepspeed-bf16-config-validation

Conversation

@polarG
Contributor

@polarG polarG commented Mar 26, 2026

Description
The megatron-gpt:deepspeed benchmark fails with return code 3 (INVALID_BENCHMARK_RESULT) during the BF16 training round. The benchmark runs two precision rounds (FP16 then BF16), and while FP16 succeeds, BF16 crashes at DeepSpeed initialization with:
pydantic_core._pydantic_core.ValidationError: 5 validation errors for DeepSpeedBF16Config
loss_scale - Extra inputs are not permitted
loss_scale_window - Extra inputs are not permitted
min_loss_scale - Extra inputs are not permitted
initial_scale_power - Extra inputs are not permitted
hysteresis - Extra inputs are not permitted

Root Cause
__prepare_deespeed_config() in megatron_gpt3.py uses the same precision_template for both FP16 and BF16 configs. This template includes loss-scaling parameters (loss_scale, loss_scale_window, min_loss_scale, initial_scale_power, hysteresis) that are valid for FP16 but rejected by DeepSpeedBF16Config, which uses pydantic strict validation and does not accept extra fields.

BF16 does not need loss scaling because it has sufficient dynamic range to avoid the underflow/overflow issues that FP16 faces.
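To make the dynamic-range point concrete, here is a small pure-Python sketch (not part of the patch) that derives each format's largest finite value from its exponent and fraction widths:

```python
def max_finite(exp_bits: int, frac_bits: int) -> float:
    """Largest finite value of an IEEE-754-style float with the given field widths."""
    bias = 2 ** (exp_bits - 1) - 1           # exponent bias; max normal exponent equals the bias
    return (2 - 2 ** -frac_bits) * 2.0 ** bias

fp16_max = max_finite(exp_bits=5, frac_bits=10)   # 65504.0 -- overflows easily without loss scaling
bf16_max = max_finite(exp_bits=8, frac_bits=7)    # ~3.39e38 -- same exponent range as FP32
```

BF16 trades fraction bits for FP32's 8 exponent bits, which is why gradients rarely overflow or underflow and no loss scaler is needed.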

This was always technically incorrect, but only became a hard failure when DeepSpeed migrated from Pydantic v1 to Pydantic v2 (around DeepSpeed v0.15–v0.16). In Pydantic v1, the extra = "forbid" setting was less strictly enforced, so the extra fields were silently ignored. Pydantic v2 strictly rejects all unknown fields with a ValidationError.
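The rejection behavior can be mimicked with a stdlib-only sketch (this imitates, rather than uses, DeepSpeed's actual pydantic model; the allowed field set and error wording below are assumptions for illustration):

```python
BF16_ALLOWED_FIELDS = {'enabled'}  # assumed field set, for illustration only

def validate_bf16_section(section: dict) -> None:
    """Reject unknown keys, as pydantic v2 does when the model config forbids extras."""
    extra = sorted(set(section) - BF16_ALLOWED_FIELDS)
    if extra:
        raise ValueError(
            f"{len(extra)} validation errors for DeepSpeedBF16Config: "
            + ", ".join(f"{k} - Extra inputs are not permitted" for k in extra)
        )

# The FP16-style section previously reused for BF16 (values illustrative):
fp16_style = {'enabled': True, 'loss_scale': 0, 'loss_scale_window': 1000,
              'initial_scale_power': 16, 'hysteresis': 2, 'min_loss_scale': 1}
```

Calling `validate_bf16_section(fp16_style)` raises with five extra-input errors, matching the traceback above, while `{'enabled': True}` passes cleanly.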

Fix
Generate precision-specific DeepSpeed configs:

  • FP16: includes all loss-scaling parameters (unchanged behavior)
  • BF16: only {'enabled': True}
  • FP32: no precision section
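The three cases above can be sketched as a small helper (the function name and the FP16 loss-scaling values are illustrative; the real change lives in `__prepare_deespeed_config()` in `megatron_gpt3.py`):

```python
def build_precision_config(precision: str) -> dict:
    """Return the precision section of a DeepSpeed config for the given mode."""
    if precision == 'fp16':
        # FP16 keeps the loss-scaling knobs (values here are illustrative defaults)
        return {'fp16': {'enabled': True, 'loss_scale': 0, 'loss_scale_window': 1000,
                         'initial_scale_power': 16, 'hysteresis': 2,
                         'min_loss_scale': 1}}
    if precision == 'bf16':
        # BF16 accepts only 'enabled'; any extra field fails pydantic v2 validation
        return {'bf16': {'enabled': True}}
    # FP32: omit the precision section entirely
    return {}
```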

This fix is backward compatible — passing only {'enabled': True} for BF16 is valid in all DeepSpeed versions, since the loss-scaling fields were never used by BF16.

@polarG polarG requested a review from a team as a code owner March 26, 2026 22:20
Copilot AI review requested due to automatic review settings March 26, 2026 22:20
@polarG polarG self-assigned this Mar 26, 2026
@polarG polarG added bug Something isn't working ROCm labels Mar 26, 2026
Contributor

Copilot AI left a comment


Pull request overview

Fixes DeepSpeed BF16 initialization failures in the Megatron-GPT benchmark by generating precision-specific DeepSpeed config sections, so the strict BF16 schema in newer DeepSpeed/Pydantic versions no longer rejects FP16-only loss-scaling fields.

Changes:

  • Generate FP16 DeepSpeed config with loss-scaling parameters (unchanged behavior).
  • Generate BF16 DeepSpeed config with only enabled: True to satisfy strict BF16 schema validation.
  • Omit the precision section entirely for FP32 runs.


@@ -307,15 +307,23 @@ def __prepare_deespeed_config(self, precision_megatron):
"""Prepare deepspeed configs."""
self._config_json_path = os.path.join(self._args.data_home, 'ds_config_gpt.json')
# Load deepspeed config template json file

Copilot AI Mar 26, 2026


The comment says "Load deepspeed config template json file", but this function is constructing the template dict inline (no JSON template is loaded). Updating/removing this comment would avoid confusion about where the config comes from.

Suggested change
# Load deepspeed config template json file
# Build deepspeed config template in memory


2 participants