
Add Triton unified attention kernel with HuggingFace integration #1034

Draft

kaix-nv wants to merge 1 commit into main from kaix/triton_kernel

Conversation

kaix-nv (Contributor) commented on Mar 13, 2026

Add a Triton Flash Attention kernel that supports variable-length batching, GQA, causal/non-causal masking, and autograd-compatible forward/backward. Register it as attn_implementation="modelopt_triton" for HuggingFace models.
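For context, variable-length ("varlen") flash-attention kernels are conventionally driven by packed q/k/v tensors plus a cumulative-sequence-lengths index, and GQA means several query heads share each KV head. The sketch below illustrates that calling convention only; the kernel name and signature are hypothetical, modeled on the common varlen API rather than taken from this PR:

    import torch

    # Three sequences of different lengths, packed along one token dimension
    # (variable-length batching: no padding).
    seq_lens = [128, 512, 37]
    total_tokens = sum(seq_lens)

    # GQA: 32 query heads share 8 KV heads (4 query heads per KV head).
    n_q_heads, n_kv_heads, head_dim = 32, 8, 128

    q = torch.randn(total_tokens, n_q_heads, head_dim,
                    device="cuda", dtype=torch.float16, requires_grad=True)
    k = torch.randn(total_tokens, n_kv_heads, head_dim,
                    device="cuda", dtype=torch.float16, requires_grad=True)
    v = torch.randn_like(k).requires_grad_()

    # cu_seqlens marks each sequence's boundary in the packed dimension:
    # here [0, 128, 640, 677].
    cu = [0]
    for n in seq_lens:
        cu.append(cu[-1] + n)
    cu_seqlens = torch.tensor(cu, device="cuda", dtype=torch.int32)

    # Hypothetical entry point, NOT this PR's actual API:
    # out = triton_flash_attention(q, k, v, cu_seqlens,
    #                              max_seqlen=max(seq_lens), causal=True)
    # out.sum().backward()  # autograd-compatible: grads reach q, k, v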

What does this PR do?

Type of change: ?

Usage

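A minimal sketch of the intended HuggingFace hookup, assuming the package registers "modelopt_triton" as an attention implementation on import; the import path and model name below are illustrative guesses, not confirmed by this PR:

    import torch
    from transformers import AutoModelForCausalLM

    # Assumption: importing the attention-sparsity package registers
    # "modelopt_triton" with HuggingFace. The exact module that triggers
    # registration is a guess based on this repo's layout.
    import modelopt.torch.sparsity.attention_sparsity  # noqa: F401

    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-3.1-8B",              # illustrative model
        torch_dtype=torch.float16,
        attn_implementation="modelopt_triton",  # registered by this PR
        device_map="cuda",
    )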

Testing

Before your PR is "Ready for review"

Make sure you read and follow Contributor guidelines and your commits are signed (git commit -s -S).

Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded trust_remote_code=True, torch.load(..., weights_only=False), pickle, etc.).

  • Is this change backward compatible?: ✅ / ❌ / N/A
  • If you copied code from any other sources or added a new PIP dependency, did you follow guidance in CONTRIBUTING.md: ✅ / ❌ / N/A
  • Did you write any new necessary tests?: ✅ / ❌ / N/A
  • Did you update Changelog?: ✅ / ❌ / N/A

Additional Information

copy-pr-bot commented on Mar 13, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

coderabbitai bot (Contributor) commented on Mar 13, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 0fdd2e6b-66ed-4e8e-81f6-f53f2c72d423

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Comment @coderabbitai help to get the list of available commands and usage tips.

Tip

CodeRabbit can use your project's `pylint` configuration to improve the quality of Python code reviews.

Add a pylint configuration file to your project to customize how CodeRabbit runs pylint.

codecov bot commented on Mar 13, 2026

Codecov Report

❌ Patch coverage is 42.85714% with 16 lines in your changes missing coverage. Please review.
✅ Project coverage is 70.08%. Comparing base (bc87981) to head (fe6e6c8).

Files with missing lines                                Patch %   Lines
...y/attention_sparsity/methods/flash_skip_softmax.py    27.27%   8 Missing ⚠️
...pt/torch/sparsity/attention_sparsity/conversion.py    57.14%   6 Missing ⚠️
...delopt/torch/sparsity/attention_sparsity/config.py     0.00%   1 Missing ⚠️
...ch/sparsity/attention_sparsity/sparse_attention.py     0.00%   1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1034      +/-   ##
==========================================
- Coverage   70.11%   70.08%   -0.03%     
==========================================
  Files         221      221              
  Lines       25459    25471      +12     
==========================================
+ Hits        17851    17852       +1     
- Misses       7608     7619      +11     

☔ View full report in Codecov by Sentry.

kaix-nv force-pushed the kaix/triton_kernel branch 2 times, most recently from 2af0b56 to bc3d973, on March 14, 2026 at 05:24
Add a Triton Flash Attention kernel that supports variable-length
batching, GQA, causal/non-causal masking, and autograd-compatible
forward/backward. Register it as attn_implementation="modelopt_triton"
for HuggingFace models.

Signed-off-by: Kai Xu <kaix@nvidia.com>
kaix-nv force-pushed the kaix/triton_kernel branch from bc3d973 to 94cf742 on March 14, 2026 at 06:29
