Merging this PR will improve performance by 17.64%
0ax1 approved these changes on Mar 2, 2026.

a10y added a commit that referenced this pull request on Mar 4, 2026:
## Summary

This PR adds G-ALP-style patches that allow for data-parallel access. This lets us remove the additional `execute_patches` kernel launch and instead insert patch values inside the unpacking kernels, at the step where we dispatch from shared to global memory. The benchmarks added in #6712 show a significant speedup with this change, and performance is essentially invariant to patch count.

<img width="756" height="457" alt="image" src="https://github.com/user-attachments/assets/0164a453-c69b-48be-abaf-81797a16f7fa" />

### Background

The [G-ALP](https://dl.acm.org/doi/10.1145/3736227.3736242) paper's contribution is modifying the standard layout of "exceptions" (what we call Patches in Vortex) to allow fully data-parallel access. Their target is an f32 ALP decoding kernel, but the technique applies equally to our unpacking kernels.

The core insight is that storing patches in sorted order, which is great for single-threaded execution on a super-scalar CPU, results in very poor GPU performance. Instead, we shuffle them so that they can be accessed ordered by `(chunk, lane)`. The chunk is the normal FastLanes vector chunk size (1024); the lane depends on the width of the type. Doing this gives us O(1) access to the patches within our kernel. Replicating Figure 2 from the paper:

<img width="447" height="432" alt="image" src="https://github.com/user-attachments/assets/cb114056-e071-467f-9a17-be6e4032faf1" />

This diff looks large, but the big pieces are:

* Adds a new `patches.h` header file with shared definitions between the CUDA kernel and the Rust code.
* Adds a `patches.cuh` with some C++ code to make it easier to seek/iterate over a range of patch values.
* Updates the unpacking kernels to accept an optional set of patches.

This does not change the memory format for the `Patches` type; rather, it just performs a D2H -> transpose on CPU -> H2D transformation.
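As a rough CPU-side sketch of the `(chunk, lane)` reordering: the function name `reorder_patches` and the lane computation `pos % lanes` are illustrative assumptions here, not the actual Vortex code, but they capture the idea of turning a position-sorted patch list into one grouped by chunk, then by lane within the chunk.

```rust
// FastLanes vector chunk size, as described above.
const CHUNK_SIZE: usize = 1024;

/// Hypothetical sketch: reorder position-sorted `(position, value)` patches
/// into (chunk, lane) order so that each GPU lane's patches are contiguous.
/// `lanes` depends on the type width (e.g. 16 lanes of 64-bit values in a
/// 1024-bit FastLanes register) -- an assumption for illustration.
fn reorder_patches(patches: &[(u32, u64)], lanes: usize) -> Vec<(u32, u64)> {
    let mut out = patches.to_vec();
    // Sort by (chunk, lane, position): patches in the same lane stay in
    // ascending position order, so a lane can scan its slice linearly.
    out.sort_by_key(|&(pos, _)| {
        let chunk = pos as usize / CHUNK_SIZE;
        let lane = (pos as usize % CHUNK_SIZE) % lanes;
        (chunk, lane, pos)
    });
    out
}

fn main() {
    // With 16 lanes, positions 0 and 16 land in lane 0; 1 and 17 in lane 1.
    let patches = vec![(0u32, 10u64), (1, 11), (16, 12), (17, 13)];
    let reordered = reorder_patches(&patches, 16);
    println!("{:?}", reordered); // [(0, 10), (16, 12), (1, 11), (17, 13)]
}
```

Under this layout, thread `t` of a chunk only ever touches the contiguous run of patches for lane `t`, which is what makes the access pattern data-parallel.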
## Testing

Uses the existing test suite, which includes a lot of bit-unpacking with patches.

---------

Signed-off-by: Andrew Duffy <andrew@a10y.dev>
Co-authored-by: Alexander Droste <alexander.droste@protonmail.com>
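To make the in-kernel patch application concrete, here is a minimal CPU analogue of what a per-lane seek/iterate helper might do at the shared-to-global dispatch step. The `lane_offsets` table and function names are assumptions for illustration, not the actual `patches.cuh` API: each lane (a GPU thread in the real kernel) walks only its own contiguous slice of `(chunk, lane)`-ordered patches and overwrites its output slots.

```rust
/// Hypothetical sketch: apply (chunk, lane)-ordered patches to unpacked
/// output. `lane_offsets` has one more entry than there are lanes, and
/// lane `i`'s patches live in `patches[lane_offsets[i]..lane_offsets[i + 1]]`.
fn apply_patches(output: &mut [u64], patches: &[(u32, u64)], lane_offsets: &[usize]) {
    for lane in 0..lane_offsets.len() - 1 {
        // In the CUDA kernel this loop body runs on the thread owning `lane`;
        // the offset lookup gives O(1) access to the lane's patch range.
        for &(pos, value) in &patches[lane_offsets[lane]..lane_offsets[lane + 1]] {
            output[pos as usize] = value;
        }
    }
}

fn main() {
    let mut out = vec![0u64; 32];
    // Patches already in (chunk, lane) order, assuming 16 lanes:
    // lane 0 holds positions 0 and 16; lane 1 holds positions 1 and 17.
    let patches = [(0u32, 10u64), (16, 12), (1, 11), (17, 13)];
    let lane_offsets = [0usize, 2, 4]; // only lanes 0 and 1 carry patches here
    apply_patches(&mut out, &patches, &lane_offsets);
    println!("{} {} {} {}", out[0], out[1], out[16], out[17]); // 10 11 12 13
}
```

Because no two lanes share an output slot, the writes need no synchronization, which is why folding patching into the unpack kernel costs essentially nothing regardless of patch count.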
We have a benchmark for unpatched bitpacking, but I wanted a baseline as I'm refining #6708.