diff --git a/flash/configuration/parameters.mdx b/flash/configuration/parameters.mdx
index b28d4133..9e15767f 100644
--- a/flash/configuration/parameters.mdx
+++ b/flash/configuration/parameters.mdx
@@ -29,6 +29,7 @@ This page provides a complete reference for all parameters available on the `End
 | `scaler_type` | `ServerlessScalerType` | Scaling strategy | auto |
 | `scaler_value` | `int` | Scaling threshold | `4` |
 | `template` | `PodTemplate` | Pod template overrides | `None` |
+| `min_cuda_version` | `str` or `CudaVersion` | Minimum CUDA version for GPU host selection | `"12.8"` (GPU) / `None` (CPU) |
 
 ## Parameter details
 
@@ -537,6 +538,43 @@ template = PodTemplate(
 
 For simple environment variables, use the `env` parameter on `Endpoint` instead of `PodTemplate.env`.
 
+### min_cuda_version
+
+**Type**: `str` or `CudaVersion`
+**Default**: `"12.8"` for GPU endpoints, `None` for CPU endpoints
+
+Specifies the minimum CUDA driver version required on the host machine. GPU endpoints default to `"12.8"` to ensure workers run on hosts with recent CUDA drivers.
+
+```python
+from runpod_flash import Endpoint, GpuType, CudaVersion
+
+# Use the default (12.8)
+@Endpoint(name="ml-inference", gpu=GpuType.NVIDIA_A100_80GB_PCIe)
+async def infer(data): ...
+
+# Override with string value
+@Endpoint(
+    name="legacy-compatible",
+    gpu=GpuType.NVIDIA_A100_80GB_PCIe,
+    min_cuda_version="12.4"
+)
+async def infer_legacy(data): ...
+
+# Override with CudaVersion enum
+@Endpoint(
+    name="cuda-12",
+    gpu=GpuType.NVIDIA_A100_80GB_PCIe,
+    min_cuda_version=CudaVersion.V12_0
+)
+async def infer_cuda12(data): ...
+```
+
+This parameter has no effect on CPU endpoints.
+
+Valid CUDA versions: `CudaVersion.V11_1`, `V11_4`, `V11_7`, `V11_8`, `V12_0`, `V12_1`, `V12_2`, `V12_3`, `V12_4`, `V12_6`, `V12_8` (or equivalent strings like `"12.4"`). Invalid values raise a `ValueError`.
+
+
 ## EndpointJob
 
 When using `Endpoint(id=...)` or `Endpoint(image=...)`, the `.run()` method returns an `EndpointJob` object for async operations:
@@ -576,6 +614,7 @@ These changes restart all workers:
 - Storage (`volume`)
 - Datacenter (`datacenter`)
 - Flashboot setting (`flashboot`)
+- CUDA version requirement (`min_cuda_version`)
 
 Workers are temporarily unavailable during recreation (typically 30-90 seconds).
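
The patch states that invalid `min_cuda_version` values raise a `ValueError`. A minimal sketch of what that validation could look like, assuming the version list documented above — the helper name `validate_min_cuda_version` and the hard-coded set are hypothetical illustrations, not the actual `runpod_flash` implementation:

```python
# Hypothetical sketch of min_cuda_version validation; the real library's
# internals may differ. The version set mirrors the documented CudaVersion values.
SUPPORTED_CUDA_VERSIONS = {
    "11.1", "11.4", "11.7", "11.8",
    "12.0", "12.1", "12.2", "12.3", "12.4", "12.6", "12.8",
}

def validate_min_cuda_version(value: str) -> str:
    """Return the version string unchanged if supported, else raise ValueError."""
    if value not in SUPPORTED_CUDA_VERSIONS:
        raise ValueError(
            f"Unsupported CUDA version {value!r}; "
            f"expected one of {sorted(SUPPORTED_CUDA_VERSIONS)}"
        )
    return value

print(validate_min_cuda_version("12.4"))  # accepted, prints: 12.4
try:
    validate_min_cuda_version("10.2")     # not in the supported set
except ValueError as exc:
    print("rejected:", exc)
```

Validating eagerly at decoration time, rather than when a worker is provisioned, surfaces typos before any endpoint is deployed.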