Problem Description
Once #546 is resolved, users will be able to generate a benchmark config and execute it. The next step is to integrate this workflow into GitHub Actions so that benchmarks can be launched easily from a YAML config file or from a set of defined parameters.
Expected behavior
- Create a reusable GitHub Actions workflow (e.g., `_run_benchmark.yml`). The workflow should accept either:
  - A path to an existing YAML config committed in the repo (e.g., `benchmark_single_table.yaml`)
  - User inputs (modality, datasets, synthesizers, timeout, num_instances), from which a YAML config is generated using the script in `sdgym/_benchmark_launcher/script.py`
- Define a `run_benchmark_from_config.yml`: this workflow calls `_run_benchmark.yml` and takes as input the filepath of the YAML file
- Define a `run_benchmark_manual.yml`: this workflow calls `_run_benchmark.yml` and takes as input the parameters used in the script
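A rough sketch of how the reusable workflow and one of its callers could be wired together is below. The input names, the runner setup, and the CLI flags passed to `script.py` are all illustrative assumptions, not something confirmed by this issue:

```yaml
# .github/workflows/_run_benchmark.yml (sketch; input names are illustrative)
name: Run Benchmark (reusable)

on:
  workflow_call:
    inputs:
      config_path:   # path to an existing YAML config, if any
        required: false
        type: string
      modality:
        required: false
        type: string
      datasets:
        required: false
        type: string
      synthesizers:
        required: false
        type: string
      timeout:
        required: false
        type: string
      num_instances:
        required: false
        type: string

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      # Generate a config from the inputs when no config path was provided.
      # The CLI flags of script.py are assumed here, not confirmed by the issue.
      - name: Generate config
        if: inputs.config_path == ''
        run: |
          python sdgym/_benchmark_launcher/script.py \
            --modality "${{ inputs.modality }}" \
            --datasets "${{ inputs.datasets }}" \
            --synthesizers "${{ inputs.synthesizers }}" \
            --timeout "${{ inputs.timeout }}" \
            --num-instances "${{ inputs.num_instances }}"
      # ... launch the benchmark from the resulting config ...
```

```yaml
# .github/workflows/run_benchmark_from_config.yml (sketch)
name: Run Benchmark From Config

on:
  workflow_dispatch:
    inputs:
      config_path:
        description: Filepath of the YAML benchmark config
        required: true
        type: string

jobs:
  run:
    uses: ./.github/workflows/_run_benchmark.yml
    with:
      config_path: ${{ inputs.config_path }}
```

`run_benchmark_manual.yml` would follow the same pattern, except its `workflow_dispatch` inputs are the script parameters (modality, datasets, synthesizers, timeout, num_instances), forwarded to `_run_benchmark.yml` via `with:`.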