dev.py sets up and launches a full local Rusty Timer dev stack in one command:
- Postgres (Docker)
- Server
- One or more emulators
- Forwarder
- Receiver
It also prepares dev auth tokens, writes runtime config, optionally uploads race data,
and opens services in tmux (preferred) or iTerm2 panes.
Prerequisites:

- Run from the repository root.
- Python 3.11+ via `uv run`.
- Installed tools: `docker`, `cargo`, `npm`, `curl`.
- A multiplexer: `tmux` (preferred), or iTerm2 with Python API enabled.
Usage:

```sh
uv run scripts/dev.py [--no-build] [--clear] [--emulator SPEC ...] [--bibchip PATH] [--ppl PATH]
```

Examples:

```sh
# Full setup + launch with one default emulator (port 10001)
uv run scripts/dev.py

# Reuse prior builds and just set up/launch runtime pieces
uv run scripts/dev.py --no-build

# Tear everything down
uv run scripts/dev.py --clear

# Single emulator with custom settings
uv run scripts/dev.py --emulator port=10001,delay=500,file=test_assets/reads.txt,type=raw

# Multiple emulators
uv run scripts/dev.py --emulator port=10001 --emulator port=10002,delay=500,type=fsls

# Auto-generate emulator reads from bibchip and upload race files
uv run scripts/dev.py --bibchip test_assets/bibchip/large.txt --ppl test_assets/ppl/large.ppl
```

Flags:

- `--no-build`: skip dashboard + Rust build steps.
- `--clear`: remove dev artifacts and exit.
- `--emulator SPEC`: add an emulator instance. Repeat this flag for multiple emulators.
- `--bibchip PATH`: upload chip file to a new race after startup; can also generate emulator reads.
- `--ppl PATH`: upload participant file to a new race after startup.

`--emulator` format:

```
port=N,delay=MS,file=PATH,type=raw|fsls
```

- `port` is required.
- `delay` defaults to `2000`.
- `type` defaults to `raw`.
- `file` is optional.
- If no `--emulator` is provided, the default is one emulator on port `10001`.
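The SPEC grammar above is simple key=value pairs. A minimal parsing sketch, assuming the documented defaults (`delay=2000`, `type=raw`, `file` optional, `port` required); the function name is illustrative, not taken from dev.py:

```python
def parse_emulator_spec(spec: str) -> dict:
    """Parse an --emulator SPEC like 'port=10001,delay=500,type=fsls'.

    Hypothetical helper mirroring the documented defaults.
    """
    out = {"delay": 2000, "type": "raw", "file": None}
    for pair in spec.split(","):
        key, _, value = pair.partition("=")
        if key not in ("port", "delay", "file", "type"):
            raise ValueError(f"unknown key: {key}")
        out[key] = value
    if "port" not in out:
        raise ValueError("port is required")
    out["port"] = int(out["port"])
    out["delay"] = int(out["delay"])
    if out["type"] not in ("raw", "fsls"):
        raise ValueError("type must be raw or fsls")
    return out
```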
When run normally (`--clear` not set), dev.py does the following:

- Validates CLI/file inputs and port-collision rules.
- Detects any existing dev instance and prompts whether to kill/reuse/cancel.
- Starts or reuses Docker Postgres container `rt-postgres`.
- Waits for Postgres readiness (`pg_isready`).
- Applies SQL migrations from `services/server/migrations/`.
- Writes temporary dev config/token files under `/tmp/rusty-timer-dev`.
- Seeds forwarder/receiver dev tokens into `device_tokens`.
- Runs `npm install` in the workspace root.
- Builds the dashboard (`apps/server-ui`) unless `--no-build`.
- Builds Rust binaries unless `--no-build`.
- Launches panes in `tmux` (or iTerm2 fallback).
- Launches the receiver Tauri app with `RT_RECEIVER_ID=recv-dev` so it matches the seeded dev receiver token.
- Optionally creates a race and uploads bibchip/PPL files.
Created under `/tmp/rusty-timer-dev`:

- `forwarder.toml`
- `forwarder-token.txt`
- `receiver-token.txt`
- `forwarder.sqlite3` (forwarder journal)
- `race-setup.log` (if race upload requested)
- `iterm-window-id.txt` (when launched via iTerm2)

Default dev tokens:

- Forwarder token: `rusty-dev-forwarder`
- Receiver token: `rusty-dev-receiver`
- Server starts on `http://127.0.0.1:8080`.
- Receiver runs as a Tauri desktop app (no standalone HTTP API).
- If `apps/server-ui/build` exists, the server is launched with `DASHBOARD_DIR` set to that path.
- On startup, the script validates collisions across:
  - emulator ports
  - forwarder fallback ports (`emulator_port + 1000`)
  - receiver-derived default local ports

If these collide, startup stops with an error.
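The collision rule above can be sketched in a few lines. This is an illustrative check, not dev.py's actual code; `receiver_ports` stands in for the receiver-derived defaults, whose exact derivation is internal to the script:

```python
from collections import Counter

def check_port_collisions(emulator_ports: list[int],
                          receiver_ports: list[int]) -> list[int]:
    """Return any port claimed more than once across the derived sets.

    Forwarder fallback ports are emulator_port + 1000, per the rule above.
    """
    forwarder_ports = [p + 1000 for p in emulator_ports]
    counts = Counter(emulator_ports + forwarder_ports + receiver_ports)
    return sorted(p for p, n in counts.items() if n > 1)
```

For example, emulators on 10001 and 11001 collide, because the first emulator's forwarder fallback (10001 + 1000) lands on the second emulator's port.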
After dev.py starts, these are the addresses for each component:
| Component | URL | Notes |
|---|---|---|
| Server dashboard | http://localhost:8080 | Streams, races, exports. Only available if the dashboard was built (skipped with `--no-build` unless `apps/server-ui/build` already exists). |
| Announcer config | http://localhost:8080/announcer-config | Enable/configure the live announcer |
| Announcer screen | http://localhost:8080/announcer | Public-facing finisher display |
| Server API | http://localhost:8080/api/v1/... | REST API for streams, races, tokens |
| Server health | http://localhost:8080/healthz | Liveness check |
| Receiver | Tauri desktop app | Receiver status and subscriptions (managed via UI) |
| Forwarder status | http://localhost:8081/healthz | Forwarder health check (when running manually; dev.py uses the default port) |
The receiver and forwarder UIs are available at their respective status HTTP addresses when built with `--features embed-ui`. In the dev.py stack, the forwarder embeds its UI by default.
- `--bibchip` and `--ppl` files must exist or startup exits early.
- If `--bibchip` is set and the first emulator has no explicit `file=...`, the script generates emulator-compatible reads at `/tmp/rusty-timer-dev/generated-reads.txt` and wires that file into the first emulator.
- Race setup runs in the background after server health is ready:
  - creates race `Dev Race`
  - uploads bibchip to `/api/v1/races/{race_id}/chips/upload`
  - uploads PPL to `/api/v1/races/{race_id}/participants/upload`
Before setup, the script checks for:

- tmux session `rusty-dev`
- listeners on server port `8080`
If it detects a prior dev instance, it prompts to kill/restart, continue, or cancel.
For non-dev processes using port 8080, it refuses to kill them automatically.
Use:

```sh
uv run scripts/dev.py --clear
```

This attempts to:

- kill tmux session `rusty-dev`
- remove Docker container `rt-postgres`
- delete `/tmp/rusty-timer-dev`
release.py automates service releases by bumping versions, validating release artifacts, creating commits/tags, and pushing the branch plus each tag separately (so GitHub runs one workflow per tag).
It supports these services: `forwarder`, `receiver`, `streamer`, `emulator`, `server`.
Prerequisites:

- Run from a clean git working tree.
- Be on the `master` branch.
- Have push access to `origin/master`.
- Have the Rust toolchain available (`cargo build --release` is run per service).
- For `forwarder`/`receiver` releases, have Node.js + npm available (UI lint/check/test run).
- For optional local server Docker build checks, have Docker available.
- Use `uv` to run the script in this repository.
Usage:

```sh
uv run scripts/release.py SERVICE [SERVICE ...] (--major | --minor | --patch | --version X.Y.Z) [--dry-run] [--yes]
```

Examples:

```sh
# Patch release for one service
uv run scripts/release.py forwarder --patch

# Minor release for multiple services in one transaction
uv run scripts/release.py forwarder emulator --minor

# Set an explicit version
uv run scripts/release.py receiver --version 2.0.0

# Preview only (no file or git changes)
uv run scripts/release.py forwarder --patch --dry-run

# Server release (Docker build/push handled by GitHub Actions on server tag)
uv run scripts/release.py server --patch

# Server release with optional local Docker build check
uv run scripts/release.py server --version 2.0.0 --server-local-docker-build
```

Flags:

- `--major`: bump `X.Y.Z` to `X+1.0.0`
- `--minor`: bump `X.Y.Z` to `X.Y+1.0`
- `--patch`: bump `X.Y.Z` to `X.Y.Z+1`
- `--version X.Y.Z`: set an exact semantic version (must match `^\d+\.\d+\.\d+$`)
- `--dry-run`: run checks/builds, print mutating commands, and skip file/git mutations
- `--yes`, `-y`: skip the interactive confirmation prompt
- `--server-local-docker-build`: for `server` releases, run a local Docker build check before commit/tag
- `--server-docker-image IMAGE`: image repository used for the optional local server Docker build tags (default: `iwismer/rt-server`)
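The bump rules above are plain semver arithmetic. A minimal sketch of the target-version computation (illustrative, not release.py's actual code):

```python
import re

def bump(version: str, part: str) -> str:
    """Compute the target version for --major/--minor/--patch.

    Mirrors the bump rules listed above: major resets minor and patch,
    minor resets patch, patch increments the last component.
    """
    if not re.fullmatch(r"\d+\.\d+\.\d+", version):
        raise ValueError(f"not a semver: {version}")
    major, minor, patch = map(int, version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown bump part: {part}")
```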
For each requested service, the script:

- Reads the `services/<service>/Cargo.toml` package version.
- Computes the target version.
- Skips services already at target.
- Updates `services/<service>/Cargo.toml`. For `receiver`, also updates the `apps/receiver-ui/src-tauri/tauri.conf.json` `version` to the same semver (required for the Tauri Windows release workflow).
- Runs release-workflow parity checks/build:
  - `forwarder`/`receiver`: `npm ci`, UI `lint`, UI `check`, UI tests for `apps/<service>-ui`
  - `server`: `npm ci`, UI `lint`, UI `check`, UI tests for `apps/server-ui`, then:
    - default: `cargo build --release --package server --bin server`
    - optional: `docker build -t <image>:v<version> -t <image>:latest -f services/server/Dockerfile .` (with `--server-local-docker-build`)
  - `forwarder`/`receiver`/`streamer`/`emulator`: `cargo build --release --package <service> --bin <service>` (`--features embed-ui` for `forwarder`/`receiver`)
- Stages `services/<service>/Cargo.toml` and `Cargo.lock` (and for `receiver`, `apps/receiver-ui/src-tauri/tauri.conf.json`).
- Creates commit: `chore(<service>): bump version to <new_version>`.
- Creates tag: `<service>-v<new_version>` (for `receiver`, the `receiver-v<new_version>` tag triggers the `.github/workflows/release.yml` Receiver Tauri jobs for the NSIS installer and updater manifest).
The script prints each step and the exact command before execution.
In `--dry-run`, it still runs the checks/build commands, but prints and skips mutating commands (version file write, `git add`, `git commit`, `git tag`, `git push`).
When output is a TTY, step/command/status lines are colorized for readability. Set `NO_COLOR=1` to force plain text output.
After all services succeed, it runs `git push origin master`, then pushes each tag with its own `git push origin <tag>` so every tag triggers its own GitHub Actions run (see comments in scripts/release.py).
For server releases, the Docker image build/push is handled by GitHub Actions on `server-v<version>` tags. That workflow requires the repository secrets `DOCKERHUB_USERNAME` and `DOCKERHUB_TOKEN`.
- Fails fast if the working tree is dirty.
- Fails fast if the current branch is not `master`.
- Prints the full release plan before execution.
- Warns on explicit version downgrades (`new < current`).
- Uses transactional rollback on failure:
  - Deletes any tags created in this run.
  - Resets git state back to the starting `HEAD`.
- Duplicate service names in CLI args are de-duplicated (first occurrence wins).
- If every selected service is already at the target version, it exits with “Nothing to release”.
- Because rollback uses `git reset --hard`, only run this script when your tree is clean (the script enforces this).
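The de-duplication and skip-at-target rules above can be sketched as follows. This is an illustrative planning helper, not release.py's actual code; `current` and `target` are hypothetical service-to-version maps:

```python
def plan_services(requested: list[str],
                  current: dict[str, str],
                  target: dict[str, str]) -> list[str]:
    """De-duplicate requested services (first occurrence wins) and drop
    any service already at its target version. An empty result would
    correspond to the "Nothing to release" exit."""
    ordered: list[str] = []
    for svc in requested:
        if svc not in ordered:
            ordered.append(svc)
    return [s for s in ordered if current[s] != target[s]]
```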
sbc_cloud_init.py asks deployment questions and generates the two files needed for Raspberry Pi cloud-init setup:

- `user-data`
- `network-config`
Use it when preparing an SBC image so you do not need to manually edit YAML.
Usage:

```sh
uv run scripts/sbc_cloud_init.py
```

Optional output directory:

```sh
uv run scripts/sbc_cloud_init.py --output-dir /tmp/sbc-config
```

Enable full first-boot automation (no SSH setup commands required):

```sh
uv run scripts/sbc_cloud_init.py --auto-first-boot
```

In `--auto-first-boot` mode, the wizard also asks for:
- Server base URL
- Forwarder auth token
- Reader targets
- Status bind address
and writes a `user-data` that runs `deploy/sbc/rt-setup.sh` non-interactively on first boot.

The generated setup env also sets the forwarder `display_name` to the same value as the configured hostname.
The script prompts for:
- Hostname
- SSH admin username
- SSH public key
- Static IPv4/CIDR for eth0
- Default gateway
- DNS servers
- Optional Wi-Fi settings (SSID/password/regulatory domain for `wlan0`)
By default, generated files are written to `deploy/sbc/generated/`.
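The static-network answers above map onto cloud-init's `network-config` (version 2, netplan-style) format. A minimal sketch of that mapping, assuming a netplan v2 layout; the field structure is an assumption about the generated file, not a copy of it:

```python
def render_network_config(address: str, gateway: str,
                          dns: list[str]) -> dict:
    """Build a network-config (version 2 / netplan-style) mapping for a
    static eth0, from the wizard's IPv4/CIDR, gateway, and DNS answers.
    Hypothetical sketch; sbc_cloud_init.py's actual output may differ."""
    return {
        "version": 2,
        "ethernets": {
            "eth0": {
                "dhcp4": False,
                "addresses": [address],                 # e.g. 192.168.1.50/24
                "routes": [{"to": "default", "via": gateway}],
                "nameservers": {"addresses": dns},
            }
        },
    }
```

Serializing this mapping with a YAML dumper would yield a `network-config` file of the shape cloud-init expects.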