
NewComputeBench


NewComputeBench is a benchmark suite for evaluating new compute paradigms (spiking neural networks, optical computing, processing-in-memory, and more) via software emulation. We aim to predict the scaling laws of neural networks trained with new compute paradigms by running small- and medium-scale experiments and extrapolating the observed trends.

📖 Full documentation: aicrosssim.github.io/NewComputeBench


Overview

The project is structured around three phases:

  1. Build a scaling framework for language model pretraining up to 1.1B parameters (AICrossSim-CLM series)
  2. Implement software emulation of new compute paradigms
  3. Filter out promising paradigms through small- and medium-scale experiments, then scale up
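The extrapolation in phases 1 and 3 follows the usual scaling-law recipe: fit a power law to (model size, loss) pairs from small- and medium-scale runs, then project the trend to larger sizes. A minimal NumPy sketch of that idea, where the loss values are made-up placeholders rather than measured NewComputeBench results:

```python
import numpy as np

# Hypothetical (parameter count, eval loss) pairs from small-scale runs.
# The losses are illustrative placeholders, NOT measured results.
sizes = np.array([60e6, 200e6, 400e6, 1.1e9])  # AICrossSim-CLM model sizes
losses = np.array([3.60, 3.20, 3.00, 2.75])    # placeholder eval losses

# Fit a power law L(N) = a * N**(-alpha), i.e. a line in log-log space:
#   log L = log a - alpha * log N
slope, intercept = np.polyfit(np.log(sizes), np.log(losses), 1)
alpha, a = -slope, np.exp(intercept)

# Extrapolate the fitted trend to a larger, untrained model size.
predicted_loss = a * 7e9 ** -alpha
print(f"alpha = {alpha:.3f}, predicted loss at 7B params: {predicted_loss:.2f}")
```

With decreasing losses the fitted exponent is positive, so the projected loss at 7B parameters falls below the smallest observed loss; whether such an extrapolation holds for a given paradigm is exactly what the benchmark is designed to test.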

Quick Start

```shell
git clone https://github.com/AICrossSim/NewComputeBench.git
cd NewComputeBench
git submodule update --init
```

Option 1 — uv (recommended, assumes CUDA is pre-installed on the system)

```shell
uv sync
uv pip install -e ./submodules/mase   # install MASE quantization backend
```

Option 2 — conda + pip (use this if CUDA is not pre-installed)

```shell
conda env create -f environment.yaml   # installs Python 3.11 + CUDA Toolkit
conda activate new-compute
pip install -r requirements.txt
pip install -e ./submodules/mase
```

Run inference with a pretrained model:

```shell
cd experiments/llm-digital/pretrain
python run.py hf-gen --model_name AICrossSim/clm-60m --prompt "London is"
```

See the Installation Guide for full setup instructions.

Tutorials

| Topic | Link |
| --- | --- |
| LLM Pretraining & Evaluation | docs |
| Random Bitflip on CLM | docs |
| Bitflip-Aware LoRA Fine-Tuning (Llama-3.1-8B) | docs |
| Optical Neural Networks on RoBERTa | docs |
| Optical Neural Networks on CLM | docs |
| Spiking Neural Networks on RoBERTa | docs |
| Processing-in-Memory on RoBERTa | docs |
| Processing-in-Memory on ViT | docs |

Pretrained Models

Our pretrained AICrossSim-CLM checkpoints are available on HuggingFace:

| Model | HuggingFace |
| --- | --- |
| CLM-60M (clean) | AICrossSim/clm-60m |
| CLM-200M (clean) | AICrossSim/clm-200m |
| CLM-400M (clean) | AICrossSim/clm-400m |
| CLM-1.1B (clean) | AICrossSim/clm-1.1b |
| CLM-60M (bitflip-aware) | AICrossSim/bitflip-fc-clm-60m |
| CLM-200M (bitflip-aware) | AICrossSim/bitflip-fc-clm-200m |
| CLM-400M (bitflip-aware) | AICrossSim/bitflip-fc-clm-400m |
| CLM-1.1B (bitflip-aware) | AICrossSim/bitflip-fc-clm-1.1b |

About

This project is led by Dr. Yiren Zhao (Imperial College London), Dr. Luo Mai (University of Edinburgh), and Prof. Robert Mullins (University of Cambridge), funded by ARIA.
