Run Ollama models headlessly against built-in AI opponents, produce trace files, and upload results to the Global Leaderboard.
Both files (`play_offline.js` and `play_offline.sh`) live in the same directory as the main app. Download them to a local folder, then pull a model and start Ollama:
```sh
ollama pull llama3.1:8b
OLLAMA_ORIGINS=* ollama serve
```
```sh
node play_offline.js --model llama3.1:8b --opponent lightRush
```
This plays a full game and writes a trace JSON file to the current directory.
```sh
node play_offline.js --model llama3.1:8b --opponent lightRush --upload
```
Add the `--upload` flag to automatically submit the result to Firebase when the game finishes.
| Flag | Description | Default |
|---|---|---|
| `--model <name>` | Ollama model name | `llama3.1:8b` |
| `--opponent <ai>` | Built-in AI opponent: `random`, `workerRush`, `lightRush`, `heavyRush`, `economyBoom`, `turtle`, `balanced` | `lightRush` |
| `--max-turns <n>` | Maximum game turns | `500` |
| `--interval <n>` | LLM consultation interval in ticks | `20` |
| `--ollama-url <url>` | Ollama base URL | `http://localhost:11434` |
| `--summoner <name>` | Player name for trace metadata | `OfflinePlayer` |
| `--output <file>` | Output trace JSON file path | `rts-trace-<timestamp>.json` |
| `--upload` | Upload game result to Firebase after completion | off |
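The flags above map onto a small options object. The following is a hypothetical sketch of how they could be parsed with the table's defaults; it is not the actual `play_offline.js` implementation, and a real parser would also coerce numeric values like `--max-turns`.

```javascript
// Sketch only: defaults taken from the flags table above.
const DEFAULTS = {
  model: "llama3.1:8b",
  opponent: "lightRush",
  "max-turns": 500,
  interval: 20,
  "ollama-url": "http://localhost:11434",
  summoner: "OfflinePlayer",
  output: null, // rts-trace-<timestamp>.json is generated at runtime
  upload: false,
};

function parseArgs(argv) {
  const opts = { ...DEFAULTS };
  for (let i = 0; i < argv.length; i++) {
    const flag = argv[i].replace(/^--/, "");
    if (flag === "upload") opts.upload = true;      // boolean flag, no value
    else if (flag in opts) opts[flag] = argv[++i];  // value flag (kept as string)
  }
  return opts;
}

console.log(parseArgs(["--opponent", "turtle", "--upload"]));
```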
After a game finishes, a trace JSON file is saved to your current directory. To replay it, load the resulting `rts-trace-*.json` file.

Tips:

- `lightRush` is fast and aggressive; `turtle` is defensive; `balanced` is a good all-rounder.
- Lower `--interval` (e.g., `10`) for more frequent LLM decisions; raise it (e.g., `40`) for faster games with fewer API calls.
- Add `--summoner YourName` to tag your name on the leaderboard when uploading results.
- `play_offline.sh` wraps the Node script and can be customized for batch runs.
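A batch run over every built-in opponent can be sketched in Node as a dry run that prints one command per opponent; the opponent list comes from the flags table above. To actually run the games, swap `console.log` for `child_process.execSync`.

```javascript
// Dry-run batch sketch: prints the command for each built-in opponent.
const OPPONENTS = [
  "random", "workerRush", "lightRush", "heavyRush",
  "economyBoom", "turtle", "balanced",
];

for (const opp of OPPONENTS) {
  console.log(`node play_offline.js --model llama3.1:8b --opponent ${opp}`);
}
```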