# How to Submit Your Agent

## Step 1: Fork the Repository

Fork the MicroRTS repository to your GitHub account.
## Step 2: Create Your Team Folder

Copy the submission template:

```bash
cp -r submissions/_template submissions/your-team-name
```

Use a URL-safe team name: lowercase letters, numbers, and hyphens only.
## Step 3: Edit `metadata.json`

Fill in your team information:

```json
{
  "team_name": "your-team-name",
  "display_name": "Your Team Name",
  "agent_class": "YourAgent",
  "agent_file": "YourAgent.java",
  "model_provider": "ollama",
  "model_name": "llama3.1:8b",
  "description": "Brief strategy description"
}
```
| Field | Description |
|---|---|
| `team_name` | Must match your folder name |
| `display_name` | Shown on the leaderboard |
| `agent_class` | Java class name (must match the filename) |
| `agent_file` | Java source file |
| `model_provider` | `ollama`, `gemini`, `openai`, `deepseek`, or `none` |
| `model_name` | Your preferred model identifier (e.g., `llama3.1:8b`). See the note below. |
**Model availability note:** You are welcome to specify any preferred model in `model_name`. However, official competition runs are executed on the organizers' server using locally available models. If your preferred model is not available, your agent will be run with whatever model is on hand (currently `llama3.1:8b`). If your model preference differs, note it in your PR description so the organizers are aware.
## Step 4: Implement Your Agent

Two agent types are supported:

| Type | Base Class | Package | Template |
|---|---|---|---|
| Abstraction | `AbstractionLayerAI` | `ai.abstraction.submissions.<team>` | `_template/Agent.java` |
| MCTS | `NaiveMCTS` (or any `AI` subclass) | `ai.mcts.submissions.<team>` | `_template/MCTSAgent.java` |
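Of the two, the MCTS type is usually the thinner wrapper: the template simply subclasses an existing search AI. Here is a minimal sketch, assuming `NaiveMCTS` exposes the single-argument `UnitTypeTable` constructor it has in upstream MicroRTS; the package and class names are placeholders for your own team. An Abstraction-type sketch follows the requirements below.

```java
// Minimal MCTS-type agent: inherit NaiveMCTS behavior unchanged.
// Package and class names are placeholders for your own team.
package ai.mcts.submissions.your_team;

import ai.mcts.naivemcts.NaiveMCTS;
import rts.units.UnitTypeTable;

public class MCTSAgent extends NaiveMCTS {
    public MCTSAgent(UnitTypeTable utt) {
        super(utt); // NaiveMCTS defaults; its other constructors expose the search budget and policies
    }
}
```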
Your agent must:

- Have a constructor: `public YourAgent(UnitTypeTable utt)`
- Use one of the allowed packages above (hyphens become underscores in `<team>`)
- Override `getAction(int player, GameState gs)`

See `submissions/example-team/` for a working example.
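Below is a minimal sketch of an Abstraction-type agent that satisfies the three requirements above. It assumes the upstream `AbstractionLayerAI` API (a `PathFinding` constructor argument and `translateActions`); the package name is a hypothetical placeholder, and the strategy is intentionally empty.

```java
package ai.abstraction.submissions.your_team; // hypothetical team package

import java.util.ArrayList;
import java.util.List;

import ai.abstraction.AbstractionLayerAI;
import ai.abstraction.pathfinding.AStarPathFinding;
import ai.core.AI;
import ai.core.ParameterSpecification;
import rts.GameState;
import rts.PlayerAction;
import rts.units.UnitTypeTable;

public class YourAgent extends AbstractionLayerAI {
    private final UnitTypeTable utt;

    // Required constructor: the tournament framework passes in the UnitTypeTable.
    public YourAgent(UnitTypeTable utt) {
        super(new AStarPathFinding());
        this.utt = utt;
    }

    // Required override: issue abstract actions, then translate them into a PlayerAction.
    @Override
    public PlayerAction getAction(int player, GameState gs) {
        // Strategy goes here (see the action helpers below); this skeleton issues no actions.
        return translateActions(player, gs);
    }

    @Override
    public AI clone() {
        return new YourAgent(utt);
    }

    @Override
    public List<ParameterSpecification> getParameters() {
        return new ArrayList<>();
    }
}
```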
### Available Actions (`AbstractionLayerAI` agents)

- `move(unit, x, y)` - Move the unit to a position
- `attack(unit, target)` - Attack an enemy unit
- `harvest(unit, resource, base)` - Gather resources
- `build(unit, type, x, y)` - Build a structure
- `train(unit, type)` - Train a unit from a building
- `idle(unit)` - Do nothing this tick
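As an illustration of these helpers, the skeleton's `getAction` could be filled in with a small worker economy: idle workers harvest the nearest resource, and the base trains workers while resources allow. This is only a sketch, not the example team's strategy; the `Worker`/`Base` type names follow the standard MicroRTS `UnitTypeTable`, `closest` is a hypothetical helper written for this example, and the extra `rts.PhysicalGameState`, `rts.units.Unit`, and `rts.units.UnitType` imports are needed.

```java
// Drop-in replacement for the skeleton's getAction above (plus a hypothetical helper).
@Override
public PlayerAction getAction(int player, GameState gs) {
    PhysicalGameState pgs = gs.getPhysicalGameState();
    UnitType workerType = utt.getUnitType("Worker");
    UnitType baseType = utt.getUnitType("Base");

    for (Unit u : pgs.getUnits()) {
        // Only give orders to our own units that have no action assigned yet.
        if (u.getPlayer() != player || gs.getActionAssignment(u) != null) continue;

        if (u.getType() == workerType) {
            Unit resource = closest(pgs, u, o -> o.getType().isResource);
            Unit base = closest(pgs, u, o -> o.getType() == baseType && o.getPlayer() == player);
            if (resource != null && base != null) {
                harvest(u, resource, base);   // gather resources and return them to the base
            } else {
                idle(u);                      // nothing useful to do this tick
            }
        } else if (u.getType() == baseType
                && gs.getPlayer(player).getResources() >= workerType.cost) {
            train(u, workerType);             // queue another worker when we can afford it
        }
    }
    return translateActions(player, gs);
}

// Hypothetical helper: nearest unit (Manhattan distance) matching a predicate.
private Unit closest(PhysicalGameState pgs, Unit from, java.util.function.Predicate<Unit> filter) {
    Unit best = null;
    int bestDist = Integer.MAX_VALUE;
    for (Unit other : pgs.getUnits()) {
        if (!filter.test(other)) continue;
        int d = Math.abs(other.getX() - from.getX()) + Math.abs(other.getY() - from.getY());
        if (d < bestDist) {
            bestDist = d;
            best = other;
        }
    }
    return best;
}
```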
## Step 5: Test Locally

```bash
# Build the project
ant build

# Run a test game
java -cp "lib/*:bin" rts.MicroRTS -f resources/config.properties

# Validate your submission
python3 tournament/validate_submission.py submissions/your-team-name/
```
## Step 5b: Run the Benchmark or Tournament Locally

Before submitting, you can run the same evaluation tools used for official scoring to preview your results.

### Option A: Benchmark Arena (built-in agents)

The benchmark arena runs a single-elimination gauntlet against six reference AIs in ascending difficulty (RandomBiasedAI → HeavyRush → LightRush → WorkerRush → Tiamat → CoacAI). Your agent must win each round to advance. The final score is 0–100.
```bash
# Prerequisites: Ollama running with your model pulled
ollama serve &
ollama pull llama3.1:8b

# Build first
ant build

# Run the benchmark arena (default: 1 game per opponent)
python3 benchmark_arena.py

# Run with multiple games per matchup for more reliable results
python3 benchmark_arena.py --games 3
```
Environment variables:

| Variable | Description | Default |
|---|---|---|
| `OLLAMA_MODEL` | Model for the `ollama` agent | `llama3.1:8b` |
| `OLLAMA_MODEL_P2` | Model for the `ollama2` agent | `qwen3:4b` |
| `GEMINI_API_KEY` | Enables Gemini agent benchmarking | (not set) |

Results are saved to `benchmark_results/` as JSON and Markdown.
### Option B: Tournament Runner (submitted agents)

The tournament runner evaluates submission folders using the same pipeline as official competition scoring. It discovers all valid submissions under `submissions/`, installs them, compiles the project, and runs a single-elimination bracket on multiple maps (8x8 and 16x16).
```bash
# Build and run the tournament over all submissions
python3 tournament/run_tournament.py

# Run with multiple games per matchup
python3 tournament/run_tournament.py --games 3

# Skip head-to-head games between submissions
python3 tournament/run_tournament.py --skip-h2h

# Use a custom submissions directory
python3 tournament/run_tournament.py --submissions-dir submissions/
```
To test only your own submission, point the runner at a directory containing just your team folder:

```bash
mkdir -p my-test/your-team-name
cp -r submissions/your-team-name/* my-test/your-team-name/
python3 tournament/run_tournament.py --submissions-dir my-test --skip-h2h
```

Results are saved to `tournament_results/` as JSON.
### Option C: Quick Benchmark Script

For a simpler smoke test of the built-in LLM agents against RandomBiasedAI:

```bash
# Runs 2 games each for llama3.1:8b and qwen3:14b
./benchmark.sh
```
## Step 6: Include Self-Reported Results (Optional)

You can include a `results.json` file in your submission folder with benchmark results you ran yourself using your preferred model. This lets reviewers see how your agent performs with a more capable model, and it helps distinguish whether a lower official score stems from model capability or from your prompt and strategy.

Run the benchmark arena against the six built-in leaderboard opponents and fill in the results:

```bash
export OLLAMA_MODEL="your-preferred-model"
python3 benchmark_arena.py
```

Then create `submissions/your-team-name/results.json`:
```json
{
  "self_reported": true,
  "submitter": "your-team-name",
  "agent_class": "ai.abstraction.submissions.your_team.YourAgent",
  "model": "your-preferred-model",
  "hardware": "e.g. Apple M2 Pro, CPU only",
  "date": "2026-03-01",
  "notes": "Optional notes about your setup.",
  "version": "2.0",
  "format": "single-elimination",
  "map": "maps/8x8/basesWorkers8x8.xml",
  "max_cycles": 5000,
  "games_per_matchup": 1,
  "score": 69.0,
  "grade": "C",
  "eliminated_at": "Tiamat",
  "opponents": {
    "RandomBiasedAI": { "wins": 1, "draws": 0, "losses": 0, "avg_game_score": 1.2, "weighted_points": 12.0 },
    "HeavyRush": { "wins": 1, "draws": 0, "losses": 0, "avg_game_score": 1.1, "weighted_points": 22.0 },
    "LightRush": { "wins": 1, "draws": 0, "losses": 0, "avg_game_score": 1.2, "weighted_points": 18.0 },
    "WorkerRush": { "wins": 1, "draws": 0, "losses": 0, "avg_game_score": 1.0, "weighted_points": 15.0 },
    "Tiamat": { "wins": 0, "draws": 0, "losses": 1, "avg_game_score": 0.0, "weighted_points": 0.0 }
  }
}
```
Self-reported results are not verified and are kept separate from official scores. Only report results against the six built-in leaderboard opponents, not against other submissions. See `src/ai/abstraction/submissions/example_team/results.json` for a complete example.
## Step 7: Submit a Pull Request

Push your changes and open a PR against the `master` branch. Your PR should contain only files inside `submissions/your-team-name/`.

Maintainers will review your submission for security compliance and add it to the next tournament run.
## Security Restrictions

The following APIs are forbidden:

- `Runtime.exec` / `ProcessBuilder`
- `ServerSocket` / raw sockets
- `System.exit`
- File deletion APIs
- Dynamic class loading / reflection
- Thread creation
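To make the list concrete, here is an illustrative (not exhaustive) sketch of calls that fall under these restrictions; none of them may appear anywhere in your submission, and the exact scan rules are defined by the maintainers' review.

```java
// Illustrative only: examples of calls that will cause a submission to be rejected.
import java.io.File;
import java.net.ServerSocket;

public class ForbiddenExamples {
    void doNotDoAnyOfThis() throws Exception {
        Runtime.getRuntime().exec("ls");          // process execution
        new ProcessBuilder("ls").start();         // process execution
        new ServerSocket(8080);                   // opening network sockets
        System.exit(0);                           // terminating the JVM
        new File("some-file").delete();           // file deletion
        Class.forName("java.util.ArrayList");     // dynamic class loading
        getClass().getDeclaredMethods();          // reflection
        new Thread(() -> {}).start();             // thread creation
    }
}
```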
## Submission Status

Live status of all pull request submissions to the MicroRTS repository.