The next generation of models built to play Minecraft
🧠 Andy‑4 Family
A unified repository for the Andy‑4 family of specialist AI models, each tuned for enhanced Minecraft gameplay and multimodal capabilities* via the Mindcraft framework.
*Multimodal variants are planned but not yet released.
Shared Metadata
- Language: English (`en`)
- Tags: gaming, minecraft, mindcraft
- License: Andy 1.1 License
🧠 Andy‑4 ⛏️
Andy‑4 is an 8 billion‑parameter specialist model tuned for Minecraft gameplay via the Mindcraft framework. Trained on a single NVIDIA RTX 3090 over three weeks, Andy‑4 delivers advanced reasoning, multi‑step planning, and robust in‑game decision‑making.
⚠️ Certification: Andy‑4 is not yet certified by the Mindcraft developers. Use in production at your own discretion.
🔍 Model Specifications
- Parameter Count: 8 B
- Training Hardware: 1 × NVIDIA RTX 3090
- Training Duration: ~3 weeks
- Data Volumes:
- Messages: 179,384
- Tokens: 425,535,198
- Conversations: 62,149
- License: Andy 1.1 License
- Repository: Andy-4
Datasets
datasets:
- Sweaterdog/Andy-4-base-1
- Sweaterdog/Andy-4-base-2
- Sweaterdog/Andy-4-ft
language:
- en
base_model:
- unsloth/Llama3.1-8B
tags:
- gaming
- minecraft
- mindcraft
📊 Training Regimen
Andy‑4‑base‑1 dataset (47.4 k examples)
- Epochs: 2
- Learning Rate: 7 × 10⁻⁵
Andy‑4‑base‑2 dataset (48.9 k examples)
- Epochs: 4
- Learning Rate: 3 × 10⁻⁷
Fine‑tune (FT) dataset (4.12 k examples)
- Epochs: 2.5
- Learning Rate: 2 × 10⁻⁵
- Optimizer: AdamW_8bit with cosine decay
- Quantization: 4‑bit (`bnb-4bit`) for inference
- Warm‑Up Steps: 0.1% of each dataset
🚀 Installation
First, choose your quantization (context window base: 8192):
| Quantization | VRAM Required |
|---|---|
| F16 | 16 GB+ |
| Q5_K_M | 8 GB+ |
| Q4_K_M | 6–8 GB |
| Q3_K_M | 6 GB (low) |
| Q2_K | 4–6 GB (ultra) |
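If you are unsure which row applies to your GPU, a quick VRAM check helps before picking a quantization (a minimal sketch, assuming an NVIDIA GPU with `nvidia-smi` available):

```bash
# Report total and free VRAM so you can pick a quantization from the
# table above that leaves headroom for the context (KV) cache.
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
```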
1. Installation via Ollama
- Select your desired quantization
- Copy the provided `ollama run` command (see the example below)
- Execute it in your terminal
- Use the model (e.g., `ollama/sweaterdog/andy-4:latest`)
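For reference, a minimal run sketch for the default tag; per-quantization tags are assumptions to confirm on the Ollama model page:

```bash
# Pull (if needed) and start the default Andy-4 build from the Ollama registry.
ollama run sweaterdog/andy-4:latest

# In a Mindcraft profile, the same model is then referenced as, e.g.:
#   "model": "ollama/sweaterdog/andy-4:latest"
```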
2. Manual Download & Setup
Download:
- From the Hugging Face Files tab, download the `.GGUF` weights (e.g., `Andy-4.Q4_K_M.gguf`).
- Retrieve the `Modelfile` from this repo.

Configure `Modelfile`: point it at the downloaded weights with `FROM /path/to/Andy-4.Q4_K_M.gguf`.
Optional: Adjust `num_ctx` for extended context windows if you have sufficient VRAM.

Register Locally: `ollama create andy-4 -f Modelfile` (see the sketch below)
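Putting the manual steps together, a minimal sketch (the weights path is a placeholder and the `num_ctx` value is only an example):

```bash
# Write a minimal Modelfile pointing at the downloaded weights,
# then register it with Ollama under the name "andy-4".
cat > Modelfile <<'EOF'
FROM /path/to/Andy-4.Q4_K_M.gguf
PARAMETER num_ctx 8192
EOF

ollama create andy-4 -f Modelfile
ollama run andy-4
```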
If you lack a GPU, refer to the Mindcraft Discord guide for free cloud options.
🔧 Context‑Window Quantization
To reduce VRAM usage for context caching:
Windows
- Close Ollama.
- In System Properties → Environment Variables, add:
  - `OLLAMA_FLASH_ATTENTION=1`
  - `OLLAMA_KV_CACHE_TYPE=q8_0` (or `q4_0` for more savings, less stable)
- Restart Ollama.
Linux/macOS
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE="q8_0" # or "q4_0"
ollama serve
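These variables affect the server process, so any model served by that instance inherits the quantized KV cache. A quick check, assuming the server above is running and using the locally registered name from the manual setup as an example:

```bash
# In a second terminal: this session uses the flash-attention and q8_0
# KV-cache settings that were exported before `ollama serve` started.
ollama run andy-4 "Say hello"
ollama ps   # lists loaded models and their memory footprint
```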
📌 Acknowledgments
- Data & Models By: @Sweaterdog
- Framework: Mindcraft
- LoRA Weights: https://huggingface.co/Sweaterdog/Andy-4-LoRA
🤏 Andy‑4‑micro 🧠
Andy‑4‑micro is a lightweight, 1.5 B‑parameter variant of Andy‑4, optimized for responsive local inference and experimentation within the Mindcraft framework.
💡 Trained on a single NVIDIA RTX 3070 over four days, Andy‑4‑micro maintains strong performance while staying efficient.
⚠️ Certification: Not yet certified by Mindcraft developers. Use at your own discretion.
📊 Model Overview
- Parameter Count: 1.5 B
- Training Hardware: 1 × NVIDIA RTX 3070
- Training Duration: ~4 days
- Total Tokens: ~42 M
- Base Architecture: Qwen 2.5
- License: Andy 1.1 License
- Repository: Andy-4-micro
Datasets
datasets:
- Sweaterdog/Andy-4-base-2
- Sweaterdog/Andy-4-ft
language:
- en
base_model:
- unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit
tags:
- gaming
- minecraft
- mindcraft
🚀 Installation
First, choose your quantization (context window base: 8192):
| Quantization | VRAM Required |
|---|---|
| F16 | 5 GB |
| Q8_0 | 3 GB+ |
| Q5_K_M | 2 GB+ |
| Q3_K_M | 1 GB or CPU |
1. Installation via Ollama
- Select quantization
- Copy and run the provided `ollama run` command (see the example below)
- Use `ollama/sweaterdog/andy-4:micro-q8_0` locally
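As above, a minimal run sketch for the tag referenced in this README; other micro tags are assumptions to confirm on the Ollama page:

```bash
# Pull and start the Q8_0 build of Andy-4-micro from the Ollama registry.
ollama run sweaterdog/andy-4:micro-q8_0
```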
2. Manual Download & Setup
- Download: Grab the `.GGUF` weights (e.g., `Andy-4-micro.Q4_K_M.gguf`) and the `Modelfile`.
- Configure `Modelfile`: point it at the downloaded weights with `FROM /path/to/Andy-4-micro.Q4_K_M.gguf`. Optional: Tweak `num_ctx` as needed.
- Register: `ollama create andy-4-micro -f Modelfile`
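The Andy-4 sketch above carries over with the micro file and model names swapped in (the path is a placeholder):

```bash
# Minimal Modelfile for the micro weights, then register and run it.
cat > Modelfile <<'EOF'
FROM /path/to/Andy-4-micro.Q4_K_M.gguf
EOF

ollama create andy-4-micro -f Modelfile
ollama run andy-4-micro
```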
Free GPU options: see the Mindcraft Discord guide.
🔧 Context‑Window Quantization
Use the same environment-variable tweaks listed above to enable flash attention and KV-cache quantization.
📌 Acknowledgments
- Data & Model By: @Sweaterdog
- Framework: Mindcraft
- LoRA Weights: https://huggingface.co/Sweaterdog/Andy-4-micro-LoRA