1,670 Downloads Updated 3 months ago
12ec425c590d · 1.9GB
A unified repository for the Andy‑4 family of specialist AI models, each tuned for enhanced Minecraft gameplay and multimodal capabilities via the Mindcraft framework.
Andy‑4 is an 8 billion‑parameter specialist model tuned for Minecraft gameplay via the Mindcraft framework. Trained on a single NVIDIA RTX 3090 over three weeks, Andy‑4 delivers advanced reasoning, multi‑step planning, and robust in‑game decision‑making.
⚠️ Certification: Andy‑4 is not yet certified by the Mindcraft developers. Use in production at your own discretion.
```yaml
datasets:
- Sweaterdog/Andy-4-base-1
- Sweaterdog/Andy-4-base-2
- Sweaterdog/Andy-4-ft
language:
- en
base_model:
- unsloth/Llama3.1-8B
tags:
- gaming
- minecraft
- mindcraft
```
- Andy‑4‑base‑1 dataset (47.4 k examples)
- Andy‑4‑base‑2 dataset (48.9 k examples)
- Fine‑tune (FT) dataset (4.12 k examples)
Quantized `bnb-4bit` weights are also provided for inference. First, choose your quantization (context window base: 8192):
| Quantization | VRAM Required |
|---|---|
| F16 | 16 GB+ |
| Q5_K_M | 8 GB+ |
| Q4_K_M | 6–8 GB |
| Q3_K_M | 6 GB (low) |
| Q2_K | 4–6 GB (ultra) |
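As a rough helper, the table above can be turned into a quick VRAM check. This is a sketch that assumes an NVIDIA GPU with `nvidia-smi` on PATH; the thresholds mirror the table and are approximate:

```shell
# Suggest a quantization from the table above based on total GPU VRAM (MiB).
# Falls back to the smallest quant when no NVIDIA GPU is detected.
if command -v nvidia-smi >/dev/null 2>&1; then
  vram_mib=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits | head -n1)
else
  vram_mib=0
fi
if   [ "$vram_mib" -ge 16384 ]; then echo "suggested quant: F16"
elif [ "$vram_mib" -ge 8192 ];  then echo "suggested quant: Q5_K_M"
elif [ "$vram_mib" -ge 6144 ];  then echo "suggested quant: Q4_K_M or Q3_K_M"
elif [ "$vram_mib" -ge 4096 ];  then echo "suggested quant: Q2_K"
else                                 echo "suggested quant: Q2_K (expect CPU offload)"
fi
```

These cutoffs leave no headroom for the OS or other processes, so treat the suggestion as an upper bound rather than a guarantee.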
Pull and run directly with the `ollama run` command:

```
ollama run sweaterdog/andy-4:latest
```
Or download and register manually:

1. Download the `.GGUF` weights (e.g., `Andy-4.Q4_K_M.gguf`) and the `Modelfile` from this repo.
2. Configure the `Modelfile`:
```
FROM /path/to/Andy-4.Q4_K_M.gguf
```
Optional: adjust `num_ctx` for extended context windows if you have sufficient VRAM.
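For example, a minimal `Modelfile` with an extended context might be written like this (the weight path is a placeholder as above, and the `num_ctx` value of 16384 is just an illustration; change both to suit your setup):

```shell
# Write a minimal Modelfile: load the quantized weights and raise the
# context window. Path and num_ctx value are examples -- change both.
cat > Modelfile <<'EOF'
FROM /path/to/Andy-4.Q4_K_M.gguf
PARAMETER num_ctx 16384
EOF
cat Modelfile
```

Doubling the context roughly doubles the KV-cache VRAM cost, so raise `num_ctx` only as far as your headroom allows.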
Register locally:

```
ollama create andy-4 -f Modelfile
```
If you lack a GPU, refer to the Mindcraft Discord guide for free cloud options.
To reduce VRAM usage for context caching, set these environment variables before starting the server:

```
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE="q8_0"  # or "q4_0" (more savings, less stable)
ollama serve
```
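To see why the quantized cache helps, here is back-of-the-envelope arithmetic for a Llama‑3.1‑8B-class model. The shape (32 layers, 8 KV heads, head dimension 128) is an assumption about the base model, and the figures ignore per-block overhead:

```shell
# KV cache bytes = 2 (K and V) * layers * kv_heads * head_dim * ctx * bytes/elem
layers=32; kv_heads=8; head_dim=128; ctx=8192
f16=$((2 * layers * kv_heads * head_dim * ctx * 2))  # 2 bytes per f16 element
echo "f16 KV cache:  $((f16 / 1048576)) MiB"   # -> 1024 MiB at the 8192 base context
echo "q8_0 (~half):  $((f16 / 2 / 1048576)) MiB"
```

In other words, switching the KV cache from f16 to q8_0 frees roughly half a gigabyte at the base context, and proportionally more if you extend `num_ctx`.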
Andy‑4‑micro is a lightweight, 1.5 B‑parameter variant of Andy‑4, optimized for responsive local inference and experimentation within the Mindcraft framework.
💡 Trained on a single NVIDIA RTX 3070 over four days, Andy‑4‑micro maintains strong performance while staying efficient.
⚠️ Certification: Not yet certified by Mindcraft developers. Use at your own discretion.
```yaml
datasets:
- Sweaterdog/Andy-4-base-2
- Sweaterdog/Andy-4-ft
language:
- en
base_model:
- unsloth/Qwen2.5-1.5B-Instruct-unsloth-bnb-4bit
tags:
- gaming
- minecraft
- mindcraft
```
First, choose your quantization (context window base: 8192):
| Quantization | VRAM Required |
|---|---|
| F16 | 5 GB |
| Q8_0 | 3 GB+ |
| Q5_K_M | 2 GB+ |
| Q3_K_M | 1 GB or CPU |
Pull and run directly with the `ollama run` command:

```
ollama run sweaterdog/andy-4:micro-q8_0
```
Or run locally:

1. Download the `.GGUF` weights (e.g., `Andy-4-micro.Q4_K_M.gguf`) and the `Modelfile`.
2. Configure the `Modelfile`:
```
FROM /path/to/Andy-4-micro.Q4_K_M.gguf
```
Optional: tweak `num_ctx` as needed.
```
ollama create andy-4-micro -f Modelfile
```
Free GPU options: see the Mindcraft Discord guide.
Use the same environment-variable tweaks listed above for flash attention and the quantized KV cache.