# 🧠 Andy-4 ⛏️
Andy‑4 is an 8 billion‑parameter specialist model tuned for Minecraft gameplay via the Mindcraft framework. Trained on a single RTX 3090 over three weeks, Andy‑4 delivers advanced reasoning, multi‑step planning, and robust in‑game decision‑making.
⚠️ **Certification:** Andy-4 is not yet certified by the Mindcraft developers. Use it in production at your own discretion.
## 🔍 Model Specifications

- **Parameters:** 8 B
- **Training Hardware:** 1 × NVIDIA RTX 3090
- **Duration:** ~3 weeks total
- **Data Volume:**
  - Messages: 179,384
  - Tokens: 425,535,198
  - Conversations: 62,149
- **Base Architecture:** Llama 3.1 8B
- **License:** Andy 1.1 License
- **Repository:** https://huggingface.co/Sweaterdog/Andy-4
## 📊 Training Regimen

**Andy-4-base-1 dataset**
- Epochs: 2
- Learning Rate: 7e-5
- Dataset Size: 47.4k

**Andy-4-base-2 dataset**
- Epochs: 4
- Learning Rate: 3e-7
- Dataset Size: 48.9k

**Fine-tune (FT) dataset**
- Epochs: 2.5
- Learning Rate: 2e-5
- Dataset Size: 4.12k

**Shared settings**
- Optimizer: AdamW_8bit with cosine decay
- Quantization: 4-bit (`bnb-4bit`) for inference
- Warm-Up Steps: 0.1% of each dataset
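The schedule described above (short warm-up, then cosine decay) can be sketched as follows. This is an illustrative helper, not the actual training code; the card does not publish total step counts, and the warm-up here is assumed to be linear over the first 0.1% of steps:

```python
import math

def lr_at_step(step: int, total_steps: int, peak_lr: float,
               warmup_frac: float = 0.001) -> float:
    """Cosine-decay schedule with a short linear warm-up.

    warmup_frac=0.001 mirrors the 0.1%-of-dataset warm-up above;
    peak_lr would be e.g. 7e-5 for the Andy-4-base-1 run.
    """
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps            # linear warm-up
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1 + math.cos(math.pi * progress))  # cosine decay
```

The learning rate peaks right after warm-up and decays smoothly toward zero by the final step.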
## 🚀 Installation

First, choose a quantization. The VRAM estimates below assume the default context window of 8192 tokens.
| Quantization | VRAM Required |
|---|---|
| F16 | 16 GB+ |
| Q5_K_M | 8 GB+ |
| Q4_K_M | 6–8 GB |
| Q3_K_M | 6 GB (low) |
| Q2_K | 4–6 GB (ultra) |
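As a rough rule of thumb, the table can be encoded as a lookup. The `pick_quant` helper and its exact thresholds below are illustrative assumptions (not part of the official card), and still assume the default 8192-token context window:

```python
# Hypothetical helper: map available VRAM (GB) to a quantization
# from the table above. Entries are ordered best-quality first.
QUANT_MIN_VRAM_GB = [
    ("F16", 16.0),
    ("Q5_K_M", 8.0),
    ("Q4_K_M", 6.0),
    ("Q3_K_M", 5.0),   # "6 GB (low)" read as a low-end ~5-6 GB card
    ("Q2_K", 4.0),
]

def pick_quant(vram_gb: float) -> str:
    """Return the highest-quality quant whose minimum VRAM fits."""
    for name, min_gb in QUANT_MIN_VRAM_GB:
        if vram_gb >= min_gb:
            return name
    raise ValueError(f"{vram_gb} GB is below the 4 GB minimum for Q2_K")
```

For example, a 6 GB card would land on Q4_K_M, while an 8 GB card can run Q5_K_M.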
### 1. Install directly from Ollama

- Visit Andy-4 on Ollama.
- Copy the command for your chosen model type / quantization.
- Run that command in a terminal.
- Set the profile's model to the tag you installed, e.g. `ollama/sweaterdog/andy-4:latest`.
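Once installed, you can sanity-check the model outside Mindcraft through Ollama's local HTTP API (by default at `http://localhost:11434`). The sketch below uses Ollama's standard `/api/generate` endpoint; the `ask` helper is illustrative, and the model tag should match whatever you actually pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def build_generate_request(model: str, prompt: str) -> dict:
    """Payload for /api/generate; stream=False returns a single JSON object."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """One-shot prompt to the local Ollama server (requires it to be running)."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("sweaterdog/andy-4:latest", "How do you craft a stone pickaxe?"))
```

If the server is not running, the request raises a connection error; start it with `ollama serve` first.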
### 2. Manual Download & Modelfile

**Download**

- From the HF Files tab, grab your chosen `.GGUF` quant weights (e.g. `Andy-4.Q4_K_M.gguf`).
- Download the provided `Modelfile`.
**Edit**

Change `FROM YOUR/PATH/HERE` to `FROM /path/to/Andy-4.Q4_K_M.gguf`.

Optional: increase the `num_ctx` parameter to a higher value for longer conversations if you:

A. Have extra VRAM
B. Quantized the context window
C. Can use a smaller model
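For reference, a minimal edited Modelfile might look like this. The path and `num_ctx` value are placeholders, and the Modelfile shipped with Andy-4 may set additional parameters (such as a chat template) that you should keep:

```
# Path to the downloaded quant weights
FROM /path/to/Andy-4.Q4_K_M.gguf

# Optional: larger context window (needs extra VRAM)
PARAMETER num_ctx 16384
```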
**Create**

Run `ollama create andy-4 -f Modelfile`. This registers the Andy-4 model locally.
If you lack a GPU, check the Mindcraft Discord guide for free cloud setups.
## 🔧 Context-Window Quantization

To reduce the VRAM used by the context window (the KV cache):
**Windows**

1. Close Ollama.
2. In System Properties → Environment Variables, add:
   ```
   OLLAMA_FLASH_ATTENTION=1
   OLLAMA_KV_CACHE_TYPE=q8_0   # or q4_0 for extra savings, but far less stable
   ```
3. Restart Ollama.
**Linux/macOS**

```shell
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE="q8_0"   # or "q4_0", but far less stable
ollama serve
```
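To see why `q8_0` helps, here is a back-of-envelope sketch of KV-cache size. It assumes the public Llama 3.1 8B configuration (32 layers, 8 KV heads via grouped-query attention, head dim 128) and ignores the small per-block overhead real quantized caches add:

```python
# Back-of-envelope KV-cache sizes for Andy-4's Llama 3.1 8B base.
# Assumed architecture numbers: 32 layers, 8 KV heads (GQA), head dim 128.

LAYERS, KV_HEADS, HEAD_DIM = 32, 8, 128

def kv_cache_mib(ctx_tokens: int, bytes_per_elem: float) -> float:
    """KV cache size in MiB: keys + values across all layers."""
    per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_elem
    return ctx_tokens * per_token / (1024 ** 2)

for name, b in [("f16", 2.0), ("q8_0", 1.0), ("q4_0", 0.5)]:
    print(f"{name}: {kv_cache_mib(8192, b):.0f} MiB at num_ctx=8192")
```

Under these assumptions, halving the bytes per element halves the cache, so `q8_0` saves roughly 0.5 GiB at an 8192-token window, and the savings grow linearly with `num_ctx`.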
## 📌 Acknowledgments
- Data & Models by: @Sweaterdog
- Framework: Mindcraft (https://github.com/kolbytn/mindcraft)
- LoRA Weights: https://huggingface.co/Sweaterdog/Andy-4-LoRA
## ⚖️ License
See Andy 1.1 License.
This work uses data and models created by @Sweaterdog.