
🧠 Andy‑4 ⛏️

Andy‑4 is an 8-billion-parameter specialist model tuned for Minecraft gameplay via the Mindcraft framework. Trained on a single RTX 3090 over three weeks, Andy‑4 delivers advanced reasoning, multi‑step planning, and robust in‑game decision‑making.

⚠️ Certification:
Andy‑4 is not yet certified by the Mindcraft developers. Use in production at your own discretion.


🔍 Model Specifications


📊 Training Regimen

  1. Andy‑4‑base‑1 dataset

    • Epochs: 2
    • Learning Rate: 7e-5
    • Dataset Size: 47.4k
  2. Andy‑4‑base‑2 dataset

    • Epochs: 4
    • Learning Rate: 3e-7
    • Dataset Size: 48.9k
  3. Fine‑tune (FT) dataset

    • Epochs: 2.5
    • Learning Rate: 2e-5
    • Dataset Size: 4.12k
Across all stages:

  • Optimizer: AdamW_8bit with cosine learning-rate decay
  • Quantization: 4‑bit (bnb-4bit) for inference
  • Warm-up steps: 0.1% of each dataset's training steps

🚀 Installation

First, choose a quantization. The chart below assumes the default context window of 8192 tokens.

Quantization   VRAM Required
F16            16 GB+
Q5_K_M         8 GB+
Q4_K_M         6–8 GB
Q3_K_M         6 GB (low VRAM)
Q2_K           4–6 GB (ultra-low VRAM)
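
If you're unsure how much VRAM you have, you can check from a terminal. The command below is for NVIDIA GPUs (nvidia-smi ships with the NVIDIA driver); other vendors have their own tools.

    nvidia-smi --query-gpu=name,memory.total --format=csv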

1. Installation directly on Ollama

  1. Visit Andy-4 on Ollama
  2. Copy the command after choosing model type / quantization
  3. Run the command in the terminal
  4. Set the profile's model to the tag you installed, e.g. ollama/sweaterdog/andy-4:latest (see the sketch after these steps)
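
As a rough sketch, the workflow looks like the following; the tag is illustrative, so copy the exact command from the Ollama page for the quantization you picked:

    ollama pull sweaterdog/andy-4:latest
    ollama run sweaterdog/andy-4:latest "Say hello in one sentence."

A minimal Mindcraft profile pointing at that tag might then contain the following (field names follow the stock profiles bundled with Mindcraft; adjust to your setup):

    {
      "name": "Andy-4",
      "model": "ollama/sweaterdog/andy-4:latest"
    }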

2. Manual Download & Modelfile

  1. Download

    • From the HF Files tab, grab your chosen .GGUF quant weights (e.g. Andy-4.Q4_K_M.gguf).
    • Download the provided Modelfile.
  2. Edit

    Change

    FROM YOUR/PATH/HERE
    

    to

    FROM /path/to/Andy-4.Q4_K_M.gguf
    

Optional: increase the num_ctx parameter for longer conversations (see the sketch after this list) if you:

A. have spare VRAM,

B. have quantized the context window (covered below), or

C. can use a smaller model.
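
For example, a Modelfile that raises the context window could look like this; the path and the 16384 value are illustrative, and PARAMETER num_ctx is standard Ollama Modelfile syntax:

    FROM /path/to/Andy-4.Q4_K_M.gguf
    PARAMETER num_ctx 16384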

  3. Create
    ollama create andy-4 -f Modelfile
    

This registers the Andy‑4 model locally.
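
You can confirm the model was registered and give it a quick smoke test:

    ollama list
    ollama run andy-4 "Hello!"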


If you lack a GPU, check the Mindcraft Discord guide for free cloud setups.

🔧 Context‑Window Quantization

To lower VRAM use for context windows:

Windows

  1. Close Ollama.
  2. In System Properties → Environment Variables, add:
    OLLAMA_FLASH_ATTENTION=1  
    OLLAMA_KV_CACHE_TYPE=q8_0   # or q4_0 for extra savings, but far more unstable
    
  3. Restart Ollama.
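
Alternatively, you can set the same variables from a Command Prompt with setx, which persists them for your user account (close and restart Ollama afterwards):

    setx OLLAMA_FLASH_ATTENTION 1
    setx OLLAMA_KV_CACHE_TYPE q8_0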

Linux/macOS

export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE="q8_0"   # or "q4_0", but far more unstable
ollama serve
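
To make these settings persist across sessions, append the two export lines to your shell profile instead of running them each time (bash shown; adjust for your shell):

    echo 'export OLLAMA_FLASH_ATTENTION=1' >> ~/.bashrc
    echo 'export OLLAMA_KV_CACHE_TYPE="q8_0"' >> ~/.bashrc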

📌 Acknowledgments


⚖️ License

See Andy 1.1 License.

This work uses data and models created by @Sweaterdog.
