This is my first quantization: q4_0 GGML (ggjtv3) and GGUFv2 quantizations of https://huggingface.co/acrastt/OmegLLaMA-3B. I hope it works fine. 🤗

Prompt format:

```
Interests: {interests}
Conversation:
You: {prompt}
Stranger: 
```
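As a minimal sketch, the template above can be filled in with an ordinary Python format string before handing it to a GGUF runtime; the helper name and example inputs below are illustrative, not part of the model card:

```python
# Build a prompt in the format shown above.
# build_prompt, the interests list, and the message are example values.
def build_prompt(interests: list[str], message: str) -> str:
    return (
        f"Interests: {', '.join(interests)}\n"
        "Conversation:\n"
        f"You: {message}\n"
        "Stranger: "  # the model completes the stranger's reply after this
    )

prompt = build_prompt(["music", "games"], "Hi! What do you like to play?")
print(prompt)
```

The resulting string ends with `Stranger: ` so the model continues the conversation from the stranger's side.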
Model details:

- Format: GGUF
- Model size: 3.43B params
- Architecture: llama
- Quantization: 4-bit