Uploaded model
- Developed by: harshalmore31
- License: apache-2.0
- Finetuned from model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
Overview
Naval Llama is a fine-tuned version of the Meta Llama 3.1 8B model, optimized for fast, efficient text generation. Using LoRA with Unsloth and Hugging Face's TRL library, it was trained 2x faster than a standard fine-tune while capturing the core wisdom of Eric Jorgenson's The Almanack of Naval Ravikant. The model is distributed in GGUF format, making it suitable for both cloud-based and local inference.
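Since the base model is Llama 3.1, prompts at inference time should follow Meta's Llama 3.1 chat template. A minimal sketch of that format is below; the special header tokens come from Meta's published template, while the system-prompt text is only an illustrative example, not something shipped with this model.

```python
# Build a Llama 3.1-style chat prompt by hand (header tokens per Meta's
# Llama 3.1 template; most runtimes can also apply this template for you).
def format_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_prompt(
    "Answer in the spirit of The Almanack of Naval Ravikant.",
    "What is the difference between wealth and money?",
)
print(prompt)
```

Tools such as llama.cpp apply this template automatically when the GGUF file embeds it, so manual formatting like this is mainly useful for raw completion endpoints.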
Features
- Fast Fine-Tuning: Achieved 2x faster training using Unsloth’s efficient LoRA implementation.
- Efficient Quantization: Uses 4-bit quantization to minimize VRAM requirements while maintaining performance.
- GGUF Format: The model is converted to GGUF format for optimized deployment with tools like llama.cpp.
- Versatile Use Cases: Suitable for generating insightful responses, summarizing content, and creative text generation.
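The VRAM benefit of 4-bit quantization mentioned above can be sanity-checked with back-of-envelope arithmetic. The sketch below is illustrative only: real memory use adds overhead for the KV cache, activations, and runtime buffers, and the exact parameter count is approximate.

```python
# Rough weight-memory estimate for an ~8B-parameter model at different
# precisions. Actual VRAM usage will be higher due to KV cache and buffers.
params = 8_030_000_000  # approximate parameter count of Llama 3.1 8B

def weight_gb(bits_per_param: int) -> float:
    """Bytes for the weights alone, converted to GiB."""
    return params * bits_per_param / 8 / 1024**3

print(f"fp16 weights:  ~{weight_gb(16):.1f} GiB")
print(f"4-bit weights: ~{weight_gb(4):.1f} GiB")
```

This is why the 4-bit GGUF variant fits comfortably on consumer GPUs (or in system RAM for CPU inference) where the fp16 weights alone would not.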