---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
datasets:
- harshalmore31/Eric-Jorgenson_The-Almanack-of-Naval-Ravikant
---

# naval-gemma: AI Model Emulating Naval Ravikant's Wisdom

## Model Overview

**naval-gemma** is an AI language model fine-tuned to emulate the wisdom and insights of Naval Ravikant, a renowned entrepreneur and philosopher. Built upon Google DeepMind's Gemma-3-4B architecture, this model offers responses reflecting Naval's perspectives on topics like wealth, happiness, and decision-making.

## Model Details

- **Model Architecture:** Gemma-3-4B
- **Fine-Tuning Dataset:** Extracted from "The Almanack of Naval Ravikant" by Eric Jorgenson
- **Quantization:** GGUF Q8_0 (4.1GB)
- **Inference Platforms:** Compatible with Ollama and llama.cpp for local, offline usage

## Usage

To run naval-gemma locally with Ollama:

1. **Pull the Model:**
   ```bash
   ollama pull harshalmore31/naval-gemma
   ```

2. **Run the Model:**
   ```bash
   ollama run harshalmore31/naval-gemma
   ```


*Note:* Ensure Ollama or llama.cpp is installed and configured on your system.
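For llama.cpp users, a minimal sketch of downloading the GGUF file and running it directly with `llama-cli` (the exact GGUF filename below is an assumption; substitute the file actually listed in this repository):

```shell
# Fetch the Q8_0 GGUF from the Hugging Face repo
# (the filename "naval-gemma-Q8_0.gguf" is an assumption)
huggingface-cli download harshalmore31/naval-gemma naval-gemma-Q8_0.gguf --local-dir .

# Start an interactive chat session with llama.cpp's CLI
llama-cli -m naval-gemma-Q8_0.gguf -cnv
```

This skips the Ollama daemon entirely, which can be useful on machines where you only want a single self-contained binary and the model file.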

## Example Interaction

**Prompt:** "How can I build wealth without luck?"

**Response:**
"Play long-term games with long-term people. Build specific knowledge, apply leverage, and let compounding work over time."

## License

This model is fine-tuned on public content from "The Almanack of Naval Ravikant" and is distributed for educational and research purposes. Commercial use or redistribution must respect the original content's ownership rights and applicable fair-use limits.

## Acknowledgements

- **Naval Ravikant:** For his timeless wisdom
- **Eric Jorgenson:** Author of "The Almanack of Naval Ravikant"
- **Google DeepMind:** Developers of the Gemma-3-4B model
- **Ollama & llama.cpp:** Tools enabling local AI inference

## Contact

For inquiries or contributions, please reach out via [GitHub](https://github.com/harshalmore31/naval-gemma) or [Hugging Face](https://huggingface.co/harshalmore31/naval_gemma-3).

---


# Uploaded fine-tuned model

- **Developed by:** harshalmore31
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit

This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)