---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---

# Uploaded model

- **Developed by:** prithivMLmods
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

---

For a full walkthrough, watch the [demo video on YouTube](https://youtu.be/_9IcVFuql2s?si=PIoTlPmuXBhG3zx1).

# Run with Ollama 🦙

### Download and Install Ollama

To get started, download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your Windows or Mac system.

### Run Your Own Model in Minutes

### Steps to Run GGUF Models

#### 1. Create the Model File
   - Create a plain-text model file (an Ollama Modelfile) and name it appropriately, for example, `metallama`.

#### 2. Add the `FROM` Instruction
   - Point the file at the base GGUF model with a `FROM` line. For instance:

     ```plaintext
     FROM Llama-3.2-1B.F16.gguf
     ```

   - Make sure the GGUF file is in the same directory as your model file.
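A fuller model file can also set sampling parameters and a system prompt. The sketch below uses real Ollama Modelfile directives (`PARAMETER`, `SYSTEM`), but the specific values are illustrative assumptions, not required settings:

```plaintext
# Base GGUF weights (must sit next to this file)
FROM Llama-3.2-1B.F16.gguf

# Optional sampling parameters (illustrative values)
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# Optional system prompt applied to every session
SYSTEM You are a concise, helpful assistant.
```

Everything except the `FROM` line can be omitted; Ollama falls back to the model's defaults.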

#### 3. Create the Model
   - Use the following command in your terminal to build the model from your model file:

     ```bash
     ollama create metallama -f ./metallama
     ```

   - Upon success, a confirmation message will appear.

   - To verify that the model was created successfully, run:

     ```bash
     ollama list
     ```

     Ensure that `metallama` appears in the list of models.

---

## Running the Model

To run the model, use:

```bash
ollama run metallama
```

### Sample Usage

In the command prompt, run:

```plaintext
D:\>ollama run metallama
```

Example interaction:

```plaintext
>>> write a mini passage about space x
Space X, the private aerospace company founded by Elon Musk, is revolutionizing the field of space exploration.
With its ambitious goals to make humanity a multi-planetary species and establish a sustainable human presence in
the cosmos, Space X has become a leading player in the industry. The company's spacecraft, like the Falcon 9, have
demonstrated remarkable capabilities, allowing for the transport of crews and cargo into space with unprecedented
efficiency. As technology continues to advance, the possibility of establishing permanent colonies on Mars becomes
increasingly feasible, thanks in part to the success of reusable rockets that can launch multiple times without
sustaining significant damage. The journey towards becoming a multi-planetary species is underway, and Space X
plays a pivotal role in pushing the boundaries of human exploration and settlement.
```
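Besides the interactive CLI, Ollama serves a local HTTP API (by default at `http://localhost:11434`), so you can query the model from a script. The sketch below uses only the Python standard library and Ollama's documented `/api/generate` endpoint; `metallama` is the model created above, and the prompt is just an example:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a non-streaming /api/generate request."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the response text."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Inspect the request body without contacting the server
payload = build_payload("metallama", "write a mini passage about space x")
print(payload["model"])  # prints "metallama"
```

With the server running (it starts with the desktop app, or via `ollama serve`), calling `generate("metallama", "your prompt")` returns the completed text as a string.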

---

You’re now ready to run your own model with Ollama!

![Demo of Project](Demo/gguf.gif)