khushwant04 committed on
Commit 19d4973
1 Parent(s): 39cca76

Update README.md

---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---

# Uploaded model

- **Developed by:** khushwant04
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

# Llama-3.2-3b-FineTome-100k

![Model Logo](link_to_logo_image) <!-- Optional: Replace with a logo if available -->

## Model Description

**Llama-3.2-3b-FineTome-100k** is a fine-tuned version of the Llama 3.2 3B model for natural language processing (NLP) tasks. It was trained on the FineTome-100k dataset of 100,000 curated examples, with the goal of improving its performance on instruction-following and domain-specific applications.

### Key Features

- **Model Size**: 3 billion parameters
- **Architecture**: Transformer-based architecture optimized for NLP tasks
- **Fine-tuning Dataset**: 100k curated examples from diverse sources

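For hardware planning, the parameter count above translates directly into a weight-memory estimate. A rough back-of-the-envelope sketch (weights only; activations, KV cache, and framework overhead are ignored):

```python
# Rough weight-memory estimate for a 3B-parameter model.
# Counts only the weights; ignores activations, KV cache, and overhead.
PARAMS = 3_000_000_000

def weight_memory_gib(n_params: int, bits_per_param: int) -> float:
    """GiB occupied by the weights alone at a given precision."""
    return n_params * bits_per_param / 8 / (1024 ** 3)

print(f"fp16 : {weight_memory_gib(PARAMS, 16):.1f} GiB")  # ~5.6 GiB
print(f"8-bit: {weight_memory_gib(PARAMS, 8):.1f} GiB")   # ~2.8 GiB
print(f"4-bit: {weight_memory_gib(PARAMS, 4):.1f} GiB")   # ~1.4 GiB
```

This is why the 4-bit (bnb-4bit) base model used for fine-tuning fits comfortably on consumer GPUs.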
## Use Cases

- Text generation
- Sentiment analysis
- Question answering
- Language translation
- Dialogue systems

## Installation

To use **Llama-3.2-3b-FineTome-100k**, ensure the `transformers` library (and a backend such as `torch`) is installed:

```bash
pip install transformers torch
```

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("huggingface/llama-3.2-3b-finetome-100k")
model = AutoModelForCausalLM.from_pretrained("huggingface/llama-3.2-3b-finetome-100k")

# Encode the input text
input_text = "What are the benefits of using Llama-3.2-3b-FineTome-100k?"
input_ids = tokenizer.encode(input_text, return_tensors="pt")

# Generate output
output = model.generate(input_ids, max_length=50)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)

print(output_text)
```
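
Since dialogue systems are among the use cases and the base model is a Llama 3.2 *instruct* variant, multi-turn prompts are normally built with `tokenizer.apply_chat_template(messages, ...)` rather than raw strings. The sketch below hand-rolls the Llama 3-style layout purely to illustrate what that template produces; in practice, always use the tokenizer's own template so the special tokens exactly match the model:

```python
# Illustrative only: hand-rolled Llama 3-style chat layout.
# Prefer tokenizer.apply_chat_template() so special tokens match the model.
def format_chat(messages: list[dict]) -> str:
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += (
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Trailing assistant header cues the model to produce the reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is fine-tuning?"},
]
print(format_chat(messages))
```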