---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
metrics:
- bleu
- cer
- meteor
library_name: transformers
---

# Llama-3.2-3B Finetuned Model

## 1. Introduction
This model is a finetuned version of the Llama-3.2-3B Instruct large language model. It has been trained to provide detailed, accurate responses to university course-related queries, covering course details, fee structures, durations, and campus options, along with links to the corresponding course pages. Finetuning on a tailored dataset gives the model its domain-specific accuracy.

---

## 2. Dataset Used for Finetuning
The Llama-3.2-3B model was finetuned on a private dataset obtained through web scraping. The data was collected from the University of Westminster website and includes:

- Course titles
- Campus details
- Duration options (full-time, part-time, distance learning)
- Fee structures (for UK and international students)
- Course descriptions
- Direct links to course pages

This dataset was carefully cleaned and formatted to enhance the model's ability to provide precise responses to user queries.
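
The exact record schema is not published; purely as an illustration, a chat-style training example assembled from the scraped fields might look like the following (content abbreviated, all values hypothetical):

```python
# Hypothetical training record; the real dataset schema is not published.
example = {
    "messages": [
        {"role": "user",
         "content": "Tell me about the AI, Data and Communication MA."},
        {"role": "assistant",
         "content": "The AI, Data and Communication MA is offered full-time "
                    "and part-time. UK and international fee details: ... "
                    "Course page: https://www.westminster.ac.uk/..."},
    ]
}
```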

---

## 3. How to Use This Model
To use the finetuned Llama-3.2-3B model, follow the steps below:
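
Both steps assume the finetuned model and its tokenizer are already loaded onto a CUDA device. A minimal loading sketch using plain `transformers` is shown here (Unsloth's `FastLanguageModel` loader works as well); the repository ID below is a placeholder, so substitute the actual model ID:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository ID; replace with the actual finetuned checkpoint.
model_id = "roger33303/llama-3.2-3b-instruct-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps a 3B model on one GPU
    device_map="cuda",
)
```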

1. **Prepare the Query Function**
   - Define the function to handle user queries and generate responses:
     
     ```python
     from transformers import TextStreamer

     def chatml(question, model):
         # `tokenizer` is assumed to be loaded alongside the model (see above).
         # Wrap the question as a single-turn chat and apply the chat template.
         messages = [{"role": "user", "content": question}]
         inputs = tokenizer.apply_chat_template(
             messages,
             tokenize=True,
             add_generation_prompt=True,
             return_tensors="pt",
         ).to("cuda")

         # Print the fully formatted prompt for inspection.
         print(tokenizer.decode(inputs[0]))

         # Stream generated tokens to stdout as they are produced.
         text_streamer = TextStreamer(tokenizer, skip_special_tokens=True,
                                      skip_prompt=True)
         return model.generate(input_ids=inputs,
                               streamer=text_streamer,
                               max_new_tokens=512)
     ```

2. **Query the Model**
   - Use the following example to test the model:
     
     ```python
     question = "Does the University of Westminster offer a course on AI, Data and Communication MA?"
     x = chatml(question, model)
     ```
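
   - The response is streamed to stdout by `TextStreamer`. `generate` also returns the full token tensor (prompt plus response), which can be decoded if you need the text as a string:

     ```python
     print(tokenizer.decode(x[0], skip_special_tokens=True))
     ```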

With this setup, you can query the finetuned Llama-3.2-3B model and receive detailed, relevant responses.

---


# Uploaded model

- **Developed by:** roger33303
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct