Update README.md
README.md
CHANGED
@@ -47,11 +47,7 @@ This model was fine-tuned on a **Tesla T4 (Google Colab)** using **Unsloth**, a
 
 ---
 
-
-### 1. Install Dependencies
-```bash
-pip install unsloth transformers torch datasets
-```
+
 
 ### 2. Load the Model
 ```python
@@ -98,51 +94,7 @@ _ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=128,
                    use_cache=True, temperature=1.5, min_p=0.1)
 ```
 
-### **2. Fine-Tuning on a New Dataset**
-```python
-from datasets import load_dataset
-from unsloth.trainer import UnslothVisionDataCollator
-from trl import SFTTrainer, SFTConfig
-
-FastVisionModel.for_training(model)  # Enable training mode
-
-dataset = load_dataset("your_custom_dataset")
-data_collator = UnslothVisionDataCollator(model, tokenizer)
-
-trainer = SFTTrainer(
-    model=model,
-    tokenizer=tokenizer,
-    data_collator=data_collator,
-    train_dataset=dataset,
-    args=SFTConfig(
-        per_device_train_batch_size=2,
-        gradient_accumulation_steps=4,
-        warmup_steps=5,
-        max_steps=30,
-        learning_rate=2e-4,
-        optim="adamw_8bit",
-        output_dir="outputs"
-    ),
-)
-trainer.train()
-```
-
----
-
-## Deployment
-### **Save Locally**
-```python
-model.save_pretrained("Hnm_Llama3.2_(11B)-Vision_lora_model")
-tokenizer.save_pretrained("Hnm_Llama3.2_(11B)-Vision_lora_model")
-```
-
-### **Push to Hugging Face**
-```python
-model.push_to_hub("your_huggingface_username/Hnm_Llama3.2_(11B)-Vision_lora_model")
-tokenizer.push_to_hub("your_huggingface_username/Hnm_Llama3.2_(11B)-Vision_lora_model")
-```
 
----
 
 ## Notes
 - This model is optimized for vision-language tasks in the medical field but can be adapted for other applications.
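For context on what this commit keeps: the second hunk only shows the tail of the retained `### 2. Load the Model` snippet. Below is a minimal sketch, assuming the standard Unsloth `FastVisionModel` workflow, of how that loading-plus-generation section typically fits together. The adapter path, example image, and instruction are placeholders, not taken from the diff.

```python
# Hedged sketch: one way the retained "Load the Model" + generation section
# can fit together with Unsloth's FastVisionModel. The adapter path, image,
# and instruction below are placeholders, not taken from the README diff.
from unsloth import FastVisionModel
from transformers import TextStreamer
from PIL import Image

model, tokenizer = FastVisionModel.from_pretrained(
    "Hnm_Llama3.2_(11B)-Vision_lora_model",  # assumed local/Hub path to the LoRA adapter
    load_in_4bit=True,                       # 4-bit loading keeps memory within a T4's budget
)
FastVisionModel.for_inference(model)  # switch the model into inference mode

image = Image.open("example_scan.png")                 # placeholder input image
instruction = "Describe the findings in this image."   # placeholder prompt

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": instruction},
    ]}
]
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
inputs = tokenizer(image, input_text, add_special_tokens=False, return_tensors="pt").to("cuda")

# Matches the generate() call visible in the hunk context above.
text_streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=128,
                   use_cache=True, temperature=1.5, min_p=0.1)
```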
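The removed fine-tuning example passes the result of `load_dataset("your_custom_dataset")` straight to `SFTTrainer` with `UnslothVisionDataCollator`. In Unsloth's vision workflow the samples are usually converted to a chat-style `messages` structure first; the sketch below shows one assumed conversion, with the `image`/`caption` field names and the `convert_to_conversation` helper purely illustrative.

```python
# Hedged sketch (not from the README): UnslothVisionDataCollator generally expects
# chat-style samples, so a raw dataset is typically mapped into a "messages"
# structure before training. Field names ("image", "caption") are assumptions.
from datasets import load_dataset

def convert_to_conversation(sample):
    # Illustrative conversion: pair each image with an instruction/answer turn.
    return {
        "messages": [
            {"role": "user", "content": [
                {"type": "text", "text": "Describe this medical image."},
                {"type": "image", "image": sample["image"]},
            ]},
            {"role": "assistant", "content": [
                {"type": "text", "text": sample["caption"]},
            ]},
        ]
    }

raw_dataset = load_dataset("your_custom_dataset", split="train")  # placeholder dataset name
train_dataset = [convert_to_conversation(sample) for sample in raw_dataset]
# train_dataset would then take the place of `dataset` in the removed SFTTrainer example above.
```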