Agnuxo committed
Commit a271696 · verified · 1 Parent(s): 2e71778

Update README.md

Files changed (1):
  1. README.md +40 -12
README.md CHANGED
@@ -1,23 +1,51 @@
  ---
- base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
+ model_size: 3722585088
+ required_memory: 13.87
+ metrics:
+ - GLUE_MRPC
+ license: apache-2.0
+ datasets:
+ - jtatman/python-code-dataset-500k
+ - Vezora/Tested-143k-Python-Alpaca
  language:
  - en
- license: apache-2.0
+ - es
+ base_model: unsloth/Phi-3.5-mini-instruct
+ library_name: adapter-transformers
  tags:
- - text-generation-inference
- - transformers
- - unsloth
- - llama
- - trl
- - sft
+ - ORPO
+ - turbo
+ - code
+ - python
+ - mini
+ - destil
  ---

- # Uploaded model
+ # Uploaded model

- - **Developed by:** Agnuxo
+ [<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" width="100"/><img src="https://github.githubassets.com/assets/GitHub-Logo-ee398b662d42.png" width="100"/>](https://github.com/Agnuxo1)
+ - **Developed by:** [Agnuxo](https://github.com/Agnuxo1)
  - **License:** apache-2.0
- - **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
+ - **Finetuned from model:** Agnuxo/Phi-3.5

- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+ This model was fine-tuned using [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+
+ ## Benchmark Results
+
+ This model has been fine-tuned for various tasks and evaluated on the following benchmarks:
+
+ ### GLUE_MRPC
+ **Accuracy:** 0.5784
+ **F1:** 0.6680
+
+ ![GLUE_MRPC Metrics](./GLUE_MRPC_metrics.png)
+
+ Model Size: 3,722,585,088 parameters
+ Required Memory: 13.87 GB
+
+ For more details, visit my [GitHub](https://github.com/Agnuxo1).
+
+ Thanks for your interest in this model!
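
The updated card lists `library_name: adapter-transformers` and a Phi-3.5-mini-instruct base. Below is a minimal inference sketch, assuming the uploaded weights load as an ordinary causal LM through `transformers`; the repo id is a hypothetical placeholder, not something stated on this page.

```python
# Hypothetical usage sketch: the repo id is a placeholder, and we assume the
# checkpoint loads directly as a causal LM with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Agnuxo/placeholder-repo"  # placeholder, replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```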
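One common way to obtain the GLUE_MRPC accuracy/F1 pair reported above is the Hugging Face `evaluate` library's GLUE metric; the sketch below uses placeholder predictions and labels rather than real model outputs.

```python
# Sketch of the GLUE MRPC metric computation; predictions/references are
# placeholders, real values would come from running the model on MRPC.
import evaluate

metric = evaluate.load("glue", "mrpc")   # reports both accuracy and F1
predictions = [1, 0, 1, 1]               # placeholder model predictions
references = [1, 1, 1, 0]                # placeholder gold labels
print(metric.compute(predictions=predictions, references=references))
# -> {'accuracy': ..., 'f1': ...}
```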
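The "Required Memory: 13.87 GB" figure is consistent with storing the 3,722,585,088 parameters in full 32-bit precision (4 bytes each); a quick back-of-the-envelope check:

```python
# Check of the reported memory figure, assuming fp32 weights
# (4 bytes per parameter) and GiB units.
params = 3_722_585_088
bytes_per_param = 4
print(params * bytes_per_param / 1024**3)  # ≈ 13.87
```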