lilloukas Ariel Lee committed on
Commit
761ac41
1 Parent(s): e4ed3fc

Update README.md (#1)


- Update README.md (af93077f3c33aaca18e508ee3632fea629dae08b)


Co-authored-by: Ariel Lee <[email protected]>

Files changed (1)
  1. README.md +90 -1
README.md CHANGED
@@ -1,3 +1,92 @@
  ---
- license: other
+ language:
+ - en
+ tags:
+ - llama
+ license: apache-2.0
+ metrics:
+ - MMLU
+ - ARC
+ - HellaSwag
+ - TruthfulQA
+ - ReClor
  ---
+
+ # 🥳 Platypus30B has arrived!
+
+ | Metric | Value |
+ |-----------------------|-------|
+ | MMLU (5-shot) | 64.2 |
+ | ARC (25-shot) | 76.7 |
+ | HellaSwag (10-shot) | 84.3 |
+ | TruthfulQA (0-shot) | 37.4 |
+ | ReClor (0-shot) | 70 |
+
+ ## Model Description
+
+ Platypus30B is an instruction fine-tuned LLaMA-30B model.
+
+ ## Apply Delta Weights
+
+ ```sh
+ ADD
+ ```
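+
+ Until the official commands land above, here is a minimal sketch of a generic delta-merge, assuming this repo ships element-wise deltas against the original LLaMA-30B weights; all paths and the delta repo id below are placeholders, not confirmed release details:
+
+ ```python
+ # Hypothetical delta-merge sketch: add each delta tensor onto the matching
+ # base LLaMA-30B tensor, then save the merged model. Paths are placeholders.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ BASE_PATH = "/path/to/llama-30b-hf"      # converted base LLaMA-30B weights
+ DELTA_PATH = "lilloukas/Platypus30b"     # assumed location of the delta weights
+ TARGET_PATH = "/path/to/platypus-30b"    # where the merged model is written
+
+ base = AutoModelForCausalLM.from_pretrained(
+     BASE_PATH, torch_dtype=torch.float16, low_cpu_mem_usage=True
+ )
+ delta = AutoModelForCausalLM.from_pretrained(
+     DELTA_PATH, torch_dtype=torch.float16, low_cpu_mem_usage=True
+ )
+
+ delta_state = delta.state_dict()
+ for name, param in base.state_dict().items():
+     # In-place add: state_dict tensors share storage with the model parameters.
+     param.data += delta_state[name]
+
+ base.save_pretrained(TARGET_PATH)
+ AutoTokenizer.from_pretrained(DELTA_PATH).save_pretrained(TARGET_PATH)
+ ```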
+
+ ## Usage
+
+ ```sh
+ ADD
+ ```
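+
+ For reference, a minimal 🤗 Transformers generation sketch, assuming the merged weights from the previous step were saved to a local directory; the path and the Alpaca-style prompt template are assumptions, not confirmed details:
+
+ ```python
+ # Hypothetical usage sketch: load the merged Platypus30B weights and generate.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ MODEL_PATH = "/path/to/platypus-30b"  # placeholder: merged weights directory
+
+ tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
+ model = AutoModelForCausalLM.from_pretrained(
+     MODEL_PATH, torch_dtype=torch.float16, device_map="auto"
+ )
+
+ # Assumed Alpaca-style instruction format; adjust once the template is published.
+ prompt = "### Instruction:\nSummarize what instruction fine-tuning does.\n\n### Response:\n"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```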
+
+ ## Model Details
+
+ * **Trained by**: [Ariel Lee & Cole Hunter, LINK TO WEBSITES]
+ * **Model type**: **Platypus30B** is an auto-regressive language model based on the LLaMA transformer architecture.
+ * **Language(s)**: English
+ * **License for base weights**: The base LLaMA weights are released under Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
+
+ | Hyperparameter | Value |
+ |---------------------------|-------|
+ | \\(n_\text{parameters}\\) | 33B |
+ | \\(d_\text{model}\\) | 6656 |
+ | \\(n_\text{layers}\\) | 60 |
+ | \\(n_\text{heads}\\) | 52 |
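+
+ As a quick sanity check, the architecture numbers above can be read straight off the model config once the weights are merged (the path below is a placeholder):
+
+ ```python
+ # Hypothetical check: the table above maps onto LlamaConfig fields as follows.
+ from transformers import AutoConfig
+
+ config = AutoConfig.from_pretrained("/path/to/platypus-30b")  # placeholder path
+ print(config.hidden_size)          # d_model  -> 6656
+ print(config.num_hidden_layers)    # n_layers -> 60
+ print(config.num_attention_heads)  # n_heads  -> 52
+ ```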
+
+ ## Training
+
+ ### Training Dataset
+
+ Platypus30B was fine-tuned on a highly filtered and curated dataset of question-and-answer pairs. Release TBD.
+
+ ### Training Procedure
+
+ `lilloukas/Platypus30b` was instruction fine-tuned using LoRA [CITE REPO] on 2 A100 80GB GPUs with the following configuration:
+
+ | Hyperparameter | Value |
+ |---------------------|-------|
+ | learning_rate | --- |
+ | batch_size | --- |
+ | microbatch_size | --- |
+ | warmup_steps | --- |
+ | epochs | --- |
+ | weight_decay | --- |
+ | optimizer | --- |
+ | cutoff_len | --- |
+ | lora_target_modules | --- |
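+
+ While the exact values above are still to be filled in, the following is an illustration of what these knobs control, written as a generic LoRA setup with 🤗 PEFT and the Transformers `Trainer`; every number below is an illustrative placeholder, not the released Platypus30B configuration:
+
+ ```python
+ # Illustrative-only LoRA setup; none of these values are the released
+ # Platypus30B hyperparameters, and all paths are placeholders.
+ import torch
+ from transformers import (AutoModelForCausalLM, AutoTokenizer,
+                           Trainer, TrainingArguments)
+ from peft import LoraConfig, get_peft_model
+
+ base = AutoModelForCausalLM.from_pretrained(
+     "/path/to/llama-30b-hf", torch_dtype=torch.float16
+ )
+ tokenizer = AutoTokenizer.from_pretrained("/path/to/llama-30b-hf")
+ # cutoff_len corresponds to the max sequence length used when tokenizing,
+ # e.g. tokenizer(text, truncation=True, max_length=cutoff_len).
+
+ lora_config = LoraConfig(
+     r=16,                                 # LoRA rank (placeholder)
+     lora_alpha=32,                        # LoRA scaling (placeholder)
+     lora_dropout=0.05,                    # dropout on LoRA layers (placeholder)
+     target_modules=["q_proj", "v_proj"],  # lora_target_modules (placeholder)
+     bias="none",
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(base, lora_config)
+
+ args = TrainingArguments(
+     output_dir="platypus-lora",
+     learning_rate=3e-4,              # learning_rate (placeholder)
+     per_device_train_batch_size=4,   # microbatch_size (placeholder)
+     gradient_accumulation_steps=32,  # batch_size / (microbatch_size * n_gpus)
+     warmup_steps=100,                # warmup_steps (placeholder)
+     num_train_epochs=1,              # epochs (placeholder)
+     weight_decay=0.0,                # weight_decay (placeholder)
+     optim="adamw_torch",             # optimizer (placeholder)
+     fp16=True,
+ )
+ # trainer = Trainer(model=model, args=args, train_dataset=...)  # dataset TBD
+ # trainer.train()
+ ```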
+
+
+ ## Limitations and Bias
+
+ The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA [paper](https://arxiv.org/abs/2302.13971). We have not performed any studies to determine how fine-tuning on the aforementioned dataset affects the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.
+
+ ## Citations
+
+ ```bibtex
+ @article{touvron2023llama,
+   title={LLaMA: Open and Efficient Foundation Language Models},
+   author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
+   journal={arXiv preprint arXiv:2302.13971},
+   year={2023}
+ }
+ ```