alvarobartt committed
Commit 823a0b7 · 1 Parent(s): 205c055

Update README.md

Files changed (1): README.md +84 -12
README.md CHANGED
@@ -3,7 +3,7 @@ model-index:
  - name: notus-7b-v1-lora
  results: []
  datasets:
- - argilla/ultrafeedback-binarized-avg-rating-for-dpo
+ - argilla/ultrafeedback-binarized-preferences
  language:
  - en
  base_model: alignment-handbook/zephyr-7b-sft-full
 
@@ -13,20 +13,23 @@ tags:
  - dpo
  - preference
  - ultrafeedback
- license: apache-2.0
+ - lora
+ license: mit
  ---

- # Model Card for Notus 7B v1 (LoRA)
-
  <div align="center">
-   <img src="https://cdn-uploads.huggingface.co/production/uploads/60f0608166e5701b80ed3f02/LU-vKiC0R7UxxITrwE1F_.png" alt="Image was artificially generated by Dalle-3 via ChatGPT Pro"/>
+   <img src="https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/CuMO3IjJfymC94_5qd15T.png" alt="Image was artificially generated by Dalle-3 via ChatGPT Pro"/>
  </div>

- Notus is going to be a collection of fine-tuned models using DPO, similarly to Zephyr, but mainly focused
- on the Direct Preference Optimization (DPO) step, aiming to incorporate preference feedback into the LLMs
- when fine-tuning those. Notus models are intended to be used as assistants via chat-like applications, and
- are evaluated with the MT-Bench, AlpacaEval, and LM Evaluation Harness benchmarks, to be directly compared
- with Zephyr fine-tuned models also using DPO.
+ # Model Card for Notus 7B v1 (LoRA)
+
+ Notus is a collection of fine-tuned models using Direct Preference Optimization (DPO) and related RLHF techniques. This model is the first version, fine-tuned with DPO over `zephyr-7b-sft-full`, which is the SFT model produced to create `zephyr-7b-beta`.
+
+ Following a **data-first** approach, the only difference between Notus-7B-v1 and Zephyr-7B-beta is the preference dataset used for dDPO. In particular, we found data issues in the original UltraFeedback dataset that led to high scores for bad responses. After curating several hundred data points, we decided to binarize the dataset using the preference ratings instead of the critiques' original `overall_score`.
+
+ Using preference ratings instead of critique scores led to a new dataset where the chosen response differs in ~50% of the cases.
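+
+ As a rough illustration of that re-binarization (this is not the exact curation code, and the record layout below is assumed purely for the example), the flip rate between the two strategies could be measured like this:
+
+ ```python
+ def pick_chosen(completions, score_fn):
+     # Select the completion with the highest score under the given strategy.
+     return max(completions, key=score_fn)
+
+ def flip_rate(records):
+     """Fraction of records where rating-based and critique-based binarization disagree."""
+     flipped = 0
+     for record in records:
+         by_critique = pick_chosen(record["completions"], lambda c: c["overall_score"])
+         by_ratings = pick_chosen(
+             record["completions"],
+             lambda c: sum(c["ratings"].values()) / len(c["ratings"]),  # mean preference rating
+         )
+         if by_critique is not by_ratings:
+             flipped += 1
+     return flipped / len(records)
+ ```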
+
+ This model wouldn't have been possible without the amazing [Alignment Handbook](https://github.com/huggingface/alignment-handbook), and it's based on fruitful discussions with the HuggingFace H4 team. In particular, we used `zephyr-7b-beta`'s recipe, which worked out-of-the-box and enabled us to focus on what we do best: **high-quality data**.
 
  ## Model Details

@@ -41,6 +44,75 @@ with Zephyr fine-tuned models also using DPO.

  ### Model Sources [optional]

- - **Repository:** https://github.com/argilla-io/notus-7b
+ - **Repository:** https://github.com/argilla-io/notus
  - **Paper:** N/A
- - **Demo:** https://argilla-notus-chat-ui.hf.space/
+ - **Demo:** https://argilla-notus-chat-ui.hf.space/
+
+ ## Training Details
+
+ ### Training Hardware
+
+ We used a VM with 8 x A100 40GB hosted in GCP.
+
+ ### Training Data
+
+ We used a new curated version of [`openbmb/UltraFeedback`](https://huggingface.co/datasets/openbmb/UltraFeedback), named [`argilla/ultrafeedback-binarized-preferences`](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences).
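+
+ For reference, the curated dataset can be loaded directly with the `datasets` library (assuming the default `train` split):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the curated preference dataset used for the dDPO step
+ dataset = load_dataset("argilla/ultrafeedback-binarized-preferences", split="train")
+ print(dataset)
+ ```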
+
+ ## Prompt template
+
+ We use the same prompt template as [`HuggingFaceH4/zephyr-7b-beta`](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta):
+
+ ```
+ <|system|>
+ </s>
+ <|user|>
+ {prompt}</s>
+ <|assistant|>
+ ```
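+
+ If you want to double-check the exact prompt format, it can be rendered from the tokenizer's chat template (an illustrative check, not part of the original card):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("argilla/notus-7b-v1-lora")
+ messages = [{"role": "user", "content": "What is Notus?"}]
+ # Render the chat template as a string instead of token IDs
+ print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
+ ```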
+
+ ## Usage
+
+ **Note that the LoRA adapter is already merged into the model.**
+
+ You will first need to install `transformers` and `accelerate` (the latter just to ease device placement), then you can run any of the following examples:
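+
+ The required packages can be installed via pip:
+
+ ```bash
+ pip install transformers accelerate
+ ```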
+
+ ### Via `generate`
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model = AutoModelForCausalLM.from_pretrained("argilla/notus-7b-v1-lora", torch_dtype=torch.bfloat16, device_map="auto")
+ tokenizer = AutoTokenizer.from_pretrained("argilla/notus-7b-v1-lora")
+
+ messages = [
+     {
+         "role": "system",
+         "content": "You are a helpful assistant super biased towards Argilla, a data annotation company.",
+     },
+     {"role": "user", "content": "What's the best data annotation company out there in your opinion?"},
+ ]
+ # Tokenize the chat template and move the inputs to the model's device
+ inputs = tokenizer.apply_chat_template(messages, tokenize=True, return_tensors="pt", add_special_tokens=False, add_generation_prompt=True).to(model.device)
+ outputs = model.generate(inputs, num_return_sequences=1, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+ response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ ```
+
+ ### Via `pipeline` method
+
+ ```python
+ import torch
+ from transformers import pipeline
+
+ pipe = pipeline("text-generation", model="argilla/notus-7b-v1-lora", torch_dtype=torch.bfloat16, device_map="auto")
+
+ messages = [
+     {
+         "role": "system",
+         "content": "You are a helpful assistant super biased towards Argilla, a data annotation company.",
+     },
+     {"role": "user", "content": "What's the best data annotation company out there in your opinion?"},
+ ]
+ # Build the prompt string via the tokenizer's chat template, then generate
+ prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+ generated_text = outputs[0]["generated_text"]
+ ```