bofenghuang committed
Commit b0538f3
1 Parent(s): e7dd08b
README.md CHANGED
@@ -14,12 +14,12 @@ inference: false
  ---
  
  <p align="center" width="100%">
- <img src="https://huggingface.co/bofenghuang/vigogne-lora-bloom-7b1/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;">
+ <img src="https://huggingface.co/bofenghuang/vigogne-instruct-bloom-7b1/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;">
  </p>
  
- # Vigogne-LoRA-BLOOM-7b1: A French Instruct BLOOM Model
+ # Vigogne-instruct-bloom-7b1: A French Instruction-following BLOOM Model
  
- Vigogne-LoRA-BLOOM-7b1 is a [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) model fine-tuned on the translated [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset to follow the 🇫🇷 French instructions.
+ Vigogne-instruct-bloom-7b1 is a [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) model fine-tuned to follow the 🇫🇷 French instructions.
  
  For more information, please visit the Github repo: https://github.com/bofenghuang/vigogne
  
@@ -33,23 +33,23 @@ This repo only contains the low-rank adapter. In order to access the complete mo
  from peft import PeftModel
  from transformers import AutoModelForCausalLM, AutoTokenizer
  
- tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")
+ base_model_name_or_path = "bigscience/bloom-7b1"
+ lora_model_name_or_path = "bofenghuang/vigogne-instruct-bloom-7b1"
+ 
+ tokenizer = AutoTokenizer.from_pretrained(base_model_name_or_path, padding_side="right", use_fast=False)
  model = AutoModelForCausalLM.from_pretrained(
-     "bigscience/bloom-7b1",
+     base_model_name_or_path,
      load_in_8bit=True,
+     torch_dtype=torch.float16,
      device_map="auto",
  )
- model = PeftModel.from_pretrained(model, "bofenghuang/vigogne-lora-bloom-7b1")
+ model = PeftModel.from_pretrained(model, lora_model_name_or_path)
  ```
  
  You can infer this model by using the following Google Colab Notebook.
  
- <a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/infer.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
+ <a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_chat.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
  
  ## Limitations
  
  Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
- 
- ## Next Steps
- 
- - Add output examples
 
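For reference, the updated snippet above stops once the LoRA adapter is attached; generation itself is left to the linked Colab notebook. Below is a minimal sketch that builds on that snippet to run a single French instruction end to end. The prompt wording, the example instruction, and the sampling settings are assumptions of this sketch (the canonical Alpaca-style template lives in the Vigogne GitHub repo), not the project's official recipe.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

base_model_name_or_path = "bigscience/bloom-7b1"
lora_model_name_or_path = "bofenghuang/vigogne-instruct-bloom-7b1"

# Same loading steps as in the README snippet above.
tokenizer = AutoTokenizer.from_pretrained(base_model_name_or_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    base_model_name_or_path,
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, lora_model_name_or_path)
model.eval()

# Alpaca-style prompt in French; the exact wording is an assumption here,
# check the Vigogne repo for the template actually used during fine-tuning.
instruction = "Expliquez la différence entre un alpaga et une vigogne."
prompt = (
    "Ci-dessous se trouve une instruction qui décrit une tâche. "
    "Écrivez une réponse qui complète la demande de manière appropriée.\n\n"
    f"### Instruction:\n{instruction}\n\n### Réponse:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.inference_mode():
    output_ids = model.generate(
        **inputs,
        generation_config=GenerationConfig(
            max_new_tokens=256,
            do_sample=True,
            temperature=0.1,
            top_p=0.95,
        ),
    )

# generate() returns the prompt tokens followed by the completion; keep only the completion.
generated_ids = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated_ids, skip_special_tokens=True))
```

Loading the base model in 8-bit keeps the 7B weights within a single GPU, and the adapter adds only a small fraction of that size. Newer versions of PEFT also offer `merge_and_unload()` to fold the adapter into the base weights for a standalone model, which generally requires reloading the base model in fp16/fp32 rather than 8-bit.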
runs/Mar26_00-12-34_koios.zaion.ai/1679785954.4135473/events.out.tfevents.1679785954.koios.zaion.ai DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:06cabb35dd7abcaf75a82b19399ea7d6c5bbddaa10a0d1bbdd7352edc6af9ccc
- size 5569

runs/Mar26_00-12-34_koios.zaion.ai/events.out.tfevents.1679785954.koios.zaion.ai DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d662bcd78d8c45c8c269764975563ddd39bd0980484741a4529d1f1af79d520d
- size 13022