andreabac3 committed
Commit 2c91a87 · 1 Parent(s): 6ddd682

Update README.md

Files changed (1): README.md (+4, -4)
README.md CHANGED
@@ -23,7 +23,7 @@ Hence, our model is able to answer to your questions in Italian 🙋, fix your b
 
 ## The 🇮🇹 open-source version of chatGPT!
 Discover the capabilities of Fauno and experience the evolution of Italian language models for yourself.
-![demo](screenshot_demo.png)
+![demo](screenshot_demo_fix.png)
 
 ### Why Fauno?
 We started with a model called Baize, named after a legendary creature from Chinese literature. Continuing along this thematic line, we developed our Italian model based on Baize and named it Fauno, inspired by an iconic figure from Roman mythology. This choice underlines the link between the two models, while maintaining a distinctive identity rooted in Italian culture.
@@ -40,13 +40,13 @@ Fauno 13B and 30B are coming soon!
 from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
 from peft import PeftModel
 
-tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
+tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")
 model = LlamaForCausalLM.from_pretrained(
-    "decapoda-research/llama-7b-hf",
+    "decapoda-research/llama-13b-hf",
     load_in_8bit=True,
     device_map="auto",
 )
-model = PeftModel.from_pretrained(model, "andreabac3/Fauno-Italian-LLM-7B")
+model = PeftModel.from_pretrained(model, "andreabac3/Fauno-Italian-LLM-13B")
 model.eval()
 ```
 
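For context on the snippet this commit updates: the README code only loads the tokenizer, the 8-bit base model, and the Fauno LoRA adapter. Below is a minimal, illustrative sketch of how the loaded model might then be used to generate a reply. The Baize-style `[|Human|]` / `[|AI|]` prompt template and the sampling settings are assumptions for illustration; they are not specified in this commit.

```python
# Illustrative sketch (not part of this commit): generate with the tokenizer and
# model objects created by the README snippet above.
import torch

# Assumed Baize-style conversational prompt; the exact template is an assumption.
prompt = (
    "The conversation between human and AI assistant.\n"
    "[|Human|] Ciao, puoi presentarti?\n"
    "[|AI|] "
)

# Tokenize and move the inputs to the model's device (device_map="auto" above).
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        generation_config=GenerationConfig(
            max_new_tokens=256,   # illustrative values, not taken from the README
            do_sample=True,
            temperature=0.7,
        ),
    )

# Decode only the newly generated tokens after the prompt.
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```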