![Layer Similarity Plot](https://cdn-uploads.huggingface.co/production/uploads/660c0a02cf274b3ab77dd6b7/47CiSRvJpmKGGfF-eUY6U.png)

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "ssmits/Falcon2-5.5B-multilingual"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
)
sequences = pipeline(
    "Can you explain the concepts of Quantum Computing?",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).

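Since an older PyTorch install typically fails only deep inside model loading, it can help to verify the version requirement up front. This is a minimal sketch; the helper `meets_pytorch_requirement` is illustrative and not part of any library — in practice you would pass it `torch.__version__`:

```python
def meets_pytorch_requirement(version: str, minimum: tuple = (2, 0)) -> bool:
    """Return True if a version string like '2.1.0+cu118' meets (major, minor)."""
    numeric = version.split("+")[0]  # drop a local build suffix such as '+cu118'
    parts = tuple(int(p) for p in numeric.split(".")[:2])
    return parts >= minimum

# e.g. guard before constructing the pipeline:
print(meets_pytorch_requirement("2.1.0+cu118"))  # True
print(meets_pytorch_requirement("1.13.1"))       # False
```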
## Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)