erfanvaredi committed on
Commit e09d39e · verified · 1 Parent(s): 183f46c

Update README.md

Files changed (1)
  1. README.md +51 -1
README.md CHANGED
@@ -25,4 +25,54 @@ This model is the double quantized version of `jais-13b-chat` by core42. The aim
  Just run it as a text-generation pipeline task.
 
  # System Requirements:
- It successfully has been tested on Google Colab Pro `T4` instance.
+ It has been successfully tested on a Google Colab Pro `T4` instance.
+
+ # How To Run:
+ 1. First, install the required libraries:
+ ```sh
+ pip install -Uq huggingface_hub transformers bitsandbytes xformers accelerate
+ ```
+
+ 2. Create the pipeline:
+ ```py
+ from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, TextStreamer
+
+ tokenizer = AutoTokenizer.from_pretrained("erfanvaredi/jais-7b-chat")
+ model = AutoModelForCausalLM.from_pretrained(
+     "erfanvaredi/jais-7b-chat",
+     trust_remote_code=True,  # Jais uses custom modeling code from the Hub
+     device_map='auto',       # place weights on the available device(s)
+ )
+
+ # Build a text-generation pipeline from the loaded model and tokenizer
+ pipe = pipeline(model=model, tokenizer=tokenizer, task='text-generation')
+ ```
+
+ 3. Create the prompt:
+ ```py
+ chat = [
+     {"role": "user", "content": 'Tell me a funny joke about Large Language Models.'},
+ ]
+ # Render the chat messages into a single prompt string using the model's chat template
+ prompt = pipe.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
+ ```
+
+ 4. Create a streamer (optional: use it if you want the generated text streamed as it is produced; otherwise skip this step):
+ ```py
+ streamer = TextStreamer(
+     tokenizer,
+     skip_prompt=True,          # do not echo the prompt back
+     skip_special_tokens=True,  # drop special tokens (e.g. EOS) from the stream
+ )
+ ```
+
+ 5. Ask the model:
+ ```py
+ pipe(
+     prompt,
+     streamer=streamer,
+     max_new_tokens=256,
+     do_sample=False,  # greedy decoding; a `temperature` setting only applies when sampling is enabled
+ )
+ ```
+
+ :)