benchang1110 committed
Commit 2cddcd7
1 Parent(s): f322077

Update README.md

Files changed (1)
  1. README.md +45 -3
README.md CHANGED
@@ -1,3 +1,45 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ datasets:
+ - benchang1110/ChatTaiwan
+ language:
+ - zh
+ pipeline_tag: text-generation
+ widget:
+ - example_title: 範例一
+   messages:
+   - role: user
+     content: >-
+       你好
+ 
+ ---
+ ## Model Card for Taiwan-tinyllama-v1.0-chat
+ 
+ This model is the instruction-finetuned version of [benchang1110/Taiwan-tinyllama-v1.0-base](https://huggingface.co/benchang1110/Taiwan-tinyllama-v1.0-base).
+ ## Usage
+ ```python
+ import torch
+ import transformers
+ 
+ def generate_response(device: str) -> None:
+     # Load the chat model; flash_attention_2 requires a GPU that supports FlashAttention 2.
+     model = transformers.AutoModelForCausalLM.from_pretrained(
+         "benchang1110/Taiwan-tinyllama-v1.0-chat",
+         torch_dtype=torch.bfloat16,
+         device_map=device,
+         attn_implementation="flash_attention_2",
+     )
+     tokenizer = transformers.AutoTokenizer.from_pretrained("benchang1110/Taiwan-tinyllama-v1.0-chat")
+     # Stream generated tokens to stdout as they are produced, skipping the echoed prompt.
+     streamer = transformers.TextStreamer(tokenizer, skip_prompt=True)
+     while True:
+         prompt = input("USER: ")
+         if prompt == "exit":
+             break
+         print("Assistant: ")
+         message = [{"role": "user", "content": prompt}]
+         # Render the turn with the model's chat template, then tokenize it.
+         untokenized_chat = tokenizer.apply_chat_template(message, tokenize=False, add_generation_prompt=False)
+         inputs = tokenizer.encode_plus(untokenized_chat, add_special_tokens=True, return_tensors="pt", return_attention_mask=True).to(device)
+         model.generate(
+             inputs["input_ids"],
+             attention_mask=inputs["attention_mask"],
+             streamer=streamer,
+             use_cache=True,
+             max_new_tokens=512,
+             do_sample=True,
+             temperature=0.1,
+             repetition_penalty=1.2,
+         )
+ 
+ if __name__ == "__main__":
+     device = "cuda" if torch.cuda.is_available() else "cpu"
+     generate_response(device)
+ ```
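
For a quick non-interactive check, the same model can also be driven through the `transformers` text-generation pipeline. This is a minimal sketch, assuming a recent `transformers` release whose pipelines accept chat-style message lists; the prompt string is only an illustration, not from the model card.

```python
import torch
import transformers

# Assumption: a transformers version where text-generation pipelines accept chat messages.
pipe = transformers.pipeline(
    "text-generation",
    model="benchang1110/Taiwan-tinyllama-v1.0-chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Same message format the chat template expects; "你好" is just an example prompt.
messages = [{"role": "user", "content": "你好"}]
out = pipe(messages, max_new_tokens=128, do_sample=True, temperature=0.1)
print(out[0]["generated_text"])
```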