Commit 0c31c7e by ArthurZ (HF staff)
1 parent: 0d17c19

Update README.md

Files changed (1)
  1. README.md +8 -7
README.md CHANGED
@@ -27,15 +27,16 @@ If any of these two is not installed, the "eager" implementation will be used. O
  ## Generation
  You can use the classic `generate` API:
  ```python
- from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
- import torch
+ >>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
+ >>> import torch
 
- tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-370m-hf")
- model = MambaForCausalLM.from_pretrained("state-spaces/mamba-370m-hf")
- input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
+ >>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-370m-hf")
+ >>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-370m-hf")
+ >>> input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
 
- out = model.generate(input_ids, max_new_tokens=10)
- print(tokenizer.batch_decode(out))
+ >>> out = model.generate(input_ids, max_new_tokens=10)
+ >>> print(tokenizer.batch_decode(out))
+ ["Hey how are you doing?\n\nI'm doing great.\n\nI"]
  ```
 
  ## PEFT finetuning example
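A practical effect of switching the snippet to `>>> ` prompts with an expected-output line is that it becomes checkable with Python's standard `doctest` machinery, which is how `transformers` validates many of its documentation examples. A minimal sketch of that check, using a toy snippet in the same format (a trivial string expression stands in for the Mamba example so nothing needs to be downloaded):

```python
import doctest

# A toy snippet written in the same ">>> " prompt + expected-output format
# the commit adopts for the README. The Mamba snippet follows the identical
# pattern; doctest would compare the printed generation against the recorded
# output line.
snippet = """
>>> tokens = "Hey how are you doing?".split()
>>> len(tokens)
5
"""

# Parse the snippet into a DocTest and run it; failed == 0 means every
# expected-output line matched what the code actually printed.
parser = doctest.DocTestParser()
test = parser.get_doctest(snippet, {}, "readme_snippet", "README.md", 0)
runner = doctest.DocTestRunner(verbose=False)
results = runner.run(test)
print(results.failed, results.attempted)
```

This is why the new version also records the sample generation (`["Hey how are you doing?\n\n..."]`) as a bare line after the `print` call: doctest treats it as the expected stdout of the preceding statement.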