vaishaal committed on
Commit 099daf7 · verified · Parent: 3959b33

Update README.md

Files changed (1): README.md +25 -0
README.md CHANGED
@@ -34,6 +34,31 @@ DCLM-Baseline-7B is a 7 billion parameter language model trained on the DCLM-Bas
  - **Paper:** [DataComp-LM: In search of the next generation of training sets for language models](https://arxiv.org/abs/2406.11794)
 
 
+ ## Using the Model
+
+ First, install open_lm:
+
+ ```
+ pip install git+https://github.com/mlfoundations/open_lm.git
+ ```
+
+ Then:
+ ```python
+ # Registers open_lm's model classes so transformers' Auto* loaders can resolve them
+ from open_lm.hf import *
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("apple/DCLM-Baseline-7B")
+ model = AutoModelForCausalLM.from_pretrained("apple/DCLM-Baseline-7B")
+
+ # Tokenize a prompt and sample a 50-token continuation
+ inputs = tokenizer(["Machine learning is"], return_tensors="pt")
+ gen_kwargs = {"max_new_tokens": 50, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
+ output = model.generate(inputs["input_ids"], **gen_kwargs)
+ output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
+ print(output)
+ ```
+
  ### Training Details
 
  The model was trained using the following setup: