MoodChartAI committed (verified)
Commit 4e1eab3 · Parent(s): 41d7246

Update README.md

Files changed (1): README.md (+48 -0)
README.md CHANGED
@@ -33,6 +33,54 @@ base_model: EleutherAI/gpt-neo-1.3B
  - **Paper [optional]:** [More Information Needed]
  - **Demo [optional]:** [More Information Needed]
 
+
+ ```python
+ import os
+ import gc
+
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ # Free any cached memory before loading the model
+ gc.collect()
+ torch.cuda.empty_cache()
+
+ # Reset swap to reclaim memory (requires sudo)
+ os.system("sudo swapoff -a; swapon -a")
+
+ model_name = "MoodChartAI/basicmood"
+ adapters_name = ""  # path to the PEFT adapters (left unspecified here)
+
+ print(f"Starting to load the model {model_name} into memory")
+
+ m = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     # load_in_4bit=True,
+ ).to(device="cpu")
+
+ print(f"Loading the adapters from {adapters_name}")
+ m = PeftModel.from_pretrained(m, adapters_name)
+
+ tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B", trust_remote_code=True)
+
+ while True:
+     mood_input = input("Mood: ")
+
+     inputs = tokenizer(
+         "Prompt: %s Completions: You're feeling" % mood_input,
+         return_tensors="pt",
+         return_attention_mask=True,
+     )
+     inputs = inputs.to("cpu")
+     outputs = m.generate(**inputs, max_length=12)
+
+     print(tokenizer.batch_decode(outputs)[0])
+ ```
  ## Uses
 
  <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
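The generation loop in the snippet above builds its input from a fixed prompt template. As a minimal illustration, that formatting step can be factored into a small helper; the name `build_prompt` is hypothetical and not part of the model card:

```python
def build_prompt(mood_input: str) -> str:
    """Format a user's mood into the prompt template used above."""
    # Same "Prompt: ... Completions: You're feeling" template as the loop
    return "Prompt: %s Completions: You're feeling" % mood_input

print(build_prompt("happy"))
# Prompt: happy Completions: You're feeling
```

The model then completes the sentence after "You're feeling", so the template does the work of steering a general causal LM toward a mood classification.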