dmayhem93 committed
Commit 6260eee
1 Parent(s): deca23b

Update README.md

Files changed (1)
  1. README.md +11 -6
README.md CHANGED
@@ -13,7 +13,7 @@ pipeline_tag: text-generation
 
 ## Model Description
 
- `FreeWilly` is a Llama65B model finetuned on an Orca style Dataset
+ `FreeWilly` is a Llama65B model fine-tuned on an Orca style Dataset
 
 ## Usage
 
@@ -23,7 +23,7 @@ FreeWilly1 cannot be used from the `stabilityai/FreeWilly1-Delta-SafeTensor` wei
 
 
 ```sh
- python3 apply_delta.py --base /path/to/model_weights/llama-65b --target FreeWilly1 --delta stabilityai/FreeWilly1-Delta-SafeTensor
+ python3 apply_delta.py --base-model-path /path/to/model_weights/llama-65b --target-model-path FreeWilly1 --delta-path stabilityai/FreeWilly1-Delta-SafeTensor
 ```
 
 
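For readers coming from the base weights: the command above merges Stability AI's published delta against the original LLaMA-65B checkpoint to produce usable FreeWilly1 weights. The official `apply_delta.py` is not reproduced in this commit, so the following is only a rough sketch of what an additive delta merge typically does; the paths and the assumption that the delta is a per-tensor difference are illustrative, not taken from the model card.

```python
# Rough, hypothetical sketch of an additive delta merge -- NOT the official apply_delta.py.
# Assumes the delta checkpoint stores per-tensor (fine-tuned minus base) differences
# and that there is enough CPU RAM to hold both 65B checkpoints at once.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "/path/to/model_weights/llama-65b", torch_dtype=torch.float16, low_cpu_mem_usage=True
)
delta = AutoModelForCausalLM.from_pretrained(
    "stabilityai/FreeWilly1-Delta-SafeTensor", torch_dtype=torch.float16, low_cpu_mem_usage=True
)

delta_state = delta.state_dict()
for name, tensor in base.state_dict().items():
    tensor += delta_state[name]  # reconstruct the fine-tuned weight in place

base.save_pretrained("FreeWilly1")
AutoTokenizer.from_pretrained(
    "stabilityai/FreeWilly1-Delta-SafeTensor", use_fast=False
).save_pretrained("FreeWilly1")
```

Whatever script is used, the merged `FreeWilly1` directory is what the Python snippet in the next hunk loads.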
@@ -32,19 +32,21 @@ Start chatting with `FreeWilly` using the following code snippet:
 
 ```python
 import torch
- from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+ from transformers import AutoModelForCausalLM, AutoTokenizer
 
 tokenizer = AutoTokenizer.from_pretrained("your_path_to_freewilly", use_fast=False)
 model = AutoModelForCausalLM.from_pretrained("your_path_to_freewilly", torch_dtype=torch.float16, low_cpu_mem_usage=True, use_accelerate=True)
- generator = pipeline(model=model, tokenizer=tokenizer)
+
 system_prompt = "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n"
 system_prompt += "### Instruction:\nYou are Free Willy, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"
 
 message = "Write me a poem please"
 prompt = f"{system_prompt}### Input: {message}\n\n### Response:\n"
 
- output = generator(prompt, num_return_sequences=1, do_sample=True, top_p=0.95, top_k=0)
- print(output)
+ inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
+ output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
+
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
 ```
 
 FreeWilly should be used with prompts formatted similarly to Alpaca as below:
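The prompt template referenced by "as below:" lies outside the hunks shown in this diff. Judging from the f-string in the snippet above, the assembled prompt presumably looks roughly like the following; the angle-bracket placeholders are illustrative, not from the model card:

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
<system instruction, e.g. the Free Willy persona used above>

### Input: <user message>

### Response:
<model completion>
```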
@@ -93,6 +95,8 @@ These models are intended for research only, in adherence with the [CC BY-NC-4.0
 
 Although the aforementioned dataset helps to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use it responsibly.
 
+
+
 ## Citations
 
 ```bibtext
@@ -127,3 +131,4 @@ Although the aforementioned dataset helps to steer the base language models into
 howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
 }
 ```
+
 
 
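A final note on the generation snippet above: `use_accelerate=True` is not a documented `from_pretrained` argument in `transformers`, and the inputs are moved to `"cuda"` while the model itself is never explicitly placed on a GPU. A more conventional loading path, assuming the `accelerate` package is installed, is sketched below; this is a hedged suggestion, not part of the model card.

```python
# Sketch (assumption): load the merged FreeWilly1 weights with accelerate's
# automatic device placement instead of the non-standard use_accelerate flag.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your_path_to_freewilly", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "your_path_to_freewilly",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",  # requires `pip install accelerate`
)

prompt = "..."  # assembled exactly as in the snippet above
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```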