Commit dae1348 by 11b (0 parents)

release v1
.gitattributes ADDED
@@ -0,0 +1,3 @@
*.bin filter=lfs diff=lfs merge=lfs -text
*.json filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,109 @@
---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Metharme 7B</h1>
<h2 style="text-align: center">An instruction-tuned LLaMA biased towards fiction writing and conversation.</h2>

## Model Details

Metharme 7B is an instruct model based on Meta's LLaMA-7B.

This is an experiment to produce a model that is usable for conversation, roleplaying and storywriting, but which can be guided using natural language like other instruct models. See the [prompting](#prompting) section below for examples.

It was trained with supervised fine-tuning over a mixture of regular instruction data and roleplay, fictional stories, and conversations with synthetically generated instructions attached.

## Applying the XORs

The model weights in this repository cannot be used as-is. The files here are XORs due to licensing concerns. To obtain proper, usable model weights you need to:

- Request access to the original LLaMA weights from Meta [through this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form)
- Convert them to the HuggingFace Transformers format using the [convert_llama_weights_to_hf.py](https://github.com/huggingface/transformers/blob/849367ccf741d8c58aa88ccfe1d52d8636eaf2b7/src/transformers/models/llama/convert_llama_weights_to_hf.py) script **for your version of the `transformers` library**
- With the LLaMA-7B weights in hand, use the [xor_codec.py](./xor_codec.py) script provided in this repository:

```bash
python3 xor_codec.py \
    ./metharme-7b \
    ./xor_encoded_files \
    /path/to/hf-converted/llama-7b \
    --decode
```
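
Equivalently, the decode step can be driven from Python by importing the script shipped in this repository (a minimal sketch; the paths are the same placeholders as in the command above):

```python
from xor_codec import xor_dir

# Same operation as the CLI invocation above: XOR every payload file in
# ./xor_encoded_files against the same-named file from your converted
# LLaMA-7B, with no gzip compression involved.
xor_dir(
    "./metharme-7b",                   # destination for the usable weights
    "./xor_encoded_files",             # XOR payloads shipped in this repository
    "/path/to/hf-converted/llama-7b",  # your HF-converted LLaMA-7B
    decode=True,
    compress=False,
)
```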

**Note for Windows users:** following the steps above on Windows can result in corrupted files. This seems to be because `git` messes with the encoding of text files (such as the `.json`s and other relevant files). To avoid this, use WSL. For reference, these are the MD5 hashes you should get after following the steps above:

```bash
$ rhash -M *
4ad1977f59098f1b9dd022684f70e3d8 config.json
2917a1cafb895cf57e746cfd7696bfe5 generation_config.json
bbb197ca1ddcf6860676e837fcc83a2c pytorch_model-00001-of-00002.bin
b174598f6c9587787156c473366c9903 pytorch_model-00002-of-00002.bin
81648ef3915ed2e83d49fed93122d53e pytorch_model.bin.index.json
6b2e0a735969660e720c27061ef3f3d3 special_tokens_map.json
fdb311c39b8659a5d5c1991339bafc09 tokenizer.json
eeec4125e9c7560836b4873b6f8e3025 tokenizer.model
f0b65b44265ba51881b1e1881102504f tokenizer_config.json
```
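
If `rhash` isn't available, the same check can be done with a few lines of Python (a minimal sketch, assuming the decoded files live in `./metharme-7b`; it streams files in chunks so the multi-GB shards don't need to fit in memory):

```python
import hashlib
from pathlib import Path

def md5sum(path, block_size=1 << 20):
    """Compute an MD5 digest by streaming the file in 1 MiB blocks."""
    digest = hashlib.md5()
    with open(path, "rb") as fp:
        for chunk in iter(lambda: fp.read(block_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for path in sorted(Path("./metharme-7b").iterdir()):
    if path.is_file():
        print(md5sum(path), path.name)
```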

## Prompting

The current model version has been trained on prompts using three different roles, denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.

The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained to form a conversation history.
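
Putting the tokens together in code looks something like this (a minimal sketch using `transformers`, assuming the decoded weights live in `./metharme-7b`; the prompt text and sampling settings are illustrative, not part of the model card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "./metharme-7b"  # path to the decoded weights (assumption)

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR)

# One system turn and one user turn, then cue the model to respond.
prompt = (
    "<|system|>You are a narrator for interactive fiction."
    "<|user|>Describe a quiet harbor town at dusk."
    "<|model|>"
)

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)

# Decode only the newly generated tokens, i.e. the model's reply.
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```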

### Prompting example

Here's a concrete example. Let's say you have the following system prompt:

> This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.

And the user has typed in the following message:

> Start!

To perform inference against the model, the prompt should be formatted like this:

```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>
```

The model might generate something like:

> You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
>
> - Go inside the house.
> - Ignore the call and move away.
> - Run as fast as you can.

It will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.

Suppose the user then replies with `go inside`. The next prompt should be:

```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'

- Go inside the house.
- Ignore the call and move away.
- Run as fast as you can.<|user|>go inside<|model|>
```

This might generate something like:

> You enter the house, and immediately, a wave of unsettling energy washes over you. Your senses seem heightened, but your mind feels foggy. A figure appears before you - a tall man with piercing eyes. He speaks to you in a language you don't understand.
>
> - Ask him what he wants.
> - Attack him.
> - Run away.

The same process applies for subsequent turns. Usually, it is best to slide a window over the user and model turns while keeping the system prompt fixed at the start of the context window, as in the sketch below.
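
A minimal sketch of that windowing logic (the helper and turn limit are illustrative; in practice you would trim by token count rather than by number of turns):

```python
def build_prompt(system_prompt, turns, max_turns=8):
    """Assemble a Metharme prompt: the system prompt stays fixed at the
    start, and only the most recent user/model turns are kept.

    `turns` is a list of (role, text) tuples, where role is "user" or "model".
    """
    prompt = "<|system|>" + system_prompt
    for role, text in turns[-max_turns:]:
        prompt += "<|%s|>%s" % (role, text)
    return prompt + "<|model|>"  # cue the model to generate its reply
```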

## Limitations and biases

The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.

As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs may often be factually wrong or misleading.
xor_codec.py ADDED
@@ -0,0 +1,85 @@
'''
Adapted from OpenAssistant's original xor_codec.py:
https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor/raw/main/xor_codec.py
'''
import os
import sys
import gzip
import numpy
from pathlib import Path

def xor_uncompressed(dst, src_payload, src_base, block_size=4096):
    # XOR src_payload against src_base block by block, with no
    # compression on either side, writing the result to dst.
    fp_payload = open(src_payload, 'rb')
    fp_base = open(src_base, 'rb')
    with open(dst, 'wb') as fp:
        while True:
            buf1 = numpy.array(bytearray(fp_payload.read(block_size)), dtype=numpy.uint8)
            buf2 = numpy.array(bytearray(fp_base.read(block_size)), dtype=numpy.uint8)
            # Pad (or truncate) the base block to the payload block's length.
            padding = len(buf1) - len(buf2)
            if padding > 0: buf2 = numpy.pad(buf2, (0, padding), 'constant', constant_values=(0,))
            if padding < 0: buf2 = buf2[:len(buf1)]
            buf = numpy.bitwise_xor(buf1, buf2)
            fp.write(buf)
            if len(buf1) < block_size: break
    fp_payload.close()
    fp_base.close()

def xor_encode(dst, src_payload, src_base, block_size=4096):
    # Same XOR loop, but gzip-compress the output stream.
    fp_payload = open(src_payload, 'rb')
    fp_base = open(src_base, 'rb')
    with gzip.open(dst, 'wb') as fp:
        while True:
            buf1 = numpy.array(bytearray(fp_payload.read(block_size)), dtype=numpy.uint8)
            buf2 = numpy.array(bytearray(fp_base.read(block_size)), dtype=numpy.uint8)
            padding = len(buf1) - len(buf2)
            if padding > 0: buf2 = numpy.pad(buf2, (0, padding), 'constant', constant_values=(0,))
            if padding < 0: buf2 = buf2[:len(buf1)]
            buf = numpy.bitwise_xor(buf1, buf2)
            fp.write(buf)
            if len(buf1) < block_size: break
    fp_payload.close()
    fp_base.close()

def xor_decode(dst, src_payload, src_base, block_size=4096):
    # Same XOR loop, but gzip-decompress the payload stream.
    fp_payload = gzip.open(src_payload, 'rb')
    fp_base = open(src_base, 'rb')
    with open(dst, 'wb') as fp:
        while True:
            buf1 = numpy.array(bytearray(fp_payload.read(block_size)), dtype=numpy.uint8)
            buf2 = numpy.array(bytearray(fp_base.read(block_size)), dtype=numpy.uint8)
            padding = len(buf1) - len(buf2)
            if padding > 0: buf2 = numpy.pad(buf2, (0, padding), 'constant', constant_values=(0,))
            if padding < 0: buf2 = buf2[:len(buf1)]
            buf = numpy.bitwise_xor(buf1, buf2)
            fp.write(buf)
            if len(buf1) < block_size: break
    fp_payload.close()
    fp_base.close()

def xor_dir(dst, src_payload, src_base, decode=True, compress=True):
    # Apply the appropriate codec to every file in src_payload, pairing
    # it with the same-named file in src_base.
    if compress:
        xor = xor_decode if decode else xor_encode
    else:
        xor = xor_uncompressed
    Path(dst).mkdir(parents=True, exist_ok=True)
    for path in os.listdir(src_payload):
        print("[*] Processing '%s'" % path)
        try:
            xor("%s/%s" % (dst, path), "%s/%s" % (src_payload, path), "%s/%s" % (src_base, path))
        except Exception as e:
            print("Exception when processing '%s': %s" % (path, e))

if __name__ == "__main__":
    if len(sys.argv) < 4:
        print("Usage: xor_codec.py <DESTINATION> <PAYLOAD SOURCE> <LLAMA SOURCE> [--encode] [--compress]")
        exit()
    dst = sys.argv[1]
    src_payload = sys.argv[2]
    src_base = sys.argv[3]
    decode = True
    compress = False
    if len(sys.argv) > 4:
        for arg in sys.argv[4:]:
            if arg == "--encode": decode = False
            if arg == "--compress": compress = True
    xor_dir(dst, src_payload, src_base, decode=decode, compress=compress)
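
For reference, the encode direction (how the payloads in `xor_encoded_files/` would have been produced) uses the same entry point with `decode=False`; since XOR is its own inverse, the uncompressed variant is the identical operation with payload and destination swapped (a sketch, not a documented workflow):

```python
from xor_codec import xor_dir

# Produce distributable XOR payloads from the real fine-tuned weights
# plus the base model weights.
xor_dir(
    "./xor_encoded_files",             # destination for the XOR payloads
    "./metharme-7b",                   # real fine-tuned weights
    "/path/to/hf-converted/llama-7b",  # base LLaMA-7B weights
    decode=False,
    compress=False,
)
```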
xor_encoded_files/config.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8de7d07d44fbec38f39eb02bd6218e3d1f554b736a5fb45f4256bdcd2da1fe6f
size 597
xor_encoded_files/generation_config.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4a93894f08d98d707cd9a0274f4c9a51bcfa27e701359e12befcc78ffb488817
size 137
xor_encoded_files/pytorch_model-00001-of-00002.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:32da6d28e9f328b352c2463ae4c1c53c66959471295c9dd597e9c98de1767d6b
size 9976642558
xor_encoded_files/pytorch_model-00002-of-00002.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:711d3b12ff7a9252ddd9b03c2dbcae4c53601af6358399299c2a1549c841ca7d
size 3500318291
xor_encoded_files/pytorch_model.bin.index.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4be5669c4eb805f9afb7648438733b6ff1a3fa0d988b4165cf353929c2b89d4f
size 26788
xor_encoded_files/special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5f7bcd85900e62abb00ce739eaad53d80170a4a6152d951b6825110d2fc17965
size 411
xor_encoded_files/tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:721b8a5e36fc955713c7b5705281b2144e3505ee46a6194ebb39299d941094a9
size 1842665
xor_encoded_files/tokenizer.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:68147850c080987172d24ad27a9ba2c65c71b46e248e8ee0f0c4eda90e2ca558
size 499723
xor_encoded_files/tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8cf2eeac1040c1965ce9f8333c2e763b4aba5366d3b3f3367807741325304dfb
size 831