Commit ff1a81c by TheBloke · 1 Parent(s): 0588255

Update README.md
---
license: cc-by-nc-4.0
datasets:
- tatsu-lab/alpaca
library_name: transformers
pipeline_tag: text-generation
tags:
- galactica
- alpaca
- opt
- gptq
inference: false
---
 
## About this repository

This is an attempt to create a GPTQ 4-bit version of [Galpaca 30B](https://huggingface.co/GeorgiaTechResearchInstitute/galpaca-30b).

I created these files on request. I have no previous experience with Galactica or Galpaca, and have not done much testing to confirm that the output is useful and usable.

You will need 18+ GB of VRAM to load these models on a GPU.
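As a back-of-the-envelope check on that figure, the packed 4-bit weights of a 30B-parameter model alone come to roughly 14 GiB; this is rough arithmetic, not a measurement:

```python
# rough estimate: weight storage for a 4-bit quantized 30B-parameter model
params = 30e9
bits_per_param = 4
weight_gib = params * bits_per_param / 8 / 1024**3
print(round(weight_gib, 1))  # ~14.0 GiB for the packed weights alone
```

Group-size-128 quantization metadata (scales and zero points) plus runtime activations account for the remainder of the 18+ GB figure.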
## Provided files

* `galpaca-30B-4bit-128g.no-act-order.pt`
  * Created with: `python3 opt.py /content/galpaca-30b c4 --wbits 4 --new-eval --groupsize 128 --save galpaca-30B-4bit-128g.no-act-order.pt`
  * This file seems to produce usable results, tested with [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

* `galpaca-30B-4bit-128g.pt`
  * Created with: `python3 opt.py /content/galpaca-30b c4 --wbits 4 --new-eval --act-order --groupsize 128 --save galpaca-30B-4bit-128g.pt`
  * In my testing so far, **this file does not work**. It produces garbage output.
  * If you can get it working, please let me know!
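To load one of these files for evaluation, GPTQ-for-LLaMa's `opt.py` has historically accepted a `--load` flag mirroring the `--save` flag used above. The following is a sketch, not a verified command; the flag names are assumptions based on that repository's README conventions, and the repo changes frequently, so check `python3 opt.py --help` in your checkout first:

```shell
# assumed flags, mirroring the quantization commands above;
# verify against your GPTQ-for-LLaMa checkout before running
python3 opt.py /content/galpaca-30b c4 \
    --wbits 4 --groupsize 128 \
    --load galpaca-30B-4bit-128g.no-act-order.pt \
    --benchmark 128
```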
## GPTQ

The GPTQ code used to create these models can be found at [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

Note that during the GPTQ process, the following warning was seen:

`Token indices sequence length is longer than the specified maximum sequence length for this model (1915 > 512). Running this sequence through the model will result in indexing errors`

I do not know whether this indicates a potential problem in the GPTQ output, or whether it can be ignored. If you know more about this, please let me know.
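For context, the warning means a calibration text was tokenized to more tokens (1915) than the tokenizer's declared maximum (512). A minimal sketch of the usual workaround, splitting an over-long sequence into max-length chunks, assuming plain Python lists of token ids (this is illustrative; it is not necessarily what the GPTQ code does internally):

```python
# sketch: split an over-long token-id sequence into chunks no longer than
# the model's declared maximum, as calibration loaders commonly do
MAX_LEN = 512

def chunk(ids, max_len=MAX_LEN):
    return [ids[i:i + max_len] for i in range(0, len(ids), max_len)]

token_ids = list(range(1915))  # same length as the sequence in the warning
pieces = chunk(token_ids)
print([len(p) for p in pieces])  # [512, 512, 512, 379]
```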

General model info below is as per the original [Galpaca 30B repository](https://huggingface.co/GeorgiaTechResearchInstitute/galpaca-30b).

For more information, example prompts and more, please see the original repository.

# GALPACA 30B (large)

GALACTICA 30B fine-tuned on the Alpaca dataset.

The model card from the original Galactica repo can be found [here](https://github.com/paperswithcode/galai/blob/main/docs/model_card.md), and the original paper [here](https://galactica.org/paper.pdf).

The dataset card for Alpaca can be found [here](https://huggingface.co/datasets/tatsu-lab/alpaca/blob/main/README.md), and the project homepage [here](https://crfm.stanford.edu/2023/03/13/alpaca.html). The Alpaca dataset was collected with a modified version of the [Self-Instruct Framework](https://github.com/yizhongw/self-instruct), and was built using OpenAI's `text-davinci-003` model. As such, it is subject to OpenAI's terms of service.

## Model Details

The GALACTICA models are trained on a large-scale scientific corpus and are designed to perform scientific tasks. The Alpaca dataset is a set of 52k instruction-response pairs designed to enhance the instruction-following capabilities of pre-trained language models.

## Model Use

The GALACTICA model card specifies that the primary intended users of the GALACTICA models are researchers studying language models applied to the scientific domain, and it cautions against production use of GALACTICA without safeguards due to the potential for the model to produce inaccurate information.

The original GALACTICA models are available under a non-commercial CC BY-NC 4.0 license, and the GALPACA model is additionally subject to the [OpenAI Terms of Service](https://openai.com/policies/terms-of-use).