---
license: apache-2.0
---
[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()

I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.

# dragon-falcon-7b-v0 - GGUF
- Model creator: [llmware](https://huggingface.co/llmware)
- Original model: [dragon-falcon-7b-v0](https://huggingface.co/llmware/dragon-falcon-7b-v0)

# K-Quants in Falcon 7b models

New releases of Llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models. This is achieved by employing a fallback solution for model layers that cannot be quantized with real K-quants.

For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size or smaller file sizes with comparable performance.

So this solution ensures improved performance and efficiency over the legacy Q4_0, Q4_1, Q5_0 and Q5_1 quantizations.
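
If you want to see this mixture of quantization types for yourself, you can inspect a downloaded file with the `gguf` Python package. This is a minimal sketch, assuming `pip install gguf`; the filename is a hypothetical placeholder for whichever variant you download:

```python
# Sketch: list the per-tensor quantization types inside a GGUF file.
# Assumes the gguf package (pip install gguf); the path is a hypothetical placeholder.
from gguf import GGUFReader

reader = GGUFReader("./dragon-falcon-7b-v0-Q4_K_M.gguf")
for tensor in reader.tensors:
    # tensor_type shows which quant type each layer actually received,
    # e.g. a mix of K-quants and legacy types in a Falcon 7B K-quant file.
    print(tensor.name, tensor.tensor_type.name)
```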

# Important Update for Falcon Models in llama.cpp Versions After October 18, 2023

As previously noted on the [Llama.cpp GitHub repository](https://github.com/ggerganov/llama.cpp#hot-topics), all new Llama.cpp releases after October 18, 2023 required re-quantization due to the implementation of the new BPE tokenizer.

This re-quantization process for Falcon models is now complete, and the latest quantized models are available here for download. To ensure continued compatibility with recent llama.cpp software, you need to update your Falcon models.

- **Stay Informed:** Keep an eye on the release schedules of software applications using llama.cpp libraries.
- **Monitor Upload Times:** Re-quantization is complete. Watch for updates on my Hugging Face model pages.

This change only affects **Falcon** and **Starcoder** models; other models remain unaffected.

# About GGUF format

`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software supports it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
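
Since this repository distributes GGUF files, one straightforward way to run them is through the llama-cpp-python bindings built on llama.cpp. Here is a minimal sketch, assuming `pip install llama-cpp-python`; the filename is again a hypothetical placeholder:

```python
# Sketch: run a GGUF quantization of this model with the llama-cpp-python bindings.
# The model_path is a hypothetical placeholder - substitute the file you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="./dragon-falcon-7b-v0-Q4_K_M.gguf", n_ctx=2048)

# DRAGON models expect the <human>/<bot> prompt wrapper described in the
# original model card further below.
prompt = "<human>: The invoice total is $425.50, due April 30, 2024.\nWhat is the invoice total?\n<bot>:"
result = llm(prompt, max_tokens=100, temperature=0.3)
print(result["choices"][0]["text"])
```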

# Quantization variants

A number of quantized files are available to cater to your specific needs. Here's how to choose the best option for you:

# Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.

## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves a fallback solution that makes them not *real* K-quants. More details can be found in the affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models.)

# K-quants

K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may encounter bigger quality differences between the two answers than between the quantized and the original model.

---

# Original Model Card:
# Model Card for Model ID

dragon-falcon-7b-v0 is part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Falcon-7B base model.

DRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents, with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation.

### Benchmark Tests

Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)
Average of 2 test runs, with 1 point for a correct answer, 0.5 points for a partially correct or blank / NF answer, 0.0 points for an incorrect answer, and -1 point for hallucinations.

- **Accuracy Score:** **94** correct out of 100
- Not Found Classification: 75.0%
- Boolean: 81.25%
- Math/Logic: 66.75%
- Complex Questions (1-5): 3 (Medium)
- Summarization Quality (1-5): 3 (Coherent, extractive)
- Hallucinations: No hallucinations observed in test runs.

For test run results (and a good indicator of target use cases), please see the files "core_rag_test" and "answer_sheet" in this repo.
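
For reference, the scoring rule described above boils down to the following computation. This is a sketch on made-up labels, not the official evaluation script:

```python
# Sketch: the scoring scheme described above, applied to made-up labels.
# Each answer is labeled "correct", "partial" (partially correct or blank/NF),
# "incorrect", or "hallucination".
POINTS = {"correct": 1.0, "partial": 0.5, "incorrect": 0.0, "hallucination": -1.0}

def run_score(labels):
    return sum(POINTS[label] for label in labels)

# Average of 2 test runs over the same 100 questions:
run1 = ["correct"] * 93 + ["partial"] * 2 + ["incorrect"] * 5
run2 = ["correct"] * 94 + ["incorrect"] * 6
average_score = (run_score(run1) + run_score(run2)) / 2
print(average_score)  # 94.0 on this made-up data
```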

### Model Description

- **Developed by:** llmware
- **Model type:** Falcon
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Falcon-7B-Base

### Direct Use

DRAGON is designed for enterprise automation use cases, especially in knowledge-intensive industries such as financial services and legal and regulatory industries with complex information sources.

DRAGON models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.
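
To make these three instruction types concrete, here is a small illustrative sketch. The passage and questions are made up, and the `<human>`/`<bot>` wrapper is explained in the "How to Get Started" section below:

```python
# Sketch: the three core RAG instruction types on a made-up passage.
context = "Services Agreement between ACME Corp and Widget LLC, effective March 1, 2023. Total fees: $42,500."

# 1. Question-answering
qa_prompt = "<human>: " + context + "\n" + "What is the effective date of the agreement?" + "\n" + "<bot>:"

# 2. Key-value extraction
kv_prompt = "<human>: " + context + "\n" + "What are the total fees?" + "\n" + "<bot>:"

# 3. Basic summarization
sum_prompt = "<human>: " + context + "\n" + "Summarize the agreement in one sentence." + "\n" + "<bot>:"
```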

## Bias, Risks, and Limitations

Any model can provide inaccurate or incomplete information and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.


## How to Get Started with the Model

The fastest way to get started with dRAGon is through direct import in transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load from the original llmware repository on the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("llmware/dragon-falcon-7b-v0")
model = AutoModelForCausalLM.from_pretrained("llmware/dragon-falcon-7b-v0")
```

Please refer to the `generation_test` .py files in the Files repository, which include 200 samples and a script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, so you can swap out the test set for a RAG workflow consisting of business documents.

The dRAGon model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:

```python
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
```

The dRAGon model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:

1. Text Passage Context, and
2. Specific question or instruction based on the text passage

To get the best results, package "my_prompt" as follows:

```python
my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
```

If you are using a HuggingFace generation script:

```python
import torch

# prepare prompt packaging used in fine-tuning process
# (entries is one sample from the test set, with "context" and "query" keys)
new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

inputs = tokenizer(new_prompt, return_tensors="pt")
start_of_output = len(inputs.input_ids[0])

# run on GPU if available (device was implicit in the original snippet)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# temperature: set at 0.3 for consistency of output
# max_new_tokens: set at 100 - may prematurely stop a few of the summaries

outputs = model.generate(
    inputs.input_ids.to(device),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100,
)

output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
```

## Model Card Contact

Darren Oberst & llmware team

***End of original Model File***

---

## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I'm hoping for some support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.

<center>

[![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)
[![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911)
[![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht)
[![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht)
[![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966)

</center>