---
license: apache-2.0
tags:
- rag
- closed-qa
- context
- mistral
---


DocsGPT is optimized for documentation (RAG-optimized): it is specifically fine-tuned to provide answers grounded in the supplied context, making it particularly useful for developers and technical support teams.
We used the LoRA fine-tuning process.
This model is fine-tuned on top of zephyr-7b-beta.


It's released under the Apache-2.0 license, so you can use it for commercial purposes too.
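
For reference, here is a minimal loading sketch using the Hugging Face `transformers` library. The repository id `Arc53/docsgpt-7b-mistral` is an assumption; substitute the id shown on this model page if it differs.

```python
# Minimal loading sketch (assumed repo id below; adjust as needed).
# Requires: transformers, torch, accelerate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Arc53/docsgpt-7b-mistral"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place weights on available GPU(s)/CPU
    torch_dtype="auto",  # keep the checkpoint's native dtype
)
```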

Benchmarks:

BACON:
The BACON test is an internal assessment designed to evaluate the capabilities of neural networks in handling questions with substantial content. It focuses on testing the model's understanding of context-driven queries, as well as its tendency toward hallucination and its attention span. The questions in both parts of the test are carefully crafted, drawing from diverse sources such as scientific papers, complex code problems, and instructional prompts, providing a comprehensive test of the model's ability to process and generate information across various domains.
| Model                        | Score |
|------------------------------|-------|
| gpt-4                        | 8.74  |
| DocsGPT-7b-Mistral           | 8.64  |
| gpt-3.5-turbo                | 8.42  |
| zephyr-7b-beta               | 8.37  |
| neural-chat-7b-v3-1          | 7.88  |
| Mistral-7B-Instruct-v0.1     | 7.44  |
| openinstruct-mistral-7b      | 5.86  |
| llama-2-13b                  | 2.29  |


![image/png](https://cdn-uploads.huggingface.co/production/uploads/6220f5dfd0351748e114ca53/lWefx5b5uQAt4Uzf_0x-O.png)


![image/png](https://cdn-uploads.huggingface.co/production/uploads/6220f5dfd0351748e114ca53/nAd4icZa2jIer-_JWOpZ0.png)


MT-Bench with an LLM judge:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6220f5dfd0351748e114ca53/SOOVW_j908gpB8W804vsG.png)

########## First turn ##########
| Model                 | Turn | Score    |
|-----------------------|------|----------|
| gpt-4                 | 1    | 8.956250 |
| gpt-3.5-turbo         | 1    | 8.075000 |
| DocsGPT-7b-Mistral    | 1    | 7.593750 |
| zephyr-7b-beta        | 1    | 7.412500 |
| vicuna-13b-v1.3       | 1    | 6.812500 |
| alpaca-13b            | 1    | 4.975000 |
| deepseek-coder-6.7b   | 1    | 4.506329 |

########## Second turn ##########
| Model                 | Turn | Score    |
|-----------------------|------|----------|
| gpt-4                 | 2    | 9.025000 |
| gpt-3.5-turbo         | 2    | 7.812500 |
| DocsGPT-7b-Mistral    | 2    | 6.740000 |
| zephyr-7b-beta        | 2    | 6.650000 |
| vicuna-13b-v1.3       | 2    | 5.962500 |
| deepseek-coder-6.7b   | 2    | 5.025641 |
| alpaca-13b            | 2    | 4.087500 |

########## Average ##########
| Model                 | Score    |
|-----------------------|----------|
| gpt-4                 | 8.990625 |
| gpt-3.5-turbo         | 7.943750 |
| DocsGPT-7b-Mistral    | 7.166875 |
| zephyr-7b-beta        | 7.031250 |
| vicuna-13b-v1.3       | 6.387500 |
| deepseek-coder-6.7b   | 4.764331 |
| alpaca-13b            | 4.531250 |


To prepare your prompts, make sure you keep this format:

### Instruction
(where the question goes)
### Context
(your document retrieval + system instructions)
### Answer
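
As a usage sketch under the same assumptions as the loading snippet above, the following assembles a prompt in this format and generates an answer; the question and context strings are placeholders for your query and retrieved documents.

```python
# Usage sketch of the prompt format above; `question` and `context`
# are placeholder strings.
question = "How do I authenticate against the API?"
context = "(retrieved documentation chunks + system instructions)"

prompt = (
    "### Instruction\n"
    f"{question}\n"
    "### Context\n"
    f"{context}\n"
    "### Answer\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
answer = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(answer)
```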