---
license: apache-2.0
pipeline_tag: text-generation
tags:
- llm-foundry
- docsgpt
---

DocsGPT-7B is a decoder-style transformer fine-tuned specifically to answer questions from documentation provided in context. It extends the MosaicPretrainedTransformer (MPT) family, being fine-tuned from MosaicML's MPT-7B model: it inherits MPT-7B's strong language-understanding capabilities and specializes them for documentation-oriented question answering.

## Model Description

* Architecture: Decoder-style transformer
* Language: English
* Training data: Fine-tuned on approximately 1,000 high-quality examples of documentation question-answering workflows
* Base model: MPT-7B, which is pretrained from scratch on 1T tokens of English text and code
* License: Apache 2.0

## Features

* Attention with Linear Biases (ALiBi): Inherited from the MPT family, ALiBi replaces positional embeddings, removing the hard context-length limit and allowing lengthy documents to be processed efficiently. We plan to finish training on our larger dataset and to increase the number of context tokens.
* Optimized for Documentation: Specifically fine-tuned to provide answers based on documentation supplied in context, making it particularly useful for developers and technical support teams.
* Easy to Serve: Can be served efficiently using standard Hugging Face pipelines or NVIDIA's FasterTransformer; a minimal pipeline sketch follows this list.

## How to Use

This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and fine-tuning. It can also be loaded directly with the Hugging Face `transformers` library:

```python
import transformers

# This mirrors the base MPT-7B loading example; point the repository id at this
# model's repo to load the fine-tuned DocsGPT-7B weights.
model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b',
    trust_remote_code=True  # MPT models ship custom modeling code
)
```
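
Because ALiBi replaces fixed positional embeddings, the usable context window can be raised at load time. The snippet below is a sketch that follows the pattern documented for the base MPT-7B model and assumes the MPT-style config exposes a `max_seq_len` field:

```python
import transformers

name = 'mosaicml/mpt-7b'  # replace with this model's repository id for the fine-tuned weights

# Assumption: the MPT config exposes `max_seq_len`, as documented for the base model.
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096  # allow longer (input + output) sequences than the training length

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    trust_remote_code=True,
)
```
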

This model uses the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer:

```python
from transformers import AutoTokenizer

# DocsGPT-7B inherits the GPT-NeoX-20B tokenizer from the MPT family.
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
```
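
With the model and tokenizer loaded as above, generation follows the usual `transformers` pattern. The prompt layout below is purely illustrative; the exact template used during fine-tuning is not specified in this card, but the idea is to paste the relevant documentation into the context and then ask the question.

```python
# Illustrative prompt only: documentation pasted into context, followed by a question.
prompt = (
    "Documentation:\n"
    "To install the CLI, run `pip install examplepkg`, then run `examplepkg init`.\n\n"
    "Question: How do I install the CLI?\n"
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
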
## Documentation

* [Base model documentation](https://github.com/mosaicml/llm-foundry/)
* Our community [Discord](https://discord.gg/n5BX8dh8rU)
* The [DocsGPT](https://github.com/arc53/DocsGPT) project

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Limitations

Please be aware that this is a relatively small LLM and is prone to biases and hallucinations.

Our live [demo](https://docsgpt.arc53.com/) uses a mixture of models.

## Model License

Apache-2.0