---
license: apache-2.0
pipeline_tag: text-generation
tags:
- llm-foundry
- docsgpt
---

DocsGPT-7B is a decoder-style transformer fine-tuned specifically to answer questions based on documentation provided in context. It extends the MosaicPretrainedTransformer (MPT) family and is fine-tuned from the MPT-7B model developed by MosaicML, inheriting MPT-7B's language understanding capabilities while being specialized for documentation-oriented question answering.

## Model Description

* Architecture: Decoder-style transformer
* Language: English
* Training data: Fine-tuned on approximately 1,000 high-quality examples of documentation question-answering workflows.
* Base model: MPT-7B, which was pretrained from scratch on 1T tokens of English text and code.
* License: Apache 2.0

## Features

* Attention with Linear Biases (ALiBi): Inherited from the MPT family, ALiBi replaces positional embeddings with a distance-based bias on attention scores, removing fixed context-length limits and enabling efficient processing of lengthy documents (a rough sketch of the idea follows this list). In the future we plan to finish training on our larger dataset and to increase the number of context tokens.
* Optimized for Documentation: Specifically fine-tuned for providing answers that are based on documentation provided in context, making it particularly useful for developers and technical support teams.
* Easy to Serve: Can be efficiently served using standard HuggingFace pipelines or NVIDIA's FasterTransformer.
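
As a rough illustration of the ALiBi idea (not the exact MPT implementation), the sketch below shows how a distance-proportional penalty can be added to attention logits in place of positional embeddings. The `alibi_bias` helper and its slope formula are simplified assumptions based on the ALiBi paper.

```python
import torch

# Hypothetical minimal sketch of ALiBi (Attention with Linear Biases):
# instead of adding positional embeddings to token representations, a
# distance-proportional penalty is added directly to the attention logits.
def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Geometric head-specific slopes (simplified; assumes num_heads is a power of two).
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])
    positions = torch.arange(seq_len)
    # Relative distance j - i is zero or negative under causal attention,
    # so the bias penalizes keys that are farther in the past.
    distances = (positions[None, :] - positions[:, None]).clamp(max=0).float()
    # Shape: (num_heads, seq_len, seq_len); added to q @ k^T / sqrt(d) before softmax.
    return slopes[:, None, None] * distances[None, :, :]

bias = alibi_bias(num_heads=8, seq_len=4)
# attn_logits = q @ k.transpose(-1, -2) / d ** 0.5 + bias
```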


## How to Use

This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.

```python
import transformers

# 'mosaicml/mpt-7b' is the base model; replace it with this model's
# repository ID on the Hugging Face Hub to load the DocsGPT-7B weights.
# trust_remote_code=True is required because MPT uses a custom model class.
model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b',
    trust_remote_code=True
)
```


This model uses the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.

```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
```
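
Putting the two together, a minimal generation sketch might look like the following. The prompt format and generation parameters are illustrative assumptions, not the exact format the model was fine-tuned on, and the model identifier should be swapped for this model's repository ID.

```python
import torch
import transformers

model_id = 'mosaicml/mpt-7b'  # assumption: replace with this model's Hub repository ID

tokenizer = transformers.AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
model.eval()

# Illustrative prompt: a documentation passage as context, followed by a question.
prompt = (
    "### Context:\n"
    "To install the package, run `pip install docsgpt`.\n"
    "### Question:\n"
    "How do I install the package?\n"
    "### Answer:\n"
)

inputs = tokenizer(prompt, return_tensors='pt')
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```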


## Documentation

* [Base model documentation](https://github.com/mosaicml/llm-foundry/)
* Our community [Discord](https://discord.gg/n5BX8dh8rU)
* [DocsGPT](https://github.com/arc53/DocsGPT) project
 


## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Limitations

Please be aware that this is a relatively small LLM, and it is prone to biases and hallucinations.


Try our live [demo](https://docsgpt.arc53.com/), which uses a mixture of models.

## Model License

Apache-2.0