---
license: apache-2.0
pipeline_tag: text-generation
tags:
- llm-foundry
- docsgpt
---
DocsGPT-7B is a decoder-style transformer fine-tuned specifically for answering questions based on documentation given in context. It is built on top of the MosaicPretrainedTransformer (MPT) architecture, fine-tuned from the MPT-7B model developed by MosaicML. The model inherits the strong language understanding capabilities of MPT-7B and has been specialized for documentation-oriented question answering.
## Model Description
- **Architecture:** Decoder-style Transformer
- **Training data:** Fine-tuned on approximately 1000 high-quality examples of documentation question-answering workflows.
- **Base model:** Fine-tuned version of MPT-7B, which was pretrained from scratch on 1T tokens of English text and code.
- **License:** Apache 2.0
## Features
- Attention with Linear Biases (ALiBi): Inherited from the MPT family, this feature eliminates hard context-length limits by replacing positional embeddings, allowing efficient and effective processing of lengthy documents (see the sketch after this list for extending the context window). In the future we plan to finish training on our larger dataset and to increase the number of context tokens.
- Optimized for Documentation: Specifically fine-tuned for providing answers that are based on documentation provided in context, making it particularly useful for developers and technical support teams.
- Easy to Serve: Can be efficiently served using standard HuggingFace pipelines or NVIDIA's FasterTransformer.
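Since ALiBi removes the dependence on learned positional embeddings, the context window can be extended at load time by overriding `max_seq_len` in the config. The snippet below is a minimal sketch following the pattern documented for the MPT-7B base model; the specific value is an assumption and is bounded in practice by available memory.

```python
import transformers

# Load the custom MPT config and raise the maximum sequence length; ALiBi lets
# the model attend over sequences longer than those seen during pretraining.
config = transformers.AutoConfig.from_pretrained(
    'Arc53/DocsGPT-7B',
    trust_remote_code=True
)
config.max_seq_len = 4096  # assumed value; input + output tokens share this budget

model = transformers.AutoModelForCausalLM.from_pretrained(
    'Arc53/DocsGPT-7B',
    config=config,
    trust_remote_code=True
)
```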
## How to Use
```python
import transformers

# trust_remote_code=True is required because the MPT architecture ships its own
# modeling code in the repository rather than being part of the transformers library.
model = transformers.AutoModelForCausalLM.from_pretrained(
    'Arc53/DocsGPT-7B',
    trust_remote_code=True
)
```

This model uses the EleutherAI/gpt-neox-20b tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
```
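With the model and tokenizer loaded as above, generation works like any other causal LM. The sketch below is a minimal example; the prompt layout (documentation pasted as context, followed by a question) is an illustrative assumption and may not match the exact format used during fine-tuning.

```python
import torch

# Assumed prompt layout: documentation as context, then the user's question.
prompt = (
    "### Documentation:\n"
    "To install the package, run `pip install example-lib`.\n\n"
    "### Question:\n"
    "How do I install the library?\n\n"
    "### Answer:\n"
)

inputs = tokenizer(prompt, return_tensors='pt')
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens.
answer = tokenizer.decode(
    output_ids[0, inputs['input_ids'].shape[1]:],
    skip_special_tokens=True
)
print(answer)
```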
## Warning
This is an early version; fine-tuning on 1k examples is just a proof of concept. We plan to fine-tune on at least 100k more examples.
## Documentation
- Base model documentation
- Our community Discord
- DocsGPT project
## Training Configuration
Training took 3 hours on 4x A100 GPUs on Google Cloud.
## Training data
It is based on all the feedback that we have received from Here. There is a thumbs-up or thumbs-down button next to each response; in this version we used 1k responses.
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Limitations
Please be aware that this is a relatively small LLM and it is prone to biases and hallucinations.
Our live demo uses a mixture of models.
## Model License
Apache-2.0