Update README.md
README.md (changed)
@@ -8,9 +8,7 @@ tags:
DocsGPT-7B is a decoder-style transformer fine-tuned specifically for answering questions from documentation provided in context. It is built on top of the MosaicPretrainedTransformer (MPT) architecture and fine-tuned from the MPT-7B model developed by MosaicML. The model inherits the powerful language-understanding capabilities of MPT-7B and has been specialized for documentation-oriented question answering.

-## Warning
-
-This is an early version; 1,000 examples is just a proof of concept, and we plan to fine-tune on at least 100k more examples.

## Model Description
@@ -47,6 +45,9 @@ from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
```

+## Warning
+
+This is an early version; fine-tuning on 1k examples is just a proof of concept, and we plan to fine-tune on at least 100k more examples.

## Documentation
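For illustration, here is a minimal sketch of how the model described above might be loaded and prompted for documentation-grounded question answering. The `Arc53/DocsGPT-7B` repo id, the prompt layout, and the generation settings are assumptions, not details confirmed by this README; the only parts taken from it are the GPT-NeoX-20B tokenizer and the MPT-7B base.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for illustration; substitute the actual model path.
MODEL_ID = "Arc53/DocsGPT-7B"

# The README pairs the model with the GPT-NeoX-20B tokenizer.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

# MPT-based checkpoints ship custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model.eval()

# Documentation-grounded prompt: the model answers from the supplied context.
# This layout is a guess; check the model card for the format used in training.
prompt = (
    "### Context:\n"
    "Widgets are registered with `widget.register(name)` and removed with "
    "`widget.unregister(name)`.\n\n"  # fictional documentation snippet
    "### Question:\nHow do I remove a widget?\n\n"
    "### Answer:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the model answers from documentation supplied in context, retrieving the relevant documentation chunk happens outside the model; the sketch hard-codes a fictional snippet in its place.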