---
license: llama2
model_type: llama
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---

# GOAT-70B-STORYTELLING model

![GOAT-70B-STORYTELLING](img_here)

The GOAT-70B-STORYTELLING model is a supervised fine-tuned (SFT) version of LLaMA 2, developed by the GOAT.AI lab for story writing.

# GOAT-STORYTELLING-AGENT

The GOAT-70B-STORYTELLING model was developed as an integral component of the GOAT-STORYTELLING-AGENT framework. The framework is designed to generate high-quality, cohesive, and captivating narratives, including stories and books, from inputs such as plot outlines, character profiles, their interrelationships, and other relevant details. An example is provided below.

# Model description
- **Base Architecture:** LLaMA 2 70B
- **License:** llama2
- **Context window length:** 4096 tokens

### Learn more

- **Blog:** TBA
- **Framework:** github.com/GOAT-STORYTELLING-AGENT (TBA)

## Uses

The main purpose of GOAT-70B-STORYTELLING is to generate books, novels, movie scripts, etc., acting as an agent together with our GOAT-STORYTELLING-AGENT framework. It is specifically designed for story writers.

## Usage

The model can be either self-hosted via `transformers` or used with Spaces.

```python
import torch

from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "GOAT-AI/GOAT-70B-STORYTELLING"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16
)
```

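Once the model and tokenizer are loaded, text can be produced with the standard `transformers` generation API. Below is a minimal sketch; the helper name, sampling parameters, and prompt handling are illustrative assumptions, not the framework's official settings:

```python
def generate_story_text(model, tokenizer, prompt, max_new_tokens=512):
    # Tokenize the prompt and move the tensors to the model's device
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # Sample a continuation; temperature/top_p here are illustrative defaults
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.8,
        top_p=0.95,
    )
    # Decode only the newly generated tokens, skipping special tokens
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

For long-form story writing, this helper would typically be called repeatedly by an agent loop that feeds back plot outlines and previously generated text as context.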

Alternatively, use it from within the GOAT-STORYTELLING-AGENT framework:

```
GOAT-STORYTELLING-AGENT code
```
## Training dataset

The training dataset was collected via the GOAT-STORYTELLING-AGENT framework and GPT-4, combined with open-source instruction data. We will not release the dataset.

## License

The GOAT-70B-STORYTELLING model is based on [Meta's LLaMA-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf) and uses our own datasets.

GOAT-70B-STORYTELLING model weights are available under the LLAMA-2 license. Note that the GOAT-70B-STORYTELLING model weights require access to the LLaMA-2 model weights. The GOAT-70B-STORYTELLING model is based on LLaMA-2 and should be used according to the LLaMA-2 license.

### Risks and Biases

The GOAT-70B-STORYTELLING model can produce factually incorrect output and should not be relied on to deliver factually accurate information. It may generate wrong, biased, or otherwise offensive outputs.