--- |
|
|
|
|
|
license: apache-2.0 |
|
--- |
|
|
|
# BloomChat V1.0 |
|
|
|
<!-- Provide a quick summary of what the model is/does. --> |
|
|
|
BloomChat-v1.0 is based on [BigScience Group's Bloom-176B model](https://huggingface.co/bigscience/bloom). It was instruction-tuned on a subset of 100k data points per data source from the [OIG dataset](https://huggingface.co/datasets/laion/OIG) provided by LAION, and then aligned using [Dolly 2.0](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and [Oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1).
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
|
|
- **Developed by:** [SambaNova Systems](https://sambanova.ai/) and [Together Computer](https://www.together.xyz/) |
|
- **Model type:** Language Model |
|
- **Language(s):** Multiple; see [training data from Bloom-176B](https://huggingface.co/bigscience/bloom#training-data) |
|
- **License:** apache-2.0 |
|
- **Instruction-tuned from model:** [BigScience Group Bloom-176B](https://huggingface.co/bigscience/bloom)
|
|
|
### Additional Information |
|
|
|
<!-- Provide the basic links for the model. --> |
|
|
|
- **Blogpost:** [More Information Needed] |
|
|
|
## Uses |
|
|
|
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> |
|
|
|
### Direct Use |
|
|
|
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> |
|
|
|
[More Information Needed] |
|
|
|
### Downstream Use
|
|
|
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> |
|
|
|
[More Information Needed] |
|
|
|
### Out-of-Scope Use |
|
|
|
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> |
|
|
|
[More Information Needed] |
|
|
|
## Bias, Risks, and Limitations |
|
|
|
<!-- This section is meant to convey both technical and sociotechnical limitations. --> |
|
|
|
Like all LLMs, BloomChat has certain limitations: |
|
- Hallucination: BloomChat may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information. |
|
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output. |
|
- Repetition: BloomChat may produce repetitive phrases or sentences, leading to less engaging and informative responses. |
|
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited. |
|
- Toxicity: BloomChat may inadvertently generate responses containing inappropriate or harmful content. |
|
|
|
### Recommendations |
|
|
|
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> |
|
|
|
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
|
|
|
## How to Get Started with the Model |
|
|
|
Use the code below to get started with the model. |
|
|
|
[More Information Needed] |
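
Official example code has not been published yet. In the meantime, here is a minimal, unverified sketch using the Hugging Face `transformers` API. The repository id, dtype, and generation settings below are illustrative assumptions, not values confirmed by this card; note that the full 176B model requires substantial multi-GPU memory.

```python
# Minimal sketch, assuming the checkpoint loads with the standard
# transformers causal-LM classes. The repo id is an assumption and
# should be replaced with this model's actual Hugging Face repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sambanovasystems/BLOOMChat-176B-v1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",           # shard the 176B weights across available GPUs
    torch_dtype=torch.bfloat16,  # assumed dtype; not specified in this card
)

prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.8,         # assumed sampling settings
    repetition_penalty=1.2,  # helps mitigate the repetition noted above
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```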
|
|
|
## Training Details |
|
|
|
### Training Data |
|
|
|
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> |
|
|
|
- [OIG dataset](https://huggingface.co/datasets/laion/OIG) |
|
- [Dolly 2.0](https://huggingface.co/datasets/databricks/databricks-dolly-15k) |
|
- [Oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) |
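
All three datasets listed above are hosted on the Hugging Face Hub and can be loaded with the `datasets` library; a minimal sketch follows. The exact splits and the 100k-per-source subsampling used for BloomChat are not documented here, and streaming for OIG is an assumption to keep the download size manageable.

```python
# Minimal sketch: load the three tuning datasets from the Hugging Face Hub.
# Streaming for OIG is an assumption (the corpus is large); the exact
# subsets/splits used for BloomChat are not documented in this card.
from datasets import load_dataset

oig = load_dataset("laion/OIG", split="train", streaming=True)
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
oasst1 = load_dataset("OpenAssistant/oasst1", split="train")

print(next(iter(oig)))  # one OIG record
print(dolly[0])         # one Dolly 2.0 record
print(oasst1[0])        # one Oasst1 record
```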
|
|
|
### Training Procedure |
|
|
|
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> |
|
|
|
We trained BloomChat with SambaStudio, a platform built on SambaNova's in-house Reconfigurable Dataflow Unit (RDU). We started from [Bloom-176B](https://huggingface.co/bigscience/bloom), an open-source multilingual 176B-parameter GPT-style model pretrained by the [BigScience group](https://huggingface.co/bigscience).
|
|
|
### Hyperparameters |
|
|
|
**Instruction-tuned Training on OIG** |
|
|
|
- Hardware: SambaNova Reconfigurable Dataflow Unit (RDU) |
|
- Optimizer: AdamW |
|
- Grad accumulation: 1 |
|
- Epochs: 1 |
|
- Global Batch size: 128 |
|
- Batch tokens: 128 * 2048 = 262,144 tokens |
|
- LR: 1e-5 |
|
- Weight decay: 0.1 |
|
|
|
**Instruction-tuned Training on Dolly 2.0 and Oasst1** |
|
|
|
- Hardware: SambaNova Reconfigurable Dataflow Unit (RDU) |
|
- Optimizer: AdamW |
|
- Grad accumulation: 1 |
|
- Epochs: 3 |
|
- Global Batch size: 128 |
|
- Batch tokens: 128 * 2048 = 262,144 tokens |
|
- LR: 1e-5 |
|
- Weight decay: 0.1 |
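
For reference, a rough PyTorch approximation of the optimizer settings listed above is sketched below. This is illustrative only: the actual runs used SambaStudio on RDUs, and details such as the AdamW betas and any learning-rate schedule are not documented in this card.

```python
# Illustrative PyTorch approximation of the listed optimizer settings.
# Betas/eps and any LR schedule are assumptions; they are not documented here.
import torch

def make_optimizer(model: torch.nn.Module) -> torch.optim.AdamW:
    return torch.optim.AdamW(
        model.parameters(),
        lr=1e-5,           # learning rate from the lists above
        weight_decay=0.1,  # weight decay from the lists above
    )

# Global batch: 128 sequences x 2048 tokens = 262,144 tokens per step,
# with gradient accumulation of 1 (one optimizer step per batch).
GLOBAL_BATCH_SIZE = 128
SEQ_LEN = 2048
EPOCHS_STAGE1 = 1  # instruction tuning on OIG
EPOCHS_STAGE2 = 3  # alignment on Dolly 2.0 + Oasst1
```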
|
|
|
|
|
## Evaluation |
|
|
|
<!-- This section describes the evaluation protocols and provides the results. --> |
|
|
|
![HELM core-scenarios](HELM_core-senarios_CNN+MS_Marco_WIP.png) |
|
|
|
![Multilingual scores French and Hindi](Multilinguality_WMT-14_on_French+Hindi.png)
|
|
|
![Multilingual scores Chinese](Multilinguality_WMT-14_on_Simplified_Chinese.png) |
|
|
|
![Mean Win Rate on HELM](Open_source_model_Mean_Win_Rate_on_HELM_core_scenarios.png) |
|
|
|
## Community |
|
|
|
[Link to Discord server]
|
|
|
|
|
|