---
library_name: keras-hub
---
### Model Overview
BLOOM, as described in [BLOOM: A 176B-Parameter Open-Access Multilingual Language Model](https://arxiv.org/pdf/2211.05100.pdf), is a large language model published by BigScience. BLOOM is able to output coherent text in 46 natural languages and 13 programming languages. The models in this collection range in size from 560 million to 3 billion parameters. See the model card below for benchmarks, data sources, and intended use cases.
Weights are released under the [RAIL License](https://www.licenses.ai/ai-licenses). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
## Links
* [BLOOM Quickstart Notebook](https://www.kaggle.com/code/smnsdt/bloom-quickstart)
* [BLOOM API Documentation](https://keras.io/api/keras_hub/models/bloom/)
* [BLOOM Model Card](https://huggingface.co/bigscience/bloom)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q "keras>=3"
```
JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|---------------------|------------|------------------------------|
| `bloom_560m_multi` | 559M | 560M base model |
| `bloom_1.1b_multi` | 1.06B | 1B base model |
| `bloom_1.7b_multi` | 1.72B | 1.7B base model |
| `bloom_3b_multi` | 3B | 3B base model |
| `bloomz_560m_multi` | 559M       | 560M instruction-tuned model |
| `bloomz_1.1b_multi` | 1.06B | 1B instruction-tuned model |
| `bloomz_1.7b_multi` | 1.72B | 1.7B instruction-tuned model |
| `bloomz_3b_multi` | 3B | 3B instruction-tuned model |
## Prompts
Performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear where the input stops, to avoid the model trying to continue it. For example, the prompt "Translate to English: Je t'aime" without a full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "Translate to English: Je t'aime.", "Translate to English: Je t'aime. Translation:", or "What is 'Je t'aime.' in English?", where it is clear to the model when it should answer. Further, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, tell the model so explicitly, e.g. "Explain in a sentence in Telugu what backpropagation in neural networks is."
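The prompt patterns above can be wrapped in a small helper. This is a hypothetical function (not part of KerasHub) that ends the input unambiguously with terminal punctuation and an optional answer marker:

```python
def make_translation_prompt(text: str, marker: bool = True) -> str:
    """Build a BLOOMZ translation prompt with an unambiguous stopping point."""
    prompt = f"Translate to English: {text}"
    if not prompt.endswith((".", "!", "?")):
        prompt += "."  # terminal punctuation signals the end of the input
    if marker:
        prompt += " Translation:"  # explicit cue for where the answer starts
    return prompt

print(make_translation_prompt("Je t'aime"))
# → Translate to English: Je t'aime. Translation:
```

The resulting string can then be passed to `generate()` on a BLOOMZ preset.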