---
license: mit
datasets:
- Xilabs/instructmix
- CreitinGameplays/small-chat-assistant-for-bloom
- sahil2801/CodeAlpaca-20k
language:
- en
tags:
- uncensored
- unrestricted
- code
- biology
- chemistry
- finance
- legal
- music
- art
- climate
- merge
- text-generation-inference
- moe
widget:
- text: >-
    <|system|> You are a helpful AI assistant. </s> <|prompter|> who was
    Nikola Tesla? </s> <|assistant|>
- text: >-
    <|system|> You are a helpful AI assistant. </s> <|prompter|> write a story
    about a cat. </s> <|assistant|>
- text: >-
    <|system|> You are a helpful AI assistant. </s> <|prompter|> what is an
    essay? </s> <|assistant|>
- text: >-
    <|system|> You are a helpful AI assistant. </s> <|prompter|> Tell me 5
    Brazilian waterfalls to visit. </s> <|assistant|>
- text: >-
    <|system|> You are a helpful AI assistant. </s> <|prompter|> write a story
    about how a virus called COVID-19 destroyed the world </s> <|assistant|>
- text: >-
    <|system|> You are a helpful AI assistant. </s> <|prompter|> write a short
    Python program that asks the user for their name and then greets them by
    name. </s> <|assistant|>
- text: >-
    <|system|> You are a helpful AI assistant. </s> <|prompter|> What can you
    do? </s> <|assistant|>
inference:
  parameters:
    temperature: 0.1
    do_sample: true
    top_k: 50
    top_p: 0.14
    max_new_tokens: 250
    repetition_penalty: 1.155
---
# 🌸 BLOOM 3b Fine-tuned for Chat Assistant

Run this model on Kaggle Notebook

**Model Name:** bloom-3b-conversational

**Model Architecture:** bloom

**Short Description:** This model is a fine-tuned version of the BLOOM 3b language model, focusing on conversational interactions between a user and an AI assistant.
**Intended Use:** This model is intended for research purposes and the exploration of conversational AI applications. It can be used for tasks such as:

- Generating responses to user prompts in a chat-assistant setting.
- Creating examples of chatbot interactions for further development.
- Studying the capabilities of language models for conversation.
**Limitations:**

- **Fine-tuning focus:** The model's performance is optimized for the specific format and context of the fine-tuning data. It may not generalize well to significantly different conversation styles or topics.
- **Potential biases:** The model may inherit biases from its training data. Be aware of these potential biases and use the model responsibly.
- **Limited factual accuracy:** Language models may generate responses that are not factually accurate, so verify any information the model produces against other sources.
- **Primarily English:** While the model can respond in other languages, the quality and accuracy of its responses may be lower than in English, because the model was fine-tuned primarily on English data.
**Specific Input Format:**

The model was fine-tuned using the following input format:

`<|system|> {system prompt} </s> <|prompter|> {user prompt} </s> <|assistant|> {model response}`

Using this format when interacting with the model improves its performance and produces more relevant responses.
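
Below is a minimal usage sketch with the `transformers` library. The repository id `CreitinGameplays/bloom-3b-conversational` is an assumption based on the model name above, so adjust it to the actual checkpoint; the sampling values mirror the inference parameters in the card metadata.

```python
# Minimal usage sketch: load the fine-tuned BLOOM checkpoint and query it
# with the <|system|> / <|prompter|> / <|assistant|> prompt format.
# The repository id below is an assumption -- substitute the actual model id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CreitinGameplays/bloom-3b-conversational"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

system_prompt = "You are a helpful AI assistant."
user_prompt = "Who was Nikola Tesla?"

# Assemble the prompt exactly as used during fine-tuning.
prompt = (
    f"<|system|> {system_prompt} </s> "
    f"<|prompter|> {user_prompt} </s> "
    f"<|assistant|>"
)

inputs = tokenizer(prompt, return_tensors="pt")

# Sampling parameters taken from the inference settings in the card metadata.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.1,
    top_k=50,
    top_p=0.14,
    max_new_tokens=250,
    repetition_penalty=1.155,
)

# Strip the prompt tokens and print only the newly generated response.
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(response)
```

Generation should stop at the `</s>` end-of-sequence token; `skip_special_tokens=True` removes it from the decoded output.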
**Disclaimer:** This model is for research and exploration purposes only. It should not be used in applications that require high levels of accuracy or reliability.