Bangla Llama-2 7B Base v0.1 [pre-trained][Llama2 Original Tokenizer]
Welcome to the inaugural release of the Bangla Llama-2 7B base model, an important step in advancing LLMs for the Bangla language. This model is ready for immediate inference and primed for further fine-tuning to suit your specific NLP tasks.
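For quick experimentation, here is a minimal inference sketch using the Hugging Face transformers library. The repository id below is a placeholder assumption; substitute the actual Hub id of this checkpoint.

```python
# Minimal causal-LM inference sketch (requires transformers, torch, and accelerate).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BanglaLLM/bangla-llama-7b-base-v0.1"  # hypothetical Hub id, adjust as needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the float16 training precision noted below
    device_map="auto",          # place weights on GPU if one is available
)

prompt = "বাংলাদেশের রাজধানী"  # "The capital of Bangladesh" -- any Bangla prompt works
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because this is a base (non-instruction-tuned) checkpoint, expect free-form continuation of the prompt rather than chat-style answers.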
Please note: This model is a foundational Bangla large language model (LLM), designed primarily for causal language modeling (causal LM).
Model description
This Bangla Llama model is built on the original Llama-2 and retains the original Llama-2 tokenizer.
- Model type: A 7B-parameter causal LM pre-trained on the Bangla 2B+ BERT dataset.
- Language(s): Bangla and English
- License: GNU General Public License v3.0
- Source Model: meta-llama/Llama-2-7b-hf
- Training Precision: float16
- Code: GitHub
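Since the checkpoint is intended as a base for further fine-tuning, the sketch below shows one common approach: parameter-efficient LoRA adaptation with the peft library. This is an illustrative setup with assumed hyperparameters, a placeholder model id, and a placeholder text corpus, not the authors' training recipe.

```python
# Illustrative LoRA fine-tuning sketch (peft + transformers); not the authors' training recipe.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "BanglaLLM/bangla-llama-7b-base-v0.1"  # hypothetical Hub id, adjust as needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

model = AutoModelForCausalLM.from_pretrained(model_id)

# Attach low-rank adapters so only a small fraction of the weights is trained.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Placeholder corpus: any plain-text file with one example per line.
dataset = load_dataset("text", data_files={"train": "bangla_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bangla-llama-7b-lora",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,       # mixed-precision training; bf16=True is an option on Ampere+ GPUs
        logging_steps=50,
    ),
    train_dataset=tokenized,
    # mlm=False makes the collator build causal-LM labels from the input ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

LoRA keeps the 7B base weights frozen and trains only the adapter matrices, which keeps memory requirements modest; full fine-tuning is also possible but needs considerably more GPU memory.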
Usage Note
Please note that this model has not undergone detoxification. While it has strong linguistic capabilities, it may generate content that is harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications.
Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
Citation
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in understanding and producing the Bangla language.