|
--- |
|
license: apache-2.0 |
|
language: |
|
- en |
|
- he |
|
library_name: transformers |
|
pipeline_tag: text-generation |
|
base_model: yam-peleg/Hebrew-Mistral-7B |
|
--- |
|
# Hebrew-Mistral-7B-GGUF |
|
|
|
- This is a quantized version of [yam-peleg/Hebrew-Mistral-7B](https://huggingface.co/yam-peleg/Hebrew-Mistral-7B) created using llama.cpp.
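To give a sense of what quantization does to the weights, here is a simplified sketch of symmetric 4-bit block quantization in the spirit of llama.cpp's Q4_0 scheme (one scale per block of 32 weights). This is an illustration only, not the exact llama.cpp implementation; the weight values are synthetic.

```python
# Simplified 4-bit block quantization, Q4_0-style: each block of 32
# weights shares one float scale, and each weight is stored as a signed
# 4-bit integer. Not the exact llama.cpp kernel, just the core idea.

def quantize_block(block):
    """Quantize a block of floats to signed 4-bit ints plus one scale."""
    absmax = max(abs(x) for x in block) or 1.0
    scale = absmax / 7.0  # map [-absmax, absmax] into the int range [-7, 7]
    qs = [max(-8, min(7, round(x / scale))) for x in block]
    return scale, qs

def dequantize_block(scale, qs):
    """Recover approximate float weights from the quantized block."""
    return [q * scale for q in qs]

weights = [0.05 * i - 0.8 for i in range(32)]  # synthetic weight block
scale, qs = quantize_block(weights)
restored = dequantize_block(scale, qs)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

Rounding to a shared scale bounds the per-weight error by half the scale, which is why quality degrades gracefully at Q4 while the file shrinks to roughly a quarter of the fp16 size.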
|
|
|
# Model Description |
|
|
|
Hebrew-Mistral-7B is an open-source Large Language Model (LLM) with 7 billion parameters, pretrained on Hebrew and English, based on Mistral-7B-v0.1 from Mistral AI.
|
|
|
It has an extended Hebrew tokenizer with a 64,000-token vocabulary and is continuously pretrained from Mistral-7B on tokens in both English and Hebrew.
|
|
|
The resulting model is a powerful general-purpose language model suitable for a wide range of natural language processing tasks, with a focus on Hebrew language understanding and generation. |
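The quantized weights are shipped in llama.cpp's GGUF container. A minimal sketch of validating a GGUF header, assuming the documented layout (4-byte magic `GGUF`, little-endian uint32 version, uint64 tensor count, uint64 metadata key/value count); the header bytes below are synthetic, standing in for the start of a real `.gguf` file:

```python
import struct

# Parse the fixed-size start of a GGUF file header per the llama.cpp
# GGUF spec: magic "GGUF", uint32 version, uint64 tensor count,
# uint64 metadata key/value count, all little-endian (24 bytes total).

def parse_gguf_header(data: bytes):
    magic, version, n_tensors, n_kv = struct.unpack("<4sIQQ", data[:24])
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Synthetic header: a GGUF v3 file claiming 291 tensors and 24 metadata keys.
header = struct.pack("<4sIQQ", b"GGUF", 3, 291, 24)
info = parse_gguf_header(header)
```

A check like this is a cheap way to confirm a download completed correctly before handing the file to a runtime such as llama.cpp.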
|
|
|
### Notice |
|
|
|
Hebrew-Mistral-7B is a pretrained base model and therefore does not have any moderation mechanisms. |
|
|
|
### Authors of Original Model |
|
- Trained by Yam Peleg. |
|
- In collaboration with Jonathan Rouach and Arjeo, inc. |