---
license: cc-by-nc-4.0
language:
- ro
---

# Model Card for RoLlama2-7b-Base

RoLlama2 is a family of pretrained and fine-tuned generative text models for Romanian. This is the repository for the **foundational 7B model**. Links to other models can be found at the bottom of this page.

## Model Details

### Model Description

RoLlama2 represents the first open-source effort to build an LLM specialized for Romanian. OpenLLM-Ro developed and publicly released a collection of Romanian LLMs, both as foundational models and as instruct and chat variants.

- **Developed by:** OpenLLM-Ro
- **Language(s):** Romanian
- **License:** cc-by-nc-4.0

### Model Sources

- **Repository:** https://github.com/OpenLLM-Ro/llama-recipes
- **Paper:** [More Information Needed]

## Intended Use

### Intended Use Cases

RoLlama2 is intended for research use in Romanian. Base models can be adapted for a variety of natural language tasks, while instruction- and chat-tuned models are intended for assistant-like chat.

### Out-of-Scope Use

Use in any manner that violates the license or any applicable laws or regulations, and use in languages other than Romanian.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Base")
model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Base")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")

# Generate a continuation; max_new_tokens caps the output length
# (generate() otherwise falls back to a short default).
outputs = model.generate(**input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```

## Benchmarks

| Model                | Average  | ARC      | MMLU     | Winogrande | HellaSwag | GSM8k    | TruthfulQA |
|----------------------|:--------:|:--------:|:--------:|:----------:|:---------:|:--------:|:----------:|
| Llama-2-7b           | 35.65    | 33.85    | 30.93    | 56.43      | 46.98     | 1.37     | 44.36      |
| *RoLlama2-7b-Base*   | *38.32*  | *35.83*  | *30.47*  | *60.16*    | *55.52*   | *2.17*   | *45.78*    |
| Llama-2-7b-chat      | 35.58    | 34.92    | 32.37    | 54.26      | 44.52     | 2.05     | 45.38      |
| RoLlama2-7b-Instruct | **44.42**| **40.36**| **37.41**| **69.58**  | 55.64     | **17.59**| 45.96      |
| RoLlama2-7b-Chat     | 42.65    | 38.29    | 35.27    | 65.25      | **56.45** | 12.84    | **47.79**  |

## MT-Bench

| Model                | Average  | 1st turn | 2nd turn |
|----------------------|:--------:|:--------:|:--------:|
| Llama-2-7b-chat      | 1.70     | 2.00     | 1.41     |
| RoLlama2-7b-Instruct | **4.31** | **5.66** | 2.95     |
| RoLlama2-7b-Chat     | 3.91     | 4.25     | **3.57** |

## RoLlama2 Model Family

| Model                | Link |
|----------------------|:----:|
| *RoLlama2-7b-Base*   | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Base) |
| RoLlama2-7b-Instruct | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Instruct) |
| RoLlama2-7b-Chat     | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Chat) |

## Citation

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]
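As a follow-up to the getting-started snippet above, the sketch below shows one way to load the 7B weights in half precision on a GPU, which is useful when memory is limited. This is a minimal sketch, not part of the official recipes: the `bfloat16` dtype, `device_map="auto"` (which requires the `accelerate` package), and the Romanian prompt are illustrative assumptions.

```python
# A minimal half-precision loading sketch; the dtype, device_map, and prompt
# are illustrative assumptions, not settings prescribed by this model card.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Base")
model = AutoModelForCausalLM.from_pretrained(
    "OpenLLM-Ro/RoLlama2-7b-Base",
    torch_dtype=torch.bfloat16,  # roughly halves memory versus float32
    device_map="auto",           # places weights on available GPU(s); needs `accelerate`
)

# "Write me a poem about machine learning." in Romanian.
inputs = tokenizer("Scrie-mi o poezie despre învățarea automată.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```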