---
license: cc
language:
- fa
- en
library_name: transformers
tags:
- text-generation-inference
inference: false
metrics:
- bleu
- comet
- accuracy
- perplexity
- spearmanr
pipeline_tag: text-generation
co2_eq_emissions:
  emissions: 232380
---

![PersianMind logo](PersianMind.jpg)

# PersianMind

PersianMind is a cross-lingual Persian-English large language model.

### Model Description

- **Developed by:** [Pedram Rostami](mailto:pedram.rostami@ut.ac.ir), [Ali Salemi](mailto:alisalemi@ut.ac.ir), and [Mohammad Javad Dousti](mailto:mjdousti@ut.ac.ir)
- **Model type:** Language model
- **Languages:** English and Persian
- **License:** [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)

## How to Get Started with the Model

Use the code below to get started with the model. Note that you need to install the `sentencepiece` and `accelerate` libraries to run it.

```python
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model = LlamaForCausalLM.from_pretrained(
    "universitytehran/PersianMind-v1.0",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    device_map={"": device},
)
tokenizer = LlamaTokenizer.from_pretrained(
    "universitytehran/PersianMind-v1.0",
)

TEMPLATE = "{context}\nYou: {prompt}\nPersianMind: "
CONTEXT = "This is a conversation with PersianMind. It is an artificial intelligence model designed by a team of " \
    "NLP experts at the University of Tehran to help you with various tasks such as answering questions, " \
    "providing recommendations, and helping with decision making. You can ask it anything you want and " \
    "it will do its best to give you accurate and relevant information."
PROMPT = "در مورد هوش مصنوعی توضیح بده."  # Persian for "Explain artificial intelligence."

model_input = TEMPLATE.format(context=CONTEXT, prompt=PROMPT)
input_tokens = tokenizer(model_input, return_tensors="pt")
input_tokens = input_tokens.to(device)

# Greedy decoding with a mild repetition penalty.
generate_ids = model.generate(**input_tokens, max_new_tokens=512, do_sample=False, repetition_penalty=1.1)
model_output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

# Strip the prompt so only the model's reply is printed.
model_output = model_output.replace(model_input, "")
print(model_output)
```

## How to Get Started with the Quantized Model

Quantized models can run on resource-constrained devices. To use them, you should install the `bitsandbytes` library. To get started with the 8-bit quantized model, define the model with the code below.

```python
model = LlamaForCausalLM.from_pretrained(
    "universitytehran/PersianMind-v1.0",
    device_map="auto",
    low_cpu_mem_usage=True,
    load_in_8bit=True
)
```

To get started with the 4-bit quantized model, define the model with the code below.

```python
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)
model = LlamaForCausalLM.from_pretrained(
    "universitytehran/PersianMind-v1.0",
    quantization_config=quantization_config,
    device_map="auto"
)
```

## Evaluating Quantized Models

| Model              | Belebele (Persian) | Translation Fa2En | Translation En2Fa | Model Size | Words/sec |
| :----------------- | :----------------: | :---------------: | :---------------: | :--------: | :-------: |
| PersianMind        | 73.9               | 83.61             | 79.44             | 13.66G     | 25.35     |
| PersianMind-8bit   | 73.7               | 82.32             | 78.61             | 7.2G       | 11.36     |
| PersianMind-4bit   | 70.2               | 82.07             | 80.36             | 3.9G       | 24.36     |

We evaluated the quantized models against the original model on various tasks. Specifically, we evaluated all models on the Belebele (Persian subset) benchmark, a multiple-choice reading-comprehension question-answering task, and report each model's accuracy. Additionally, we evaluated our models on Persian-to-English and English-to-Persian translation tasks. For this, we used the Persian-English subset of the Flores-200 dataset and report the results using the Comet metric. Furthermore, we calculated the average number of words each model generates per second while running the translation tasks. To understand resource efficiency, we measured each model's memory usage with the `get_memory_footprint` function.
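As a rough illustration of how the last two columns can be reproduced, the sketch below reports the model's memory footprint via the `get_memory_footprint` function mentioned above and times a single generation to estimate words per second. The timing loop and the reuse of `model`, `tokenizer`, `TEMPLATE`, `CONTEXT`, and `PROMPT` from the first example are our assumptions, not the exact evaluation script, and the numbers will vary with your hardware.

```python
import time

# Memory usage in gigabytes (get_memory_footprint returns bytes).
print(f"Model size: {model.get_memory_footprint() / 1024 ** 3:.2f} GB")

# Time one generation to estimate words per second; in the actual evaluation
# this would be averaged over the Flores-200 translation prompts.
model_input = TEMPLATE.format(context=CONTEXT, prompt=PROMPT)
input_tokens = tokenizer(model_input, return_tensors="pt").to(device)

start = time.time()
generate_ids = model.generate(**input_tokens, max_new_tokens=512,
                              do_sample=False, repetition_penalty=1.1)
elapsed = time.time() - start

output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0]
output = output.replace(model_input, "")
print(f"Words/sec: {len(output.split()) / elapsed:.2f}")
```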
## License

PersianMind is subject to Meta's [LLaMa2 Community License](https://raw.githubusercontent.com/facebookresearch/llama/main/LICENSE). It is further licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/), which allows non-commercial use of the model. Commercial use of this model requires a written agreement from the copyright holders, who are listed as the developers on this page. If you suspect any violations, please reach out to us.

## Citation

If you find this model helpful, please cite the following paper.

**BibTeX:**

```bibtex
@article{persianmind,
  title={{PersianMind: A Cross-Lingual Persian-English Large Language Model}},
  author={Rostami, Pedram and Salemi, Ali and Dousti, Mohammad Javad},
  journal={arXiv preprint},
  year={2024}
}
```