---
title: README
emoji: 🏢
colorFrom: indigo
colorTo: blue
sdk: static
pinned: true
license: apache-2.0
---
# BiMediX: Bilingual Medical Mixture of Experts LLM

Welcome to the official Hugging Face repository for BiMediX, a bilingual medical Large Language Model (LLM) for English and Arabic. BiMediX supports a broad range of medical interactions, including multi-turn chats, multiple-choice question answering, and open-ended question answering.
## Key Features
- **Bilingual Support**: Seamless interaction in both English and Arabic across multi-turn chats, multiple-choice question answering, and open-ended question answering.
- **BiMed1.3M Dataset**: A dataset of 1.3 million bilingual medical interactions in English and Arabic, including 250k synthesized multi-turn doctor–patient chats for instruction tuning.
- **High-Quality Translation**: A semi-automated English-to-Arabic translation pipeline with human refinement ensures accurate, high-quality translations.
- **Evaluation Benchmark for Arabic Medical LLMs**: A comprehensive benchmark for evaluating Arabic medical language models, setting a new standard in the field.
- **State-of-the-Art Performance**: Outperforms existing models on medical benchmarks while running 8 times faster than comparable models.
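As a rough illustration of how a semi-automated translation pipeline with human refinement can be organized: machine-translate each sentence, then route low-confidence outputs to a human review queue. The translation function, confidence scores, and threshold below are hypothetical stand-ins, not the project's actual pipeline.

```python
# Hypothetical sketch of a semi-automated EN->AR translation pipeline.
# machine_translate is a stub standing in for a real MT system; a real
# pipeline would return genuine translations and quality scores.
def machine_translate(text_en):
    # Returns a (translation, confidence) pair; here the confidence is faked.
    return f"<ar:{text_en}>", 0.6 if "dosage" in text_en else 0.95

def run_pipeline(sentences, threshold=0.9):
    """Accept high-confidence translations; queue the rest for human review."""
    accepted, needs_review = [], []
    for s in sentences:
        ar, conf = machine_translate(s)
        (accepted if conf >= threshold else needs_review).append((s, ar))
    return accepted, needs_review

accepted, needs_review = run_pipeline(
    ["Take two tablets daily.", "Adjust the dosage for renal impairment."]
)
print(len(accepted), len(needs_review))  # 1 1
```

The key design point is the confidence gate: automatic output is only trusted above a quality threshold, and everything else gets human attention.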
## Getting Started
Load the model and run inference with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TODO"  # replace with the released BiMediX model identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "TODO"  # your English or Arabic prompt
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Model Details
(Describe the model's architecture, focusing on its mixture of experts design.)
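To give a sense of what a mixture-of-experts layer does, here is a generic top-2 routing sketch. This is an illustration of the technique only, not the actual BiMediX architecture; the expert count, gating weights, and expert functions are all assumptions.

```python
# Generic top-k mixture-of-experts routing sketch (NOT BiMediX's real layer).
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def moe_forward(token, experts, gate_weights, top_k=2):
    """Route a token vector through the top_k experts chosen by the gate."""
    logits = gate_weights @ token          # one routing logit per expert
    top = np.argsort(logits)[-top_k:]      # indices of the top_k experts
    probs = softmax(logits[top])           # renormalize over selected experts
    # Output is the probability-weighted sum of the selected experts' outputs.
    return sum(p * experts[i](token) for p, i in zip(probs, top))

# Toy usage: 4 "experts" that simply scale an 8-dim token vector.
rng = np.random.default_rng(0)
experts = [lambda x, s=s: s * x for s in (0.5, 1.0, 1.5, 2.0)]
gate = rng.normal(size=(4, 8))
out = moe_forward(rng.normal(size=8), experts, gate)
print(out.shape)  # (8,)
```

The point of this design is that only `top_k` experts run per token, so capacity grows with the number of experts while per-token compute stays roughly constant.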
## Dataset
(Details about the BiMed1.3M dataset, including composition and access.)
## Benchmarks and Performance
(Details about benchmarks and results.)
## Limitations and Ethical Considerations
This release is intended for research and is not ready for clinical or commercial use. Users should employ BiMediX responsibly, especially when applying its outputs to real-world medical scenarios: verify the model's advice with qualified healthcare professionals, and do not rely on AI for medical diagnoses or treatment decisions. Despite the advancements BiMediX brings to medical NLP, it shares the common limitations of other language models, including hallucinations, toxicity, and stereotypes, and its medical diagnoses and recommendations are not infallible.
## License and Citation
BiMediX is released under the Apache License 2.0. For more details, please refer to the LICENSE file included in this repository.
If you use BiMediX in your research, please cite our work as follows:
```bibtex
@article{yourModel2024,
  title   = {BiMediX: Bilingual Medical Mixture of Experts LLM},
  author  = {Your Name and Collaborators},
  journal = {Journal of AI Research},
  year    = {2024},
  volume  = {xx},
  number  = {xx},
  pages   = {xx-xx},
  doi     = {xx.xxxx/xxxxxx}
}
```
Visit our GitHub repository for more information and resources.