
EXF-Medistral-Nemo-12B

Model Description

EXF-Medistral-Nemo-12B is a fine-tuned version of Mistral-Nemo-12B, optimized for tasks in the medical domain. It was trained on the Open-Nexus-MedQA dataset, which aggregates medical knowledge from public sources such as ChatDoctor and icliniq to improve the model's ability to answer medical questions accurately and reliably. The model is designed to assist with clinical decision support, medical coding, and patient care by generating responses grounded in comprehensive medical knowledge.

Model Architecture

  • Base Model: Mistral-Nemo-12B
  • Parameters: 12 billion
  • Fine-tuning Dataset: Open-Nexus-MedQA
  • Task: Medical question-answering (QA), medical coding, and healthcare information retrieval.

Training Data

The model was fine-tuned on the Open-Nexus-MedQA dataset, which aggregates data from multiple medical QA sources such as:

  • ChatDoctor
  • icliniq.com
  • HealthCareMagic
  • CareQA
  • MedInstruct

The dataset contains medical queries ranging from simple conditions to complex diagnoses, accompanied by accurate, domain-specific responses, making it a robust training source for real-world medical applications.

Intended Use

EXF-Medistral-Nemo-12B is ideal for:

  • Medical Question-Answering: It can be used for generating responses to patient queries or supporting healthcare professionals with clinical information.
  • Medical Coding: The model supports tasks related to CMS, OASIS, ICD-10, and other coding systems.
  • Clinical Decision Support: Assisting doctors and healthcare providers by offering evidence-based suggestions or answers.
  • Patient Care Tools: Powering medical chatbots or virtual assistants for patients seeking health information.
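For the medical-coding use cases above, downstream applications often sanity-check a generated code before storing it. A minimal sketch of an ICD-10-CM format check — the regex, helper name, and sample codes are illustrative assumptions, not part of this model, and the check validates format only, not membership in the official code set:

```python
import re

# Approximate ICD-10-CM shape: a letter, a digit, an alphanumeric,
# then an optional dot and up to four alphanumerics (e.g. "E11.9").
ICD10_PATTERN = re.compile(r"^[A-Z][0-9][0-9A-Z](?:\.[0-9A-Z]{1,4})?$")

def looks_like_icd10(code: str) -> bool:
    """Return True if `code` matches the approximate ICD-10-CM format.

    This checks shape only; it does not verify the code exists in the
    official ICD-10-CM code set.
    """
    return bool(ICD10_PATTERN.match(code.strip().upper()))

print(looks_like_icd10("E11.9"))   # type 2 diabetes mellitus without complications
print(looks_like_icd10("banana"))  # not a code
```

A check like this catches malformed generations early, but final coding decisions should still be reviewed against the official code set.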

Performance

The model has been fine-tuned for precision in the medical domain, demonstrating high accuracy in understanding and generating responses to complex medical queries. It excels in:

  • Medical terminology comprehension
  • Providing accurate ICD-10 and CMS codes
  • Generating medically relevant and safe answers

Limitations

  • Not a Diagnostic Tool: This model is not intended to replace medical professionals or provide definitive medical diagnoses. Always consult with a licensed healthcare provider for medical advice.
  • Training Data Bias: The dataset is based on publicly available medical QA data, which might not cover all edge cases or international healthcare systems.

How to Use

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("exafluence/EXF-Medistral-Nemo-12B")
model = AutoModelForCausalLM.from_pretrained("exafluence/EXF-Medistral-Nemo-12B")

input_text = "What are the symptoms of type 2 diabetes?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
# generate() returns a batch of sequences; decode the first one
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
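The snippet above feeds raw text, but instruct-tuned Mistral checkpoints generally expect an [INST]-wrapped prompt (in practice via `tokenizer.apply_chat_template`). A hedged sketch of building such a prompt by hand — the exact template string is an assumption, so prefer the tokenizer's own chat template when it is available:

```python
def build_prompt(question: str, system: str = "") -> str:
    """Wrap a user question in a Mistral-style [INST] instruction prompt.

    The template here is an assumed approximation; the tokenizer's
    apply_chat_template() is the authoritative source for this model.
    """
    prefix = f"{system}\n\n" if system else ""
    return f"[INST] {prefix}{question} [/INST]"

prompt = build_prompt(
    "What are the symptoms of type 2 diabetes?",
    system="You are a careful medical assistant.",
)
print(prompt)
```

The resulting string can be passed to the tokenizer in place of `input_text` above.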

License

This model is provided under a proprietary license. Usage is restricted to non-commercial purposes unless explicit permission is granted.

Citation

If you use this model, please cite:

@misc{exafluence2024EXFMedistralNemo12B,
  title={EXF-Medistral-Nemo-12B: A Fine-Tuned Medical Language Model for Healthcare Applications},
  author={Exafluence Inc.},
  year={2024},
  url={https://huggingface.co/exafluence/EXF-Medistral-Nemo-12B},
  doi={10.57967/hf/3284}
}

Contact

For any questions or inquiries regarding usage, licensing, or access, please contact Exafluence Inc.

Uploaded model

  • Developed by: exafluence
  • License: apache-2.0
  • Finetuned from model: unsloth/mistral-nemo-instruct-2407-bnb-4bit

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
