---
license: cc
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
---
|
# Medguanaco LoRA 33b 8bit |
|
|
|
|
|
## Table of Contents |
|
|
|
[Model Description](#model-description)
- [Architecture](#architecture)
- [Model Usage](#model-usage)
- [Training Data](#training-data)

[Limitations](#limitations)
|
|
|
## Model Description |
|
### Architecture |
|
`nmitchko/medguanaco-lora-33b-8bit` is a LoRA adapter for a large language model, fine-tuned specifically for medical domain tasks.
It is based on Guanaco, a LoRA fine-tune of the 33B-parameter LLaMA model.
Its primary goal is to improve performance on medical question answering and dialogue.
It was trained with [LoRA](https://arxiv.org/abs/2106.09685) on a base model loaded in 8-bit to reduce the memory footprint.
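For reference, the sketch below shows the general shape of an 8-bit LoRA fine-tuning setup with `peft`. The rank, alpha, and target modules are illustrative assumptions; the actual hyperparameters used to train this adapter are not listed here.

```python
import torch
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

# Load the base model in 8-bit, as it was during training
base = LlamaForCausalLM.from_pretrained(
    "timdettmers/guanaco-33b-merged",
    load_in_8bit=True,
    torch_dtype=torch.float16,
)
base = prepare_model_for_int8_training(base)

# Hypothetical LoRA hyperparameters, not the published training config
config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # a common choice for LLaMA blocks
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```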
|
|
|
### Model Usage

Steps to load this model:

1. Load Guanaco-33b-merged (https://huggingface.co/timdettmers/guanaco-33b-merged) **in 8-bit**.
   * I recommend text-generation-webui for trying it out: https://github.com/oobabooga/text-generation-webui/tree/main
2. Apply this LoRA on top of the base model, as shown below. The adapter was trained in 8-bit mode, so results may vary if the base model is loaded at higher precision.
|
|
|
|
|
```python
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

# Load the merged Guanaco 33B base model in 8-bit
base_model = "timdettmers/guanaco-33b-merged"
model = LlamaForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,
    torch_dtype=torch.float16,
)

# Apply the medical LoRA on top
lora_weights = "nmitchko/medguanaco-lora-33b-8bit"
model = PeftModel.from_pretrained(
    model,
    lora_weights,
    torch_dtype=torch.float16,
)
```
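With the adapter applied, generation works like any other causal LM in `transformers`. A minimal sketch, continuing from the snippet above; the prompt is an illustrative example, not a prescribed instruction format:

```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained(base_model)

# Illustrative medical prompt; adapt to whatever instruction format you use
prompt = "What are the common symptoms of type 2 diabetes?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```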
|
|
|
--- |
|
|
|
> The following section is taken from the source model card: [medalpaca](https://huggingface.co/medalpaca/medalpaca-lora-13b-8bit)
|
|
|
### Training Data |
|
The training data for this project was sourced from various resources.
Firstly, we used Anki flashcards to automatically generate questions
from the front of the cards and answers from the back.
Secondly, we generated medical question-answer pairs from [Wikidoc](https://www.wikidoc.org/index.php/Main_Page):
we extracted paragraphs with relevant headings and used ChatGPT 3.5
to generate questions from the headings, using the corresponding paragraphs
as answers. This dataset is still under development, and we believe
that approximately 70% of these question-answer pairs are factually correct.
Thirdly, we extracted question-answer pairs from StackExchange, taking the
top-rated questions from five categories: Academia, Bioinformatics, Biology,
Fitness, and Health. Additionally, we used a dataset from [ChatDoctor](https://arxiv.org/abs/2303.14070)
consisting of 200,000 question-answer pairs, available at https://github.com/Kent0n-Li/ChatDoctor.
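As an illustration of the Wikidoc step described above, here is a hypothetical sketch of how a heading/paragraph pair could be turned into a question-generation prompt; the actual prompts and tooling used by the medalpaca authors are not published here.

```python
def make_question_prompt(heading: str, paragraph: str) -> str:
    """Build a hypothetical prompt asking a chat model to turn a
    section heading into a standalone medical question whose answer
    is the given paragraph."""
    return (
        "Rewrite the following section heading as a standalone medical question.\n"
        f"Heading: {heading}\n"
        "The answer to the question is:\n"
        f"{paragraph}"
    )

# Example with hypothetical Wikidoc content
prompt = make_question_prompt(
    "Treatment",
    "First-line therapy is lifestyle modification combined with metformin.",
)
```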
|
|
|
| Source                       | n items |
|------------------------------|--------:|
| ChatDoc large                |  200000 |
| wikidoc                      |   67704 |
| Stackexchange academia       |   40865 |
| Anki flashcards              |   33955 |
| Stackexchange biology        |   27887 |
| Stackexchange fitness        |    9833 |
| Stackexchange health         |    7721 |
| Wikidoc patient information  |    5942 |
| Stackexchange bioinformatics |    5407 |
|
|
|
|
|
## Limitations |
|
The model may not perform effectively outside the scope of the medical domain.
The training data primarily targets the knowledge level of medical students,
which may result in limitations when addressing the needs of board-certified physicians.
The model has not been tested in real-world applications, so its efficacy and accuracy are currently unknown.
It should never be used as a substitute for a doctor's opinion and must be treated as a research tool only.