---
license: mit
base_model:
  - Qwen/Qwen2-7B-Instruct
tags:
  - medical
---

# Diabetica-7B

An adapted large language model facilitates multiple medical tasks in diabetes care

Code | [Paper](https://arxiv.org/abs/2409.13191)

## Introduction

Hello! Welcome to the Hugging Face repository for Diabetica.

Our study introduces a reproducible framework for developing a specialized LLM capable of handling various diabetes tasks. We present three key contributions:

- **High-performance domain-specific model:** Compared with previous generic LLMs, our model, Diabetica, showed superior performance across a broad range of diabetes-related tasks, including diagnosis, treatment recommendations, medication management, lifestyle advice, and patient education.

- **Reproducible framework:** We offer a detailed method for creating specialized medical LLMs using open-source models, curated disease-specific datasets, and fine-tuning techniques. This approach can be adapted to other medical fields, potentially accelerating the development of AI-assisted care (a minimal fine-tuning sketch follows this list).

- **Comprehensive evaluation:** We designed comprehensive benchmarks and conducted clinical trials to validate the model's effectiveness in clinical applications. This ensures the model's practical utility and sets a new standard for evaluating AI tools in diabetes care.
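
As a rough illustration of this recipe, the sketch below performs supervised fine-tuning of the Qwen2-7B-Instruct base model on a chat-formatted dataset using the `trl` library. The dataset path `diabetes_sft.json` and all hyperparameters are illustrative assumptions, not the settings used to train Diabetica; see our GitHub repository for the actual training code.

```python
# Hypothetical SFT sketch, NOT the official Diabetica training script.
# Assumes a conversational JSON dataset (a "messages" column of chat turns)
# at ./diabetes_sft.json.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="diabetes_sft.json", split="train")

config = SFTConfig(
    output_dir="diabetica-sft",      # where checkpoints are written
    per_device_train_batch_size=1,   # illustrative hyperparameters only
    gradient_accumulation_steps=16,
    num_train_epochs=3,
    learning_rate=1e-5,
    bf16=True,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2-7B-Instruct",  # the base model listed in this card
    train_dataset=dataset,
    args=config,
)
trainer.train()
```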

Please refer to our GitHub Repo for more details.

## Model Inference

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "WaltonFuture/Diabetica-7B"

# Load the model and tokenizer. device_map="auto" places the weights on the
# available GPU(s), so there is no need to move the model manually.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

def model_output(content):
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": content},
    ]
    # Render the chat template and append the assistant header so the model
    # starts generating its reply.
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
    # Pass the full encoding (input_ids plus attention_mask) to generate.
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=2048,
        do_sample=True,
    )
    # Strip the prompt tokens so only the newly generated reply is decoded.
    generated_ids = [
        output_ids[len(input_ids):]
        for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    return response

prompt = "Hello! Please tell me something about diabetes."

response = model_output(prompt)
print(response)
```
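
The snippet above samples stochastically (`do_sample=True`), so answers vary between runs. If you need repeatable outputs, for example when benchmarking, a minimal variant is to switch the `model.generate` call inside `model_output` to greedy decoding (this setting is our suggestion, not one prescribed by the model card):

```python
# Deterministic variant: greedy decoding yields the same answer on every run.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=2048,
    do_sample=False,  # take the most likely token at each step
)
```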

## Citation

```bibtex
@misc{wei2024adaptedlargelanguagemodel,
      title={An adapted large language model facilitates multiple medical tasks in diabetes care},
      author={Lai Wei and Zhen Ying and Muyang He and Yutong Chen and Qian Yang and Yanzhe Hong and Jiaping Lu and Xiaoying Li and Weiran Huang and Ying Chen},
      year={2024},
      eprint={2409.13191},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.13191},
}
```