This repository contains our fine-tuned weights for Bloomz-7b1-mt, trained with Low-Rank Adaptation (LoRA) on the chatdoctor-200k dataset from the paper ChatDoctor: A Medical Chat Model Fine-tuned on LLaMA Model using Medical Domain Knowledge (https://arxiv.org/pdf/2303.14070.pdf). Our source code is available at https://github.com/linhduongtuan/doctorwithbloom.
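
Below is a minimal usage sketch for loading the LoRA adapter on top of the Bloomz-7b1-mt base model with `transformers` and `peft`. The adapter repo ID shown is a placeholder assumption, not confirmed by this card; substitute the actual Hugging Face repo ID for these weights.

```python
# Minimal sketch: load the Bloomz-7b1-mt base model and apply the LoRA adapter.
# NOTE: "your-username/doctorwithbloom-lora" is a placeholder, not the confirmed repo ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "bigscience/bloomz-7b1-mt"            # base model named in this card
adapter_id = "your-username/doctorwithbloom-lora"     # placeholder: replace with the actual adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the fine-tuned LoRA weights to the base model
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "Patient: I have had a persistent cough for two weeks. What should I do?\nDoctor:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```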