---
library_name: transformers
license: apache-2.0
datasets:
- skumar9/orpo-mmlu
tags:
- medical
---
# Model Card for Llama-medx_v2
This is a Llama-3-8B-family chat model fine-tuned from the base [`epfl-llm/meditron-7b`](https://huggingface.co/epfl-llm/meditron-7b) on the [Open Assistant (Guanaco) dataset](https://huggingface.co/datasets/mlabonne/guanaco-llama2) using SFT with [QLoRA](https://arxiv.org/abs/2305.14314).<br>
All linear layers were made trainable as LoRA targets, with a rank of 16.<br>
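For reference, a minimal sketch of such a QLoRA setup with `peft` and `bitsandbytes`. Only the rank of 16 and the base model come from this card; the quantization settings and `lora_alpha` are illustrative, and `target_modules="all-linear"` requires a recent `peft` release:
```python
# Hypothetical sketch of the QLoRA setup described above; only the rank
# is taken from this card, everything else is illustrative.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # QLoRA: 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "epfl-llm/meditron-7b", quantization_config=bnb_config, device_map="auto"
)
lora_config = LoraConfig(
    r=16,                                 # rank of 16, as stated above
    lora_alpha=32,                        # illustrative value
    target_modules="all-linear",          # adapt all linear layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()        # LoRA params only are trainable
```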
# Prompt template: Llama
```
'<s> [INST] <<SYS>>
You are a helpful, respectful and honest medical assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>> {question} [/INST] {model answer} </s>'
```
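To make the template concrete, a small helper that fills it for a single SFT example; the `system_prompt`, `question`, and `answer` strings are placeholders:
```python
# Fills the Llama-2-style template above for one training example.
def build_training_example(system_prompt: str, question: str, answer: str) -> str:
    # The assistant turn is closed with </s> so the model learns to stop.
    return (
        f"<s> [INST] <<SYS>>\n{system_prompt}\n<</SYS>> "
        f"{question} [/INST] {answer} </s>"
    )
```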
# Usage:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = 'skumar9/Llama-medx_v2'

# Load the model
base_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map={"": 0},
)
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, add_eos_token=True)
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.pad_token_id = 18610  # hard-coded pad token id
tokenizer.padding_side = "right"
default_system_prompt = """You are a helpful, respectful and honest medical assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. Please consider the context below if applicable:
Context: NA"""
# Format a question with the prompt template shown above
def format_prompt(question):
    return f'<s> [INST] <<SYS>>\n{default_system_prompt}\n<</SYS>> {question} [/INST]'
question = 'My father has a big white colour patch inside of his right cheek. Please suggest a reason.'

# Initialize the Hugging Face pipeline
pipe = pipeline(task="text-generation", model=base_model, tokenizer=tokenizer, max_length=512, repetition_penalty=1.1, return_full_text=False)
result = pipe(format_prompt(question))
answer = result[0]['generated_text']
print(answer)
```
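The `pipeline` call above can also be replaced by a direct `generate` call. A minimal sketch reusing `base_model`, `tokenizer`, `question`, and `format_prompt` from the snippet above; the generation settings are illustrative, not from this card:
```python
# Direct generation without the pipeline helper (illustrative settings).
inputs = tokenizer(format_prompt(question), return_tensors="pt").to(base_model.device)
output_ids = base_model.generate(
    **inputs,
    max_new_tokens=256,      # cap newly generated tokens instead of total length
    repetition_penalty=1.1,
)
# Strip the prompt tokens before decoding, mirroring return_full_text=False
answer = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```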