
Disease Prognosis and Precautions Text2Text Generation

Welcome to the Disease Prognosis and Precautions Text2Text Generation repository! This model is a fine-tuned version of microsoft/GODEL-v1_1-large-seq2seq, designed to generate responses describing a likely disease prognosis and recommended precautions from a given set of symptoms.

Model Overview

The model in this repository is a text-to-text generation model. It takes a prompt in the form of symptoms related to a particular disease and generates a response that includes the potential disease prognosis along with recommended precautions. The columns used in the training dataset are:

  • Disease: The name of the disease related to the symptoms.
  • Symptoms: The list of symptoms provided in the prompt.
  • Precautions: The recommended precautions for the identified disease.
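As an illustration, a single training row with these columns can be flattened into a seq2seq input/target text pair. The field names and formatting below are hypothetical, since the exact preprocessing used for fine-tuning is not documented here:

```python
def row_to_pair(row):
    """Flatten a hypothetical dataset row into an (input, target) text pair."""
    # Input: symptoms phrased the way the example prompts are phrased
    source = "I am feeling " + ", ".join(row["symptoms"])
    # Target: disease name plus joined precautions, mirroring the example responses
    target = (
        "Seems like " + row["disease"] + ". "
        "You should " + " and ".join(row["precautions"]) + "."
    )
    return source, target

row = {
    "disease": "allergy",
    "symptoms": ["continuous sneezing", "shivering", "chills"],
    "precautions": ["avoid dust", "avoid air pollution"],
}
src, tgt = row_to_pair(row)
print(src)  # I am feeling continuous sneezing, shivering, chills
print(tgt)  # Seems like allergy. You should avoid dust and avoid air pollution.
```
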

Examples

Here are some examples of how you can use the model:

Example 1

Prompt: "I am feeling continuous sneezing, shivering and chills" Response: "Seems like allergy. You should try to avoid dust and air pollution."

Example 2

Prompt: "I am feeling itching, skin rash and patches" Response: "Seems like fungal infection. You should bathe twice a day and use antifungal soap."
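Both example responses follow a "Seems like X. You should Y." pattern, so a small helper can split a response into the suspected condition and the precaution text. This parser is a sketch based only on the two examples above; the response format is an assumption, not a guaranteed model output:

```python
import re

def parse_response(response):
    """Split a 'Seems like X. You should Y.' response into its two parts.

    Returns (condition, precaution), or (None, None) if the assumed
    pattern does not match.
    """
    match = re.match(r"Seems like (.+?)\.\s*You should (.+?)\.?$", response)
    if not match:
        return None, None
    return match.group(1), match.group(2)

condition, precaution = parse_response(
    "Seems like fungal infection. You should bathe twice a day and use antifungal soap."
)
print(condition)   # fungal infection
print(precaution)  # bathe twice a day and use antifungal soap
```
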

How to Use

To use the model for generating disease prognosis and precautions based on symptoms, you can use the generate function provided by the Hugging Face Transformers library. Here's a basic example using Python:

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the model and tokenizer
model_name = "shanover/medbot_godel_v3"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use a GPU if one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Define your symptom prompt
prompt = "I am feeling continuous sneezing, shivering and chills"

def generate_response(input_text, model, tokenizer, max_length=128):
    # Tokenize the prompt and move it to the same device as the model
    input_ids = tokenizer.encode(input_text, return_tensors="pt", max_length=max_length, truncation=True)
    input_ids = input_ids.to(device)

    # Inference only, so no gradients are needed
    with torch.no_grad():
        output_ids = model.generate(input_ids, max_length=max_length)

    generated_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return generated_text

print(generate_response(prompt, model, tokenizer))

Remember to replace "shanover/medbot_godel_v3" with the actual name or path of the model you've downloaded or fine-tuned.

Acknowledgments

This model was fine-tuned from microsoft/GODEL-v1_1-large-seq2seq: https://huggingface.co/microsoft/GODEL-v1_1-large-seq2seq

Issues and Contributions

If you encounter any issues while using the model or have suggestions for improvements, please feel free to open an issue in this repository. Contributions are also welcome!

Disclaimer

Please note that the information generated by the model is for informational purposes only and should not be considered a substitute for professional medical advice. Always consult a medical professional for accurate diagnoses and treatments.

Thank you for using the Disease Prognosis and Precautions Text2Text Generation model! We hope it proves to be a helpful tool.
