---
language: en
license: apache-2.0
pipeline_tag: text-generation
base_model: t5-small
library_name: transformers

widget:
- text: "A 35-year-old female presents with a 2-week history of persistent cough..."
---

# Medical Generation Model

## Overview

This repository contains a fine-tuned T5 model designed to generate medical diagnoses and treatment recommendations. The model was trained on clinical scenarios to provide contextually relevant medical outputs based on input prompts.

## Model Details

- **Model Type**: T5
- **Model Size**: small
- **Tokenizer**: T5 tokenizer
- **Training Data**: Clinical scenarios and medical texts

## Installation

To use this model, install the required libraries with `pip`:

```bash
pip install transformers
pip install tensorflow
```

## Usage

Load the fine-tuned model and tokenizer, then generate a diagnosis for a clinical prompt:

```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration

# Load the fine-tuned model and tokenizer
model_id = "Ra-Is/medical-gen-small"
model = TFT5ForConditionalGeneration.from_pretrained(model_id)
tokenizer = T5Tokenizer.from_pretrained(model_id)

# Prepare a sample input prompt
input_prompt = ("A 35-year-old female presents with a 2-week history of "
                "persistent cough, shortness of breath, and fatigue. She has "
                "a history of asthma and has recently been exposed to a sick "
                "family member with a respiratory infection. Chest X-ray shows "
                "bilateral infiltrates. What is the likely diagnosis, and what "
                "should be the treatment?")

# Tokenize the input
input_ids = tokenizer(input_prompt, return_tensors="tf").input_ids

# Generate the output (diagnosis)
outputs = model.generate(
    input_ids,
    max_length=512,
    num_beams=5,
    temperature=1.0,
    top_k=50,
    top_p=0.9,
    do_sample=True,  # enable sampling
    early_stopping=True,
)

# Decode and print the output
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
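
For intuition, the `top_k` and `top_p` arguments passed to `generate` restrict sampling to the `k` most probable tokens and then to the smallest "nucleus" of tokens whose cumulative probability exceeds `p`. A minimal NumPy sketch of that filtering step (an illustration only, not transformers' actual implementation):

```python
import numpy as np

def top_k_top_p_filter(probs, k=50, p=0.9):
    """Zero out tokens outside the top-k, then outside the smallest
    nucleus whose cumulative probability exceeds p, and renormalize."""
    probs = np.asarray(probs, dtype=float)
    keep = np.zeros_like(probs, dtype=bool)
    order = np.argsort(probs)[::-1]      # token ids, most probable first
    cumulative = np.cumsum(probs[order])
    for rank, token in enumerate(order):
        if rank >= k:                    # outside the top-k
            break
        keep[token] = True
        if cumulative[rank] > p:         # nucleus mass reached
            break
    filtered = np.where(keep, probs, 0.0)
    return filtered / filtered.sum()

# Toy 4-token vocabulary: the least likely token is filtered out,
# and the remaining probabilities are renormalized
filtered = top_k_top_p_filter([0.5, 0.3, 0.15, 0.05], k=3, p=0.9)
```

Lower `top_p` or `top_k` values make generation more conservative; `do_sample=True` is what activates this sampling path at all.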