import gradio as gr
from fastai.vision.all import *
from efficientnet_pytorch import EfficientNet  # ensures the EfficientNet class is importable so the exported learner can be unpickled below
title = "COVID-19 Infection Detection App!"
head = (
    "<body>"
    "<center>"
    "<img src='/file=Gradcam.png' width=200>"
    "<h2>"
    "This Space demonstrates a model based on EfficientNet. The model was trained to classify chest X-ray images. To test it, "
    "</h2>"
    "<h3>"
    "use the example images provided or upload your own X-ray images in the space provided."
    "</h3>"
    "<h3>"
    "!!!PLEASE NOTE: THE MODEL WAS TRAINED AND VALIDATED USING PNG FILES!!!"
    "</h3>"
    "</center>"
    "<p>"
    "The model is trained on the <a href='https://www.kaggle.com/datasets/anasmohammedtahir/covidqu'>anasmohammedtahir/covidqu</a> dataset."
    "</p>"
    "<p>"
    "Researchers at Qatar University have compiled the COVID-QU-Ex dataset, which consists of 33,920 chest X-ray (CXR) images, including:"
    "</p>"
    "<ul>"
    "<li>"
    "11,956 COVID-19"
    "</li>"
    "<li>"
    "11,263 Non-COVID infections (Viral or Bacterial Pneumonia)"
    "</li>"
    "<li>"
    "10,701 Normal"
    "</li>"
    "</ul>"
    "<p>"
    "Ground-truth lung segmentation masks are provided for the entire dataset. This is the largest lung mask dataset ever created."
    "</p>"
    "</body>"
)
description = head
examples = [
    ['covid/covid_1038.png'], ['covid/covid_1034.png'],
    ['covid/cd.png'], ['covid/covid_1021.png'],
    ['covid/covid_1027.png'], ['covid/covid_1042.png'],
    ['covid/covid_1031.png'],
]
# Earlier model checkpoint, superseded by the learner loaded below:
#learn = load_learner('model/predictcovidfastaifinal18102023.pkl')
learn = load_learner('model/final_20102023_eb7_model.pkl')
categories = learn.dls.vocab
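# `categories` holds the class labels from the learner's DataLoaders vocab; for the
# COVID-QU-Ex training described above these should correspond to the COVID-19,
# Non-COVID and Normal classes (the exact label strings depend on the exported learner).
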
def predict_image(get_image):
    # Run inference with the fastai learner and return a
    # {class label: probability} mapping for the gr.Label output.
    pred, idx, probs = learn.predict(get_image)
    return dict(zip(categories, map(float, probs)))
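
# Optional local sanity check (a sketch, assuming the bundled example files above
# sit alongside this script): run one example image through the learner directly,
# outside Gradio. Uncomment to try it when running this file by hand:
# print(predict_image(PILImage.create('covid/covid_1038.png')))
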
interpretation = "default"
enable_queue = True

# Gradio interface: a chest X-ray image in, the top-3 class probabilities out.
gr.Interface(fn=predict_image, inputs=gr.Image(shape=(224, 224)),
             outputs=gr.Label(num_top_classes=3), title=title, description=description,
             examples=examples, interpretation=interpretation,
             enable_queue=enable_queue).launch(share=False)
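
# The `shape`, `interpretation` and `enable_queue` arguments above are Gradio 3.x
# features and were removed in Gradio 4+. A rough sketch of an equivalent launch,
# assuming Gradio 4+ (resizing is left to the learner's own transforms, queuing is
# enabled via .queue()):
#
# gr.Interface(
#     fn=predict_image,
#     inputs=gr.Image(type="pil"),
#     outputs=gr.Label(num_top_classes=3),
#     title=title,
#     description=description,
#     examples=examples,
# ).queue().launch(share=False)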