NeuronZero committed
Commit 43fd7a2 · verified · 1 parent: 4b0d140

Update README.md

Files changed (1): README.md (+40 -3)
README.md CHANGED
@@ -2,6 +2,7 @@
  tags:
  - autotrain
  - image-classification
+ - vision
  widget:
  - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
    example_title: Tiger
@@ -11,11 +12,23 @@ widget:
    example_title: Palace
  datasets:
  - mujammil131/eyeDiseasDdetectionModel
+ license: apache-2.0
+ pipeline_tag: image-classification
  ---
 
- # Model Trained Using AutoTrain
+ ## EyeDiseaseClassifier (small-sized model)
+
+ This model is a fine-tuned version of [BEiT-base-patch16](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k), trained on this [dataset](https://huggingface.co/datasets/mujammil131/eyeDiseasDdetectionModel).
+
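The checkpoint itself was produced with AutoTrain (per the tags), so the card does not include training code. As a rough illustration only, a comparable fine-tune of the BEiT backbone on this dataset could be set up with the standard `transformers` Trainer; the split name, the `image`/`label` column names, and the hyperparameters below are assumptions, not the original configuration:

```python
# Illustrative sketch only -- not the AutoTrain job that produced this checkpoint.
# Assumes the dataset exposes a "train" split with "image" and "label" columns.
import torch
from datasets import load_dataset
from transformers import (AutoImageProcessor, AutoModelForImageClassification,
                          Trainer, TrainingArguments)

ds = load_dataset("mujammil131/eyeDiseasDdetectionModel")
labels = ds["train"].features["label"].names  # assumes a ClassLabel feature

processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224-pt22k-ft22k")
model = AutoModelForImageClassification.from_pretrained(
    "microsoft/beit-base-patch16-224-pt22k-ft22k",
    num_labels=len(labels),
    id2label={i: name for i, name in enumerate(labels)},
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # replace the ImageNet-22k head with a fresh one
)

def to_pixel_values(batch):
    # Applied on the fly to each fetched batch of examples.
    batch["pixel_values"] = [
        processor(images=img.convert("RGB"), return_tensors="pt")["pixel_values"][0]
        for img in batch["image"]
    ]
    del batch["image"]
    return batch

def collate(examples):
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "labels": torch.tensor([ex["label"] for ex in examples]),
    }

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="eye-disease-beit",
                           per_device_train_batch_size=8,
                           num_train_epochs=3,
                           remove_unused_columns=False),
    train_dataset=ds["train"].with_transform(to_pixel_values),
    data_collator=collate,
)
trainer.train()
```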
+ ## Model description
+
+ The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
+ Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
+
+ Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
+
+ By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.
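As a concrete illustration of that last point, a minimal sketch of the mean-pooling variant is shown below. It uses the backbone checkpoint this model was fine-tuned from; the 4-class output size is a placeholder, not this model's actual head.

```python
# Sketch of the mean-pooling classification head described above.
# The 4-class output size is a placeholder, not this model's real head.
import torch
from torch import nn
from transformers import AutoImageProcessor, BeitModel

processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224-pt22k-ft22k")
backbone = BeitModel.from_pretrained("microsoft/beit-base-patch16-224-pt22k-ft22k")
head = nn.Linear(backbone.config.hidden_size, 4)  # placeholder class count

def classify(image):
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        states = backbone(**inputs).last_hidden_state  # (1, 1 + num_patches, hidden_size)
    pooled = states[:, 1:, :].mean(dim=1)              # mean-pool patch states, skipping [CLS]
    return head(pooled)                                # (1, 4) logits
```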
 
- - Problem type: Image Classification
 
  ## Validation Metrics
  loss: 0.1152728122006607
@@ -28,4 +41,28 @@ recall: 0.973109243697479
 
  auc: 0.9916270580630442
 
- accuracy: 0.9644607843137255
+ accuracy: 0.9644607843137255
+
+ ### How to use
+
+ Here is how to use this model to identify eye disease from an image of a patient's retina:
+
+ ```python
+ from transformers import AutoImageProcessor, AutoModelForImageClassification
+ from PIL import Image
+ import requests
+
+ processor = AutoImageProcessor.from_pretrained("NeuronZero/EyeDiseaseClassifier")
+ model = AutoModelForImageClassification.from_pretrained("NeuronZero/EyeDiseaseClassifier")
+
+ # dataset URL: "https://www.kaggle.com/code/abdallahwagih/eye-diseases-classification-acc-93-8"
+
+ image_url = "https://storage.googleapis.com/kagglesdsdata/datasets/2440665/4130910/dataset/glaucoma/1212_left.jpg?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=databundle-worker-v2%40kaggle-161607.iam.gserviceaccount.com%2F20240404%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20240404T083818Z&X-Goog-Expires=345600&X-Goog-SignedHeaders=host&X-Goog-Signature=1cff9e2a92a0c7c95480fe62cf34997660823af8e0191daac8d80cbe3bfc4cc719e64e6510a128eba87fa3753836214920bd44ba2ede9ba9991b0cc31f60813d9db185245055b72672016d7d2a4ff70bc4684d5756ac445aa3899d63998eee62067b7f4022697dd2baf9222f77b0b27e30f16310f3dafc3cc0249251006a4c48bf6d36ad37d7ca07b89c32f71482f71d62e6ae26b81af3678a9e3a76c1555d30e921cc721f6b72080c25d86d8a28b4f4b2530896a89dc00668c4f01ec960e5d4eb372c9aef85e85dd072d94178ef8fb4e494ae4cea4348717213954bdfa3f239c7cc92415bf4c6e01d497f479944d63844f1cd97d01c63c4651c0c7514ecfe4a"
+ image = Image.open(requests.get(image_url, stream=True).raw)
+
+ inputs = processor(images=image, return_tensors="pt")
+ outputs = model(**inputs)
+ logits = outputs.logits
+ predicted_class_idx = logits.argmax(-1).item()
+ print("Predicted class:", model.config.id2label[predicted_class_idx])
+ ```
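For quick checks, the same checkpoint can also be driven through the high-level `pipeline` API; the image path below is a placeholder for a local retina scan:

```python
# Same model via the high-level image-classification pipeline.
from transformers import pipeline

classifier = pipeline("image-classification", model="NeuronZero/EyeDiseaseClassifier")
print(classifier("retina_scan.jpg"))  # placeholder path; returns labels with scores
```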