bilal01 committed on
Commit
600d1e5
1 Parent(s): 1bd454e

Update README.md

Files changed (1)
1. README.md +35 -4
README.md CHANGED
@@ -7,6 +7,9 @@ tags:
model-index:
- name: segformer-b0-finetuned-segments-stamp-verification
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -27,18 +30,46 @@ It achieves the following results on the evaluation set:

## Model description

- More information needed

## Intended uses & limitations

- More information needed

## Training and evaluation data

- More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
@@ -85,4 +116,4 @@ The following hyperparameters were used during training:
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- - Tokenizers 0.13.3
 
model-index:
- name: segformer-b0-finetuned-segments-stamp-verification
  results: []
+ metrics:
+ - code_eval
+ - accuracy
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
 
## Model description

+ StampSegNet is a semantic segmentation model fine-tuned on a custom dataset for stamp segmentation. It is a SegFormer (nvidia/mit-b0) checkpoint fine-tuned with the Hugging Face Transformers library to segment stamps from images accurately and efficiently.
+
+ The model classifies each pixel of an image as belonging either to a stamp or to the background. By leveraging stamp-specific features such as intricate designs, borders, and distinct colors, it produces pixel-level segmentation maps that trace the exact boundaries of stamps within an image.
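
As a minimal inference sketch (the `bilal01/segformer-b0-finetuned-segments-stamp-verification` repo id and the 0 = background / 1 = stamp label mapping are assumptions, not stated in this card), the checkpoint can be run with the Transformers SegFormer classes:

```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

# Assumed repo id; point this at wherever the fine-tuned checkpoint is hosted.
CHECKPOINT = "bilal01/segformer-b0-finetuned-segments-stamp-verification"

processor = SegformerImageProcessor.from_pretrained(CHECKPOINT)
model = SegformerForSemanticSegmentation.from_pretrained(CHECKPOINT)
model.eval()

image = Image.open("scanned_document.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample the low-resolution logits to the input size and take the per-pixel
# argmax to obtain a class-id mask (assumed: 0 = background, 1 = stamp).
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
stamp_mask = upsampled.argmax(dim=1)[0]
```

The resulting mask can then be used to crop, count, or highlight stamps in the scanned document.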

## Intended uses & limitations

+ Stamp Collection Management: stamp collectors and enthusiasts can use StampSegNet to automatically segment stamps from images. It simplifies organizing and cataloging collections by accurately identifying and isolating stamps, saving time and effort.
+
+ E-commerce Platforms: online marketplaces and auction platforms catering to stamp sellers and buyers can integrate StampSegNet to enhance the user experience. Sellers upload images of stamps, and the model automatically extracts and displays the segmented stamps, facilitating search, categorization, and valuation for potential buyers.
+
+ While StampSegNet performs well on stamp segmentation, it may struggle with heavily damaged or obscured stamps, unusual stamp shapes, or images with poor lighting. As with any AI model, biases present in the training data can influence the segmentation results, so careful evaluation and mitigation of any ethical implications is advised.

## Training and evaluation data

+ The dataset was taken from Kaggle: [Stamp Verification (StaVer) dataset](https://www.kaggle.com/datasets/rtatman/stamp-verification-staver-dataset).
+
+ We used 60 samples from it and annotated them with pixel-level masks on Segments.ai; a loading sketch is shown below.
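
As a rough sketch of how the annotated release could be pulled from Segments.ai and split (the API key, dataset identifier, and release name are placeholders, and the `segments-ai` SDK's Hugging Face helper is assumed):

```python
from segments import SegmentsClient
from segments.huggingface import release2dataset

# Placeholder credentials and identifiers; substitute your own Segments.ai values.
client = SegmentsClient("YOUR_SEGMENTS_API_KEY")
release = client.get_release("your-username/stamp-verification", "v1.0")

# Convert the Segments.ai release into a Hugging Face `datasets.Dataset`
# (image + label bitmap columns), then split the 60 annotated samples.
ds = release2dataset(release)
splits = ds.train_test_split(test_size=0.2, seed=42)
train_ds, valid_ds = splits["train"], splits["test"]
```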

## Training procedure

+ Data Collection and Preparation:
+
+ - Collect a diverse set of stamp images together with pixel-level annotations marking the stamp regions.
+ - Ensure the dataset covers a wide variety of stamp designs, sizes, colors, backgrounds, and lighting conditions.
+ - Split the dataset into training and validation sets.
+
+ Model Selection and Configuration:
+
+ - Choose a semantic segmentation model architecture suitable for stamp segmentation.
+ - We used [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) as the pretrained model and fine-tuned it on the annotated stamp data.
+ - Configure the model and the necessary hyperparameters, such as learning rate, batch size, and optimizer.
+
+ Training (a condensed sketch follows this list):
+
+ - Train the model on the labeled stamp dataset, starting from the pretrained weights.
+ - Use a loss function suited to semantic segmentation, such as cross-entropy loss or Dice loss.
+ - Update the model's parameters with mini-batch stochastic gradient descent (SGD) or an optimizer such as Adam.
+ - Monitor training progress with metrics such as pixel accuracy, mean Intersection over Union (IoU), or F1 score.
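
A condensed fine-tuning sketch with the Transformers `Trainer` is shown below. The label mapping, column names, and hyperparameter values are illustrative placeholders; the actual values used are listed under "Training hyperparameters".

```python
from transformers import (
    SegformerForSemanticSegmentation,
    SegformerImageProcessor,
    Trainer,
    TrainingArguments,
)

# Illustrative two-class setup: background vs. stamp (label ids are an assumption).
id2label = {0: "background", 1: "stamp"}
label2id = {name: idx for idx, name in id2label.items()}

processor = SegformerImageProcessor()
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",
    id2label=id2label,
    label2id=label2id,
)

def transforms(batch):
    # "image" and "mask" are placeholder column names for the annotated dataset;
    # the processor resizes and normalizes the images and pairs them with masks.
    return processor(images=batch["image"], segmentation_maps=batch["mask"])

# `train_ds` / `valid_ds` come from the data-loading sketch above.
train_ds.set_transform(transforms)
valid_ds.set_transform(transforms)

args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-stamp-verification",
    learning_rate=6e-5,             # placeholder values; see the
    per_device_train_batch_size=2,  # hyperparameters listed below
    num_train_epochs=50,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    remove_unused_columns=False,    # keep raw columns available to the transform
)

# SegformerForSemanticSegmentation computes a pixel-wise cross-entropy loss
# internally, so no custom loss function is needed for this sketch.
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=valid_ds)
trainer.train()
```

A `compute_metrics` function (for example, wrapping `evaluate.load("mean_iou")`) can be passed to the `Trainer` to track mean IoU during evaluation, as described in the list above.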
  ### Training hyperparameters

The following hyperparameters were used during training:

- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
+ - Tokenizers 0.13.3