Update README.md
tags:
- ocr
- YOLOv8m
pipeline_tag: object-detection
---

# License Plate Character Detection Model

This repository contains a YOLOv8-based model for detecting characters in license plates. The model identifies and localizes individual characters on vehicle license plates, making it useful for applications such as automated parking systems, traffic monitoring, and vehicle identification.

## Model Details

- **Model Architecture**: YOLOv8 (YOLOv8m)
- **Task**: Character detection in license plates
- **Performance Metrics**: Accuracy, Precision, Recall, mAP

## Visual Demonstration

![val_batch2_labels.jpg](https://cdn-uploads.huggingface.co/production/uploads/6537b44c01281b544234189c/3IJuAynR7Mg3bHISgeCgZ.jpeg)

This image demonstrates the model's ability to detect and localize individual characters on a license plate. The bounding boxes show the detected characters.

## Installation

To use this model, you'll need to have Python installed along with the following dependencies:

```
pip install ultralytics
pip install torch
pip install huggingface_hub
```
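
Since the weights are hosted on the Hugging Face Hub, you can fetch them with `huggingface_hub` rather than hard-coding a local path. A minimal sketch; the `repo_id` and weights filename below are placeholders, not this repository's confirmed values:

```python
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

# Download the trained weights from the Hub (cached locally after the first call).
# Replace repo_id and filename with this repository's actual values.
weights_path = hf_hub_download(
    repo_id="your-username/license-plate-char-detection",  # placeholder
    filename="best.pt",                                    # placeholder
)

model = YOLO(weights_path)
```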

## Usage

Here's a basic example of how to use the model:

```python
from ultralytics import YOLO
import cv2

# Load your trained YOLOv8 model
model = YOLO('path/to/your/best.pt')

# Read the image
image = cv2.imread('path/to/your/image.jpg')

# Run inference on the image
results = model(image)

# Process the results
for result in results:
    boxes = result.boxes.cpu().numpy()  # Get bounding boxes
    for box in boxes:
        # Get integer box coordinates: (x1, y1) top-left, (x2, y2) bottom-right
        x1, y1, x2, y2 = map(int, box.xyxy[0])

        # Draw the bounding box
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)

        # Label the box with the predicted character and its confidence
        if box.cls is not None:
            label = f"{result.names[int(box.cls[0])]} {box.conf[0]:.2f}"
            cv2.putText(image, label, (x1, y1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)

# Save the result
cv2.imwrite('output_image.jpg', image)
# Or display it (if running in an environment with a GUI):
# cv2.imshow('Result', image)
# cv2.waitKey(0)
# cv2.destroyAllWindows()
```
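
Since the model emits one box per character, you can recover the full plate string by reading detections left to right. A minimal sketch, assuming a single-row plate (multi-row plates would first need grouping by vertical position):

```python
def plate_text(result):
    """Reconstruct the plate string from one ultralytics Results object."""
    boxes = result.boxes.cpu().numpy()
    # Sort detections by the left edge (x1) of each box.
    order = boxes.xyxy[:, 0].argsort()
    return "".join(result.names[int(boxes.cls[i])] for i in order)

# Example: print(plate_text(results[0]))
```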

## Training

If you want to train the model on your own dataset:

1. Prepare your dataset in the appropriate format for YOLOv8.
2. Use the YOLOv8 training script with your custom configuration, as sketched below.
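
With the Ultralytics API this is a short script. A minimal sketch, assuming your dataset is described by a `data.yaml` file in YOLO format (the filename is a placeholder):

```python
from ultralytics import YOLO

# Start from pretrained YOLOv8m weights and fine-tune on the character dataset.
# 'data.yaml' is a placeholder for your dataset config (image paths, class names).
model = YOLO('yolov8m.pt')
model.train(data='data.yaml', epochs=100, imgsz=640)
```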

## Model Performance

### Accuracy

Our model achieves an overall accuracy of 97.12% on the test set. Here's a per-character breakdown:

![labels.jpg](https://cdn-uploads.huggingface.co/production/uploads/6537b44c01281b544234189c/EegpnQl3Fn9UO48Z242Gq.jpeg)

### Confusion Matrix

Below is the confusion matrix for our model, showing its performance across all characters:

![confusion_matrix.png](https://cdn-uploads.huggingface.co/production/uploads/6537b44c01281b544234189c/_IyyLB2_9W8drXRi5UZ_h.png)

This matrix shows which characters are most often confused with one another, pointing to areas for potential improvement.

### Additional Metrics

![results.png](https://cdn-uploads.huggingface.co/production/uploads/6537b44c01281b544234189c/YLmzwlKgSN_qZ5Ix_SvgP.png)

- Precision: 99.3%
- Recall: 93.45%
- mAP (mean Average Precision): 97.544%
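
Metrics like these are typically produced by YOLOv8's built-in evaluation. A minimal sketch for reproducing them on your own validation split; `data.yaml` is again a placeholder for your dataset config:

```python
from ultralytics import YOLO

# Evaluate the trained weights on the validation split defined in data.yaml.
model = YOLO('path/to/your/best.pt')
metrics = model.val(data='data.yaml')
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
```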

For any questions or feedback, please open an issue in this repository.