ucsahin committed
Commit f59e589
1 Parent(s): 6dd145f

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -24,9 +24,9 @@ language:
 - tr
 ---
 
-This dataset is combined and deduplicated version of (coco-2014)[https://huggingface.co/datasets/detection-datasets/coco] and (coco-2017)[https://huggingface.co/datasets/rafaelpadilla/coco2017] datasets for object detection. The labels are in Turkish and the dataset is in an instruction-tuning format with separate columns for prompts and completion labels.
+This dataset is a combined and deduplicated version of the [coco-2014](https://huggingface.co/datasets/detection-datasets/coco) and [coco-2017](https://huggingface.co/datasets/rafaelpadilla/coco2017) datasets for object detection. The labels are in Turkish and the dataset is in an instruction-tuning format with separate columns for prompts and completion labels.
 
-For the bounding boxes, a similar annotation scheme to that of (PaliGemma)[https://huggingface.co/blog/paligemma#Detection] annotation is used. That is,
+For the bounding boxes, an annotation scheme similar to that of [PaliGemma](https://huggingface.co/blog/paligemma#Detection) is used. That is,
 ```
 The bounding box coordinates are in the form of special <loc[value]> tokens, where value is a number that represents a normalized coordinate. Each detection is represented by four location coordinates in the order x_min(left), y_min(top), x_max(right), y_max(bottom), followed by the label that was detected in that box. To convert values to coordinates, you first need to divide the numbers by 1024, then multiply y by the image height and x by its width. This will give you the coordinates of the bounding boxes, relative to the original image size.
 ```
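The decoding rule quoted from the README (divide each `<loc[value]>` number by 1024, scale x by image width and y by image height, in the order x_min, y_min, x_max, y_max) can be sketched in Python. This is an illustrative helper, not part of the dataset's tooling; the function name, the regex, and the sample label `kedi` are assumptions for the example:

```python
import re

def decode_detection(det: str, img_w: int, img_h: int):
    """Parse '<loc....><loc....><loc....><loc....> label' into a pixel box and label.

    Follows the scheme stated in the README: four normalized values in the
    order x_min, y_min, x_max, y_max, each divided by 1024 and scaled by the
    image width (x) or height (y).
    """
    m = re.match(r"((?:<loc\d+>){4})\s*(.+)", det)
    if m is None:
        raise ValueError(f"not a valid detection string: {det!r}")
    x_min, y_min, x_max, y_max = (int(v) for v in re.findall(r"<loc(\d+)>", m.group(1)))
    box = (
        x_min / 1024 * img_w,
        y_min / 1024 * img_h,
        x_max / 1024 * img_w,
        y_max / 1024 * img_h,
    )
    return box, m.group(2)

# Example on a 640x480 image ('kedi' is Turkish for 'cat'):
box, label = decode_detection("<loc0256><loc0512><loc0768><loc1000> kedi", 640, 480)
print(box, label)  # (160.0, 240.0, 480.0, 468.75) kedi
```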