# -*- coding: utf-8 -*-
"""Deploy Barcelo demo.ipynb

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/1FxaL8DcYgvjPrWfWruSA5hvk3J81zLY9

![   ](https://www.vicentelopez.gov.ar/assets/images/logo-mvl.png)

# Model

YOLOv5 is a family of compound-scaled object detection models trained on the COCO dataset, and includes simple functionality for Test Time Augmentation (TTA), model ensembling, hyperparameter evolution, and export to ONNX, CoreML and TFLite (a short TTA sketch appears after the model is loaded in the code below).


## Gradio Inference

![](https://i.ibb.co/982NS6m/header.png)

This notebook can optionally be accelerated with a GPU runtime (a quick availability check is printed right after the imports below).


----------------------------------------------------------------------

 YOLOv5 Gradio demo

*Author: Ultralytics LLC and Gradio*

# Code
"""

#!pip install -qr https://raw.githubusercontent.com/ultralytics/yolov5/master/requirements.txt gradio # install dependencies

import gradio as gr
import torch
from PIL import Image
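# The header note mentions that this notebook can optionally run on a GPU; a quick
# availability check (a small addition, not part of the original demo) makes a silent
# CPU fallback easy to spot.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f'Running inference on: {device}')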

# Images
torch.hub.download_url_to_file('https://i.pinimg.com/originals/7f/5e/96/7f5e9657c08aae4bcd8bc8b0dcff720e.jpg', 'ejemplo1.jpg')
torch.hub.download_url_to_file('https://i.pinimg.com/originals/c2/ce/e0/c2cee05624d5477ffcf2d34ca77b47d1.jpg', 'ejemplo2.jpg')

# Model
#model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # force_reload=True to update

model = torch.hub.load('ultralytics/yolov5', 'custom', path='./best.pt')  # local custom weights (works locally or in Google Colab)
#model = torch.hub.load('path/to/yolov5', 'custom', path='/content/yolov56.pt', source='local')  # local repo
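
# The model overview above mentions Test Time Augmentation (TTA). With the PyTorch Hub
# AutoShape wrapper, TTA can be requested per call through the `augment` flag; a minimal
# sketch, left commented out so the deployed demo's behaviour is unchanged:
# results_tta = model(Image.open('ejemplo1.jpg'), size=640, augment=True)
# results_tta.print()  # summary of detections (classes, counts, speed)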

def yolo(im, size=640):
    g = (size / max(im.size))  # gain
    im = im.resize(tuple(int(x * g) for x in im.size), Image.LANCZOS)  # resize (Image.ANTIALIAS was removed in Pillow 10)

    results = model(im)  # inference
    results.render()  # updates results.imgs with boxes and labels
    return Image.fromarray(results.imgs[0])
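
# Optional sanity check of the pipeline outside Gradio, using one of the example images
# downloaded above (a sketch; uncomment to run):
# yolo(Image.open('ejemplo1.jpg')).save('resultado1.jpg')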


inputs = gr.Image(type='pil', label='Original image')
outputs = gr.Image(type='pil', label='Result')

title = 'Trampas Barceló'

description = 'System developed by the Subsecretaría de Innovación of the Municipio de Vicente López'

article = "<p style='text-align: center'>YOLOv5 is a family of compound-scaled object detection models trained on the COCO dataset, and includes " \
          "simple functionality for Test Time Augmentation (TTA), model ensembling, hyperparameter evolution, " \
          "and export to ONNX, CoreML and TFLite. <a href='https://colab.research.google.com/drive/1fbeB71yD09WK2JG9P3Ladu9MEzQ2rQad?usp=sharing'>Source code</a> |" \
          "<a href='https://colab.research.google.com/drive/1FxaL8DcYgvjPrWfWruSA5hvk3J81zLY9?usp=sharing'>Colab Deploy</a> | <a href='https://github.com/ultralytics/yolov5'>PyTorch Hub</a></p>"
          
examples = [['ejemplo1.jpg'], ['ejemplo2.jpg']]
gr.Interface(yolo, inputs, outputs, title=title, description=description, article=article, examples=examples, analytics_enabled=False).launch(
    debug=True)

"""For YOLOv5 PyTorch Hub inference with **PIL**, **OpenCV**, **Numpy** or **PyTorch** inputs please see the full [YOLOv5 PyTorch Hub Tutorial](https://github.com/ultralytics/yolov5/issues/36).


## Citation

[![DOI](https://zenodo.org/badge/264818686.svg)](https://zenodo.org/badge/latestdoi/264818686)
"""