---
license: mit
---

# DOLG in torch and tensorflow (TF2)

Unofficial re-implementation of the paper "DOLG: Single-Stage Image Retrieval with Deep Orthogonal Fusion of Local and Global Features", accepted at ICCV 2021.
[paper](https://arxiv.org/pdf/2108.02927.pdf)

The PyTorch checkpoint has been converted to TensorFlow format (.h5) from the official repository: https://github.com/feymanpriv/DOLG

## Installation

> pip install opencv-python==4.5.5.64

> pip install huggingface-hub

To install dolg:

> pip install dolg

OR

> pip install -e .
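
The `pretrained` argument in the Inference snippets below presumably resolves the converted weights for you (huggingface-hub is a dependency). If you prefer to fetch a converted `.h5` checkpoint by hand, here is a minimal sketch with `huggingface_hub` — the `repo_id` and `filename` are placeholders, not the actual ones:

```python
from huggingface_hub import hf_hub_download

# Placeholder repo_id/filename: substitute the Hub repo that actually hosts
# the converted DOLG weights and the checkpoint file name.
ckpt_path = hf_hub_download(repo_id="user/dolg-tf2", filename="dolg_r50.h5")
print(ckpt_path)  # local cache path of the downloaded checkpoint
```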

## Inference

To run inference on a single sample, you can use the Python scripts in the examples/ folder, or proceed as follows:

```python
import dolg
import numpy as np
from dolg.utils.extraction import process_data

depth = 50

# for pytorch

import torch
from dolg.dolg_model_pt import DOLG
from dolg.resnet_pt import ResNet

# Build the ResNet backbone and the DOLG head; pretrained weights are
# resolved from the `pretrained` tag (here "r50").
backbone = ResNet(depth=depth, num_groups=1, width_per_group=64, bn_eps=1e-5,
                  bn_mom=0.1, trans_fun="bottleneck_transform")
model = DOLG(backbone, s4_dim=2048, s3_dim=1024, s2_dim=512, head_reduction_dim=512,
             with_ma=False, num_classes=None, pretrained=f"r{depth}")
img = process_data("image.jpg", "", mode="pt").unsqueeze(0)  # add batch dimension

with torch.no_grad():
    output = model(img)
print(output)

# for tensorflow

import tensorflow as tf
from dolg.dolg_model_tf2 import DOLG
from dolg.resnet_tf2 import ResNet

backbone = ResNet(depth=depth, num_groups=1, width_per_group=64, bn_eps=1e-5,
                  bn_mom=0.1, trans_fun="bottleneck_transform", name="globalmodel")
model = DOLG(backbone, s4_dim=2048, s3_dim=1024, s2_dim=512, head_reduction_dim=512,
             with_ma=False, num_classes=None, pretrained=f"r{depth}")
img = process_data("image.jpg", "", mode="tf")
img = np.expand_dims(img, axis=0)  # add batch dimension
output = model.predict(img)
print(output)
```
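
DOLG produces a single global descriptor per image, so retrieval comes down to comparing descriptors. Here is a minimal sketch of scoring two images with cosine similarity, reusing the PyTorch `model` built above — it assumes the output is a 1-D descriptor per image, so check the actual output structure of this implementation:

```python
import numpy as np
import torch
from dolg.utils.extraction import process_data

def embed(path, model):
    # Turn an image path into a 1-D numpy descriptor (assumed output shape: (1, dim)).
    img = process_data(path, "", mode="pt").unsqueeze(0)
    with torch.no_grad():
        desc = model(img)
    return desc.squeeze(0).numpy()

def cosine_similarity(a, b):
    # Compare two global descriptors after L2-normalization.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

score = cosine_similarity(embed("query.jpg", model), embed("index.jpg", model))
print(score)  # closer to 1.0 means more likely the same landmark
```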

## Data

The model was trained on Google Landmarks v2. You can find the dataset in the official repository: https://github.com/cvdfoundation/google-landmark
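
For retrieval over a dataset like this, you would typically pre-compute a descriptor for every index image once. A hedged sketch using the TensorFlow `model` from the Inference section — the folder layout and the `(1, dim)` output shape are assumptions:

```python
import glob
import numpy as np
from dolg.utils.extraction import process_data

# Hypothetical folder of index images; `model` is the TF2 DOLG model built above.
paths = sorted(glob.glob("index_images/*.jpg"))
descriptors = []
for path in paths:
    img = np.expand_dims(process_data(path, "", mode="tf"), axis=0)
    descriptors.append(model.predict(img).squeeze(0))

index = np.stack(descriptors)  # shape: (num_images, dim)
index /= np.linalg.norm(index, axis=1, keepdims=True)  # L2-normalize for cosine search
np.save("index.npy", index)
```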

## Citation

```bibtex
@misc{yang2021dolg,
  title={DOLG: Single-Stage Image Retrieval with Deep Orthogonal Fusion of Local and Global Features},
  author={Min Yang and Dongliang He and Miao Fan and Baorong Shi and Xuetong Xue and Fu Li and Errui Ding and Jizhou Huang},
  year={2021},
  eprint={2108.02927},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

@misc{weyand2020google,
  title={Google Landmarks Dataset v2 -- A Large-Scale Benchmark for Instance-Level Recognition and Retrieval},
  author={Weyand, Tobias and Araujo, Andre and Cao, Bingyi and Sim, Jack},
  year={2020},
  eprint={2004.01804},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  doi={10.48550/arXiv.2004.01804},
  url={https://arxiv.org/abs/2004.01804}
}
```