Adapting README content
README.md

---

## Model name: mobilenet_v3_small_100_224

## Description adapted from [TFHub](https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/classification/5)

# Overview

MobileNet V3 comes in Small and Large variants, each available at several depth multipliers. For a quick comparison between these variants, please refer to the following table:

|Small|1.0|67.5|15.8|19.4|14.4|
|Small|0.75|65.4|12.8|15.9|11.6|

This model uses the TF-Slim implementation of [`mobilenet_v3`](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v3.py) as a small network with a depth multiplier of 1.0.

The model contains a trained instance of the network, packaged to do the [image classification](https://www.tensorflow.org/hub/common_signatures/images#classification) that the network was trained on. If you merely want to transform images into feature vectors, use [`google/imagenet/mobilenet_v3_small_100_224/feature_vector/5`](https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/feature_vector/5) instead, and save the space occupied by the classification layer.
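
As an illustration of that alternative, a minimal sketch that loads the feature-vector variant directly from TF Hub (using the `tensorflow_hub` library and the handle linked above) could look like this:

```
import tensorflow_hub as hub

# Loads the feature-vector variant of the same network; it returns image
# feature vectors instead of the 1001-way classification logits.
features_layer = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/feature_vector/5"
)
```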
This model can be used with the `hub.KerasLayer` as follows. It cannot be used with the `hub.Module` API for TensorFlow 1.

### Using TF Hub and HF Hub
```
import numpy as np
from huggingface_hub import snapshot_download
from tensorflow_hub import KerasLayer

# Download the model files from the Hugging Face Hub and load them as a Keras layer
model_path = snapshot_download(repo_id="Dimitre/mobilenet_v3_small")
model = KerasLayer(handle=model_path)

img = np.random.rand(1, 224, 224, 3)  # (batch_size, height, width, num_channels), values in [0, 1]
model(img)  # output shape (1, 1001)
```

### Using the [TF Hub fork](https://github.com/dimitreOliveira/hub)
```
import numpy as np

# pull_from_hub comes from the TF Hub fork linked above and loads the model
# directly from the Hugging Face Hub
model = pull_from_hub(repo_id="Dimitre/mobilenet_v3_small")

img = np.random.rand(1, 224, 224, 3)  # (batch_size, height, width, num_channels), values in [0, 1]
model(img)  # output shape (1, 1001)
```
The output is a batch of logits vectors. The indices into the logits are the `num_classes` = 1001 classes of the classification from the original training (see above). The mapping from indices to class labels can be found in the file at [download.tensorflow.org/data/ImageNetLabels.txt](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt) (with class 0 for "background", followed by 1000 actual ImageNet classes).
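
As a sketch of how that mapping can be used in practice, assuming the `model` loaded in one of the snippets above, an image file path that is only a placeholder, and the usual TF Hub convention of 224x224 inputs with values scaled to [0, 1]:

```
import numpy as np
import tensorflow as tf

# Placeholder image path; resize to 224x224 and scale to [0, 1] floats
image = tf.keras.utils.load_img("my_image.jpg", target_size=(224, 224))
img = np.array(image, dtype=np.float32)[np.newaxis, ...] / 255.0

# Fetch the label file referenced above (class 0 is "background")
labels_path = tf.keras.utils.get_file(
    "ImageNetLabels.txt",
    "https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt",
)
imagenet_labels = np.array(open(labels_path).read().splitlines())

logits = model(img)                     # shape (1, 1001)
predicted_index = np.argmax(logits[0])  # index into the 1001 classes
print(imagenet_labels[predicted_index])
```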

# Fine-tuning

In principle, consumers of this model can fine-tune it by passing `trainable=True` when loading it, as in the examples below. However, fine-tuning through a large classification layer might be prone to overfitting.

The momentum (a.k.a. decay coefficient) of batch norm's exponential moving averages defaults to 0.99 for this model, in order to accelerate training on small datasets (or with huge batch sizes).
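
If a different momentum is needed, one option is to pass it through `KerasLayer`'s `arguments` parameter; this sketch assumes the exported SavedModel still accepts the `batch_norm_momentum` call argument exposed by the upstream TF Hub model:

```
from huggingface_hub import snapshot_download
from tensorflow_hub import KerasLayer

# Assumption: the SavedModel accepts batch_norm_momentum as a call argument
model_path = snapshot_download(repo_id="Dimitre/mobilenet_v3_small")
model = KerasLayer(
    handle=model_path,
    trainable=True,
    arguments=dict(batch_norm_momentum=0.997),  # override the 0.99 default
)
```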

### Using TF Hub and HF Hub
```
import numpy as np
from huggingface_hub import snapshot_download
from tensorflow_hub import KerasLayer

# trainable=True makes the layer's weights available for fine-tuning
model_path = snapshot_download(repo_id="Dimitre/mobilenet_v3_small")
model = KerasLayer(handle=model_path, trainable=True)

img = np.random.rand(1, 224, 224, 3)  # (batch_size, height, width, num_channels), values in [0, 1]
model(img)  # output shape (1, 1001)
```

### Using the [TF Hub fork](https://github.com/dimitreOliveira/hub)
```
import numpy as np

# pull_from_hub comes from the TF Hub fork linked above
model = pull_from_hub(repo_id="Dimitre/mobilenet_v3_small", trainable=True)

img = np.random.rand(1, 224, 224, 3)  # (batch_size, height, width, num_channels), values in [0, 1]
model(img)  # output shape (1, 1001)
```
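
For completeness, here is a minimal fine-tuning sketch built around the fine-tunable `model` loaded above; `train_ds` and the optimizer settings are illustrative placeholders, not part of this model card:

```
import tensorflow as tf

# train_ds is a placeholder for your own tf.data pipeline yielding
# (images in [0, 1] with shape (batch, 224, 224, 3), integer labels indexing the 1001 classes)
fine_tuned = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
    model,  # the KerasLayer loaded with trainable=True above
])
fine_tuned.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),  # small learning rate helps limit overfitting
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# fine_tuned.fit(train_ds, epochs=5)
```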