---
language:
- en
library_name: timm
pipeline_tag: image-classification
tags:
- vision
- mapreader
- maps
- National Library of Scotland
- historical
- lam
- humanities
- heritage
license: apache-2.0
datasets:
- Livingwithmachines/MapReader_Data_SIGSPATIAL_2022
---

# Model Card for mr_tf_efficientnet_b3_ns_timm_pretrain_railspace_and_building

An EfficientNet image classification model. 
Trained on ImageNet-1k and unlabeled JFT-300M using Noisy Student semi-supervised learning in TensorFlow by the paper authors, and ported to PyTorch by Ross Wightman. 
Fine-tuned on gold standard annotations and outputs from early experiments using MapReader (found [here](https://huggingface.co/datasets/Livingwithmachines/MapReader_Data_SIGSPATIAL_2022)).

## Model Details

### Model Description

- **Model type:** Image classification / feature backbone
- **Finetuned from model:**  https://huggingface.co/timm/tf_efficientnet_b3.ns_jft_in1k

### Classes and labels

- 0: no
- 1: railspace
- 2: building
- 3: railspace & building
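
If you work with the raw model outputs directly, the class indices above translate into a simple lookup. The snippet below is a minimal illustration of that list in Python; the variable name `labels` is ours, not part of MapReader:

```python
# Mapping from predicted class index to human-readable label
# (taken from the "Classes and labels" list above).
labels = {
    0: "no",
    1: "railspace",
    2: "building",
    3: "railspace & building",
}
```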

## Uses

This fine-tuned version of the model is an output of the MapReader pipeline. 
It was used to classify 'patch' images (cells/regions) of scanned nineteenth-century series maps of Britain provided by the National Library of Scotland (learn more [here](https://maps.nls.uk/os/)). 
We classified patches to indicate the presence of buildings and railway infrastructure. 
See [our paper](https://dl.acm.org/doi/10.1145/3557919.3565812) for more details about labels.

## How to Get Started with the Model in MapReader

Please go to [the MapReader documentation](https://mapreader.readthedocs.io/en/latest/User-guide/Classify.html) for instructions on how to use this model in MapReader.
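
The model is also published in timm format, so outside the MapReader pipeline it can in principle be loaded directly with timm. The sketch below is illustrative only: the Hugging Face Hub repository id and the patch filename are assumptions, and the preprocessing used inside MapReader itself may differ.

```python
import timm
import torch
from PIL import Image

# Assumed Hub repository id -- check this model's page for the exact path.
model = timm.create_model(
    "hf_hub:Livingwithmachines/mr_tf_efficientnet_b3_ns_timm_pretrain_railspace_and_building",
    pretrained=True,
)
model.eval()

# Build the preprocessing transform from the model's own data config.
config = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**config)

# Label mapping from the "Classes and labels" section above.
labels = {0: "no", 1: "railspace", 2: "building", 3: "railspace & building"}

# "patch.png" stands in for a patch image produced by MapReader.
image = Image.open("patch.png").convert("RGB")
with torch.no_grad():
    probs = model(transform(image).unsqueeze(0)).softmax(dim=1)

pred = probs.argmax(dim=1).item()
print(f"{labels[pred]} ({probs[0, pred].item():.3f})")
```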

## Training, Evaluation and Testing Details

### Training, Evaluation and Testing Data

This model was fine-tuned on [manually-annotated data](https://huggingface.co/datasets/Livingwithmachines/MapReader_Data_SIGSPATIAL_2022).

### Training, Evaluation and Testing Procedure 

Details can be found [here](https://dl.acm.org/doi/10.1145/3557919.3565812). 

Open access version of the article available [here](https://arxiv.org/abs/2111.15592).

### Results

Data outputs can be found [here](https://huggingface.co/datasets/Livingwithmachines/MapReader_Data_SIGSPATIAL_2022).

Further details can be found [here](https://dl.acm.org/doi/10.1145/3557919.3565812).

## More Information 

This model was fine-tuned using MapReader.

The code for MapReader can be found [here](https://github.com/Living-with-machines/MapReader) and the documentation can be found [here](https://mapreader.readthedocs.io/en/latest/).

## Model Card Authors 

Katie McDonough ([email protected])

Rosie Wood ([email protected])

## Model Card Contact

Katie McDonough ([email protected])

## Funding Statement

This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1). 
Living with Machines, funded by the UK Research and Innovation (UKRI) Strategic Priority Fund, is a multidisciplinary collaboration delivered by the Arts and Humanities Research Council (AHRC), with The Alan Turing Institute, the British Library, and the universities of Cambridge, King's College London, East Anglia, Exeter, and Queen Mary University of London.