---
language:
- en
library_name: timm
pipeline_tag: image-classification
tags:
- vision
- mapreader
- maps
- National Library of Scotland
- historical
- lam
- humanities
- heritage
---
# Model Card for mr_tf_efficientnet_b3_ns_timm_pretrain
An EfficientNet image classification model, trained on ImageNet-1k and unlabeled JFT-300m using Noisy Student semi-supervised learning in TensorFlow by the paper authors and ported to PyTorch by Ross Wightman. Fine-tuned on gold standard annotations and on outputs from early experiments using MapReader (found here).
## Model Details

### Model Description

- Model type: Image classification / feature backbone
- Finetuned from model: https://huggingface.co/timm/tf_efficientnet_b3.ns_jft_in1k
## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]
## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- Training regime: [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary
## Model Examination [optional]

[More Information Needed]
## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]
## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
## Funding Statement

This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1). Living with Machines, funded by the UK Research and Innovation (UKRI) Strategic Priority Fund, is a multidisciplinary collaboration delivered by the Arts and Humanities Research Council (AHRC), with The Alan Turing Institute, the British Library, the Universities of Cambridge, East Anglia, and Exeter, King's College London, and Queen Mary University of London.