SPTS: Single-Point Text Spotting
Description
This is an implementation of SPTS based on MMOCR, MMCV, and MMEngine.
Existing scene text spotting (i.e., end-to-end text detection and recognition) methods rely on costly bounding box annotations (e.g., text-line, word-level, or character-level bounding boxes). For the first time, we demonstrate that training scene text spotting models can be achieved with an extremely low-cost annotation of a single point for each instance. We propose an end-to-end scene text spotting method that tackles scene text spotting as a sequence prediction task. Given an image as input, we formulate the desired detection and recognition results as a sequence of discrete tokens and use an auto-regressive Transformer to predict the sequence. The proposed method is simple yet effective, and achieves state-of-the-art results on widely used benchmarks. Most significantly, we show that the performance is not very sensitive to the position of the point annotation, meaning that it can be annotated much more easily, or even generated automatically, compared with bounding boxes that require precise positions. We believe that such a pioneering attempt indicates a significant opportunity for scene text spotting applications of a much larger scale than previously possible.
Usage
Prerequisites
All the commands below rely on the correct configuration of `PYTHONPATH`, which should point to the project's directory so that Python can locate the module files. In the `SPTS/` root directory, run the following command to add the current directory to `PYTHONPATH`:

```bash
# Linux
export PYTHONPATH=`pwd`:$PYTHONPATH
# Windows PowerShell
$env:PYTHONPATH=Get-Location
```
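As an optional sanity check (not part of the original guide), you can print Python's module search path and confirm that the SPTS project directory appears in it; the command below only uses the standard library and works in both shells:

```bash
# Verify that the current directory is on the module search path
python -c "import sys; print(sys.path)"
```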
Dataset
As of now, the implementation uses the datasets provided by SPTS for pre-training, and MMOCR's datasets for fine-tuning and testing. This is because the test split of SPTS's datasets does not contain enough information for end-to-end evaluation, and MMOCR's Dataset Preparer does not yet support all the datasets used in SPTS. We are working on this issue, and they will be available in MMOCR's Dataset Preparer very soon.
Please follow these steps to prepare the datasets:
1. Download and extract all the SPTS datasets into `spts-data/` following the SPTS official guide.

2. Use the Dataset Preparer to prepare `icdar2013`, `icdar2015`, and `totaltext` for the `textspotting` task:

   ```bash
   # Run in MMOCR's root directory
   python tools/dataset_converters/prepare_dataset.py icdar2013 icdar2015 totaltext --task textspotting
   ```

3. Create a soft link to the `data/` directory in the project root directory:

   ```bash
   ln -s ../../data/ .
   ```
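Optionally, you can verify that the soft link works by listing the prepared dataset folders. The directory names below are assumptions based on the datasets prepared in step 2; adjust them if your layout differs:

```bash
# List the prepared datasets through the soft link (names are assumed)
ls data/icdar2013 data/icdar2015 data/totaltext
```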
Training commands
In the current directory, run the following command to train the model:
Pretrain
```bash
mim train mmocr config/spts/spts_resnet50_8xb8-150e_pretrain-spts.py --work-dir work_dirs/ --amp
```
To train on multiple GPUs, e.g. 8 GPUs, run the following command:
```bash
mim train mmocr config/spts/spts_resnet50_8xb8-150e_pretrain-spts.py --work-dir work_dirs/ --launcher pytorch --gpus 8 --amp
```
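If you want to adjust hyperparameters without editing the config file, `--cfg-options` can override them at launch time, in the same way the fine-tuning commands below override `load_from`. The key names in this sketch (`optim_wrapper.optimizer.lr`, `train_dataloader.batch_size`) are assumptions based on the standard MMEngine config layout; check the actual config for the exact keys:

```bash
# Illustrative only: override the learning rate and per-GPU batch size (key names assumed)
mim train mmocr config/spts/spts_resnet50_8xb8-150e_pretrain-spts.py --work-dir work_dirs/ --amp \
    --cfg-options "optim_wrapper.optimizer.lr=0.0005" "train_dataloader.batch_size=4"
```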
Finetune
Similarly, run the following command to finetune the model on a dataset (e.g. icdar2013):
```bash
mim train mmocr config/spts/spts_resnet50_8xb8-200e_icdar2013.py --work-dir work_dirs/ --cfg-options "load_from={CHECKPOINT_PATH}" --amp
```
To finetune on multiple GPUs, e.g. 8 GPUs, run the following command:
```bash
mim train mmocr config/spts/spts_resnet50_8xb8-200e_icdar2013.py --work-dir work_dirs/ --launcher pytorch --gpus 8 --cfg-options "load_from={CHECKPOINT_PATH}" --amp
```
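For example, to finetune from a checkpoint produced by the pre-training command above, the call might look like the following. The checkpoint path is purely illustrative and depends on your work directory and how long pre-training ran:

```bash
# Illustrative checkpoint path: replace with the actual pretrained weights
mim train mmocr config/spts/spts_resnet50_8xb8-200e_icdar2013.py --work-dir work_dirs/ \
    --cfg-options "load_from=work_dirs/spts_resnet50_8xb8-150e_pretrain-spts/epoch_150.pth" --amp
```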
Testing commands
In the current directory, run the following command to test the model on a dataset (e.g. icdar2013):
```bash
mim test mmocr config/spts/spts_resnet50_8xb8-200e_icdar2013.py --work-dir work_dirs/ --checkpoint ${CHECKPOINT_PATH}
```
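Testing can also be distributed across GPUs in the same way as training. This sketch assumes `mim test` accepts the same `--launcher` and `--gpus` flags shown in the training commands above:

```bash
# Multi-GPU testing (e.g. 8 GPUs); ${CHECKPOINT_PATH} as above
mim test mmocr config/spts/spts_resnet50_8xb8-200e_icdar2013.py --work-dir work_dirs/ --launcher pytorch --gpus 8 --checkpoint ${CHECKPOINT_PATH}
```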
Convert Weights from Official Repo
Users may download the weights from the official SPTS repository and use the conversion script to convert them into MMOCR format.
```bash
python tools/ckpt_adapter.py [SPTS_WEIGHTS_PATH] [MMOCR_WEIGHTS_PATH]
```
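A concrete call might look like this; both filenames are placeholders for wherever you saved the downloaded weights and where you want the converted checkpoint written:

```bash
# Illustrative filenames only
python tools/ckpt_adapter.py spts_official.pth spts_mmocr.pth
```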
Results
All the models are trained on 8x A100 GPUs with AMP enabled (`--amp`). The overall batch size is 64.
Generic, Weak, and Strong refer to the lexicons used in the end-to-end evaluation.

| Name | Pretrained | Generic | Weak | Strong | Download |
| :--- | :--- | :--- | :--- | :--- | :--- |
| ICDAR 2013 | model / log | 87.10 | 91.46 | 93.41 | model / log |
| ICDAR 2015 | model / log | 69.09 | 73.45 | 79.19 | model / log |
Citation
If you find SPTS useful in your research or applications, please cite SPTS with the following BibTeX entry.
```bibtex
@inproceedings{peng2022spts,
  title={SPTS: Single-Point Text Spotting},
  author={Peng, Dezhi and Wang, Xinyu and Liu, Yuliang and Zhang, Jiaxin and Huang, Mingxin and Lai, Songxuan and Zhu, Shenggao and Li, Jing and Lin, Dahua and Shen, Chunhua and Bai, Xiang and Jin, Lianwen},
  booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
  year={2022}
}
```
Checklist
- Milestone 1: PR-ready, and acceptable to be one of the `projects/`.

  - Finish the code
  - Basic docstrings & proper citation
  - Test-time correctness
  - A full README

- Milestone 2: Indicates a successful model implementation.

  - Training-time correctness

- Milestone 3: Good to be a part of our core package!

  - Type hints and docstrings
  - Unit tests
  - Code polishing
  - Metafile.yml

- Move your modules into the core package following the codebase's file hierarchy structure.

- Refactor your modules into the core package following the codebase's file hierarchy structure.