---
license: apache-2.0
size_categories:
- 100K<n<1M
task_categories:
- image-to-text
dataset_info:
- config_name: default
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 392473380.05
    num_examples: 76318
  download_size: 383401054
  dataset_size: 392473380.05
- config_name: full
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 385291867
    num_examples: 76318
  - name: validation
    num_bytes: 43364061.55
    num_examples: 8475
  - name: test
    num_bytes: 47643036.303
    num_examples: 9443
  download_size: 473618552
  dataset_size: 483485587.878
- config_name: human_handwrite
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 16181778
    num_examples: 1200
  - name: validation
    num_bytes: 962283
    num_examples: 68
  - name: test
    num_bytes: 906906
    num_examples: 70
  download_size: 18056029
  dataset_size: 18050967
- config_name: human_handwrite_print
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 3152122.8
    num_examples: 1200
  - name: validation
    num_bytes: 182615
    num_examples: 68
  - name: test
    num_bytes: 181698
    num_examples: 70
  download_size: 1336052
  dataset_size: 3516435.8
- config_name: small
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 261296
    num_examples: 50
  - name: validation
    num_bytes: 156489
    num_examples: 30
  - name: test
    num_bytes: 156489
    num_examples: 30
  download_size: 588907
  dataset_size: 574274
- config_name: synthetic_handwrite
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 496610333.066
    num_examples: 76266
  - name: validation
    num_bytes: 63147351.515
    num_examples: 9565
  - name: test
    num_bytes: 62893132.805
    num_examples: 9593
  download_size: 616418996
  dataset_size: 622650817.3859999
configs:
- config_name: default
  data_files:
  - split: train
    path: full/train-*
- config_name: full
  data_files:
  - split: train
    path: full/train-*
  - split: validation
    path: full/validation-*
  - split: test
    path: full/test-*
- config_name: human_handwrite
  data_files:
  - split: train
    path: human_handwrite/train-*
  - split: validation
    path: human_handwrite/validation-*
  - split: test
    path: human_handwrite/test-*
- config_name: human_handwrite_print
  data_files:
  - split: train
    path: human_handwrite_print/train-*
  - split: validation
    path: human_handwrite_print/validation-*
  - split: test
    path: human_handwrite_print/test-*
- config_name: small
  data_files:
  - split: train
    path: small/train-*
  - split: validation
    path: small/validation-*
  - split: test
    path: small/test-*
- config_name: synthetic_handwrite
  data_files:
  - split: train
    path: synthetic_handwrite/train-*
  - split: validation
    path: synthetic_handwrite/validation-*
  - split: test
    path: synthetic_handwrite/test-*
tags:
- code
---

# Data repository for LaTeX OCR

This dataset repository was built for [LaTeX_OCR](https://github.com/LinXueyuanStdio/LaTeX_OCR) and [LaTeX_OCR_PRO](https://github.com/LinXueyuanStdio/LaTeX_OCR_PRO). The data comes from `https://zenodo.org/record/56198#.V2p0KTXT6eA` and `https://www.isical.ac.in/~crohme/`, plus formulas we constructed ourselves.

If this repository helps you, please give it a ❤️ like!

New data added in the future will also go into this repository.

> The original data repository is on GitHub: [LinXueyuanStdio/Data-for-LaTeX_OCR](https://github.com/LinXueyuanStdio/Data-for-LaTeX_OCR).

## Datasets

This repository provides 5 datasets:

1. `small` is a tiny dataset of 110 samples, meant for quick tests.
2. `full` is the complete printed-formula dataset of roughly 100k samples. The actual count is slightly below 100k because formulas that could not be rendered were filtered out with a LaTeX abstract-syntax-tree check.
3. `synthetic_handwrite` is the complete handwritten dataset of roughly 100k samples. It is synthesized from the formulas in `full` using handwriting fonts and can be regarded as human handwriting on paper. The actual count is slightly below 100k for the same reason as above.
4. `human_handwrite` is a smaller handwritten dataset that is closer to human handwriting on an electronic screen. It mainly comes from `CROHME`, and we validated it with the same LaTeX abstract-syntax-tree check.
5. `human_handwrite_print` is the printed counterpart of `human_handwrite`: the formulas are the same as in `human_handwrite`, and the images are rendered from those formulas with LaTeX.
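
The available configs can also be listed programmatically with `get_dataset_config_names` from `datasets`. Note that, per the YAML header above, there is additionally a `default` config that points at the train split of `full`:

```python
>>> from datasets import get_dataset_config_names
>>> get_dataset_config_names("linxy/LaTeX_OCR")
['default', 'full', 'human_handwrite', 'human_handwrite_print', 'small', 'synthetic_handwrite']
```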

## Usage

Load a training split:

- `name` can be `small`, `full`, `synthetic_handwrite`, `human_handwrite`, or `human_handwrite_print`
- `split` can be `train`, `validation`, or `test`

```python
>>> from datasets import load_dataset
>>> train_dataset = load_dataset("linxy/LaTeX_OCR", name="small", split="train")
>>> print(train_dataset[2]["text"])
\rho _ { L } ( q ) = \sum _ { m = 1 } ^ { L } \ P _ { L } ( m ) \ { \frac { 1 } { q ^ { m - 1 } } } .
>>> train_dataset[2]
{'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=200x50 at 0x15A5D6CE210>,
 'text': '\\rho _ { L } ( q ) = \\sum _ { m = 1 } ^ { L } \\ P _ { L } ( m ) \\ { \\frac { 1 } { q ^ { m - 1 } } } .'}
>>> len(train_dataset)
50
```
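
The `image` field of each sample is a PIL image, so the usual PIL methods apply. A minimal sketch for inspecting and saving one sample (the filename `sample.png` is just an illustration):

```python
>>> img = train_dataset[2]["image"]
>>> img.size                # (width, height) in pixels
(200, 50)
>>> img.save("sample.png")  # write the rendered formula image to disk
```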

Load all splits:

```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("linxy/LaTeX_OCR", name="small")
>>> dataset
DatasetDict({
    train: Dataset({
        features: ['image', 'text'],
        num_rows: 50
    })
    validation: Dataset({
        features: ['image', 'text'],
        num_rows: 30
    })
    test: Dataset({
        features: ['image', 'text'],
        num_rows: 30
    })
})
```
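
For training, the loaded `Dataset` objects can be fed straight into a PyTorch `DataLoader`, since they implement `__len__` and `__getitem__`. A minimal sketch, assuming `torch` is installed; the `collate` helper below is hypothetical and only shows how to batch PIL images together with their LaTeX strings:

```python
>>> from torch.utils.data import DataLoader
>>> def collate(batch):  # hypothetical helper: keep images and texts as parallel lists
...     images = [sample["image"].convert("RGB") for sample in batch]
...     texts = [sample["text"] for sample in batch]
...     return images, texts
...
>>> loader = DataLoader(dataset["train"], batch_size=8, collate_fn=collate)
>>> images, texts = next(iter(loader))
>>> len(texts)
8
```

In a real pipeline the images would be resized and converted to tensors (and the texts tokenized) inside `collate`, or beforehand via `dataset.map`.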