This view is limited to 50 files because it contains too many changes. See the raw diff for the full list.
Files changed (50)
  1. .gitattributes +4 -2
  2. README.md +59 -144
  3. align.py +0 -81
  4. cut.py +24 -52
  5. data/saamgwokjinji-00000-of-00005.parquet +0 -3
  6. data/saamgwokjinji-00001-of-00005.parquet +0 -3
  7. data/saamgwokjinji-00002-of-00005.parquet +0 -3
  8. data/saamgwokjinji-00003-of-00005.parquet +0 -3
  9. data/saamgwokjinji-00004-of-00005.parquet +0 -3
  10. data/seoiwuzyun-00000-of-00003.parquet +0 -3
  11. data/seoiwuzyun-00001-of-00003.parquet +0 -3
  12. data/seoiwuzyun-00002-of-00003.parquet +0 -3
  13. data/train-00000-of-00078.parquet +3 -0
  14. data/train-00001-of-00078.parquet +3 -0
  15. data/train-00002-of-00078.parquet +3 -0
  16. data/train-00003-of-00078.parquet +3 -0
  17. data/train-00004-of-00078.parquet +3 -0
  18. data/train-00005-of-00078.parquet +3 -0
  19. data/train-00006-of-00078.parquet +3 -0
  20. data/train-00007-of-00078.parquet +3 -0
  21. data/train-00008-of-00078.parquet +3 -0
  22. data/train-00009-of-00078.parquet +3 -0
  23. data/train-00010-of-00078.parquet +3 -0
  24. data/train-00011-of-00078.parquet +3 -0
  25. data/train-00012-of-00078.parquet +3 -0
  26. data/train-00013-of-00078.parquet +3 -0
  27. data/train-00014-of-00078.parquet +3 -0
  28. data/train-00015-of-00078.parquet +3 -0
  29. data/train-00016-of-00078.parquet +3 -0
  30. data/train-00017-of-00078.parquet +3 -0
  31. data/train-00018-of-00078.parquet +3 -0
  32. data/train-00019-of-00078.parquet +3 -0
  33. data/train-00020-of-00078.parquet +3 -0
  34. data/train-00021-of-00078.parquet +3 -0
  35. data/train-00022-of-00078.parquet +3 -0
  36. data/train-00023-of-00078.parquet +3 -0
  37. data/train-00024-of-00078.parquet +3 -0
  38. data/train-00025-of-00078.parquet +3 -0
  39. data/train-00026-of-00078.parquet +3 -0
  40. data/train-00027-of-00078.parquet +3 -0
  41. data/train-00028-of-00078.parquet +3 -0
  42. data/train-00029-of-00078.parquet +3 -0
  43. data/train-00030-of-00078.parquet +3 -0
  44. data/train-00031-of-00078.parquet +3 -0
  45. data/train-00032-of-00078.parquet +3 -0
  46. data/train-00033-of-00078.parquet +3 -0
  47. data/train-00034-of-00078.parquet +3 -0
  48. data/train-00035-of-00078.parquet +3 -0
  49. data/train-00036-of-00078.parquet +3 -0
  50. data/train-00037-of-00078.parquet +3 -0
.gitattributes CHANGED
@@ -44,8 +44,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
44
  *.mp3 filter=lfs diff=lfs merge=lfs -text
45
  *.ogg filter=lfs diff=lfs merge=lfs -text
46
  *.wav filter=lfs diff=lfs merge=lfs -text
47
- *.opus filter=lfs diff=lfs merge=lfs -text
48
- *.webm filter=lfs diff=lfs merge=lfs -text
49
  # Image files - uncompressed
50
  *.bmp filter=lfs diff=lfs merge=lfs -text
51
  *.gif filter=lfs diff=lfs merge=lfs -text
@@ -55,3 +53,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
55
  *.jpg filter=lfs diff=lfs merge=lfs -text
56
  *.jpeg filter=lfs diff=lfs merge=lfs -text
57
  *.webp filter=lfs diff=lfs merge=lfs -text
 
 
 
 
 
44
  *.mp3 filter=lfs diff=lfs merge=lfs -text
45
  *.ogg filter=lfs diff=lfs merge=lfs -text
46
  *.wav filter=lfs diff=lfs merge=lfs -text
 
 
47
  # Image files - uncompressed
48
  *.bmp filter=lfs diff=lfs merge=lfs -text
49
  *.gif filter=lfs diff=lfs merge=lfs -text
 
53
  *.jpg filter=lfs diff=lfs merge=lfs -text
54
  *.jpeg filter=lfs diff=lfs merge=lfs -text
55
  *.webp filter=lfs diff=lfs merge=lfs -text
56
+ 001.webm filter=lfs diff=lfs merge=lfs -text
57
+ *.webm filter=lfs diff=lfs merge=lfs -text
58
+ 002.webm filter=lfs diff=lfs merge=lfs -text
59
+ 003.webm filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,62 +1,54 @@
1
  ---
2
  language:
3
- - yue
4
  license: cc0-1.0
5
  size_categories:
6
- - 10K<n<100K
7
  task_categories:
8
- - automatic-speech-recognition
9
- - text-to-speech
10
- - text-generation
11
- - feature-extraction
12
- - audio-to-audio
13
- - audio-classification
14
- - text-to-audio
15
  pretty_name: c
16
  configs:
17
- - config_name: default
18
- data_files:
19
- - split: saamgwokjinji
20
- path: data/saamgwokjinji-*
21
- - split: seoiwuzyun
22
- path: data/seoiwuzyun-*
23
  tags:
24
- - cantonese
25
- - audio
26
- - art
27
  dataset_info:
28
  features:
29
- - name: audio
30
- dtype: audio
31
- - name: id
32
- dtype: string
33
- - name: episode_id
34
- dtype: int64
35
- - name: audio_duration
36
- dtype: float64
37
- - name: transcription
38
- dtype: string
39
  splits:
40
- - name: saamgwokjinji
41
- num_bytes: 2398591354.589
42
- num_examples: 39173
43
- - name: seoiwuzyun
44
- num_bytes: 1243416911.25
45
- num_examples: 18881
46
- download_size: 3690072406
47
- dataset_size: 3642008265.839
48
  ---
49
 
50
- # 張悦楷講《三國演義》《水滸傳》語音數據集
51
 
52
- [English](#the-zoeng-jyut-gaai-story-telling-speech-dataset)
53
 
54
- ## Dataset Description
55
 
56
- - **Homepage:** [張悦楷講古語音數據集 The Zoeng Jyut Gaai Story-telling Speech Dataset](https://canclid.github.io/zoengjyutgaai/)
57
- - **License:** [CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/)
58
-
59
- 呢個係張悦楷講《三國演義》同《水滸傳》語音數據集。[張悦楷](https://zh.wikipedia.org/wiki/%E5%BC%A0%E6%82%A6%E6%A5%B7)係廣州最出名嘅講古佬 / 粵語説書藝人。佢從上世紀七十年代開始就喺廣東各個收音電台度講古,佢把聲係好多廣州人嘅共同回憶。本數據集《三國演義》係佢最知名嘅作品一。
60
 
61
  數據集用途:
62
 
@@ -72,15 +64,11 @@ TTS 效果演示:https://huggingface.co/spaces/laubonghaudoi/zoengjyutgaai_tts
72
  - 所有文本都根據 https://jyutping.org/blog/typo/ 同 https://jyutping.org/blog/particles/ 規範用字。
73
  - 所有文本都使用全角標點,冇半角標點。
74
  - 所有文本都用漢字轉寫,無阿拉伯數字無英文字母
75
- - 所有音頻源都存放喺`/source`,為方便直接用作訓練數據,切分後嘅音頻都放喺 `opus/`
76
- - 所有 opus 音頻皆為 48000 Hz 採樣率。
77
- - 所有源字幕 SRT 文件都存放喺 `srt/` 路經下,搭配 `source/` 下嘅音源可以直接作為帶字幕嘅錄音直接欣賞。
78
- - `cut.py` 係切分腳本,將對應嘅音源根據 srt 切分成短句並生成一個文本轉寫 csv。
79
- - `stats.py` 係統計腳本,運行佢就會顯示成個數據集嘅各項統計數據。
80
 
81
- ## 下載使用
82
 
83
- 要下載使用呢個數據集,可以喺 Python 入面直接跑:
84
 
85
  ```python
86
  from datasets import load_dataset
@@ -88,30 +76,16 @@ from datasets import load_dataset
88
  ds = load_dataset("CanCLID/zoengjyutgaai_saamgwokjinji")
89
  ```
90
 
91
- 如果想單純將 `opus/` 入面所有嘢下載落嚟,可以跑下面嘅 Python 代碼,注意要安裝 `pip install --upgrade huggingface_hub` 先:
92
-
93
- ```python
94
- from huggingface_hub import snapshot_download
95
-
96
- # 如果你淨係想下載啲字幕或者源音頻,噉就將下面嘅 `wav/*` 改成 `srt/*` 或者 `webm/*`
97
- snapshot_download(repo_id="CanCLID/zoengjyutgaai_saamgwokjinji",allow_patterns="opus/*",local_dir="./",repo_type="dataset")
98
- ```
99
-
100
- 如果唔想用 python,你亦都可以用命令行叫 git 針對克隆個`opus/`或者其他路經,避免將成個 repo 都克隆落嚟浪費空間同下載時間:
101
 
102
  ```bash
103
- mkdir zoengjyutgaai_saamgwokjinji
 
104
  cd zoengjyutgaai_saamgwokjinji
105
- git init
106
 
107
- git remote add origin https://huggingface.co/datasets/CanCLID/zoengjyutgaai_saamgwokjinji
108
  git sparse-checkout init --cone
109
-
110
- # 指定凈係下載個別路徑
111
- git sparse-checkout set opus
112
-
113
- # 開始下載
114
- git pull origin main
115
  ```
116
 
117
  ### 數據集構建流程
@@ -120,21 +94,14 @@ git pull origin main
120
 
121
  1. 從 YouTube 或者國內評書網站度下載錄音源文件,一般都係每集半個鐘長嘅 `.webm` 或者 `.mp3`。
122
  1. 用加字幕工具幫呢啲錄音加字幕,得到對應嘅 `.srt` 文件。
123
- 1. 將啲源錄音用下面嘅命令儘可能無壓縮噉轉換成 `.opus` 格式。
124
- 1. 運行`cut.py`,將每一集 `.opus` 按照 `.srt` 入面嘅時間點切分成一句一個 `.opus`,然後對應嘅文本寫入本數據集嘅 `xxx.csv`。
125
  1. 然後打開一個 IPython,逐句跑下面嘅命令,將啲數據推上 HuggingFace。
126
 
127
  ```python
128
- from datasets import load_dataset, DatasetDict
129
  from huggingface_hub import login
130
-
131
- sg = load_dataset('audiofolder', data_dir='./opus/saamgwokjinji')
132
- sw = load_dataset('audiofolder', data_dir='./opus/seoiwuzyun')
133
- dataset = DatasetDict({
134
- "saamgwokjinji": sg["train"],
135
- "seoiwuzyun": sw["train"],
136
- })
137
-
138
  # 檢查下讀入嘅數據有冇問題
139
  dataset['train'][0]
140
  # 準備好個 token 嚟登入
@@ -143,31 +110,19 @@ login()
143
  dataset.push_to_hub("CanCLID/zoengjyutgaai_saamgwokjinji")
144
  ```
145
 
146
- ### 音頻格式轉換
147
 
148
  首先要安裝 [ffmpeg](https://www.ffmpeg.org/download.html),然後運行:
149
 
150
  ```bash
151
- # 將下載嘅音源由 webm 轉成 opus
152
- ffmpeg -i webm/saamgwokjinji/001.webm -c:a copy source/saamgwokjinji/001.opus
153
- # 或者轉 mp3
154
- ffmpeg -i mp3/mouzaakdung/001.mp3 -c:a libopus -map_metadata -1 -b:a 48k -vbr on source/mouzaakdung/001.opus
155
- # 將 opus 轉成無損 wav
156
- ffmpeg -i source/saamgwokjinji/001.opus wav/saamgwokjinji/001.wav
157
  ```
158
 
159
- 如果想將所有 opus 文件全部轉換成 wav,可以直接運行`to_wav.sh`:
160
 
161
- ```
162
- chmod +x to_wav.sh
163
- ./to_wav.sh
164
- ```
165
 
166
- 跟住就會生成一個 `wav/` 路經,入面都係 `opus/` 對應嘅音頻。注意 wav 格式非常掗埞,成個 `opus/` 轉晒後會佔用至少 500GB 儲存空間,所以轉換之前記得確保有足夠空間。如果你想對音頻重採樣,亦都可以修改 `to_wav.sh` 入面嘅命令順便做重採樣。
167
-
168
- # The Zoeng Jyut Gaai Story-telling Speech Dataset
169
-
170
- This is a speech dataset of Zoeng Jyut Gaai story-telling _Romance of the Three Kingdoms_ and _Water Margin_. [Zoeng Jyut Gaai](https://zh.wikipedia.org/wiki/%E5%BC%A0%E6%82%A6%E6%A5%B7) is a famous actor, stand-up commedian and story-teller (講古佬) in 20th centry Canton. His voice remains in the memories of thousands of Cantonese people. This dataset is built from one of his most well-known story-telling piece: _Romance of the Three Kingdoms_.
171
 
172
  Use case of this dataset:
173
 
@@ -183,11 +138,7 @@ TTS demo: https://huggingface.co/spaces/laubonghaudoi/zoengjyutgaai_tts
183
  - All transcriptions follow the prescribed orthography detailed in https://jyutping.org/blog/typo/ and https://jyutping.org/blog/particles/
184
  - All transcriptions use full-width punctuations, no half-width punctuations is used.
185
  - All transcriptions are in Chinese characters, no Arabic numbers or Latin letters.
186
- - All source audio are stored in `source/`. For the convenice of training, segmented audios are stored in `opus/`.
187
- - All opus audio are in 48000 Hz sampling rate.
188
- - All source subtitle SRT files are stored in `srt/`. Use them with the webm files to enjoy subtitled storytelling pieces.
189
- - `cut.py` is the script for cutting opus audios into senteneces based on the srt, and generates a csv file for transcriptions.
190
- - `stats.py` is the script for getting stats of this dataset.
191
 
192
  ## Usage
193
 
@@ -199,50 +150,14 @@ from datasets import load_dataset
199
  ds = load_dataset("CanCLID/zoengjyutgaai_saamgwokjinji")
200
  ```
201
 
202
- If you only want to download a certain directory to save time and space from cloning the entire repo, run the Python codes below. Make sure you have `pip install --upgrade huggingface_hub` first:
203
-
204
- ```python
205
- from huggingface_hub import snapshot_download
206
-
207
- # If you only want to download the source audio or the subtitles, change the `wav/*` below into `srt/*` or `webm/*`
208
- snapshot_download(repo_id="CanCLID/zoengjyutgaai_saamgwokjinji",allow_patterns="opus/*",local_dir="./",repo_type="dataset")
209
- ```
210
-
211
- If you don't want to run python codes and want to do this via command lines, you can selectively clone only a directory of the repo:
212
 
213
  ```bash
214
- mkdir zoengjyutgaai_saamgwokjinji
 
215
  cd zoengjyutgaai_saamgwokjinji
216
- git init
217
 
218
- git remote add origin https://huggingface.co/datasets/CanCLID/zoengjyutgaai_saamgwokjinji
219
  git sparse-checkout init --cone
220
-
221
- # Tell git which directory you want
222
- git sparse-checkout set opus
223
-
224
- # Pull the content
225
- git pull origin main
226
- ```
227
-
228
- ### Audio format conversion
229
-
230
- Install [ffmpeg](https://www.ffmpeg.org/download.html) first, then run:
231
-
232
- ```bash
233
- # convert all webm into opus
234
- ffmpeg -i webm/saamgwokjinji/001.webm -c:a copy source/saamgwokjinji/001.opus
235
- # or into mp3
236
- ffmpeg -i mp3/mouzaakdung/001.mp3 -c:a libopus -map_metadata -1 -b:a 48k -vbr on source/mouzaakdung/001.opus
237
- # convert all opus into loseless wav
238
- ffmpeg -i source/saamgwokjinji/001.opus wav/saamgwokjinji/001.wav
239
- ```
240
-
241
- If you want to convert all opus to wav, run `to_wav.sh`:
242
-
243
- ```
244
- chmod +x to_wav.sh
245
- ./to_wav.sh
246
- ```
247
-
248
- It will generate a `wav/` path which contains all audios converted from `opus/`. Be aware the wav format is very space-consuming. A full conversion will take up at least 500GB space so make sure you have enough storage. If you want to resample the audio, modify the line within `to_wav.sh` to resample the audio while doing the conversion.
 
1
  ---
2
  language:
3
+ - yue
4
  license: cc0-1.0
5
  size_categories:
6
+ - 10K<n<100K
7
  task_categories:
8
+ - automatic-speech-recognition
9
+ - text-to-speech
10
+ - text-generation
11
+ - feature-extraction
12
+ - audio-to-audio
13
+ - audio-classification
14
+ - text-to-audio
15
  pretty_name: c
16
  configs:
17
+ - config_name: default
18
+ data_files:
19
+ - split: train
20
+ path: data/train-*
 
 
21
  tags:
22
+ - cantonese
23
+ - audio
24
+ - art
25
  dataset_info:
26
  features:
27
+ - name: audio
28
+ dtype: audio
29
+ - name: id
30
+ dtype: string
31
+ - name: episode_id
32
+ dtype: int64
33
+ - name: audio_duration
34
+ dtype: float64
35
+ - name: transcription
36
+ dtype: string
37
  splits:
38
+ - name: train
39
+ num_bytes: 38792803349.64
40
+ num_examples: 39190
41
+ download_size: 38782029113
42
+ dataset_size: 38792803349.64
 
 
 
43
  ---
44
 
45
+ # 張悦楷講《三國演義》語音數據集
46
 
47
+ [主頁 Home page](https://canclid.github.io/zoengjyutgaai/)
48
 
49
+ [English](#zoeng-jyut-gaai-story-telling-romance-of-the-three-kingdoms-voice-dataset)
50
 
51
+ 呢個係張悦楷講《三國演義》語音數據集。[張悦楷](https://zh.wikipedia.org/wiki/%E5%BC%A0%E6%82%A6%E6%A5%B7)係廣州最出名嘅講古佬 / 粵語説書藝人。佢從上世紀七十年代開始就喺廣東各個收音電台度講古,佢把聲係好多廣州人嘅共同回憶。本數據集《三國演義》係佢最知名嘅作品之一。
 
 
 
52
 
53
  數據集用途:
54
 
 
64
  - 所有文本都根據 https://jyutping.org/blog/typo/ 同 https://jyutping.org/blog/particles/ 規範用字。
65
  - 所有文本都使用全角標點,冇半角標點。
66
  - 所有文本都用漢字轉寫,無阿拉伯數字無英文字母
67
+ - 所有音頻源都存放喺 `webm/`,為方便直接用作訓練數據,切分後嘅音頻都重採樣成 44100 Hz 放喺 `wav/`
 
 
 
 
68
 
69
+ ## 使用
70
 
71
+ 要使用呢個數據集,可以喺 Python 入面直接跑:
72
 
73
  ```python
74
  from datasets import load_dataset
 
76
  ds = load_dataset("CanCLID/zoengjyutgaai_saamgwokjinji")
77
  ```
78
 
79
+ 如果想單純將所有 wav 文件同對應嘅轉寫複製落嚟,可以跑下面嘅命令行嚟針對克隆個 `wav/` 路徑,避免將成個 repo 都克隆落嚟浪費空間同下載時間:
 
 
 
 
 
 
 
 
 
80
 
81
  ```bash
82
+ git clone --filter=blob:none --sparse https://huggingface.co/datasets/CanCLID/zoengjyutgaai_saamgwokjinji
83
+
84
  cd zoengjyutgaai_saamgwokjinji
 
85
 
 
86
  git sparse-checkout init --cone
87
+ git sparse-checkout set wav
88
+ git checkout
 
 
 
 
89
  ```
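An earlier revision of this README downloaded selected paths with `huggingface_hub` instead of git sparse checkout; a hedged sketch of that approach (the `download` wrapper name is mine, and it requires `pip install --upgrade huggingface_hub`):

```python
# Sketch based on the snapshot_download call in the previous README revision.
REPO_ID = "CanCLID/zoengjyutgaai_saamgwokjinji"
PATTERN = "wav/*"  # change to "srt/*" or "webm/*" for subtitles or source audio


def download(local_dir: str = "./") -> None:
    # Imported lazily so the sketch can be read without the dependency installed.
    from huggingface_hub import snapshot_download

    snapshot_download(repo_id=REPO_ID, allow_patterns=PATTERN,
                      local_dir=local_dir, repo_type="dataset")
```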
90
 
91
  ### 數據集構建流程
 
94
 
95
  1. 從 YouTube 或者國內評書網站度下載錄音源文件,一般都係每集半個鐘長嘅 `.webm` 或者 `.mp3`。
96
  1. 用加字幕工具幫呢啲錄音加字幕,得到對應嘅 `.srt` 文件。
97
+ 1. 將啲源錄音用下面嘅命令儘可能無壓縮噉轉換成 `.wav` 格式。
98
+ 1. 運行`cut.py`,將每一集 `.wav` 按照 `.srt` 入面嘅時間點切分成一句一個 `.wav`,然後對應嘅文本寫入本數據集嘅 `xxx.csv`。
99
  1. 然後打開一個 IPython,逐句跑下面嘅命令,將啲數據推上 HuggingFace。
100
 
101
  ```python
102
+ from datasets import load_dataset
103
  from huggingface_hub import login
104
+ dataset = load_dataset('audiofolder', data_dir='./wav')
 
 
 
 
 
 
 
105
  # 檢查下讀入嘅數據有冇問題
106
  dataset['train'][0]
107
  # 準備好個 token 嚟登入
 
110
  dataset.push_to_hub("CanCLID/zoengjyutgaai_saamgwokjinji")
111
  ```
112
 
113
+ ### 將`.webm`無損轉為`.wav`
114
 
115
  首先要安裝 [ffmpeg](https://www.ffmpeg.org/download.html),然後運行:
116
 
117
  ```bash
118
+ ffmpeg -i "001.webm" -vn -ar 44100 -c:a pcm_s16le "001.wav"
 
 
 
 
 
119
  ```
120
 
121
+ 如果唔想指定採樣率,儘可能無損轉換,可以將上面嘅 `-ar 44100` 刪去。**本數據集入面所有 wav 都已經轉為 44100 Hz 採樣率**。
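For converting a whole batch of episodes, the ffmpeg invocation above can be generated in a loop. A minimal sketch; the `ffmpeg_cmd` helper is illustrative and not part of the repo, and ffmpeg itself must be installed separately:

```python
from typing import Optional


def ffmpeg_cmd(episode: int, rate: Optional[int] = 44100) -> list[str]:
    """Build the ffmpeg argument list shown above for one episode number."""
    name = f"{episode:03d}"
    cmd = ["ffmpeg", "-i", f"{name}.webm", "-vn"]
    if rate is not None:  # drop -ar entirely for a conversion that is as lossless as possible
        cmd += ["-ar", str(rate)]
    return cmd + ["-c:a", "pcm_s16le", f"{name}.wav"]
```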
122
 
123
+ # Zoeng Jyut Gaai story-telling _Romance of the Three Kingdoms_ voice dataset
 
 
 
124
 
125
+ This is a speech dataset of Zoeng Jyut Gaai's story-telling of _Romance of the Three Kingdoms_. [Zoeng Jyut Gaai](https://zh.wikipedia.org/wiki/%E5%BC%A0%E6%82%A6%E6%A5%B7) was a famous actor, stand-up comedian and story-teller (講古佬) in 20th-century Canton. His voice remains in the memories of thousands of Cantonese people. This dataset is built from one of his most well-known story-telling pieces: _Romance of the Three Kingdoms_.
 
 
 
 
126
 
127
  Use case of this dataset:
128
 
 
138
  - All transcriptions follow the prescribed orthography detailed in https://jyutping.org/blog/typo/ and https://jyutping.org/blog/particles/
139
  - All transcriptions use full-width punctuation; no half-width punctuation is used.
140
  - All transcriptions are in Chinese characters, no Arabic numbers or Latin letters.
141
+ - All source audio is stored in `webm/`. For the convenience of training, segmented audio is resampled to 44.1 kHz and stored in `wav/`.
 
 
 
 
142
 
143
  ## Usage
144
 
 
150
  ds = load_dataset("CanCLID/zoengjyutgaai_saamgwokjinji")
151
  ```
152
 
153
+ To save space and download time and avoid cloning the entire repo, you can selectively clone only the `wav/` directory, which contains all the wav files and transcriptions:
 
 
 
 
 
 
 
 
 
154
 
155
  ```bash
156
+ git clone --filter=blob:none --sparse https://huggingface.co/datasets/CanCLID/zoengjyutgaai_saamgwokjinji
157
+
158
  cd zoengjyutgaai_saamgwokjinji
 
159
 
 
160
  git sparse-checkout init --cone
161
+ git sparse-checkout set wav
162
+ git checkout
163
+ ```
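After checking out `wav/`, the advertised 44.1 kHz 16-bit PCM format can be sanity-checked with Python's standard-library `wave` module. A small sketch; the demo file written here is synthetic, not a dataset file:

```python
import wave


def wav_params(path: str) -> tuple[int, int, int]:
    """Return (sample rate, sample width in bytes, channel count)."""
    with wave.open(path, "rb") as w:
        return w.getframerate(), w.getsampwidth(), w.getnchannels()


# Write a tiny synthetic demo file and read its parameters back.
with wave.open("demo.wav", "wb") as w:
    w.setnchannels(1)        # mono
    w.setsampwidth(2)        # 16-bit PCM, matching pcm_s16le
    w.setframerate(44100)
    w.writeframes(b"\x00\x00" * 441)

assert wav_params("demo.wav") == (44100, 2, 1)
```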
align.py DELETED
@@ -1,81 +0,0 @@
1
- import re
2
- import audioread
3
-
4
-
5
- def fix_srt_delay(input_file, output_file, delay_start_ms, delay_end_ms, audio_file):
6
- """
7
- Fixes a progressive delay in an SRT file by linearly adjusting timestamps.
8
- Gets video duration from an associated audio file.
9
-
10
- Args:
11
- input_file: Path to the input SRT file.
12
- output_file: Path to the output SRT file with corrected timestamps.
13
- delay_start_ms: Delay at the beginning of the video (in milliseconds).
14
- delay_end_ms: Delay at the end of the video (in milliseconds).
15
- audio_file: Path to the audio file to get duration from.
16
- """
17
-
18
- try:
19
- with audioread.audio_open(audio_file) as f:
20
- total_duration_ms = int(f.duration * 1000) # Get duration in milliseconds
21
- except audioread.NoBackendError:
22
- print("Error: No suitable audio backend found. Please install ffmpeg or another supported library.")
23
- return
24
- except Exception as e:
25
- print(f"Error reading audio file: {e}")
26
- return
27
-
28
- with open(input_file, 'r', encoding='utf-8') as infile, open(output_file, 'w', encoding='utf-8') as outfile:
29
- for line in infile:
30
- if '-->' in line:
31
- # Extract start and end timestamps
32
- start_time_str, end_time_str = line.strip().split(' --> ')
33
- start_time_ms = srt_time_to_milliseconds(start_time_str)
34
- end_time_ms = srt_time_to_milliseconds(end_time_str)
35
-
36
- # Calculate the adjusted delay for this subtitle
37
- progress = start_time_ms / total_duration_ms
38
- current_delay_ms = delay_start_ms + progress * (delay_end_ms - delay_start_ms)
39
-
40
- # Adjust the timestamps
41
- adjusted_start_time_ms = start_time_ms - current_delay_ms
42
- adjusted_end_time_ms = end_time_ms - current_delay_ms
43
-
44
- # Convert back to SRT time format
45
- adjusted_start_time_str = milliseconds_to_srt_time(adjusted_start_time_ms)
46
- adjusted_end_time_str = milliseconds_to_srt_time(adjusted_end_time_ms)
47
-
48
- # Write the corrected line to the output file
49
- outfile.write(f"{adjusted_start_time_str} --> {adjusted_end_time_str}\n")
50
- else:
51
- # Write non-timestamp lines as they are
52
- outfile.write(line)
53
-
54
-
55
- def srt_time_to_milliseconds(time_str):
56
- """Converts an SRT timestamp string to milliseconds."""
57
- hours, minutes, seconds_milliseconds = time_str.split(':')
58
- seconds, milliseconds = seconds_milliseconds.split(',')
59
- total_milliseconds = (int(hours) * 3600 + int(minutes) * 60 + int(seconds)) * 1000 + int(milliseconds)
60
- return total_milliseconds
61
-
62
-
63
- def milliseconds_to_srt_time(milliseconds):
64
- """Converts milliseconds to an SRT timestamp string."""
65
- milliseconds = int(milliseconds)
66
- seconds, milliseconds = divmod(milliseconds, 1000)
67
- minutes, seconds = divmod(seconds, 60)
68
- hours, minutes = divmod(minutes, 60)
69
- return f"{hours:02d}:{minutes:02d}:{seconds:02d},{milliseconds:03d}"
70
-
71
-
72
- delay_at_start_ms = 10 # Delay at the beginning (you think it starts from 0)
73
- delay_at_end_ms = 2200 # Delay at the end in milliseconds
74
-
75
- for i in range(46, 51):
76
- input_srt_file = f'srt/seoiwuzyun/{i:03d}.srt' # Your input SRT file
77
- output_srt_file = f'{i:03d}.srt' # Output file name
78
- audio_file_path = f'source/seoiwuzyun/{i:03d}.opus' # Your Opus audio file
79
-
80
- fix_srt_delay(input_srt_file, output_srt_file, delay_at_start_ms, delay_at_end_ms, audio_file_path)
81
- print(f"SRT file fixed and saved to {output_srt_file}")
cut.py CHANGED
@@ -1,38 +1,25 @@
1
  import csv
2
  import os
3
- from typing import Literal
4
 
5
  import pysrt
6
  from pydub import AudioSegment
7
 
8
 
9
- def srt_to_segments_with_metadata(srt_file, episode, audio_file, output_dir, metadata_file, subset: Literal["saamgwokjinji", "seoiwuzyun"]):
10
  # Load the SRT file
11
  subs = pysrt.open(srt_file)
12
 
13
- # Load the audio file (handling Opus)
14
- try:
15
- audio = AudioSegment.from_file(audio_file, codec="opus")
16
- except Exception as e:
17
- print(f"Error loading audio file: {audio_file}")
18
- print(f"Error message: {e}")
19
- return # Skip this file and move to the next
20
 
21
  # Ensure the output directory exists
22
  os.makedirs(output_dir, exist_ok=True)
23
 
24
- # Prepare the metadata file (appending to a single file)
25
  with open(metadata_file, mode='a', newline='', encoding='utf-8') as csvfile:
26
  csvwriter = csv.writer(csvfile)
27
-
28
- # Write header row if the file is empty
29
- if os.stat(metadata_file).st_size == 0:
30
- csvwriter.writerow(['id', 'episode_id', 'file_name', 'audio_duration', 'transcription'])
31
-
32
- if subset == "saamgwokjinji":
33
- prefix = "sg"
34
- elif subset == "seoiwuzyun":
35
- prefix = "sw"
36
 
37
  for index, sub in enumerate(subs):
38
  # Get start and end times in milliseconds
@@ -45,46 +32,31 @@ def srt_to_segments_with_metadata(srt_file, episode, audio_file, output_dir, met
45
  # Extract the audio segment
46
  segment = audio[start_time:end_time]
47
 
48
- # Define output filenames (using .opus extension)
49
- segment_filename = f"{episode}_{index + 1:03d}.opus"
50
- segment_id = f"{prefix}_{episode}_{index + 1:03d}"
51
  audio_filename = os.path.join(output_dir, segment_filename)
52
 
53
- # Export the audio segment as Opus
54
- try:
55
- segment.export(audio_filename, format="opus", codec="libopus")
56
- except Exception as e:
57
- print(f"Error exporting segment: {audio_filename}")
58
- print(f"Error message: {e}")
59
- continue # Skip to the next segment
60
 
61
  # Write to the metadata CSV file
62
  csvwriter.writerow(
63
  [segment_id, episode, f'{episode}/{segment_filename}', f'{duration:.3f}', sub.text])
64
 
65
- print(f"Segmentation of {audio_file} complete. Files saved to: {output_dir}")
66
- print(f"Metadata appended to: {metadata_file}")
67
 
68
 
69
  if __name__ == "__main__":
70
- # subset = "saamgwokjinji"
71
- subset = "seoiwuzyun"
72
-
73
- print(f"Processing subset: {subset}")
74
- # If append to existing metadata file
75
- metadata_file = os.path.join("opus", subset, "metadata.csv") # Define metadata file path here
76
-
77
- for episode in range(46, 51):
78
- # If creating new csv
79
- # metadata_file = f"{episode}.csv"
80
- episode_str = f'{episode:03d}' # Format episode as 3-digit string
81
-
82
- srt_file = f'srt/{subset}/{episode_str}.srt'
83
- audio_file = f'source/{subset}/{episode_str}.opus'
84
- output_dir = f'opus/{subset}/{episode_str}'
85
-
86
- # Only process if the audio file exists
87
- if os.path.exists(audio_file):
88
- srt_to_segments_with_metadata(srt_file, episode_str, audio_file, output_dir, metadata_file, subset)
89
- else:
90
- print(f"Audio file not found: {audio_file}. Skipping.")
 
1
  import csv
2
  import os
 
3
 
4
  import pysrt
5
  from pydub import AudioSegment
6
 
7
 
8
+ def srt_to_segments_with_metadata(srt_file, episode, audio_file, output_dir, metadata_file):
9
  # Load the SRT file
10
  subs = pysrt.open(srt_file)
11
 
12
+ # Load the audio file
13
+ audio = AudioSegment.from_wav(audio_file)
 
 
 
 
 
14
 
15
  # Ensure the output directory exists
16
  os.makedirs(output_dir, exist_ok=True)
17
 
18
+ # Prepare the metadata file
19
  with open(metadata_file, mode='a', newline='', encoding='utf-8') as csvfile:
20
  csvwriter = csv.writer(csvfile)
21
+ # csvwriter.writerow(['id', 'episode_id', 'file_name', 'audio_duration',
22
+ # 'transcription'])
 
 
 
 
 
 
 
23
 
24
  for index, sub in enumerate(subs):
25
  # Get start and end times in milliseconds
 
32
  # Extract the audio segment
33
  segment = audio[start_time:end_time]
34
 
35
+ # Define output filenames
36
+ segment_filename = f"{episode}_{index + 1:03d}.wav"
37
+ segment_id = f"{episode}_{index + 1:03d}"
38
  audio_filename = os.path.join(output_dir, segment_filename)
39
 
40
+ # Export the audio segment
41
+ segment.export(audio_filename, format="wav")
 
 
 
 
 
42
 
43
  # Write to the metadata CSV file
44
  csvwriter.writerow(
45
  [segment_id, episode, f'{episode}/{segment_filename}', f'{duration:.3f}', sub.text])
46
 
47
+ print("Segmentation complete. Files saved to:", output_dir)
48
+ print("Metadata saved to:", metadata_file)
49
 
50
 
51
  if __name__ == "__main__":
52
+ metadata_file = 'full.csv'
53
+
54
+ # ffmpeg -i "webm/110.webm" -vn -ar 44100 -c:a pcm_s16le "110.wav"
55
+ for episode in range(110, 111):
56
+ episode = f'{episode:03d}'
57
+
58
+ srt_file = f'srt/{episode}.srt'
59
+ audio_file = f'{episode}.wav'
60
+ output_dir = f'./wav/{episode}'
61
+ srt_to_segments_with_metadata(
62
+ srt_file, episode, audio_file, output_dir, metadata_file)
data/saamgwokjinji-00000-of-00005.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:20eca243c963ed547fc009e8fbc0f104ee242f51923ed57794c3dee84ebb533e
3
- size 506328526
 
 
 
 
data/saamgwokjinji-00001-of-00005.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:ef5d77a349af8431985b22d7cf14d3b381282a319bfb66f2b6492f729d1aa013
3
- size 490324223
 
 
 
 
data/saamgwokjinji-00002-of-00005.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:6d0720b693303668c7cc097434c3ef5956bea602a6497b8df8b1ce25a4830215
3
- size 506386657
 
 
 
 
data/saamgwokjinji-00003-of-00005.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:0ec77147df6bc3509b01b94655d55717f2d547a61f782b795738af17dea752f8
3
- size 537882717
 
 
 
 
data/saamgwokjinji-00004-of-00005.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:215bc6feed66a0a43e5abdadf3fddf43799a065f70cc20bf6f209a4f2a7034f3
3
- size 554860857
 
 
 
 
data/seoiwuzyun-00000-of-00003.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:11139ea152132a00291a008116e5a5e8598d0a64f684883f3614957684c290c2
3
- size 385168578
 
 
 
 
data/seoiwuzyun-00001-of-00003.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:f95793aab9f15d1e47fcaf98e162b8de308db6ca2413d49c804ce4ac271b4a48
3
- size 360869451
 
 
 
 
data/seoiwuzyun-00002-of-00003.parquet DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:e93430b2b602fbb7de9fbfd0ce8fd7fa148125f3e6cbdaaef6dbbc5064f4cb70
3
- size 348251397
 
 
 
 
data/train-00000-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:804e36a7a2602228ec5495d77f2a2cef9e2a1d66ccf707d92e444e073940fe4b
3
+ size 405754482
data/train-00001-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:06e1481e92b1dbf7975cc72517fca855d3293496fbba4d7da852d433b898de0b
3
+ size 440880352
data/train-00002-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4569ce7727fae686099aede9f4c5a0867827dce94d0c478c6a0a8464ecff82b4
3
+ size 427659951
data/train-00003-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f8f9bfb23d58d9a82e9faff9a5288f731462c5e4317505dd090e296e51f136ac
3
+ size 434456815
data/train-00004-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6aa60e529619be7e462090967e62b661db11343101bb5475b088d5d3d6515afc
3
+ size 473926628
data/train-00005-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d1497ed72fe43d651f663599cdba041abffa4eb068929b38bfee70a0962f6bff
3
+ size 442239390
data/train-00006-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a9d19965d549435b15716770993cc786db8353cef59cd27524e2090a8ed7b807
3
+ size 474846239
data/train-00007-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8b8dfed614ed172872db062f0ad2dfa4abb5980f207860987e44f1c6af8ada03
3
+ size 476779199
data/train-00008-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1326033cae056ba84ddf1439f701fec52c8aa9dd22605407557ce41b8181ebd4
3
+ size 538243739
data/train-00009-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aa82212c5cd31adbbba787d0d210fb990ded9a4ebe7ec9463ebebd8303fc0854
3
+ size 535165156
data/train-00010-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cfdf1e3e73581481c34997f2076a41a141b72396931dfda0693cc8b8ee845dd0
3
+ size 502723320
data/train-00011-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0e5dce021351b595a2ee1004b29e4685b00becd3b6790aadf46f019d0edc3d61
3
+ size 478763333
data/train-00012-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:00a685e3b64b38d255966bbbf60692acad44f61af5614a1906f1931cca1465cd
3
+ size 462527658
data/train-00013-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0a62c232f974b9ced2a3f230c3a9df80564615fe5e33d7cd3c6fbe14653df108
3
+ size 466536608
data/train-00014-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:294b6242ab7e3470c6a2cbe975787b6b2d877cf7e0c62a752ef720aa46aa13c5
3
+ size 463199767
data/train-00015-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:375bfdade3987811b7016a20bfbaaad9c39fc933aea2378a39e114e5d53bd6d5
3
+ size 457393514
data/train-00016-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3e7ee00c4ce286d932c18e2932f1ae29867b93c86412aa2432163f329928ec67
3
+ size 454875565
data/train-00017-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:49208f22f7c1d53c2e290f195f2640041f229a6c9561d185dbe2b74099bf9a03
3
+ size 454591512
data/train-00018-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8816ffddd681d3d4d544d28b877b42ca6c1ef4a43d222be6adcdc26ed5e2693d
3
+ size 443412635
data/train-00019-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b0f20e95ac9db3b22ed8d4ff5ccc4248e7ac9a634819175fddf13d951c2c4967
3
+ size 501378059
data/train-00020-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f7d6c141f16702d306d281a3c0475e3128fa95f6900e4922800527ba8ecc1c35
3
+ size 490186601
data/train-00021-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cf37e68dc24a6d3a259961de9de731fdcf66fc56307f413964c71271cacb7849
3
+ size 467487693
data/train-00022-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:eeade60b199bd90a3a36bbceb11c21dd9328a4bc1fe9ab2f2ad577020e959fa9
3
+ size 452852202
data/train-00023-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:983393f2805aaaebbf282e6fdf1d1603bb9eaef78046eb0fa6421f9f8cd85e68
3
+ size 455877744
data/train-00024-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5b83ecb5bcd31f0421588c3a9650fa831b49e4f1d42f306e1ed407d1c64931d3
3
+ size 464886892
data/train-00025-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:44e2794b20eea6794a30b96248531167ee7aacdf1ebd588c32d3fc70546bd8bb
3
+ size 431354330
data/train-00026-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:04921a0b0a68cb2639e0db54bfcd9a6c7616bbc971d14f1abd185d482893622c
3
+ size 475163089
data/train-00027-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a3fe416692f8eaa41fadf76b357014f1ccd69179ea0a290ed1867fd3c2b0073c
3
+ size 463432868
data/train-00028-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:285e9fbbf938a14e86d545962540caad081488d838a7660f079cd73b7505fa50
3
+ size 469288804
data/train-00029-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2bdf5792ac05ba6253dc4ab6edec3369bbd2835f3042012868077e64f870f188
3
+ size 506261326
data/train-00030-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d5b5fdf54adf70e769d881a219813abac4b60d754a575c7bb92dbcabdee9a0b4
3
+ size 464575468
data/train-00031-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:47712dc05d677b4e4595f188ee29cfdfc8a606c3bdb0072f4424e33bd863fe5a
3
+ size 412632577
data/train-00032-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d07324e5eb04a3e0bf536d99b3d7a519e4d015481a2f60b9322caa8e72b30c6c
3
+ size 470850865
data/train-00033-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ae1634fe6bff92533c6ebda5788bf43463f8fc5d096923dee2106125f63b5ccc
3
+ size 500377206
data/train-00034-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e991ab054b9516a55a58a37b444691023ae570f2371ba5b24b24995f5f2a7611
3
+ size 478536558
data/train-00035-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:16ad98ba2bf8c5d809d699a47b64271fa6459b9110811b011f8643af0d441a9a
3
+ size 448183800
data/train-00036-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b51b5b225ec8fa15fead539255b5966b70228714a11c375a8dfc445aef6796f2
3
+ size 475662192
data/train-00037-of-00078.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1fd28dfd4d56e36ea8927b26d750cb298669b8b695ca87c73082156b7a60f440
3
+ size 520689475