---
language:
- ko
license: apache-2.0
library_name: transformers
tags:
- audio
- automatic-speech-recognition
datasets:
- KsponSpeech
metrics:
- wer
---
# ko-spelling-wav2vec2-conformer-del-1s
## Table of Contents
- [ko-spelling-wav2vec2-conformer-del-1s](#ko-spelling-wav2vec2-conformer-del-1s)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Evaluation](#evaluation)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
- **Model Description:**
This model was pre-trained from scratch on the wav2vec2-conformer base architecture, <br />
then fine-tuned on KsponSpeech using Wav2Vec2ConformerForCTC. <br />
- Dataset used: [AIHub KsponSpeech](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=123) <br />
The training set was built by preprocessing that data ourselves. <br />
"del-1s" means that audio clips of 1 second or shorter were filtered out (see the sketch after this list). <br />
The model was trained on **orthographic-transcription (철자전사)** data, where numbers and English words keep their written form. <br />
- **Developed by:** TADev (@lIlBrother, @ddobokki, @jp42maru)
- **Language(s):** Korean
- **License:** apache-2.0
- **Parent Model:** See [wav2vec2-conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer) for more information about the base architecture. (This model was pre-trained from scratch on that architecture.)
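
A rough illustration of the del-1s filtering described above (not the authors' actual preprocessing code), assuming a hypothetical list of WAV paths `wav_paths`:

```python
import librosa

def longer_than_1s(audio_path: str) -> bool:
    # Load at 16 kHz (the sampling rate the model expects) and measure length.
    audio, sr = librosa.load(audio_path, sr=16000)
    return len(audio) / sr > 1.0

# Keep only clips strictly longer than 1 second.
filtered = [path for path in wav_paths if longer_than_1s(path)]
```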
## Evaluation
Evaluation uses `load_metric("wer")` from the Hugging Face `datasets` library, as sketched below. <br />
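A minimal sketch of that metric call (toy strings stand in for real transcripts; `load_metric("wer")` requires the `jiwer` package):

```python
from datasets import load_metric

wer_metric = load_metric("wer")
score = wer_metric.compute(
    predictions=["안녕하세요 123 테스트입니다."],  # model transcriptions
    references=["안녕하세요 123 테스트입니다."],  # ground-truth transcripts
)
print(score)  # 0.0 for an exact match
```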
## How to Get Started With the Model
```python
import unicodedata

import librosa
from pyctcdecode import build_ctcdecoder
from transformers import (
AutoConfig,
AutoFeatureExtractor,
AutoModelForCTC,
AutoTokenizer,
Wav2Vec2ProcessorWithLM,
)
from transformers.pipelines import AutomaticSpeechRecognitionPipeline
audio_path = ""
# Load the model, tokenizer, and the other modules needed for inference.
model = AutoModelForCTC.from_pretrained("42MARU/ko-spelling-wav2vec2-conformer-del-1s")
feature_extractor = AutoFeatureExtractor.from_pretrained("42MARU/ko-spelling-wav2vec2-conformer-del-1s")
tokenizer = AutoTokenizer.from_pretrained("42MARU/ko-spelling-wav2vec2-conformer-del-1s")
beamsearch_decoder = build_ctcdecoder(
labels=list(tokenizer.encoder.keys()),
kenlm_model_path=None,
)
processor = Wav2Vec2ProcessorWithLM(
feature_extractor=feature_extractor, tokenizer=tokenizer, decoder=beamsearch_decoder
)
# Assemble the loaded modules into a pipeline for actual inference.
asr_pipeline = AutomaticSpeechRecognitionPipeline(
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
decoder=processor.decoder,
device=-1,
)
# Load the audio file and run inference with an explicit beam-search width.
raw_data, _ = librosa.load(audio_path, sr=16000)
kwargs = {"decoder_kwargs": {"beam_width": 100}}
pred = asr_pipeline(inputs=raw_data, **kwargs)["text"]
# The model emits Jamo-decomposed Unicode text, so normalize it back to a regular (NFC) string.
result = unicodedata.normalize("NFC", pred)
print(result)
# 안녕하세요 123 테스트입니다. ("Hello, 123, this is a test.")
```
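Because `kenlm_model_path=None`, decoding above is plain beam search with no language model; `pyctcdecode`'s `build_ctcdecoder` also accepts a path to a KenLM ARPA or binary file if you want LM-fused decoding.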
*Beam-100 Result (WER, %)*:
| clean | other |
| ----- | ----- |
| 22.01 | 27.34 |