---
language:
- ko
license: apache-2.0
library_name: transformers
tags:
- audio
- automatic-speech-recognition
datasets:
- KsponSpeech
metrics:
- wer
---
# ko-spelling-wav2vec2-conformer-del-1s

## Table of Contents

- [Model Details](#model-details)
- [Evaluation](#evaluation)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)

## Model Details
- **Model Description:** This model was pre-trained from scratch on the wav2vec2-conformer base architecture, then fine-tuned on KsponSpeech using `Wav2Vec2ConformerForCTC`.
  - Dataset: AIHub KsponSpeech
  - The dataset was preprocessed into our own format before training.
  - `del-1s` means that audio clips of 1 second or shorter were filtered out.
  - The model was trained on spelling-style (orthographic) transcriptions, in which numbers and English words keep their written form.
- **Developed by:** TADev (@lIlBrother, @ddobokki, @jp42maru)
- **Language(s):** Korean
- **License:** apache-2.0
- **Parent Model:** See the wav2vec2-conformer for more information about the pre-trained base model. (This model was pre-trained from scratch on the wav2vec2-conformer base architecture.)
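The `del-1s` filtering described above can be sketched as follows. This is a minimal illustration, not the authors' preprocessing code; the 16 kHz sample rate and the `keep_clip` helper name are assumptions.

```python
SAMPLE_RATE = 16000  # KsponSpeech audio is commonly resampled to 16 kHz


def keep_clip(samples, sr: int = SAMPLE_RATE) -> bool:
    """Return True only for clips strictly longer than 1 second ("del-1s")."""
    return len(samples) / sr > 1.0


# A 0.5 s clip and an exactly-1 s clip are dropped; a 2 s clip is kept.
short_clip = [0.0] * (SAMPLE_RATE // 2)
one_second = [0.0] * SAMPLE_RATE
long_clip = [0.0] * (SAMPLE_RATE * 2)
print(keep_clip(short_clip), keep_clip(one_second), keep_clip(long_clip))
```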
## Evaluation

Just use `load_metric("wer")` from the Hugging Face `datasets` library.
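As a rough illustration of what the WER metric computes, here is a hand-rolled word-level edit-distance sketch. It is a stand-in for `datasets.load_metric("wer")`, not the library's implementation.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[-1][-1] / len(ref)


print(wer("a b c d", "a b x d"))  # one substitution over four words -> 0.25
```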
## How to Get Started With the Model
```python
import librosa
import unicodedata

from pyctcdecode import build_ctcdecoder
from transformers import (
    AutoConfig,
    AutoFeatureExtractor,
    AutoModelForCTC,
    AutoTokenizer,
    Wav2Vec2ProcessorWithLM,
)
from transformers.pipelines import AutomaticSpeechRecognitionPipeline

audio_path = ""

# Load the model, tokenizer, and the other modules needed for inference.
model = AutoModelForCTC.from_pretrained("42MARU/ko-spelling-wav2vec2-conformer-del-1s")
feature_extractor = AutoFeatureExtractor.from_pretrained("42MARU/ko-spelling-wav2vec2-conformer-del-1s")
tokenizer = AutoTokenizer.from_pretrained("42MARU/ko-spelling-wav2vec2-conformer-del-1s")
beamsearch_decoder = build_ctcdecoder(
    labels=list(tokenizer.encoder.keys()),
    kenlm_model_path=None,
)
processor = Wav2Vec2ProcessorWithLM(
    feature_extractor=feature_extractor, tokenizer=tokenizer, decoder=beamsearch_decoder
)

# Plug the modules defined above into the pipeline used for actual inference.
asr_pipeline = AutomaticSpeechRecognitionPipeline(
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    decoder=processor.decoder,
    device=-1,
)

# Load the audio file and run inference with the chosen beam-search parameters.
raw_data, _ = librosa.load(audio_path, sr=16000)
kwargs = {"decoder_kwargs": {"beam_width": 100}}
pred = asr_pipeline(inputs=raw_data, **kwargs)["text"]

# The model outputs jamo-decomposed (NFD) unicode text,
# so it must be normalized back to an ordinary (NFC) string.
result = unicodedata.normalize("NFC", pred)
print(result)
# ์๋ ํ์ธ์ 123 ํ ์คํธ์ ๋๋ค.
```
### Beam-100 Result (WER)

| "clean" | "other" |
| ------- | ------- |
| 22.01   | 27.34   |