Upload 16 files
- fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/config_st.yaml +19 -0
- fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/data/tst-COMMON/txt/tst-COMMON.en +32 -0
- fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/data/tst-COMMON/txt/tst-COMMON.hi +32 -0
- fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/data/tst-COMMON/txt/tst-COMMON.yaml +32 -0
- fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/data/tst-COMMON/wav/ted_1096.wav +3 -0
- fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip +3 -0
- fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/spm_unigram8000_st.model +3 -0
- fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/spm_unigram8000_st.txt +0 -0
- fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/tst-COMMON_st.tsv +33 -0
- fairseq_mustc_single_inference/app.py +64 -0
- fairseq_mustc_single_inference/gen.py +55 -0
- fairseq_mustc_single_inference/prep_mustc_data_hindi_single.py +263 -0
- fairseq_mustc_single_inference/s2t_en2hi.py +32 -0
- fairseq_mustc_single_inference/s2t_en2hi_nolog.py +32 -0
- fairseq_mustc_single_inference/st_avg_last_10_checkpoints.pt +3 -0
- fairseq_mustc_single_inference/test.wav +3 -0
fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/config_st.yaml
ADDED
@@ -0,0 +1,19 @@
+bpe_tokenizer:
+  bpe: sentencepiece
+  sentencepiece_model: ./spm_unigram8000_st.model
+input_channels: 1
+input_feat_per_channel: 80
+specaugment:
+  freq_mask_F: 27
+  freq_mask_N: 1
+  time_mask_N: 1
+  time_mask_T: 100
+  time_mask_p: 1.0
+  time_wrap_W: 0
+transforms:
+  '*':
+  - utterance_cmvn
+  _train:
+  - utterance_cmvn
+  - specaugment
+vocab_filename: spm_unigram8000_st.txt
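The `transforms` block in config_st.yaml above selects one transform chain for training splits (`_train`) and another for everything else (the `'*'` wildcard). A minimal pure-Python sketch of that selection, assuming fairseq's train-split convention; the dict mirrors the config, but the helper function and the `startswith("train")` check are our illustration, not fairseq's actual lookup code:

```python
# Mirror of the transforms section of config_st.yaml above.
TRANSFORMS = {
    "*": ["utterance_cmvn"],
    "_train": ["utterance_cmvn", "specaugment"],
}

def transforms_for_split(split: str) -> list:
    # Assumed convention: train splits get the _train chain (CMVN +
    # SpecAugment); eval splits like tst-COMMON fall back to '*'.
    key = "_train" if split.startswith("train") else "*"
    return TRANSFORMS[key]

print(transforms_for_split("train"))
print(transforms_for_split("tst-COMMON"))
```

This is why SpecAugment never touches the tst-COMMON audio used for inference here: it only appears in the `_train` chain.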
fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/data/tst-COMMON/txt/tst-COMMON.en
ADDED
@@ -0,0 +1,32 @@
+Back in New York, I am the head of development for a non-profit called Robin Hood.
+When I'm not fighting poverty, I'm fighting fires as the assistant captain of a volunteer fire company.
+Now in our town, where the volunteers supplement a highly skilled career staff, you have to get to the fire scene pretty early to get in on any action.
+I remember my first fire.
+I was the second volunteer on the scene, so there was a pretty good chance I was going to get in.
+But still it was a real footrace against the other volunteers to get to the captain in charge to find out what our assignments would be.
+When I found the captain, he was having a very engaging conversation with the homeowner, who was surely having one of the worst days of her life.
+Here it was, the middle of the night, she was standing outside in the pouring rain, under an umbrella, in her pajamas, barefoot, while her house was in flames.
+The other volunteer who had arrived just before me -- let's call him Lex Luther --
+(Laughter)
+got to the captain first and was asked to go inside and save the homeowner's dog.
+The dog! I was stunned with jealousy.
+Here was some lawyer or money manager who, for the rest of his life, gets to tell people that he went into a burning building to save a living creature, just because he beat me by five seconds.
+Well, I was next.
+The captain waved me over.
+He said, "Bezos, I need you to go into the house. I need you to go upstairs, past the fire, and I need you to get this woman a pair of shoes."
+(Laughter)
+I swear.
+So, not exactly what I was hoping for, but off I went -- up the stairs, down the hall, past the 'real' firefighters, who were pretty much done putting out the fire at this point, into the master bedroom to get a pair of shoes.
+Now I know what you're thinking, but I'm no hero.
+I carried my payload back downstairs where I met my nemesis and the precious dog by the front door.
+We took our treasures outside to the homeowner, where, not surprisingly, his received much more attention than did mine.
+A few weeks later, the department received a letter from the homeowner thanking us for the valiant effort displayed in saving her home.
+The act of kindness she noted above all others: someone had even gotten her a pair of shoes.
+In both my vocation at Robin Hood and my avocation as a volunteer firefighter, I am witness to acts of generosity and kindness on a monumental scale, but I'm also witness to acts of grace and courage on an individual basis.
+And you know what I've learned?
+They all matter.
+So as I look around this room at people who either have achieved, or are on their way to achieving, remarkable levels of success, I would offer this reminder: don't wait.
+Don't wait until you make your first million to make a difference in somebody's life.
+If you have something to give, give it now.
+Serve food at a soup kitchen. Clean up a neighborhood park.
+Be a mentor.
fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/data/tst-COMMON/txt/tst-COMMON.hi
ADDED
@@ -0,0 +1,32 @@
+न्यूयॉर्क में वापस, मैं रॉबिन हुड नामक एक गैर-लाभकारी संस्था के विकास का प्रमुख हूं।
+जब मैं गरीबी से नहीं लड़ रहा हूं, तो मैं स्वयंसेवी फायर कंपनी के सहायक कप्तान के रूप में आग से लड़ रहा हूं।
+अब हमारे शहर में, जहां स्वयंसेवक एक अत्यधिक कुशल कैरियर स्टाफ के पूरक हैं, आपको किसी भी कार्रवाई में शामिल होने के लिए आग के दृश्य पर बहुत जल्दी पहुंचना होगा।
+मुझे अपनी पहली आग याद है।
+मैं इस दृश्य पर दूसरा स्वयंसेवक था, इसलिए मेरे अंदर आने का एक अच्छा मौका था।
+लेकिन फिर भी यह अन्य स्वयंसेवकों के खिलाफ एक वास्तविक पदयात्रा थी जो प्रभारी कप्तान के पास यह पता लगाने के लिए थी कि हमारा कार्य क्या होगा।
+जब मैंने कप्तान को पाया, तो वह गृहस्वामी के साथ बहुत ही आकर्षक बातचीत कर रहा था, जो निश्चित रूप से उसके जीवन के सबसे बुरे दिनों में से एक था।
+यहाँ यह आधी रात थी, वह बारिश में बाहर, एक छतरी के नीचे, अपने पजामे में, नंगे पाँव खड़ी थी, जबकि उसका घर आग की लपटों में था।
+दूसरा स्वयंसेवक जो मुझसे ठीक पहले आया था -- चलो उसे लेक्स लूथर कहते हैं --
+(हँसी)
+पहले कप्तान के पास गया और उसे अंदर जाकर गृहस्वामी के कुत्ते को बचाने के लिए कहा गया।
+कुत्ता!
+यहाँ कोई वकील या मनी मैनेजर था, जो अपने पूरे जीवन के लिए लोगों को बताता है कि वह एक जलती हुई इमारत में एक जीवित प्राणी को बचाने के लिए गया था, सिर्फ इसलिए कि उसने मुझे पाँच सेकंड से पीटा।
+खैर, मैं अगला था।
+कप्तान ने मुझे लहराया।
+उन्होंने कहा, "बेज़ोस, मैं चाहता हूं कि आप घर में जाएं। मैं चाहता हूं कि आप ऊपर जाएं, आग को पार करें, और मैं चाहता हूं कि आप इस महिला को एक जोड़ी जूते दिलवाएं।"
+(हँसी)
+कसम है।
+तो, ठीक वैसा नहीं जैसा मैं उम्मीद कर रहा था, लेकिन मैं चला गया - सीढ़ियों से ऊपर, हॉल के नीचे, 'असली' अग्निशामकों के पीछे, जो इस बिंदु पर आग बुझाने के लिए बहुत कुछ कर चुके थे, मास्टर बेडरूम में
+अब मुझे पता है कि तुम क्या सोच रहे हो, लेकिन मैं हीरो नहीं हूं।
+मैं अपना पेलोड वापस नीचे की ओर ले गया जहाँ मैं अपने दास और कीमती कुत्ते से सामने के दरवाजे से मिला।
+हम अपने खजानों को बाहर गृहस्वामी के पास ले गए, जहां आश्चर्य की बात नहीं कि मेरे खजानों की तुलना में उनका अधिक ध्यान गया।
+कुछ सप्ताह बाद, विभाग को गृहस्वामी की ओर से एक पत्र प्राप्त हुआ जिसमें उन्होंने उसके घर को बचाने के लिए किए गए साहसिक प्रयास के लिए हमें धन्यवाद दिया।
+दयालुता का कार्य उसने अन्य सभी से ऊपर देखा: किसी ने उसे एक जोड़ी जूते भी दिलवाए थे।
+रॉबिन हुड में मेरे व्यवसाय और स्वयंसेवी फायर फाइटर के रूप में मेरे व्यवसाय दोनों में, मैं एक बड़े पैमाने पर उदारता और दयालुता के कृत्यों का साक्षी हूं, लेकिन मैं व्यक्तिगत आधार पर अनुग्रह और साहस के कार्यों का भी गवाह हूं।
+और आप जानते हैं कि मैंने क्या सीखा है?
+वे सब मायने रखते हैं।
+इसलिए जब मैं इस कमरे के चारों ओर ऐसे लोगों को देखता हूं, जिन्होंने या तो सफलता के उल्लेखनीय स्तर हासिल किए हैं, या हासिल करने के रास्ते पर हैं, तो मैं यह याद दिलाता हूं: प्रतीक्षा न करें।
+किसी के जीवन में बदलाव लाने के लिए अपना पहला मिलियन बनाने तक प्रतीक्षा न करें।
+अगर आपके पास देने के लिए कुछ है, तो अभी दे दो।
+सूप किचन में खाना परोसें।
+एक संरक्षक बनें।
fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/data/tst-COMMON/txt/tst-COMMON.yaml
ADDED
@@ -0,0 +1,32 @@
+- {duration: 5.0, offset: 0.0, rW: 17, uW: 0, speaker_id: spk.1096, wav: test.wav}
+- {duration: 5.160000, offset: 20.290000, rW: 17, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 8.110000, offset: 25.930000, rW: 29, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 1.560000, offset: 34.920000, rW: 5, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 4.180000, offset: 36.730000, rW: 21, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 5.580000, offset: 41.880000, rW: 26, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 8.610001, offset: 48.309999, rW: 27, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 9.680000, offset: 57.510000, rW: 29, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 4.280001, offset: 68.549999, rW: 14, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 0.140000, offset: 74.600000, rW: 1, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 6.950000, offset: 75.520000, rW: 16, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 3.280000, offset: 83.610000, rW: 7, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 10.350000, offset: 87.110000, rW: 37, uW: 1, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 2.049999, offset: 97.550000, rW: 4, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 1.420000, offset: 100.210000, rW: 5, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 8.520001, offset: 101.639999, rW: 32, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 0.210000, offset: 112.450000, rW: 1, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 0.480000, offset: 113.590000, rW: 2, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 13.990000, offset: 115.530000, rW: 44, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 4.790000, offset: 129.540000, rW: 10, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 6.080000, offset: 139.880000, rW: 19, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 7.420000, offset: 147.260000, rW: 19, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 6.040000, offset: 155.630000, rW: 23, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 5.920001, offset: 162.549999, rW: 18, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 13.310000, offset: 170.420000, rW: 41, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 0.910000, offset: 184.310000, rW: 6, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 0.760000, offset: 186.220000, rW: 3, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 10.600000, offset: 188.130000, rW: 31, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 3.050000, offset: 200.420000, rW: 15, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 2.810000, offset: 203.740000, rW: 9, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 2.920000, offset: 207.610000, rW: 11, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
+- {duration: 0.780000, offset: 211.120000, rW: 3, uW: 0, speaker_id: spk.1096, wav: ted_1096.wav}
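Each segment entry above carries `offset` and `duration` in seconds; the prep script turns those into integer sample offsets and lengths by multiplying with the wav's sample rate. A minimal sketch of that arithmetic, assuming 16 kHz for illustration (the real script reads the rate from the wav header via `sf.info`):

```python
def segment_to_samples(segment: dict, sample_rate: int) -> tuple:
    # Same seconds-to-samples conversion the prep script applies
    # to each entry of tst-COMMON.yaml (values are stored as strings
    # when loaded with yaml.BaseLoader, hence the float() casts).
    offset = int(float(segment["offset"]) * sample_rate)
    n_frames = int(float(segment["duration"]) * sample_rate)
    return offset, n_frames

# First entry of tst-COMMON.yaml above.
seg = {"duration": "5.0", "offset": "0.0", "speaker_id": "spk.1096", "wav": "test.wav"}
print(segment_to_samples(seg, 16_000))  # -> (0, 80000)
```

Sorting segments by `offset` within each wav, as the prep script does, then lets each 5-second-ish slice be cut out of the single ted_1096.wav without decoding the whole file per segment.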
fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/data/tst-COMMON/wav/ted_1096.wav
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69a122c3ad89320ec24cad84b622a01f26c3138b3e5869dc033e65bd0ab73fe1
+size 8990102
fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0bef03a45d7514d5018c4de30d352c736359248e6e8d70d586796aa32b30f4e2
+size 5242360
fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/spm_unigram8000_st.model
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf7b26c17db61dcd76400fbb74c5395d5f13837ed0fd5fa1098930de4f2a8202
+size 449800
fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/spm_unigram8000_st.txt
ADDED
The diff for this file is too large to render.
fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/tst-COMMON_st.tsv
ADDED
@@ -0,0 +1,33 @@
+id	audio	n_frames	tgt_text	speaker
+test_0	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:3674535:136768	427	न्यूयॉर्क में वापस, मैं रॉबिन हुड नामक एक गैर-लाभकारी संस्था के विकास का प्रमुख हूं।	spk.1096
+ted_1096_0	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:44:44928	140	कप्तान ने मुझे लहराया।	spk.1096
+ted_1096_1	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:296095:272128	850	उन्होंने कहा, "बेज़ोस, मैं चाहता हूं कि आप घर में जाएं। मैं चाहता हूं कि आप ऊपर जाएं, आग को पार करें, और मैं चाहता हूं कि आप इस महिला को एक जोड़ी जूते दिलवाएं।"	spk.1096
+ted_1096_2	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:4231705:6208	19	(हँसी)	spk.1096
+ted_1096_3	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:5032741:14848	46	कसम है।	spk.1096
+ted_1096_4	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:1725316:447168	1397	तो, ठीक वैसा नहीं जैसा मैं उम्मीद कर रहा था, लेकिन मैं चला गया - सीढ़ियों से ऊपर, हॉल के नीचे, 'असली' अग्निशामकों के पीछे, जो इस बिंदु पर आग बुझाने के लिए बहुत कुछ कर चुके थे, मास्टर बेडरूम में	spk.1096
+ted_1096_5	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:3811347:152768	477	अब मुझे पता है कि तुम क्या सोच रहे हो, लेकिन मैं हीरो नहीं हूं।	spk.1096
+ted_1096_6	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:4237957:194048	606	मैं अपना पेलोड वापस नीचे की ओर ले गया जहाँ मैं अपने दास और कीमती कुत्ते से सामने के दरवाजे से मिला।	spk.1096
+ted_1096_7	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:725413:236928	740	हम अपने खजानों को बाहर गृहस्वामी के पास ले गए, जहां आश्चर्य की बात नहीं कि मेरे खजानों की तुलना में उनका अधिक ध्यान गया।	spk.1096
+ted_1096_8	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:5047633:192768	602	कुछ सप्ताह बाद, विभाग को गृहस्वामी की ओर से एक पत्र प्राप्त हुआ जिसमें उन्होंने उसके घर को बचाने के लिए किए गए साहसिक प्रयास के लिए हमें धन्यवाद दिया।	spk.1096
+ted_1096_9	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:1184318:188928	590	दयालुता का कार्य उसने अन्य सभी से ऊपर देखा: किसी ने उसे एक जोड़ी जूते भी दिलवाए थे।	spk.1096
+ted_1096_10	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:2172529:425408	1329	रॉबिन हुड में मेरे व्यवसाय और स्वयंसेवी फायर फाइटर के रूप में मेरे व्यवसाय दोनों में, मैं एक बड़े पैमाने पर उदारता और दयालुता के कृत्यों का साक्षी हूं, लेकिन मैं व्यक्तिगत आधार पर अनुग्रह और साहस के कार्यों का भी गवाह हूं।	spk.1096
+ted_1096_11	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:3267448:28608	89	और आप जानते हैं कि मैंने क्या सीखा है?	spk.1096
+ted_1096_12	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:701561:23808	74	वे सब मायने रखते हैं।	spk.1096
+ted_1096_13	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:2597982:338688	1058	इसलिए जब मैं इस कमरे के चारों ओर ऐसे लोगों को देखता हूं, जिन्होंने या तो सफलता के उल्लेखनीय स्तर हासिल किए हैं, या हासिल करने के रास्ते पर हैं, तो मैं यह याद दिलाता हूं: प्रतीक्षा न करें।	spk.1096
+ted_1096_14	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:4432050:164608	514	जब मैं गरीबी से नहीं लड़ रहा हूं, तो मैं स्वयंसेवी फायर कंपनी के सहायक कप्तान के रूप में आग से लड़ रहा हूं।	spk.1096
+ted_1096_15	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:198963:97088	303	किसी के जीवन में बदलाव लाने के लिए अपना पहला मिलियन बनाने तक प्रतीक्षा न करें।	spk.1096
+ted_1096_16	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:3964160:89408	279	अगर आपके पास देने के लिए कुछ है, तो अभी दे दो।	spk.1096
+ted_1096_17	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:1373291:92928	290	सूप किचन में खाना परोसें।	spk.1096
+ted_1096_18	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:4596703:24448	76	एक संरक्षक बनें।	spk.1096
+ted_1096_19	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:1466264:259008	809	अब हमारे शहर में, जहां स्वयंसेवक एक अत्यधिक कुशल कैरियर स्टाफ के पूरक हैं, आपको किसी भी कार्रवाई में शामिल होने के लिए आग के दृश्य पर बहुत जल्दी पहुंचना होगा।	spk.1096
+ted_1096_20	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:45017:49408	154	मुझे अपनी पहली आग याद है।	spk.1096
+ted_1096_21	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:568268:133248	416	मैं इस दृश्य पर दूसरा स्वयंसेवक था, इसलिए मेरे अंदर आने का एक अच्छा मौका था।	spk.1096
+ted_1096_22	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:4053613:178048	556	लेकिन फिर भी यह अन्य स्वयंसेवकों के खिलाफ एक वास्तविक पदयात्रा थी जो प्रभारी कप्तान के पास यह पता लगाने के लिए थी कि हमारा कार्य क्या होगा।	spk.1096
+ted_1096_23	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:4757689:275008	859	जब मैंने कप्तान को पाया, तो वह गृहस्वामी के साथ बहुत ही आकर्षक बातचीत कर रहा था, जो निश्चित रूप से उसके जीवन के सबसे बुरे दिनों में से एक था।	spk.1096
+ted_1096_24	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:3365247:309248	966	यहाँ यह आधी रात थी, वह बारिश में बाहर, एक छतरी के नीचे, अपने पजामे में, नंगे पाँव खड़ी थी, जबकि उसका घर आग की लपटों में था।	spk.1096
+ted_1096_25	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:4621196:136448	426	दूसरा स्वयंसेवक जो मुझसे ठीक पहले आया था -- चलो उसे लेक्स लूथर कहते हैं --	spk.1096
+ted_1096_26	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:3296101:3968	12	(हँसी)	spk.1096
+ted_1096_27	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:962386:221888	693	पहले कप्तान के पास गया और उसे अंदर जाकर गृहस्वामी के कुत्ते को बचाने के लिए कहा गया।	spk.1096
+ted_1096_28	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:94470:104448	326	कुत्ता!	spk.1096
+ted_1096_29	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:2936715:330688	1033	यहाँ कोई वकील या मनी मैनेजर था, जो अपने पूरे जीवन के लिए लोगों को बताता है कि वह एक जलती हुई इमारत में एक जीवित प्राणी को बचाने के लिए गया था, सिर्फ इसलिए कि उसने मुझे पाँच सेकंड से पीटा।	spk.1096
+ted_1096_30	/home/deepakprasad/nlp_code/fairseq_mustc_single_inference/MUSTC_ROOT/en-hi/fbank80.zip:3300114:65088	203	खैर, मैं अगला था।	spk.1096
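The manifest above is tab-separated with the five columns its header declares (`id`, `audio`, `n_frames`, `tgt_text`, `speaker`); the `audio` field packs the zip path with a byte offset and length. A minimal sketch of reading one row with the stdlib `csv` module; the row string here is shortened for illustration and is not a verbatim manifest line:

```python
import csv
import io

# Header matches MANIFEST_COLUMNS in prep_mustc_data_hindi_single.py;
# the audio path is abbreviated for this example.
header = "id\taudio\tn_frames\ttgt_text\tspeaker"
row = "ted_1096_0\tfbank80.zip:44:44928\t140\tकप्तान ने मुझे लहराया।\tspk.1096"

reader = csv.DictReader(io.StringIO(header + "\n" + row), delimiter="\t")
record = next(reader)
print(record["id"], record["n_frames"], record["speaker"])
```

Note the absolute `/home/deepakprasad/...` paths baked into the real rows: the manifest is machine-specific, which is presumably why app.py regenerates it via the prep script on every request rather than shipping it as-is.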
fairseq_mustc_single_inference/app.py
ADDED
@@ -0,0 +1,64 @@
+"""
+Script to translate given single english audio file to corresponding hindi text
+Usage : python s2t_en2hi.py <audio_file_path> <averaged_checkpoints_file_path>
+"""
+
+
+
+import gradio as gr
+import sys
+import os
+import subprocess
+from huggingface_hub import snapshot_download
+
+
+def install_fairseq():
+    try:
+        # Run pip install command to install fairseq
+        subprocess.check_call(["pip", "install", "fairseq"])
+        subprocess.check_call(["pip", "install", "sentencepiece"])
+        return "fairseq successfully installed!"
+    except subprocess.CalledProcessError as e:
+        return f"An error occurred while installing fairseq: {str(e)}"
+
+huggingface_model_dir = snapshot_download(repo_id="balaramas/en_hi_s2t")
+print(huggingface_model_dir)
+
+os.system("cd fairseq_mustc_single_inference")
+
+
+def run_my_code(input_text):
+    # TODO better argument handling
+    hi_wav = input_text
+    en2hi_model_checkpoint = "st_avg_last_10_checkpoints.pt"
+    os.system(f"cp {hi_wav} ./MUSTC_ROOT/en-hi/data/tst-COMMON/wav/test.wav")
+
+    print("------Starting data preparation...")
+    subprocess.run(["python", "prep_mustc_data_hindi_single.py", "--data-root", "MUSTC_ROOT/", "--task", "st", "--vocab-type", "unigram", "--vocab-size", "8000"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
+
+    print("------Performing translation...")
+    translation_result = subprocess.run(["fairseq-generate", "./MUSTC_ROOT/en-hi/", "--config-yaml", "config_st.yaml", "--gen-subset", "tst-COMMON_st", "--task", "speech_to_text", "--path", en2hi_model_checkpoint, "--max-tokens", "50000", "--beam", "5", "--scoring", "sacrebleu"], capture_output=True, text=True)
+    translation_result_text = translation_result.stdout
+    lines = translation_result_text.split("\n")
+    output_text = ""
+    print("\n\n------Translation results are:")
+    for i in lines:
+        if (i.startswith("D-0")):
+            print(i.split("\t")[2])
+            output_text = i.split("\t")[2]
+            break
+
+    os.system("rm ./MUSTC_ROOT/en-hi/data/tst-COMMON/wav/test.wav")
+    return output_text
+
+install_fairseq()
+
+# Define the input and output interfaces for Gradio
+input_textbox = gr.inputs.Textbox(label="Input Text")
+output_textbox = gr.outputs.Textbox(label="Output Text")
+
+# Create a Gradio interface
+iface = gr.Interface(fn=run_my_code, inputs=input_textbox, outputs=output_textbox, title="My Code Runner")
+
+# Launch the interface
+iface.launch()
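run_my_code above recovers the translation by scanning fairseq-generate's stdout for the `D-0` line and taking the third tab-separated field (the fields are id, score, detokenized hypothesis). A standalone sketch of that parsing on a mocked output string; the sample lines below are illustrative, not real model output:

```python
def extract_hypothesis(generate_stdout: str) -> str:
    # Same logic as run_my_code in app.py: detokenized hypotheses
    # appear as "D-<id>\t<score>\t<text>" lines; we want sample 0.
    for line in generate_stdout.split("\n"):
        if line.startswith("D-0"):
            return line.split("\t")[2]
    return ""

# Mocked fairseq-generate output (S = source, H = tokenized, D = detokenized).
mock = "S-0\tsource audio\nH-0\t-0.5\t\u2581tokens\nD-0\t-0.5\tमुझे अपनी पहली आग याद है।\n"
print(extract_hypothesis(mock))
```

One caveat worth knowing when reusing this pattern: `startswith("D-0")` would also match `D-01`, `D-02`, etc., so it is only safe here because a single-utterance manifest yields sample ids 0 and upward with the break after the first hit.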
fairseq_mustc_single_inference/gen.py
ADDED
@@ -0,0 +1,55 @@
+import torch
+import torchaudio
+import torchaudio.transforms as transforms
+from fairseq.models.wav2vec import Wav2VecModel
+from fairseq.data.data_utils import post_process
+from fairseq.tasks.audio import AudioPretrainingTask
+from fairseq import checkpoint_utils
+from examples.speech_to_text.data_utils import extract_fbank_features
+import soundfile as sf  # needed below for sf.read
+def main(audio_path, checkpoint_path):
+    # Load the audio file
+
+    def extract_features(audio_path):
+        waveform, sample_rate = sf.read(audio_path)
+        features = extract_fbank_features(waveform, sample_rate)
+        return features
+
+
+    fbank_features = extract_features(audio_path).numpy()
+
+    # Load the pre-trained model checkpoint
+    model, cfg, task = checkpoint_utils.load_model_ensemble_and_task([checkpoint_path])
+    model = model[0]
+    model.eval()
+
+    # Convert the fbank features to a torch tensor
+    fbank_tensor = torch.from_numpy(fbank_features)
+
+    # Apply normalization if necessary
+    fbank_tensor = task.apply_input_transform(fbank_tensor)
+
+    # Move the tensor to the same device as the model
+    fbank_tensor = fbank_tensor.to(cfg.common.device)
+
+    # Wrap the fbank tensor in a dictionary to match the fairseq batch format
+    sample = {"net_input": {"source": fbank_tensor.unsqueeze(0)}}
+
+    # Perform fairseq generation
+    with torch.no_grad():
+        hypos = task.inference_step(generator=model, models=[model], sample=sample)
+
+    # Extract the predicted tokens from the top hypothesis
+    hypo_tokens = hypos[0][0]["tokens"].int().cpu()
+
+    # Convert tokens to string using the target dictionary and post-processing
+    hypo_str = post_process(hypo_tokens, cfg.task.target_dictionary)
+
+    return hypo_str
+
+
+if __name__ == "__main__":
+    audio_file_path = "/content/drive/MyDrive/en2hi/fairseq_mustc_single_inference/test.wav"
+    checkpoint_path = "/content/drive/MyDrive/en2hi/fairseq_mustc_single_inference/st_avg_last_10_checkpoints.pt"
+    prediction = main(audio_file_path, checkpoint_path)
+    print("Predicted text:", prediction)
fairseq_mustc_single_inference/prep_mustc_data_hindi_single.py
ADDED
@@ -0,0 +1,263 @@
+#!/usr/bin/env python3
+# Copyright (c) Facebook, Inc. and its affiliates.
+#
+# This source code is licensed under the MIT license found in the
+# LICENSE file in the root directory of this source tree.
+
+import argparse
+import logging
+import os
+from pathlib import Path
+import shutil
+from itertools import groupby
+from tempfile import NamedTemporaryFile
+from typing import Tuple
+
+import numpy as np
+import pandas as pd
+import soundfile as sf
+from examples.speech_to_text.data_utils import (
+    create_zip,
+    extract_fbank_features,
+    filter_manifest_df,
+    gen_config_yaml,
+    gen_vocab,
+    get_zip_manifest,
+    load_df_from_tsv,
+    save_df_to_tsv,
+    cal_gcmvn_stats,
+)
+import torch
+from torch.utils.data import Dataset
+from tqdm import tqdm
+
+from fairseq.data.audio.audio_utils import get_waveform, convert_waveform
+
+
+log = logging.getLogger(__name__)
+
+
+MANIFEST_COLUMNS = ["id", "audio", "n_frames", "tgt_text", "speaker"]
+
+
+class MUSTC(Dataset):
+    """
+    Create a Dataset for MuST-C. Each item is a tuple of the form:
+    waveform, sample_rate, source utterance, target utterance, speaker_id,
+    utterance_id
+    """
+
+    SPLITS = ["tst-COMMON"]
+    LANGUAGES = ["de", "es", "fr", "it", "nl", "pt", "ro", "ru", "hi"]
+
+    def __init__(self, root: str, lang: str, split: str) -> None:
+        assert split in self.SPLITS and lang in self.LANGUAGES
+        _root = Path(root) / f"en-{lang}" / "data" / split
+        wav_root, txt_root = _root / "wav", _root / "txt"
+        #print(_root, wav_root, txt_root)
+        assert _root.is_dir() and wav_root.is_dir() and txt_root.is_dir()
+        # Load audio segments
+        try:
+            import yaml
+        except ImportError:
+            print("Please install PyYAML to load the MuST-C YAML files")
+        with open(txt_root / f"{split}.yaml") as f:
+            segments = yaml.load(f, Loader=yaml.BaseLoader)
+        # Load source and target utterances
+        for _lang in ["en", lang]:
+            with open(txt_root / f"{split}.{_lang}") as f:
+                utterances = [r.strip() for r in f]
+            print(len(segments), len(utterances))
+            assert len(segments) == len(utterances)
+            for i, u in enumerate(utterances):
+                segments[i][_lang] = u
+        # Gather info
+        self.data = []
+        for wav_filename, _seg_group in groupby(segments, lambda x: x["wav"]):
+            wav_path = wav_root / wav_filename
+            sample_rate = sf.info(wav_path.as_posix()).samplerate
+            seg_group = sorted(_seg_group, key=lambda x: x["offset"])
+            for i, segment in enumerate(seg_group):
+                offset = int(float(segment["offset"]) * sample_rate)
+                n_frames = int(float(segment["duration"]) * sample_rate)
+                _id = f"{wav_path.stem}_{i}"
+                self.data.append(
+                    (
+                        wav_path.as_posix(),
+                        offset,
+                        n_frames,
+                        sample_rate,
+                        segment["en"],
+                        segment[lang],
+                        segment["speaker_id"],
+                        _id,
+                    )
+                )
+
+    def __getitem__(
+        self, n: int
+    ) -> Tuple[torch.Tensor, int, str, str, str, str]:
+        wav_path, offset, n_frames, sr, src_utt, tgt_utt, spk_id, \
+            utt_id = self.data[n]
+        waveform, _ = get_waveform(wav_path, frames=n_frames, start=offset)
+        waveform = torch.from_numpy(waveform)
+        return waveform, sr, src_utt, tgt_utt, spk_id, utt_id
+
+    def __len__(self) -> int:
+        return len(self.data)
+
+
+def process(args):
+    root = Path(args.data_root).absolute()
+    for lang in MUSTC.LANGUAGES:
+        cur_root = root / f"en-{lang}"
+        if not cur_root.is_dir():
+            print(f"{cur_root.as_posix()} does not exist. Skipped.")
+            continue
+        # Extract features
+        audio_root = cur_root / ("flac" if args.use_audio_input else "fbank80")
+        audio_root.mkdir(exist_ok=True)
+
+        for split in MUSTC.SPLITS:
+            print(f"Fetching split {split}...")
+            dataset = MUSTC(root.as_posix(), lang, split)
+            if args.use_audio_input:
+                print("Converting audios...")
+                for waveform, sample_rate, _, _, _, utt_id in tqdm(dataset):
+                    tgt_sample_rate = 16_000
+                    _wavform, _ = convert_waveform(
+                        waveform, sample_rate, to_mono=True,
+                        to_sample_rate=tgt_sample_rate
+                    )
+                    sf.write(
+                        (audio_root / f"{utt_id}.flac").as_posix(),
+                        _wavform.T.numpy(), tgt_sample_rate
+                    )
+            else:
+                print("Extracting log mel filter bank features...")
+                gcmvn_feature_list = []
+                if split == 'train' and args.cmvn_type == "global":
+                    print("And estimating cepstral mean and variance stats...")
+
+                for waveform, sample_rate, _, _, _, utt_id in tqdm(dataset):
+                    features = extract_fbank_features(
+                        waveform, sample_rate, audio_root / f"{utt_id}.npy"
+                    )
+                    if split == 'train' and args.cmvn_type == "global":
+                        if len(gcmvn_feature_list) < args.gcmvn_max_num:
+                            gcmvn_feature_list.append(features)
+
+                if split == 'train' and args.cmvn_type == "global":
+                    # Estimate and save cmv
+                    stats = cal_gcmvn_stats(gcmvn_feature_list)
+                    with open(cur_root / "gcmvn.npz", "wb") as f:
+                        np.savez(f, mean=stats["mean"], std=stats["std"])
+
+        # Pack features into ZIP
+        zip_path = cur_root / f"{audio_root.name}.zip"
+        print("ZIPing audios/features...")
+        create_zip(audio_root, zip_path)
+        print("Fetching ZIP manifest...")
+        audio_paths, audio_lengths = get_zip_manifest(
+            zip_path,
+            is_audio=args.use_audio_input,
+        )
+        # Generate TSV manifest
+        print("Generating manifest...")
+        train_text = []
+        for split in MUSTC.SPLITS:
+            is_train_split = split.startswith("train")
+            manifest = {c: [] for c in MANIFEST_COLUMNS}
+            dataset = MUSTC(args.data_root, lang, split)
+            for _, _, src_utt, tgt_utt, speaker_id, utt_id in tqdm(dataset):
|
173 |
+
manifest["id"].append(utt_id)
|
174 |
+
manifest["audio"].append(audio_paths[utt_id])
|
175 |
+
manifest["n_frames"].append(audio_lengths[utt_id])
|
176 |
+
manifest["tgt_text"].append(
|
177 |
+
src_utt if args.task == "asr" else tgt_utt
|
178 |
+
)
|
179 |
+
manifest["speaker"].append(speaker_id)
|
180 |
+
if is_train_split:
|
181 |
+
train_text.extend(manifest["tgt_text"])
|
182 |
+
df = pd.DataFrame.from_dict(manifest)
|
183 |
+
df = filter_manifest_df(df, is_train_split=is_train_split)
|
184 |
+
save_df_to_tsv(df, cur_root / f"{split}_{args.task}.tsv")
|
185 |
+
# Clean up
|
186 |
+
shutil.rmtree(audio_root)
|
187 |
+
|
188 |
+
|
189 |
+
def process_joint(args):
|
190 |
+
cur_root = Path(args.data_root)
|
191 |
+
assert all(
|
192 |
+
(cur_root / f"en-{lang}").is_dir() for lang in MUSTC.LANGUAGES
|
193 |
+
), "do not have downloaded data available for all 8 languages"
|
194 |
+
# Generate vocab
|
195 |
+
vocab_size_str = "" if args.vocab_type == "char" else str(args.vocab_size)
|
196 |
+
spm_filename_prefix = f"spm_{args.vocab_type}{vocab_size_str}_{args.task}"
|
197 |
+
with NamedTemporaryFile(mode="w") as f:
|
198 |
+
for lang in MUSTC.LANGUAGES:
|
199 |
+
tsv_path = cur_root / f"en-{lang}" / f"train_{args.task}.tsv"
|
200 |
+
df = load_df_from_tsv(tsv_path)
|
201 |
+
for t in df["tgt_text"]:
|
202 |
+
f.write(t + "\n")
|
203 |
+
special_symbols = None
|
204 |
+
if args.task == 'st':
|
205 |
+
special_symbols = [f'<lang:{lang}>' for lang in MUSTC.LANGUAGES]
|
206 |
+
gen_vocab(
|
207 |
+
Path(f.name),
|
208 |
+
cur_root / spm_filename_prefix,
|
209 |
+
args.vocab_type,
|
210 |
+
args.vocab_size,
|
211 |
+
special_symbols=special_symbols
|
212 |
+
)
|
213 |
+
# Generate config YAML
|
214 |
+
gen_config_yaml(
|
215 |
+
cur_root,
|
216 |
+
spm_filename=spm_filename_prefix + ".model",
|
217 |
+
yaml_filename=f"config_{args.task}.yaml",
|
218 |
+
specaugment_policy="ld",
|
219 |
+
prepend_tgt_lang_tag=(args.task == "st"),
|
220 |
+
)
|
221 |
+
# Make symbolic links to manifests
|
222 |
+
for lang in MUSTC.LANGUAGES:
|
223 |
+
for split in MUSTC.SPLITS:
|
224 |
+
src_path = cur_root / f"en-{lang}" / f"{split}_{args.task}.tsv"
|
225 |
+
desc_path = cur_root / f"{split}_{lang}_{args.task}.tsv"
|
226 |
+
if not desc_path.is_symlink():
|
227 |
+
os.symlink(src_path, desc_path)
|
228 |
+
|
229 |
+
|
230 |
+
def main():
|
231 |
+
parser = argparse.ArgumentParser()
|
232 |
+
parser.add_argument("--data-root", "-d", required=True, type=str)
|
233 |
+
parser.add_argument(
|
234 |
+
"--vocab-type",
|
235 |
+
default="unigram",
|
236 |
+
required=True,
|
237 |
+
type=str,
|
238 |
+
choices=["bpe", "unigram", "char"],
|
239 |
+
),
|
240 |
+
parser.add_argument("--vocab-size", default=8000, type=int)
|
241 |
+
parser.add_argument("--task", type=str, choices=["asr", "st"])
|
242 |
+
parser.add_argument("--joint", action="store_true", help="")
|
243 |
+
parser.add_argument(
|
244 |
+
"--cmvn-type", default="utterance",
|
245 |
+
choices=["global", "utterance"],
|
246 |
+
help="The type of cepstral mean and variance normalization"
|
247 |
+
)
|
248 |
+
parser.add_argument(
|
249 |
+
"--gcmvn-max-num", default=150000, type=int,
|
250 |
+
help="Maximum number of sentences to use to estimate global mean and "
|
251 |
+
"variance"
|
252 |
+
)
|
253 |
+
parser.add_argument("--use-audio-input", action="store_true")
|
254 |
+
args = parser.parse_args()
|
255 |
+
|
256 |
+
if args.joint:
|
257 |
+
process_joint(args)
|
258 |
+
else:
|
259 |
+
process(args)
|
260 |
+
|
261 |
+
|
262 |
+
if __name__ == "__main__":
|
263 |
+
main()
|
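The manifest loop above fills the `MANIFEST_COLUMNS` dict (`id`, `audio`, `n_frames`, `tgt_text`, `speaker`, as used in the script) and writes it out as a TSV that fairseq's `speech_to_text` task consumes. As a minimal standalone sketch of that layout — the example row values (utterance id, zip byte offsets, Hindi text) are hypothetical:

```python
import csv
import io

# Assumed to match MANIFEST_COLUMNS in the prep script above.
MANIFEST_COLUMNS = ["id", "audio", "n_frames", "tgt_text", "speaker"]


def rows_to_tsv(rows):
    """Serialize manifest rows (dicts) into the tab-separated layout
    that save_df_to_tsv produces for fairseq."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=MANIFEST_COLUMNS, delimiter="\t", lineterminator="\n"
    )
    writer.writeheader()
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()


# One hypothetical utterance entry; "audio" points into the feature zip
# as <zip name>:<byte offset>:<byte length>.
example = {
    "id": "ted_1096_0",
    "audio": "fbank80.zip:1024:2048",
    "n_frames": 512,
    "tgt_text": "यह एक उदाहरण है",
    "speaker": "spk.1096",
}
print(rows_to_tsv([example]))
```

This only illustrates the file shape; the real script builds the rows from the extracted fbank features and the `tst-COMMON.yaml` segment metadata.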
fairseq_mustc_single_inference/s2t_en2hi.py
ADDED
@@ -0,0 +1,32 @@
+"""
+Script to translate a given single English audio file to the corresponding Hindi text.
+
+Usage: python s2t_en2hi.py <audio_file_path> <averaged_checkpoints_file_path>
+"""
+
+import sys
+import os
+import subprocess
+
+# TODO better argument handling
+hi_wav = sys.argv[1]
+en2hi_model_checkpoint = sys.argv[2]
+
+os.system(f"cp {hi_wav} ./MUSTC_ROOT/en-hi/data/tst-COMMON/wav/test.wav")
+
+print("------Starting data preparation...")
+subprocess.run(["python", "prep_mustc_data_hindi_single.py", "--data-root", "MUSTC_ROOT/", "--task", "st", "--vocab-type", "unigram", "--vocab-size", "8000"], stdout=subprocess.DEVNULL)
+
+print("------Performing translation...")
+translation_result = subprocess.run(["fairseq-generate", "./MUSTC_ROOT/en-hi/", "--config-yaml", "config_st.yaml", "--gen-subset", "tst-COMMON_st", "--task", "speech_to_text", "--path", en2hi_model_checkpoint, "--max-tokens", "50000", "--beam", "5", "--scoring", "sacrebleu"], capture_output=True, text=True)
+translation_result_text = translation_result.stdout
+print(translation_result_text)
+lines = translation_result_text.split("\n")
+
+print("\n\n------Translation results are:")
+for i in lines:
+    if i.startswith("D-0"):
+        print(i)
+        break
+
+os.system("rm ./MUSTC_ROOT/en-hi/data/tst-COMMON/wav/test.wav")
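Both `s2t_en2hi*.py` scripts recover the translation by scanning `fairseq-generate`'s stdout for the detokenized-hypothesis line, which has the form `D-<sample id>\t<score>\t<text>` (the nolog variant takes `split("\t")[2]`). A minimal sketch of that parsing step — the log excerpt below is hypothetical, not real model output:

```python
def extract_hypothesis(generate_stdout: str, sample_id: int = 0):
    """Pull the detokenized hypothesis for one sample out of
    fairseq-generate stdout; D-lines look like 'D-0\\t<score>\\t<text>'."""
    prefix = f"D-{sample_id}\t"
    for line in generate_stdout.split("\n"):
        if line.startswith(prefix):
            # Field 0 is the tag, field 1 the score, field 2 the text.
            return line.split("\t")[2]
    return None


# Hypothetical excerpt of fairseq-generate output:
log = (
    "H-0\t-0.4\t\u2581\u092f\u0939 \u2581\u090f\u0915 \u2581\u092a\u0930\u0940\u0915\u094d\u0937\u093e \u2581\u0939\u0948\n"
    "D-0\t-0.4\t\u092f\u0939 \u090f\u0915 \u092a\u0930\u0940\u0915\u094d\u0937\u093e \u0939\u0948\n"
)
print(extract_hypothesis(log))
```

Matching on the full `D-0\t` prefix is slightly stricter than the scripts' `startswith("D-0")`, which would also match `D-01` when more than one sample is decoded.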
fairseq_mustc_single_inference/s2t_en2hi_nolog.py
ADDED
@@ -0,0 +1,32 @@
+"""
+Script to translate a given single English audio file to the corresponding Hindi text
+(quiet variant: suppresses subprocess logs).
+
+Usage: python s2t_en2hi_nolog.py <audio_file_path> <averaged_checkpoints_file_path>
+"""
+
+import sys
+import os
+import subprocess
+
+# TODO better argument handling
+hi_wav = sys.argv[1]
+en2hi_model_checkpoint = sys.argv[2]
+
+os.system(f"cp {hi_wav} ./MUSTC_ROOT/en-hi/data/tst-COMMON/wav/test.wav")
+
+print("------Starting data preparation...")
+subprocess.run(["python", "prep_mustc_data_hindi_single.py", "--data-root", "MUSTC_ROOT/", "--task", "st", "--vocab-type", "unigram", "--vocab-size", "8000"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
+
+print("------Performing translation...")
+translation_result = subprocess.run(["fairseq-generate", "./MUSTC_ROOT/en-hi/", "--config-yaml", "config_st.yaml", "--gen-subset", "tst-COMMON_st", "--task", "speech_to_text", "--path", en2hi_model_checkpoint, "--max-tokens", "50000", "--beam", "5", "--scoring", "sacrebleu"], capture_output=True, text=True)
+translation_result_text = translation_result.stdout
+lines = translation_result_text.split("\n")
+
+print("\n\n------Translation results are:")
+for i in lines:
+    if i.startswith("D-0"):
+        print(i.split("\t")[2])
+        break
+
+os.system("rm ./MUSTC_ROOT/en-hi/data/tst-COMMON/wav/test.wav")
fairseq_mustc_single_inference/st_avg_last_10_checkpoints.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:47e8bfef22034ac859da3a2726b142876793113cf18ac18bb6f6eb85415a7893
+size 373227272
fairseq_mustc_single_inference/test.wav
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6534ffb9201355071f28be524928683cc745570945a3aef3b289a4a9c5a5df90
+size 141300