MushanW_GLOBE_V2 · New Discussion
Hello!
I listened to a few dozen samples and the audio quality seems lower in GLOBE_V2 than in GLOBE_V1. The supersampling might have introduced audio artifacts.
Also, in some cases the speaker voice characteristics completely changed.
For example, from the train split (first page of the dataset viewer, for both datasets):
S_001289 - "somewhere you are holding the person i love, the boy said."
S_001289 - "they seemed to laugh at him, and he laughed back, his heart bursting with joy."
These two rows have the same speaker id, but they seem to belong to two different voices in V2, one male and one female. In V1, these entries are read by the same voice.
Hi fatcatrat,
Thank you for your feedback!
“I listened to a few dozen samples, and the audio quality seems lower in GLOBE_V2 than in GLOBE_V1. The supersampling might have introduced audio artifacts.”
• You are correct. GLOBE_V1 used Adobe’s speech enhancement tools, while GLOBE_V2 relied on open-source tools, which may result in lower audio quality. We released GLOBE_V2 to address an issue some users reported: GLOBE_V1 contained many samples with volume inconsistencies.
“Also, in some cases, the speaker voice characteristics completely changed.”
• Thanks for pointing this out! Unfortunately, some metadata was lost after the publication of GLOBE_V1, so the V2 version was processed with new scripts. I will investigate further to identify any mistakes or mismatches in the speaker assignments.
Does the number of files in the test set match the metadata? I downloaded the dataset through Hugging Face, but I get this during verification:
Generating test split: 93% 5046/5455 [00:02<00:00, 2647.26 examples/s]
I also downloaded each test*.parquet file separately and overwrote the ones in the cache in case there was a download error, but the same thing happens during verification.
Are there only 5046 files in the test set?
I had to do this to bypass verification and use the dataset:
from datasets import load_dataset, DownloadConfig

ds = load_dataset("MushanW/GLOBE_V2", verification_mode="no_checks", download_config=DownloadConfig(resume_download=True))
Just like the commenter above, there seems to be a mix-up with the speaker ids. For example, S_001001 - "in his early military career, he fought in flanders." sounds like a young female speaker, but is tagged as a male speaker in his forties. The voice also sounds different from the other S_001001 utterances.
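This kind of mix-up can at least be screened for on the metadata side: group the rows by speaker id and flag any id whose tags disagree across rows. It won't catch a wrong voice behind a consistently tagged id, but it quickly surfaces inconsistent assignments. A minimal sketch with purely hypothetical ids and tag values:

```python
from collections import defaultdict

# Toy metadata rows (speaker_id, tagged_gender, tagged_age) -- hypothetical
# values standing in for the dataset's real metadata columns.
rows = [
    ("S_000123", "male", "forties"),
    ("S_000123", "female", "twenties"),  # disagreeing tags for the same id
    ("S_000456", "female", "thirties"),
    ("S_000456", "female", "thirties"),
]

# Collect the distinct (gender, age) tag pairs seen per speaker id;
# any id with more than one pair points at a possible assignment mix-up.
tags = defaultdict(set)
for spk, gender, age in rows:
    tags[spk].add((gender, age))

inconsistent = sorted(spk for spk, seen in tags.items() if len(seen) > 1)
print(inconsistent)  # ['S_000123']
```

Running something like this over the full V2 metadata would show whether the S_001001 case is an isolated tagging error or a systematic shift in the speaker assignments.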
Hi iwong,
Sorry about that! It looks like there was a mix-up caused by poor version management of our processing code, and part of the metadata may have been incorrectly processed for GLOBE_V2.
We are currently working on fully rebuilding the dataset using Common Voice Corpus 21.0 to correct these errors. Once it's ready, we will upload the corrected dataset.
Thanks for your patience and understanding!