---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
inference: false
fine-tuning: true
tags:
- generative error correction
- large language model
- LLaMA
metrics:
- wer
datasets:
- PeacefulData/Robust-HyPoradise
---
This repo releases the trained LLaMA-adapter weights from the paper "Large Language Models are Efficient Learners of Noise-Robust Speech Recognition."
**GitHub:** https://github.com/YUCHEN005/RobustGER
**Data:** https://huggingface.co/datasets/PeacefulData/Robust-HyPoradise
**Model:** This repo
If you find this work related or useful for your research, please kindly consider citing our ICLR 2024 paper. Thank you.
```bibtex
@inproceedings{hu2024large,
  title={Large Language Models are Efficient Learners of Noise-Robust Speech Recognition},
  author={Hu, Yuchen and Chen, Chen and Yang, Chao-Han Huck and Li, Ruizhe and Zhang, Chao and Chen, Pin-Yu and Chng, Eng Siong},
  booktitle={International Conference on Learning Representations},
  year={2024}
}
```