---
title: ASR Evaluation Tool
emoji: 🎯
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 5.16.0
app_file: app.py
pinned: false
---
# ASR Evaluation Tool (v1.1)
This Gradio app provides a user-friendly interface for calculating Word Error Rate (WER) and related metrics between reference and hypothesis texts. It's particularly useful for evaluating speech recognition or machine translation outputs.
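The metrics reported here are standard word-level measures. The sketch below shows one way to compute them with the `jiwer` library; this is an assumption for illustration, the app may compute them differently, and the sample texts are made up.

```python
# Minimal sketch: computing WER, MER, WIL, and WIP with jiwer.
# Assumption: the app's metric definitions match jiwer's defaults.
import jiwer

reference = "the patient was given aspirin twice daily"
hypothesis = "the patient was given an aspirin twice"

print("WER:", jiwer.wer(reference, hypothesis))   # word error rate
print("MER:", jiwer.mer(reference, hypothesis))   # match error rate
print("WIL:", jiwer.wil(reference, hypothesis))   # word information lost
print("WIP:", jiwer.wip(reference, hypothesis))   # word information preserved
```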
## Features
- Calculate WER, MER, WIL, and WIP metrics
- Text normalization options
- Custom word filtering
- Detailed error analysis
- Example inputs for testing
## How to Use
1. Enter or paste your reference text
2. Enter or paste your hypothesis text
3. Configure options (normalization, word filtering); see the sketch after these steps for what those options do
4. Click "Calculate WER" to see results
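The sketch below illustrates what the normalization and word-filtering options do to the texts before scoring. The specific rules (lowercasing, punctuation removal, a filler-word list) are assumptions for illustration, not the app's exact behavior.

```python
# Hedged sketch of text normalization and custom word filtering before scoring.
# The filler-word list and cleaning rules are assumptions, not the app's exact settings.
import re
import jiwer

def normalize(text: str, filler_words=frozenset({"uh", "um"})) -> str:
    """Lowercase, strip punctuation, and drop filler words."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)  # remove punctuation
    words = [w for w in text.split() if w not in filler_words]
    return " ".join(words)

reference = normalize("Uh, the patient was stable.")
hypothesis = normalize("The patient was, um, stable.")
print("WER after normalization:", jiwer.wer(reference, hypothesis))  # 0.0
```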
NOTE: Results may take up to 30 seconds because the r1:1.5B model is called to compute medical term recall.
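For context, medical term recall can be read as the fraction of reference medical terms that also appear in the hypothesis. The sketch below hard-codes a term list for illustration; in the app the terms are reportedly obtained via the r1:1.5B model, which is not shown here.

```python
# Hypothetical sketch of medical term recall.
# Assumption: the app extracts the term list with an LLM; here it is hard-coded.
def medical_term_recall(reference_terms: set[str], hypothesis: str) -> float:
    """Fraction of reference medical terms found in the hypothesis."""
    hyp_words = set(hypothesis.lower().split())
    found = {t for t in reference_terms if t in hyp_words}
    return len(found) / len(reference_terms) if reference_terms else 1.0

terms = {"aspirin", "hypertension"}
print(medical_term_recall(terms, "patient has hypertension and takes an aspirin"))  # 1.0
```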

## Local Development
1. Clone the repository:
```bash
git clone https://github.com/yourusername/wer-evaluation-tool.git
cd wer-evaluation-tool
```
2. Create and activate a virtual environment using `uv`:
```bash
uv venv
source .venv/bin/activate # On Unix/macOS
# or
.venv\Scripts\activate # On Windows
```
3. Install dependencies:
```bash
uv pip install -r requirements.txt
```
4. Run the app locally:
```bash
uv run python app_gradio.py
```
## Installation
You can install the package directly from PyPI:
```bash
uv pip install wer-evaluation-tool
```
## Testing
Run the test suite using pytest:
```bash
uv run pytest tests/
```
## Contributing
1. Fork the repository
2. Create a new branch (`git checkout -b feature/improvement`)
3. Make your changes
4. Run tests to ensure everything works
5. Commit your changes (`git commit -am 'Add new feature'`)
6. Push to the branch (`git push origin feature/improvement`)
7. Create a Pull Request
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Acknowledgments
- Thanks to all contributors who have helped with the development
- Inspired by the need for better speech recognition evaluation tools
- Built with [Gradio](https://gradio.app/)
## Contact
For questions or feedback, please:
- Open an issue in the GitHub repository
- Contact the maintainers at [email/contact information]
## Citation
If you use this tool in your research, please cite:
```bibtex
@software{wer_evaluation_tool,
  title  = {WER Evaluation Tool},
  author = {Your Name},
  year   = {2024},
  url    = {https://github.com/yourusername/wer-evaluation-tool}
}
```