Kyara: Knowledge Yielding Adaptive Retrieval Augmentation for LLM Fine-tuning

🤗 Hugging Face | 🚀 Github | 📑 Paper | 📖 English | 📖 Chinese | 💻 Kaggle Notebook

Kyara (Knowledge Yielding Adaptive Retrieval Augmentation) is an experimental project that aims to improve language models through knowledge-retrieval processes. The project seeks to enhance the model's knowledge adaptation and language comprehension, particularly in underrepresented languages such as Traditional Chinese. Given the scarcity of Traditional Chinese data relative to the vast English corpora used for model training, Kyara addresses this gap by expanding the limited corpus for this language.

This is a preview model, with the stable version set to be released soon.
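A minimal inference sketch with the Hugging Face transformers chat API is shown below. The model id comes from this card; the generation settings and helper names are illustrative assumptions, not official recommendations.

```python
# Hypothetical usage sketch for zake7749/Llama-3.2-3B-it-chinese-kyara.
# Assumes `transformers` and `torch` are installed; settings are illustrative.

MODEL_ID = "zake7749/Llama-3.2-3B-it-chinese-kyara"


def build_messages(prompt: str) -> list:
    # Wrap a single user turn in the chat format consumed by
    # tokenizer.apply_chat_template().
    return [{"role": "user", "content": prompt}]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Heavy imports are kept local so the lightweight helper above
    # can be used without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    input_ids = tokenizer.apply_chat_template(
        build_messages(prompt), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )


# Example call (requires network access and enough memory for a 3B model):
# print(generate("請用繁體中文介紹台灣的夜市文化。"))
```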

Benchmark

All evaluations are conducted in a zero-shot setting.

| Metric | Kyara-3b-it | Llama3.2-3b-it |
|---|---|---|
| TMMLUPlus | 42.54 | 40.01 |
| - STEM | 45.17 | 40.37 |
| - Humanities | 39.66 | 38.65 |
| - Other | 41.18 | 39.06 |
| - Social-Science | 44.16 | 41.98 |
| MMLU-Redux | 57.24 | 56.91 |
| GSM8K | 67.25 | 57.16 |
| MATH-L5 | 19.97 | 16.23 |
| CRUX | 31.25 | 25.25 |
| AlpacaEval | 23.87 | 19.35 |
Model size: 3.21B parameters (safetensors, BF16)

Model: zake7749/Llama-3.2-3B-it-chinese-kyara
