  - split: test
    path: data/test-*
---

# 🌟 KoCommonGEN v2

KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models (ACL 2024 Findings)

*Jaehyung Seo, Jaewook Lee, Chanjun Park, SeongTae Hong, Seungjun Lee and Heuiseok Lim*

🏫 [NLP & AI Lab](https://blpkorea.cafe24.com/wp/), Korea University

---
### 🔥 News

- September 27, 2023: Provided data support for the [Open Ko-LLM Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)
- August 7, 2024: Dataset release
- August 10, 2024: Added experimental results for newly released models
- August 14, 2024: Presented the research paper at ACL 2024

### 🔥 Human Evaluation

We recruited 22 native Korean-speaking volunteers as human evaluators and paid them $0.80 per question.

| Model | # | Average Score | Cohen's kappa | Krippendorff's alpha |
| :-------: | :--: | :-----------: | :-----------: | :------------------: |
| **Human** | 22 | 0.8395 | 0.7693 | 0.7706 |
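Pairwise agreement statistics like the Cohen's kappa above can be computed directly from two raters' labels. A minimal sketch, using made-up toy labels rather than the actual annotation data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's label distribution.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[lbl] * freq_b[lbl] for lbl in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two raters marking answers correct (1) / incorrect (0).
print(cohens_kappa([1, 1, 1, 0], [1, 1, 0, 0]))  # 0.5
```

Krippendorff's alpha generalizes this kind of chance-corrected agreement to more than two raters and to missing ratings.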
### 🤗 Models (August 10, 2024)

Results of the 2-shot evaluation of newly released models.

| Model | Size | Acc_norm | Stderr | Link |
| :----------------------------: | :---: | :--------: | :----: | :----------------------------------------------------------: |
| **GPT-4** (June 13, 2023) | | **0.7450** | | |
| **Mistral-Nemo-Instruct** | 12B | 0.6612 | 0.0163 | [🔗](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) |
| **Mistral-Nemo-Base** | 12B | 0.6340 | 0.0166 | [🔗](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407) |
| **Meta-Llama-3.1-8B** | 8B | 0.6246 | 0.0166 | [🔗](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) |
| **QWEN2-7B base** | 7B | 0.6187 | 0.0167 | [🔗](https://huggingface.co/Qwen/Qwen2-7B) |
| **EXAONE-3.0-7.8B-Instruct** | 7.8B | 0.6088 | 0.0168 | [🔗](https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct) |
| **MLP-KTLim-Bllossom-8B** | 8B | 0.6057 | 0.0168 | [🔗](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) |
| **Meta-Llama-3.1-8B-Instruct** | 8B | 0.6057 | 0.0168 | [🔗](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) |
| **KULLM3** | 10.8B | 0.6033 | 0.0168 | [🔗](https://huggingface.co/nlpai-lab/KULLM3) |
| **QWEN2-7B inst** | 7B | 0.5832 | 0.0170 | [🔗](https://huggingface.co/Qwen/Qwen2-7B-Instruct) |
| **Gemma-2-9b-it** | 9B | 0.5714 | 0.0170 | [🔗](https://huggingface.co/google/gemma-2-9b-it) |
| **Aya-23-8B** | 8B | 0.5159 | 0.0172 | [🔗](https://huggingface.co/CohereForAI/aya-23-8B) |
| **Allganize-Alpha-Instruct** | 8B | 0.4970 | 0.0172 | [🔗](https://huggingface.co/allganize/Llama-3-Alpha-Ko-8B-Instruct) |
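For reference, the Stderr column is consistent with the binomial standard error of accuracy, sqrt(p(1-p)/n). A quick sanity check; note that back-solving the test-set size n from one row is our inference, not a number stated here:

```python
import math

def binomial_stderr(acc, n):
    """Standard error of an accuracy estimate over n questions."""
    return math.sqrt(acc * (1.0 - acc) / n)

# Back-solve n from one row (Mistral-Nemo-Instruct: 0.6612 +/- 0.0163).
acc, stderr = 0.6612, 0.0163
n = round(acc * (1.0 - acc) / stderr ** 2)
print(n)  # 843 -- roughly 840 questions

# Cross-check another row with that n (Mistral-Nemo-Base: 0.6340 +/- 0.0166).
print(round(binomial_stderr(0.6340, n), 4))  # 0.0166, matching the table
```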

As mentioned in the paper, the same protocol can be used to evaluate various other models.

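For readers unfamiliar with the protocol, "2-shot" means two solved exemplars are prepended to each question. A minimal sketch of how such a multiple-choice prompt might be assembled; the template and exemplar contents below are hypothetical, not the paper's actual prompt:

```python
def build_nshot_prompt(exemplars, question, choices):
    """Prepend solved exemplars, then append the unanswered question."""
    parts = []
    for ex in exemplars:
        opts = " ".join(f"{i + 1}. {c}" for i, c in enumerate(ex["choices"]))
        parts.append(f"Question: {ex['question']}\nChoices: {opts}\nAnswer: {ex['answer']}")
    opts = " ".join(f"{i + 1}. {c}" for i, c in enumerate(choices))
    parts.append(f"Question: {question}\nChoices: {opts}\nAnswer:")
    return "\n\n".join(parts)

# Hypothetical 2-shot call (placeholder content, not real KoCommonGEN v2 items).
shots = [
    {"question": "Which sentence is most natural?", "choices": ["A", "B", "C", "D"], "answer": 1},
    {"question": "Which sentence fits common sense?", "choices": ["A", "B", "C", "D"], "answer": 3},
]
prompt = build_nshot_prompt(shots, "Which sentence is most plausible?", ["A", "B", "C", "D"])
print(prompt.count("Answer:"))  # 3 (two exemplars + the query)
```

The model's score for each candidate answer is then read off from its likelihood of the continuation, which is what the accuracy (Acc_norm) in the table summarizes.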
### 🇰🇷🇺🇸🇯🇵🇨🇳🇪🇸 Code-switching

The dataset can be found on Hugging Face at: [nlpai-lab/ko_commongen_v2_code_switching](https://huggingface.co/datasets/nlpai-lab/ko_commongen_v2_code_switching)

This dataset contains code-switching data for the following languages:
- Korean (korean)
- English (english)
- Japanese (japan)
- Chinese (china)
- Spanish (espanol)

(The code-switching data relies on machine translation, which may result in some inaccuracies.)
### 📝 Citation

```bibtex
@inproceedings{seo2024Kocommongenv2,
    title = "KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models",
    author = "Jaehyung Seo and Jaewook Lee and Chanjun Park and SeongTae Hong and Seungjun Lee and Heuiseok Lim",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2024",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "TBD",
    doi = "TBD",
    pages = "TBD"}
```

### 🚨 Warning!

This dataset contains some instances of toxic speech.

### 🙏 Acknowledgement

We sincerely appreciate the dedication of Chanjun Park, Sanghoon Kim and Sunghun Kim (Sung Kim) from **Upstage AI** in managing one of the benchmark datasets for the [Open Ko-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).