---
language:
- ko
license: cc-by-nc-4.0
model-index:
- name: K2S3-SOLAR-11b-v1.0
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 33.7
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Changgil/K2S3-SOLAR-11b-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 51.39
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Changgil/K2S3-SOLAR-11b-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 30.05
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Changgil/K2S3-SOLAR-11b-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 45.99
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Changgil/K2S3-SOLAR-11b-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 57.54
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Changgil/K2S3-SOLAR-11b-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 1.36
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Changgil/K2S3-SOLAR-11b-v1.0
      name: Open LLM Leaderboard
---

## Developed by:
* K2S3

## Model Number:
* K2S3-SOLAR-11b-v1.0

## Base Model:
* [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)

### Training Data
* The training data for this model includes the Standard Korean Dictionary, training data from KULLM at Korea University, abstracts of master's and doctoral theses, and Korean-language samples from AI Hub.

### Training Method
* This model was fine-tuned from the "upstage/SOLAR-10.7B-Instruct-v1.0" base model with full-parameter supervised fine-tuning (SFT).

### Hardware
* Hardware: two A100 80GB GPUs were used for training.
* Training Factors: The model was fine-tuned with SFT using the Hugging Face TRL SFTTrainer with FSDP (Fully Sharded Data Parallel). Key training settings include new Korean tokens learned with a SentencePieceBPETokenizer and added to the base tokenizer, 2 training epochs, a per-device batch size of 1, and gradient accumulation of 32. Hedged sketches of this setup are shown below.
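
The Korean-token extension can be sketched roughly as follows. This is a minimal illustration, not the authors' exact script: the corpus path, vocabulary size, and filtering of duplicate tokens are assumptions.

```python
# Minimal sketch (corpus file and vocab size are illustrative assumptions).
from tokenizers import SentencePieceBPETokenizer
from transformers import AutoTokenizer, AutoModelForCausalLM

BASE = "upstage/SOLAR-10.7B-Instruct-v1.0"

# Learn Korean subword units with a SentencePieceBPETokenizer
# (hypothetical corpus file and vocabulary size).
kr_tokenizer = SentencePieceBPETokenizer()
kr_tokenizer.train(files=["korean_corpus.txt"], vocab_size=8000)

# Add only the tokens that the base SOLAR tokenizer does not already cover.
tokenizer = AutoTokenizer.from_pretrained(BASE)
new_tokens = [t for t in kr_tokenizer.get_vocab() if t not in tokenizer.get_vocab()]
num_added = tokenizer.add_tokens(new_tokens)

# Resize the embedding matrix so the new tokens get trainable rows
# before full-parameter fine-tuning.
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype="auto")
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} new Korean tokens")
```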
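
A hedged sketch of the full-parameter SFT run itself, using TRL's SFTTrainer with FSDP. Only the epoch count (2), per-device batch size (1), and gradient accumulation (32) come from this card; the dataset file, learning rate, sequence length, text column, and FSDP wrapping policy are assumptions, and the argument names follow the older trl 0.7-style SFTTrainer API.

```python
# Hedged sketch of the SFT run; `model` and `tokenizer` come from the
# tokenizer-extension sketch above. Launch with `accelerate launch` or
# `torchrun` across the two A100 80GB GPUs so FSDP shards the model.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Hypothetical SFT dataset file with a "text" column.
train_dataset = load_dataset("json", data_files="k2s3_sft_data.jsonl", split="train")

training_args = TrainingArguments(
    output_dir="K2S3-SOLAR-11b-v1.0",
    num_train_epochs=2,                   # from the card
    per_device_train_batch_size=1,        # from the card
    gradient_accumulation_steps=32,       # from the card
    learning_rate=2e-5,                   # assumed
    bf16=True,
    fsdp="full_shard auto_wrap",          # FSDP across both GPUs
    fsdp_transformer_layer_cls_to_wrap="LlamaDecoderLayer",  # SOLAR uses the Llama decoder block
    logging_steps=10,
    save_strategy="epoch",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",            # assumed column name
    max_seq_length=4096,                  # assumed
    args=training_args,
)
trainer.train()
```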
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Changgil__K2S3-SOLAR-11b-v1.0).

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |36.67|
|AI2 Reasoning Challenge (25-Shot)|33.70|
|HellaSwag (10-Shot)              |51.39|
|MMLU (5-Shot)                    |30.05|
|TruthfulQA (0-shot)              |45.99|
|Winogrande (5-shot)              |57.54|
|GSM8k (5-shot)                   | 1.36|
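
These scores can be approximately reproduced locally with EleutherAI's lm-evaluation-harness, which backs the Open LLM Leaderboard. The snippet below is a sketch for a single benchmark (25-shot ARC-Challenge); the leaderboard pins a specific harness revision and generation settings, so local numbers may differ slightly.

```python
import lm_eval

# Evaluate the model on 25-shot ARC-Challenge with the lm-evaluation-harness
# (v0.4-style API). dtype and batch size are illustrative choices.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Changgil/K2S3-SOLAR-11b-v1.0,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=1,
)
print(results["results"]["arc_challenge"])
```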