gollm-12.8b-instruct-v2.3

This model is a fine-tuned version of EleutherAI/polyglot-ko-12.8b on a custom mixed dataset.

Model description

  • No-context template (its Korean preamble, shared with the with-context template below, translates to: "Below is a question describing a task, together with context that provides additional information. Write a response that appropriately completes the request.")
μ•„λž˜λŠ” μž‘μ—…μ„ μ„€λͺ…ν•˜λŠ” μ§ˆλ¬Έμ–΄μ™€ μΆ”κ°€ μ»¨ν…μŠ€νŠΈλ₯Ό μ œκ³΅ν•˜λŠ” λ§₯락이 ν•¨κ»˜ μ œκ³΅λ©λ‹ˆλ‹€. μš”μ²­μ„ 적절히 μ™„λ£Œν•˜λŠ” 닡변을 μž‘μ„±ν•˜μ„Έμš”.

### 질문:
{instruction}

### λ‹΅λ³€:

  • With context template
μ•„λž˜λŠ” μž‘μ—…μ„ μ„€λͺ…ν•˜λŠ” μ§ˆλ¬Έμ–΄μ™€ μΆ”κ°€ μ»¨ν…μŠ€νŠΈλ₯Ό μ œκ³΅ν•˜λŠ” λ§₯락이 ν•¨κ»˜ μ œκ³΅λ©λ‹ˆλ‹€. μš”μ²­μ„ 적절히 μ™„λ£Œν•˜λŠ” 닡변을 μž‘μ„±ν•˜μ„Έμš”.

### λ§₯락:
{input}

### 질문:
{instruction}

### λ‹΅λ³€:
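
A minimal usage sketch (not part of the original card) showing how the templates above can be filled in and passed to the model through the standard transformers text-generation API; the sample question, dtype, and generation settings are placeholder assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "tlphams/gollm-12.8b-instruct-v2.3"

# Templates copied verbatim from the card above.
NO_CONTEXT_TEMPLATE = (
    "μ•„λž˜λŠ” μž‘μ—…μ„ μ„€λͺ…ν•˜λŠ” μ§ˆλ¬Έμ–΄μ™€ μΆ”κ°€ μ»¨ν…μŠ€νŠΈλ₯Ό μ œκ³΅ν•˜λŠ” λ§₯락이 ν•¨κ»˜ μ œκ³΅λ©λ‹ˆλ‹€. "
    "μš”μ²­μ„ 적절히 μ™„λ£Œν•˜λŠ” 닡변을 μž‘μ„±ν•˜μ„Έμš”.\n\n"
    "### 질문:\n{instruction}\n\n"
    "### λ‹΅λ³€:\n"
)
WITH_CONTEXT_TEMPLATE = (
    "μ•„λž˜λŠ” μž‘μ—…μ„ μ„€λͺ…ν•˜λŠ” μ§ˆλ¬Έμ–΄μ™€ μΆ”κ°€ μ»¨ν…μŠ€νŠΈλ₯Ό μ œκ³΅ν•˜λŠ” λ§₯락이 ν•¨κ»˜ μ œκ³΅λ©λ‹ˆλ‹€. "
    "μš”μ²­μ„ 적절히 μ™„λ£Œν•˜λŠ” 닡변을 μž‘μ„±ν•˜μ„Έμš”.\n\n"
    "### λ§₯락:\n{input}\n\n"
    "### 질문:\n{instruction}\n\n"
    "### λ‹΅λ³€:\n"
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"  # assumed loading settings
)

# Hypothetical example question ("What is the capital of Korea?").
prompt = NO_CONTEXT_TEMPLATE.format(instruction="ν•œκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μΈκ°€μš”?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Strip the prompt tokens so only the generated answer is printed.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```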

Intended uses & limitations

More information needed

Training and evaluation data

  • self-introduction (20 samples)
  • High-quality reasoning dataset built from private documents, with QA pairs generated by Claude (1.3k samples)
  • EverythingLM-v2 (0.9k samples)
  • KoCoT (2k samples)
  • Private MRC dataset with answers generated by GPT-4 (32k samples). The original data contain ~12k question-answer pairs with context; augmentation adds 20k samples in a triplet-context setup (three candidate contexts per question, only one of which is correct), for 32k samples in total. See the illustrative sketch below.
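
For illustration only, a minimal sketch of how a triplet-context MRC sample could be assembled with the with-context template; the function name, field names, context separator, and sampling strategy are assumptions, not the authors' actual augmentation pipeline.

```python
import random

# With-context template from the card, extended with the target answer for training.
WITH_CONTEXT_TEMPLATE = (
    "μ•„λž˜λŠ” μž‘μ—…μ„ μ„€λͺ…ν•˜λŠ” μ§ˆλ¬Έμ–΄μ™€ μΆ”κ°€ μ»¨ν…μŠ€νŠΈλ₯Ό μ œκ³΅ν•˜λŠ” λ§₯락이 ν•¨κ»˜ μ œκ³΅λ©λ‹ˆλ‹€. "
    "μš”μ²­μ„ 적절히 μ™„λ£Œν•˜λŠ” 닡변을 μž‘μ„±ν•˜μ„Έμš”.\n\n"
    "### λ§₯락:\n{input}\n\n"
    "### 질문:\n{instruction}\n\n"
    "### λ‹΅λ³€:\n{output}"
)

def make_triplet_example(question, answer, gold_context, distractor_contexts, rng=random):
    """Build one augmented sample: the gold context is shuffled in with two distractors."""
    contexts = [gold_context] + rng.sample(distractor_contexts, 2)
    rng.shuffle(contexts)
    return WITH_CONTEXT_TEMPLATE.format(
        input="\n\n".join(contexts),  # assumed separator between candidate contexts
        instruction=question,
        output=answer,
    )
```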

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 8
  • saved_checkpoint_at_epoch: 1 (condition: loss < 0.3)
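
A minimal sketch, assuming the standard Hugging Face Trainer API, of how the hyperparameters above map onto TrainingArguments; the output directory and the optimizer name are placeholders, not values taken from the card.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="./gollm-12.8b-instruct-v2.3",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,   # effective train batch size: 2 * 8 = 16
    lr_scheduler_type="linear",
    num_train_epochs=8,
    adam_beta1=0.9,                  # Adam betas and epsilon as listed in the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```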

Framework versions

  • Transformers 4.32.0.dev0
  • Pytorch 2.0.0+cu117
  • Datasets 2.11.0
  • Tokenizers 0.13.3