
Gugugo-koen-7B-V1.1-GGUF

Detail repo: https://github.com/jwj7140/Gugugo

This is a GGUF model converted from squarelike/Gugugo-koen-7B-V1.1.

Base Model: Llama-2-ko-7b

Training Dataset: sharegpt_deepl_ko_translation.

I trained it on a single A6000 GPU for 90 hours.

Prompt Template

KO->EN

```
### ν•œκ΅­μ–΄: {sentence}</끝>
### μ˜μ–΄:
```

EN->KO

```
### μ˜μ–΄: {sentence}</끝>
### ν•œκ΅­μ–΄:
```
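
The templates above can be filled in with plain string formatting before passing the prompt to an inference runtime. A minimal sketch (the `build_prompt` helper is hypothetical, not part of the repo; `</끝>` doubles as a natural stop string when decoding):

```python
def build_prompt(sentence: str, direction: str = "en2ko") -> str:
    """Build a Gugugo translation prompt in the model's expected format.

    direction: "en2ko" (English -> Korean) or "ko2en" (Korean -> English).
    """
    if direction == "en2ko":
        # English source, Korean target
        return f"### μ˜μ–΄: {sentence}</끝>\n### ν•œκ΅­μ–΄:"
    if direction == "ko2en":
        # Korean source, English target
        return f"### ν•œκ΅­μ–΄: {sentence}</끝>\n### μ˜μ–΄:"
    raise ValueError("direction must be 'en2ko' or 'ko2en'")

prompt = build_prompt("Hello, world.", "en2ko")
print(prompt)
```

When sampling from the model, passing `</끝>` as a stop sequence keeps the output to a single translation.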
Model size: 6.86B params
Architecture: llama
Available GGUF quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit

