
Gemma Ko 7B Instruct v0.71

  • Eval loss: 1.51977
  • Train loss: 0.48541
  • Learning rate: 5e-5
  • Optimizer: AdamW
  • LR scheduler type: cosine

Model Details

Model Description

The Gemma Ko 7B Instruct v0.71 model is designed for generating human-like text in the Korean language. It can be used for a variety of natural language processing tasks, such as language translation, text summarization, question answering, and conversation generation. This model is particularly well-suited for applications that require high-quality, coherent, and contextually relevant Korean text generation.
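The usage described above can be sketched with the Hugging Face `transformers` library. This is a minimal example, not an official snippet from the model authors: the turn-marker prompt format below is an assumption based on the standard Gemma instruct convention, and generation settings such as `max_new_tokens` are illustrative.

```python
def build_gemma_prompt(user_message: str) -> str:
    """Wrap a single user message in Gemma-style turn markers (assumed format)."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def generate_korean(user_message: str,
                    model_id: str = "lemon-mint/gemma-ko-7b-instruct-v0.71") -> str:
    """Load the model and generate a Korean completion for one user turn.

    Heavy imports are kept local so the prompt helper above stays
    dependency-free; loading the 7B model requires a GPU with enough memory.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # matches the card's BF16 tensor type
        device_map="auto",
    )

    prompt = build_gemma_prompt(user_message)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)

    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

For example, `generate_korean("대한민국의 수도는 어디인가요?")` ("What is the capital of South Korea?") should return a Korean-language answer.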

Limitations and Ethical Considerations

Because Gemma Ko 7B was trained on extensive web data, biases present in that data may be reflected in the model's output. It may also generate sentences containing errors or factually incorrect information. Its output should therefore not be trusted blindly; verify it against reliable sources before use.

Model Stats

  • Format: Safetensors
  • Model size: 8.54B params
  • Tensor type: BF16

Model tree for lemon-mint/gemma-ko-7b-instruct-v0.71

  • Models finetuned from this model: 11
  • Quantizations: 1