results

This model is a fine-tuned version of meta-llama/Llama-3.2-3B-Instruct on the Renicames/turkish-law-chatbot dataset, designed to generate responses to Turkish legal questions.

Model description

This model is fine-tuned to improve its ability to generate responses for a Turkish law chatbot. meta-llama/Llama-3.2-3B-Instruct, a general-purpose instruction-tuned model, was used as the base and specialized on the Renicames/turkish-law-chatbot dataset.
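
The snippet below is a minimal inference sketch, assuming the model is loaded from the Hub under the repo id JuLsez4R/Llama-turkish-lawbot via the standard transformers text-generation pipeline; the example prompt is illustrative only.

```python
# Minimal inference sketch (assumes the Hub repo id JuLsez4R/Llama-turkish-lawbot
# and that transformers >= 4.47 plus accelerate are installed).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="JuLsez4R/Llama-turkish-lawbot",
    torch_dtype="auto",
    device_map="auto",
)

# Chat-style input; Llama-3.2-Instruct models expect a list of role/content messages.
messages = [
    {"role": "user", "content": "Kira sözleşmesi nasıl feshedilir?"}  # "How is a rental agreement terminated?"
]

result = generator(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])  # assistant reply
```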

Intended uses & limitations

Intended uses

  • This model can be deployed in applications aimed at providing legal information or answering legal questions in Turkish.
  • Useful for generating legal text in Turkish, such as explanations of laws or examples of legal processes.

Limitations

  • While the model has been fine-tuned on the Turkish legal domain, it may still lack the depth and specificity required for complex legal inquiries. It might not be suitable for professional legal advice.
  • As with any AI model, it may reflect biases found in the training data, so it should be used with caution in critical applications.
  • This model is focused on the Turkish language, so it may not perform well in other languages or mixed-language queries.

Training and evaluation data

Training data details (see the loading sketch after this list):

  • Source: Renicames/turkish-law-chatbot dataset
  • Languages: Turkish
  • Content: Legal questions and answers
  • Size: several thousand question-answer pairs related to Turkish law
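
A minimal sketch of loading the data with the datasets library, assuming the dataset is hosted on the Hub under Renicames/turkish-law-chatbot; the column names are not documented here and should be checked after loading.

```python
# Loading sketch for the training data (the schema is not guaranteed;
# inspect the first record to see the actual column names).
from datasets import load_dataset

dataset = load_dataset("Renicames/turkish-law-chatbot")
print(dataset)              # available splits and their sizes
print(dataset["train"][0])  # one question-answer pair
```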

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 16
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 3
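
As noted above, here is a hedged sketch of how these values map onto transformers TrainingArguments; output_dir and any settings not listed in the card are placeholders, not the author's actual configuration.

```python
# Sketch only: reproduces the listed hyperparameters; unlisted settings are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results",             # placeholder, matching the card's title
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",              # AdamW defaults: betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```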

Framework versions

  • Transformers 4.47.0
  • PyTorch 2.5.1+cu121
  • Datasets 3.3.1
  • Tokenizers 0.21.0