---
library_name: transformers
license: llama3.2
language:
- hu
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
# Model Card for Llama-3.2-1B-HuAMR

## Model Details

### Model Description
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) for parsing Hungarian text into Abstract Meaning Representation (AMR) graphs.
- Model type: Abstract Meaning Representation parser
- Language(s) (NLP): Hungarian
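
Since the base model is an instruct-tuned Llama 3.2 checkpoint, inputs are expected in the Llama 3 chat format. A minimal sketch of building such a prompt; the instruction wording is a hypothetical example, not the exact template used in fine-tuning:

```python
def build_amr_prompt(sentence: str) -> str:
    """Build a Llama 3.2 chat-format prompt asking for the AMR graph of a
    Hungarian sentence. The instruction text is a hypothetical example,
    not necessarily the prompt this model was trained with."""
    instruction = (
        "Parse the following Hungarian sentence into an AMR graph:\n"
        + sentence
    )
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{instruction}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_amr_prompt("A kutya a kertben játszik.")
print(prompt)
```

In practice `tokenizer.apply_chat_template` produces the same framing from a list of messages; the manual version above only makes the special tokens explicit.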
### Model Sources
- Repository: GitHub Repo
## Training Details

### Training Procedure

#### Training Hyperparameters
- learning_rate: 5e-05
- train_batch_size: 1
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: AdamW
- lr_scheduler_type: linear
- max_grad_norm: 0.3
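
The settings above imply the effective batch size of 16 directly: micro-batches of 1 are accumulated over 16 steps before each optimizer update. A minimal sketch of that relationship and of the linear schedule (warmup is not listed, so it is ignored here):

```python
# Effective batch size implied by the hyperparameters above.
train_batch_size = 1
gradient_accumulation_steps = 16
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 16

def linear_lr(step: int, total_steps: int, peak_lr: float = 5e-05) -> float:
    """Linear decay from the peak learning rate to 0 over training.
    Assumes no warmup steps, since none are listed in the card."""
    return peak_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0, 1000))     # peak learning rate at the start
print(linear_lr(1000, 1000))  # decayed to zero at the end
```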
### Metrics
[More Information Needed]
## Citation

**BibTeX:**
[More Information Needed]
### Framework versions
- Transformers 4.34.1
- Pytorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1