|
--- |
|
library_name: transformers |
|
license: llama3.2 |
|
language: |
|
- hu |
|
base_model: |
|
- meta-llama/Llama-3.2-1B-Instruct |
|
--- |
|
|
|
# Model Card for Llama-3.2-1B-HuAMR |
|
|
|
<!-- Provide a quick summary of what the model is/does. -->

Llama-3.2-1B-HuAMR is a Llama-3.2-1B-Instruct model fine-tuned to parse Hungarian text into Abstract Meaning Representation (AMR) graphs.
|
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
|
|
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) that parses Hungarian sentences into Abstract Meaning Representation (AMR) graphs (see the usage sketch below).
|
|
|
- **Model type:** Abstract Meaning Representation parser |
|
- **Language(s) (NLP):** Hungarian |
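
Below is a minimal inference sketch using the `transformers` library. The Hub repository id, the instruction wording, and the generation settings are illustrative assumptions, not documented values; consult the [GitHub repo](https://github.com/botondbarta/HuAMR) for the prompt format actually used in training.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical Hub id for illustration; substitute the actual repository id.
model_id = "botondbarta/Llama-3.2-1B-HuAMR"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The instruction wording is an assumption, not the documented training prompt.
messages = [
    {
        "role": "user",
        "content": "Parse the following Hungarian sentence into an AMR graph:\n"
                   "A fiú egy könyvet olvas.",
    }
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```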
|
|
|
### Model Sources
|
|
|
<!-- Provide the basic links for the model. --> |
|
|
|
- **Repository:** [GitHub Repo](https://github.com/botondbarta/HuAMR) |
|
|
|
## Training Details |
|
|
|
### Training Procedure |
|
|
|
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> |
|
|
|
#### Training Hyperparameters |
|
|
|
- learning_rate: 5e-05 |
|
- train_batch_size: 1 |
|
- gradient_accumulation_steps: 16 |
|
- total_train_batch_size: 16 |
|
- optimizer: AdamW |
|
- lr_scheduler_type: linear |
|
- max_grad_norm: 0.3 |
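
As a hedged sketch, the values above map onto `transformers.TrainingArguments` roughly as follows; `output_dir` is an assumed name and the epoch count is not stated in this card.

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above; output_dir is an
# assumption, and the number of epochs is not reported in the card.
training_args = TrainingArguments(
    output_dir="llama-3.2-1b-huamr",   # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,    # effective batch size: 1 * 16 = 16
    optim="adamw_torch",               # AdamW
    lr_scheduler_type="linear",
    max_grad_norm=0.3,
)
```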
|
|
|
#### Metrics |
|
|
|
<!-- These are the evaluation metrics being used, ideally with a description of why. --> |
|
|
|
AMR parsers are conventionally evaluated with the Smatch metric (triple-matching F1 between predicted and gold graphs); scores for this model are not yet reported here.

[More Information Needed]
|
|
|
## Citation
|
|
|
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> |
|
|
|
**BibTeX:** |
|
|
|
[More Information Needed] |
|
|
|
## Framework versions |
|
- Transformers 4.34.1 |
|
- PyTorch 2.3.0+cu118
|
- Datasets 2.19.0 |
|
- Tokenizers 0.19.1 |