---
library_name: transformers
license: llama3.2
language:
- hu
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---

# Model Card for Llama-3.2-1B-HuAMR

Llama-3.2-1B-HuAMR is an Abstract Meaning Representation (AMR) parser for Hungarian, fine-tuned from [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).


## Model Details

### Model Description

This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) trained to parse Hungarian sentences into Abstract Meaning Representation (AMR) graphs.

- **Model type:** Abstract Meaning Representation parser
- **Language(s) (NLP):** Hungarian
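
A minimal inference sketch with the `transformers` library is shown below. The model id and the prompt wording are assumptions for illustration only; the exact prompt template used during fine-tuning may differ (see the GitHub repository linked under Model Sources).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The Hugging Face model id below is an assumption; replace it with the
# actual repository id of this checkpoint.
model_id = "Llama-3.2-1B-HuAMR"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical instruction-style prompt; the template used in training
# may differ.
sentence = "A kutya a kertben játszik."  # "The dog is playing in the garden."
messages = [
    {
        "role": "user",
        "content": f"Parse the following Hungarian sentence into an AMR graph:\n{sentence}",
    }
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, i.e. the AMR graph.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```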

### Model Sources

- **Repository:** [GitHub Repo](https://github.com/botondbarta/HuAMR)

## Training Details

### Training Procedure

#### Training Hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 1
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: AdamW
- lr_scheduler_type: linear
- max_grad_norm: 0.3
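
As a rough sketch, the settings above map onto a `transformers` `TrainingArguments` configuration as follows; `output_dir` and any argument not listed above are placeholders, not reported values.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="llama-3.2-1b-huamr",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,  # effective batch size: 1 * 16 = 16
    optim="adamw_torch",
    lr_scheduler_type="linear",
    max_grad_norm=0.3,
)
```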

#### Metrics

AMR parsers are typically evaluated with the Smatch score, which measures the F1 overlap between the triples of predicted and gold AMR graphs. Scores for this checkpoint:

[More Information Needed]
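
For reference, a Smatch computation with the reference `smatch` package might look like the sketch below; the example graphs are invented, and the `get_amr_match`/`compute_f` calls follow that package's reference implementation (an assumption worth verifying against its documentation).

```python
# pip install smatch
import smatch

# Hypothetical predicted and gold AMR graphs in PENMAN notation.
pred = "(p / play-01 :ARG0 (d / dog))"
gold = "(p / play-01 :ARG0 (d / dog) :location (g / garden))"

# get_amr_match returns (matched, predicted, gold) triple counts.
match, test_total, gold_total = smatch.get_amr_match(pred, gold)
precision, recall, f_score = smatch.compute_f(match, test_total, gold_total)
print(f"Smatch F1: {f_score:.3f}")
```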

## Citation

**BibTeX:**

[More Information Needed]

## Framework versions

- Transformers 4.34.1
- PyTorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1