FemkeBakker committed
Commit 61c1cc7
1 Parent(s): 24184df

Update README.md

Files changed (1)
  1. README.md +20 -9
---
model-index:
- name: AmsterdamDocClassificationLlama200T3Epochs
  results: []
datasets:
- FemkeBakker/AmsterdamBalancedFirst200Tokens
language:
- nl
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# AmsterdamDocClassificationLlama200T3Epochs

As part of the *Assessing Large Language Models for Document Classification* project by the Municipality of Amsterdam, we fine-tune Mistral, Llama, and GEITje for document classification.
The fine-tuning is performed using the [AmsterdamBalancedFirst200Tokens](https://huggingface.co/datasets/FemkeBakker/AmsterdamBalancedFirst200Tokens) dataset, which consists of documents truncated to the first 200 tokens.
In our research, we evaluate the fine-tuning of these LLMs across one, two, and three epochs.
This model is a version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) fine-tuned for three epochs.

It achieves the following results on the evaluation set:
- Loss: 0.8116
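
For quick inspection, a minimal usage sketch follows. It assumes the checkpoint is published on the Hub under `FemkeBakker/AmsterdamDocClassificationLlama200T3Epochs` and that inference goes through the standard Llama-2 chat template; the exact prompt used in our experiments is defined in the GitHub repository linked under Training procedure.

```python
# Minimal usage sketch. Assumptions: the Hub id below is correct and the
# standard Llama-2 chat template applies; the project's exact classification
# prompt lives in its GitHub repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FemkeBakker/AmsterdamDocClassificationLlama200T3Epochs"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Hypothetical instruction: ask the model to label the first 200 tokens of a document.
messages = [{"role": "user", "content": "Classificeer dit document: <eerste 200 tokens>"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=20, do_sample=False)
# Decode only the newly generated tokens (the predicted label).
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```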
 
## Training and evaluation data

- The training data consists of 9,900 documents and their labels, formatted into conversations (see the sketch below).
- The evaluation data consists of 1,100 documents and their labels, formatted into conversations.
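
The exact conversation schema is not spelled out here; the sketch below shows one plausible way a (document, label) pair becomes a two-turn chat. The Dutch instruction text and the role layout are illustrative assumptions; the actual formatting code is in the GitHub repository linked under Training procedure.

```python
# Illustrative sketch: turning a (document, label) pair into a two-turn
# conversation. Instruction text and roles are assumptions, not the project's
# verbatim schema.
def to_conversation(document: str, label: str) -> list[dict]:
    return [
        {"role": "user", "content": f"Classificeer dit document:\n{document}"},
        {"role": "assistant", "content": label},
    ]

# Example: one training conversation for a document truncated to 200 tokens.
conversation = to_conversation("<eerste 200 tokens>", "<label>")
```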
 
## Training procedure

See the [GitHub repository](https://github.com/Amsterdam-Internships/document-classification-using-large-language-models) for the training code and further specifics. As a rough orientation, a sketch of the setup follows.
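In this sketch, only `num_train_epochs=3` and the 123-step evaluation interval (inferred from the step spacing in the results table below, 1845 − 1722 = 123) are grounded in this card; every other value is an assumption, and the authoritative script is in the repository.

```python
# Orientation sketch only: apart from num_train_epochs=3 and eval_steps=123,
# all values are assumptions; the real hyperparameters and training code are
# in the project's GitHub repository.
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="AmsterdamDocClassificationLlama200T3Epochs",
    num_train_epochs=3,             # this checkpoint was fine-tuned for three epochs
    per_device_train_batch_size=1,  # assumption
    gradient_accumulation_steps=8,  # assumption
    learning_rate=2e-5,             # assumption
    evaluation_strategy="steps",    # periodic evaluation, matching the results table
    eval_steps=123,                 # inferred from the step spacing in the results table
    logging_steps=50,               # assumption
)

# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```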
 
 
 
### Training hyperparameters

The following hyperparameters were used during training:

…
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| …             | …      | …    | …               |
| 0.9744        | 2.7855 | 1722 | 0.8116          |
| 1.0399        | 2.9842 | 1845 | 0.8116          |

Training time: in total, it took 2 hours and 3 minutes to fine-tune the model for three epochs.

### Framework versions

- PyTorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1

### Acknowledgements

This model was trained as part of [insert thesis info] in collaboration with Amsterdam Intelligence for the City of Amsterdam.