zibajoon committed
Commit 82b6cc1
1 Parent(s): 92057f5

Update README.md

Files changed (1)
  1. README.md +7 -4
README.md CHANGED
@@ -4,6 +4,9 @@ tags:
 model-index:
 - name: 20231102-20_epochs_layoutlmv2-base-uncased_finetuned_docvqa
   results: []
+license: mit
+datasets:
+- zibajoon/20231109_layoutlm2_5k_20_epochs
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -17,15 +20,15 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+This DocVQA model, built on the LayoutLMv2 framework, is the first step in a planned series of experimental models for document visual question answering. It is the "mini" version of the series, trained on a relatively small dataset of 1.2k samples (1,000 for training and 200 for testing) for 20 epochs. The training setup was modest: mixed precision (fp16), manageable batch sizes, and a focused approach to learning-rate adjustment (warmup steps and weight decay). Notably, the model was trained without external reporting tools, emphasizing internal evaluation. As the first iteration in a series that will later include medium (5k samples) and large (50k samples) models, this version serves as a foundational experiment and sets the stage for more extensive models.
 
 ## Intended uses & limitations
 
-More information needed
+Experimental use only.
 
 ## Training and evaluation data
 
-More information needed
+Based on the 1.2k-sample dataset released by DocVQA (1,000 training / 200 test).
 
 ## Training procedure
 
@@ -58,4 +61,4 @@ The following hyperparameters were used during training:
 - Transformers 4.34.1
 - Pytorch 2.0.1+cu118
 - Datasets 2.10.1
-- Tokenizers 0.14.1
+- Tokenizers 0.14.1
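
For quick experimentation, the sketch below shows one way the fine-tuned checkpoint could be queried with the `transformers` LayoutLMv2 classes. It is a minimal sketch, not part of the card: the repository id, the example image path, and the question are assumptions, the processor is loaded from the base `microsoft/layoutlmv2-base-uncased` checkpoint, and LayoutLMv2 additionally requires `detectron2` and `pytesseract` to be installed.

```python
# Minimal inference sketch -- not from the model card. The repo id below is an assumption
# built from the commit author and the model name; adjust it to the actual Hub location.
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForQuestionAnswering

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForQuestionAnswering.from_pretrained(
    "zibajoon/20231102-20_epochs_layoutlmv2-base-uncased_finetuned_docvqa"  # assumed repo id
)

image = Image.open("page.png").convert("RGB")  # hypothetical document page image
question = "What is the invoice number?"       # hypothetical question

# The processor runs OCR on the page and packs the question, OCR tokens, and bounding boxes together.
encoding = processor(image, question, return_tensors="pt")
outputs = model(**encoding)

# Extractive QA: take the most likely start/end token positions and decode that span as the answer.
start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
print(processor.tokenizer.decode(encoding["input_ids"][0][start : end + 1]))
```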