lmv2-g-aadhaar-236doc-06-14

This model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set (these match the epoch-22 checkpoint in the training table below):

  • Loss: 0.0427

Per-entity metrics ("Support" is the number of gold entities of each type in the evaluation set):

| Entity  | Precision | Recall | F1     | Support |
|---------|-----------|--------|--------|---------|
| Aadhaar | 0.9783    | 1.0    | 0.9890 | 45      |
| Dob     | 0.9787    | 1.0    | 0.9892 | 46      |
| Gender  | 1.0       | 0.9787 | 0.9892 | 47      |
| Name    | 0.9574    | 0.9375 | 0.9474 | 48      |

  • Overall Precision: 0.9785
  • Overall Recall: 0.9785
  • Overall F1: 0.9785
  • Overall Accuracy: 0.9939
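
Each F1 above is the harmonic mean of the corresponding precision and recall; a minimal check in Python:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reproduces the Aadhaar F1 reported above.
print(round(f1(0.9783, 1.0), 4))  # 0.989
```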

Model description

LayoutLMv2 is a multimodal Transformer for document image understanding that combines text, layout, and visual features. Judging from the evaluation metrics above, this checkpoint performs token classification on Aadhaar identity cards, tagging four field types: Aadhaar number, date of birth (dob), gender, and name. No further details were provided by the authors.

Intended uses & limitations

The checkpoint is presumably intended for extracting the four fields listed above from scanned Aadhaar cards. Note that the evaluation set is small (45 to 48 gold entities per type), so the reported metrics should be read with appropriate caution. A hedged inference sketch follows.
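
No usage example was provided with the card. The sketch below shows one plausible way to run inference, assuming the checkpoint uses the standard LayoutLMv2 token-classification head and the base model's processor; the image file name and local checkpoint path are illustrative assumptions. LayoutLMv2 additionally requires detectron2, and the processor's built-in OCR requires pytesseract.

```python
import torch
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

# The base processor applies OCR (pytesseract) to extract words and boxes;
# swap in the fine-tuned repo's own processor if one was published with it.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained("lmv2-g-aadhaar-236doc-06-14")
model.eval()

image = Image.open("aadhaar_scan.png").convert("RGB")  # illustrative file name
encoding = processor(image, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**encoding).logits  # (1, seq_len, num_labels)

# Map token-level predictions back to the label names defined in the
# checkpoint's config (presumably tags for aadhaar, dob, gender, and name).
pred_ids = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in pred_ids])
```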

Training and evaluation data

More information needed. The model name suggests the training corpus comprised roughly 236 Aadhaar documents ("236doc"), but this is not confirmed anywhere in the card.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 4e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • num_epochs: 30
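
For reference, these settings correspond directly to Hugging Face TrainingArguments; a minimal sketch of the equivalent configuration (the output directory name is an illustrative assumption):

```python
from transformers import TrainingArguments

# Equivalent of the hyperparameters listed above; output_dir is illustrative.
training_args = TrainingArguments(
    output_dir="lmv2-g-aadhaar-236doc-06-14",
    learning_rate=4e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=30,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```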

Training results

| Training Loss | Epoch | Step | Validation Loss | Aadhaar Precision | Aadhaar Recall | Aadhaar F1 | Aadhaar Support | Dob Precision | Dob Recall | Dob F1 | Dob Support | Gender Precision | Gender Recall | Gender F1 | Gender Support | Name Precision | Name Recall | Name F1 | Name Support | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.0024 | 1.0 | 188 | 0.5819 | 0.9348 | 0.9556 | 0.9451 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9574 | 0.9783 | 47 | 0.5172 | 0.625 | 0.5660 | 48 | 0.8410 | 0.8817 | 0.8609 | 0.9744 |
| 0.4484 | 2.0 | 376 | 0.3263 | 0.8980 | 0.9778 | 0.9362 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.6842 | 0.8125 | 0.7429 | 48 | 0.8838 | 0.9409 | 0.9115 | 0.9733 |
| 0.2508 | 3.0 | 564 | 0.2230 | 0.9318 | 0.9111 | 0.9213 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.8913 | 0.8542 | 0.8723 | 48 | 0.9560 | 0.9355 | 0.9457 | 0.9811 |
| 0.165 | 4.0 | 752 | 0.1728 | 0.9362 | 0.9778 | 0.9565 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.8444 | 0.7917 | 0.8172 | 48 | 0.9457 | 0.9355 | 0.9405 | 0.9844 |
| 0.1081 | 5.0 | 940 | 0.0987 | 0.8958 | 0.9556 | 0.9247 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 1.0 | 0.9167 | 0.9565 | 48 | 0.9728 | 0.9624 | 0.9676 | 0.9928 |
| 0.0834 | 6.0 | 1128 | 0.0984 | 0.8980 | 0.9778 | 0.9362 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9574 | 0.9783 | 47 | 0.8148 | 0.9167 | 0.8627 | 48 | 0.9227 | 0.9624 | 0.9421 | 0.9833 |
| 0.0676 | 7.0 | 1316 | 0.0773 | 0.9362 | 0.9778 | 0.9565 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9111 | 0.8542 | 0.8817 | 48 | 0.9620 | 0.9516 | 0.9568 | 0.9894 |
| 0.0572 | 8.0 | 1504 | 0.0786 | 0.8235 | 0.9333 | 0.8750 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.8936 | 0.875 | 0.8842 | 48 | 0.9263 | 0.9462 | 0.9362 | 0.9872 |
| 0.0481 | 9.0 | 1692 | 0.0576 | 0.9375 | 1.0 | 0.9677 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9362 | 0.9167 | 0.9263 | 48 | 0.9679 | 0.9731 | 0.9705 | 0.99 |
| 0.0349 | 10.0 | 1880 | 0.0610 | 0.9574 | 1.0 | 0.9783 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.8958 | 0.8958 | 0.8958 | 48 | 0.9626 | 0.9677 | 0.9651 | 0.9894 |
| 0.0287 | 11.0 | 2068 | 0.0978 | 0.9091 | 0.8889 | 0.8989 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9348 | 0.8958 | 0.9149 | 48 | 0.9615 | 0.9409 | 0.9511 | 0.985 |
| 0.0297 | 12.0 | 2256 | 0.0993 | 0.9375 | 1.0 | 0.9677 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.7959 | 0.8125 | 0.8041 | 48 | 0.9312 | 0.9462 | 0.9387 | 0.9833 |
| 0.0395 | 13.0 | 2444 | 0.0824 | 0.9362 | 0.9778 | 0.9565 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.875 | 0.875 | 0.875 | 48 | 0.9519 | 0.9570 | 0.9544 | 0.9872 |
| 0.0333 | 14.0 | 2632 | 0.0788 | 0.8913 | 0.9111 | 0.9011 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9556 | 0.8958 | 0.9247 | 48 | 0.9617 | 0.9462 | 0.9539 | 0.9867 |
| 0.0356 | 15.0 | 2820 | 0.0808 | 0.84 | 0.9333 | 0.8842 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9565 | 0.9167 | 0.9362 | 48 | 0.9468 | 0.9570 | 0.9519 | 0.9867 |
| 0.0192 | 16.0 | 3008 | 0.0955 | 0.8462 | 0.9778 | 0.9072 | 45 | 0.9787 | 1.0 | 0.9892 | 46 | 0.9583 | 0.9787 | 0.9684 | 47 | 0.9070 | 0.8125 | 0.8571 | 48 | 0.9211 | 0.9409 | 0.9309 | 0.9822 |
| 0.016 | 17.0 | 3196 | 0.0936 | 0.9130 | 0.9333 | 0.9231 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9318 | 0.8542 | 0.8913 | 48 | 0.9615 | 0.9409 | 0.9511 | 0.9867 |
| 0.0218 | 18.0 | 3384 | 0.1009 | 0.9545 | 0.9333 | 0.9438 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.8571 | 0.875 | 0.8660 | 48 | 0.9514 | 0.9462 | 0.9488 | 0.9844 |
| 0.0165 | 19.0 | 3572 | 0.0517 | 0.9574 | 1.0 | 0.9783 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9333 | 0.875 | 0.9032 | 48 | 0.9728 | 0.9624 | 0.9676 | 0.9906 |
| 0.0198 | 20.0 | 3760 | 0.0890 | 0.9167 | 0.9778 | 0.9462 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9149 | 0.8958 | 0.9053 | 48 | 0.9572 | 0.9624 | 0.9598 | 0.9867 |
| 0.0077 | 21.0 | 3948 | 0.0835 | 0.9574 | 1.0 | 0.9783 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.88 | 0.9167 | 0.8980 | 48 | 0.9577 | 0.9731 | 0.9653 | 0.9872 |
| 0.0088 | 22.0 | 4136 | 0.0427 | 0.9783 | 1.0 | 0.9890 | 45 | 0.9787 | 1.0 | 0.9892 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9574 | 0.9375 | 0.9474 | 48 | 0.9785 | 0.9785 | 0.9785 | 0.9939 |
| 0.0078 | 23.0 | 4324 | 0.0597 | 0.9574 | 1.0 | 0.9783 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.8654 | 0.9375 | 0.9 | 48 | 0.9529 | 0.9785 | 0.9655 | 0.9889 |
| 0.0178 | 24.0 | 4512 | 0.0524 | 0.9574 | 1.0 | 0.9783 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 1.0 | 0.875 | 0.9333 | 48 | 0.9890 | 0.9624 | 0.9755 | 0.9922 |
| 0.012 | 25.0 | 4700 | 0.0637 | 0.9375 | 1.0 | 0.9677 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.8491 | 0.9375 | 0.8911 | 48 | 0.9430 | 0.9785 | 0.9604 | 0.9867 |
| 0.0135 | 26.0 | 4888 | 0.0668 | 0.9184 | 1.0 | 0.9574 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.86 | 0.8958 | 0.8776 | 48 | 0.9424 | 0.9677 | 0.9549 | 0.9867 |
| 0.0123 | 27.0 | 5076 | 0.0713 | 0.9565 | 0.9778 | 0.9670 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9375 | 0.9375 | 0.9375 | 48 | 0.9731 | 0.9731 | 0.9731 | 0.9911 |
| 0.0074 | 28.0 | 5264 | 0.0675 | 0.9362 | 0.9778 | 0.9565 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9 | 0.9375 | 0.9184 | 48 | 0.9577 | 0.9731 | 0.9653 | 0.99 |
| 0.0051 | 29.0 | 5452 | 0.0713 | 0.9362 | 0.9778 | 0.9565 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9167 | 0.9167 | 0.9167 | 48 | 0.9626 | 0.9677 | 0.9651 | 0.9906 |
| 0.0027 | 30.0 | 5640 | 0.0725 | 0.9362 | 0.9778 | 0.9565 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9167 | 0.9167 | 0.9167 | 48 | 0.9626 | 0.9677 | 0.9651 | 0.9906 |

Framework versions

  • Transformers 4.20.0.dev0
  • Pytorch 1.11.0+cu113
  • Datasets 2.2.2
  • Tokenizers 0.12.1