
layoutlmv3-for-receipt-understanding

This model is a fine-tuned version of microsoft/layoutlmv3-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1719
  • Precision: 0.9739
  • Recall: 0.9837
  • F1: 0.9788
  • Accuracy: 0.9784
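
The checkpoint loads like any LayoutLMv3 token-classification model. The snippet below is a minimal, hedged inference sketch rather than an official recipe: the receipt image path is a placeholder, it assumes the processor configuration was saved with this checkpoint (otherwise load it from microsoft/layoutlmv3-base), and the built-in OCR requires pytesseract and the Tesseract binary to be installed.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

repo = "NLPmonster/layoutlmv3-for-receipt-understanding"

# apply_ocr=True lets the processor run Tesseract to extract words and boxes.
processor = AutoProcessor.from_pretrained(repo, apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained(repo)
model.eval()

image = Image.open("receipt.png").convert("RGB")  # placeholder path
encoding = processor(image, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**encoding).logits  # shape: (1, seq_len, num_labels)

predicted_ids = logits.argmax(-1).squeeze().tolist()
labels = [model.config.id2label[i] for i in predicted_ids]
tokens = processor.tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
print(list(zip(tokens, labels)))
```

The label names come from the checkpoint's id2label mapping; the card does not document what entity types they cover.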

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 5
  • eval_batch_size: 5
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 2500
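
For reference, the settings above map roughly onto the Trainer configuration sketched below. This is an assumption-laden reconstruction, not the original training script: the output directory, evaluation and logging cadence, and single-device batch size are inferred or guessed, and anything not listed above follows Trainer defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlmv3-for-receipt-understanding",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=5,   # assuming single-device training
    per_device_eval_batch_size=5,
    seed=42,
    max_steps=2500,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and eps=1e-8 matches the Trainer defaults.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="steps",
    eval_steps=50,        # inferred: the results table has one row per 50 steps
    logging_steps=500,    # inferred: training loss updates every 500 steps
)
```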

Training results

| Training Loss | Epoch   | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 0.3125  | 50   | 1.0562          | 0.6706    | 0.7585 | 0.7118 | 0.7602   |
| No log        | 0.625   | 100  | 0.6576          | 0.8260    | 0.8548 | 0.8401 | 0.8379   |
| No log        | 0.9375  | 150  | 0.4929          | 0.8312    | 0.8758 | 0.8529 | 0.8693   |
| No log        | 1.25    | 200  | 0.4159          | 0.8685    | 0.9076 | 0.8876 | 0.9028   |
| No log        | 1.5625  | 250  | 0.3272          | 0.8998    | 0.9340 | 0.9166 | 0.9342   |
| No log        | 1.875   | 300  | 0.3002          | 0.8953    | 0.9293 | 0.9120 | 0.9261   |
| No log        | 2.1875  | 350  | 0.2275          | 0.9289    | 0.9542 | 0.9414 | 0.9423   |
| No log        | 2.5     | 400  | 0.2756          | 0.9147    | 0.9402 | 0.9273 | 0.9317   |
| No log        | 2.8125  | 450  | 0.2767          | 0.9312    | 0.9457 | 0.9384 | 0.9397   |
| 0.5451        | 3.125   | 500  | 0.2000          | 0.9416    | 0.9635 | 0.9524 | 0.9529   |
| 0.5451        | 3.4375  | 550  | 0.1736          | 0.9564    | 0.9705 | 0.9634 | 0.9648   |
| 0.5451        | 3.75    | 600  | 0.1865          | 0.9511    | 0.9674 | 0.9592 | 0.9643   |
| 0.5451        | 4.0625  | 650  | 0.1916          | 0.9511    | 0.9666 | 0.9588 | 0.9622   |
| 0.5451        | 4.375   | 700  | 0.1681          | 0.9519    | 0.9674 | 0.9596 | 0.9648   |
| 0.5451        | 4.6875  | 750  | 0.1808          | 0.9463    | 0.9705 | 0.9582 | 0.9660   |
| 0.5451        | 5.0     | 800  | 0.2651          | 0.9299    | 0.9581 | 0.9438 | 0.9406   |
| 0.5451        | 5.3125  | 850  | 0.2116          | 0.9497    | 0.9682 | 0.9589 | 0.9614   |
| 0.5451        | 5.625   | 900  | 0.2377          | 0.9481    | 0.9643 | 0.9561 | 0.9554   |
| 0.5451        | 5.9375  | 950  | 0.1865          | 0.9473    | 0.9627 | 0.9549 | 0.9614   |
| 0.1           | 6.25    | 1000 | 0.1875          | 0.9639    | 0.9736 | 0.9687 | 0.9686   |
| 0.1           | 6.5625  | 1050 | 0.1848          | 0.9542    | 0.9713 | 0.9627 | 0.9652   |
| 0.1           | 6.875   | 1100 | 0.2124          | 0.9548    | 0.9666 | 0.9606 | 0.9618   |
| 0.1           | 7.1875  | 1150 | 0.1733          | 0.9602    | 0.9744 | 0.9672 | 0.9690   |
| 0.1           | 7.5     | 1200 | 0.1955          | 0.9580    | 0.9728 | 0.9653 | 0.9631   |
| 0.1           | 7.8125  | 1250 | 0.1986          | 0.9505    | 0.9697 | 0.9600 | 0.9643   |
| 0.1           | 8.125   | 1300 | 0.1908          | 0.9579    | 0.9713 | 0.9645 | 0.9677   |
| 0.1           | 8.4375  | 1350 | 0.1689          | 0.9625    | 0.9752 | 0.9688 | 0.9694   |
| 0.1           | 8.75    | 1400 | 0.1836          | 0.9570    | 0.9682 | 0.9626 | 0.9665   |
| 0.1           | 9.0625  | 1450 | 0.1955          | 0.9639    | 0.9744 | 0.9691 | 0.9682   |
| 0.0396        | 9.375   | 1500 | 0.1577          | 0.9716    | 0.9814 | 0.9764 | 0.9758   |
| 0.0396        | 9.6875  | 1550 | 0.1689          | 0.9663    | 0.9790 | 0.9726 | 0.9745   |
| 0.0396        | 10.0    | 1600 | 0.1700          | 0.9693    | 0.9806 | 0.9749 | 0.9754   |
| 0.0396        | 10.3125 | 1650 | 0.1666          | 0.9708    | 0.9821 | 0.9765 | 0.9754   |
| 0.0396        | 10.625  | 1700 | 0.1713          | 0.9708    | 0.9798 | 0.9753 | 0.9745   |
| 0.0396        | 10.9375 | 1750 | 0.1892          | 0.9648    | 0.9775 | 0.9711 | 0.9716   |
| 0.0396        | 11.25   | 1800 | 0.1881          | 0.9693    | 0.9790 | 0.9741 | 0.9728   |
| 0.0396        | 11.5625 | 1850 | 0.1820          | 0.9625    | 0.9759 | 0.9692 | 0.9720   |
| 0.0396        | 11.875  | 1900 | 0.1757          | 0.9723    | 0.9821 | 0.9772 | 0.9750   |
| 0.0396        | 12.1875 | 1950 | 0.1772          | 0.9655    | 0.9767 | 0.9711 | 0.9720   |
| 0.0185        | 12.5    | 2000 | 0.1685          | 0.9701    | 0.9821 | 0.9761 | 0.9758   |
| 0.0185        | 12.8125 | 2050 | 0.1716          | 0.9716    | 0.9814 | 0.9764 | 0.9762   |
| 0.0185        | 13.125  | 2100 | 0.1697          | 0.9723    | 0.9814 | 0.9768 | 0.9762   |
| 0.0185        | 13.4375 | 2150 | 0.1704          | 0.9723    | 0.9821 | 0.9772 | 0.9771   |
| 0.0185        | 13.75   | 2200 | 0.1686          | 0.9739    | 0.9837 | 0.9788 | 0.9788   |
| 0.0185        | 14.0625 | 2250 | 0.1714          | 0.9724    | 0.9829 | 0.9776 | 0.9779   |
| 0.0185        | 14.375  | 2300 | 0.1709          | 0.9731    | 0.9837 | 0.9784 | 0.9788   |
| 0.0185        | 14.6875 | 2350 | 0.1713          | 0.9739    | 0.9837 | 0.9788 | 0.9784   |
| 0.0185        | 15.0    | 2400 | 0.1715          | 0.9739    | 0.9837 | 0.9788 | 0.9784   |
| 0.0185        | 15.3125 | 2450 | 0.1717          | 0.9739    | 0.9837 | 0.9788 | 0.9784   |
| 0.0083        | 15.625  | 2500 | 0.1719          | 0.9739    | 0.9837 | 0.9788 | 0.9784   |
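
The card does not state how precision, recall, F1, and accuracy were computed. For LayoutLMv3 token-classification fine-tunes they are commonly entity-level seqeval metrics returned from a compute_metrics callback passed to the Trainer; the sketch below shows that conventional setup under that assumption (the BIO label strings are read from the checkpoint config, and -100 marks ignored positions).

```python
import numpy as np
import evaluate  # pip install evaluate seqeval
from transformers import AutoConfig

seqeval = evaluate.load("seqeval")
id2label = AutoConfig.from_pretrained("NLPmonster/layoutlmv3-for-receipt-understanding").id2label

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Keep only positions with a real label; -100 marks special/padded tokens.
    true_preds = [[id2label[int(p)] for p, l in zip(pred, lab) if l != -100]
                  for pred, lab in zip(predictions, labels)]
    true_labels = [[id2label[int(l)] for p, l in zip(pred, lab) if l != -100]
                   for pred, lab in zip(predictions, labels)]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```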

Framework versions

  • Transformers 4.44.2
  • PyTorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.19.1