---
datasets:
  - albertvillanova/legal_contracts
---

# bert-tiny-finetuned-legal-contracts-longer

This model is a fine-tuned version of google/bert_uncased_L-4_H-512_A-8 on a portion of the albertvillanova/legal_contracts dataset.

## Note

The model was not trained on the whole dataset, which is around 9.5 GB, but only on the first 10% and the last 10% of the train split:

```python
from datasets import load_dataset

# First 10% of the train split for training
datasets_train = load_dataset('albertvillanova/legal_contracts', split='train[:10%]')
# Last 10% of the train split for validation
datasets_validation = load_dataset('albertvillanova/legal_contracts', split='train[-10%:]')
```
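
For completeness, here is a minimal usage sketch with the `transformers` fill-mask pipeline. It assumes the model is published under the repo id `muhtasham/bert-tiny-finetuned-legal-contracts-longer` (inferred from the title above) and keeps the masked-language-modeling head from pretraining:

```python
from transformers import pipeline

# Repo id inferred from the model card title; adjust if the model lives elsewhere.
fill_mask = pipeline(
    "fill-mask",
    model="muhtasham/bert-tiny-finetuned-legal-contracts-longer",
)

# BERT uncased models use the [MASK] token.
print(fill_mask("This agreement shall be governed by the laws of the [MASK] of Delaware."))
```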