⚠️ This model has been superseded by the Open Australian Legal LLM, the largest open source language model trained on Australian law. You are encouraged to use that model instead. ⚠️
Open Australian Legal GPT2 ⚖️
Open Australian Legal GPT2 is the first open source language model trained on Australian law.
Naturally, as a finetune of GPT2, the model may be used for any of the tasks for which GPT2 is suitable, including text generation, text completion and question answering.
Trained on 37,560 laws and regulations, comprising 635,482,112 tokens, taken from the Open Australian Legal Corpus, the model is intended specifically to be finetuned for downstream natural language processing tasks applied to the Australian legal domain.
To ensure its accessibility to as wide an audience as possible, the model is issued under the Apache Licence 2.0.
Those interested in learning more about the model are encouraged to read Umar Butler's accompanying article, How I built the first open LLM for Australian law.
A smaller, distilled version of the model trained on the same dataset may be found here, and a larger model trained on a greater number of Australian legal documents is available here.
Usage 👩‍💻
The code snippet below demonstrates just one of the many ways in which the model may be accessed:
>>> from transformers import pipeline, set_seed
>>> set_seed(42) # We set a seed for reproducibility.
>>> generator = pipeline('text-generation', model='umarbutler/open-australian-legal-gpt2')
>>> generator('Under the Crimes Act 1914')
[{'generated_text': 'Under the Crimes Act 1914, a person who is liable to a payment of a benefit under the Act is also liable to pay'}]
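Should finer control over generation be needed, the model can also be loaded directly through the transformers auto classes. The snippet below is only a minimal sketch, and its sampling parameters are illustrative rather than recommended settings:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokeniser = AutoTokenizer.from_pretrained('umarbutler/open-australian-legal-gpt2')
>>> model = AutoModelForCausalLM.from_pretrained('umarbutler/open-australian-legal-gpt2')
>>> inputs = tokeniser('Under the Crimes Act 1914', return_tensors='pt')
>>> output = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_k=50) # Illustrative sampling parameters.
>>> print(tokeniser.decode(output[0], skip_special_tokens=True))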
Creation 🧪
37,560 documents were sampled from the Open Australian Legal Corpus by filtering for primary and secondary legislation that, when stripped of whitespace, was not empty. Those documents were then randomly shuffled and packed into blocks 1,024 tokens long, with GPT2's end-of-sequence token ('<|endoftext|>') used both to delimit documents and to pad the end of the final block, resulting in a training dataset of 620,588 blocks, or 635,482,112 tokens.
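For illustration only, that packing step might be reimplemented along the following lines; the helper below is a sketch based on the description above, not the author's actual preprocessing code:
from transformers import AutoTokenizer

# Sketch of the block-packing described above (illustrative, not the author's code).
tokeniser = AutoTokenizer.from_pretrained('gpt2')
BLOCK_SIZE = 1024
eos = tokeniser.eos_token_id  # The id of '<|endoftext|>'.

def pack_into_blocks(documents: list[str]) -> list[list[int]]:
    """Tokenise already-shuffled documents, delimit them with '<|endoftext|>' and split them into fixed-size blocks."""
    ids: list[int] = []
    for document in documents:
        ids.extend(tokeniser(document)['input_ids'])
        ids.append(eos)  # Delimit documents with the end-of-sequence token.
    remainder = len(ids) % BLOCK_SIZE
    if remainder:
        ids.extend([eos] * (BLOCK_SIZE - remainder))  # Pad the end of the final block.
    return [ids[i:i + BLOCK_SIZE] for i in range(0, len(ids), BLOCK_SIZE)]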
The training dataset was subsequently fed to GPT2 via transformers.Trainer with the following hyperparameters:
| Hyperparameter | Value |
| --- | --- |
| Sequence length | 1,024 |
| Epochs | 3 |
| Optimiser | AdamW |
| Learning rate | 1e-5 |
| Learning rate scheduler | Linear with warmup |
| Batch size per device | 4 |
| Weight decay | 0.01 |
| Warmup ratio | 0.06 |
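As a rough guide to reproducing this setup, the hyperparameters above translate into a transformers configuration along the following lines. This is a sketch rather than the author's actual training script, and train_dataset is assumed to hold the packed blocks described earlier:
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Sketch of the training configuration (illustrative, not the author's script).
tokeniser = AutoTokenizer.from_pretrained('gpt2')
model = AutoModelForCausalLM.from_pretrained('gpt2')

training_args = TrainingArguments(
    output_dir='open-australian-legal-gpt2',
    num_train_epochs=3,
    learning_rate=1e-5,
    lr_scheduler_type='linear',  # Linear decay with warmup.
    warmup_ratio=0.06,
    per_device_train_batch_size=4,
    weight_decay=0.01,
    optim='adamw_torch',  # AdamW.
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # Assumed: the 620,588 blocks of 1,024 tokens described above.
    data_collator=DataCollatorForLanguageModeling(tokeniser, mlm=False),  # Causal language modelling.
)
trainer.train()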
After training for 3 epochs, or 465,441 steps, over a period of ~25 hours on two GeForce RTX 4090s, the model achieved a training loss of 0.61.
Limitations 🚧
Although the model has not been tested for bias, one would expect it to exhibit many, if not all, of the same biases as GPT2.
One might also expect the model to exhibit a bias towards the type of language employed in legislation and regulations (its source materials) as well as towards Commonwealth law (the largest source of legislation in the Open Australian Legal Corpus at the time of the model's creation).
Finally, it is worth noting that the model may lack knowledge of Victorian, Northern Territory and Australian Capital Territory law, as licensing restrictions prevented the inclusion of those jurisdictions' legislation in the training data.
Licence 📜
The model is issued under the Apache Licence 2.0.
Citation 🔖
If you've relied on the model for your work, please cite:
@misc{butler-2023-open-australian-legal-gpt2,
author = {Butler, Umar},
year = {2023},
title = {Open Australian Legal GPT2},
publisher = {Hugging Face},
version = {1.0.0},
url = {https://huggingface.co/umarbutler/open-australian-legal-gpt2}
}
Acknowledgements 🙏
In the spirit of reconciliation, the author acknowledges the Traditional Custodians of Country throughout Australia and their connections to land, sea and community. He pays his respect to their Elders past and present and extends that respect to all Aboriginal and Torres Strait Islander peoples today.
The author thanks the sources of the Open Australian Legal Corpus for making their data available under open licences.
The author also acknowledges the developers of the many Python libraries relied upon in the training of the model, as well as the makers of GPT2, which the model was built atop.
Finally, the author is eternally grateful for the endless support of his wife and her willingness to put up with many a late night spent writing code and quashing bugs.