---
license: apache-2.0
datasets:
- umarbutler/open-australian-legal-qa
tags:
- law
- legal
- australia
---
# AusLegalQA

AusLegalQA is a fine-tune of [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), trained with QLoRA (via PEFT) on the [Open Australian Legal QA](https://huggingface.co/datasets/umarbutler/open-australian-legal-qa) dataset.

The model achieved an eval loss of 1.1391 on a subset of 100 prompts and answers from the original dataset.

The model was trained for 3 epochs with the following hyperparameters. The checkpoint with the lowest eval loss (at the end of epoch 2) was selected, and the resulting 4-bit QLoRA adapter was merged into the base model.
| Hyperparameter | Value |
| --- | --- |
| Sequence length | 1024 |
| Epochs | 3 |
| Optimiser | AdamW |
| Learning rate | 1e-4 |
| Learning rate scheduler | Cosine |
| Batch size | 1 |
| Weight decay | 0.01 |
| Warmup ratio | 0.05 |
| LoRA rank | 64 |
| LoRA alpha | 128 |
| LoRA dropout | 0.1 |
| LoRA target modules | q_proj, v_proj |
| NEFTune alpha | 5 |
| Flash Attention | on |
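
As a rough illustration, the table above maps onto the `transformers`/`peft` APIs as sketched below. This is not the actual training script: argument names such as `neftune_noise_alpha` and `attn_implementation` are assumptions based on recent library versions, and the sequence length (1024) would be applied by the trainer rather than here.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

base = "mistralai/Mixtral-8x7B-Instruct-v0.1"

# Load the base model in 4-bit (QLoRA) with Flash Attention enabled.
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    attn_implementation="flash_attention_2",
    device_map="auto",
)

# LoRA settings from the table above.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Optimiser and schedule settings from the table above.
training_args = TrainingArguments(
    output_dir="auslegalqa",
    num_train_epochs=3,
    per_device_train_batch_size=1,
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    weight_decay=0.01,
    warmup_ratio=0.05,
    optim="adamw_torch",
    neftune_noise_alpha=5,  # NEFTune noise injection
)
```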

## Strengths
The model is strong at summarisation and at short-form answers that capture the key details. It is more likely than the base model to produce responses that assume the user is located in Australia. The ideal use case is as the generation model in a retrieval-augmented (LlamaIndex/LangChain) pipeline.
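
A minimal standalone inference sketch with `transformers` follows; the repo id is a placeholder, so substitute this model's actual id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AusLegalQA"  # placeholder: substitute this repo's actual id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Mixtral-Instruct chat template: a single user turn.
messages = [
    {"role": "user", "content": "Summarise the purpose of the Privacy Act 1988 (Cth)."}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```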

## Limitations
Like the base model, it has no moderation mechanisms.