Commit 902c773 (parent: 7d0e0af) by shreyans92dhankhar: Update README.md

README.md (changed):
---
language:
- en
library_name: transformers
license: other
---

# Model Card for ContractAssist model

<!-- Provide a quick summary of what the model is/does. [Optional] -->
Instruction-tuned model built on FlanT5-XXL, trained on data generated via ChatGPT, for generating and/or modifying legal clauses.

# Model Details

## Model Description

<!-- Provide a longer summary of what this model is/does. -->

- **Developed by:** Jaykumar Kasundra, Shreyans Dhankhar
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** other
- **Resources for more information:**
    - [Associated Paper](<Add Link>)

# Uses

### Running the model on a GPU using different precisions

#### FP16

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate peft bitsandbytes
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto", torch_dtype=torch.float16)

input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>

#### INT8

<details>
<summary> Click to expand </summary>

```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto", load_in_8bit=True)

input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>
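
Note that both snippets above load only the base `google/flan-t5-xxl` weights. Since this card lists PEFT among the training software, the fine-tuned ContractAssist adapter presumably has to be applied on top of the base model. The sketch below shows the generic PEFT loading pattern under that assumption; the adapter repository id is a placeholder, not a confirmed path.

```python
# pip install peft
from peft import PeftModel
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl")
base_model = T5ForConditionalGeneration.from_pretrained(
    "google/flan-t5-xxl", device_map="auto", load_in_8bit=True
)

# Placeholder id: point this at the repository that holds the ContractAssist
# PEFT adapter weights (adapter_config.json plus the adapter checkpoint).
adapter_id = "<contractassist-adapter-repo-id>"

# Wrap the base model with the adapter; generation then uses the tuned weights.
model = PeftModel.from_pretrained(base_model, adapter_id)
```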

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

## Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->

The model can be used directly to generate or modify legal clauses and to assist in drafting contracts. It likely works best on English-language text.
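
As an illustration, a clause-drafting call might look like the sketch below, reusing the `model` and `tokenizer` loaded in the snippets above; the instruction wording and the `max_new_tokens` value are illustrative assumptions, not a prompt format documented for this model.

```python
# Hypothetical drafting instruction; the exact prompt style used during
# instruction tuning is not documented in this card.
input_text = (
    "Draft a confidentiality clause for a consulting agreement "
    "between a company and an independent contractor."
)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

# Give the model room for a clause-length answer; 256 tokens is an
# illustrative cap, not a tuned value.
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```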

## Compute Infrastructure

Trained via an Amazon SageMaker training job.

### Hardware

1 x 24GB NVIDIA A10G

### Software

Transformers, PEFT, bitsandbytes

# Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

<Coming Soon>

# Model Card Authors

<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->

Jaykumar Kasundra, Shreyans Dhankhar