---
language:
- en
library_name: transformers
license: other
---

# Model Card for ContractAssist model

<!-- Provide a quick summary of what the model is/does. [Optional] -->
Instruction-tuned FlanT5-XXL on legal clause data generated via ChatGPT. The model is capable of generating and/or modifying legal clauses.



# Model Details

## Model Description

<!-- Provide a longer summary of what this model is/does. -->

- **Developed by:** Jaykumar Kasundra, Shreyans Dhankhar
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** other
- **Resources for more information:** 

    - [Associated Paper](<Add Link>)

# Uses



### Running the model on a GPU in 8bit


<details>
<summary> Click to expand </summary>

```python
# pip install transformers accelerate peft bitsandbytes
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel, PeftConfig

peft_model_id = "NebulaSense/ContractAssist"

# Load the PEFT configuration to resolve the base model checkpoint
peft_config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model in 8-bit, then attach the PEFT adapter weights
model = AutoModelForSeq2SeqLM.from_pretrained(
    peft_config.base_model_name_or_path,
    device_map="auto",
    load_in_8bit=True,
)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()
```

</details>
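Once loaded, the model can be prompted like any seq2seq transformer. The sketch below shows a generation call, reusing the `model` and `tokenizer` objects from the snippet above; the instruction wording is purely illustrative, as the model card does not document an exact prompt format.

```python
# Sketch of a generation call, assuming `model` and `tokenizer` from the
# loading snippet above. The instruction text is an illustrative example,
# not the authors' documented prompt format.
import torch

instruction = "Draft a confidentiality clause for a software services agreement."

inputs = tokenizer(instruction, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```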


<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

## Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->

The model can be used directly to generate or modify legal clauses and to assist in drafting contracts. It likely works best on English-language text.

## Compute Infrastructure

Amazon SageMaker Training Job.

### Hardware

1 x 24GB NVIDIA A10G

### Software

Transformers, PEFT, bitsandbytes

# Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

<Coming Soon>

# Model Card Authors

<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->

Jaykumar Kasundra, Shreyans Dhankhar