---
language:
- en
library_name: transformers
license: other
---
# Model Card for ContractAssist model
<!-- Provide a quick summary of what the model is/does. [Optional] -->
Instruction-tuned FlanT5-XXL on legal-clause data generated via ChatGPT. The model can generate and/or modify legal clauses.
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
- **Developed by:** Jaykumar Kasundra, Shreyans Dhankhar
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** other
- **Resources for more information:**
- [Associated Paper](<Add Link>)
# Uses
### Running the model on a GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate peft bitsandbytes
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel, PeftConfig

peft_model_id = "NebulaSense/ContractAssist"
peft_config = PeftConfig.from_pretrained(peft_model_id)

# Load the FlanT5-XXL base model in 8-bit and its tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(
    peft_config.base_model_name_or_path, device_map="auto", load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)

# Attach the ContractAssist PEFT adapter and switch to inference mode
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()
```
</details>
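Once loaded, the model can be prompted like any seq2seq model via `generate`. The sketch below is an assumption-laden example: the exact prompt format used during instruction tuning is not documented here, so `build_prompt` is a hypothetical helper, and the instruction text is illustrative only.

```python
def build_prompt(instruction: str, clause: str = "") -> str:
    """Hypothetical helper: combine an instruction and an optional
    existing clause into a single instruction-style prompt string."""
    if clause:
        return f"{instruction}\n\nClause:\n{clause}"
    return instruction


if __name__ == "__main__":
    import torch
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    from peft import PeftModel, PeftConfig

    peft_model_id = "NebulaSense/ContractAssist"
    peft_config = PeftConfig.from_pretrained(peft_model_id)
    tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
    model = AutoModelForSeq2SeqLM.from_pretrained(
        peft_config.base_model_name_or_path, device_map="auto", load_in_8bit=True
    )
    model = PeftModel.from_pretrained(model, peft_model_id)
    model.eval()

    # Generate a clause from a plain instruction (illustrative prompt)
    prompt = build_prompt("Draft a mutual confidentiality clause for a service agreement.")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```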
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
The model can be used directly to generate or modify legal clauses and to assist in drafting contracts. It likely works best on English-language text.
## Compute Infrastructure
Amazon SageMaker Training Job.
### Hardware
1 x 24GB NVIDIA A10G
### Software
Transformers, PEFT, bitsandbytes
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
<Coming Soon>
# Model Card Authors
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
Jaykumar Kasundra, Shreyans Dhankhar