---
configs:
- config_name: law_knowledge_prob
  data_files:
  - split: test
    path: test.jsonl
task_categories:
- text-classification
- question-answering
- zero-shot-classification
language:
- en
tags:
- legal
---
Domain Adaptation of Large Language Models
This repo contains the Law Knowledge Probing dataset used in our ICLR 2024 paper Adapting Large Language Models via Reading Comprehension.
We explore continued pre-training on domain-specific corpora for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to transform large-scale pre-training corpora into reading comprehension texts, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. Our 7B model competes with much larger domain-specific models like BloombergGPT-50B.
🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗
**************************** Updates ****************************
- 2024/4/14: Released the knowledge probing datasets at med_knowledge_prob and law_knowledge_prob
- 2024/4/2: Released the raw data splits (train and test) of all the evaluation datasets
- 2024/1/16: 🎉 Our research paper has been accepted by ICLR 2024!!!🎉
- 2023/12/19: Released our 13B base models developed from LLaMA-1-13B.
- 2023/12/8: Released our chat models developed from LLaMA-2-Chat-7B.
- 2023/9/18: Released our paper, code, data, and base models developed from LLaMA-1-7B.
Domain-Specific LLMs
LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available on Hugging Face: Biomedicine-LLM, Finance-LLM, and Law-LLM. The performance of our AdaptLLM models compared with other domain-specific LLMs is reported in the paper.
LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see whether our method is similarly effective at larger scales, and the results are consistently positive: Biomedicine-LLM-13B, Finance-LLM-13B, and Law-LLM-13B.
Domain-Specific LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a specific data format, and our reading comprehension texts fit this format naturally once transformed into multi-turn conversations. We have also open-sourced chat models in different domains: Biomedicine-Chat, Finance-Chat, and Law-Chat.
Domain-Specific Tasks
Pre-templatized/Formatted Testing Splits
To easily reproduce our prompting results, we have uploaded the filled-in zero/few-shot input instructions and output completions for the test split of each domain-specific task: biomedicine-tasks, finance-tasks, and law-tasks.
Note: these filled-in instructions are specifically tailored for models before alignment and do NOT fit the data format required by chat models.
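The snippet below is a minimal, hedged sketch of how one of these pre-templatized prompts could be fed to a base (non-chat) model with the `datasets` and `transformers` libraries. The repo ids `AdaptLLM/law-tasks` and `AdaptLLM/law-LLM`, the config name `CaseHOLD`, and the field name `input` are assumptions made for illustration; please check the dataset and model pages for the exact names.

```python
# Hedged sketch: run one filled-in zero-shot prompt through a base model.
# Repo ids, the config name, and the "input" field name are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pre-templatized law test split (config name is hypothetical).
prompts = load_dataset("AdaptLLM/law-tasks", "CaseHOLD", split="test")

tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/law-LLM")
model = AutoModelForCausalLM.from_pretrained("AdaptLLM/law-LLM")

# Feed the already filled-in instruction directly; no chat template is applied,
# since these prompts target models before alignment (see the note above).
inputs = tokenizer(prompts[0]["input"], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```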
Raw Datasets
We have also uploaded the raw training and testing splits to facilitate fine-tuning and other uses: ChemProt, RCT, ConvFinQA, FiQA_SA, Headline, NER, and FPB.
The other datasets used in our paper are already available on Hugging Face.
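As a rough sketch, the raw splits can be pulled with the `datasets` library like any other Hub dataset; the repo id `AdaptLLM/FPB` below is a placeholder and should be replaced with the actual path of the dataset you need.

```python
# Hedged sketch: load one raw dataset for fine-tuning (repo id is a placeholder).
from datasets import load_dataset

raw = load_dataset("AdaptLLM/FPB")  # hypothetical repo id; pick any dataset listed above
train_split, test_split = raw["train"], raw["test"]
print(len(train_split), len(test_split))
print(train_split[0])  # inspect one raw training example
```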
Domain Knowledge Probing
Our pre-processed knowledge probing datasets are available at med_knowledge_prob and law_knowledge_prob.
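For this repo, a minimal loading sketch with the `datasets` library is shown below; the repo id `AdaptLLM/law_knowledge_prob` is an assumption based on the dataset name above, while the config name `law_knowledge_prob` and the `test` split come from the YAML header of this card.

```python
# Hedged sketch: load the law knowledge probing test split declared in the YAML header.
from datasets import load_dataset

# Repo id is an assumption; the "law_knowledge_prob" config maps the "test" split to test.jsonl.
probe = load_dataset("AdaptLLM/law_knowledge_prob", "law_knowledge_prob", split="test")
print(probe)      # features and number of rows
print(probe[0])   # inspect one probing example
```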
Citation
If you find our work helpful, please cite us:
@inproceedings{
cheng2024adapting,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=y886UXPEZ0}
}
and the original dataset:
@inproceedings{LEDGAR,
author = {Don Tuggener and
Pius von D{\"{a}}niken and
Thomas Peetz and
Mark Cieliebak},
title = {{LEDGAR:} {A} Large-Scale Multi-label Corpus for Text Classification
of Legal Provisions in Contracts},
booktitle = {{LREC}},
pages = {1235--1241},
publisher = {European Language Resources Association},
year = {2020}
}