Dataset card · Modalities: Text · Formats: parquet · Languages: English · Libraries: Datasets, pandas
mikeg112 committed · Commit 4628cea · 1 parent: 1cd317a

Add non-data

Files changed (3):
  1. .gitignore +3 -0
  2. README.md +8 -0
  3. construct.py +23 -0
.gitignore ADDED
@@ -0,0 +1,3 @@
+ annotated
+ outcomes
+ .env
README.md CHANGED
@@ -1,3 +1,11 @@
  ---
  license: cc-by-sa-4.0
  ---
+ ## Health Insurance Appeal Adjudication Benchmark
+
+ This data repository houses manually labeled and pseudo-labeled background spans corresponding to real external appeal adjudications of coverage denials in U.S. health insurance cases. The data is referenced by, and described in, a more general work documented at [https://github.com/TPAFS/hicric](https://github.com/TPAFS/hicric).
+
+ ## Contact
+ For questions or comments, please reach out to `[email protected]`.
construct.py ADDED
@@ -0,0 +1,23 @@
+ from datasets import DatasetDict, load_dataset
+
+
+ def construct_hf_dataset(repo_name: str):
+     """Construct an HF DatasetDict from the HICRIC outcome data and push it to the Hub."""
+
+     # Load the train and test splits from their JSONL files
+     train_jsonl_path = "outcomes/train_backgrounds_suff.jsonl"
+     test_jsonl_path = "outcomes/test_backgrounds_suff.jsonl"
+     train_dataset = load_dataset("json", data_files=train_jsonl_path, split="train")
+     test_dataset = load_dataset("json", data_files=test_jsonl_path, split="train")
+
+     # Combine both splits into a single DatasetDict
+     dataset = DatasetDict({"train": train_dataset, "test": test_dataset})
+
+     # Push the combined dataset to the Hub as a private repo
+     dataset.push_to_hub(repo_name, private=True)
+
+     return None
+
+
+ if __name__ == "__main__":
+     construct_hf_dataset("mike-persius/imr-appeals")
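The JSONL inputs that `construct.py` loads can be understood without the `datasets` library: `load_dataset("json", ...)` parses one standalone JSON object per line. A minimal sketch of that parsing step, using an in-memory sample (the field names `background` and `sufficient` are hypothetical; the actual schema of `outcomes/*.jsonl` may differ):

```python
import json
from io import StringIO

# Hypothetical two-record sample mimicking the JSONL layout construct.py reads;
# real field names in outcomes/*.jsonl may differ.
sample = StringIO(
    '{"background": "span A", "sufficient": true}\n'
    '{"background": "span B", "sufficient": false}\n'
)

# One JSON object per line, which is what load_dataset("json", ...) expects
records = [json.loads(line) for line in sample]

print(len(records))  # 2
```

Each split file is loaded with `split="train"` because a single JSONL file yields one unnamed split; the script then assigns the real split names ("train"/"test") via the `DatasetDict` keys.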