---
language: pl
multilinguality: monolingual
size_categories: 100K<n<1M
source_datasets:
  - original
pretty_name: Polish Court Judgments Raw
tags:
  - polish court
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train_*.parquet
dataset_info:
  features:
    - name: source
      dtype: large_string
    - name: judgment_id
      dtype: large_string
    - name: docket_number
      dtype: large_string
    - name: judgment_date
      dtype: timestamp[us]
    - name: publication_date
      dtype: timestamp[us]
    - name: last_update
      dtype: timestamp[us]
    - name: court_id
      dtype: large_string
    - name: department_id
      dtype: large_string
    - name: judgment_type
      dtype: large_string
    - name: excerpt
      dtype: large_string
    - name: xml_content
      dtype: large_string
    - name: presiding_judge
      dtype: large_string
    - name: decision
      dtype: 'null'
    - name: judges
      large_list: large_string
    - name: legal_bases
      large_list: large_string
    - name: publisher
      dtype: large_string
    - name: recorder
      dtype: large_string
    - name: reviser
      dtype: large_string
    - name: keywords
      large_list: large_string
    - name: num_pages
      dtype: int64
    - name: full_text
      dtype: large_string
    - name: volume_number
      dtype: int64
    - name: volume_type
      dtype: large_string
    - name: court_name
      dtype: large_string
    - name: department_name
      dtype: large_string
    - name: extracted_legal_bases
      large_list:
        - name: address
          dtype: large_string
        - name: art
          dtype: large_string
        - name: isap_id
          dtype: large_string
        - name: text
          dtype: large_string
        - name: title
          dtype: large_string
    - name: references
      large_list: large_string
    - name: thesis
      dtype: large_string
    - name: country
      dtype: large_string
    - name: court_type
      dtype: large_string
  config_name: default
  splits:
    - name: train
      num_bytes: 25682589828
      num_examples: 437450
  download_size: 9217840711
  dataset_size: 25682589828
---

Dataset Card for JuDDGES/pl-court-raw

Table of Contents

- Dataset Description
  - Dataset Summary
  - Supported Tasks and Leaderboards
  - Languages
- Dataset Structure
  - Data Fields
  - Data Splits
- Dataset Creation
- Considerations for Using the Data
- Additional Information
- Statistics

Dataset Description

Dataset Summary

The dataset consists of Polish court judgments available at https://orzeczenia.ms.gov.pl/, containing the full content of the judgments along with metadata sourced from the official API and extracted from the judgment content. This dataset contains raw data. For the instruction dataset, see JuDDGES/pl-court-instruct; for the graph dataset, see JuDDGES/pl-court-graph.
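
The raw data can be loaded with the 🤗 Datasets library. A minimal sketch that streams records so the parquet shards (~9 GB download) are not fetched up front; the field names come from the schema below:

```python
from datasets import load_dataset

# Stream the single "train" split of the default config instead of
# downloading all parquet shards at once.
dataset = load_dataset("JuDDGES/pl-court-raw", split="train", streaming=True)

# Peek at one judgment record.
record = next(iter(dataset))
print(record["docket_number"], record["judgment_date"], record["court_name"])
print(record["excerpt"][:200])
```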

Supported Tasks and Leaderboards

The dataset can be used for various tasks. However, since it contains raw data acquired from the official API, we recommend the instruction dataset JuDDGES/pl-court-instruct for more straightforward usage.

Languages

pl-PL Polish

Dataset Structure

Data Fields

| Field | Description | Type |
|---|---|---|
| source | Source of the data, one of: [pl-court, nsa] | String |
| judgment_id | Unique identifier of the judgment | String |
| docket_number | Signature of the judgment (unique within a court) | String |
| judgment_date | Date of the judgment | Datetime(time_unit='us', time_zone=None) |
| publication_date | Date of publication of the judgment | Datetime(time_unit='us', time_zone=None) |
| last_update | Date of the last update of the judgment | Datetime(time_unit='us', time_zone=None) |
| court_id | System-unique identifier of the court | String |
| department_id | System-unique identifier of the court's department | String |
| judgment_type | Type of the judgment | String |
| excerpt | First 500 characters of the judgment | String |
| xml_content | Full content of the judgment in XML format | String |
| presiding_judge | Name of the presiding judge | String |
| decision | Decision (null for all records in this dump) | Null |
| judges | List of judges participating in the judgment | List(String) |
| legal_bases | Legal acts that constitute the bases of the judgment | List(String) |
| publisher | Name of the person publishing the judgment | String |
| recorder | Name of the person recording the judgment | String |
| reviser | Name of the person revising the judgment | String |
| keywords | List of phrases representing the themes/topics of the judgment | List(String) |
| num_pages | Number of pages of the judgment | Int64 |
| full_text | Full text of the judgment | String |
| volume_number | Volume number | Int64 |
| volume_type | Type of the volume | String |
| court_name | Name of the court where the judgment was issued | String |
| department_name | Name of the department within the court where the judgment was issued | String |
| extracted_legal_bases | Textual representation of the legal bases for the judgment (with references to an online repository) | List(Struct({'address': String, 'art': String, 'isap_id': String, 'text': String, 'title': String})) |
| references | Plain-text references to legal acts | List(String) |
| thesis | Thesis of the judgment | String |
| country | Country of origin of the judgment (one of [Poland, England]) | String |
| court_type | Type of the court (one of ['ordinary court', 'administrative court', 'crown court']) | String |
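
The type names above follow Polars conventions, so the parquet shards can also be queried lazily with Polars. A minimal sketch, assuming a recent Polars version with `hf://` path support and the `data/train_*.parquet` layout declared in the metadata:

```python
import polars as pl

# Lazily scan the parquet shards straight from the Hub, avoiding a full
# in-memory load of the ~26 GB dataset.
lf = pl.scan_parquet("hf://datasets/JuDDGES/pl-court-raw/data/train_*.parquet")

# Example query: number of judgments per court, touching only two columns.
per_court = (
    lf.select("court_name", "judgment_id")
    .group_by("court_name")
    .agg(pl.len().alias("n_judgments"))
    .sort("n_judgments", descending=True)
    .collect()
)
print(per_court.head())
```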

Data Splits

The dataset is not split into subsets; it provides only a single train split.
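
Since only a train split ships with the dataset, a held-out portion has to be derived locally if needed. A sketch with 🤗 Datasets; the 90/10 ratio and seed are arbitrary choices for illustration:

```python
from datasets import load_dataset

# Download the train split and carve out a local evaluation portion.
ds = load_dataset("JuDDGES/pl-court-raw", split="train")
splits = ds.train_test_split(test_size=0.1, seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)
```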

Dataset Creation

For details on the dataset creation, see the paper (TBA) and the accompanying code repository.

Curation Rationale

Created to enable cross-jurisdictional legal analytics.

Source Data

Initial Data Collection and Normalization

  1. Download judgment metadata.
  2. Download judgment texts (XML content of the judgments).
  3. Download additional details available for each judgment.
  4. Map court and department IDs to their names.
  5. Extract the raw text from the XML content, along with judgment details not available through the API (see the sketch after this list).
  6. For further processing, prepare a local dataset dump in parquet files, version it with DVC, and push it to remote storage.
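
A minimal sketch of the text-extraction part of step 5, assuming lxml; the actual pipeline may apply different parsing and cleanup rules:

```python
from lxml import etree


def extract_text(xml_content: str) -> str:
    """Strip XML markup from a judgment's `xml_content` field,
    keeping the text nodes in document order."""
    root = etree.fromstring(xml_content.encode("utf-8"))
    return "\n".join(t.strip() for t in root.itertext() if t.strip())
```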

Who are the source language producers?

Produced by human legal professionals (judges and court clerks). Demographics were not analysed. The data were sourced from public court databases.

Annotations

Annotation process

No annotation was performed by us. All features were provided via the API (anonymization and publication of the data were performed by court employees).

Who are the annotators?

As above.

Personal and Sensitive Information

The data are pseudonymized to comply with the GDPR (Art. 4(5) GDPR).

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

We license the actual packaging of these data under the Attribution 4.0 International (CC BY 4.0) license: https://creativecommons.org/licenses/by/4.0/

Citation Information

TBA

Statistics

Dataset size

Dataset size: 437,450 examples

Missing values

| Field name | Null count | Null fraction |
|---|---|---|
| source | 0 | 0 |
| judgment_id | 0 | 0 |
| docket_number | 0 | 0 |
| judgment_date | 0 | 0 |
| publication_date | 0 | 0 |
| last_update | 0 | 0 |
| court_id | 0 | 0 |
| department_id | 0 | 0 |
| judgment_type | 0 | 0 |
| excerpt | 0 | 0 |
| xml_content | 75 | 0 |
| presiding_judge | 51513 | 0.12 |
| decision | 437450 | 1 |
| judges | 43335 | 0.1 |
| legal_bases | 123934 | 0.28 |
| publisher | 629 | 0 |
| recorder | 113551 | 0.26 |
| reviser | 182 | 0 |
| keywords | 127508 | 0.29 |
| num_pages | 0 | 0 |
| full_text | 0 | 0 |
| volume_number | 0 | 0 |
| volume_type | 0 | 0 |
| court_name | 1491 | 0 |
| department_name | 1491 | 0 |
| extracted_legal_bases | 0 | 0 |
| references | 43735 | 0.1 |
| thesis | 395756 | 0.9 |
| country | 0 | 0 |
| court_type | 0 | 0 |
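
A hedged sketch of how these statistics can be recomputed from the parquet shards with Polars (same `hf://` path assumption as in the example above):

```python
import polars as pl

# Count nulls per column and relate them to the total number of rows.
lf = pl.scan_parquet("hf://datasets/JuDDGES/pl-court-raw/data/train_*.parquet")
null_counts = lf.select(pl.all().null_count()).collect()
n_rows = lf.select(pl.len()).collect().item()

for field, nulls in zip(null_counts.columns, null_counts.row(0)):
    print(f"{field}\t{nulls}\t{nulls / n_rows:.2f}")
```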

Analysis of selected fields
