PatClass2011 Dataset

CLEFIP-2011

Dataset Summary

The PatClass2011 dataset is a collection of approximately 719,000 patent documents from the CLEF-IP 2011 Test Collection, intended for patent classification tasks. Each entry includes detailed metadata and textual content: titles, abstracts, descriptions, and claims. The dataset is structured to facilitate research in patent classification, information retrieval, and natural language processing.

Languages

The dataset contains English, French and German text.

Domain

Patents (intellectual property).

Dataset Curators

The dataset was created by Eleni Kamateri and Tasos Mylonidis.

Dataset Structure

The dataset consists of 28 folders, one per publication year from 1978 to 2005. Each yearly subdirectory contains a CSV file named clefip2011_en_classification_<year>_validated.csv, which holds all patent documents published in that year. This structure facilitates year-wise analysis, allowing researchers to study trends and patterns in patent classifications over time. Each CSV file contains the 19 data fields listed below.
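
To verify this layout programmatically, one option (a minimal sketch using the huggingface_hub client, an extra dependency not otherwise used in this card) is to enumerate the repository files:

from huggingface_hub import list_repo_files

# List every file in the dataset repository; repo_type="dataset" is required
# because the default repo type is "model".
files = list_repo_files("amylonidis/PatClass2011", repo_type="dataset")

# Keep only the yearly CSVs under data/years/<year>/ and print them in order.
for path in sorted(f for f in files if f.startswith("data/years/") and f.endswith(".csv")):
    print(path)  # e.g. data/years/1985/clefip2011_en_classification_1985_validated.csv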

Data Fields

The dataset is provided in CSV format and includes the following fields:

  • ucid: Unique identifier for the patent document.
  • doc_number: Patent document number.
  • country: Country code of the patent.
  • kind: Kind code indicating the type of patent document.
  • lang: Language of the patent document.
  • date: Publication date of the patent, stored as a YYYYMMDD integer (see the conversion sketch after this list).
  • application_date: Date when the patent application was filed.
  • date_produced: Date when the data was inserted into the dataset.
  • status: Status of the patent document.
  • main_code: Primary classification code assigned to the patent.
  • further_codes: Additional classification codes.
  • ipcr_codes: International Patent Classification codes.
  • ecla_codes: European Classification codes.
  • title: Title of the patent document.
  • abstract: Abstract summarizing the patent.
  • description: Detailed description of the patent.
  • claims: Claims defining the scope of the patent protection.
  • applicants: Entities or individuals who applied for the patent.
  • inventors: Inventors credited in the patent document.
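
The three date fields (date, application_date, date_produced) are stored as YYYYMMDD integers, which is what the loading script below assumes when it compares them against integer bounds. A minimal sketch for converting them to pandas datetimes after loading, assuming a DataFrame df with these columns:

import pandas as pd

# The date columns hold YYYYMMDD integers, e.g. 19850301.
# Going through strings keeps the conversion robust across pandas versions;
# errors="coerce" turns malformed values into NaT instead of raising.
for col in ["date", "application_date", "date_produced"]:
    df[col + "_dt"] = pd.to_datetime(df[col].astype(str), format="%Y%m%d", errors="coerce")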

Usage

Loading the Dataset

Sample (March to April 1985)

The following script loads a sample of the dataset containing all patent documents published from March through April 1985.


from datasets import load_dataset
import pandas as pd
from datetime import datetime
import gc

def load_csvs_from_huggingface(start_date, end_date):
    """
    Load only the necessary CSV files from a Hugging Face dataset repository.

    :param start_date: str, the start date in 'YYYY-MM-DD' format (inclusive)
    :param end_date: str, the end date in 'YYYY-MM-DD' format (inclusive)

    :return: pd.DataFrame, combined data from selected CSVs
    """

    huggingface_dataset_name = "amylonidis/PatClass2011"

    column_types = {
        "ucid": "string",
        "country": "category",
        "doc_number": "int64",
        "kind": "category",
        "lang": "category",
        "date": "int32",
        "application_date": "int32",
        "date_produced": "int32",
        "status": "category",
        "main_code": "string",
        "further_codes": "string",
        "ipcr_codes": "string",
        "ecla_codes": "string",
        "title": "string",
        "abstract": "string",
        "description": "string",
        "claims": "string",
        "applicants": "string",
        "inventors": "string",
    }

    dataset_years = ['1978', '1979', '1980', '1981', '1982', '1983', '1984', '1985', '1986',
                     '1987', '1988', '1989', '1990', '1991', '1992', '1993', '1994', '1995',
                     '1996','1997', '1998', '1999', '2000', '2001', '2002','2003', '2004', '2005']

    start_date_int = int(datetime.strptime(start_date, "%Y-%m-%d").strftime("%Y%m%d"))
    end_date_int = int(datetime.strptime(end_date, "%Y-%m-%d").strftime("%Y%m%d"))

    start_year, end_year = str(start_date_int)[:4], str(end_date_int)[:4]
    given_years = [str(year) for year in range(int(start_year), int(end_year) + 1)]
    matching_years = [year for year in dataset_years if year in given_years]

    if not matching_years:
        raise ValueError(f"No matching CSV files found for {start_date} to {end_date}")

    df_list = []
    for year in matching_years:
        filepath = f"data/years/{year}/clefip2011_en_classification_{year}_validated.csv"

        try:
            dataset = load_dataset(huggingface_dataset_name, data_files=filepath, split="train")
            df = dataset.to_pandas().astype(column_types)
            mask = (df["date"] >= start_date_int) & (df["date"] <= end_date_int)
            df_filtered = df[mask].copy()

            if not df_filtered.empty:
                df_list.append(df_filtered)

            del df, dataset, df_filtered, mask
            gc.collect()

        except Exception as e:
            print(f"Error processing {filepath}: {e}")

    return pd.concat(df_list, ignore_index=True) if df_list else pd.DataFrame()


start_date = "1985-03-01"
end_date = "1985-04-30"

df = load_csvs_from_huggingface(start_date, end_date)
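
A quick sanity check on the returned DataFrame (the exact counts depend on the data):

# Inspect what came back: dimensions, covered date range, and most common classes.
print(df.shape)
print(df["date"].min(), df["date"].max())
print(df["main_code"].value_counts().head())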

Full

To load the complete dataset using the Hugging Face datasets library:

from datasets import load_dataset

dataset = load_dataset("amylonidis/PatClass2011")

This will load the dataset into a DatasetDict object; make sure you have enough disk space.
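
Once loaded, you can inspect the splits and schema, or pull a small slice into pandas. This is a usage sketch assuming the default train split, as in the sample script above:

# Print the available splits and the declared column types.
print(dataset)
print(dataset["train"].features)

# Materialize only the first five rows as a pandas DataFrame.
df_head = dataset["train"].select(range(5)).to_pandas()
print(df_head[["ucid", "date", "main_code", "title"]])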

Google Colab Analytics

You can also use the accompanying Google Colab notebooks to explore the analytics performed on the dataset.

Dataset Creation

Source Data

The PatClass2011 dataset aggregates patent documents from the CLEF-IP 2011 Test Collection using a parsing script. The data includes both metadata and full-text fields, facilitating a wide range of research applications.

Annotations

The dataset does not contain any human-written or computer-generated annotations beyond those already present in the source patent documents.

Licensing Information

This dataset is distributed under the MIT License. Users are free to use, modify, and distribute the dataset, provided that the original authors are credited.

Citation

If you use the PatClass2011 dataset in your research or applications, please cite it appropriately.

