license: apache-2.0
task_categories:
  - text-classification
  - question-answering
language:
  - it
tags:
  - italian
  - embeddings
  - bert
  - fine-tuning
  - information-retrieval
  - semantic-search
  - natural-language-processing
  - dense-retrieval
  - c4-dataset
pretty_name: Fine-Tuned BERT for Italian Embeddings
size_categories:
  - 10M<n<100M

Italian-BERT-FineTuning-Embeddings

This repository contains a dataset designed for fine-tuning BERT-based Italian embedding models, with the goal of improving performance on tasks such as information retrieval, semantic search, and embedding generation.


Dataset Overview

This dataset is built from the Italian subset of the C4 dataset and uses sliding window segmentation and in-document sampling to create high-quality, diverse examples from large Italian documents.

Data Format

The dataset is stored in .jsonl format with the following fields:

  • query: A query or sentence fragment.
  • positive: A relevant text segment closely associated with the query.
  • hard_negative: A challenging negative example: a text segment drawn from a similar context that is nevertheless not relevant to the query.

Example:

{
  "query": "Stanchi di non riuscire a trovare il partner perfetto?.",
  "positive": "La cosa principale da fare è pubblicare il proprio annuncio e aspettare una risposta.",
  "hard_negative": "Quale rapporto tra investimenti IT e sicurezza?"
}
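
To illustrate how these triplets might be consumed, here is a minimal Python sketch that reads (query, positive, hard_negative) triplets from a JSON Lines file; the file name train.jsonl is an assumption and may not match the actual file names in this repository.

import json

# Minimal sketch: read (query, positive, hard_negative) triplets from a
# JSON Lines file. The file name "train.jsonl" is an assumption.
def load_triplets(path="train.jsonl"):
    triplets = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            triplets.append((record["query"], record["positive"], record["hard_negative"]))
    return triplets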

Dataset Statistics

  • Training Set: 9.09 million rows (~4.5 GB)
  • Test Set: 1.13 million rows (~0.5 GB)

Dataset Construction

This dataset was built using the following methodologies:

  1. Sliding Window Segmentation
    Extracting overlapping text segments to preserve contextual information and maximize coverage of the source material.

  2. In-Document Sampling
    Sampling relevant (positive) and challenging non-relevant (hard_negative) segments from within the same document, so that hard negatives are topically close to the query without being relevant to it; a sketch of both steps follows below.
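
The following is a minimal, illustrative sketch of both steps; whitespace tokenization, the window and stride sizes, and the segment-distance rule for hard negatives are assumptions, not the exact procedure used to build this dataset.

import random

def sliding_windows(text, window=128, stride=64):
    # Overlapping segments of `window` whitespace tokens, advancing by `stride`.
    tokens = text.split()
    return [" ".join(tokens[i:i + window])
            for i in range(0, max(len(tokens) - window + 1, 1), stride)]

def sample_triplet(document, rng=random):
    # In-document sampling: the positive is the segment adjacent to the query
    # segment; the hard negative is a distant segment from the same document,
    # so it shares topic and style but is not relevant to the query.
    segments = sliding_windows(document)
    if len(segments) < 4:
        return None
    i = rng.randrange(len(segments) - 1)
    far = [s for j, s in enumerate(segments) if abs(j - i) > 2]
    if not far:
        return None
    return {
        "query": segments[i],
        "positive": segments[i + 1],
        "hard_negative": rng.choice(far),
    }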

Why C4?
The C4 dataset was selected due to its vast collection of high-quality Italian text, providing a rich source for creating varied training samples.


Fine-Tuned Model

A fine-tuned BERT-based Italian embedding model trained on this dataset is available:
Fine-Tuned Model Repository

Model Base:
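
As a usage illustration only, the sketch below encodes Italian sentences with a BERT-based model via mean pooling; the model identifier is a placeholder rather than the actual repository name, and mean pooling is an assumed strategy.

import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder model id -- substitute the actual fine-tuned model repository.
MODEL_ID = "path/to/fine-tuned-italian-bert"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

def embed(sentences):
    # Mean pooling over token embeddings, masked by attention (assumed strategy).
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Example: cosine similarity between a query and a candidate passage.
vecs = embed(["Qual è la capitale d'Italia?", "Roma è la capitale d'Italia."])
score = torch.nn.functional.cosine_similarity(vecs[0:1], vecs[1:2])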


Licensing and Usage

This dataset is licensed under the Apache 2.0 License. If you use this dataset or the fine-tuned model in your research or applications, please provide appropriate credit:

Archit Rastogi
Email: [email protected]


Contact

For any questions, feedback, or collaboration inquiries, feel free to reach out:

Archit Rastogi
Email: [email protected]


Feel free to suggest improvements. Your feedback is highly appreciated!