This dataset is derived from the Global News Dataset. Please refer to the original source (also cited below) and ensure that your use complies with its terms and conditions.


Webz.io News Dataset Repository

Introduction

Welcome to the Webz.io News Dataset Repository! This repository was created by Webz.io and is dedicated to providing free datasets of publicly available news articles. We release a new dataset every week, each containing around 1,000 news articles focused on a specific theme, topic, or metadata characteristic, such as sentiment or top IPTC categories like finance, sports, and politics.

For ongoing free access to online news data, you can use Webz.io's free News API Lite. An open-source demo shows what can be done with it, and a minimal request sketch is included below.
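
The following is a minimal sketch of querying the News API Lite over HTTP. The endpoint URL, the token and q parameters, and the response field names are assumptions based on typical REST usage, not confirmed specifics; consult the official News API Lite documentation for the exact interface.

```python
# Minimal sketch of a News API Lite request.
# NOTE: the endpoint URL, parameter names, and response fields are assumptions;
# check the official News API Lite documentation for the exact interface.
import requests

API_TOKEN = "YOUR_API_TOKEN"                  # hypothetical placeholder token
ENDPOINT = "https://api.webz.io/newsApiLite"  # assumed endpoint URL

params = {
    "token": API_TOKEN,
    "q": "finance",  # example keyword query
}

response = requests.get(ENDPOINT, params=params, timeout=30)
response.raise_for_status()
data = response.json()

# The response is assumed to expose matching articles under a "posts" key.
for post in data.get("posts", []):
    print(post.get("title"), "-", post.get("url"))
```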

Dataset Overview

  • Weekly Releases: New dataset available every week.
  • Thematic Focus: Datasets based on specific themes, topics, or metadata.
  • Rich Metadata: Includes sentiment analysis, categories, and publication dates (see the example record after this list).
  • Diverse Sources: Articles from a wide range of news websites.
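
For illustration, a single article record might look roughly like the sketch below. The field names follow the metadata listed above (title, publication date, sentiment, categories); the exact schema can vary between releases, so treat this as an example rather than a specification.

```python
# Illustrative shape of one article record. Field names are examples based on
# the metadata described above; the exact schema may differ between releases.
example_article = {
    "uuid": "0000-0000",                    # hypothetical article identifier
    "url": "https://example.com/article",   # source URL
    "title": "Example headline",
    "text": "Full article body ...",
    "published": "2024-01-01T00:00:00Z",    # publication date
    "language": "english",
    "sentiment": "positive",                # sentiment analysis result
    "categories": ["Finance", "Politics"],  # example IPTC-style categories
}
```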

Usage

The datasets are free for academic, research, and journalistic purposes:

  • Data Analysis: For statistical analyses, trend identification, and pattern recognition.
  • Machine Learning: Suitable for training NLP models, sentiment analysis, and similar tasks (a small analysis sketch follows this list).
  • Journalistic Research: Helps journalists in data-driven storytelling.
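
As a starting point for the analysis and machine-learning uses above, the sketch below loads one downloaded dataset file and tabulates articles by sentiment and category. The file name news_articles.json and the assumption that the file holds a JSON array of records with sentiment and categories fields are hypothetical; adjust the path and field names to the release you download.

```python
# Sketch: basic exploratory analysis of a downloaded dataset file.
# "news_articles.json" and the record layout are assumptions; adapt the path
# and field names to the dataset release you are working with.
import json
from collections import Counter

with open("news_articles.json", encoding="utf-8") as f:
    articles = json.load(f)  # assumed: a JSON array of article records

# Count articles per sentiment label (e.g. "positive", "negative", "neutral").
sentiment_counts = Counter(a.get("sentiment", "unknown") for a in articles)
for label, count in sentiment_counts.most_common():
    print(f"{label}: {count}")

# Count articles per category; a record may carry several categories.
category_counts = Counter(
    category for a in articles for category in a.get("categories", [])
)
print(category_counts.most_common(10))
```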

Accessing the Datasets

  • Browse the repository.
  • Find a dataset that suits your needs.
  • Download the dataset along with its detailed description and metadata file (a download sketch follows this list).
  • We created a simple React application that you can use to preview the data.
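
Because the datasets are hosted on the Hugging Face Hub, one way to fetch a release programmatically is with the huggingface_hub library, as sketched below. The repository ID is a placeholder; substitute the ID of the dataset release you want.

```python
# Sketch: download a dataset snapshot from the Hugging Face Hub.
# The repo_id is a placeholder; replace it with the actual dataset repository ID.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="webz-io/example-news-dataset",  # hypothetical repository ID
    repo_type="dataset",
    local_dir="webz_news_dataset",           # where to place the files
)
print(f"Dataset files downloaded to: {local_path}")
```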

Contribution

Contributions are welcome! If you have suggestions or want to contribute, please open an issue or a pull request.

Support

For questions or support, raise an issue in the repository.

License/Terms of Use

By using this Dataset Repository, you agree to the following Terms of Use (TOU).

