Dataset Card for dwb2023/gdelt-mentions-2025-v3

This dataset contains mention records from the GDELT (Global Database of Events, Language, and Tone) Project, tracking how global events are mentioned across media sources over time.

Dataset Details

Dataset Description

The GDELT Mentions table is a component of the GDELT Event Database that tracks each mention of an event across all monitored news sources. Unlike the Event table which records unique events, the Mentions table records every time an event is referenced in media, allowing researchers to track the network trajectory and media lifecycle of stories as they flow through the global information ecosystem.

  • Curated by: The GDELT Project
  • Funded by: Google Ideas, supported by Google Cloud Platform
  • Language(s) (NLP): Multi-language source data, processed into standardized English format
  • License: All GDELT data is available for free download and use with proper attribution
  • Updates: Every 15 minutes, 24/7

Dataset Sources

  • Homepage: https://www.gdeltproject.org/

Uses

Direct Use

  • Tracking media coverage patterns for specific events
  • Analyzing information diffusion across global media
  • Measuring event importance through mention frequency
  • Studying reporting biases across different media sources
  • Assessing the confidence of event reporting
  • Analyzing narrative framing through tonal differences
  • Tracking historical event references and anniversary coverage
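
For the direct uses above, a typical starting point is streaming records from the Hub. The following is a minimal sketch, not a confirmed recipe: the train split name and the availability of the auto-converted Parquet files are assumptions.

from datasets import load_dataset

# Stream to avoid downloading everything up front. The "train" split
# name is an assumption about the auto-converted Parquet layout.
ds = load_dataset("dwb2023/gdelt-mentions-2025-v3", split="train", streaming=True)

# Peek at a few mention records; field names are assumed to match
# the 16-field layout described under "Dataset Structure" below.
for record in ds.take(3):
    print(record["GlobalEventID"], record["MentionSourceName"], record["Confidence"])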

Out-of-Scope Use

  • Exact source text extraction (only character offsets are provided)
  • Definitive audience reach measurement (mentions don't equate to readership)
  • Direct access to all mentioned source documents (URLs are provided but access may be limited)
  • Language analysis of original non-English content (translation information is provided but original text is not included)

Dataset Structure

The dataset consists of tab-delimited files with 16 fields per mention record (a Python loading sketch follows the field list):

  1. Event Reference Information

    • GlobalEventID: Links to the event being mentioned
    • EventTimeDate: Timestamp when the event was first recorded (YYYYMMDDHHMMSS)
    • MentionTimeDate: Timestamp of the mention (YYYYMMDDHHMMSS)
  2. Source Information

    • MentionType: Numeric identifier for source collection (1=Web, 2=Citation, etc.)
    • MentionSourceName: Human-friendly identifier (domain name, "BBC Monitoring", etc.)
    • MentionIdentifier: Unique external identifier (URL, DOI, citation)
  3. Mention Context Details

    • SentenceID: Sentence number within the article where the event was mentioned
    • Actor1CharOffset: Character position where Actor1 was found in the text
    • Actor2CharOffset: Character position where Actor2 was found in the text
    • ActionCharOffset: Character position where the core Action was found
    • InRawText: Whether event was found in original text (1) or required processing (0)
    • Confidence: Percent confidence in the extraction (10-100%)
    • MentionDocLen: Length of source document in characters
    • MentionDocTone: Average tone of the document (-100 to +100)
    • MentionDocTranslationInfo: Info about translation (semicolon delimited)
    • Extras: Reserved for future use
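
As referenced above, here is a hedged loading sketch for a raw GDELT mentions file using pandas. The column order follows the 16-field layout just described; the file name is hypothetical, and raw GDELT 15-minute exports are tab-delimited with no header row.

import pandas as pd

# The 16 fields, in the order described above.
MENTIONS_COLUMNS = [
    "GlobalEventID", "EventTimeDate", "MentionTimeDate",
    "MentionType", "MentionSourceName", "MentionIdentifier",
    "SentenceID", "Actor1CharOffset", "Actor2CharOffset",
    "ActionCharOffset", "InRawText", "Confidence",
    "MentionDocLen", "MentionDocTone", "MentionDocTranslationInfo",
    "Extras",
]

# Hypothetical file name for one 15-minute export; raw mentions files
# ship without a header row.
df = pd.read_csv(
    "20250101000000.mentions.CSV",
    sep="\t",
    names=MENTIONS_COLUMNS,
    header=None,
)

# Both timestamps use the YYYYMMDDHHMMSS convention noted above.
for col in ("EventTimeDate", "MentionTimeDate"):
    df[col] = pd.to_datetime(df[col], format="%Y%m%d%H%M%S")

print(df[["GlobalEventID", "MentionSourceName", "Confidence"]].head())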

Dataset Creation

Curation Rationale

The GDELT Mentions table was created to track the lifecycle of news stories and provide a deeper understanding of how events propagate through the global media ecosystem. It enables analysis of the importance of events based on coverage patterns and allows researchers to trace narrative evolution across different sources and time periods.

Source Data

Data Collection and Processing

  • Every mention of an event is tracked across all monitored sources
  • Each mention is recorded regardless of when the original event occurred
  • Translation information is preserved for non-English sources
  • Confidence scores indicate the level of natural language processing required
  • Character offsets are provided to locate mentions within articles
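
Because translation information arrives as a semicolon-delimited string in MentionDocTranslationInfo, a small parser helps when studying non-English sources. A sketch; the srclc (source language code) and eng (translation engine) key names follow the GDELT 2.0 codebook and should be verified against the files at hand.

def parse_translation_info(raw):
    """Parse a MentionDocTranslationInfo value such as
    'srclc:fra; eng:Moses 2.1.1'. Native-English records leave the
    field blank, so an empty dict means no translation was applied."""
    if not isinstance(raw, str) or not raw.strip():
        return {}
    info = {}
    for part in raw.split(";"):
        key, _, value = part.strip().partition(":")
        if key:
            info[key] = value.strip()
    return info

print(parse_translation_info("srclc:fra; eng:Moses 2.1.1"))
# -> {'srclc': 'fra', 'eng': 'Moses 2.1.1'}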

Who are the source data producers?

Primary sources include:

  • International news media
  • Web news
  • Broadcast transcripts
  • Print media
  • Academic repositories (with DOIs)
  • Various online platforms

Personal and Sensitive Information

Similar to the Events table, this dataset focuses on public events and may contain:

  • URLs to news articles mentioning public figures and events
  • Information about how events were framed by different media outlets
  • Translation metadata for non-English sources
  • Document tone measurements

Bias, Risks, and Limitations

  1. Media Coverage Biases

    • Over-representation of widely covered events
    • Variance in coverage across different regions and languages
    • Digital divide affecting representation of less-connected regions
  2. Technical Limitations

    • Varying confidence levels in event extraction
    • Translation quality differences across languages
    • Character offsets may not perfectly align with rendered web content
    • Not all MentionIdentifiers (URLs) remain accessible over time
  3. Coverage Considerations

    • Higher representation of English and major world languages
    • Potential duplication when similar articles appear across multiple outlets
    • Varying confidence scores based on linguistic complexity
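
One hedged mitigation for the duplication noted above is to collapse repeated URLs per event, keeping the earliest mention. This assumes the df DataFrame from the loading sketch under Dataset Structure:

# Keep one row per (event, URL) pair, preferring the earliest mention.
deduped = (
    df.sort_values("MentionTimeDate")
      .drop_duplicates(subset=["GlobalEventID", "MentionIdentifier"], keep="first")
)
print(len(df) - len(deduped), "duplicate mentions removed")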

Recommendations

  1. Users should:

    • Consider confidence scores when analyzing mentions
    • Account for translation effects when studying non-English sources
    • Use MentionDocLen to distinguish between focused coverage and passing references
    • Recognize that URL accessibility may diminish over time
    • Consider SentenceID to assess prominence of event mention within articles
  2. Best Practices:

    • Filter by Confidence level appropriate to research needs
    • Use InRawText field to identify direct versus synthesized mentions
    • Analyze MentionDocTone in context with the overall event
    • Account for temporal patterns in media coverage
    • Cross-reference with Events table for comprehensive analysis
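
A minimal sketch applying several of these recommendations at once, again assuming the df DataFrame from the earlier loading example; all thresholds are illustrative, not prescribed by GDELT:

# Filter to well-supported, direct mentions.
analysis = df[(df["Confidence"] >= 50) & (df["InRawText"] == 1)].copy()

# Earlier sentences suggest a more prominent mention within the article.
analysis["prominent"] = analysis["SentenceID"] <= 3

# Short documents are more likely focused coverage than passing references.
analysis["focused"] = analysis["MentionDocLen"] < 4000

# Put MentionDocTone in context by aggregating per event before comparing.
tone_by_event = analysis.groupby("GlobalEventID")["MentionDocTone"].mean()
print(tone_by_event.describe())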

Citation

BibTeX:

@inproceedings{leetaru2013gdelt,
  title={GDELT: Global Data on Events, Language, and Tone, 1979-2012},
  author={Leetaru, Kalev and Schrodt, Philip},
  booktitle={International Studies Association Annual Conference},
  year={2013},
  address={San Francisco, CA}
}

APA: Leetaru, K., & Schrodt, P. (2013). GDELT: Global Data on Events, Language, and Tone, 1979-2012. Paper presented at the International Studies Association Annual Conference, San Francisco, CA.

Dataset Card Contact

dwb2023
