dwb2023 committed
Commit 5544db7 · verified · 1 Parent(s): a9cc751

Update README.md

Files changed (1)
  1. README.md +156 -3
README.md CHANGED
(The previous revision contained only the three-line license front matter; the full updated file follows.)
---
license: cc-by-4.0
---
# Dataset Card for dwb2023/gdelt-mentions-2025

This dataset contains mention records from the GDELT (Global Database of Events, Language, and Tone) Project, tracking how global events are mentioned across media sources over time.

## Dataset Details

### Dataset Description

The GDELT Mentions table is a component of the GDELT Event Database that tracks each mention of an event across all monitored news sources. Unlike the Event table, which records unique events, the Mentions table records every time an event is referenced in media, allowing researchers to track the network trajectory and media lifecycle of stories as they flow through the global information ecosystem.

- **Curated by:** The GDELT Project
- **Funded by:** Google Ideas, supported by Google Cloud Platform
- **Language(s) (NLP):** Multi-language source data, processed into standardized English format
- **License:** All GDELT data is available for free download and use with proper attribution (this card is published under CC-BY-4.0)
- **Updates:** Every 15 minutes, 24/7 (see the file-naming sketch below)
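
Because a fresh mentions file is published for every 15-minute interval, a timestamp maps directly to one downloadable file. The sketch below (Python) assumes the usual GDELT 2.0 naming convention of `http://data.gdeltproject.org/gdeltv2/<YYYYMMDDHHMMSS>.mentions.CSV.zip`; check the documentation linked under Dataset Sources before relying on it.

```python
from datetime import datetime

# Build the URL of the 15-minute GDELT v2 mentions file covering a timestamp.
# The gdeltv2 path and ".mentions.CSV.zip" suffix are assumed from the usual
# GDELT 2.0 naming convention; verify against the official documentation.
BASE = "http://data.gdeltproject.org/gdeltv2"

def mentions_url(ts: datetime) -> str:
    """Return the download URL for the 15-minute interval containing `ts`."""
    # Snap the timestamp down to the previous 15-minute boundary.
    snapped = ts.replace(minute=ts.minute - ts.minute % 15, second=0, microsecond=0)
    return f"{BASE}/{snapped:%Y%m%d%H%M%S}.mentions.CSV.zip"

print(mentions_url(datetime(2025, 1, 15, 12, 7)))
# http://data.gdeltproject.org/gdeltv2/20250115120000.mentions.CSV.zip
```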

### Dataset Sources

- **Repository:** http://gdeltproject.org/
- **Documentation:** http://data.gdeltproject.org/documentation/GDELT-Event_Codebook-V2.0.pdf

## Uses

### Direct Use

- Tracking media coverage patterns for specific events
- Analyzing information diffusion across global media
- Measuring event importance through mention frequency (see the sketch after this list)
- Studying reporting biases across different media sources
- Assessing the confidence of event reporting
- Analyzing narrative framing through tonal differences
- Tracking historical event references and anniversary coverage
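
A minimal sketch of the mention-frequency use case: read two columns (by position) from one extracted mentions file and rank events by how often they are mentioned, with average document tone alongside. The file name `mentions.csv` is a placeholder, and the column positions follow the 16-field layout documented under Dataset Structure.

```python
import pandas as pd

# Rank events by mention frequency as a rough proxy for importance.
# "mentions.csv" is a placeholder for one unzipped 15-minute mentions file;
# column 0 is GlobalEventID and column 13 is MentionDocTone (see Dataset Structure).
mentions = pd.read_csv("mentions.csv", sep="\t", header=None, usecols=[0, 13])
mentions.columns = ["GlobalEventID", "MentionDocTone"]

importance = (
    mentions.groupby("GlobalEventID")
    .agg(mention_count=("GlobalEventID", "size"),
         mean_tone=("MentionDocTone", "mean"))
    .sort_values("mention_count", ascending=False)
)
print(importance.head(10))
```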

### Out-of-Scope Use

- Exact source text extraction (only character offsets are provided)
- Definitive audience reach measurement (mentions don't equate to readership)
- Direct access to all mentioned source documents (URLs are provided but access may be limited)
- Language analysis of original non-English content (translation information is provided, but the original text is not included)

## Dataset Structure

The dataset consists of tab-delimited files with 16 fields per mention record (a loading sketch follows the field list):

1. Event Reference Information
   - GlobalEventID: Links to the event being mentioned
   - EventTimeDate: Timestamp when the event was first recorded (YYYYMMDDHHMMSS)
   - MentionTimeDate: Timestamp of the mention (YYYYMMDDHHMMSS)

2. Source Information
   - MentionType: Numeric identifier for the source collection (1=Web, 2=Citation, etc.)
   - MentionSourceName: Human-friendly identifier (domain name, "BBC Monitoring", etc.)
   - MentionIdentifier: Unique external identifier (URL, DOI, citation)

3. Mention Context Details
   - SentenceID: Sentence number within the article where the event was mentioned
   - Actor1CharOffset: Character position where Actor1 was found in the text
   - Actor2CharOffset: Character position where Actor2 was found in the text
   - ActionCharOffset: Character position where the core Action was found
   - InRawText: Whether the event was found in the original text (1) or required more advanced processing (0)
   - Confidence: Percent confidence in the extraction (10-100%)
   - MentionDocLen: Length of the source document in characters
   - MentionDocTone: Average tone of the document (-100 to +100)
   - MentionDocTranslationInfo: Information about machine translation of the source document (semicolon delimited)
   - Extras: Reserved for future use
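
The sketch below shows one way to load such a file with pandas, using column names that mirror the field list above. The file name is a placeholder, and the file is assumed to have no header row. It also parses the two YYYYMMDDHHMMSS timestamps so the lag between an event's first recording and each later mention can be examined.

```python
import pandas as pd

# Column names mirroring the 16-field layout documented above.
COLUMNS = [
    "GlobalEventID", "EventTimeDate", "MentionTimeDate", "MentionType",
    "MentionSourceName", "MentionIdentifier", "SentenceID", "Actor1CharOffset",
    "Actor2CharOffset", "ActionCharOffset", "InRawText", "Confidence",
    "MentionDocLen", "MentionDocTone", "MentionDocTranslationInfo", "Extras",
]

# Placeholder path to one unzipped 15-minute mentions file (no header row assumed).
mentions = pd.read_csv("20250115120000.mentions.CSV", sep="\t",
                       header=None, names=COLUMNS)

# Both timestamps use YYYYMMDDHHMMSS; their difference shows how long after the
# event was first recorded this particular mention appeared.
for col in ("EventTimeDate", "MentionTimeDate"):
    mentions[col] = pd.to_datetime(mentions[col].astype(str), format="%Y%m%d%H%M%S")
mentions["mention_lag"] = mentions["MentionTimeDate"] - mentions["EventTimeDate"]

print(mentions[["GlobalEventID", "MentionSourceName", "mention_lag"]].head())
```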

## Dataset Creation

### Curation Rationale

The GDELT Mentions table was created to track the lifecycle of news stories and provide a deeper understanding of how events propagate through the global media ecosystem. It enables analysis of the importance of events based on coverage patterns and allows researchers to trace narrative evolution across different sources and time periods.

### Source Data

#### Data Collection and Processing

- Every mention of an event is tracked across all monitored sources
- Each mention is recorded regardless of when the original event occurred
- Translation information is preserved for non-English sources (see the parsing sketch after this list)
- Confidence scores indicate the level of natural language processing required
- Character offsets are provided to locate mentions within articles
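
MentionDocTranslationInfo is a semicolon-delimited field that is populated only for machine-translated documents. The helper below is a sketch of how such values could be split into key/value pairs; the example keys (`srclc` for the source language, `eng` for the translation system) are assumptions and should be verified against the codebook and real rows.

```python
def parse_translation_info(raw) -> dict:
    """Split a semicolon-delimited MentionDocTranslationInfo value into a dict.

    Returns {} for empty/missing values (documents originally in English).
    The key names shown in the example below (srclc, eng) are assumptions;
    check the GDELT codebook and actual records before relying on them.
    """
    if raw is None or raw != raw or raw == "":  # handles None, NaN, and ""
        return {}
    info = {}
    for part in str(raw).split(";"):
        key, _, value = part.strip().partition(":")
        if key:
            info[key.strip()] = value.strip()
    return info

print(parse_translation_info("srclc:fra; eng:Moses 2.1.1 / MosesCore Europarl fr-en"))
# {'srclc': 'fra', 'eng': 'Moses 2.1.1 / MosesCore Europarl fr-en'}
```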

#### Who are the source data producers?

Primary sources include:
- International news media
- Web news
- Broadcast transcripts
- Print media
- Academic repositories (with DOIs)
- Various online platforms

### Personal and Sensitive Information

Similar to the Events table, this dataset focuses on public events and may contain:
- URLs to news articles mentioning public figures and events
- Information about how events were framed by different media outlets
- Translation metadata for non-English sources
- Document tone measurements

## Bias, Risks, and Limitations

1. Media Coverage Biases
   - Over-representation of widely covered events
   - Variance in coverage across different regions and languages
   - Digital divide affecting representation of less-connected regions

2. Technical Limitations
   - Varying confidence levels in event extraction
   - Translation quality differences across languages
   - Character offsets may not perfectly align with rendered web content
   - Not all MentionIdentifiers (URLs) remain accessible over time

3. Coverage Considerations
   - Higher representation of English and major world languages
   - Potential duplication when similar articles appear across multiple outlets
   - Varying confidence scores based on linguistic complexity

### Recommendations

1. Users should:
   - Consider confidence scores when analyzing mentions
   - Account for translation effects when studying non-English sources
   - Use MentionDocLen to distinguish between focused coverage and passing references
   - Recognize that URL accessibility may diminish over time
   - Consider SentenceID to assess the prominence of the event mention within an article

2. Best Practices:
   - Filter by a Confidence level appropriate to research needs (illustrated in the sketch after this list)
   - Use the InRawText field to identify direct versus synthesized mentions
   - Analyze MentionDocTone in context with the overall event
   - Account for temporal patterns in media coverage
   - Cross-reference with the Events table for comprehensive analysis
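
A combined sketch of these practices: read the relevant columns (by position, following the 16-field layout above) from one mentions file, keep only higher-confidence, direct, non-trivial mentions, and summarize tone per event. The path and thresholds are illustrative assumptions, not official guidance.

```python
import pandas as pd

# Placeholder path; columns are selected by position per the 16-field layout:
# 0=GlobalEventID, 6=SentenceID, 10=InRawText, 11=Confidence,
# 12=MentionDocLen, 13=MentionDocTone.
cols = {0: "GlobalEventID", 6: "SentenceID", 10: "InRawText",
        11: "Confidence", 12: "MentionDocLen", 13: "MentionDocTone"}
mentions = pd.read_csv("mentions.csv", sep="\t", header=None, usecols=list(cols))
mentions.columns = [cols[i] for i in sorted(cols)]

filtered = mentions[
    (mentions["Confidence"] >= 50)          # illustrative confidence threshold
    & (mentions["InRawText"] == 1)          # direct mentions only, not synthesized
    & (mentions["MentionDocLen"] >= 1000)   # skip very short, passing references
]

# Average document tone per event, restricted to the filtered mentions.
tone_by_event = filtered.groupby("GlobalEventID")["MentionDocTone"].mean()
print(tone_by_event.describe())
```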

## Citation

**BibTeX:**
```bibtex
@inproceedings{leetaru2013gdelt,
  title={GDELT: Global Data on Events, Language, and Tone, 1979-2012},
  author={Leetaru, Kalev and Schrodt, Philip},
  booktitle={International Studies Association Annual Conference},
  year={2013},
  address={San Francisco, CA}
}
```

**APA:**
Leetaru, K., & Schrodt, P. (2013). GDELT: Global Data on Events, Language, and Tone, 1979-2012. Paper presented at the International Studies Association Annual Conference, San Francisco, CA.

## Dataset Card Contact

dwb2023