---
language:
- en
- zh
pretty_name: GeoComp
tags:
- GeoLocation
size_categories:
- 10M<n<100M
---
GeoComp
Dataset description
Inspired by geoguessr.com, we developed a free geolocation game platform that tracks participants' competition histories. Unlike most geolocation websites, including Geoguessr, which rely solely on Google Street View imagery, our platform also integrates Baidu Maps and Gaode Maps to fill coverage gaps in regions such as mainland China, ensuring broader global accessibility. The platform offers various engaging competition modes, such as team contests and solo matches. Each competition consists of multiple questions, and each team is assigned a "vitality score". Users mark their predicted location on the map, and scoring is based on the surface distance between the predicted location and the ground truth: larger errors lead to greater deductions from the team's vitality score, and the team with the higher vitality score at the end of the match wins. We also provide diverse game modes, including street views, natural landscapes, and iconic landmarks, and users can choose specific opponents or engage in random matches.

To prevent cheating, external search engines are banned and each round is time-limited. To ensure predictions are human-generated rather than machine-generated, users must register with a phone number, which enables tracking of individual activities. Using this platform, we collected GeoComp, a comprehensive dataset covering 1,000 days of user competition.
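The scoring mechanic above is based on great-circle ("surface") distance between the guess and the ground truth. As an illustrative sketch only (the platform's exact deduction formula is not part of this dataset), the haversine distance between two coordinates can be computed as follows:

```python
import math

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance in kilometers between two (lat, lng) points given in degrees."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Example: a guess in Paris against a ground-truth location in Berlin (roughly 880 km apart)
print(f"{haversine_km(48.8566, 2.3522, 52.5200, 13.4050):.1f} km")
```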
File Introduction
The GeoComp dataset is now primarily provided in Parquet format within the `/data` directory for efficient access and processing. This repository contains the following files:
- `/data/tuxun_combined.parquet`: the main dataset file, containing the combined competition history in Parquet format.
- `tuxun_sample.csv`: an example CSV file for previewing the structure of the data.
- `selected_panoids`: the 500 panoids used in our work; you can add a `.csv` or `.json` suffix to this file (see the loading sketch after this list).
- `download_panoramas.py`: a script that downloads street view images for the provided panoids.
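As a minimal loading sketch (assuming the panoid file is given a `.csv` suffix and contains a header column named `panoId`; verify both against the actual file):

```python
import pandas as pd

# Read the panoid list; adjust the filename and column name to match the actual file.
panoids = pd.read_csv("selected_panoids.csv")
print(len(panoids), "panoids loaded")
print(panoids.head())
```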
Requirement
The GeoComp dataset is provided for research purposes only.
Getting Started
Data format of `tuxun_combined.parquet`
The `tuxun_combined.parquet` file has the same structure as the original `tuxun_combined.csv`.
Example Schema:
| id | data | gmt_create | timestamp |
|---|---|---|---|
| Game | JSON-style metadata | 1734188074762.0 | |
Explanation:
- To protect personal privacy, we anonymize identifying fields: the value of "userId" is replaced with "User", "hostUserId" with "HostUser", "playerIds" with "Players", and "id" with "Game".
- The "data" column holds JSON-style metadata with detailed geolocation information such as "lat", "lng", "nation", and "panoId".
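Before writing any extraction code, it can help to load the Parquet file and inspect one row. A minimal sketch (the printed JSON keys are whatever the file actually contains; the extraction example below relies on "player" and "rounds"):

```python
import json
import pandas as pd

df = pd.read_parquet("data/tuxun_combined.parquet")
print(df.dtypes)  # column names and types: id, data, gmt_create, timestamp

# Peek at the JSON-style payload of the first row.
first = json.loads(df["data"].iloc[0])
print(sorted(first.keys()))
```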
Extracting Specific Fields from the 'data' Column
The 'data' column contains rich game-specific information as a JSON string. To access individual fields such as `guessPlace`, `targetPlace`, `score`, or `panoId`, you need to parse this JSON string.
Here's a Python example using `pandas` and `json` to extract these fields from the `tuxun_combined.parquet` file:
```python
import json

import pandas as pd

# Assuming your Parquet file is at 'data/tuxun_combined.parquet';
# adjust file_path if necessary.
file_path = 'data/tuxun_combined.parquet'

# Read the Parquet file into a DataFrame.
df = pd.read_parquet(file_path)

def extract_game_details(data_json_str):
    """Parse one 'data' JSON string and return (guessPlace, targetPlace, score, panoId)."""
    try:
        # Parse the JSON string into a Python dictionary.
        game_data = json.loads(data_json_str)

        # Initialize to None in case a field is missing.
        guess_place = None
        target_place = None
        score = None
        pano_id = None

        # Extract guessPlace, targetPlace, and score from 'player' -> 'lastRoundResult'.
        if 'player' in game_data and 'lastRoundResult' in game_data['player']:
            last_round_result = game_data['player']['lastRoundResult']
            guess_place = last_round_result.get('guessPlace')
            target_place = last_round_result.get('targetPlace')
            score = last_round_result.get('score')

        # Extract panoId from the first element of the 'rounds' list.
        if 'rounds' in game_data and len(game_data['rounds']) > 0:
            first_round = game_data['rounds'][0]
            pano_id = first_round.get('panoId')

        return guess_place, target_place, score, pano_id
    except json.JSONDecodeError:
        print(f"Error decoding JSON for row: {data_json_str[:100]}...")  # first 100 chars for context
        return None, None, None, None
    except KeyError as e:
        print(f"Missing key {e} in row: {data_json_str[:100]}...")  # first 100 chars for context
        return None, None, None, None

# Apply the function to the 'data' column and create new columns in the DataFrame.
df[['guessPlace', 'targetPlace', 'score', 'panoId']] = df['data'].apply(
    lambda x: pd.Series(extract_game_details(x))
)

# Display the first few rows with the newly extracted columns.
print(df[['id', 'guessPlace', 'targetPlace', 'score', 'panoId']].head())
```
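After extraction, you may want to drop rows where parsing failed and persist the flattened columns. A small follow-up sketch (the output path is just an example):

```python
# Keep only rows where a panoId could be extracted, then save the flat columns.
flat = df[["id", "guessPlace", "targetPlace", "score", "panoId"]].dropna(subset=["panoId"])
flat.to_parquet("data/tuxun_extracted.parquet", index=False)
```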
Additional Information
Citation Information
@misc{song2025geolocationrealhumangameplay,
title={Geolocation with Real Human Gameplay Data: A Large-Scale Dataset and Human-Like Reasoning Framework},
author={Zirui Song and Jingpu Yang and Yuan Huang and Jonathan Tonglet and Zeyu Zhang and Tao Cheng and Meng Fang and Iryna Gurevych and Xiuying Chen},
year={2025},
eprint={2502.13759},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2502.13759},
}