---
license: apache-2.0
task_categories:
  - question-answering
  - multiple-choice
language:
  - en
size_categories:
  - n<1K
configs:
  - config_name: benchmark
    data_files:
      - split: test
        path: dataset.json
paperswithcode_id: mapeval-api
tags:
  - geospatial
---

# MapEval-API

MapEval-API is a multiple-choice question-answering benchmark for evaluating geo-spatial reasoning in foundation models. It was created using MapQaTor.

## Usage

```python
from datasets import load_dataset

# Load the benchmark config (test split)
ds = load_dataset("MapEval/MapEval-API", name="benchmark")

# Build a prompt for each benchmark question
for item in ds["test"]:
    # Start with a clear task description
    prompt = (
        "You are a highly intelligent assistant. "
        "Answer the multiple-choice question by selecting the correct option.\n\n"
        "Question:\n" + item["question"] + "\n\n"
        "Options:\n"
    )

    # List the options as a numbered list
    for i, option in enumerate(item["options"], start=1):
        prompt += f"{i}. {option}\n"

    # Add a concluding instruction to encourage selecting an answer
    prompt += "\nSelect the best option by choosing its number."

    # Use the prompt as needed
    print(prompt)  # Replace with your processing logic
```
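
To turn the prompts into a score, you can compare the model's chosen option against the ground-truth label. The following is a minimal sketch, not part of the official card: `my_model` is a hypothetical stand-in for your own inference call, and the assumption that each record has an `answer` field holding the 1-based index of the correct option should be verified against the dataset's actual schema.

```python
import re
from datasets import load_dataset

ds = load_dataset("MapEval/MapEval-API", name="benchmark")

def my_model(prompt: str) -> str:
    # Hypothetical stand-in for your model; replace with real inference code.
    return "1"

def build_prompt(item) -> str:
    # Same prompt construction as the usage example above.
    prompt = (
        "You are a highly intelligent assistant. "
        "Answer the multiple-choice question by selecting the correct option.\n\n"
        "Question:\n" + item["question"] + "\n\nOptions:\n"
    )
    for i, option in enumerate(item["options"], start=1):
        prompt += f"{i}. {option}\n"
    return prompt + "\nSelect the best option by choosing its number."

correct = 0
for item in ds["test"]:
    reply = my_model(build_prompt(item))
    match = re.search(r"\d+", reply)  # take the first number in the reply
    chosen = int(match.group()) if match else -1
    # Assumption: `answer` is the 1-based index of the correct option;
    # check the dataset's actual label field and format before relying on this.
    if chosen == item["answer"]:
        correct += 1

print(f"Accuracy: {correct / len(ds['test']):.2%}")
```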

## Citation

If you use this dataset, please cite the original paper:

```bibtex
@article{dihan2024mapeval,
  title={MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models},
  author={Dihan, Mahir Labib and Hassan, Md Tanvir and Parvez, Md Tanvir and Hasan, Md Hasebul and Alam, Md Almash and Cheema, Muhammad Aamir and Ali, Mohammed Eunus and Parvez, Md Rizwan},
  journal={arXiv preprint arXiv:2501.00316},
  year={2024}
}
```