---
annotations_creators:
- Duygu Altinok
language:
- tr
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K
---

## Dataset Summary

Turkish Hate Map (TuHaMa for short) is a large-scale Turkish hate speech dataset covering diverse target groups such as misogyny, political animosity, animal aversion, vegan antipathy, ethnic group hostility, and more. The dataset contains a total of 52K instances across 13 target groups, annotated with 4 labels: **offensive**, **hate**, **neutral** and **civilized**.

Here is the distribution of target groups:

| Target group | Size |
|---|---|
| Animals | 1.2K |
| Cities | 1.2K |
| Ethnic groups | 4.4K |
| LGBT | 1.1K |
| Misogyny | 19.9K |
| Occupations | 0.8K |
| Politics | 12.6K |
| Political orientation | 3.4K |
| Refugees | 2.1K |
| Religion | 2.1K |
| Sects | 1.5K |
| Veganism | 1.3K |
| Total | 52K |

All text was scraped from Eksisozluk.com in a targeted manner and then sampled. The annotations were done by the data company [Co-one](https://www.co-one.co/). For more details please refer to the [research paper]().

## Dataset Instances

An instance looks like:

```
{
  "baslik": "soyleyecek-cok-seyi-oldugu-halde-susan-kadin",
  "text": "her susuşunda anlatmak istediği şeyi içine atan kadındır, zamanla hissettiği her şeyi tüketir. aynı zamanda çok cookdur kendisi.",
  "label": 2
}
```

## Data Split

| Name | Train | Validation | Test |
|---------|----:|---:|---:|
| Turkish Hate Map | 42175 | 5000 | 5000 |

## Benchmarking

This dataset is part of the [SentiTurca](https://huggingface.co/datasets/turkish-nlp-suite/SentiTurca) benchmark, where its subset name is **hate**, named following the GLUE task convention. Model benchmarking information can be found in the SentiTurca HF repo, and benchmarking scripts in the [SentiTurca GitHub repo](https://github.com/turkish-nlp-suite/SentiTurca). For this dataset we benchmarked a transformer-based model, BERTurk, and a handful of LLMs.
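The scores below are reported as accuracy/F1 pairs. As an illustration of how such numbers are computed — and assuming the F1 is macro-averaged over the four classes, which the actual benchmark scripts in the SentiTurca GitHub repo should be consulted to confirm — here is a minimal, dependency-free sketch:

```python
def accuracy(y_true, y_pred):
    """Fraction of exact label matches."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred, labels=(0, 1, 2, 3)):
    """Unweighted mean of per-class F1 scores over the 4 label ids."""
    f1s = []
    for label in labels:
        tp = sum(t == p == label for t, p in zip(y_true, y_pred))
        fp = sum(p == label and t != label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy example with the dataset's 4 label ids:
gold = [0, 1, 2, 3, 2, 2]
pred = [0, 1, 2, 0, 2, 1]
print(f"{accuracy(gold, pred):.2f}/{macro_f1(gold, pred):.2f}")  # prints 0.67/0.53
```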
The success of each model is as follows:

| Model | Acc./F1 |
|---|---|
| Gemini 1.0 Pro | 0.33/0.29 |
| GPT-4 Turbo | 0.38/0.32 |
| Claude 3 Sonnet | 0.16/0.29 |
| Llama 3 70B | 0.55/0.35 |
| Qwen2-72B | 0.70/0.35 |
| BERTurk | 0.61/0.58 |

For a critique of the results, misclassified instances and more, please consult the [research paper]().

## Citation

Coming soon!!