---
license: mit
language:
  - en
base_model:
  - MoritzLaurer/deberta-v3-large-zeroshot-v2.0
  - mlburnham/deberta-v3-large-polistance-affect-v1.1
pipeline_tag: zero-shot-classification
library_name: transformers
tags:
  - politics
  - text-classification
---

# Model Card for groupappeals_classifier_positive

This model classifies the valence of rhetorical appeals by politicians to groups ("group appeals") in political speech.

## Model Details

### Model Description

This model adapts Mike Burnham's zero-shot model for political stance detection, which is itself an adaptation of Moritz Laurer's zero-shot model for classifying political texts. It is fine-tuned for the more specific task of classifying the valence of rhetorical appeals by politicians to groups ("group appeals") in political speech. The model takes sentences formatted to mention both the sender/speaker and the mentioned group (i.e., the 'dyad'), in the form "Politician from {party} mentioning a group ({group}): '{text}'". It returns the probability that the speaker is making a positive appeal to the group (see the usage sketch under How to Get Started with the Model below).

- Developed by: Christoffer H. Dausgaard & Frederik Hjorth
- Model type: Fine-tuned DeBERTa model
- License: MIT
- Finetuned from model: mlburnham/deberta-v3-large-polistance-affect-v1.1
- Paper [optional]: [More Information Needed]

## Uses

## How to Get Started with the Model
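
Below is a minimal usage sketch with the transformers zero-shot-classification pipeline. The repository id, example sentence, candidate labels, and hypothesis template are illustrative assumptions; the exact entailment hypothesis used during training is not documented in this card.

```python
from transformers import pipeline

# Repo id assumed from the model name; adjust to the actual Hub path if it differs.
classifier = pipeline(
    "zero-shot-classification",
    model="chdausgaard/groupappeals_classifier_positive",
)

# Format the dyad as described above: the speaker's party and the mentioned group.
party = "Labour"
group = "pensioners"
sentence = "We will always stand up for pensioners, who built this country."
premise = f"Politician from {party} mentioning a group ({group}): '{sentence}'"

# Candidate labels and hypothesis template are illustrative; swap in the wording
# used during training if it differs.
result = classifier(
    premise,
    candidate_labels=["a positive appeal", "not a positive appeal"],
    hypothesis_template="The politician is making {} to the group.",
)
print(result["labels"][0], round(result["scores"][0], 3))
```

The score attached to the "a positive appeal" label can be read as the probability that the speaker is making a positive appeal to the mentioned group.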

## Training Details

### Training Data

The model was trained on a subset of the ParlSpeech V2 dataset, which covers the universe of parliamentary speeches in the UK House of Commons from 1988 to 2019. The subset consists of 2,534 sentences manually coded by the authors. Sentences were randomly sampled within party and group strata, with negative sentences oversampled (an illustrative sketch of this sampling design follows below).
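
For illustration only, the sketch below shows one way to implement the sampling design just described. It is not the authors' code, and it assumes a pandas DataFrame of candidate sentences with hypothetical 'party', 'group', and preliminary 'is_negative' columns.

```python
import pandas as pd

# Illustrative sketch of stratified sampling with oversampling of negative
# sentences, under the assumptions stated above.
def sample_within_strata(df: pd.DataFrame, per_stratum: int, negative_boost: float = 2.0) -> pd.DataFrame:
    def draw(stratum: pd.DataFrame) -> pd.DataFrame:
        # Weight negative sentences more heavily so they are oversampled.
        weights = stratum["is_negative"].map({True: negative_boost, False: 1.0})
        n = min(per_stratum, len(stratum))
        return stratum.sample(n=n, weights=weights, random_state=42)

    # One random draw per party-group stratum.
    return (
        df.groupby(["party", "group"], group_keys=False)
        .apply(draw)
        .reset_index(drop=True)
    )
```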

### Training Procedure

#### Preprocessing [optional]

#### Training Hyperparameters

- Training regime: [More Information Needed]

## Evaluation

## Citation [optional]

BibTeX: