This RoBERTa-based model ("MindMiner") classifies the degree of mind perception in English-language text into two classes:

  • high mind perception 👩
  • low mind perception 🤖

The model was fine-tuned on 997 manually annotated open-ended survey responses. The hold-out accuracy is 75.5% (vs. a balanced 50% random-chance baseline).

Hartmann, J., Bergner, A., & Hildebrand, C. (2023). MindMiner: Uncovering Linguistic Markers of Mind Perception as a New Lens to Understand Consumer-Smart Object Relationships. Journal of Consumer Psychology, Forthcoming.
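The model can be used directly with the Hugging Face `transformers` text-classification pipeline. A minimal sketch follows; the example sentence is illustrative, and the raw label names (e.g. `LABEL_0`/`LABEL_1`) depend on the model's config, so check `classifier.model.config.id2label` for the actual mapping to high vs. low mind perception.

```python
from transformers import pipeline

# Load the binary mind-perception classifier from the Hugging Face Hub.
classifier = pipeline("text-classification", model="j-hartmann/MindMiner-Binary")

def classify_mind_perception(text: str):
    """Return the top (label, score) pair for a single input text."""
    result = classifier(text)[0]  # pipeline returns a list with one dict
    return result["label"], result["score"]

if __name__ == "__main__":
    # Illustrative input, not taken from the paper's survey data.
    label, score = classify_mind_perception(
        "My smart speaker really understands me and knows what I want."
    )
    print(label, round(score, 3))
```

The score is the softmax probability of the predicted class, so it lies between 0 and 1.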
