MentalManip is a dataset curated for the development and assessment of NLP models aimed at detecting and analyzing mental manipulation in dialogues, which we define as:
Using language to influence, alter, or control an individual's psychological state or perception for the manipulator's benefit.
Mental manipulation, a significant form of abuse in interpersonal conversations, is challenging to identify due to its context-dependent and often subtle nature. Detecting manipulative language is essential for protecting potential victims, yet the field of Natural Language Processing (NLP) currently lacks resources and research on this topic. Our study addresses this gap by introducing a new dataset, named MentalManip, which consists of 4,000 annotated movie dialogues. This dataset enables a comprehensive analysis of mental manipulation, pinpointing both the techniques utilized by manipulators and the vulnerabilities targeted in victims. Our research further explores the effectiveness of leading-edge models in recognizing manipulative dialogue and its components through a series of experiments with various configurations. The results demonstrate that these models inadequately identify and categorize manipulative content, and attempts to improve their performance by fine-tuning on existing mental health and toxicity datasets have not overcome these limitations. We anticipate that MentalManip will stimulate further research, leading to progress in both understanding and mitigating the impact of mental manipulation in conversations.
MentalManip contains 4,000 multi-turn fictional dialogues between two characters, extracted from online movie scripts. To enable fine-grained analysis, our Labeling Taxonomy covers three dimensions:
- Presence of Manipulation: whether the dialogue contains mental manipulation at all.
- Manipulation Technique: which manipulation techniques (e.g., intimidation, accusation, playing the victim role) appear in the dialogue.
- Targeted Vulnerability: which vulnerabilities of the victim (e.g., dependency, naivete, low self-esteem) the manipulator exploits.
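As a rough illustration of how the dialogues and their labels could be loaded for analysis, here is a minimal Python sketch. The file name and column names used below (mentalmanip_con.csv, manipulative, technique) are assumptions for illustration only, not the official schema of the release files.

import pandas as pd

# Assumed file and column names; adjust to the actual release files.
df = pd.read_csv("mentalmanip_con.csv")
print(f"{len(df)} dialogues loaded")

# Distribution of the binary "manipulative or not" label.
print(df["manipulative"].value_counts())

# Technique labels apply only to manipulative dialogues and may list
# several values per dialogue (assumed comma-separated here).
manip = df[df["manipulative"] == 1]
techniques = manip["technique"].dropna().str.split(",").explode().str.strip()
print(techniques.value_counts())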
Statistics
We provide two versions of MentalManip, which differ in the criterion used to derive the gold labels from the individual annotations:
- MentalManip_con: a gold label is kept only when all annotators agree (consensus).
- MentalManip_maj: the gold label is determined by majority vote among the annotators.
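The sketch below illustrates the difference between the two gold-label criteria by aggregating three hypothetical binary annotations for one dialogue; the function names and the three-annotator setup are illustrative assumptions, not part of the dataset tooling.

from collections import Counter

def consensus_label(annotations):
    # Keep the label only if every annotator agrees; otherwise return None
    # (such dialogues would be excluded under a consensus criterion).
    return annotations[0] if len(set(annotations)) == 1 else None

def majority_label(annotations):
    # Keep the label chosen by the largest number of annotators.
    return Counter(annotations).most_common(1)[0][0]

votes = [1, 1, 0]  # two annotators say "manipulative", one says "not"
print(consensus_label(votes))  # None -> dropped under the consensus criterion
print(majority_label(votes))   # 1    -> labeled manipulative under majority vote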
Citation

@article{wang2024mentalmanip,
title={MentalManip: A Dataset For Fine-grained Analysis of Mental Manipulation in Conversations},
author={Wang, Yuxin and Yang, Ivory and Hassanpour, Saeed and Vosoughi, Soroush},
journal={arXiv preprint arXiv:2405.16584},
year={2024}
}