Conference paper
Identifying implicitly abusive remarks about identity groups using a linguistically informed approach
We address the task of distinguishing implicitly abusive sentences about identity groups (“Muslims contaminate our planet”) from other group-related negative polar sentences (“Muslims despise terrorism”). Implicitly abusive language comprises utterances whose abusiveness is not conveyed by abusive words (e.g. “bimbo” or “scum”). So far, the detection of such utterances could not be properly addressed, since existing datasets displaying a high degree of implicit abuse are fairly biased. Following the recently proposed strategy of tackling implicit abuse by separately addressing its different subtypes, we present a new, focused and less biased dataset that consists of the subtype of atomic negative sentences about identity groups. For that task, we model components that each address one facet of such implicit abuse, i.e. depiction as perpetrators, aspectual classification and non-conformist views. The approach generalizes across different identity groups and languages.
- Language
-
English
- Subject
-
Dataset
Insult
Invective
Computational linguistics
Language
- Event
-
Intellectual creation
- (who)
-
Wiegand, Michael
Eder, Elisabeth
Ruppenhofer, Josef
- Event
-
Publication
- (who)
-
Association for Computational Linguistics : Stroudsburg
Mannheim : Leibniz-Institut für Deutsche Sprache (IDS)
- (when)
-
2022-10-07
- URN
-
urn:nbn:de:bsz:mh39-112614
- Last update
-
06.03.2025, 9:00 AM CET
Data provider
Leibniz-Institut für Deutsche Sprache - Bibliothek. If you have any questions about the object, please contact the data provider.