Conference paper

Implicitly abusive language – What does it actually look like and why are we not getting there?

Abusive language detection is an emerging field in natural language processing that has recently received a great deal of attention. Still, the success of automatic detection is limited. In particular, the detection of implicitly abusive language, i.e. abusive language that is not conveyed by abusive words (e.g. dumbass or scum), does not work well. In this position paper, we explain why existing datasets make learning implicit abuse difficult and what needs to be changed in the design of such datasets. Arguing for a divide-and-conquer strategy, we present a list of subtypes of implicitly abusive language and formulate research tasks and questions for future research.
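The distinction the abstract draws can be made concrete with a minimal sketch of a lexicon-based detector. The word list and example sentences below are illustrative assumptions, not data or code from the paper:

    # Minimal lexicon-based abusive language detector (illustrative sketch).
    # Word list and example sentences are assumptions, not taken from the paper.
    ABUSIVE_WORDS = {"dumbass", "scum"}  # explicit cues cited in the abstract

    def is_explicitly_abusive(text: str) -> bool:
        """Flag text only if it contains a known abusive word."""
        return any(tok.strip(".,!?") in ABUSIVE_WORDS
                   for tok in text.lower().split())

    print(is_explicitly_abusive("You are such a dumbass!"))  # True
    # Implicitly abusive: hostile intent, but no abusive word, so it is missed.
    print(is_explicitly_abusive("People like you should not be allowed to vote."))  # False

Any classifier whose signal rests chiefly on such surface cues inherits the same blind spot, which is the motivation the abstract gives for rethinking dataset design.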

Creator(s): Wiegand, Michael; Ruppenhofer, Josef; Eder, Elisabeth

License
Attribution 4.0 International (CC BY 4.0)

Language
English

Subject
Automatic language analysis
Research data
Dataset
Insult
Verbal abuse
Language

Event
Intellectual creation
(who)
Wiegand, Michael
Ruppenhofer, Josef
Eder, Elisabeth
Event
Publication
(who)
Stroudsburg, Pennsylvania : Association for Computational Linguistics
(when)
2021-06-04

URN
urn:nbn:de:bsz:mh39-104498
Last update
March 6, 2025, 9:00 AM CET

Data provider

This object is provided by:
Leibniz-Institut für Deutsche Sprache - Bibliothek. If you have any questions about the object, please contact the data provider.

Object type

  • Conference paper

Associated

  • Wiegand, Michael
  • Ruppenhofer, Josef
  • Eder, Elisabeth
  • Stroudsburg, Pennsylvania : Association for Computational Linguistics

Time of origin

  • 2021-06-04
