Conference paper

Implicitly abusive language – What does it actually look like and why are we not getting there?

Abusive language detection is an emerging field in natural language processing that has recently received a great deal of attention. Still, the success of automatic detection is limited. In particular, the detection of implicitly abusive language, i.e. abusive language that is not conveyed by abusive words (e.g. "dumbass" or "scum"), does not work well. In this position paper, we explain why existing datasets make learning implicit abuse difficult and what needs to change in the design of such datasets. Arguing for a divide-and-conquer strategy, we present a list of subtypes of implicitly abusive language and formulate research tasks and questions for future research.
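As a minimal illustration (not taken from the paper), the Python sketch below shows why word-based detection fails on implicit abuse: a lexicon-based classifier flags texts containing abusive words, so explicitly abusive messages are caught, while implicitly abusive ones slip through. The toy lexicon and both example sentences are invented for illustration only.

# Minimal sketch of a lexicon-based detector, the kind of surface-level
# approach that catches explicit abuse but misses implicit abuse.
ABUSIVE_WORDS = {"dumbass", "scum", "idiot"}  # toy lexicon, illustrative only

def is_explicitly_abusive(text: str) -> bool:
    """Flag a text if it contains a word from the abusive-word lexicon."""
    tokens = text.lower().split()
    return any(token.strip(".,!?") in ABUSIVE_WORDS for token in tokens)

# Explicit abuse is caught because an abusive word is present ...
print(is_explicitly_abusive("You are such a dumbass!"))         # True
# ... but implicit abuse contains no abusive word and is missed.
print(is_explicitly_abusive("Go back to where you came from."))  # False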

Creator(s): Wiegand, Michael; Ruppenhofer, Josef; Eder, Elisabeth

Attribution 4.0 International (CC BY 4.0)

Language
English

Subject
Automatic language analysis
Research data
Dataset
Insult
Invective
Language

Event
Intellectual creation
(who)
Wiegand, Michael
Ruppenhofer, Josef
Eder, Elisabeth
Event
Publication
(who)
Stroudsburg, Pennsylvania : Association for Computational Linguistics
(when)
2021-06-04

URN
urn:nbn:de:bsz:mh39-104498
Last updated
06.03.2025, 09:00 CET

Data partner

This object is provided by:
Leibniz-Institut für Deutsche Sprache - Bibliothek. For questions about this object, please contact the data partner.

Object type

  • Conference paper

Contributors

  • Wiegand, Michael
  • Ruppenhofer, Josef
  • Eder, Elisabeth
  • Stroudsburg, Pennsylvania : Association for Computational Linguistics

Created

  • 2021-06-04

Similar objects (12)