Working paper

Decoding GPT's hidden "rationality" of cooperation

In current discussions of large language models (LLMs) such as GPT, understanding their ability to emulate facets of human intelligence is central. Using behavioral economic paradigms and structural models, we investigate GPT's cooperativeness in interactions with humans and assess its rational, goal-oriented behavior. We find that GPT cooperates more than humans and holds overly optimistic expectations about human cooperation. Intriguingly, additional analyses reveal that GPT's behavior is not random; it displays a level of goal-oriented rationality that surpasses that of its human counterparts. Our findings suggest that GPT hyper-rationally aims to maximize social welfare, coupled with a drive for self-preservation. Methodologically, our research highlights how structural models, typically employed to decipher human behavior, can illuminate the rationality and goal orientation of LLMs. This opens a compelling path for future research into the intricate rationality of sophisticated yet enigmatic artificial agents.

Language
English

Published in
Series: SAFE Working Paper ; No. 401

Classification
Economics
Subject
large language models
cooperation
goal orientation
economic rationality

Event
Intellectual creation
(who)
Bauer, Kevin
Liebich, Lena
Hinz, Oliver
Kosfeld, Michael
Event
Publication
(who)
Leibniz Institute for Financial Research SAFE
(where)
Frankfurt a. M.
(when)
2023

DOI
doi:10.2139/ssrn.4576036
Handle
Last updated
10 March 2025, 11:46 CET

Data partner

This object is provided by:
ZBW - Deutsche Zentralbibliothek für Wirtschaftswissenschaften - Leibniz-Informationszentrum Wirtschaft. For questions about this object, please contact the data partner.

