Article

Contracts for difference: A reinforcement learning approach

We present a deep reinforcement learning framework for the automatic trading of contracts for difference (CfD) on indices at high frequency. Our contribution shows that reinforcement learning agents with recurrent long short-term memory (LSTM) networks can learn from recent market history and outperform the market. Such approaches usually depend on low latency; in a real-world example, we show that an increased model size may compensate for higher latency. Because the noisy nature of economic trends complicates predictions, especially for speculative assets, our approach does not predict prices but instead uses a reinforcement learning agent to learn an overall profitable trading policy. To this end, we simulate a virtual market environment based on historical trading data. Our environment provides a partially observable Markov decision process (POMDP) to reinforcement learners and allows the training of various strategies.
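The abstract and keywords point to Q-learning with an LSTM network trained against a simulated market environment. As a rough illustration only, the sketch below shows what a single temporal-difference update for such a recurrent Q-learning agent could look like in PyTorch; the action set, observation window, network size, and learning rate are assumptions made for this example and are not taken from the paper, and the random tensors merely stand in for transitions sampled from the historical-data environment.

# Hypothetical sketch (not the authors' code): a recurrent deep Q-network
# for a CfD trading POMDP, with synthetic data in place of market transitions.
import torch
import torch.nn as nn

N_ACTIONS = 3   # assumed action set: 0 = hold, 1 = go long, 2 = go short
WINDOW = 64     # assumed length of the recent market history fed to the LSTM

class RecurrentQNet(nn.Module):
    """LSTM over recent market observations, followed by a Q-value head."""
    def __init__(self, n_features=1, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, N_ACTIONS)

    def forward(self, obs_seq):
        # obs_seq: (batch, WINDOW, n_features) partial observation of the market
        out, _ = self.lstm(obs_seq)
        return self.q_head(out[:, -1])   # Q-values from the last hidden state

# One TD-learning step on a synthetic batch, standing in for transitions
# sampled from the simulated environment built on historical trading data.
net = RecurrentQNet()
target_net = RecurrentQNet()
target_net.load_state_dict(net.state_dict())
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
gamma = 0.99

obs      = torch.randn(32, WINDOW, 1)   # placeholder price/return windows
next_obs = torch.randn(32, WINDOW, 1)
actions  = torch.randint(0, N_ACTIONS, (32,))
rewards  = torch.randn(32)              # e.g. mark-to-market profit per step

q_sa = net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
with torch.no_grad():
    td_target = rewards + gamma * target_net(next_obs).max(dim=1).values
loss = nn.functional.smooth_l1_loss(q_sa, td_target)
opt.zero_grad()
loss.backward()
opt.step()

In a setup like this, the LSTM's final hidden state summarizes the recent, partially observed market history, which is the role the abstract assigns to the recurrent network.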

Language
English

Published in
Journal of Risk and Financial Management, ISSN 1911-8074, Vol. 13 (2020), Issue 4, pp. 1-12. Basel: MDPI

Classification
Economics
Subject
CfD
contract for difference
deep learning
long short-term memory
LSTM
neural networks
Q-learning
reinforcement learning

Event
Intellectual creation
(who)
Zengeler, Nico
Handmann, Uwe
Event
Publication
(who)
MDPI
(where)
Basel
(when)
2020

DOI
doi:10.3390/jrfm13040078
Handle
Last updated
10.03.2025, 11:46 CET

Data partner

This object is provided by:
ZBW - Deutsche Zentralbibliothek für Wirtschaftswissenschaften - Leibniz-Informationszentrum Wirtschaft. If you have questions about this object, please contact the data partner.

Object type

  • Article

Contributors

  • Zengeler, Nico
  • Handmann, Uwe
  • MDPI

Created

  • 2020
