Article
Contracts for difference: A reinforcement learning approach
We present a deep reinforcement learning framework for the automatic trading of contracts for difference (CfD) on indices at high frequency. Our contribution shows that reinforcement learning agents with recurrent long short-term memory (LSTM) networks can learn from recent market history and outperform the market. Such approaches usually depend on low latency; in a real-world example, we show that an increased model size may compensate for higher latency. Because the noisy nature of economic trends complicates predictions, especially for speculative assets, our approach does not predict prices but instead uses a reinforcement learning agent to learn an overall lucrative trading policy. To this end, we simulate a virtual market environment based on historical trading data. Our environment provides a partially observable Markov decision process (POMDP) to reinforcement learners and allows the training of various strategies.
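The abstract describes a recurrent Q-learning setup: an LSTM consumes a window of recent market observations and the agent chooses trading actions from the resulting action values. The sketch below illustrates that idea only in broad strokes; it assumes PyTorch, and the class name, feature count, window length, and the action set {short, hold, long} are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a recurrent Q-network for CfD trading (assumes PyTorch).
# Dimensions and the action set {short, hold, long} are illustrative assumptions.
import torch
import torch.nn as nn


class RecurrentQNetwork(nn.Module):
    """LSTM that maps a window of recent market observations to Q-values."""

    def __init__(self, n_features: int = 4, hidden_size: int = 64, n_actions: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_actions)

    def forward(self, observations: torch.Tensor) -> torch.Tensor:
        # observations: (batch, window_length, n_features), e.g. recent ticks
        _, (h_n, _) = self.lstm(observations)
        return self.head(h_n[-1])  # one Q-value per action: short, hold, long


if __name__ == "__main__":
    q_net = RecurrentQNetwork()
    window = torch.randn(1, 32, 4)          # 32 most recent ticks, 4 features each
    q_values = q_net(window)
    action = int(q_values.argmax(dim=-1))   # greedy action under the current Q-values
    print(q_values, action)
```

In a setup of this kind, the LSTM hidden state is what lets the agent act under partial observability: the Q-values depend on recent market history rather than on a single tick.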
- Language
English
- Bibliographic citation
Journal of Risk and Financial Management, ISSN 1911-8074, MDPI, Basel, Vol. 13 (2020), Iss. 4, pp. 1-12
- Classification
Economics
- Subject
CfD
contract for difference
deep learning
long short-term memory
LSTM
neural networks
Q-learning
reinforcement learning
- Event
Intellectual creation
- (who)
Zengeler, Nico
Handmann, Uwe
- Event
Publication
- (who)
MDPI
- (where)
Basel
- (when)
2020
- DOI
doi:10.3390/jrfm13040078
- Handle
- Last update
10.03.2025, 11:46 AM CET
Data provider
ZBW - Deutsche Zentralbibliothek für Wirtschaftswissenschaften - Leibniz-Informationszentrum Wirtschaft
Object type
- Article
Associated
- Zengeler, Nico
- Handmann, Uwe
- MDPI
Time of origin
- 2020