Sentiment-Enhanced Trading Deep Q-Network: Advancing Financial Trading with Deep Reinforcement Learning
Keywords:
Deep Reinforcement Learning, Financial Trading, Sentiment Analysis, Calmar Ratio, Multi-Modal Data
Abstract
This paper presents the Sentiment-Enhanced Trading Deep Q-Network (SETDQN), a novel deep reinforcement learning (DRL) framework for optimizing financial trading strategies. By integrating historical price data, technical indicators, sentiment embeddings from social media platforms, and macroeconomic indicators, the SETDQN maximizes the Calmar ratio, a risk-adjusted performance metric. Trained on S&P 500 ETF (SPY) data from 2010–2020 and tested on 2021–2024, the SETDQN achieves a 17.5% annualized return and a 2.1 Calmar ratio, surpassing traditional strategies such as recurrent reinforcement learning (RRL), technical analysis, and buy-and-hold. The implementation, provided in Python, is reproducible on Kaggle and incorporates realistic market frictions such as transaction costs and bid-ask spreads. This work advances DRL applications in finance, offering a scalable and robust framework for algorithmic trading.
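The Calmar ratio named above, which the SETDQN uses as its optimization target, is the annualized return divided by the maximum drawdown of the equity curve. As a minimal sketch (the function name and interface are illustrative, not from the paper's implementation), it can be computed from a series of per-period returns as follows:

```python
import numpy as np

def calmar_ratio(period_returns, periods_per_year=252):
    """Annualized return divided by maximum drawdown of the equity curve."""
    returns = np.asarray(period_returns, dtype=float)
    # Equity curve from compounding the per-period returns.
    equity = np.cumprod(1.0 + returns)
    # Geometric annualization over the observed number of periods.
    annualized = equity[-1] ** (periods_per_year / len(returns)) - 1.0
    # Maximum drawdown: largest peak-to-trough decline of the equity curve.
    running_peak = np.maximum.accumulate(equity)
    max_drawdown = np.max((running_peak - equity) / running_peak)
    return annualized / max_drawdown
```

For instance, a 17.5% annualized return with a maximum drawdown of about 8.3% yields a Calmar ratio near 2.1, matching the reported result.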
