Sentiment-Enhanced Trading Deep Q-Network: Advancing Financial Trading with Deep Reinforcement Learning

Authors

  • Dr. G. Siva Nageswara Rao
  • Mekala Bhanu Venkata Yeswanth Reddy

Keywords:

Deep Reinforcement Learning, Financial Trading, Sentiment Analysis, Calmar Ratio, Multi-Modal Data

Abstract

This paper presents the Sentiment-Enhanced Trading Deep Q-Network (SETDQN), a novel deep reinforcement learning (DRL) framework for optimizing financial trading strategies. By integrating historical price data, technical indicators, sentiment embeddings from social media platforms, and macroeconomic indicators, the SETDQN maximizes the Calmar ratio, a risk-adjusted performance metric. Trained on S&P 500 ETF (SPY) data from 2010–2020 and tested on 2021–2024, the SETDQN achieves a 17.5% annualized return and a 2.1 Calmar ratio, surpassing traditional strategies such as recurrent reinforcement learning (RRL), technical analysis, and buy-and-hold. The implementation, provided in Python, is reproducible on Kaggle and incorporates realistic market frictions such as transaction costs and bid-ask spreads. This work advances DRL applications in finance, offering a scalable and robust framework for algorithmic trading.
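The abstract's reward objective, the Calmar ratio, is the annualized return divided by the maximum drawdown of the equity curve. The paper's own implementation is not reproduced on this page; the sketch below is an illustrative Python computation under the assumption of daily equity values (the function names `max_drawdown` and `calmar_ratio` are ours, not the authors').

```python
import numpy as np

def max_drawdown(equity: np.ndarray) -> float:
    """Largest peak-to-trough decline of an equity curve, as a positive fraction."""
    peaks = np.maximum.accumulate(equity)       # running historical high
    return float(np.max((peaks - equity) / peaks))

def calmar_ratio(equity: np.ndarray, periods_per_year: int = 252) -> float:
    """Annualized (geometric) return divided by maximum drawdown."""
    n_periods = len(equity) - 1
    annualized = (equity[-1] / equity[0]) ** (periods_per_year / n_periods) - 1.0
    return annualized / max_drawdown(equity)

# Example on a toy daily equity curve:
curve = np.array([100.0, 120.0, 90.0, 130.0])
print(max_drawdown(curve))   # 0.25 (the 120 -> 90 decline)
print(calmar_ratio(curve))
```

With this definition, the abstract's reported figures (17.5% annualized return, Calmar ratio 2.1) would correspond to a maximum drawdown of roughly 8% over the test period.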

Published

2025-08-30

How to Cite

Dr. G. Siva Nageswara Rao, & Mekala Bhanu Venkata Yeswanth Reddy. (2025). Sentiment-Enhanced Trading Deep Q-Network: Advancing Financial Trading with Deep Reinforcement Learning. Utilitas Mathematica, 122(2), 521–527. Retrieved from https://utilitasmathematica.com/index.php/Index/article/view/2740
