1.
PLoS One ; 19(4): e0302197, 2024.
Article in English | MEDLINE | ID: mdl-38662755

ABSTRACT

Our study aims to investigate the interdependence between international stock markets and sentiments from financial news in stock forecasting. We adopt the Temporal Fusion Transformer (TFT) to incorporate intra- and inter-market correlations and the information flow, i.e. causality, between financial news sentiment and stock market dynamics. The current study distinguishes itself from existing research by adopting Dynamic Transfer Entropy (DTE) to establish an accurate information flow propagation between stocks and sentiments. DTE has the advantage of yielding time series that trace information-flow propagation paths between specific segments of the series, highlighting rare events such as spikes or sudden jumps, which are crucial in financial time series. The proposed methodological approach involves the following elements: a FinBERT-based textual analysis of financial news articles to extract sentiment time series, the use of Transfer Entropy and corresponding heat maps to analyze the net information flows, the calculation of the DTE time series, which are treated as co-occurring covariates of stock price, and TFT-based stock forecasting. The Dow Jones Industrial Average index of 13 countries, along with daily financial news data obtained through the New York Times API, is used to demonstrate the validity and superiority of the proposed DTE-based causality method, combined with TFT, for accurate stock price and return forecasting compared to state-of-the-art time series forecasting methods.
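The abstract describes transfer entropy as the core causality measure: the extra predictability of the target's next value gained from the source's history beyond the target's own history. A minimal plug-in estimator for binary series with history length 1 might look like the sketch below; the function name, the discretization to binary symbols, and history length 1 are illustrative assumptions, not the authors' DTE implementation.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y) with history length 1:
    TE = sum over (y_next, y, x) of
         p(y_next, y, x) * log2[ p(y_next | y, x) / p(y_next | y) ].
    Positive values indicate that x's past helps predict y's next value
    beyond what y's own past provides.
    """
    triples = list(zip(y[1:], y[:-1], x[:-1]))            # (y_next, y_t, x_t)
    n = len(triples)
    p_xyz = Counter(triples)                              # joint counts
    p_yz = Counter((yn, yp) for yn, yp, _ in triples)     # (y_next, y_t)
    p_z = Counter((yp, xp) for _, yp, xp in triples)      # (y_t, x_t)
    p_y = Counter(yp for _, yp, _ in triples)             # (y_t,)
    te = 0.0
    for (yn, yp, xp), c in p_xyz.items():
        p_joint = c / n
        p_cond_full = c / p_z[(yp, xp)]                   # p(y_next | y_t, x_t)
        p_cond_hist = p_yz[(yn, yp)] / p_y[yp]            # p(y_next | y_t)
        te += p_joint * log2(p_cond_full / p_cond_hist)
    return te
```

For example, if y simply copies x with a one-step lag, TE(X -> Y) approaches 1 bit while TE(Y -> X) stays near 0, giving a clear net information flow in one direction; the paper's DTE extends this idea by computing the measure over sliding windows so that spikes in information flow become visible over time.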


Subject(s)
Forecasting , Investments , Investments/economics , Forecasting/methods , Humans , Entropy , Models, Economic , Commerce/trends
2.
IEEE Trans Neural Netw Learn Syst ; 31(5): 1710-1723, 2020 May.
Article in English | MEDLINE | ID: mdl-31283489

ABSTRACT

In this paper, we present a novel strategy to combine a set of compact descriptors to support an associated recognition task. We formulate the problem from a multiple kernel learning (MKL) perspective and solve it following a stochastic variance reduced gradient (SVRG) approach to address its scalability, currently an open issue. MKL models are ideal candidates to jointly learn the optimal combination of features along with the associated predictor. However, they are unable to scale beyond roughly ten thousand samples due to high computational and memory requirements, which severely limits their applicability. We propose SVRG-MKL, an MKL solution with inherent scalability properties that can optimally combine multiple descriptors involving millions of samples. Our solution operates directly in the primal, avoiding the computation and storage of Gram matrices, while the optimization is performed with a proposed algorithm of linear complexity and is hence computationally efficient. Our approach builds upon recent progress in SVRG, with the distinction that each kernel is treated differently during optimization, which results in faster convergence than applying off-the-shelf SVRG to MKL. Extensive experimental validation conducted on several benchmark data sets confirms the higher accuracy and significant speedup of our solution. Our technique can be extended to other MKL problems, including visual search and transfer learning, as well as other formulations, such as group-sensitive (GMKL) and localized MKL (LMKL) in convex settings.
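The variance-reduced update at the heart of SVRG can be sketched on a toy problem: each inner step uses the stochastic gradient at the current iterate, corrected by the same sample's gradient at a periodic snapshot plus the full gradient at that snapshot, so the estimate stays unbiased while its variance vanishes near the optimum. The example below applies this to 1-D least squares; the function, data, and step size are illustrative assumptions, not the paper's primal MKL formulation.

```python
import random

def svrg_least_squares(xs, ys, lr=0.01, epochs=30, seed=0):
    """SVRG on f(w) = (1/n) * sum_i (w*x_i - y_i)^2 for scalar w."""
    rng = random.Random(seed)
    n = len(xs)
    grad = lambda w, x, y: 2.0 * x * (w * x - y)  # per-sample gradient
    w = 0.0
    for _ in range(epochs):
        w_snap = w
        # Full gradient at the snapshot, computed once per epoch.
        mu = sum(grad(w_snap, x, y) for x, y in zip(xs, ys)) / n
        for _ in range(2 * n):  # inner loop of variance-reduced steps
            i = rng.randrange(n)
            # Unbiased, variance-reduced stochastic gradient estimate:
            # equals the plain stochastic gradient on average, but its
            # variance shrinks as w and w_snap approach the optimum.
            g = grad(w, xs[i], ys[i]) - grad(w_snap, xs[i], ys[i]) + mu
            w -= lr * g
    return w
```

Unlike plain SGD, no decaying step size is needed: at the optimum the correction terms cancel exactly, so the iterates converge to the exact least-squares solution. The paper's contribution is to run this scheme in the primal with per-kernel treatment of the descriptor weights, rather than on a single scalar as here.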
