<![CDATA[MoneyScience: Research]]>
http://beta.moneyscience.com/pg/blog-directory/research?view=rss
http://www.moneyscience.com/pg/blog/arXiv/read/839357/stacking-with-neural-network-for-cryptocurrency-investment-arxiv190207855v1-statml
Thu, 21 Feb 2019 22:02:20 -0600
<![CDATA[Stacking with Neural network for Cryptocurrency investment. (arXiv:1902.07855v1 [stat.ML])]]>Predicting the direction of asset prices has been an active area of study and
a difficult task. Machine learning models have been used to tackle it, with
ensemble methods often showing better results than any single supervised
method. In this paper, we use generative and discriminative classifiers to
create the stack, specifically 3 generative and 9 discriminative classifiers,
optimized over a one-layer neural network, to model the direction of
cryptocurrency prices. The features are technical indicators, including but
not limited to trend, momentum, volume and volatility indicators; sentiment
analysis is also used to gain useful insight in combination with these
features. For cross-validation, purged walk-forward cross-validation is used.
In terms of accuracy, we compare the performance of the ensemble method with
stacking against the ensemble method with blending. We also develop a
methodology for combined feature importance for the stacked model, and
identify the important indicators on that basis.
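The purged walk-forward scheme mentioned above can be sketched in a few lines; the function name, split sizes and embargo width below are illustrative assumptions, not the paper's exact protocol:

```python
def purged_walk_forward_splits(n_samples, n_folds=4, embargo=2):
    """Generate (train_idx, test_idx) pairs for purged walk-forward CV.

    The sample is cut into n_folds+1 contiguous blocks; fold k trains on
    blocks 0..k and tests on block k+1. An 'embargo' of samples between the
    end of training and the start of testing is dropped, so labels computed
    over overlapping horizons cannot leak from test into train.
    """
    fold_size = n_samples // (n_folds + 1)
    for k in range(n_folds):
        train_end = (k + 1) * fold_size
        test_start = train_end + embargo           # purge the embargo gap
        test_end = min(test_start + fold_size, n_samples)
        yield list(range(0, train_end)), list(range(test_start, test_end))
```

Each fold trains only on data strictly preceding its test block, which is the property that makes the scheme suitable for serially correlated financial labels.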
]]>839357
http://www.moneyscience.com/pg/blog/arXiv/read/839356/deep-adaptive-input-normalization-for-price-forecasting-using-limit-order-book-data-arxiv190207892v1-qfincp
Thu, 21 Feb 2019 22:02:10 -0600
<![CDATA[Deep Adaptive Input Normalization for Price Forecasting using Limit Order Book Data. (arXiv:1902.07892v1 [q-fin.CP])]]>Deep Learning (DL) models can be used to tackle time series analysis tasks
with great success. However, the performance of DL models can degenerate
rapidly if the data are not appropriately normalized. This issue is even more
apparent when DL is used for financial time series forecasting tasks, where the
non-stationary and multimodal nature of the data poses significant challenges
and severely affects the performance of DL models. In this work, a simple yet
effective neural layer that is capable of adaptively normalizing the input
time series, while taking into account the distribution of the data, is
proposed. The proposed layer is trained in an end-to-end fashion using
back-propagation and can lead to significant performance improvements. The
effectiveness of the proposed method is demonstrated using a large-scale limit
order book dataset.
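A minimal forward-pass sketch of such an adaptive normalization layer follows; the function name and weight shapes are illustrative assumptions, not the paper's actual layer, which is trained end-to-end with additional machinery:

```python
import numpy as np

def adaptive_normalize(x, W_shift, W_scale, eps=1e-8):
    """Forward pass of an adaptive input-normalization layer (sketch).

    x        : (n_features, window) slice of a multivariate time series
    W_shift  : (n_features, n_features) learned matrix producing the shift
    W_scale  : (n_features, n_features) learned matrix producing the scale
    The shift and scale are linear functions of the window's own summary
    statistics, so normalization adapts to each input's distribution.
    """
    mean = x.mean(axis=1)                    # per-feature mean of the window
    shift = W_shift @ mean                   # adaptive shift
    centered = x - shift[:, None]
    std = np.sqrt((centered ** 2).mean(axis=1)) + eps
    scale = W_scale @ std                    # adaptive scale
    return centered / scale[:, None]
```

With identity weight matrices the layer reduces to ordinary per-feature z-scoring; training the matrices by back-propagation is what makes the normalization adaptive.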
]]>839356
http://www.moneyscience.com/pg/blog/arXiv/read/839355/what-is-the-central-bank-of-wikipedia-arxiv190207920v1-cssi
Thu, 21 Feb 2019 22:02:06 -0600
<![CDATA[What is the central bank of Wikipedia?. (arXiv:1902.07920v1 [cs.SI])]]>We analyze the influence and interactions of the 60 largest world banks for
195 world countries using the reduced Google matrix algorithm for the English
Wikipedia network with 5 416 537 articles. While the top asset rank positions
are taken by the banks of China, with the Industrial and Commercial Bank of
China in first place, we show that the network influence is dominated by US
banks, with Goldman Sachs being the central bank. We determine the network
structure of interactions of banks and countries and the PageRank sensitivity
of countries to selected banks. We also present GPU-oriented code which
significantly accelerates the numerical computation of the reduced Google
matrix.
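The reduced Google matrix machinery builds on ordinary PageRank; a minimal power-iteration sketch is below. The conventions and damping value are standard textbook choices, not the paper's GPU implementation:

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-10):
    """PageRank by power iteration on G = alpha*S + (1-alpha)/N (sketch).

    adj[i, j] = 1 if page j links to page i (column-stochastic convention).
    Dangling columns (no out-links) are replaced by uniform columns. The
    reduced Google matrix of the paper then projects G onto a small node
    subset; this sketch computes only the global PageRank vector that such
    an analysis starts from.
    """
    n = adj.shape[0]
    cols = adj.sum(axis=0)
    S = np.where(cols > 0, adj / np.where(cols == 0, 1, cols), 1.0 / n)
    p = np.full(n, 1.0 / n)
    while True:
        p_new = alpha * (S @ p) + (1 - alpha) / n
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new
```

The "PageRank sensitivity" the abstract mentions is then obtained by perturbing selected links and re-reading the change in a country's PageRank component.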
]]>839355
http://www.moneyscience.com/pg/blog/arXiv/read/839273/robust-asset-allocation-for-roboadvisors-arxiv190207449v1-qfinpm
Thu, 21 Feb 2019 10:07:20 -0600
<![CDATA[Robust Asset Allocation for Robo-Advisors. (arXiv:1902.07449v1 [q-fin.PM])]]>In the last few years, the financial advisory industry has been impacted by
the emergence of digitalization and robo-advisors. This phenomenon affects
major financial services, including wealth management, employee savings plans,
asset managers, etc. Although the robo-advisory model is still in its early
stages, robo-advisors are estimated to help manage around $1 trillion of
assets in 2020 (OECD, 2017). And this trend is not going to stop with future generations,
who will live in a technology-driven and social media-based world. In the
investment industry, robo-advisors face different challenges: client profiling,
customization, asset pooling, liability constraints, etc. In its primary sense,
robo-advisory is a term for defining automated portfolio management. This
includes automated trading and rebalancing, but also automated portfolio
allocation. And this last issue is certainly the most important challenge for
robo-advisory over the next five years. Today, in many robo-advisors, asset
allocation is rather human-based and very far from being computer-based. The
reason is that portfolio optimization is a very difficult task, and can lead to
optimized mathematical solutions that are not optimal from a financial point of
view (Michaud, 1989). The big challenge for robo-advisors is therefore to be
able to optimize and rebalance hundreds of optimal portfolios without human
intervention. In this paper, we show that the mean-variance optimization
approach is mainly driven by arbitrage factors that are related to the concept
of hedging portfolios. This is why regularization and sparsity are necessary to
define robust asset allocation. However, this mathematical framework is more
complex and requires understanding how norm penalties impact portfolio
optimization. From a numerical point of view, it also requires the
implementation of non-traditional algorithms based on ADMM methods.
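The stabilizing effect of a norm penalty can be illustrated with the closed-form L2 (ridge) case; the sparse L1 variants discussed above have no closed form and require ADMM. All inputs and the penalty value below are illustrative:

```python
import numpy as np

def regularized_mvo(mu, cov, lam=0.0):
    """Ridge-regularized mean-variance weights (sketch).

    Solves max_w mu'w - 0.5 * w'(cov + lam*I)w and rescales the solution to
    sum to one. lam = 0 is the classical Markowitz answer; a positive lam
    shrinks the arbitrage-like long/short bets that make the unregularized
    solution unstable when assets are nearly collinear.
    """
    n = len(mu)
    w = np.linalg.solve(cov + lam * np.eye(n), mu)
    return w / w.sum()
```

With two nearly collinear assets, the unpenalized weights explode into huge offsetting long/short positions, exactly the hedging-portfolio pathology described above, while a modest ridge penalty returns sensible weights.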
]]>839273
http://www.moneyscience.com/pg/blog/arXiv/read/839274/matching-refugees-to-host-country-locations-based-on-preferences-and-outcomes-arxiv190207355v1-econgn
Thu, 21 Feb 2019 10:07:20 -0600
<![CDATA[Matching Refugees to Host Country Locations Based on Preferences and Outcomes. (arXiv:1902.07355v1 [econ.GN])]]>Facilitating the integration of refugees has become a major policy challenge
in many host countries in the context of the global displacement crisis. One of
the first policy decisions host countries make in the resettlement process is
the assignment of refugees to locations within the country. We develop a
mechanism to match refugees to locations in a way that takes into account their
expected integration outcomes and their preferences over where to be settled.
Our proposal is based on a priority mechanism that allows the government first
to specify a threshold g for the minimum level of expected integration success
that should be achieved. Refugees are then matched to locations based on their
preferences subject to meeting the government's specified threshold. The
mechanism is both strategy-proof and constrained efficient in that it always
generates a matching that is not Pareto dominated by any other matching that
respects the government's threshold. We demonstrate our approach using
simulations and a real-world application to refugee data from the United
States.
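The threshold-constrained priority mechanism can be sketched as a serial dictatorship over threshold-feasible choices; this is a simplified stand-in for the paper's mechanism, and all data structures (preference lists, score dictionaries) are hypothetical:

```python
def threshold_match(prefs, scores, capacity, g):
    """Priority matching with an integration floor (sketch).

    prefs[r]     : refugee r's ranked list of locations, best first
    scores[r][l] : expected integration success of r at location l
    capacity[l]  : available slots at location l
    g            : government's minimum acceptable expected integration
    Refugees are processed in priority order; each receives their most
    preferred location that still has capacity and clears the threshold g.
    A refugee with no feasible location stays unmatched in this sketch.
    """
    cap = dict(capacity)
    match = {}
    for r, ranking in enumerate(prefs):
        for loc in ranking:
            if cap.get(loc, 0) > 0 and scores[r][loc] >= g:
                match[r] = loc
                cap[loc] -= 1
                break
    return match
```

Because each refugee picks from a menu that does not depend on their reported ranking, truthful reporting is optimal within each step, which is the intuition behind the strategy-proofness claim above.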
]]>839274
http://www.moneyscience.com/pg/blog/arXiv/read/839271/market-impact-a-systematic-study-of-the-high-frequency-options-market-arxiv190205418v3-qfintr-updated
Thu, 21 Feb 2019 10:07:20 -0600
<![CDATA[Market Impact: A Systematic Study of the High Frequency Options Market. (arXiv:1902.05418v3 [q-fin.TR] UPDATED)]]>This paper deals with a fundamental subject that has seldom been addressed in
recent years, that of market impact in the options market. Our analysis is
based on a proprietary database of metaorders, i.e. large orders that are
split into smaller pieces before being sent to the market, on one of the main
Asian markets. In line with our previous work on the equity market [Said et
al., 2018], we propose an algorithmic approach to identify metaorders, based
on two implied volatility parameters: the at-the-money forward volatility and
the at-the-money forward skew. In both cases, we obtain results similar to
those of the now well understood equity market: the Square-root law, the Fair
Pricing Condition and Market Impact Dynamics.
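The square-root law referred to above says that impact grows with the square root of metaorder size relative to traded volume; a toy version, with an illustrative prefactor, is:

```python
import math

def sqrt_law_impact(Q, V, sigma, Y=0.5):
    """Square-root market impact law (sketch): impact = Y * sigma * sqrt(Q/V).

    Q     : metaorder size
    V     : traded volume over the execution horizon
    sigma : volatility over the same horizon
    Y     : order-one prefactor fitted from data (the value here is
            illustrative, not an empirical estimate)
    """
    return Y * sigma * math.sqrt(Q / V)
```

The characteristic concavity is that quadrupling the order size only doubles the impact.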
]]>839271
http://www.moneyscience.com/pg/blog/arXiv/read/839272/divestment-may-burst-the-carbon-bubble-if-investors-beliefs-tip-to-anticipating-strong-future-climate-policy-arxiv190207481v1-qfingn
Thu, 21 Feb 2019 10:07:20 -0600
<![CDATA[Divestment may burst the carbon bubble if investors' beliefs tip to anticipating strong future climate policy. (arXiv:1902.07481v1 [q-fin.GN])]]>To achieve the ambitious aims of the Paris climate agreement, the majority of
fossil-fuel reserves needs to remain underground. As current national
government commitments to mitigate greenhouse gas emissions are insufficient by
far, actors such as institutional and private investors and the social movement
on divestment from fossil fuels could play an important role in putting
pressure on national governments on the road to decarbonization. Using a
stochastic agent-based model of co-evolving financial market and investors'
beliefs about future climate policy on an adaptive social network, here we find
that the dynamics of divestment from fossil fuels show potential for social
tipping away from a fossil-fuel based economy. Our results further suggest that
socially responsible investors have leverage: a small share of 10-20% of
such moral investors is sufficient to initiate the burst of the carbon bubble,
consistent with the Pareto Principle. These findings demonstrate that
divestment has potential for contributing to decarbonization alongside other
social movements and policy instruments, particularly given the credible
imminence of strong international climate policy. Our analysis also indicates
the possible existence of a carbon bubble with potentially destabilizing
effects to the economy.
]]>839272
http://www.moneyscience.com/pg/blog/arXiv/read/839014/uncovering-the-drivers-behind-urban-economic-complexity-and-their-connection-to-urban-economic-performance-arxiv181202842v1-physicssocph
Sun, 09 Dec 2018 19:56:32 -0600
<![CDATA[Uncovering the drivers behind urban economic complexity and their connection to urban economic performance. (arXiv:1812.02842v1 [physics.soc-ph])]]>The distribution of employment across industries determines the economic
profiles of cities. But what drives the distribution of employment? We study a
simple model for the probability that an individual in a city is employed in a
given urban activity. The theory posits that three quantities drive this
probability: activity-specific complexity, individual-specific knowhow, and
city-specific collective knowhow. We use data on employment across
industries and metropolitan statistical areas in the US, from 1990 to 2016, to
show that these drivers can be measured and have measurable consequences.
First, we analyze the functional form of the probability function proposed by
the theory, and show its superiority when compared to competing alternatives.
Second, we show that individual and collective knowhow correlate with measures
of urban economic performance, suggesting the theory can provide testable
implications for why some cities are more prosperous than others.
]]>839014
http://www.moneyscience.com/pg/blog/arXiv/read/839013/optimal-investment-demand-and-arbitrage-under-price-impact-arxiv180409151v2-qfinmf-updated
Sun, 09 Dec 2018 19:56:32 -0600
<![CDATA[Optimal Investment, Demand and Arbitrage under Price Impact. (arXiv:1804.09151v2 [q-fin.MF] UPDATED)]]>This paper studies the optimal investment problem with random endowment in an
inventory-based price impact model with competitive market makers. Our goal is
to analyze how price impact affects optimal policies, as well as both pricing
rules and demand schedules for contingent claims. For exponential market maker
preferences, we establish two effects due to price impact: constrained trading
and non-linear hedging costs. Regarding the former, wealth processes in the impact
model are identified with those in a model without impact, but with constrained
trading, where the (random) constraint set is generically neither closed nor
convex. Regarding hedging, non-linear hedging costs motivate the study of
arbitrage free prices for the claim. We provide three such notions, which
coincide in the frictionless case, but which dramatically differ in the
presence of price impact. Additionally, we show that arbitrage opportunities, should
they arise from claim prices, can be exploited only for limited position sizes,
and may be ignored if outweighed by hedging considerations. We also show that
arbitrage inducing prices may arise endogenously in equilibrium, and that
equilibrium positions are inversely proportional to the market makers'
representative risk aversion. Therefore, large positions endogenously arise in
the limit of either market maker risk neutrality, or a large number of market
makers.
]]>839013
http://www.moneyscience.com/pg/blog/arXiv/read/838921/continuous-learning-augmented-investment-decisions-arxiv181202340v1-cslg
Thu, 06 Dec 2018 19:47:07 -0600
<![CDATA[Continuous Learning Augmented Investment Decisions. (arXiv:1812.02340v1 [cs.LG])]]>Investment decisions can benefit from incorporating an accumulated knowledge
of the past to drive future decision making. We introduce Continuous Learning
Augmentation (CLA), which is based on an explicit memory structure and a
feed-forward neural network (FFNN) base model, and is used to drive long-term
financial investment decisions. We demonstrate that our approach improves accuracy in
investment decision making while memory is addressed in an explainable way. Our
approach introduces novel remember cues, consisting of empirically learned
change points in the absolute error series of the FFNN. Memory recall is also
novel, with contextual similarity assessed over time by sampling distances
using dynamic time warping (DTW). We demonstrate the benefits of our approach
by using it in an expected return forecasting task to drive investment
decisions. In an investment simulation in a broad international equity universe
from 2003 to 2017, our approach significantly outperforms FFNN base models. We
also illustrate how CLA's memory addressing works in practice, using a worked
example to demonstrate the explainability of our approach.
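Memory recall via dynamic time warping can be sketched with the classic dynamic-programming recurrence; this is generic DTW, not CLA's exact distance-sampling scheme:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D series (sketch).

    Used here in the spirit of CLA's memory recall: similarity between the
    current context window and a stored window is scored by DTW rather than
    a pointwise metric, so phase-shifted but similar episodes still match.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because the warping path may stretch or compress either series, a lagged copy of a pattern still scores as a perfect match.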
]]>838921
http://www.moneyscience.com/pg/blog/arXiv/read/838923/general-compound-hawkes-processes-in-limit-order-books-arxiv181202298v1-qfintr
Thu, 06 Dec 2018 19:47:07 -0600
<![CDATA[General Compound Hawkes Processes in Limit Order Books. (arXiv:1812.02298v1 [q-fin.TR])]]>In this paper, we study various new Hawkes processes. Specifically, we
construct general compound Hawkes processes and investigate their properties in
limit order books. With regards to these general compound Hawkes processes, we
prove a Law of Large Numbers (LLN) and Functional Central Limit Theorems
(FCLTs) for several specific variations. We apply several of these FCLTs to
limit order books to study the link between price volatility and order flow,
where the volatility in mid-price changes is expressed in terms of parameters
describing the arrival rates and mid-price process.
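A Hawkes process with exponential kernel can be simulated by Ogata's thinning, with a compound price path built on top. The i.i.d. tick signs below are a simplified stand-in for the general (e.g. Markov-chain) marks studied in compound Hawkes models; all parameter values are illustrative:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Simulate a Hawkes process by Ogata's thinning (sketch).

    Intensity: lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)) over
    past events t_i. Requires alpha < beta for stationarity. Returns the
    event times in [0, horizon).
    """
    rng = random.Random(seed)
    t, events = 0.0, []
    while t < horizon:
        # Upper bound: intensity only decays until the next event.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:   # accept with prob lam_t/lam_bar
            events.append(t)
    return events

def compound_price(events, tick=0.01, seed=0):
    """Compound price path (sketch): the mid-price moves +/- one tick at each
    Hawkes event, with i.i.d. signs as a toy mark distribution."""
    rng = random.Random(seed)
    p = [0.0]
    for _ in events:
        p.append(p[-1] + tick * rng.choice([-1, 1]))
    return p
```

The FCLT results mentioned above concern the diffusive limit of exactly this kind of compound price path as the time horizon grows.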
]]>838923
http://www.moneyscience.com/pg/blog/arXiv/read/838922/in-stochastic-search-of-a-fairer-alife-arxiv181202311v1-qfingn
Thu, 06 Dec 2018 19:47:07 -0600
<![CDATA[In (Stochastic) Search of a Fairer Alife. (arXiv:1812.02311v1 [q-fin.GN])]]>Economies and societal structures in general are complex stochastic systems
which may not lend themselves well to algebraic analysis. An addition of
subjective value criteria to the mechanics of interacting agents will further
complicate analysis. The purpose of this short study is to demonstrate
capabilities of agent-based computational economics to be a platform for
fairness or equity analysis in both a broad and practical sense.
]]>838922
http://www.moneyscience.com/pg/blog/arXiv/read/838920/quantification-of-market-efficiency-based-on-informationalentropy-arxiv181202371v1-qfingn
Thu, 06 Dec 2018 19:47:07 -0600
<![CDATA[Quantification of market efficiency based on informational-entropy. (arXiv:1812.02371v1 [q-fin.GN])]]>Since the 1960s, the question of whether markets are efficient has been
controversially discussed. One reason the controversy is difficult to settle
is the lack of a universal, yet precise, quantitative definition of efficiency
that is able to graduate between different states of efficiency. The main
purpose of this article is to fill this gap by developing a measure for the
efficiency of markets that fulfills all the stated requirements. It is shown
that the new definition of efficiency, based on informational-entropy, is
equivalent to the two most widely used definitions of efficiency, from Fama
and Jensen. The new measure therefore enables steps to settle the dispute over
the state of efficiency in markets. Moreover, it is shown that inefficiency in
a market can either arise from the possibility of using information to predict
an event at a higher-than-chance level, or can emerge from wrong pricing/quotes
that do not reflect the true probabilities of possible events. Finally, the
calculation of efficiency is demonstrated on a simple coin-tossing game, to
show how one could exactly quantify the efficiency in any market-like system,
if all probabilities are known.
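The coin-tossing illustration can be made concrete by scoring efficiency as the Shannon entropy of the outcome distribution relative to its maximum. This two-outcome toy is only an illustration of the graduation idea, not the paper's general definition:

```python
import math

def entropy_efficiency(p_heads):
    """Informational-entropy efficiency of a biased coin (sketch).

    A market-like game is fully efficient when outcomes are maximally
    unpredictable. We score this as the Shannon entropy of the outcome
    distribution relative to its maximum (1 bit for a fair coin): 1.0 means
    fully efficient, 0.0 means the outcome is perfectly predictable.
    """
    p = p_heads
    if p in (0.0, 1.0):
        return 0.0
    # Binary entropy; already relative to the maximum of 1 bit.
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
```

A biased coin lands strictly between the two extremes, which is exactly the graduated notion of efficiency the abstract calls for.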
]]>838920
http://www.moneyscience.com/pg/blog/arXiv/read/838918/evaluating-the-building-blocks-of-a-dynamically-adaptive-systematic-trading-strategy-arxiv181202527v1-qfinst
Thu, 06 Dec 2018 19:47:06 -0600
<![CDATA[Evaluating the Building Blocks of a Dynamically Adaptive Systematic Trading Strategy. (arXiv:1812.02527v1 [q-fin.ST])]]>Financial markets change their behaviours abruptly. The mean, variance and
correlation patterns of stocks can vary dramatically, triggered by fundamental
changes in macroeconomic variables, policies or regulations. A trader needs to
adapt her trading style to make the best out of the different phases in the
stock markets. Similarly, an investor might want to invest in different asset
classes in different market regimes for a stable risk adjusted return profile.
Here, we explore the use of State Switching Markov Autoregressive models for
identifying and predicting different market regimes loosely modeled on the
Wyckoff Price Regimes of accumulation, distribution, advance and decline. We
explore the behaviour of various asset classes and market sectors in the
identified regimes. We look at trading strategies such as trend following,
range trading, retracement trading and breakout trading in the given market
regimes and tailor them for the specific regimes. We tie together the best
trading strategy and asset allocation for the identified market regimes to come
up with a robust dynamically adaptive trading system to outperform simple
traditional alphas.
]]>838918
http://www.moneyscience.com/pg/blog/arXiv/read/838919/using-published-bidask-curves-to-error-dress-spot-electricity-price-forecasts-arxiv181202433v1-qfinst
Thu, 06 Dec 2018 19:47:06 -0600
<![CDATA[Using published bid/ask curves to error dress spot electricity price forecasts. (arXiv:1812.02433v1 [q-fin.ST])]]>Accurate forecasts of electricity spot prices are essential to the daily
operational and planning decisions made by power producers and distributors.
Typically, point forecasts of these quantities suffice, particularly in the
Nord Pool market where the large quantity of hydro power leads to price
stability. However, when situations become irregular, deviations on the price
scale can often be extreme and difficult to pinpoint precisely, which is a
result of the highly varying marginal costs of generating facilities at the
edges of the load curve. In these situations it is useful to supplant a point
forecast of price with a distributional forecast, in particular one whose tails
are adaptive to the current production regime. This work outlines a methodology
for leveraging published bid/ask information from the Nord Pool market to
construct such adaptive predictive distributions. Our methodology is a
non-standard application of the concept of error-dressing, which couples a
feature driven error distribution in volume space with a non-linear
transformation via the published bid/ask curves to obtain highly non-symmetric,
adaptive price distributions. Using data from the Nord Pool market, we show
that our method outperforms more standard forms of distributional modeling. We
further show how such distributions can be used to render "warning systems"
that issue reliable probabilities of prices exceeding various important
thresholds.
]]>838919
http://www.moneyscience.com/pg/blog/arXiv/read/838917/simulation-of-stylized-facts-in-agentbased-computational-economic-market-models-arxiv181202726v1-econgn
Thu, 06 Dec 2018 19:47:00 -0600
<![CDATA[Simulation of Stylized Facts in Agent-Based Computational Economic Market Models. (arXiv:1812.02726v1 [econ.GN])]]>We study the qualitative and quantitative appearance of stylized facts in
several agent-based computational economic market (ABCEM) models. We perform
our simulations with the SABCEMM (Simulator for Agent-Based Computational
Economic Market Models) tool recently introduced by the authors (Trimborn et
al. 2018a). The SABCEMM simulator is implemented in C++ and is well suited for
large scale computations. Thanks to its object-oriented software design, the
SABCEMM tool enables the creation of new models by plugging together novel and
existing agent and market designs as easily as plugging together pieces of a
puzzle. We present new ABCEM models created by recombining existing models and
study them with respect to stylized facts as well. The code is available on
GitHub (Trimborn et al. 2018b), such that all results can be reproduced by the
reader.
]]>838917
http://www.moneyscience.com/pg/blog/arXiv/read/838841/the-alphaheston-stochastic-volatility-model-arxiv181201914v1-qfinmf
Wed, 05 Dec 2018 19:58:55 -0600
<![CDATA[The Alpha-Heston Stochastic Volatility Model. (arXiv:1812.01914v1 [q-fin.MF])]]>We introduce an affine extension of the Heston model where the instantaneous
variance process contains a jump part driven by $\alpha$-stable processes with
$\alpha\in(1,2]$. In this framework, we examine the implied volatility and its
asymptotic behaviors for both asset and variance options. Furthermore, we
examine the jump clustering phenomenon observed on the variance market and
provide a jump cluster decomposition which allows us to analyse the cluster
processes.
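In one common affine parameterization of such alpha-CIR-type variance dynamics (the notation below is illustrative and need not match the paper's exact formulation), the variance process reads:

```latex
dV_t = \kappa(\theta - V_t)\,dt + \sigma\sqrt{V_t}\,dW_t
       + \eta\, V_{t-}^{1/\alpha}\, dZ^{\alpha}_t,
\qquad \alpha \in (1, 2],
```

where $W$ is a Brownian motion and $Z^{\alpha}$ is a spectrally positive $\alpha$-stable process; the boundary case $\alpha = 2$ recovers a classical Heston-type diffusive variance, while $\alpha < 2$ adds the heavy-tailed jump part that drives the clustering behaviour described above.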
]]>838841
http://www.moneyscience.com/pg/blog/arXiv/read/838842/on-dynamics-of-wageprice-spiral-and-stagflation-in-some-model-economic-systems-arxiv181201707v1-qfingn
Wed, 05 Dec 2018 19:58:55 -0600
<![CDATA[On dynamics of wage-price spiral and stagflation in some model economic systems. (arXiv:1812.01707v1 [q-fin.GN])]]>This article aims to present an elementary analytical solution to the
question of how a structure of differentiated rates of return forms in a
classical gravitation model and in a model of the dynamics of price-wage
spirals.
]]>838842
http://www.moneyscience.com/pg/blog/arXiv/read/838786/machine-learning-for-yield-curve-feature-extraction-application-to-illiquid-corporate-bonds-arxiv181201102v1-qfinst
Tue, 04 Dec 2018 19:48:56 -0600
<![CDATA[Machine Learning for Yield Curve Feature Extraction: Application to Illiquid Corporate Bonds. (arXiv:1812.01102v1 [q-fin.ST])]]>This paper studies an application of machine learning in extracting features
from the historical market implied corporate bond yields. We consider an
example of a hypothetical illiquid fixed income market. After choosing a
surrogate liquid market, we apply the Denoising Autoencoder (DAE) algorithm to
learn the features of the missing yield parameters from the historical data of
the instruments traded in the chosen liquid market. The DAE algorithm is then
challenged by two "point-in-time" inpainting algorithms taken from the image
processing and computer vision domain. It is observed that, when tested on
unobserved rate surfaces, the DAE algorithm exhibits superior performance
thanks to the features it has learned from the historical shapes of yield
curves.
]]>838786
http://www.moneyscience.com/pg/blog/arXiv/read/838785/predicting-future-stock-market-structure-by-combining-social-and-financial-network-information-arxiv181201103v1-qfinst
Tue, 04 Dec 2018 19:48:51 -0600
<![CDATA[Predicting future stock market structure by combining social and financial network information. (arXiv:1812.01103v1 [q-fin.ST])]]>We demonstrate that future market correlation structure can be predicted with
high out-of-sample accuracy using a multiplex network approach that combines
information from social media and financial data. Market structure is measured
by quantifying the co-movement of asset price returns, while social structure
is measured as the co-movement of social media opinion on those same assets.
Predictions are obtained with a simple model that uses link persistence and
link formation by triadic closure across both financial and social media
layers. Results demonstrate that the proposed model can predict future market
structure with up to a 40% out-of-sample performance improvement compared to a
benchmark model that assumes a time-invariant financial correlation structure.
Social media information leads to improved models for all settings tested,
particularly in the long-term prediction of financial market structure.
Surprisingly, financial market structure exhibited higher predictability than
social opinion structure.
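Link persistence plus triadic closure across layers can be sketched as a binary rule; the paper's model weights and thresholds these signals, whereas here an edge is simply predicted or not, and edges are assumed to be tuples (i, j) with i < j:

```python
def predict_links(financial_edges, social_edges, n):
    """One-step structure prediction by persistence + triadic closure (sketch).

    An edge (i, j) is predicted for the next period if it already exists in
    the financial layer (persistence), or if i and j share a common
    neighbour in the union of the financial and social layers (triadic
    closure across the multiplex).
    """
    union = set(financial_edges) | set(social_edges)
    neigh = {i: set() for i in range(n)}
    for i, j in union:
        neigh[i].add(j)
        neigh[j].add(i)
    predicted = set(financial_edges)
    for i in range(n):
        for j in range(i + 1, n):
            if (i, j) not in predicted and neigh[i] & neigh[j]:
                predicted.add((i, j))
    return predicted
```

The social layer enters only through the shared-neighbour test, which is how social media co-movement information can improve the prediction of future financial edges.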
]]>838785