<![CDATA[MoneyScience: Research]]>
http://beta.moneyscience.com/pg/blog-directory/research?view=rss
http://www.moneyscience.com/pg/blog/arXiv/read/870350/a-simulation-of-the-insurance-industry-the-problem-of-risk-model-homogeneity-arxiv190705954v1-econgnMon, 15 Jul 2019 23:01:36 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/870350/a-simulation-of-the-insurance-industry-the-problem-of-risk-model-homogeneity-arxiv190705954v1-econgn
<![CDATA[A simulation of the insurance industry: The problem of risk model homogeneity. (arXiv:1907.05954v1 [econ.GN])]]>We develop an agent-based simulation of the catastrophe insurance and
reinsurance industry and use it to study the problem of risk model homogeneity.
The model simulates the balance sheets of insurance firms, which collect premiums
from clients in return for insuring them against intermittent, heavy-tailed
risks. Firms manage their capital and pay dividends to their investors, and use
either reinsurance contracts or cat bonds to hedge their tail risk. The model
generates plausible time series of profits and losses and recovers stylized
facts, such as the insurance cycle and the emergence of asymmetric, long-tailed
firm size distributions. We use the model to investigate the problem of risk
model homogeneity. Under Solvency II, insurance companies are required to use
only certified risk models. This has led to a situation in which only a few
firms provide risk models, creating a systemic fragility to the errors in these
models. We demonstrate that using too few models increases the risk of
nonpayment and default while lowering profits for the industry as a whole. The
presence of the reinsurance industry ameliorates the problem but does not
remove it. Our results suggest that it would be valuable for regulators to
incentivize model diversity. The framework we develop here provides a first
step toward a simulation model of the insurance industry for testing policies
and strategies for better capital management.
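The balance-sheet loop described above can be sketched for a single insurer. The premium level, catastrophe frequency, Pareto tail index, and dividend rate below are illustrative assumptions, not parameters from the paper.

```python
import random

def simulate_insurer(n_periods=200, premium=1.2, capital=50.0,
                     dividend_rate=0.05, tail_alpha=2.5, seed=0):
    """Minimal single-insurer sketch: collect premiums, pay intermittent
    heavy-tailed (Pareto) claims, pay dividends, default if capital <= 0.
    All parameter values are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    history = []
    for t in range(n_periods):
        capital += premium                         # premium income
        if rng.random() < 0.1:                     # intermittent catastrophe
            capital -= rng.paretovariate(tail_alpha)  # heavy-tailed claim
        if capital <= 0:
            return history, t                      # default time
        capital -= dividend_rate * capital         # dividend payout
        history.append(capital)
    return history, None                           # survived all periods

history, default_time = simulate_insurer()
```

Extending this loop to many interacting firms, reinsurance contracts, and cat bonds is the substance of the paper's agent-based model.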
]]>870350http://www.moneyscience.com/pg/blog/arXiv/read/870349/online-rental-housing-market-representation-and-the-digital-reproduction-of-urban-inequality-arxiv190706118v1-econgnMon, 15 Jul 2019 23:01:36 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/870349/online-rental-housing-market-representation-and-the-digital-reproduction-of-urban-inequality-arxiv190706118v1-econgn
<![CDATA[Online Rental Housing Market Representation and the Digital Reproduction of Urban Inequality. (arXiv:1907.06118v1 [econ.GN])]]>As the rental housing market moves online, the Internet offers divergent
possible futures: either the promise of more-equal access to information for
previously marginalized homeseekers, or a reproduction of longstanding
information inequalities. Biases in online listings' representativeness could
impact different communities' access to housing search information, reinforcing
traditional information segregation patterns through a digital divide. They
could also circumscribe housing practitioners' and researchers' ability to draw
broad market insights from listings to understand rental supply and
affordability. This study examines millions of Craigslist rental listings
across the US and finds that they spatially concentrate and over-represent
whiter, wealthier, and better-educated communities. Other significant
demographic differences exist in age, language, college enrollment, rent,
poverty rate, and household size. Most cities' online housing markets are
digitally segregated by race and class, and we discuss various implications for
residential mobility, community legibility, gentrification, displacement,
housing voucher utilization, and automated monitoring and analytics in the
smart cities paradigm. While Craigslist contains valuable crowdsourced data to
better understand affordability and available rental supply in real-time, it
does not evenly represent all market segments. The Internet promises
information democratization, and online listings can reduce housing search
costs and increase choice sets. However, technology access/preferences and
information channel segregation can concentrate such information-broadcasting
benefits in already-advantaged communities, reproducing traditional
inequalities and reinforcing residential sorting and segregation dynamics.
Technology platforms construct new institutions with the power to shape spatial
economies and human interactions.
]]>870349http://www.moneyscience.com/pg/blog/arXiv/read/870347/multilevel-orderflow-imbalance-in-a-limit-order-book-arxiv190706230v1-qfintrMon, 15 Jul 2019 23:01:36 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/870347/multilevel-orderflow-imbalance-in-a-limit-order-book-arxiv190706230v1-qfintr
<![CDATA[Multi-Level Order-Flow Imbalance in a Limit Order Book. (arXiv:1907.06230v1 [q-fin.TR])]]>We study the \emph{multi-level order-flow imbalance (MLOFI)}, which measures
the net flow of buy and sell orders at different price levels in a limit order
book (LOB). Using a recent, high-quality data set for 6 liquid stocks on
Nasdaq, we apply ridge regression to fit a simple, linear relationship between
MLOFI and the contemporaneous change in mid-price. For all 6 stocks that we
study, we find that the goodness-of-fit of the relationship improves with each
additional price level that we include in the MLOFI vector. Our results
underline how the complex order-flow activity deep into the LOB can influence
the price-formation process.
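The core regression step can be sketched in pure Python: a per-level order-flow term in the spirit of Cont-style OFI (the paper's exact MLOFI definition may differ) and a small ridge fit via the regularized normal equations, here run on synthetic data.

```python
import random

def ofi_level(bid_p0, bid_q0, bid_p1, bid_q1, ask_p0, ask_q0, ask_p1, ask_q1):
    """Order-flow imbalance at one price level between two LOB snapshots
    (sketch; sign conventions simplified relative to the literature)."""
    if bid_p1 > bid_p0:
        bid_flow = bid_q1
    elif bid_p1 == bid_p0:
        bid_flow = bid_q1 - bid_q0
    else:
        bid_flow = -bid_q0
    if ask_p1 < ask_p0:
        ask_flow = ask_q1
    elif ask_p1 == ask_p0:
        ask_flow = ask_q1 - ask_q0
    else:
        ask_flow = -ask_q0
    return bid_flow - ask_flow

def ridge_fit(X, y, lam=1.0):
    """Ridge regression via the regularized normal equations
    (X'X + lam*I) beta = X'y, solved by Gaussian elimination.
    Pure-Python sketch for small feature counts."""
    d, n = len(X[0]), len(X)
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) + (lam if i == j else 0.0)
          for j in range(d)] for i in range(d)]
    b = [sum(X[t][i] * y[t] for t in range(n)) for i in range(d)]
    for col in range(d):                       # elimination with pivoting
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, d):
            f = A[r][col] / A[col][col]
            for c in range(col, d):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * d
    for i in range(d - 1, -1, -1):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, d))) / A[i][i]
    return beta
```

In the paper's setting, `X` would hold the MLOFI vector (one column per price level) and `y` the contemporaneous mid-price changes.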
]]>870347http://www.moneyscience.com/pg/blog/arXiv/read/870348/from-quadratic-hawkes-processes-to-superheston-rough-volatility-models-with-zumbach-effect-arxiv190706151v1-qfinstMon, 15 Jul 2019 23:01:36 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/870348/from-quadratic-hawkes-processes-to-superheston-rough-volatility-models-with-zumbach-effect-arxiv190706151v1-qfinst
<![CDATA[From quadratic Hawkes processes to super-Heston rough volatility models with Zumbach effect. (arXiv:1907.06151v1 [q-fin.ST])]]>Using microscopic price models based on Hawkes processes, it has been shown
that under some no-arbitrage condition, the high degree of endogeneity of
markets together with the phenomenon of metaorders splitting generate rough
Heston-type volatility at the macroscopic scale. One additional important
feature of financial dynamics, at the heart of several influential works in
econophysics, is the so-called feedback or Zumbach effect. This essentially
means that past trends in returns convey significant information on future
volatility. A natural way to reproduce this property in microstructure modeling
is to use quadratic versions of Hawkes processes. We show that after suitable
rescaling, the long term limits of these processes are refined versions of
rough Heston models where the volatility coefficient is enhanced compared to
the square root characterizing Heston-type dynamics. Furthermore the Zumbach
effect remains explicit in these limiting rough volatility models.
]]>870348http://www.moneyscience.com/pg/blog/arXiv/read/870345/neural-network-regression-for-bermudan-option-pricing-arxiv190706474v1-mathprMon, 15 Jul 2019 23:01:35 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/870345/neural-network-regression-for-bermudan-option-pricing-arxiv190706474v1-mathpr
<![CDATA[Neural network regression for Bermudan option pricing. (arXiv:1907.06474v1 [math.PR])]]>The pricing of Bermudan options amounts to solving a dynamic programming
principle, in which the main difficulty, especially in high dimensions, comes
from the computation of the conditional expectation involved in the
continuation value. These conditional expectations are classically computed by
regression techniques on a finite dimensional vector space. In this work, we
study neural network approximations of conditional expectations. We prove the
convergence of the well-known Longstaff and Schwartz algorithm when the
standard least-squares regression is replaced by a neural network approximation.
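A minimal sketch of the classical Longstaff-Schwartz backward induction follows, with a polynomial least-squares fit standing where the paper substitutes a neural network. The GBM dynamics, basis (1, s, s^2), and contract parameters are illustrative.

```python
import math
import random

def _solve(A, b):
    """Solve the small linear system A x = b by Gaussian elimination."""
    d = len(b)
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, d):
            f = A[r][col] / A[col][col]
            for c in range(col, d):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * d
    for i in range(d - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, d))) / A[i][i]
    return x

def lsm_bermudan_put(s0=100.0, strike=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=10, n_paths=10000, seed=0):
    """Longstaff-Schwartz price of a Bermudan put under GBM: regress the
    continuation value on (1, s, s^2) at each exercise date.  The paper's
    contribution is to replace this regression with a neural network."""
    rng = random.Random(seed)
    dt = T / n_steps
    disc = math.exp(-r * dt)
    paths = []
    for _ in range(n_paths):                       # simulate GBM paths
        s, path = s0, []
        for _ in range(n_steps):
            s *= math.exp((r - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
            path.append(s)
        paths.append(path)
    payoff = lambda s: max(strike - s, 0.0)
    cash = [payoff(p[-1]) for p in paths]          # value if held to maturity
    basis = lambda s: [1.0, s, s * s]
    for t in range(n_steps - 2, -1, -1):           # backward induction
        cash = [c * disc for c in cash]
        itm = [i for i in range(n_paths) if payoff(paths[i][t]) > 0.0]
        if len(itm) < 3:
            continue
        X = [basis(paths[i][t]) for i in itm]
        y = [cash[i] for i in itm]
        G = [[sum(x[a] * x[c] for x in X) for c in range(3)] for a in range(3)]
        rhs = [sum(X[k][a] * y[k] for k in range(len(X))) for a in range(3)]
        beta = _solve(G, rhs)                      # continuation-value fit
        for i in itm:
            cont = sum(bb * xx for bb, xx in zip(beta, basis(paths[i][t])))
            exercise = payoff(paths[i][t])
            if exercise > cont:
                cash[i] = exercise                 # exercise now
    return disc * sum(cash) / n_paths

price = lsm_bermudan_put()
```

Replacing `_solve`-based least squares with a trained network evaluated at `paths[i][t]` leaves the rest of the induction unchanged, which is where the paper's convergence analysis applies.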
]]>870345http://www.moneyscience.com/pg/blog/arXiv/read/870346/confidentiality-and-linked-data-arxiv190706465v1-cscrMon, 15 Jul 2019 23:01:35 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/870346/confidentiality-and-linked-data-arxiv190706465v1-cscr
<![CDATA[Confidentiality and linked data. (arXiv:1907.06465v1 [cs.CR])]]>Data providers such as government statistical agencies perform a balancing
act: maximising information published to inform decision-making and research,
while simultaneously protecting privacy. The emergence of identified
administrative datasets with the potential for sharing (and thus linking)
offers huge potential benefits but significant additional risks. This article
introduces the principles and methods of linking data across different sources
and points in time, focusing on potential areas of risk. We then consider
confidentiality risk, focusing in particular on the "intruder" problem central
to the area, and looking at both risks from data producer outputs and from the
release of micro-data for further analysis. Finally, we briefly consider
potential solutions to micro-data release, both the statistical solutions
considered in other contributed articles and non-statistical solutions.
]]>870346http://www.moneyscience.com/pg/blog/arXiv/read/870344/risk-management-with-tail-quasilinear-means-arxiv190206941v2-qfinrm-updatedMon, 15 Jul 2019 23:01:35 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/870344/risk-management-with-tail-quasilinear-means-arxiv190206941v2-qfinrm-updated
<![CDATA[Risk Management with Tail Quasi-Linear Means. (arXiv:1902.06941v2 [q-fin.RM] UPDATED)]]>We generalize Quasi-Linear Means by restricting to the tail of the risk
distribution and show that this can be a useful quantity in risk management
since it comprises in its general form the Value at Risk, the Tail Value at
Risk and the Entropic Risk Measure in a unified way. We then investigate the
fundamental properties of the proposed measure and show its unique features and
implications in the risk measurement process. Furthermore, we derive formulas
for truncated elliptical models of losses and provide formulas for selected
members of such models.
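The unification is easy to see empirically: with phi(x) = x the tail quasi-linear mean reduces to the Tail Value at Risk, and with phi(x) = exp(gamma x) to an entropic-type measure. A sketch on a toy loss sample, assuming a simple order-statistic VaR convention:

```python
import math

def var_(losses, alpha):
    """Empirical Value at Risk at level alpha (order-statistic sketch)."""
    xs = sorted(losses)
    k = math.ceil(alpha * len(xs)) - 1
    return xs[k]

def tail_quasi_linear_mean(losses, alpha, phi, phi_inv):
    """Quasi-linear mean phi^{-1}(mean(phi(x))) restricted to losses at or
    beyond VaR_alpha -- an empirical sketch of the paper's tail quasi-linear
    mean (the paper treats general distributions and continuous quantiles)."""
    q = var_(losses, alpha)
    tail = [x for x in losses if x >= q]
    return phi_inv(sum(phi(x) for x in tail) / len(tail))

losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# phi(x) = x recovers the Tail Value at Risk (expected shortfall)
tvar = tail_quasi_linear_mean(losses, 0.8, lambda x: x, lambda y: y)
# phi(x) = exp(g*x) recovers an entropic-type tail measure
g = 0.5
ent = tail_quasi_linear_mean(losses, 0.8,
                             lambda x: math.exp(g * x),
                             lambda y: math.log(y) / g)
```

By Jensen's inequality the entropic variant dominates the TVaR on the same tail, reflecting its greater sensitivity to extreme losses.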
]]>870344http://www.moneyscience.com/pg/blog/arXiv/read/869855/singularities-and-catastrophes-in-economics-historical-perspectives-and-future-directions-arxiv190705582v1-econgnSun, 14 Jul 2019 23:02:28 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/869855/singularities-and-catastrophes-in-economics-historical-perspectives-and-future-directions-arxiv190705582v1-econgn
<![CDATA[Singularities and Catastrophes in Economics: Historical Perspectives and Future Directions. (arXiv:1907.05582v1 [econ.GN])]]>Economic theory is a mathematically rich field in which there are
opportunities for the formal analysis of singularities and catastrophes. This
article looks at the historical context of singularities through the work of
two eminent Frenchmen around the late 1960s and 1970s. Ren\'e Thom (1923-2002)
was an acclaimed mathematician having received the Fields Medal in 1958,
whereas G\'erard Debreu (1921-2004) would receive the Nobel Prize in economics
in 1983. Both were highly influential within their fields and given the
fundamental nature of their work, the potential for cross-fertilisation would
seem to be quite promising. This was not to be the case: Debreu knew of Thom's
work and cited it in the analysis of his own work, but despite this and other
applied mathematicians taking catastrophe theory to economics, the theory never
achieved a lasting following and relatively few results were published. This
article reviews Debreu's analysis of the so-called ${\it regular}$ and ${\it
critical}$ economies in order to draw some insights into the economic
perspective of singularities before moving to how singularities arise naturally
in the Nash equilibria of game theory. Finally, a modern treatment of stochastic
game theory is covered through recent work on the quantal response equilibrium.
In this view the Nash equilibrium is to the quantal response equilibrium what
deterministic catastrophe theory is to stochastic catastrophe theory; caveats
about when this analogy breaks down are discussed at the end.
]]>869855http://www.moneyscience.com/pg/blog/arXiv/read/869854/from-small-markets-to-big-markets-arxiv190705593v1-qfinpmSun, 14 Jul 2019 23:02:27 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/869854/from-small-markets-to-big-markets-arxiv190705593v1-qfinpm
<![CDATA[From small markets to big markets. (arXiv:1907.05593v1 [q-fin.PM])]]>We study the most famous example of a large financial market: the Arbitrage
Pricing Model, where investors can trade in a one-period setting with countably
many assets admitting a factor structure. We consider the problem of maximising
expected utility in this setting. Besides establishing the existence of
optimizers under weaker assumptions than previous papers, we go on to study the
relationship between optimal investments in finite market segments and those in
the whole market. We show that certain natural (but nontrivial) continuity
rules hold: maximal satisfaction, reservation prices and (convex combinations
of) optimizers computed in small markets converge to their respective
counterparts in the big market.
]]>869854http://www.moneyscience.com/pg/blog/arXiv/read/869852/dreaming-machine-learning-lipschitz-extensions-for-reinforcement-learning-on-financial-markets-arxiv190705697v1-qfinstSun, 14 Jul 2019 23:02:27 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/869852/dreaming-machine-learning-lipschitz-extensions-for-reinforcement-learning-on-financial-markets-arxiv190705697v1-qfinst
<![CDATA[Dreaming machine learning: Lipschitz extensions for reinforcement learning on financial markets. (arXiv:1907.05697v1 [q-fin.ST])]]>We develop a new topological structure for the construction of a
reinforcement learning model in the framework of financial markets. It is based
on Lipschitz type extension of reward functions defined in metric spaces. Using
some known states of a dynamical system that represents the evolution of a
financial market, we use our technique to simulate new states, which we call
"dreams". These new states are used to feed a learning algorithm designed to
improve the investment strategy.
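The Lipschitz extension underlying the construction has the closed McShane-Whitney form f(x) = min_i [f(x_i) + L d(x, x_i)]. A sketch with scalar states and the absolute-value metric (the paper works in general metric spaces, and the reward values below are invented):

```python
def mcshane_extension(known, L):
    """McShane-Whitney Lipschitz extension of a reward function known only
    at some states: f(x) = min over known states x_i of f(x_i) + L*|x - x_i|.
    The returned f agrees with the data and is L-Lipschitz everywhere, so it
    can score simulated 'dream' states never visited by the market."""
    def f(x):
        return min(fx + L * abs(x - xi) for xi, fx in known.items())
    return f

rewards = {0.0: 1.0, 1.0: 3.0}   # known state -> reward (illustrative)
f = mcshane_extension(rewards, L=2.0)
```

Evaluating `f` at unseen states is what lets the learning algorithm be fed with rewards for the simulated "dreams".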
]]>869852http://www.moneyscience.com/pg/blog/arXiv/read/869853/gittins-theorem-under-uncertainty-arxiv190705689v1-mathocSun, 14 Jul 2019 23:02:27 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/869853/gittins-theorem-under-uncertainty-arxiv190705689v1-mathoc
<![CDATA[Gittins' theorem under uncertainty. (arXiv:1907.05689v1 [math.OC])]]>We study dynamic allocation problems for discrete time multi-armed bandits
under uncertainty, based on the theory of nonlinear expectations. We show
that, under strong independence of the bandits and with some relaxation in the
definition of optimality, a Gittins allocation index gives optimal choices.
This involves studying the interaction of our uncertainty with controls which
determine the filtration. We also run a simple numerical example which
illustrates the interaction between the willingness to explore and uncertainty
aversion of the agent when making decisions.
]]>869853http://www.moneyscience.com/pg/blog/arXiv/read/868744/statistical-mechanics-of-time-series-arxiv190704925v1-qfinstThu, 11 Jul 2019 23:01:51 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/868744/statistical-mechanics-of-time-series-arxiv190704925v1-qfinst
<![CDATA[Statistical mechanics of time series. (arXiv:1907.04925v1 [q-fin.ST])]]>Countless natural and social multivariate systems are studied through sets of
simultaneous and time-spaced measurements of the observables that drive their
dynamics, i.e., through sets of time series. Typically, this is done via
hypothesis testing: the statistical properties of the empirical time series are
tested against those expected under a suitable null hypothesis. This is a very
challenging task in complex interacting systems, where statistical stability is
often poor due to lack of stationarity and ergodicity. Here, we describe an
unsupervised, data-driven framework to perform hypothesis testing in such
situations. This consists of a statistical mechanical theory - derived from
first principles - for ensembles of time series designed to preserve, on
average, some of the statistical properties observed on an empirical set of
time series. We showcase its possible applications on a set of stock market
returns from the NYSE.
]]>868744http://www.moneyscience.com/pg/blog/arXiv/read/868743/mathematical-analysis-of-dynamic-risk-default-in-microfinance-arxiv190704937v1-qfinrmThu, 11 Jul 2019 23:01:51 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/868743/mathematical-analysis-of-dynamic-risk-default-in-microfinance-arxiv190704937v1-qfinrm
<![CDATA[Mathematical Analysis of Dynamic Risk Default in Microfinance. (arXiv:1907.04937v1 [q-fin.RM])]]>In this work we develop a new approach to the non-repayment
problem in microfinance, which stems from asymmetric information. The
approach is based on modeling and simulating systems of ordinary
differential equations in which time is a central component. These models
enable microfinance institutions to manage their risk portfolios by
predicting the numbers of solvent and insolvent borrowers over a period, in
order to define or refine their strategies for development, investment, and
management in areas where the population is often poor and in need of a
mechanism for financial inclusion.
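As an illustration of the modeling style only (the equations below are a hypothetical two-compartment system, not the paper's model), solvent and insolvent borrower counts can be integrated forward in time with Euler steps:

```python
def simulate_borrowers(s0=1000.0, i0=0.0, default_rate=0.02,
                       recovery_rate=0.05, growth_rate=0.03,
                       dt=0.1, n_steps=500):
    """Forward-Euler integration of an illustrative ODE system for solvent
    (S) and insolvent (I) borrowers:
        dS/dt = growth_rate*S - default_rate*S + recovery_rate*I
        dI/dt = default_rate*S - recovery_rate*I
    These equations and rates are a hypothetical stand-in, not the paper's
    model; they only show how such a prediction loop is set up."""
    S, I = s0, i0
    traj = []
    for _ in range(n_steps):
        dS = growth_rate * S - default_rate * S + recovery_rate * I
        dI = default_rate * S - recovery_rate * I
        S += dt * dS
        I += dt * dI
        traj.append((S, I))
    return traj

traj = simulate_borrowers()
```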
]]>868743http://www.moneyscience.com/pg/blog/arXiv/read/868738/distributions-of-historic-market-data-relaxation-and-correlations-arxiv190705348v1-qfinstThu, 11 Jul 2019 23:01:50 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/868738/distributions-of-historic-market-data-relaxation-and-correlations-arxiv190705348v1-qfinst
<![CDATA[Distributions of Historic Market Data -- Relaxation and Correlations. (arXiv:1907.05348v1 [q-fin.ST])]]>We show that, for a class of mean-reverting models, the correlation function
of stochastic variance (squared volatility) contains only one -- relaxation --
parameter. We generalize and simplify the expression for leverage for this
class of models. We apply our results to specific examples of such models --
multiplicative, Heston, and combined multiplicative-Heston -- and use historic
stock market data to obtain parameters of their steady-state distributions and
cross-correlations between Wiener processes in the models for stock returns and
stochastic variance.
]]>868738http://www.moneyscience.com/pg/blog/arXiv/read/868742/a-global-economic-policy-uncertainty-index-from-principal-component-analysis-arxiv190705049v1-econgnThu, 11 Jul 2019 23:01:50 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/868742/a-global-economic-policy-uncertainty-index-from-principal-component-analysis-arxiv190705049v1-econgn
<![CDATA[A global economic policy uncertainty index from principal component analysis. (arXiv:1907.05049v1 [econ.GN])]]>This paper constructs a global economic policy uncertainty index through the
principal component analysis of the economic policy uncertainty indices for
twenty primary economies around the world. We find that the PCA-based global
economic policy uncertainty index is a good proxy for the economic policy
uncertainty on a global scale, which is quite consistent with the GDP-weighted
global economic policy uncertainty index. The PCA-based economic policy
uncertainty index is found to be positively related with the volatility and
correlation of the global financial market, which indicates that the stocks are
more volatile and correlated when the global economic policy uncertainty is
higher. The PCA-based global economic policy uncertainty index performs
slightly better than the GDP-weighted index, as its relationship with market
volatility and correlation is more statistically significant.
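The index construction can be sketched without any linear-algebra library: standardize each country's series, form the correlation matrix, extract the leading eigenvector by power iteration, and project. The three "country" series below are synthetic stand-ins, not real EPU data.

```python
import math

def first_pc_index(series):
    """Global index as the first principal component of standardized
    series, via power iteration on the correlation matrix (pure-Python
    sketch). 'series' is a list of equal-length lists, one per country."""
    n, T = len(series), len(series[0])
    Z = []
    for s in series:                                # standardize each series
        m = sum(s) / T
        sd = math.sqrt(sum((x - m) ** 2 for x in s) / T)
        Z.append([(x - m) / sd for x in s])
    C = [[sum(Z[i][t] * Z[j][t] for t in range(T)) / T  # correlation matrix
          for j in range(n)] for i in range(n)]
    v = [1.0] * n
    for _ in range(200):                            # power iteration
        w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # project each period onto the leading eigenvector
    return [sum(v[i] * Z[i][t] for i in range(n)) for t in range(T)]

# three hypothetical country EPU series sharing a common factor
common = [math.sin(t / 5.0) for t in range(120)]
series = [[c + 0.1 * math.sin(3 * t) for t, c in enumerate(common)],
          [2.0 * c + 0.1 * math.cos(2 * t) for t, c in enumerate(common)],
          [c - 0.1 * math.sin(7 * t) for t, c in enumerate(common)]]
global_index = first_pc_index(series)
```

The recovered index tracks the common factor up to sign, which is the usual PCA sign ambiguity.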
]]>868742http://www.moneyscience.com/pg/blog/arXiv/read/868741/realworld-forward-rate-dynamics-with-affine-realizations-arxiv190705072v1-qfinmfThu, 11 Jul 2019 23:01:50 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/868741/realworld-forward-rate-dynamics-with-affine-realizations-arxiv190705072v1-qfinmf
<![CDATA[Real-world forward rate dynamics with affine realizations. (arXiv:1907.05072v1 [q-fin.MF])]]>We investigate the existence of affine realizations for L\'{e}vy driven
interest rate term structure models under the real-world probability measure,
which so far has only been studied under an assumed risk-neutral probability
measure. For models driven by Wiener processes, all results obtained under the
risk-neutral approach concerning the existence of affine realizations are
transferred to the general case. A similar result holds true for models driven
by compound Poisson processes with finite jump size distributions. However, in
the presence of jumps with infinite activity we obtain severe restrictions on
the structure of the market price of risk; typically, it must even be constant.
]]>868741http://www.moneyscience.com/pg/blog/arXiv/read/868737/adaptive-pricing-in-insurance-generalized-linear-models-and-gaussian-process-regression-approaches-arxiv190705381v1-econemThu, 11 Jul 2019 23:01:50 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/868737/adaptive-pricing-in-insurance-generalized-linear-models-and-gaussian-process-regression-approaches-arxiv190705381v1-econem
<![CDATA[Adaptive Pricing in Insurance: Generalized Linear Models and Gaussian Process Regression Approaches. (arXiv:1907.05381v1 [econ.EM])]]>We study the application of dynamic pricing to insurance. We view this as an
online revenue management problem where the insurance company looks to set
prices to optimize the long-run revenue from selling a new insurance product.
We develop two pricing models: an adaptive Generalized Linear Model (GLM) and
an adaptive Gaussian Process (GP) regression model. Both balance between
exploration, where we choose prices in order to learn the distribution of
demand and claims for the insurance product, and exploitation, where we
myopically choose the best price from the information gathered so far. The
performance of the pricing policies is measured in terms of regret: the
expected revenue loss caused by not using the optimal price. As is commonplace
in insurance, we model demand and claims by GLMs. In our adaptive GLM design,
we use the maximum quasi-likelihood estimation (MQLE) to estimate the unknown
parameters. We show that, if prices are chosen with suitably decreasing
variability, the MQLE parameters eventually exist and converge to the correct
values, which in turn implies that the sequence of chosen prices will also
converge to the optimal price. In the adaptive GP regression model, we sample
demand and claims from Gaussian Processes and then choose selling prices by the
upper confidence bound rule. We also analyze these GLM and GP pricing
algorithms with delayed claims. Although similar results exist in other
domains, this is among the first works to consider dynamic pricing problems in
the field of insurance. We also believe this is the first work to consider
Gaussian Process regression in the context of insurance pricing. These initial
findings suggest that online machine learning algorithms could be a fruitful
area of future investigation and application in insurance.
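The upper-confidence-bound price selection can be illustrated with a bandit-style sketch. Note that the paper's rule uses a GP posterior mean and variance; the stand-in below derives confidence widths from visit counts instead, and the price grid and demand curve are invented for illustration.

```python
import math
import random

def ucb_price_experiment(prices, demand_prob, n_rounds=5000, c=2.0, seed=0):
    """Upper-confidence-bound price selection on a finite price grid: a
    simplified stand-in for the paper's GP-posterior UCB rule, balancing
    exploration (wide confidence bounds) against exploitation (high
    estimated revenue)."""
    rng = random.Random(seed)
    k = len(prices)
    n = [0] * k            # times each price has been offered
    rev = [0.0] * k        # cumulative revenue earned at each price
    for t in range(1, n_rounds + 1):
        ucb = [float("inf") if n[i] == 0
               else rev[i] / n[i] + c * math.sqrt(math.log(t) / n[i])
               for i in range(k)]
        i = ucb.index(max(ucb))                        # most optimistic price
        sale = rng.random() < demand_prob(prices[i])   # does the customer buy?
        rev[i] += prices[i] if sale else 0.0
        n[i] += 1
    return n, rev

# illustrative logistic demand curve: purchase probability falls with price
demand = lambda p: 1.0 / (1.0 + math.exp(2.0 * (p - 5.0)))
counts, revenue = ucb_price_experiment([2.0, 4.0, 6.0, 8.0], demand)
```

Swapping the count-based width for a GP posterior standard deviation at each candidate price recovers the structure of the paper's adaptive GP design.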
]]>868737http://www.moneyscience.com/pg/blog/arXiv/read/868739/exponential-stock-models-driven-by-tempered-stable-processes-arxiv190705142v1-qfinmfThu, 11 Jul 2019 23:01:50 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/868739/exponential-stock-models-driven-by-tempered-stable-processes-arxiv190705142v1-qfinmf
<![CDATA[Exponential stock models driven by tempered stable processes. (arXiv:1907.05142v1 [q-fin.MF])]]>We investigate exponential stock models driven by tempered stable processes,
which constitute a rich family of purely discontinuous L\'{e}vy processes. With
a view of option pricing, we provide a systematic analysis of the existence of
equivalent martingale measures, under which the model remains analytically
tractable. This includes the existence of Esscher martingale measures and
martingale measures having minimal distance to the physical probability
measure. Moreover, we provide pricing formulae for European call options and
perform a case study.
]]>868739http://www.moneyscience.com/pg/blog/arXiv/read/868740/tempered-stable-distributions-and-processes-arxiv190705141v1-mathprThu, 11 Jul 2019 23:01:50 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/868740/tempered-stable-distributions-and-processes-arxiv190705141v1-mathpr
<![CDATA[Tempered stable distributions and processes. (arXiv:1907.05141v1 [math.PR])]]>We investigate the class of tempered stable distributions and their
associated processes. Our analysis of tempered stable distributions includes
limit distributions, parameter estimation and the study of their densities.
Regarding tempered stable processes, we deal with density transformations and
compute their $p$-variation indices. Exponential stock models driven by
tempered stable processes are discussed as well.
]]>868740http://www.moneyscience.com/pg/blog/arXiv/read/868452/nonlinear-price-dynamics-of-sp-100-stocks-arxiv190704422v1-qfingnWed, 10 Jul 2019 23:02:34 -0500
http://www.moneyscience.com/pg/blog/arXiv/read/868452/nonlinear-price-dynamics-of-sp-100-stocks-arxiv190704422v1-qfingn
<![CDATA[Nonlinear price dynamics of S&P 100 stocks. (arXiv:1907.04422v1 [q-fin.GN])]]>The methodology presented provides a quantitative way to characterize
investor behavior and price dynamics within a particular asset class and time
period. The methodology is applied to a data set consisting of over 250,000
data points of the S&P 100 stocks during 2004-2018. Using a two-way
fixed-effects model, we uncover trader motivations including evidence of both
under- and overreaction within a unified setting. A nonlinear relationship is
found between return and trend, suggesting that a small positive trend increases
the return, while a larger one tends to decrease it. The shape parameters of the
nonlinearity quantify trader motivation to buy into trends or wait for
bargains. The methodology allows the testing of any behavioral finance bias or
technical analysis concept.
]]>868452