<![CDATA[MoneyScience: Research]]>
http://www.moneyscience.com/pg/blog-directory/research?view=rss
FinancialResearchFocus
https://feedburner.google.com
http://www.moneyscience.com/pg/blog/arXiv/read/791923/arbitragefree-regularization-arxiv171005114v1-qfinmf
Mon, 16 Oct 2017 20:12:20 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/Gafj_YAxi_Q/arbitragefree-regularization-arxiv171005114v1-qfinmf
<![CDATA[Arbitrage-Free Regularization. (arXiv:1710.05114v1 [q-fin.MF])]]>
We introduce a path-dependent geometric framework which generalizes the HJM
modeling approach to a wide variety of other asset classes. A machine learning
regularization framework is developed with the objective of removing arbitrage
opportunities from models within this general framework. The regularization
method relies on minimal deformations of a model subject to a path-dependent
penalty that detects arbitrage opportunities. We prove that the solution of
this regularization problem is independent of the arbitrage-penalty chosen,
subject to a fixed information loss functional. In addition to the general
properties of the minimal deformation, we also consider several explicit
examples. This paper is focused on placing machine learning methods in finance
on a sound theoretical basis and the techniques developed to achieve this
objective may be of interest in other areas of application.
]]>791923
http://www.moneyscience.com/pg/blog/arXiv/read/791923/arbitragefree-regularization-arxiv171005114v1-qfinmf
http://www.moneyscience.com/pg/blog/arXiv/read/791922/mean-field-game-approach-to-production-and-exploration-of-exhaustible-commodities-arxiv171005131v1-qfinec
Mon, 16 Oct 2017 20:11:16 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/I6f_0uwen64/mean-field-game-approach-to-production-and-exploration-of-exhaustible-commodities-arxiv171005131v1-qfinec
<![CDATA[Mean Field Game Approach to Production and Exploration of Exhaustible Commodities. (arXiv:1710.05131v1 [q-fin.EC])]]>
In a game theoretic framework, we study energy markets with a continuum of
homogeneous producers who produce energy from an exhaustible resource such as
oil. Each producer simultaneously optimizes the production rate that drives her
revenues, as well as exploration effort to replenish her reserves. This
exploration activity is modeled through a controlled point process that leads
to stochastic increments in the reserve level. The producers interact with each
other through the market price that depends on the aggregate production. We
employ a mean field game approach to solve for a Markov Nash equilibrium and
develop numerical schemes to solve the resulting system of HJB and
transport equations with non-local coupling. A time-stationary formulation is
also explored, as well as the fluid limit where exploration becomes
deterministic.
]]>791922
http://www.moneyscience.com/pg/blog/arXiv/read/791922/mean-field-game-approach-to-production-and-exploration-of-exhaustible-commodities-arxiv171005131v1-qfinec
http://www.moneyscience.com/pg/blog/arXiv/read/791921/dynamic-portfolio-optimization-with-looping-contagion-risk-arxiv171005168v1-qfinmf
Mon, 16 Oct 2017 20:10:13 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/e1yDBhtukro/dynamic-portfolio-optimization-with-looping-contagion-risk-arxiv171005168v1-qfinmf
<![CDATA[Dynamic Portfolio Optimization with Looping Contagion Risk. (arXiv:1710.05168v1 [q-fin.MF])]]>
In this paper we consider a utility maximization problem with defaultable
stocks and looping contagion risk. We assume that the default intensity of one
company depends on the stock prices of itself and another company, and the
default of the company induces an immediate drop in the stock price of the
surviving company. We prove that the value function is the unique continuous
viscosity solution of the HJB equation. We also compare and analyse the
statistical distributions of terminal wealth of log utility based on two
optimal strategies, one using the full information of intensity process, the
other a proxy constant intensity process. These two strategies may be
regarded as active and passive optimal portfolio investment, respectively.
Our simulation results show that, statistically, active portfolio investment is
more volatile and performs either much better or much worse than the passive
portfolio investment in extreme scenarios.
]]>791921
http://www.moneyscience.com/pg/blog/arXiv/read/791921/dynamic-portfolio-optimization-with-looping-contagion-risk-arxiv171005168v1-qfinmf
http://www.moneyscience.com/pg/blog/arXiv/read/791920/sequential-design-and-spatial-modeling-for-portfolio-tail-risk-measurement-arxiv171005204v1-qfinrm
Mon, 16 Oct 2017 20:08:27 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/oCaVgXoXHWI/sequential-design-and-spatial-modeling-for-portfolio-tail-risk-measurement-arxiv171005204v1-qfinrm
<![CDATA[Sequential Design and Spatial Modeling for Portfolio Tail Risk Measurement. (arXiv:1710.05204v1 [q-fin.RM])]]>
We consider the calculation of capital requirements when the underlying economic
scenarios are determined by simulatable risk factors. In the respective nested
simulation framework, the goal is to estimate portfolio tail risk, quantified
via VaR or TVaR of a given collection of future economic scenarios representing
factor levels at the risk horizon. Traditionally, evaluating portfolio losses
of an outer scenario is done by computing a conditional expectation via
inner-level Monte Carlo and is computationally expensive. We introduce several
inter-related machine learning techniques to speed up this computation, in
particular by properly accounting for the simulation noise. Our main workhorse
is an advanced Gaussian Process (GP) regression approach which uses
nonparametric spatial modeling to efficiently learn the relationship between
the stochastic factors defining scenarios and corresponding portfolio value.
Leveraging this emulator, we develop sequential algorithms that adaptively
allocate inner simulation budgets to target the quantile region. The GP
framework also yields better uncertainty quantification for the resulting
VaR/TVaR estimators that reduces bias and variance compared to existing
methods. We illustrate the proposed strategies with two case-studies in two and
six dimensions.
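The GP emulation step can be illustrated with a numpy-only sketch on a hypothetical one-factor portfolio. The quadratic loss function, the RBF kernel hyperparameters, and the scenario distribution below are all toy assumptions of ours, not the paper's case studies: fit a GP to averaged inner-simulation losses at a few outer scenarios, then estimate VaR from the smoothed predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): true portfolio loss is a smooth function of one
# risk factor x; inner-level Monte Carlo returns this value plus noise.
def noisy_loss(x, n_inner=50):
    return x**2 + rng.normal(0.0, 1.0, size=(x.size, n_inner)).mean(axis=1)

X = np.linspace(-3.0, 3.0, 25)           # outer scenarios (factor levels)
y = noisy_loss(X)                        # averaged inner simulations

# GP regression with an RBF kernel and a noise ("nugget") term.
def rbf(a, b, ell=1.0, sig2=4.0):
    return sig2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

noise = 1.0 / 50                         # variance of the averaged noise
K = rbf(X, X) + noise * np.eye(X.size)
Xs = np.linspace(-3.0, 3.0, 201)         # dense grid for prediction
mu = rbf(Xs, X) @ np.linalg.solve(K, y)  # GP posterior mean

# Tail risk of the scenario distribution: VaR_95 of emulated losses under
# a standard normal factor, sampled through the fitted emulator.
samples = rng.standard_normal(100_000).clip(-3, 3)
pred = np.interp(samples, Xs, mu)
var95 = np.quantile(pred, 0.95)
print(round(var95, 2))
```

Since the true loss is x² with x approximately standard normal, the estimate should land near the chi-squared 95% quantile (about 3.8); the sequential-design part of the paper would additionally decide where to add inner simulations.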
]]>791920
http://www.moneyscience.com/pg/blog/arXiv/read/791920/sequential-design-and-spatial-modeling-for-portfolio-tail-risk-measurement-arxiv171005204v1-qfinrm
http://www.moneyscience.com/pg/blog/arXiv/read/791919/robust-maximum-likelihood-estimation-of-sparse-vector-error-correction-model-arxiv171005513v1-statml
Mon, 16 Oct 2017 20:07:24 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/9tJ8NwKmUNQ/robust-maximum-likelihood-estimation-of-sparse-vector-error-correction-model-arxiv171005513v1-statml
<![CDATA[Robust Maximum Likelihood Estimation of Sparse Vector Error Correction Model. (arXiv:1710.05513v1 [stat.ML])]]>
In econometrics and finance, the vector error correction model (VECM) is an
important time series model for cointegration analysis, which is used to
estimate the long-run equilibrium variable relationships. The traditional
analysis and estimation methodologies assume the underlying Gaussian
distribution but, in practice, heavy-tailed data and outliers can lead to the
inapplicability of these methods. In this paper, we propose a robust model
estimation method based on the Cauchy distribution to tackle this issue. In
addition, sparse cointegration relations are considered to realize feature
selection and dimension reduction. An efficient algorithm based on the
majorization-minimization (MM) method is applied to solve the proposed
nonconvex problem. The performance of this algorithm is shown through numerical
simulations.
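The paper's algorithm targets the full sparse VECM likelihood; the sketch below only shows the majorization-minimization mechanics on a plain linear regression with Cauchy errors (all data and parameters are hypothetical). Each MM step majorizes the Cauchy log-loss by a quadratic, which reduces to a weighted least squares solve.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data: linear model with heavy-tailed (Cauchy) noise.
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + 0.5 * rng.standard_cauchy(n)

def cauchy_mm(X, y, gamma=1.0, iters=100):
    """Minimize sum_i log(gamma^2 + r_i^2) by majorization-minimization:
    each step solves a weighted least squares with weights 1/(gamma^2 + r^2),
    which downweights outliers instead of letting them dominate."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary LS start
    for _ in range(iters):
        r = y - X @ beta
        w = 1.0 / (gamma**2 + r**2)               # MM surrogate weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta

beta_mm = cauchy_mm(X, y)
print(beta_mm.round(2))
```

The MM property guarantees the Cauchy objective decreases monotonically at every iteration, which is the reason this scheme is attractive for the nonconvex problem in the paper.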
]]>791919
http://www.moneyscience.com/pg/blog/arXiv/read/791919/robust-maximum-likelihood-estimation-of-sparse-vector-error-correction-model-arxiv171005513v1-statml
http://www.moneyscience.com/pg/blog/arXiv/read/791918/efficient-hedging-in-bates-model-using-highorder-compact-finite-differences-arxiv171005542v1-qfincp
Mon, 16 Oct 2017 20:06:20 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/c3GNuZg4MNw/efficient-hedging-in-bates-model-using-highorder-compact-finite-differences-arxiv171005542v1-qfincp
<![CDATA[Efficient hedging in Bates model using high-order compact finite differences. (arXiv:1710.05542v1 [q-fin.CP])]]>
We evaluate the hedging performance of a high-order compact finite difference
scheme from [4] for option pricing in Bates model. We compare the scheme's
hedging performance to standard finite difference methods in different
examples. We observe that the new scheme outperforms a standard, second-order
central finite difference approximation in all our experiments.
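The finite-difference-hedging comparison can be illustrated on plain Black-Scholes rather than Bates, and with the standard scheme rather than the compact one from [4] (all parameters below are our own toy choices): a second-order central difference of the price gives a delta whose error shrinks by roughly a factor of four when the grid spacing halves.

```python
import math

# Black-Scholes call price (no dividends); used as the exact reference.
def bs_call(S, K=100.0, T=1.0, r=0.02, sigma=0.2):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def delta_fd(S, h):
    """Second-order central finite difference approximation of delta."""
    return (bs_call(S + h) - bs_call(S - h)) / (2.0 * h)

def delta_exact(S, K=100.0, T=1.0, r=0.02, sigma=0.2):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))

S0 = 105.0
err_coarse = abs(delta_fd(S0, 2.0) - delta_exact(S0))
err_fine = abs(delta_fd(S0, 1.0) - delta_exact(S0))
print(err_coarse, err_fine, err_coarse / err_fine)  # ratio near 4
```

A high-order compact scheme improves on this baseline by achieving fourth-order accuracy on a compact stencil, which is what drives the hedging gains reported in the abstract.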
]]>791918
http://www.moneyscience.com/pg/blog/arXiv/read/791918/efficient-hedging-in-bates-model-using-highorder-compact-finite-differences-arxiv171005542v1-qfincp
http://www.moneyscience.com/pg/blog/arXiv/read/791917/geometric-learning-and-filtering-in-finance-arxiv171005829v1-qfinmf
Mon, 16 Oct 2017 20:05:17 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/CG_h0-Xz94w/geometric-learning-and-filtering-in-finance-arxiv171005829v1-qfinmf
<![CDATA[Geometric Learning and Filtering in Finance. (arXiv:1710.05829v1 [q-fin.MF])]]>
We develop a method for incorporating relevant non-Euclidean geometric
information into a broad range of classical filtering and statistical or
machine learning algorithms. We apply these techniques to approximate the
solution of the non-Euclidean filtering problem to arbitrary precision. We then
extend the particle filtering algorithm to compute our asymptotic solution to
arbitrary precision. Moreover, we find explicit error bounds measuring the
discrepancy between our locally triangulated filter and the true theoretical
non-Euclidean filter. Our methods are motivated by certain fundamental problems
in mathematical finance. In particular we apply these filtering techniques to
incorporate the non-Euclidean geometry present in stochastic volatility models
and optimal Markowitz portfolios. We also extend Euclidean statistical or
machine learning algorithms to non-Euclidean problems by using the local
triangulation technique, which we show improves the accuracy of the original
algorithm. We apply the local triangulation method to obtain improvements of
the (sparse) principal component analysis and the principal geodesic analysis
algorithms and show how these improved algorithms can be used to parsimoniously
estimate the evolution of the shape of forward-rate curves. While focused on
financial applications, the non-Euclidean geometric techniques presented in
this paper can be employed to provide improvements to a range of other
statistical or machine learning algorithms and may be useful in other areas of
application.
]]>791917
http://www.moneyscience.com/pg/blog/arXiv/read/791917/geometric-learning-and-filtering-in-finance-arxiv171005829v1-qfinmf
http://www.moneyscience.com/pg/blog/arXiv/read/791858/a-general-framework-for-portfolio-theory-part-ii-drawdown-risk-measures-arxiv171004818v1-qfinrm
Sun, 15 Oct 2017 19:38:24 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/OJIeHaGnMC0/a-general-framework-for-portfolio-theory-part-ii-drawdown-risk-measures-arxiv171004818v1-qfinrm
<![CDATA[A General Framework for Portfolio Theory. Part II: drawdown risk measures. (arXiv:1710.04818v1 [q-fin.RM])]]>
The aim of this paper is to provide several examples of convex risk measures
necessary for the application of the general framework for portfolio theory of
Maier-Paape and Zhu, presented in Part I of this series (arXiv:1710.04579
[q-fin.PM]). As an alternative to classical portfolio risk measures such as the
standard deviation, we in particular construct risk measures related to the
current drawdown of the portfolio equity. Combined with the results of Part I
(arXiv:1710.04579 [q-fin.PM]), this allows us to calculate efficient portfolios
based on a drawdown risk measure constraint.
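The basic quantity behind such constraints, the current drawdown of an equity curve, is straightforward to compute (a generic sketch on made-up numbers, not Maier-Paape and Zhu's convex risk measure construction itself):

```python
import numpy as np

def current_drawdown(equity):
    """Fractional drop of the equity curve from its running maximum."""
    equity = np.asarray(equity, dtype=float)
    running_max = np.maximum.accumulate(equity)
    return 1.0 - equity / running_max

def max_drawdown(equity):
    return current_drawdown(equity).max()

curve = [100, 110, 104, 120, 90, 95, 130]   # hypothetical equity curve
dd = current_drawdown(curve)
print(max_drawdown(curve))   # 1 - 90/120 → 0.25
```

A drawdown-constrained efficient portfolio would then bound a convex functional of this path-dependent quantity rather than the terminal variance.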
]]>791858
http://www.moneyscience.com/pg/blog/arXiv/read/791858/a-general-framework-for-portfolio-theory-part-ii-drawdown-risk-measures-arxiv171004818v1-qfinrm
http://www.moneyscience.com/pg/blog/arXiv/read/791839/stochastic-gradient-descent-in-continuous-time-a-central-limit-theorem-arxiv171004273v1-mathpr
Thu, 12 Oct 2017 19:46:49 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/gcKneMbnIGE/stochastic-gradient-descent-in-continuous-time-a-central-limit-theorem-arxiv171004273v1-mathpr
<![CDATA[Stochastic Gradient Descent in Continuous Time: A Central Limit Theorem. (arXiv:1710.04273v1 [math.PR])]]>
Stochastic gradient descent in continuous time (SGDCT) provides a
computationally efficient method for the statistical learning of
continuous-time models, which are widely used in science, engineering, and
finance. The SGDCT algorithm follows a (noisy) descent direction along a
continuous stream of data. The parameter updates occur in continuous time and
satisfy a stochastic differential equation. This paper analyzes the asymptotic
convergence rate of the SGDCT algorithm by proving a central limit theorem for
strongly convex objective functions and, under slightly stronger conditions,
for non-convex objective functions as well. An $L^p$ convergence rate is also
proven for the algorithm in the strongly convex case.
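In practice the continuous-time update is implemented by an Euler discretization. The sketch below is our own toy streaming regression, not the paper's setting: observe y_t = θ*·x_t plus noise along a stream and move θ in the noisy descent direction with a decreasing learning rate α_t.

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler discretization of continuous-time SGD: d(theta) = alpha_t * g_t dt,
# where g_t is the noisy gradient of the squared error at the current datum.
theta_true = 1.5
dt = 0.01
n_steps = 200_000                          # total horizon T = 2000
xs = rng.normal(size=n_steps)              # streaming regressors
ys = theta_true * xs + 0.5 * rng.normal(size=n_steps)

theta = 0.0
for k in range(n_steps):
    alpha = 1.0 / (1.0 + 0.1 * k * dt)     # decreasing learning rate alpha_t
    theta += alpha * (ys[k] - theta * xs[k]) * xs[k] * dt
print(round(theta, 2))
```

The central limit theorem of the paper describes exactly the fluctuations of such an estimate around θ* as the horizon grows, for the strongly convex case shown here.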
]]>791839
http://www.moneyscience.com/pg/blog/arXiv/read/791839/stochastic-gradient-descent-in-continuous-time-a-central-limit-theorem-arxiv171004273v1-mathpr
http://www.moneyscience.com/pg/blog/arXiv/read/791838/utility-maximization-problem-under-transaction-costs-optimal-dual-processes-and-stability-arxiv171004363v1-qfinmf
Thu, 12 Oct 2017 19:45:45 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/SGWu2scLeIA/utility-maximization-problem-under-transaction-costs-optimal-dual-processes-and-stability-arxiv171004363v1-qfinmf
<![CDATA[Utility maximization problem under transaction costs: optimal dual processes and stability. (arXiv:1710.04363v1 [q-fin.MF])]]>
This paper discusses the numéraire-based utility maximization problem in
markets with proportional transaction costs. In particular, the investor is
required to liquidate all her position in stock at the terminal time. We first
observe the stability of the primal and dual value functions as well as the
convergence of the primal and dual optimizers when perturbations occur on the
utility function and on the physical probability. We then study the properties
of the optimal dual process (ODP), that is, a process from the dual domain that
induces the optimality of the dual problem. When the market is driven by a
continuous process, we construct the ODP for the problem in the limiting market
by a sequence of ODPs corresponding to the problems with small misspecified
parameters. Moreover, we prove that this limiting ODP defines a shadow price.
]]>791838
http://www.moneyscience.com/pg/blog/arXiv/read/791838/utility-maximization-problem-under-transaction-costs-optimal-dual-processes-and-stability-arxiv171004363v1-qfinmf
http://www.moneyscience.com/pg/blog/arXiv/read/791837/computational-analysis-of-the-structural-properties-of-economic-and-financial-networks-arxiv171004455v1-qfincp
Thu, 12 Oct 2017 19:44:41 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/FYeEVwrXgOU/computational-analysis-of-the-structural-properties-of-economic-and-financial-networks-arxiv171004455v1-qfincp
<![CDATA[Computational Analysis of the structural properties of Economic and Financial Networks. (arXiv:1710.04455v1 [q-fin.CP])]]>
In recent years, methods from network science have been rapidly gaining
interest in economics and finance. A reason for this is that, in a globalized
world, the interconnectedness among economic and financial entities is crucial
to understand, and networks provide a natural framework for representing and
studying such systems. In this paper, we survey the use of networks and
network-based methods for studying economy-related questions. We start with a
brief overview of graph theory and basic definitions. Then we discuss
descriptive network measures and network complexity measures for quantifying
structural properties of economic networks. Finally, we discuss different
network and tree structures as relevant for applications.
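Two of the descriptive measures mentioned above, degree/density and local clustering, can be computed with nothing but the standard library. The toy "interbank exposure" graph below is hypothetical data of ours, chosen only to make the definitions concrete.

```python
# Descriptive network measures on a toy undirected graph, stdlib only.
from itertools import combinations

edges = {("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")}
nodes = sorted({v for e in edges for v in e})

def neighbors(v):
    return {b if a == v else a for a, b in edges if v in (a, b)}

degree = {v: len(neighbors(v)) for v in nodes}

# Density: fraction of possible undirected edges that are present.
n = len(nodes)
density = 2 * len(edges) / (n * (n - 1))

# Local clustering: fraction of a node's neighbor pairs that are linked.
def clustering(v):
    nb = neighbors(v)
    if len(nb) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(sorted(nb), 2)
                if (a, b) in edges or (b, a) in edges)
    return links / (len(nb) * (len(nb) - 1) / 2)

print(degree, round(density, 2), clustering("A"))
```

Here node C has degree 3 and clustering 1/3, while the density of the five-node graph is 0.5; network complexity measures build on summaries like these.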
]]>791837
http://www.moneyscience.com/pg/blog/arXiv/read/791837/computational-analysis-of-the-structural-properties-of-economic-and-financial-networks-arxiv171004455v1-qfincp
http://www.moneyscience.com/pg/blog/arXiv/read/791836/a-general-framework-for-portfolio-theory-part-i-theory-and-various-models-arxiv171004579v1-qfinpm
Thu, 12 Oct 2017 19:43:24 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/n9_JFROXRWk/a-general-framework-for-portfolio-theory-part-i-theory-and-various-models-arxiv171004579v1-qfinpm
<![CDATA[A General Framework for Portfolio Theory. Part I: theory and various models. (arXiv:1710.04579v1 [q-fin.PM])]]>
Utility and risk are two often competing measures of investment
success. We show that the efficient trade-off between these two measures for
investment portfolios happens, in general, on a convex curve in the two
dimensional space of utility and risk. This is a rather general pattern. The
modern portfolio theory of Markowitz [H. Markowitz, Portfolio Selection, 1959]
and its natural generalization, the capital market pricing model, [W. F.
Sharpe, Mutual fund performance, 1966] are special cases of our general
framework when the risk measure is taken to be the standard deviation and the
utility function is the identity mapping. Using our general framework, we also
recover the results in [R. T. Rockafellar, S. Uryasev and M. Zabarankin, Master
funds in portfolio analysis with general deviation measures, 2006] that extends
the capital market pricing model to allow for the use of more general deviation
measures. This generalized capital asset pricing model also applies, for
example, when an approximation of the maximum drawdown is considered as a risk
measure. Furthermore, the consideration of a general utility function allows one to go
beyond the "additive" performance measure to a "multiplicative" one of
cumulative returns by using the log utility. As a result, the growth optimal
portfolio theory [J. Lintner, The valuation of risk assets and the selection of
risky investments in stock portfolios and capital budgets, 1965] and the
leverage space portfolio theory [R. Vince, The Leverage Space Trading Model,
2009] can also be understood under our general framework. Thus, this general
framework allows a unification of several important existing portfolio theories
and goes well beyond them.
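The convex utility-risk trade-off curve can be traced numerically in the classical special case where risk is standard deviation and utility is the identity (expected return). The three-asset market below is hypothetical and shorting is allowed, so the constrained minimum-variance portfolio has a closed form via Lagrange multipliers.

```python
import numpy as np

# Toy three-asset market (hypothetical numbers).
mu = np.array([0.05, 0.08, 0.12])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

def min_risk_weights(target):
    """Minimum-variance fully invested portfolio with w'mu = target and
    w'1 = 1 (Lagrange multiplier solution, shorting allowed)."""
    inv = np.linalg.inv(Sigma)
    ones = np.ones_like(mu)
    A = np.array([[mu @ inv @ mu, mu @ inv @ ones],
                  [mu @ inv @ ones, ones @ inv @ ones]])
    lam = np.linalg.solve(A, np.array([target, 1.0]))
    return inv @ (lam[0] * mu + lam[1] * ones)

targets = np.linspace(0.05, 0.12, 8)     # utility levels (expected return)
risks = [float(np.sqrt(w @ Sigma @ w))
         for w in (min_risk_weights(m) for m in targets)]
print([round(r, 3) for r in risks])
```

Plotting risk against target return gives the familiar convex frontier; the paper's point is that this convex-curve pattern persists for much more general utility functions and risk measures.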
]]>791836
http://www.moneyscience.com/pg/blog/arXiv/read/791836/a-general-framework-for-portfolio-theory-part-i-theory-and-various-models-arxiv171004579v1-qfinpm
http://www.moneyscience.com/pg/blog/arXiv/read/791822/a-700seat-noloss-composition-for-the-2019-european-parliament-arxiv171003820v1-qfinec
Wed, 11 Oct 2017 19:43:07 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/htApYtZ3JX4/a-700seat-noloss-composition-for-the-2019-european-parliament-arxiv171003820v1-qfinec
<![CDATA[A 700-seat no-loss composition for the 2019 European Parliament. (arXiv:1710.03820v1 [q-fin.EC])]]>
The following paper is part of the authors' response to an invitation from
the Constitutional Affairs Committee (AFCO) of the European Parliament to
advise on mathematical methods for the allocation of Parliamentary seats
between the 27 Member States following the planned departure of the United
Kingdom in 2019. The authors were requested to propose a method that respects
the usual conditions of EU law, and with the additional property that no Member
State (other than the UK) receives fewer seats than its 2014 allocation. This paper
was delivered to the AFCO on 21 August 2017, for consideration by the AFCO at
its meeting in Strasbourg on 11 September 2017.
]]>791822
http://www.moneyscience.com/pg/blog/arXiv/read/791822/a-700seat-noloss-composition-for-the-2019-european-parliament-arxiv171003820v1-qfinec
http://www.moneyscience.com/pg/blog/arXiv/read/791821/high-frequency-market-making-with-machine-learning-arxiv171003870v1-qfintr
Wed, 11 Oct 2017 19:42:04 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/IHkwfLQ97AM/high-frequency-market-making-with-machine-learning-arxiv171003870v1-qfintr
<![CDATA[High Frequency Market Making with Machine Learning. (arXiv:1710.03870v1 [q-fin.TR])]]>
High frequency trading has been characterized as an arms race with 'Red
Queen' characteristics [Farmer, 2012]. It is improbable, if not impossible, that
many market participants can sustain a competitive advantage through the sole
reliance on low latency trade execution systems. The growth in volume of market
data, advances in computer hardware and commensurate prominence of machine
learning in other disciplines, have spurred the exploration of machine learning
for price discovery. Even though the application of machine learning to price
prediction has been extensively researched, the merit of this approach for high
frequency market making has received little attention.
]]>791821
http://www.moneyscience.com/pg/blog/arXiv/read/791821/high-frequency-market-making-with-machine-learning-arxiv171003870v1-qfintr
http://www.moneyscience.com/pg/blog/arXiv/read/791820/a-buffer-hawkes-process-for-limit-order-books-arxiv171003506v1-mathpr-cross-listed
Wed, 11 Oct 2017 19:41:00 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/I0jupaxLZD8/a-buffer-hawkes-process-for-limit-order-books-arxiv171003506v1-mathpr-cross-listed
<![CDATA[A buffer Hawkes process for limit order books. (arXiv:1710.03506v1 [math.PR] CROSS LISTED)]]>
We introduce a Markovian single point process model, with random intensity
regulated through a buffer mechanism and a self-exciting effect controlling the
arrival stream to the buffer. The model applies the principle of the Hawkes
process in which point process jumps generate a shot-noise intensity field.
Unlike the Hawkes case, the intensity field is fed into a separate buffer, the
size of which is the driving intensity of new jumps. In this manner, the
intensity loop portrays mutual-excitation of point process events and buffer
size dynamics. This scenario is directly applicable to the market evolution of
limit order books, with buffer size being the current number of limit orders
and the jumps representing the execution of market orders. We give a branching
process representation of the point process and prove that the scaling limit is
Brownian motion with explicit volatility.
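The self-exciting ingredient of the model is the classical Hawkes process, which can be simulated by Ogata's thinning algorithm. The sketch below simulates a plain Hawkes process with exponential kernel and made-up parameters; the buffer variant in the abstract differs in that the excitation feeds a buffer whose size drives the jump intensity.

```python
import math
import random

random.seed(42)

# Ogata thinning for a Hawkes process with intensity
# lam(t) = mu + sum_i a * exp(-b * (t - t_i)).
mu_, a, b, T = 1.0, 0.5, 1.0, 200.0    # stationarity requires a/b < 1

def lam(t, events):
    return mu_ + sum(a * math.exp(-b * (t - s)) for s in events)

events = []
t = 0.0
while True:
    lam_bar = lam(t, events)           # valid bound: lam decays until next jump
    t += random.expovariate(lam_bar)   # candidate waiting time
    if t >= T:
        break
    if random.random() <= lam(t, events) / lam_bar:
        events.append(t)               # accept candidate (thinning step)

rate = len(events) / T
print(round(rate, 2))                  # long-run rate near mu / (1 - a/b) = 2
```

Each accepted jump raises the intensity by a, producing the event clustering that, in the limit-order-book application, corresponds to bursts of market-order executions.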
]]>791820
http://www.moneyscience.com/pg/blog/arXiv/read/791820/a-buffer-hawkes-process-for-limit-order-books-arxiv171003506v1-mathpr-cross-listed
http://www.moneyscience.com/pg/blog/arXiv/read/791801/large-deviations-for-risk-measures-in-finite-mixture-models-arxiv171003252v1-qfinrm
Tue, 10 Oct 2017 19:43:19 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/aq92lhzElqw/large-deviations-for-risk-measures-in-finite-mixture-models-arxiv171003252v1-qfinrm
<![CDATA[Large deviations for risk measures in finite mixture models. (arXiv:1710.03252v1 [q-fin.RM])]]>
Due to their heterogeneity, insurance risks can be properly described as a
mixture of different fixed models, where the weights assigned to each model may
be estimated empirically from a sample of available data. If a risk measure is
evaluated on the estimated mixture instead of the (unknown) true one, then it
is important to investigate the committed error. In this paper we study the
asymptotic behaviour of estimated risk measures, as the data sample size tends
to infinity, in the fashion of large deviations. We obtain large deviation
results by applying the contraction principle, and the rate functions are given
by a suitable variational formula; explicit expressions are available for
mixtures of two models. Finally, our results are applied to the most common
risk measures, namely the quantiles, the Expected Shortfall and the entropic
risk measure.
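The effect of evaluating a risk measure on an estimated mixture instead of the true one is easy to see by Monte Carlo. The two-component Gaussian mixture and the perturbed weight below are hypothetical choices of ours, used only to show how VaR (quantile) and Expected Shortfall shift with the mixture weights.

```python
import numpy as np

rng = np.random.default_rng(3)

# Loss ~ w * N(0, 1) + (1 - w) * N(4, 2): a two-model mixture.
def sample_mixture(w, n):
    comp = rng.random(n) < w
    return np.where(comp, rng.normal(0, 1, n), rng.normal(4, 2, n))

def var_es(losses, level=0.95):
    q = np.quantile(losses, level)        # VaR: the level-quantile
    es = losses[losses >= q].mean()       # Expected Shortfall: tail mean
    return q, es

losses = sample_mixture(0.7, 200_000)     # "true" weight w = 0.7
q, es = var_es(losses)

# A perturbed weight, as if estimated from a small sample, shifts both.
q_hat, es_hat = var_es(sample_mixture(0.6, 200_000))
print(round(q, 2), round(es, 2), round(q_hat, 2), round(es_hat, 2))
```

The large deviation results in the paper quantify exactly how unlikely large errors of this kind become as the sample used to estimate the weights grows.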
]]>791801
http://www.moneyscience.com/pg/blog/arXiv/read/791801/large-deviations-for-risk-measures-in-finite-mixture-models-arxiv171003252v1-qfinrm
http://www.moneyscience.com/pg/blog/arXiv/read/791800/a-strategic-investment-framework-for-biotechnology-markets-via-dynamic-asset-allocation-and-class-diversification-arxiv171003267v1-qfinpm
Tue, 10 Oct 2017 19:42:16 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/fYKCol_uhmU/a-strategic-investment-framework-for-biotechnology-markets-via-dynamic-asset-allocation-and-class-diversification-arxiv171003267v1-qfinpm
<![CDATA[A Strategic Investment Framework for Biotechnology Markets via Dynamic Asset Allocation and Class Diversification. (arXiv:1710.03267v1 [q-fin.PM])]]>
In this paper, we propose an innovative investment framework incorporating
asset allocation and class diversification oriented specifically for the
biotechnology industry. With growing interest and capitalization in multiple
biotech markets, investors require a more dynamic method of managing their
assets within individual portfolios for optimal return efficiency. By selecting
a single firm representative of identified industry trends, analyzing financial
metrics relevant to the suggested approaches, and assessing financial health,
we developed an adaptable investment methodology. We also performed analyses of
industrial viability and investigated the implications of the selected
strategies, with which we were able to optimize our framework for versatile
application within specialized biotech markets.
]]>791800
http://www.moneyscience.com/pg/blog/arXiv/read/791800/a-strategic-investment-framework-for-biotechnology-markets-via-dynamic-asset-allocation-and-class-diversification-arxiv171003267v1-qfinpm
http://www.moneyscience.com/pg/blog/arXiv/read/791799/market-impact-with-multitimescale-liquidity-arxiv171003734v1-qfintr
Tue, 10 Oct 2017 19:41:13 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/nYXD6aOfO0k/market-impact-with-multitimescale-liquidity-arxiv171003734v1-qfintr
<![CDATA[Market impact with multi-timescale liquidity. (arXiv:1710.03734v1 [q-fin.TR])]]>
We present an extended version of the recently proposed "LLOB" model for the
dynamics of latent liquidity in financial markets. By allowing for finite
cancellation and deposition rates within a continuous reaction-diffusion setup,
we account for finite memory effects on the dynamics of the latent order book. We
compute in particular the finite memory corrections to the square root impact
law, as well as the impact decay and the permanent impact of a meta-order. In
addition, we consider the case of a spectrum of cancellation and deposition
rates, which allows us to obtain a square root impact law for moderate
participation rates, as observed empirically. Our multi-scale framework also
provides an alternative solution to the so-called price diffusivity puzzle in
the presence of a long-range correlated order flow.
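The square-root impact law itself is easy to exhibit on synthetic data (this sketch is not the LLOB model; the constants Y, sigma, V and the noise level are arbitrary illustrative choices): generate meta-order impacts I(Q) = Y·σ·√(Q/V) with multiplicative noise and recover the exponent by a log-log least squares fit.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic meta-order impacts following the square-root law with noise.
Y, sigma, V = 0.5, 0.02, 1e6
Q = np.logspace(3, 6, 200)               # meta-order sizes
impact = Y * sigma * np.sqrt(Q / V) * np.exp(rng.normal(0, 0.1, Q.size))

# OLS fit in log-log space recovers the exponent (slope near 0.5).
slope, intercept = np.polyfit(np.log(Q), np.log(impact), 1)
print(round(slope, 2))
```

In the paper, a fitted exponent close to 0.5 over moderate participation rates is the empirical signature that the multi-timescale cancellation/deposition spectrum reproduces.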
]]>791799
http://www.moneyscience.com/pg/blog/arXiv/read/791799/market-impact-with-multitimescale-liquidity-arxiv171003734v1-qfintr
http://www.moneyscience.com/pg/blog/arXiv/read/791781/double-functional-median-in-robust-prediction-of-hierarchical-functional-time-series-an-application-to-forecast-internet-service-users-behaviors-arxiv171002669v1-statco
Mon, 09 Oct 2017 20:12:19 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/OnFbAf0Aeqg/double-functional-median-in-robust-prediction-of-hierarchical-functional-time-series-an-application-to-forecast-internet-service-users-behaviors-arxiv171002669v1-statco
<![CDATA[Double Functional Median in Robust Prediction of Hierarchical Functional Time Series - An Application to Forecast Internet Service Users Behaviors. (arXiv:1710.02669v1 [stat.CO])]]>
In this article, a new nonparametric and robust method of forecasting
hierarchical functional time series is presented. The method is compared with
Hyndman and Shang's method with respect to unbiasedness, effectiveness,
robustness, and computational complexity. Taking into account the results of
the analytical, simulation, and empirical studies, we conclude that our
proposal is superior to that of Hyndman and Shang with respect to several
statistical criteria, especially robustness and computational complexity. The
studied empirical example relates to the management of an Internet service
divided into four subservices.
]]>791781
http://www.moneyscience.com/pg/blog/arXiv/read/791781/double-functional-median-in-robust-prediction-of-hierarchical-functional-time-series-an-application-to-forecast-internet-service-users-behaviors-arxiv171002669v1-statco
http://www.moneyscience.com/pg/blog/arXiv/read/791780/an-optimized-microeconomic-modeling-system-for-analyzing-industrial-externalities-in-nonoecd-countries-arxiv171002755v1-qfinec
Mon, 09 Oct 2017 20:11:16 -0500
http://feedproxy.google.com/~r/FinancialResearchFocus/~3/fvXDpM_byAY/an-optimized-microeconomic-modeling-system-for-analyzing-industrial-externalities-in-nonoecd-countries-arxiv171002755v1-qfinec
<![CDATA[An Optimized Microeconomic Modeling System for Analyzing Industrial Externalities in Non-OECD Countries. (arXiv:1710.02755v1 [q-fin.EC])]]>
In this paper, we provide an integrated systems modeling approach to
analyzing global externalities from a microeconomic perspective. Various forms
of policy (fiscal, monetary, etc.) have addressed flaws and market failures in
models, but few have been able to successfully eliminate modern externalities
that remain an environmental and human threat. We assess three primary global
industries (pollution, agriculture, and energy) with respect to non-OECD
entities through both qualitative and quantitative studies. By combining key
mutual points of specific externalities present within each respective
industry, we are able to propose an alternative and optimized solution to
internalizing them via incentives and cooperative behavior rather than by
traditional Pigouvian taxes and subsidies.
]]>791780
http://www.moneyscience.com/pg/blog/arXiv/read/791780/an-optimized-microeconomic-modeling-system-for-analyzing-industrial-externalities-in-nonoecd-countries-arxiv171002755v1-qfinec