This paper addresses a moral hazard problem in which the agent's actions affect the future profits of the firm. The optimal contract can be implemented through the issuance of variable-coupon debt and the purchase of fixed-coupon debt. Consequently, the resulting capital structure acts as a hedge for the firm, reducing underinvestment costs in bad states of nature and controlling overinvestment incentives in good ones. However, owing to asymmetric information between the firm's manager and investors, this hedge is only partial. The firm's investments vary with cash flows, disclosing the agent's private information to the principal. Copyright © 2016 John Wiley & Sons, Ltd.

We explore the use of deep learning hierarchical models for problems in financial prediction and classification. Financial prediction problems – such as those presented in designing and pricing securities, constructing portfolios, and risk management – often involve large data sets with complex data interactions that currently are difficult or impossible to specify in a full economic model. Applying deep learning methods to these problems can produce more useful results than standard methods in finance. In particular, deep learning can detect and exploit interactions in the data that are, at least currently, invisible to any existing financial economic theory. Copyright © 2016 John Wiley & Sons, Ltd.

Bayesian designs make formal use of the experimenter's prior information in planning scientific experiments. In their 1989 paper, Chaloner and Larntz suggested choosing the design that maximizes the prior expectation of a suitable utility function of the Fisher information matrix, which is particularly useful when Fisher's information depends on the unknown parameters of the model. In this paper, their method is applied to a randomized experiment for a binary response model with two treatments in an adaptive way, that is, updating the prior information at each step on the basis of the accrued data. The utility is the *A*-optimality criterion, and the marginal priors for the parameters of interest are assumed to be beta distributions. This design is shown to converge almost surely to the Neyman allocation. But experiments are frequently designed with more purposes in mind than just inferential ones. In clinical trials for treatment comparison, Bayesian statisticians share with non-Bayesians the goal of randomizing patients to treatment arms so as to assign more patients to the treatment that does better in the trial. One possible approach is to optimize the prior expectation of a combination of the different utilities. This idea is applied in the second part of the paper to the same binary model, under a very general joint prior, combining either *A*- or *D*-optimality with an ethical criterion. The resulting randomized experiment is skewed in favor of the more promising treatment and can be described as Bayes compound optimal. Copyright © 2016 John Wiley & Sons, Ltd.
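
As a rough illustration of the convergence claim, the sketch below runs a plug-in version of such an adaptive design: each arm keeps a Beta posterior, and every new subject is assigned with probability proportional to the posterior-mean estimate of sqrt(p(1 − p)), the Neyman proportion. This is a simplification of the exact A-optimal utility maximization in the paper; the function name and all numerical settings are hypothetical.

```python
import numpy as np

def adaptive_neyman_trial(p1, p2, n, a=1.0, b=1.0, seed=0):
    """Sequentially allocate n subjects between two binary-response arms.

    Plug-in sketch of an adaptive A-optimal design: each arm carries a
    Beta posterior (starting from Beta(a, b)), and the next subject goes
    to arm 1 with probability proportional to the posterior-mean estimate
    of sqrt(p * (1 - p)) -- the Neyman allocation, to which the exact
    Bayesian design converges almost surely.
    """
    rng = np.random.default_rng(seed)
    succ = np.array([a, a])          # posterior alpha parameters per arm
    fail = np.array([b, b])          # posterior beta parameters per arm
    counts = np.zeros(2, dtype=int)
    true_p = (p1, p2)
    for _ in range(n):
        pm = succ / (succ + fail)            # posterior mean of p per arm
        s = np.sqrt(pm * (1.0 - pm))         # estimated response SDs
        arm = 0 if rng.random() < s[0] / s.sum() else 1
        y = rng.random() < true_p[arm]       # simulated binary response
        succ[arm] += y
        fail[arm] += 1 - y
        counts[arm] += 1
    return counts

counts = adaptive_neyman_trial(0.8, 0.5, 5000)
share = counts[0] / counts.sum()   # Neyman target: sqrt(.16)/(sqrt(.16)+.5) = 4/9
```

With 5000 subjects, the realized allocation share of arm 1 settles close to the Neyman target 4/9 ≈ 0.444, illustrating the almost-sure convergence stated in the abstract.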

Environmental report cards are popular mechanisms for summarising the overall status of an environmental system of interest. This paper describes the development of such a report card in the context of a study for Gladstone Harbour in Queensland, Australia. The harbour is within the World Heritage-protected Great Barrier Reef and is the location of major industrial development, hence the interest in developing a way of reporting its health in a statistically valid, transparent and sustainable manner. A Bayesian network (BN) approach was used because of its ability to aggregate and integrate different sources of information, provide probabilistic estimates of interest and update these estimates in a natural manner as new information becomes available.

BN modelling is an iterative process, and in the context of environmental reporting, this is appealing as model development can be initiated while quantitative knowledge is still under development, and subsequently refined as more knowledge becomes available. Moreover, the BN model helps build the maturity of the quantitative information needed and helps target investment in monitoring and/or process modelling activities to inform the approach taken. The model is able to incorporate spatial and temporal information and may be structured in such a way that new indicators of relevance to the underlying environmental gradient being monitored may replace less informative indicators or be added to the model with minimal effort.

The model described here focuses on the environmental component, but has the capacity to also incorporate social, cultural and economic components of the Gladstone Harbour Report Card. Copyright © 2016 John Wiley & Sons, Ltd.

Empirical evidence suggests that single-factor models do not capture the full dynamics of stochastic volatility: a marked discrepancy between their predicted prices and market prices exists for deep in-the-money and out-of-the-money options across a range of times to maturity. On the other hand, there is empirical reason to believe that the volatility skew fluctuates randomly. Based on the idea of combining stochastic volatility and stochastic skew, this paper incorporates a stochastic elasticity of variance running on a fast timescale into the Heston stochastic volatility model. This multiscale, multifactor hybrid model retains the analytic tractability of the Heston model as much as possible, while better capturing the complex nature of volatility and skew dynamics. Asymptotic analysis based on ergodic theory yields a closed-form analytic formula for the approximate price of European vanilla options. Subsequently, the effect of adding the stochastic elasticity factor on top of the Heston model is demonstrated in terms of the implied volatility surface. Copyright © 2016 John Wiley & Sons, Ltd.
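
For context, here is a minimal simulation sketch of the baseline Heston model (without the paper's fast-timescale stochastic elasticity factor), using a full-truncation Euler scheme; the discounted-mean check at the end is a standard martingale sanity test, and all parameter values are illustrative.

```python
import numpy as np

def heston_paths(s0, v0, r, kappa, theta, xi, rho, T, steps, n_paths, seed=0):
    """Full-truncation Euler scheme for the Heston model:
        dS = r S dt + sqrt(v) S dW1,  dv = kappa (theta - v) dt + xi sqrt(v) dW2,
    with corr(dW1, dW2) = rho.  Returns the terminal prices S_T.
    """
    rng = np.random.default_rng(seed)
    dt = T / steps
    s = np.full(n_paths, np.log(s0))   # work in log-price for stability
    v = np.full(n_paths, v0)
    for _ in range(steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                      # full truncation of variance
        s += (r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1
        v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return np.exp(s)

ST = heston_paths(100, 0.04, 0.02, 1.5, 0.04, 0.3, -0.7,
                  T=1.0, steps=200, n_paths=100_000)
disc_mean = np.exp(-0.02) * ST.mean()   # should be close to S0 = 100
```

The discounted terminal mean recovers the spot price up to Monte Carlo and discretization error, confirming the scheme before any extra factor is layered on.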

The threshold autoregressive model with a generalized autoregressive conditional heteroskedasticity (GARCH) specification is a popular nonlinear model that captures the well-known asymmetric phenomena in financial market data. The switching mechanism of hysteretic autoregressive GARCH models differs from that of the threshold autoregressive GARCH model in that regime switching may be delayed when the hysteresis variable lies in a hysteresis zone. This paper conducts a Bayesian model comparison among competing models by designing an adaptive Markov chain Monte Carlo sampling scheme. We illustrate the performance of three criteria for comparing models with fat-tailed and/or skewed errors: the deviance information criterion, Bayesian predictive information, and an asymptotic version of Bayesian predictive information. A simulation study highlights the properties of the three Bayesian criteria, their accuracy, and their favorable performance as model selection tools. We demonstrate the proposed method in an empirical study of 12 international stock markets, providing evidence that strongly supports models with skewed fat-tailed innovations. Copyright © 2016 John Wiley & Sons, Ltd.

Inspection models applicable to a finite planning horizon are developed for the following lifetime distributions: uniform, exponential, and Weibull. For a given lifetime distribution, profit maximization is used as the sole optimization criterion for determining both an optimal planning horizon over which a system may be operated and the ideal inspection times. Illustrative examples (focusing on the uniform and Weibull distributions and using Mathematica programs) are given. For some situations, spreading inspections evenly over the entire planning horizon is seen to result in the attainment of desirable profit levels over a shorter planning horizon. Scope for further research is outlined as well. Copyright © 2016 John Wiley & Sons, Ltd.

We study the relationship between lumber strength properties and their visual grading characteristics. This topic is central to the analysis of the reliability of lumber products in that it underlies the calculation of structural design values. The approaches described in the paper are adaptations of survival analysis methods commonly used in medical studies. Because each piece of lumber can only be tested to destruction with one method (i.e., each piece cannot be broken twice), modeling these strength distributions simultaneously can be challenging. In the past, this kind of problem has been addressed by subjectively matching pieces of lumber, but the quality of that approach is then an issue. The objective of our analysis is to build a predictive model that relates the strength properties to the recorded characteristics. The paper concludes that the type of wood defect (knot), lumber grade status (off-grade: yes/no), and a lumber's modulus of elasticity have statistically significant effects on wood strength. We find that the Weibull accelerated failure time model provides a better fit than the Cox proportional hazards model on our dataset. Copyright © 2016 John Wiley & Sons, Ltd.
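
A minimal sketch of the kind of Weibull accelerated failure time fit compared against the Cox model above, run on synthetic right-censored "strength" data rather than the actual lumber dataset; the covariate name, parameter values, and censoring threshold are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_aft_mle(t, delta, X):
    """Maximum-likelihood fit of a Weibull accelerated failure time model:
    S(t | x) = exp(-(t / theta)^k) with scale theta = exp(x @ beta).
    delta is 1 for an observed failure, 0 for a right-censored test.
    """
    def negloglik(par):
        beta, k = par[:-1], np.exp(par[-1])   # k > 0 via log-parametrization
        z = k * (np.log(t) - X @ beta)
        # log f = log k + z - log t - e^z for failures; log S = -e^z otherwise
        return -np.sum(delta * (par[-1] + z - np.log(t)) - np.exp(z))
    res = minimize(negloglik, np.zeros(X.shape[1] + 1), method="Nelder-Mead",
                   options={"maxiter": 10000, "fatol": 1e-9, "xatol": 1e-9})
    return res.x[:-1], np.exp(res.x[-1])

# synthetic data: scale theta = exp(1.0 + 0.5 * grade), Weibull shape k = 2
rng = np.random.default_rng(0)
n = 800
grade = rng.integers(0, 2, n).astype(float)
X = np.column_stack([np.ones(n), grade])
T = np.exp(X @ np.array([1.0, 0.5])) * rng.exponential(1.0, n) ** 0.5
c = 6.0                                        # right-censoring threshold
t, delta = np.minimum(T, c), (T <= c).astype(float)
beta_hat, k_hat = weibull_aft_mle(t, delta, X)
```

On this synthetic sample the MLE recovers the generating coefficient and shape closely, which is the kind of fit quality the abstract's model comparison rests on.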

This paper solves an optimal portfolio selection problem in the discrete-time setting where the states of the financial market cannot be completely observed, relaxing the common assumption that the market states are fully observable. The dynamics of the unobservable market state are formulated by a hidden Markov chain, and the return of the risky asset is modulated by the unobservable market state. Based on the information observed up to the decision moment, an investor wants to find the optimal multi-period investment strategy that maximizes the mean-variance utility of terminal wealth. By adopting a sufficient statistic, the portfolio optimization problem with incompletely observable information is converted into one with completely observable information. The optimal investment strategy is derived by using the dynamic programming approach and the embedding technique, and the efficient frontier is also presented. Compared with the case when the market state can be completely observed, we find that the unobservable market state does decrease the investment in the risky asset on average. Finally, numerical results illustrate the impact of the unobservable market state on the efficient frontier, the optimal investment strategy, and the Sharpe ratio. Copyright © 2016 John Wiley & Sons, Ltd.

In high-dimensional data settings where *p* ≫ *n*, many penalized regularization approaches have been studied for simultaneous variable selection and estimation. However, in the presence of covariates with weak effects, many existing variable selection methods, including Lasso and its generalizations, cannot distinguish covariates with weak contributions from those with none. Thus, prediction based only on a subset model of selected covariates can be inefficient. In this paper, we propose a post-selection shrinkage estimation strategy to improve the prediction performance of a selected subset model. Such a post-selection shrinkage estimator (PSE) is data adaptive and is constructed by shrinking a post-selection weighted ridge estimator in the direction of a selected candidate subset. Under an asymptotic distributional quadratic risk criterion, its prediction performance is explored analytically. We show that the proposed PSE performs better than the post-selection weighted ridge estimator. More importantly, it significantly improves the prediction performance of any candidate subset model selected by most existing Lasso-type variable selection methods. The relative performance of the PSE is demonstrated by both simulation studies and real-data analysis. Copyright © 2016 John Wiley & Sons, Ltd.

The problem of an *inspection permutation* or *inspection strategy* (first discussed in a research paper in 1989 and reviewed in another research paper in 1991) is revisited. The problem deals with an *N*-component system whose times to failure are independent but not identically distributed random variables. Each of the failure times follows an exponential distribution. The components in the system are connected in series such that the failure of at least one component entails the failure of the system. Upon system failure, the components are inspected one after another in a hierarchical way (called an inspection permutation) until the component causing the system to fail is identified. The inspection of each component is a process that takes a non-negligible amount of time and is performed at a cost. Once the faulty component is identified, it is repaired at a cost, and the repair process takes some time. After the repair, the system is *good as new* and is put back in operation. The inspection permutation that results in the maximum long-run average net income per unit of time (for the *undiscounted case*) or maximum total discounted net income (for the *discounted case*) is called the optimal inspection permutation/strategy. A way of determining an optimal inspection permutation in an easier fashion, taking advantage of improvements in computer software, is proffered. Mathematica is used to showcase how the method works with the aid of a numerical example. Copyright © 2016 John Wiley & Sons, Ltd.
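
A brute-force version of the undiscounted search can be sketched as follows, under a simplified renewal-reward cost model (the paper's exact income expressions may differ, and all parameter values are illustrative):

```python
import itertools
import numpy as np

def avg_income(perm, lam, t_insp, c_insp, t_rep, c_rep, revenue):
    """Long-run average net income per unit time (undiscounted case) of one
    inspection permutation, via renewal-reward: a cycle is uptime, then a
    one-by-one inspection until the failed component is found, then repair."""
    lam = np.asarray(lam, dtype=float)
    p = lam / lam.sum()            # P(component j caused the series failure)
    up = 1.0 / lam.sum()           # expected uptime of a cycle
    e_cost = e_time = 0.0
    for m, j in enumerate(perm):
        inspected = perm[: m + 1]              # components checked to find j
        e_cost += p[j] * (sum(c_insp[i] for i in inspected) + c_rep[j])
        e_time += p[j] * (sum(t_insp[i] for i in inspected) + t_rep[j])
    return (revenue * up - e_cost) / (up + e_time)

def best_permutation(lam, t_insp, c_insp, t_rep, c_rep, revenue):
    """Brute-force search over all N! inspection permutations."""
    return max(itertools.permutations(range(len(lam))),
               key=lambda pm: avg_income(pm, lam, t_insp, c_insp,
                                         t_rep, c_rep, revenue))

lam = [0.5, 0.1, 0.2]
t_i = c_i = [0.1, 0.1, 0.1]        # identical inspection times and costs
t_r, c_r = [0.5, 0.5, 0.5], [2.0, 2.0, 2.0]
best = best_permutation(lam, t_i, c_i, t_r, c_r, revenue=10.0)
```

With identical inspection times and costs, the search recovers the intuitive rule of inspecting components in decreasing order of failure rate, here the permutation (0, 2, 1).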

This paper provides analytic pricing formulas for discretely monitored geometric Asian options under the regime-switching model. We derive the joint Laplace transform of the discount factor, the log return of the underlying asset price at maturity, and the logarithm of the geometric mean of the asset price. Then, using changes of measure and the inversion of the transform, the prices and deltas of fixed-strike and floating-strike geometric Asian options are obtained. As numerical illustrations, we calculate the prices of fixed-strike and floating-strike discrete geometric Asian call options using our formulas and compare them with the results of Monte Carlo simulation. Copyright © 2016 John Wiley & Sons, Ltd.
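
In the single-regime (plain Black-Scholes) special case, the price of a discretely monitored fixed-strike geometric Asian call has an elementary closed form, because the log of the geometric mean is normal; the sketch below compares it with a Monte Carlo benchmark, mirroring the comparison the abstract describes for the regime-switching formulas. Function names and parameters are illustrative.

```python
import numpy as np
from math import exp, log, sqrt
from statistics import NormalDist

def geo_asian_call_cf(s0, K, r, sigma, dt, n):
    """Closed-form price of a fixed-strike call on the discrete geometric mean
    G = (S_{t1} * ... * S_{tn})^(1/n), t_i = i*dt, under Black-Scholes:
    log G ~ N(mu, var), so a Black-Scholes-type formula applies."""
    T = n * dt
    mu = log(s0) + (r - 0.5 * sigma**2) * dt * (n + 1) / 2
    var = sigma**2 * dt * (n + 1) * (2 * n + 1) / (6 * n)
    sd = sqrt(var)
    d2 = (mu - log(K)) / sd
    d1 = d2 + sd
    N = NormalDist().cdf
    return exp(-r * T) * (exp(mu + var / 2) * N(d1) - K * N(d2))

def geo_asian_call_mc(s0, K, r, sigma, dt, n, n_paths=200_000, seed=0):
    """Monte Carlo benchmark for the closed-form price."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_paths, n))
    log_s = log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                + sigma * sqrt(dt) * z, axis=1)
    G = np.exp(log_s.mean(axis=1))           # discrete geometric mean
    return exp(-r * n * dt) * np.maximum(G - K, 0.0).mean()

cf = geo_asian_call_cf(100, 100, 0.03, 0.25, 1 / 12, 12)
mc = geo_asian_call_mc(100, 100, 0.03, 0.25, 1 / 12, 12)
```

The two prices agree to within Monte Carlo error, which is exactly the kind of cross-check the paper performs for its transform-based formulas.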

No abstract is available for this article.

Reference samples are frequently used to estimate in-control parameters, which are then treated as the true in-control parameters during the monitoring phase of Statistical Process Control (SPC) applications. The SPC literature has recognized that even small errors in parameter estimates determined from reference samples can have a large impact on the conditional (given the values of the estimated parameters) in-control average run length. However, there is little quantitative guidance on how large the reference sample should be to minimize this impact. In this paper, in the context of a recently developed Cumulative Sum (CUSUM) chart designed to detect translations in exponential distributions, a reference sample size formula for controlling the relative error of the conditional in-control average run length is derived. The result in this paper is a stepping stone toward reference sample size formulas in more general settings. Copyright © 2016 John Wiley & Sons, Ltd.

We establish a simple connection between certain *in-control* characteristics of the Cumulative Sum (CUSUM) Run Length and their *out-of-control* counterparts. The connection is in the form of paired integral (renewal) equations. The derivation exploits Wald's likelihood ratio identity and the well-known fact that the CUSUM chart is equivalent to repetitive application of Wald's Sequential Probability Ratio Test (SPRT). The characteristics considered include the entire Run Length distribution and all of the corresponding moments, starting from the zero-state average run length. A particular *practical* benefit of our result is that it enables the in-control and out-of-control characteristics of the CUSUM Run Length to be computed *concurrently*. Moreover, owing to the equivalence of the CUSUM chart to a sequence of SPRTs, the Average Sample Number and Operating Characteristic functions of an SPRT under the null and under the alternative can *all* be computed *simultaneously* as well. This effectively doubles the efficiency of any numerical method that one may devise to carry out the actual computations. Copyright © 2016 John Wiley & Sons, Ltd.
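
The paired in-control/out-of-control quantities can be illustrated crudely by simulation: the sketch below estimates both zero-state average run lengths for a standard one-sided normal-mean CUSUM (the design constants k = 0.5 and h = 4 are illustrative, not taken from the paper).

```python
import numpy as np

def cusum_arl(mu, k=0.5, h=4.0, n_reps=1000, seed=0):
    """Zero-state average run length of a one-sided CUSUM
    S_t = max(0, S_{t-1} + X_t - k), alarm when S_t > h,
    for i.i.d. N(mu, 1) observations, estimated by simulation."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(n_reps):
        s, t = 0.0, 0
        while s <= h:
            t += 1
            s = max(0.0, s + rng.normal(mu, 1.0) - k)
        total += t
    return total / n_reps

arl_in = cusum_arl(0.0)      # in-control: process mean on target
arl_out = cusum_arl(1.0)     # out-of-control: mean shifted by 2k
```

The in-control ARL is large (of the order of a couple of hundred for this design) while the out-of-control ARL is small, the two quantities that the paper's renewal equations deliver concurrently without simulation.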

Sliced Latin hypercube designs (SLHDs) achieve maximum stratification in each dimension, but neither the full designs nor their slices can guarantee good uniformity over the experimental region. Although the uniformity of the full SLHD and that of its slices are related, there is no one-to-one correspondence between them. In this paper, we propose a new uniformity measure for SLHDs that combines the two kinds of uniformity. Under this combined measure, the obtained uniform SLHDs have design points evenly spread over the experimental region, not only for the whole design but also for each of its slices. Numerical simulation shows the effectiveness of the proposed uniform SLHDs for computer experiments with both quantitative and qualitative factors. Copyright © 2016 John Wiley & Sons, Ltd.
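
A minimal sketch of one standard SLHD construction (before any uniformity optimization), showing the two stratification properties that the combined measure builds on; the function name is hypothetical.

```python
import numpy as np

def sliced_lhd(m, t, d, seed=0):
    """Generate a sliced Latin hypercube design: t slices of m runs in d
    dimensions.  The full m*t-run design is an LHD on levels 1..m*t, and
    each slice, after collapsing levels in blocks of t, is an LHD on m
    levels (a Qian-type construction)."""
    rng = np.random.default_rng(seed)
    n = m * t
    D = np.empty((n, d), dtype=int)
    for col in range(d):
        slices = np.empty((t, m), dtype=int)
        for blk in range(m):
            # levels in block blk are blk*t+1 .. (blk+1)*t; deal one per slice
            levels = rng.permutation(np.arange(blk * t, (blk + 1) * t) + 1)
            slices[:, blk] = levels
        for s in range(t):
            # random run order within each slice
            D[s * m:(s + 1) * m, col] = rng.permutation(slices[s])
    return D

D = sliced_lhd(4, 3, 2, seed=1)   # 3 slices of 4 runs in 2 dimensions
```

Each column of the full design is a permutation of 1..12, while each 4-run slice hits every block of 3 consecutive levels exactly once; the paper's contribution is choosing among such designs to make both the whole design and the slices uniform.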

In this work, we create a family of simple stochastic covariance models that display stochastic mean-reverting levels of covariance as an additional layer of stochastic behavior beyond the well-known stochastic volatility and correlation. The one-dimensional version of our model is inspired by the Heston model, while the multidimensional model generalizes the principal component stochastic volatility model. The models' main contribution is capturing stochastic mean-reversion levels of the volatility and of the eigenvalues of the instantaneous covariance matrix of the vector of stock prices, with direct implications for the correlations as well. Our focus is on the multidimensional model; we investigate its properties and derive a closed-form expression for the characteristic function. This allows us to study the pricing of financial derivatives, such as correlation and spread options. These prices are checked against simulated Monte Carlo prices for correctness. A sensitivity analysis is performed on the parameters of the stochastic mean-reverting level of volatilities to study their impact on the price. Finally, implied volatility curves and correlation surfaces are built to reveal the additional flexibility gained with the new model. Copyright © 2016 John Wiley & Sons, Ltd.

This paper studies *k*-out-of-*n* redundant systems with component lifetimes having lower tail permutation decreasing probability density. For matched redundancies with stochastic arrangement increasing lifetimes, the allocation of a more reliable component to a weaker component is proved to enhance system reliability. For redundancies with independent and identically distributed lifetimes, more allocations to a weaker component are shown to stochastically increase the system lifetime. In addition, using a real data set, we illustrate the statistical aspects of developing lifetimes with lower tail permutation decreasing density. Copyright © 2016 John Wiley & Sons, Ltd.

We consider the problem of modeling the dependence among many time series. We build high-dimensional time-varying copula models by combining pair-copula constructions with stochastic autoregressive copula and generalized autoregressive score models to capture dependence that changes over time. We show how the estimation of this highly complex model can be broken down into the estimation of a sequence of bivariate models, which can be achieved by the method of maximum likelihood. Further, by restricting the conditional dependence parameter on higher cascades of the pair-copula construction to be constant, we can greatly reduce the number of parameters to be estimated without losing much flexibility. Applications to five MSCI stock market indices and to a large dataset of daily stock returns of all constituents of the DAX 30 illustrate the usefulness of the proposed model class in-sample and for density forecasting. Copyright © 2016 John Wiley & Sons, Ltd.

The modeling of macroeconomic influence on rating migration matrices plays an important role in credit risk management, especially in stress testing. In contrast to approaches that condition migration matrices separately on a qualitative assessment of the state of the business cycle, we promote the use of generalized regression models, which allow macroeconomic covariates to be considered directly. We systemize, extend, and critically discuss different regression approaches, with an emphasis on violations of model assumptions and on adequate treatment of such problems, an aspect that has received insufficient attention in the recent literature. Moreover, we introduce a framework for model evaluation and variable selection based on out-of-sample forecasting, in order to avoid overfitting. Finally, we illustrate the concepts outlined with practical examples based on Standard & Poor's global corporate ratings data. Copyright © 2016 John Wiley & Sons, Ltd.

Semiconductors are fabricated through unit processes including photolithography, etching, diffusion, ion implantation, deposition, and planarization. Chemical mechanical planarization, which is essential in advanced semiconductor manufacturing, aims to achieve high planarity across the wafer surface. This paper presents a case study in which the optimal blend of a mixture slurry was obtained to improve two response variables (material loss and roughness) at the same time. A slurry whose abrasive particles are all of the same size is referred to as a pure slurry; the mixture slurry consists of several pure slurries. The optimal blend was obtained by applying a multiresponse surface optimization method. In particular, the recently developed posterior approach to dual response surface optimization was employed, which allows the chemical mechanical planarization process engineer to investigate tradeoffs between the two response variables. The two responses were better with the obtained blend than with the existing blend. Copyright © 2016 John Wiley & Sons, Ltd.

The purpose of this article is to summarize recent research results on constructing nonparametric multivariate control charts, with a main focus on data depth-based control charts. Data depth provides dimension reduction for high-dimensional problems in a completely nonparametric way. Several depth measures, including Tukey depth, are shown to be particularly effective for statistical process control when the data deviate from the normality assumption. For detecting small or moderate shifts in the process target mean, the multivariate version of the exponentially weighted moving average chart is generally robust to non-normal data, so that nonparametric alternatives may be less often required. Copyright © 2016 John Wiley & Sons, Ltd.
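
A rough numerical sketch of Tukey depth, the depth measure named above, approximated in two dimensions by scanning projection directions (the angular grid and sample sizes are illustrative, not a production SPC implementation):

```python
import numpy as np

def tukey_depth_2d(z, data, n_dir=720):
    """Approximate Tukey (halfspace) depth of a point z in a 2-d sample:
    the minimum, over halfspaces whose boundary passes through z, of the
    fraction of sample points inside the halfspace.  The minimum over
    all directions is approximated by an angular grid."""
    ang = np.linspace(0.0, 2 * np.pi, n_dir, endpoint=False)
    U = np.column_stack([np.cos(ang), np.sin(ang)])      # unit directions
    proj = (np.asarray(data) - np.asarray(z)) @ U.T      # signed offsets
    return (proj <= 0).mean(axis=0).min()

rng = np.random.default_rng(0)
cloud = rng.standard_normal((500, 2))
deep = tukey_depth_2d(cloud.mean(axis=0), cloud)     # near-central point
shallow = tukey_depth_2d([5.0, 5.0], cloud)          # far outside the cloud
```

Central points get depth near 0.5 and outlying points depth near 0, which is what makes depth a natural nonparametric charting statistic: an out-of-control observation shows up as a point of unusually low depth.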

The aim of the present paper is the stochastic modeling and statistical inference of a component that deteriorates over time, for prediction purposes. The deterioration is due to defects that appear one by one and then propagate independently over time. The motivation comes from an application to passive components within electric power plants, where (measurable) flaw indications first initiate (one at a time) and then grow over time. The available data come from inspections at discrete times, where only the largest flaw indication is measured together with the total number of indications on each component. Although detected, indications that are too small cannot be measured, leading to censored observations. Taking into account this partial information from the field, a specific stochastic model is proposed, in which flaw indications initiate according to a Poisson process and then propagate according to competing independent gamma processes. A parametric estimation procedure is developed, tested on simulated data, and then applied to the industrial case. The fitted model is then used to predict the future deterioration of each component and its residual operating time until a specified critical degradation level is reached. Copyright © 2016 John Wiley & Sons, Ltd.
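
The initiation-plus-propagation mechanism can be sketched by simulation: flaws arrive as a Poisson process, each then grows with independent gamma-process increments, and at each inspection only the flaw count and the largest indication are recorded, mirroring the data structure described above. All parameter values are illustrative.

```python
import numpy as np

def simulate_component(rate, shape, scale, horizon, n_insp, seed=0):
    """Simulate one component: flaw indications initiate as a Poisson
    process with the given rate, then each grows independently as a gamma
    process (increment over dt ~ Gamma(shape*dt, scale)).  At each of
    n_insp equally spaced inspections, record (time, flaw count, largest
    indication), as in the field data."""
    rng = np.random.default_rng(seed)
    n_flaws = rng.poisson(rate * horizon)
    births = np.sort(rng.uniform(0.0, horizon, n_flaws))
    insp = np.linspace(horizon / n_insp, horizon, n_insp)
    sizes = np.zeros(0)
    born = np.zeros(0)
    t_prev = 0.0
    records = []
    for t in insp:
        new = births[(births > t_prev) & (births <= t)]
        born = np.concatenate([born, new])
        sizes = np.concatenate([sizes, np.zeros(len(new))])
        # gamma-process growth over each flaw's exposure since last inspection
        expo = t - np.maximum(born, t_prev)
        if len(sizes):
            sizes = sizes + rng.gamma(shape * expo, scale)
        records.append((t, len(sizes), sizes.max() if len(sizes) else 0.0))
        t_prev = t
    return records

recs = simulate_component(2.0, 1.0, 0.5, 10.0, 5, seed=1)
```

By construction, both the flaw count and the largest indication are non-decreasing across inspections, the qualitative pattern the estimation procedure must reproduce.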

The distribution of a coherent system based on IID components can be written as a mixture of the distributions of progressively Type-II censored order statistics. The coefficients in that representation are called the progressive censoring signature (PC-signature) of the system. In this paper, we explore the basic properties and potential applications of these mixture representations. We show that they can be used to study the associated censoring schemes (and their exact probabilities) that arise as the system operates. It is illustrated that, in some sense, the PC-signature provides more information than the classical (Samaniego) signature of the system. Further, this new signature can be used to establish distribution-free ordering properties for system lifetimes. Copyright © 2016 John Wiley & Sons, Ltd.

In this paper, we examine the inflation rates in the Group of Seven countries, investigating issues such as the existence of unit roots, structural breaks, fractional integration, and potential non-linearities using a fractional dependence (FD) approach based on Chebyshev polynomials in time. This robust FD approach allows one to test for persistence as well as non-linearity of the series. We first tested for stationarity and structural breaks using classical approaches and observed inconclusive results with regard to the stationarity levels of the series. Using Bai–Perron tests, we confirmed up to five significant structural breaks in each of the inflation series. However, because structural breaks are closely related to fractional differentiation, the latter approach was also applied. Here, we observed that the estimates of the differencing parameter were quite stable across time: evidence of unit roots was found in the cases of the UK, Canada, France, Japan and the USA; for Germany, we found some evidence of mean reversion; and estimates of *d* above 1 were found in the case of Italy. On the other hand, non-linear deterministic trends were clearly rejected in all cases. Copyright © 2016 John Wiley & Sons, Ltd.
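
The deterministic part of the Chebyshev-in-time setup can be sketched as a simple least-squares regression on Chebyshev polynomials of rescaled time (this omits the fractional-integration component of the FD test; the data and coefficients below are synthetic):

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb

def chebyshev_trend(y, degree):
    """Least-squares fit of a deterministic trend built from Chebyshev
    polynomials in time: regress y_t on T_0, ..., T_degree evaluated at
    time rescaled to [-1, 1].  Returns the coefficients and fitted trend."""
    n = len(y)
    x = 2.0 * np.arange(n) / (n - 1) - 1.0       # rescaled time in [-1, 1]
    coef = Cheb.chebfit(x, y, degree)
    return coef, Cheb.chebval(x, coef)

# synthetic series: constant plus a T_2 (quadratic-in-time) component plus noise
rng = np.random.default_rng(0)
n = 300
x = 2.0 * np.arange(n) / (n - 1) - 1.0
y = 1.0 + 0.5 * Cheb.chebval(x, [0.0, 0.0, 1.0]) + rng.normal(0.0, 0.3, n)
coef, trend = chebyshev_trend(y, 3)
```

The fit recovers the generating Chebyshev coefficients; in the FD framework, significance tests on coefficients of this kind (jointly with the estimate of *d*) decide between linear and non-linear deterministic trends.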

This paper presents a reduced-form model for pricing defaultable bonds and credit default swaps (CDSs) with stochastic recovery, where the recovery risk is coupled with the default intensity, while the default intensity is described by a stochastic differential equation. Closed-form pricing formulae for defaultable bonds and CDSs are obtained by applying variable change techniques and partial differential equation approaches. The closed-form pricing formulae can provide valuable assistance in analyzing certain complications associated with portfolio management and hedging analysis. Finally, numerical experiments are provided to illustrate how the recovery parameters and intensity parameters affect the credit spread of a defaultable bond and the swap premium of a CDS. Copyright © 2016 John Wiley & Sons, Ltd.
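
As a point of reference for such numerical experiments, the deterministic-intensity special case gives the textbook relation that the CDS par spread is approximately (1 − recovery) times the hazard rate; a minimal sketch, assuming a flat hazard and flat short rate with illustrative parameters:

```python
import numpy as np

def cds_par_spread(lam, recovery, r, maturity, freq=4):
    """Par spread of a CDS under a flat hazard rate lam and flat short rate r
    (a deterministic-intensity special case of the stochastic model): the
    spread equates the protection leg to the premium leg, with quarterly
    premium payments and protection paid at the end of the default period."""
    dt = 1.0 / freq
    times = np.arange(dt, maturity + dt / 2, dt)
    surv = np.exp(-lam * times)                      # survival probabilities
    surv_prev = np.exp(-lam * (times - dt))
    disc = np.exp(-r * times)                        # discount factors
    premium_pv = dt * np.sum(disc * surv)            # PV of 1/year of premium
    protect_pv = (1 - recovery) * np.sum(disc * (surv_prev - surv))
    return protect_pv / premium_pv

s = cds_par_spread(0.02, 0.4, 0.03, 5.0)   # close to (1 - 0.4) * 0.02 = 1.2%
```

The computed par spread lands within a fraction of a basis point of the credit-triangle approximation (1 − R)λ, and increases with the hazard rate, the baseline behavior against which the paper's stochastic-recovery effects can be read.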
