We propose a strategy for automated trading, outline a theoretical justification of its profitability, and review backtesting results for foreign currency trading. The proposed methodology relies on the assumption that the processes reflecting the dynamics of currency exchange rates are, in a certain sense, similar to the class of Ornstein–Uhlenbeck processes and exhibit the mean-reverting property. In order to describe the quantitative characteristics of the projected return of the strategy, we derive an explicit expression for the running maximum of the Ornstein–Uhlenbeck process stopped at maximum drawdown and examine the correspondence between the derived characteristics and the observed ones. Copyright © 2017 John Wiley & Sons, Ltd.
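A minimal simulation sketch can make the mean-reversion premise concrete. The snippet below (all parameter values hypothetical, not taken from the paper) simulates an Ornstein–Uhlenbeck path by Euler–Maruyama discretization and derives a naive long-entry signal when the rate dips below its long-run mean:

```python
import numpy as np

def simulate_ou(theta, mu, sigma, x0, dt, n, rng):
    """Euler-Maruyama discretization of dX = theta*(mu - X) dt + sigma dW."""
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = (x[i] + theta * (mu - x[i]) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

rng = np.random.default_rng(0)
path = simulate_ou(theta=5.0, mu=1.25, sigma=0.1, x0=1.25,
                   dt=1 / 252, n=2520, rng=rng)

# Naive entry rule: go long when the rate dips half a (sample) standard
# deviation below its average level, flat otherwise.
band = path.mean() - 0.5 * path.std()
signal = (path < band).astype(int)
```

The paper's strategy additionally stops at a maximum drawdown level; that stopping rule is what the derived running-maximum expression characterizes.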

This paper proposes a methodology that accounts for the selection effect due to non-random entry in duration models using latent-class models. A mixed proportional hazard model with continuous finite mixture unobserved heterogeneity (MPH-CFM) is introduced to correct for the potential bias induced by the selection effect. Conditions for identification, consistency, and asymptotic normality of the MPH-CFM are provided. The estimator is used to investigate the duration of new entrant Canadian manufacturing firms. For the current application, the MPH-CFM is compared with alternative duration models and found to be superior. Empirically, the results indicate that there are two classes of firms: Class I starts with a high hazard that decreases non-monotonically, while Class II has a negligible hazard. These empirical results can be used to understand alternative models of firm dynamics. Copyright © 2017 John Wiley & Sons, Ltd.

The intent of this discussion is to highlight opportunities and limitations of utility-based and decision theoretic arguments in clinical trial design. The discussion is based on a specific case study, but the arguments and principles remain valid in general. The example concerns the design of a randomized clinical trial to compare a gel sealant versus standard care for resolving air leaks after pulmonary resection. The design follows a principled approach to optimal decision making, including a probability model for the unknown distributions of time to resolution of air leaks under the two treatment arms and an explicit utility function that quantifies clinical preferences for alternative outcomes. As is typical for any real application, the final implementation includes some compromises from the initial principled setup. In particular, we use the formal decision problem only for the final decision, but use reasonable ad hoc decision boundaries for making interim group sequential decisions that stop the trial early. Beyond the discussion of the particular study, we review more general considerations of using a decision theoretic approach for clinical trial design and summarize some of the reasons why such approaches are not commonly used. Copyright © 2017 John Wiley & Sons, Ltd.

Insurance risks data typically exhibit skewed behaviour. In this paper, we propose a Bayesian approach to capture the main features of these data sets. This work extends a methodology recently introduced in the literature by considering an extra parameter that captures the skewness of the data. In particular, a skewed Student-t distribution is considered. Two data sets are analysed: the Danish fire losses and the US indemnity loss. The analysis is carried out with an objective Bayesian approach. For the discrete parameter representing the number of degrees of freedom, we adopt a novel prior that recently appeared in the literature. Copyright © 2017 John Wiley & Sons, Ltd.

This research considers a supply chain financing system consisting of a capital-constrained retailer, a supplier and a risk-averse bank. The retailer may be subject to a credit limit because of the bank's downside risk control, and hence, credit insurance may be needed to enhance his financing ability. This paper develops a mathematical optimization model by incorporating an insurance policy into the well-known newsvendor financing model. The optimal inventory and insurance decisions under different scenarios, that is, no insurance, insurance with symmetric information and insurance with asymmetric information, are derived. This work also discusses how the retailer's capital level, the bank's risk aversion, and the insurer's loading factor affect the optimal inventory and insurance decisions. The results show that the retailer will use credit insurance if he is sufficiently capital-constrained or the insurer's risk loading factor is low enough. Moreover, credit insurance can bring a Pareto improvement to the supply chain financing system, which is consistent with the prevalence of credit insurance in practice. Several numerical experiments are presented to examine the sensitivities of key parameters. Copyright © 2016 John Wiley & Sons, Ltd.
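As background for the financing model, the classical unconstrained newsvendor quantity is set by the critical fractile. The sketch below assumes uniform demand purely for illustration and is a baseline, not the paper's model with a bank and insurer:

```python
# Classical (unconstrained) newsvendor benchmark: the profit-maximizing
# order quantity q* satisfies F(q*) = (p - c) / p, the critical fractile
# (unit price p, unit cost c, zero salvage value). Demand is taken to be
# Uniform(0, d_hi) purely for illustration.
def newsvendor_qty(price, cost, d_hi):
    fractile = (price - cost) / price
    return fractile * d_hi  # inverse CDF of Uniform(0, d_hi)

q_star = newsvendor_qty(price=10.0, cost=4.0, d_hi=200.0)  # → 120.0
```

In the paper, the retailer's capital constraint, the bank's downside risk control and the insurance premium all distort this baseline quantity.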

We study a stratified multisite cluster-sampling panel time series approach in order to analyse and evaluate the quality and reliability of produced items, motivated by the problem of sampling and analysing multisite outdoor measurements from photovoltaic systems. The specific stratified sampling in spatial clusters reduces sampling costs and allows for heterogeneity as well as for the analysis of spatial correlations due to defects and damages that tend to occur in clusters. The analysis is based on weighted least squares using data-dependent weights. We show that, under general conditions, this does not affect consistency and asymptotic normality of the least squares estimator under the proposed sampling design. The estimation of the relevant variance–covariance matrices is discussed in detail for various models including nested designs and random effects. The strata corresponding to damages or manufacturers are modelled via a quality feature by means of a threshold approach. The analysis of outdoor electroluminescence images shows that spatial correlations and local clusters may arise in such photovoltaic data. Further, relevant statistics such as the mean pixel intensity cannot be assumed to follow a Gaussian law. We investigate the proposed inferential tools in detail by simulations in order to assess the influence of spatial cluster correlations and serial correlations on the test's size and power. © 2016 The Authors. Applied Stochastic Models in Business and Industry published by John Wiley & Sons, Ltd.

A new approach to optimal maintenance of systems (networks) is suggested. It is applied to systems subject to two external independent shock processes. A system ‘consists’ of two parts, and each shock process affects only its own part. A new notion of bivariate signature is suggested and used for obtaining survival characteristics of a system and further optimization of the preventive maintenance actions. The preventive maintenance optimization is considered in the univariate discrete scale that counts the overall numbers of shocks of both types. An example of a transportation network is considered. Copyright © 2016 John Wiley & Sons, Ltd.

In binary regression, symmetric links such as the logit and probit are usually considered standard. However, in the presence of an imbalance of ones and zeros, these links can be inappropriate, too inflexible to fit the skewness in the response curve, and likely to lead to misspecification. This is the case in some types of insurance coverage, where the probability of a given binary response variable can be observed to approach zero at a different rate than it approaches one. Furthermore, when the usual links are considered, there is no skewness parameter associated with the chosen distribution that, regardless of the linear predictor, is easily interpreted. To overcome these problems, this paper develops a set of new skew links and discusses some of their properties. In this context, power links and their reversal versions are presented. A Bayesian inference approach using MCMC is developed for the presented models. The methodology is illustrated on a sample of motor insurance policyholders selected randomly by gender. Results suggest that the proposed link functions are more appropriate than other link functions commonly used in the literature. Copyright © 2016 John Wiley & Sons, Ltd.
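A sketch of the power-link idea (the exact family in the paper may differ): raising the logistic CDF to a power λ makes the response probability approach 0 and 1 at different rates, with λ = 1 recovering the ordinary logit:

```python
import math

def power_logit(eta, lam):
    """Power logit link: P(Y=1) = F(eta)**lam, with F the logistic CDF.
    lam != 1 skews the response curve; lam = 1 is the standard logit."""
    return (1.0 / (1.0 + math.exp(-eta))) ** lam

power_logit(0.0, 1.0)  # → 0.5, the symmetric logit case
```

A reversal version can be built analogously as 1 − F(−η)**λ, skewing the curve in the opposite direction.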

In this article, we consider sample size determination for experiments in which estimation and design are performed by multiple parties. This problem has relevant applications in contexts involving adversarial decision makers, such as control theory, marketing, and drug testing. Specifically, we adopt a decision-theoretic perspective, and we assume that a decision on an unknown parameter of a statistical model involves two actors who share the same data and loss function but not the same prior beliefs on the parameter. We also suppose that the first actor has to use the second actor's optimal action, and we finally assume that the experiment is planned by a third party. In this framework, we aim at determining an appropriate sample size so that the posterior expected loss incurred by the first actor in taking the optimal action of the second is sufficiently small. We develop general results for the one-parameter exponential family under quadratic loss and analyze the interactive impact of the prior beliefs of the three different parties on the resulting sample sizes. Relationships with other sample size determination criteria are explored. Copyright © 2016 John Wiley & Sons, Ltd.

We present a Bayesian decision theoretic approach for developing replacement strategies. In so doing, we consider a semiparametric model to describe the failure characteristics of systems, specifying a nonparametric form for the cumulative intensity function and taking into account the effect of covariates in a parametric form. The use of a gamma process prior for the cumulative intensity function complicates the Bayesian analysis when the updating is based on failure count data. We develop a Bayesian analysis of the model using Markov chain Monte Carlo methods and determine replacement strategies. The adopted Markov chain Monte Carlo scheme involves a data augmentation algorithm. We show the implementation of our approach using actual data from railroad tracks. Copyright © 2016 John Wiley & Sons, Ltd.

We explore the use of deep learning hierarchical models for problems in financial prediction and classification. Financial prediction problems – such as those presented in designing and pricing securities, constructing portfolios, and risk management – often involve large data sets with complex data interactions that currently are difficult or impossible to specify in a full economic model. Applying deep learning methods to these problems can produce more useful results than standard methods in finance. In particular, deep learning can detect and exploit interactions in the data that are, at least currently, invisible to any existing financial economic theory. Copyright © 2016 John Wiley & Sons, Ltd.

Bayesian designs make formal use of the experimenter's prior information in planning scientific experiments. In their 1989 paper, Chaloner and Larntz suggested choosing the design that maximizes the prior expectation of a suitable utility function of the Fisher information matrix, which is particularly useful when Fisher's information depends on the unknown parameters of the model. In this paper, their method is applied to a randomized experiment for a binary response model with two treatments, in an adaptive way, that is, updating the prior information at each step on the basis of the accrued data. The utility is the *A*-optimality criterion, and the marginal priors for the parameters of interest are assumed to be beta distributions. This design is shown to converge almost surely to the Neyman allocation. But frequently, experiments are designed with more purposes in mind than just inferential ones. In clinical trials for treatment comparison, Bayesian statisticians share with non-Bayesians the goal of randomizing patients to treatment arms so as to assign more patients to the treatment that does better in the trial. One possible approach is to optimize the prior expectation of a combination of the different utilities. This idea is applied in the second part of the paper to the same binary model, under a very general joint prior, combining either *A*- or *D*-optimality with an ethical criterion. The resulting randomized experiment is skewed in favor of the more promising treatment and can be described as Bayes compound optimal. Copyright © 2016 John Wiley & Sons, Ltd.
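For reference, the Neyman allocation that the adaptive design converges to can be computed directly for two binary arms; this standalone sketch illustrates the limit target, not the paper's Bayesian procedure:

```python
import math

def neyman_allocation(p_a, p_b):
    """Target proportion of subjects on arm A under the Neyman allocation,
    which minimizes the variance of the estimated difference p_a - p_b
    by allocating proportionally to each arm's standard deviation."""
    s_a = math.sqrt(p_a * (1 - p_a))
    s_b = math.sqrt(p_b * (1 - p_b))
    return s_a / (s_a + s_b)

rho = neyman_allocation(0.9, 0.5)  # fewer subjects on the less noisy arm A
```

Note that Neyman allocation is purely inferential; it can assign fewer patients to the better arm, which motivates the ethical criterion in the paper's second part.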

Environmental report cards are popular mechanisms for summarising the overall status of an environmental system of interest. This paper describes the development of such a report card in the context of a study for Gladstone Harbour in Queensland, Australia. The harbour is within the World Heritage-protected Great Barrier Reef and is the location of major industrial development, hence the interest in developing a way of reporting its health in a statistically valid, transparent and sustainable manner. A Bayesian network (BN) approach was used because of its ability to aggregate and integrate different sources of information, provide probabilistic estimates of interest and update these estimates in a natural manner as new information becomes available.

BN modelling is an iterative process, and in the context of environmental reporting, this is appealing as model development can be initiated while quantitative knowledge is still under development, and subsequently refined as more knowledge becomes available. Moreover, the BN model helps build the maturity of the quantitative information needed and helps target investment in monitoring and/or process modelling activities to inform the approach taken. The model is able to incorporate spatial and temporal information and may be structured in such a way that new indicators of relevance to the underlying environmental gradient being monitored may replace less informative indicators or be added to the model with minimal effort.

The model described here focuses on the environmental component, but has the capacity to also incorporate social, cultural and economic components of the Gladstone Harbour Report Card. Copyright © 2016 John Wiley & Sons, Ltd.

In high-dimensional data settings where *p* ≫ *n*, many penalized regularization approaches have been studied for simultaneous variable selection and estimation. However, in the presence of covariates with weak effects, many existing variable selection methods, including the Lasso and its generalizations, cannot distinguish covariates with weak contributions from those with none. Thus, prediction based only on a subset model of selected covariates can be inefficient. In this paper, we propose a post selection shrinkage estimation strategy to improve the prediction performance of a selected subset model. Such a post selection shrinkage estimator (PSE) is data adaptive and constructed by shrinking a post selection weighted ridge estimator in the direction of a selected candidate subset. Under an asymptotic distributional quadratic risk criterion, its prediction performance is explored analytically. We show that the proposed PSE performs better than the post selection weighted ridge estimator. More importantly, it significantly improves the prediction performance of any candidate subset model selected by most existing Lasso-type variable selection methods. The relative performance of the PSE is demonstrated by both simulation studies and real-data analysis. Copyright © 2016 John Wiley & Sons, Ltd.
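The shrinkage idea can be caricatured in a few lines: fit a full-model ridge estimator, fit least squares on a selected subset, and shrink one toward the other. This sketch uses a fixed weight and a hand-picked subset where the paper uses data-adaptive versions of both:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta = np.r_[np.ones(3), np.zeros(7)]          # 3 strong and 7 null effects
y = X @ beta + rng.standard_normal(n)

# Full-model ridge estimator.
lam = 1.0
ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Least squares restricted to a subset (hand-picked here; a Lasso-type
# selector would supply it in practice).
S = [0, 1, 2]
sub = np.zeros(p)
sub[S] = np.linalg.lstsq(X[:, S], y, rcond=None)[0]

# Shrink the ridge estimator toward the selected submodel; the paper's
# estimator chooses this weight adaptively from the data.
w = 0.7
pse = w * sub + (1 - w) * ridge
```

The point of the construction is that the ridge component retains information from weak covariates that the subset model discards.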

No abstract is available for this article.

This paper provides analytic pricing formulas for discretely monitored geometric Asian options under the regime-switching model. We derive the joint Laplace transform of the discount factor, the log return of the underlying asset price at maturity, and the logarithm of the geometric mean of the asset price. Then, using changes of measure and inversion of the transform, the prices and deltas of fixed-strike and floating-strike geometric Asian options are obtained. As numerical results, we calculate the prices of fixed-strike and floating-strike discrete geometric Asian call options using our formulas and compare them with the results of Monte Carlo simulation. Copyright © 2016 John Wiley & Sons, Ltd.
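A Monte Carlo comparison of the kind the authors run can be sketched in a single-regime Black–Scholes setting (the paper's model is regime-switching; all parameter values below are illustrative):

```python
import numpy as np

def mc_geo_asian_call(s0, k, r, sigma, T, m, n_paths, seed=0):
    """Monte Carlo price of a fixed-strike discretely monitored geometric
    Asian call under Black-Scholes (a single-regime benchmark)."""
    rng = np.random.default_rng(seed)
    dt = T / m
    z = rng.standard_normal((n_paths, m))
    # Log prices at the m monitoring dates along each path.
    log_s = np.log(s0) + np.cumsum(
        (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    geo_mean = np.exp(log_s.mean(axis=1))  # geometric mean of monitored prices
    payoff = np.maximum(geo_mean - k, 0.0)
    return np.exp(-r * T) * payoff.mean()

price = mc_geo_asian_call(s0=100, k=100, r=0.05, sigma=0.2,
                          T=1.0, m=12, n_paths=100_000)
```

Because the geometric mean of lognormals is lognormal, this special case also admits a closed form, which makes it a convenient check before moving to the regime-switching transform formulas.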

This paper solves an optimal portfolio selection problem in a discrete-time setting where the states of the financial market cannot be completely observed, relaxing the common assumption that the states of the financial market are fully observable. The dynamics of the unobservable market state is formulated by a hidden Markov chain, and the return of the risky asset is modulated by the unobservable market state. Based on the information observed up to the decision moment, an investor wants to find the optimal multi-period investment strategy that maximizes the mean-variance utility of the terminal wealth. By adopting a sufficient statistic, the portfolio optimization problem with incompletely observable information is converted into one with completely observable information. The optimal investment strategy is derived by using the dynamic programming approach and the embedding technique, and the efficient frontier is also presented. Compared with the case when the market state can be completely observed, we find that the unobservable market state decreases, on average, the amount invested in the risky asset. Finally, numerical results illustrate the impact of the unobservable market state on the efficient frontier, the optimal investment strategy and the Sharpe ratio. Copyright © 2016 John Wiley & Sons, Ltd.

Inspection models applicable to a finite planning horizon are developed for the following lifetime distributions: uniform, exponential, and Weibull. For a given lifetime distribution, maximization of profit is used as the sole optimization criterion for determining both an optimal planning horizon over which a system may be operated and ideal inspection times. Illustrative examples (focusing on the uniform and Weibull distributions and using Mathematica programs) are given. For some situations, evenly spreading inspections over the entire planning horizon is seen to result in the attainment of desirable profit levels over a shorter planning horizon. Scope for further research is outlined as well. Copyright © 2016 John Wiley & Sons, Ltd.

We study the relationship between lumber strength properties and their visual grading characteristics. This topic is central to the analysis of the reliability of lumber products in that it underlies the calculation of structural design values. The approaches described in the paper are adaptations of survival analysis methods commonly used in medical studies. Because each piece of lumber can only be tested to destruction with one method (i.e., each piece cannot be broken twice), modeling these strength distributions simultaneously can be challenging. In the past, this kind of problem has been solved by subjectively matching pieces of lumber, but the quality of this approach is then an issue. The objective of our analysis is to build a predictive model that relates the strength properties to the recorded characteristics. The paper concludes that the type of wood defect (knot), the lumber grade status (off-grade: yes/no), and a lumber's modulus of elasticity have statistically significant effects on wood strength. We find that the Weibull accelerated failure time model provides a better fit than the Cox proportional hazards model in our dataset. Copyright © 2016 John Wiley & Sons, Ltd.
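The Weibull accelerated failure time structure favored by the paper lets covariates act multiplicatively on the time scale. A minimal sketch, with `weibull_aft_survival` an illustrative helper and all coefficient values hypothetical:

```python
import math

def weibull_aft_survival(t, shape, scale0, beta, x):
    """Survival under a Weibull accelerated failure time model: covariates
    rescale time, S(t | x) = exp(-(t / (scale0 * exp(beta.x)))**shape)."""
    scale = scale0 * math.exp(sum(b * xi for b, xi in zip(beta, x)))
    return math.exp(-((t / scale) ** shape))

# With shape = 1 (the exponential special case) and no covariate effect,
# the survival probability at t = 1 is exp(-1).
s = weibull_aft_survival(1.0, 1.0, 1.0, [0.0], [1.0])
```

In an AFT model a positive coefficient stretches the time axis (longer survival), whereas in the Cox model covariates scale the hazard directly; the paper's comparison is between these two structures.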

Empirical evidence suggests that single-factor models do not capture the full dynamics of stochastic volatility, so that a marked discrepancy between their predicted prices and market prices exists for certain ranges of moneyness (deep in-the-money and out-of-the-money) and time to maturity of options. On the other hand, there is empirical reason to believe that the volatility skew fluctuates randomly. Based upon the idea of combining stochastic volatility and stochastic skew, this paper incorporates stochastic elasticity of variance running on a fast timescale into the Heston stochastic volatility model. This multiscale and multifactor hybrid model keeps the analytic tractability of the Heston model as much as possible, while it enhances capturing the complex nature of volatility and skew dynamics. Asymptotic analysis based on ergodic theory yields a closed-form analytic formula for the approximate price of European vanilla options. Subsequently, the effect of adding the stochastic elasticity factor on top of the Heston model is demonstrated in terms of the implied volatility surface. Copyright © 2016 John Wiley & Sons, Ltd.

The problem of an *inspection permutation* or *inspection strategy* (first discussed in a research paper in 1989 and reviewed in another research paper in 1991) is revisited. The problem deals with an *N*-component system whose times to failure are independent but not identically distributed random variables. Each of the failure times follows an exponential distribution. The components in the system are connected in series such that the failure of at least one component entails the failure of the system. Upon system failure, the components are inspected one after another in a hierarchical way (called an inspection permutation) until the component causing the system to fail is identified. The inspection of each component is a process that takes a non-negligible amount of time and is performed at a cost. Once the faulty component is identified, it is repaired at a cost, and the repair process takes some time. After the repair, the system is *good as new* and is put back in operation. The inspection permutation that results in the maximum long run average net income per unit of time (for the *undiscounted case*) or maximum total discounted net income per unit of time (for the *discounted case*) is called the optimal inspection permutation/strategy. A way of determining an optimal inspection permutation in an easier fashion, taking advantage of the improvements in computer software, is proffered. Mathematica is used to showcase how the method works with the aid of a numerical example. Copyright © 2016 John Wiley & Sons, Ltd.
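The core combinatorial step can be illustrated by brute force: with hypothetical failure rates and inspection costs, score each permutation by its expected inspection cost and take the best. This is a deliberately stripped-down sketch of the setting (repair costs, repair and inspection times, and discounting are omitted):

```python
from itertools import permutations

# Hypothetical data for a 3-component series system with exponential
# lifetimes: upon system failure, component j is the culprit with
# probability rates[j] / sum(rates).
rates = {"A": 0.5, "B": 0.3, "C": 0.2}
inspect_cost = {"A": 4.0, "B": 1.0, "C": 2.0}
total_rate = sum(rates.values())

def expected_inspection_cost(order):
    """Expected cost of inspecting components in the given order until
    the failed one is found."""
    cost = 0.0
    for j in rates:
        prefix = order[: order.index(j) + 1]
        cost += (rates[j] / total_rate) * sum(inspect_cost[c] for c in prefix)
    return cost

best = min(permutations(rates), key=expected_inspection_cost)
```

With these numbers the cheap, failure-prone component B is inspected first, in line with the intuition of ordering by failure rate per unit inspection cost.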

The threshold autoregressive model with generalized autoregressive conditionally heteroskedastic (GARCH) specification is a popular nonlinear model that captures the well-known asymmetric phenomena in financial market data. The switching mechanism of hysteretic autoregressive GARCH models differs from that of threshold autoregressive models with GARCH in that regime switching may be delayed when the hysteresis variable lies in a hysteresis zone. This paper conducts a Bayesian model comparison among competing models by designing an adaptive Markov chain Monte Carlo sampling scheme. We illustrate the performance of three kinds of criteria for comparing models with fat-tailed and/or skewed errors: the deviance information criterion, Bayesian predictive information, and an asymptotic version of Bayesian predictive information. A simulation study highlights the properties of the three Bayesian criteria, their accuracy, and their favorable performance as model selection tools. We demonstrate the proposed method in an empirical study of 12 international stock markets, providing evidence that strongly supports both models with skew fat-tailed innovations. Copyright © 2016 John Wiley & Sons, Ltd.

This paper addresses a moral hazard problem in which the agent's actions affect the future profits of the firm. The optimal contract can be implemented through the issuance of variable-coupon debt and the purchase of fixed-coupon debt. Consequently, the resulting capital structure acts as a hedge for the firm, reducing underinvestment costs in bad states of nature and controlling overinvestment incentives in good ones. However, owing to asymmetric information between the firm's manager and investors, this hedge is only partial. The firm's investments vary with cash flows, disclosing the agent's asymmetric information to the principal. Copyright © 2016 John Wiley & Sons, Ltd.

Continuous surveillance of the coefficient of variation is a quality control issue worthy of consideration in several manufacturing and service-oriented companies. In this paper, we present a new method to monitor the squared coefficient of variation by means of two one-sided cumulative sum-type control charts. We study the run length properties of the proposed charts using a Markov chain approach. Several tables are given in order to show the sensitivity of the proposed charts to different deterministic shift sizes and their performance under the random shift size condition. The results show that the proposed control charts have attractive performance compared with some competing charts and are better in many cases. An illustrative example based on a real dataset is discussed. Copyright © 2016 John Wiley & Sons, Ltd.
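The upper one-sided chart can be sketched generically: accumulate excesses of the monitored statistic (here the sample squared coefficient of variation) over a reference value k and signal when the sum crosses a decision limit h. The chart constants and the simulated shift below are illustrative, not the paper's designs:

```python
import numpy as np

def cusum_upper(stats, k, h):
    """Upper one-sided CUSUM: C_t = max(0, C_{t-1} + s_t - k); return the
    first index where C_t > h, or -1 if the chart never signals."""
    c = 0.0
    for t, s in enumerate(stats):
        c = max(0.0, c + s - k)
        if c > h:
            return t
    return -1

def squared_cv(sample):
    """Sample squared coefficient of variation (s / x-bar) squared."""
    return (sample.std(ddof=1) / sample.mean()) ** 2

rng = np.random.default_rng(2)
# 50 in-control subgroups of size 5 (CV^2 near 0.01), then 50 subgroups
# after an upward shift in dispersion.
in_control = [squared_cv(rng.normal(10.0, 1.0, 5)) for _ in range(50)]
shifted = [squared_cv(rng.normal(10.0, 3.0, 5)) for _ in range(50)]
alarm = cusum_upper(in_control + shifted, k=0.05, h=0.1)
```

A mirrored lower-sided chart handles decreases in the squared coefficient of variation; the paper runs both simultaneously and tunes (k, h) via the Markov chain run-length analysis.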

Parametric conditional autoregressive expectiles (CARE) models have been developed to estimate expectiles, which can be used to assess value at risk and expected shortfall. The challenge in parametric CARE modeling lies in the specification of a parametric form. To avoid model misspecification, we propose a nonparametric CARE model via neural networks. The nonparametric CARE model can be estimated by a classical gradient-based nonlinear optimization algorithm, and the consistency of the nonparametric conditional expectile estimators is established. We then apply the nonparametric CARE model to estimating the value at risk and expected shortfall of six stock indices. Empirical results show that the new model is competitive with classical models and parametric CARE models. Copyright © 2016 John Wiley & Sons, Ltd.
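The expectile itself, the quantity CARE models target, has a simple fixed-point characterization. The sketch below computes a sample τ-expectile by iterating asymmetric least squares weights; it is unrelated to the paper's neural network estimator beyond the target quantity:

```python
import numpy as np

def expectile(x, tau, tol=1e-10, max_iter=1000):
    """Sample tau-expectile via the asymmetric-least-squares fixed point:
    the weighted mean with weight tau above the current value and
    1 - tau below it."""
    mu = x.mean()
    for _ in range(max_iter):
        w = np.where(x > mu, tau, 1 - tau)
        new = np.average(x, weights=w)
        if abs(new - mu) < tol:
            break
        mu = new
    return new

x = np.arange(1.0, 6.0)     # sample {1, 2, 3, 4, 5}
m = expectile(x, 0.5)       # → 3.0: the 0.5-expectile is the mean
```

Unlike quantiles, expectiles weight the magnitude of exceedances, which is what makes them usable as a bridge to expected shortfall.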
