Urban rail planning is extremely complex, largely because it is a decision problem under multiple uncertainties. In practice, travel demand is generally uncertain, and timetabling decisions must therefore be based on accurate estimates of it. This research addresses the optimization of train timetables at the public transit terminals of an urban rail line in a stochastic setting. To cope with stochastic fluctuations in arrival rates, a two-stage stochastic programming model is developed. The objective is to construct a daily train schedule that minimizes the expected waiting time of passengers. Because evaluating the expected-value objective is computationally expensive, the sample average approximation method is applied. The method provides statistical estimates of the optimality gap, as well as lower and upper bounds and the associated confidence intervals. Numerical experiments are performed to evaluate the performance of the proposed model and the solution method.
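As an illustration of the sample average approximation idea, the sketch below replaces the expected cost of a headway decision with an average over sampled arrival-rate scenarios and grid-searches the first-stage decision. The cost model, parameter values, and scenario distribution are illustrative assumptions, not the paper's model.

```python
import random

random.seed(42)

# Toy two-stage setting: a headway h (minutes) is chosen before the
# arrival rate lam (passengers/min) is realized.  Waiting cost per unit
# time is roughly lam * h / 2; running trains every h minutes costs
# C_TRAIN / h per minute.  Both terms are illustrative assumptions.
C_TRAIN = 40.0
SCENARIOS = [random.lognormvariate(0.0, 0.5) for _ in range(2000)]  # sampled lam

def sample_average_cost(h, scenarios=SCENARIOS):
    """Sample average approximation of E[lam * h / 2 + C_TRAIN / h]."""
    return sum(lam * h / 2 + C_TRAIN / h for lam in scenarios) / len(scenarios)

# Direct grid search over candidate headways for the first-stage decision.
grid = [0.5 * k for k in range(2, 41)]            # 1.0 .. 20.0 minutes
best_h = min(grid, key=sample_average_cost)
```

Repeating the optimization over independent scenario batches is what yields the statistical lower/upper bound estimates and optimality-gap confidence intervals mentioned above.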

The generalized *T*^{2} chart (GT-chart), which is composed of the *T*^{2} statistic based on a small number of principal components and the remaining components, is a popular alternative to the traditional Hotelling's *T*^{2} control chart. However, the application of the GT-chart to high-dimensional data, which are now ubiquitous, encounters difficulties arising from high dimensionality, as do other multivariate procedures. The sample principal components and their eigenvalues do not consistently estimate the population values, and the GT-chart relying on them is also inconsistent in estimating the control limits. In this paper, we investigate the effects of high dimensionality on the GT-chart and then propose a corrected GT-chart using recent results from random matrix theory for the spiked covariance model. We numerically show that the corrected GT-chart exhibits superior performance compared to the existing methods, including the GT-chart and Hotelling's *T*^{2} control chart, under various high-dimensional cases. Finally, we apply the proposed corrected GT-chart to monitor chemical processes introduced in the literature.
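For reference, here is a minimal sketch of the underlying *T*^{2} statistic for a two-dimensional observation with an assumed known in-control mean and covariance; the principal-component split of the GT-chart and its control limits are beyond this sketch.

```python
def hotelling_t2(x, mean, cov):
    """T^2 = (x - mu)' S^{-1} (x - mu) for a 2-dimensional observation,
    inverting the 2x2 covariance matrix by hand."""
    dx = [x[0] - mean[0], x[1] - mean[1]]
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    y = [inv[0][0] * dx[0] + inv[0][1] * dx[1],
         inv[1][0] * dx[0] + inv[1][1] * dx[1]]
    return dx[0] * y[0] + dx[1] * y[1]
```

With the identity covariance, *T*^{2} reduces to the squared Euclidean distance from the mean; a signal is raised when the statistic exceeds a control limit, which is exactly the quantity the corrected chart re-estimates in high dimensions.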

In this work, we investigate sequential Bayesian estimation for inference of stochastic volatility with variance-gamma (SVVG) jumps in returns. We develop an estimation algorithm that combines the sequential learning auxiliary particle filter with the particle learning filter. Simulation evidence and empirical estimation results indicate that this approach is able to filter latent variances, identify latent jumps in returns, and provide sequential learning about the static parameters of SVVG. We demonstrate the comparative performance of the sequential algorithm and off-line Markov chain Monte Carlo in synthetic and real data applications.
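A bootstrap particle filter conveys the flavor of the filtering step. The sketch below tracks the latent log-variance of a plain stochastic volatility model without jumps or parameter learning, so it is far simpler than the SVVG algorithm described here; all parameter values are illustrative.

```python
import math
import random

random.seed(1)

def bootstrap_filter(y, n_part=500, phi=0.95, sig=0.2):
    """Bootstrap particle filter for the latent log-variance of a plain
    SV model (no jumps): h_t = phi*h_{t-1} + sig*eps,  y_t ~ N(0, e^{h_t}).
    Returns the filtered mean of h_t at each time step."""
    sd0 = sig / math.sqrt(1 - phi ** 2)           # stationary sd of h
    parts = [random.gauss(0.0, sd0) for _ in range(n_part)]
    means = []
    for yt in y:
        # Propagate particles through the state equation.
        parts = [phi * h + sig * random.gauss(0.0, 1.0) for h in parts]
        # Weight by the return likelihood N(0, e^h), then resample.
        w = [math.exp(-0.5 * yt * yt / math.exp(h) - 0.5 * h) for h in parts]
        parts = random.choices(parts, weights=w, k=n_part)
        means.append(sum(parts) / n_part)
    return means
```

The auxiliary and particle learning filters refine exactly this propagate-weight-resample loop, adding look-ahead resampling and sufficient statistics for the static parameters.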

Actuarial risks and financial asset returns are typically heavy tailed. In this paper, we introduce two stochastic dominance criteria, called the right-tail order and the left-tail order, to compare these variables stochastically. The criteria are based on comparisons of expected utilities, for two classes of utility functions that give more weight to the right or the left tail (depending on the context) of the distributions. We study their properties, applications, and connections with other classical criteria, including increasing convex order and second-order stochastic dominance. Finally, we rank some parametric families of distributions and provide empirical evidence of the new stochastic dominance criteria with an example using real data.
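The classical second-order criterion that the new tail orders connect to can be checked empirically: for two equally weighted samples of the same size, dominance reduces to comparing partial sums of the sorted values. This is the textbook criterion, not the right- or left-tail orders introduced in the paper.

```python
def ssd_dominates(x, y):
    """Empirical second-order stochastic dominance for equal-size,
    equally weighted samples: x dominates y iff every partial sum of
    the sorted values of x is at least the corresponding one for y."""
    assert len(x) == len(y)
    cx = cy = 0.0
    for a, b in zip(sorted(x), sorted(y)):
        cx += a
        cy += b
        if cx < cy - 1e-12:
            return False
    return True
```

For example, a sure 2 dominates a fair 1-or-3 gamble with the same mean, since every risk-averse expected-utility maximizer prefers it.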

In this paper, we extend the closed form moment estimator (ordinary MCFE) for the autoregressive conditional duration model given by Lu et al. (2016) and propose some closed form robust moment-based estimators for the multiplicative error model (MEM) to deal with additive and innovational outliers. The robustification of the closed form estimator is done by replacing the sample mean and sample autocorrelation with robust counterparts. These estimators are more robust than the quasi-maximum likelihood estimator (QMLE) often used to estimate this model; they are easy to implement and require neither a numerical optimization procedure nor the choice of initial values. The performance of our proposal in estimating the parameters and forecasting the conditional mean *μ*_{t} of the MEM(1,1) process is compared with existing proposals in the literature via Monte Carlo experiments, and the results show that our proposal generally outperforms the ordinary MCFE, QMLE, and least absolute deviation estimator in the presence of outliers. Finally, we fit the price durations of IBM stock with the robust closed form estimators and the benchmarks and analyze their performance in estimating model parameters and forecasting the irregularly spaced intraday Value at Risk.

A commonly occurring problem in reliability testing is how to combine pass/fail test data that are collected from disparate environments. We have worked with colleagues in aerospace engineering for a number of years, where the two types of test environments in use are ground tests and flight tests. Ground tests are less expensive and consequently more numerous. Flight tests are much less frequent but directly reflect the actual usage environment. We discuss a relatively simple combining approach that realizes the benefit of a larger sample size by using ground test data while accounting for the difference between the two environments. We compare our solution with seemingly more sophisticated approaches to the problem in order to calibrate its limitations. Overall, we find that our proposed solution is robust to its inherent assumptions, which explains its usefulness in practice. Copyright © 2017 John Wiley & Sons, Ltd.
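One simple way to picture such a combination (not necessarily the authors' model) is a beta-binomial posterior in which each ground trial is discounted by a weight 0 ≤ w ≤ 1 to reflect the difference between environments:

```python
def combined_posterior(flight_pass, flight_n, ground_pass, ground_n,
                       w=0.5, a0=1.0, b0=1.0):
    """Posterior mean of flight reliability under a Beta(a0, b0) prior,
    counting each ground trial with weight w (w=0: ignore ground data,
    w=1: treat both environments as identical)."""
    a = a0 + flight_pass + w * ground_pass
    b = b0 + (flight_n - flight_pass) + w * (ground_n - ground_pass)
    return a / (a + b)
```

The discount weight plays the role of the environment-difference adjustment: large ground samples sharpen the estimate, but their influence is capped by w.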

Variance swaps are a typical financial tool for managing volatility risk. In this paper, we evaluate different types of variance swaps under a threshold Ornstein–Uhlenbeck model, which exhibits both mean reversion and regime switching features in the underlying asset price. We derive the analytical solution for the joint moment generating function of the log-asset prices at two distinct time points. This enables us to price various types of variance swaps analytically. Copyright © 2017 John Wiley & Sons, Ltd.
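The floating leg being priced is realized variance. Here is a minimal sketch of a standard variance swap payoff computed from a discrete price path; conventions such as the annualization factor vary by contract.

```python
import math

def realized_variance(prices, periods_per_year=252):
    """Annualized realized variance of a price path, the floating leg
    of a standard variance swap."""
    logret = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    return periods_per_year / len(logret) * sum(r * r for r in logret)

def variance_swap_payoff(prices, strike_var, notional=1.0):
    """Payoff per unit of variance notional: N * (realized - strike)."""
    return notional * (realized_variance(prices) - strike_var)
```

A fair strike sets the expected payoff to zero, which is why the joint moment generating function of log-prices at two dates is the key analytical object.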

This paper is concerned with a discrete-time *Geo*/*G*/1 repairable queueing system with Bernoulli feedback and a randomized *N*-policy. The service station may fail randomly while serving customers and is then sent for repair immediately. The randomized *N*-policy means that when the number of customers in the system reaches a given threshold value *N*, the deactivated server is turned on with probability *p* or is left off with probability 1−*p*. Applying the law of total probability decomposition, renewal theory, and the probability generating function technique, we investigate queueing performance measures and reliability indices simultaneously. Both the transient queue length distribution and recursive expressions for the steady-state queue length distribution at various epochs are explicitly derived. Meanwhile, the stochastic decomposition property is presented for the proposed model. Various reliability indices, including the transient and steady-state unavailability of the service station, the expected number of service station breakdowns during the time interval (0, *t*], and the equilibrium failure frequency of the service station, are also discussed. Finally, an operating cost function is formulated, and the direct search method is employed to numerically find the optimum value of *N* that minimizes the system cost. Copyright © 2017 John Wiley & Sons, Ltd.
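A rough simulation conveys how a direct search over *N* works. The toy model below keeps only a randomized activation rule plus holding and setup costs, and omits repairs, feedback, and the analytical machinery; all probabilities and costs are illustrative assumptions.

```python
import random

random.seed(7)

# Discrete-time single-server queue under a simplified randomized
# N-policy: the idle server is switched on with probability P once the
# queue reaches the threshold (re-examined each slot) and switches off
# when the system empties.  LAM, MU, P, HOLD, SETUP are illustrative.
LAM, MU, P = 0.3, 0.6, 0.8
HOLD, SETUP, SLOTS = 1.0, 25.0, 100_000

def average_cost(n_threshold):
    queue, on, cost = 0, False, 0.0
    for _ in range(SLOTS):
        if on and queue > 0 and random.random() < MU:   # service completion
            queue -= 1
            if queue == 0:
                on = False                              # deactivate when empty
        if random.random() < LAM:                       # Bernoulli arrival
            queue += 1
        if not on and queue >= n_threshold and random.random() < P:
            on = True
            cost += SETUP                               # setup cost per activation
        cost += HOLD * queue                            # holding cost per slot
    return cost / SLOTS

costs = {n: average_cost(n) for n in range(1, 13)}      # direct search over N
best_N = min(costs, key=costs.get)
```

Larger thresholds trade fewer setups against more holding cost, which is the tension the operating cost function formalizes.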

The paper is devoted to the stochastic optimistic bilevel optimization problem with a quantile criterion in the upper level problem. If the probability distribution is finite, the problem can be transformed into a mixed-integer nonlinear optimization problem. We formulate assumptions guaranteeing that an optimal solution exists. A production planning problem is used to illustrate the usefulness of the model. Copyright © 2017 John Wiley & Sons, Ltd.

This work considers the optimum design of a life testing experiment with progressive type I interval censoring. A cost minimization-based optimality criterion is proposed. The proposed cost function incorporates the cost of conducting the experiment, opportunity cost, and post-sale cost. It is shown that the proposed cost function is scale invariant for any lifetime distribution whose support does not depend on the parameters of the distribution. The Weibull distribution is considered for illustration. The optimum solution is obtained by a suitable numerical method. A sensitivity analysis is undertaken to study the effect of small perturbations in lifetime model parameter values or cost coefficients. Copyright © 2017 John Wiley & Sons, Ltd.

Establishing a cost-effective management strategy for aquaculture is one of the most important issues in fishery science and can be addressed with bio-economic mathematical modeling. This paper deals with this issue using a stochastic process model for aquacultured non-renewable fishery resources, from the viewpoint of an optimal stopping (timing) problem. The goal of operating the model is to find the optimal criteria for starting to harvest the resources in a stochastic environment, which turn out to be determined by the Bellman equation (BE). The BE has a separation-of-variables structure and can be simplified to a reduced BE with fewer degrees of freedom. The dependence of solutions to the original and reduced BEs on parameters and independent variables is analyzed from both analytical and numerical standpoints. Implications of the analysis results for the management of aquaculture systems are presented as well. A numerical simulation focusing on aquacultured *Plecoglossus altivelis* in Japan validates the mathematical analysis results. Copyright © 2017 John Wiley & Sons, Ltd.
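The role of the Bellman equation in an optimal stopping problem can be illustrated with value iteration on a toy harvesting model, a discrete caricature rather than the paper's reduced BE: biomass grows stochastically, stopping pays the current level, and waiting is costly and discounted. All parameter values are illustrative.

```python
# Value iteration for the Bellman equation of a toy optimal-stopping
# harvest problem: biomass level b in {0,...,N} grows one step with
# probability q each period; stopping pays b; waiting costs c per
# period and discounts future value by beta.
N, q, c, beta = 20, 0.6, 0.05, 0.95

def continuation(V):
    """Expected discounted value of waiting one more period."""
    return [-c + beta * (q * V[min(b + 1, N)] + (1 - q) * V[b])
            for b in range(N + 1)]

V = [0.0] * (N + 1)
for _ in range(500):                       # V <- max(stop now, continue)
    cont = continuation(V)
    V = [max(float(b), cont[b]) for b in range(N + 1)]

cont = continuation(V)
stop_now = [float(b) >= cont[b] for b in range(N + 1)]   # stopping region
```

The converged solution exhibits the threshold structure typical of such problems: harvest only once the biomass is large enough.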

In this paper, we introduce a unifying approach to option pricing under continuous-time stochastic volatility models with jumps. For European style options, a new semi-closed pricing formula is derived using the generalized complex Fourier transform of the corresponding partial integro-differential equation. This approach is successfully applied to models with different volatility diffusion and jump processes. We also discuss how to price options with different payoff functions in a similar way.

In particular, we focus on a log-normal and a log-uniform jump diffusion stochastic volatility model, originally introduced by Bates and by Yan and Hanson, respectively. The comparison of existing and newly proposed option pricing formulas with respect to time efficiency and precision is discussed. We also derive a representation of an option price under a new approximative fractional jump diffusion model that differs from the aforementioned models, especially for out-of-the-money contracts. Copyright © 2017 John Wiley & Sons, Ltd.
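As a point of comparison for log-normal jump models, the classical Merton (1976) series expresses a call price as a Poisson mixture of Black-Scholes prices. This is a standard reference formula for a jump diffusion with constant volatility, not the Fourier-transform pricing under stochastic volatility developed in the paper.

```python
import math

def bs_call(S, K, r, sigma, T):
    """Black-Scholes European call price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

def merton_call(S, K, r, sigma, T, lam, mu_j, delta, n_terms=60):
    """Merton (1976) series for a call under log-normal jumps: a Poisson
    mixture of Black-Scholes prices with adjusted rate and volatility."""
    kappa = math.exp(mu_j + 0.5 * delta ** 2) - 1.0   # mean relative jump size
    lam_p = lam * (1.0 + kappa)
    total = 0.0
    for n in range(n_terms):
        weight = math.exp(-lam_p * T) * (lam_p * T) ** n / math.factorial(n)
        sigma_n = math.sqrt(sigma ** 2 + n * delta ** 2 / T)
        r_n = r - lam * kappa + n * math.log(1.0 + kappa) / T
        total += weight * bs_call(S, K, r_n, sigma_n, T)
    return total
```

With the jump intensity set to zero the series collapses to the plain Black-Scholes price, a convenient sanity check for any jump-diffusion pricer.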

The prevailing engineering principle that redundancy at the component level is superior to redundancy at the system level is generalized to coherent systems with dependent components. Sufficient (and necessary) conditions are presented to compare component and system redundancies by means of the usual stochastic, hazard rate, reversed hazard rate, and likelihood ratio orderings. Explicit numerical examples are provided to illustrate the theoretical findings. Some related results in the literature are generalized and extended. Copyright © 2017 John Wiley & Sons, Ltd.
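For independent components, the principle admits a two-line check: duplicating each component beats duplicating the whole series system. The sketch below covers only this classical independent case, not the dependent-component generalization studied here.

```python
import math

def series_reliability(ps):
    """Series system of independent components with reliabilities ps."""
    return math.prod(ps)

def component_redundancy(ps):
    """Duplicate each component: a pair fails only if both copies fail."""
    return series_reliability([1 - (1 - p) ** 2 for p in ps])

def system_redundancy(ps):
    """Duplicate the whole series system and place the copies in parallel."""
    return 1 - (1 - series_reliability(ps)) ** 2
```

For two components with reliabilities 0.9 and 0.8, component-level redundancy yields 0.9504 versus 0.9216 at the system level, illustrating the ordering the paper extends to dependent components.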

Two hypotheses can explain the declining probability of gaining employment as an unemployment spell wears on: heterogeneity of the unemployed versus duration dependence. The nonparametric tests developed in the literature for testing duration dependence do not account for the fact that an unemployment spell can terminate in ways other than employment. The nonparametric tests developed in this paper extend, under certain conditions, those tests to competing risks. We illustrate our test using US unemployment data, in which we find little consistent evidence for duration dependence. © 2017 The Authors. *Applied Stochastic Models in Business and Industry* published by John Wiley & Sons, Ltd.

In this article, we introduce a likelihood-based estimation method for the stochastic volatility in mean (SVM) model with scale mixtures of normal (SMN) distributions. Our estimation method is based on the fact that the powerful hidden Markov model (HMM) machinery can be applied to evaluate an arbitrarily accurate approximation of the likelihood of an SVM model with SMN distributions. Likelihood-based estimation of the parameters of stochastic volatility models in general, and SVM models with SMN distributions in particular, is usually regarded as challenging because the likelihood is a high-dimensional multiple integral. However, the HMM approximation, which is very easy to implement, makes numerical maximization of the likelihood feasible and leads to simple formulae for forecast distributions, for computing appropriately defined residuals, and for decoding, that is, estimating the volatility of the process. Copyright © 2017 John Wiley & Sons, Ltd.
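The HMM approximation amounts to discretizing the volatility process and running the forward algorithm. The sketch below does this for a basic SV model without the in-mean term or SMN errors, with illustrative parameter values.

```python
import math

def sv_loglik_hmm(y, phi=0.95, sigma_eta=0.2, m=50):
    """Approximate log-likelihood of a basic SV model,
        h_t = phi * h_{t-1} + sigma_eta * eps_t,   y_t ~ N(0, exp(h_t)),
    by discretizing h onto m grid points spanning +/- 3 stationary
    standard deviations and running the scaled HMM forward algorithm."""
    def npdf(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    sd_h = sigma_eta / math.sqrt(1 - phi ** 2)        # stationary sd of h
    grid = [-3 * sd_h + 6 * sd_h * i / (m - 1) for i in range(m)]
    # Row-normalized Gaussian transition kernel on the grid.
    P = []
    for h in grid:
        row = [npdf(g, phi * h, sigma_eta) for g in grid]
        s = sum(row)
        P.append([r / s for r in row])
    start = [npdf(g, 0.0, sd_h) for g in grid]        # stationary start
    alpha = [v / sum(start) for v in start]
    loglik = 0.0
    for t, yt in enumerate(y):
        if t > 0:                                     # prediction step
            alpha = [sum(alpha[i] * P[i][j] for i in range(m)) for j in range(m)]
        alpha = [alpha[j] * npdf(yt, 0.0, math.exp(grid[j] / 2)) for j in range(m)]
        c = sum(alpha)                                # scaling constant
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
    return loglik
```

Because the approximate likelihood is an ordinary function of the parameters, it can be handed to any numerical optimizer, which is the feasibility point made above.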

In this paper, we consider a repairable system in which each failure is one of two types. One is a minor failure that can be corrected with minimal repair, whereas the other is a catastrophic failure that destroys the system. The total number of failures until the catastrophic failure is a positive random variable with a given probability vector. It is assumed that there is some partial information about the failure status of the system, and various properties of the conditional probability of system failure are studied. Mixture representations of the reliability function of the system, in terms of the reliability functions of the residual lifetimes of record values, are obtained. Some stochastic properties of the conditional probabilities and the residual lifetimes of two systems are finally discussed. Copyright © 2017 John Wiley & Sons, Ltd.

The inherent uncertainty in supply chain systems compels managers to be more perceptive to the stochastic nature of the systems' major parameters, such as suppliers' reliability, retailers' demands, and facility production capacities. To deal with the uncertainty inherent in the parameters of stochastic supply chain optimization problems and to determine optimal or close-to-optimal policies, many approximate deterministic equivalent models have been proposed. In this paper, we consider the stochastic periodic inventory routing problem modeled as a chance-constrained optimization problem. We then propose a safety stock-based deterministic optimization model to determine near-optimal solutions to this chance-constrained optimization problem. We investigate the issue of adequately setting safety stocks at the supplier's warehouse and at the retailers so that the promised service levels to the retailers are guaranteed, while distribution costs as well as inventory throughout the system are optimized. The proposed deterministic models strive to optimize the safety stock levels in line with the planned service levels at the retailers. Different safety stock models are investigated and analyzed, and the results are illustrated on two comprehensively worked-out cases. We conclude this analysis with some insights on how safety stocks are to be determined, allocated, and coordinated in the stochastic periodic inventory routing problem. Copyright © 2017 John Wiley & Sons, Ltd.
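The textbook normal-demand safety stock rule gives the flavor of how a service level is converted into a stock target; the paper's routing-specific models are more elaborate.

```python
from statistics import NormalDist

def safety_stock(service_level, sigma_demand, lead_time):
    """Textbook normal-demand safety stock z * sigma * sqrt(L), where z
    is the standard normal quantile of the target cycle service level."""
    z = NormalDist().inv_cdf(service_level)
    return z * sigma_demand * lead_time ** 0.5

def order_up_to_level(mu_demand, service_level, sigma_demand, lead_time):
    """Expected lead-time demand plus the safety stock buffer."""
    return mu_demand * lead_time + safety_stock(service_level, sigma_demand, lead_time)
```

A 50% service level needs no buffer at all, while pushing toward 95% or 99% inflates the buffer through the normal quantile, which is exactly the chance-constraint trade-off being approximated deterministically.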

Business failure prediction models are important in providing a warning that can prevent financial distress and give stakeholders time to react to a crisis in a timely manner. The empirical approach to corporate distress analysis and forecasting has recently attracted new attention from financial institutions, academics, and practitioners. In fact, this field is as interesting today as it was in the 1930s, and over the last 80 years, a remarkable body of both theoretical and empirical studies on this topic has been published. Nevertheless, some issues are still under investigation, such as the selection of financial ratios to define business failure and the identification of an optimal subset of predictors. A large number of methods can be used for this purpose, although their drawbacks are usually neglected in this context. Moreover, most variable selection procedures are based on some very strict assumptions (linearity and additivity) that make their application difficult in business failure prediction. This paper proposes to overcome these limits by selecting relevant variables using a nonparametric method named Rodeo that is consistent even when the aforementioned assumptions are not satisfied. We also compare Rodeo with two other variable selection methods (Lasso and Adaptive Lasso), and the empirical results demonstrate that our proposed procedure outperforms the others in terms of positive/negative predictive value and is able to capture the nonlinear effects of the selected variables. Copyright © 2017 John Wiley & Sons, Ltd.

This article describes statistical analyses pertaining to marketing data from a large multinational pharmaceutical firm. We describe models for monthly new prescription counts that are written by physicians for the firm's focal drug and for competing drugs, as functions of physician-specific and time-varying predictors. Modeling patterns in discrete-valued time series, and specifically time series of counts, based on large datasets, is the focus of much recent research attention. We first provide a brief overview of Bayesian approaches we have employed for modeling multivariate count time series using Markov Chain Monte Carlo methods. We then discuss a flexible level correlated model framework, which enables us to combine different marginal count distributions and to build a hierarchical model for the vector time series of counts, while accounting for the association among the components of the response vector, as well as possible overdispersion. We employ the integrated nested Laplace approximation (INLA) for fast approximate Bayesian modeling using the R-INLA package (r-inla.org). To enhance computational speed, we first build a model for each physician, use features of the estimated trends in the time-varying parameters in order to cluster the physicians into groups, and fit aggregate models for all physicians within each cluster. Our three-stage analysis can provide useful guidance to the pharmaceutical firm on their marketing actions. Copyright © 2017 John Wiley & Sons, Ltd.

We present a Bayesian decision theoretic approach for developing replacement strategies. In so doing, we consider a semiparametric model to describe the failure characteristics of systems, specifying a nonparametric form for the cumulative intensity function and accounting for the effect of covariates through a parametric form. The use of a gamma process prior for the cumulative intensity function complicates the Bayesian analysis when the updating is based on failure count data. We develop a Bayesian analysis of the model using Markov chain Monte Carlo methods and determine replacement strategies. Adoption of Markov chain Monte Carlo methods involves a data augmentation algorithm. We show the implementation of our approach using actual data from railroad tracks. Copyright © 2016 John Wiley & Sons, Ltd.

Environmental report cards are popular mechanisms for summarising the overall status of an environmental system of interest. This paper describes the development of such a report card in the context of a study for Gladstone Harbour in Queensland, Australia. The harbour is within the World Heritage-protected Great Barrier Reef and is the location of major industrial development, hence the interest in developing a way of reporting its health in a statistically valid, transparent and sustainable manner. A Bayesian network (BN) approach was used because of its ability to aggregate and integrate different sources of information, provide probabilistic estimates of interest and update these estimates in a natural manner as new information becomes available.

BN modelling is an iterative process, and in the context of environmental reporting, this is appealing as model development can be initiated while quantitative knowledge is still under development, and subsequently refined as more knowledge becomes available. Moreover, the BN model helps build the maturity of the quantitative information needed and helps target investment in monitoring and/or process modelling activities to inform the approach taken. The model is able to incorporate spatial and temporal information and may be structured in such a way that new indicators of relevance to the underlying environmental gradient being monitored may replace less informative indicators or be added to the model with minimal effort.

The model described here focuses on the environmental component, but has the capacity to also incorporate social, cultural and economic components of the Gladstone Harbour Report Card. Copyright © 2016 John Wiley & Sons, Ltd.

No abstract is available for this article.

Bayesian designs make formal use of the experimenter's prior information in planning scientific experiments. In their 1989 paper, Chaloner and Larntz suggested choosing the design that maximizes the prior expectation of a suitable utility function of the Fisher information matrix, which is particularly useful when Fisher's information depends on the unknown parameters of the model. In this paper, their method is applied to a randomized experiment for a binary response model with two treatments, in an adaptive way, that is, updating the prior information at each step on the basis of the accrued data. The utility is the *A*-optimality criterion, and the marginal priors for the parameters of interest are assumed to be beta distributions. This design is shown to converge almost surely to the Neyman allocation. Frequently, however, experiments are designed with more purposes in mind than just inferential ones. In clinical trials for treatment comparison, Bayesian statisticians share with non-Bayesians the goal of randomizing patients to treatment arms so as to assign more patients to the treatment that does better in the trial. One possible approach is to optimize the prior expectation of a combination of the different utilities. This idea is applied in the second part of the paper to the same binary model, under a very general joint prior, combining either *A*- or *D*-optimality with an ethical criterion. The resulting randomized experiment is skewed in favor of the more promising treatment and can be described as Bayes compound optimal. Copyright © 2016 John Wiley & Sons, Ltd.
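The limiting Neyman allocation, and the Bayesian posterior means that an adaptive design would plug into it, are easy to state for a binary response model; the snippet below shows only these two ingredients, not the full *A*-optimal adaptive procedure.

```python
import math

def neyman_allocation(p1, p2):
    """Proportion of patients assigned to arm 1 under the Neyman
    allocation for comparing two success probabilities: allocate in
    proportion to the standard deviations sqrt(p * (1 - p))."""
    s1 = math.sqrt(p1 * (1 - p1))
    s2 = math.sqrt(p2 * (1 - p2))
    return s1 / (s1 + s2)

def posterior_mean(successes, failures, a=1.0, b=1.0):
    """Mean of the Beta(a + s, b + f) posterior under a Beta(a, b) prior."""
    return (a + successes) / (a + b + successes + failures)

# Adaptive step (hypothetical counts): plug current posterior means
# into the target allocation for the next patient.
target = neyman_allocation(posterior_mean(12, 3), posterior_mean(7, 8))
```

Note that Neyman allocation favors the arm whose response probability is closer to 1/2, which is precisely why a purely inferential criterion may conflict with the ethical goal of assigning more patients to the better arm.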

Bayesian optimality criteria provide a design strategy that is robust to parameter misspecification. We develop an approximate design theory for Bayesian *D*-optimality for nonlinear regression models with covariates subject to measurement errors. Both maximum likelihood and least squares estimation are studied, and explicit characterisations of the Bayesian *D*-optimal saturated designs for the Michaelis–Menten, Emax and exponential regression models are provided. Several data examples are considered for the case of no preference for specific parameter values, where Bayesian *D*-optimal saturated designs are calculated using the uniform prior and compared with several other designs, including the corresponding locally *D*-optimal designs, which are often used in practice. Copyright © 2017 John Wiley & Sons, Ltd.

In this article, we consider sample size determination for experiments in which estimation and design are performed by multiple parties. This problem has relevant applications in contexts involving adversarial decision makers, such as control theory, marketing, and drug testing. Specifically, we adopt a decision-theoretic perspective and assume that a decision on an unknown parameter of a statistical model involves two actors, who share the same data and loss function but not the same prior beliefs on the parameter. We also suppose that the second actor has to use the first actor's optimal action, and we finally assume that the experiment is planned by a third party. In this framework, we aim at determining an appropriate sample size so that the posterior expected loss incurred by the second actor in adopting the first actor's optimal action is sufficiently small. We develop general results for the one-parameter exponential family under quadratic loss and analyze the interactive impact of the prior beliefs of the three different parties on the resulting sample sizes. Relationships with other sample size determination criteria are explored. Copyright © 2016 John Wiley & Sons, Ltd.
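A minimal single-prior instance of the general idea, for a normal mean with known data variance under quadratic loss: the posterior expected loss is the posterior variance, so the smallest adequate sample size solves a one-line inequality. The multi-party structure of the paper is ignored here.

```python
import math

def min_sample_size(sigma2, tau2, eps):
    """Smallest n for which the posterior variance of a normal mean,
        1 / (1/tau2 + n/sigma2),
    falls below eps (sigma2: known data variance, tau2: prior variance)."""
    n = sigma2 * (1.0 / eps - 1.0 / tau2)
    return max(0, math.ceil(n))
```

A stronger prior (smaller tau2) or a looser tolerance eps reduces the required n, possibly to zero; with several priors in play, each party's beliefs would enter this calculation differently.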

The intent of this discussion is to highlight opportunities and limitations of utility-based and decision theoretic arguments in clinical trial design. The discussion is based on a specific case study, but the arguments and principles remain valid in general. The example concerns the design of a randomized clinical trial to compare a gel sealant versus standard care for resolving air leaks after pulmonary resection. The design follows a principled approach to optimal decision making, including a probability model for the unknown distributions of time to resolution of air leaks under the two treatment arms and an explicit utility function that quantifies clinical preferences for alternative outcomes. As is typical for any real application, the final implementation includes some compromises from the initial principled setup. In particular, we use the formal decision problem only for the final decision, but use reasonable ad hoc decision boundaries for making interim group sequential decisions that stop the trial early. Beyond the discussion of the particular study, we review more general considerations of using a decision theoretic approach for clinical trial design and summarize some of the reasons why such approaches are not commonly used. Copyright © 2017 John Wiley & Sons, Ltd.

Recent developments in experimental designs for clinical trials are stimulated by advances in personalized medicine. Clinical trials today seek to answer several research questions for multiple patient subgroups. Bayesian designs, which enable the use of sound utilities and prior information, can be tailored to these settings. On the other hand, frequentist concepts of data analysis remain pivotal. For example, type I/II error rates are the accepted standards for reporting trial results and are required by regulatory agencies. Bayesian designs are often perceived as incompatible with these established concepts, a perception that hinders their widespread clinical application. We discuss a pragmatic framework for combining Bayesian experimental designs with frequentist analyses. The approach seeks to facilitate a more widespread application of Bayesian experimental designs in clinical trials. We discuss several applications of this framework in different clinical settings, including bridging trials and multi-arm trials in infectious diseases and glioblastoma. We also outline computational algorithms for implementing the proposed approach. Copyright © 2017 John Wiley & Sons, Ltd.

We designed experiments to determine optimized values of input parameters such as temperature, solution concentration, and power input for synthesizing ceramic materials, specifically titanium dioxide (TiO_{2}) thin films, using microwave radiation, which permits crystallization of these films at significantly lower temperatures (150-160 °C) than conventional techniques (>450 °C). The advantages of lower temperatures are reduced energy requirements and an expanded set of substrates (e.g., plastics) on which the thin film materials can be deposited. Low temperature crystallization permits ceramic thin film materials to be grown directly on delicate plastic substrates (which melt at temperatures over 200 °C) and thus would have important applications in the emerging flexible electronics industry.

Using a linear regression with quadratic terms, we found estimated optimal settings for the reaction parameters. When tried experimentally, these optimal settings produced better results (percentage coverage with film) than any of the data used in estimation. This approach allows fine-tuning of the input parameters and can lead to reliable synthesis of films in a low-temperature environment. It may also be an important step in understanding the fundamental mechanisms underlying the growth of these films in the presence of electromagnetic fields such as microwave radiation. Copyright © 2017 John Wiley & Sons, Ltd.
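A quadratic fit and its stationary point can be sketched in a few lines. The one-dimensional example below, with made-up data, solves the normal equations directly; the actual study involves several input parameters, which would add cross terms.

```python
def fit_quadratic(xs, ys):
    """Least squares fit of y = a + b*x + c*x**2 via the normal
    equations, returning (a, b, c) and the fitted stationary point."""
    S = [[0.0] * 3 for _ in range(3)]          # X'X
    t = [0.0] * 3                              # X'y
    for x, y in zip(xs, ys):
        row = [1.0, x, x * x]
        for i in range(3):
            t[i] += row[i] * y
            for j in range(3):
                S[i][j] += row[i] * row[j]
    for col in range(3):                       # Gaussian elimination
        piv = max(range(col, 3), key=lambda r: abs(S[r][col]))
        S[col], S[piv] = S[piv], S[col]
        t[col], t[piv] = t[piv], t[col]
        for r in range(col + 1, 3):
            f = S[r][col] / S[col][col]
            for j in range(col, 3):
                S[r][j] -= f * S[col][j]
            t[r] -= f * t[col]
    coef = [0.0] * 3
    for i in (2, 1, 0):                        # back substitution
        coef[i] = (t[i] - sum(S[i][j] * coef[j] for j in range(i + 1, 3))) / S[i][i]
    a, b, c = coef
    return a, b, c, -b / (2 * c)
```

When the fitted curvature c is negative, the stationary point -b/(2c) is the estimated optimum of the response, which is what the study then verified experimentally.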

Designing accelerated life tests presents a number of conceptual and computational challenges. We propose a Bayesian decision-theoretic approach for selecting an optimal stress-testing schedule and develop an augmented probability simulation approach to obtain the optimal design. The notion of a ‘dual utility probability density’ enables us to invoke the concept of a conjugate utility function. For accelerated life tests, this allows us to construct an augmented probability simulation that simultaneously optimizes and calculates the expected utility. In doing so, we circumvent many of the computational difficulties associated with evaluating pre-posterior expected utilities. To illustrate our methodology, we consider a single-stage accelerated life test design; our approach naturally extends to multiple-stage designs. Finally, we conclude with suggestions for further research. Copyright © 2017 John Wiley & Sons, Ltd.
