This paper considers information properties of coherent systems when component lifetimes are independent and identically distributed. Some results on the entropy of coherent systems in terms of ordering properties of component distributions are established. Moreover, various sufficient conditions are given under which the entropy order among systems, as well as the corresponding dual systems, holds. Specifically, it is proved that, under some conditions, the entropy order among component lifetimes is preserved under coherent system formation. The findings are based on system signatures, a useful tool for comparison purposes. Furthermore, some results on the system's entropy are derived when the lifetimes of components are dependent and identically distributed. Several illustrative examples are also given.

Correlated count data processes with a finite range can be adequately described by a first-order binomial autoregressive model. However, in several practical applications, these data demonstrate extra-binomial variation, and a more appropriate choice is the first-order beta-binomial autoregressive model. In this paper, we propose and study control charts that can be used for the monitoring of these 2 processes. Practical guidelines concerning their statistical design are provided, and the effect of the extra-binomial variation is investigated as well. Finally, the practical application of the proposed schemes is illustrated via a real-data example.

Various charts such as |*S*|, *W*, and *G* are used for monitoring process dispersion. Most of these charts are based on the normality assumption, while the exact distribution of the control statistic is unknown; thus, the limiting distribution of the control statistic is employed, which is applicable only for large sample sizes. In practice, the normality assumption might be violated, while it is not always possible to collect large samples. Furthermore, to use control charts in practice, the in-control state usually has to be estimated, and such estimation has a negative effect on the performance of the control chart. Non-parametric bootstrap control charts can be considered as an alternative when the distribution is unknown, when collecting large samples is not possible, or when the process parameters are estimated from a Phase I data set. In this paper, non-parametric bootstrap multivariate control charts |*S*|, *W*, and *G* are introduced, and their performances are compared against Shewhart-type control charts. The proposed method is based on bootstrapping the data used for estimating the in-control state. Simulation results show satisfactory performance for the bootstrap control charts. Ultimately, the proposed control charts are applied to a real case study.
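As a rough illustration of the bootstrap idea (a sketch, not the authors' exact procedure; the subgroup size, replicate count, and use of the generalized variance |*S*| below are assumptions made for illustration), control limits for a dispersion statistic can be obtained from Phase I data without any normality assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_limits(phase1, n=5, B=2000, alpha=0.0027):
    """Bootstrap control limits for the generalized variance |S|.

    phase1 : (m, p) array of in-control Phase I observations.
    n      : subgroup size used in Phase II monitoring.
    B      : number of bootstrap replicates.
    alpha  : overall false-alarm probability.
    """
    m, _ = phase1.shape
    stats = np.empty(B)
    for b in range(B):
        # Resample a subgroup of size n from the Phase I data
        subgroup = phase1[rng.integers(0, m, size=n)]
        stats[b] = np.linalg.det(np.cov(subgroup, rowvar=False))
    # Control limits are empirical quantiles of the bootstrap statistics
    lcl, ucl = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lcl, ucl

# Skewed (non-normal) bivariate Phase I data
phase1 = rng.exponential(scale=1.0, size=(100, 2))
lcl, ucl = bootstrap_limits(phase1)
```

Because the limits come from empirical quantiles of the resampled statistic, the same recipe applies unchanged to |*S*|, *W*, or *G*.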

In this paper, we investigate the possibility of using multivariate singular spectrum analysis (SSA), a nonparametric technique in the field of time series analysis, for mortality forecasting. We consider a real data application with 9 European countries: Belgium, Denmark, Finland, France, Italy, Netherlands, Norway, Sweden, and Switzerland, over the period 1900 to 2009, and a simulation study based on the data set. The results show the superiority of multivariate SSA in comparison with univariate SSA, in terms of forecasting accuracy.

We develop a simple stock selection model to explain why active equity managers tend to underperform a benchmark index. We motivate our model with the empirical observation that the best performing stocks in a broad market index often perform much better than the other stocks in the index. Randomly selecting a subset of securities from the index may therefore dramatically increase the chance of underperforming the index. The relative likelihood of underperformance by investors choosing active management is likely much more important than the loss those same investors incur due to the higher fees of active management relative to passive index investing. Thus, active management may be even more challenging than previously believed, and the stakes for finding the best active managers may be larger than previously assumed.
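A small simulation illustrates the mechanism (a hypothetical sketch with an invented return distribution, not the authors' model): when a few large winners drive the index, the return of a randomly selected subset is right-skewed, so a majority of subsets fall short of the index average:

```python
import numpy as np

rng = np.random.default_rng(1)

def underperformance_rate(n_stocks=500, n_pick=20, n_trials=2000):
    """Fraction of randomly selected portfolios whose average return
    falls below the equal-weighted index return."""
    # Right-skewed cross-section of returns: a few large winners
    # pull the index mean above the typical stock's return
    returns = rng.lognormal(mean=0.0, sigma=0.7, size=n_stocks) - 1.0
    index_return = returns.mean()
    shortfalls = 0
    for _ in range(n_trials):
        picks = rng.choice(n_stocks, size=n_pick, replace=False)
        if returns[picks].mean() < index_return:
            shortfalls += 1
    return shortfalls / n_trials

rate = underperformance_rate()
```

Under a skewed cross-section such as this one, the subset mean is unbiased for the index return but its distribution is right-skewed, so its median lies below the index return and the shortfall rate tends to exceed one half.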

Microseismic sensing networks are important tools for the assessment and control of geomechanical hazards in underground mining operations. In such a setting, maintaining a healthy network, that is, one that accurately registers all microseisms above some minimum energy level with acceptable levels of noise, is crucial.

In this paper, we develop a nondisruptive method to monitor the health of such a network by associating with each sensor a set of performance indexes, inspired by reliability engineering, which are estimated from the set of registered signals. Our method addresses 2 relevant features of each sensor's behavior, namely, what type of noise is or might be affecting the registering process and how effective the sensor is at registering microseisms.

The method is evaluated through a case study with microseismic data registered at the Chilean underground mine El Teniente. This study illustrates our method's capability to discriminate and rank sensors with satisfactory, poor, or defective sensing performances, as well as to characterize their failure profile or type, information that can be used to plan or optimize the network maintenance procedures.

In recent years, there has been an increasing incidence of failure of rock bolts due to stress corrosion cracking and localized corrosion attack in Australian underground coal mines. Unfortunately, prediction of the risk of failure from results obtained from laboratory testing is not necessarily reliable because it is difficult to properly simulate the mine environment. An alternative way of predicting failure is to apply machine learning methods to data obtained from underground mines. In this paper, support vector machines are built to predict failure of bolts in complex mine environments. Feature transformation and feature selection methods are applied to extract useful information from the original data. A dataset, which had continuous features and spatial data, was used to test the proposed model. The results showed that principal component analysis-based feature transformation provides reliable risk prediction.

The problem of heterogeneity represents a very important issue in the decision-making process. Furthermore, it has become common practice in the context of marketing research to assume that different population parameters are possible depending on sociodemographic and psycho-demographic variables such as age, gender, and social status. In recent decades, numerous approaches have been proposed with the aim of incorporating heterogeneity in the parameter estimation procedures. In partial least squares path modeling, the common practice consists of achieving a global measurement of the differences arising from heterogeneity. This leaves the analyst with the important task of detecting, a posteriori, which causal relationships (ie, path coefficients) produce changes in the model. This is the case in Pathmox analysis, which solves the heterogeneity problem by building a binary tree to detect those segments of the population that cause the heterogeneity. In this article, we propose extending the Pathmox methodology to assess which particular endogenous equation of the structural model and which path coefficients are responsible for the difference.

Model fusion methods, or more generally ensemble methods, are a useful tool for prediction. Combining predictions from a set of models smooths out biases and reduces variances of predictions from individual models, and hence, the combined predictions typically outperform those from individual models. In many algorithms, individual predictions are arithmetically averaged with equal weights. However, in the presence of correlated models, the fusion process is required to account for association between models; otherwise, the naively averaged predictions will be suboptimal. This article describes optimal model fusion principles and illustrates the potential pitfalls of naive fusion in the presence of correlated models for binary data. An efficient algorithm for correlated model fusion is detailed and applied to algorithms mining social media information to predict civil unrest. Copyright © 2017 John Wiley & Sons, Ltd.
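The pitfall of equal-weight averaging under correlation can be sketched with the classic minimum-variance combination of unbiased predictors, in which the weights are built from the inverse error covariance (a generic illustration of the fusion principle, not the article's algorithm for binary data; the covariance values are invented):

```python
import numpy as np

def optimal_fusion_weights(cov):
    """Minimum-variance weights for combining unbiased predictors whose
    errors have covariance matrix `cov`; the weights sum to one."""
    inv = np.linalg.inv(cov)
    ones = np.ones(cov.shape[0])
    w = inv @ ones
    return w / (ones @ w)

# Three models: models 1 and 2 are strongly correlated, model 3 is not
cov = np.array([[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.1],
                [0.1, 0.1, 1.0]])
w = optimal_fusion_weights(cov)
# The nearly redundant pair shares weight; the independent model gets more
# than the naive 1/3, which is what equal-weight averaging would assign
```

Here the two correlated models carry largely the same information, so the optimal scheme down-weights them jointly rather than counting them twice.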

In this work, a set of sequences of information (time series), under a nonstationary regime, with continuous state space, discrete time, and a Markovian dependence, is considered. A new model that expresses the marginal transition density function of one sequence as a linear combination of the marginal transition density functions of all sequences in the set is proposed. The coefficients of this combination are denominated *marginal contribution coefficients* and represent how much each transition density function contributes to the calculation of a chosen transition density function. The proposed coefficient is a marginal coefficient because it can be computed instantaneously, and it may change from one time to another since all calculations are performed before stationarity is reached. This clearly differentiates the new coefficient from well-known measures such as the cross-correlation and the coherence. The idea behind the model is that if a specific sequence has a high marginal contribution to the transition density function of another sequence, the former may be replaced by the latter without losing much information, which means that knowledge of a few densities should be enough to recover the overall behaviour. Simulations, considering 2 chains, are presented so as to check the sensitivity of the proposed model. The methodology is also applied to real data originating from a wire-drawing machine whose main function is to decrease the transverse diameter of metal wires. The behaviour of the level of acceleration of each bearing in relation to the other ones is then verified.

We constructed a Stackelberg game in a supply chain finance (SCF) system including a manufacturer, a capital-constrained retailer, and a bank that provides loans on the basis of the manufacturer's credit guarantee. To emphasize the financial service providers' risks, we assumed that both the bank and the manufacturer are risk-averse and formulated trade-off objective functions for both of them as the convex combination of the expected profit and conditional value-at-risk. To explore the effects of the risk preferences and decision preferences on SCF equilibria, we mathematically analyzed the optimal order quantities, wholesale prices, and interest rates under different risk preference scenarios and performed numerical analyses to quantify the effects. We found that incorporating bank credit with a credit guarantee can effectively balance the retailer's financing risk between the bank and the manufacturer through interest rate charging and wholesale pricing. Moreover, SCF equilibria with risk aversion are highly affected by the degree of both the lender's and the guarantor's risk tolerance with regard to the borrower's default probability and will be more conservative than those in the risk-neutral cases that only maximize expected profit.
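A minimal sketch of such a mean-CVaR trade-off objective, assuming an empirical CVaR estimator and a weight `lam` on expected profit (the paper's exact formulation, notation, and parameter values are not reproduced here):

```python
import numpy as np

def cvar(losses, beta=0.95):
    """Empirical conditional value-at-risk: the mean loss in the
    worst (1 - beta) fraction of outcomes."""
    var = np.quantile(losses, beta)
    return losses[losses >= var].mean()

def trade_off_objective(profits, lam=0.7, beta=0.95):
    """Convex combination of expected profit and (negative) CVaR of the
    loss -profit; lam = 1 recovers the risk-neutral expected profit."""
    return lam * profits.mean() - (1 - lam) * cvar(-profits, beta)

# Degenerate check: with a certain profit of 10, any lam returns 10
profits = np.full(1000, 10.0)
obj = trade_off_objective(profits, lam=0.7)
```

Lowering `lam` shifts the decision maker's emphasis from average profit toward tail losses, which is how risk tolerance enters the equilibrium analysis described above.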

A screening design is an experimental plan used for identifying the expectedly few active factors from potentially many. In this paper, we compare the performances of 3 experimental plans, a Plackett-Burman design, a minimum run resolution IV design, and a definitive screening design, all with 12 and 13 runs, when they are used for screening and 3 out of 6 factors are active. The functional relationship between the response and the factors was allowed to be of 2 types, a second-order model and a model with all main effects and interactions included. D-efficiencies for the designs' ability to estimate parameters in such models were computed, but it turned out that these are not very informative for comparing the screening performances of the 2-level designs to the definitive screening design. The overall screening performance of the 2-level designs was quite good, but there exist situations where the definitive screening design, allowing both screening and estimation of second-order models in the same operation, has a reasonably high probability of being successful.

Urban rail planning is extremely complex, mainly because it is a decision problem under different uncertainties. In practice, travel demand is generally uncertain, and therefore, the timetabling decisions must be based on accurate estimation. This research addresses the optimization of the train timetable at public transit terminals of an urban rail line in a stochastic setting. To cope with stochastic fluctuations of arrival rates, a two-stage stochastic programming model is developed. The objective is to construct a daily train schedule that minimizes the expected waiting time of passengers. Due to the high computational cost of evaluating the expected value objective, the sample average approximation method is applied. The method provides statistical estimates of the optimality gap as well as lower and upper bounds and the associated confidence intervals. Numerical experiments are performed to evaluate the performance of the proposed model and the solution method.
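The sample average approximation idea can be sketched as follows; the demand distribution, cost terms, and candidate headways below are invented purely for illustration and are not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)

def saa_best_headway(headways, n_scenarios=500, cost_per_train=5.0):
    """Sample average approximation: draw arrival-rate scenarios, then
    pick the headway minimizing sampled waiting cost plus operating cost."""
    # Hypothetical uncertain demand: passengers per minute
    rates = rng.gamma(shape=4.0, scale=2.0, size=n_scenarios)
    best_h, best_cost = None, np.inf
    for h in headways:
        # Uniform arrivals wait h/2 on average; rate*h passengers per train
        wait_cost = np.mean(rates * h * (h / 2.0))
        op_cost = cost_per_train * (60.0 / h)  # trains per hour
        total = wait_cost + op_cost
        if total < best_cost:
            best_h, best_cost = h, total
    return best_h, best_cost

h, cost = saa_best_headway([2, 3, 5, 6, 10, 12, 15])
```

Replacing the intractable expectation with a sample mean over scenarios is what makes the optimality-gap and confidence-interval machinery of SAA available.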

The generalized *T*^{2} chart (GT-chart), which is composed of the *T*^{2} statistic based on a small number of principal components and the remaining components, is a popular alternative to the traditional Hotelling's *T*^{2} control chart. However, the application of the GT-chart to high-dimensional data, which are now ubiquitous, encounters difficulties from high dimensionality similar to other multivariate procedures. The sample principal components and their eigenvalues do not consistently estimate the population values, and the GT-chart relying on them is also inconsistent in estimating the control limits. In this paper, we investigate the effects of high dimensionality on the GT-chart and then propose a corrected GT-chart using the recent results of random matrix theory for the spiked covariance model. We numerically show that the corrected GT-chart exhibits superior performance compared to the existing methods, including the GT-chart and Hotelling's *T*^{2} control chart, under various high-dimensional cases. Finally, we apply the proposed corrected GT-chart to monitor chemical processes introduced in the literature.

In this work, we investigate sequential Bayesian estimation for inference of stochastic volatility with variance-gamma (SVVG) jumps in returns. We develop an estimation algorithm that combines the sequential learning auxiliary particle filter with the particle learning filter. Simulation evidence and empirical estimation results indicate that this approach is able to filter latent variances, identify latent jumps in returns, and provide sequential learning about the static parameters of SVVG. We demonstrate the comparative performance of the sequential algorithm and off-line Markov chain Monte Carlo in synthetic and real data applications.

Actuarial risks and financial asset returns are typically heavy tailed. In this paper, we introduce 2 stochastic dominance criteria, called the right-tail order and the left-tail order, to compare these variables stochastically. The criteria are based on comparisons of expected utilities, for 2 classes of utility functions that give more weight to the right or the left tail (depending on the context) of the distributions. We study their properties, applications, and connections with other classical criteria, including the increasing convex and the second-order stochastic dominance. Finally, we rank some parametric families of distributions and provide empirical evidence of the new stochastic dominance criteria with an example using real data.

In this paper, we extend the closed form moment estimator (ordinary MCFE) for the autoregressive conditional duration model given by Lu et al (2016) and propose some closed form robust moment-based estimators for the multiplicative error model to deal with additive and innovational outliers. The robustification of the closed form estimator is done by replacing the sample mean and sample autocorrelation with robust estimators. These estimators are more robust than the quasi-maximum likelihood estimator (QMLE) often used to estimate this model; they are easy to implement and do not require the use of any numerical optimization procedure or the choice of an initial value. The performance of our proposal in estimating the parameters and forecasting the conditional mean *μ*_{t} of the MEM(1,1) process is compared with existing proposals in the literature via Monte Carlo experiments, and the results of these experiments show that, in general, our proposal outperforms the ordinary MCFE, QMLE, and least absolute deviation estimator in the presence of outliers. Finally, we fit the price durations of IBM stock with the robust closed form estimators and the benchmarks and analyze their performances in estimating model parameters and forecasting the irregularly spaced intraday Value at Risk.

A commonly occurring problem in reliability testing is how to combine pass/fail test data that are collected from disparate environments. We have worked with colleagues in aerospace engineering for a number of years, where the two types of test environments in use are ground tests and flight tests. Ground tests are less expensive and consequently more numerous. Flight tests are much less frequent but directly reflect the actual usage environment. We discuss a relatively simple combining approach that realizes the benefit of a larger sample size by using ground test data but at the same time accounts for the difference between the two environments. We compare our solution with ostensibly more sophisticated approaches to the problem in order to calibrate its limitations. Overall, we find that our proposed solution is robust to its inherent assumptions, which explains its usefulness in practice.

The variance swap is a typical financial tool for managing volatility risk. In this paper, we evaluate different types of variance swaps under a threshold Ornstein–Uhlenbeck model, which exhibits both mean reversion and regime switching features in the underlying asset price. We derive the analytical solution for the joint moment generating function of log-asset prices at two distinct time points. This enables us to price various types of variance swaps analytically.

This paper is concerned with a discrete-time *Geo*/*G*/1 repairable queueing system with Bernoulli feedback and a randomized *N*-policy. The service station may fail randomly while serving customers and is then sent for repair immediately. The randomized *N*-policy means that when the number of customers in the system reaches a given threshold value *N*, the deactivated server is turned on with probability *p* or is left off with probability 1 − *p*. Applying the law of total probability decomposition, renewal theory, and the probability generating function technique, we investigate the queueing performance measures and reliability indices simultaneously. Both the transient queue length distribution and the recursive expressions of the steady-state queue length distribution at various epochs are explicitly derived. Meanwhile, the stochastic decomposition property is presented for the proposed model. Various reliability indices, including the transient and the steady-state unavailability of the service station, the expected number of service station breakdowns during the time interval (0, *t*], and the equilibrium failure frequency of the service station, are also discussed. Finally, an operating cost function is formulated, and the direct search method is employed to numerically find the optimum value of *N* that minimizes the system cost.

The paper is devoted to the stochastic optimistic bilevel optimization problem with a quantile criterion in the upper level problem. If the probability distribution is finite, the problem can be transformed into a mixed-integer nonlinear optimization problem. We formulate assumptions guaranteeing that an optimal solution exists. A production planning problem is used to illustrate the usefulness of the model.

This work considers the optimum design of a life testing experiment with progressive type I interval censoring. A cost minimization-based optimality criterion is proposed. The proposed cost function incorporates the cost of conducting the experiment, opportunity cost, and post-sale cost. It is shown that the proposed cost function is scale invariant for any lifetime distribution whose support does not depend on the parameters of the distribution. The Weibull distribution is considered for illustration. The optimum solution is obtained by a suitable numerical method. A sensitivity analysis is undertaken to study the effect of small perturbations in lifetime model parameter values or cost coefficients.

Establishing a cost-effective management strategy for aquaculture is one of the most important issues in fishery science, and it can be addressed with bio-economic mathematical modeling. This paper deals with the aforementioned issue using a stochastic process model for aquacultured non-renewable fishery resources from the viewpoint of an optimal stopping (timing) problem. The goal of operating the model is to find the optimal criteria for starting to harvest the resources in a stochastic environment, which turn out to be determined from the Bellman equation (BE). The BE has a separation-of-variables structure and can be simplified to a reduced BE with fewer degrees of freedom. The dependence of solutions to the original and reduced BEs on parameters and independent variables is analyzed from both analytical and numerical standpoints. Implications of the analysis results for the management of aquaculture systems are presented as well. A numerical simulation focusing on aquacultured *Plecoglossus altivelis* in Japan validates the mathematical analysis results.

Two hypotheses can explain the declining probability of gaining employment as an unemployment spell wears on: heterogeneity of the unemployed versus duration dependence. The nonparametric tests developed in the literature for testing duration dependence do not account for the fact that an unemployment spell can terminate in ways other than employment. The nonparametric tests developed in this paper extend, under certain conditions, those tests to competing risks. We illustrate our test using US unemployment data, in which we find little consistent evidence for duration dependence. © 2017 The Authors. *Applied Stochastic Models in Business and Industry* published by John Wiley & Sons, Ltd.

This article describes statistical analyses pertaining to marketing data from a large multinational pharmaceutical firm. We describe models for monthly new prescription counts that are written by physicians for the firm's focal drug and for competing drugs, as functions of physician-specific and time-varying predictors. Modeling patterns in discrete-valued time series, and specifically time series of counts, based on large datasets, is the focus of much recent research attention. We first provide a brief overview of Bayesian approaches we have employed for modeling multivariate count time series using Markov Chain Monte Carlo methods. We then discuss a flexible level correlated model framework, which enables us to combine different marginal count distributions and to build a hierarchical model for the vector time series of counts, while accounting for the association among the components of the response vector, as well as possible overdispersion. We employ the integrated nested Laplace approximation (INLA) for fast approximate Bayesian modeling using the R-INLA package (r-inla.org). To enhance computational speed, we first build a model for each physician, use features of the estimated trends in the time-varying parameters in order to cluster the physicians into groups, and fit aggregate models for all physicians within each cluster. Our three-stage analysis can provide useful guidance to the pharmaceutical firm on their marketing actions.

We present a Bayesian decision theoretic approach for developing replacement strategies. In doing so, we consider a semiparametric model to describe the failure characteristics of systems by specifying a nonparametric form for the cumulative intensity function and by taking into account the effect of covariates in a parametric form. Use of a gamma process prior for the cumulative intensity function complicates the Bayesian analysis when the updating is based on failure count data. We develop a Bayesian analysis of the model using Markov chain Monte Carlo methods and determine replacement strategies. Adoption of Markov chain Monte Carlo methods involves a data augmentation algorithm. We show the implementation of our approach using actual data from railroad tracks. Copyright © 2016 John Wiley & Sons, Ltd.


Environmental report cards are popular mechanisms for summarising the overall status of an environmental system of interest. This paper describes the development of such a report card in the context of a study for Gladstone Harbour in Queensland, Australia. The harbour is within the World Heritage-protected Great Barrier Reef and is the location of major industrial development, hence the interest in developing a way of reporting its health in a statistically valid, transparent and sustainable manner. A Bayesian network (BN) approach was used because of its ability to aggregate and integrate different sources of information, provide probabilistic estimates of interest and update these estimates in a natural manner as new information becomes available.

BN modelling is an iterative process, and in the context of environmental reporting, this is appealing as model development can be initiated while quantitative knowledge is still under development, and subsequently refined as more knowledge becomes available. Moreover, the BN model helps build the maturity of the quantitative information needed and helps target investment in monitoring and/or process modelling activities to inform the approach taken. The model is able to incorporate spatial and temporal information and may be structured in such a way that new indicators of relevance to the underlying environmental gradient being monitored may replace less informative indicators or be added to the model with minimal effort.

The model described here focuses on the environmental component, but has the capacity to also incorporate social, cultural and economic components of the Gladstone Harbour Report Card.

Business failure prediction models are important in providing a warning for preventing financial distress and giving stakeholders time to react to a crisis. The empirical approach to corporate distress analysis and forecasting has recently attracted new attention from financial institutions, academics, and practitioners. In fact, this field is as interesting today as it was in the 1930s, and over the last 80 years, a remarkable body of both theoretical and empirical studies on this topic has been published. Nevertheless, some issues are still under investigation, such as the selection of financial ratios to define business failure and the identification of an optimal subset of predictors. For this purpose, there exist a large number of methods that can be used, although their drawbacks are usually neglected in this context. Moreover, most variable selection procedures are based on some very strict assumptions (linearity and additivity) that make their application difficult in business failure prediction. This paper proposes to overcome these limits by selecting relevant variables using a nonparametric method named Rodeo that is consistent even when the aforementioned assumptions are not satisfied. We also compare Rodeo with two other variable selection methods (Lasso and Adaptive Lasso), and the empirical results demonstrate that our proposed procedure outperforms the others in terms of positive/negative predictive value and is able to capture the nonlinear effects of the selected variables.

The inherent uncertainty in supply chain systems compels managers to be more attentive to the stochastic nature of the systems' major parameters, such as suppliers' reliability, retailers' demands, and facility production capacities. To deal with the uncertainty inherent in the parameters of stochastic supply chain optimization problems and to determine optimal or close-to-optimal policies, many approximate deterministic equivalent models have been proposed. In this paper, we consider the stochastic periodic inventory routing problem modeled as a chance-constrained optimization problem. We then propose a safety stock-based deterministic optimization model to determine near-optimal solutions to this chance-constrained optimization problem. We investigate the issue of adequately setting safety stocks at the supplier's warehouse and at the retailers so that the promised service levels to the retailers are guaranteed, while distribution costs as well as inventory throughout the system are optimized. The proposed deterministic models strive to optimize the safety stock levels in line with the planned service levels at the retailers. Different safety stock models are investigated and analyzed, and the results are illustrated on two comprehensively worked out cases. We conclude this analysis with some insights on how safety stocks are to be determined, allocated, and coordinated in the stochastic periodic inventory routing problem.
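For intuition, the textbook safety stock formula under normally distributed demand (a standard approximation, not necessarily one of the specific safety stock models investigated in the paper) sets the stock to a service-level quantile of demand variability accumulated over the lead time:

```python
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level, sigma_demand, lead_time):
    """Safety stock = z * sigma * sqrt(L): the service-level quantile of
    demand variability accumulated over the replenishment lead time."""
    z = NormalDist().inv_cdf(service_level)
    return z * sigma_demand * sqrt(lead_time)

# 95% cycle service level, per-period demand std 20, lead time 4 periods
ss = safety_stock(0.95, sigma_demand=20.0, lead_time=4)
```

Tying the stock level to the promised service level in this way is what converts the chance constraint into a deterministic inventory requirement.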

In this paper, we consider a repairable system in which each failure is one of two types. One is a minor failure that can be corrected with minimal repair, whereas the other is a catastrophic failure that destroys the system. The total number of failures until the catastrophic failure is a positive random variable with a given probability vector. It is assumed that there is some partial information about the failure status of the system, and various properties of the conditional probability of system failure are then studied. Mixture representations of the reliability function of the system in terms of the reliability function of the residual lifetimes of record values are obtained. Some stochastic properties of the conditional probabilities and the residual lifetimes of two systems are finally discussed.

In this article, we introduce a likelihood-based estimation method for the stochastic volatility in mean (SVM) model with scale mixtures of normal (SMN) distributions. Our estimation method is based on the fact that the powerful hidden Markov model (HMM) machinery can be applied to evaluate an arbitrarily accurate approximation of the likelihood of an SVM model with SMN distributions. Likelihood-based estimation of the parameters of stochastic volatility models in general, and SVM models with SMN distributions in particular, is usually regarded as challenging because the likelihood is a high-dimensional multiple integral. However, the HMM approximation, which is very easy to implement, makes numerical maximization of the likelihood feasible and leads to simple formulae for forecast distributions, for computing appropriately defined residuals, and for decoding, that is, estimating the volatility of the process.

The prevailing engineering principle that redundancy at the component level is superior to redundancy at the system level is generalized to coherent systems with dependent components. Sufficient (and necessary) conditions are presented to compare component and system redundancies by means of the usual stochastic, hazard rate, reversed hazard rate, and likelihood ratio orderings. Explicit numerical examples are provided to illustrate the theoretical findings. Some related results in the literature are generalized and extended.

In this paper, we introduce a unifying approach to option pricing under continuous-time stochastic volatility models with jumps. For European style options, a new semi-closed pricing formula is derived using the generalized complex Fourier transform of the corresponding partial integro-differential equation. This approach is successfully applied to models with different volatility diffusion and jump processes. We also discuss how to price options with different payoff functions in a similar way.

In particular, we focus on a log-normal and a log-uniform jump diffusion stochastic volatility model, originally introduced by Bates and by Yan and Hanson, respectively. The comparison of existing and newly proposed option pricing formulas with respect to time efficiency and precision is discussed. We also derive a representation of an option price under a new approximative fractional jump diffusion model that differs from the aforementioned models, especially for out-of-the-money contracts.