In this paper, we have examined the inflation rates in the Group of Seven countries, investigating issues such as the existence of unit roots, structural breaks, fractional integration and potential non-linearities using a fractional dependence (FD) approach based on Chebyshev polynomials in time. This robust FD approach allows one to test for persistence as well as non-linearity of the series. We first tested for stationarity and structural breaks using classical approaches and obtained inconclusive results regarding the stationarity of the series. Using Bai–Perron tests, we confirmed significant structural breaks, as many as five, in each of the inflation series. However, because structural breaks are closely related to fractional differentiation, the latter approach was also applied. Here, we observed that the estimates of the differencing parameter were quite stable across time, and evidence of unit roots was found for the UK, Canada, France, Japan and the USA; for Germany, we found some evidence of mean reversion, while estimates of *d* above 1 were found for Italy. On the other hand, non-linear deterministic trends were clearly rejected in all cases. Copyright © 2016 John Wiley & Sons, Ltd.
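
As a small, self-contained illustration of the fractional differencing machinery the abstract builds on (a generic sketch, not the authors' Chebyshev-polynomial FD test), the operator (1 − L)^d can be applied to a series via the recursively computed weights of its binomial expansion:

```python
import numpy as np

def frac_diff(x, d):
    """Apply the fractional differencing filter (1 - L)^d to the series x,
    using the binomial expansion of the operator truncated at the sample size."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        # recursion for the expansion weights of (1 - L)^d
        w[k] = w[k - 1] * (k - 1 - d) / k
    # y_t = sum_{k=0}^{t} w_k * x_{t-k}
    return np.array([w[: t + 1] @ x[t::-1] for t in range(n)])

rng = np.random.default_rng(0)
eps = rng.standard_normal(500)
rw = np.cumsum(eps)            # a random walk, i.e. d = 1
back = frac_diff(rw, 1.0)      # differencing with d = 1 recovers the shocks
print(np.allclose(back, eps))  # True
```

For d = 1 the weights reduce to (1, −1, 0, …), i.e. the ordinary first difference; fractional d between 0 and 1 gives the long-memory filters whose order the paper estimates.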

The distribution of a coherent system based on IID components can be written as a mixture of the distributions of progressively Type-II censored order statistics. The coefficients in that representation are called the progressive censoring signature (PC-signature) of the system. In this paper, we explore the basic properties and potential applications of these mixture representations. We show that they can be used to study the associated censoring schemes (and their exact probabilities) that arise as the system operates. We illustrate that, in some sense, the PC-signature provides more information than the classical (Samaniego) signature of the system. Further, this new signature can be used to establish distribution-free ordering properties for system lifetimes. Copyright © 2016 John Wiley & Sons, Ltd.

This paper provides analytic pricing formulas for discretely monitored geometric Asian options under a regime-switching model. We derive the joint Laplace transform of the discount factor, the log return of the underlying asset price at maturity, and the logarithm of the geometric mean of the asset price. Then, using changes of measure and inversion of the transform, the prices and deltas of fixed-strike and floating-strike geometric Asian options are obtained. As numerical illustrations, we compute the prices of fixed-strike and floating-strike discrete geometric Asian call options using our formulas and compare them with the results of Monte Carlo simulation. Copyright © 2016 John Wiley & Sons, Ltd.
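
The paper prices under regime switching via Laplace transform inversion; as a simpler cross-check in the spirit of its numerical comparison, under a single-regime Black–Scholes model the discretely monitored geometric Asian call has a closed form (the geometric mean of lognormals is itself lognormal), which can be validated against plain Monte Carlo. The parameter values below are illustrative assumptions:

```python
import numpy as np
from math import erf, exp, log, sqrt

def geo_asian_call_exact(S0, K, r, sigma, T, N):
    """Closed-form price of a discretely monitored fixed-strike geometric
    Asian call under Black-Scholes, monitoring at t_i = i*T/N, i = 1..N."""
    dt = T / N
    # mean and variance of the log of the geometric average
    m = log(S0) + (r - 0.5 * sigma**2) * dt * (N + 1) / 2
    v2 = sigma**2 * T * (N + 1) * (2 * N + 1) / (6 * N**2)
    v = sqrt(v2)
    Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))  # standard normal CDF
    d2 = (m - log(K)) / v
    d1 = d2 + v
    return exp(-r * T) * (exp(m + v2 / 2) * Phi(d1) - K * Phi(d2))

def geo_asian_call_mc(S0, K, r, sigma, T, N, n_paths=200_000, seed=1):
    """Plain Monte Carlo benchmark under the same GBM dynamics."""
    rng = np.random.default_rng(seed)
    dt = T / N
    z = rng.standard_normal((n_paths, N))
    log_paths = log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                    + sigma * sqrt(dt) * z, axis=1)
    geo_mean = np.exp(log_paths.mean(axis=1))
    return exp(-r * T) * np.maximum(geo_mean - K, 0.0).mean()

exact = geo_asian_call_exact(100, 100, 0.05, 0.2, 1.0, 12)
mc = geo_asian_call_mc(100, 100, 0.05, 0.2, 1.0, 12)
print(exact, mc)  # the two prices agree to within Monte Carlo error
```
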

The purpose of this article is to summarize recent research results on constructing nonparametric multivariate control charts, with a main focus on data depth-based control charts. Data depth provides dimension reduction for high-dimensional problems in a completely nonparametric way. Several depth measures, including Tukey depth, are shown to be particularly effective for statistical process control when the data deviate from the normality assumption. For detecting small or moderate shifts in the process target mean, the multivariate version of the exponentially weighted moving average chart is generally robust to non-normal data, so nonparametric alternatives may be less often required. Copyright © 2016 John Wiley & Sons, Ltd.
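
To make the depth notion concrete, here is a hedged sketch (not any specific chart from the survey) of the random-direction approximation to Tukey halfspace depth: the depth of a point is the smallest fraction of the data cloud in any closed halfspace containing the point:

```python
import numpy as np

def tukey_depth_approx(x, data, n_dirs=500, seed=0):
    """Approximate Tukey (halfspace) depth of the point x in a data cloud:
    the minimum, over random unit directions u, of the fraction of points
    in the closed halfspace {z : u'z >= u'x}."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((n_dirs, data.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    proj_data = data @ u.T           # shape (n_points, n_dirs)
    proj_x = x @ u.T                 # shape (n_dirs,)
    return (proj_data >= proj_x).mean(axis=0).min()

rng = np.random.default_rng(1)
cloud = rng.standard_normal((1000, 2))
depth_center = tukey_depth_approx(np.zeros(2), cloud)            # near 1/2
depth_outlier = tukey_depth_approx(np.array([4.0, 4.0]), cloud)  # near 0
print(depth_center, depth_outlier)
```

Central points receive depth near 1/2 and outlying points near 0, which is exactly the ordering a depth-based control chart monitors.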

Semiconductors are fabricated through unit processes including photolithography, etching, diffusion, ion implantation, deposition, and planarization. Chemical mechanical planarization, which is essential in advanced semiconductor manufacturing, aims to achieve high planarity across the wafer surface. This paper presents a case study in which the optimal blend of a mixture slurry was obtained to improve two response variables (material loss and roughness) simultaneously. The mixture slurry consists of several pure slurries; when all of the abrasive particles within a slurry are of the same size, the slurry is referred to as a pure slurry. The optimal blend was obtained by applying a multiresponse surface optimization method. In particular, the recently developed posterior approach to dual response surface optimization was employed, which allows the chemical mechanical planarization process engineer to investigate tradeoffs between the two response variables. Both responses were better with the obtained blend than with the existing blend. Copyright © 2016 John Wiley & Sons, Ltd.

The modeling of macroeconomic influence on rating migration matrices plays an important role in credit risk management, especially in stress testing. In contrast to approaches that condition migration matrices separately on a qualitative assessment of the state of the business cycle, we promote the use of generalized regression models, which allow macroeconomic covariates to be considered directly. We systemize, extend, and critically discuss different regression approaches, placing emphasis on violations of model assumptions and on the adequate treatment of such problems, an aspect that has received insufficient attention in the recent literature. Moreover, we introduce a framework for model evaluation and variable selection based on the concept of out-of-sample forecasting, in order to avoid overfitting. Finally, we illustrate the concepts with practical examples based on Standard & Poor's global corporate ratings data. Copyright © 2016 John Wiley & Sons, Ltd.

We establish a simple connection between certain *in-control* characteristics of the Cumulative Sum (CUSUM) Run Length and their *out-of-control* counterparts. The connection is in the form of paired integral (renewal) equations. The derivation exploits Wald's likelihood ratio identity and the well-known fact that the CUSUM chart is equivalent to repeated application of Wald's Sequential Probability Ratio Test (SPRT). The characteristics considered include the entire Run Length distribution and all of the corresponding moments, starting with the zero-state average run length. A particular *practical* benefit of our result is that it enables the in-control and out-of-control characteristics of the CUSUM Run Length to be computed *concurrently*. Moreover, owing to the equivalence of the CUSUM chart to a sequence of SPRTs, the Average Sample Number and Operating Characteristic functions of an SPRT under the null and under the alternative can *all* be computed *simultaneously* as well. This effectively doubles the efficiency of any numerical method one may devise to carry out the actual computations. Copyright © 2016 John Wiley & Sons, Ltd.
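
For intuition about the two run-length characteristics the paper links, here is a hedged brute-force Monte Carlo sketch (the paper instead solves paired renewal equations; the chart parameters k and h below are illustrative, for a one-sided CUSUM monitoring the mean of unit-variance normal data):

```python
import numpy as np

def cusum_arl(mu, k=0.5, h=5.0, n_runs=1000, seed=0):
    """Zero-state average run length of the one-sided CUSUM
    S_t = max(0, S_{t-1} + X_t - k), signalling when S_t >= h,
    for i.i.d. N(mu, 1) observations, estimated by simulation."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(n_runs):
        s, t = 0.0, 0
        while s < h:
            for x in rng.standard_normal(128):  # draw in chunks for speed
                t += 1
                s = max(0.0, s + x + mu - k)
                if s >= h:
                    break
        total += t
    return total / n_runs

arl0 = cusum_arl(mu=0.0)  # in-control: no mean shift
arl1 = cusum_arl(mu=1.0)  # out-of-control: shift of one standard deviation
print(arl0, arl1)
```

The simulation must be run twice, once per regime; the paper's point is precisely that an integral-equation solver can deliver both quantities in a single pass.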

We consider the problem of modeling the dependence among many time series. We build high-dimensional time-varying copula models by combining pair-copula constructions with stochastic autoregressive copula and generalized autoregressive score models to capture dependence that changes over time. We show how the estimation of this highly complex model can be broken down into the estimation of a sequence of bivariate models, which can be achieved using the method of maximum likelihood. Further, by restricting the conditional dependence parameter on higher cascades of the pair-copula construction to be constant, we can greatly reduce the number of parameters to be estimated without losing much flexibility. Applications to five MSCI stock market indices and to a large dataset of daily stock returns of all constituents of the DAX 30 illustrate the usefulness of the proposed model class in-sample and for density forecasting. Copyright © 2016 John Wiley & Sons, Ltd.

In this work, we construct a family of simple stochastic covariance models, which display stochastic mean-reverting levels of covariance as an additional layer of stochastic behavior beyond the well-known stochastic volatility and correlation. The one-dimensional version of our model is inspired by the Heston model, while the multidimensional model generalizes the principal component stochastic volatility model. The main contribution is that these models capture stochastic mean-reversion levels of the volatility and of the eigenvalues of the instantaneous covariance matrix of the vector of stock prices, with direct implications for the correlations as well. Our focus is on the multidimensional model; we investigate its properties and derive a closed-form expression for the characteristic function. This allows us to study the pricing of financial derivatives, such as correlation and spread options. These prices are validated against Monte Carlo simulation. A sensitivity analysis is performed on the parameters of the stochastic mean-reverting level of volatilities to study their impact on the price. Finally, implied volatility curves and correlation surfaces are built to reveal the additional flexibility gained with the new model. Copyright © 2016 John Wiley & Sons, Ltd.

This paper studies *k*-out-of-*n* redundant systems with component lifetimes having lower tail permutation decreasing probability density. For matched redundancies with stochastic arrangement increasing lifetimes, allocating a more reliable component to a weaker component is proved to enhance system reliability. For redundancies with independent and identically distributed lifetimes, allocating more redundancies to a weaker component is shown to stochastically increase the system lifetime. In addition, using a real data set, we illustrate the statistical aspects of modeling lifetimes with lower tail permutation decreasing density. Copyright © 2016 John Wiley & Sons, Ltd.

We consider time-homogeneous Markov chains with state space *E_k* ≡ {0, 1, …, k} and initial distribution concentrated on the state 0. For pairs of such Markov chains, we study the

Remanufacturing processes such as refurbishing and reconditioning can extend the life of a product returned from the field. This provides financial opportunities and allows manufacturers to engage in sustainable practices. However, the inability to access a sufficient quantity of reconditioned components from end-of-life products can force the concurrent use of new components. This paper deals with the determination of an optimal warranty policy where a mixture of new and reconditioned components is used to carry out replacements upon failure for products under warranty. A mathematical optimization model is developed to maximize the manufacturer's expected total profit based on four decision variables: the warranty length, the sale price, the age of reconditioned components, and the proportion of reconditioned components to be used. A numerical procedure is used to compute the optimal solution. Numerical results are provided and discussed to demonstrate the validity and the added value of the proposed approach. Copyright © 2016 John Wiley & Sons, Ltd.

In this paper, a dynamic evaluation of the multistate weighted *k*-out-of-*n*:F system is presented from an unreliability viewpoint. The expected failure cost of components is used as an unreliability index. Using failure cost provides an opportunity to employ financial concepts in system unreliability estimation; hence, system unreliability and system cost can be compared easily when making decisions. The components' probabilities are computed over time to model the dynamic behavior of the system. The whole system is assessed using a recursive algorithm. As a result, a bi-objective optimization model can be developed to find optimal decisions on maintenance strategies. Finally, the application of the proposed model is investigated through a transportation system case. MATLAB code is developed for the case, and a genetic algorithm is used to solve the optimization model. Copyright © 2016 John Wiley & Sons, Ltd.
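
The paper's recursive evaluation runs over a dynamic multistate model; as a much simpler binary-state sketch, the classic two-term recursion for a weighted *k*-out-of-*n*:G structure (the :F case follows by complementation) looks like this, with illustrative weights and probabilities:

```python
from functools import lru_cache

def weighted_k_out_of_n_G(weights, probs, k):
    """Probability that the total weight of working components is at least k
    (weighted k-out-of-n:G system), via the classic two-term recursion
    conditioning on whether component i works or fails."""
    n = len(weights)

    @lru_cache(maxsize=None)
    def R(i, w):
        if w <= 0:
            return 1.0   # required weight already met
        if i == 0:
            return 0.0   # no components left but weight still needed
        p = probs[i - 1]
        return p * R(i - 1, w - weights[i - 1]) + (1 - p) * R(i - 1, w)

    return R(n, k)

# three components with weights 1, 1, 2; the system needs working weight >= 2
print(weighted_k_out_of_n_G([1, 1, 2], [0.9, 0.9, 0.8], 2))  # 0.962
```

The 0.962 matches direct enumeration: the system works iff the weight-2 component works, or both weight-1 components work (0.8 + 0.2 × 0.81).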

A general Bayesian approach for stochastic versions of deterministic growth models is presented to provide predictions for crack propagation in an early stage of the growth process. To improve the prediction, information from other crack growth processes is incorporated via a hierarchical (mixed-effects) model. Two stochastic versions of a deterministic growth model are compared. One is a nonlinear regression setup in which the trajectory is assumed to be the solution of an ordinary differential equation with additive errors. The other is a diffusion model defined by a stochastic differential equation in which the increments have additive errors. While Bayesian prediction is well established for hierarchical models based on nonlinear regression, we propose a new Bayesian prediction method for hierarchical diffusion models. Six growth models for each of the two approaches are compared with respect to their ability to predict crack propagation in a large data example. Surprisingly, the stochastic differential equation approach has no advantage in prediction over the nonlinear regression setup, although the diffusion model seems more appropriate for crack growth. Copyright © 2016 John Wiley & Sons, Ltd.
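
A minimal sketch of the two ingredients being compared, assuming a generic Paris-law growth rate C·a^m (the paper compares six specific growth models, not this one): an Euler–Maruyama discretization of the diffusion version next to the exact trajectory of the deterministic ODE:

```python
import numpy as np

def crack_paths(a0=1.0, C=0.05, m=1.5, sigma=0.05, T=10.0,
                n_steps=1000, n_paths=200, seed=0):
    """Euler-Maruyama paths of a stochastic Paris-law crack growth model
    da = C a^m dt + sigma a^m dW, plus the deterministic ODE solution."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    a = np.full(n_paths, a0)
    for _ in range(n_steps):
        growth = a**m
        a = a + C * growth * dt \
              + sigma * growth * np.sqrt(dt) * rng.standard_normal(n_paths)
    # exact ODE solution for m != 1: a(t)^(1-m) = a0^(1-m) + (1-m) C t
    ode = (a0**(1 - m) + (1 - m) * C * T)**(1 / (1 - m))
    return a, ode

paths, ode = crack_paths()
print(ode, paths.mean())  # simulated mean stays close to the ODE trajectory
```

In the regression setup the additive error sits around the `ode` curve; in the diffusion setup the noise enters each increment, as in the loop above.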

Burn-in is a method used to eliminate early failures before items are placed in field use. In this paper, we consider an information-based burn-in procedure for repairable items, which is a completely new type of burn-in procedure. In this procedure, items with poor reliability performance are eliminated based on the operational (failure and repair) history observed during burn-in. From a probabilistic point of view, the procedure utilizes the information contained in the ‘random paths’ of the corresponding point processes. A general formulation of the model is suggested, and under the suggested framework, a two-stage optimization procedure for determining optimal burn-in procedures is studied in detail. Copyright © 2016 John Wiley & Sons, Ltd.

The design of attribute sampling inspection plans based on compressed or narrow limits for food safety applications is covered. Artificially compressed limits allow a significant reduction in the number of analytical tests to be carried out while maintaining the risks at predefined levels. The design of optimal sampling plans is discussed for two given points on the operating characteristic curve, and especially for the zero-acceptance-number case. Compressed limit plans matching the attribute plans of the International Commission on Microbiological Specifications for Foods are also given, and the case of unknown batch standard deviation is discussed. Three-class attribute plans with optimal positions for a given microbiological limit *M* and good manufacturing practices limit *m* are derived. The proposed plans are illustrated through examples, and R code to obtain the sampling plans is provided. Copyright © 2016 John Wiley & Sons, Ltd.
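
A hedged sketch of the zero-acceptance-number design logic (a generic binomial operating-characteristic calculation, not the paper's compressed-limit or ICMSF-matched plans): with acceptance number c = 0, a lot is accepted only if no defectives appear in the sample, so the acceptance probability is (1 − p)^n and the consumer's-risk point fixes n directly:

```python
from math import ceil, log

def zero_acceptance_n(p2, beta):
    """Smallest sample size n for a c = 0 attributes plan such that a lot of
    quality level p2 (the consumer's risk point) is accepted with probability
    at most beta. Accept iff zero defectives, so Pa(p) = (1 - p)^n."""
    return ceil(log(beta) / log(1 - p2))

n = zero_acceptance_n(p2=0.10, beta=0.05)
print(n)                # 29
print((1 - 0.10) ** n)  # Pa at p2: just below the target 0.05
print((1 - 0.01) ** n)  # Pa at a good-quality level of 1% defective
```

The second operating-characteristic point (the producer's risk) is then checked rather than imposed, which is why c = 0 plans are attractive for their small n but harsh on good lots.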

Modeling has often failed to meet expectations, mostly because of the difficulty of comprehending relationships within phenomena and expressing them in mathematical models. Reality is frequently too complex to be reflected in a single model. This is often the case in marketing research, where variables relating to socioeconomics or psychographics constitute potential sources of heterogeneity. In such cases, the assumption that ‘one model fits all’ is unrealistic and may lead to inaccurate decisions. Thus, heterogeneity is a major issue in modeling. Once a model has been fitted to a complete data set and fulfills all validation criteria, it is difficult to establish whether it is valid for the whole population or is merely an average artifact arising from several sub-populations. The purpose of this paper is to present the Pathmox approach for dealing with heterogeneity in partial least squares path modeling. The idea behind Pathmox is to build a binary tree of path models, analogous to a decision tree, with a different model in each node. The split criterion is an *F* statistic comparing two structural models. A simulation study was conducted to verify the suitability of the split criterion. Finally, we applied Pathmox to a survey measuring *Satisfaction* with Spanish mobile phone operators. The results suggest that the Pathmox approach performs adequately in detecting heterogeneity in partial least squares path modeling. Copyright © 2016 John Wiley & Sons, Ltd.

Reference samples are frequently used to estimate in-control parameters, which are then treated as the true in-control parameters during the monitoring phase of Statistical Process Control (SPC) applications. The SPC literature has recognized that even small errors in parameter estimates determined from reference samples can have a large impact on the conditional (given the values of the estimated parameters) in-control average run length. However, there is little quantitative guidance on how large the reference sample should be to minimize this impact. In this paper, in the context of a recently developed Cumulative Sum (CUSUM) chart designed to detect translations in exponential distributions, a reference sample size formula for controlling the relative error of the conditional in-control average run length is derived. The result in this paper is a stepping stone toward reference sample size formulas in more general settings. Copyright © 2016 John Wiley & Sons, Ltd.

Distribution-free (nonparametric) control charts are helpful in applications where we do not have enough information about the underlying distribution. Shewhart precedence charts are a class of Phase I nonparametric charts for location. One of these charts, the median precedence chart (Med chart hereafter), uses the median of the test sample as the charting statistic, whereas another, the minimum precedence chart (Min chart hereafter), uses the minimum. In this paper, we first study the comparative performance of the Min and Med charts in terms of their in-control and out-of-control run-length properties in an extensive simulation study. Neither chart is uniformly best, as each has strengths in certain situations. Next, we consider enhancing their performance by adding supplementary runs-rules. The new charts exhibit very attractive run-length properties, outperforming their competitors in many situations. A summary and some concluding remarks are given. Copyright © 2016 John Wiley & Sons, Ltd.

Definitive screening designs (DSDs) are a class of experimental designs that allow the estimation of linear, quadratic, and interaction effects with little experimental effort if there is effect sparsity. The number of experimental runs is twice the number of factors of interest plus one. Many industrial experiments involve nonnormal responses, and generalized linear models (GLMs) are a useful alternative for analyzing such data. However, the analysis of GLMs is based on asymptotic theory, which is questionable, for example, for a DSD with only 13 experimental runs. To date, the analysis of DSDs has assumed a normal response. In this work, we present a five-step strategy that uses tools from the Bayesian approach to analyze this kind of experiment when the response is nonnormal. We consider the cases of binomial, gamma, and Poisson responses without resorting to asymptotic approximations. We use posterior odds that effects are active, together with posterior probability intervals for the effects, to evaluate their significance. We also combine the results of the Bayesian procedure with the lasso estimation procedure to enhance the scope of the method. Copyright © 2016 John Wiley & Sons, Ltd.

Predicting weekly box-office demand is an important yet challenging problem. For theater exhibitors, such information enhances negotiation options with distributors and assists in planning the weekly movie portfolio mix. The existing literature focuses on forecasting pre-release total gross revenue or on weekly predictions based on the first weeks' observations. This work adds to the literature by forecasting the entire demand structure, utilizing information from a movie similarity network. Specifically, we draw on the assumption that aggregate consumer choice in the film industry is the key to understanding movie demand; therefore, similar movies, in terms of audience appeal, should exhibit similar demand structures. We propose an automated technique that derives measurements of demand structure and demonstrate that it enables the analysis of different aspects of demand structure, namely, decay rate, time of first demand peak, per-screen gross value at peak time, existence of a second demand wave, and time on screens. We deploy ideas from variable selection procedures to investigate the predictive power of the similarity network on demand dynamics. We show that our models not only perform significantly better than models that discard the similarity network but are also robust to new sets of box-office movies. Copyright © 2016 John Wiley & Sons, Ltd.

Increased consumption of fossil fuels in industrial production has led to a significant elevation in the emission of greenhouse gases and to global warming. The most effective international action against global warming is the Kyoto Protocol, which aims to reduce carbon emissions to desired levels within a certain time span. Carbon trading is one of the mechanisms used to achieve the desired reductions. One of the most important implications of carbon trading for industrial systems is the risk of uncertainty about the prices of carbon allowance permits traded in the carbon markets. In this paper, we consider stochastic and time series modeling of carbon market prices and provide estimates of the model parameters involved, based on European Union emissions trading scheme carbon allowance data for the 2008–2012 period. In particular, we consider fractional Brownian motion and autoregressive moving average–generalized autoregressive conditional heteroskedastic modeling of these data and provide comparisons with benchmark models. Our analysis reveals evidence for structural changes in the underlying models over the years 2008–2012. Data-driven methods for identifying possible change-points in the underlying models are employed, and a detailed analysis is provided. Our analysis indicates change-points in the European Union Allowance (EUA) prices in the first half of 2009 and in the second half of 2011, whereas three change-points appear in the Certified Emission Reduction (CER) prices: in the first half of 2009, in the middle of 2011, and in the second half of 2012. These change-points seem to parallel global economic indicators as well. Copyright © 2016 John Wiley & Sons, Ltd.
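
Fractional Brownian motion, one of the two model classes considered, can be simulated exactly on a grid by Cholesky factorization of its covariance function cov(B_s, B_t) = ½(s^2H + t^2H − |t − s|^2H); a small sketch (the Hurst index below is illustrative, not an estimate from the carbon data):

```python
import numpy as np

def fbm_cholesky(H, n, T=1.0, seed=0):
    """Simulate fractional Brownian motion with Hurst index H on an n-point
    grid (excluding t = 0, where the covariance matrix would be singular)
    via Cholesky factorization of its covariance matrix."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal(n)

t, path = fbm_cholesky(H=0.7, n=500)
print(path[-1])  # B_H(1); its variance is T^(2H) = 1 for T = 1
```

H = 1/2 recovers ordinary Brownian motion; H > 1/2 gives the persistent, long-memory increments relevant to the price modeling above. The O(n³) Cholesky cost limits this exact method to moderate grids.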

The paper deals with the mathematical modeling of the reaction mechanism of rust formation. We provide both a quantitative description based on probability theory and a qualitative description of rust evolution using differential geometry. Copyright © 2016 John Wiley & Sons, Ltd.

No abstract is available for this article.

We discuss Bayesian forecasting of increasingly high-dimensional time series, a key area of application of stochastic dynamic models in the financial industry and allied areas of business. Novel state-space models characterizing sparse patterns of dependence among multiple time series extend existing multivariate volatility models, enabling scaling to larger numbers of individual time series. The theory of these *dynamic dependence network* models shows how the individual series can be *decoupled* for sequential analysis and then *recoupled* for applied forecasting and decision analysis. Decoupling allows fast, efficient analysis of each series in an individual univariate model; the series are linked – for later recoupling – through a theoretical multivariate volatility structure defined by a sparse underlying graphical model. Computational advances are especially significant in connection with model uncertainty about the sparsity patterns among series that define this graphical model; Bayesian model averaging using discounting of historical information builds substantially on this computational advance. An extensive, detailed case study showcases the use of these models and the achievable improvements in forecasting and financial portfolio investment decisions. Using a long series of daily international currency, stock index and commodity prices, the case study includes evaluations of multi-day forecasts and Bayesian portfolio analysis with a variety of practical utility functions, as well as comparisons against commodity trading advisor benchmarks. Copyright © 2016 John Wiley & Sons, Ltd.

We consider the problem of estimating occurrence rates of rare events for extremely sparse data, using pre-existing hierarchies and selected features to perform inference along multiple dimensions. In particular, we focus on estimating click rates for {Advertiser, Publisher, User} tuples, where both the Advertisers and the Publishers are organized as hierarchies that capture broad contextual information at different levels of granularity. Typically, the click rates are low, and the coverage of the hierarchies and dimensions is sparse. To overcome these difficulties, we decompose the joint prior of the three-dimensional click-through rate using tensor decomposition and propose a multidimensional hierarchical Bayesian framework (abbreviated as MadHab). We set up a specific framework for each dimension to model dimension-specific characteristics. More specifically, we adopt hierarchical beta process priors for the Advertiser and Publisher dimensions and a feature-dependent mixture model for the User dimension. Besides the centralized implementation, we propose two distributed inference algorithms, implemented in MapReduce and Spark, which make the model highly scalable and suited for large-scale data mining applications. We demonstrate on a real-world ads campaign platform that our framework can effectively discriminate extremely rare events in terms of their click propensity. Copyright © 2016 John Wiley & Sons, Ltd.
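
MadHab's priors are far richer (hierarchical beta processes, tensor-decomposed across dimensions), but the basic shrinkage effect that makes rare-event rates estimable from sparse counts can be seen in a plain beta-binomial sketch; the prior parameters below are illustrative assumptions, matched to a global click-through rate of about 1%:

```python
import numpy as np

def smoothed_ctr(clicks, views, alpha, beta):
    """Posterior-mean click-through rates under a Beta(alpha, beta) prior:
    cells with few impressions are shrunk toward the prior mean
    alpha / (alpha + beta), while well-observed cells keep their raw rate."""
    return (clicks + alpha) / (views + alpha + beta)

alpha, beta = 1.0, 99.0          # prior mean 0.01, prior weight ~100 views
clicks = np.array([0, 1, 50])
views = np.array([10, 20, 5000])
print(smoothed_ctr(clicks, views, alpha, beta))
```

The cell with 0 clicks out of 10 views is pulled to roughly 0.9% rather than an implausible 0%, while the 50/5000 cell stays essentially at its empirical 1%; MadHab's hierarchy effectively learns such priors per node instead of fixing them globally.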

This paper is concerned with the modeling of advertiser behaviors in sponsored search. Modeling advertiser behaviors can help search engines better serve advertisers, improve the auction mechanism, and forecast future revenue. Previous works on this topic either unrealistically assume that advertisers can perceive the states of the sponsored search system and the private information of other advertisers, or ignore the differences in advertisers' abilities to optimize their bid strategies. To tackle these problems, we propose viewing sponsored search auctions as a partially observable multi-agent system with private information. We then employ a reinforcement learning behavior model to describe how each advertiser responds to this multi-agent system. The proposed model no longer assumes that advertisers have perfect information access, but instead assumes that they optimize their strategies based only on the partially observed states of the auctions. Furthermore, the model does not specify how the optimization is conducted; instead, it uses parameters learned from data to describe different advertisers' abilities to obtain optimal strategies. Our experiments on real sponsored search data demonstrate that the proposed model outperforms previous models in predicting the bids and rank positions of advertisers in the near future. In addition to the accurate prediction of these short-term behaviors, our study shows another nice property of the proposed model: if all the advertisers behave according to the model, the multi-agent system of sponsored search converges, under certain conditions, to a locally envy-free equilibrium. This result establishes a connection between machine-learned behavior models and game-theoretic properties of the system. Copyright © 2016 John Wiley & Sons, Ltd.
