Environmental report cards are popular mechanisms for summarising the overall status of an environmental system of interest. This paper describes the development of such a report card in the context of a study for Gladstone Harbour in Queensland, Australia. The harbour is within the World Heritage-protected Great Barrier Reef and is the location of major industrial development, hence the interest in developing a way of reporting its health in a statistically valid, transparent and sustainable manner. A Bayesian network (BN) approach was used because of its ability to aggregate and integrate different sources of information, provide probabilistic estimates of interest and update these estimates in a natural manner as new information becomes available.

BN modelling is an iterative process, and in the context of environmental reporting, this is appealing as model development can be initiated while quantitative knowledge is still under development, and subsequently refined as more knowledge becomes available. Moreover, the BN model helps build the maturity of the quantitative information needed and helps target investment in monitoring and/or process modelling activities to inform the approach taken. The model is able to incorporate spatial and temporal information and may be structured in such a way that new indicators of relevance to the underlying environmental gradient being monitored may replace less informative indicators or be added to the model with minimal effort.

The model described here focuses on the environmental component, but has the capacity to also incorporate social, cultural and economic components of the Gladstone Harbour Report Card. Copyright © 2016 John Wiley & Sons, Ltd.
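The aggregation step of such a report card can be illustrated with a toy discrete Bayesian network. Everything below — the two indicator nodes, their states, and all probabilities — is hypothetical, not the actual Gladstone Harbour model; it only shows how a BN combines indicator grades into a harbour-health marginal and updates it when new monitoring evidence arrives.

```python
from itertools import product

# Hypothetical priors over two indicator states
p_water = {"good": 0.7, "poor": 0.3}
p_seagrass = {"good": 0.6, "poor": 0.4}

# Hypothetical CPT: P(harbour health = "good" | water, seagrass)
cpt_good = {
    ("good", "good"): 0.95,
    ("good", "poor"): 0.60,
    ("poor", "good"): 0.55,
    ("poor", "poor"): 0.10,
}

def p_health_good(evidence=None):
    """Marginal P(health = good), optionally conditioned on observed indicators."""
    num = den = 0.0
    for w, s in product(p_water, p_seagrass):
        # skip joint states inconsistent with the observed evidence
        if evidence and any(evidence.get(k) not in (None, v)
                            for k, v in (("water", w), ("seagrass", s))):
            continue
        joint = p_water[w] * p_seagrass[s]
        num += joint * cpt_good[(w, s)]
        den += joint
    return num / den

print(round(p_health_good(), 4))                   # prior marginal
print(round(p_health_good({"water": "poor"}), 4))  # updated after poor water grade
```

Observing a poor water-quality grade lowers the marginal probability of good harbour health, mirroring how the report-card BN updates its estimates as monitoring data accumulate.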

Empirical evidence suggests that single-factor models do not capture the full dynamics of stochastic volatility, so that a marked discrepancy between their predicted prices and market prices exists for certain ranges (deep in-the-money and out-of-the-money) and times to maturity of options. There is also empirical reason to believe that the volatility skew fluctuates randomly. Based on the idea of combining stochastic volatility and stochastic skew, this paper incorporates stochastic elasticity of variance, running on a fast timescale, into the Heston stochastic volatility model. This multiscale, multifactor hybrid model retains the analytic tractability of the Heston model as far as possible while better capturing the complex nature of volatility and skew dynamics. Asymptotic analysis based on ergodic theory yields a closed-form analytic formula for the approximate price of European vanilla options. The effect of adding the stochastic elasticity factor on top of the Heston model is then demonstrated in terms of the implied volatility surface. Copyright © 2016 John Wiley & Sons, Ltd.
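As a baseline, the plain Heston model (without the fast stochastic-elasticity factor of the paper) can be priced by Monte Carlo with a full-truncation Euler scheme; all parameter values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Heston parameters
s0, v0, r = 100.0, 0.04, 0.02
kappa, theta, sigma, rho = 2.0, 0.04, 0.3, -0.7
strike, T = 100.0, 1.0
n_steps, n_paths = 200, 50_000
dt = T / n_steps

s = np.full(n_paths, s0)
v = np.full(n_paths, v0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    vp = np.maximum(v, 0.0)   # full truncation: negative variance treated as zero
    s *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
    v += kappa * (theta - vp) * dt + sigma * np.sqrt(vp * dt) * z2

call = np.exp(-r * T) * np.mean(np.maximum(s - strike, 0.0))
print(round(call, 3))
```

Against this baseline, the paper's multiscale correction would adjust the implied volatility surface generated by such prices.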

The threshold autoregressive model with a generalized autoregressive conditionally heteroskedastic (GARCH) specification is a popular nonlinear model that captures the well-known asymmetric phenomena in financial market data. The switching mechanism of hysteretic autoregressive GARCH models differs from that of threshold autoregressive GARCH models in that regime switching may be delayed when the hysteresis variable lies in a hysteresis zone. This paper conducts a Bayesian model comparison among the competing models by designing an adaptive Markov chain Monte Carlo sampling scheme. We illustrate the performance of three criteria for comparing models with fat-tailed and/or skewed errors: the deviance information criterion, the Bayesian predictive information criterion, and an asymptotic version of the Bayesian predictive information criterion. A simulation study highlights the properties and accuracy of the three Bayesian criteria, as well as their favorable performance as model selection tools. We demonstrate the proposed method in an empirical study of 12 international stock markets, providing strong evidence in support of models with skewed fat-tailed innovations. Copyright © 2016 John Wiley & Sons, Ltd.

Inspection models applicable to a finite planning horizon are developed for the following lifetime distributions: the uniform, exponential, and Weibull distributions. For a given lifetime distribution, maximization of profit is used as the sole optimization criterion for determining both the optimal planning horizon over which a system may be operated and the ideal inspection times. Illustrative examples (focusing on the uniform and Weibull distributions and using Mathematica programs) are given. In some situations, spreading inspections evenly over the entire planning horizon is seen to result in the attainment of desirable profit levels over a shorter planning horizon. Directions for further research are also given. Copyright © 2016 John Wiley & Sons, Ltd.

We study the relationship between lumber strength properties and their visual grading characteristics. This topic is central to the analysis of the reliability of lumber products in that it underlies the calculation of structural design values. The approaches described in the paper are adaptations of survival analysis methods commonly used in medical studies. Because each piece of lumber can only be tested to destruction with one method (i.e., each piece cannot be broken twice), modeling these strength distributions simultaneously can be challenging. In the past, this kind of problem has been solved by subjectively matching pieces of lumber, but the quality of that matching is then an issue. The objective of our analysis is to build a predictive model that relates the strength properties to the recorded characteristics. The paper concludes that the type of wood defect (knot), lumber grade status (off-grade: yes/no), and the lumber's modulus of elasticity have statistically significant effects on wood strength. We find that the Weibull accelerated failure time model provides a better fit to our dataset than the Cox proportional hazards model. Copyright © 2016 John Wiley & Sons, Ltd.
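A minimal sketch of fitting a Weibull accelerated failure time model to right-censored data by maximum likelihood — on synthetic data with a single made-up covariate, not the paper's lumber dataset. In AFT form, log lifetime is linear in the covariate with a smallest-extreme-value error, which is what the likelihood below encodes.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 2000
mu_true, beta_true, sigma_true = 1.0, 0.5, 0.5   # hypothetical AFT parameters

x = rng.normal(size=n)                            # a single hypothetical covariate
# log T = mu + beta*x + sigma*W, W = log(Exp(1)) is smallest-extreme-value
log_t = mu_true + beta_true * x + sigma_true * np.log(rng.exponential(size=n))
c = np.log(6.0)                                   # fixed right-censoring level (~10% censored)
event = log_t <= c                                # True = failure observed
y = np.minimum(log_t, c)

def nll(params):
    mu, beta, log_sigma = params
    sigma = np.exp(log_sigma)
    z = np.clip((y - mu - beta * x) / sigma, None, 50.0)
    # uncensored: log-density z - e^z - log(sigma); censored: log-survival -e^z
    ll = np.where(event, z - np.exp(z) - log_sigma, -np.exp(z))
    return -ll.sum()

fit = minimize(nll, x0=[0.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 2000})
mu_hat, beta_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(round(beta_hat, 3), round(sigma_hat, 3))
```

The Weibull shape parameter is 1/sigma, so the fitted sigma translates directly into the strength distribution's shape; model comparison against a Cox fit would proceed from here.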

This paper solves an optimal portfolio selection problem in the discrete-time setting where the states of the financial market cannot be completely observed, relaxing the common assumption that the states of the financial market are fully observable. The dynamics of the unobservable market state are formulated by a hidden Markov chain, and the return of the risky asset is modulated by the unobservable market state. Based on the information observed up to the decision moment, an investor wants to find the optimal multi-period investment strategy that maximizes the mean-variance utility of terminal wealth. By adopting a sufficient statistic, the portfolio optimization problem with incompletely observable information is converted into one with completely observable information. The optimal investment strategy is derived by using the dynamic programming approach and the embedding technique, and the efficient frontier is also presented. Compared with the case where the market state can be completely observed, we find that the unobservable market state decreases the investment in the risky asset on average. Finally, numerical results illustrate the impact of the unobservable market state on the efficient frontier, the optimal investment strategy and the Sharpe ratio. Copyright © 2016 John Wiley & Sons, Ltd.
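The sufficient-statistic step can be sketched with a forward filter for a two-state hidden market chain with Gaussian returns; the filtered probability of the bull state is the statistic the investor would condition on. All transition probabilities, means, and volatilities below are hypothetical and deliberately well-separated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2-state market: 0 = bull, 1 = bear
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])        # transition matrix
mu = np.array([0.10, -0.10])        # state-dependent mean returns
sd = np.array([0.05, 0.15])         # state-dependent volatilities

# simulate a hidden state path and the observed returns
T = 300
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])
rets = rng.normal(mu[states], sd[states])

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# forward filter: pi_t = P(state_t | returns up to t) is the sufficient statistic
pi = np.array([0.5, 0.5])
filtered = np.empty(T)
for t in range(T):
    pred = pi @ P                           # one-step state prediction
    post = pred * gauss(rets[t], mu, sd)    # Bayes update with the new return
    pi = post / post.sum()
    filtered[t] = pi[0]

acc = np.mean((filtered > 0.5) == (states == 0))
print(round(acc, 3))
```

In the paper's setting, the multi-period mean-variance strategy would then be a function of this filtered probability rather than of the unobservable state itself.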

In high-dimensional data settings where *p* ≫ *n*, many penalized regularization approaches have been studied for simultaneous variable selection and estimation. However, in the presence of covariates with weak effects, many existing variable selection methods, including the Lasso and its variants, cannot distinguish covariates with weak contributions from those with none. Thus, prediction based only on a subset model of selected covariates can be inefficient. In this paper, we propose a post-selection shrinkage estimation strategy to improve the prediction performance of a selected subset model. Such a post-selection shrinkage estimator (PSE) is data adaptive and is constructed by shrinking a post-selection weighted ridge estimator in the direction of a selected candidate subset. Its prediction performance is explored analytically under an asymptotic distributional quadratic risk criterion. We show that the proposed PSE performs better than the post-selection weighted ridge estimator. More importantly, it significantly improves the prediction performance of any candidate subset model selected by most existing Lasso-type variable selection methods. The relative performance of the PSE is demonstrated by both simulation studies and real-data analysis. Copyright © 2016 John Wiley & Sons, Ltd.

The problem of an *inspection permutation* or *inspection strategy* (first discussed in a research paper in 1989 and reviewed in another research paper in 1991) is revisited. The problem deals with an *N*-component system whose times to failure are independent but not identically distributed random variables. Each of the failure times follows an exponential distribution. The components in the system are connected in series such that the failure of at least one component entails the failure of the system. Upon system failure, the components are inspected one after another in a hierarchical way (called an inspection permutation) until the component causing the system to fail is identified. The inspection of each component is a process that takes a non-negligible amount of time and is performed at a cost. Once the faulty component is identified, it is repaired at a cost, and the repair process takes some time. After the repair, the system is *good as new* and is put back in operation. The inspection permutation that results in the maximum long run average net income per unit of time (for the *undiscounted case*) or maximum total discounted net income per unit of time (for the *discounted case*) is called the optimal inspection permutation/strategy. A way of determining an optimal inspection permutation in an easier fashion, taking advantage of the improvements in computer software, is proffered. Mathematica is used to showcase how the method works with the aid of a numerical example. Copyright © 2016 John Wiley & Sons, Ltd.
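The core combinatorial problem can be sketched with hypothetical rates and costs (repair times, revenues and discounting omitted): the faulty component in an exponential series system is component *i* with probability λ_i/Σλ, and a permutation is scored by the expected inspection cost until the fault is found. Brute force recovers the classic rule of inspecting in increasing order of the cost-to-probability ratio. For simplicity the sketch inspects even the last component rather than inferring it by elimination.

```python
from itertools import permutations

# Hypothetical 4-component series system
lam = [0.5, 1.2, 0.3, 0.9]           # exponential failure rates
cost = [4.0, 1.0, 6.0, 2.5]          # cost of inspecting each component

total = sum(lam)
p = [l / total for l in lam]         # P(component i caused the failure)

def expected_cost(order):
    """Expected inspection cost until the faulty component is found."""
    ec, spent = 0.0, 0.0
    for i in order:
        spent += cost[i]             # costs accumulate in inspection order
        ec += p[i] * spent           # component i is found at this point with prob p[i]
    return ec

best = min(permutations(range(len(lam))), key=expected_cost)
ratio_rule = tuple(sorted(range(len(lam)), key=lambda i: cost[i] / p[i]))
print(best, round(expected_cost(best), 4))
```

An exchange argument shows why the ratio rule is optimal here: swapping adjacent components *i* and *j* changes the expected cost by p_j·c_i − p_i·c_j, so *i* should precede *j* exactly when c_i/p_i < c_j/p_j.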

This paper presents a reduced-form model for pricing defaultable bonds and credit default swaps (CDSs) with stochastic recovery, where the recovery risk is coupled with the default intensity, while the default intensity is described by a stochastic differential equation. Closed-form pricing formulae for defaultable bonds and CDSs are obtained by applying variable change techniques and partial differential equation approaches. The closed-form pricing formulae can provide valuable assistance in analyzing certain complications associated with portfolio management and hedging analysis. Finally, numerical experiments are provided to illustrate how the recovery parameters and intensity parameters affect the credit spread of a defaultable bond and the swap premium of a CDS. Copyright © 2016 John Wiley & Sons, Ltd.

Sliced Latin hypercube designs (SLHDs) achieve maximum stratification in each dimension, but neither the full designs nor their slices can guarantee a good uniformity over the experimental region. Although the uniformity of the full SLHD and that of its slices are related, there is no one-to-one correspondence between them. In this paper, we propose a new uniformity measure for SLHDs by combining the two kinds of uniformity. Based on such a combined uniformity measure, the obtained uniform SLHDs have the design points evenly spread over the experimental region not only for the whole designs but also for their slices. Numerical simulation shows the effectiveness of the proposed uniform SLHDs for computer experiments with both quantitative and qualitative factors. Copyright © 2016 John Wiley & Sons, Ltd.

The aim of the present paper is the stochastic modeling and statistical inference, for prediction purposes, of a component which deteriorates over time. The deterioration is due to defects which appear one by one and then independently propagate over time. The motivation comes from an application to passive components within electric power plants, where (measurable) flaw indications first initiate (one at a time) and then grow over time. The available data come from inspections at discrete times, where only the largest flaw indication is measured together with the total number of indications on each component. Indications that are too small can be detected but not measured, leading to censored observations. Taking into account this partial information from the field, a specific stochastic model is proposed, in which flaw indications initiate according to a Poisson process and then propagate according to competing independent gamma processes. A parametric estimation procedure is developed, tested on simulated data and then applied to the industrial case. The fitted model is then used to make predictions of the future deterioration of each component and of its residual operating time until a specified critical degradation level is reached. Copyright © 2016 John Wiley & Sons, Ltd.
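The forward simulation of such a model is straightforward and can be sketched as follows — flaws initiate as a Poisson process and each then grows as an independent gamma process, so at an inspection time we can read off the flaw count and the largest indication. All rates and the critical level are hypothetical, not the plant data.

```python
import numpy as np

rng = np.random.default_rng(3)

lam = 0.4          # hypothetical flaw initiation rate (per year)
a, b = 1.5, 0.8    # gamma-process shape rate (per year) and scale
t_insp = 10.0      # inspection time
crit = 12.0        # hypothetical critical indication size
n_mc = 20_000      # number of simulated components

max_size = np.zeros(n_mc)
counts = np.zeros(n_mc, dtype=int)
for i in range(n_mc):
    n_flaws = rng.poisson(lam * t_insp)            # flaws initiated by t_insp
    counts[i] = n_flaws
    if n_flaws:
        births = rng.uniform(0.0, t_insp, size=n_flaws)  # initiation times
        ages = t_insp - births                           # growth time of each flaw
        # gamma-process increment over each flaw's age: shape a*age, scale b
        sizes = rng.gamma(a * ages, b)
        max_size[i] = sizes.max()                        # only the largest is measured

p_exceed = np.mean(max_size > crit)
print(round(counts.mean(), 3), round(p_exceed, 4))
```

Repeating this simulation from a fitted posterior (rather than fixed parameters) is one way to produce the kind of predictive distribution of residual operating time described in the abstract.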

In this paper, we examine the inflation rates in the Group of Seven countries, investigating issues such as the existence of unit roots, structural breaks, fractional integration and potential non-linearities using a fractional dependence (FD) approach based on Chebyshev polynomials in time. This robust FD approach allows one to test for persistence as well as non-linearity of the series. We first tested for stationarity and structural breaks using classical approaches and observed inconclusive results with regard to the stationarity levels of the series. Using Bai–Perron tests, we confirmed significant structural breaks, up to five, in each of the inflation series. However, noting that structural breaks are significantly related to fractional differentiation, this latter approach was also conducted. Here, we observed that the estimates of the differencing parameter were quite stable across time, and evidence of unit roots was found in the cases of the UK, Canada, France, Japan and the USA; for Germany, we found some evidence of mean reversion, while estimates of *d* above 1 were found in the case of Italy. On the other hand, non-linear deterministic trends were clearly rejected in all cases. Copyright © 2016 John Wiley & Sons, Ltd.

The lifetime distribution of a coherent system based on IID components can be written as a mixture of the distributions of progressively Type-II censored order statistics. The coefficients in that representation are called the progressive censoring signature (PC-signature) of the system. In this paper, we explore the basic properties and potential applications of these mixture representations. We show that they can be used to study the associated censoring schemes (and their exact probabilities) that arise as the system operates. It is illustrated that, in some sense, the PC-signature provides more information than the classical (Samaniego) signature of the system. Further, this new signature can be used to establish distribution-free ordering properties for system lifetimes. Copyright © 2016 John Wiley & Sons, Ltd.

This paper provides analytic pricing formulas for discretely monitored geometric Asian options under a regime-switching model. We derive the joint Laplace transform of the discount factor, the log return of the underlying asset price at maturity, and the logarithm of the geometric mean of the asset price. Then, using changes of measure and inversion of the transform, the prices and deltas of fixed-strike and floating-strike geometric Asian options are obtained. As numerical results, we calculate the prices of fixed-strike and floating-strike discrete geometric Asian call options using our formulas and compare them with the results of Monte Carlo simulation. Copyright © 2016 John Wiley & Sons, Ltd.
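In the special case of a single regime (plain Black–Scholes dynamics, not the paper's regime-switching model), the discretely monitored geometric mean is lognormal, so the fixed-strike call has a simple closed form; that makes a handy sanity check for a Monte Carlo implementation. Parameters are hypothetical.

```python
import numpy as np
from math import erf, exp, log, sqrt

rng = np.random.default_rng(4)

s0, strike, r, sigma, T, n = 100.0, 100.0, 0.03, 0.2, 1.0, 12   # monthly monitoring
dt = T / n

# --- Monte Carlo price of the fixed-strike geometric Asian call ---
n_paths = 200_000
z = rng.standard_normal((n_paths, n))
log_s = log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * z, axis=1)
geo_mean = np.exp(log_s.mean(axis=1))
mc = exp(-r * T) * np.mean(np.maximum(geo_mean - strike, 0.0))

# --- Closed form: log geometric mean ~ N(m, s2) ---
m = log(s0) + (r - 0.5 * sigma**2) * dt * (n + 1) / 2
s2 = sigma**2 * dt * (n + 1) * (2 * n + 1) / (6 * n)
s = sqrt(s2)
Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
d1 = (m - log(strike) + s2) / s
d2 = d1 - s
cf = exp(-r * T) * (exp(m + 0.5 * s2) * Phi(d1) - strike * Phi(d2))

print(round(mc, 3), round(cf, 3))
```

The regime-switching prices of the paper would be recovered instead from the inverted joint Laplace transform; the single-regime closed form above is only the degenerate benchmark.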

The purpose of this article is to summarize recent research results on constructing nonparametric multivariate control charts, with a main focus on data depth-based control charts. Data depth provides dimension reduction for high-dimensional problems in a completely nonparametric way. Several depth measures, including Tukey depth, are shown to be particularly effective for statistical process control when the data deviate from the normality assumption. For detecting small or moderate shifts in the process target mean, the multivariate version of the exponentially weighted moving average chart is generally robust to non-normal data, so that nonparametric alternatives may be less often required. Copyright © 2016 John Wiley & Sons, Ltd.

Semiconductors are fabricated through unit processes including photolithography, etching, diffusion, ion implantation, deposition, and planarization. Chemical mechanical planarization, which is essential in advanced semiconductor manufacturing, aims to achieve high planarity across the wafer surface. This paper presents a case study in which the optimal blend of mixture slurry was obtained to improve two response variables (material loss and roughness) at the same time. The mixture slurry consists of several pure slurries; when all of the abrasive particles within a slurry are of the same size, the slurry is referred to as a pure slurry. The optimal blend was obtained by applying a multiresponse surface optimization method. In particular, the recently developed posterior approach to dual response surface optimization was employed, which allows the chemical mechanical planarization process engineer to investigate tradeoffs between the two response variables. Both responses were better with the obtained blend than with the existing blend. Copyright © 2016 John Wiley & Sons, Ltd.

The modeling of macroeconomic influence on rating migration matrices plays an important role in credit risk management, especially in stress testing. In contrast to approaches that separately condition migration matrices on a qualitative assessment of the state of the business cycle, we promote the use of generalized regression models, which allow macroeconomic covariates to be considered directly. We systemize, extend, and critically discuss different regression approaches, with an emphasis on violations of model assumptions and on adequate treatment of such problems, an aspect that has not received satisfactory attention in the recent literature. Moreover, we introduce a framework for model evaluation and variable selection based on the concept of out-of-sample forecasting, in order to avoid overfitting. Finally, we illustrate the concepts outlined with practical examples based on Standard & Poor's global corporate ratings data. Copyright © 2016 John Wiley & Sons, Ltd.

We establish a simple connection between certain *in-control* characteristics of the Cumulative Sum (CUSUM) Run Length and their *out-of-control* counterparts. The connection takes the form of paired integral (renewal) equations. The derivation exploits Wald's likelihood ratio identity and the well-known fact that the CUSUM chart is equivalent to repeated application of Wald's Sequential Probability Ratio Test (SPRT). The characteristics considered include the entire Run Length distribution and all of the corresponding moments, starting from the zero-state average run length. A particular *practical* benefit of our result is that it enables the in-control and out-of-control characteristics of the CUSUM Run Length to be computed *concurrently*. Moreover, owing to the equivalence of the CUSUM chart to a sequence of SPRTs, the Average Sample Number and Operating Characteristic functions of an SPRT under the null and under the alternative can *all* be computed *simultaneously* as well. This effectively doubles the efficiency of any numerical method that one may devise to carry out the actual computations. Copyright © 2016 John Wiley & Sons, Ltd.
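As an illustration of the kind of Run Length computation involved, the zero-state ARL of a one-sided normal-mean CUSUM can be approximated with the standard Brook–Evans Markov chain discretization of the renewal equation — a generic numerical scheme, not the paper's paired-equation method; design constants k = 0.5 and h = 5 are the usual textbook choices.

```python
import numpy as np
from scipy.stats import norm

def cusum_arl(mu, k=0.5, h=5.0, m=400):
    """Zero-state ARL of the one-sided CUSUM S_t = max(0, S_{t-1} + X_t - k),
    X_t ~ N(mu, 1), via the Brook-Evans Markov chain approximation."""
    w = h / m
    mid = (np.arange(m) + 0.5) * w          # cell midpoints on [0, h)
    edges = np.arange(m + 1) * w            # cell edges
    Q = np.empty((m, m))
    for i in range(m):
        # next value v = mid[i] + X - k; cell j covers [edges[j], edges[j+1])
        upper = norm.cdf(edges[1:] - mid[i] + k, loc=mu)
        lower = norm.cdf(edges[:-1] - mid[i] + k, loc=mu)
        Q[i] = upper - lower
        Q[i, 0] = upper[0]                  # cell 0 also absorbs the atom at S = 0
    # fundamental-matrix identity: ARL = (I - Q)^{-1} 1, started near S_0 = 0
    arl = np.linalg.solve(np.eye(m) - Q, np.ones(m))
    return arl[0]

arl0 = cusum_arl(mu=0.0)    # in-control
arl1 = cusum_arl(mu=1.0)    # out-of-control (one-sigma shift)
print(round(arl0, 1), round(arl1, 2))
```

The paper's contribution is that the in-control and out-of-control sides of this computation need not be run separately, as they are here.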

We consider the problem of modeling the dependence among many time series. We build high-dimensional time-varying copula models by combining pair-copula constructions with stochastic autoregressive copula and generalized autoregressive score models to capture dependence that changes over time. We show how the estimation of this highly complex model can be broken down into the estimation of a sequence of bivariate models, which can be achieved by using the method of maximum likelihood. Further, by restricting the conditional dependence parameter on higher cascades of the pair copula construction to be constant, we can greatly reduce the number of parameters to be estimated without losing much flexibility. Applications to five MSCI stock market indices and to a large dataset of daily stock returns of all constituents of the Dax 30 illustrate the usefulness of the proposed model class in-sample and for density forecasting. Copyright © 2016 John Wiley & Sons, Ltd.

In this work, we construct a family of simple stochastic covariance models that display stochastic mean-reverting levels of covariance as an additional layer of stochastic behavior beyond the well-known stochastic volatility and correlation. The one-dimensional version of our model is inspired by the Heston model, while the multidimensional model generalizes the principal component stochastic volatility model. The models' main contribution is that they capture stochastic mean-reversion levels of the volatility and of the eigenvalues of the instantaneous covariance matrix of the vector of stock prices, with direct implications for the correlations as well. Our focus is on the multidimensional model; we investigate its properties and derive a closed-form expression for the characteristic function. This allows us to study the pricing of financial derivatives, such as correlation and spread options. These prices are checked against simulated Monte Carlo prices for correctness. A sensitivity analysis is performed on the parameters of the stochastic mean-reverting level of volatilities to study their impact on the price. Finally, implied volatility curves and correlation surfaces are built to reveal the additional flexibility gained with the new model. Copyright © 2016 John Wiley & Sons, Ltd.

This paper studies *k*-out-of-*n* redundant systems with component lifetimes having lower tail permutation decreasing probability density. For matched redundancies with stochastic arrangement increasing lifetimes, the allocation of a more reliable component to a weaker component is proved to enhance system reliability. For redundancies with independent and identically distributed lifetimes, more allocations to a weaker component are shown to stochastically increase the system lifetime. In addition, using a real data set, we illustrate the statistical aspects of developing lifetimes with lower tail permutation decreasing density. Copyright © 2016 John Wiley & Sons, Ltd.

Reference samples are frequently used to estimate in-control parameters, which are then used as the true in-control parameters during the monitoring phase of Statistical Process Control (SPC) applications. The SPC literature has recognized that even small errors in parameter estimates determined from reference samples can have a large impact on the conditional (given the values of the estimated parameters) in-control average run length. However, there is little quantitative guidance on how large the reference sample should be to minimize this impact. In this paper, under the context of a recently developed Cumulative Sum (CUSUM) designed to detect translations in exponential distributions, a reference sample size formula for controlling relative error of the conditional in-control average run length is derived. The result in this paper is a stepping stone for reference sample size formulas in more general settings. Copyright © 2016 John Wiley & Sons, Ltd.

Increased consumption of fossil fuels in industrial production has led to a significant elevation in the emission of greenhouse gases and to global warming. The most effective international action against global warming is the Kyoto Protocol, which aims to reduce carbon emissions to desired levels within a certain time span. Carbon trading is one of the mechanisms used to achieve the desired reductions. One of the most important implications of carbon trading for industrial systems is the risk of uncertainty about the prices of carbon allowance permits traded in the carbon markets. In this paper, we consider stochastic and time series modeling of carbon market prices and provide estimates of the model parameters involved, based on European Union emissions trading scheme carbon allowance data for the 2008–2012 period. In particular, we consider fractional Brownian motion and autoregressive moving average–generalized autoregressive conditional heteroskedastic modeling of the European Union emissions trading scheme data and provide comparisons with benchmark models. Our analysis reveals evidence for structural changes in the underlying models over the years 2008–2012. Data-driven methods for identifying possible change-points in the underlying models are employed, and a detailed analysis is provided. Our analysis indicated change-points in the European Union Allowance (EUA) prices in the first half of 2009 and the second half of 2011, whereas three change-points appeared in the Certified Emissions Reduction (CER) prices: in the first half of 2009, the middle of 2011, and the second half of 2012. These change-points also seem to parallel global economic indicators. Copyright © 2016 John Wiley & Sons, Ltd.
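One simple data-driven change-point device of the kind alluded to is the CUSUM-of-squares statistic for a variance change; the series below is synthetic with a single planted break, not the EUA/CER data.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic return series with one variance change-point at tau = 600
n, tau = 1000, 600
x = np.concatenate([rng.normal(0, 1.0, tau), rng.normal(0, 2.5, n - tau)])

# CUSUM-of-squares statistic: D_k = sum_{1..k} x_i^2 / sum_{1..n} x_i^2 - k/n
css = np.cumsum(x**2) / np.sum(x**2) - np.arange(1, n + 1) / n

# the change-point estimate is where |D_k| peaks
tau_hat = int(np.argmax(np.abs(css))) + 1
print(tau_hat)
```

Applied recursively to the segments on either side of a detected break (binary segmentation), the same statistic yields multiple change-points of the kind reported for the EUA and CER series.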

The paper deals with the mathematical modeling of the reaction mechanism of rust formation. We provide both a quantitative description based on probability theory and a qualitative description for rust evolution using differential geometry. Copyright © 2016 John Wiley & Sons, Ltd.

Predicting weekly box-office demand is an important yet challenging problem. For theater exhibitors, such information enhances negotiation options with distributors and assists in planning the weekly movie portfolio mix. The existing literature focuses on forecasts of pre-release total gross revenue or on weekly predictions based on the first weeks' observations. This work adds to the literature by forecasting the entire demand structure using information from a movie similarity network. Specifically, we draw upon the assumption that aggregate consumer choice in the film industry is the key to understanding movie demand. Therefore, similar movies, in terms of audience appeal, should yield similar demand structures. We propose an automated technique that derives measurements of demand structure. We demonstrate that our technique enables the analysis of different aspects of demand structure, namely, decay rate, time of first demand peak, per-screen gross value at peak time, existence of a second demand wave, and time on screens. We deploy ideas from variable selection procedures to investigate the predictive power of the similarity network on demand dynamics. We show that our models not only perform significantly better than models that discard the similarity network but are also robust to new sets of box-office movies. Copyright © 2016 John Wiley & Sons, Ltd.

Distribution-free (nonparametric) control charts are helpful in applications where we do not have enough information about the underlying distribution. Shewhart precedence charts are a class of Phase I nonparametric charts for location. One of these charts, called the median precedence chart (Med chart hereafter), uses the median of the test sample as the charting statistic, whereas another, called the minimum precedence chart (Min chart hereafter), uses the minimum. In this paper, we first study the comparative performance of the Min and Med charts in terms of their in-control and out-of-control run-length properties in an extensive simulation study. It is seen that neither chart is uniformly best, as each has its strengths in certain situations. Next, we consider enhancing their performance by adding supplementary runs-rules. The new charts present very attractive run-length properties; that is, they outperform their competitors in many situations. A summary and some concluding remarks are given. Copyright © 2016 John Wiley & Sons, Ltd.

Definitive screening designs (DSDs) are a class of experimental designs that allow the estimation of linear, quadratic, and interaction effects with little experimental effort if there is effect sparsity. The number of experimental runs is twice the number of factors of interest plus one. Many industrial experiments involve nonnormal responses, and generalized linear models (GLMs) are a useful alternative for analyzing such data. However, the analysis of GLMs is based on asymptotic theory, which is questionable, for example, for a DSD with only 13 experimental runs. So far, the analysis of DSDs has assumed a normal response. In this work, we present a five-step strategy that uses tools from the Bayesian approach to analyze this kind of experiment when the response is nonnormal. We consider the cases of binomial, gamma, and Poisson responses without having to resort to asymptotic approximations. We use posterior odds that effects are active and posterior probability intervals for the effects to evaluate their significance. We also combine the results of the Bayesian procedure with the lasso estimation procedure to enhance the scope of the method. Copyright © 2016 John Wiley & Sons, Ltd.

Modeling has often failed to meet expectations, mostly because of the difficulty of comprehending relationships within phenomena and expressing them in mathematical models. Reality is frequently too complex to be reflected in a single model. This is often the case in marketing research, where variables relating to socioeconomics or psychographics constitute potential sources of heterogeneity. In such cases, the assumption that ‘one model fits all’ is unrealistic and may lead to inaccurate decisions. Thus, heterogeneity is a major issue in modeling. Once a model has been fitted to a complete data set and fulfills all validation criteria, it is difficult to establish whether it is valid for the whole population or whether it is merely an average artifact of several sub-populations. The purpose of this paper is to present the Pathmox approach for dealing with heterogeneity in partial least squares path modeling. The idea behind Pathmox is to build a tree of path models structured like a binary decision tree, with a different model for each of its nodes. The split criterion is an *F* statistic comparing two structural models. A simulation study was conducted to ensure the suitability of the split criterion. Finally, we applied Pathmox to a survey that measured *Satisfaction* among Spanish mobile phone operators. The results suggest that the Pathmox approach performs adequately in detecting heterogeneity in partial least squares path modeling. Copyright © 2016 John Wiley & Sons, Ltd.

The design of attribute sampling inspection plans based on compressed or narrow limits for food safety applications is covered. Artificially compressed limits allow a significant reduction in the number of analytical tests to be carried out while maintaining the risks at predefined levels. The design of optimal sampling plans is discussed for two given points on the operating characteristic curve and especially for the zero acceptance number case. Compressed limit plans matching the attribute plans of the International Commission on Microbiological Specifications for Foods are also given. The case of unknown batch standard deviation is also discussed. Three-class attribute plans with optimal positions for given microbiological limit *M* and good manufacturing practices limit *m* are derived. The proposed plans are illustrated through examples. R software codes to obtain sampling plans are also given. Copyright © 2016 John Wiley & Sons, Ltd.
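For the zero-acceptance-number case, the operating characteristic is simply P(accept | p) = (1 − p)^n, so the sample size meeting a consumer-risk point can be found in one line; the defect rate and risk level below are hypothetical, not values from any ICMSF plan.

```python
from math import ceil, log

# Zero-acceptance-number (c = 0) plan: accept the lot only if no defective
# unit appears in a sample of n, so P(accept | p) = (1 - p)^n.
def smallest_n(p1, beta):
    """Smallest n such that a lot with defect rate p1 is accepted
    with probability at most beta (the consumer's risk point)."""
    return ceil(log(beta) / log(1.0 - p1))

n = smallest_n(p1=0.05, beta=0.10)
print(n, round((1 - 0.05) ** n, 4))
```

Designing for two given OC points, or for three-class plans with limits *m* and *M*, generalizes this calculation but follows the same logic of bounding acceptance probabilities.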

In this paper, a dynamic evaluation of the multistate weighted k-out-of-n:F system is presented from an unreliability viewpoint. The expected failure cost of the components is used as an unreliability index. Using failure cost provides an opportunity to employ financial concepts in estimating system unreliability, so that system unreliability and system cost can be compared easily to support decision making. The components' state probabilities are computed over time to model the dynamic behavior of the system. The whole system is assessed using a recursive algorithm. As a result, a bi-objective optimization model can be developed to find optimal maintenance strategies. Finally, the application of the proposed model is investigated via a transportation system case. Matlab code is developed for the case, and a genetic algorithm is used to solve the optimization model. Copyright © 2016 John Wiley & Sons, Ltd.
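The recursive evaluation can be illustrated for the simpler binary (two-state) weighted k-out-of-n:F case by conditioning on the last component; the multistate version in the paper extends the same idea over component states. A sketch with illustrative parameters:

```python
from functools import lru_cache

def weighted_f_system_unreliability(weights, fail_probs, k):
    """P(system failure) for a binary weighted k-out-of-n:F system:
    the system fails when the total weight of failed components
    reaches at least k. Classical recursion over components."""
    n = len(weights)

    @lru_cache(maxsize=None)
    def q(i, w):
        # q(i, w): P(failed weight among first i components >= w)
        if w <= 0:
            return 1.0
        if i == 0:
            return 0.0
        qi = fail_probs[i - 1]  # condition on component i failing / surviving
        return qi * q(i - 1, w - weights[i - 1]) + (1 - qi) * q(i - 1, w)

    return q(n, k)
```

With weights (2, 1), failure probabilities (0.1, 0.2) and k = 2, only the first component's failure reaches the threshold on its own, so the system unreliability is 0.1.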

A general Bayesian approach for stochastic versions of deterministic growth models is presented to provide predictions for crack propagation in an early stage of the growth process. To improve the prediction, the information of other crack growth processes is used in a hierarchical (mixed-effects) model. Two stochastic versions of a deterministic growth model are compared. One is a nonlinear regression setup where the trajectory is assumed to be the solution of an ordinary differential equation with additive errors. The other is a diffusion model defined by a stochastic differential equation where increments have additive errors. While Bayesian prediction is known for hierarchical models based on nonlinear regression, we propose a new Bayesian prediction method for hierarchical diffusion models. Six growth models for each of the two approaches are compared with respect to their ability to predict the crack propagation in a large data example. Surprisingly, the stochastic differential equation approach has no advantage concerning the prediction compared with the nonlinear regression setup, although the diffusion model seems more appropriate for crack growth. Copyright © 2016 John Wiley & Sons, Ltd.
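The two stochastic formulations can be contrasted on a generic Paris-law-type growth rate (the paper's actual models, priors and data differ): the ODE path is deterministic and would receive additive observation error, while the SDE path carries state-dependent process noise, simulated here by Euler–Maruyama. All constants below are illustrative:

```python
import numpy as np

# da/dt = C * a**M, simulated once as a plain ODE (Euler) and once as
# an SDE with multiplicative noise (Euler-Maruyama); constants invented.
C, M, SIGMA, A0, T, N = 0.5, 1.2, 0.05, 1.0, 2.0, 400
dt = T / N
rng = np.random.default_rng(1)

a_ode = np.empty(N + 1); a_ode[0] = A0
a_sde = np.empty(N + 1); a_sde[0] = A0
for i in range(N):
    a_ode[i + 1] = a_ode[i] + C * a_ode[i] ** M * dt
    a_sde[i + 1] = (a_sde[i] + C * a_sde[i] ** M * dt
                    + SIGMA * a_sde[i] * np.sqrt(dt) * rng.standard_normal())
```

For this rate the ODE has the closed form a(t) = (a0^(−0.2) − 0.1 t)^(−5), so the Euler path can be checked against a(2) ≈ 3.05, while the diffusion path scatters around it.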

Burn-in is a method used to eliminate initial failures in field use. In this paper, we consider an information-based burn-in procedure for repairable items, which is a completely new type of burn-in procedure. Under this procedure, items with poor reliability performance are eliminated based on their operational (failure and repair) history observed during burn-in. From a probabilistic point of view, the procedure utilizes the information contained in the ‘random paths’ of the corresponding point processes. A general formulation of the model is suggested, and under this framework, a two-stage optimization procedure for determining optimal burn-in procedures is studied in detail. Copyright © 2016 John Wiley & Sons, Ltd.
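A toy version of such a history-based elimination rule can convey the idea (everything here is hypothetical and much simpler than the paper's point-process framework): a mixed weak/strong population, Poisson failure counts observed during burn-in, and elimination of any item that fails too often.

```python
import numpy as np

# Hypothetical sketch: each item's failures during burn-in form a Poisson
# process; items whose observed failure count reaches THRESHOLD are culled.
rng = np.random.default_rng(7)
N_ITEMS, BURN_TIME, THRESHOLD = 10_000, 2.0, 2
P_WEAK, RATE_WEAK, RATE_STRONG = 0.1, 3.0, 0.2  # invented population mix

weak = rng.random(N_ITEMS) < P_WEAK
rates = np.where(weak, RATE_WEAK, RATE_STRONG)
failures = rng.poisson(rates * BURN_TIME)   # observed operational history
kept = failures < THRESHOLD                 # information-based elimination

frac_weak_before = weak.mean()
frac_weak_after = weak[kept].mean()
```

The fraction of weak items among the survivors drops sharply, which is the qualitative effect an information-based burn-in procedure aims for; the paper's contribution is to formalize and optimize this using the full random paths, not just failure counts.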

We consider time-homogeneous Markov chains with state space *E_k* ≡ {0, 1, …, *k*} and initial distribution concentrated on the state 0. For pairs of such Markov chains, we study the

Remanufacturing processes such as refurbishing and reconditioning can extend the life of a product returned from the field. This provides financial opportunities and allows manufacturers to engage in sustainable practices. However, the inability to access a sufficient quantity of reconditioned components from end-of-life products can force the concurrent utilization of new components. This paper deals with the determination of an optimal warranty policy where a mixture of new and reconditioned components is used to carry out replacements upon failure for products under warranty. A mathematical optimization model is developed to maximize the manufacturer's expected total profit based on four decision variables: the warranty length, the sale price, the age of reconditioned components, and the proportion of reconditioned components to be used. A numerical procedure is used to compute the optimal solution. Numerical results are provided and discussed to demonstrate the validity and the added value of the proposed approach. Copyright © 2016 John Wiley & Sons, Ltd.
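The flavor of this optimization can be sketched with a deliberately simplified stand-in model. All functional forms and numbers below are hypothetical, and the age of reconditioned components is folded into their failure rate rather than treated as a fourth decision variable:

```python
import numpy as np
from itertools import product

# Invented toy model: warranty failures follow a Poisson process whose rate
# is a mixture of new and reconditioned component rates; demand falls with
# price and rises with warranty length.
LAM_NEW, LAM_RECON = 0.05, 0.12      # failures / year (hypothetical)
COST_NEW, COST_RECON = 40.0, 15.0    # unit replacement cost (hypothetical)
PROD_COST, BASE_DEMAND = 100.0, 1000.0

def expected_profit(price, w, a):
    """a = proportion of reconditioned components used in replacements."""
    demand = BASE_DEMAND * np.exp(-0.01 * price + 0.2 * w)
    rate = (1 - a) * LAM_NEW + a * LAM_RECON
    repl_cost = (1 - a) * COST_NEW + a * COST_RECON
    warranty_cost = rate * w * repl_cost       # E[failures] * unit cost
    return demand * (price - PROD_COST - warranty_cost)

# coarse grid search over the decision variables (the paper uses a
# dedicated numerical procedure instead)
grid = product(np.linspace(120, 300, 19),   # sale price
               np.linspace(0.5, 3.0, 6),    # warranty length (years)
               np.linspace(0.0, 1.0, 11))   # reconditioned proportion
best = max(grid, key=lambda g: expected_profit(*g))
```

Even in this crude sketch, the trade-off the paper studies is visible: cheaper reconditioned components lower the per-failure cost but raise the failure rate, and the optimal proportion balances the two.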
