The stochastic behaviour of the lifetimes of a two-component system is often primarily influenced by the system structure and by the covariates shared by the components. Any meaningful attempt to model the lifetimes must take into account the factors affecting their stochastic behaviour. In particular, for a load-share system, we describe a reliability model incorporating both the load-share dependence and the effect of observed and unobserved covariates. The model includes a bivariate Weibull distribution to characterize load share, a positive stable distribution to describe frailty, and also incorporates the effects of observed covariates. We investigate various interesting reliability properties of this model using cross-ratio functions and conditional survivor functions. We implement maximum likelihood estimation of the model parameters and discuss model adequacy and selection. We illustrate our approach using a simulation study. For a real data situation, we demonstrate the superiority of the proposed model, which incorporates both load-share and frailty effects, over competing models that incorporate just one of these effects. An attractive and computationally simple cross-validation technique is introduced to reconfirm this claim. We conclude with a summary and discussion.

This paper proposes a dynamic system, with an associated fusion learning inference procedure, to perform real-time detection and localization of nuclear sources using a network of mobile sensors. This is motivated by the need for a reliable detection system in order to prevent nuclear attacks in major cities such as New York City. The approach advocated here installs a large number of relatively inexpensive (and perhaps relatively less accurate) nuclear source detection sensors and GPS devices in taxis and police vehicles moving in the city. Sensor readings and GPS information are sent to a control center at a high frequency, where the information is immediately processed and fused with the earlier signals. We develop a real-time detection and localization method aimed at detecting the presence of a nuclear source and estimating its location and power. We adopt a Bayesian framework to perform the fusion learning and use a sequential Monte Carlo algorithm to estimate the parameters of the model and to perform real-time localization. A simulation study is provided to assess the performance of the method for both stationary and moving sources. The results provide guidance and recommendations for an actual implementation of such a surveillance system. Copyright © 2017 John Wiley & Sons, Ltd.

The computation of the reliability function of a (complex) coherent system is a difficult task. Hence, we must sometimes simply work with bounds (approximations). The computation of these bounds has been widely studied in the case of coherent systems with independent and identically distributed (IID) components. However, few results have been obtained in the case of heterogeneous (non-ID) components. In this paper, we derive explicit bounds for systems with heterogeneous (independent or dependent) components. Some stochastic comparisons are also obtained. Illustrative examples are included in which we compare the different bounds proposed in the paper.
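The abstract does not reproduce its bounds. As a rough illustration of the kind of object involved, the sketch below computes the classical Esary-Proschan bounds for a coherent system with independent, heterogeneous components; the 2-out-of-3 system, its path/cut sets, and the component probabilities are illustrative choices, not taken from the paper.

```python
from itertools import product

def exact_reliability(paths, p):
    # brute force: the system works iff some minimal path set is fully up
    n = len(p)
    rel = 0.0
    for state in product([0, 1], repeat=n):
        if any(all(state[i] for i in path) for path in paths):
            prob = 1.0
            for i, s in enumerate(state):
                prob *= p[i] if s else 1.0 - p[i]
            rel += prob
    return rel

def esary_proschan_bounds(paths, cuts, p):
    # lower bound from minimal cut sets, upper bound from minimal path sets
    lower = 1.0
    for cut in cuts:
        fail_all = 1.0
        for i in cut:
            fail_all *= 1.0 - p[i]
        lower *= 1.0 - fail_all
    miss_all = 1.0
    for path in paths:
        work_all = 1.0
        for i in path:
            work_all *= p[i]
        miss_all *= 1.0 - work_all
    return lower, 1.0 - miss_all

# 2-out-of-3 system with heterogeneous component reliabilities
paths = [(0, 1), (0, 2), (1, 2)]
cuts = [(0, 1), (0, 2), (1, 2)]
p = [0.9, 0.8, 0.85]
lower, upper = esary_proschan_bounds(paths, cuts, p)
exact = exact_reliability(paths, p)
```

For these numbers the exact reliability (0.941) is bracketed by the two bounds, and the bounds avoid the exponential state enumeration that makes the exact computation infeasible for large systems.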

We construct an arbitrage-free scenario tree reduction model, from which some arbitrage-free scenario tree reduction algorithms are designed. They ensure that the reduced scenario trees are arbitrage free. Numerical results show the practicality and efficiency of the proposed algorithms. Results for multistage portfolio selection problems demonstrate the necessity and importance for guaranteeing that the reduced scenario trees are arbitrage free, as well as the practicality of the proposed arbitrage-free scenario tree reduction algorithms for financial optimization.

A discrete-time mover-stayer (MS) model is an extension of a discrete-time Markov chain that assumes a simple form of population heterogeneity. The individuals in the population are either stayers, who never leave their initial states, or movers, who move according to a Markov chain. We, in turn, propose an extension of the MS model by specifying the stayer probability as a logistic function of an individual's covariates. Such an extension has recently been discussed for a continuous-time MS model but has not been considered before for a discrete-time one. This extension allows for an in-sample classification of subjects who never left their initial states into stayers or movers. The parameters of the extended MS model are estimated using the expectation-maximization algorithm. A novel bootstrap procedure is proposed for out-of-sample validation of the in-sample classification. The bootstrap procedure is also applied to validate the in-sample classification with respect to a more general dichotomy than the MS one. The developed methods are illustrated with a data set on installment loans, but they can be applied more broadly in the credit risk area, where prediction of the creditworthiness of a loan borrower or lessee is of major interest.
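To make the data-generating process concrete, here is a minimal simulation of a mover-stayer model with a logistic stayer probability. The logistic coefficients, the movers' transition matrix, and the two-state setting are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def stayer_prob(x, beta0=-1.0, beta1=2.0):
    # logistic link: probability that a subject with covariate x is a stayer
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))

P = np.array([[0.7, 0.3],     # movers' one-step transition matrix
              [0.4, 0.6]])

def simulate_ms(n, T, x):
    # everyone starts in state 0; stayers never leave it, movers follow P
    is_stayer = rng.random(n) < stayer_prob(x)
    paths = np.zeros((n, T), dtype=int)
    for i in range(n):
        state = 0
        for t in range(1, T):
            if not is_stayer[i]:
                state = rng.choice(2, p=P[state])
            paths[i, t] = state
    return paths, is_stayer

paths, is_stayer = simulate_ms(n=500, T=10, x=0.5)
```

In observed data the stayer indicator is latent, which is why the paper fits the model with the EM algorithm: subjects who never left their initial state may be either stayers or movers who happened not to move.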

Relational event data, which consist of events involving pairs of actors over time, are now commonly available at the finest of temporal resolutions. Existing continuous-time methods for modeling such data are based on point processes and directly model interaction “contagion,” whereby one interaction increases the propensity of future interactions among actors, often as dictated by some latent variable structure. In this article, we present an alternative approach to using temporal-relational point process models for continuous-time event data. We characterize interactions between a pair of actors as either spurious or as resulting from an underlying, persistent connection in a latent social network. We argue that consistent deviations from expected behavior, rather than solely high frequency counts, are crucial for identifying well-established underlying social relationships. This study aims to explore these latent network structures in two contexts: one comprising college students and another involving barn swallows.

This paper investigates the portfolio strategy problem for passive fund management. We propose a novel portfolio strategy that combines the existing stratified strategy and the optimized sampling strategy. The proposed method enables one to include adequate practical information in portfolio decision making and promotes better out-of-sample performance. A mixed-integer programming model is built that captures the stratification information, the cardinality requirement, and other practical constraints. The model is able to generate optimal tracking portfolios with high performance, especially in the out-of-sample period. As mixed-integer programming is NP-hard, to tackle the computational challenge we propose a stratified hybrid genetic algorithm in which a novel crossover operator is introduced. To evaluate the proposed strategy and algorithm, we conduct numerical tests on real data sets collected from China Stock Exchange Markets. The experimental results show that the algorithm runs efficiently and that the portfolio strategy performs significantly better than other existing strategies.

One of the major challenges associated with the measurement of customer lifetime value is selecting an appropriate model for predicting customer future transactions. Among such models, the Pareto/negative binomial distribution (Pareto/NBD) is the most prevalent in noncontractual relationships characterized by latent customer defections; i.e., defections are not observed by the firm when they happen. However, this model and its applications have some shortcomings. First, a methodological shortcoming is that the Pareto/NBD, like all lifetime transaction models based on statistical distributions, assumes that the number of transactions by a customer follows a Poisson distribution; however, many applications have an empirical distribution that does not fit a Poisson model. Second, a computational concern is that the implementation of the Pareto/NBD model presents estimation challenges, specifically related to the numerous evaluations of the Gaussian hypergeometric function. Finally, the model provides 4 parameters as output, which is insufficient to link individual purchasing behavior to socio-demographic information and to predict the behavior of new customers. In this paper, we model a customer's lifetime transactions using the Conway-Maxwell-Poisson distribution, which is a generalization of the Poisson distribution offering more flexibility and a better fit to real-world discrete data. To estimate the parameters, we propose a Markov chain Monte Carlo algorithm that is easy to implement. This Bayesian paradigm provides individual customer estimates, which help link purchase behavior to socio-demographic characteristics and provide an opportunity to target individual customers.
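The Conway-Maxwell-Poisson pmf itself is simple: P(Y = y) is proportional to λ^y/(y!)^ν, with ν = 1 recovering the Poisson, ν > 1 underdispersion, and ν < 1 overdispersion. The sketch below evaluates it in log space (the truncation length and parameter values are illustrative; the paper's MCMC sampler is not reproduced here).

```python
import math

def cmp_log_pmf(y, lam, nu, max_terms=200):
    # P(Y=y) = lam^y / (y!)^nu / Z(lam, nu); work in log space so the
    # truncated normalizing constant Z does not overflow
    logs = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(max_terms)]
    m = max(logs)
    log_z = m + math.log(sum(math.exp(v - m) for v in logs))
    return y * math.log(lam) - nu * math.lgamma(y + 1) - log_z

def cmp_pmf(y, lam, nu):
    return math.exp(cmp_log_pmf(y, lam, nu))

# nu = 1 recovers the Poisson pmf exactly
probs = [cmp_pmf(y, 3.0, 1.0) for y in range(60)]
poisson_p2 = 3.0 ** 2 / math.factorial(2) * math.exp(-3.0)

# nu = 2 gives an underdispersed count distribution (variance < mean)
under = [cmp_pmf(y, 3.0, 2.0) for y in range(60)]
mean_u = sum(y * p for y, p in enumerate(under))
var_u = sum((y - mean_u) ** 2 * p for y, p in enumerate(under))
```

The log-space evaluation matters in practice: computing (y!)^ν directly overflows floating point well before the normalizing series has converged.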

We propose a methodology based on partial least squares (PLS) regression models using the beta distribution, which is useful for describing data measured between zero and one. The beta PLS model parameters are estimated with the maximum likelihood method, whereas a randomized quantile residual and the generalized Cook and Mahalanobis distances are considered as diagnostic methods. A simulation study is provided for evaluating the performance of these diagnostic methods. We illustrate the methodology with real-world mining data. The results obtained in this study based on the beta PLS model and its diagnostics may be of interest for the mining industry.

This paper considers information properties of coherent systems when component lifetimes are independent and identically distributed. Some results on the entropy of coherent systems in terms of ordering properties of component distributions are proposed. Moreover, various sufficient conditions are given under which the entropy order among systems as well as the corresponding dual systems holds. Specifically, it is proved that, under some conditions, the entropy order among component lifetimes is preserved under coherent system formations. The findings are based on system signatures as a useful tool for comparison purposes. Furthermore, some results on the system's entropy are derived when the lifetimes of components are dependent and identically distributed. Several illustrative examples are also given.

Correlated count data processes with a finite range can be adequately described by a first-order binomial autoregressive model. However, in several practical applications, these data demonstrate extra-binomial variation, and a more appropriate choice is the first-order beta-binomial autoregressive model. In this paper, we propose and study control charts that can be used for the monitoring of these 2 processes. Practical guidelines concerning their statistical design are provided, whereas the effect of the extra-binomial variation is investigated as well. Finally, the practical application of the proposed schemes is illustrated via a real-data example.
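As a minimal sketch of the monitored process, the code below simulates a first-order binomial AR process via binomial thinning (the McKenzie/Weiß parametrization with marginal Bin(n, π) and lag-one correlation ρ) and computes the false-alarm rate of naive Shewhart-style limits taken from the binomial marginal. The parameters are illustrative, and the beta-binomial extension and the paper's chart designs are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_binar1(n, pi, rho, T):
    # binomial AR(1): X_t = alpha o X_{t-1} + beta o (n - X_{t-1}),
    # where "o" is binomial thinning; marginal is Bin(n, pi)
    beta = pi * (1 - rho)
    alpha = beta + rho
    x = rng.binomial(n, pi)
    out = np.empty(T, dtype=int)
    for t in range(T):
        x = rng.binomial(x, alpha) + rng.binomial(n - x, beta)
        out[t] = x
    return out

n, pi, rho = 20, 0.25, 0.4
xs = simulate_binar1(n, pi, rho, 20_000)

# Shewhart-style 3-sigma limits from the Bin(n, pi) marginal
mu, sd = n * pi, (n * pi * (1 - pi)) ** 0.5
lcl, ucl = mu - 3 * sd, mu + 3 * sd
alarm_rate = ((xs < lcl) | (xs > ucl)).mean()
```

With beta-binomial (extra-binomial) variation the marginal variance exceeds nπ(1−π), so limits designed under the binomial assumption would alarm too often, which is the design issue the paper investigates.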

Various charts, such as |*S*|, *W*, and *G*, are used for monitoring process dispersion. Most of these charts are based on the normality assumption; because the exact distribution of the control statistic is unknown, its limiting distribution, applicable only for large sample sizes, is employed. In practice, the normality assumption might be violated, and it is not always possible to collect large samples. Furthermore, to use control charts in practice, the in-control state usually has to be estimated, and such estimation has a negative effect on the performance of a control chart. Non-parametric bootstrap control charts can be considered as an alternative when the distribution is unknown, when large samples cannot be collected, or when the process parameters are estimated from a Phase I data set. In this paper, non-parametric bootstrap multivariate control charts |*S*|, *W*, and *G* are introduced, and their performance is compared against Shewhart-type control charts. The proposed method is based on bootstrapping the data used for estimating the in-control state. Simulation results show satisfactory performance for the bootstrap control charts. Finally, the proposed control charts are applied to a real case study.
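The core idea, setting control limits from bootstrap quantiles of the chart statistic rather than from an asymptotic distribution, can be sketched as follows for the generalized variance |*S*|. The subgroup size, quantile levels, and Phase I data are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(7)

def bootstrap_limits(phase1, stat, m=5, B=2000, alpha=0.005):
    # resample subgroups of size m from the Phase I data, recompute the
    # chart statistic, and take empirical quantiles as control limits
    n = len(phase1)
    stats = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=m)
        stats[b] = stat(phase1[idx])
    return np.quantile(stats, [alpha, 1 - alpha])

# Phase I: bivariate in-control data (illustrative in-control model)
phase1 = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=200)
gen_var = lambda x: np.linalg.det(np.cov(x, rowvar=False))
lcl, ucl = bootstrap_limits(phase1, gen_var)
```

Because the limits come from the empirical Phase I distribution, no normality assumption or large-sample limiting distribution is needed, and the Phase I estimation uncertainty is reflected in the resampling itself.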

In this paper, we investigate the possibility of using multivariate singular spectrum analysis (SSA), a nonparametric technique in the field of time series analysis, for mortality forecasting. We consider a real data application with 9 European countries: Belgium, Denmark, Finland, France, Italy, the Netherlands, Norway, Sweden, and Switzerland, over the period 1900 to 2009, and a simulation study based on this data set. The results show the superiority of multivariate SSA over univariate SSA in terms of forecasting accuracy.
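For readers unfamiliar with SSA, the univariate building block is: embed the series into a Hankel trajectory matrix, take its SVD, keep the leading elementary matrices, and diagonally average back to a series. A minimal sketch on synthetic data (window length and rank are illustrative; the multivariate version stacks trajectory matrices of several series):

```python
import numpy as np

def ssa_reconstruct(x, L, k):
    # embed the series into an L x K trajectory (Hankel) matrix
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # keep the k leading elementary matrices
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]
    # diagonal (Hankel) averaging back to a series
    rec = np.zeros(N)
    cnt = np.zeros(N)
    for j in range(K):
        rec[j:j + L] += Xk[:, j]
        cnt[j:j + L] += 1
    return rec / cnt

t = np.arange(200)
signal = np.sin(2 * np.pi * t / 25)          # a periodic "mortality-like" component
rng = np.random.default_rng(0)
noisy = signal + 0.3 * rng.standard_normal(200)
smooth = ssa_reconstruct(noisy, L=50, k=2)

mse_noisy = float(np.mean((noisy - signal) ** 2))
mse_ssa = float(np.mean((smooth - signal) ** 2))
```

A rank-2 reconstruction captures a single sinusoid, so the denoised series is much closer to the underlying signal than the raw observations; forecasting then continues the recovered low-rank structure.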

We develop a simple stock selection model to explain why active equity managers tend to underperform a benchmark index. We motivate our model with the empirical observation that the best performing stocks in a broad market index often perform much better than the other stocks in the index. Randomly selecting a subset of securities from the index may dramatically increase the chance of underperforming the index. The relative likelihood of underperformance by investors choosing active management is likely much more important than the loss those same investors take due to the higher fees of active management relative to passive index investing. Thus, active management may be even more challenging than previously believed, and the stakes for finding the best active managers may be larger than previously assumed.
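The mechanism is easy to demonstrate with a stylized simulation (the numbers below are illustrative, not the paper's model): if a few big winners drive the index return, a small random portfolio underperforms whenever it misses all of them, which happens more than half the time.

```python
import numpy as np

rng = np.random.default_rng(3)

n_stocks, n_winners, n_pick, n_trials = 100, 5, 10, 20_000
# stylized cross-section: 5 big winners (+300%), the rest flat
returns = np.zeros(n_stocks)
returns[:n_winners] = 3.0
index_return = returns.mean()          # equal-weight index return: 0.15

under = 0
for _ in range(n_trials):
    pick = rng.choice(n_stocks, size=n_pick, replace=False)
    if returns[pick].mean() < index_return:
        under += 1
p_underperform = under / n_trials
```

Here a 10-stock portfolio underperforms exactly when it contains zero winners, whose probability is C(95,10)/C(100,10) ≈ 0.58, even though the portfolio's expected return equals the index return. The skewness of returns, not fees, drives the asymmetry.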

Microseismic sensing networks are important tools for the assessment and control of geomechanical hazards in underground mining operations. In such a setting, the maintenance of a healthy network, that is, one that accurately registers all microseisms above some minimum energy level with acceptable levels of noise, is crucially relevant.

In this paper, we develop a nondisruptive method to monitor the health of such a network by associating with each sensor a set of performance indexes, inspired by reliability engineering, which are estimated from the set of registered signals. Our method addresses 2 relevant features of each sensor's behavior, namely, what type of noise is or might be affecting the registering process, and how effective the sensor is at registering microseisms.

The method is evaluated through a case study with microseismic data registered at the Chilean underground mine El Teniente. This study illustrates our method's capability to discriminate and rank sensors with satisfactory, poor, or defective sensing performance, as well as to characterize their failure profile or type, information that can be used to plan or optimize network maintenance procedures.

In recent years, there has been an increasing incidence of failure of rock bolts due to stress corrosion cracking and localized corrosion attack in Australian underground coal mines. Unfortunately, prediction of the risk of failure from results obtained from laboratory testing is not necessarily reliable because it is difficult to properly simulate the mine environment. An alternative way of predicting failure is to apply machine learning methods to data obtained from underground mines. In this paper, support vector machines are built to predict failure of bolts in complex mine environments. Feature transformation and feature selection methods are applied to extract useful information from the original data. A dataset, which had continuous features and spatial data, was used to test the proposed model. The results showed that principal component analysis-based feature transformation provides reliable risk prediction.

The problem of heterogeneity represents a very important issue in the decision-making process. Furthermore, it has become common practice in the context of marketing research to assume that different population parameters are possible depending on sociodemographic and psycho-demographic variables such as age, gender, and social status. In recent decades, numerous approaches have been proposed with the aim of incorporating heterogeneity into the parameter estimation procedures. In partial least squares path modeling, the common practice consists of obtaining a global measurement of the differences arising from heterogeneity. This leaves the analyst with the important task of detecting, a posteriori, which causal relationships (i.e., path coefficients) produce changes in the model. This is the case in Pathmox analysis, which addresses the heterogeneity problem by building a binary tree to detect those segments of the population that cause the heterogeneity. In this article, we propose extending the Pathmox methodology to assess which particular endogenous equation of the structural model and which path coefficients are responsible for the difference.

Model fusion methods, or more generally ensemble methods, are a useful tool for prediction. Combining predictions from a set of models smooths out biases and reduces variances of predictions from individual models, and hence, the combined predictions typically outperform those from individual models. In many algorithms, individual predictions are arithmetically averaged with equal weights. However, in the presence of correlated models, the fusion process is required to account for association between models; otherwise, the naively averaged predictions will be suboptimal. This article describes optimal model fusion principles and illustrates the potential pitfalls of naive fusion in the presence of correlated models for binary data. An efficient algorithm for correlated model fusion is detailed and applied to algorithms mining social media information to predict civil unrest.

In this work, we consider a set of sequences of information (time series) under a nonstationary regime, with continuous state space, discrete time, and Markovian dependence. A new model is proposed that expresses the marginal transition density function of one sequence as a linear combination of the marginal transition density functions of all sequences in the set. The coefficients of this combination are termed *marginal contribution coefficients* and represent how much each transition density function contributes to the calculation of a chosen transition density function. The proposed coefficient is marginal because it can be computed instantaneously, and it may change from one time point to another, since all calculations are performed before stationarity is reached. This clearly differentiates the new coefficient from well-known measures such as the cross-correlation and the coherence. The idea behind the model is that if a specific sequence has a high marginal contribution to the transition density function of another sequence, the first may be replaced by the latter without losing much information; this means that knowledge of a few densities should be enough to recover the overall behaviour. Simulations considering 2 chains are presented to check the sensitivity of the proposed model. The methodology is also applied to real data originating from a wire-drawing machine whose main function is to decrease the transverse diameter of metal wires. The behaviour of the level of acceleration of each bearing in relation to the others is then examined.

We constructed a Stackelberg game in a supply chain finance (SCF) system including a manufacturer, a capital-constrained retailer, and a bank that provides loans on the basis of the manufacturer's credit guarantee. To emphasize the financial service providers' risks, we assumed that both the bank and the manufacturer are risk-averse and formulated trade-off objective functions for both of them as the convex combination of expected profit and conditional value-at-risk. To explore the effects of risk preferences and decision preferences on SCF equilibria, we mathematically analyzed the optimal order quantities, wholesale prices, and interest rates under different risk preference scenarios and performed numerical analyses to quantify the effects. We found that incorporating bank credit with a credit guarantee can effectively balance the retailer's financing risk between the bank and the manufacturer through interest rate charging and wholesale pricing. Moreover, SCF equilibria with risk aversion are highly affected by the degree of both the lender's and the guarantor's risk tolerance with regard to the borrower's default probability and are more conservative than those in the risk-neutral cases that only maximize expected profit.

A screening design is an experimental plan used for identifying the expectedly few active factors from potentially many. In this paper, we compare the performance of 3 experimental plans, a Plackett-Burman design, a minimum-run resolution IV design, and a definitive screening design, with 12 or 13 runs, when they are used for screening and 3 out of 6 factors are active. The functional relationship between the response and the factors was allowed to be of 2 types: a second-order model and a model with all main effects and interactions included. D-efficiencies for the designs' ability to estimate the parameters of such models were computed, but it turned out that these are not very informative for comparing the screening performance of the 2-level designs to that of the definitive screening design. The overall screening performance of the 2-level designs was quite good, but there exist situations where the definitive screening design, allowing both screening and estimation of second-order models in the same operation, has a reasonably high probability of being successful.
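The D-efficiency computation mentioned here is standard: for an n-run design with p-column model matrix X, it is det(X'X)^(1/p)/n, which equals 1 for an orthogonal two-level design. The sketch below builds the 12-run Plackett-Burman design from its usual cyclic generating row and verifies this for the main-effects model in 6 factors (the choice of 6 columns is illustrative).

```python
import numpy as np

def d_efficiency(X):
    # D-efficiency of a model matrix X (n runs, p terms): det(X'X)^(1/p) / n
    n, p = X.shape
    return np.linalg.det(X.T @ X) ** (1.0 / p) / n

# 12-run Plackett-Burman design: 11 cyclic shifts of the standard
# generating row, plus a final row of all -1
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
D = np.vstack([np.roll(gen, i) for i in range(11)] + [-np.ones(11, dtype=int)])

# intercept + 6 main effects (use the first 6 of the 11 columns)
X = np.column_stack([np.ones(12), D[:, :6]])
eff = d_efficiency(X)
```

Because the PB12 columns are mutually orthogonal with zero sums, X'X = 12·I and the D-efficiency is exactly 1, which is why, as the abstract notes, D-efficiency alone cannot separate the screening performance of such designs.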

Urban rail planning is extremely complex, mainly because it is a decision problem under various uncertainties. In practice, travel demand is generally uncertain, and timetabling decisions must therefore be based on accurate estimation. This research addresses the optimization of train timetables at public transit terminals of an urban rail system in a stochastic setting. To cope with stochastic fluctuations in arrival rates, a two-stage stochastic programming model is developed. The objective is to construct a daily train schedule that minimizes the expected waiting time of passengers. Because of the high computational cost of evaluating the expected-value objective, the sample average approximation method is applied. The method provides statistical estimates of the optimality gap, as well as lower and upper bounds and the associated confidence intervals. Numerical experiments are performed to evaluate the performance of the proposed model and the solution method.
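The sample average approximation (SAA) idea can be sketched independently of the timetabling application: replace the expectation with a sample average, optimize, then evaluate the candidate on a fresh sample to get lower- and upper-bound estimates of the optimal value. The newsvendor-style objective, grid search, and demand distribution below are illustrative stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(11)

def cost(x, d, c_over=1.0, c_under=4.0):
    # simple recourse cost, a stand-in for expected passenger waiting time
    return c_over * np.maximum(x - d, 0) + c_under * np.maximum(d - x, 0)

def saa_solve(sample, grid):
    # minimize the sample-average objective over a candidate grid
    avg = [cost(x, sample).mean() for x in grid]
    i = int(np.argmin(avg))
    return grid[i], avg[i]

grid = np.linspace(0, 200, 401)
train = rng.normal(100, 20, size=2000)        # scenario sample
x_hat, lb = saa_solve(train, grid)            # SAA optimum; lb estimates a lower bound
fresh = rng.normal(100, 20, size=20_000)
ub = cost(x_hat, fresh).mean()                # unbiased upper-bound estimate at x_hat
gap = ub - lb
```

The in-sample optimal value is biased low (it optimizes over the same noise it averages), while the out-of-sample evaluation of the fixed candidate is unbiased and at least the true optimum in expectation; the difference is the SAA optimality-gap estimate, and repeating over independent samples yields the confidence intervals the abstract mentions.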

The generalized *T*^{2} chart (GT-chart), which is composed of the *T*^{2} statistic based on a small number of principal components and the remaining components, is a popular alternative to the traditional Hotelling's *T*^{2} control chart. However, the application of the GT-chart to high-dimensional data, which are now ubiquitous, encounters difficulties from high dimensionality, as do other multivariate procedures. The sample principal components and their eigenvalues do not consistently estimate the population values, and a GT-chart relying on them is also inconsistent in estimating the control limits. In this paper, we investigate the effects of high dimensionality on the GT-chart and then propose a corrected GT-chart using recent results from random matrix theory for the spiked covariance model. We numerically show that the corrected GT-chart exhibits superior performance compared to existing methods, including the GT-chart and Hotelling's *T*^{2} control chart, in various high-dimensional cases. Finally, we apply the proposed corrected GT-chart to monitor chemical processes introduced in the literature.
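The inconsistency of sample eigenvalues in high dimensions, and a random-matrix correction for it, can be illustrated with the standard spiked-model relation λ̂ → ℓ + γℓ/(ℓ−1) (unit noise variance, aspect ratio γ = p/n, spike ℓ above the detection threshold). The sketch below inverts that relation; it is not the paper's corrected chart, and the dimensions and spike size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def debias_spike(lam_hat, gamma):
    # invert lam_hat = l + gamma*l/(l-1), i.e. solve
    # l^2 - l*(lam_hat + 1 - gamma) + lam_hat = 0 for the larger root
    b = lam_hat + 1 - gamma
    return (b + np.sqrt(b * b - 4 * lam_hat)) / 2

p, n, spike = 200, 400, 10.0
gamma = p / n
# spiked covariance: one eigenvalue at 10, the rest at 1
scales = np.ones(p)
scales[0] = np.sqrt(spike)
X = rng.standard_normal((n, p)) * scales
lam_hat = np.linalg.eigvalsh(X.T @ X / n)[-1]   # biased upward for p/n > 0
spike_est = debias_spike(lam_hat, gamma)
```

With p/n = 0.5 the top sample eigenvalue concentrates near 10.56 rather than 10; the inversion removes that systematic bias, which is the kind of correction needed to keep control limits consistent in the high-dimensional regime.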

In this work, we investigate sequential Bayesian estimation for inference of stochastic volatility with variance-gamma (SVVG) jumps in returns. We develop an estimation algorithm that combines the sequential learning auxiliary particle filter with the particle learning filter. Simulation evidence and empirical estimation results indicate that this approach is able to filter latent variances, identify latent jumps in returns, and provide sequential learning about the static parameters of SVVG. We demonstrate comparative performance of the sequential algorithm and off-line Markov Chain Monte Carlo in synthetic and real data applications.

Actuarial risks and financial asset returns are typically heavy tailed. In this paper, we introduce 2 stochastic dominance criteria, called the right-tail order and the left-tail order, to compare these variables stochastically. The criteria are based on comparisons of expected utilities, for 2 classes of utility functions that give more weight to the right or the left tail (depending on the context) of the distributions. We study their properties, applications, and connections with other classical criteria, including the increasing convex and the second-order stochastic dominance. Finally, we rank some parametric families of distributions and provide empirical evidence of the new stochastic dominance criteria with an example using real data.
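The paper's right- and left-tail orders are defined through tail-weighted utility classes and are not reproduced here. As a companion, the sketch below empirically checks one of the classical criteria the paper connects to, the increasing convex order, via the stop-loss transform E[(X − t)+]; the distributions and grid are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def icx_dominated(x, y, grid):
    # empirical increasing convex order: X <=_icx Y iff the stop-loss
    # transform E[(X - t)+] never exceeds E[(Y - t)+] on the grid
    ex = np.array([np.maximum(x - t, 0).mean() for t in grid])
    ey = np.array([np.maximum(y - t, 0).mean() for t in grid])
    return bool((ex <= ey + 1e-12).all())

# exponential risk vs a heavier-tailed Lomax risk with slightly larger mean
light = rng.exponential(1.0, size=100_000)
heavy = 1.6 * rng.pareto(2.5, size=100_000)   # Lomax(2.5) scaled; mean 1.6/1.5
grid = np.linspace(0.0, 5.0, 21)

light_below_heavy = icx_dominated(light, heavy, grid)
heavy_below_light = icx_dominated(heavy, light, grid)
```

For these choices the exponential's stop-loss transform e^{-t} lies below the Lomax's everywhere, so the dominance holds in one direction and fails in the other; tail-weighted criteria refine exactly this kind of comparison by emphasizing the relevant tail.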

In this paper, we extend the closed-form moment estimator (ordinary MCFE) for the autoregressive conditional duration model given by Lu et al. (2016) and propose closed-form robust moment-based estimators for the multiplicative error model (MEM) to deal with additive and innovational outliers. The robustification of the closed-form estimator is achieved by replacing the sample mean and sample autocorrelation with robust estimators. These estimators are more robust than the quasi-maximum likelihood estimator (QMLE) often used for this model, they are easy to implement, and they require neither a numerical optimization procedure nor the choice of initial values. The performance of our proposal in estimating the parameters and forecasting the conditional mean *μ*_{t} of the MEM(1,1) process is compared with existing proposals via Monte Carlo experiments, and the results show that our proposal generally outperforms the ordinary MCFE, the QMLE, and the least absolute deviation estimator in the presence of outliers. Finally, we fit the price durations of IBM stock with the robust closed-form estimators and the benchmarks and analyze their performance in estimating model parameters and forecasting the irregularly spaced intraday Value at Risk.
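The moment-matching idea behind closed-form estimation of the MEM(1,1) can be illustrated without the paper's formulas: the process has an ARMA(1,1) representation, so its autocorrelations satisfy ρ_k = ρ_1(α+β)^{k−1}, giving α+β from the ratio ρ_2/ρ_1, while the sample mean identifies ω/(1−α−β). The simulation parameters below are illustrative, and the robustified estimators of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(21)

def simulate_mem(omega, alpha, beta, T, burn=1000):
    # MEM(1,1): x_t = mu_t * eps_t, mu_t = omega + alpha*x_{t-1} + beta*mu_{t-1},
    # with unit-mean exponential innovations
    eps = rng.exponential(1.0, size=T + burn)
    mu = omega / (1 - alpha - beta)     # start at the stationary mean
    out = np.empty(T + burn)
    for t in range(T + burn):
        out[t] = mu * eps[t]
        mu = omega + alpha * out[t] + beta * mu
    return out[burn:]

def acf(x, k):
    xc = x - x.mean()
    return (xc[:-k] * xc[k:]).mean() / xc.var()

x = simulate_mem(omega=0.2, alpha=0.2, beta=0.7, T=300_000)

persistence_hat = acf(x, 2) / acf(x, 1)   # estimates alpha + beta = 0.9
mean_hat = x.mean()                        # estimates omega/(1-alpha-beta) = 2.0
```

Replacing the sample mean and sample autocorrelations in such moment identities with robust counterparts is, at a high level, the robustification strategy the abstract describes: outliers distort the plain moments, and hence the closed-form estimates, badly.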

A commonly occurring problem in reliability testing is how to combine pass/fail test data that is collected from disparate environments. We have worked with colleagues in aerospace engineering for a number of years where two types of test environments in use are ground tests and flight tests. Ground tests are less expensive and consequently more numerous. Flight tests are much less frequent, but directly reflect the actual usage environment. We discuss a relatively simple combining approach that realizes the benefit of a larger sample size by using ground test data, but at the same time accounts for the difference between the two environments. We compare our solution with what look like more sophisticated approaches to the problem in order to calibrate its limitations. Overall, we find that our proposed solution is robust to its inherent assumptions, which explains its usefulness in practice.

This article describes statistical analyses pertaining to marketing data from a large multinational pharmaceutical firm. We describe models for monthly new prescription counts that are written by physicians for the firm's focal drug and for competing drugs, as functions of physician-specific and time-varying predictors. Modeling patterns in discrete-valued time series, and specifically in time series of counts, based on large datasets is the focus of much recent research. We first provide a brief overview of Bayesian approaches we have employed for modeling multivariate count time series using Markov chain Monte Carlo methods. We then discuss a flexible level correlated model framework, which enables us to combine different marginal count distributions and to build a hierarchical model for the vector time series of counts, while accounting for the association among the components of the response vector, as well as possible overdispersion. We employ the integrated nested Laplace approximation (INLA) for fast approximate Bayesian modeling using the R-INLA package (r-inla.org). To enhance computational speed, we first build a model for each physician, use features of the estimated trends in the time-varying parameters to cluster the physicians into groups, and fit aggregate models for all physicians within each cluster. Our three-stage analysis can provide useful guidance to the pharmaceutical firm on their marketing actions.


We present a Bayesian decision-theoretic approach for developing replacement strategies. In doing so, we consider a semiparametric model to describe the failure characteristics of systems, specifying a nonparametric form for the cumulative intensity function and taking into account the effect of covariates through a parametric form. The use of a gamma process prior for the cumulative intensity function complicates the Bayesian analysis when updating is based on failure count data. We develop a Bayesian analysis of the model using Markov chain Monte Carlo methods and determine replacement strategies; the adoption of these methods involves a data augmentation algorithm. We illustrate the implementation of our approach using actual data from railroad tracks. Copyright © 2016 John Wiley & Sons, Ltd.

Two hypotheses can explain the declining probability of gaining employment as an unemployment spell wears on: heterogeneity of the unemployed versus duration dependence. The nonparametric tests developed in the literature for testing duration dependence do not account for the fact that an unemployment spell can terminate in ways other than employment. The nonparametric tests developed in this paper extend, under certain conditions, those tests to competing risks. We illustrate our tests using US unemployment data, in which we find little consistent evidence of duration dependence. © 2017 The Authors. *Applied Stochastic Models in Business and Industry* published by John Wiley & Sons, Ltd.

The establishment of a cost-effective management strategy for aquaculture is one of the most important issues in fishery science, and it can be addressed with bio-economic mathematical modeling. This paper deals with this issue using a stochastic process model for aquacultured, non-renewable fishery resources, from the viewpoint of an optimal stopping (timing) problem. The goal of operating the model is to find the optimal criteria for starting to harvest the resources in a stochastic environment, which turn out to be determined by the Bellman equation (BE). The BE has a separation-of-variables structure and can be simplified to a reduced BE with fewer degrees of freedom. The dependence of solutions to the original and reduced BEs on parameters and independent variables is analyzed from both analytical and numerical standpoints. Implications of the analysis results for the management of aquaculture systems are presented as well. A numerical simulation focusing on aquacultured *Plecoglossus altivelis* in Japan validates the mathematical analysis results.

This work considers the optimum design of a life-testing experiment with progressive type I interval censoring. A cost-minimization-based optimality criterion is proposed. The proposed cost function incorporates the cost of conducting the experiment, the opportunity cost, and the post-sale cost. It is shown that the proposed cost function is scale invariant for any lifetime distribution whose support does not depend on the parameters of the distribution. The Weibull distribution is considered for illustration, and the optimum solution is obtained by a suitable numerical method. A sensitivity analysis is undertaken to study the effect of small perturbations in the lifetime model parameter values or cost coefficients.

A variance swap is a typical financial tool for managing volatility risk. In this paper, we evaluate different types of variance swaps under a threshold Ornstein–Uhlenbeck model, which exhibits both mean reversion and regime switching in the underlying asset price. We derive the analytical solution for the joint moment generating function of the log-asset prices at two distinct time points, which enables us to price various types of variance swaps analytically.

This paper is concerned with a discrete-time *Geo*/*G*/1 repairable queueing system with Bernoulli feedback and a randomized *N*-policy. The service station may fail randomly while serving customers and is then sent for repair immediately. The randomized *N*-policy means that when the number of customers in the system reaches a given threshold value *N*, the deactivated server is turned on with probability *p* or is left off with probability 1 − *p*. Applying the law of total probability decomposition, renewal theory, and the probability generating function technique, we investigate queueing performance measures and reliability indices simultaneously. Both the transient queue length distribution and recursive expressions for the steady-state queue length distribution at various epochs are derived explicitly. Meanwhile, the stochastic decomposition property is presented for the proposed model. Various reliability indices are also discussed, including the transient and steady-state unavailability of the service station, the expected number of service station breakdowns during the time interval (0, *t*], and the equilibrium failure frequency of the service station. Finally, an operating cost function is formulated, and the direct search method is employed to numerically find the optimum value of *N* that minimizes the system cost.

This paper is devoted to the stochastic optimistic bilevel optimization problem with a quantile criterion in the upper-level problem. If the probability distribution is finite, the problem can be transformed into a mixed-integer nonlinear optimization problem. We formulate assumptions guaranteeing that an optimal solution exists. A production planning problem is used to illustrate the usefulness of the model.
