This paper considers the statistical analysis of masked data in a series system whose components are assumed to follow the Marshall-Olkin Weibull distribution. Based on type-I progressive hybrid censored and masked data, we derive the maximum likelihood estimates, approximate confidence intervals, and bootstrap confidence intervals of the unknown parameters. Because the maximum likelihood estimates may not exist for small sample sizes, Gibbs sampling is used to obtain the Bayesian estimates, and a Monte Carlo method is employed to construct credible intervals based on the Jeffreys prior with partial information. Numerical simulations are performed to compare the performance of the proposed methods, and one data set is analyzed.
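The bootstrap intervals mentioned above can be illustrated with a generic percentile bootstrap. The sketch below is illustrative only: it does not use the Marshall-Olkin Weibull likelihood, and the estimator and settings are assumptions for the example.

```python
import random

def bootstrap_ci(data, estimator, B=1000, alpha=0.05, seed=0):
    """Nonparametric percentile bootstrap confidence interval for a
    generic estimator (illustrative sketch, not the article's method)."""
    rng = random.Random(seed)
    n = len(data)
    # Resample the data with replacement B times and collect the estimates.
    stats = sorted(
        estimator([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(B)
    )
    # Percentile interval: cut off alpha/2 mass in each tail.
    lo = stats[int((alpha / 2) * B)]
    hi = stats[int((1 - alpha / 2) * B) - 1]
    return lo, hi
```

In a parametric variant, one would instead refit the model to each resample and collect the parameter estimates; the percentile step is the same.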

This article treats the problem of scheduling multiple cranes processing jobs along a line, where the cranes are divided into groups and only cranes in the same group can interfere with each other. Such crane scheduling problems occur, for example, at indented berths or in container yards where double rail-mounted gantry cranes stack containers, so that cranes of the same size can interfere with each other while small cranes can pass underneath larger ones. We propose a novel algorithm based on Benders decomposition to solve this problem to optimality. A computational study shows that the algorithm solves small and medium-sized instances, and even many large instances, within a few seconds or minutes. Moreover, it improves several best-known solutions from the literature for the simpler problem version with only one crane group. We also examine whether investment in more complicated crane configurations with multiple crane groups is actually worthwhile.

We consider price and capacity decisions for a profit-maximizing service provider in a single-server queueing system, in which customers are boundedly rational and decide whether to join the service according to a multinomial logit model. We find two potential price-capacity pairs satisfying the first-order condition of the profit-maximization problem. Profit is maximized at the solution with the larger capacity, but minimized at the smaller one. We then consider a system whose capacity adjusts dynamically, to mimic a real-life situation, and find that the maximum can be reached only when the initial service rate exceeds a certain threshold; otherwise, the system capacity and demand shrink to zero. We also find that a higher level of customers' bounded rationality does not necessarily benefit a firm, nor does it necessarily allow service to be sustained. We extend our analysis to a setting in which the customers' bounded rationality level depends on historical demand and find that such a setting makes service easier to sustain. Finally, we find that bounded rationality always harms social welfare.
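As a concrete illustration of the logit joining rule, the sketch below computes an equilibrium effective arrival rate in an M/M/1 queue by damped fixed-point iteration. The reward, waiting cost, and rationality parameter are hypothetical, not taken from the article.

```python
import math

def join_probability(price, mu, lam_eff, reward=10.0, wait_cost=1.0, theta=1.0):
    """Multinomial-logit join probability: a customer compares the utility of
    joining (reward - price - expected waiting cost) with balking (utility 0).
    theta is the rationality level; theta -> infinity recovers full rationality."""
    wait = 1.0 / (mu - lam_eff)          # M/M/1 sojourn time at the effective rate
    u_join = reward - price - wait_cost * wait
    return math.exp(theta * u_join) / (math.exp(theta * u_join) + 1.0)

def equilibrium_rate(price, mu, Lambda, iters=200):
    """Damped fixed-point iteration for lam = Lambda * P(join | lam).
    Requires Lambda < mu so the queue stays stable throughout."""
    lam = 0.0
    for _ in range(iters):
        lam = 0.5 * lam + 0.5 * Lambda * join_probability(price, mu, lam)
    return lam
```

The damping (averaging the old and new iterates) is a simple stabilizer; the abstract's point about two first-order solutions corresponds to this map having multiple fixed points for some parameter values.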

Piracy attacks are a serious security problem for maritime transport worldwide. While various strategic actions can be taken, such as rerouting vessels and strengthening naval patrols, these cannot completely eliminate the possibility of an attack. It is therefore important for a commercial vessel to be equipped with operational solutions in case a piracy attack occurs. In particular, the choice of a direction in which to flee rapidly is a critical decision for the vessel. In this article, we formulate this problem as a nonlinear optimal control problem. We consider various policies, such as maintaining a straight course or making turns, develop algorithms to optimize these policies, and derive conditions under which they are effective and safe. Our work can serve as a real-time decision-making tool that enables a vessel master to evaluate different scenarios and make decisions quickly.

This article studies convergence properties of optimal values and actions for discounted and average-cost Markov decision processes (MDPs) with weakly continuous transition probabilities, and applies these properties to the stochastic periodic-review inventory control problem with backorders, positive setup costs, and convex holding/backordering costs. The following results are established for MDPs with possibly non-compact action sets and unbounded cost functions: (i) convergence of value iterations to optimal values for discounted problems with possibly non-zero terminal costs, (ii) convergence of optimal finite-horizon actions to optimal infinite-horizon actions for total discounted costs as the time horizon tends to infinity, and (iii) convergence of optimal discounted-cost actions to optimal average-cost actions for infinite-horizon problems as the discount factor tends to 1. Applied to the setup-cost inventory control problem, the general results on MDPs imply the optimality of (*s*, *S*) policies and convergence properties of the optimal thresholds. In particular, this article analyzes the setup-cost inventory control problem without two assumptions often used in the literature: (a) that the demand is either discrete or continuous, and (b) that the backordering cost is higher than the cost of backordered inventory when the amount of backordered inventory is large. © 2017 Wiley Periodicals, Inc. Naval Research Logistics 00: 000–000, 2017
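The (*s*, *S*) structure can be made concrete with a small simulation: order up to *S* whenever inventory falls to *s* or below, paying a setup cost plus linear ordering, holding, and backordering costs. The demand distribution and cost figures below are hypothetical; the article's results cover general demand.

```python
import random

def simulate_sS(s, S, horizon=10000, setup=5.0, unit=1.0,
                hold=1.0, back=4.0, seed=0):
    """Monte Carlo average cost per period of an (s, S) policy
    (illustrative sketch with uniform integer demand)."""
    rng = random.Random(seed)
    inv, cost = S, 0.0
    for _ in range(horizon):
        if inv <= s:                       # reorder point reached: order up to S
            cost += setup + unit * (S - inv)
            inv = S
        inv -= rng.randint(0, 4)           # hypothetical demand draw
        cost += hold * max(inv, 0) + back * max(-inv, 0)
    return cost / horizon
```

Sweeping `s` and `S` over a grid and picking the cheapest pair is a simple way to see the thresholds the convergence results are about.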

In standard stochastic dynamic programming, the transition probability distributions of the underlying Markov chains are assumed to be known with certainty. We focus on the case where the transition probabilities or other input data are uncertain. Robust dynamic programming addresses this problem by defining a min-max game between Nature and the controller. Considering examples from inventory and queueing control, we examine the structure of the optimal policy in such robust dynamic programs when event probabilities are uncertain. We identify the cases where certain monotonicity results still hold and the optimal policy retains a threshold form. We also investigate the marginal value of time and the case of uncertain rewards. © 2017 Wiley Periodicals, Inc. Naval Research Logistics, 2017

This article provides conditions under which total-cost and average-cost Markov decision processes (MDPs) can be reduced to discounted ones. Results are given for transient total-cost MDPs with transition rates whose values may be greater than one, as well as for average-cost MDPs with transition probabilities satisfying the condition that there is a state such that the expected time to reach it is uniformly bounded for all initial states and stationary policies. In particular, these reductions imply sufficient conditions for the validity of optimality equations and the existence of stationary optimal policies for MDPs with undiscounted total-cost and average-cost criteria. When the state and action sets are finite, these reductions lead to linear programming formulations and complexity estimates for MDPs under the aforementioned criteria. © 2017 Wiley Periodicals, Inc. Naval Research Logistics, 2017

We study a single-product fluid-inventory model in which the procurement price of the product fluctuates according to a continuous-time Markov chain. We assume that a fixed order cost, in addition to state-dependent holding costs, is incurred, and that the depletion rate of the inventory is determined by the selling price of the product. Hence, at any time the controller must simultaneously decide on the selling price of the product and whether or not to order, taking into account the current procurement price and the inventory level. In particular, the controller faces the question of how best to exploit the random time windows in which the procurement price is low. We consider two policies, derive the associated steady-state distributions and cost functionals, and apply those cost functionals to compare the two policies. © 2017 Wiley Periodicals, Inc. Naval Research Logistics, 2017

This article analyzes a class of stochastic contests among multiple players under risk-averse exponential utility. In these contests, players compete over the completion of a task by simultaneously deciding on their investments, which determine how fast they complete the task. The completion time of the task for each player is assumed to be an exponentially distributed random variable with rate linear in the player's investment, and the completion times of different players are assumed to be stochastically independent. The player who completes the task first earns a prize, whereas the remaining players earn nothing. The article establishes a one-to-one correspondence between the Nash equilibrium of this contest with respect to risk-averse exponential utilities and the nonnegative solution of a nonlinear equation. Using the properties of the latter, it proves the existence and uniqueness of the Nash equilibrium and provides an efficient method to compute it. It exploits the resulting representation of the equilibrium investments to determine the effects of risk aversion and the differences between the outcome of the Nash equilibrium and that of a centralized version. © 2016 Wiley Periodicals, Inc. Naval Research Logistics, 2016
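A useful fact behind this contest model: with independent exponential completion times, the probability that a given player finishes first is that player's rate divided by the total rate, and the winning time itself is exponential with the total rate. A minimal sketch (the rates below are assumed for illustration):

```python
def win_probabilities(rates):
    """P(player i finishes first) for independent exponential completion
    times: rate_i / sum(rates). The minimum of independent exponentials
    is exponential with the sum of the rates."""
    total = sum(rates)
    return [r / total for r in rates]
```

With rates linear in investment, increasing one player's investment raises that player's win probability and shortens the expected contest duration `1 / total`, which is the coupling the equilibrium analysis works with.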

In this article, we consider shortest path problems in a directed graph where the transitions between nodes are subject to uncertainty. We use a minimax formulation, where the objective is to guarantee that a special destination state is reached with a minimum-cost path under the worst possible instance of the uncertainty. Problems of this type arise, among others, in planning and pursuit-evasion contexts, and in model predictive control. Our analysis makes use of the recently developed theory of abstract semicontractive dynamic programming models. We investigate questions of existence and uniqueness of the solution of the optimality equation, existence of optimal paths, and the validity of various algorithms patterned after the classical methods of value and policy iteration, as well as a Dijkstra-like algorithm for problems with nonnegative arc lengths. © 2016 Wiley Periodicals, Inc. Naval Research Logistics, 2016
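The minimax formulation can be sketched with a small value-iteration routine: the controller minimizes over actions while Nature picks the worst successor each action allows. The graph encoding below is an assumption made for the example, not the article's notation.

```python
def robust_value_iteration(n, dest, transitions, iters=100):
    """Minimax value iteration for a robust shortest path problem.
    transitions[node] is a list of actions; each action is a pair
    (cost, successors), and Nature may move to any listed successor."""
    INF = float("inf")
    J = [INF] * n
    J[dest] = 0.0                         # destination costs nothing to reach
    for _ in range(iters):
        for i in range(n):
            if i == dest:
                continue
            # Controller minimizes; Nature maximizes over allowed successors.
            J[i] = min(c + max(J[j] for j in succ)
                       for c, succ in transitions[i])
    return J
```

On acyclic examples the iteration settles after a few sweeps; the article's semicontractive theory addresses when such iterations converge in general, which this sketch does not.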

A Markov population decision chain concerns the control of a population of individuals in different states by assigning an action to each individual in the system in each period. This article solves the problem of finding policies that maximize expected system utility over a finite horizon in Markov population decision chains with finite state-action space under the following assumptions: (1) the utility function exhibits constant risk posture, (2) the progeny vectors of distinct individuals are independent, and (3) the progeny vectors of individuals in a state who take the same action are identically distributed. The main result is that it is possible to solve the problem with the original state-action space, without augmenting it to include information about the population in each state or any other aspect of the system history. In particular, there exists an optimal policy that assigns the same action to all individuals in a given state and period, independently of the population in that period, and such a policy can be computed efficiently. The optimal utility operators, which take the maximum of a finite collection of polynomials (rather than affine functions), yield an optimal solution with computational effort linear in the number of periods. © 2016 Wiley Periodicals, Inc. Naval Research Logistics, 2016

In this article, we study a class of quasi-skipfree (QSF) processes in which the transition rate submatrices in the skipfree direction have a column-times-row structure. Under homogeneity and irreducibility assumptions, we show that the stationary distributions of these processes have a product form as a function of the level. As an application, we discuss a queueing model that can be formulated as a QSF process on a two-dimensional state space. In addition, we study the properties of the stationary distribution and derive the monotonicity of the mean number of customers in the queue, their mean sojourn time, and its variance, for a fixed mean arrival rate. © 2016 Wiley Periodicals, Inc. Naval Research Logistics, 2016

We consider a dynamic pricing model in which the instantaneous rate of the demand arrival process depends not only on the current price charged by the firm, but also on the present state of the world. The state, which reflects the current economic condition, evolves in a Markovian fashion. This model represents the real-life situation in which the sales season is long relative to the fast pace at which the outside environment changes. We establish the value of being better informed about the state of the world. When reasonable monotonicity conditions are met, we show that better economic conditions lead to higher prices. Our computational study is partially calibrated with real data. It demonstrates that the benefit of heeding varying economic conditions is on par with the value of embracing randomness in the demand process. © 2015 Wiley Periodicals, Inc. Naval Research Logistics, 2015

Under quasi-hyperbolic discounting, the valuation of a payoff falls relatively rapidly for short delays, but more slowly for longer ones. When salespersons with quasi-hyperbolic discounting face a product sale problem, they exert less effort than their earlier plan prescribes, resulting in losses of future profit. We propose a winner-takes-all competition to alleviate this time-inconsistent behavior of the salespersons, and allow the company to maximize its revenue by choosing an optimal bonus. To evaluate the effects of the competition scheme, we define the group time-inconsistency degree of the salespersons, which measures the consequences of time-inconsistent behavior, and two welfare measures: the group welfare of the salespersons and the company's revenue. We show that the competition always improves the group welfare and the company's revenue as long as the company chooses to run the competition in the first place. However, the effect on the group time-inconsistency degree is mixed. When the optimal bonus is moderate (extremely high), the competition motivates (over-motivates) the salespersons to work hard, thereby alleviating (worsening) the time-inconsistent behavior. © 2017 Wiley Periodicals, Inc. Naval Research Logistics 64: 357–372, 2017
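Quasi-hyperbolic (β–δ) discounting can be written down in a few lines; the sketch below produces the present-bias pattern described in the first sentence. The parameter values in the usage note are illustrative.

```python
def quasi_hyperbolic_weights(beta, delta, horizon):
    """Beta-delta discount weights: 1 for an immediate payoff, and
    beta * delta**t for a payoff delayed t >= 1 periods. With beta < 1,
    value drops sharply over the first delay period and then declines
    geometrically at rate delta."""
    return [1.0] + [beta * delta ** t for t in range(1, horizon)]
```

Note that the drop from today to tomorrow is a factor β·δ, while every later one-period drop is only δ; with β < 1 the near future is discounted disproportionately, which is the source of the time-inconsistent effort choices.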

We study a multi-stage dynamic assignment interdiction (DAI) game in which two agents, a user and an attacker, compete in the underlying bipartite assignment graph. The user wishes to assign a set of tasks at the minimum cost, and the attacker seeks to interdict a subset of arcs to maximize the user's objective. The user assigns exactly one task per stage, and the assignment costs and interdiction impacts vary across stages. Before any stage commences in the game, the attacker can interdict arcs subject to a cardinality constraint. An interdicted arc can still be used by the user, but at an increased assignment cost. The goal is to find an optimal sequence of assignments, coupled with the attacker's optimal interdiction strategy. We prove that this problem is strongly NP-hard, even when the attacker can interdict only one arc. We propose an exact exponential-state dynamic-programming algorithm for this problem as well as lower and upper bounds on the optimal objective function value. Our bounds are based on classical interdiction and robust optimization models, and on variations of the DAI game. We examine the efficiency of our algorithms and the quality of our bounds on a set of randomly generated instances. © 2017 Wiley Periodicals, Inc. Naval Research Logistics 64: 373–387, 2017

We consider parallel-machine scheduling with a common server and job preemption to minimize the makespan. While the non-preemptive version of the problem is strongly NP-hard, the complexity status of the preemptive version has remained open. We show that the preemptive version is NP-hard even if there is a fixed number of machines. We give a pseudo-polynomial time algorithm to solve the case with two machines. We show that the case with an arbitrary number of machines is unary NP-hard, analyze the performance ratios of some natural heuristic algorithms, and present several solvable special cases. © 2017 Wiley Periodicals, Inc. Naval Research Logistics 64: 388–398, 2017

We consider the integrated problem of optimally maintaining an imperfect, deteriorating sensor and the safety-critical system it monitors. The sensor's costless observations of the binary state of the system become less informative over time. A costly full inspection may be conducted to perfectly discern the state of the system, after which the system is replaced if it is in the out-of-control state. In addition, a full inspection provides the opportunity to replace the sensor. We formulate the problem of adaptively scheduling full inspections and sensor replacements using a partially observable Markov decision process (POMDP) model. The objective is to minimize the total expected discounted costs associated with system operation, full inspection, system replacement, and sensor replacement. We show that the optimal policy has a threshold structure and demonstrate the value of coordinating system and sensor maintenance via numerical examples. © 2017 Wiley Periodicals, Inc. Naval Research Logistics 64: 399–417, 2017

We consider an integrated usage and maintenance optimization problem for a *k*-out-of-*n* system pertaining to a moving asset. Such systems are commonly used in practice to increase availability, where *n* denotes the total number of parallel and identical units and *k* the number of units required to be active for a functional system. Moving assets such as aircraft, ships, and submarines are subject to different operating modes. Operating modes dictate not only the number of system units that need to be active, but also the physical location of the moving asset and the environmental conditions under which it operates. We use the intrinsic age concept to model the degradation process. The intrinsic age is analogous to an intrinsic clock that ticks at a different pace in different operating modes. In our problem setting, the number of active units, the degradation rates of active and standby units, the maintenance costs, and the type of economic dependencies are all functions of the operating mode. In each operating mode, the decision maker must decide on the set of units to activate (usage decision) and the set of units to maintain (maintenance decision). Since the degradation rate differs for active and standby units, the units to be maintained depend on the units that have been activated, and vice versa. To minimize maintenance costs, usage and maintenance decisions should therefore be jointly optimized. We formulate this problem as a Markov decision process and provide some structural properties of the optimal policy. Moreover, we assess the performance of usage policies that are commonly implemented for maritime systems. We show that the cost increase resulting from these policies is up to 27% in realistic settings. Our numerical experiments demonstrate the cases in which joint usage and maintenance optimization is most valuable. © 2017 Wiley Periodicals, Inc. Naval Research Logistics 64: 418–434, 2017
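The intrinsic-age clock admits a one-line formalization: degradation accumulates at a mode-dependent rate, so a unit's intrinsic age is the rate-weighted time spent in each operating mode. The mode names and rates below are assumptions for illustration.

```python
def intrinsic_age(mode_schedule, rates):
    """Intrinsic age of a unit: calendar time spent in each operating
    mode, weighted by that mode's degradation rate (the clock ticks
    faster in harsher modes)."""
    return sum(rates[mode] * duration for mode, duration in mode_schedule)
```

Active and standby units would carry different rates for the same mode, which is why the maintenance decision depends on the usage decision and vice versa.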