Reversible Markov chains are the basis of many applications. However, computing transition probabilities from a finite sample of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain may be non-reversible and thereby lose important properties, such as a real-valued spectrum. In this paper, we show how to find the reversible Markov chain closest to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem. Copyright © 2015 John Wiley & Sons, Ltd.
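As a small illustration of the reversibility property at stake (not the convex-optimization method of the paper), the classical additive reversibilization produces *a* reversible chain with the same stationary distribution, though not the norm-closest one. The function names and the 3-state example below are invented for this sketch:

```python
def is_reversible(P, pi, tol=1e-12):
    """Check detailed balance: pi_i * P_ij == pi_j * P_ji for all i, j."""
    n = len(P)
    return all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < tol
               for i in range(n) for j in range(n))

def reversibilize(P, pi):
    """Additive reversibilization R = (P + D^{-1} P^T D) / 2, D = diag(pi).
    R is again a transition matrix, reversible with stationary vector pi."""
    n = len(P)
    return [[0.5 * (P[i][j] + pi[j] * P[j][i] / pi[i]) for j in range(n)]
            for i in range(n)]
```

A doubly stochastic but non-symmetric chain with uniform stationary distribution is non-reversible, while its reversibilization satisfies detailed balance.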

The quality of the mesh used in a finite element discretization affects the efficiency of solving the resulting discretized linear systems. Most algebraic solvers, with the exception of multigrid methods, do not account for the effect of grid geometry and mesh quality on their convergence rates. In this paper, we consider hierarchical quadratic discretizations of three-dimensional linear elasticity problems on certain anisotropic hexahedral meshes and present a new two-level method whose convergence is only weakly dependent on the size of the resulting problems, achieved through a special local block Gauss–Seidel smoother: the LBGS_v iteration for vertex nodes and the LBGS_m iteration for midside nodes. Moreover, we obtain efficient algebraic multigrid (AMG) methods by applying DAMG (AMG based on a distance matrix) or DAMG-PCG (PCG with DAMG as a preconditioner) to the solution of the coarse-level equation. The resulting AMG methods are then applied to a practical example, a long beam. The numerical results verify the efficiency and robustness of the proposed AMG algorithms. Copyright © 2015 John Wiley & Sons, Ltd.

Let *V*_{1},…,*V*_{n} be a collection of subspaces of a finite-dimensional real vector space *V*. Let *L* denote a one-dimensional subspace of *V*, and let *θ*(*L*,*V*_{i}) denote the principal angle between *L* and *V*_{i}. Motivated by a problem in data analysis, we seek an *L* that maximizes the function *F*(*L*) = cos *θ*(*L*,*V*_{1}) + ⋯ + cos *θ*(*L*,*V*_{n}). Conceptually, this is the line through the origin that best represents *V*_{1},…,*V*_{n} with respect to the criterion *F*(*L*). A reformulation shows that *L* is spanned by a vector *v* = *v*_{1} + ⋯ + *v*_{n}, which maximizes the function ||*v*_{1} + ⋯ + *v*_{n}|| subject to the constraints *v*_{i}∈*V*_{i} and ||*v*_{i}||=1. In this setting, *v* is seen to be the longest vector that can be decomposed into unit vectors lying on prescribed hyperspheres. A closely related problem is to find the longest vector that can be decomposed into vectors lying on prescribed hyperellipsoids. Using Lagrange multipliers, the critical points of either problem can be cast as solutions of a multivariate eigenvalue problem. We employ homotopy continuation and numerical algebraic geometry to solve this problem and obtain the extremal decompositions. Copyright © 2015 John Wiley & Sons, Ltd.
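A minimal numerical sketch of the reformulated problem (not the homotopy-continuation method of the abstract, and with invented names and data): fixing all but one unit vector, the optimal *v*_{i} is the normalized projection of the sum of the others onto *V*_{i}, so cyclic updates monotonically increase ||*v*_{1} + ⋯ + *v*_{n}||.

```python
import numpy as np

def longest_decomposition(bases, iters=200, seed=0):
    """Alternating maximization of ||v_1 + ... + v_n|| with v_i a unit
    vector in span(bases[i]); each bases[i] has orthonormal columns."""
    rng = np.random.default_rng(seed)
    vs = []
    for B in bases:
        w = B @ rng.standard_normal(B.shape[1])
        vs.append(w / np.linalg.norm(w))
    for _ in range(iters):
        for i, B in enumerate(bases):
            s = sum(v for j, v in enumerate(vs) if j != i)
            w = B @ (B.T @ s)          # project the partial sum onto V_i
            nw = np.linalg.norm(w)
            if nw > 1e-14:
                vs[i] = w / nw
    return vs, np.linalg.norm(sum(vs))

# three lines in the plane: the x-axis, the y-axis, and the diagonal
e1 = np.array([[1.0], [0.0]])
e2 = np.array([[0.0], [1.0]])
u = np.array([[1.0], [1.0]]) / np.sqrt(2)
vs, norm = longest_decomposition([e1, e2, u])
```

For these three lines the optimal sum has length 1 + √2, attained when all three unit vectors point into the same quadrant.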

Recently, Guo and Lin [*SIAM J. Matrix Anal. Appl.*, 31 (2010), 2784–2801] proposed an efficient numerical method to solve the palindromic quadratic eigenvalue problem (PQEP) (*λ*^{2}*A*^{T}+*λQ* + *A*)*z* = 0 arising from the vibration analysis of high-speed trains, where the *n* × *n* matrices *Q* and *A* have special structures: both are, among other things, *m* × *m* block matrices with each block being *k* × *k* (thus, *n* = *mk*); moreover, *Q* is block tridiagonal, and *A* has only one nonzero block, in the (1,*m*)th block position. The key intermediate step of the method is the computation of the so-called stabilizing solution to the *n* × *n* nonlinear matrix equation *X* + *A*^{T}*X*^{−1}*A* = *Q* via the doubling algorithm. The aim of this article is to propose an improvement to this key step through solving a new nonlinear matrix equation of the same form but of size only *k* × *k*. This new and much smaller matrix equation can also be solved by the doubling algorithm. For the same accuracy, it takes the same number of doubling iterations to solve both the larger and the new smaller matrix equations, but each doubling iterative step on the larger equation takes about 4.8 times as many flops as the step on the smaller equation. Replacing Guo and Lin's key intermediate step with our modified one leads to an alternative method for the PQEP. This alternative method is faster, but the improvement in speed is not as dramatic as that for solving the respective nonlinear matrix equations alone, and it levels off as *m* increases. Numerical examples are presented to show the effectiveness of the new method. Copyright © 2014 John Wiley & Sons, Ltd.
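For context, the stabilizing solution mentioned above can also be obtained by the basic fixed-point iteration *X*_{k+1} = *Q* − *A*^{T}*X*_{k}^{−1}*A*, which converges only linearly (the doubling algorithm of the abstract converges quadratically, which is why it is preferred). The small matrices below are invented test data for this sketch:

```python
import numpy as np

def fixed_point_nme(A, Q, iters=300):
    """Fixed-point iteration X_{k+1} = Q - A^T X_k^{-1} A, started at
    X_0 = Q.  Under suitable conditions it converges linearly to the
    stabilizing solution of X + A^T X^{-1} A = Q, i.e. the solution
    with spectral radius of X^{-1} A less than one."""
    X = Q.copy()
    for _ in range(iters):
        X = Q - A.T @ np.linalg.solve(X, A)
    return X

A = np.array([[0.5, 0.0], [0.1, 0.4]])   # invented test data
Q = np.array([[3.0, 0.1], [0.1, 2.0]])
X = fixed_point_nme(A, Q)
```

The residual of the computed X should be at round-off level, and the iteration matrix X⁻¹A should be stable.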

We study sweeping preconditioners for symmetric positive definite block tridiagonal systems of linear equations. The algorithm provides an approximate inverse that can be used directly or in a preconditioned iterative scheme. These algorithms are based on replacing the Schur complements appearing in a block Gaussian elimination direct solve by hierarchical matrix approximations with reduced off-diagonal ranks. This involves developing low-rank hierarchical approximations to inverses. We first provide a convergence analysis for the reduced-rank hierarchical inverse approximation algorithm. These results are then used to prove convergence and preconditioning estimates for the resulting sweeping preconditioner. Copyright © 2014 John Wiley & Sons, Ltd.
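The exact block elimination that such a sweeping preconditioner approximates can be sketched as follows. In the actual algorithm the Schur complements S[i] would be compressed into hierarchical matrices with low off-diagonal rank; here they are kept exact for clarity, and all names are invented:

```python
import numpy as np

def block_tridiag_solve(D, L, U, b):
    """Exact block-LDU sweep for a block tridiagonal system.
    D[i]: diagonal blocks (i = 0..m-1); L[i]: subdiagonal block in
    block-row i+1; U[i]: superdiagonal block in block-row i;
    b: list of right-hand-side blocks."""
    m = len(D)
    S = [None] * m          # Schur complements of the forward sweep
    y = [None] * m
    S[0], y[0] = D[0], b[0]
    for i in range(1, m):   # forward elimination sweep
        S[i] = D[i] - L[i - 1] @ np.linalg.solve(S[i - 1], U[i - 1])
        y[i] = b[i] - L[i - 1] @ np.linalg.solve(S[i - 1], y[i - 1])
    x = [None] * m
    x[m - 1] = np.linalg.solve(S[m - 1], y[m - 1])
    for i in range(m - 2, -1, -1):   # backward substitution
        x[i] = np.linalg.solve(S[i], y[i] - U[i] @ x[i + 1])
    return x
```

On a small SPD block tridiagonal system the sweep reproduces the dense solve exactly.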

The problem of computing spline-like functions for ideal data, subject to two-sided inequality constraints on the first-order derivative, is considered, focusing primarily on constant constraints and then generalizing. Using a variational approach, in which the inequality constraints are not explicitly imposed, we introduce a special type of exponential splines that we call speed limit quasi splines. A simple, non-parametric, efficient, and robust iterative solver is suggested, suitable for a wide range of inequality constraints. An analysis of the convergence factor of this algorithm is provided and supported by extensive numerical tests. Copyright © 2014 John Wiley & Sons, Ltd.

Alternating least squares (ALS) is often considered the workhorse algorithm for computing the rank-*R* canonical tensor approximation, but for certain problems, its convergence can be very slow. The nonlinear conjugate gradient (NCG) method was recently proposed as an alternative to ALS, but the results indicated that NCG is usually not faster than ALS. To improve the convergence speed of NCG, we consider a nonlinearly preconditioned NCG (PNCG) algorithm for computing the rank-*R* canonical tensor decomposition. Our approach uses ALS as a nonlinear preconditioner in the NCG algorithm. Alternatively, NCG can be viewed as an acceleration process for ALS. We demonstrate numerically that the convergence acceleration mechanism in PNCG often leads to important pay-offs for difficult tensor decomposition problems, with convergence that is significantly faster and more robust than for the stand-alone NCG or ALS algorithms. We consider several approaches for incorporating the nonlinear preconditioner into the NCG algorithm that have been described in the literature previously and have met with success in certain application areas. However, it appears that the nonlinearly PNCG approach has received relatively little attention in the broader community and remains underexplored both theoretically and experimentally. Thus, this paper serves several additional functions, by providing in one place a concise overview of several PNCG variants and their properties that have only been described in a few places scattered throughout the literature, by systematically comparing the performance of these PNCG variants for the tensor decomposition problem, and by drawing further attention to the usefulness of nonlinearly PNCG as a general tool. In addition, we briefly discuss the convergence of the PNCG algorithm. In particular, we obtain a new convergence result for one of the PNCG variants under suitable conditions, building on known convergence results for non-preconditioned NCG. 
Copyright © 2014 John Wiley & Sons, Ltd.
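A rank-1 instance of the ALS workhorse discussed above can be sketched in a few lines (toy code with invented names; the problems of interest in the abstract use rank R > 1 and far more challenging data). Each factor is updated by a least-squares contraction of the tensor against the other two factors:

```python
import numpy as np

def cp_rank1_als(T, iters=50, seed=0):
    """Rank-1 alternating least squares for a 3-way tensor T:
    cyclically solve the (closed-form) least-squares problem for one
    factor while the other two are held fixed."""
    rng = np.random.default_rng(seed)
    a, b, c = (rng.standard_normal(n) for n in T.shape)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    return a, b, c
```

On an exactly rank-1 tensor, ALS recovers the factorization (up to scaling) after a single sweep from a generic starting guess.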

This paper is concerned with a generalization of the Kronecker product splitting (KPS) iteration for solving linear systems arising in implicit Runge–Kutta and boundary value methods discretizations of ordinary differential equations. It is shown that the new scheme can outperform the standard KPS method in some situations and can be used as an effective preconditioner for Krylov subspace methods. Numerical experiments are presented to demonstrate the effectiveness of the methods. Copyright © 2014 John Wiley & Sons, Ltd.

In this paper, the idea of auxiliary space multigrid methods is introduced. The construction is based on a two-level block factorization of local (finite element stiffness) matrices associated with a partitioning of the domain into overlapping or non-overlapping subdomains. The two-level method utilizes a coarse-grid operator obtained from an additive Schur complement approximation. Its analysis is carried out in the framework of auxiliary space preconditioning, and condition number estimates for both the two-level preconditioner and the additive Schur complement approximation are derived. The two-level method is recursively extended to define the auxiliary space multigrid algorithm. In particular, so-called Krylov cycles are considered. The theoretical results are supported by a representative collection of numerical tests that further demonstrate the efficiency of the new algorithm for multiscale problems. Copyright © 2014 John Wiley & Sons, Ltd.

This paper is concerned with several variants of the Hermitian and skew-Hermitian splitting iteration method to solve a class of complex symmetric linear systems. Theoretical analysis shows that several Hermitian and skew-Hermitian splitting based iteration methods are unconditionally convergent. Numerical experiments from an *n*-degree-of-freedom linear system are reported to illustrate the efficiency of the proposed methods. Copyright © 2014 John Wiley & Sons, Ltd.
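A bare-bones sketch of the basic HSS iteration for a complex symmetric system A = W + iT with W symmetric positive definite (the paper's variants modify this scheme; the 2×2 data below are invented). Each sweep alternates a Hermitian and a skew-Hermitian half-step:

```python
import numpy as np

def hss_solve(A, b, alpha=1.0, iters=100):
    """Hermitian/skew-Hermitian splitting (HSS) iteration for Ax = b:
    alternate solves with alpha*I + H and alpha*I + S, where
    H = (A + A^H)/2 and S = (A - A^H)/2.  Converges for any alpha > 0
    whenever H is positive definite."""
    n = A.shape[0]
    I = np.eye(n)
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    x = np.zeros(n, dtype=complex)
    for _ in range(iters):
        x = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x + b)
    return x

# invented complex symmetric test system A = W + iT with W SPD
W = np.array([[4.0, 1.0], [1.0, 3.0]])
T = np.array([[2.0, 0.0], [0.0, 1.0]])
A = W + 1j * T
b = np.array([1.0, 2.0 + 1.0j])
x = hss_solve(A, b)
```

The contraction factor is bounded by max over the eigenvalues λ of H of |α − λ|/(α + λ), which is strictly below one for every α > 0.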

This paper addresses the solution of parabolic evolution equations simultaneously in space and time as may be of interest in, for example, optimal control problems constrained by such equations. As a model problem, we consider the heat equation posed on the unit cube in Euclidean space of moderately high dimension. An *a priori* stable minimal residual Petrov–Galerkin variational formulation of the heat equation in space–time results in a generalized least squares problem. This formulation admits a unique, quasi-optimal solution in the natural space–time Hilbert space and serves as a basis for the development of space–time compressive solution algorithms. The solution of the heat equation is obtained by applying the conjugate gradient method to the normal equations of the generalized least squares problem. Starting from stable subspace splittings in space and in time, multilevel space–time preconditioners for the normal equations are derived. In order to reduce the complexity of the full space–time problem, all computations are performed in a compressed or sparse format called the hierarchical Tucker format, supposing that the input data are available in this format. In order to maintain sparsity, compression of the iterates within the hierarchical Tucker format is performed in each conjugate gradient iteration. Its application to vectors in the hierarchical Tucker format is detailed. Finally, numerical results in up to five spatial dimensions based on the recently developed htucker toolbox for MATLAB are presented. Copyright © 2014 John Wiley & Sons, Ltd.

Automatic result verification is an important tool for guaranteeing that completely inaccurate results cannot pass unnoticed during a numerical computation and be used for decisions. The mathematical rigor provided by verified computing allows the computation of an enclosure containing the exact solution of a given problem. In particular, the solution of linear systems can strongly benefit from this technique in terms of reliability of results. However, in order to compute an enclosure of the exact result of a linear system, more floating-point operations are necessary, which consequently increases the execution time. In this context, parallelism appears as a good alternative for improving solver performance. In this paper, we present an approach to solving very large dense linear systems with verified computing on clusters. This approach enabled our parallel solver to handle huge linear systems with point or interval input matrices with dimensions up to 100,000. Numerical experiments show that the new version of our parallel solver introduced in this paper provides good relative speedups and delivers a reliable enclosure of the exact results. Copyright © 2014 John Wiley & Sons, Ltd.

Z-eigenvalues of tensors, especially extreme ones, are quite useful and are related to many problems, such as automatic control, quantum physics, and independent component analysis. For supersymmetric tensors, calculating the smallest/largest Z-eigenvalue is equivalent to solving a global minimization/maximization problem for a homogeneous polynomial over the unit sphere. In this paper, we utilize the sequential subspace projection method (SSPM) to find extreme Z-eigenvalues and the corresponding Z-eigenvectors. The main idea of SSPM is to form a 2-dimensional subspace at the current point and then solve the original optimization problem in that subspace. SSPM benefits from the fact that the 2-dimensional subproblem can be solved by a direct method. Global convergence and linear convergence are established for supersymmetric tensors under certain assumptions. Preliminary numerical results on several test problems show that SSPM is very promising. Moreover, the globalization strategy of random phase can easily be incorporated into SSPM, which enhances its ability to find extreme Z-eigenvalues. Copyright © 2014 John Wiley & Sons, Ltd.
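A simpler relative of SSPM, a shifted symmetric higher-order power method, illustrates what a Z-eigenpair computation looks like (invented sketch, not the SSPM of the abstract). For a diagonal 3rd-order tensor with diagonal entries d_i, the Z-eigenpairs are known exactly: (d_i, e_i), so the largest one can be checked:

```python
import numpy as np

def sshopm(T, alpha=6.0, iters=200):
    """Shifted symmetric higher-order power method for a symmetric
    3rd-order tensor: x <- normalize(T x x + alpha * x).  For a large
    enough shift alpha the iteration is monotone and convergent."""
    n = T.shape[0]
    x = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x) + alpha * x
        x = y / np.linalg.norm(y)
    lam = x @ np.einsum('ijk,j,k->i', T, x, x)   # Rayleigh-type value
    return lam, x

T = np.zeros((3, 3, 3))
for i, d in enumerate([1.0, 2.0, 3.0]):
    T[i, i, i] = d            # diagonal tensor: Z-eigenpairs are (d_i, e_i)
lam, x = sshopm(T)
```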

Compressed sensing has motivated the development of numerous *sparse approximation* algorithms designed to return a solution to an underdetermined system of linear equations where the solution has the fewest number of nonzeros possible, referred to as the sparsest solution. In the compressed sensing setting, *greedy* sparse approximation algorithms have been observed both to recover the sparsest solution for problem sizes similar to those handled by other algorithms and to be computationally efficient; however, little theory is known for their average case behavior. We conduct a large-scale empirical investigation into the behavior of three of the state-of-the-art greedy algorithms: Normalized Iterative Hard Thresholding (NIHT), Hard Thresholding Pursuit (HTP), and CSMPSP. The investigation considers a variety of random classes of linear systems. The regions of problem sizes in which each algorithm is able to reliably recover the sparsest solution are accurately determined, and throughout these regions, additional performance characteristics are presented. Contrasting the recovery regions and the average computational time for each algorithm, we present *algorithm selection maps*, which indicate, for each problem size, which algorithm is able to reliably recover the sparsest vector in the least amount of time. Although no algorithm is observed to be uniformly superior, NIHT is observed to have an advantageous balance of large recovery region, absolute recovery time, and robustness of these properties to additive noise across a variety of problem classes. A principal difference between NIHT and the more sophisticated HTP and CSMPSP is the balance of asymptotic convergence rate against computational cost prior to potential support set updates. The data suggest that NIHT is typically faster than HTP and CSMPSP because of greater flexibility in updating the support, which limits unnecessary computation on incorrect support sets. 
The algorithm selection maps presented here are the first of their kind for compressed sensing. Copyright © 2014 John Wiley & Sons, Ltd.
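The core NIHT iteration contrasted above can be sketched as follows (a simplified version without the step-size safeguards of the published algorithm; names invented). The sanity test uses a square orthogonal matrix, for which one step recovers an s-sparse signal exactly; real compressed-sensing experiments use underdetermined random matrices:

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero the rest."""
    y = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    y[idx] = x[idx]
    return y

def niht(A, b, s, iters=100):
    """Normalized iterative hard thresholding: a gradient step with a
    step size normalized on the current support, then hard thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (b - A @ x)
        S = np.nonzero(x)[0] if np.any(x) else np.argsort(np.abs(g))[-s:]
        gS = np.zeros_like(g)
        gS[S] = g[S]
        denom = np.linalg.norm(A @ gS) ** 2
        mu = (np.linalg.norm(gS) ** 2 / denom) if denom > 0 else 0.0
        x = hard_threshold(x + mu * g, s)
    return x
```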

This paper introduces a new framework for implicit restarting of the Krylov–Schur algorithm. It is shown that restarting with an arbitrary polynomial filter is possible by reassigning some of the eigenvalues of the Rayleigh quotient through a rank-one correction, implemented using only the elementary transformations (translation and similarity) of the Krylov decomposition. This framework includes the implicitly restarted Arnoldi (IRA) algorithm and the Krylov–Schur algorithm with implicit harmonic restart as special cases. Further, it reveals that the IRA algorithm can be turned into an eigenvalue assignment method. Copyright © 2014 John Wiley & Sons, Ltd.

Short and unified proofs of spectral properties of major preconditioners for saddle point problems are presented. The need to construct sufficiently accurate approximations of the pivot block and Schur complement matrices, in order to obtain real eigenvalues or eigenvalues with positive real parts and non-dominating imaginary parts, is pointed out. The use of augmented Lagrangian methods for more ill-conditioned problems is discussed. Copyright © 2014 John Wiley & Sons, Ltd.
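One classical result of this type, due to Murphy, Golub, and Wathen, is easy to verify numerically: with the exact pivot block A and exact Schur complement S = BA⁻¹Bᵀ in a block diagonal preconditioner, the preconditioned saddle point matrix has only the three eigenvalues 1 and (1 ± √5)/2. The small matrices below are invented:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD pivot block
B = np.array([[1.0, 2.0]])               # constraint block
S = B @ np.linalg.solve(A, B.T)          # exact Schur complement
K = np.block([[A, B.T], [B, np.zeros((1, 1))]])        # saddle point matrix
P = np.block([[A, np.zeros((2, 1))],
              [np.zeros((1, 2)), S]])                  # block diagonal preconditioner
eigs = np.sort(np.linalg.eigvals(np.linalg.solve(P, K)).real)
```

Since P is symmetric positive definite, the preconditioned matrix is similar to a symmetric one, so the three eigenvalues are real.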

This paper discusses the application of a multigrid-in-time scheme to Least Squares Shadowing (LSS), a novel sensitivity analysis method for chaotic dynamical systems. While traditional sensitivity analysis methods break down for chaotic dynamical systems, LSS is able to compute accurate gradients. Multigrid is used because LSS requires solving a very large Karush–Kuhn–Tucker system constructed from the solution of the dynamical system over the entire time interval of interest. Several different multigrid-in-time schemes are examined, and a number of factors are found to heavily influence the convergence rate of multigrid-in-time for LSS, including the iterative method used as the smoother, how the coarse-grid system is formed, and how the least-squares objective function at the center of LSS is weighted. Copyright © 2014 John Wiley & Sons, Ltd.

We apply the novel tensor product formats (tensor train, quantized TT [QTT], and QTT-Tucker) to the solution of *d*-dimensional chemical master equations for gene regulatory networks (signaling cascades, toggle switches, and phage-*λ*). For some important cases, for example, signaling cascade models, we prove analytical tensor product representations of the system operator. The quantized tensor representations (QTT, QTT-Tucker) are employed in both state space and time, and the global state-time (*d* + 1)-dimensional system is solved in the tensor product form by the alternating minimal energy iteration, an ALS-type algorithm. This approach leads to a logarithmic dependence of the computational complexity on the volume of the state space. We investigate the proposed technique numerically and compare it with the direct chemical master equation solution and some previously known approximate schemes, where possible. We observe that the newer tensor methods demonstrate good potential for the simulation of relevant biological systems. Copyright © 2014 John Wiley & Sons, Ltd.

We give two generalizations of the induced dimension reduction (IDR) approach for the solution of linear systems. We derive a flexible and a multi-shift quasi-minimal residual IDR variant. These variants are based on a generalized Hessenberg decomposition. We present a new, more stable way to compute basis vectors in IDR. Numerical examples are presented to show the effectiveness of these new IDR variants and of the new basis compared with existing ones, and with other Krylov subspace methods. Copyright © 2014 John Wiley & Sons, Ltd.

We consider the numerical solution of the continuous algebraic Riccati equation *A*^{∗}*X* + *XA* − *XFX* + *G* = 0, with *F* = *F*^{∗} and *G* = *G*^{∗} of low rank and *A* large and sparse. We develop an algorithm for the low-rank approximation of *X* by means of an invariant subspace iteration on a function of the associated Hamiltonian matrix. We show that the sought-after approximation can be obtained by a low-rank update, in the style of the well-known Alternating Direction Implicit (ADI) iteration for the linear equation, from which the new method inherits many algebraic properties. Moreover, we establish new insightful matrix relations with emerging projection-type methods, which will help increase our understanding of this latter class of solution strategies. Copyright © 2014 John Wiley & Sons, Ltd.
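In the scalar case the Riccati equation and its stabilizing solution are easy to exhibit, which makes the structure of the matrix problem concrete (a toy sketch with invented names, not the low-rank method of the abstract). The classical Newton (Kleinman) iteration solves a linearized Lyapunov equation at each step; in one dimension that equation collapses to a formula:

```python
def care_newton(a, f, g, iters=20, x0=None):
    """Newton (Kleinman) iteration for the scalar Riccati equation
    a*x + x*a - x*f*x + g = 0, f > 0.  Each step solves the linearized
    (Lyapunov) equation 2*(a - f*x_k)*x = -g - f*x_k**2, i.e.
    x_{k+1} = (g + f*x_k**2) / (2*(f*x_k - a))."""
    # default start is stabilizing: a - f*x0 < 0
    x = x0 if x0 is not None else 2 * abs(a) / f + abs(g)
    for _ in range(iters):
        x = (g + f * x * x) / (2 * (f * x - a))
    return x
```

For a = f = g = 1 the stabilizing solution is x = 1 + √2, with closed-loop coefficient a − f·x < 0.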

In this paper, we discuss the use of overlapping techniques for improving the convergence of preconditioners based on incomplete factorizations. To enable parallelism, these preconditioners are usually applied after the input matrix is permuted into a nested arrow form using *k*-way nested dissection. This graph partitioning technique uses *k*-way partitioning by vertex separators to recursively partition the graph of the input matrix into *k* subgraphs, using a subset of its vertices called a separator. The overlapping technique is then based on algebraically extending the subdomains associated with these subgraphs, and their corresponding separators obtained from *k*-way nested dissection, by their direct neighbours. A similar approach is known to accelerate the convergence of domain decomposition methods, where the input matrix is partitioned into a number of independent subdomains using *k*-way vertex partitioning of a graph by edge separators, a different graph decomposition technique. We discuss the effect of the overlapping technique on the convergence of two classes of preconditioners, based on nested factorization and block incomplete LDU factorization. Copyright © 2014 John Wiley & Sons, Ltd.
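The one-layer extension by direct neighbours described above has a very short graph-level sketch (invented helper, operating on an adjacency map of the matrix graph):

```python
def extend_by_neighbors(adjacency, subdomain):
    """Algebraically extend a subdomain (a set of vertex indices) by its
    direct neighbours in the matrix graph: the one-layer overlap used to
    improve incomplete-factorization preconditioners."""
    overlap = set(subdomain)
    for v in subdomain:
        overlap.update(adjacency[v])
    return overlap
```

On a path graph 0-1-2-3-4, extending the subdomain {0, 1} adds the single boundary neighbour 2.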

We apply iterative subspace correction methods to elliptic PDE problems discretized by generalized sparse grid systems. The involved subspace solvers are based on the combination of *all* anisotropic full grid spaces that are contained in the sparse grid space. Their relative scaling is at our disposal and has significant influence on the performance of the iterative solver. In this paper, we follow three approaches to obtain close-to-optimal or even optimal scaling parameters of the subspace solvers and thus of the overall subspace correction method. We employ a Linear Program that we derive from the theory of additive subspace splittings, an algebraic transformation that produces partially negative scaling parameters that result in improved asymptotic convergence properties, and finally, we use the OptiCom method as a variable nonlinear preconditioner. Copyright © 2014 John Wiley & Sons, Ltd.

The discretization of the double-layer potential integral equation for the interior Dirichlet–Laplace problem in a domain with smooth boundary results in a linear system that has a bounded condition number. Thus, the number of iterations required for the convergence of a Krylov method is, asymptotically, independent of the discretization size *N*. Using the fast multipole method to accelerate the matrix–vector products, we obtain an optimal solver. In practice, however, when the geometry is complicated, the number of Krylov iterations can be quite large, to the extent that the use of preconditioning becomes necessary. We summarize the different methodologies that have appeared in the literature (single-grid, multigrid, approximate sparse inverses), and we propose a new class of preconditioners based on a fast multipole method-based spatial decomposition of the double-layer operator. We present an experimental study in which we compare the different approaches, and we discuss the merits and shortcomings of our approach. Our method can be easily extended to other second-kind integral equations with non-oscillatory kernels in two and three dimensions. Copyright © 2014 John Wiley & Sons, Ltd.

The paper is devoted to the spectral analysis of effective preconditioners for linear systems obtained via a finite element approximation to diffusion-dominated convection–diffusion equations. We consider a model setting in which the structured finite element partition is made by equilateral triangles. Under such assumptions, if the problem is coercive and the diffusive and convective coefficients are regular enough, then the proposed preconditioned matrix sequences exhibit a strong eigenvalue clustering at unity, the preconditioning matrix sequence and the original matrix sequence are spectrally equivalent, and under the constant coefficients assumption, the eigenvector matrices have a mild conditioning. The obtained results allow us to prove the conjugate gradient optimality and the generalized minimal residual quasi-optimality in the case of structured uniform meshes. The interest of such a study lies in the observation that automatic grid generators tend to construct equilateral triangles when the mesh is fine enough. Numerical tests, both on the model setting and in the non-structured case, show the effectiveness of the proposal and the correctness of the theoretical findings. Copyright © 2014 John Wiley & Sons, Ltd.

An isospectral matrix reduction is a procedure that reduces the size of a matrix while maintaining its eigenvalues, up to a known set. So as not to violate the fundamental theorem of algebra, the reduced matrices have rational functions as entries. Because isospectral reductions can preserve the spectrum of a matrix, they are fundamentally different from, say, the restriction of a matrix to an invariant subspace. We show that the notion of pseudospectrum can be extended to a wide class of matrices with rational function entries and that the pseudospectrum of such matrices shrinks with isospectral reductions. Hence, the eigenvalues of a reduced matrix are more robust to entry-wise perturbations than the eigenvalues of the original matrix. Moreover, the isospectral reductions considered here are more general than those considered elsewhere. We also introduce the notion of an inverse pseudospectrum (or pseudoresonances), which indicates how stable the poles of a rational function valued matrix are to entry-wise perturbations. Illustrations of these concepts are given for mass-spring networks. Copyright © 2014 John Wiley & Sons, Ltd.
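A standard form of isospectral reduction onto an index set S evaluates, at spectral parameter λ, R(λ) = M_SS + M_SB (λI − M_BB)⁻¹ M_BS, where B is the complement of S; every eigenvalue λ of M with λ ∉ σ(M_BB) then satisfies det(R(λ) − λI) = 0. A numerical check with an invented 3×3 matrix:

```python
import numpy as np

def isospectral_reduction(M, S, lam):
    """Evaluate the isospectral reduction of M onto the index set S at
    spectral parameter lam: R(lam) = M_SS + M_SB (lam*I - M_BB)^{-1} M_BS."""
    S = list(S)
    B = [i for i in range(M.shape[0]) if i not in S]
    MSS = M[np.ix_(S, S)]
    MSB = M[np.ix_(S, B)]
    MBS = M[np.ix_(B, S)]
    MBB = M[np.ix_(B, B)]
    return MSS + MSB @ np.linalg.solve(lam * np.eye(len(B)) - MBB, MBS)

M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # eigenvalues 1, 2, 4
lam = np.max(np.linalg.eigvalsh(M))      # lam = 4, not in the spectrum of M_BB
R = isospectral_reduction(M, [0, 1], lam)
```

Here the reduced 2×2 matrix R(4) is singular when shifted by 4, confirming that the eigenvalue survives the reduction.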

Convergence results are provided for inexact two-sided inverse and Rayleigh quotient iteration, which extend the previously established results to the generalized non-Hermitian eigenproblem and to inexact solves with a decreasing solve tolerance. Moreover, the simultaneous solution of the forward and adjoint problems arising in two-sided methods is considered, and the successful tuning strategy for preconditioners is extended to two-sided methods, creating a novel way of preconditioning two-sided algorithms. Furthermore, it is shown that inexact two-sided Rayleigh quotient iteration and the inexact two-sided Jacobi–Davidson method (without subspace expansion) applied to the generalized preconditioned eigenvalue problem are equivalent when a certain number of steps of a Petrov–Galerkin–Krylov method is used and when this specific tuning strategy is applied. Copyright © 2014 John Wiley & Sons, Ltd.