This paper is concerned with several variants of the Hermitian and skew-Hermitian splitting iteration method for solving a class of complex symmetric linear systems. Theoretical analysis shows that several Hermitian and skew-Hermitian splitting based iteration methods are unconditionally convergent. Numerical experiments on an *n*-degree-of-freedom linear system are reported to illustrate the efficiency of the proposed methods. Copyright © 2014 John Wiley & Sons, Ltd.
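
For readers unfamiliar with the HSS framework, the classical Bai–Golub–Ng iteration is easy to sketch. The following NumPy illustration is a minimal dense-matrix sketch, not any of the specific variants analyzed in the paper; the shift α = 1 and the small test matrices are arbitrary choices. It alternates solves with the shifted Hermitian and skew-Hermitian parts:

```python
import numpy as np

def hss_solve(A, b, alpha=1.0, tol=1e-10, maxit=1000):
    """Hermitian/skew-Hermitian splitting iteration for A x = b."""
    n = A.shape[0]
    I = np.eye(n)
    H = (A + A.conj().T) / 2          # Hermitian part
    S = (A - A.conj().T) / 2          # skew-Hermitian part
    x = np.zeros(n, dtype=complex)
    for _ in range(maxit):
        # first half-step: (alpha*I + H) x_half = (alpha*I - S) x + b
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        # second half-step: (alpha*I + S) x = (alpha*I - H) x_half + b
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x

# Complex symmetric test system A = W + iT (A == A.T but A != A.conj().T),
# with W symmetric positive definite and T symmetric.
rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n))
W = W @ W.T + n * np.eye(n)          # SPD real part
T = rng.standard_normal((n, n))
T = (T + T.T) / 2                    # symmetric imaginary part
A = W + 1j * T
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = hss_solve(A, b)
```

Here the Hermitian part is simply *W* and the skew-Hermitian part is i*T*, so positive definiteness of *W* is what drives convergence for any shift α > 0.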

This paper addresses the solution of parabolic evolution equations simultaneously in space and time as may be of interest in, for example, optimal control problems constrained by such equations. As a model problem, we consider the heat equation posed on the unit cube in Euclidean space of moderately high dimension. An *a priori* stable minimal residual Petrov–Galerkin variational formulation of the heat equation in space–time results in a generalized least squares problem. This formulation admits a unique, quasi-optimal solution in the natural space–time Hilbert space and serves as a basis for the development of space–time compressive solution algorithms. The solution of the heat equation is obtained by applying the conjugate gradient method to the normal equations of the generalized least squares problem. Starting from stable subspace splittings in space and in time, multilevel space–time preconditioners for the normal equations are derived. In order to reduce the complexity of the full space–time problem, all computations are performed in a compressed or sparse format called the hierarchical Tucker format, supposing that the input data are available in this format. In order to maintain sparsity, compression of the iterates within the hierarchical Tucker format is performed in each conjugate gradient iteration. The application of the preconditioner to vectors in the hierarchical Tucker format is detailed. Finally, numerical results in up to five spatial dimensions based on the recently developed htucker toolbox for MATLAB are presented. Copyright © 2014 John Wiley & Sons, Ltd.

Automatic result verification is an important tool for guaranteeing that completely inaccurate results produced during a numerical computation cannot pass unnoticed into decision making. The mathematical rigor provided by verified computing allows the computation of an enclosure containing the exact solution of a given problem. In particular, the solution of linear systems can strongly benefit from this technique in terms of the reliability of results. However, in order to compute an enclosure of the exact result of a linear system, more floating-point operations are necessary, consequently increasing the execution time. In this context, parallelism appears as a good alternative for improving the solver performance. In this paper, we present an approach to solving very large dense linear systems with verified computing on clusters. This approach enabled our parallel solver to handle huge linear systems with point or interval input matrices with dimensions up to 100,000. Numerical experiments show that the new version of our parallel solver introduced in this paper provides good relative speedups and delivers a reliable enclosure of the exact results. Copyright © 2014 John Wiley & Sons, Ltd.

Convergence results are provided for inexact two-sided inverse and Rayleigh quotient iteration, which extend the previously established results to the generalized non-Hermitian eigenproblem and inexact solves with a decreasing solve tolerance. Moreover, the simultaneous solution of the forward and adjoint problem arising in two-sided methods is considered, and the successful tuning strategy for preconditioners is extended to two-sided methods, creating a novel way of preconditioning two-sided algorithms. Furthermore, it is shown that inexact two-sided Rayleigh quotient iteration and the inexact two-sided Jacobi–Davidson method (without subspace expansion) applied to the generalized preconditioned eigenvalue problem are equivalent when a certain number of steps of a Petrov–Galerkin–Krylov method is used and when this specific tuning strategy is applied. Copyright © 2014 John Wiley & Sons, Ltd.

Z-eigenvalues of tensors, especially extreme ones, are quite useful and are related to many problems, such as automatic control, quantum physics, and independent component analysis. For supersymmetric tensors, calculating the smallest/largest Z-eigenvalue is equivalent to solving a global minimization/maximization problem of a homogeneous polynomial over the unit sphere. In this paper, we utilize the sequential subspace projection method (SSPM) to find extreme Z-eigenvalues and the corresponding Z-eigenvectors. The main idea of SSPM is to form a 2-dimensional subspace at the current point and then solve the original optimization problem in the subspace. SSPM benefits from the fact that the 2-dimensional subproblem can be solved by a direct method. Global convergence and linear convergence are established for supersymmetric tensors under certain assumptions. Preliminary numerical results over several testing problems show that SSPM is very promising. Moreover, the random-phase globalization strategy can easily be incorporated into SSPM, which enhances its ability to find extreme Z-eigenvalues. Copyright © 2014 John Wiley & Sons, Ltd.
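
As background, a Z-eigenpair of a supersymmetric order-*m* tensor *A* satisfies *A* *x*^{m − 1} = λ*x* with ∥*x*∥ = 1. The sketch below illustrates the simpler shifted symmetric higher-order power method (SS-HOPM-style iteration), not the SSPM of the paper, on a diagonal order-4 test tensor whose extreme Z-eigenvalue is known to equal the largest diagonal entry:

```python
import numpy as np

def tensor_apply(A, x):
    """Compute A x^3 for a supersymmetric order-4 tensor A."""
    return np.einsum('ijkl,j,k,l->i', A, x, x, x)

def ss_hopm(A, x0, shift=1.0, maxit=300, tol=1e-15):
    """Shifted symmetric higher-order power method:
    x <- normalize(A x^3 + shift * x).  The Rayleigh quotient
    lam = x^T (A x^3) converges to a Z-eigenvalue for a suitable shift."""
    x = x0 / np.linalg.norm(x0)
    lam = x @ tensor_apply(A, x)
    for _ in range(maxit):
        y = tensor_apply(A, x) + shift * x
        x = y / np.linalg.norm(y)
        lam_new = x @ tensor_apply(A, x)
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return lam, x

# Diagonal supersymmetric test tensor: its Z-eigenpairs include
# (a_i, e_i), and the largest Z-eigenvalue is max_i a_i = 4.
n = 4
A = np.zeros((n, n, n, n))
for i in range(n):
    A[i, i, i, i] = i + 1.0
lam, x = ss_hopm(A, np.ones(n), shift=1.0)
```

For this tensor the iterate is drawn toward the axis with the largest diagonal entry, so λ approaches 4 and *x* approaches the corresponding unit coordinate vector.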

Compressed sensing has motivated the development of numerous *sparse approximation* algorithms designed to return a solution to an underdetermined system of linear equations where the solution has the fewest number of nonzeros possible, referred to as the sparsest solution. In the compressed sensing setting, *greedy* sparse approximation algorithms have been observed both to recover the sparsest solution for similar problem sizes as other algorithms and to be computationally efficient; however, little theory is known for their average case behavior. We conduct a large-scale empirical investigation into the behavior of three of the state-of-the-art greedy algorithms: Normalized Iterative Hard Thresholding (NIHT), Hard Thresholding Pursuit (HTP), and CSMPSP. The investigation considers a variety of random classes of linear systems. The regions of the problem size in which each algorithm is able to reliably recover the sparsest solution are accurately determined, and throughout these regions, additional performance characteristics are presented. Contrasting the recovery regions and the average computational time for each algorithm, we present *algorithm selection maps*, which indicate, for each problem size, which algorithm is able to reliably recover the sparsest vector in the least amount of time. Although no algorithm is observed to be uniformly superior, NIHT is observed to have an advantageous balance of large recovery region, absolute recovery time, and robustness of these properties to additive noise across a variety of problem classes. A principal difference between NIHT and the more sophisticated HTP and CSMPSP is the balance of asymptotic convergence rate against computational cost prior to potential support set updates. The data suggest that NIHT is typically faster than HTP and CSMPSP because of greater flexibility in updating the support, which limits unnecessary computation on incorrect support sets.
The algorithm selection maps presented here are the first of their kind for compressed sensing. Copyright © 2014 John Wiley & Sons, Ltd.
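
For concreteness, a bare-bones NumPy sketch of NIHT follows; the step-size backtracking safeguard of the full algorithm is omitted, and the problem sizes below are arbitrary choices well inside the recovery region:

```python
import numpy as np

def niht(A, b, s, maxit=500, tol=1e-10):
    """Normalized Iterative Hard Thresholding: a gradient step with an
    adaptive step size computed on the current support, followed by
    hard thresholding to the s largest-magnitude entries.
    (The backtracking safeguard of the full method is omitted.)"""
    m, n = A.shape
    x = np.zeros(n)
    supp = np.argsort(-np.abs(A.T @ b))[:s]   # initial support from proxy A^T b
    nb = np.linalg.norm(b)
    for _ in range(maxit):
        g = A.T @ (b - A @ x)                 # gradient of 0.5*||b - Ax||^2
        gs = np.zeros(n)
        gs[supp] = g[supp]                    # gradient restricted to support
        den = (A @ gs) @ (A @ gs)
        mu = (gs @ gs) / den if den > 0 else 1.0
        y = x + mu * g
        supp = np.argsort(-np.abs(y))[:s]     # hard threshold to s entries
        x = np.zeros(n)
        x[supp] = y[supp]
        if np.linalg.norm(b - A @ x) <= tol * nb:
            break
    return x

# Exactly sparse recovery test: Gaussian matrix, 5-sparse +/-1 signal.
rng = np.random.default_rng(0)
m, n, s = 100, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
idx = rng.choice(n, size=s, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], size=s)
b = A @ x_true
x_rec = niht(A, b, s)
```

The adaptive step μ = ∥g_S∥² ∕ ∥A g_S∥² is the "normalized" ingredient that distinguishes NIHT from plain IHT with a fixed step size.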

This paper introduces a new framework for implicit restarting of the Krylov–Schur algorithm. It is shown that restarting with an arbitrary polynomial filter is possible by reassigning some of the eigenvalues of the Rayleigh quotient through a rank-one correction, implemented using only the elementary transformations (translation and similarity) of the Krylov decomposition. This framework includes the implicitly restarted Arnoldi (IRA) algorithm and the Krylov–Schur algorithm with implicit harmonic restart as special cases. Further, it reveals that the IRA algorithm can be turned into an eigenvalue assignment method. Copyright © 2014 John Wiley & Sons, Ltd.

Short and unified proofs of spectral properties of major preconditioners for saddle point problems are presented. The need to construct sufficiently accurate approximations of the pivot block and Schur complement matrices, in order to obtain real eigenvalues or eigenvalues with positive real parts and non-dominating imaginary parts, is pointed out. The use of augmented Lagrangian methods for more ill-conditioned problems is discussed. Copyright © 2014 John Wiley & Sons, Ltd.

This paper discusses the application of a multigrid-in-time scheme to Least Squares Shadowing (LSS), a novel sensitivity analysis method for chaotic dynamical systems. While traditional sensitivity analysis methods break down for chaotic dynamical systems, LSS is able to compute accurate gradients. Multigrid is used because LSS requires solving a very large Karush–Kuhn–Tucker system constructed from the solution of the dynamical system over the entire time interval of interest. Several different multigrid-in-time schemes are examined, and a number of factors are found to heavily influence the convergence rate of multigrid-in-time for LSS. These include the iterative method used for the smoother, how the coarse grid system is formed, and how the least squares objective function at the center of LSS is weighted. Copyright © 2014 John Wiley & Sons, Ltd.

The paper is devoted to the spectral analysis of effective preconditioners for linear systems obtained via a finite element approximation to diffusion-dominated convection–diffusion equations. We consider a model setting in which the structured finite element partition is made by equilateral triangles. Under such assumptions, if the problem is coercive and the diffusive and convective coefficients are regular enough, then the proposed preconditioned matrix sequences exhibit a strong eigenvalue clustering at unity, the preconditioning matrix sequence and the original matrix sequence are spectrally equivalent, and under the constant coefficients assumption, the eigenvector matrices have a mild conditioning. The obtained results allow us to prove the conjugate gradient optimality and the generalized minimal residual quasi-optimality in the case of structured uniform meshes. The interest of such a study lies in the observation that automatic grid generators tend to construct equilateral triangles when the mesh is fine enough. Numerical tests, both on the model setting and in the non-structured case, show the effectiveness of the proposal and the correctness of the theoretical findings. Copyright © 2014 John Wiley & Sons, Ltd.

The discretization of the double-layer potential integral equation for the interior Dirichlet–Laplace problem in a domain with smooth boundary results in a linear system that has a bounded condition number. Thus, the number of iterations required for the convergence of a Krylov method is, asymptotically, independent of the discretization size *N*. Using the fast multipole method to accelerate the matrix–vector products, we obtain an optimal solver. In practice, however, when the geometry is complicated, the number of Krylov iterations can be quite large—to the extent that preconditioning becomes necessary. We summarize the different methodologies that have appeared in the literature (single-grid, multigrid, approximate sparse inverses), and we propose a new class of preconditioners based on a fast multipole method-based spatial decomposition of the double-layer operator. We present an experimental study in which we compare the different approaches, and we discuss the merits and shortcomings of our approach. Our method can be easily extended to other second-kind integral equations with non-oscillatory kernels in two and three dimensions. Copyright © 2014 John Wiley & Sons, Ltd.

An isospectral matrix reduction is a procedure that reduces the size of a matrix while maintaining its eigenvalues up to a known set. So as not to violate the fundamental theorem of algebra, the reduced matrices have rational functions as entries. Because isospectral reductions can preserve the spectrum of a matrix, they are fundamentally different from, say, the restriction of a matrix to an invariant subspace. We show that the notion of pseudospectrum can be extended to a wide class of matrices with rational function entries and that the pseudospectrum of such matrices shrinks with isospectral reductions. Hence, the eigenvalues of a reduced matrix are more robust to entry-wise perturbations than the eigenvalues of the original matrix. Moreover, the isospectral reductions considered here are more general than those considered elsewhere. We also introduce the notion of an inverse pseudospectrum (or pseudoresonances), which indicates how stable the poles of a rational function valued matrix are to entry-wise perturbations. Illustrations of these concepts are given for mass-spring networks. Copyright © 2014 John Wiley & Sons, Ltd.

We apply the novel tensor product formats (tensor train, quantized TT [QTT], and QTT-Tucker) to the solution of *d*-dimensional chemical master equations for gene regulatory networks (signaling cascades, toggle switches, and phage-*λ*). For some important cases, for example, signaling cascade models, we prove analytical tensor product representations of the system operator. The quantized tensor representations (QTT, QTT-Tucker) are employed in both state space and time, and the global state-time (*d* + 1)-dimensional system is solved in the tensor product form by the alternating minimal energy iteration, the ALS-type algorithm. This approach leads to the logarithmic dependence of the computational complexity on the volume of the state space. We investigate the proposed technique numerically and compare it with the direct chemical master equation solution and some previously known approximate schemes, where possible. We observe that the newer tensor methods demonstrate good potential in the simulation of relevant biological systems. Copyright © 2014 John Wiley & Sons, Ltd.

Truncated singular value decomposition is a popular method for solving linear discrete ill-posed problems with a small to moderately sized matrix *A*. Regularization is achieved by replacing the matrix *A* by its best rank-*k* approximant, which we denote by *A*_{k}. The rank may be determined in a variety of ways, for example, by the discrepancy principle or the L-curve criterion. This paper describes a novel regularization approach, in which *A* is replaced by the closest matrix in a unitarily invariant matrix norm with the same spectral condition number as *A*_{k}. Computed examples illustrate that this regularization approach often yields approximate solutions of higher quality than the replacement of *A* by *A*_{k}. Copyright © 2014 John Wiley & Sons, Ltd.
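
The baseline method that the paper improves upon, the replacement of *A* by *A*_{k}, can be sketched in a few lines of NumPy; the Hilbert-matrix test problem, noise level, and truncation index below are arbitrary illustrative choices, not taken from the paper:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD regularized solution: replace A by its best rank-k
    approximant A_k and return the minimum-norm solution
    A_k^+ b = sum_{i<k} (u_i^T b / sigma_i) v_i."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Ill-posed example: the Hilbert matrix has rapidly decaying singular
# values, so a tiny amount of noise ruins the unregularized solution.
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(2)
b = A @ x_true + 1e-8 * rng.standard_normal(n)   # noisy right-hand side
x_k = tsvd_solve(A, b, k=6)        # regularized solution
x_naive = np.linalg.solve(A, b)    # naive solve, dominated by noise
```

Truncating at *k* = 6 caps the noise amplification at 1 ∕ σ_{6} while introducing only a modest truncation error, so the regularized error is orders of magnitude below the naive one.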

We apply iterative subspace correction methods to elliptic PDE problems discretized by generalized sparse grid systems. The involved subspace solvers are based on the combination of *all* anisotropic full grid spaces that are contained in the sparse grid space. Their relative scaling is at our disposal and has significant influence on the performance of the iterative solver. In this paper, we follow three approaches to obtain close-to-optimal or even optimal scaling parameters of the subspace solvers and thus of the overall subspace correction method: we employ a linear program derived from the theory of additive subspace splittings; an algebraic transformation that produces partially negative scaling parameters, which result in improved asymptotic convergence properties; and finally the OptiCom method as a variable nonlinear preconditioner. Copyright © 2014 John Wiley & Sons, Ltd.

In this paper, we discuss the usage of overlapping techniques for improving the convergence of preconditioners based on incomplete factorizations. To enable parallelism, these preconditioners are usually applied after the input matrix is permuted into a nested arrow form using *k*-way nested dissection. This graph partitioning technique uses *k*-way partitioning by vertex separator to recursively partition the graph of the input matrix into *k* subgraphs using a subset of its vertices called a separator. The overlapping technique is then based on algebraically extending the associated subdomains of these subgraphs and their corresponding separators obtained from *k*-way nested dissection by their direct neighbours. A similar approach is known to accelerate the convergence of domain decomposition methods, where the input matrix is partitioned into a number of independent subdomains using *k*-way vertex partitioning of a graph by edge separators, a different graph decomposition technique. We discuss the effect of the overlapping technique on the convergence of two classes of preconditioners, on the basis of nested factorization and block incomplete LDU factorization. Copyright © 2014 John Wiley & Sons, Ltd.

We consider the numerical solution of the continuous algebraic Riccati equation *A*^{*}*X* + *XA* − *XFX* + *G* = 0, with *F* = *F*^{*}, *G* = *G*^{*} of low rank and *A* large and sparse. We develop an algorithm for the low-rank approximation of *X* by means of an invariant subspace iteration on a function of the associated Hamiltonian matrix. We show that the sought-after approximation can be obtained by a low-rank update, in the style of the well-known Alternating Direction Implicit (ADI) iteration for the linear equation, from which the new method inherits many algebraic properties. Moreover, we establish new insightful matrix relations with emerging projection-type methods, which will help increase our understanding of this latter class of solution strategies. Copyright © 2014 John Wiley & Sons, Ltd.

We give two generalizations of the induced dimension reduction (IDR) approach for the solution of linear systems. We derive a flexible and a multi-shift quasi-minimal residual IDR variant. These variants are based on a generalized Hessenberg decomposition. We present a new, more stable way to compute basis vectors in IDR. Numerical examples are presented to show the effectiveness of these new IDR variants and the new basis compared with existing ones and to other Krylov subspace methods. Copyright © 2014 John Wiley & Sons, Ltd.

This paper introduces a new strategy for setting the regularization parameter when solving large-scale discrete ill-posed linear problems by means of the Arnoldi–Tikhonov method. This new rule is essentially based on the discrepancy principle, although no initial knowledge of the norm of the error that affects the right-hand side is assumed; an increasingly more accurate approximation of this quantity is recovered during the Arnoldi algorithm. Some theoretical estimates are derived in order to motivate our approach. Many numerical experiments performed on classical test problems as well as image deblurring problems are presented. Copyright © 2014 John Wiley & Sons, Ltd.

It is well known that nonsymmetric algebraic Riccati equations arising in transport theory can be translated into vector equations. In this paper, we propose six predictor–corrector-type iterative schemes to solve the vector equations and establish their convergence. Unlike previous work, we prove that all of them converge to the minimal positive solution of the vector equations from the initial vector (*e*,*e*), where *e* = (1,1, ⋯ ,1)^{T}. Moreover, we prove that all the sequences generated by the iterative schemes are strictly monotonically increasing and bounded above. In addition, some numerical results are reported, which confirm the good theoretical properties of our approach. Copyright © 2014 John Wiley & Sons, Ltd.

This paper discusses methods of imposing symmetry in the augmented system formulation (ASF) of least-squares (LS) problems. Particular emphasis is placed on upper Hessenberg problems, where the challenge lies in leaving all zero-by-definition elements of the LS matrix unperturbed. Analytical solutions for optimal perturbation matrices are given, including for upper Hessenberg matrices. Finally, the upper Hessenberg LS problems represented by an unsymmetric ASF that indicate normwise backward stability of the problem (which is not the case in general) are identified. It is observed that such problems typically arise from the Arnoldi factorization (for example, in the generalized minimal residual (GMRES) algorithm). The problem is illustrated with a number of practical examples (arising in the GMRES algorithm) and some ‘purpose-built’ ones. Copyright © 2014 John Wiley & Sons, Ltd.

We analyze an algorithm for computing a skew-Hermitian logarithm of a unitary matrix and also skew-Hermitian approximate logarithms for nearly unitary matrices. This algorithm is very easy to implement using standard software, and it works well even for unitary matrices with no spectral conditions assumed. Certain examples, with many eigenvalues near − 1, lead to very non-Hermitian output for other basic methods of calculating matrix logarithms. Altering the output of these algorithms to force skew-Hermitian output creates accuracy issues, which are avoided by the considered algorithm. A modification is introduced to deal properly with the *J*-skew-symmetric unitary matrices. Applications to numerical studies of topological insulators in two symmetry classes are discussed. Copyright © 2014 John Wiley & Sons, Ltd.
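
As background, for a unitary *U* the textbook construction of a skew-Hermitian logarithm goes through the spectral decomposition. The sketch below is a generic illustration, not the specific algorithm analyzed in the paper, and it ignores the nearly unitary and *J*-skew-symmetric refinements:

```python
import numpy as np

def skew_log_unitary(U):
    """Skew-Hermitian logarithm of a unitary matrix U.

    U is normal, so (numerically) its eigenvector matrix V is unitary:
    U = V diag(e^{i*theta}) V^*.  Hence L = V diag(i*theta) V^{-1}
    satisfies exp(L) = U and L^* = -L; a final symmetrization enforces
    the skew-Hermitian structure exactly in floating point."""
    w, V = np.linalg.eig(U)
    theta = np.angle(w)                       # principal branch in (-pi, pi]
    L = (V * (1j * theta)) @ np.linalg.inv(V)  # scale columns, undo basis
    return (L - L.conj().T) / 2

# Random unitary test matrix via QR of a complex Gaussian matrix.
rng = np.random.default_rng(5)
n = 6
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)
L = skew_log_unitary(U)
```

For eigenvalues clustered near − 1 the principal branch of the angle becomes discontinuous, which is exactly the regime where naive approaches such as this one degrade and the paper's analysis matters.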

Let *k*( ⋅ , ⋅ ) be a continuous kernel defined on Ω × Ω, with Ω a compact subset of ℝ^{d}, *d* ≥ 1, and let us consider the integral operator *K* from *C*(Ω) into *C*(Ω) (the set of continuous functions on Ω) defined as the map

*f* ↦ ∫_{Ω}*k*( ⋅ ,*y*)*f*(*y*) d*y*.

*K* is a compact operator, and therefore its spectrum forms a bounded sequence having zero as its unique accumulation point. Here, we first consider in detail the approximation of *K* by using the rectangle formula in the case where Ω = [0,1] and the step is *h* = 1 ∕ *n*. The related linear map can be represented as a matrix *A*_{n} of size *n*. In accordance with the compact character of the continuous operator, we prove that {*A*_{n}} ∼ _{σ}0 and {*A*_{n}} ∼ _{λ}0, that is, the considered sequence has singular values and eigenvalues clustered at zero. Moreover, the cluster is strong, in perfect analogy with the compactness of *K*. Several generalizations are sketched, with special attention to the general case of pure sampling sequences, and a few examples and numerical experiments are critically discussed, including the use of GMRES and preconditioned GMRES for large linear systems coming from the numerical approximation of integral equations of the form

∫_{Ω}*k*(*x*,*y*)*f*(*y*) d*y* = *g*(*x*), (1)

with unknown *f* and datum *g*(*x*). Copyright © 2014 John Wiley & Sons, Ltd.

In many large-scale computations, systems of equations arise in the form *Au* = *b*, where *A* is a linear operation to be performed on the unknown data *u*, producing the known right-hand side, *b*, which represents some constraint of known or assumed behavior of the system being modeled. Because such systems can be very large, solving them directly can be too slow. In contrast, a multigrid method removes different components of the error at different resolutions using smoothers that reduce high-frequency components of the error more readily than low-frequency ones. Here, we present an open-source multigrid solver written only in Python. OpenMG is a pure Python experimentation environment for testing multigrid concepts, not a production solver. The particular restriction method implemented is for ‘standard’ multigrid. By making the code simple and modular, we make the algorithmic details clear. The resulting solver is tested on an implicit pressure reservoir simulation problem with satisfactory results. Copyright © 2014 John Wiley & Sons, Ltd.
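
The core two-grid correction scheme underlying such a solver fits in a short self-contained sketch (plain NumPy rather than OpenMG; the grid size, weighted-Jacobi smoother, and cycle count below are arbitrary illustrative choices):

```python
import numpy as np

def poisson_1d(n):
    """3-point finite-difference Laplacian on n interior points of (0,1)."""
    h = 1.0 / (n + 1)
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def two_grid_cycle(A, b, x, nu=3, omega=2.0 / 3.0):
    """One two-grid cycle: weighted-Jacobi pre-smoothing, full-weighting
    restriction of the residual, exact coarse solve, linear-interpolation
    prolongation of the correction, and post-smoothing."""
    n = A.shape[0]
    nc = (n - 1) // 2                  # coarse grid: every second fine point
    d = np.diag(A)
    for _ in range(nu):                # pre-smoothing
        x = x + omega * (b - A @ x) / d
    r = b - A @ x
    # full-weighting restriction: rc_j = (r_{2j} + 2 r_{2j+1} + r_{2j+2}) / 4
    rc = 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]
    ec = np.linalg.solve(poisson_1d(nc), rc)   # exact coarse-grid solve
    e = np.zeros(n)                    # linear-interpolation prolongation
    e[1:-1:2] = ec
    e[0:-2:2] += 0.5 * ec
    e[2::2] += 0.5 * ec
    x = x + e
    for _ in range(nu):                # post-smoothing
        x = x + omega * (b - A @ x) / d
    return x

n = 63
A = poisson_1d(n)
b = np.ones(n)
x = np.zeros(n)
for _ in range(15):
    x = two_grid_cycle(A, b, x)
```

Recursing on the coarse solve instead of solving it exactly turns this two-grid cycle into a full multigrid V-cycle.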

This paper introduces a new preconditioning technique that is suitable for matrices arising from the discretization of a system of PDEs on unstructured grids. The preconditioner satisfies a so-called filtering property, which ensures that the preconditioner coincides with the input matrix on a given filtering vector. This vector is chosen to alleviate the effect of low-frequency modes on convergence and so decrease or eliminate the plateau that is often observed in the convergence of iterative methods. In particular, the paper presents a general approach that ensures that the filtering condition is satisfied in a matrix decomposition. The input matrix can have an arbitrary sparse structure. Hence, it can be reordered using nested dissection to allow parallel computation of the preconditioner and of the iterative process. We show the efficiency of our preconditioner through a set of numerical experiments on symmetric and nonsymmetric matrices. Copyright © 2014 John Wiley & Sons, Ltd.

This paper is concerned with the numerical solution of symmetric large-scale Lyapunov equations with low-rank right-hand sides and coefficient matrices depending on a parameter. Specifically, we consider the situation when the parameter dependence is sufficiently smooth, and the aim is to compute solutions for many different parameter samples. On the basis of existing results for Lyapunov equations and parameter-dependent linear systems, we prove that the tensor containing all solution samples typically allows for an excellent low multilinear rank approximation. Stacking all sampled equations into one huge linear system, this fact can be exploited by combining the preconditioned CG method with low-rank truncation. Our approach is flexible enough to allow for a variety of preconditioners based, for example, on the sign function iteration or the alternating direction implicit method. Copyright © 2013 John Wiley & Sons, Ltd.

In statistical and biometric sciences, one often uses predictive linear models. The initial form of such a model is usually obtained by fitting the coefficients of the model to a set of observed data according to the classical least squares method. Newborn models obtained in this way will be referred to as *raw models*. Such raw models are often the subject of efforts to improve their predictive performance on external datasets. Several methods can be followed to fine-tune raw models, thus leading to a variety of model building strategies. In this paper, the idea of so-called victory rates is introduced to compare the performance of building strategies with one another. Copyright © 2013 John Wiley & Sons, Ltd.

A squared Smith type algorithm for solving large-scale discrete-time Stein equations is developed. The algorithm uses restarted Krylov spaces to compute approximations of the squared Smith iterations in low-rank factored form. Fast convergence results when very few iterations of the alternating direction implicit method are applied to the Stein equation beforehand. The convergence of the algorithm is discussed and its performance is demonstrated by several test examples. Copyright © 2013 John Wiley & Sons, Ltd.
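
As background, the underlying (dense, unfactored) squared Smith iteration for the Stein equation *X* = *AXA*^{T} + *Q* with ρ(*A*) < 1 can be sketched as follows; the paper's contribution lies in carrying this out in low-rank factored form with restarted Krylov spaces, which this sketch does not attempt:

```python
import numpy as np

def squared_smith(A, Q, iters=10):
    """Squared Smith iteration for X = A X A^T + Q with rho(A) < 1:
    X_k accumulates the partial sums sum_j A^j Q (A^T)^j, with the
    number of accumulated terms doubling at every step."""
    X = Q.copy()
    Ak = A.copy()
    for _ in range(iters):
        X = X + Ak @ X @ Ak.T   # double the number of terms in the sum
        Ak = Ak @ Ak            # square the coefficient matrix
    return X

rng = np.random.default_rng(3)
n = 10
A = rng.standard_normal((n, n))
A *= 0.9 / np.abs(np.linalg.eigvals(A)).max()   # enforce rho(A) = 0.9
B = rng.standard_normal((n, 2))
Q = B @ B.T                                     # low-rank PSD right-hand side
X = squared_smith(A, Q)
```

After *k* steps the residual equals − *A*^{2^k} *Q* (*A*^{T})^{2^k}, so convergence is quadratically accelerated relative to the plain Smith iteration.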

The main goal of this paper is to approximate inverse operators by high-order Newton-type methods with the important feature of not using inverse operators. We analyse the semilocal convergence, the speed of convergence, and the efficiency of these methods. We determine that Chebyshev's method is the most efficient method and test it on two problems: one associated to the heat equation and the other one to a boundary value problem. We consider examples with matrices that are close to singular and/or badly conditioned. We check the robustness and stability of the methods by considering situations with many steps and noisy data. Copyright © 2013 John Wiley & Sons, Ltd.

The topic of this paper is the convergence analysis of subspace gradient iterations for the simultaneous computation of a few of the smallest eigenvalues plus eigenvectors of a symmetric and positive definite matrix pair (*A*,*M*). The methods are based on subspace iterations for *A*^{ − 1}*M* and use the Rayleigh–Ritz procedure for convergence acceleration. New sharp convergence estimates are proved by generalizing estimates previously presented for vectorial steepest descent iterations (see SIAM J. Matrix Anal. Appl., 32(2):443–456, 2011). Copyright © 2013 John Wiley & Sons, Ltd.
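
A minimal dense sketch of the kind of iteration being analyzed (inverse subspace iteration with Rayleigh–Ritz acceleration for an SPD pencil; the gradient formulation and the sharp estimates of the paper are not reproduced here):

```python
import numpy as np

def subspace_iteration(A, M, p, iters=80):
    """Inverse subspace iteration for the p smallest eigenpairs of the
    SPD matrix pair (A, M): repeat X <- A^{-1} M X with
    re-orthogonalization, then extract Ritz values and vectors by a
    Rayleigh-Ritz projection onto span(X)."""
    rng = np.random.default_rng(6)
    n = A.shape[0]
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))
    for _ in range(iters):
        X, _ = np.linalg.qr(np.linalg.solve(A, M @ X))
    # Rayleigh-Ritz: reduce the projected pencil (X^T A X, X^T M X)
    # to a standard symmetric problem via a Cholesky factor of X^T M X.
    Ap, Mp = X.T @ A @ X, X.T @ M @ X
    Lc = np.linalg.cholesky(Mp)
    Li = np.linalg.inv(Lc)
    w, Q = np.linalg.eigh(Li @ Ap @ Li.T)
    return w, X @ (Li.T @ Q)

# 1-D Laplacian test pair with M = I (the code handles general SPD M).
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
w, V = subspace_iteration(A, M, p=3)
```

The per-iteration contraction of the *p*-th Ritz vector is governed by the ratio λ_{p} ∕ λ_{p + 1}, which is the quantity the paper's sharp estimates make precise.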

Given a square matrix *A*, the inverse subspace problem is concerned with determining a closest matrix to *A* with a prescribed invariant subspace. When *A* is Hermitian, the closest matrix may be required to be Hermitian. We measure distance in the Frobenius norm and discuss applications to Krylov subspace methods for the solution of large-scale linear systems of equations and eigenvalue problems as well as to the construction of blurring matrices. Extensions that allow the matrix *A* to be rectangular and applications to Lanczos bidiagonalization, as well as to the recently proposed subspace-restricted SVD method for the solution of linear discrete ill-posed problems, are also considered. Copyright © 2013 John Wiley & Sons, Ltd.

No abstract is available for this article.

The technique that was used to build the *eigCG* algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the *BiCG* algorithm. We show that, similar to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the *BiCG* residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate *BiCGStab* for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of *eigBiCG* to compute spectral approximations with accuracy comparable with that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental *eigBiCG* followed by appropriately restarted and deflated *BiCGStab* provides a competitive method for systems with multiple right-hand sides. Copyright © 2013 John Wiley & Sons, Ltd.

The aim of this paper is to analyze efficient numerical methods for time integration of European option pricing models. When spatial discretization is adopted, the resulting problem consists of an ordinary differential equation that can be approximated by means of exponential Runge–Kutta integrators, where the matrix-valued functions are computed by the so-called shift-and-invert Krylov method. To our knowledge, the use of this numerical approach is innovative in the framework of option pricing, and it proves to be very attractive and efficient for solving the problem at hand. In this respect, we propose some a posteriori estimates for the error in the shift-and-invert approximation of the core-functions arising in exponential integrators. The effectiveness of these error bounds is tested on several examples of interest. They can be adopted as a convenient stopping criterion for implementing the exponential Runge–Kutta algorithm in order to perform time integration. Copyright © 2013 John Wiley & Sons, Ltd.

In this paper, we construct and analyze a level-dependent coarse grid correction scheme for indefinite Helmholtz problems. This adapted multigrid (MG) method is capable of solving the Helmholtz equation on the finest grid using a series of MG cycles with a grid-dependent complex shift, leading to a stable correction scheme on all levels. It is rigorously shown that the adaptation of the complex shift throughout the MG cycle maintains the functionality of the two-grid correction scheme, as no smooth modes are amplified in or added to the error. In addition, a sufficiently smoothing relaxation scheme should be applied to ensure damping of the oscillatory error components. Numerical experiments on various benchmark problems show the method to be competitive with or even outperform the current state-of-the-art MG-preconditioned Krylov methods, for example, complex shifted Laplacian preconditioned flexible GMRES. Copyright © 2013 John Wiley & Sons, Ltd.

Modulus-based splitting, as well as multisplitting, iteration methods for linear complementarity problems were developed by Zhong-Zhi Bai. In related papers (see Bai, Z.-Z., Zhang, L.-L.: *Modulus-Based Synchronous Multisplitting Iteration Methods for Linear Complementarity Problems*. Numerical Linear Algebra with Applications **20** (2013) 425–439, and the references cited therein), the problem of convergence for two-parameter relaxation methods (accelerated overrelaxation-type methods) is analyzed under the assumption that one parameter is greater than the other. Here, we show how this assumption can be avoided and, consequently, the convergence area improved. Copyright © 2013 John Wiley & Sons, Ltd.
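
As background, a single-splitting modulus-based iteration can be sketched as follows; the trivial splitting *M* = *A*, *N* = 0 with Ω = *I* and γ = 1 is an arbitrary illustrative choice, and the two-parameter relaxation and multisplitting variants discussed above are not reproduced:

```python
import numpy as np

def modulus_lcp(A, q, maxit=1000, tol=1e-12):
    """Modulus-based iteration for the linear complementarity problem:
    find z >= 0 with w = A z + q >= 0 and z^T w = 0.

    With the splitting A = M - N (here M = A, N = 0) and Omega = I,
    each step solves
        (M + Omega) x_new = N x + (Omega - A) |x| - q,
    and the LCP solution is recovered as z = |x| + x (then w = |x| - x)."""
    n = A.shape[0]
    I = np.eye(n)
    x = np.zeros(n)
    for _ in range(maxit):
        x_new = np.linalg.solve(A + I, (I - A) @ np.abs(x) - q)
        if np.linalg.norm(x_new - x) <= tol:
            x = x_new
            break
        x = x_new
    return np.abs(x) + x

# Test problem: an SPD tridiagonal M-matrix and a sign-mixed q.
n = 8
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
q = np.array([1.0, -2.0, 3.0, -4.0, 1.5, -0.5, 2.0, -1.0])
z = modulus_lcp(A, q)
w = A @ z + q
```

The substitution *z* = |*x*| + *x*, *w* = |*x*| − *x* builds the nonnegativity and complementarity conditions directly into the fixed-point equation, which is the idea the relaxation parameters then accelerate.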

A conventional block cyclic reduction algorithm operates by halving the size of the linear system at each reduction step, that is, the algorithm is a radix-2 method. An algorithm analogous to the block cyclic reduction known as the radix-q partial solution variant of the cyclic reduction (PSCR) method allows the use of higher radix numbers and is thus more suitable for parallel architectures as it requires fewer reduction steps. This paper presents an alternative and more intuitive way of deriving a radix-4 block cyclic reduction method for systems with a coefficient matrix of the form tridiag{ − *I*,*D*, − *I*}. This is performed by modifying an existing radix-2 block cyclic reduction method. The resulting algorithm is then parallelized by using the partial fraction technique. The parallel variant is demonstrated to be less computationally expensive when compared to the radix-2 block cyclic reduction method in the sense that the total number of emerging subproblems is reduced. The method is also shown to be numerically stable and equivalent to the radix-4 PSCR method. The numerical results achieved correspond to the theoretical expectations. Copyright © 2013 John Wiley & Sons, Ltd.

In this paper, we derive bounds for the complex eigenvalues of a nonsymmetric saddle point matrix with a symmetric positive semidefinite (2,2) block, which extend the corresponding previous bounds obtained by Bergamaschi. For the nonsymmetric saddle point problem, we propose a block diagonal preconditioner for the conjugate gradient method in a nonstandard inner product. Numerical experiments are also included to test the performance of the presented preconditioner. Copyright © 2013 John Wiley & Sons, Ltd.

Novel memory-efficient Arnoldi algorithms for solving matrix polynomial eigenvalue problems are presented. More specifically, we consider the case of matrix polynomials expressed in the Chebyshev basis, which is often numerically more appropriate than the standard monomial basis for larger degrees *d*. The standard way of solving polynomial eigenvalue problems proceeds by linearization, which increases the problem size by a factor *d*. Consequently, the memory requirements of Krylov subspace methods applied to the linearization grow by this factor. In this paper, we develop two variants of the Arnoldi method that build the Krylov subspace basis implicitly, in a way that only vectors of length equal to the size of the original problem need to be stored. The proposed variants are generalizations of the so-called quadratic Arnoldi method and two-level orthogonal Arnoldi procedure methods, which have been developed for the monomial case. We also show how the typical ingredients of a full implementation of the Arnoldi method, including shift-and-invert and restarting, can be incorporated. Numerical experiments are presented for matrix polynomials up to degree 30 arising from the interpolation of nonlinear eigenvalue problems, which stem from boundary element discretizations of PDE eigenvalue problems. Copyright © 2013 John Wiley & Sons, Ltd.