The computational efficiency of a typical, projection-based, nonlinear model reduction method hinges on the efficient approximation, for explicit computations, of the scalar projections onto a subspace of a residual vector. For implicit computations, it also hinges on the additional efficient approximation of similar projections of the Jacobian of this residual with respect to the solution. The computation of both approximations is often referred to in the literature as hyper reduction. To this end, this paper focuses on the analysis and comparative performance study of the recently developed energy-conserving mesh sampling and weighting (ECSW) hyper reduction method for nonlinear finite element reduced-order models of solids and structures. Unlike most alternative approaches, this method approximates the scalar projections of residuals and/or Jacobians directly, instead of first approximating these vectors and matrices and then projecting the resulting approximations onto the subspaces of interest. In this paper, it is shown that ECSW furthermore distinguishes itself from other hyper reduction methods through its preservation of the Lagrangian structure associated with Hamilton's principle. For second-order dynamical systems, this enables it to also preserve the numerical stability properties of the discrete system to which it is applied. It is also shown that for a fixed set of parameter values, the approximation error committed *online* by ECSW is bounded by its counterpart error committed *offline* during the training of this method. Therefore, this error can be estimated in this case *a priori* and controlled. The performance of ECSW is demonstrated first for two academic but nevertheless interesting nonlinear dynamic response problems.
For both of them, ECSW is shown to preserve numerical stability and deliver the desired level of accuracy, whereas the discrete empirical interpolation method and its recently introduced unassembled variant are shown to be susceptible to failure because of numerical instability. The potential of ECSW for enabling the effective reduction of nonlinear finite element dynamic models of solids and structures is also highlighted with the realistic simulation of the nonlinear transient dynamic response of a complete car engine to thermal and combustion pressure loads using an implicit scheme. For this simulation, ECSW is reported to enable the reduction of the CPU time required by the high-dimensional nonlinear finite element dynamic analysis by more than four orders of magnitude, while achieving a very good level of accuracy. Copyright © 2015 John Wiley & Sons, Ltd.
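The offline training stage at the heart of ECSW can be viewed as a sparse nonnegative least-squares problem: element weights are sought so that the weighted, sampled element contributions reproduce the reduced internal forces of the full mesh over a set of training snapshots. The following is a minimal sketch with synthetic data; the variable names are ours, and practical implementations use a sparse NNLS solver with an early-termination tolerance rather than the dense SciPy routine used here.

```python
# Hedged sketch of the ECSW offline training stage: find sparse, nonnegative
# element weights xi so that the weighted sum of sampled element contributions
# reproduces the full-mesh reduced force over a set of training snapshots.
# All data below are synthetic stand-ins for V^T f_e(u_k) quantities.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_el, n_modes, n_snap = 40, 3, 5

# G[:, e] stacks, over all snapshots, element e's contribution to the
# reduced internal force; b is the assembled full-mesh target.
G = rng.standard_normal((n_modes * n_snap, n_el))
b = G @ np.ones(n_el)            # exact weights xi = 1 reproduce the full sum

xi, residual = nnls(G, b)        # nonnegativity is what preserves energy

assert residual < 1e-6           # training error bounds the online error
sampled = np.flatnonzero(xi > 1e-10)
print(f"{len(sampled)} of {n_el} elements retained")
```

The nonnegativity constraint on the weights underlies the energy-conserving (and hence stability-preserving) character claimed above, and the training residual bounds the online approximation error for the training configurations.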

The post-processing of (3D) displacement fields for the identification of spatially varying elastic material parameters is a large inverse problem that remains out of reach for massive 3D structures. We explore here the potential of the constitutive compatibility method for tackling such an inverse problem, provided an appropriate domain decomposition technique is introduced. In the method described here, the statically admissible stress field that can be related through the known constitutive symmetry to the kinematic observations is sought through minimization of an objective function, which measures the violation of constitutive compatibility. After this stress reconstruction, the local material parameters are identified from the given kinematic observations using the constitutive equation. Here, we first adapt this method to solve 3D identification problems and then implement it within a domain decomposition framework, which allows for a reduced computational load when handling larger problems. Copyright © 2015 John Wiley & Sons, Ltd.

We present a robust method for generating high-order nodal tetrahedral curved meshes. The approach consists of modifying an initial linear mesh by first, introducing high-order nodes, second, displacing the boundary nodes to ensure that they are on the computer-aided design surface, and third, smoothing and untangling the mesh obtained after the displacement of the boundary nodes to produce a valid curved high-order mesh. The smoothing algorithm is based on the optimization of a regularized measure of the mesh distortion relative to the original linear mesh. This means that whenever possible, the resulting mesh preserves the geometrical features of the initial linear mesh such as shape, stretching, and size. We present several examples to illustrate the performance of the proposed algorithm. Furthermore, the examples show that the implementation of the optimization problem is robust and capable of handling situations in which the mesh before optimization contains a large number of invalid elements. We consider cases with polynomial approximations up to degree ten, large deformations of the curved boundaries, concave boundaries, and highly stretched boundary layer elements. The meshes obtained are suitable for high-order finite element analyses. Copyright © 2015 John Wiley & Sons, Ltd.

This paper provides a comparison between one particular *phase-field* damage model and a *thick level set* (TLS) damage model for the simulation of brittle and quasi-brittle fracture. The TLS model is recast in a variational framework, which allows comparison with the phase-field model. Using this framework, both the equilibrium equations and the damage evolution laws are guided by the initial choice of the potential energy. The potentials of the phase-field model and of the TLS model are quite different: the TLS potential enforces *a priori* a bound on the damage gradient, whereas the phase-field potential does not. The TLS damage model is defined such that its damage profile matches that of the phase-field model for a beam of infinite length. The model parameters are calibrated to obtain the same surface fracture energy. Numerical results are provided for one-dimensional and two-dimensional tests for both models. Qualitatively similar results are observed, although the TLS model is found to be less sensitive to boundary conditions. Copyright © 2015 John Wiley & Sons, Ltd.

Numerical techniques are suggested in this paper to improve the computational efficiency of the spectral boundary integral method initiated by Clamond and Grue [D. Clamond and J. Grue. A fast method for fully nonlinear water-wave computations. *J*. *Fluid Mech*. 2001; **447**: 337–355] for simulating nonlinear water waves. This method evaluates high-order convolutions using forward and inverse Fourier transforms and computes integrals with weakly singular integrands. A de-singularization technique is proposed here to evaluate the weakly singular integrals efficiently. An anti-aliasing technique is developed to overcome the aliasing problem associated with forward and inverse Fourier transforms of limited resolution. This paper also presents a technique for determining a critical value of the free surface, below which the integrals can be neglected. Numerical tests are carried out on these techniques and on the improved method equipped with them. The tests demonstrate that the improved method can significantly accelerate the computation, in particular when waves are strongly nonlinear. Copyright © 2015 John Wiley & Sons, Ltd.
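The aliasing problem mentioned above arises whenever a product of two fields is formed on a grid that cannot resolve the wavenumbers of the product. Below is a minimal sketch of zero-padded ("3/2-rule") de-aliasing for a quadratic term; the paper's anti-aliasing technique is not necessarily identical, but the underlying mechanism it addresses is the same.

```python
# Hedged sketch of zero-padding ("3/2-rule") de-aliasing for a quadratic
# nonlinearity evaluated pseudo-spectrally: products formed on a limited
# grid fold high wavenumbers back onto resolved ones.
import numpy as np

N = 32
x = 2 * np.pi * np.arange(N) / N
u = np.cos(12 * x)                 # wavenumber 12 on a grid resolving |k| <= 16

# Naive product: cos^2(12x) = 1/2 + cos(24x)/2; k = 24 aliases onto k = -8.
naive = np.fft.fft(u * u) / N

# De-aliased product: transform, pad to M = 3N/2 modes, multiply, truncate.
M = 3 * N // 2
U = np.fft.fft(u)
Upad = np.zeros(M, dtype=complex)
Upad[:N // 2] = U[:N // 2]
Upad[-N // 2:] = U[-N // 2:]
upad = np.fft.ifft(Upad) * (M / N)     # rescale for the larger grid
W = np.fft.fft(upad * upad)
clean = np.concatenate([W[:N // 2], W[-N // 2:]]) * (N / M) / N

k8 = 8                                 # aliased bin in the naive product
print(abs(naive[N - k8]), abs(clean[N - k8]))   # spurious energy vs near zero
```

The unresolved wavenumber k = 24 of the product falls in the padded region of the larger grid and is simply discarded on truncation, instead of contaminating the resolved mode k = -8.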

A new numerical approach for solving incompressible two-phase flows is presented in the framework of the recently developed Consistent Particle Method (CPM). In the context of the Lagrangian particle formulation, the CPM computes spatial derivatives based on a generalized finite difference scheme and produces good results for single-phase flow problems. Nevertheless, for two-phase flows, the method cannot be directly applied near the fluid interface because the abrupt discontinuity in fluid density produces a large change in the pressure gradient. This problem is resolved by working with the pressure gradient normalized by density, leading to a two-phase CPM of which the original single-phase CPM is a special case. In addition, a new adaptive particle selection scheme is proposed to overcome the ill-conditioning of the coefficient matrix of the pressure Poisson equation when particles are sparse and non-uniformly spaced. Numerical examples of Rayleigh–Taylor instability, gravity current flow, water–air sloshing and dam break are presented to demonstrate the accuracy of the proposed method in terms of wave profile and pressure solution. Copyright © 2015 John Wiley & Sons, Ltd.

Previous studies have shown that the commonly used quadrature schemes for polygonal and polyhedral finite elements lead to consistency errors that persist under mesh refinement and subsequently render the approximations non-convergent. In this work, we consider minimal perturbations to the gradient field at the element level in order to restore polynomial consistency and recover optimal convergence rates when the weak form integrals are evaluated using quadrature. For finite elements of arbitrary order, we state the accuracy requirements on the underlying volumetric and boundary quadrature rules and discuss the properties of the resulting corrected gradient operator. We compare the proposed approach with the pseudo-derivative method developed by Belytschko and co-workers and, for linear elliptic problems, with our previous remedy that involves splitting the elemental energy bilinear form into polynomial and non-polynomial parts. We present several numerical results for linear and nonlinear elliptic problems in two and three dimensions that not only confirm the recovery of optimal convergence rates but also suggest that the global error levels are close to those of approximations obtained from exact evaluation of the weak form integrals. Copyright © 2015 John Wiley & Sons, Ltd.

The aim of this paper is to propose a continuous–discontinuous computational homogenization–localization framework to upscale microscale localization toward the onset and propagation of a cohesive discontinuity at the macroscale. The major novelty of this contribution is the development of a fully coupled micro–macro solution strategy, where the solution procedure for the macroscopic domain is based on the extended finite element method. The proposed approach starts from classical computational homogenization, which allows one to derive the effective stress–strain response before the onset of localization. Upon strain localization, the microscale is characterized by a strain localization band, where damage grows, and by two adjacent unloading bulk regions on each side of the localization zone. The microscale localization band is lumped into a macroscopic cohesive crack, accommodated through discontinuity-enriched macroscale kinematics. The governing response of the continuum with a discontinuity is obtained numerically based on proper scale transition relations in terms of the traction–separation law and the stress–strain description of the continuous surrounding material on both sides of the discontinuity. The potential of the method is demonstrated with a numerical example, which illustrates the onset and propagation of a macroscale cohesive crack emerging from microstructural damage within the underlying microstructure. Copyright © 2015 John Wiley & Sons, Ltd.

A numerical model for nonlinear elastodynamics involving large rotations within the framework of the finite element method based on a NURBS (Non-Uniform Rational B-Spline) basis is presented. A comprehensive kinematical description using a corotational approach and an orthogonal tensor given by the exact polar decomposition is adopted. The state equation is written in terms of corotational variables according to the hypoelastic theory, relating the Jaumann derivative of the Cauchy stress to the Eulerian strain rate.

The generalized-*α* method (G*α*) and the generalized energy–momentum method with an additional parameter (GEMM+*ξ*) are employed in order to obtain a stable time-stepping scheme with controllable dissipation and algorithmic conservation properties for nonlinear dynamic analyses.

The main contribution is to show that the energy–momentum conservation properties and numerical stability may be improved once a NURBS-based FEM is used for the spatial discretization. It is also shown that high continuity can postpone the numerical instability when GEMM+*ξ* with a consistent mass matrix is employed; likewise, increasing the continuity class yields a decrease in the numerical dissipation. A parametric study is carried out in order to assess the stability and energy budget in terms of several properties such as continuity class, spectral radius and lumped as well as consistent mass matrices. Copyright © 2015 John Wiley & Sons, Ltd.

The extended finite element method (X-FEM) has proven to be an accurate, robust method for solving embedded interface problems. With a few exceptions, the X-FEM has mostly been used in conjunction with piecewise-linear shape functions and an associated piecewise-linear geometrical representation of interfaces. In the current work, the use of spline-based finite elements is examined along with a Nitsche technique for enforcing constraints on an embedded interface. To obtain optimal rates of convergence, we employ a hierarchical local refinement approach to improve the geometrical representation of curved interfaces. We further propose a novel weighting for the interfacial consistency terms arising in the Nitsche variational form with B-splines. A qualitative dependence between the weights and the stabilization parameters is established with additional element level eigenvalue calculations. An important consequence of this weighting is that the bulk as well as the interfacial fields remain well behaved in the presence of large heterogeneities as well as elements with arbitrarily small volume fractions. We demonstrate the accuracy and robustness of the proposed method through several numerical examples. Copyright © 2015 John Wiley & Sons, Ltd.
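As a minimal illustration of the Nitsche technique underlying the interface treatment above, the sketch below weakly enforces Dirichlet conditions for a 1D Poisson problem with linear elements, a much simpler setting than the embedded interfaces and B-splines of the paper. The penalty value and problem data are illustrative choices of ours.

```python
# Hedged sketch of Nitsche's method in 1D: solve -u'' = f on (0, 1) with
# u(0) = u(1) = 0 enforced weakly via consistency terms -(du/dn)v - (dv/dn)u
# and a penalty (gamma/h) u v at the boundary, instead of eliminating dofs.
import numpy as np

n, gamma = 32, 10.0                # elements, Nitsche penalty parameter
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = lambda t: np.pi**2 * np.sin(np.pi * t)   # manufactured: u_exact = sin(pi x)

K = np.zeros((n + 1, n + 1))
F = np.zeros(n + 1)
gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)    # 2-point Gauss rule

for e in range(n):                 # standard linear-FEM volume terms
    K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    for p in gp:
        t = x[e] + 0.5 * h * (1 + p)
        s = (t - x[e]) / h
        F[e:e + 2] += 0.5 * h * f(t) * np.array([1 - s, s])

# Nitsche boundary terms (homogeneous Dirichlet data, so no RHS terms).
for i, j in ((0, 1), (n, n - 1)):  # boundary dof and its interior neighbour
    K[i, i] += (gamma - 2.0) / h   # penalty plus both consistency terms
    K[i, j] += 1.0 / h
    K[j, i] += 1.0 / h             # symmetric (adjoint-consistent) variant

u = np.linalg.solve(K, F)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(f"max nodal error: {err:.2e}")
```

The penalty must exceed a mesh-independent threshold for coercivity (here gamma = 10 is comfortably safe); the paper's element-level eigenvalue calculations serve exactly to calibrate such stabilization parameters in the embedded-interface setting.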

A stabilized discontinuous Galerkin method is developed for general hyperelastic materials at finite strains. Starting from a mixed method incorporating Lagrange multipliers along the interface, the displacement formulation is systematically derived through a variational multiscale approach whereby the numerical fine scales are modeled via edge bubble functions. Analytical expressions that are free from user-defined parameters arise for the weighted numerical flux and stability tensor. In particular, the specific form taken by these derived quantities naturally accounts for evolving geometric nonlinearity as well as discontinuous material properties. The method is applicable both to problems containing nonconforming meshes or different element types at specific interfaces and to problems consisting of fully discontinuous numerical approximations. Representative numerical tests involving large strains and rotations are performed to confirm the robustness of the method. Copyright © 2015 John Wiley & Sons, Ltd.

In this work, the extended finite element method (XFEM) is coupled, for the first time, with the face-based strain smoothing technique to solve three-dimensional fracture problems. The proposed method, called the face-based smoothed XFEM here, is expected to combine the advantages of both XFEM and the strain smoothing technique. In XFEM, arbitrary crack geometry can be modeled and crack advance can be simulated without remeshing. The strain smoothing technique eliminates the integration of the singular term over the volume around the crack front, thanks to the transformation of volume integration into area integration. A special smoothing scheme is implemented in the crack front smoothing domain. Three examples are presented to test the accuracy, efficiency, and convergence rate of the face-based smoothed XFEM. From the results, it is clear that the smoothing technique can improve the performance of XFEM for three-dimensional fracture problems. Copyright © 2015 John Wiley & Sons, Ltd.
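The transformation of volume integration into boundary integration rests on the divergence theorem: the smoothed gradient over a smoothing domain equals a boundary integral of the field times the outward normal, so no derivatives (and no singular interior terms) are evaluated inside the domain. A minimal 2D sketch, with a polygon and field of our choosing:

```python
# Hedged 2D sketch of the strain-smoothing idea: the smoothed gradient over
# a smoothing domain is (1/A) * integral of u n ds over its boundary.
import numpy as np

def smoothed_gradient(verts, u):
    """Smoothed gradient of nodal values u on a CCW polygon with vertices
    verts; u is assumed to vary linearly along each edge."""
    verts = np.asarray(verts, float)
    n = len(verts)
    area, grad = 0.0, np.zeros(2)
    for i in range(n):
        a, b = verts[i], verts[(i + 1) % n]
        area += 0.5 * (a[0] * b[1] - b[0] * a[1])    # shoelace contribution
        edge = b - a
        normal = np.array([edge[1], -edge[0]])        # outward, length-scaled
        grad += 0.5 * (u[i] + u[(i + 1) % n]) * normal  # trapezoid rule
    return grad / area

# For a linear field u = 2x + 3y the smoothed gradient is exact.
verts = [(0, 0), (2, 0), (2, 1), (1, 2), (0, 1)]
u = np.array([2 * x + 3 * y for x, y in verts])
print(smoothed_gradient(verts, u))                    # ~ [2. 3.]
```

Because only boundary values of the (possibly enriched) field enter, the quadrature never sees the crack-front singularity inside the smoothing domain, which is the property the abstract highlights.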

Non-probabilistic *convex models* require only the variation bounds of the uncertain parameters rather than their exact probability distributions; thus, such models can be applied to the uncertainty analysis of complex structures when experimental information is lacking. The *interval* and the *ellipsoidal models* are the two most commonly used modeling methods in the field of non-probabilistic convex modeling. However, the former can only deal with independent variables, while the latter can only deal with dependent variables. This paper presents a more general non-probabilistic convex model, the *multidimensional parallelepiped model*. This model can include independent and dependent uncertain variables in a unified framework and can effectively deal with complex ‘multi-source uncertainty’ problems in which dependent and independent variables coexist. For any two parameters, the concepts of the correlation angle and the correlation coefficient are defined. Through the marginal intervals of all the parameters and their correlation coefficients, a multidimensional parallelepiped can easily be built as the uncertainty domain for the parameters. Through the introduction of affine coordinates, the parallelepiped model in the original parameter space is converted to an interval model in the affine space, thus greatly facilitating subsequent structural uncertainty analysis. The parallelepiped model is applied to structural uncertainty propagation analysis, and the response interval of the structure is obtained in the case of uncertain initial parameters. Finally, the method described in this paper is applied to several numerical examples. Copyright © 2015 John Wiley & Sons, Ltd.
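The affine-coordinate idea can be sketched in two dimensions: write the uncertain parameters as x = x_mid + A δ with δ in the unit square [-1, 1]², so the parallelepiped in parameter space becomes an interval model in the affine δ-coordinates. The shape matrix A below, built from the marginal radii and a correlation coefficient, is an illustrative choice of ours, not the paper's construction, although it does reproduce the marginal intervals.

```python
# Hedged sketch of uncertainty propagation through a 2-D parallelepiped
# model: parameters x = x_mid + A @ delta, delta in [-1, 1]^2, propagated
# through a linear response by evaluating the vertices of the domain.
import numpy as np
from itertools import product

x_mid = np.array([10.0, 2.0])      # marginal interval midpoints
r = np.array([1.0, 0.5])           # marginal interval radii
rho = 0.6                          # correlation coefficient in [-1, 1]

# Illustrative shape matrix: rho = 0 gives the independent interval box,
# |rho| = 1 full dependence; rows are scaled so each marginal radius is kept.
A = np.array([[r[0] * (1 - abs(rho)), rho * r[0]],
              [0.0,                   r[1]]])

def response(x):                   # linear structural response: for a linear
    return 3.0 * x[0] - 2.0 * x[1]  # map, the vertices give the exact bounds

vals = [response(x_mid + A @ np.array(d)) for d in product((-1, 1), repeat=2)]
lo, hi = min(vals), max(vals)
print(f"response interval: [{lo:.2f}, {hi:.2f}]")
```

For nonlinear responses the interval in affine coordinates would instead be propagated by the optimization or perturbation machinery of interval analysis; the vertex enumeration above is exact only for monotone (here linear) responses.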

The vector potential formulation is a promising solution method for nonlinear electromechanically coupled boundary value problems. However, one of the drawbacks of this formulation is the non-uniqueness of the (electric) vector potential in three dimensions. The present paper focuses on the Coulomb gauging method to overcome this problem. In particular, the corresponding gauging boundary conditions and their consistency with the physical boundary conditions are examined in detail. Furthermore, certain topological features, such as cavities and multiple connectedness of the analysis domain, are taken into account. Different variational (weak) formulations appropriate for finite element implementation are described. Finally, the suitability of these formulations is demonstrated in several numerical examples. Copyright © 2015 John Wiley & Sons, Ltd.

A discrete hyperelastic model is developed in this paper for a single atomic layer of an originally planar graphene structure. This model can be viewed as an extension of the well-known continuum hyperelastic model. Based on the discrete nature of the atomic structure, the notion of a discrete mapping and the concept of the spatial secant are introduced. The spatial secant serves as a deformation measure that provides a geometrically exact mapping, in the discrete sense, between the atomistic and continuum representations. By incorporating a physics-based interatomic potential, the corresponding discrete hyperelastic model is then established. After introducing the model, its computational implementation using the Galerkin finite element and/or meshfree method is outlined. The computational framework is then applied to the study of the mechanics of graphene sheets. Extensive comparisons with full-scale molecular mechanics simulations and experimental measurements are made to illustrate the robustness of this approach. Copyright © 2015 John Wiley & Sons, Ltd.

A novel meshless method based on the Shepard and Taylor interpolation method (STIM) and the hybrid boundary node method (HBNM) is proposed. Based on the Shepard interpolation method and the Taylor expansion, the STIM is developed to construct the shape functions of the HBNM. In the STIM, the Shepard shape function is used as the zero-level basis function, and the higher-order basis functions are constructed through the Taylor expansion. Four advantages of the STIM are the interpolation property, the arbitrarily high-order consistency, the absence of matrix inversion throughout the shape function construction, and the low computational expense. These properties are desirable in the implementation of meshless methods. By combining the STIM and the HBNM, a much more effective meshless method is proposed to solve elasticity problems. Compared with the traditional HBNM, the STIM can improve the accuracy because of the use of higher-order basis functions and can also improve the computational efficiency because no matrix inversion is required in the shape function construction process. Numerical examples are given to demonstrate the accuracy and efficiency of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
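The zero-level Shepard shape functions that the construction starts from are inverse-distance weights normalized into a partition of unity; the interpolation property follows from the singularity of the weights at the nodes. A minimal 1D sketch (the Taylor-expansion enrichment that yields the higher-order consistency of the STIM is not reproduced here):

```python
# Hedged sketch of zero-level Shepard shape functions: inverse-distance
# weights normalized into a partition of unity, with the interpolation
# property handled explicitly at (numerically coincident) nodes.
import numpy as np

def shepard_shape(x, nodes, p=2, eps=1e-12):
    """Return phi_i(x) for all nodes; sum(phi) = 1 and phi_i(x_j) = delta_ij."""
    d = np.abs(np.asarray(nodes, float) - x)
    if np.any(d < eps):                # at a node: Kronecker-delta property
        phi = (d < eps).astype(float)
        return phi / phi.sum()
    w = d ** (-p)                      # singular inverse-distance weights
    return w / w.sum()

nodes = np.array([0.0, 0.5, 1.0, 1.5])
u_nodes = nodes ** 2                   # nodal data to interpolate

phi = shepard_shape(0.7, nodes)
assert abs(phi.sum() - 1.0) < 1e-12    # partition of unity
assert abs(shepard_shape(0.5, nodes)[1] - 1.0) < 1e-12  # interpolates nodes
print(phi @ u_nodes)                   # Shepard approximation of u at x = 0.7
```

Plain Shepard interpolation reproduces only constants (zeroth-order consistency), which is exactly the limitation the Taylor-expansion terms of the STIM are introduced to remove.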

Dielectric materials like electro-active polymers (EAPs) exhibit coupled electro-mechanical behavior at large strains. They respond by a deformation to an applied electrical field and are used in advanced industrial environments as sensors and actuators, for example, in robotics, biomimetics and smart structures. In field-activated or electronic EAPs, the electric activation is driven by Coulomb-type electrostatic forces, resulting in Maxwell stresses. These materials are able to provide finite actuation strains, which can even be improved by optimizing their composite microstructure. However, EAPs suffer from different types of instabilities. This concerns *global structural instabilities*, such as buckling and wrinkling of EAP devices, as well as *local material instabilities*, such as limit-points and bifurcation-points in the constitutive response, which induce snap-through and fine scale localization of local states. In this work, we outline variational-based definitions for structural and material stability, and design algorithms for accompanying stability checks in typical finite element computations. The formulation starts from stability criteria for a *canonical energy minimization principle* of electro-elasto-statics, and then shifts them over to representations related to an *enthalpy-based saddle point principle* that is considered as the most convenient setting for numerical implementation. Here, global structural stability is analyzed based on a *perturbation of the total electro-mechanical energy*, and related to statements of positive definiteness of incremental finite element tangent arrays. We base the local material stability on an *incremental quasi-convexity condition* of the electro-mechanical energy, inducing the positive definiteness of both the incremental electro-mechanical moduli as well as a generalized acoustic tensor. 
It is shown that the incremental arrays to be analyzed in the stability criteria appear within the enthalpy-based setting in a *distinct diagonal form*, with pure mechanical and pure electrical partitions. Applications of accompanying stability analyses in finite element computations are demonstrated by means of representative model problems. Copyright © 2015 John Wiley & Sons, Ltd.

Zero-thickness interface elements are commonly used in computational mechanics to model material interfaces or to introduce discontinuities. The latter class requires the existence of a non-compliant interface prior to the onset of fracture. This is accomplished by assigning a high dummy stiffness to the interface prior to cracking. This dummy stiffness is known to introduce oscillations in the traction profile when using Gauss quadrature for the interface elements, but these oscillations are removed when resorting to a Newton–Cotes integration scheme [1]. The traction oscillations are aggravated for interface elements that use B-splines or non-uniform rational B-splines as basis functions (isogeometric interface elements) and, worse, do not disappear when using Newton–Cotes quadrature. An analysis of this phenomenon, including eigenvalue analyses, is presented, and it appears that the use of lumped integration (at the control points) is the only way to avoid the oscillations in isogeometric interface elements. New findings have also been obtained for standard interface elements, for example, that oscillations occur in the relative displacements at the interface irrespective of the value of the dummy stiffness. Copyright © 2015 John Wiley & Sons, Ltd.

A strategy for a two-dimensional contact analysis involving finite strain plasticity is developed with the aid of variable-node elements. The variable-node elements, in which nodes are added freely where they are needed, make it possible to transform the non-matching meshes into matching meshes directly. They thereby facilitate an efficient analysis, maintaining node-to-node contact during the contact deformation. The contact patch test, wherein the contact patch is constructed out of variable-node elements, is thus passed, and iterations for equilibrium solutions reach convergence faster in this scheme than in the conventional approach based on the node-to-surface contact. The effectiveness and accuracy of the proposed scheme are demonstrated through several numerical examples of elasto-plastic contact analyses. Copyright © 2015 John Wiley & Sons, Ltd.

In this work, we present a discrete beam lattice model with embedded discontinuities capable of simulating rock failure as a result of cracks propagating through the rock mass. The developed model is a two-dimensional (plane strain) microscale representation of rock as a two-phase heterogeneous material: phase I represents the intact rock, while phase II represents pre-existing microcracks and other defects. The proposed model relies on Timoshenko beam elements enhanced with additional kinematics to describe localized failure mechanisms. The model properly takes into account the fracture process zone with the coalescence of pre-existing microcracks, along with the localized failure modes: mode I of tensile opening and mode II of shear sliding. Furthermore, we give a detailed presentation of two different approaches to capturing the evolution of modes I and II, and their interaction and combination. The first approach treats modes I and II separately, where mode II can be activated while a compressive force may still be transferred through the rock mass that is not yet completely damaged. The second approach represents both modes I and II as being activated simultaneously at a point where complete failure is reached. A novel numerical procedure for dealing with two-mode failure within the framework of the method of incompatible modes is presented in detail and validated by a set of numerical examples. Copyright © 2015 John Wiley & Sons, Ltd.

This paper introduces a theoretical and algorithmic reduced model approach to efficiently evaluate time responses of complex dynamic systems. The proposed approach combines four main components: analytical expressions of the average of the system's transfer functions in the frequency domain, precise and convergent rational approximations of these exact expressions, exact evaluation of these approximations through model reduction in rational Krylov subspaces, and semi-analytical interpolation at just a few frequency points. The resulting algorithmic principles to evaluate the time response of a particular system are relatively straightforward: one first evaluates the response of the system with slight additional damping at a few frequencies, and one then projects or reduces the system in the subspace spanned by these responses. The time response of the reduced model implicitly provides a precise evaluation of that of the original system. The properties of the reduced models and the precision of the proposed approach are studied, and applications to complex matrix systems are presented and discussed. While the theory and numerical algorithms are presented in a matrix context, they are also transposable to a continuous functional context. Copyright © 2015 John Wiley & Sons, Ltd.
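The algorithmic principle described above, sampling slightly damped frequency responses and projecting onto the subspace they span, can be sketched with synthetic matrices. At a sampled frequency the Galerkin-reduced model reproduces the full response exactly (a moment-matching property of rational Krylov subspaces); elsewhere it interpolates. The system below is a stand-in of our choosing, not a model from the paper.

```python
# Hedged sketch of frequency-sampled Galerkin model reduction for a
# second-order system (K + i w C - w^2 M) x = f with slight damping.
import numpy as np

n = 200
rng = np.random.default_rng(1)
K = np.diag(np.linspace(10.0, 50.0, n))   # synthetic stiffness
M = np.eye(n)                             # mass
C = 1e-2 * K                              # slight additional damping
f = rng.standard_normal(n)                # load vector

def freq_response(Km, Cm, Mm, fm, w):
    return np.linalg.solve(Km + 1j * w * Cm - w**2 * Mm, fm)

# Offline: responses at a few sampling frequencies span the projection basis.
samples = [0.5, 1.5, 2.5]
X = np.column_stack([freq_response(K, C, M, f, w) for w in samples])
V, _ = np.linalg.qr(np.column_stack([X.real, X.imag]))

# Online: Galerkin-reduce once, then evaluate the small system anywhere.
Kr, Cr, Mr = V.T @ K @ V, V.T @ C @ V, V.T @ M @ V
fr = V.T @ f

for w in (1.5, 2.0):                      # a sampled and an unsampled point
    full = freq_response(K, C, M, f, w)
    red = V @ freq_response(Kr, Cr, Mr, fr, w)
    err = np.linalg.norm(red - full) / np.linalg.norm(full)
    print(f"w = {w}: relative error {err:.1e}")
```

The time response of the reduced triple (Mr, Cr, Kr) can then be integrated at negligible cost, which is the use case the paper targets.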

In this work, a new, unconditionally stable, time marching procedure for dynamic analyses is presented. The scheme is derived from the standard central difference approximation, with stabilization provided by a consistent perturbation of the original problem. Because the method only involves constitutive variables that are already available from computations at previous time steps, iterative procedures are not required to establish equilibrium when nonlinear models are considered, allowing more efficient analyses to be obtained. The theoretical properties of the proposed scheme are discussed taking into account standard stability and accuracy analyses, indicating the excellent performance of the new technique. At the end of the contribution, representative nonlinear numerical examples are studied, further illustrating the effectiveness of the new technique. Numerical results obtained by the standard central difference procedure and the implicit constant average acceleration method are also presented throughout the text for comparison. Copyright © 2015 John Wiley & Sons, Ltd.
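For reference, the standard central difference scheme that the new method departs from is fully explicit but only conditionally stable: for an undamped oscillator it requires dt < 2/omega. The sketch below shows both regimes on a single-degree-of-freedom problem of our choosing; the paper's consistent perturbation (not reproduced here) is designed to remove this time-step restriction.

```python
# Hedged sketch of the standard central difference scheme for m*a + k*u = 0:
# u_{n+1} = 2 u_n - u_{n-1} + dt^2 * (-k u_n / m).  Explicit (only previously
# computed quantities appear), but stable only for dt < 2/omega.
import numpy as np

m, k = 1.0, (2 * np.pi) ** 2        # natural frequency omega = 2*pi
omega = np.sqrt(k / m)

# Stable run: dt well below the limit 2/omega.
dt = 0.2 / omega
u, v = 1.0, 0.0                     # initial displacement and velocity
u_prev = u - dt * v + 0.5 * dt**2 * (-k * u / m)   # start-up value u_{-1}
t = 0.0
for _ in range(1000):
    u_next = 2 * u - u_prev + dt**2 * (-k * u / m)
    u_prev, u = u, u_next
    t += dt
print(f"u(t) = {u:.4f}, exact = {np.cos(omega * t):.4f}  (phase error only)")

# Unstable run: dt above the limit, the recursion diverges geometrically.
dt_bad = 2.1 / omega
ub_prev, ub = 1.0, 1.0
for _ in range(200):
    ub_next = 2 * ub - ub_prev + dt_bad**2 * (-k * ub / m)
    ub_prev, ub = ub, ub_next
print(f"|u| after 200 steps above the limit: {abs(ub):.1e}")
```

The stable run keeps its amplitude bounded but slowly accumulates phase error (the well-known period distortion of the scheme); above the limit the amplification factor exceeds one in modulus and the solution blows up.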

In this paper, we establish the connections between the virtual element method (VEM) and the hourglass control techniques that have been developed since the early 1980s to stabilize underintegrated *C*^{0} Lagrange finite element methods. In the VEM, the bilinear form is decomposed into two parts: a consistency term that reproduces a given polynomial space and a correction term that provides stability. The essential ingredients of *C*^{0}-continuous VEMs on polygonal and polyhedral meshes are described, which reveals that the variational approach adopted in the VEM affords a generalized and robust means to stabilize underintegrated finite elements. We focus on the heat conduction (Poisson) equation and present a virtual element approach for the isoparametric four-node quadrilateral and eight-node hexahedral elements. In addition, we show quantitative comparisons of the consistency and stabilization matrices in the VEM with those in the hourglass control method of Belytschko and coworkers. Numerical examples in two and three dimensions are presented for different stabilization parameters, which show that the method satisfies the patch test and delivers optimal rates of convergence in the *L*^{2} norm and the *H*^{1} seminorm for Poisson problems on quadrilateral, hexahedral, and arbitrary polygonal meshes. Copyright © 2015 John Wiley & Sons, Ltd.
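The consistency-plus-stability decomposition can be verified by hand for the unit-square four-node quadrilateral and the Poisson operator: the one-point (reduced) stiffness is the consistency part, the hourglass mode h = (1, -1, 1, -1) lies in its null space, and a rank-one term restores stability without disturbing linear (patch test) fields, since h is orthogonal to the nodal values of 1, x and y. The sketch below is our own illustration, not the paper's implementation.

```python
# Hedged sketch: one-point stiffness = consistency term; exact (2x2 Gauss)
# stiffness = consistency + rank-one hourglass term; any tau > 0 stabilizes.
import numpy as np

# Shape-function gradients of the bilinear quad at the centroid of [0,1]^2.
B = 0.5 * np.array([[-1.0, 1.0, 1.0, -1.0],
                    [-1.0, -1.0, 1.0, 1.0]])
K1 = B.T @ B                        # one-point stiffness (consistency part)

h = np.array([1.0, -1.0, 1.0, -1.0])
assert np.allclose(K1 @ h, 0)       # hourglass mode carries zero energy

# Fully integrated (2x2 Gauss) stiffness for comparison.
g = 1 / np.sqrt(3)
K_full = np.zeros((4, 4))
for xi in (-g, g):
    for eta in (-g, g):
        x, y = 0.5 * (1 + xi), 0.5 * (1 + eta)
        dN = np.array([[-(1 - y), (1 - y), y, -y],
                       [-(1 - x), -x, x, (1 - x)]])
        K_full += 0.25 * dN.T @ dN  # Jacobian determinant = 1/4

# Exact integration = consistency part + rank-one hourglass correction.
assert np.allclose(K_full, K1 + np.outer(h, h) / 6)

# VEM-style stabilization with a free parameter tau removes the spurious
# mode; h is orthogonal to nodal linears, so the patch test is unaffected.
tau = 0.1
K_stab = K1 + tau * np.outer(h, h)
assert np.linalg.matrix_rank(K_stab) == 3   # only the constant mode remains
print("decomposition verified")
```

The exact stiffness corresponds to the particular choice tau = 1/6 on this element, which is the sense in which the VEM stabilization generalizes classical hourglass control.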

The control-volume mixed finite element method is formulated for, and applied to, a computational domain consisting of a tetrahedral partitioning to solve the steady groundwater flow equations. Test functions consistent with piecewise constant and piecewise linear pressure distributions are used in the formulation. Comparisons are made with a standard mixed finite element formulation using lowest-order Raviart–Thomas basis functions. Results suggest that the control-volume-based formulation is a viable alternative to the standard formulation. Copyright © 2015 John Wiley & Sons, Ltd.

In bending problems of Mindlin–Reissner plates, the resultant forces often vary dramatically within a narrow range near free and soft simply-supported (SS1) boundaries. This is the so-called edge effect or boundary layer effect, a challenging problem for the conventional finite element method. In this paper, an effective finite element method for the analysis of such edge effects is developed. The construction procedure is based on the *hybrid displacement function* (HDF) element method [1], a simple hybrid-Trefftz stress element method proposed recently. What is different is that an additional displacement function *f* related to the edge effect is considered, and its analytical solutions are employed as additional trial functions for the first time. Furthermore, the free and SS1 boundary conditions are also applied to modify the assumed element resultant fields. Then, two new special elements, HDF-P4-Free and HDF-P4-SS1, are successfully constructed. These new elements are allocated along the corresponding boundaries of the plate, while the remaining region is modeled by the usual HDF plate element HDF-P4-11*β* [1]. Numerical tests demonstrate that the present method can effectively capture the edge effects and exactly satisfy the corresponding boundary conditions using only relatively coarse meshes. Copyright © 2015 John Wiley & Sons, Ltd.

A new indirect approach to the problem of approximating the particular solution of non-homogeneous hyperbolic boundary value problems is presented. Unlike the dual reciprocity method, which constructs approximate particular solutions using radial basis functions, polynomials or trigonometric functions, the method reported here uses the homogeneous solutions of the problem obtained by discarding all time-derivative terms from the governing equation. Nevertheless, what distinguishes the present approach from a conceptual standpoint is the option of not using these trial functions exclusively for the approximation of the particular solution but to fully integrate them with the (Trefftz-compliant) homogeneous solution basis. The particular solution trial basis is capable of significantly improving the Trefftz solution even when the original equation is genuinely homogeneous, an advantage that is lost if the basis is used exclusively for the recovery of the source terms. Similarly, a sufficiently refined Trefftz-compliant basis is able to compensate for possible weaknesses of the particular solution approximation. The method is implemented using the displacement model of the hybrid-Trefftz finite element method. The functions used in the particular solution basis reduce most terms of the matrix of coefficients to boundary integral expressions and preserve the Hermitian, sparse and localized structure of the solving system that typifies hybrid-Trefftz formulations. Even when domain integrals are present, they are generally easy to handle, because the integrand presents no singularity. Copyright © 2015 John Wiley & Sons, Ltd.

We provide a constructive and numerically implementable proof that synchronized groups of coupled, self-sustaining oscillators can be represented as a single effective Perturbation Projection Vector (PPV) (or Phase Response Curve) phase macromodel – in other words, that a group of synchronized oscillators behaves as a single effective oscillator with respect to external influences. This result constitutes a foundation for understanding and predicting synchronization/timing hierarchically in large, complex systems that arise in nature and engineering. We apply this result hierarchically to networks of synchronized oscillators, thereby enabling their efficient and scalable analysis. We illustrate our theory and numerical methods with examples from electronics (networks of three-stage ring oscillators), biology (FitzHugh–Nagumo neurons) and mechanics (pendulum clocks). Our experiments demonstrate that effective PPVs extracted hierarchically can capture emergent phenomena, such as pattern formation, in coupled oscillator networks. Copyright © 2015 John Wiley & Sons, Ltd.

Nonlinear geometrically exact rod dynamics is of great interest in many areas of engineering. In recent years, research has focused on Timoshenko-type rod theories in which shearing is of importance. However, in many general models of mechanisms and spatial deformations, it is desirable to have a displacement-only formulation, which brings us back to the classical Bernoulli beam. While it is well known in linear analysis, the Bernoulli beam is not as common in geometrically exact models of dynamics, especially when we want to incorporate the rotational inertia into the model. This paper is about the development of an energy-momentum integration scheme for the geometrically exact Bernoulli-type rod. We will show that the task is achievable and devise a general framework to do so. Copyright © 2014 John Wiley & Sons, Ltd.

Materials have a hierarchical nature, often deriving their most useful properties from microscale or nanoscale constituents. Multiresolution analysis, a generalized continuum mechanics-based theory, uses extra degrees of freedom to account for an arbitrary number of these nested length scales. In the past, multiresolution analyses have focused mostly on metal alloys. While this article addresses recent advances in image-based multiresolution analysis of metal alloys, it also highlights extensions of multiresolution theory to the modeling of bone mechanics and to the multiresolution analysis of polymers and polymer nanocomposites. A strong link between molecular dynamics simulations and macroscale multiresolution analyses is shown for both polymers and polymer nanocomposites. The forthcoming work is greatly indebted to the pioneering advances of Ted Belytschko in many areas of computational mechanics; his influence on our work and on the field of finite elements as a whole is substantial. Copyright © 2015 John Wiley & Sons, Ltd.

Recently, graphics processing units (GPUs) have been increasingly leveraged in a variety of scientific computing applications. However, architectural differences between CPUs and GPUs necessitate the development of algorithms that take advantage of GPU hardware. As sparse matrix-vector (SPMV) multiplication operations are commonly used in finite element analysis, a new SPMV algorithm and several variations are developed for unstructured finite element meshes on GPUs. The effective bandwidth of current GPU algorithms and of the newly proposed algorithms is measured and analyzed for 15 sparse matrices of varying sizes and sparsity structures. The effects of optimization and the differences between the new GPU algorithm and its variants are then studied. Lastly, both the new and current SPMV GPU algorithms are utilized in a GPU-based conjugate gradient (CG) solver in GPU finite element simulations of the heart. These results are then compared against results from a parallel PETSc finite element implementation. The effective bandwidth tests indicate that the new algorithms compare very favorably with current algorithms for a wide variety of sparse matrices and can yield very notable benefits. The GPU finite element simulation results demonstrate the benefit of using GPUs for finite element analysis and also show that the proposed algorithms can yield speedup factors of up to 12-fold for real finite element applications. Copyright © 2015 John Wiley & Sons, Ltd.
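As a point of reference for the operation such kernels accelerate, the following minimal sketch (plain Python/NumPy, not the paper's GPU code) spells out the sparse matrix-vector product in compressed sparse row (CSR) storage; in the simplest "scalar" GPU kernel, each iteration of the outer loop is mapped to one thread, and the algorithms compared in the paper differ mainly in how this loop is mapped onto threads and memory.

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A @ x with A stored in CSR format.

    A scalar GPU kernel would assign one thread per iteration of the
    outer loop; vector/warp-level kernels instead share a row among
    several threads to improve memory coalescing.
    """
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):                      # one row per "thread"
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[4, 0, 1],
#      [0, 3, 0],
#      [2, 0, 5]] stored in CSR form
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 2.0, 3.0])
y = spmv_csr(values, col_idx, row_ptr, x)        # y = [7, 6, 17]
```

The three CSR arrays (`values`, `col_idx`, `row_ptr`) are the standard layout; the toy 3×3 matrix is purely illustrative.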

A time-domain meshless algorithm based on vector potentials is introduced for the analysis of transient electromagnetic fields. The proposed numerical algorithm is a modification of the radial point interpolation method, where radial basis functions are used for local interpolation of the vector potentials and their derivatives. In the proposed implementation, solving the second-order vector potential wave equation intrinsically enforces the divergence-free property of the electric and magnetic fields. Furthermore, the computational effort associated with the generation of a dual node distribution (as required for solving the first-order Maxwell's equations) is avoided. The proposed method is validated with several examples of 2D waveguides and filters, and the convergence is empirically demonstrated in terms of node density or size of local support domains. It is further shown that inhomogeneous node distributions can provide increased convergence rates, that is, the same accuracy with a smaller number of nodes compared with a solution for a homogeneous node distribution. A comparison of the magnetic vector potential technique with the conventional radial point interpolation method is performed, highlighting the superiority of the divergence-free formulation. Copyright © 2015 John Wiley & Sons, Ltd.

An adaptive refinement scheme is presented to reduce the geometry discretization error and provide higher-order enrichment functions for the interface-enriched generalized FEM. The proposed method relies on the *h*-adaptive and *p*-adaptive refinement techniques to reduce the discrepancy between the exact and discretized geometries of curved material interfaces. A thorough discussion is provided on identifying the appropriate level of refinement for curved interfaces based on the size of the elements of the background mesh. Several techniques are then studied for selecting the quasi-optimal location of interface nodes to obtain a more accurate approximation of the interface geometry. We also discuss different approaches for creating the integration sub-elements and evaluating the corresponding enrichment functions, together with their impact on the performance and computational cost of higher-order enrichments. Several examples are presented to demonstrate the application of the adaptive interface-enriched generalized FEM for modeling thermo-mechanical problems with intricate geometries. The accuracy and convergence rate of the method are also studied in these example problems. Copyright © 2015 John Wiley & Sons, Ltd.

An *anchored* analysis of variance (ANOVA) method is proposed in this paper to decompose the statistical moments. Compared to the *standard* ANOVA with mutually orthogonal component functions, the anchored ANOVA, with an arbitrary choice of the *anchor point*, loses orthogonality when the same measure is employed. However, an advantage of the anchored ANOVA lies in the considerably reduced number of deterministic solver computations, which renders the uncertainty quantification of real engineering problems much easier. Unlike existing methods, the *covariance decomposition* of the output variance is used in this work to take account of the interactions between non-orthogonal components, yielding an exact variance expansion and thus, with a suitable numerical integration method, a convergent strategy. This convergence is verified on academic tests. In particular, the sensitivity of existing methods to the choice of anchor point is analyzed via the Ishigami case, and the covariance decomposition is shown to be free of this issue. Also, with a truncated anchored ANOVA expansion, numerical results show that the proposed approach is less sensitive to the anchor point. The *covariance-based sensitivity indices* (SI) are also used and compared with the *variance-based SI*. Furthermore, we emphasize that the covariance decomposition can be generalized in a straightforward way to decompose higher-order moments. For academic problems, results show that the method converges to the exact solution for both the skewness and the kurtosis. Finally, the proposed method is applied to a realistic case, that is, estimating the chemical-reaction uncertainties in a hypersonic flow around a space vehicle during atmospheric reentry. Copyright © 2015 John Wiley & Sons, Ltd.
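The identity behind the covariance decomposition can be checked on a toy model. The sketch below (an illustrative stand-in, not the paper's solver or test cases) builds the anchored (cut-HDMR-style) components of f(x₁, x₂) = x₁ + 2x₂ + x₁x₂ at the anchor (0.5, 0.5) and verifies by Monte Carlo that the output variance equals the sum of the covariances between the output and the non-orthogonal components.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x1, x2: x1 + 2*x2 + x1*x2           # toy model output
c1, c2 = 0.5, 0.5                              # anchor point

# Anchored ANOVA components: exact decomposition f = f0 + f1 + f2 + f12,
# but the components are NOT mutually orthogonal in general.
f0  = f(c1, c2)
f1  = lambda x1: f(x1, c2) - f0
f2  = lambda x2: f(c1, x2) - f0
f12 = lambda x1, x2: f(x1, x2) - f0 - f1(x1) - f2(x2)

# Monte Carlo samples, X1, X2 ~ U(0, 1) independent
x1, x2 = rng.random(200_000), rng.random(200_000)
y = f(x1, x2)

# Covariance decomposition: Var(Y) = sum_u Cov(Y, f_u(X)), which holds
# exactly despite the lack of orthogonality (f0 is constant, Cov = 0).
cov = lambda a, b: np.mean((a - a.mean()) * (b - b.mean()))
var_from_cov = cov(y, f1(x1)) + cov(y, f2(x2)) + cov(y, f12(x1, x2))
```

Because the decomposition is exact sample-by-sample, `y.var()` and `var_from_cov` agree to floating-point roundoff, anchor point notwithstanding.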

We discuss recent advances on robust unfitted finite element methods on cut meshes. These methods are designed to facilitate computations on complex geometries obtained, for example, from computer-aided design or image data from applied sciences. Both the treatment of boundaries and interfaces and the discretization of PDEs on surfaces are discussed and illustrated numerically. © 2014 The Authors. *International Journal for Numerical Methods in Engineering* published by John Wiley & Sons Ltd.

The purpose of this paper is to study the effect of the bulk modulus on the iterative solution of free surface quasi-incompressible fluids using a mixed partitioned scheme. A practical rule to set up the value of a pseudo-bulk modulus a priori in the tangent bulk stiffness matrix for improving the conditioning of the linear system of algebraic equations is also given. The efficiency of the proposed strategy is tested in several problems analyzing the advantage of the modified bulk tangent matrix with regard to the stability of the pressure field, the convergence rate and the computational speed of the analyses. The technique has been tested on a finite calculus/particle finite element method Lagrangian formulation, but it can be easily extended to other quasi-incompressible stabilized finite element formulations. Copyright © 2014 John Wiley & Sons, Ltd.

A novel nonlinear parametric model order reduction technique for the solution of contact problems in flexible multibody dynamics is presented. These problems are characterized by significant variations in the location and size of the contact area and typically require high-dimensional finite element models having multiple inputs and outputs to be solved. The presented technique draws from the fields of nonlinear and parametric model reduction to construct a reduced-order model whose dimensions are insensitive to the dimensions of the full-order model. The solution of interest is approximated in a lower-dimensional subspace spanned by a constant set of eigenvectors augmented with a parameter-dependent set of *global contact shapes*. The latter represent deformation patterns of the interacting bodies obtained from a series of static contact analyses. The set of global contact shapes is parameterized with respect to the system configuration and therefore continuously varies in time. An energy-consistent formulation is assured by explicitly taking into account the dynamic parameter variability in the derivation of the equations of motion. The performance of the novel technique is demonstrated by simulating a dynamic gear contact problem and comparing results against traditional model reduction techniques as well as commercial nonlinear finite element software. Copyright © 2014 John Wiley & Sons, Ltd.

An adaptive approach to using reduced-order models (ROMs) as surrogates in partial differential equations (PDE)-constrained optimization is introduced that breaks the traditional offline-online framework of model order reduction. A sequence of optimization problems constrained by a given ROM is defined with the goal of converging to the solution of a given PDE-constrained optimization problem. For each reduced optimization problem, the constraining ROM is trained from sampling the high-dimensional model (HDM) at the solution of some of the previous problems in the sequence. The reduced optimization problems are equipped with a nonlinear trust-region based on a residual error indicator to keep the optimization trajectory in a region of the parameter space where the ROM is accurate. A technique for incorporating sensitivities into a reduced-order basis is also presented, along with a methodology for computing sensitivities of the ROM that minimizes the distance to the corresponding HDM sensitivity, in a suitable norm.

The proposed reduced optimization framework is applied to subsonic aerodynamic shape optimization and shown to reduce the number of queries to the HDM by a factor of 4-5, compared with the optimization problem solved using only the HDM, with errors in the optimal solution far less than 0.1%. Copyright © 2014 John Wiley & Sons, Ltd.

This paper primarily deals with the computational aspects of chemical dissolution-front instability problems in two-dimensional fluid-saturated porous media under non-isothermal conditions. After the dimensionless governing partial differential equations of the non-isothermal chemical dissolution-front instability problem are briefly described, the formulation of a computational procedure, which combines the finite difference and finite element methods, is derived for simulating the morphological evolution of chemical dissolution fronts in the non-isothermal chemical dissolution system within two-dimensional fluid-saturated porous media. To ensure the correctness and accuracy of the numerical solutions, the proposed computational procedure is verified by comparing the numerical solutions with the analytical solutions for a benchmark problem. As an application example, the verified computational procedure is then used to simulate the morphological evolution of chemical dissolution fronts in the supercritical non-isothermal chemical dissolution system. The related numerical results have demonstrated the following: (1) the proposed computational procedure can produce accurate numerical solutions for the planar chemical dissolution-front propagation problem in the non-isothermal chemical dissolution system consisting of a fluid-saturated porous medium; (2) the Zhao number has a significant effect not only on the dimensionless propagation speed of the chemical dissolution front but also on the distribution patterns of the dimensionless temperature, dimensionless pore-fluid pressure, and dimensionless chemical-species concentration in a non-isothermal chemical dissolution system; (3) once the finger penetrates the whole computational domain, the dimensionless pore-fluid pressure decreases drastically in the non-isothermal chemical dissolution system.

In this paper, we are interested in the dynamical response of a material body subjected to impact loadings. Such loadings are brutal and intense and may damage the material, leading to strain localization and rupture. Before strain localization occurs, computation of such problems is often accurate enough and very efficient when an explicit time integration scheme is applied. However, after strain localization occurs, the mathematical relevance of a model is preserved only if non-locality is introduced. This often results in a dramatic increase in computational cost. We propose in this work to introduce non-locality through the Thick Level Set approach (TLS). This is the first time the TLS approach has been presented in a dynamical context. In this approach, the additional computational effort is limited in space to a domain slightly bigger than the strain localization region, and the time discretization is explicit. The non-local computation is based on a new technique in which basis functions are built on the damaged band. The resulting function space has the properties needed to compute non-local fields. Copyright © 2014 John Wiley & Sons, Ltd.

Computational structural dynamics plays an essential role in the simulation of linear and nonlinear systems. Indeed, the characteristics of the time integration procedure have a critical impact on the feasibility of the calculation. In order to go beyond the classical approach (a unique time integrator and a unique timescale), the pioneering approach of Belytschko and co-workers consisted in developing mixed implicit–explicit time integrators for structural dynamics. In a first step, the implementation and stability analyses of partitioned integrators with one time step were achieved for a large class of time integrators. In a second step, the implementation and stability analyses of partitioned integrators with different time steps were studied in detail for particular cases. However, stability results involving different time steps and different time integrators in different parts of the mesh remain an open question in the general case for structural dynamics. The aim of this paper is to present a state of the art of heterogeneous (different time schemes) asynchronous (different time steps) time integrators (HATI) for computational structural dynamics. Finally, an alternative approach based on energy considerations (with velocity continuity at the interface) is proposed in order to develop a general class of HATI for structural dynamics. Copyright © 2014 John Wiley & Sons, Ltd.

In this work, we present a stable proper orthogonal decomposition–Galerkin approximation for parametrized steady incompressible Navier–Stokes equations with low Reynolds number. Copyright © 2014 John Wiley & Sons, Ltd.

This paper deals with data transfer between two meshes as it happens in a finite element context when a remeshing has to be performed. We propose a finite-volume-based data transfer method for an efficient remeshing of three-dimensional solid mechanics problems. The originality of this transfer method stems from a linear reconstruction of the fields to be transferred on an auxiliary finite volume mesh, a fast computation of the transfer operator and the application to the complete remeshing of 3D problems. This procedure is applicable to both nodal values and discrete fields defined at quadrature points. In addition, a data transfer method using mortar elements is presented. The main improvement made to this second method comes from a fast computation of mortar elements. These two data transfer methods are compared with the simplest transfer method, which consists of a classical interpolation. After some academic examples, we present 2D forging and 3D friction stir welding applications. Copyright © 2014 John Wiley & Sons, Ltd.

An efficient procedure for embedding kinematic boundary conditions in the biharmonic equation, for problems such as the pure streamfunction formulation of the Navier–Stokes equations and thin plate bending, is based on a stabilized variational formulation, obtained by Nitsche's approach for enforcing boundary constraints. The absence of kinematic admissibility constraints allows the use of non-conforming meshes with non-interpolatory approximations, thereby providing added flexibility in addressing the higher continuity requirements typical of these problems. Variationally conjugate pairs weakly enforce kinematic boundary conditions. The use of a scaling factor leads to a formulation with a single stabilization parameter. For plates, the enforcement of tangential derivatives of deflections obviates the need for pointwise enforcement of corner values in the presence of corners. The single stabilization parameter is determined from a local generalized eigenvalue problem, guaranteeing coercivity of the discrete bilinear form. The accuracy of the approach is verified by representative computations with bicubic B-splines, providing guidance to the determination of the scaling and exhibiting optimal rates of convergence and robust performance with respect to values of the stabilization parameter. Copyright © 2014 John Wiley & Sons, Ltd.
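For orientation, the structure of such a stabilized formulation can be sketched on the second-order Poisson prototype; this is an illustrative analogue only, since the paper itself treats the fourth-order biharmonic operator, where the same ideas apply to deflections and their tangential derivatives:

```latex
% Nitsche's form for the Poisson prototype $-\Delta u = f$ with $u = g$
% on $\Gamma$ (illustrative second-order analogue of the biharmonic case):
a_h(u,v) = (\nabla u, \nabla v)_\Omega
         - \langle \partial_n u,\, v \rangle_\Gamma
         - \langle u - g,\, \partial_n v \rangle_\Gamma
         + \tau \langle u - g,\, v \rangle_\Gamma .
% Coercivity of the discrete bilinear form is guaranteed whenever the
% stabilization parameter $\tau$ exceeds a multiple of the largest
% eigenvalue $\lambda_{\max}$ of the local generalized eigenvalue problem
\langle \partial_n u,\, \partial_n v \rangle_\Gamma
   = \lambda\, (\nabla u, \nabla v)_\Omega ,
% for example $\tau = 2\,\lambda_{\max}$, computed from the assembled
% boundary-flux and stiffness matrices of the elements cut by $\Gamma$.
```

This eigenvalue-based choice is the standard way of making the single stabilization parameter mesh- and basis-independent; the specific form used for the biharmonic problem in the paper involves the corresponding higher-order traces.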

This work presents a method to adaptively refine reduced-order models *a posteriori* without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive *h*-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive *k*-means clustering of the state variables using snapshot data. The method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Further, it enables the reduced-order model to satisfy *any prescribed error tolerance* regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis. Copyright © 2014 John Wiley & Sons, Ltd.
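The splitting step at the heart of this *h*-refinement can be illustrated with a small NumPy sketch. This is a toy stand-in under stated assumptions (a plain k-means on snapshot rows, random toy data), not the paper's dual-weighted-residual machinery: state variables are clustered by their snapshot histories, and one basis vector is split into children with disjoint support that sum back to the parent.

```python
import numpy as np

def kmeans(rows, k, iters=50, seed=0):
    """Plain k-means on the rows of `rows`; returns one cluster label per row."""
    rng = np.random.default_rng(seed)
    centers = rows[rng.choice(len(rows), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((rows[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = rows[labels == j].mean(axis=0)
    return labels

def split_basis_vector(v, snapshots, k=2):
    """Split one basis vector into k vectors with disjoint support.

    State variables (rows of the snapshot matrix) are grouped by k-means;
    each child keeps v's entries on one group and is zero elsewhere, so
    the children sum back to v while enlarging the reduced space.
    """
    labels = kmeans(snapshots, k)
    return [np.where(labels == j, v, 0.0) for j in range(k)]

# Toy example: 6 state variables, 4 snapshots, split one basis vector in two
snapshots = np.array([[1.0, 1.1, 0.9, 5.0, 5.2, 4.8],
                      [1.2, 1.0, 1.1, 5.1, 4.9, 5.0],
                      [0.9, 1.0, 1.0, 5.2, 5.1, 4.9],
                      [1.1, 0.9, 1.2, 4.8, 5.0, 5.1]]).T  # rows = state variables
v = np.array([0.3, -0.1, 0.5, 0.2, -0.4, 0.6])
children = split_basis_vector(v, snapshots, k=2)
```

Because the children span a strictly larger space than the parent, repeated splitting refines the ROM toward the full-order model, which is the mechanism behind the "any prescribed error tolerance" property.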

In this work, we propose a method to prescribe essential boundary conditions in the finite element approximation of elliptic problems when the boundary of the computational domain does not match with the element boundaries. The problems considered are the Poisson problem, the Stokes problem, and the Darcy problem, the latter both in the primal and in the dual formulation. The formulation proposed is of variational type. The key idea is to start with the variational form that defines the problem and treat the boundary condition as a constraint. The particular feature is that the Lagrange multiplier is not defined on the boundary where the essential condition needs to be prescribed but is taken as a certain trace of a field defined in the computational domain, either in all of it or just in a region surrounding the boundary. When approximated numerically, this may allow one to condense the DOFs of the new field and end up with a problem posed only in terms of the original unknowns. The nature of the field used to weakly impose boundary conditions depends on the problem being treated. For the Poisson problem, it is a flux; for the Stokes problem, a stress; for the Darcy problem in primal form, a velocity field; and for the Darcy problem in dual form, it is a potential. Copyright © 2014 John Wiley & Sons, Ltd.

A novel extended variational multiscale method for incompressible two-phase flow is proposed. In this approach, the level-set method, which allows for accurately representing complex interface evolutions, is combined with an extended finite element method for the fluid field. Sharp representation of the discontinuities at the interface related to surface-tension effects and large material-parameter ratios is enabled by this approach. To capture the discontinuities, jump enrichments are applied to both the velocity and the pressure field. Nitsche's method is then used to weakly impose the continuity of the velocity field. For a stable formulation on the entire domain, residual-based variational multiscale terms are supported by appropriate face-oriented ghost-penalty and fluid stabilization terms in the region of enriched elements. Both the face-oriented terms and the interfacial terms related to Nitsche's method are introduced such that viscous-dominated and convection-dominated transient flows are accounted for. As a result, stability and well-conditioned systems are guaranteed independent of the interface location. The proposed method is applied to four numerical examples of increasing complexity: two-dimensional Rayleigh–Taylor instabilities, a two-dimensional collapsing water column, three-dimensional rising bubbles, and a three-dimensional bubble coalescence. Excellent agreement with either analytical solutions or numerical and experimental reference data, as well as robustness for all flow regimes, is demonstrated for all examples. Copyright © 2014 John Wiley & Sons, Ltd.

An embedded mesh method using piecewise constant multipliers originally proposed by Puso *et al.* (CMAME, 2012) is analyzed here to determine the effects of the pressure stabilization term and small cut cells. The approach is implemented for transient dynamics using the central difference scheme for the time discretization. It is shown that the resulting equations of motion are a stable linear system with a condition number independent of mesh size. Next, it is shown that the constraints and the stabilization terms can be recast as non-proportional damping such that the time integration of the scheme is provably stable with a critical time step computed from the undamped equations of motion. Effects of small cuts are discussed throughout the presentation. A mesh study is conducted to evaluate the effects of the stabilization on the discretization error and conditioning and is used to recommend an optimal value for the stabilization scaling parameter. Several nonlinear problems are also analyzed and compared with comparable conforming mesh results. Finally, several demanding problems highlighting the robustness of the proposed approach are shown. Copyright © 2014 John Wiley & Sons, Ltd.

In a recent paper, Fish and Kuznetsov introduced the so-called computational continua (C^{2}) approach, a variant of higher order computational homogenization that does not require higher order continuity, introduces no new degrees of freedom, and is free of higher order boundary conditions. In a follow-up paper on reduced order computational continua, the C^{2} formulation was enhanced with a model reduction scheme based on the construction of residual-free fields to yield a computationally efficient framework coined RC^{2}. The original C^{2} formulations were limited to rectangular and box elements. The primary objective of the present manuscript is to revisit the original formulation in three respects: (i) consistent formulation of boundary conditions for unit cells subjected to higher order coarse scale fields, (ii) effective solution of the unit cell problem for lower order approximation of eigenstrains, and (iii) development of nonlocal quadrature schemes for general two-dimensional (quad and triangle) and three-dimensional (hexahedral and tetrahedral) elements. Copyright © 2014 John Wiley & Sons, Ltd.

This work presents a simple technique for real-time monitoring of thermal processes. Real-time simulation-based control of thermal processes is a big challenge because high-fidelity numerical simulations are costly and cannot be used, in general, for real-time decision making. Very often, processes are monitored or controlled with a few measurements at some specific points. Thus, the strategy presented here is centered on fast evaluation of the response only where it is needed. To accomplish this, classical harmonic analysis is combined with recent model reduction techniques. This leads to an advanced harmonic methodology, which solves in real time the transient heat equation at the monitored point.

In order to apply the *reciprocity principle*, harmonic analysis is used in the space-frequency domain. Then, the Proper Generalized Decomposition, a reduced-order approach, pre-computes a transfer function able to produce the output response for a given excitation. This transfer function is computed offline and only once. The response at the monitoring point can then be recovered by performing a computationally inexpensive post-processing step, which can be carried out online for real-time monitoring of the thermal process. Examples show the applicability of this approach to a wide range of problems, from fast temperature evaluation to inverse problems. Copyright © 2014 John Wiley & Sons, Ltd.
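The offline/online split can be illustrated on a scalar heat balance at a single monitored point. The sketch below is a stand-in under stated assumptions (a one-degree-of-freedom ODE with an analytically known transfer function, in place of the paper's PGD-computed one): the transfer function is evaluated "offline", and the monitored response to a harmonic excitation is then recovered by a cheap post-processing step, with no online time stepping.

```python
import numpy as np

# "Offline": for the scalar heat balance  dT/dt + a*T = q(t)  at the
# monitored point, the transfer function is H(w) = 1 / (i*w + a).
# (A hypothetical stand-in for a PGD-computed transfer function.)
a = 2.0
H = lambda w: 1.0 / (1j * w + a)

# "Online": harmonic excitation q(t) = sin(w0*t); the steady-state
# monitored response follows from H alone.
w0 = 3.0
t = np.linspace(0.0, 20.0, 200_001)
T_online = np.imag(H(w0) * np.exp(1j * w0 * t))

# Reference: explicit time integration of the same ODE (forward Euler),
# i.e. the expensive transient computation the method avoids online.
dt = t[1] - t[0]
T_ref = np.zeros_like(t)
for n in range(len(t) - 1):
    T_ref[n + 1] = T_ref[n] + dt * (np.sin(w0 * t[n]) - a * T_ref[n])

# After the initial transient has decayed, the two responses agree.
err = np.max(np.abs(T_online[-1000:] - T_ref[-1000:]))
```

The real method replaces the closed-form `H` with a transfer function pre-computed once by PGD over the space-frequency domain, but the online step keeps exactly this flavor: evaluate, multiply, done.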

We demonstrate the potential of collocation methods for efficient higher-order analysis on standard nodal finite element meshes. We focus on a collocation method that is variationally consistent and geometrically flexible, converges optimally, embraces concepts of reduced quadrature, and leads to symmetric stiffness and diagonal consistent mass matrices. At the same time, it minimizes the evaluation cost per quadrature point, thus reducing formation and assembly effort significantly with respect to standard Galerkin finite element methods. We provide a detailed review of all components of the technology in the context of elastodynamics, that is, weighted residual formulation, nodal basis functions on Gauss–Lobatto quadrature points, and symmetrization by averaging with the ultra-weak formulation. We quantify potential gains by comparing the computational efficiency of collocated and standard finite elements in terms of basic operation counts and timings. Our results show that collocation is significantly less expensive for problems dominated by the formation and assembly effort, such as higher-order elastostatic analysis. Furthermore, we illustrate the potential of collocation for efficient higher-order explicit dynamics. Throughout this work, we advocate a straightforward implementation based on simple modifications of standard finite element codes. We also point out the close connection to spectral element methods, where many of the key ideas are already established. Copyright © 2014 John Wiley & Sons, Ltd.

We present the first a priori error analysis for the first hybridizable discontinuous Galerkin method for linear elasticity proposed in Internat. J. Numer. Methods Engrg. **80** (2009), no. 8, 1058–1092. We consider meshes made of polyhedral, shape-regular elements of arbitrary shape and show that, whenever piecewise-polynomial approximations of degree *k* are used and the exact solution is smooth enough, the antisymmetric part of the gradient of the displacement converges with order *k*, the stress and the symmetric part of the gradient of the displacement converge with order *k* + 1/2, and the displacement converges with order *k* + 1. We also provide numerical results showing that the orders of convergence are actually sharp. Copyright © 2014 John Wiley & Sons, Ltd.

In this work, we propose a novel model order reduction approach for two-phase flow in porous media by introducing a global pressure formulation in which the total mobility, which realizes the coupling between saturation and pressure, is regarded as a parameter to the pressure equation. In our approach, the mobility itself is an optimal fit of mobility profiles that are precomputed using the time-of-flight for the initial saturation. Using this formulation and the localized reduced basis multiscale method, we obtain a low-dimensional surrogate of the high-dimensional pressure equation. By applying ideas from model order reduction for parametrized partial differential equations, we are able to split the computational effort for solving the pressure equation into a costly *offline* step that is performed only once and an inexpensive *online* step that is carried out in every time step of the two-phase flow simulation, which is thereby largely accelerated. The use of elements from numerical multiscale methods allows us to shift the computational load between the offline and online steps so as to reach an ideal runtime at an acceptable increase in error for the two-phase flow simulation. This is achieved by constructing reduced-dimensional local spaces that lead to a non-conforming global approximation space. As one example of a coupling on the global space, we introduce a discontinuous Galerkin scheme. Copyright © 2014 John Wiley & Sons, Ltd.
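The offline/online split for a parametrized equation has a compact generic form. The NumPy sketch below is a minimal stand-in under stated assumptions (a random affinely parametrized system A(μ) = A₀ + μA₁ instead of the paper's pressure equation, and a plain global POD basis instead of localized multiscale spaces): reduced operators are precomputed once offline, and each online solve touches only a tiny system.

```python
import numpy as np

# Affinely parametrized system  A(mu) x = b  with  A(mu) = A0 + mu*A1.
# The matrices and basis here are random toy stand-ins, not the paper's.
rng = np.random.default_rng(1)
n = 200
A0 = np.diag(np.full(n, 4.0)) + rng.standard_normal((n, n)) * 0.001
A1 = np.eye(n)
b = rng.standard_normal(n)

# Offline (done once): snapshot solves, reduced basis via SVD, and
# precomputation of the small parameter-independent operators.
snapshots = np.column_stack([np.linalg.solve(A0 + mu * A1, b)
                             for mu in (0.5, 1.0, 2.0)])
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
A0r, A1r, br = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ b

# Online (every time step / new parameter): assemble and solve a
# 3x3 system instead of the 200x200 one, then lift back.
mu = 1.3
x_rom = V @ np.linalg.solve(A0r + mu * A1r, br)
x_hdm = np.linalg.solve(A0 + mu * A1, b)          # reference HDM solve
rel_err = np.linalg.norm(x_rom - x_hdm) / np.linalg.norm(x_hdm)
```

The key point carried over from the paper is that the online cost is independent of `n`; the localized multiscale construction additionally lets the offline cost and the online accuracy be traded against each other.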

An adaptive and efficient approach for constructing reduced-order models (ROMs) that are robust to changes in parameters is developed. The approach is based on a greedy sampling of the underlying high-dimensional model (HDM) together with an efficient procedure for exploring the configuration space and identifying parameters for which the error is likely to be high. Because this exploration is based on a surrogate model for an error indicator, it is amenable to a fast training phase. Furthermore, a model for the exact error based on the error indicator is constructed and used to determine when the greedy procedure reaches a desired error tolerance. An efficient procedure for updating the reduced-order basis constructed by proper orthogonal decomposition is also introduced in this paper, avoiding the cost associated with computing large-scale singular value decompositions. The proposed procedure is then shown to require fewer evaluations of *a posteriori* error estimators than the classical procedure in order to identify locations in the parameter space to be sampled. The approach is illustrated on the training of parametric ROMs for three linear and nonlinear mechanical systems, including the realistic prediction of the response of a V-hull vehicle to underbody blasts. Copyright © 2014 John Wiley & Sons, Ltd.
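
The greedy sampling loop described above can be sketched schematically. This is a minimal stand-in, not the paper's algorithm: the one-dimensional parameter grid, the cheap surrogate error indicator, and the tolerance are all illustrative assumptions, and the HDM solve/ROM enrichment is reduced to a comment.

```python
import numpy as np

# Hypothetical greedy sampling sketch: pick the candidate parameter where a
# cheap error indicator is largest, "solve" there, repeat until the tolerance
# is met everywhere. Indicator and grid are illustrative assumptions.
candidates = np.linspace(0.0, 1.0, 101)    # 1-D parameter space for illustration

def error_indicator(mu, sampled):
    # cheap surrogate: distance to the nearest already-sampled parameter
    return min(abs(mu - s) for s in sampled) if sampled else np.inf

sampled, tol = [], 0.05
while True:
    errs = [error_indicator(mu, sampled) for mu in candidates]
    worst = int(np.argmax(errs))
    if errs[worst] <= tol:                  # greedy loop reached the tolerance
        break
    sampled.append(candidates[worst])       # here: solve the HDM and enrich the ROM

print(len(sampled), "parameters sampled")
```

In the actual method, the indicator is a surrogate trained on the error model, and each accepted parameter triggers an HDM solve and an incremental update of the POD basis rather than a full recomputation.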

A numerical technique that is based on the integration of the asymptotic solution in the numerical framework for computing the local singular behavior of Stokes flow near a sharp corner is presented. Moffatt's asymptotic solution is used, and special enriched shape functions are developed and integrated in the extended finite element method (X-FEM) framework to solve the Navier–Stokes equations. The no-slip boundary condition on the walls of the corner is enforced via the use of Lagrange multipliers. Flows around corners with different angles are simulated, and the results are compared both with the known analytic solution and with X-FEM results obtained without special enrichment near the corner. The present technique is shown to greatly reduce the error made in computing the pressure and velocity fields near a corner tip without the need for mesh refinement near the corner. The method is then applied to the estimation of the permeability of a network of fibers, where it is shown that the local small-scale pressure singularities have a large impact on the large-scale network permeability. Copyright © 2014 John Wiley & Sons, Ltd.

We formulate a class of delicately controlled problems to model the kink-free evolution of quasistatic cracks in brittle, isotropic, linearly elastic materials in two dimensions. The evolving crack satisfies familiar principles: Griffith's criterion, local symmetry, and irreversibility. A novel feature of the formulation is that, in addition to the crack path, the loading is also treated as an unknown. Specifically, a scaling factor for prescribed Dirichlet and Neumann boundary conditions is computed as part of the solution to yield an always-propagating and ostensibly kink-free crack and a continuous loading history beyond the initial step. A dimensionless statement of the problem depends only on the Poisson's ratio of a homogeneous material and is, in particular, independent of its Young's modulus and fracture toughness.

Numerical resolution of the formulated problem relies on two new ideas. The first is an algorithm to compute triangulations conforming to cracked domains by locally deforming a *given* background mesh in the vicinity of evolving cracks. The algorithm is robust under mild assumptions on the sizes and angles of triangles near the crack and its smoothness. Hence, a subset, if not the entire family, of cracked domains realized during the course of a simulation can be discretized with the same background mesh; we term the latter a *universal mesh* for such a family of domains. Universal meshes facilitate adopting a discrete representation for the crack (as splines in our examples), preclude the need for local splitting/retriangulation operations, liberate the crack from following directions prescribed by the mesh, and enable the adoption of standard FEMs to compute the elastic fields in the cracked solid. Second, we employ a method specifically designed to approximate the stress intensity factors for curvilinear cracks. We examine the performance of the resulting numerical method with detailed examples, including comparisons with an exact solution of a crack propagating along a circular arc (which we construct) and comparisons with experimental fracture paths. In all cases, we observe convergence of computed paths, their derivatives, and loading histories with refinement of the universal mesh. Copyright © 2014 John Wiley & Sons, Ltd.

We develop a three-dimensional, hierarchically parallel, finite strain multiscale solver capable of computing nonlinear multiscale solutions with over 1 billion finite elements and over 574 million nonlinear equations on 1552 computing cores. In the vein of FE^{2}, we use the nested iterative procedure and devote the solver to multiscale cohesive modeling of heterogeneous hyperelastic layers. The hierarchically parallel multiscale solver takes advantage of a client-server non-blocking communication matrix that limits latency, starvation, and overhead by overlaying computations at different scales. We perform simulations of real-scale engineering devices and bridge across length scales spanning from mm to nm in spatial resolution. Verification of the hierarchically parallel solver is provided together with a mesh convergence study. Moreover, we report on the scaling performance. Copyright © 2014 John Wiley & Sons, Ltd.

We present a parameterized-background data-weak (PBDW) formulation of the variational data assimilation (state estimation) problem for systems modeled by partial differential equations. The main contributions are a constrained optimization weak framework informed by the notion of experimentally observable spaces; *a priori* and *a posteriori* error estimates for the field and associated linear-functional outputs; weak greedy construction of prior (background) spaces associated with an underlying potentially high-dimensional parametric manifold; stability-informed choice of observation functionals and related sensor locations; and finally, output prediction from the optimality saddle at a cost that depends only on *M*, the number of experimental observations. We present results for a synthetic Helmholtz acoustics model problem to illustrate the elements of the methodology and confirm the numerical properties suggested by the theory. To conclude, we consider a physical raised-box acoustic resonator chamber: we integrate the PBDW methodology and a Robotic Observation Platform to achieve real-time *in situ* state estimation of the time-harmonic pressure field; we demonstrate the considerable improvement in prediction provided by the integration of a best-knowledge model and experimental observations; we extract, even from these results with real data, the numerical trends indicated by the theoretical convergence and stability analyses. Copyright © 2014 John Wiley & Sons, Ltd.
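
The background-plus-update structure of such state estimation can be illustrated with a toy one-dimensional field. This sketch is only in the spirit of PBDW, not the paper's saddle formulation: the field, the low-order background space, the pointwise sensors, and the crude correction step are all assumptions.

```python
import numpy as np

# Toy background-plus-update state estimation. Everything here (field,
# background modes, sensor model, correction) is an illustrative assumption.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
u_true = np.sin(2 * np.pi * x) + 0.1 * np.cos(9 * np.pi * x)   # "true" field

# background (prior) space: a few low-order Fourier modes
Z = np.column_stack([np.sin(2 * np.pi * k * x) for k in (1, 2, 3)])

# M = 8 pointwise sensors; each row of L evaluates the field at one location
sensors = rng.choice(len(x), size=8, replace=False)
L = np.zeros((8, len(x)))
L[np.arange(8), sensors] = 1.0
y = L @ u_true                                  # experimental observations

# least-squares fit of the background to the data, then a correction supported
# on the observed degrees of freedom so the data are matched exactly
z_coef, *_ = np.linalg.lstsq(L @ Z, y, rcond=None)
u_bk = Z @ z_coef
u_est = u_bk + L.T @ (y - L @ u_bk)

print(np.linalg.norm(L @ u_est - y))            # observation mismatch (≈ 0)
```

The estimate interpolates the data while falling back on the background model away from the sensors; in PBDW proper, the update lives in a stability-informed observable space rather than on raw sensor degrees of freedom.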

The coupling between two distinct numerical methods presents a major challenge, especially in the case of discrete-continuum coupling for dynamic simulations. This paper presents a general multiscale framework for coupling the discrete element method (DEM) and the finite difference method (FDM). DEM has been shown to be particularly suitable for capturing physical phenomena at small length scales where a continuum mechanics description no longer applies. Its efficiency, however, is limited by the computational power required. A multidomain analysis coupling DEM and FDM is thus proposed to reduce the computational cost. To couple overlapping DEM and FDM regions, a bridging scale term is introduced such that compatibility of dynamic behavior between the DEM-based and the FDM-based models is enforced. This multiscale method couples two commercial packages: the DEM-based code Particle Flow Code (PFC) and the FDM-based code Fast Lagrangian Analysis of Continua (FLAC). The new method is applied to several reference dynamic tests. Results using the proposed method compare well with benchmark simulations from FDM and DEM. Copyright © 2015 John Wiley & Sons, Ltd.

Quadrilateral and triangular elements with curved edges are developed in the framework of a spectral, discontinuous, hybrid control-volume/finite-element method for elliptic problems. In order to accommodate hybrid meshes, encompassing both triangular and quadrilateral elements, a single mapping is used. The scheme is applied to two-dimensional problems with discontinuous, anisotropic diffusion coefficients, and the exponential convergence of the method is verified in the presence of curved geometries. Copyright © 2014 John Wiley & Sons, Ltd.

In this paper, an advanced 20 × 20 stiffness matrix and the corresponding nodal load vector of a member of arbitrary composite cross section are developed taking into account shear lag effects due to both flexure and torsion (the case of the three-dimensional beam element of arbitrary homogeneous cross section is treated as a special one). The composite member consists of materials in contact, each of which can surround a finite number of inclusions. Nonuniform warping distributions are taken into account by employing four independent warping parameters multiplying a shear warping function in each direction and two torsional warping functions. Ten boundary value problems with respect to the kinematical components are formulated and solved employing the analog equation method, a BEM-based technique. The aforementioned boundary value problems are formulated employing either an improved stress field arising from the correction of shear stress components or the stress field arising directly from displacement considerations. The warping functions and the geometric constants, including the additional ones due to warping, are evaluated employing a pure BEM approach, that is, only boundary discretization of the cross section is used. Numerical results are presented to illustrate the method and demonstrate its efficiency and accuracy. The deviations arising from the use of the advanced 20 × 20 stiffness matrix and the classical 12 × 12 or 14 × 14 ones employed in commercial software packages are illustrated through examples of great practical interest. Moreover, the influence of nonuniform warping effects necessitating the use of the aforementioned ‘advanced’ stiffness matrix is also demonstrated. Copyright © 2015 John Wiley & Sons, Ltd.