The finite analytic method (FAM) is developed to solve two-dimensional steady fluid flow in heterogeneous porous media with full-tensor permeability on unstructured grids. The proposed FAM is constructed from the power-law analytic nodal solution in an angular domain of arbitrary shape. When approaching the grid node joining the subdomains, three different flow patterns may exist: power-law flow, linear flow or stagnant flow. Based on the nodal analytic solution, a triangle-based FAM is proposed. Numerical examples show that the proposed numerical scheme converges much more quickly under cell refinement than traditional methods, typically the weighted harmonic mean method. In practical applications, a grid refinement parameter of *n*=2 or *n*=3 is recommended, for which the relative error of the calculated equivalent permeability stays below 4% independent of the heterogeneity. In contrast, when using the traditional numerical scheme, the refinement ratio for the grid cell needs to increase dramatically to obtain an accurate result, especially for strongly heterogeneous porous media.
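As a point of reference for the comparison above, the weighted harmonic mean is exact for one-dimensional flow normal to layered blocks, which is why it serves as the traditional interface average. A minimal numpy sketch (function name and block data are illustrative, not from the paper):

```python
import numpy as np

# Equivalent permeability of blocks in series (flow normal to the layers):
# the exact result is the thickness-weighted harmonic mean, i.e. the
# "traditional" averaging rule the FAM is compared against.
def harmonic_mean_permeability(k, dx):
    """k: block permeabilities, dx: block thicknesses."""
    return np.sum(dx) / np.sum(dx / k)

k = np.array([1.0, 100.0])    # strong heterogeneity: two-block contrast 1:100
dx = np.array([1.0, 1.0])
k_eq = harmonic_mean_permeability(k, dx)
print(k_eq)  # ≈ 1.98: dominated by the low-permeability block
```

For flow parallel to the layers the arithmetic mean applies instead, which is one reason a single averaging rule cannot capture full-tensor heterogeneity on general grids.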

A methodology is proposed for the efficient solution of probabilistic nonconvex constrained optimization problems with uncertain parameters. Statistical properties of the underlying stochastic generator are characterized from an initial statistical sample of function evaluations. A diffusion manifold over the initial set of data points is first identified and an associated basis computed. The joint probability density function of this initial set is estimated using a kernel density model, and an Itô stochastic differential equation (ISDE) is constructed with this model as its invariant measure. The ISDE is adapted to fluctuate around the manifold, so that additional joint realizations of the uncertain parameters, design variables, and function values are obtained as solutions of the ISDE. The expectations in the objective function and constraints are then accurately evaluated without performing additional function evaluations. The methodology brings together novel ideas from manifold learning and stochastic Hamiltonian dynamics to tackle an outstanding challenge in stochastic optimization. Three examples are presented to highlight different aspects of the proposed methodology. This article is protected by copyright. All rights reserved.
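The sampling idea (an SDE whose invariant measure is a kernel density estimate built from the initial sample) can be sketched in one dimension with an overdamped Langevin equation; the paper's ISDE is the manifold-adapted, Hamiltonian version, and the data, bandwidth and step size below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=200)    # initial statistical sample
h = 0.4                                   # kernel bandwidth (assumed)

def grad_log_kde(x):
    # score of a Gaussian kernel density estimate p(x) = mean_j N(x; d_j, h^2)
    w = np.exp(-0.5 * ((x - data) / h) ** 2)
    return (w @ (data - x)) / (h ** 2 * np.sum(w))

# Overdamped Langevin (Ito SDE) with the KDE as invariant measure:
#   dX = grad log p(X) dt + sqrt(2) dW
x, dt, n = 0.0, 0.01, 20000
samples = []
for i in range(n):
    x += grad_log_kde(x) * dt + np.sqrt(2 * dt) * rng.standard_normal()
    if i > 2000:                          # discard burn-in
        samples.append(x)
print(np.mean(samples))  # close to the data mean (≈ 2)
```

The generated realizations follow the estimated density, so statistics (expectations in objective and constraints) can be computed without further calls to the expensive function.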

Fractures tend to propagate along least-resistance paths, and models based on homogeneous material assumptions may not reliably predict the true crack paths, as they cannot capture the nonlinearities and local damage induced by local inhomogeneity. This paper presents a stochastic numerical modelling framework for simulating fracturing in naturally heterogeneous materials. Fracture propagation is modelled using Francfort and Marigo's variational theory, and randomness in the material properties is introduced via random field theory. A computational strategy based on a nonlinear dimensionality reduction framework is developed that maps the domain of spatially variable material properties to a low-dimensional space. This strategy allows us to predict, via an optimisation algorithm, the most probable fracture patterns leading to failure. The reliability and performance of the developed methodology are examined through simulation of experimental case studies and comparison of the predictions with measured data.

We explore diffuse formulations of Nitsche's method for consistently imposing Dirichlet boundary conditions on phase-field approximations of sharp domains. Leveraging the properties of the phase-field gradient, we derive the variational formulation of the diffuse Nitsche method by transferring all integrals associated with the Dirichlet boundary from a geometrically sharp surface format in the standard Nitsche method to a geometrically diffuse volumetric format. We also derive conditions for the stability of the discrete system and formulate a diffuse local eigenvalue problem, from which the stabilization parameter can be estimated automatically in each element. We advocate metastable phase-field solutions of the Allen-Cahn problem for transferring complex imaging data into diffuse geometric models. In particular, we discuss the use of mixed meshes, that is, an adaptively refined mesh for the phase-field in the diffuse boundary region and a uniform mesh for the representation of the physics-based solution fields. We illustrate the accuracy and convergence properties of the diffuse Nitsche method and demonstrate its advantages over diffuse penalty-type methods. In the context of imaging-based analysis, we show that the diffuse Nitsche method achieves the same accuracy as the standard Nitsche method with sharp surfaces if the inherent length scales, i.e., the interface width of the phase-field, the voxel spacing and the mesh size, are properly related. We demonstrate the flexibility of the new method by analyzing stresses in a human vertebral body.

The Koiter-Newton method has recently demonstrated superior performance for non-linear analyses of structures compared to traditional path-following strategies. The method follows a predictor-corrector scheme to trace the entire equilibrium path. During a predictor step, a reduced-order model is constructed based on Koiter's asymptotic post-buckling theory, which is followed by a Newton iteration in the corrector phase to regain the equilibrium of forces.

In this manuscript, we introduce a robust mixed solid-shell formulation to further enhance the efficiency of stability analyses in several respects. We show that a Hellinger-Reissner variational formulation facilitates the reduced-order model construction, omitting an expensive evaluation of the inherent fourth-order derivatives of the strain energy. We demonstrate that extremely large step sizes with a reasonable out-of-balance residual can be obtained, with substantial impact on the total number of steps needed to trace the complete equilibrium path. More importantly, the numerical effort of the corrector phase, which involves a Newton iteration of the full-order model, is drastically reduced, thus revealing the true strength of the proposed formulation. We study a number of problems from engineering and compare the results to the conventional approach in order to highlight the gain in numerical efficiency for stability problems.

The present work addresses a multiscale framework for Fast-Fourier-Transform-based computational homogenization. The framework considers the scale-bridging between the microscopic and macroscopic scales. While the macroscopic problem is discretized with finite elements, the microscopic problems are solved by means of Fast Fourier Transforms on periodic representative volume elements (RVE). In such a multiscale scenario, the computation of the *effective properties* of the microstructure is crucial. While effective quantities in terms of stresses and deformations can be computed from surface integrals along the boundary of the RVE, the computation of the associated moduli is not straightforward. The key contribution of the present paper is thus the derivation and implementation of an *algorithmically consistent macroscopic tangent operator* which directly represents the effective moduli of the microstructure. The macroscopic tangent is derived by means of the classical Lippmann-Schwinger equation and can be computed from a simple system of linear equations. This is performed through an efficient Fast-Fourier-Transform-based approach along with a conjugate gradient solver. The viability and efficiency of the method are demonstrated for a number of two- and three-dimensional boundary value problems incorporating linear and nonlinear elasticity as well as viscoelastic material response.
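For reference, in the notation customary for FFT-based homogenization (macroscopic strain ε̄, reference stiffness ℂ⁰ with associated periodic Green operator Γ⁰), the Lippmann-Schwinger equation on which the tangent derivation builds can be written as:

```latex
\varepsilon(\boldsymbol{x})
  \;=\; \bar{\varepsilon}
  \;-\; \left( \Gamma^{0} \ast \left[ \sigma(\varepsilon) - \mathbb{C}^{0} : \varepsilon \right] \right)(\boldsymbol{x})
```

The basic fixed-point scheme evaluates the convolution with Γ⁰ in Fourier space; differentiating this relation with respect to ε̄ is what reduces the macroscopic tangent to the solution of a linear system, as described above.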

Modelling the entire ductile fracture process remains a challenge. On the one hand, continuous damage models succeed in capturing the initial diffuse damage stage but are not able to represent discontinuities or cracks. On the other hand, discontinuous methods such as cohesive zone models, which describe the crack propagation behaviour, are suited to representing the localised damage process. However, they are unable to represent diffuse damage. Moreover, most cohesive models do not capture triaxiality effects.

In this paper, the advantages of the two approaches are combined in a single damage-to-crack transition framework. In a small-deformation setting, a non-local elastic damage model is associated with a cohesive model in a discontinuous Galerkin finite element framework. A cohesive band model is used to naturally introduce triaxiality-dependent behaviour inside the cohesive law. In practice, a numerical thickness is introduced to recover a 3D state, which is mandatory to incorporate the in-plane stretch effects. This thickness is evaluated to ensure the energy consistency of the method and is not a new numerical parameter. The traction-separation law is then built from the underlying damage model.

The method is numerically shown to capture the effect of stress triaxiality on crack initiation and propagation.

A novel approach, referred to as the homotopy stochastic finite element method, is proposed to solve the eigenvalue problem of a structure with some amount of uncertainty, based on the homotopy analysis method. In this approach, an infinite multivariate series of the involved random variables is proposed to express the random eigenvalue or even a random eigenvector. The coefficients of the multivariate series are determined using the homotopy analysis method. The convergence domain of the derived series is greatly expanded compared to that of the Taylor series owing to the use of an auxiliary parameter *h*. Therefore, the proposed method is not limited to random parameters with small fluctuations. In practice, only single- and double-variable approximations are employed to simplify the calculation. The numerical examples show that, with a suitable choice of the auxiliary parameter *h*, the suggested approximations can produce very accurate results while requiring reduced or similar computational effort compared with existing methods.
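The small-fluctuation limitation of the Taylor (perturbation) series that the homotopy series is designed to overcome can be seen on a 3×3 toy matrix: the second-order eigenvalue series is accurate for small parameter values but degrades as the fluctuation grows (matrices below are illustrative):

```python
import numpy as np

# Second-order Taylor (perturbation) series for an eigenvalue of
# K(theta) = K0 + theta*K1 -- the small-fluctuation baseline that the
# homotopy series with auxiliary parameter h aims to improve on.
K0 = np.diag([1.0, 3.0, 6.0])
K1 = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]])

lam0, V = np.linalg.eigh(K0)
i = 0  # track the lowest eigenvalue
d1 = V[:, i] @ K1 @ V[:, i]
d2 = sum((V[:, j] @ K1 @ V[:, i]) ** 2 / (lam0[i] - lam0[j])
         for j in range(3) if j != i)

for theta in (0.1, 1.0):
    exact = np.linalg.eigvalsh(K0 + theta * K1)[0]
    series = lam0[i] + theta * d1 + theta ** 2 * d2
    print(theta, exact, series)   # series degrades as theta grows
```

For this example the series is λ ≈ 1 − θ²/2, which is close for θ = 0.1 but visibly off at θ = 1, mirroring the restricted convergence domain the abstract refers to.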

We consider an efficient preconditioner for a boundary integral equation (BIE) formulation of the two-dimensional Stokes equations in porous media. While BIEs are well-suited for resolving the complex porous geometry, they lead to a dense linear system of equations that is computationally expensive to solve for large problems. This expense is further amplified when a significant number of iterations is required in an iterative Krylov solver such as GMRES. In this paper, we apply a fast inexact direct solver, the inverse fast multipole method (IFMM), as an efficient preconditioner for GMRES. This solver is based on the framework of ℋ²-matrices and uses low-rank compressions to approximate certain matrix blocks. It has a tunable accuracy *ϵ* and a computational cost that scales as 𝒪(*N*). We discuss various numerical benchmarks that validate the accuracy and confirm the efficiency of the proposed method. We demonstrate with several types of boundary conditions that the preconditioner is capable of significantly accelerating the convergence of GMRES when compared to a simple block-diagonal preconditioner, especially for pipe flow problems involving many pores.
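The mechanism by which such a preconditioner pays off, namely clustering the spectrum of the preconditioned operator, can be illustrated with a dense toy system and an exact block-diagonal preconditioner (a conditioning sketch in numpy, not the IFMM itself; sizes, scalings and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, b = 60, 10                                  # six blocks, e.g. one per pore
A = 0.005 * rng.standard_normal((n, n))        # weak inter-block coupling
for k in range(0, n, b):
    G = rng.standard_normal((b, b))
    A[k:k+b, k:k+b] = G @ G.T / b + 0.1 * np.eye(b)   # SPD diagonal blocks

M = np.zeros_like(A)                           # block-diagonal preconditioner
for k in range(0, n, b):
    M[k:k+b, k:k+b] = np.linalg.inv(A[k:k+b, k:k+b])

# M A = I + (small perturbation): its spectrum clusters near 1, so a Krylov
# solver such as GMRES needs far fewer iterations on M A than on A.
print(np.linalg.cond(A), np.linalg.cond(M @ A))
```

Krylov convergence is governed by how tightly the eigenvalues cluster, which is what the condition numbers above proxy for; the IFMM plays the role of a much more accurate, still cheap, approximate inverse.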

This article presents a nonlinear solver combining regression analysis and a multiscale simulation scheme. First, the proposed method repeats microscopic analyses of a local simulation domain, extracted from the entire global domain, to statistically estimate the relation between the value of a dependent variable at a point and the values at surrounding points. This relation is called the regression function. A subsequent global analysis then reveals the behavior of the global domain quickly and at low computational cost, using only coarse-grained points and the regression function; this is accomplished with a multiscale numerical solver called the seamless-domain method (SDM). The objective of the study is to solve a nonlinear problem accurately and at low cost by combining the two techniques. We present an example problem of nonlinear steady-state heat conduction in a heterogeneous material. The proposed model, using fewer than 1000 points, generates a solution with precision similar to that of a standard finite-element solution using hundreds of thousands of nodes. To investigate the relationship between accuracy and computational time, we apply the SDM under varying conditions, such as the number of iterations of the prior analysis for statistical data learning.
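The regression step can be miniaturized as follows: sample local solutions, regress the value at a point on the values at its neighbors, and obtain a coarse-grained update rule. For the Laplace equation, linear regression on random harmonic polynomials recovers the familiar five-point-average weights (the setup is illustrative, not the SDM implementation):

```python
import numpy as np

# Learn the local relation u(center) = f(u(neighbors)) from sampled local
# solutions: samples are random harmonic polynomials (exact solutions of the
# Laplace equation), and least squares recovers the stencil weights 1/4.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    a, b, c, d, e = rng.standard_normal(5)
    u = lambda x, z: a + b*x + c*z + d*(x*x - z*z) + e*x*z   # harmonic
    X.append([u(1, 0), u(-1, 0), u(0, 1), u(0, -1)])         # neighbors
    y.append(u(0, 0))                                        # center value
w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
print(w)  # ≈ [0.25, 0.25, 0.25, 0.25]
```

In the SDM the "neighbors" are coarse-grained points and the local solves are full microscopic analyses, but the statistical learning step has this same regression structure.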

This paper introduces multivariate input-output models to predict the errors and basis dimensions of local parametric Proper Orthogonal Decomposition (POD) reduced-order models. We refer to these mappings as multivariate predictions of local reduced-order model characteristics (MP-LROM) models. We employ Gaussian Processes and Artificial Neural Networks to construct approximations of these multivariate mappings. Numerical results with a viscous Burgers model illustrate the performance and potential of the machine-learning-based regression MP-LROM models in approximating the characteristics of parametric local reduced-order models. The predicted reduced-order model errors are compared against the predictions of the multi-fidelity correction and reduced-order model error surrogate methods, whereas the predicted reduced-order dimensions are tested against the standard method based on the spectrum of the snapshots matrix. Since the MP-LROM models incorporate more features and elements to construct the probabilistic mappings, they achieve more accurate results. However, for high-dimensional parametric spaces, the MP-LROM models might suffer from the curse of dimensionality. Scalability challenges of MP-LROM models and feasible ways of addressing them are also discussed in this study.
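The "standard method based on the spectrum of the snapshots matrix" that the dimension predictions are benchmarked against can be stated in a few lines: choose the smallest reduced dimension retaining a prescribed fraction of the singular-value energy (threshold and synthetic data are illustrative):

```python
import numpy as np

# Standard POD dimension selection: smallest k whose leading singular values
# retain a prescribed fraction of the snapshot energy.
def pod_dimension(snapshots, energy=0.999):
    s = np.linalg.svd(snapshots, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cum, energy) + 1)

# synthetic snapshot matrix with singular values exactly 10^0, 10^-1, ...
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((100, 10)))
V, _ = np.linalg.qr(rng.standard_normal((40, 10)))
S = np.diag(10.0 ** -np.arange(10))
snapshots = U @ S @ V.T
print(pod_dimension(snapshots))  # → 2
```

The MP-LROM mappings replace this global energy criterion with a learned, parameter-dependent prediction of the dimension.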

This paper deals with model-order reduction of parametric partial differential equations (PPDE). More specifically, we consider the problem of finding a good approximation subspace of the solution manifold of the PPDE when only partial information on the latter is available. We assume that two sources of information are available: *i)* a “rough” prior knowledge, taking the form of a manifold containing the target solution manifold; *ii)* partial linear measurements of the solutions of the PPDE (the term partial refers to the fact that the observation operator cannot be inverted). We provide and study several tools to derive good approximation subspaces from these two sources of information. We first identify the best worst-case performance achievable in this setup and propose simple procedures to approximate the corresponding optimal approximation subspace. We then provide, in a simplified setup, a theoretical analysis relating the achievable reduction performance to the choice of the observation operator and the prior knowledge available on the solution manifold.

Fictitious-domain methods are attractive for shape optimization applications, since they do not require deformed or re-generated meshes. A recently developed method of this kind is the CutFEM approach, which allows crisp boundary representations and for which uniformly well-conditioned system matrices can be guaranteed. Here we investigate the use of the CutFEM approach for acoustic shape optimization, using as a test problem the design of an acoustic horn for favorable impedance-matching properties. The CutFEM approach is used to solve the Helmholtz equation, and the geometry of the horn is implicitly described by a level-set function. To promote smooth algorithmic updates of the geometry, we propose to use the nodal values of the Laplacian of the level-set function as design variables. This strategy also improves the algorithm's convergence rate, counteracts mesh dependence, and, in combination with Tikhonov regularization, controls small details in the optimized designs. An advantage of the proposed method is that the exact derivatives of the discrete objective function can be expressed as boundary integrals, as opposed to a traditional method that employs mesh deformations. The resulting horns possess excellent impedance-matching properties and exhibit surprising sub-wavelength structures, not previously seen, which are possible to capture thanks to the fixed-mesh approach.

Laminated piezocomposite energy harvesters (LAPEHs) are multilayer arrangements of piezoelectric and non-piezoelectric materials. Multiple materials and physics, as well as dynamic analysis, need to be considered in their design. Usually these devices are designed for harmonic excitation; however, they are also subjected to other types of excitation. Thus, a novel topology optimization formulation is developed for designing LAPEHs that considers a combination of harmonic and transient optimization problems, with the aim of designing so-called “multi-entry” devices in which the power generated is the same for different types of excitation. LAPEHs are modeled by the finite element method (FEM), and the material model used for the piezoelectric layer is based on the penalization and polarization model (PEMAP-P), which controls the material distribution and the corresponding polarization. To optimize the RLC circuit, a novel linear interpolation model of coupled electrical impedance (LIMCEI) is also introduced to consider different magnitudes of the coupled impedance. The topology optimization problem seeks to maximize the active power generated by the LAPEH at its RLC circuit, to minimize its response time measured as the slope of the power-versus-time curve, and to maximize its stiffness. Numerical examples are shown to illustrate the potential of the method.

A new method of topology optimization is introduced in which a continuous material field is combined with adaptive mesh refinement. Using a continuous material field with separate analysis and design meshes allows the method to produce optimal designs that are free of numerical artifacts like checkerboard patterns and material islands. Adaptive mesh refinement is then applied to both meshes to precisely locate the optimal boundary of the final structure. A Helmholtz-type density filter is used to prevent the appearance of small topological features as the mesh refinement proceeds. Results are presented for several test problems, including problems with geometrically complex domain boundaries.
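A one-dimensional sketch of the Helmholtz-type density filter mentioned above: solving (I − r²∇²)ρ̃ = ρ with filter radius r spreads any feature smaller than r, which is what suppresses small topological features as refinement proceeds (grid, radius and boundary closure are illustrative):

```python
import numpy as np

# 1D Helmholtz filter: (I - r^2 d2/dx2) rho_f = rho on a uniform grid,
# with a homogeneous-Neumann closure that preserves total material.
n, L, r = 101, 1.0, 0.05
h = L / (n - 1)
rho = np.zeros(n); rho[n // 2] = 1.0        # a one-cell material "island"

A = np.eye(n) * (1 + 2 * r**2 / h**2)
for i in range(n - 1):
    A[i, i+1] = A[i+1, i] = -r**2 / h**2
A[0, 0] = A[-1, -1] = 1 + r**2 / h**2       # Neumann boundary closure
rho_f = np.linalg.solve(A, rho)
print(rho_f.max())   # ≈ 0.1: the island is spread over roughly r/h cells
```

Because the discrete operator has unit row sums, the filter preserves the total amount of material while removing sub-radius features, so islands cannot survive under mesh refinement.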

This paper presents a level set-based topology optimization method for the design of an acoustic metamaterial with negative bulk modulus in an acoustic-elastic coupled system. To represent the coupled system, we propose a two-phase material model in which the solid and acoustic phases are mixed. This model allows the coupled system to be represented uniformly and avoids the need to impose the coupling conditions typically required in analyses of acoustic-elastic coupled systems. The effective bulk modulus of the acoustic metamaterial is represented using an S-parameter-based method and the optimization problem is formulated as a level set-based topology optimization. The topological derivatives for the two-phase material model are obtained using the relationship between the topological and shape derivatives. Numerical examples demonstrate the validity of the proposed two-phase material model and topological derivatives, and an optimal unit cell design for an acoustic metamaterial with negative bulk modulus is obtained at a targeted frequency.

Gradient-dependent plasticity can be used to achieve mesh-objective results upon loss of well-posedness of the initial/boundary value problem due to the introduction of strain softening, non-associated flow and geometric non-linearity. A prominent class of gradient plasticity models considers a dependence of the yield strength on the Laplacian of the hardening parameter, usually an invariant of the plastic strain tensor. This inclusion causes the consistency condition to become a partial differential equation, in addition to the momentum balance. At the internal moving boundary one has to impose appropriate boundary conditions on the hardening parameter or, equivalently, on the plastic multiplier. This internal boundary condition can be enforced without tracking the elastic-plastic boundary by requiring C¹-continuity with respect to the plastic multiplier. In this contribution, this continuity has been achieved by using NURBS as shape functions both for the plastic multiplier and for the displacements. One advantage of this isogeometric analysis approach is that the displacements can be interpolated one order higher, making the interpolation consistent with that of the plastic multiplier. This differs from previously exploited approaches. The regularising effect of gradient plasticity is shown for one- and two-dimensional boundary value problems.

Several cases of nonlinear-wave propagation are studied numerically in two dimensions within the framework of potential flow. The Laplace equation is solved with the Harmonic Polynomial Cell (HPC) method, which is a field method with high-order accuracy. In the HPC method, the computational domain is divided into overlapping cells. Within each cell, the velocity potential is represented by a sum of harmonic polynomials. Two different methods, denoted as Immersed Boundary (IB) and Multi-grid (MG), are used to track the free surface. The former treats the free surface as an immersed boundary in a fixed, Cartesian background grid, while the latter uses a free-surface-fitted grid that overlaps with a Cartesian background grid. The simulated cases include several nonlinear wave mechanisms, such as high steepness and shallow-water effects. For one of the cases, a numerical scheme to suppress local wave breaking is introduced. Such a scheme can serve as a practical means to ensure numerical stability in simulations where local breaking is not significant for the result. For all the considered cases, both the IB and MG methods generally give satisfactory agreement with known reference results. Although the two free-surface tracking methods mostly have similar performance, some differences between them are pointed out. These include aspects related to the modelling of particular physical problems as well as their computational efficiency when combined with the HPC method.
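The cell-local representation at the heart of the HPC method can be sketched by fitting a sum of 2D harmonic polynomials to nodal values on a cell's boundary; because each basis function satisfies the Laplace equation exactly, so does the fitted potential (basis, node layout and the sample potential are illustrative):

```python
import numpy as np

# One HPC cell in miniature: fit harmonic polynomials to the eight boundary
# nodes of a 3x3 cell (unit spacing, center at the origin), then evaluate
# the resulting harmonic interpolant at the cell center.
basis = [lambda x, y: 1.0, lambda x, y: x, lambda x, y: y,
         lambda x, y: x*x - y*y, lambda x, y: x*y]

nodes = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0)]
phi_exact = lambda x, y: 2 + x - 3*y + 0.5*(x*x - y*y) + x*y   # harmonic

A = np.array([[b(x, y) for b in basis] for x, y in nodes])
c, *_ = np.linalg.lstsq(A, np.array([phi_exact(x, y) for x, y in nodes]),
                        rcond=None)
phi_c = sum(ci * b(0.0, 0.0) for ci, b in zip(c, basis))
print(phi_c)  # ≈ 2.0 = phi_exact(0, 0): the fit reproduces the potential
```

Since the target potential lies in the span of the basis, the least-squares fit recovers its coefficients exactly; in the full method such cells overlap and are coupled through shared nodes.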

An efficient method for generating the inverse mass matrix of structural dynamic problems is presented, which can be tailored to improve the accuracy of target frequency ranges and/or wave contents. The present method bypasses the biorthogonal construction of a kernel inverse mass matrix, which requires special procedures for boundary conditions and free edges or surfaces, and instead constructs the free-free inverse mass matrix employing the standard FEM procedure. The various boundary conditions are realized by the method of localized Lagrange multipliers. In particular, the present paper constructs the kernel inverse matrix by employing the standard FEM elemental mass matrices. It is shown that the accuracy of the present inverse mass matrix is almost identical to that of a conventional consistent mass matrix or a combination of lumped and consistent mass matrices. Numerical experiments with the proposed inverse mass matrix are carried out to validate its effectiveness when applied to vibration analysis of bars, beams and plane stress problems.
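For context on the accuracy claim, the classical bar benchmark below compares the lowest natural frequency from consistent and lumped mass matrices with the exact value π/2 for a fixed-free bar with unit properties; the consistent matrix overestimates and the lumped matrix underestimates the frequency (the discretization is illustrative):

```python
import numpy as np

# Fixed-free bar (E = rho = A = L = 1), n linear elements; the node at x = 0
# is eliminated by the fixed boundary condition.
n = 20; h = 1.0 / n
K = (np.diag(2*np.ones(n)) - np.diag(np.ones(n-1), 1)
     - np.diag(np.ones(n-1), -1)) / h
K[-1, -1] = 1.0 / h                           # free end
Mc = (np.diag(4*np.ones(n)) + np.diag(np.ones(n-1), 1)
      + np.diag(np.ones(n-1), -1)) * h / 6    # consistent mass
Mc[-1, -1] = 2 * h / 6
Ml = np.diag(np.ones(n)) * h                  # lumped (row-sum) mass
Ml[-1, -1] = h / 2

for M in (Mc, Ml):
    lam = np.linalg.eigvals(np.linalg.solve(M, K))
    print(np.sqrt(np.sort(lam.real)[0]))      # both bracket pi/2 ~ 1.5708
```

An inverse mass matrix that matches the consistent-mass accuracy, as claimed above, therefore keeps the sharper frequency estimate while retaining the explicit-dynamics convenience of a directly available inverse.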

Stéphane Bordas thanks the funding provided by the European Research Council Starting Independent Research Grant (ERC Stg grant agreement No. 279578) “RealTCut: towards real-time multiscale simulation of cutting in non-linear materials with applications to surgical simulation and computer-guided surgery”.

Stéphane Bordas also thanks the financial support of the Fonds National de la Recherche Luxembourg FWO-FNR grant INTER/FWO/15/10318764.

Eleni Chatzi would like to acknowledge the support of the Swiss National Science Foundation under research grant # 200021_153379 “A Multiscale Hysteretic XFEM Scheme for the Analysis of Composite Structures”.

The authors would like to thank D. Sutula for many helpful discussions on the numerical treatment of crack propagation problems.

Proper generalized decomposition (PGD) is often used for multi-query and fast-response simulations. It is a powerful tool for alleviating the curse of dimensionality affecting multi-parametric partial differential equations. Most implementations of PGD are intrusive extensions based on in-house developed finite element (FE) solvers. In this work, we propose a non-intrusive PGD scheme using off-the-shelf FE codes (such as certified commercial software) as an external solver. The scheme is implemented and monitored by in-house flow-control codes, and a typical implementation is provided with downloadable codes. Moreover, a novel parametric separation strategy for the PGD resolution is presented. The parametric space is split into two- or three-dimensional subspaces, allowing the PGD technique to solve problems with constrained parametric spaces and achieve a higher convergence ratio. Numerical examples are provided. In particular, a practical example in biomechanics is included, with potential application to patient-specific simulation.
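The separated-representation core of PGD (here without the non-intrusive solver coupling) can be sketched as a greedy sequence of rank-one modes computed by an alternating fixed point on precomputed solution samples; the parametric field and mode counts below are illustrative:

```python
import numpy as np

# Greedy separated representation u(x, mu) ~ sum_k X_k(x) * M_k(mu),
# each mode found by alternating least-squares (fixed-point) updates on
# the residual of sampled "external solver" output.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50); mu = np.linspace(1, 2, 30)
U = np.sin(np.pi * np.outer(x, np.ones_like(mu)) * mu)   # samples u(x_i, mu_j)

R, modes = U.copy(), []
for _ in range(8):                     # 8 PGD enrichment modes
    X = rng.standard_normal(len(x))
    for _ in range(60):                # alternating fixed point
        Mm = R.T @ X / (X @ X)         # best mu-mode for fixed X
        X = R @ Mm / (Mm @ Mm)         # best x-mode for fixed Mm
    modes.append((X, Mm)); R = R - np.outer(X, Mm)
print(np.linalg.norm(R) / np.linalg.norm(U))   # small relative residual
```

In the non-intrusive scheme described above, the role of `U` is played by calls to an off-the-shelf FE solver driven by flow-control scripts, while the enrichment loop stays unchanged.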

In Goal-Oriented Adaptivity (GOA), the error in the Quantity of Interest (QoI) is represented using the error functions of the direct and adjoint problems. This error representation is subsequently bounded above by element-wise error indicators that are used to drive optimal refinements. In this work, we propose to replace the adjoint problem in the error representation by an alternative operator. The main advantage of the proposed approach is that, when such an alternative operator is judiciously selected, the corresponding upper bound of the error representation becomes sharper, leading to a more efficient GOA.

While the method can be applied to a variety of problems, we focus here on two- and three-dimensional (2D and 3D) Helmholtz problems. We show via extensive numerical experimentation that the upper bounds provided by the alternative error representations are sharper than the classical ones and lead to a more robust *p*-adaptive process. We also provide guidelines for finding operators delivering sharp error representation upper bounds. We further extend the results to a convection-dominated diffusion problem as well as to problems with discontinuous material coefficients. Finally, we consider a sonic Logging-While-Drilling (LWD) problem to illustrate the applicability of the proposed method.

This paper presents a (higher-order) finite element approach for the simulation of heat diffusion and thermoelastic deformations in NC milling processes. The continuous material removal inherent in the process is taken into account via continuous removal-dependent refinements of a paraxial hexahedron base-mesh covering a given workpiece. These refinements rely on isotropic bisections of the hexahedrons, along with subdivisions of the latter into tetrahedrons and pyramids, in correspondence with a milling surface triangulation obtained from the application of the marching cubes algorithm. The resulting mesh is employed for an element-wise defined characteristic function for the milling-dependent workpiece within the paraxial hexahedron base-mesh. Using this characteristic function, a (higher-order) fictitious domain method is used to compute the heat diffusion and thermoelastic deformations, where the corresponding ansatz spaces are defined for some hexahedron-based refinement of the base-mesh. Numerical experiments compared to real physical experiments demonstrate the ability of the proposed approach to predict deviations of the milled workpiece from its designed shape due to thermoelastic deformations in the process.

The conventional SPH method suffers from low accuracy originating from particle inconsistency. The finite particle method (FPM) is an improved SPH method with higher-order accuracy. However, the numerical accuracy and stability of the FPM depend on the uniformity of the particles. For disordered particles, the conventional FPM may suffer from an ill-conditioned corrective matrix and has difficulty carrying out long-time simulations. A popular and robust particle regularization scheme is the so-called particle shifting technology (PST), which can effectively enhance the accuracy and stability of particle methods. In the context of the FPM, PST is analyzed and discussed, and a modified PST (MPST) is proposed. The MPST saves a great amount of computational cost with respect to the conventional PST and achieves better accuracy and stability. Finally, a coupled FPM combining the MPST and *δ*-SPH is developed to simulate a series of viscous flows. The numerical results indicate that the MPST is effective in improving the accuracy, stability and efficiency of PST, and the coupled FPM is shown to be robust for simulating viscous flows with higher accuracy and stability.
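A one-dimensional periodic sketch of the Fickian shifting idea underlying PST: each particle is displaced down the gradient of a kernel-estimated particle concentration, δx = −D∇C, which relaxes a disordered distribution toward uniformity (kernel, coefficient and amplitudes are illustrative choices, and this is not the paper's MPST):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, h, D = 40, 40.0, 1.0, 0.05
x = (np.arange(N) + 0.3 * rng.standard_normal(N)) % L   # perturbed lattice

def grad_concentration(x):
    dx = x[:, None] - x[None, :]
    dx -= L * np.round(dx / L)                # minimum-image (periodic box)
    # gradient of a Gaussian kernel sum C_i = sum_j exp(-(dx/h)^2);
    # the self-term (dx = 0) vanishes automatically
    return np.sum(-2.0 * dx / h**2 * np.exp(-(dx / h) ** 2), axis=1)

def spacing_std(x):
    xs = np.sort(x % L)
    return np.diff(np.append(xs, xs[0] + L)).std()

before = spacing_std(x)
for _ in range(100):
    x -= D * grad_concentration(x)            # delta_x = -D * grad C
after = spacing_std(x)
print(before, after)   # disordered spacings relax toward the uniform lattice
```

Particles in clustered regions see a concentration gradient pointing into the cluster and are pushed outward, which is the regularization effect that keeps the FPM corrective matrix well-conditioned.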

There is increasing interest in the Material Point Method (MPM) as a means of modelling solid mechanics problems in which very large deformations occur, e.g. in the study of landslides and metal forming. However, some aspects vital to wider use of the method have to date been ignored, in particular methods for imposing essential boundary conditions when the problem domain boundary does not coincide with the background grid element edges. In this paper we extend to the MPM a simple procedure originally devised for standard finite elements for the imposition of essential boundary conditions, expanding its capabilities to boundaries of any inclination. To the authors' knowledge, this is the first time that a method has been proposed that allows arbitrary Dirichlet boundary conditions (zero and non-zero values at any inclination) to be imposed in the MPM. The method presented in this paper differs from other MPM boundary approximation approaches in that: (i) the boundaries are independent of the background mesh, (ii) artificially stiff regions of material points are avoided, and (iii) the method does not rely on spurious *mirroring* of the problem domain to impose symmetry. The main contribution of this work is equally applicable to standard finite elements and the MPM.

In this paper, a new numerical method, the element differential method (EDM), is proposed for solving general thermal-mechanical problems. The key point of the method is the direct differentiation of the shape functions of the Lagrange isoparametric elements used to characterize the geometry and physical variables. A set of analytical expressions for computing the first- and second-order partial derivatives of the shape functions with respect to global coordinates is derived. Based on these expressions, a new collocation method is proposed for establishing the system of equations, in which the equilibrium equations are collocated at nodes inside elements, and the traction equilibrium equations are collocated at interface nodes between elements and at outer surface nodes of the problem. Owing to the use of Lagrange elements, which guarantee that the variation of the physical variables is consistent across all elemental nodes, the EDM has higher stability than traditional collocation methods. The other main features of the EDM are that no mathematical or mechanical principles are required to set up the system of equations and that no integrals are involved in forming the coefficients of the system. A number of numerical examples of two- and three-dimensional problems are given to demonstrate the correctness and efficiency of the proposed method.

In order to increase the robustness of a Padé-based approximation of parametric solutions to finite element problems, an a priori estimate of the poles is proposed. The resulting original approach is shown to allow for a straightforward, efficient, subsequent Padé-based expansion of the solution vector components, overcoming some of the current convergence and robustness limitations. In particular, this enables the intervals of approximation to be chosen a priori in direct connection with a given choice of Padé approximants. The choice of these approximants, as shown in the present work, is theoretically supported by the Montessus de Ballore theorem, concerning the convergence of a series of approximants with fixed denominator degrees. Key features and originality of the proposed approach are *i)* a component-wise expansion which allows one to specifically target subsets of the solution field, and *ii)* the a priori, simultaneous choice of the Padé approximants and their associated interval of convergence for an effective and more robust approximation. An academic acoustic case study, a structural-acoustic application, and a larger acoustic problem are presented in order to demonstrate the potential of the proposed approach.
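
As a minimal illustration of the kind of rational approximation the Montessus de Ballore theorem addresses, the sketch below builds a [2/2] Padé approximant of exp(x) from its Taylor coefficients by solving a small linear system for the fixed-degree denominator. The function, orders, and evaluation point are illustrative choices, not taken from the paper.

```python
import numpy as np
from math import factorial

def pade(a, m, n):
    """[m/n] Pade approximant from Taylor coefficients a[0..m+n].

    Returns numerator p and denominator q coefficients in ascending powers,
    normalised so that q[0] = 1.
    """
    a = np.asarray(a, dtype=float)
    # Denominator: require the series of p/q to match a up to order m+n,
    # i.e. sum_j q[j] * a[m+k-j] = 0 for k = 1..n.
    C = np.array([[a[m + k - j] if m + k - j >= 0 else 0.0
                   for j in range(1, n + 1)] for k in range(1, n + 1)])
    q = np.concatenate([[1.0], np.linalg.solve(C, -a[m + 1:m + n + 1])])
    # Numerator follows by convolving the Taylor series with the denominator.
    p = np.array([sum(a[i - j] * q[j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return p, q

# Taylor coefficients of exp(x) up to order 4 give the [2/2] approximant,
# whose denominator is known in closed form: 1 - x/2 + x^2/12.
a = [1.0 / factorial(k) for k in range(5)]
p, q = pade(a, 2, 2)
approx = np.polyval(p[::-1], 0.5) / np.polyval(q[::-1], 0.5)  # approximates exp(0.5)
```

The same construction extends to any fixed denominator degree, which is the setting of the Montessus de Ballore convergence result cited above.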

This paper presents an adaptation of periodic boundary conditions (BC), termed tessellation BC. While periodic BC restrict strain localization zones to obey the periodicity of the microstructure, the proposed tessellation BC adjust the periodicity frame to meet the localization zone, thereby permitting arbitrarily developing localization zones. Still, the formulation remains intrinsically insensitive to spurious localization.

Additionally, a modification of the Hough transform is derived, which constitutes an unbiased criterion for the detection of the localization zone. The behavior of the derived formulation is demonstrated by various examples and compared to other BC. It is thereby shown that tessellation BC lead to a reasonable dependence of the effective stress on the localization direction. Furthermore, good convergence of stiffness values with increasing size of the representative volume element is shown, as well as beneficial characteristics when used with strain-softening materials.

The extended finite element method was introduced in 1999 to treat problems involving discontinuities with no or minimal remeshing through appropriate enrichment functions. This enables elements to be split by a discontinuity, strong or weak, and hence requires the integration of discontinuous functions or functions with discontinuous derivatives over elementary volumes. A variety of approaches have been proposed to facilitate these special types of numerical integration, which have been shown to have a large impact on the accuracy and the convergence of the numerical solution. The smoothed XFEM [1], for example, makes numerical integration elegant and simple by transforming volume integrals into surface integrals. However, it was reported in [1,2] that the strain smoothing is inaccurate when non-polynomial functions are in the basis. In this paper, we investigate the benefits of a recently developed linear smoothing procedure [3], which provides a better approximation to higher-order polynomial fields in the basis. Some benchmark problems in the context of linear elastic fracture mechanics are solved, and the results are compared with existing approaches. We observe that the stress intensity factors computed through the proposed LSm-XFEM are more accurate than those obtained through the Sm-XFEM.
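
The transformation of volume integrals into surface integrals that underlies strain smoothing rests on the divergence theorem. The following self-contained sketch (illustrative only, not code from the cited works) computes a smoothed, cell-average gradient over a triangle from edge-midpoint values and recovers the gradient of a linear field exactly:

```python
import numpy as np

def smoothed_gradient(tri, phi):
    """Cell-average ("smoothed") gradient of a scalar field over a triangle,
    via the divergence theorem: the volume integral of grad(phi) becomes a
    boundary integral of phi times the outward normal."""
    v = np.asarray(tri, dtype=float)
    e1, e2 = v[1] - v[0], v[2] - v[0]
    area = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])
    g = np.zeros(2)
    for i in range(3):
        a, b = v[i], v[(i + 1) % 3]
        edge = b - a
        normal = np.array([edge[1], -edge[0]])  # outward normal scaled by edge length (CCW vertices)
        g += 0.5 * (phi(a) + phi(b)) * normal   # midpoint rule, exact for linear phi
    return g / area

# A linear field phi = 2x + 3y has smoothed gradient exactly (2, 3)
g = smoothed_gradient([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
                      lambda p: 2.0 * p[0] + 3.0 * p[1])
```

For non-polynomial enrichment functions the boundary quadrature above is no longer exact, which is the source of the inaccuracy reported in [1,2].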

We introduce a new methodology for modeling problems with both weak and strong discontinuities independently of the finite element discretization. At variance with the eXtended/Generalized Finite Element Method (X/GFEM), the new method, named the Discontinuity-Enriched Finite Element Method (DE-FEM), adds enriched degrees of freedom only to nodes created at the intersection between a discontinuity and edges of elements in the mesh. Although general, the method is demonstrated in the context of fracture mechanics, and its versatility is illustrated with a set of traction-free and cohesive crack examples. We show that DE-FEM recovers the same rate of convergence as the standard FEM with matching meshes, and we also compare the new approach to X/GFEM.

This paper presents a projection method to obtain high-resolution, manufacturable structures from efficient and coarse-scale homogenization-based topology optimization results. The presented approach bridges the coarse and fine scales, such that complex periodic microstructures can be represented by a smooth and continuous lattice on the fine mesh. A heuristic methodology allows control of the projected topology, such that a minimum length scale on both solid and void features is ensured in the final result. Numerical examples show excellent behavior of the method, where the performance of the projected designs is almost equal to that of the homogenization-based solutions. A significant reduction in computational cost is observed compared to conventional topology optimization approaches.

Topology optimization using stress constraints and considering uncertainties is a serious challenge, since a reliability problem has to be solved for each stress constraint, for each element in the mesh. In this paper, an alternative way of solving this problem is used, where uncertainty quantification is performed through the first-order perturbation approach, with proper validation by Monte Carlo simulation. Uncertainties are considered in the loading magnitude and direction. The minimum volume problem subjected to local stress constraints is formulated as a robust problem, where the stress constraints are written as a weighted average between their expected value and standard deviation. The augmented Lagrangian method is used for handling the large set of local stress constraints, whereas a gradient-based algorithm is used for handling the bounding constraints. It is shown that even in the presence of small uncertainties in loading direction, different topologies are obtained when compared to a deterministic approach. The effect of correlation between uncertainties in loading magnitude and direction on optimal topologies is also studied, where the main observed result is loss of symmetry in optimal topologies.
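
The robust reformulation described above combines the expected value and standard deviation of each stress constraint. A minimal sketch (with a hypothetical one-element stress response and made-up load-direction statistics, not data from the paper) shows first-order perturbation estimates of these moments checked against Monte Carlo simulation:

```python
import numpy as np

def stress(theta):
    """Hypothetical stress measure in one element as a function of the load angle."""
    return 100.0 * np.cos(theta) + 40.0 * np.sin(theta)

theta_mean, theta_std = 0.3, 0.02          # made-up uncertain loading direction (rad)

# First-order perturbation: E[s] ~ s(mean), Std[s] ~ |ds/dtheta| * theta_std
h = 1e-6
dsdtheta = (stress(theta_mean + h) - stress(theta_mean - h)) / (2.0 * h)
mean_fo = stress(theta_mean)
std_fo = abs(dsdtheta) * theta_std

# Monte Carlo validation of the perturbation estimates
rng = np.random.default_rng(1)
samples = stress(rng.normal(theta_mean, theta_std, 100_000))

# Robust constraint value: weighted average of expected value and standard deviation
w = 0.7
g_robust = w * mean_fo + (1.0 - w) * std_fo
```

In the paper this construction is applied per element within an augmented Lagrangian loop; here a single scalar response suffices to show the idea.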

In this work, a fully explicit partitioned method for the simulation of Fluid-Structure Interaction (FSI) problems is presented. The fluid domain is modelled with an explicit Particle Finite Element Method (PFEM) based on the hypothesis of weak compressibility. The Lagrangian description of the fluid is particularly effective in the simulation of FSI problems with free surface flows and large structural displacements, since the fluid boundaries are automatically defined by the position of the mesh nodes. A distinctive feature of the proposed FSI strategy is that the solid domain is modelled using the explicit integration FEM in an off-the-shelf commercial software (Abaqus/Explicit). This makes it possible to perform simulations with a complete and advanced description of the structural domain, including advanced structural material models and contact. The structure-to-fluid coupling algorithm is based on a technique derived from Domain Decomposition Methods, namely, the Gravouil and Combescure algorithm. The method allows for arbitrarily large interface displacements using different time incrementation and nonconforming meshes in the different domains, which is an essential feature for the efficiency of an explicit solver involving different materials. The resulting fully explicit and fully Lagrangian finite element approach is particularly appealing for the possibility of its efficient application in a large variety of highly non-linear engineering problems.

The conventional approach to constructing quadratic elements for an *n*-sided polygon yields *n*(*n*+1)/2 shape functions, which increases the computational effort. It is well known that serendipity elements based on the isoparametric formulation suffer from mesh distortion. Floater and Lai proposed a systematic way to construct higher-order approximations over arbitrary polygons using generalized barycentric and triangular coordinates. This approach ensures 2*n* shape functions with nodes only on the boundary of the polygon. In this paper, we extend the polygonal splines approach to 3 dimensions and construct serendipity shape functions over hexahedra and convex polyhedra. This is done by expressing the shape functions using the barycentric coordinates and the local tetrahedral coordinates. The quadratic shape functions possess the Kronecker delta property and satisfy constant, linear, and quadratic precision. The accuracy and convergence properties of the quadratic serendipity elements are demonstrated with a series of standard patch tests. The numerical results show that the quadratic serendipity elements pass the patch test, yield the optimal convergence rate, and can tolerate *extreme* mesh distortion.
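
Generalized barycentric coordinates of the kind underlying such constructions can be computed in closed form on convex polygons. The sketch below uses the Wachspress formula (an illustrative choice, not necessarily the variant used in the paper) and verifies the partition-of-unity and linear-precision properties that make these coordinates usable as shape functions:

```python
import numpy as np

def tri_area(a, b, c):
    """Signed area of the triangle (a, b, c)."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def wachspress(verts, x):
    """Wachspress generalized barycentric coordinates of an interior point x
    of a convex polygon with counter-clockwise vertices `verts`."""
    n = len(verts)
    w = np.empty(n)
    for i in range(n):
        vm, vi, vp = verts[i - 1], verts[i], verts[(i + 1) % n]
        w[i] = tri_area(vm, vi, vp) / (tri_area(vm, vi, x) * tri_area(vi, vp, x))
    return w / w.sum()   # normalisation enforces the partition of unity

# Regular pentagon and an interior point
verts = [np.array([np.cos(2 * np.pi * k / 5), np.sin(2 * np.pi * k / 5)])
         for k in range(5)]
x = np.array([0.1, 0.2])
lam = wachspress(verts, x)
xr = (lam[:, None] * np.asarray(verts)).sum(axis=0)  # linear precision reproduces x
```

Constant and linear precision are exactly the first two of the three precision properties cited for the quadratic serendipity functions above.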

The most common technique for the numerical implementation of peridynamic theory is uniform discretization together with a constant horizon. However, unlike nonuniform discretization with varying horizons, it is not a natural and intrinsic component of adaptive refinement analysis and multiscale modeling. Besides, it encounters difficulties in discretizing irregular structures. Therefore, to analyze problems with nonuniform discretization and varying horizons and to solve the resulting problems of ghost forces and spurious wave reflection, the dual-horizon peridynamics based on uniform discretization is extended to nonuniform discretization based on Voronoi diagrams, which we call Voronoi-based peridynamics. The damage definition is revised accordingly. Next, an adaptive refinement analysis method based on the proposed Voronoi-based peridynamics and its numerical implementation is introduced. Finally, the traditional bond-based peridynamics and the Voronoi-based peridynamics with or without refinement are used to simulate 4 benchmark problems. The examples of 2-D quasi-static elastic deformation, 2-D wave propagation, 2-D dynamic crack growth, and 3-D simulation of the Kalthoff-Winkler experiment demonstrate the efficiency and effectiveness of the proposed Voronoi-based peridynamics. Further, the adaptive refinement analysis can be used to obtain reasonable crack paths and crack propagation speeds with a reduced computational burden.

The material point method for the analysis of deformable bodies is revisited and upgraded to simulate crack propagation in brittle media. In this setting, phase-field modelling is introduced to resolve the crack path geometry. Following a particle-in-cell approach, the coupled continuum/phase-field governing equations are defined at a set of material points and interpolated at the nodal points of an Eulerian, ie, non-evolving, mesh. The accuracy of the simulated crack path is thus decoupled from the quality of the underlying finite element mesh and relieved from the corresponding mesh-distortion errors. A staggered incremental procedure is implemented for the solution of the discrete coupled governing equations of the phase-field brittle fracture problem. The proposed method is verified through a series of benchmark tests, while comparisons are made between the proposed scheme, the corresponding finite element implementation, and experimental results.

Unsaturated soils are solid-water-air systems that include a solid skeleton, pore water, and pore air. Heterogeneities in porosity or degree of saturation are salient features of unsaturated soils. These heterogeneities may trigger localized deformation (eg, shear banding) in such materials as demonstrated by numerical simulations via a pseudo three-phase model. In this article, we formulate a true three-phase mathematical framework implemented via stabilized low-order mixed finite elements. With this mathematical framework, we study the evolution of pore air pressure and its role in the inception of strain localization triggered by initial heterogeneity either in porosity or suction. The numerical simulations show that pore air pressure is nonzero and nonuniform in the process of progressive failure in unsaturated soils. The heterogeneity of pore air pressure may also play a significant role in the onset of localized deformation of unsaturated soils. Therefore, a three-phase model considering the pore air phase is physically more appropriate for modeling strain localization in unsaturated soils.

Topology optimization is a methodology for assigning material or void to each point in a design domain in a way that extremizes some objective function, such as the compliance of a structure under given loads, subject to various imposed constraints, such as an upper bound on the mass of the structure. Geometry projection is a means to parameterize the topology optimization problem by describing the design in a way that is independent of the mesh used for analysis of the design's performance; it results in many fewer design parameters, necessarily resolves the ill-posed nature of the topology optimization problem, and provides sharp descriptions of the material interfaces. We extend previous geometry projection work to 3 dimensions and design unit cells for lattice materials using inverse homogenization. We perform a sensitivity analysis of the geometry projection and show it has smooth derivatives, making it suitable for use with gradient-based optimization algorithms. The technique is demonstrated by designing unit cells composed of a single constituent material plus void space to obtain light, stiff materials with cubic and isotropic material symmetry. We also design a single-constituent isotropic material with negative Poisson's ratio and a light, stiff material composed of 2 constituent solids plus void space.

This paper presents a method for the optimization of multicomponent structures composed of 2 and 3 materials considering large-motion sliding contact and separation along interfaces. The structural geometry is defined by an explicit level set method, which allows for both shape and topology changes. The mechanical model assumes finite strains, a nonlinear elastic material behavior, and a quasi-static response. Identification of overlapping surface positions is handled by a coupled parametric representation of the contact surfaces. A stabilized Lagrange method and an active set strategy are used to model frictionless contact and separation. The mechanical model is discretized by the extended FEM, which maintains a clear definition of the geometry. Face-oriented ghost penalization and dynamic relaxation are implemented to improve the stability of the physical response prediction. A nonlinear programming scheme is used to solve the optimization problem, which is regularized by introducing a perimeter penalty into the objective function. Design sensitivities are determined by the adjoint method. The main characteristics of the proposed method are studied by numerical examples in 2 dimensions. The numerical results demonstrate improved design performance when compared to models optimized with a small strain assumption. Additionally, examples with load-path-dependent objectives display nonintuitive designs.

The material point method (MPM) enhanced with B-spline basis functions, referred to as B-spline MPM (BSMPM), is developed and demonstrated using representative quasi-static and dynamic example problems. Smooth B-spline basis functions can significantly reduce the cell-crossing error known to afflict the original MPM. A Gauss quadrature scheme is designed and shown to be able to diminish the quadrature error in the BSMPM analysis of large-deformation problems for improved accuracy and convergence, especially with quadratic B-splines. Moreover, increasing the order of the B-spline basis functions is also found to be an effective way to reduce the quadrature error and to improve accuracy and convergence. For plate impact examples, it is demonstrated that the BSMPM outperforms the generalized interpolation material point (GIMP) and convected particle domain interpolation (CPDI) methods in terms of the accuracy of representing stress waves. Thus, the BSMPM could become a promising alternative to the MPM, GIMP, and CPDI in solving certain types of transient problems.
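
The smoothness that suppresses cell-crossing error comes from the B-spline basis itself, which can be evaluated with the standard Cox-de Boor recursion. The following minimal sketch (an illustrative open uniform knot vector, not a configuration from the paper) checks the non-negativity and partition-of-unity properties of a quadratic basis:

```python
def bspline(knots, p, i, x):
    """Value at x of the i-th degree-p B-spline on `knots` (Cox-de Boor recursion)."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    val = 0.0
    if knots[i + p] != knots[i]:
        val += (x - knots[i]) / (knots[i + p] - knots[i]) * bspline(knots, p - 1, i, x)
    if knots[i + p + 1] != knots[i + 1]:
        val += (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) \
               * bspline(knots, p - 1, i + 1, x)
    return val

# Open uniform knot vector on [0, 4] with quadratic (p = 2) basis functions
p, knots = 2, [0, 0, 0, 1, 2, 3, 4, 4, 4]
nbasis = len(knots) - p - 1                  # 6 basis functions
vals = [bspline(knots, p, i, 1.7) for i in range(nbasis)]
# The basis is non-negative and sums to 1 everywhere inside the domain
```

Unlike the piecewise-linear MPM hat functions, the gradients of these quadratic splines are continuous across knot spans, which is what tempers the cell-crossing error when particles move between cells.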

An arbitrary Lagrangian-Eulerian framework, which combines the advantages of both Lagrangian and Eulerian methods, is presented to solve incompressible multiphase flow problems. The incompressible Navier-Stokes equations are discretized using the side-centered unstructured finite volume method, where the velocity vector components are defined at the midpoint of each cell face, while the pressure term is defined at element centroids. The pressure field is treated as discontinuous across the interface, with a discontinuous treatment of density and viscosity. The surface tension term at the interface is treated as a force tangent to the interface and computed with several different approaches, including the use of Legendre polynomials. In addition, several different discretizations of the interface kinematic boundary condition are investigated. For the application of the interface kinematic boundary condition, special attention is given to satisfying both the local and global discrete geometric conservation law so as to conserve the total mass of both species at machine precision. The mesh vertices are deformed by solving the linear elasticity equations driven by the normal displacement of the interface. The resulting algebraic equations are solved in a fully coupled manner, and a one-level restricted additive Schwarz preconditioner with a block-incomplete factorization within each partitioned subdomain is used for the resulting fully coupled system. The method is validated by simulating the classical benchmark problem of a single rising bubble in a viscous fluid due to buoyancy. The results of the numerical simulations are found to be in excellent agreement with earlier results in the literature. The mass of the bubble is conserved, and a discontinuous pressure field is obtained to avoid errors due to the incompressibility condition in the vicinity of the interface, where the density and viscosity jumps occur.

Adaptive hierarchical refinement in isogeometric analysis is developed to model cohesive crack propagation along a prescribed interface. In the analysis, the crack is introduced by knot insertion in the NURBS basis, which yields continuous basis functions. To capture the stress state smoothly ahead of the crack tip, hierarchical refinement of the spline basis functions is used, starting from a coarse initial mesh. A multilevel mesh is constructed, with a fine mesh used for quantifying the stresses ahead of the crack tip, knot insertion to insert the crack, and coarsening in the wake of the crack tip, since a lower resolution suffices there. This technique can be interpreted as a moving mesh around the crack tip. To ensure compatibility with existing finite element programs, an element-wise point of view is adopted using Bézier extraction. A detailed description is given of how the approach can be implemented in a finite element data structure. The accuracy of the approach to cohesive fracture modelling is demonstrated by several numerical examples, including a double cantilever beam, an L-shaped specimen, and a fibre embedded in an epoxy matrix.

In this article, we develop a dynamic version of the variational multiscale (D-VMS) stabilization for nearly/fully incompressible solid dynamics simulations of viscoelastic materials. The constitutive models considered here are based on Prony series expansions, which are rather common in the practice of finite element simulations, especially in industrial/commercial applications. Our method is based on a mixed formulation, in which the momentum equation is complemented by a pressure equation in rate form. The unknown pressure, displacement, and velocity are approximated with piecewise linear, continuous finite element functions. To prevent spurious oscillations, the pressure equation is augmented with a stabilization operator specifically designed for viscoelastic problems, in that it depends on the viscoelastic dissipation. We demonstrate the robustness, stability, and accuracy properties of the proposed method with extensive numerical tests in the case of linear and finite deformations.
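
A Prony series expresses the relaxation modulus as a long-term constant plus a sum of decaying exponentials. The following sketch (with made-up coefficients, not values from the article) evaluates such a series and exhibits its monotone decay from the instantaneous to the long-term modulus:

```python
import numpy as np

def prony_modulus(t, g_inf, g, tau):
    """Relaxation modulus G(t) = G_inf + sum_i g_i * exp(-t / tau_i)."""
    t = np.asarray(t, dtype=float)
    return g_inf + sum(gi * np.exp(-t / ti) for gi, ti in zip(g, tau))

# Made-up 2-term Prony series: long-term modulus 1.0, two relaxation times
g_inf, g, tau = 1.0, [0.5, 0.3], [0.1, 10.0]
t = np.linspace(0.0, 50.0, 6)
G = prony_modulus(t, g_inf, g, tau)
# G decays monotonically from g_inf + sum(g) = 1.8 toward g_inf = 1.0
```

The widely separated relaxation times (0.1 vs 10.0 here) are typical of such fits and are one reason stabilization tailored to the viscoelastic dissipation matters in the mixed formulation above.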

A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (eg, random forests and LASSO) to map a large set of inexpensively computed “error indicators” (ie, features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (eg, time-integrated errors). We apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. When the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
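
The locality-then-local-regression structure can be illustrated with a toy sketch: synthetic data, a single-threshold "clustering" of feature space, and linear least squares standing in for the more general classifiers and regressors (random forests, LASSO) used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 3 cheap "error indicators" (features) and the true
# surrogate-model error, which follows a different law in each feature region.
F = rng.uniform(0.0, 2.0, size=(200, 3))
e = np.where(F[:, 0] < 1.0, 2.0 * F[:, 1], -3.0 * F[:, 2])

def fit_local_models(F, e, split=1.0):
    """Determine locality by thresholding the first indicator, then fit one
    linear least-squares model (with intercept) per region."""
    models = {}
    for label, mask in (("lo", F[:, 0] < split), ("hi", F[:, 0] >= split)):
        A = np.hstack([F[mask], np.ones((mask.sum(), 1))])
        models[label] = np.linalg.lstsq(A, e[mask], rcond=None)[0]
    return models

def predict_error(models, f, split=1.0):
    """Route a new feature vector to its region's model and predict the error."""
    coef = models["lo" if f[0] < split else "hi"]
    return np.append(f, 1.0) @ coef

models = fit_local_models(F, e)
err_hat = predict_error(models, np.array([0.5, 0.8, 0.4]))  # "lo" region: 2 * 0.8
```

The predicted error can then be added back to the surrogate QoI as a correction, which is the first of the two uses discussed above.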

We propose a new computational framework for the treatment of acousto-magneto-mechanical coupling that arises in low-frequency electro-magneto-mechanical systems such as magnetic resonance imaging scanners. Our transient Newton–Raphson strategy involves the solution of a monolithic system obtained from the linearisation of the coupled system of equations. Moreover, this framework, in the case of excitation from static and harmonic current sources, allows us to propose a simple linearised system and rigorously motivate a single-step strategy for understanding the response of systems under different frequencies of excitation. Motivated by the need to solve industrial problems rapidly, we restrict ourselves to solving problems consisting of axisymmetric geometries and current sources. Our treatment also discusses in detail the computational requirements for the solution of these coupled problems on unbounded domains and the accurate discretisation of the fields using *hp*-finite elements. We include a set of academic and industrially relevant examples to benchmark and illustrate our approach. Copyright © 2017 The Authors. *International Journal for Numerical Methods in Engineering* Published by John Wiley & Sons, Ltd.

Numerical stability of the time integration scheme is a critical issue for the accurate simulation of discontinuous deformations of solids. To investigate the effects of the time integration schemes on the numerical stability of the numerical manifold method, the implicit time integration schemes, ie, the Newmark, HHT-α, and WBZ-α methods, and the explicit time integration algorithms, ie, the central difference, Zhai's, and Chung-Lee methods, are implemented. Their performance is examined by conducting transient response analysis of an elastic strip subjected to constant loading, impact analysis of an elastic rod with an initial velocity, and excavation analysis of jointed rock masses, respectively. Parametric studies using different time steps are conducted for the different time integration algorithms, and the convergence efficiency of the open-close iterations for the contact problems is also investigated. It is shown that the Hilber-Hughes-Taylor-α (HHT-α), Wood-Bossak-Zienkiewicz-α (WBZ-α), Zhai's, and Chung-Lee methods are more attractive in solving discontinuous deformation problems involving nonlinear contacts. It is also found that the examined explicit algorithms showed higher computational efficiency than the implicit algorithms within acceptable computational accuracy.
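
The trade-off between explicit and implicit schemes hinges on the conditional stability of explicit integration. A minimal sketch (a single undamped oscillator with illustrative parameters, not a configuration from the paper) shows the central difference method staying bounded below its critical time step 2/ω and diverging above it:

```python
def central_difference(omega, dt, steps, u0=1.0, v0=0.0):
    """Explicit central-difference integration of u'' + omega^2 * u = 0."""
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * (-omega**2 * u0)  # start-up step
    u = u0
    for _ in range(steps):
        u_prev, u = u, 2.0 * u - u_prev - dt**2 * omega**2 * u
    return u

omega = 10.0                                   # natural frequency; critical dt = 2/omega
stable = central_difference(omega, dt=0.5 * 2.0 / omega, steps=200)
unstable = central_difference(omega, dt=1.01 * 2.0 / omega, steps=200)
# The stable run stays bounded by the initial amplitude; the unstable one blows up
```

Implicit schemes such as Newmark, HHT-α, and WBZ-α remove this time-step restriction at the cost of solving a system each step, which is the efficiency-versus-robustness balance examined in the abstract above.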

We present and analyze a validation procedure for a given state estimate *u*^{⋆} of the true field *u*^{true} based on Monte Carlo sampling of experimental observation functionals. Our method provides, given a set of *N* possibly noisy local experimental observation functionals over the spatial domain Ω, confidence intervals for the *L*^{2}(Ω) error in state and the error in *L*^{2}(Ω) outputs. For *L*^{2}(Ω) outputs, our approach also provides a confidence interval for the output itself, which can be used to improve the initial output estimate based on *u*^{⋆}. Our approach implicitly takes advantage of variance reduction, through the proximity of *u*^{⋆} to *u*^{true}, to provide tight confidence intervals even for modest values of *N*. We present results for a synthetic model problem to illustrate the elements of the methodology and confirm the numerical properties suggested by the theory. Finally, we consider an experimental thermal patch configuration to demonstrate the applicability of our approach to real physical systems.
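
The core statistical ingredient, a sampling-based confidence interval built from local observation functionals, can be sketched in a few lines (synthetic data and a normal-approximation interval standing in for the paper's analysis; the output value and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: N noisy local observations of an output whose true value
# is 3.0; the sample mean and a normal-approximation interval summarise them.
true_output, N = 3.0, 400
obs = true_output + rng.normal(0.0, 0.2, size=N)   # noisy observation functionals

mean = obs.mean()
half = 1.96 * obs.std(ddof=1) / np.sqrt(N)         # ~95% confidence half-width
ci = (mean - half, mean + half)
# Small observation noise (variance reduction) keeps the interval tight
```

In the paper, the proximity of the state estimate to the true field plays the role of the small noise here: the closer the estimate, the smaller the spread of the sampled functionals and the tighter the interval for a given N.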

The present study proposes a method of micro-macro concurrent topology optimization for a two-phase nonlinear solid to minimize the end compliance of its macrostructure undergoing large deformation. To reduce the computational cost of solving a 2-scale boundary value problem in a geometrically nonlinear setting, we use the so-called method of *decoupling multiscale structural analysis*, in which the microscopic and macroscopic boundary value problems are decoupled in the homogenization process. An isotropic hyperelasticity model is used as the constitutive model for microstructures, while an orthotropic one is assumed to represent the macroscopic material behavior. Owing to this decoupling framework, the micro-macro concurrent optimization problem can be split into 2 individual problems at the microscale and macroscale for the sake of algorithmic simplicity. Also, a 2-scale adjoint sensitivity analysis can be performed within the framework of computational homogenization. It is verified from a series of numerical examples that the proposed method is capable of computing the optimal structures at both the microscale and macroscale, according to the level of applied load.

In this paper, new spherical Hankel shape functions are used to reformulate the boundary element method for 2-dimensional elastostatic and elastodynamic problems. To this end, the dual reciprocity boundary element method is reconsidered by using the new spherical Hankel shape functions to approximate the state variables (displacements and tractions) of Navier's differential equation. Using enrichment of a class of radial basis functions (RBFs), called spherical Hankel RBFs hereafter, the interpolation functions of a Hankel boundary element framework have been derived. For this purpose, polynomial terms are added to the functional expansion that only uses spherical Hankel RBFs in the approximation. In addition to the polynomial function fields, the participation of spherical Bessel function fields also increases robustness and efficiency in the interpolation. Notably, there is no Runge phenomenon in equispaced Hankel macroelements, unlike equispaced classic Lagrange ones. Several numerical examples are provided to demonstrate the effectiveness, robustness, and accuracy of the proposed Hankel shape functions; in comparison with the classic Lagrange shape functions, they yield much more accurate and stable results.

To coarsen a mesh, we usually remove a set of selected nodes one by one. Currently, the basic operation used to remove a node is edge collapsing, which does not perform well when handling narrow regions in a tetrahedral mesh and can produce low-quality elements or even fail to give valid results. To overcome the drawbacks of edge collapsing, we present a new node-removal operator created by revising a topological transformation called small polyhedron reconnection. This new operator is guaranteed to succeed whenever the cavity that forms after a node is removed is meshable, and it produces higher-quality results and keeps the remaining nodes unmoved, which is preferred for applications such as multigrid hierarchies.

In addition, 2 other aspects of mesh coarsening are studied: determining whether a node should be removed and the sequence in which to remove the selected nodes. Our strategy consists of constructing a coarse node set using the sphere-packing method and removing the nodes in a reversed kd-tree sequence. The excellent performance of the new method is demonstrated by applying it to examples of adaptive meshing and multigrid hierarchy creation and comparing the results with those of the edge collapsing method.

The paper introduces a novel multiresolution scheme for topology optimization in the framework of isogeometric analysis. A new variable parameter space is added to implement multiresolution topology optimization based on the Solid Isotropic Material with Penalization approach. Design density variables defined in the variable space are used to approximate the element analysis density by bivariate B-spline basis functions, which are easily obtained using the *k*-refinement strategy in isogeometric analysis, while nonuniform rational B-spline basis functions are used to exactly describe geometric domains and to approximate unknown solutions in the finite element analysis. By applying a refined sensitivity filter, the optimized designs include highly discrete solutions in terms of solid and void materials without using any black-and-white projection filters. The Method of Moving Asymptotes is used to solve the optimization problem. Various benchmark test problems, including plane stress, a compliant mechanism inverter, and 2-dimensional heat conduction, are examined to demonstrate the effectiveness and robustness of the present method.

The aim of this paper is to present a new semi-analytic numerical method for the strongly nonlinear steady-state advection-diffusion-reaction equation (ADRE) in arbitrary 2-D domains. The key idea of the method is the use of basis functions which satisfy the homogeneous boundary conditions of the problem. Each basis function used in the algorithm is a sum of an analytic basis function and a special correcting function which is chosen to satisfy the homogeneous boundary conditions of the problem. Polynomials, trigonometric functions, conical radial basis functions, and multiquadric radial basis functions are used in the approximation of the ADRE. This allows us to seek an approximate solution in an analytic form which satisfies the boundary conditions of the initial problem for any choice of the free parameters. As a result, we separate the approximation of the boundary conditions from the approximation of the ADRE inside the solution domain. The numerical examples confirm the high accuracy and efficiency of the proposed method in solving strongly nonlinear equations in arbitrary domains.
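
Multiquadric RBF approximation of the kind used above can be sketched in one dimension: collocate the kernel at a set of centres, solve the resulting interpolation system, and evaluate between the centres (the target function, centre count, and shape parameter are illustrative choices, not values from the paper):

```python
import numpy as np

def mq(r, c=0.5):
    """Multiquadric radial basis function sqrt(r^2 + c^2)."""
    return np.sqrt(r**2 + c**2)

# Interpolate f(x) = sin(pi * x) on [0, 1] from 9 equispaced centres
xc = np.linspace(0.0, 1.0, 9)
A = mq(np.abs(xc[:, None] - xc[None, :]))          # interpolation matrix
coeffs = np.linalg.solve(A, np.sin(np.pi * xc))

def interp(x):
    """Evaluate the multiquadric interpolant at x."""
    return mq(np.abs(x - xc)) @ coeffs

err = abs(interp(0.37) - np.sin(np.pi * 0.37))     # error off the centres
```

In the paper's method the free expansion coefficients play the role of `coeffs`, while the correcting functions ensure the boundary conditions hold for any such coefficient choice.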

This paper describes the implementation of anisotropic damage mechanics in the material point method. The approach is based on previously proposed fourth-rank anisotropic damage tensors. For implementation, it was convenient to recast the stress update using a new damage strain partitioning tensor. This new tensor simplifies the numerical implementation (a detailed algorithm is provided) and clarifies the connection between cracking strain and an implied physical crack with crack opening displacements. By using 2 softening laws and 3 damage parameters corresponding to 1 normal and 2 shear cracking strains, damage evolution can be directly connected to mixed tensile and shear fracture mechanics. Several examples illustrate interesting properties of robust anisotropic damage mechanics, such as the modeling of necking, multiple cracking in coatings, and compression failure. Direct comparisons between explicit crack modeling and damage mechanics in the same material point method code show that damage mechanics can quantitatively reproduce many features of explicit crack modeling. A caveat is that the strengths and energies assigned to damage mechanics materials must be changed from measured material properties to apparent properties before damage mechanics can agree with fracture mechanics.

A topology optimization technique based on the topological derivative and the level set function is utilized to design/synthesize the microstructure of a pentamode material for an acoustic cloaking device. The technique provides a microstructure consisting of a honeycomb lattice composed of needle-like and joint members. The resulting metamaterial shows a highly anisotropic elastic response with effective properties displaying a ratio between bulk and shear moduli of almost three orders of magnitude. Furthermore, in accordance with previous works in the literature, it can be asserted that this kind of microstructure can be realistically fabricated. The adoption of a topology optimization technique as a tool for the inverse design of metamaterials with applications to acoustic cloaking problems is one contribution of this paper. However, the most important achievement refers to the analysis and discussion revealing the key role of the external shape of the prescribed domain where the optimization problem is posed. The efficiency of the designed microstructure is measured by comparing the scattering wave fields generated by acoustic plane waves impinging on bare and cloaked bodies. Copyright © 2017 The Authors. *International Journal for Numerical Methods in Engineering* Published by John Wiley & Sons Ltd.

The discrete element method (DEM) presents an alternative way to model complex mechanical problems of silica glass, such as brittle fracture. Since discontinuities are naturally handled by the DEM, no complex transition procedure from the continuum phase to the discontinuous one is required. However, to ensure that the DEM can properly reproduce the silica glass cracking mechanisms, it is necessary to correctly model the different features characterizing its mechanical behavior before fracture. In particular, it is necessary to correctly model the densification process of this material, which is known to strongly influence the fracture mechanisms. The present paper proposes a new and very promising way to model this process, which is assumed to occur only under hydrostatic pressure. An accurate predictive-corrective densification model is developed. This model shows great flexibility in reproducing extremely complex densification features. Furthermore, it involves only one calibration parameter, which makes it very easy to apply. This new model represents a major step towards accurate modeling of the permanent deformation of materials with the discrete element method, which has long been a huge challenge in applying this method to continuum problems.

The polynomial chaos Kalman filter (PCKF) has been gaining popularity as a computationally efficient and robust alternative to sampling methods in sequential data assimilation settings. The PCKF's sampling-free scheme and attractive structure for representing non-Gaussian uncertainties make it a promising approach for data filtering in nonlinear and non-Gaussian frameworks. However, the accuracy of the PCKF depends on the dimension and order of the polynomial chaos expansion used to represent all sources of uncertainty in the system. Thus, when independent sources of error, such as process noise and time-independent sensor errors, are incorporated in the system, the curse of dimensionality hinders the efficiency and applicability of the PCKF. This study sheds light on this issue and presents a practical framework to maintain an acceptable accuracy of the PCKF without sacrificing the computational efficiency of the filter. The robustness and efficiency of the presented implementation are demonstrated on 3 typical numerical examples that illustrate its ability to achieve considerable accuracy at a low computational cost.
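The dimensionality issue can be made concrete: a total-degree polynomial chaos expansion in d random variables of order p has C(d + p, p) terms, so each independent noise source added to the state inflates the basis combinatorially. A quick check (our illustration, not tied to the paper's specific examples):

```python
from math import comb

def pce_terms(d, p):
    """Number of basis terms in a total-degree-p polynomial chaos expansion
    over d independent random variables: C(d + p, p)."""
    return comb(d + p, p)

# Adding, e.g., one time-independent sensor-error variable per measurement
# rapidly inflates the expansion that the filter must propagate.
sizes = [pce_terms(d, 3) for d in (2, 5, 10, 20)]  # 10, 56, 286, 1771
```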

As suspensions critically affect the ride and handling performance of vehicles, considerable efforts have been made to improve their design by an optimization method. In this paper, we propose a topology optimization–based method for suspension synthesis by using a 3-dimensional model constructed with nonlinear bars and zero-length springs that discretize the 3-dimensional space between the chassis frame and a vehicle wheel. For optimization, bar cross-sections and spring stiffness values are used as design variables alongside the nodal positions of bar elements as shape optimization variables to simultaneously optimize the topology and shape. To verify the proposed approach, 2 types of design problems were solved: recovering known suspension mechanisms for a given set of wheel trajectories and synthesizing unknown suspension mechanisms that satisfy several design constraints typically used in the automobile industry. Through these examples, possibilities to design new and advanced suspensions by the proposed optimization method are clearly demonstrated.

Discontinuous Galerkin finite element schemes exhibit attractive features for accurate large-scale wave-propagation simulations on modern parallel architectures. For many applications, these schemes must be coupled with nonreflective boundary treatments to limit the size of the computational domain without losing accuracy or computational efficiency, which remains a challenging task. In this paper, we present a combination of a nodal discontinuous Galerkin method with high-order absorbing boundary conditions for cuboidal computational domains. Compatibility conditions are derived for high-order absorbing boundary conditions intersecting at the edges and the corners of a cuboidal domain. We propose a GPU implementation of the computational procedure, which results in a multidimensional solver with equations to be solved on 0D, 1D, 2D, and 3D spatial regions. Numerical results demonstrate both the accuracy and the computational efficiency of our approach.

The discontinuous Galerkin FEM is used for the numerical solution of the three-dimensional Maxwell equations. Control of the errors in the divergence-free constraint of the magnetic field at the numerical level can be obtained through the use of divergence-free vector bases. In this work, the so-called perfectly hyperbolic formulation of the Maxwell equations is used to retain a divergence-free magnetic field and, in the presence of charges, to satisfy the Gauss constraint for the electric field at the numerical level. For both approaches, it is found that higher-order approximations have a favorable effect on the preservation of the divergence constraints and that the perfectly hyperbolic formulation retains these errors at a lower level. It is shown that high-order accuracy in space and time is achieved on unstructured meshes using implicit time marching. For nonuniform meshes, local resolution refinement is performed using p-type adaptivity to ensure accurate electromagnetic wave propagation. Thus, the potential of the method to reach the required higher resolution in anisotropic meshes and obtain accurate electromagnetic wave propagation with reduced computational effort is demonstrated.

One of the main difficulties that a reduced-order method can face is the poor separability of the solution. This problem is common to both *a posteriori* model order reduction (proper orthogonal decomposition (POD), reduced basis) and *a priori* [proper generalized decomposition (PGD)] model order reduction. Early approaches to solving it include the construction of local reduced-order models in the framework of POD. We present here an extension of local models in a PGD—and thus, *a priori*—context. Three different strategies are introduced to estimate the size of the different patches or regions in the solution manifold where PGD is applied. As will be noticed, no *gluing* or special technique is needed to deal with the resulting set of local reduced-order models, in contrast to most POD-based local approximations.

The resulting method can be seen as a sort of *a priori* manifold learning or nonlinear dimensionality reduction technique. Examples are shown that demonstrate pros and cons of each strategy for different problems.

Adaptive local refinement is one of the main issues for isogeometric analysis (IGA). In this paper, an adaptive extended IGA (XIGA) approach based on polynomial splines over hierarchical T-meshes (PHT-splines) for modeling crack propagation is presented. The PHT-splines overcome certain limitations of nonuniform rational B-splines–based formulations; in particular, they make local refinements feasible. To drive the adaptive mesh refinement, we present a recovery-based error estimator for the proposed method. The method is based on the XIGA method, in which discontinuous enrichment functions are added to the IGA approximation so that remeshing is not required as the cracks grow. In addition, crack propagation is modeled by successive linear extensions that are determined by the stress intensity factors under linear elastic fracture mechanics. The proposed method has been used to analyze numerical examples, and the stress intensity factor results were compared with reference results. The findings demonstrate the accuracy and efficiency of the proposed method.

The successful design of piezoelectric energy harvesting devices relies upon the identification of optimal geometrical and material configurations to maximize the power output for a specific band of excitation frequencies. Extendable predictive models and associated approximate solution methods are essential for analysis of a wide variety of future advanced energy harvesting devices involving more complex geometries and material distributions. Based on a holistic continuum mechanics modeling approach to the multi-physics energy harvesting problem, this article proposes a monolithic numerical solution scheme using a mixed-hybrid 3-dimensional finite element formulation of the coupled governing equations for analysis in the time and frequency domains. The weak form of the electromechanical/circuit system uses velocities and potential rate within the piezoelectric structure, free boundary charge on the electrodes, and potential at the level of the generic electric circuit as global degrees of freedom. The approximation of stress and dielectric displacement follows the work by Pian, Sze, and Pan. Results obtained with the proposed model are compared with analytical results for the reduced-order model of a cantilevered bimorph harvester with tip mass reported in the literature. The flexibility of the method is demonstrated by studying the influence of partial electrode coverage on the generated power output.

Finite element approximations are developed for three-dimensional domains naturally represented in either cylindrical or spherical coordinates. Lines of constant radius, axial length, or angle are used to represent the domain and cast approximations that are natural for these geometries. As opposed to general isoparametric three-dimensional elements generated in conventional parent space, these elements can be evaluated analytically and do not generate geometric discretization error. They also allow for anisotropic material coefficients that are frequently aligned in either cylindrical or spherical coordinates. Several examples are provided that show convergence properties and comparison with analytical solutions of the Poisson equation.

A class of mixed interpolated beam elements is introduced in this paper under the framework of the Carrera Unified Formulation to eliminate the detrimental effects of shear locking. The Mixed Interpolation of Tensorial Components (MITC) method is adopted to generate locking-free displacement-based beam models using general 1D finite elements. An assumed distribution of the transverse shear strains is used for the derivation of the virtual work, and the full Gauss-Legendre quadrature is used for the numerical computation of all the components of the stiffness matrix. Linear, quadratic, and cubic beam elements are developed using the unified formulation and applied to linear static problems including compact, laminated, and thin-walled structures. A comprehensive study of how shear locking affects general beam elements when different classical integration schemes are used is presented, evidencing the outstanding capabilities of the MITC method to overcome this numerical issue. Refined beam theories based on the expansion of pure and generalized displacement variables are implemented making use of Lagrange and Legendre polynomials over the cross-sectional domain, allowing one to capture complex states of stress with 3D-like accuracy. The numerical examples are compared with analytical solutions, numerical solutions from the literature, and commercial software solutions, whenever possible. The efficiency and robustness of the proposed method are demonstrated throughout all the assessments, illustrating that MITC elements are a natural choice to avoid shear locking and showing unprecedented accuracy in the computation of transverse shear stresses for beam formulations.

In this paper, we propose a three-dimensional (3D) grayscale-free topology optimization method using a conforming mesh to the structural boundary, which is represented by the level-set method. The conforming mesh is generated in an r-refinement manner; that is, it is generated by moving the nodes of the Eulerian mesh that maintains the level-set function. Although the r-refinement approach for the conforming mesh generation has many benefits from an implementation aspect, it has been considered as a difficult task to stably generate 3D conforming meshes in the r-refinement manner. To resolve this task, we propose a new level-set based r-refinement method. Its main novelty is a procedure for minimizing the number of the collapsed elements whose nodes are moved to the structural boundary in the conforming mesh; in addition, we propose a new procedure for improving the quality of the conforming mesh, which is inspired by Laplacian smoothing. Because of these novelties, the proposed r-refinement method can generate 3D conforming meshes at a satisfactory level, and 3D grayscale-free topology optimization is realized. The usefulness of the proposed 3D grayscale-free topology optimization method is confirmed through several numerical examples. Copyright © 2017 John Wiley & Sons, Ltd.
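The mesh-quality step mentioned above is inspired by Laplacian smoothing; in its textbook form, each free node is relaxed toward the average of its neighbors' positions. A minimal 2D sketch (node layout, neighbor lists, and relaxation factor are our illustrative assumptions, not the paper's modified procedure):

```python
def laplacian_smooth(coords, neighbors, fixed, n_iter=20, relax=0.5):
    """Relax each non-fixed node toward the centroid of its neighbors.
    coords: {name: (x, y)}; neighbors: {name: [neighbor names]}; fixed: set."""
    coords = {k: tuple(v) for k, v in coords.items()}
    for _ in range(n_iter):
        new = dict(coords)
        for node, nbrs in neighbors.items():
            if node in fixed:
                continue
            cx = sum(coords[n][0] for n in nbrs) / len(nbrs)
            cy = sum(coords[n][1] for n in nbrs) / len(nbrs)
            x, y = coords[node]
            new[node] = (x + relax * (cx - x), y + relax * (cy - y))
        coords = new
    return coords

# A badly placed interior node in a unit-square patch drifts to the centroid,
# improving the shape of the surrounding elements.
coords = {"a": (0, 0), "b": (1, 0), "c": (1, 1), "d": (0, 1), "m": (0.9, 0.9)}
neighbors = {"m": ["a", "b", "c", "d"]}
smoothed = laplacian_smooth(coords, neighbors, fixed={"a", "b", "c", "d"})
```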

This paper presents a simplified co-rotational formulation for quadrilateral shell elements, inheriting the merit of element independence from the traditional co-rotational approach in the literature. With the objective of application to the nonlinear analysis of civil engineering structures, the authors further simplify the formulation of the geometric stiffness using the small strain assumption, which is valid in the co-rotational approach, with warping effects considered as eccentricities. Compared with the traditional element-independent co-rotational method, the projector is neglected both in the tangent stiffness matrix and in the internal force vector for simplicity of formulation. Meanwhile, a quadrilateral flat shell element allowing for drilling rotations is adopted and incorporated into this simplified co-rotational algorithm for geometrically nonlinear analysis involving large displacements and large rotations. Several benchmark problems are presented to confirm the efficiency and accuracy of the proposed method for practical applications.

This paper presents a multiscale finite element method with an embedded strong discontinuity model for the strain localization analysis of homogeneous and heterogeneous saturated porous media. In the proposed method, strong discontinuities in both the displacement and fluid flux fields are considered. For the localized fine element, the mathematical description and discrete formulation are built based on the so-called strong discontinuity approach. For the localized unit cell, numerical base functions are constructed based on a newly developed enhanced coarse element technique; that is, additional coarse nodes are dynamically added as the shear band propagates. Through the enhanced coarse element technique, the multiscale finite element method can well reflect the softening behavior at the post-localization stage. Furthermore, the microscopic displacement and pore pressure are obtained with the solution decomposition technique. In addition, a non-standard return mapping algorithm is given to update the displacement jumps. Finally, three representative numerical tests, compared against the results of the embedded finite element method with fine meshes, demonstrate the high efficiency and accuracy of the proposed method in both materially homogeneous and heterogeneous cases. Copyright © 2017 John Wiley & Sons, Ltd.

This paper presents a strategy to improve the efficiency of simulations involving porous materials with linear behaviour and full saturation. The method, named parallel-lines finite difference, decouples a discretised version of the governing equations to allow parallel computations. As a result, the complexity and the bandwidth of the global matrix are significantly reduced and hence the efficiency is improved. The other advantage of the scheme is the fulfilment of the inf-sup stability condition. The scheme is developed to solve porous media formulations derived from the theory of porous media. Both the u-p and u-v-p formulations are considered (u: displacement of solid, p: pressure of liquid, and v: velocity of liquid). Several simulations are performed to demonstrate the capabilities of the method.

In this paper, a coupling technique is developed for the combination of the wavelet-Galerkin method (WGM) with the finite element method (FEM). In this coupled method, the WGM and FEM are respectively used in different sub-domains. The WGM sub-domain and the FEM sub-domain are connected by a transition region that is described by B-spline basis functions. The basis functions of the WGM and FEM are modified in the transition region to ensure the basic polynomial reconstruction condition and the compatibility of displacements and their gradients. In addition, the elements of the FEM and WGM need not conform to the transition region. This newly developed coupled method is applied to the stress analysis of 2D and 3D elasticity problems in order to investigate its performance and study its parameters. Numerical results show that the present coupled method is accurate and stable. The new method has a promising potential for practical applications and can be easily extended for coupling of other numerical methods. Copyright © 2017 John Wiley & Sons, Ltd.

This paper presents a finite element solver for the simulation of steady non-Newtonian flow problems, using a regularized Bingham model, with adaptive mesh refinement capabilities.

The solver is based on a stabilized formulation derived from the variational multiscale framework. This choice allows the introduction of an a posteriori error indicator based on the small scale part of the solution, which is used to drive a mesh refinement procedure based on element subdivision.

This approach is applied to the solution of a series of benchmark examples, which allow us to validate the formulation and assess its capabilities to model 2*D* and 3*D* non-Newtonian flows.
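The refine-where-the-indicator-is-large loop can be sketched in one dimension, with the local interpolation error standing in for the small-scale indicator; elements whose error exceeds a fraction of the maximum are bisected. This is our generic illustration of indicator-driven element subdivision, not the paper's variational multiscale estimator:

```python
def adapt_1d(f, nodes, n_cycles=3, frac=0.5):
    """Bisect 1D elements whose midpoint interpolation error exceeds
    frac * (maximum element error), repeating for n_cycles."""
    for _ in range(n_cycles):
        errs = []
        for a, b in zip(nodes, nodes[1:]):
            mid = 0.5 * (a + b)
            errs.append(abs(f(mid) - 0.5 * (f(a) + f(b))))
        cut = frac * max(errs)
        new_nodes = [nodes[0]]
        for (a, b), e in zip(zip(nodes, nodes[1:]), errs):
            if e >= cut:
                new_nodes.append(0.5 * (a + b))  # subdivide this element
            new_nodes.append(b)
        nodes = new_nodes
    return nodes

# A steep profile near x = 1 attracts the refinement automatically.
refined = adapt_1d(lambda x: x ** 8, [0.0, 0.25, 0.5, 0.75, 1.0])
```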

A major challenge for crash failure analysis of laminated composites is to find a modelling approach, which is both sufficiently accurate, for example, able to capture delaminations, and computationally efficient to allow full-scale vehicle crash simulations. Addressing this challenge, we propose a methodology based on an equivalent single-layer shell formulation which is adaptively through-the-thickness refined to capture initiating and propagating delaminations. To be specific, single shell elements through the laminate thickness are locally and adaptively enriched using the extended finite element method such that delaminations can be explicitly modelled without having to be represented by separate elements. Furthermore, the shell formulation is combined with a stress recovery technique which increases the accuracy of predicting delamination initiations. The paper focuses on the parameters associated with identifying, introducing and extending the enrichment areas; especially on the impact of these parameters on the resulting structural deformation behaviour. We conclude that the delamination enrichment must be large enough to allow the fracture process to be accurately resolved, and we propose a suitable approach to achieve this. The proposed methodology for adaptive delamination modelling shows potential for being computationally efficient, and thereby, it has the potential to enable efficient and accurate full vehicle crash simulations of laminated composites. Copyright © 2017 John Wiley & Sons, Ltd.

We present a density-based topology optimization approach for the design of metallic microwave insert filters. A two-phase optimization procedure is proposed in which, starting from a uniform design, we first optimize to obtain a set of spectrally varying resonators, followed by a band gap optimization for the desired filter characteristics. This is illustrated through numerical experiments and comparison with a standard bandpass filter design. It is seen that the carefully optimized topologies can sharpen the filter characteristics and improve performance. Furthermore, the obtained designs share little resemblance to standard filter layouts, and hence, the proposed design method offers a new design tool in microwave engineering. Copyright © 2017 John Wiley & Sons, Ltd.

A method based on dual kriging is proposed to process X-ray microtomographic scans of textile composites in order to construct a 3D representation of the fiber architecture with a regulated level of details. The geometry is optimized by using the curvature energy of fiber tow profiles in order to determine the best discretization scheme; then the nugget effect is applied in kriging to smooth the outward surface of fiber tows. This approach allows creating 3D models of variable resolution ranging from the X-ray scan level to geometric representations with surface meshes required for numerical simulation. The method is applied to a glass fiber textile laminate embedded in a thermoplastic matrix, and preliminary results for the estimation of the local permeability of the fiber tows are presented. Copyright © 2017 John Wiley & Sons, Ltd.

We develop an optimal transportation mesh-free particle method for advection-diffusion in which the concentration or density of the diffusive species is approximated by Dirac measures. We resort to an incremental variational principle for purposes of time discretization of the diffusive step. This principle characterizes the evolution of the density as a competition between the Wasserstein distance between two consecutive densities and entropy. Exploiting the structure of the Euler–Lagrange equations, we approximate the density as a collection of Diracs. The interpolation of the incremental transport map is effected through mesh-free max-ent interpolation. Remarkably, the resulting update is geometrically exact with respect to advection and volume. We present three-dimensional examples of application that illustrate the scope and robustness of the method. Copyright © 2017 John Wiley & Sons, Ltd.

This article discusses the use of frequency response surrogates for eigenvalue optimization problems in topology optimization that may be used to avoid solving the eigenvalue problem. The motivation is to avoid complications that arise from multiple eigenvalues and the computational complexity associated with computation of eigenvalues in very large problems. Copyright © 2017 John Wiley & Sons, Ltd.

This paper aims to improve the convergence rate of the general viscous dynamic relaxation method. For this purpose, a new automated procedure for estimating the critical damping factor is developed by employing a simple variant of the Lanczos algorithm, which does not require any re-orthogonalization process. All of the computational operations are performed by simple vector–matrix multiplication without requiring any matrix factorization or inversion. Some numerical examples with geometric nonlinear behavior are analyzed by the proposed algorithm. Results show that the suggested procedure can effectively decrease the total number of convergence iterations compared with conventional dynamic relaxation algorithms. Copyright © 2017 John Wiley & Sons, Ltd.

A novel set of enrichment functions within the framework of the extended finite element method is proposed for linear elastic fracture analysis of interface cracks in bimaterials. The motivation for the new enrichment set stems from the revelation that the accuracy and conditioning of the widely accepted 12-fold bimaterial enrichment functions significantly deteriorates with the increase in material mismatch. To this end, we propose an 8-fold material-dependent enrichment set, derived from the analytical asymptotic displacement field, that well captures the near-tip oscillating singular fields of interface cracks, including the transition to weak discontinuities of bimaterials.

The performance of the proposed material-dependent enrichment functions is studied on 2 benchmark examples. Comparisons are made with the 12-fold bimaterial enrichment as well as the classical 4-fold homogeneous branch functions, which have also been used for bimaterials. The numerical studies clearly demonstrate the superiority of the new enrichment functions, which yield the most accurate results with fewer degrees of freedom and significantly improved conditioning compared with the 12-fold functions.

The macroscopic strength domain and failure mode of heterogeneous or composite materials can be quantitatively determined by solving an auxiliary limit analysis problem formulated on a periodic representative volume element. In this paper, a novel numerical procedure based on the kinematic theorem and homogenization theory for the limit analysis of periodic composites is developed. The total velocity fields, instead of the fluctuating (periodic) velocities, at the microscopic level are approximated by finite elements, ensuring that the resulting optimization problem is similar to the limit analysis problem formulated for structures, except for additional constraints, namely periodic boundary conditions and averaging relations. The optimization problem is then formulated as a standard second-order cone programming problem to be solved by highly efficient solvers. The effects of the loading condition, representative volume element architecture, and fiber/air void volume fraction on the macroscopic strength of perforated and fiber-reinforced composites are studied in numerical examples. Copyright © 2017 John Wiley & Sons, Ltd.

This paper focuses on the interpolation of the kinematic fields describing the configuration of geometrically exact beams, namely, the position and rotation fields. Two kinematic representations are investigated: the classical approach that treats the displacement and rotation fields separately and the motion approach that treats those two fields as a unit. The latter approach is found to be more consistent with the kinematic description of beams. Then, two finite element interpolation strategies are presented and contrasted. The first interpolates the displacement and rotation fields separately, whereas the second interpolates both fields as a unit, in a manner consistent with the motion approach. The performance of both strategies is evaluated in light of the fundamental requirements for the convergence of the finite element method: the ability to represent rigid-body motions and constant strain states. It is shown that the traditional uncoupled interpolation scheme for the position field approximates that based on the motion approach and that the coupling induced by the interpolation of motion yields superior convergence rates for the representation of constant strain states. This property is known to lead to finite elements that are less prone to the locking phenomenon. Copyright © 2017 John Wiley & Sons, Ltd.
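The contrast between interpolating rotations componentwise and interpolating them as elements of the rotation group is visible even for planar rotations: averaging two rotation matrices entry by entry produces a matrix that is no longer a rotation, whereas interpolating the angle (the motion/manifold view) stays on the group. A minimal check (our 2D toy; the paper treats full 3D finite rotations):

```python
import math

def rot2(theta):
    """2x2 planar rotation matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def lerp_entrywise(A, B, t):
    """Componentwise interpolation: leaves the rotation group."""
    return [[(1 - t) * A[i][j] + t * B[i][j] for j in range(2)] for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Midpoint between the identity and a quarter turn:
bad = lerp_entrywise(rot2(0.0), rot2(math.pi / 2), 0.5)  # determinant 0.5
good = rot2(0.5 * (0.0 + math.pi / 2))                   # a true rotation
```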

From a mathematical point of view, phase field theory can be understood as a smooth approximation of an underlying sharp interface problem. However, the smooth phase field approximation is not uniquely defined. Different phase field approximations are known to converge to the same sharp interface problem in the limiting case—if the thickness of the diffuse interface converges to zero. In this respect and focusing on numerics, a question that naturally arises is as follows: What are the convergence rates of the different phase field models? The paper deals precisely with this question for a certain family of phase field models. The focus is on an Allen–Cahn-type phase field model coupled to continuum mechanics. This model is rewritten into a unified variational phase field framework that covers different homogenization assumptions in the diffuse interfaces: Voigt/Taylor, Reuss/Sachs and more sound homogenization approaches falling into the range of rank-one convexification. It is shown by means of numerical experiments that the underlying phase field model—that is, the homogenization assumption in the diffuse interface—indeed influences the convergence rate. Copyright © 2017 John Wiley & Sons, Ltd.
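For orientation, the Allen–Cahn-type structure referred to here can be written schematically as follows; the notation is ours, and the coupling term stands in for whichever homogenization assumption (Voigt/Taylor, Reuss/Sachs, or rank-one convexification) is adopted in the diffuse interface:

```latex
\mathcal{F}[\varphi,\mathbf{u}] \;=\; \int_{\Omega}
  \Big( \tfrac{\epsilon}{2}\,\lvert\nabla\varphi\rvert^{2}
      + \tfrac{1}{\epsilon}\,W(\varphi)
      + \psi\big(\boldsymbol{\varepsilon}(\mathbf{u}),\varphi\big) \Big)\,\mathrm{d}x,
\qquad
\dot{\varphi} \;=\; -M\,\frac{\delta\mathcal{F}}{\delta\varphi},
```

with $W$ a double-well potential, $\epsilon$ the interface thickness whose limit $\epsilon \to 0$ recovers the sharp interface problem, $M$ a mobility, and $\psi$ the mechanical energy density of the mixed phases, whose form encodes the homogenization assumption being compared.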

Structural reliability methods aim at computing the probability of failure of systems with respect to prescribed limit state functions. A common practice is to evaluate these limit state functions using Monte Carlo simulations. The main drawback of this approach is its computational cost, because it requires computing a large number of deterministic finite element solutions. Surrogate models, which are built from a limited number of runs of the original model, have been developed as substitutes for the original model to reduce the computational cost. However, these surrogate models, while drastically decreasing the computational cost, may fail to compute an accurate failure probability. In this paper, we focus on the control of the error introduced by a reduced basis surrogate model on the computation of the failure probability obtained by a Monte Carlo simulation. We propose a technique to determine bounds on this failure probability, as well as a strategy for enriching the reduced basis based on limiting the bounds of the error of the failure probability, for a multi-material elastic structure. Copyright © 2017 John Wiley & Sons, Ltd.
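The kind of bound discussed here can be illustrated with a toy limit state: if the surrogate is known to satisfy |g − ĝ| ≤ ε pointwise, every sample with ĝ ≤ −ε certainly fails and every sample with ĝ > ε is certainly safe, so the Monte Carlo failure probability is bracketed by counting the certain failures with and without the undecided samples. This sketch (limit state, sampler, and ε are our assumptions) illustrates only the bracketing idea, not the paper's reduced-basis enrichment strategy:

```python
import random

def failure_prob_bounds(g_hat, sampler, eps, n=200000, seed=1):
    """Bracket P[g <= 0] given a surrogate g_hat with pointwise error bound eps."""
    rng = random.Random(seed)
    sure_fail = undecided = 0
    for _ in range(n):
        gh = g_hat(sampler(rng))
        if gh <= -eps:
            sure_fail += 1        # certainly on the failure side
        elif gh <= eps:
            undecided += 1        # could lie on either side of g = 0
    return sure_fail / n, (sure_fail + undecided) / n

# Toy limit state g(x) = 3 - x with x ~ N(0, 1); true P_f = P[x >= 3] ~ 1.35e-3.
lo, hi = failure_prob_bounds(lambda x: 3.0 - x, lambda r: r.gauss(0.0, 1.0), eps=0.1)
```

Tightening ε (i.e., enriching the surrogate) shrinks the undecided set and hence the gap between the bounds, which is the mechanism the enrichment strategy exploits.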

This article presents a novel approach to model validation and to the calibration of complex structural systems through the adoption of heterogeneous (numerical/physical) simulation based on dynamic substructuring (HDS). HDS isolates the physical sub-system (PS) that contains the key region of nonlinear behavior of interest, which is tested experimentally, separately from the remainder of the system, that is, the numerical sub-system (NS), which is numerically simulated. A parallel partitioned time integrator based on the finite element tearing and interconnecting method plays a central role in solving the coupled system response, enabling a rigorous and stable synchronization between sub-systems and a realistic interaction between the PS and NS responses. This feature enhances the quality of benchmarks for the validation and calibration of low-discrepancy models through *virtual structural testing*. As a proof of concept, we select an old reinforced concrete viaduct subjected to seismic loading. Several HDS were conducted at the European Laboratory for Structural Assessment in Ispra (Italy), considering two physical piers and the related concave sliding bearings as the PSs of the heterogeneous system. As a result, the benefit of employing HDS to set benchmarks for model validation and calibration is highlighted by developing low-discrepancy FE models of critical viaduct components. Copyright © 2017 John Wiley & Sons, Ltd.

A reduction/hyper reduction framework is presented for dramatically accelerating the solution of nonlinear dynamic multiscale problems in structural and solid mechanics. At each scale, the dimensionality of the governing equations is reduced using the method of snapshots for proper orthogonal decomposition, and computational efficiency is achieved for the evaluation of the nonlinear reduced-order terms using a carefully designed configuration of the energy conserving sampling and weighting method. Periodic boundary conditions at the microscales are treated as linear multipoint constraints and reduced via projection onto the span of a basis formed from the singular value decomposition of Lagrange multiplier snapshots. Most importantly, information is efficiently transmitted between the scales without incurring high-dimensional operations. In this proposed proper orthogonal decomposition–energy conserving sampling and weighting nonlinear model reduction framework, training is performed in two steps. First, a microscale hyper reduced-order model is constructed *in situ*, or using a mesh coarsening strategy, in order to achieve significant speedups even in non-parametric settings. Next, a classical offline–online training approach is performed to build a parametric hyper reduced-order macroscale model, which completes the construction of a fully hyper reduced-order parametric multiscale model capable of fast and accurate multiscale simulations. A notable feature of this computational framework is the minimization, at the macroscale level, of the cost of the offline training using the *in situ* or coarsely trained hyper reduced-order microscale model to accelerate snapshot acquisition. The effectiveness of the proposed hyper reduction framework at accelerating the solution of nonlinear dynamic multiscale problems is demonstrated for two problems in structural and solid mechanics. Speedup factors as high as five orders of magnitude are shown to be achievable. Copyright © 2017 John Wiley & Sons, Ltd.
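The method of snapshots at the core of such reduction can be sketched compactly: the POD modes are eigenvectors of the small snapshot correlation matrix, lifted back to the full-dimensional space. A minimal pure-Python sketch for the dominant mode, with power iteration standing in for a full eigensolve (names are illustrative, and this is not the ECSW hyper reduction of the paper):

```python
def pod_dominant_mode(snapshots, iters=200):
    """Method of snapshots: POD modes are eigenvectors of the small m x m
    correlation matrix C[i][j] = <s_i, s_j> (m = number of snapshots), lifted
    back to the full space. Power iteration stands in for a full eigensolve."""
    m, n = len(snapshots), len(snapshots[0])
    C = [[sum(a * b for a, b in zip(snapshots[i], snapshots[j]))
          for j in range(m)] for i in range(m)]
    v = [1.0] * m
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # lift the small eigenvector back: mode = sum_i v_i * s_i, normalized
    mode = [sum(v[i] * snapshots[i][k] for i in range(m)) for k in range(n)]
    norm = sum(x * x for x in mode) ** 0.5
    return [x / norm for x in mode]

# rank-one snapshot data: the dominant mode must be the shared direction
mode = pod_dominant_mode([[2.0, 0.0, 0.0], [3.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
```

The payoff is that the eigenproblem has the size of the snapshot count m, not the full state dimension n, which is what makes the offline training affordable.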

This paper introduces a hierarchical sequential arbitrary Lagrangian-Eulerian (ALE) model for predicting the tire-soil-water interaction at finite deformations. Using the ALE framework, the interaction between a rolling pneumatic tire and the fluid-infiltrated soil underneath is captured numerically. The road is assumed to be a fully saturated two-phase porous medium. The constitutive response of the tire and of the solid skeleton of the porous medium is idealized as hyperelastic. Meanwhile, the interaction between tire, soil, and water is simulated via a hierarchical operator-split algorithm. A salient feature of the proposed framework is its steady-state rolling formulation: while the finite element mesh of the soil is fixed to a reference frame that moves with the tire, the solid and fluid constituents of the soil flow through the mesh according to the rolling speed of the tire. This treatment leads to an elegant and computationally efficient formulation for investigating the tire-soil-water interaction both close to the contact and in the far field. The presented ALE model for tire-soil-water interaction provides the essential basis for future applications, for example, to the path-dependent frictional-cohesive response of consolidating and unsaturated soils. Copyright © 2017 John Wiley & Sons, Ltd.

We present a linearization scheme (the L-scheme) for an interior penalty discontinuous Galerkin method for a two-phase porous-media flow model, which includes dynamic effects in the capillary pressure. The fluids are assumed immiscible and incompressible, and the solid matrix non-deformable. The physical laws are approximated in their original form, without using the global or complementary pressures. The linearization scheme does not require any regularization step. Furthermore, in contrast with Newton or Picard methods, no computation of derivatives is involved. We prove rigorously that the scheme is robust and linearly convergent. We present an extensive parameter study comparing the behaviour of the L-scheme with that of the Newton method. Copyright © 2017 John Wiley & Sons, Ltd.
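In its simplest scalar form, the L-scheme replaces the Jacobian in Newton's method by a fixed constant L, trading quadratic convergence for derivative-free, robust linear convergence. A minimal illustration on a hypothetical scalar equation F(u) = 0 (not the discontinuous Galerkin system of the paper):

```python
def l_scheme(F, L, u0, tol=1e-12, maxit=2000):
    """L-scheme iteration u_{k+1} = u_k - F(u_k) / L.
    Converges linearly for monotone F whenever L >= sup F'; no derivatives
    or regularization steps are needed, unlike Newton's method."""
    u = u0
    for _ in range(maxit):
        du = F(u) / L
        u -= du
        if abs(du) < tol:
            break
    return u

# hypothetical monotone nonlinearity with root u = 1; sup F' = 13 on [0, 2]
root = l_scheme(lambda u: u**3 + u - 2.0, L=13.0, u0=0.0)
```

The contraction factor is roughly 1 - F'/L, so a larger safety margin in L means slower but unconditionally robust convergence, which is the trade-off the parameter study in the paper quantifies.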

This paper addresses explicit time integration for multi-model structural dynamics solved by the Arlequin method. Our study focuses on the stability of the central difference scheme in the Arlequin framework. Although the Arlequin coupling matrices can introduce a weak instability, the time integrator remains stable as long as the initial kinematic conditions of both models agree on the coupling zone. After showing that the Arlequin weights have an adverse impact on the critical time step, we present two approaches to circumvent this issue. Computational tests confirm that the two approaches effectively preserve a feasible critical time step and show the efficiency of the Arlequin method for explicit structural dynamic simulations. Copyright © 2017 John Wiley & Sons, Ltd.
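The stability constraint at the heart of this analysis is the classical central difference condition Δt ≤ 2/ω_max (the abstract notes that the Arlequin weights reduce the admissible step further). A minimal single-oscillator sketch of that condition, illustrative only:

```python
def central_difference(omega, dt, u0, v0, steps):
    """Explicit central difference for a single undamped oscillator
    u'' + omega^2 u = 0; the scheme is stable if and only if dt <= 2/omega."""
    u_prev = u0 - dt * v0 + 0.5 * dt * dt * (-omega**2 * u0)  # fictitious u(-dt)
    u = u0
    for _ in range(steps):
        u_next = 2.0 * u - u_prev - (dt * omega)**2 * u
        u_prev, u = u, u_next
    return u

u_stable = central_difference(1.0, 0.01, 1.0, 0.0, 628)   # dt << 2/omega: ~cos(t)
u_unstable = central_difference(1.0, 2.5, 1.0, 0.0, 50)   # dt > 2/omega: blows up
```

Running both cases shows the sharpness of the bound: the stable step tracks cos(t), while the over-long step grows geometrically within a few dozen steps.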

The Koiter method recovers the equilibrium path of an elastic structure using a reduced model, obtained by means of a quadratic asymptotic expansion of the finite element model. Its main feature is the possibility of efficiently performing sensitivity analysis by including the effects of imperfections *a posteriori* in the reduced nonlinear equations. The state-of-the-art treatment of geometrical imperfections is accurate only for small imperfection amplitudes and linear pre-critical behaviour. This work extends the validity of the method to a wider range of practical problems through a new approach, which accurately takes the imperfection into account without losing the benefits of the *a posteriori* treatment. A mixed solid-shell finite element is used to build the discrete model. A large number of numerical tests, regarding nonlinear buckling problems, modal interaction, unstable post-critical behaviour, and imperfection-sensitive structures, validate the proposal. Copyright © 2017 John Wiley & Sons, Ltd.

The scaled boundary radial point interpolation method (SBRPIM), a new semi-analytical technique, is introduced and applied to the analysis of stress–strain problems. The proposed method combines the advantages of the scaled boundary finite element method and the boundary radial point interpolation method. No mesh is required, nodes are scattered only on the boundary of the domain, no fundamental solution is required, and because the shape functions constructed by radial point interpolation possess the Kronecker delta property, the boundary conditions can be imposed accurately without additional effort. The main features of the SBRPIM are that it is based on boundary-scattered nodes without the need for element connectivity information, that its shape functions satisfy the Kronecker delta property, and that it is capable of handling singular problems. The equations of the SBRPIM for stress–strain fields are outlined in this paper. Several 2-D elastostatic benchmark examples are analyzed to validate the accuracy and efficiency of the proposed method. The SBRPIM is found to be very easy to implement, and its results show very good agreement with the analytical solutions. Copyright © 2017 John Wiley & Sons, Ltd.

We introduce a new cell-centered finite volume discretization for elasticity with weakly enforced symmetry of the stress tensor. The method is motivated by the need for robust discretization methods for deformation and flow in porous media and falls in the category of multi-point stress approximations (MPSAs). By enforcing symmetry weakly, the resulting method has flexibility beyond previous MPSA methods. This allows for a construction of a method that is applicable to simplexes, quadrilaterals, and most planar-faced polyhedral grids in both 2D and 3D, and in particular, the method amends a convergence failure in previous MPSA methods for certain simplex grids. We prove convergence of the new method for a wide range of problems, with conditions that can be verified at the time of discretization. We present the first set of comprehensive numerical tests for the MPSA methods in three dimensions, covering Cartesian and simplex grids, with both heterogeneous and nearly incompressible media. The tests show that the new method is consistently second-order convergent in displacement, despite being lowest order, with convergence rates mostly between 1 and 2 for stresses. The results further show that the new method is more robust and computationally cheaper than previous MPSA methods. Copyright © 2017 The Authors. *International Journal for Numerical Methods in Engineering* Published by John Wiley & Sons Ltd.

No abstract is available for this article.

Fourier solvers have become efficient tools to establish structure–property relations in heterogeneous materials. Introduced as an alternative to the finite element (FE) method, they are based on fixed-point solutions of the Lippmann–Schwinger type integral equation. Their computational efficiency results from handling the kernel of this equation by the fast Fourier transform (FFT). However, the kernel is derived from an auxiliary homogeneous linear problem, which renders the extension of FFT-based schemes to nonlinear problems conceptually difficult. This paper aims to establish a link between FE-based and FFT-based methods in order to develop a solver applicable to general history-dependent and time-dependent material models. For this purpose, we follow the standard steps of the FE method, starting from the weak form, proceeding to the Galerkin discretization and the numerical quadrature, up to the solution of nonlinear equilibrium equations by an iterative Newton–Krylov solver. No auxiliary linear problem is thus needed. By analyzing a two-phase laminate with nonlinear elastic, elastoplastic, and viscoplastic phases and by elastoplastic simulations of a dual-phase steel microstructure, we demonstrate that the solver exhibits robust convergence. These results are achieved by re-using the nonlinear FE technology, with the potential of further extensions beyond small-strain inelasticity considered in this paper. Copyright © 2016 John Wiley & Sons, Ltd.
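The solver architecture described here (a Galerkin-type discretization closed by a matrix-free Newton–Krylov loop, with no auxiliary linear reference problem) can be sketched on a toy 1D nonlinear problem. This is an illustrative finite-difference/conjugate-gradient stand-in, not the FFT-accelerated scheme of the paper; all names and the model problem -u'' + u³ = 1 are assumptions for the sketch:

```python
def laplacian_matvec(v, h):
    """Action of the 1D Dirichlet Laplacian stencil -v'' on interior nodes."""
    n = len(v)
    out = []
    for i in range(n):
        left = v[i - 1] if i > 0 else 0.0
        right = v[i + 1] if i < n - 1 else 0.0
        out.append((2.0 * v[i] - left - right) / h**2)
    return out

def residual(u, h):
    """Nonlinear residual F(u) for -u'' + u^3 = 1, homogeneous Dirichlet BC."""
    Au = laplacian_matvec(u, h)
    return [Au[i] + u[i]**3 - 1.0 for i in range(len(u))]

def jac_matvec(u, v, h):
    """Matrix-free Jacobian action J(u) v = -v'' + 3 u^2 v (SPD, so CG applies)."""
    Av = laplacian_matvec(v, h)
    return [Av[i] + 3.0 * u[i]**2 * v[i] for i in range(len(u))]

def cg(matvec, b, tol=1e-12, maxit=500):
    """Plain conjugate gradients: the Krylov solver inside each Newton step."""
    x = [0.0] * len(b)
    r, p = b[:], b[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

n, h = 31, 1.0 / 32.0
u = [0.0] * n
for _ in range(20):                       # Newton loop on the equilibrium equations
    F = residual(u, h)
    if max(abs(fi) for fi in F) < 1e-10:
        break
    d = cg(lambda v: jac_matvec(u, v, h), [-fi for fi in F])
    u = [ui + di for ui, di in zip(u, d)]
```

The point of the sketch is structural: the Krylov solver only ever needs Jacobian-vector products, so no global matrix (and, in the paper's setting, no auxiliary homogeneous problem) has to be formed.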

This study presents a gradient-based shape optimization over a fixed mesh using a non-uniform rational B-splines-based interface-enriched generalized finite element method, applicable to multi-material structures. In the proposed method, non-uniform rational B-splines are used to parameterize the design geometry precisely and compactly by a small number of design variables. An analytical shape sensitivity analysis is developed to compute derivatives of the objective and constraint functions with respect to the design variables. Subtle but important new terms involve the sensitivity of shape functions and their spatial derivatives. Verification and illustrative problems are solved to demonstrate the precision and capability of the method. Copyright © 2016 John Wiley & Sons, Ltd.

We propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative log-likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This point is subsequently used as the initial value for the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address the difficulties associated with the non-smooth response and are shown to significantly decrease the computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Marginal densities of the source properties are obtained, and the means of the chains agree accurately with the estimates produced by the hybrid algorithms. Copyright © 2016 John Wiley & Sons, Ltd.
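The two-stage idea (a derivative-free global search stopped early, whose pseudo-optimum seeds a derivative-free local polish) can be sketched in one dimension. Plain random sampling stands in here for SA/PSO/GA, and a simple shrinking-step pattern search stands in for implicit filtering; all names and the objective are illustrative:

```python
import random

def hybrid_minimize(f, bounds, n_global=200, seed=0):
    """Two-stage hybrid: a derivative-free global search stopped early (random
    sampling stands in for SA/PSO/GA), followed by a derivative-free
    shrinking-step pattern search in the spirit of implicit filtering."""
    rng = random.Random(seed)
    lo, hi = bounds
    # stage 1: cheap global exploration, stopped early at a pseudo-optimum
    best = min((rng.uniform(lo, hi) for _ in range(n_global)), key=f)
    # stage 2: local polish that needs only function values, not gradients
    step = (hi - lo) / 10.0
    while step > 1e-8:
        for cand in (best - step, best + step):
            if f(cand) < f(best):
                best = cand      # keep the step size while progress is made
                break
        else:
            step *= 0.5          # no improvement: refine the stencil
    return best

# hypothetical non-smooth objective with its minimum at x = 1.25
x_star = hybrid_minimize(lambda x: (x - 1.3)**2 + 0.1 * abs(x), (-5.0, 5.0))
```

Because both stages use only function values, the sketch inherits the key property the abstract exploits: it tolerates the kinks of a piecewise-differentiable likelihood that would defeat a gradient-based local solver.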

For the two-dimensional three-temperature radiative heat conduction problem appearing in inertial confinement numerical simulations, we choose the *Freezing coefficient method* to linearize the nonlinear equations and apply the well-known mixed finite element scheme with the lowest-order Raviart–Thomas element on a triangulation to the linearized equations. We obtain first-order convergence in space for the temperature and flux approximations and design a simple but efficient algorithm for the discrete system. Three numerical examples are displayed. The first two verify the theoretical results and show super-convergence for the temperature and flux functions at the barycenter of each element, which is helpful for solving radiative heat conduction problems. The third validates the robustness of the scheme, with a small energy conservation error and first-order convergence in time. Copyright © 2016 John Wiley & Sons, Ltd.