In this paper we present a solution framework for high-order discretizations of conjugate heat transfer (CHT) problems on non-body-conforming meshes. The framework brings together recent developments in discontinuous Galerkin (DG) discretization, simplex cut-cell techniques, and anisotropic output-based adaptation. With the cut-cell technique, the mesh generation process is completely decoupled from the interface definitions.

In addition, the adaptive scheme combined with the DG discretization automatically adjusts the mesh in each sub-domain and achieves high-order accuracy in outputs of interest.

We demonstrate the solution framework on several multi-domain CHT problems involving laminar and turbulent flows, curved geometry, and highly coupled heat transfer regions. The combination of these attributes yields non-intuitive coupled interactions between the fluid and solid domains which can be difficult to capture with user-generated meshes. Copyright © 2016 John Wiley & Sons, Ltd.

The discrete element method (DEM), developed by Cundall and Strack, typically uses some variation of the central difference numerical integration scheme. However, like all explicit schemes, it is only conditionally stable, with stability determined by the size of the time-step. Current methods for estimating appropriate DEM time-steps rest on many assumptions; large factors of safety are therefore usually applied to the time-step to ensure stability, which substantially increases the computational cost of a simulation. This work introduces a general framework for estimating critical time-steps for any planar rigid body subject to linear damping and forcing. A numerical investigation of how system damping, coupled with non-collinear impact, affects the critical time-step is also presented. It is shown that the critical time-step is proportional to √(*m*/*k*) if a linear contact model is adopted, where *m* and *k* represent mass and stiffness, respectively. The term which multiplies this factor is a function of known physical parameters of the system. The stability of a system is independent of the initial conditions. This article is protected by copyright. All rights reserved.
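For a single contact with a linear spring, the classical single-degree-of-freedom stability bound can be sketched as follows; the damped correction factor (√(1+ξ²) − ξ) is the standard central-difference result and only illustrates the kind of expression the framework generalizes, not the paper's own formula.

```python
import math

def critical_timestep(m, k, xi=0.0):
    """Central-difference critical time-step for one linear contact.

    m  : particle mass
    k  : contact stiffness
    xi : viscous damping ratio (xi = 0 recovers the classical 2*sqrt(m/k))
    """
    omega = math.sqrt(k / m)                      # contact natural frequency
    return (2.0 / omega) * (math.sqrt(1.0 + xi**2) - xi)
```

With a safety factor F, a simulation would then run with `dt = critical_timestep(m, k) / F`; note that damping shrinks the stable step rather than enlarging it.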

This work focuses on providing accurate low-cost approximations of stochastic finite element simulations in the framework of linear elasticity. In [E. Florentin, P. Diez, Adaptive reduced basis strategy based on goal oriented error assessment for stochastic problems, Comput. Methods Appl. Mech. Engrg. 225-228 (2012) 116-127], an adaptive strategy was introduced as an improved Monte-Carlo method for multi-dimensional large stochastic problems. We provide here a complete analysis of the method, including a new enhanced goal-oriented error estimator and estimates of the CPU cost gain. Technical insights into these two topics are presented in detail, and numerical examples show the benefit of these new developments.

We study the alternative ‘simultaneous analysis and design’ (SAND) formulation of the local stress- and slope-constrained topology design problem. It is demonstrated that a standard trust-region Lagrange-Newton sequential quadratic programming (SQP)-type algorithm—based, in this case, on strictly convex and separable approximate subproblems—may converge to singular optima of the local stress-constrained problem without having to resort to relaxation or perturbation techniques. Moreover, because the sensitivity analyses are obviated—in SAND the density and displacement variables are independent—and the SAND problem is immensely sparse, solutions to large-scale problem instances may be obtained in a reasonable amount of computation time.

The computation of elastic continua is widely used in today's engineering practice, and finite element models yield a reasonable approximation of the physically observed behaviour. In contrast, the failure of materials due to overloading can be seen as a sequence of discontinuous effects finally leading to a system failure. Until now it has not been possible to sufficiently predict this process with numerical simulations. It has recently been shown that discrete models like lattice spring models are a promising alternative to finite element models for computing the breakdown of materials due to static overstress and fatigue. In this paper we address one of the downsides of current lattice spring models: the need for a periodic mesh, which leads to a mesh-induced anisotropy of material failure in simulations. We show how to derive irregular cells that still behave as part of a homogeneous continuum irrespective of their shape and which should be able to eliminate mesh-induced anisotropy. In addition, no restraints concerning the material stiffness tensor are introduced, which allows the simulation of non-isotropic materials. Finally, we compare the elastic response of the presented model with existing lattice spring models using mechanical systems with a known analytical stress field.

In this paper, an effective local radial basis function collocation method (LRBFCM) is presented for computing the band structures of two-dimensional (2D) solid/fluid and fluid/solid phononic crystals. Both systems of solid scatterers embedded in a fluid matrix (solid/fluid phononic crystals) and fluid scatterers embedded in a solid matrix (fluid/solid phononic crystals) are investigated. The solid-fluid interactions are taken into account by properly formulating and treating the continuity/equilibrium conditions on the solid-fluid interfaces, which require an accurate computation of the normal derivatives of the displacements and the pressure on the fluid-solid interfaces and the unit-cell boundaries. The developed LRBFCM for the mixed wave propagation problems in 2D solid/fluid and fluid/solid phononic crystals is validated against the corresponding results obtained by the finite element method (FEM). To the best knowledge of the authors, the present LRBFCM has not yet been applied to the band structure computations of 2D solid/fluid and fluid/solid phononic crystals. For different lattice forms, scatterers’ shapes, acoustic impedance ratios and material combinations (solid scatterers in fluid matrix or fluid scatterers in solid matrix), numerical results are presented and discussed to reveal the efficiency and the accuracy of the developed LRBFCM for calculating the band structures of 2D solid/fluid and fluid/solid phononic crystals. A comparison of the present numerical results with those of the FEM shows that the present LRBFCM is much more efficient than the FEM for the band structure computations of the considered 2D solid/fluid and fluid/solid phononic crystals.

A reduced order model (ROM) is presented for the long-term calculation of sub-surface oil/water flows. As in several previous ROMs in the field, the Newton iterations in the full model (FM) equations, which are implicit in time, are projected onto a set of modes obtained by applying proper orthogonal decomposition (POD) to a set of snapshots computed by the FM itself. The novelty of the present ROM is that the POD modes are (i) first calculated from snapshots computed by the FM in a short initial stage, and then (ii) updated on the fly along the simulation itself, using new sets of snapshots computed by the FM in even shorter additional runs. Thus, the POD modes adapt themselves to the local dynamics along the simulation, instead of being completely calculated at the outset, which requires a computationally expensive preprocess. This strategy is robust and computationally efficient, as demonstrated in 10- and 30-year simulations for a realistic reservoir model taken from the SAIGUP project.
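The snapshot-based construction and on-the-fly update described above can be sketched with a plain SVD; the function names and the energy-based truncation criterion are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pod_modes(snapshots, energy=0.999):
    """POD basis capturing the fraction `energy` of the snapshot energy.
    Columns of `snapshots` are full-model states at successive times."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1   # smallest basis meeting target
    return U[:, :r]

def update_modes(modes, new_snapshots, energy=0.999):
    """On-the-fly update: enrich the current basis with fresh FM snapshots
    and re-truncate, so the basis tracks the local dynamics."""
    return pod_modes(np.hstack([modes, new_snapshots]), energy)
```

Projecting the FM residual and Jacobian onto `pod_modes(...)` then yields the small system solved at each Newton iteration.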

This paper is concerned with the development of mesh-free models for the static analysis of smart laminated composite beams. The overall smart composite beam is composed of a laminated substrate composite beam and a piezoelectric layer attached partially or fully to the top surface of the substrate beam. The piezoelectric layer acts as the distributed actuator layer of the smart beam. A layer-wise displacement theory and an equivalent single-layer theory have been used to derive the models. Several cross-ply substrate beams are considered for presenting the numerical results. The responses of the smart composite beams computed by the present new mesh-free model based on the layer-wise displacement theory match excellently with those obtained from the exact solutions. The mesh-free model based on the equivalent single-layer theory cannot accurately compute the responses due to transverse actuation by the piezoelectric actuator. The models derived here suggest that the mesh-free method can be used efficiently for the numerical analysis of smart structures.

Atomistic models, which are crucial for performing molecular dynamics (MD) simulations of carbon nanostructures (CNS), consist of virtual hexagonal meshes with defects properly distributed in the intersectional areas. Currently, atomistic models are created mostly by hand, which is a notably tedious and time-consuming process. In this paper, we develop a method that produces atomistic models automatically. Because a hexagonal mesh and a triangulation are dual graphs, our work focuses on the creation of a proper triangulation. The edge lengths of the triangulation should be compatible with the lengths of the C-C bonds, and vertices with valences other than 6 (due to defects in the hexagonal mesh) should be properly arranged around the boundaries of the different components of a CNS. Two techniques play important roles in our method: (1) sphere packing is used to place the nodes for triangulation, which produces nearly constant edge lengths of the triangles, and (2) the movement and editing of defects are used to control the number and positions of the defects. We subsequently develop a computer program based on this method that can create models much more easily and quickly than the current hand-work method, thereby reducing the operation time significantly.

A computational method is developed for evaluating the plastic strain gradient hardening term within a crystal plasticity formulation. While such gradient terms reproduce the size effects exhibited in experiments, incorporating derivatives of the plastic strain yields a nonlocal constitutive model. Rather than applying mixed methods, we propose an alternative method whereby the plastic deformation gradient is variationally projected from the elemental integration points onto a smoothed nodal field. Crucially, the projection utilizes the mapping between Lie groups and algebras in order to preserve essential physical properties, such as orthogonality of the plastic rotation tensor. Following the projection, the plastic strain field is directly differentiated to yield the Nye tensor. Additionally, an augmentation scheme is introduced within the global Newton iteration loop such that the computed Nye tensor field is fed back into the stress update procedure. Effectively, this method results in a fully implicit evolution of the constitutive model within a traditional displacement-based formulation. An elemental projection method with explicit time integration of the plastic rotation tensor is compared as a reference. A series of numerical tests is performed for several element types in order to assess the robustness of the method, with emphasis placed upon polycrystalline domains and multi-axis loading.
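The key idea of averaging rotations through the Lie algebra (so the projected result stays exactly orthogonal) can be illustrated for SO(3) with Rodrigues' formulas; `project_rotations` and its weighting are hypothetical stand-ins for the paper's variational projection, shown only to make the group-algebra mapping concrete.

```python
import numpy as np

def so3_log(R):
    """Axial vector of the matrix logarithm of a rotation R."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def so3_exp(v):
    """Rodrigues' formula: rotation generated by the axial vector v."""
    theta = np.linalg.norm(v)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    if theta < 1e-12:
        return np.eye(3) + K
    return (np.eye(3) + np.sin(theta) / theta * K
            + (1.0 - np.cos(theta)) / theta**2 * (K @ K))

def project_rotations(Rs, weights):
    """Weighted average of rotations taken in the Lie algebra; a naive
    entrywise average of the matrices would not be orthogonal."""
    v = sum(w * so3_log(R) for w, R in zip(weights, Rs)) / sum(weights)
    return so3_exp(v)
```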

This research extends the *scaled boundary finite element method* (SBFEM) to a non-deterministic framework defined on a random domain, wherein random behaviour arises from random-field uncertainties. The aim is to blend the SBFEM into Galerkin spectral stochastic methods, which leads to a proficient procedure for handling stress singularity problems and crack analysis. The Young's modulus of the structures is considered to have random-field uncertainty, resulting in stochastic behaviour of the responses. Mathematical expressions and the solution procedure are derived to evaluate the statistical characteristics of the responses (displacement, strain and stress) and the stress intensity factors of cracked structures. The feasibility and effectiveness of the presented method are demonstrated by particularly chosen numerical examples.

We study the SAND (simultaneous analysis and design) formulation of the ‘classical’ topology optimization problem subject to linear constraints on material density variables. Based on a dual method in theory, and a primal-dual method in practice, we propose a separable and strictly convex quadratic Lagrange-Newton subproblem for use in sequential approximate optimization (SAO) of the SAND formulated classical topology design problem.

The SAND problem is characterised by a large number of nonlinear equality constraints (the equations of equilibrium), which are linearized in the approximate convex subproblems. The availability of cheap second-order information is exploited in a Lagrange-Newton sequential quadratic programming (SQP)-like framework. In the spirit of efficient structural optimization methods, the quadratic terms are restricted to the diagonal of the Hessian matrix; the subproblems have minimal storage requirements and are easy to solve, and positive definiteness of the diagonal Hessian matrix is trivially enforced.

Theoretical considerations reveal that the dual statement of the proposed subproblem for SAND minimum compliance design agrees with the ever-popular optimality criterion (OC) method—which is a NAND (nested analysis and design) formulation. This relates, in turn, to the known equivalence between rudimentary dual SAO algorithms based on reciprocal (and exponential) intervening variables and the OC method.
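A minimal sketch of a dual solve of a separable, strictly convex quadratic subproblem with one linear (volume-type) constraint and bounds; the specific objective and the bisection on the multiplier are illustrative assumptions, but they display the OC-like closed-form primal update that the dual statement above alludes to.

```python
import numpy as np

def separable_qp(g, d, a, V, xk, lo=0.0, hi=1.0):
    """Minimize  g.(x - xk) + 0.5 * sum(d_i * (x_i - xk_i)**2)
    subject to  a.x <= V  and  lo <= x <= hi,  with all d_i > 0.

    Separability and strict convexity give the primal minimizer in
    closed form for a fixed volume multiplier `lam`; the resulting
    one-dimensional dual problem is solved by bisection."""
    def x_of(lam):
        return np.clip(xk - (g + lam * a) / d, lo, hi)
    if a @ x_of(0.0) <= V:               # volume constraint inactive
        return x_of(0.0)
    l, u = 0.0, 1.0
    while a @ x_of(u) > V:               # bracket the dual variable
        u *= 2.0
    for _ in range(100):                 # bisection on the dual
        mid = 0.5 * (l + u)
        if a @ x_of(mid) > V:
            l = mid
        else:
            u = mid
    return x_of(u)
```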

For thin elastic structures submerged in heavy fluid, e.g., water, a strong interaction between the structural domain and the fluid domain occurs and significantly alters the eigenfrequencies. Therefore, the eigenanalysis of the fluid-structure interaction system is necessary. In this paper, a coupled finite element and boundary element (FE-BE) method is developed for the numerical eigenanalysis of the fluid-structure interaction problems. The structure is modeled by the finite element method (FEM). The compressibility of the fluid is taken into consideration and hence the Helmholtz equation is employed as the governing equation and solved by the boundary element method (BEM). The resulting nonlinear eigenvalue problem is converted into a small linear one by applying a contour integral method. Adequate modifications are suggested to improve the efficiency of the contour integral method and avoid missing the eigenvalues of interest. The Burton-Miller formulation is applied to tackle the fictitious eigenfrequency problem of the BEM and the optimal choice of its coupling parameter is investigated for the coupled FE-BE method. Numerical examples are given and discussed to demonstrate the effectiveness and accuracy of the developed FE-BE method.
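A widely used contour integral method of the kind referred to above is Beyn's algorithm; the sketch below, for a generic nonlinear eigenvalue problem T(z)v = 0, is a textbook version with trapezoidal quadrature on a circle, not the modified scheme of the paper.

```python
import numpy as np

def beyn(T, center, radius, n_probe, n_quad=64, tol=1e-8):
    """Eigenvalues of the nonlinear problem T(z) v = 0 inside the circle
    |z - center| = radius, via Beyn's contour integral method.
    `T` maps a complex number z to an (n x n) matrix."""
    n = T(center).shape[0]
    V = np.random.default_rng(0).standard_normal((n, n_probe))
    A0 = np.zeros((n, n_probe), dtype=complex)   # 0th contour moment
    A1 = np.zeros((n, n_probe), dtype=complex)   # 1st contour moment
    for k in range(n_quad):                      # trapezoidal rule on circle
        t = 2.0 * np.pi * k / n_quad
        z = center + radius * np.exp(1j * t)
        Y = np.linalg.solve(T(z), V)
        w = radius * np.exp(1j * t) / n_quad     # dz/(2*pi*i) absorbed here
        A0 += w * Y
        A1 += w * z * Y
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))              # numerical rank = #eigenvalues
    U, s, Wh = U[:, :r], s[:r], Wh[:r, :]
    B = U.conj().T @ A1 @ Wh.conj().T / s        # small linearized problem
    return np.linalg.eigvals(B)
```

For a linear pencil T(z) = zI − A the method simply recovers the eigenvalues of A enclosed by the contour, which makes a convenient sanity check.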

We introduce a method to mesh the boundary *Γ* of a smooth, open domain in *R*^{3} immersed in a mesh of tetrahedra. The surface mesh is obtained by mapping a specific collection of triangular faces of the background mesh to *Γ*. Two types of surface meshes follow: (a) a mesh that *exactly* meshes *Γ*, and (b) meshes that approximate *Γ* to any order, obtained by interpolating the map over the selected faces, i.e., an isoparametric approximation to *Γ*. The map we use to deform the faces is the closest point projection onto *Γ*. We formulate conditions under which the closest point projection defines a homeomorphism between each face and its image. These are conditions on some of the tetrahedra intersected by the boundary, and they essentially state that each such tetrahedron should: (a) have a small enough diameter, and (b) have two of its dihedral angles be acute. We provide explicit upper bounds on the mesh size, and these can be computed on the fly.

We showcase the quality of the resulting meshes with several numerical examples. More importantly, *all* surfaces in these examples were meshed with a *single* background mesh. This is an important feature for problems in which the geometry evolves or changes, since it could be possible for the background mesh to never change as the geometry does. In this case, the background mesh would be a universal mesh (see [1]) for all these geometries. We expect the method introduced here to be the basis for the construction of universal meshes for domains in three dimensions.
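For the simplest smooth surface, a sphere, the closest point projection used as the deformation map has a closed form; the sketch below (with hypothetical helper names) only illustrates mapping a background face onto Γ, not the paper's homeomorphism conditions.

```python
import numpy as np

def closest_point_sphere(x, center, radius):
    """Closed-form closest-point projection of a point onto a sphere."""
    d = x - np.asarray(center)
    return np.asarray(center) + radius * d / np.linalg.norm(d)

def project_face(face_vertices, center, radius):
    """Map a triangular face of the background tetrahedral mesh onto the
    surface by projecting its vertices; the higher-order variants would
    also interpolate the map at interior points of the face."""
    return np.array([closest_point_sphere(v, center, radius)
                     for v in face_vertices])
```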

In this paper, we propose a fully smoothed extended finite element method (SmXFEM) for axisymmetric problems with weak discontinuities. The salient feature of the proposed approach is that all the terms in the stiffness and mass matrices can be computed by the smoothing technique. This is accomplished by combining Green's divergence theorem with the evaluation of an indefinite integral based on the smoothing technique, which is used to transform the domain integral into a boundary integral. The proposed technique completely eliminates the need for isoparametric mapping and the computation of the Jacobian matrix, even for the mass matrix. When employed over the enriched elements, the proposed technique does not require sub-triangulation for the purpose of numerical integration. The accuracy and convergence properties of the proposed technique are demonstrated on a few problems in elastostatics and elastodynamics with weak discontinuities. It can be seen that the proposed technique yields stable and accurate solutions and is less sensitive to mesh distortion.

In this paper, we develop a robust reduced-order modeling method, named algebraic dynamic condensation, which is based on the improved reduced system (IRS) method. Using algebraic substructuring, the global mass and stiffness matrices are divided into many small submatrices without considering the physical domain, and the substructures and interface boundary are defined from an algebraic perspective. The reduced model is then constructed using three additional procedures: substructural stiffness condensation, interface boundary reduction, and substructural inertial effect condensation. The formulation of the reduced model is expressed simply at the submatrix level, without using a transformation matrix, which would incur a huge computational cost. Through various numerical examples, the performance of the proposed method is demonstrated in terms of accuracy and computational cost.
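The IRS method on which the proposed condensation builds can be sketched at the matrix level as Guyan static condensation plus O'Callahan's inertial correction; the dense-matrix implementation below is illustrative only and ignores the paper's algebraic substructuring and submatrix-level formulation.

```python
import numpy as np

def irs_reduce(K, M, masters):
    """Guyan reduction followed by O'Callahan's IRS inertial correction.

    K, M    : global stiffness and mass matrices (dense here for clarity)
    masters : retained (interface) DOF indices; all others are condensed
    Returns the reduced stiffness and mass matrices."""
    n = K.shape[0]
    slaves = [i for i in range(n) if i not in masters]
    p = list(masters) + slaves                        # reorder: masters first
    Kp, Mp = K[np.ix_(p, p)], M[np.ix_(p, p)]
    nm = len(masters)
    Kss, Ksm = Kp[nm:, nm:], Kp[nm:, :nm]
    Tg = np.vstack([np.eye(nm), -np.linalg.solve(Kss, Ksm)])  # Guyan map
    Kg, Mg = Tg.T @ Kp @ Tg, Tg.T @ Mp @ Tg           # statically reduced
    S = np.zeros((n, n))
    S[nm:, nm:] = np.linalg.inv(Kss)
    Ti = Tg + S @ Mp @ Tg @ np.linalg.solve(Mg, Kg)   # IRS inertial correction
    return Ti.T @ Kp @ Ti, Ti.T @ Mp @ Ti
```

On a small fixed-fixed spring-mass chain the IRS-reduced model reproduces the lowest eigenvalue to well under a percent, where plain Guyan reduction errs by tens of percent.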

We present a higher-order discretization scheme for the compressible Euler and Navier-Stokes equations with immersed boundaries. Our approach makes use of a Discontinuous Galerkin (DG) discretization in a domain that is implicitly defined by means of a level set function. The zero iso-contour of this level set function is considered as an additional domain boundary where we weakly enforce boundary conditions in the same manner as in boundary-fitted cells. In order to retain the full order of convergence of the scheme, it is crucial to perform volume and surface integrals in boundary cells with high accuracy. This is achieved using a linear moment-fitting strategy. Moreover, we apply a non-intrusive cell-agglomeration technique that averts problems with very small and ill-shaped cuts. The robustness, accuracy and convergence properties of the scheme are assessed in several two-dimensional test cases for the steady compressible Euler and Navier-Stokes equations. Approximation orders range from zero to four, although the approach generalizes directly to even higher orders. In all test cases with a sufficiently smooth solution, the experimental order of convergence matches the expected rate for DG schemes.
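Moment fitting determines quadrature weights in cut cells by matching exact moments of a basis over the cut region; a minimal 1-D sketch (monomial basis, fixed nodes inside a cut interval) illustrates the linear solve involved. The setup is a generic illustration, not the paper's specific linear moment-fitting strategy.

```python
import numpy as np

def moment_fit_weights(nodes, moments):
    """Solve the moment-fitting system V w = m, where V[j, i] = nodes[i]**j
    and moments[j] is the exact j-th monomial moment over the cut region."""
    V = np.vander(nodes, N=len(moments), increasing=True).T
    return np.linalg.solve(V, moments)

# 1-D cut cell [0, s] inside the unit cell; s marks the level-set crossing
s = 0.6
degrees = np.arange(3)
moments = s**(degrees + 1) / (degrees + 1)   # exact integrals of x**j on [0, s]
nodes = np.array([0.1, 0.3, 0.5])            # quadrature nodes inside the cut
w = moment_fit_weights(nodes, moments)
```

By construction the resulting rule integrates any polynomial up to the matched degree exactly over the cut region.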

The finite element method (FEM) is a well-developed method for solving real-world problems that can be modeled with differential equations. As the available computational power increases, large and complex problems can be solved using FEM, typically involving multiple degrees of freedom (DOF) per node, high-order elements, and an iterative solver requiring several sparse matrix-vector multiplication (SpMV) operations. In this work, a new storage scheme is proposed for sparse matrices arising from FEM simulations with multiple DOF per node. A SpMV kernel and its variants using the proposed scheme are also given for CUDA-enabled GPUs. The proposed scheme and the kernels rely on the mesh connectivity data from the FEM discretization and the number of DOF per node. The proposed kernel performance was evaluated on seven test matrices for double-precision floating point operations. The performance analysis showed that the proposed GPU kernel outperforms the ELLPACK (ELL) and CUSPARSE Hybrid (HYB) format GPU kernels by an average of 42% and 32%, respectively, on a Tesla K20c card.
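The payoff of exploiting a fixed number of DOF per node is that one column index can serve an entire small dense block, as in a block-CSR (BSR) layout; the Python reference kernel below illustrates the data layout only (the paper's kernels are CUDA, and the proposed scheme is not necessarily plain BSR).

```python
import numpy as np

def bsr_spmv(blocks, col_idx, row_ptr, x, d):
    """y = A @ x for a block-sparse matrix stored per node: one column
    index per d-by-d block, shrinking index data by a factor d*d
    relative to scalar CSR.

    blocks  : (nnzb, d, d) array of dense blocks
    col_idx : block column index of each block
    row_ptr : CSR-style offsets into `blocks`, one entry per block row"""
    nrows = len(row_ptr) - 1
    y = np.zeros(nrows * d)
    for i in range(nrows):                        # loop over block rows
        for kk in range(row_ptr[i], row_ptr[i + 1]):
            j = col_idx[kk]                       # block column
            y[i * d:(i + 1) * d] += blocks[kk] @ x[j * d:(j + 1) * d]
    return y
```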

This paper proposes a new methodology to guarantee the accuracy of the homogenisation schemes that are traditionally employed to approximate the solution of PDEs with random, fast evolving diffusion coefficients. More precisely, in the context of linear elliptic diffusion problems in randomly packed particulate composites, we develop an approach to strictly bound the error in the expectation and second moment of quantities of interest, without ever solving the fine-scale, intractable stochastic problem. The most attractive feature of our approach is that the error bounds are computed without any integration of the fine-scale features. Our computations are purely macroscopic, deterministic, and remain tractable even for small scale ratios. The second contribution of the paper is an alternative derivation of modelling error bounds through the Prager-Synge hypercircle theorem. We show that this approach allows us to fully characterise and optimally tighten the interval in which predicted quantities of interest are guaranteed to lie. We interpret our optimum result as an extension of Reuss-Voigt approaches, which are classically used to estimate the homogenised diffusion coefficients of composites, to the estimation of macroscopic engineering quantities of interest. Finally, we make use of these derivations to obtain an efficient procedure for multiscale model verification and adaptation.

The current article presents a Lagrangian cell-centred finite volume solution methodology for the simulation of metal forming processes. Details are given of the mathematical model in updated Lagrangian form, where a hyperelastoplastic *J*_{2} constitutive relation has been employed. The cell-centred finite volume discretisation is described, where a modified discretisation is proposed to alleviate erroneous hydrostatic pressure oscillations, and an outline of the memory-efficient segregated solution procedure is given. The accuracy and order of accuracy of the method are examined on a number of 2-D and 3-D elastoplastic benchmark test cases, where good agreement with available analytical and finite element solutions is achieved.

We consider a technique to estimate an approximate gradient using an ensemble of randomly chosen control vectors, known as Ensemble Optimization (EnOpt) in the oil and gas reservoir simulation community. In particular, we address how to obtain accurate approximate gradients when the underlying numerical models contain uncertain parameters because of geological uncertainties. In that case, 'robust optimization' is performed by optimizing the expected value of the objective function over an ensemble of geological models. In earlier publications, based on the pioneering work of Chen et al. (2009), it has been suggested that a straightforward one-to-one combination of random control vectors and random geological models is capable of generating sufficiently accurate approximate gradients. However, this form of EnOpt does not always yield satisfactory results. In a recent article, Fonseca et al. (2015) formulate a modified EnOpt algorithm, referred to here as the Stochastic Simplex Approximate Gradient (StoSAG; in earlier publications referred to as "modified robust EnOpt"), and show, via computational experiments, that StoSAG generally yields significantly better gradient approximations than the standard EnOpt algorithm. Here, we provide theoretical arguments to show why StoSAG is superior to EnOpt.
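The essential StoSAG ingredient can be sketched in a few lines: each perturbed objective is measured against the unperturbed objective of the *same* geological realization before regressing on the control perturbations, which removes the geological noise from the differences. The function below is a schematic estimator under assumed Gaussian perturbations, not the authors' algorithm verbatim.

```python
import numpy as np

def stosag_gradient(J, u0, models, sigma=0.1, rng=None):
    """Approximate gradient of the robust objective E_m[J(u, m)] at u0.
    Each random control perturbation is paired with one model realization,
    and the objective change is taken relative to J(u0, m) for that same
    model (the per-model baseline that distinguishes StoSAG from EnOpt)."""
    rng = rng or np.random.default_rng(0)
    dU, dJ = [], []
    for m in models:
        du = sigma * rng.standard_normal(u0.size)
        dU.append(du)
        dJ.append(J(u0 + du, m) - J(u0, m))      # per-model baseline
    g, *_ = np.linalg.lstsq(np.array(dU), np.array(dJ), rcond=None)
    return g                                     # least-squares slope
```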

When geometric uncertainties arising from manufacturing errors are comparable to the characteristic length, or the product responses are sensitive to such uncertainties, the products of deterministic design cannot perform robustly. This paper presents a new level set-based framework for robust shape and topology optimization against geometric uncertainties. We first propose a stochastic level set perturbation model of uncertain topology/shape to characterize manufacturing errors in conjunction with Karhunen–Loève (K–L) expansion. We then utilize polynomial chaos expansion (PCE) to implement the stochastic response analysis. In this context, the mathematical formulation of the considered robust shape and topology optimization problem is developed, and the adjoint-variable shape sensitivity scheme is derived. An advantage of this method is that relatively large shape variations and even topological changes can be accounted for with the desired accuracy and efficiency. Numerical examples are given to demonstrate the validity of the present formulation and numerical techniques. In particular, the method is justified by the observations in minimum compliance problems, where slender bars vanish when the manufacturing errors become comparable to the characteristic length of the structures.
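A discrete Karhunen–Loève expansion of the kind used to model such geometric uncertainty can be sketched via an eigendecomposition of the covariance matrix; the squared-exponential covariance and the 1-D grid are illustrative assumptions, not the paper's specific uncertainty model.

```python
import numpy as np

def kl_expansion(x, corr_len, var, n_terms, rng):
    """One sample path of a 1-D Gaussian random field via a truncated
    Karhunen-Loeve expansion on the grid `x`, assuming a
    squared-exponential covariance with variance `var`."""
    C = var * np.exp(-(x[:, None] - x[None, :])**2 / corr_len**2)
    lam, phi = np.linalg.eigh(C)                 # ascending eigenvalues
    lam, phi = lam[::-1][:n_terms], phi[:, ::-1][:, :n_terms]
    xi = rng.standard_normal(n_terms)            # independent N(0, 1) weights
    return phi @ (np.sqrt(np.maximum(lam, 0.0)) * xi)
```

In the robust optimization setting, each K–L weight vector would perturb the level set function, and PCE would then propagate these weights through the response analysis.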

A numerical procedure for analysis of general laminated plates under transverse load is developed utilizing the Mindlin plate theory, the finite volume discretization, and a segregated solution algorithm. The force and moment balance equations with the laminate constitutive relations are written in the form of a generic transport equation. In order to obtain discrete counterparts of the governing equations, the plate is subdivided into N control volumes by a Cartesian numerical mesh. As a result, 5 sets of N linear equations with N unknowns are obtained and solved using the conjugate gradient method with preconditioning.

For the method validation, a number of test cases are designed to cover thick and thin laminated plates with aspect ratios (width to thickness) from 4 to 100. Simply supported orthotropic, symmetric cross-ply and angle-ply laminated plates under uniform and sinusoidal pressure loads are solved, and the results are compared with available analytical solutions. The shear correction factor of 5/6 is utilized throughout the procedure, which is consistent with the test cases used in the reviewed literature. Comparison of the finite volume method results for maximum deflections at the center of the plate with the Navier solutions obtained for aspect ratios 10, 20, and 100 shows very good agreement.
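The preconditioned conjugate gradient solve applied to each of the five equation sets can be sketched generically; the Jacobi preconditioner below is an assumption for illustration, since the abstract does not specify which preconditioner is used.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Conjugate gradients for SPD A with preconditioning: M_inv
    approximates the action of A^{-1} and is applied once per iteration."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:              # converged
            break
        z = M_inv(r)                             # preconditioner application
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

For a Jacobi preconditioner one would pass `M_inv = lambda r: r / np.diag(A)`.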

Dynamic fragmentation is a rapid and catastrophic failure of a material. During this process, the nucleation, propagation, branching and coalescence of cracks result in the formation of fragments. The numerical modeling of this phenomenon is challenging because it requires high-performance computational capabilities. For this purpose, the finite element method with dynamic insertion of cohesive elements was chosen. This paper describes the parallel implementation of its fundamental algorithms in the C++ open-source library Akantu. Moreover, a numerical instability that can cause the loss of energy conservation is illustrated, together with possible solutions. Finally, the method is applied to the dynamic fragmentation of a hollow sphere subjected to uniform radial expansion.

In nonlinear model order reduction, hyper reduction designates the process of approximating a projection-based reduced-order operator on a reduced mesh, using a numerical algorithm whose computational complexity scales with the small size of the Projection-based Reduced-Order Model (PROM). Usually, the reduced mesh is constructed by sampling the large-scale mesh associated with the high-dimensional model underlying the PROM. The sampling process itself is governed by the minimization of the size of the reduced mesh for which the hyper reduction method of interest delivers the desired accuracy for a chosen set of training reduced-order quantities. Because such a construction procedure is combinatorially hard, its key objective function is conveniently substituted with a convex approximation. Nevertheless, for large-scale meshes, the resulting mesh sampling procedure remains computationally intensive. In this paper, three different convex approximations that promote sparsity in the solution are considered for constructing reduced meshes that are suitable for hyper reduction, and paired with appropriate active set algorithms for solving the resulting minimization problems. These algorithms are equipped with carefully designed parallel computational kernels in order to accelerate the overall process of mesh sampling for hyper reduction, and therefore achieve practicality for realistic, large-scale, nonlinear structural dynamics problems. Conclusions are also offered as to what algorithm is most suitable for constructing a reduced mesh for the purpose of hyper reduction.

We propose a method to couple the Smoothed Particle Hydrodynamics (SPH) and Finite Element (FEM) methods for non-linear transient fluid-structure interaction simulations by adopting different time-steps in the fluid and solid sub-domains. These developments were motivated by the need to simulate highly non-linear and sudden phenomena requiring the use of explicit time integrators on both sub-domains (explicit Newmark for the solid and Runge-Kutta 2 for the fluid). However, because of the critical time-step required for the stability of the explicit time integrators, it becomes important to be able to integrate each sub-domain with a different time-step while retaining the features offered by a previously developed mono-time-step coupling algorithm. To this end, a dual-Schur decomposition method originally proposed for structural dynamics was considered, allowing time integrators of the Newmark family with different time-steps to be coupled through Lagrange multipliers.

This article presents a family of variational integrators from a continuous time point of view. A general procedure for deriving symplectic integration schemes preserving an energy-like quantity is shown, which is based on the principle of virtual work. The framework is extended to incorporate holonomic constraints without using additional regularization. In addition, it is related to well-known partitioned Runge-Kutta methods and to other variational integration schemes. As an example, a concrete integration scheme is derived for the planar pendulum using both polar and Cartesian coordinates.
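A concrete instance of such a scheme is the Störmer-Verlet integrator, which is variational and therefore keeps an energy-like quantity bounded over long runs; below it is applied to the planar pendulum in polar coordinates. This is a generic textbook sketch, not the specific scheme derived in the paper.

```python
import numpy as np

def pendulum_verlet(theta0, omega0, dt, n_steps, g=9.81, l=1.0):
    """Stormer-Verlet (velocity Verlet) integration of the planar pendulum
    theta'' = -(g/l) * sin(theta) in polar coordinates."""
    thetas = np.empty(n_steps + 1)
    omegas = np.empty(n_steps + 1)
    th, om = theta0, omega0
    thetas[0], omegas[0] = th, om
    for i in range(n_steps):
        om_half = om - 0.5 * dt * (g / l) * np.sin(th)   # half kick
        th = th + dt * om_half                           # drift
        om = om_half - 0.5 * dt * (g / l) * np.sin(th)   # half kick
        thetas[i + 1], omegas[i + 1] = th, om
    return thetas, omegas

def energy(theta, omega, g=9.81, l=1.0):
    """Total energy per unit mass; conserved by the continuous dynamics
    and bounded (not drifting) under the symplectic discretization."""
    return 0.5 * (l * omega)**2 - g * l * np.cos(theta)
```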

In this work, a novel time marching procedure is presented, in which adaptive time integration parameters are locally computed, allowing different spatial and temporal distributions. As a consequence, a more versatile technique arises, enabling an enhanced performance. The main features of the proposed technique are: (i) it is simple; (ii) it has guaranteed stability; (iii) it is an efficient non-iterative adaptive single-step procedure; (iv) it provides enhanced accuracy; (v) it enables advanced controllable algorithmic dissipation in the higher modes; (vi) it is truly self-starting; (vii) it is entirely automatic, requiring no input parameter from the user. The proposed technique thus combines the main positive attributes expected of an effective time marching procedure. Numerical results are presented throughout the paper, illustrating the excellent performance of the method.

The treatments of heterogeneities and periodic boundary conditions are explored to properly perform isogeometric analysis (IGA) based on NURBS basis functions in solving homogenization problems for heterogeneous media with omni-directional periodicity and for composite plates with in-plane periodicity. Since the treatment of combinations of different materials in IGA models is not trivial, especially under periodicity constraints, the first priority is to clearly specify the points at issue in the numerical modeling, or equivalently the mesh generation, for isogeometric homogenization analysis (IGHA). The most awkward, but important, issue is how to generate patches for the NURBS representation of the geometry of a rectangular parallelepiped unit cell that realize appropriate deformations in consideration of the convex-hull property of IGA. The issue arises from the introduction of overlapped control points located at angular points in the heterogeneous unit cell, which must satisfy multiple point constraint (MPC) conditions associated with periodic boundary conditions (PBCs). Although two remedies are conceivable, we suggest the use of multiple patches along with double MPCs that impose the PBCs and the continuity conditions between different patches simultaneously. Several numerical material and plate tests are presented to demonstrate the validity of the proposed strategy of IG modeling for IGHA. This article is protected by copyright. All rights reserved.

A large amount of research in computational mechanics has been directed toward atomistic simulations. This trend is due, on one hand, to the increased demand to perform computations at the nano-scale and, on the other hand, to the rather simple application of pairwise potentials in modeling the interactions between atoms of a given crystal. The Cauchy-Born (CB) hypothesis has been used effectively to model the behavior of crystals under different loading conditions, with comparisons against molecular dynamics simulations showing good agreement between the results. A number of research works have been devoted to the validity of the CB hypothesis and its application in the post-elastic limit. However, the range of application of the CB hypothesis is limited, and it remains questionable whether it is still applicable beyond the validity limit. In this paper, a multi-scale technique is developed for modeling plastic deformations in nano-scale materials. The deformation gradient is decomposed into plastic and elastic parts, i.e. F = F^{p}F^{e}. This decomposition is in contrast to the conventional decomposition, F = F^{e}F^{p}, generally encountered in continuum and crystal plasticity. It is shown that the former decomposition is more appropriate for the problem dealt with in this work. Inspired by crystal plasticity, the plastic part is determined from the slip on potential slip systems. Based on the assumption that the CB hypothesis remains valid for homogeneous deformation, the elastic deformation gradient resulting from the aforementioned decomposition is employed in conjunction with the CB hypothesis to update the state variables for FCC crystals. The assumption of homogeneity of the elastic deformation gradient is justified by the fact that elastic deformations are considerably smaller than the plastic deformations.
The computational algorithms are derived in detail, and numerical simulations are presented through several examples to demonstrate the capability of the proposed computational algorithm in modeling gold crystals under different loading conditions. This article is protected by copyright. All rights reserved.

We present a framework to efficiently solve large deformation contact problems with nearly incompressible materials by implementing adaptive re-meshing. Specifically, nodally integrated elements are employed to avoid mesh locking when linear triangular or tetrahedral elements are used to facilitate mesh re-generation. Solution variables in the bulk and on contact surfaces are transferred between meshes such that accuracy is maintained and re-equilibration on the new mesh is possible. In particular, the displacement transfer in the bulk is accomplished through a constrained least squares (LS) problem based on nodal integration, while the transfer of contact tractions relies on parallel transport. Finally, a residual-based error indicator is chosen to drive adaptive mesh refinement. The proposed strategies are applicable to both two- and three-dimensional analysis and are demonstrated to be robust by a number of numerical examples. This article is protected by copyright. All rights reserved.

It was observed in [1,2] that the strain smoothing technique over higher order elements and arbitrary polytopes yields less accurate solutions than other techniques such as the conventional polygonal finite element method. In this work, we propose a linear strain smoothing scheme that improves the accuracy of linear and quadratic approximations over convex polytopes. The main idea is to subdivide the polytope into simplicial subcells and use a linear smoothing function in each subcell to compute the strain. This new strain is then used in the computation of the stiffness matrix. The convergence properties and accuracy of the proposed scheme are discussed by solving a few benchmark problems. Numerical results show that the proposed linear strain smoothing scheme makes the approximation based on polytopes able to deliver the same optimal convergence rate as traditional quadrilateral and hexahedral approximations. The accuracy is also improved, and all the methods tested pass the patch test to machine precision. This article is protected by copyright. All rights reserved.
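The subcell smoothing idea can be illustrated in 2D: with a constant smoothing function over a convex cell, the divergence theorem converts the area-averaged strain into a boundary integral that needs only the displacement on the edges. A minimal numpy sketch (an illustration of the general smoothing step, assuming counter-clockwise vertex ordering and displacements interpolated linearly along each edge):

```python
import numpy as np

def smoothed_strain_polygon(verts, u):
    # Constant strain smoothing over one polygonal (sub)cell via the
    # divergence theorem:  eps_bar = (1/A) \oint sym(u (x) n) ds.
    # verts: (n, 2) CCW vertices; u: (n, 2) nodal displacements.
    n = len(verts)
    area = 0.0
    eps = np.zeros((2, 2))
    for k in range(n):
        p0, p1 = verts[k], verts[(k + 1) % n]
        u_mid = 0.5 * (u[k] + u[(k + 1) % n])   # midpoint rule: exact for linear u
        ex, ey = p1 - p0
        L = np.hypot(ex, ey)
        nrm = np.array([ey, -ex]) / L           # outward unit normal (CCW polygon)
        area += 0.5 * (p0[0] * p1[1] - p0[1] * p1[0])   # shoelace area
        eps += 0.5 * (np.outer(u_mid, nrm) + np.outer(nrm, u_mid)) * L
    return eps / area
```

For any linear displacement field the recovered strain is exact, which is why such smoothing schemes pass the patch test to machine precision.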

This work proposes novel stability-preserving model order reduction approaches for vibro-acoustic finite element models. As most past research on these systems has focused on noise attenuation in the frequency domain, stability-preserving properties were of low priority. However, as interest in time-domain auralization and (model-based) active noise control increases, stability-preserving model order reduction techniques are becoming indispensable. The original finite element models for vibro-acoustic simulation are well established but require too much computational load for these applications. This work therefore proposes two new global approaches for the generation of stable reduced-order models. Based on proven conditions for stability preservation under one-sided projection, a reformulation of the displacement-fluid velocity potential (*u* − *ϕ*) formulation is proposed. In contrast to the regular formulation, the proposed approach leads to a new asymmetric structure for the system matrices, which is proven to preserve stability under one-sided projection. The second approach starts from a displacement-pressure (*u* − *p*) description in which the system-level projection space is decoupled for the two domains, for which we also prove the preservation of stability. Two numerical validation cases demonstrate the inadequacy of straightforward model order reduction of typical vibro-acoustic models for time-domain simulation and compare the performance of the proposed approaches. Both proposed approaches effectively preserve the stability of the original system. This article is protected by copyright. All rights reserved.

Discretization-induced oscillations in the load–displacement curve are a well-known problem for simulations of cohesive crack growth with finite elements. The problem results from an insufficient resolution of the complex stress state within the cohesive zone ahead of the crack tip. This work demonstrates that the *hp*-version of the finite element method is ideally suited to resolve this complex and localized solution characteristic with high accuracy and low computational effort. To this end, we formulate a local and hierarchic mesh refinement scheme that follows the propagating crack tip *dynamically*. In this way, the usually applied *static* a priori mesh refinement along the complete potential crack path is avoided, which significantly reduces the size of the numerical problem. Studying systematically the influence of *h*-refinement, *p*-refinement, and *hp*-refinement, we demonstrate why the suggested *hp*-formulation accurately captures the complex stress state at the crack front, preventing artificial snap-through and snap-back effects. This significantly decreases the number of degrees of freedom and the simulation runtime. Furthermore, we show that by combining this idea with the finite cell method, crack propagation within complex domains can be simulated efficiently without resolving the geometry by the mesh. Copyright © 2016 John Wiley & Sons, Ltd.

The FFT-based homogenization method of Moulinec–Suquet has recently emerged as a powerful tool for computing the macroscopic response of complex microstructures for elastic and inelastic problems. In this work, we generalize the method to problems discretized by trilinear hexahedral elements on Cartesian grids and to physically nonlinear elasticity problems. We present an implementation of the basic scheme that reduces the memory requirements by a factor of four, and an implementation of the conjugate gradient scheme that reduces the necessary storage by a factor of nine, compared with naive implementations.

For benchmark problems in linear elasticity, the solver exhibits mesh- and contrast-independent convergence behavior and enables the computational homogenization of complex structures, for instance, arising from computed tomography (CT) imaging techniques. There exist 3D microstructures involving pores and defects for which the original FFT-based homogenization scheme does not converge; in contrast, convergence is ensured for the proposed scheme. Also, the solution fields are devoid of the spurious oscillations and checkerboarding artifacts associated with conventional schemes. We demonstrate the power of the approach by computing the elasto-plastic response of a long-fiber reinforced thermoplastic material with 172 × 10^{6} (displacement) degrees of freedom. Copyright © 2016 John Wiley & Sons, Ltd.
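For readers unfamiliar with the underlying Moulinec–Suquet iteration, the sketch below implements the basic scheme for the simpler scalar (thermal conduction) analogue of the elasticity problem on a 2D periodic grid. The structure is the same as in the elasticity scheme the paper builds on: a pointwise constitutive evaluation alternated with a Fourier-space projection by the Green operator of a homogeneous reference medium.

```python
import numpy as np

def fft_homogenize(k, E, tol=1e-8, max_iter=2000):
    """Moulinec-Suquet basic scheme for periodic scalar conduction.

    k : (N, N) conductivity field on a periodic Cartesian grid.
    E : (2,) prescribed mean temperature gradient.
    Returns the homogenized mean flux and the iteration count.
    """
    N = k.shape[0]
    k0 = 0.5 * (k.min() + k.max())          # reference medium
    freq = np.fft.fftfreq(N) * N            # integer frequencies
    XI = np.stack(np.meshgrid(freq, freq, indexing="ij"))
    xi2 = (XI ** 2).sum(axis=0)
    xi2[0, 0] = 1.0                         # avoid 0/0; zero mode set below
    e = np.zeros((2, N, N))
    e += E[:, None, None]                   # initial guess: uniform gradient
    for it in range(max_iter):
        q = k * e                           # local flux q = k grad(T)
        q_hat = np.fft.fft2(q, axes=(1, 2))
        # discrete equilibrium: xi . q_hat = 0 for all non-zero frequencies
        div = XI[0] * q_hat[0] + XI[1] * q_hat[1]
        if np.linalg.norm(div) <= tol * np.linalg.norm(q_hat[:, 0, 0]):
            break
        # Green operator of the reference medium: Gamma0 q = xi (xi.q)/(k0 |xi|^2)
        proj = (XI * q_hat).sum(axis=0) / (k0 * xi2)
        e_hat = np.fft.fft2(e, axes=(1, 2)) - XI * proj
        e_hat[:, 0, 0] = E * N * N          # enforce the prescribed mean gradient
        e = np.fft.ifft2(e_hat, axes=(1, 2)).real
    return (k * e).mean(axis=(1, 2)), it
```

For a two-phase laminate resolved by the grid, the converged mean flux reproduces the harmonic mean of the phase conductivities, the known exact result for transport across layers.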

This work introduces a semi-analytical formulation for the simulation and modeling of curved structures based on the scaled boundary finite element method (SBFEM). This approach adapts the fundamental idea of the SBFEM concept to scale a boundary to describe a geometry. Until now, scaling in SBFEM has exclusively been performed along a straight coordinate that enlarges, shrinks, or shifts a given boundary. In this novel approach, scaling is based on a polar or cylindrical coordinate system such that a boundary is shifted along a curved scaling direction. The derived formulations are used to compute the static and dynamic stiffness matrices of homogeneous curved structures. The resulting elements can be coupled to general SBFEM or FEM domains. For elastodynamic problems, computations are performed in the frequency domain. Results of this work are validated using the global matrix method and standard finite element analysis. Copyright © 2016 John Wiley & Sons, Ltd.

We propose a new method to obtain contact forces in non-smooth contact problems between arbitrarily shaped bodies discretized by the finite element method. Contact forces are calculated with a particle-to-particle contact algorithm from smoothed particle hydrodynamics (SPH), a meshfree method, and are then applied to each colliding body. This approach has the advantage that accurate contact forces can be obtained during an accelerated collision, without the jump problem over a discrete time increment. Moreover, it can be applied straightforwardly to point-to-point, point-to-line, and point-to-surface contact problems for complex-shaped and deformable bodies. To describe the method, an impulse-based method, a unilateral contact method, and the smoothed particle hydrodynamics method are first introduced. Then, the proposed method is presented in detail. Finally, the accuracy of the proposed method is verified by checking conservation of momentum in three contact examples. Copyright © 2016 John Wiley & Sons, Ltd.

This paper considers stochastic hybrid stress quadrilateral finite element analysis of plane elasticity equations with stochastic Young's modulus and stochastic loads. Firstly, we apply the Karhunen–Loève expansion to the stochastic Young's modulus and the stochastic loads so as to turn the original problem into a system containing a finite number of deterministic parameters. Then we treat the stochastic field by *k*-version/*p*-version finite element methods and the space field by a hybrid stress quadrilateral finite element method. We derive a priori error estimates that are uniform with respect to the Lamé constant *λ*∈(0,+*∞*). Finally, we provide some numerical results. Copyright © 2016 John Wiley & Sons, Ltd.
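The Karhunen–Loève step can be sketched in a few lines: discretizing the covariance kernel on a grid and solving the resulting symmetric eigenproblem yields the KL eigenpairs, and truncating the expansion leaves a finite number of independent standard-normal parameters. A minimal sketch on a uniform 1D grid (rectangle-rule quadrature; the kernel and correlation length in the usage below are illustrative assumptions):

```python
import numpy as np

def kl_expansion(x, cov, n_terms):
    # Discrete KL of a zero-mean random field with covariance kernel cov(x1, x2):
    # eigendecompose the quadrature-weighted covariance matrix (uniform grid,
    # rectangle-rule weight h), then keep the n_terms largest modes.
    h = x[1] - x[0]
    C = cov(x[:, None], x[None, :])
    lam, phi = np.linalg.eigh(C * h)
    idx = np.argsort(lam)[::-1]                      # descending eigenvalues
    lam, phi = lam[idx], phi[:, idx] / np.sqrt(h)    # L2-normalized modes
    return lam[:n_terms], phi[:, :n_terms]

def sample_field(lam, phi, rng):
    # One realization: field = sum_k sqrt(lam_k) * xi_k * phi_k, xi_k ~ N(0,1).
    xi = rng.standard_normal(len(lam))
    return phi @ (np.sqrt(lam) * xi)
```

The truncated pairs `(sqrt(lam_k) * xi_k)` are exactly the finite set of deterministic parameters the stochastic Galerkin discretization then operates on.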

This paper builds on recent work developed by the authors for the numerical analysis of large strain solid dynamics, by introducing an upwind cell centred hexahedral finite volume framework implemented within the open source code OpenFOAM [http://www.openfoam.com/]. In Lee, Gil and Bonet (2013), a first-order hyperbolic system of conservation laws was introduced in terms of the linear momentum and the deformation gradient tensor of the system, leading to excellent behaviour in two-dimensional bending dominated nearly incompressible scenarios. The main aim of this paper is the extension of this algorithm into three dimensions, its tailor-made implementation into OpenFOAM and the enhancement of the formulation with three key novelties. First, the introduction of two different strategies in order to ensure the satisfaction of the underlying involutions of the system, that is, that the deformation gradient tensor must be curl-free throughout the deformation process. Second, the use of a discrete angular momentum projection algorithm and a monolithic Total Variation Diminishing Runge–Kutta time integrator combined in order to guarantee the conservation of angular momentum. Third, and for comparison purposes, an adapted Total Lagrangian version of the hyperelastic-GLACE nodal scheme of Kluth and Després (2010) is presented. A series of challenging numerical examples are examined in order to assess the robustness and accuracy of the proposed algorithm, benchmarking it against an ample spectrum of alternative numerical strategies developed by the authors in recent publications. Copyright © 2016 John Wiley & Sons, Ltd.

This paper describes a novel methodology that combines smoothed discrete particle hydrodynamics (SDPH) and the finite volume method (FVM) to enhance performance in solving gas-particle multiphase flow problems. To describe the collision and fluctuation of particles, the method also introduces an additional variable, the granular temperature, according to the kinetic theory of granular flow. A coupled SDPH–FVM framework has been established, in which the drag force and pressure gradient act on the SDPH particles and the momentum sources of the drag force are added back onto the FVM mesh. The proposed technique is a coupled discrete-continuum method based on the two-fluid model. For the discrete phase, SDPH is developed from smoothed particle hydrodynamics (SPH): the SPH properties are redefined and some new physical quantities are added to the traditional SPH parameters, making SDPH better suited to representing particle characteristics. For the continuum phase, FVM is employed to discretize the continuum flow field on a stationary grid, capturing the fluid characteristics. The coupled method exhibits strong efficiency and accuracy in several two-dimensional numerical simulations. Copyright © 2016 John Wiley & Sons, Ltd.

We present an adaptive variant of the measure-theoretic approach for stochastic characterization of micromechanical properties based on observations of quantities of interest at the coarse (macro) scale. The salient features of the proposed nonintrusive stochastic inverse solver are: identification of a nearly optimal sampling domain using an enhanced ant colony optimization algorithm for multiscale problems, an incremental Latin hypercube sampling method, adaptive discretization of the parameter and observation spaces, and adaptive selection of the number of samples. The complete test data for the TORAY T700GC-12K-31E and epoxy #2510 material system from the National Institute for Aviation Research report are employed to characterize and validate the proposed adaptive nonintrusive stochastic inverse algorithm for various unnotched and open-hole laminates. Copyright © 2016 John Wiley & Sons, Ltd.
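The Latin hypercube building block used by such samplers is easy to state: stratify each coordinate of the unit hypercube into *n* equal bins and place exactly one jittered point per bin, with an independent random bin permutation per dimension. A basic (non-incremental) sketch; the paper's incremental variant refines designs of this kind:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    # One stratified sample per bin in each of the d dimensions: bin index
    # comes from a random permutation, the position within the bin from a
    # uniform jitter, so every column covers all n bins exactly once.
    samples = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)
        samples[:, j] = (perm + rng.uniform(size=n)) / n
    return samples
```

Compared with plain Monte Carlo, each one-dimensional margin is guaranteed to be evenly covered, which is what makes the design attractive for expensive multiscale forward models.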

A new smoothed finite element method (S-FEM) with tetrahedral elements for finite strain analysis of nearly incompressible solids is proposed. The proposed method is basically a combination of the F-bar method and the edge-based S-FEM with tetrahedral elements (ES-FEM-T4) and is named ‘F-barES-FEM-T4’. F-barES-FEM-T4 inherits the accuracy and shear locking-free property of ES-FEM-T4. At the same time, it also inherits the volumetric locking-free property of the F-bar method. The isovolumetric part of the deformation gradient (*F*^{iso}) is derived from the **F** of ES-FEM-T4, whereas the volumetric part (

In this study, a new mean-strain 10-node tetrahedral element is developed using energy-sampling stabilization. The proposed 10-node tetrahedron is composed of several four-node linear tetrahedral elements, four tetrahedra in the corners and four tetrahedra that tile the central octahedron in three possible sets of four-node linear tetrahedra, corresponding to three different choices for the internal diagonal. The assumed strains are calculated from mean ‘basis function gradients.’ The energy-sampling technique introduced previously for removing zero-energy modes in the mean-strain hexahedron is adapted for the present element: the stabilization energy is evaluated on the four-corner tetrahedra. The proposed element naturally leads to a lumped-mass matrix and does not have unphysical low-energy vibration modes. For simplicity, we limit our developments to linear elasticity with compressible and nearly incompressible material. The numerical tests demonstrate that the present element performs well compared with the classical 10-node tetrahedral elements for shell and plate structures, and nearly incompressible materials. Copyright © 2016 John Wiley & Sons, Ltd.

Couple stress formulations have been given much attention lately because of their potential to explain cases in which the classical theory of elasticity fails to describe the mechanical behavior adequately. Such cases may include size-dependent stiffness, high stress gradients, and the response of materials with microstructure. Here, a new mixed Lagrangian formulation is developed for elastodynamic response within the consistent size-dependent skew-symmetric couple stress theory. With a specific choice of mixed variables, the formulation can be written with only C^{0} continuity requirements, without the need to introduce a Lagrange multiplier or penalty method. Furthermore, this formulation permits, for the first time, the determination of natural frequencies within the consistent couple stress theory. Details for the strong form of the equilibrium equations, constitutive model relations, boundary conditions, and the corresponding weak form are provided. In addition, the discrete forms are also discussed, with two approaches for reducing variables. Several simple two-dimensional computational example problems are then examined, along with a brief investigation of the effect of couple stress on natural frequencies, which exhibit size-dependence for most, but not all, modes. Copyright © 2016 John Wiley & Sons, Ltd.

Contact and fracture in the material point method require grid-scale enrichment or partitioning of material into distinct velocity fields to allow for displacement or velocity discontinuities at a material interface. A new method is presented in which a kernel-based damage field is constructed from the particle data. The gradient of this field is used to dynamically repartition the material into contact pairs at each node. This approach avoids the need to construct and evolve explicit cracks or contact surfaces and is therefore well suited to problems involving complex 3-D fracture with crack branching and coalescence. A straightforward extension of this approach permits frictional ‘self-contact’ between surfaces that are initially part of a single velocity field, enabling more accurate simulation of granular flow, porous compaction, fragmentation, and comminution of brittle materials. Numerical simulations of self contact and dynamic crack propagation are presented to demonstrate the accuracy of the approach. Copyright © 2016 John Wiley & Sons, Ltd.

We present three new sets of *C*^{1} hierarchical high-order tensor-product bases for conforming finite elements. The first basis is a high-order extension of the Bogner–Fox–Schmit basis. The edge and face functions are constructed using a combination of cubic Hermite and Jacobi polynomials with *C*^{1} global continuity on the common edges of elements. The second basis uses the tensor product of fifth-order Hermite polynomials and high-order functions and achieves global *C*^{1} continuity for meshes of quadrilaterals and *C*^{2} continuity on the element vertices. The third basis for triangles is also constructed using the tensor product of one-dimensional functions defined in barycentric coordinates. It also has global *C*^{1} continuity on edges and *C*^{2} continuity on vertices. A patch test is applied to the three considered elements. Projection and plate problems with smooth fabricated solutions are solved, and the performance of the *h*- and *p*-refinements is evaluated by comparing the approximation errors in the *L*_{2}- and energy norms. A plate with singularity is then studied, and *h*- and *p*-refinements are analysed. Finally, a transient problem with implicit time integration is considered. The results show exponential convergence rates with increasing polynomial order for the triangular and quadrilateral meshes of non-distorted and distorted elements. Copyright © 2016 John Wiley & Sons, Ltd.

In this paper, a simple locking-free triangular plate element, labeled here as the Mindlin-type triangular plate element with nine degrees of freedom (MTP9), is presented. The element employs an incompatible approximation with nine degrees of freedom (DOFs), independent of the nodes and the shape of the triangle, to define the displacements *u*/*v*/*w* (similar to a general solid element) along the *x*/*y*/*z* axes. It is free of shear locking, has a proper rank, and provides stable solutions for thick and thin plates. Moreover, the paper provides a new way to develop simple and efficient locking-free thick–thin-plate/shell elements. A variety of numerical examples demonstrate the convergence, accuracy, and robustness of the present element MTP9. Copyright © 2016 John Wiley & Sons, Ltd.

The paper is concerned with the modeling of the planar motion of a horizontal sheet of metal in a rolling mill. Inhomogeneous velocity profiles, with which the material is generated at one roll stand and enters the next one, lead to the time evolution of the deformation of the metal strip. We propose a novel kinematic description in which the axial coordinate is an Eulerian one, while the transverse motion of the sheet is modeled in a Lagrangian framework. The material volume travels across a finite element mesh, whose boundaries are in contact with the roll stands.

The concise mathematical formulation of the method is different from the classical form of the arbitrary Lagrangian–Eulerian approach with a rate form of constitutive relations.

The undeformed state of the strip is incompatible owing to the varying time rate of the generation of material. We treat this phenomenon by introducing the notion of intrinsic strains, which are mathematically described using the multiplicative decomposition of the deformation gradient.

Here, we present the approach for quasistatic simulations of in-plane elastoplastic deformations of the strip. A practically relevant problem with two strip segments and three roll stands is studied in a numerical example. Copyright © 2016 John Wiley & Sons, Ltd.

In this paper, we develop a dual-horizon peridynamics (DH-PD) formulation that naturally includes varying horizon sizes and completely resolves the ‘ghost force’ issue. To this end, the concept of the dual horizon is introduced to account for the unbalanced interactions between particles with different horizon sizes. The present formulation fulfills both the balance of linear momentum and the balance of angular momentum exactly. Neither the ‘partial stress tensor’ nor the ‘slice’ technique is needed to ameliorate the ghost force issue. We show that traditional peridynamics can be derived as a special case of the present DH-PD. All three peridynamic formulations, namely, bond-based, ordinary state-based, and non-ordinary state-based peridynamics, can be implemented within the DH-PD framework. Our DH-PD formulation allows for *h*-adaptivity and can be implemented in any existing peridynamics code with minimal changes. A simple adaptive refinement procedure is proposed, reducing the computational cost. Both two-dimensional and three-dimensional examples, including the Kalthoff–Winkler experiment and a plate with branching cracks, are tested to demonstrate the capability of the method. Copyright © 2016 John Wiley & Sons, Ltd.
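The ghost-force issue and its dual-horizon remedy can be made concrete with a schematic 1D, linearized bond-based example (an illustration, not the paper's full formulation): when horizons vary, summing only over each particle's own horizon counts some bonds one-way, so the total force does not vanish; pairing every bond with its reaction restores the balance of linear momentum exactly.

```python
import numpy as np

def dual_horizon_forces(x, u, delta, c=1.0):
    # Dual-horizon idea in 1D: every bond from i's own horizon H_i applies a
    # direct force on i AND the opposite reaction on j, even if i is not in
    # H_j. Newton's third law then holds bond by bond.
    n = len(x)
    f = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j and abs(x[j] - x[i]) <= delta[i]:   # j in H_i
                fb = c * (u[j] - u[i]) / abs(x[j] - x[i]) # linearized bond force
                f[i] += fb          # direct term
                f[j] -= fb          # reaction term
    return f

def naive_forces(x, u, delta, c=1.0):
    # 'Naive' varying-horizon PD: each particle sums only the bonds in its
    # own horizon. Unequal horizons leave some bonds unreciprocated, so the
    # total force is generally non-zero: the ghost force.
    n = len(x)
    f = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j and abs(x[j] - x[i]) <= delta[i]:
                f[i] += c * (u[j] - u[i]) / abs(x[j] - x[i])
    return f
```

With four particles and horizons 1.5/1.5/2.5/2.5, the naive sum leaves a spurious net force while the dual-horizon pairing sums exactly to zero.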

The variational approach to fracture is effective for simulating the nucleation and propagation of complex crack patterns but is computationally demanding. The model is a strongly nonlinear non-convex variational inequality that demands the resolution of small length scales. The current standard algorithm for its solution, alternate minimization, is robust but converges slowly and demands the solution of large, ill-conditioned linear subproblems. In this paper, we propose several advances in the numerical solution of this model that improve its computational efficiency. We reformulate alternate minimization as a nonlinear Gauss–Seidel iteration and employ over-relaxation to accelerate its convergence; we compose this accelerated alternate minimization with Newton's method, to further reduce the time to solution, and we formulate efficient preconditioners for the solution of the linear subproblems arising in both alternate minimization and in Newton's method. We investigate the improvements in efficiency on several examples from the literature; the new solver is five to six times faster on a majority of the test cases considered. © 2016 The Authors International Journal for Numerical Methods in Engineering Published by John Wiley & Sons Ltd.
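The accelerated alternate minimization can be illustrated on a toy smooth two-field energy (a hypothetical quadratic standing in for the non-convex fracture energy): alternate minimization is a nonlinear Gauss–Seidel sweep over the blocks, and over-relaxation extrapolates each block update. For this quadratic model the iteration reduces to classical SOR, which converges for relaxation factors in (0, 2).

```python
import numpy as np

def alt_min(omega=1.0, c=0.5, tol=1e-10, max_iter=1000):
    # Toy coupled energy:  f(u, a) = 0.5*(u-1)^2 + 0.5*(a-2)^2 + c*u*a,
    # convex for |c| < 1, with minimizer (u, a) = (0, 2) when c = 0.5.
    # Each line below is an exact block minimization, extrapolated by omega
    # (omega = 1 recovers plain alternate minimization / Gauss-Seidel).
    u = a = 0.0
    for it in range(1, max_iter + 1):
        u = (1 - omega) * u + omega * (1.0 - c * a)   # argmin_u, over-relaxed
        a = (1 - omega) * a + omega * (2.0 - c * u)   # argmin_a, over-relaxed
        grad = max(abs((u - 1.0) + c * a), abs((a - 2.0) + c * u))
        if grad < tol:
            break
    return u, a, it
```

Even on this tiny model, a modest over-relaxation factor roughly halves the iteration count relative to plain alternate minimization, mirroring the speed-ups reported for the fracture solver.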

Among all 3D 8-node hexahedral solid elements in current finite element libraries, the ‘best’ ones can produce good results for bending problems using coarse regular meshes. However, once the mesh is distorted, the accuracy drops dramatically, and how to overcome this sensitivity remains an outstanding challenge. This paper develops an 8-node, 24-DOF (three conventional DOFs per node) hexahedral element based on the virtual work principle, in which two different sets of displacement fields are employed simultaneously to formulate an unsymmetric element stiffness matrix. The first set simply utilizes the formulations of the traditional 8-node trilinear isoparametric element, while the second set mainly employs analytical trial functions in terms of 3D oblique coordinates (*R*, *S*, *T*). The resulting element, denoted US-ATFH8, contains no adjustable factor and can be used for both isotropic and anisotropic cases. Numerical examples show that it strictly passes both the first-order (constant stress/strain) patch test and the second-order patch test for pure bending, is free of volumetric locking, and is invariant under coordinate rotation. Especially, it is insensitive to various severe mesh distortions. Copyright © 2016 John Wiley & Sons, Ltd.

This paper presents a new efficient monolithic finite element solution scheme to treat the set of PDEs governing a 2D, biphasic, saturated theory of porous media (TPM) model with intrinsically coupled and incompressible solid and fluid constituents, for infinitesimal and large elastic deformation. Our approach, which inherits some of its techniques from CFD, is characterized by the following aspects: (1) it only performs operator evaluations, with no additional Gâteaux derivatives; in particular, computation of the time-consuming material tangent matrix is not involved; (2) it solves the non-linear dynamic problem with no restriction on the strength of coupling; (3) it is more efficient than the linear *u*-*v*-*p* solver discussed in previous works; (4) it requires weaker derivatives, and hence lower-order finite elements can be tested; and (5) the boundary conditions are reduced, solution-independent, and more convenient to apply than in the old *u*-*v*-*p* formulation. For the purpose of validation and comparison, prototypical simulations including analytical solutions are carried out, and finally an adaptive time stepping procedure is introduced to handle rapid changes in the number of nonlinear iterations that may occur. Copyright © 2016 John Wiley & Sons, Ltd.

Accurate sizing functions are crucial for efficient generation of high-quality meshes, but to define the sizing function is often the bottleneck in complicated mesh generation tasks because of the tedious user interaction involved. We present a novel algorithm to automatically create high-quality sizing functions for surface mesh generation. First, the tessellation of a Computer Aided Design (CAD) model is taken as the background mesh, in which an initial sizing function is defined by considering geometrical factors and user-specified parameters. Then, a convex nonlinear programming problem is formulated and solved efficiently to obtain a smoothed sizing function that corresponds to a mesh satisfying necessary gradient constraint conditions and containing a significantly reduced element number. Finally, this sizing function is applied in an advancing front mesher. With the aid of a walk-through algorithm, an efficient sizing-value query scheme is developed. Meshing experiments of some very complicated geometry models are presented to demonstrate that the proposed sizing-function approach enables accurate and fully automatic surface mesh generation. Copyright © 2016 John Wiley & Sons, Ltd.
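A common way to impose the gradient constraint, shown here as a simple iterative alternative to the convex program solved in the paper, is to clamp sizes over background-mesh edges until the per-edge bound |s_i − s_j| ≤ g·L_ij holds everywhere (g is the allowed sizing gradient; all names are illustrative):

```python
import numpy as np

def limit_sizing_gradient(sizes, edges, lengths, g=0.3, max_sweeps=1000):
    # Gauss-Seidel-style clamping sweeps enforcing |s_i - s_j| <= g * L_ij on
    # every background-mesh edge (i, j). Clamping only ever reduces sizes, and
    # the fixed point is the largest field below the input satisfying the bound.
    s = np.asarray(sizes, dtype=float).copy()
    for _ in range(max_sweeps):
        changed = False
        for (i, j), L in zip(edges, lengths):
            cap = g * L
            if s[i] > s[j] + cap:
                s[i] = s[j] + cap
                changed = True
            elif s[j] > s[i] + cap:
                s[j] = s[i] + cap
                changed = True
        if not changed:
            break
    return s
```

The convex-programming formulation in the paper pursues the same feasibility conditions while additionally maximizing the sizes (and hence minimizing the element count) globally rather than edge by edge.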

This contribution presents a novel approach to structural shape optimization that relies on an embedding domain discretization technique. The evolving shape design is embedded within a uniform finite element background mesh which is then used for the solution of the physical state problem throughout the course of the optimization. We consider a boundary tracking procedure based on adaptive mesh refinement to separate between interior elements, exterior elements, and elements intersected by the physical domain boundary. A selective domain integration procedure is employed to account for the geometric mismatch between the uniform embedding domain discretization and the evolving structural component. Thereby, we avoid the need to provide a finite element mesh that conforms to the structural component for every design iteration, as it is the case for a standard Lagrangian approach to structural shape optimization. Still, we adopt an explicit shape parametrization that allows for a direct manipulation of boundary vertices for the design evolution process. In order to avoid irregular and impracticable design updates, we consider a geometric regularization technique to render feasible descent directions for the course of the optimization. Copyright © 2016 John Wiley & Sons, Ltd.

The paper deals with two main advantages in the analysis of slender elastic structures, both achieved through the mixed (stress and displacement) format with respect to the more commonly used displacement format: (i) the smaller error in the extrapolations usually employed in the solution strategies of nonlinear problems and (ii) the lower polynomial dependence of the problem equations on the finite element degrees of freedom when solid finite elements are used. The smaller extrapolation error produces a lower number of iterations and larger step lengths in path-following analysis and a greater accuracy in the Koiter asymptotic method. To focus on the origin of the phenomenon, the two formats are derived for the same finite element interpolation. The reduced polynomial dependence improves the Koiter asymptotic strategy in terms of computational efficiency, accuracy, and simplicity. Copyright © 2016 John Wiley & Sons, Ltd.

This paper presents a novel class of preconditioners for the iterative solution of the sequence of symmetric positive-definite linear systems arising from the numerical discretization of transient parabolic and self-adjoint partial differential equations. The preconditioners are obtained by nesting appropriate projections of reduced-order models into the classical iteration of the preconditioned conjugate gradient (PCG). The main idea is to employ the reduced-order solver to project the residual associated with the conjugate gradient iterations onto the space spanned by the reduced bases. This approach is particularly appealing for transient systems where the full-model solution has to be computed at each time step. In these cases, the natural reduced space is the one generated by full-model solutions at previous time steps. When increasing the size of the projection space, the proposed methodology greatly reduces the condition number of the system and the number of PCG iterations at every time step. The cost of the application of the preconditioner increases linearly with the size of the projection basis, and a trade-off must be found to effectively reduce the PCG computational cost. The quality and efficiency of the proposed approach are finally tested in the solution of groundwater flow models. © 2016 The Authors. *International Journal for Numerical Methods in Engineering* Published by John Wiley & Sons Ltd.
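The nesting of a reduced-order solve inside PCG can be sketched as a deflation-style preconditioner that projects the residual onto the span of a reduced basis `V` (a minimal NumPy illustration of the idea, not the authors' implementation):

```python
import numpy as np

def deflated_pcg(A, b, V, tol=1e-10, maxit=500):
    """Preconditioned CG where the preconditioner adds a coarse correction
    that solves the problem exactly on span(V) -- a minimal analogue of
    projecting the CG residual onto the reduced space at each iteration."""
    Ac = V.T @ (A @ V)                         # reduced-order operator

    def M_inv(r):                              # identity smoother + coarse solve
        return r + V @ np.linalg.solve(Ac, V.T @ r)

    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit
```

When `V` spans the eigendirections responsible for the small eigenvalues (in the paper, previous time-step solutions play this role), the preconditioned spectrum clusters and the iteration count drops sharply.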

A shear-flexible isogeometric Reissner–Mindlin shell formulation using non-uniform rational B-spline basis functions is introduced, which is used for the demonstration of a coupling approach for multiple non-conforming patches. The six-degrees-of-freedom formulation uses the exact surface normal vectors and curvature. The shell formulation is implemented in an isogeometric analysis framework for computation of structures composed of multiple geometric entities. To enable local model refinement as well as the coupling of non-matching domains, a conservative multi-patch approach using Lagrange multipliers for structured non-uniform rational B-spline patches is presented. Here, an additional border frame mesh is used to couple geometries during structural analyses. This frame interface approach avoids the problem of excessive constraints when multiple patches are coupled at one point. First, the shell formulation is verified with several reference cases. Then the influence of the frame interface discretization and frame penalty stiffness on the smoothness of the results is investigated. The effects of the perturbed Lagrangian method in combination with the frame interface approach are shown. In addition, results of models with T-joint interface connections and perpendicular stiffener patches are presented. Copyright © 2016 John Wiley & Sons, Ltd.

A non-gradient-based approach for topology optimization using a genetic algorithm is proposed in this paper. The genetic algorithm used in this paper is assisted by the Kriging surrogate model to reduce the computational cost required for function evaluation. To validate the non-gradient-based topology optimization method in flow problems, this research focuses on two single-objective optimization problems, where the objective functions are to minimize pressure loss and to maximize heat transfer of flow channels, and one multi-objective optimization problem, which combines the two earlier single-objective problems. The shape of flow channels is represented by the level set function. The pressure loss and the heat transfer performance of the channels are evaluated by the Building-Cube Method code, which is a Cartesian-mesh CFD solver. The topologies obtained with the proposed method agree with a previous study for the single-objective problems, and the method achieved global exploration of non-dominated solutions in the multi-objective problem. © 2016 The Authors. International Journal for Numerical Methods in Engineering Published by John Wiley & Sons Ltd

Reproducing kernel particle method (RKPM) has been applied to many large deformation problems. RKPM relies on polynomial reproducing conditions to yield desired accuracy and convergence properties but requires appropriate kernel support coverage of neighboring nodes to ensure kernel stability. This kernel stability condition is difficult to achieve for problems with large particle motion such as the fragment-impact processes that commonly exist in extreme events. A new reproducing kernel formulation with ‘quasi-linear’ reproducing conditions is introduced. In this approach, the first-order polynomial reproducing conditions are approximately enforced to yield a nonsingular moment matrix. With proper error control of the first-order completeness, nearly second-order convergence rate in L2 norm can be achieved while maintaining kernel stability. The effectiveness of this quasi-linear RKPM formulation is demonstrated by modeling several extremely large deformation and fragment-impact problems. Copyright © 2016 John Wiley & Sons, Ltd.
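The role of the moment matrix in the reproducing conditions, and the effect of relaxing first-order completeness to keep it nonsingular, can be sketched in one dimension. The regularization parameter `eps` below is a simple stand-in for the paper's quasi-linear construction, not the authors' formulation:

```python
import numpy as np

def rk_shape(x, nodes, a, eps=0.0):
    """1D reproducing kernel shape functions with linear basis H = [1, x - xI].
    eps adds a small diagonal regularization of the moment matrix M, mimicking
    the 'quasi-linear' idea: first-order reproduction is enforced only
    approximately so that M stays invertible under poor kernel coverage."""
    H = lambda d: np.array([1.0, d])
    phi = lambda d: max(0.0, 1.0 - (d / a) ** 2) ** 2   # smooth bump kernel
    M = eps * np.eye(2)
    w = []
    for xI in nodes:
        wI = phi(abs(x - xI))
        M += wI * np.outer(H(x - xI), H(x - xI))
        w.append(wI)
    c = np.linalg.solve(M, H(0.0))                      # basis evaluated at d = 0
    return np.array([wI * (c @ H(x - xI)) for xI, wI in zip(nodes, w)])
```

With `eps = 0` and adequate kernel coverage, the shape functions exactly reproduce constants and linears; with a single covered node the moment matrix is singular, and a tiny `eps` restores solvability at the cost of a controlled completeness error.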

We propose a formulation for linear elastic fracture mechanics in which the stress intensity factors are found directly from the solution vector of an extended boundary element method formulation. The enrichment is embedded in the boundary element method formulation, rather than adding new degrees of freedom for each enriched node. Therefore, a very limited number of new degrees of freedom is added to the problem, which contributes to preserving the conditioning of the linear system of equations. The Stroh formalism is used to provide boundary element method fundamental solutions for any degree of anisotropy, and these are used for both conventional and enriched degrees of freedom. Several numerical examples are shown with benchmark solutions to validate the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.

The fractional step method (FSM) is an efficient solution technique for the particle finite element method, a Lagrangian-based approach to simulate fluid–structure interaction (FSI). Despite various refinements, the applicability of the FSM has been limited to low viscosity flow and FSI simulations with a small number of equations along the fluid–structure interface. To overcome these limitations, while incorporating nonlinear response in the structural domain, an FSM that unifies structural and fluid response in the discrete governing equations is developed using the quasi-incompressible formulation. With this approach, fluid and structural particles do not need to be treated separately, and both domains are unified in the same system of equations. Thus, the equations along the fluid–structure interface do not need to be segregated from the fluid and structural domains. Numerical examples compare the unified FSM with the non-unified FSM and show that the proposed method overcomes the slow convergence of the non-unified FSM at high values of viscosity. As opposed to the non-unified FSM, the number of iterations required for convergence with the unified FSM becomes independent of viscosity and time step, and the simulation run time does not depend on the size of the FSI interface. Copyright © 2016 John Wiley & Sons, Ltd.

Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models (ROMs) can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based ROMs, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full-order model is recovered to within discretization error. A parallel version of the resulting method can be used on supercomputers to generate proper orthogonal decomposition-based ROMs, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space. Copyright © 2016 John Wiley & Sons, Ltd.
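A single-pass incremental SVD of the kind mentioned above can be sketched with Brand-style rank-one updates (a minimal version without truncation; the paper combines such an update with adaptive snapshot selection):

```python
import numpy as np

def incremental_svd(U, S, c, tol=1e-12):
    """Single-pass, Brand-style rank-one update of a thin SVD basis with a
    new snapshot column c; only U and the singular values S are kept, so
    memory stays proportional to the basis size, not the snapshot count."""
    if U is None:                                  # first snapshot
        s = np.linalg.norm(c)
        return c[:, None] / s, np.array([s])
    p = U.T @ c
    res = c - U @ p                                # component outside span(U)
    rho = np.linalg.norm(res)
    if rho < tol:                                  # snapshot already in span(U)
        return U, S
    j = res[:, None] / rho
    K = np.block([[np.diag(S), p[:, None]],
                  [np.zeros((1, S.size)), np.array([[rho]])]])
    Uk, Sk, _ = np.linalg.svd(K)                   # small (k+1) x (k+1) SVD
    return np.hstack([U, j]) @ Uk, Sk
```

Because each update only rotates an orthonormal factor, feeding the snapshot columns one at a time recovers the singular values of the full snapshot matrix without ever storing it.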

The identification of the geological structure from seismic data is formulated as an inverse problem. The properties and the shape of the rock formations in the subsoil are described by material and geometric parameters, which are taken as input data for a predictive model. Here, the model is based on the Helmholtz equation, describing the acoustic response of the system for a given wave length. Thus, the inverse problem consists in identifying the values of these parameters such that the output of the model best agrees with the observations. The resulting optimization algorithm requires multiple queries to the model with different values of the parameters. Reduced order models are especially well suited to significantly reduce the computational overhead of the multiple evaluations of the model.

In particular, the proper generalized decomposition produces a solution explicitly stating the parametric dependence, where the parameters play the same role as the physical coordinates. A proper generalized decomposition solver is devised to inexpensively explore the parametric space along the iterative process. This exploration of the parametric space is in fact seen as a post-process of the generalized solution. The approach adopted demonstrates its viability when tested in two illustrative examples. Copyright © 2016 John Wiley & Sons, Ltd.

This paper proposes an efficient metamodeling approach for the uncertainty quantification of complex systems based on a Gaussian process model (GPM). The proposed GPM-based method is able to efficiently and accurately calculate the mean and variance of model outputs with uncertain parameters specified by arbitrary probability distributions. Because of the use of the GPM, closed-form expressions for the mean and variance can be derived by decomposing high-dimensional integrals into one-dimensional integrals. This paper details how to compute the one-dimensional integrals efficiently. When the parameters are either uniformly or normally distributed, the one-dimensional integrals can be evaluated analytically; when the parameters follow other distributions, an effective Gaussian quadrature technique is adopted for their fast computation. As a result, the developed GPM method calculates the mean and variance of model outputs efficiently, independent of the parameter distributions. The proposed GPM method is applied to a collection of examples, and its accuracy and efficiency are compared with Monte Carlo simulation, which is used as the benchmark solution. Results show that the proposed GPM method is feasible and reliable for efficient uncertainty quantification of complex systems in terms of computational accuracy and efficiency. Copyright © 2016 John Wiley & Sons, Ltd.
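The decomposition into one-dimensional integrals can be illustrated for a squared-exponential GP kernel, where each 1D integral against a normal input has a closed form and Gauss–Hermite quadrature serves as the fallback for other distributions (illustrative parameter values only; this is not the paper's specific model):

```python
import numpy as np

def kernel_mean_1d_analytic(c, l, mu, sigma):
    """E_{x~N(mu, sigma^2)}[exp(-(x-c)^2 / (2 l^2))] in closed form
    (Gaussian-times-Gaussian convolution identity)."""
    s2 = l ** 2 + sigma ** 2
    return l / np.sqrt(s2) * np.exp(-(mu - c) ** 2 / (2 * s2))

def kernel_mean_1d_quadrature(c, l, mu, sigma, n=40):
    """The same 1D integral by Gauss-Hermite quadrature, the generic route
    when the input distribution admits no closed-form kernel expectation."""
    t, w = np.polynomial.hermite_e.hermegauss(n)   # nodes/weights for N(0, 1)
    x = mu + sigma * t
    return (w @ np.exp(-(x - c) ** 2 / (2 * l ** 2))) / np.sqrt(2 * np.pi)
```

For independent inputs, the high-dimensional kernel expectation is the product of such 1D factors, which is what makes the GPM mean and variance cheap to assemble.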

A nonparametric probabilistic approach for modeling uncertainties in projection-based, nonlinear, reduced-order models is presented. When experimental data are available, this approach can also quantify uncertainties in the associated high-dimensional models. The main underlying idea is twofold. First, to replace the deterministic reduced-order basis (ROB) with a stochastic counterpart. Second, to construct the probability measure of the stochastic reduced-order basis (SROB) on a subset of a compact Stiefel manifold in order to preserve some important properties of a ROB. The stochastic modeling is performed so that the probability distribution of the constructed SROB depends on a small number of hyperparameters. These are determined by solving a reduced-order statistical inverse problem. The mathematical properties of this novel approach for quantifying model uncertainties are analyzed through theoretical developments and numerical simulations. Its potential is demonstrated through several example problems from computational structural dynamics. Copyright © 2016 John Wiley & Sons, Ltd.

By separation of scales and the homogenization of a flow through porous media, a two-scale problem arises where a Darcy-type flow is present on the macroscale and a Stokes flow on the subscale. In this paper, the problem is given as the minimization of a potential. Additional constraints imposing periodicity in a weak sense are added using Lagrange multipliers. In particular, the upper and lower energy bounds for the corresponding strongly periodic problem are produced, quantifying the accuracy of the weakly periodic boundary conditions. A numerical example demonstrates the evaluation of energy bounds and the performance of weakly periodic boundary conditions on a representative volume element. © 2016 The Authors. International Journal for Numerical Methods in Engineering Published by John Wiley & Sons Ltd

In this paper, a four-node quadrilateral flat shell element is proposed for geometrically nonlinear analysis based on an updated Lagrangian formulation with the co-rotational kinematics concept. The flat shell element combines the membrane element with drilling degrees of freedom and the plate element with shear deformation. By means of these linearized elements, a simplified nonlinear analysis procedure allowing for warping of the flat shell element and large rotation is proposed. The tangent stiffness matrix and the internal force recovery are formulated in this paper. Several classic benchmark examples are presented to validate the accuracy and efficiency of the proposed element for practical engineering analysis of shell structures. Copyright © 2016 John Wiley & Sons, Ltd.

It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits ‘fictitious’ mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method that is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. We present results that show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions. Copyright © 2016 John Wiley & Sons, Ltd.
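A simple smooth thresholding on the simplex can be sketched as power scaling with renormalization, which keeps the volume fractions summing to one and has smooth derivatives (a stand-in illustration of the simplex viewpoint, not the authors' exact scheme):

```python
import numpy as np

def simplex_threshold(x, beta=4.0):
    """Smoothly push volume fractions (barycentric coordinates on the material
    simplex) toward the nearest vertex: power scaling with renormalization
    stays on the simplex and is differentiable for positive fractions."""
    z = x ** beta
    return z / z.sum(axis=-1, keepdims=True)

x = np.array([0.6, 0.3, 0.1])       # fictitious three-material mixture
y = simplex_threshold(x)            # sharpened toward material 0
```

Increasing `beta` sharpens the projection toward an integer-valued design, while the uniform mixture (the simplex barycenter) is a fixed point, so the map works for any number of phases.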

This work discusses a discontinuous Galerkin (DG) discretization for two-phase flows. The fluid interface is represented by a level set, and the DG approximation space is adapted such that jumps and kinks in pressure and velocity fields can be approximated sharply. This adaption of the original DG space, which can be performed ‘on-the-fly’ for arbitrary interface shapes, is referred to as extended discontinuous Galerkin. By combining this ansatz with a special quadrature technique, one can regain spectral convergence properties for low-regularity solutions, which is demonstrated by numerical examples. This work focuses on the aspects of spatial discretization, and special emphasis is placed on how to overcome problems related to quadrature, small cut cells, and the condition number of linear systems. Temporal discretization will be discussed in future work. Copyright © 2016 John Wiley & Sons, Ltd.

Digital imaging technologies such as X-ray scans and ultrasound provide a convenient and non-invasive way to capture high-resolution images. The colour intensity of digital images provides information on the geometrical features and material distribution which can be utilised for stress analysis. The proposed approach employs an automatic and robust algorithm to generate quadtree (2D) or octree (3D) meshes from digital images. The use of polygonal elements (2D) or polyhedral elements (3D) constructed by the scaled boundary finite element method avoids the issue of hanging nodes (mesh incompatibility) commonly encountered by finite elements on quadtree or octree meshes. The computational effort is reduced by considering the small number of cell patterns occurring in a quadtree or an octree mesh. Examples with analytical solutions in 2D and 3D are provided to show the validity of the approach. Other examples including the analysis of 2D and 3D microstructures of concrete specimens as well as of a domain containing multiple spherical holes are presented to demonstrate the versatility and the simplicity of the proposed technique. Copyright © 2016 John Wiley & Sons, Ltd.

A new isogeometric density-based approach for the topology optimization of multi-material structures is presented. In this method, the density fields of multiple material phases are represented using the isogeometric non-uniform rational B-spline-based parameterization leading to exact modeling of the geometry, removing numerical artifacts and full analytical computation of sensitivities in a cost-effective manner. An extension of the perimeter control technique is introduced where restrictions are imposed on the perimeters of density fields of all phases. Consequently, one can control not only the complexity of the optimal design but also the minimal length scales of all material phases. This leads to optimal designs with significantly enhanced manufacturability and comparable performance. Unlike the common element-wise or nodal-based density representations, owing to higher order continuity of density fields in this method, their gradients required for perimeter control restrictions are calculated exactly without additional computational cost. The problem is formulated with constraints on either (1) volume fractions of different material phases or (2) the total mass of the structure. The proposed method is applied for the minimal compliance design of two-dimensional structures consisting of multiple distinct materials as well as functionally graded ones. Copyright © 2016 John Wiley & Sons, Ltd.

We propose a numerical method for a fluid–structure interaction problem. The material of the structure is homogeneous and isotropic, and it can be described by the compressible neo-Hookean constitutive equation, while the fluid is governed by the Navier–Stokes equations. Our study does not use a turbulence model. An updated Lagrangian method is used for the structure, and the fluid equations are written in arbitrary Lagrangian–Eulerian coordinates. One global moving mesh is employed for the fluid–structure domain, where the fluid–structure interface is an ‘interior boundary’ of the global mesh. At each time step, we solve a monolithic system of unknown velocity and pressure defined on the global mesh. The continuity of velocity at the interface is automatically satisfied, while the continuity of stress does not appear explicitly in the monolithic fluid–structure system. This method is very fast because at each time step, we solve only one linear system. This linear system is obtained by the linearization of the structure around its previous position in the updated Lagrangian formulation and by the employment of a linear convection term for the fluid. Numerical results are presented. Copyright © 2016 John Wiley & Sons, Ltd.

In this paper, we present a generalized prismatic hybrid meshing method for viscous flow simulations. One major difficulty in implementing a robust prismatic hybrid meshing tool is handling boundary layer mesh collisions, which normally requires an extra data structure (e.g., a quadtree in two dimensions or an octree in three dimensions). The proposed method overcomes this difficulty via a heuristic approach that relies only on constrained Delaunay triangulation/tetrahedralization (CDT); no extra data structures are required. Geometrical reasoning is used to approximate the maximum marching distance of each point by walking through the CDT. This is combined with post-processing of marching vectors and distances and the prohibition of multilevel differences to form an automatic and robust mechanism for removing boundary layer mesh collisions. Benefiting from the maturity of CDT techniques, the proposed method is robust, efficient, and simple to implement. Its capability is demonstrated by generating quality prismatic hybrid meshes for industrial models with complex geometries. The proposed method is believed to considerably reduce the effort of implementing a robust hybrid prismatic mesh generator for viscous flow simulations. © 2016 The Authors. International Journal for Numerical Methods in Engineering. Published by John Wiley & Sons Ltd.

A computational framework for assisting in the development of novel textiles is presented. Electronic textiles are key in the rapidly growing field of wearable electronics for both consumer and military uses. There are two main challenges to the modeling of electronic textiles: the discretization of the textile microstructure and the interaction between electromagnetic and mechanical fields. A director-based beam formulation with an assumed electrical current is used to discretize the fabric at the level of individual fibrils. The open-source package FEniCS was used to implement the finite element model. Contact integrals were added into the FEniCS framework so that multiphysics contact laws can be incorporated in the same framework, leveraging the code generation and automated differentiation capabilities of FEniCS to produce the tangents needed by the implicit solution method. The computational model is used to construct and determine the mechanical, thermal, and electrical properties of a representative volume element of a plain woven textile. Dynamic relaxation is used to solve the mechanical fields, while the electrical and thermal fields are solved statically for a given mechanical state. The simulated electrical responses are fit to a simplified Kirchhoff network model to determine effective resistances of the textile. Copyright © 2016 John Wiley & Sons, Ltd.

This paper develops and demonstrates a strategy for identifying the lack of dynamic interaction, or coupling, between potential design modifications to a vibrating base structure. Such decoupling may be exploited to efficiently compare the performance of competing engineering designs. In particular, it is shown how different design modifications may be represented as the addition or removal of substructures. When these substructures are uncoupled according to the metric developed here, the computational cost of determining the optimal system design can be greatly reduced. For example, if a designer considers seven possible modifications and wishes to examine all possible combinations of the modifications, 128 possible structures must be analyzed. However, if all modifications are dynamically uncoupled, significant computational effort need only be spent on eight of the possible structures in order to generate the responses for all remaining designs. Example problems demonstrate this cost reduction and illustrate cases where dynamic uncoupling occurs. Copyright © 2016 John Wiley & Sons, Ltd.
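The bookkeeping behind the 128-versus-8 example above is plain superposition: if the modifications are dynamically uncoupled, each combined response is the base response plus the per-modification changes. A sketch with three hypothetical modifications and made-up response numbers:

```python
from itertools import combinations

import numpy as np

# Hypothetical response magnitudes at three frequencies: one analysis of the
# base structure plus one per modification (all numbers are invented).
base = np.array([1.00, 0.80, 0.50])
deltas = {"A": np.array([-0.10, 0.02, 0.00]),
          "B": np.array([0.05, -0.20, 0.01]),
          "C": np.array([0.00, 0.00, -0.30])}

def response(mods):
    """When the modifications are dynamically uncoupled, their effects add."""
    return base + sum((deltas[m] for m in mods), np.zeros(3))

# 2**3 = 8 candidate designs recovered from only 1 + 3 = 4 analyses
all_designs = [c for r in range(4) for c in combinations("ABC", r)]
```

With seven modifications the same bookkeeping yields all 128 combinations from eight analyses, which is the cost reduction the abstract describes.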

This article revisits a topology optimization design approach for micro-manufacturing and extends it to optical microlithography with partially coherent illumination. The solution is based on a combination of two technologies: topology optimization and proximity error correction in microlithography/nanolithography. The key steps include (i) modeling the physical inputs of the fabrication process, including the ultraviolet light illumination source and the mask, as the design variables in the optimization and (ii) applying physical filtering and Heaviside projection for topology optimization, which correspond to the aerial image formation and pattern development processes, respectively. The proposed approach results in an effective source and a binary design mask, which can be sent directly to fabrication without additional post-processing steps for proximity error correction. Meanwhile, the performance of the device is optimized and robust with respect to process variations, such as dose/photo-resist variations and lens defocus. A compliant micro-gripper design example is considered to demonstrate the applicability of this approach. Copyright © 2016 John Wiley & Sons, Ltd.

A new higher-order accurate method is proposed that combines the advantages of the classical *p*-version of the FEM on body-fitted meshes with embedded domain methods. A background mesh composed of higher-order Lagrange elements is used. Boundaries and interfaces are described implicitly by the level set method and may lie within elements. In the elements cut by the boundaries or interfaces, an automatic decomposition into higher-order accurate sub-elements is realized. To this end, the zero level sets are detected and meshed in a first step, which is called reconstruction. Then, based on the topological situation in the cut element, higher-order sub-elements are mapped to the two sides of the boundary or interface. The quality of the reconstruction and the mapping largely determines the properties of the resulting, automatically generated conforming mesh. It is found that optimal convergence rates are possible although the resulting sub-elements are not always well-shaped. Copyright © 2016 John Wiley & Sons, Ltd.

The contribution of this work is the implementation of a new elastic solution method for thick laminated composites and sandwich structures based on a generalized unified formulation using finite elements. A quadrilateral four-node element was developed and evaluated using an in-house finite element program. The C1 continuity requirements are fulfilled for the transverse displacement field variable. This method is tagged as Caliri's generalized formulation. The proposed solution method yielded coherent results, with deviations as low as 0.05% for a static simply supported symmetric laminate and 0.5% for the modal analyses of a soft-core sandwich structure. Copyright © 2016 John Wiley & Sons, Ltd.

A robust and efficient strategy is proposed to simulate mechanical problems involving cohesive fractures. This class of problems is characterized by a global structural behavior that is strongly affected by localized nonlinearities at relatively small-sized critical regions. The proposed approach is based on the division of a simulation into a suitable number of sub-simulations where adaptive mesh refinement is performed only once based on refinement window(s) around crack front process zone(s). The initialization of Newton-Raphson nonlinear iterations at the start of each sub-simulation is accomplished by solving a linear problem based on a secant stiffness, rather than a volume mapping of nonlinear solutions between meshes. The secant stiffness is evaluated using material state information stored/read on crack surface facets which are employed to explicitly represent the geometry of the discontinuity surface independently of the volume mesh within the generalized finite element method framework. Moreover, a simplified version of the algorithm is proposed for its straightforward implementation into existing commercial software. Data transfer between sub-simulations is not required in the simplified strategy. The computational efficiency, accuracy, and robustness of the proposed strategies are demonstrated by an application to cohesive fracture simulations in 3-D. Copyright © 2016 John Wiley & Sons, Ltd.

A new nonconforming brick element with only 13 local DOFs is introduced. Its vector-valued version generates, together with a discontinuous approximation, an inf-sup stable finite element pair of order 2 for the Stokes problem in the energy norm. Copyright © 2016 John Wiley & Sons, Ltd.

Three new contributions to the field of multisurface plasticity are presented for general situations with an arbitrary number of nonlinear yield surfaces with hardening or softening. A method for handling linearly dependent flow directions is described. A residual that can be used in a line search is defined. An algorithm that has been implemented and comprehensively tested is discussed in detail. Examples are presented to illustrate the computational cost of various components of the algorithm. The overall result is that a single Newton-Raphson iteration of the algorithm costs between 1.5 and 2 times that of an elastic calculation. Examples also illustrate the successful convergence of the algorithm in complicated situations. For example, without using the new contributions presented here, the algorithm fails to converge for approximately 50% of the trial stresses for a common geomechanical model of sedimentary rocks, while the current algorithm results in complete success. Because it involves no approximations, the algorithm is used to quantify the accuracy of an efficient, pragmatic, but approximate, algorithm used for sedimentary-rock plasticity in a commercial software package. The main weakness of the algorithm is identified as the difficulty of correctly choosing the set of initially active constraints in the general setting. Copyright © 2016 John Wiley & Sons, Ltd.

In this work, a mixed variational formulation to simulate quasi-incompressible electro-active or magneto-active polymers immersed in the surrounding free space is presented. A novel domain decomposition is used to decouple the primary coupled problem from the arbitrary free-space mesh update problem. Exploiting this decomposition, we describe a block-iterative approach to solving the linearised multiphysics problem, and a physically and geometrically based, three-parameter method to update the free space mesh. Several application-driven example problems are implemented to demonstrate the robustness of the mixed formulation for both electro-elastic and magneto-elastic problems involving both finite deformations and quasi-incompressible media. Copyright © 2016 John Wiley & Sons, Ltd.

Uniform grid solvers of the periodic Lippmann–Schwinger equation have been introduced by Moulinec and Suquet for the numerical homogenization of heterogeneous materials. Based on the fast Fourier transform, these methods use the strain as main unknown and usually do not produce displacement fields. While this is generally not perceived as a restriction for homogenization purposes, some tasks might require kinematically admissible displacement fields. In this paper, we show how the numerical solution to the periodic Lippmann–Schwinger equation can be post-processed to reconstruct a displacement field. Our procedure applies to any variant of the Moulinec–Suquet solver. The reconstruction is formulated as an auxiliary elastic equilibrium problem of a homogeneous material, which is solved with displacement-based finite elements. Taking advantage of periodicity, uniformity of the grid and homogeneity of the material, the resulting linear system is formulated and solved efficiently in Fourier space. The cost of our procedure is lower than that of one iteration of the Lippmann–Schwinger solver. Two applications are proposed, in two and three dimensions. In the first application, the reconstructed displacement field is used to compute a rigorous upper bound on the effective shear modulus. In the second application, the quality of the reconstruction is assessed quantitatively. Copyright © 2016 John Wiley & Sons, Ltd.
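The Fourier-space reconstruction idea can be illustrated in one dimension, where recovering a periodic displacement fluctuation from a compatible strain field reduces to division by *ik*. The sketch below is not the paper's procedure (which solves a homogeneous elastic equilibrium problem in 2D/3D with displacement-based finite elements in Fourier space); the function name and grid are our own illustrative choices:

```python
import numpy as np

def reconstruct_displacement(strain, L=2 * np.pi):
    """Reconstruct the periodic displacement fluctuation u from a 1D
    periodic strain field (du/dx) sampled on a uniform grid.

    In Fourier space, u_hat(k) = strain_hat(k) / (i k) for k != 0;
    the zero-frequency mode (the mean strain) carries no fluctuation.
    """
    n = strain.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    strain_hat = np.fft.fft(strain)
    u_hat = np.zeros_like(strain_hat)
    u_hat[1:] = strain_hat[1:] / (1j * k[1:])    # skip k = 0
    return np.fft.ifft(u_hat).real

# Verification: strain = cos(x) should integrate to u = sin(x).
x = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
u = reconstruct_displacement(np.cos(x))
assert np.max(np.abs(u - np.sin(x))) < 1e-10
```

As in the paper, the cost is dominated by one forward and one inverse FFT, i.e. cheaper than a single iteration of the Lippmann–Schwinger solver.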

Mechanical computations in multiphase domains raise numerous difficulties, from the generation of the initial mesh to its adaptation throughout the simulation. All alternatives to mesh adaptation, such as level-set methods, have the well-known drawback of inducing volume conservation issues. In this paper, a moving mesh method is coupled to a topological mesh adaptation technique in order to track moving and deforming interfaces in multiphase simulations, with a robust control of mesh quality. Level-set functions are used as intermediaries to enhance the mesh adaptation technique with a volume conservation constraint, which is compatible both with implicit and with body-fitted interfaces. Results show that this method permits large displacements, deformations, and topological changes (coalescence of interfaces, for example), as a standard level-set method does, while volume diffusion is drastically reduced. Copyright © 2016 John Wiley & Sons, Ltd.

This study concerns the development of a numerical methodology for initializing immersed interface-based CFD solvers using complex computer-aided design (CAD) geometry. CFD solvers with higher-order discretization stencils require larger stencil widths, which become problematic in regions of space where insufficient mesh resolution is available. This problem becomes especially challenging when convoluted triangulated surface meshes generated from complex solid models are used to initialize the cut-cells. A pragmatic balance between desired local geometry resolution and numerical accuracy is often required to find a practical solution. Here, a robust iterative fill algorithm is presented that balances geometry resolution with numerical accuracy (*via* stencil size). Several examples are presented to illustrate the use of this initialization procedure, which employs both the original CAD-generated triangulated surface mesh and a level set representation of the surface to initialize cut-cells and boundary proximity measures for creation of CFD stencils. Convergence error analysis of surface area and enclosed volumes is first presented to show the effects of fill on the geometry as a function of desired stencil size and grid resolution. The algorithm is then applied to geometrically complex problems using large eddy simulation. Two problems are considered. The first is flow around the Eiffel Tower. The second is a combustion swirler in the context of a design problem. Copyright © 2016 John Wiley & Sons, Ltd.

A comprehensive study of the two-sub-step composite implicit time integration scheme for structural dynamics is presented in this paper. A framework is proposed for the convergence and accuracy analysis of the generalized composite scheme. The local truncation errors of the acceleration, velocity, and displacement are evaluated in a rigorous procedure. The accuracy condition presented and proved here enables the displacement, velocity, and acceleration to achieve second-order accuracy simultaneously, which avoids the drawback that the acceleration accuracy may not reach second order. The different influences of numerical frequencies and time step on the accuracy of displacement, velocity, and acceleration are clarified. The numerical dissipation and dispersion and the initial magnitude errors, which measure the errors arising from the eigenvalues and eigenvectors of the algorithmic amplification matrix, respectively, are investigated physically. Loaded and physically undamped/damped cases are naturally accounted for. An optimal algorithm, the Bathe composite method (Bathe and Baig, 2005; Bathe, 2007; Bathe and Noh, 2012), is identified, with unconditional stability, no overshooting in displacement, velocity, and acceleration, and excellent performance compared with many other algorithms. The proposed framework can also be used for the accuracy analysis and design of other multi-sub-step composite schemes and single-step methods under physical damping and/or loading. Copyright © 2016 John Wiley & Sons, Ltd.
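For reference, the Bathe composite method cited above admits a compact single-degree-of-freedom implementation: a trapezoidal sub-step over half the time step, followed by a three-point backward-Euler sub-step over the full step. The sketch below follows the standard two-sub-step formulas; the function and variable names are ours, and a multi-DOF code would replace the scalar solves with linear system solves:

```python
import math

def bathe_step(m, c, k, f, u, v, a, dt):
    """One step of the two-sub-step Bathe composite scheme for
    m*u'' + c*u' + k*u = f (single degree of freedom)."""
    h = dt / 4.0
    # Sub-step 1 (trapezoidal rule over dt/2): with
    #   v_h = v + h*(a + a_h),  u_h = u + h*(v + v_h),
    # the equation of motion at t + dt/2 is linear in a_h.
    lhs = m + c * h + k * h * h
    rhs = f - c * (v + h * a) - k * (u + h * (2.0 * v + h * a))
    a_h = rhs / lhs
    v_h = v + h * (a + a_h)
    u_h = u + h * (v + v_h)
    # Sub-step 2 (3-point backward Euler over t, t+dt/2, t+dt): with
    #   v1 = (u - 4*u_h + 3*u1)/dt,  a1 = (v - 4*v_h + 3*v1)/dt,
    # the equation of motion at t + dt is linear in u1.
    lhs = 9.0 * m / dt**2 + 3.0 * c / dt + k
    rhs = (f - c * (u - 4.0 * u_h) / dt
           - m * (v - 4.0 * v_h + 3.0 * (u - 4.0 * u_h) / dt) / dt)
    u1 = rhs / lhs
    v1 = (u - 4.0 * u_h + 3.0 * u1) / dt
    a1 = (v - 4.0 * v_h + 3.0 * v1) / dt
    return u1, v1, a1

# Undamped oscillator u'' + u = 0, u(0) = 1: exact solution cos(t).
u, v, a = 1.0, 0.0, -1.0
dt = 0.01
for _ in range(100):
    u, v, a = bathe_step(1.0, 0.0, 1.0, 0.0, u, v, a, dt)
assert abs(u - math.cos(1.0)) < 1e-4
```

The second-order accuracy in displacement, velocity, and acceleration discussed in the abstract can be checked numerically by halving `dt` and observing the error reduce by a factor of about four.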

This contribution presents a numerical strategy to evaluate the effective properties of image-based microstructures in the case of random material properties. The method relies on three points: (1) a high-order fictitious domain method; (2) an accurate spectral stochastic model; and (3) an efficient model-reduction method based on the proper generalized decomposition in order to decrease the computational cost introduced by the stochastic model. A feedback procedure is proposed for an automatic estimation of the random effective properties with a given confidence. Numerical verifications highlight the convergence properties of the method for both deterministic and stochastic models. The method is finally applied to a real 3D bone microstructure where the empirical probability density function of the effective behaviour could be obtained. Copyright © 2016 John Wiley & Sons, Ltd.

In this contribution, a novel method for the fail-safe optimal design of structures is proposed, coupling optimization by a genetic algorithm, structural analysis in the framework of fracture mechanics, and uncertainty analysis. The idea of fail-safe structures is to retain their functionality and integrity even under damage conditions, for example, a local failure of substructures. In the present work, a design concept for a substructure exhibiting a damage-accumulating function due to the application of crack arresters is introduced. If such a substructure is integrated within a system of coupled substructures, it will accumulate the damage arising from the change in boundary conditions induced by the failure of certain neighbouring structural elements and hinder further damage escalation. The investigation of failure of the damage-accumulating substructure is conducted within a finite element framework by a combination of discrete fracturing and configurational-mechanics-based criteria. In order to design a structure that will fail safely according to a predefined scenario, uncertainties are taken into account. The developed approach optimizes the configuration of crack arresters within the damage-accumulating substructure so that the uncertain crack propagation is hindered and only a local failure of this element occurs. Copyright © 2016 John Wiley & Sons, Ltd.

In this paper, a multibody meshfree framework is proposed for the simulation of granular materials undergoing deformations at the grain scale. This framework relies on an implicit solution of the mechanical problem, based on a weak form written over the domain defined by all the grains composing the granular sample. Several technical choices, related to the displacement field interpolation, to the contact modelling, and to the integration scheme used to solve the dynamic equations, are explained in detail. A first implementation is proposed, under the acronym Multibody ELement-free Open code for DYnamic simulation (MELODY), and is made available for free download. Two numerical examples are provided to show the convergence and the capability of the method. Copyright © 2016 John Wiley & Sons, Ltd.

Parametric and implicit methods have traditionally been regarded as two unrelated approaches to structural shape optimization. The parametric method works as a Lagrangian approach and often uses the parametric boundary representation (B-rep) of curves/surfaces, for example, Bézier and B-splines, in combination with the conformal mesh of a finite element model, while the implicit method relies upon level-set functions, that is, implicit functions for the B-rep, and works as an Eulerian approach in combination with a fixed mesh within the scope of the extended finite element method or the finite cell method. The original contribution of this work is the unification of both methods. First, a new shape optimization method is proposed by combining the features of the parametric and implicit B-reps. Shape changes of the structural boundary are governed by the parametric B-rep on the fixed mesh to retain the merits of computer-aided design modeling and avoid laborious remeshing. Second, analytical shape design sensitivity is formulated for the parametric B-rep in the fixed-mesh framework of the finite cell method by means of the Hamilton–Jacobi equation. Numerical examples are solved to illustrate the unified methodology. Copyright © 2016 John Wiley & Sons, Ltd.

This paper studies the behaviour of the error incurred when numerically integrating the elasto-plastic mechanical relationships of a constitutive model for soils using an explicit substepping formulation with automatic error control. The correct update of all the variables involved in the numerical integration of the incremental stress–strain relationships is central to the computational performance of the integration scheme, and, although often missed in the literature, all variables (including specific volume) should be explicitly considered in the algorithmic formulation. This is demonstrated in the paper by studying, in the context of the Cam clay formulations for saturated soils, the influence that the updating of the specific volume has on the accuracy of the numerical solution. The fact that the variation of the local error with the size of the integrated strain depends on the order of local accuracy of the numerical method is also used in the paper to propose a simple and powerful strategy to verify the correctness of the implemented mathematical formulation. Copyright © 2016 John Wiley & Sons, Ltd.
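The substepping-with-automatic-error-control idea can be sketched on a scalar rate equation. This is a generic modified-Euler scheme in the spirit of Sloan-type explicit substepping, not the paper's Cam clay driver; the tolerance, step-growth limits, and safety factor are illustrative choices:

```python
import math

def integrate_substep(f, y0, de_total, tol=1e-8):
    """Explicit modified-Euler substepping with automatic error control:
    each substep takes two Euler slopes, estimates the local error from
    their difference, and grows or shrinks the substep accordingly."""
    y, done, de = y0, 0.0, de_total
    while de_total - done > 1e-12:
        de = min(de, de_total - done)
        k1 = f(y) * de
        k2 = f(y + k1) * de
        err = 0.5 * abs(k2 - k1)                       # local error estimate
        rel = err / max(abs(y + 0.5 * (k1 + k2)), 1e-12)
        if rel <= tol:                                  # accept the substep
            y += 0.5 * (k1 + k2)
            done += de
        # Grow/shrink the substep (clamped, with a 0.9 safety factor).
        de *= max(0.1, min(2.0, 0.9 * math.sqrt(tol / max(rel, 1e-16))))
    return y

# Linear 'constitutive law' dy/de = -y: exact solution y0 * exp(-e).
y = integrate_substep(lambda y: -y, 1.0, 1.0, tol=1e-8)
assert abs(y - math.exp(-1.0)) < 1e-6
```

In a real implementation all state variables, including the specific volume the paper highlights, would be advanced together inside the accepted-substep branch so that the local error estimate (and hence the substep control) reflects every integrated variable.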

An efficient method is proposed to estimate the first-order global sensitivity indices based on failure probability and variance by using maximum entropy theory and the Nataf transformation. The computational cost of the proposed method is quite small, and the method can efficiently overcome the 'curse of dimensionality' thanks to a dimension-reduction technique. Ideas for the estimation of higher-order sensitivity indices are discussed. Copyright © 2016 John Wiley & Sons, Ltd.

This paper describes a novel method for mapping between basis representations of a field variable over a domain in the context of numerical modelling and inverse problems. In the numerical solution of inverse problems, a continuous scalar or vector field over a domain may be represented in different finite-dimensional basis approximations, such as an unstructured mesh basis for the numerical solution of the forward problem, and a regular grid basis for the representation of the solution of the inverse problem. Mapping between the basis representations is generally lossy, and the objective of the mapping procedure is to minimise the errors incurred. We present in this paper a novel mapping mechanism that is based on a minimisation of the *L*^{2} or *H*^{1} norm of the difference between the two basis representations. We provide examples of mapping in 2D and 3D problems, between an unstructured mesh basis representative of an FEM approximation, and different types of structured basis including piecewise constant and linear pixel basis, and blob basis as a representation of the inverse basis. A comparison with results from a simple sampling-based mapping algorithm shows the superior performance of the method proposed here. © 2016 The Authors. International Journal for Numerical Methods in Engineering Published by John Wiley & Sons Ltd.
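A minimal 1D analogue of the *L*^{2}-norm mapping: projecting a piecewise-linear nodal field onto a coarser piecewise-constant 'pixel' basis. With a piecewise-constant target the normal equations diagonalise, so the *L*^{2}-optimal coefficients are cell averages of the source field; this only illustrates the minimisation principle, not the paper's 2D/3D unstructured-mesh machinery, and the function name is ours:

```python
import numpy as np

def project_pwl_to_pwc(x, y, edges):
    """L2-project a piecewise-linear field (nodes x, values y) onto a
    piecewise-constant basis with the given cell edges: the optimal
    coefficient in each cell is the cell average of the interpolant."""
    out = np.empty(len(edges) - 1)
    for i, (a, b) in enumerate(zip(edges[:-1], edges[1:])):
        # Integrate the piecewise-linear interpolant exactly over [a, b]
        # by evaluating at every breakpoint inside the cell.
        xs = np.unique(np.clip(np.concatenate([[a, b], x]), a, b))
        ys = np.interp(xs, x, y)
        seg = np.diff(xs)
        out[i] = np.sum(0.5 * (ys[1:] + ys[:-1]) * seg) / (b - a)
    return out

x = np.linspace(0.0, 1.0, 11)
y = x.copy()                           # field u(x) = x
p = project_pwl_to_pwc(x, y, np.array([0.0, 0.5, 1.0]))
assert np.allclose(p, [0.25, 0.75])
```

For a non-orthogonal target basis (linear pixels, blobs) the same minimisation instead yields a linear system with the target-basis mass matrix on the left and a mixed mass matrix on the right, which is the general mechanism the paper develops.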

No abstract is available for this article.

No abstract is available for this article.

The generation of a set of particles with high initial volume fraction is a major problem in the context of discrete element simulations. Advancing front algorithms provide an effective means to generate dense packings when spherical particles are assumed. The objective of this paper is to extend an advancing front algorithm to a wider class of particles with generic size and shape. In order to get a dense packing, each new particle is placed in contact with two other particles (or three in 3D) of the advancing front. The contact problem is solved analytically using a wrapping intersection technique. The results presented herein will be useful in the application of these algorithms to a wide variety of practical problems. Examples of geometric models for applications to biomechanics and cutting tools are presented. Copyright © 2015 John Wiley & Sons, Ltd.
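For the spherical (circular, in 2D) baseline case, placing a new particle in contact with two front particles reduces to a circle–circle intersection that can be solved in closed form. The sketch below covers only that baseline; the paper's wrapping intersection technique generalises the same placement step to particles of generic size and shape:

```python
import math

def place_tangent_disc(c1, r1, c2, r2, r):
    """Place a new disc of radius r tangent to two existing discs
    (centres c1, c2, radii r1, r2) by intersecting the circles of
    radius r1 + r and r2 + r about c1 and c2 (one of two solutions)."""
    ra, rb = r1 + r, r2 + r
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d > ra + rb or d < abs(ra - rb):
        return None                        # no tangent position exists
    a = (ra * ra - rb * rb + d * d) / (2.0 * d)   # along c1 -> c2
    h = math.sqrt(ra * ra - a * a)                # perpendicular offset
    ex, ey = dx / d, dy / d                       # unit vector c1 -> c2
    return (c1[0] + a * ex - h * ey, c1[1] + a * ey + h * ex)

# Two unit discs at (0,0) and (2,0); a new unit disc lands at (1, sqrt(3)).
c = place_tangent_disc((0.0, 0.0), 1.0, (2.0, 0.0), 1.0, 1.0)
assert abs(math.hypot(c[0], c[1]) - 2.0) < 1e-9        # tangent to disc 1
assert abs(math.hypot(c[0] - 2.0, c[1]) - 2.0) < 1e-9  # tangent to disc 2
```

An advancing front packer calls a routine like this repeatedly, accepting the candidate position only if the new particle does not overlap any other particle already placed.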

In this paper, we present a homogenization approach that can be used in the geometrically nonlinear regime for stress-driven and strain-driven homogenization and even a combination of both. Special attention is paid to the straightforward implementation in combination with the finite-element method. The formulation follows directly from the principle of virtual work, the periodic boundary conditions, and the Hill–Mandel principle of macro-homogeneity. The periodic boundary conditions are implemented using the Lagrange multiplier method to link macroscopic strain to the boundary displacements of the computational model of a representative volume element. We include the macroscopic strain as a set of additional degrees of freedom in the formulation. Via the Lagrange multipliers, the macroscopic stress naturally arises as the associated ‘forces’ that are conjugate to the macroscopic strain ‘displacements’. In contrast to most homogenization schemes, the second Piola–Kirchhoff stress and Green–Lagrange strain have been chosen for the macroscopic stress and strain measures in this formulation. The usage of other stress and strain measures such as the first Piola–Kirchhoff stress and the deformation gradient is discussed in the Appendix. Copyright © 2015 John Wiley & Sons, Ltd.
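The Lagrange-multiplier coupling of macroscopic strain and boundary displacements can be illustrated on a deliberately tiny linear 1D 'unit cell' of two springs. This is not the paper's finite-deformation formulation (no Green–Lagrange strain, no second Piola–Kirchhoff stress), but it shows the key mechanism: the multiplier of the imposed-strain constraint emerges as the conjugate macroscopic 'force', i.e. the stress. All numbers are illustrative:

```python
import numpy as np

# Two springs in series (k1, k2) over a unit cell of length L = 2.
# The macroscopic strain E is imposed through the constraint
# u2 - u0 = E * L via a Lagrange multiplier.
k1, k2, L, E = 1.0, 3.0, 2.0, 0.1
K = np.array([[ k1,     -k1,      0.0],
              [-k1,  k1 + k2,    -k2],
              [ 0.0,    -k2,      k2]])
G = np.array([[ 1.0, 0.0, 0.0],    # pin u0 = 0 (remove the rigid mode)
              [-1.0, 0.0, 1.0]])   # u2 - u0 = E * L
g = np.array([0.0, E * L])

# Saddle-point (KKT) system: [[K, G^T], [G, 0]] [u; lam] = [0; g].
A = np.block([[K, G.T], [G, np.zeros((2, 2))]])
sol = np.linalg.solve(A, np.concatenate([np.zeros(3), g]))
u, lam = sol[:3], sol[3:]

# Springs in series: force = E*L / (1/k1 + 1/k2) = 0.15, and the
# multiplier of the strain constraint recovers exactly this force.
assert abs(abs(lam[1]) - 0.15) < 1e-9
```

In the paper's setting the same structure appears with the macroscopic Green–Lagrange strain as extra degrees of freedom: the multipliers tying it to the RVE boundary displacements deliver the macroscopic second Piola–Kirchhoff stress without any post-processing.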

Linear buckling constraints are important in structural topology optimization for obtaining designs that can support the required loads without failure. During the optimization process, the critical buckling eigenmode can change; this poses a challenge to gradient-based optimization and can require the computation of a large number of linear buckling eigenmodes. This is potentially both computationally difficult to achieve and prohibitively expensive. In this paper, we motivate the need for a large number of linear buckling modes and show how several features of the block Jacobi conjugate gradient (BJCG) eigenvalue method, including optimal shift estimates, the reuse of eigenvectors, adaptive eigenvector tolerances and multiple shifts, can be used to efficiently and robustly compute a large number of buckling eigenmodes. This paper also introduces linear buckling constraints for level-set topology optimization. In our approach, the velocity function is defined as a weighted sum of the shape sensitivities for the objective and constraint functions. The weights are found by solving an optimization sub-problem to reduce the mass while maintaining feasibility of the buckling constraints. The effectiveness of this approach in combination with the BJCG method is demonstrated using a 3D optimization problem. Copyright © 2016 John Wiley & Sons, Ltd.

The finite cell method (FCM) is an immersed domain finite element method that combines higher-order non-boundary-fitted meshes, weak enforcement of Dirichlet boundary conditions, and adaptive quadrature based on recursive subdivision. Because of its ability to improve the geometric resolution of intersected elements, it can be characterized as an *immersogeometric* method. In this paper, we extend the FCM, so far only used with Cartesian hexahedral elements, to higher-order non-boundary-fitted tetrahedral meshes, based on a reformulation of the octree-based subdivision algorithm for tetrahedral elements. We show that the resulting TetFCM scheme is fully accurate in an immersogeometric sense, that is, the solution fields achieve optimal and exponential rates of convergence for *h*-refinement and *p*-refinement, if the immersed geometry is resolved with sufficient accuracy. TetFCM can leverage the natural ability of tetrahedral elements for local mesh refinement in three dimensions. Its suitability for problems with sharp gradients and highly localized features is illustrated by the immersogeometric phase-field fracture analysis of a human femur bone. Copyright © 2016 John Wiley & Sons, Ltd.