By exploiting the meshless property of kernel-based collocation methods, we propose a fully automatic numerical recipe for solving interpolation/regression and boundary value problems adaptively. The proposed algorithm is built upon a least-squares collocation formulation on quasi-random point sets with low discrepancy. A novel strategy is proposed to ensure that the fill distances of data points in the domain and on the boundary are of the same order of magnitude. To circumvent the potential ill-conditioning due to extremely small separation distances in the point sets, we add an extra dimension to the data points for generating shape parameters, such that nearby kernels are of distinctive shape. This effectively eliminates the need for shape parameter identification. The resulting linear systems are then solved by a greedy trial space algorithm to improve the robustness of the algorithm. Numerical examples are provided to demonstrate the efficiency and accuracy of the proposed methods.
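The extra-dimension idea can be sketched in a few lines of Python. The sketch below is illustrative only: Gaussian kernels, a simple Halton generator, the target function, and the mapping from the extra coordinate to the shape parameter are all assumptions, not the paper's implementation.

```python
import numpy as np

def halton(n, dim, primes=(2, 3, 5)):
    """Quasi-random Halton points in [0,1]^dim (low discrepancy)."""
    def vdc(i, base):
        x, f = 0.0, 1.0 / base
        while i > 0:
            x += f * (i % base)
            i //= base
            f /= base
        return x
    return np.array([[vdc(i + 1, primes[d]) for d in range(dim)]
                     for i in range(n)])

# 2-D centres plus one extra low-discrepancy coordinate that is mapped
# to a per-centre shape parameter, so nearby kernels differ in shape.
pts = halton(200, 3)
X, eps = pts[:, :2], 1.0 + 4.0 * pts[:, 2]        # shape parameters in [1, 5]

f = lambda x: np.sin(np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])

# Oversampled least-squares collocation: more evaluation points than centres.
Y = halton(400, 2, primes=(7, 11))
A = np.exp(-(eps[None, :] * np.linalg.norm(Y[:, None, :] - X[None, :, :], axis=2)) ** 2)
coef, *_ = np.linalg.lstsq(A, f(Y), rcond=None)

# Evaluate the interpolant at fresh points to estimate the error.
Z = np.random.default_rng(0).random((50, 2))
B = np.exp(-(eps[None, :] * np.linalg.norm(Z[:, None, :] - X[None, :, :], axis=2)) ** 2)
err = np.max(np.abs(B @ coef - f(Z)))
```

Because every centre carries its own shape parameter, no global shape parameter has to be tuned; the greedy trial-space solver of the paper is replaced here by a plain least-squares solve.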

This paper proposes novel strategies to enable multigrid preconditioners within iterative solvers for linear systems arising from contact problems based on mortar finite element formulations. The so-called dual mortar approach that is exclusively employed here allows for an easy condensation of the discrete Lagrange multipliers. It therefore has the advantage over standard mortar methods that linear systems with saddle point structure, which generally require special preconditioning techniques, are avoided. However, even with the dual mortar approach the resulting linear systems turn out to be hard to solve using iterative linear solvers. A basic analysis of the mathematical properties of the linear operators reveals why the naive application of standard iterative solvers shows instabilities, and provides new insights into how the contact modeling affects the corresponding linear systems. This information is used to develop new strategies that make multigrid methods efficient preconditioners for the class of contact problems based on dual mortar methods. It is worth mentioning that these strategies primarily adapt the input of the multigrid preconditioners in such a way that no contact-specific enhancements are necessary in the multigrid algorithms. This makes the implementation comparably easy. With the proposed method we are able to solve large contact problems, which is an important step towards the application of dual mortar based contact formulations in industry. Numerical results are presented illustrating the performance of the presented algebraic multigrid method.

In this paper, three new kinds of time-domain numerical methods for exponentially damped systems are presented, namely the simplified Newmark integration method, the precise integration method and the simplified complex mode superposition method. Based on the traditional Newmark integration method, and transforming the equation of motion with exponentially damping kernel functions into an equivalent second-order equation of motion by using the internal-variables technique, the simplified Newmark integration method is developed by using a decoupling technique to reduce computer run time and storage. By transforming the equation of motion with exponentially damping kernel functions into a first-order state-space equation, the precise integration technique is used to solve the state-space equation numerically. Based on the symmetric state-space equation and the complex mode superposition method, a delicate and simplified general solution of exponentially damped linear systems, completely in real-valued form, is developed. The accuracy and efficiency of the developed numerical methods are compared and discussed through two benchmark examples.
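The core of precise integration for a state-space equation can be sketched compactly: the transition matrix exp(HΔt) is built from 2^N scaling of a low-order Taylor expansion of the *increment* exp(Hτ) − I, which is kept separate from the identity to limit round-off. The example below (a damped oscillator with assumed parameters, not taken from the paper) is a minimal sketch of this classical algorithm:

```python
import numpy as np

def precise_expm(H, dt, N=20):
    """Precise integration of exp(H*dt): 2^N scaling of a 4th-order
    Taylor series of the increment Ta = exp(H*tau) - I, doubled via
    exp(2*tau) - I = 2*Ta + Ta @ Ta to avoid round-off in I + Ta."""
    tau = dt / 2.0 ** N
    M = H * tau
    Ta = M + M @ M / 2.0 + M @ M @ M / 6.0 + M @ M @ M @ M / 24.0
    for _ in range(N):
        Ta = 2.0 * Ta + Ta @ Ta
    return np.eye(len(H)) + Ta

# Damped oscillator x'' + 2*zeta*w*x' + w**2*x = 0 in state-space form.
w, zeta, dt = 2.0 * np.pi, 0.05, 0.01
A = np.array([[0.0, 1.0], [-w ** 2, -2.0 * zeta * w]])
T = precise_expm(A, dt)

x = np.array([1.0, 0.0])
for _ in range(100):          # march to t = 1 s in 100 steps
    x = T @ x

wd = w * np.sqrt(1.0 - zeta ** 2)
x_exact = np.exp(-zeta * w) * (np.cos(wd) + zeta * w / wd * np.sin(wd))
err = abs(x[0] - x_exact)
```

The time-stepping error is essentially at machine precision because the transition matrix itself is exact up to round-off, which is the appeal of the precise integration approach.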

In this paper, by combining the dimension splitting method and the improved complex variable element-free Galerkin method, the dimension splitting - improved complex variable element-free Galerkin (DS-ICVEFG) method is presented for three-dimensional transient heat conduction problems. Using the dimension splitting method, a three-dimensional transient heat conduction problem is transformed into a series of two-dimensional ones, which can be solved with the improved complex variable element-free Galerkin (ICVEFG) method. In the ICVEFG method for each two-dimensional problem, the improved complex variable moving least-squares (ICVMLS) approximation is used to obtain the shape functions, and the penalty method is used to apply the essential boundary conditions. The finite difference method is used in the one-dimensional direction, and the Galerkin weak form of the three-dimensional transient heat conduction problem is used to obtain the final discretized equations, which yields the DS-ICVEFG method for three-dimensional transient heat conduction problems. Three numerical examples are given to show that the new method has higher computational precision and efficiency.

Single-curvature plates are commonly encountered in mechanical and civil structures. In this paper, we introduce a topology optimization method for the stiffness-based design of structures made of curved plates with fixed thickness. The geometry of each curved plate is analytically and explicitly represented by its location, orientation, dimension and curvature radius, and therefore our method renders designs that are distinctly made of curved plates. To perform the primal and sensitivity analyses, we employ the geometry projection method, which smoothly maps the analytical geometry of the curved plates onto a continuous density field defined over a fixed uniform finite element grid. A size variable is ascribed to each plate and penalized in the spirit of Solid Isotropic Material with Penalization (SIMP), which allows the optimizer to remove a plate from the design. We also introduce in our method a constraint that ensures that no portion of a plate lies outside the design envelope. This prevents designs that would otherwise require cuts to the plates that may be very difficult to manufacture. We present numerical examples to demonstrate the validity and applicability of the proposed method.

In this work, a new strategy for solving multi-scale topology optimization problems is presented. An alternate directions algorithm and a pre-computed off-line micro-structure database (Computational Vademecum) are used to solve the problem efficiently. In addition, the influence of considering manufacturability constraints is examined. Then, the strategy is extended to solve the coupled problem of designing both the macroscopic and microscopic topologies. Full details of the algorithms and numerical examples to validate the methodology are provided.

In this paper, we present a general model for non-Fickian diffusion and drug dissolution from a controlled drug delivery device coated with a thin polymeric layer. First, we study the stability and deduce an analytic solution of the problem. Then, we use this solution to provide suitable boundary conditions that replace the problem of mass transport in the coating of a coronary drug-eluting stent. With this approach, we reduce the computational cost of performing numerical simulations in complex 3D geometries. The model for mass transport from a coronary drug-eluting stent is coupled with a non-Newtonian blood flow model. To show the effectiveness of the method, numerical experiments and a model validation with experimental data are also included. In particular, we investigate the influence of the non-Newtonian flow regime on drug deposition in the arterial wall.

A key limitation of most constitutive models that reproduce degradation of quasi-brittle materials is that they generally do not address issues related to fatigue. One reason is the huge computational cost of resolving each load cycle on the structural level. The goal of this paper is the development of a temporal integration scheme which significantly increases the computational efficiency of the finite element method in comparison to conventional temporal integration.

The essential constituent of the fatigue model is an implicit gradient-enhanced formulation of the damage rate. The evolution of the field variables is computed as a multiscale Fourier series in time. On a microchronological scale attributed to single cycles, the initial boundary value problem (IBVP) is approximated by linear BVPs with respect to the Fourier coefficients. Using the adaptive cycle jump concept, the obtained damage rates are transferred to a coarser macrochronological scale associated with the duration of material deterioration. The performance of the developed method is hence improved due to an efficient numerical treatment of the microchronological problem in combination with the cycle jump technique on the macrochronological scale.

Validation examples demonstrate convergence of the obtained solutions to the reference simulations while significantly reducing the computational costs.

An approach for investigating finite deformation contact problems with frictional effects, with special emphasis on non-smooth geometries such as sharp corners and edges, is proposed in this contribution. The contact conditions are enforced separately for point contact, line contact and surface contact by employing three different sets of Lagrange multipliers and, as far as possible, a variationally consistent discretization approach based on mortar finite element methods. The discrete unknowns due to the Lagrange multiplier approach are eliminated from the system of equations by employing so-called dual or biorthogonal shape functions. For the combined algorithm, no transition parameters are required, and the decision between point contact, line contact and surface contact is made implicitly by the variationally consistent framework. The algorithm is supported by a penalty regularization for the special scenario of non-parallel edge-to-edge contact. The robustness and applicability of the proposed algorithms are demonstrated with several numerical examples.

With the postulation of the principle of virtual action, we propose in this paper a variational framework for describing the dynamics of finite-dimensional mechanical systems which contain frictional contact interactions. Together with the contact and impact laws formulated as normal cone inclusions, the principle of virtual action directly leads to the measure differential inclusions commonly used in the dynamics of nonsmooth mechanical systems. The discretization of the principle of virtual action in its strong and weak variational forms by local finite elements in time provides a structured way to derive various time-stepping schemes. The constitutive laws for the impulsive and non-impulsive contact forces, i.e. the contact and impact laws, are treated on velocity level by using a discrete contact law for the percussion increments in the sense of Moreau. Using linear shape functions and different quadrature rules, we obtain three different stepping schemes. Besides the well-established Moreau time-stepping scheme, we present two alternative integrators, referred to as the symmetric and variational Moreau-type stepping schemes. A suitable benchmark example shows the superiority of the newly proposed integrators in terms of energy conservation properties, accuracy and convergence.
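The classical Moreau scheme that serves as the baseline here can be sketched for the simplest nonsmooth benchmark, a ball bouncing on a rigid floor; all parameters and the restitution coefficient below are assumed for illustration, and the symmetric and variational variants of the paper are not shown:

```python
import numpy as np

def moreau_bouncing_ball(q0=1.0, u0=0.0, e=0.5, g=9.81, dt=1e-3, T=2.0):
    """Moreau midpoint time stepping with a velocity-level Newton impact law."""
    q, u = q0, u0
    qs, us = [q], [u]
    for _ in range(int(T / dt)):
        qm = q + 0.5 * dt * u                  # midpoint position
        u_free = u - dt * g                    # free-flight velocity update
        if qm <= 0.0:                          # contact active at the midpoint
            # percussion from the normal-cone inclusion enforces u+ >= -e*u-
            u_new = max(u_free, -e * u)
        else:
            u_new = u_free
        q += 0.5 * dt * (u + u_new)            # trapezoidal position update
        u = u_new
        qs.append(q)
        us.append(u)
    return np.array(qs), np.array(us)

qs, us = moreau_bouncing_ball()
# total energy per unit mass; the initial value is g*q0 = 9.81
energy_final = 0.5 * us[-1] ** 2 + 9.81 * qs[-1]
```

The impact law acts on the velocity level only, so impulsive and non-impulsive forces are handled by the same percussion update, which is the structural feature the paper's alternative integrators aim to improve upon in terms of energy behaviour.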

A computational certification framework under limited experimental data is developed. In this approach, a high-fidelity model (HFM) is first calibrated to limited experimental data. Subsequently, the HFM is employed to train a low-fidelity model (LFM). Finally, the calibrated low-fidelity model is utilized for component analysis. The rationale for utilizing the HFM in the initial stage stems from the fact that the constitutive laws of the individual microphases in the HFM are rather simple, so that the number of material parameters that need to be identified is smaller than in the LFM. The added complexity of the material models in the LFM is necessary to compensate for the simplified kinematical assumptions made in the LFM and for smearing the discrete defect structure. The first-order computational homogenization model, which resolves microstructural details including the structure of defects, is selected as the HFM, whereas the reduced order homogenization is selected as the LFM. Certification framework illustration, verification and validation are conducted for a ceramic matrix composite (CMC) material system comprising an eight-harness weave (8HW) architecture. *Blind* validation is performed against experimental data to validate the proposed computational certification framework.

In multicomponent fluid flow simulations using Smooth Particle Hydrodynamics (SPH), the Lagrangian particles employed are mostly of equal mass. This is preferred over a multimass particle setup (particles with different values of mass) because it resolves fluid interfaces comparatively better. The flip side of using a uniform-mass particle setup, however, is that it may not be computationally economical in situations with large density ratios. Hence, employing a multimass particle setup is both economical and perhaps inevitable. An attractive feature of the multimass particle setup is that it allows uniform resolution in regions with different values of density. To take advantage of the multimass setup, it is therefore imperative to reduce the error associated with its usage. In this work, we present suitable multimass correction terms and assess their effectiveness using the ∇*h*-SPH scheme. Standard benchmark problems, viz. the shock tube test, the triple-point shock test, the Rayleigh-Taylor instability (RTI) and the Kelvin-Helmholtz instability (KHI), were solved with the multimass particle setup, where significant improvements could be achieved in resolving the associated contact discontinuities.

Domain Decomposition strategies and the Proper Generalized Decomposition are efficiently combined to obtain a fast evaluation of the solution approximation in parameterized elliptic problems with complex geometries. The classical difficulties associated with the combination of layered domains with arbitrarily oriented mid-surfaces, which may require in-plane–out-of-plane techniques, are now dismissed. More generally, solutions on large domains can now be confronted within a Domain Decomposition approach. This is done with a reduced cost in the offline phase, because the Proper Generalized Decomposition gives an explicit description of the solution in each subdomain in terms of the solution at the interface. Thus, evaluating the approximation in each subdomain is a simple function evaluation given the interface values (and the other problem parameters). The interface solution can be characterized by any a priori user-defined approximation; here, for illustration purposes, hierarchical polynomials are used. The repetitiveness of the subdomains is exploited to drastically reduce the offline computational effort. The online phase requires solving a nonlinear problem to determine all the interface solutions, but this problem only has degrees of freedom on the interfaces, and the Jacobian matrix is explicitly determined. Obviously, other parameters characterizing the solution (material constants, external loads, geometry) can also be incorporated in the explicit description of the solution.

In this paper, a novel Characteristic-Based Penalty (CBP) scheme for the Finite Element Method (FEM) is proposed to solve two-dimensional incompressible laminar flow. This new CBP scheme employs the Characteristic-Galerkin (CG) method to stabilize the convective oscillation. To mitigate the incompressibility constraint, Selective Reduced Integration (SRI) and the recently proposed Selective Node-based Smoothed Finite Element Method (SNS-FEM) are used for the 4-node quadrilateral element (CBP-Q4SRI) and the 3-node triangular element (CBP-T3SNS), respectively. Meanwhile, Reduced Integration (RI) for the Q4 element (CBP-Q4RI) and NS-FEM for the T3 element (CBP-T3NS) with the CBP scheme are also investigated. The quasi-implicit CBP scheme is applied to allow a large time step for sufficiently large penalty parameters. Because no pressure degrees of freedom are involved, the quasi-implicit CBP-FEM is more efficient than the quasi-implicit CBS-FEM. In this paper, CBP-Q4SRI is verified and validated with high accuracy, stability and fast convergence. Unexpectedly, CBP-Q4RI exhibits no instability, high accuracy and even slightly faster convergence than CBP-Q4SRI. For unstructured T3 elements, CBP-T3SNS also shows high accuracy and good convergence, but with pressure oscillation when a large penalty parameter is used; CBP-T3NS produces oscillating, incorrect velocity and pressure results. In addition, the applicable ranges of the penalty parameter for the different proposed methods have been investigated.

Multi-material topology optimization often leads to members containing composite materials. However, in some instances, designers might be interested in using only one material for each member. Therefore, we propose an algorithm that selects a single preferred material from multiple materials per overlapping set. We develop the algorithm, based on the evaluation of both the strain energy and the cross-sectional area of each member, to control the material profile (i.e., number of materials) in each subdomain of the final design. This algorithm actively and iteratively selects materials to ensure a single material is used for each member. In this work, we adopt a multi-material formulation that handles an arbitrary number of volume constraints and candidate materials. To efficiently handle such volume constraints, we employ the Zhang-Paulino-Ramos (ZPR) design variable update scheme for multi-material optimization, which is based upon the separability of the dual objective function of the convex subproblem with respect to Lagrange multipliers. We provide an alternative derivation of this update scheme based on the Karush-Kuhn-Tucker (KKT) conditions. Through numerical examples, we demonstrate that the proposed material selection algorithm, which can be readily implemented in multi-material optimization, along with the ZPR update scheme, is robust and effective for selecting a single preferred material among multiple materials.
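The separability that makes the ZPR-type update attractive can be illustrated for a single volume constraint: its Lagrange multiplier is found by bisection, independently of any other constraint. The sketch below is a deliberately simplified single-material form with hypothetical sensitivities; it is not the paper's multi-material implementation:

```python
import numpy as np

def zpr_update(x, dCdx, vol_frac, move=0.2, eta=0.5):
    """One design-variable update for a single volume constraint.

    Separability of the dual objective means each constraint's multiplier
    can be found independently; here a simple bisection does the job."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-6:
        lam = 0.5 * (l1 + l2)
        # stationarity of the separable convex subproblem gives the candidate
        x_new = x * (np.maximum(-dCdx, 0.0) / lam) ** eta
        # move limits and box constraints
        x_new = np.clip(x_new, np.maximum(x - move, 0.0), np.minimum(x + move, 1.0))
        if x_new.mean() > vol_frac:
            l1 = lam        # too much material: raise the multiplier
        else:
            l2 = lam
    return x_new

# toy problem: 100 elements; compliance sensitivities favour the first half
rng = np.random.default_rng(1)
x0 = np.full(100, 0.5)
dC = -np.concatenate([2.0 * rng.random(50) + 1.0, 0.1 * rng.random(50)])
x1 = zpr_update(x0, dC, vol_frac=0.5)
```

In the multi-material setting, one such bisection runs per volume constraint, which is exactly what the separability of the dual objective with respect to the Lagrange multipliers permits.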

The paper presents a multiscale model based on an FEMxDEM approach, a method that couples Discrete Elements at the micro-scale and Finite Elements at the macro-scale. FEMxDEM has proven to be an effective way to treat real-scale engineering problems by embedding constitutive laws numerically obtained using Discrete Elements into a standard Finite Element framework. The proposed paper focuses on some numerical open issues of the method. Given the problem's non-linearity, Newton's method is required. The standard full Newton method is modified by adopting operators different from the consistent tangent matrix and by developing *ad-hoc* solution strategies. The efficiency of several existing operators is compared, and a new, original strategy is proposed, which is shown to be numerically more efficient than the existing propositions. Furthermore, a shared-memory parallelization framework using OpenMP directives is introduced. The combination of these enhancements allows the FEMxDEM computational limitations to be overcome, making the approach competitive with classical FEM in terms of stability and computational cost.

In this paper we propose a stress recovery procedure for low-order finite elements in 3-D. For each finite element, the recovered stress field is obtained by satisfying equilibrium in an average sense and by projecting the directly-calculated stress field onto a conveniently chosen space. Compared to existing recovery techniques, the current procedure gives more accurate stress fields, is simpler to implement, and can be applied to different types of elements without further modification. We demonstrate, through a set of examples in linear elasticity, that the recovered stresses converge at a higher rate than that of directly-calculated stresses and that, in some cases, the rate of convergence is the same as that of the displacement field.
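The projection step can be sketched generically: sampled stresses are fitted, in a least-squares sense, to a conveniently chosen (here linear) polynomial space. This is an illustrative projection only, with assumed data, and omits the average-equilibrium constraint of the paper's scheme:

```python
import numpy as np

def recover_linear(points, sigma):
    """Least-squares projection of sampled stresses onto the span of {1, x, y, z}."""
    P = np.column_stack([np.ones(len(points)), points])
    coef, *_ = np.linalg.lstsq(P, sigma, rcond=None)
    return lambda x: np.column_stack([np.ones(len(x)), x]) @ coef

# directly-calculated stresses: a linear field plus sampling noise (assumed)
pts = np.random.default_rng(2).random((30, 3))
sigma_exact = lambda x: 2.0 + 3.0 * x[:, 0] - x[:, 2]
sigma_h = sigma_exact(pts) + 0.01 * np.random.default_rng(3).standard_normal(30)

recovered = recover_linear(pts, sigma_h)
err = np.max(np.abs(recovered(pts) - sigma_exact(pts)))
```

The projection smooths the raw point values, which is the mechanism behind the improved convergence rate of the recovered stresses.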

The CH micromechanical model based on a static hypothesis, not unlike other models developed separately at around the same epoch, has proved its efficiency in predicting soil behaviour. For solving boundary value problems, the model now integrates stress-strain relationships by considering both the micro and macro levels. The first step was to solve the linearized mixed control constraints by introducing a predictor-corrector scheme, and then to implement the micro-macro relationships through an iterative procedure. Two return mapping schemes, the closest point projection method (CPPM) and the cutting plane algorithm (CPA), were subsequently integrated into the inter-particle force-displacement relations. Both algorithms have proved to be efficient in studies devoted to elementary tests and boundary value problems. CPPM, however, has the advantage over CPA of being more cost-efficient and just as accurate in the computational task of integrating the local laws into the micromechanical model. The results obtained demonstrate that the proposed linearized method is capable of performing loadings under stress and strain control. Finally, through a finite element analysis of a biaxial test and a square footing, it can be recognized that the CH micromechanical model performs efficiently in multiscale modelling.

In this paper, we present a novel scheme to generate trimmed quadrilateral shell meshes using a paving and cutting algorithm. Paving with well-shaped quadrilateral shell elements is initiated from a seed element placed in the interior of a geometric surface, and proceeds around the boundary of the paved mesh in all directions. The paved all-quadrilateral mesh covering the entire domain is cut by the boundaries of a geometric surface. Pentagonal shell elements are created by cutting the corners of quadrilateral shell elements at the domain boundaries. Pentagonal shape functions are constructed using Wachspress coordinates, and assumed natural strains in the form of the mixed interpolation of tensorial components (MITC) approach are employed to avoid the transverse shear locking of pentagonal shell elements. Several numerical examples are investigated to show the effectiveness of the present method.

The parametric level set approach is an extension of the conventional level set methods for topology optimization. By parameterizing the level set function, level set methods can be directly coupled with mathematical programming to achieve better numerical robustness and computational efficiency. Moreover, the parametric level set scheme not only inherits the primary advantages of the conventional level set methods, such as the clear boundary representation and the flexibility in handling topological changes, but also alleviates some undesired features of the conventional level set methods, such as the need for reinitialization. However, in the existing radial basis function (RBF) based parametric level set method, it is difficult to identify the range of the design variables. Besides, the parametric level set evolution often suffers from large fluctuations during the optimization process. These issues cause difficulties both in numerical stability and in material property mapping. In this paper, a cardinal basis function (CBF) is constructed based on the RBF partition of unity collocation method to parameterize the level set function. The benefit of using the CBF is that the range of the design variables can now be clearly specified as the value of the level set function. A distance regularization energy functional is also introduced, aiming to maintain the desired signed distance property during the level set evolution. With this feature, the level set evolution is stabilized against large fluctuations, and the material properties mapped from the level set function to the finite element model can be more accurate.
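The cardinal-basis idea can be sketched as follows: inverting the RBF interpolation matrix yields basis functions that equal one at their own node and zero at all others, so each design variable is exactly the nodal level-set value and its range can be bounded directly. Gaussian RBFs, the grid, and all parameters below are assumptions made for illustration:

```python
import numpy as np

# 5x5 grid of RBF nodes; the cardinal basis is obtained by inverting the
# Gaussian-RBF interpolation matrix, so design variables are nodal values.
nodes = np.array([[i, j] for i in range(5) for j in range(5)], float)
eps = 1.2
A = np.exp(-(eps * np.linalg.norm(nodes[:, None] - nodes[None, :], axis=2)) ** 2)
A_inv = np.linalg.inv(A)          # columns: coefficients of the cardinal basis

def phi(x, d):
    """Level-set values at points x for nodal design variables d."""
    K = np.exp(-(eps * np.linalg.norm(x[:, None] - nodes[None, :], axis=2)) ** 2)
    return K @ (A_inv @ d)

# nodal signed distances to a circle of radius 1.5 about the grid centre
d = np.linalg.norm(nodes - nodes.mean(axis=0), axis=1) - 1.5
vals = phi(nodes, d)              # cardinal property: vals equals d at the nodes
```

Because `phi(nodes, d)` reproduces `d` exactly, box constraints on the design variables translate directly into bounds on the level-set values, which is the property the CBF parameterization exploits.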

This paper presents an accurate and efficient computational strategy for the 3D simulation of heterogeneous structures with unreinforced masonry (URM) components. A mesoscale modelling approach is employed for the URM parts, while other material components are modelled independently with continuous meshes. The generally non-matching meshes of the distinct domains are coupled with the use of a mesh tying method. The physical interaction between the components is captured with the use of zero-thickness cohesive interface elements. This strategy enables the optimisation of the individual meshes leading to increased computational efficiency. Furthermore, the elimination of the mesh compatibility requirement allows the 3D modelling of complex heterogeneous structures, ensuring the accurate representation of each component's nonlinear behaviour and their interaction. Numerical examples, including a comparative analysis on the elastic and nonlinear response of a masonry bridge considering arch-backfill interaction and the nonlinear simulation of a multi-leaf wall, are presented to show the unique features of the proposed strategy as well as its predictive power in comparison with experimental and numerical results found in the literature.

In this paper, we develop a mixed isogeometric analysis approach based on subdivision–stabilization to study strongly coupled diffusion in solids in both small and large deformation ranges. Coupling the fluid pressure and the solid deformation, the mixed formulation suffers from numerical instabilities in the incompressible and the nearly incompressible limit due to the violation of the inf-sup condition. We investigate this issue using subdivision–stabilized NURBS elements as well as different families of mixed isogeometric analysis techniques and assess their stability through a numerical inf-sup test. Further, the validity of the inf-sup stability test in poromechanics is supported by a mathematical proof concerning the corresponding stability estimate. Finally, two numerical examples involving the rigid strip foundation on saturated soils and the swelling hydrogel structure are presented to validate the stability and demonstrate the robustness of the proposed approach.

*Hierarchical Model (HiMod) Reduction* is intended to efficiently solve Partial Differential Equations in domains with a geometrically dominant direction. In many engineering applications, these problems are often reduced to 1D differential systems. This guarantees computational efficiency, yet sacrifices local accuracy, as non-axial dynamics are dropped. HiMod recovers the secondary components of the dynamics of interest with a combination of different discretization techniques, following a natural separation of variables. The dominant direction is generally solved by the Finite Element Method or IsoGeometric Analysis to guarantee flexibility, while the transverse components are solved by spectral methods, to guarantee a small number of degrees of freedom. By judiciously selecting the number of transverse modes, the method has been proven to improve significantly the accuracy of purely 1D solvers, with great computational efficiency. A Cartesian framework has been used so far, both in slab domains and in cylindrical pipes (including arteries) mapped to Cartesian reference domains. In this paper, we investigate the alternative use of a polar coordinate system for the transverse dynamics in circular or elliptical pipes. This seems a natural choice for applications like computational hemodynamics. In spite of this, the selection of a basis function set for the transverse dynamics is troublesome. As pointed out in the literature - even for simple elliptical problems - there is no “best” basis available. In this paper we perform an extensive investigation of HiMod in polar coordinates to discuss different possible choices for the transverse basis, pointing out the pros and cons of the polar coordinate system.

A popular version of the finite-strain Maxwell fluid is considered, which is based on the multiplicative decomposition of the deformation gradient tensor. The model combines Newtonian viscosity with hyperelasticity of Mooney-Rivlin type; it is a special case of the viscoplasticity model proposed by Simo and Miehe (1992). A simple, efficient and robust implicit time stepping procedure is suggested. Lagrangian and Eulerian versions of the algorithm are available, with equivalent properties. The numerical scheme is iteration free, unconditionally stable and first-order accurate. It exactly preserves the inelastic incompressibility, symmetry, positive definiteness of the internal variables, and w-invariance. The accuracy of the stress computations is tested using a series of numerical simulations involving non-proportional loading and large strain increments. In terms of accuracy, the proposed algorithm is equivalent to the modified Euler backward method with exact inelastic incompressibility; it is also equivalent to the classical integration method based on exponential mapping. Since the new method is iteration free, it is more robust and computationally efficient. The algorithm is implemented in MSC.MARC, and a series of initial boundary value problems is solved to demonstrate the usability of the numerical procedures.

The major goal of this work is to develop a robust modelling strategy for the simulation of ductile damage development, including crack initiation and subsequent propagation. For that purpose, a Gurson-type model is used. This model class, like many other damage models, leads to significant material softening and must be used within a large deformation framework due to the ductile character of the materials. This leads to two main difficulties that must be dealt with carefully: mesh dependency and volumetric locking. In this work, a logarithmic finite strain framework is adopted in which the Gurson-Tvergaard-Needleman constitutive law is reformulated. A nonlocal formulation with regularization of the hardening variable is then applied so as to resolve the mesh dependency and strain localization problems. In addition, the nonlocal model is combined with mixed “displacement-pressure-volume variation” elements to avoid volumetric locking. Thereby, a mesh-independent and locking-free finite strain framework suitable for modelling ductile rupture is established. Attention is paid to the mathematical properties and numerical performance of the model. Finally, the model parameters are identified on an experimental database for a nuclear piping steel. Simulations of standard test specimens (notched tensile (NT) bars, CT and SENT cracked specimens) are carried out and compared to experimental results.

We introduce a coupled finite and boundary element formulation for acoustic scattering analysis over thin shell structures. A triangular Loop subdivision surface discretisation is used for both the geometry and the analysis fields. The Kirchhoff-Love shell equation is discretised with the finite element method and the Helmholtz equation for the acoustic field with the boundary element method. The boundary element formulation allows the elegant handling of infinite domains and precludes the need for volumetric meshing. In the present work the subdivision control meshes for the shell displacements and the acoustic pressures have the same resolution. The corresponding smooth subdivision basis functions have the *C*^{1} continuity property required for the Kirchhoff-Love formulation and are highly efficient for the acoustic field computations. We verify the proposed isogeometric formulation through a closed-form solution of acoustic scattering over a thin shell sphere. Furthermore, we demonstrate the ability of the proposed approach to handle complex geometries with arbitrary topology, providing an integrated isogeometric design and analysis workflow for coupled structural-acoustic analysis of shells.

This study proposes a framework for uncertainty analysis that incorporates explicit numerical solutions of the governing equations for flood wave propagation with the expectation operator. It aims at effectively evaluating the effect of variations in initial and boundary conditions on the estimation of flood waves. Spatiotemporal semi-variogram models are employed to quantify the correlation of the variables in time and space. The 1-D nonlinear kinematic wave equation for overland flow (named *EVO_NS_KWE*) is applied in the model development. The model is validated by comparison with the Monte Carlo simulation (*MCS*) model in the calculation of statistical properties of model outputs (i.e. flow depths), that is, the mean, standard deviation and coefficient of variation (*CV*). The validation results show that the *EVO_NS_KWE* model produces excellent approximations of the mean and less satisfactory approximations of the standard deviation and *CV* compared with those obtained using the *MCS* model. It is concluded that the uncertainties of flow depths in the domain are significantly affected by variations in the boundary condition. Future application of the *EVO_NS_KWE* model will enable the evaluation of uncertainty in model outputs induced by uncertain initial and boundary conditions, and will also provide the corresponding probabilistic information for risk quantification methods.
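
As a point of reference for the deterministic kernel that such a framework builds on, the 1-D kinematic wave equation h_t + (αh^m)_x = r can be advanced with a simple explicit upwind scheme. The sketch below is ours and purely illustrative (grid, Manning-type exponent, and inflow rate are assumed values, not those of the *EVO_NS_KWE* model); at steady state under uniform lateral inflow, the discrete outlet depth approaches the analytic value (rL/α)^{1/m}.

```python
import numpy as np

def kinematic_wave(nx=50, L=100.0, alpha=1.0, m=5.0 / 3.0,
                   rain=1e-4, dt=5.0, t_end=2000.0):
    # Explicit upwind scheme for h_t + (alpha * h^m)_x = rain on [0, L],
    # dry inlet (q = 0 at x = 0). Stability requires c * dt / dx < 1,
    # where c = alpha * m * h^(m-1) is the kinematic wave speed.
    dx = L / nx
    h = np.zeros(nx)
    for _ in range(int(t_end / dt)):
        q = alpha * h ** m                        # discharge per unit width
        q_in = np.concatenate(([0.0], q[:-1]))    # upstream (upwind) inflow
        h = h + dt * (rain - (q - q_in) / dx)
    return h
```

At discrete steady state the flux balance gives q_i = r·dx·(i+1), so the outlet depth tends to (rL/α)^{1/m} exactly, which makes the sketch easy to check.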

We formulate extensions to data-driven computing for both distance-minimizing and entropy-maximizing schemes to incorporate time integration. Previous works focused on formulating both types of solvers in the presence of static equilibrium constraints. Here, the formulations assign data points a variable relevance depending on their distance to the solution and on maximum-entropy weighting, with distance-minimizing schemes discussed as a special case. The resulting schemes consist of the minimization of a suitably defined free energy over phase space subject to compatibility and a time-discretized momentum conservation constraint. We present selected numerical tests that establish the convergence properties of both types of data-driven solvers and solutions.

In this paper, a novel method to determine the distribution of a random variable from a sample of data is presented. The approach is called the generalized kernel density maximum entropy method, because it adopts a kernel density representation of the target distribution, while its free parameters are determined through the principle of maximum entropy (ME). Here, the ME solution is determined by assuming that the available information is represented by *generalized* moments, which include the power and the fractional moments as subsets. The proposed method has several important features: (1) it is applicable to distributions with any kind of support; (2) it is computationally efficient, because the ME solution is obtained simply as a set of systems of linear equations; (3) it provides a good trade-off between bias and variance; and (4) it gives good estimates of the tails of the distribution, even for samples of small size. Moreover, the joint application of the generalized kernel density maximum entropy method with bootstrap resampling allows the definition of credible bounds of the target distribution. The method is first benchmarked through an example of stochastic dynamic analysis. Subsequently, it is used to evaluate the seismic fragility functions of a reinforced concrete frame from the knowledge of a small set of available ground motions.

This work addresses the numerical approximation of solutions to a dimensionless form of the Weertman equation, which models a steadily moving dislocation and is an important extension (with advection term) of the celebrated Peierls-Nabarro equation for a static dislocation. It belongs to the class of nonlinear reaction-advection-diffusion integro-differential equations with Cauchy-type kernel, thus involving an integration over an unbounded domain. In the Weertman problem, the unknowns are the shape of the core of the dislocation *and* the dislocation velocity. The proposed numerical method rests on a time-dependent formulation that admits the Weertman equation as its long-time limit. Key features are (1) time iterations are conducted by means of a new, robust, and inexpensive *Preconditioned Collocation Scheme* in the Fourier domain, which allows for *explicit* time evolution but amounts to implicit time integration, thus allowing for large time steps; (2) as the integration over the unbounded domain induces a solution with slowly decaying tails of important influence on the overall dislocation shape, the action of the operators at play is evaluated with *exact* asymptotic estimates of the tails, combined with discrete Fourier transform operations on a finite computational box of size *L*; (3) a specific device is developed to compute the moving solution in a comoving frame, to minimize the effects of the finite-box approximation. Applications illustrate the efficiency of the approach for different types of nonlinearities, with systematic assessment of numerical errors. Converged numerical results are found insensitive to the time step, and scaling laws for the combined dependence of the numerical error with respect to *L* and to the spatial step size are obtained. 
The method proves fast and accurate and could be applied to a wide variety of equations with moving fronts as solutions; notably, Weertman-type equations with the Cauchy-type kernel replaced by a fractional Laplacian.
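
Item (1) can be conveyed with a toy analogue: treating the stiff linear (here diffusive) part of a reaction-diffusion equation implicitly in Fourier space permits time steps far beyond the explicit stability limit. This sketch is ours and omits the Weertman-specific ingredients (nonlocal Cauchy kernel, tail corrections, comoving frame); the model equation u_t = u_xx + u − u³ and all parameters are assumed for illustration.

```python
import numpy as np

def semi_implicit_fourier(u0, dt, steps, L):
    # u_t = u_xx + u - u^3 on a periodic box of size L:
    # diffusion treated implicitly in Fourier space, reaction explicitly,
    # giving an explicit-in-practice update that tolerates large dt.
    n = u0.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
    u = u0.copy()
    for _ in range(steps):
        rhs = np.fft.fft(u) + dt * np.fft.fft(u - u ** 3)
        u = np.fft.ifft(rhs / (1 + dt * k ** 2)).real
    return u
```

With n = 128 points on a box of size L = 20, a fully explicit treatment of diffusion would require dt ≲ dx²/2 ≈ 0.012, whereas this update remains stable at dt = 0.5 and drives a perturbed initial state to the stable equilibrium u = 1.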

This special issue contains 12 papers on topology optimization that address research topics dealing with a variety of engineering approaches, ranging from theoretical approaches to industrial applications powering the most recent advances in topology optimization.

Three-dimensional hybrid-Trefftz stress finite elements for plates and shells are proposed. Two independent fields are approximated: stresses within the element and displacements on its boundary. The required stress field, derived from the Papkovitch-Neuber solution of the Navier equation and a priori satisfying the Trefftz constraint, is generated using homogeneous harmonic polynomials. A restriction on the polynomial degree in the coordinate measured along the thickness direction is imposed to reduce the number of independent terms. Explicit expressions of the corresponding independent polynomials are listed up to the fifth order. Illustrative applications to evaluate displacements and stresses are conducted with hexahedral hybrid-Trefftz stress element models. Hierarchical *p*- and *h*-refinement strategies are exploited in the numerical tests.

The stability and reflection-transmission properties of the bipenalty method are studied in application to explicit finite element analysis of one-dimensional contact-impact problems. It is known that the standard penalty method, where an additional stiffness term corresponding to contact boundary conditions is applied, adversely affects the stability limit of the finite element model. Generally, the critical time step size rapidly decreases with increasing penalty stiffness. Recent comprehensive studies have shown that the so-called bipenalty technique, which uses a mass penalty together with the standard stiffness penalty, preserves the critical time step size associated with the contact-free bodies. In this paper, the influence of the penalty ratio (ratio of stiffness and mass penalty parameters) on stability and reflection-transmission properties in one-dimensional contact-impact problems using the same material and mesh size for both domains is studied. The paper closes with numerical examples, which demonstrate the stability and reflection-transmission behavior of the bipenalty method in one-dimensional contact-impact and wave propagation problems of homogeneous materials.
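
The stability statement can be checked on a minimal model: for a 1-D bar discretized with linear elements and lumped masses, a pure stiffness penalty inflates the maximum generalized eigenvalue (and hence shrinks the critical time step), whereas adding a mass penalty with penalty ratio r = β_s/β_m below the unpenalized maximum eigenvalue keeps the spectrum bounded by that eigenvalue. The discretization and parameter values below are our own illustration, not taken from the paper.

```python
import numpy as np

def bar_eigs(n=10, ks=0.0, km=0.0):
    # 1-D bar, fixed left end, unit element stiffness and lumped unit masses;
    # penalties ks (stiffness) and km (mass) act on the contact DOF (right end).
    K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    K[-1, -1] = 1.0            # free right end
    m = np.ones(n)
    K[-1, -1] += ks            # stiffness penalty
    m[-1] += km                # mass penalty (bipenalty)
    # generalized eigenvalues of (K, M) with diagonal M
    A = K / np.sqrt(np.outer(m, m))
    return np.sort(np.linalg.eigvalsh(A))

lam0 = bar_eigs()
lam_max = lam0[-1]                   # unpenalized maximum eigenvalue
ks = 1.0e6
stiff_only = bar_eigs(ks=ks)         # standard penalty: spectrum blows up
r = 0.9 * lam_max                    # penalty ratio below lam_max
bipen = bar_eigs(ks=ks, km=ks / r)   # bipenalty: spectrum stays bounded
```

Since the critical time step of central differences scales as 2/√λ_max, the bounded spectrum of the bipenalized system is exactly what preserves the contact-free critical time step.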

The present work addresses a multiscale framework for fast-Fourier-transform–based computational homogenization. The framework considers the scale bridging between microscopic and macroscopic scales. While the macroscopic problem is discretized with finite elements, the microscopic problems are solved by means of fast Fourier transforms (FFTs) on periodic representative volume elements (RVEs). In such a multiscale scenario, the computation of the *effective properties* of the microstructure is crucial. While effective quantities in terms of stresses and deformations can be computed from surface integrals along the boundary of the RVE, the computation of the associated moduli is not straightforward. The key contribution of the present paper is the derivation and implementation of an *algorithmically consistent macroscopic tangent operator* that directly represents the effective moduli of the microstructure. The macroscopic tangent is derived by means of the classical Lippmann-Schwinger equation and can be computed from a simple system of linear equations. This is performed through an efficient FFT-based approach along with a conjugate gradient solver. The viability and efficiency of the method is demonstrated for a number of two- and three-dimensional boundary value problems incorporating linear and nonlinear elasticity as well as viscoelastic material response.
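
To make the Lippmann-Schwinger fixed point concrete at the RVE level, the sketch below solves a scalar (thermal conduction) analogue with the basic Moulinec-Suquet scheme rather than the elastic, conjugate-gradient-accelerated solver of the paper; names, grid size, and phase contrast are our own assumptions. For a periodic two-phase checkerboard, the effective conductivity should approach the exact duality result sqrt(k1·k2).

```python
import numpy as np

def fft_homogenize(k, k0=None, iters=200):
    # Basic Lippmann-Schwinger fixed point for scalar conductivity:
    # e_{n+1} = E - Gamma0 * ((k - k0) * e_n), evaluated in Fourier space,
    # with macroscopic gradient E = (1, 0) and reference medium k0.
    n = k.shape[0]
    if k0 is None:
        k0 = 0.5 * (k.min() + k.max())
    f = np.fft.fftfreq(n) * n                 # integer frequencies
    x1, x2 = np.meshgrid(f, f, indexing="ij")
    xi2 = x1 ** 2 + x2 ** 2
    xi2[0, 0] = 1.0                           # avoid 0/0; mean mode set below
    e1, e2 = np.ones_like(k), np.zeros_like(k)
    for _ in range(iters):
        t1h = np.fft.fft2((k - k0) * e1)      # polarization, Fourier side
        t2h = np.fft.fft2((k - k0) * e2)
        proj = (x1 * t1h + x2 * t2h) / (k0 * xi2)
        e1h, e2h = -x1 * proj, -x2 * proj
        e1h[0, 0] = n * n                     # enforce mean gradient E = (1, 0)
        e2h[0, 0] = 0.0
        e1, e2 = np.fft.ifft2(e1h).real, np.fft.ifft2(e2h).real
    return (k * e1).mean()                    # effective conductivity (xx)

# two-phase checkerboard, contrast 4:1
n = 32
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
k = np.where((ii < n // 2) ^ (jj < n // 2), 1.0, 4.0)
k_eff = fft_homogenize(k)
```

The result should lie between the harmonic mean (1.6) and the arithmetic mean (2.5) of the phases, close to the exact checkerboard value 2.0.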

Since the pioneering works on digital image correlation, a huge effort has been devoted to the elaboration of strategies coupling measurements with numerical simulations. The main achievement of such coupling is the identification of constitutive law parameters from experimental configurations yielding heterogeneous strain/stress fields. The main limitation of this framework is that a constitutive law must be postulated, which sometimes turns out to be far from straightforward. To circumvent this limitation, a methodology is proposed to build a parametric description not only of the displacement field for digital image correlation but also of the local stress. With no assumption on the constitutive relation, the method allows for estimating displacement, strain, and stress fields. The capability of the method is illustrated in different situations.

With the fast development of additive manufacturing technology, topology optimization involving multiple materials has received ever increasing attention. Traditionally, this kind of optimization problem is solved within the implicit solution framework by using the Solid Isotropic Material with Penalization or level set method. This treatment, however, will inevitably lead to a large number of design variables, especially when many types of materials are involved and 3-dimensional (3D) problems are considered. This is because, for each type of material, a corresponding density field/level set function defined on the entire design domain must be introduced to describe its distribution. In the present paper, a novel approach for topology optimization with multiple materials is established based on the Moving Morphable Component framework. With this approach, topology optimization problems with multiple materials can be solved with far fewer design variables and degrees of freedom. Numerical examples provided demonstrate the effectiveness of the proposed approach.

Weight-adjusted inner products are easily invertible approximations to weighted *L*^{2} inner products. These approximations can be paired with a discontinuous Galerkin (DG) discretization to produce a time-domain method for wave propagation which is low storage, energy stable, and high-order accurate for arbitrary heterogeneous media and curvilinear meshes. In this work, we extend weight-adjusted DG methods to the case of matrix-valued weights, with the linear elastic wave equation as an application. We present a DG formulation of the symmetric form of the linear elastic wave equation, with upwind-like dissipation incorporated through simple penalty fluxes. A semidiscrete convergence analysis is given, and numerical results confirm the stability and high-order accuracy of weight-adjusted DG for several problems in elastic wave propagation.
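
The underlying approximation can be demonstrated in one dimension with a scalar weight: the inverse of the weighted mass matrix M_w is replaced by M⁻¹ M_{1/w} M⁻¹, so only the reference mass matrix M ever needs factorizing. The basis, weight, and polynomial degree below are our own illustrative choices (the paper's contribution is the extension to matrix-valued weights).

```python
import numpy as np

def mass_matrices(p, w, nq=20):
    # Legendre basis of degree p on [-1, 1], Gauss-Legendre quadrature.
    x, wq = np.polynomial.legendre.leggauss(nq)
    V = np.polynomial.legendre.legvander(x, p)
    M   = V.T @ (wq[:, None] * V)              # reference mass matrix
    Mw  = V.T @ ((wq * w(x))[:, None] * V)     # weighted mass matrix
    M1w = V.T @ ((wq / w(x))[:, None] * V)     # mass matrix weighted by 1/w
    return M, Mw, M1w

p = 4
w = lambda x: 2.0 + 0.3 * np.sin(x)            # smooth, positive weight
M, Mw, M1w = mass_matrices(p, w)
exact  = np.linalg.inv(Mw)                     # true weighted inverse
approx = np.linalg.inv(M) @ M1w @ np.linalg.inv(M)  # weight-adjusted inverse
err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

For a constant weight the weight-adjusted inverse is exact; for smooth weights the error decreases spectrally with the polynomial degree, which is what makes the approximation compatible with high-order DG accuracy.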

This work concerns an application of the Tsallis entropy to the homogenization problem of fiber-reinforced and particle-filled composites with random material and geometrical characteristics. Calculation of the effective material parameters is done with two alternative homogenization methods—the first is based upon the deformation energy of the Representative Volume Element (RVE) subjected to a few specific deformations, while the second explicitly uses the so-called homogenization functions determined under periodic boundary conditions imposed on this RVE. Probabilistic homogenization is carried out with three concurrent non-deterministic methods, namely Monte-Carlo simulation, the iterative generalized stochastic perturbation technique, and the semi-analytical approach. The last two approaches are based on the Least Squares Method with a polynomial basis of statistically optimized order—this basis serves for further differentiation in the 10th-order stochastic perturbation technique, while the semi-analytical method uses it in probabilistic integrals. These three approaches are all implemented as extensions of the traditional Finite Element Method (FEM) with contrasting mesh sizes, and they serve in computations of the Tsallis entropies of the homogenized tensor components as functions of the input coefficient of variation.

In this paper, a new 4-node hybrid stress tetrahedral element is proposed using a node-based smoothing technique. The conditions required of the hybrid stress field are summarized; in particular, the field should be continuous to improve the performance of the constant-strain tetrahedral element. Nodal stress is approximated by the node-based smoothing technique, and the stress field is interpolated with standard shape functions. This stress field is linear within each element and continuous across elements. The stress field is expressed in terms of nodal displacements with no additional variables. The element stiffness matrix is calculated using the Hellinger-Reissner functional, which guarantees that the strain field from the displacement field equals that from the stress field in a weak sense. The performance of the proposed element is verified through several numerical examples.

This manuscript presents the formulation and implementation of a hybrid multiscale integration scheme for multiscale problems that exhibit different scale separation characteristics in different directions. The proposed approach uses the key ideas of variational multiscale enrichment in directions that exhibit poor scale separation and computational homogenization in directions with good scale separability. The proposed approach is particularly attractive for surface degradation problems in structures operating in the presence of aggressive environmental agents. Formulated in the context of variational multiscale principles, we develop a novel integration scheme that takes advantage of homogenization-like integration along directions that exhibit scale separation. The proposed integration scheme is applied to the reduced-order variational multiscale enrichment method to arrive at a computationally efficient multiscale solution strategy for surface degradation problems. Numerical verifications are performed to verify the implementation of the hybrid multiscale integrator. The results of the verifications reveal high accuracy and computational efficiency compared with direct reduced-order variational multiscale enrichment simulations. A coupled transport-thermomechanical analysis is presented to demonstrate the capability of the proposed computational framework.

This paper presents a return mapping algorithm for an elastoplastic/damage model that couples the constitutive equations of an unconventional plasticity model with the phenomenological description of ductile damage in the continuum damage mechanics framework. This approach combines the advantages of describing the degradation of mechanical properties using the Lemaitre model with the realistic accumulation of plastic deformation in cyclic mobility problems from the subloading surface model. Most damage models for ductile failure focus on monotonic loading conditions, whereas the present model aims to extend the investigation to cyclic loading and low-cycle fatigue problems. A cutting-plane algorithm that considers both isotropic and kinematic hardening laws is adopted to describe the material behavior. Two simple numerical studies show that the elastoplastic and damage model is implemented correctly, displaying a quadratic rate of convergence for local equilibria and accurate solutions, even for large prescribed strain increments.

The greedy algorithm has been widely adopted for the point selection procedure of radial basis function–based mesh deformation. However, in large deformation simulations with thousands of points selected, greedy point selection becomes too expensive and thus a performance bottleneck. To improve the efficiency of the point selection procedure, a parallel multiselection greedy method is developed in this paper. Multiple points are selected at each step to accelerate the convergence of the greedy algorithm. In addition, 2 strategies are presented to determine the number of points selected per step. The parallelization of the greedy point selection is realized on the basis of a master-slave model, and a hybrid decomposition algorithm is proposed to address the load imbalance problem. Numerical benchmarks show that both the multiselection method and the parallelization clearly improve the point selection efficiency. Specifically, total speedups of 20 and 55 are obtained for the 3D undulating fish with a 10^{6}-cell mesh and the 3D rotating hydrofoil with an 11-million-cell mesh, respectively.
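
A serial sketch of the multiselection idea (ours — the paper's parallel master-slave implementation and selection-number strategies are not reproduced): at each step, interpolate the known displacements with the currently selected RBF centers, then add the `batch` worst-approximated points at once instead of a single point. The Gaussian kernel, tolerance, and test data are assumed for illustration.

```python
import numpy as np

def rbf_interp(centers, values, pts, eps=3.0):
    # Gaussian RBF interpolation (kernel and shape parameter are illustrative)
    phi = lambda r: np.exp(-(eps * r) ** 2)
    A = phi(np.linalg.norm(centers[:, None] - centers[None, :], axis=-1))
    B = phi(np.linalg.norm(pts[:, None] - centers[None, :], axis=-1))
    return B @ np.linalg.solve(A, values)

def greedy_select(pts, disp, tol=1e-3, batch=2):
    # multiselection greedy: start from the largest displacement, then add
    # the `batch` worst-approximated points per step until tol is met
    sel = [int(np.argmax(np.abs(disp)))]
    while True:
        err = np.abs(rbf_interp(pts[sel], disp[sel], pts) - disp)
        if err.max() < tol or len(sel) == len(pts):
            return sel
        added = 0
        for i in np.argsort(err)[::-1]:
            if i not in sel:
                sel.append(int(i))
                added += 1
            if added == batch or len(sel) == len(pts):
                break
```

Adding several points per step reduces the number of (increasingly expensive) interpolation solves, which is the serial analogue of the speedup the paper reports.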

The Koiter-Newton (KN) method is a combination of local multimode polynomial approximations inspired by Koiter's initial postbuckling theory and global corrections using the standard Newton method. In the original formulation, the local polynomial approximation, called a reduced-order model, is used to make significantly more accurate predictions compared to the standard linear prediction used in conjunction with the Newton method. The correction to the exact equilibrium path relied exclusively on the Newton-Raphson method using the full model.

In this paper, we propose a modified Newton-type KN method to trace the geometrically nonlinear response of structures. The developed predictor-corrector strategy is applied to each predicted solution of the reduced-order model. The reduced-order model can also be used in the correction phase, and the exact full nonlinear model is applied only to calculate force residuals. Remainder terms in both the displacement expansion and the reduced-order model are properly accounted for and constantly updated during correction. The same augmented finite element model system is used both for the construction of the reduced-order model and for the correction iterations. Hence, the developed method can be seen as a particular modified Newton method with a constant iteration matrix over a single KN step. This significantly reduces the computational cost of the method. As a side product, the method has better error control, leading to more robust step size adaptation strategies. Numerical results demonstrate the effectiveness of the method in treating nonlinear buckling problems.

In this paper, the coupling of the improved interpolating element-free Galerkin (IIEFG) method and the variable-order infinite acoustic wave envelope element (WEE) method is studied. A coupled IIEFG-WEE method for computing sound radiation is proposed to exploit their respective advantages while avoiding their disadvantages. The coupling is achieved by constructing a hybrid shape function that ensures continuity and compatibility on the interface between the IIEFG and WEE domains. In the IIEFG domain, the improved interpolating moving least-squares (IIMLS) method is used to form shape functions that satisfy the Kronecker delta condition while allowing nonsingular weight functions. The impacts of the size of the influence domain and the shape parameter on the performance of this coupled method are investigated. The numerical results show that the coupled IIEFG-WEE method takes full advantage of both the IIEFG and WEE methods: it not only achieves higher accuracy but also converges faster than the conventional coupling of the finite element method with the WEE method. The experimental results show that the method is very flexible for acoustic radiation prediction in the infinite domain.

This paper deals with model-order reduction of parametric partial differential equations (PPDEs). More specifically, we consider the problem of finding a good approximation subspace of the solution manifold of the PPDE when only partial information on the latter is available. We assume that 2 sources of information are available: (*a*) a “rough” prior knowledge taking the form of a manifold containing the target solution manifold and (*b*) partial linear measurements of the solutions of the PPDE (the term partial refers to the fact that observation operators cannot be inverted). We provide and study several tools to derive good approximation subspaces from these 2 sources of information. We first identify the best worst-case performance achievable in this setup and propose simple procedures to approximate the corresponding optimal approximation subspace. We then provide, in a simplified setup, a theoretical analysis relating the achievable reduction performance to the choice of the observation operator and the prior knowledge available on the solution manifold.

Beam-to-beam contact can be dually described depending on the scale as contact between curves or contact between surfaces. The surface-to-surface algorithm requires a computationally expensive continuum description of the beams. The curve-to-curve contact algorithm is generated from the corresponding closest point projection procedure determining a minimal distance between curves and involves less expensive beam finite elements.

The existence and uniqueness criterion of the closest point projection procedure, formulated in earlier publications, is analyzed in detail, especially in the case of multiple solutions for “parallel curves.” It is shown that the curve-to-curve contact algorithm cannot be applied to general parallel curves, which can be generated in an arbitrary range of angles between the tangent lines. A new curve-to-solid beam contact algorithm is developed here. This algorithm requires a special solid-beam finite element and combines the advantages of surface contact with the flexibility of application to beam-to-beam contact.

Further, a special case with “parallel tangent” vectors, corresponding to the classical Hertzian contact problem, is considered. It is found numerically that the corresponding length of the contact zone correlates well with Hertz theory. A possible application of the developed strategy to wire ropes is analyzed numerically.

The unsymmetric finite element method is a promising technique for producing distortion-immune finite elements. In this work, a simple but robust 4-node 12-DOF unsymmetric quadrilateral membrane element is formulated. The test function of this new element is determined by a concise isoparametric-based displacement field that is enriched by Allman-type drilling degrees of freedom. Meanwhile, a rational stress field, instead of the displacement field of the original unsymmetric formulation, is directly adopted as the element's trial function. This stress field is obtained from the analytical solutions of the plane stress/strain problem and the quasi-conforming technique; thus, it a priori satisfies the related governing equations. Numerical tests show that the new unsymmetric element, named US-Q4θ, exhibits excellent capabilities in predicting both displacement and stress, in most cases superior to other existing 4-node element models. In particular, it still works very well on severely distorted meshes, even when the element shape deteriorates into a concave quadrangle or a degenerate triangle.

In this paper, we present a new method for inserting several triangulated surfaces into an existing tetrahedral mesh generated by the meccano method. The result is a conformal mesh in which each inserted surface is approximated by a set of faces of the final tetrahedral mesh. First, the tetrahedral mesh is refined around the inserted surfaces to capture their geometric features. Second, each immersed surface is approximated by a set of faces from the tetrahedral mesh. Third, following a novel approach, the nodes of the approximated surfaces are mapped to the corresponding immersed surface. Fourth, we untangle and smooth the mesh by optimizing a regularized shape distortion measure for tetrahedral elements, in which we move all the nodes of the mesh while restricting the movement of edge and surface nodes to the entities they belong to. The refinement process allows the immersed surfaces to be approximated for any initial meccano tetrahedral mesh. Moreover, the proposed projection method avoids computationally expensive geometric projections. Finally, the applied simultaneous untangling and smoothing process delivers a high-quality mesh and ensures that the immersed surfaces are interpolated. Several examples are presented to assess the properties of the proposed method.

Fractures tend to propagate along the least resistance paths, and homogeneous-based models may not reliably predict the true crack paths, as they cannot capture the nonlinearities and local damage induced by local inhomogeneity. This paper presents a stochastic numerical modelling framework for simulating fracturing in natural heterogeneous materials. Fracture propagation is modelled using Francfort and Marigo's variational theory, and randomness in the material properties is introduced via random field theory. A computational strategy based on a nonlinear dimensionality reduction framework is developed that maps the domain of spatially variable material properties to a low-dimensional space. This strategy allows us to predict the most probable fracture patterns leading to failure by an optimisation algorithm. The reliability and performance of the developed methodology are examined through simulation of experimental case studies and comparison of the predictions with measured data.

Dynamic relaxation is an iterative method to solve nonlinear systems of equations, which is frequently used for form finding and analysis of structures that undergo large displacements. It is based on the solution of a fictitious dynamic problem where the vibrations of the structure are traced by means of a time integration scheme until a static equilibrium is reached. Fictitious values are used for the mass and damping parameters. Heuristic rules exist to determine these values in such a way that the time integration procedure converges rapidly without becoming unstable. Central to these heuristic rules is the assumption that the highest convergence rate is achieved when the ratio of the highest and lowest eigenfrequency of the structure is minimal. This short communication shows that all eigenfrequencies become identical when a fictitious mass matrix proportional to the stiffness matrix is used. If, in addition, specific values are used for the fictitious damping parameters and the time integration step, the dynamic relaxation method becomes completely equivalent to the Newton-Raphson method. The Newton-Raphson method can therefore be regarded as a specific form of dynamic relaxation. This insight may help to interpret and improve nonlinear solvers based on dynamic relaxation and/or the Newton-Raphson method.
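
The equivalence is easy to check on a toy system. One convenient instance of the parameter choices (ours, for illustration; the paper derives the general conditions): take the fictitious mass matrix M = Δt²K(u), proportional to the tangent stiffness, and fully damp the motion so that each step starts from rest. A single central-difference step then reproduces the Newton-Raphson update exactly, independent of Δt.

```python
import numpy as np

def residual(u):
    # internal minus external force for two uncoupled cubic springs (illustrative)
    f_ext = np.array([0.5, 1.0])
    return np.array([u[0] + 0.1 * u[0] ** 3,
                     2.0 * u[1] + 0.2 * u[1] ** 3]) - f_ext

def tangent(u):
    # tangent stiffness K(u) of the cubic springs
    return np.diag([1.0 + 0.3 * u[0] ** 2, 2.0 + 0.6 * u[1] ** 2])

def dr_step(u, dt=0.7):
    # one central-difference step starting from rest, fictitious mass M = dt^2 K(u)
    M = dt ** 2 * tangent(u)
    v_half = -dt * np.linalg.solve(M, residual(u))  # velocity after half step
    return u + dt * v_half                          # = u - K(u)^{-1} r(u)

def newton_step(u):
    return u - np.linalg.solve(tangent(u), residual(u))
```

Since M⁻¹ = K⁻¹/Δt², the Δt factors cancel and the dynamic relaxation step collapses to the Newton step, mirroring the equivalence stated above.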

This paper presents a new concurrent simulation approach that couples isogeometric analysis (IGA) with the meshfree method for studying crack problems. In the present method, the overall physical domain is divided into 2 subdomains that are formulated with the IGA and meshfree methods, respectively. In the meshfree subdomain, the moving least squares shape function is adopted for the discretization of the area around crack tips, and the IGA subdomain is adopted in the remaining area. Meanwhile, the interface region between the 2 subdomains is represented by coupled shape functions. The resulting shape function, which comprises both IGA and meshfree shape functions, satisfies the consistency condition, thus ensuring convergence of the method. Moreover, meshfree shape functions augmented with enriched basis functions are presented to capture the singular stress fields near a crack tip. The proposed approach is also applied to simulate crack propagation under mixed-mode conditions. Several numerical examples are studied to demonstrate the use and robustness of the proposed method.

A methodology for the calculation of gradients with respect to design parameters in general fluid-structure interaction problems is presented. It is based on fixed-point iterations on the adjoint variables of the coupled system using algorithmic differentiation. This removes the need for the construction of the analytic Jacobian for the coupled physical problem, which is the usual limitation for the computation of adjoints in most realistic applications. The formulation is shown to be amenable to partitioned solution methods for the adjoint equations. It also poses no restrictions to the nonlinear physics in either the fluid or structural field, other than the existence of a converged solution to the primal problem from which to compute the adjoints. We demonstrate the applicability of this procedure and the accuracy of the computed gradients on coupled problems involving viscous flows with geometrical and material nonlinearities in the structural domain.

Two stable approximation space configurations are treated for the mixed finite element method for elliptic problems based on curved meshes. Their choices are guided by the property that, in the master element, the image of the flux space by the divergence operator coincides with the potential space. By using static condensation, the sizes of global condensed matrices, which are proportional to the dimension of border fluxes, are the same in both configurations. The meshes are composed of different topologies (tetrahedra, hexahedra, or prisms). Simulations using asymptotically affine uniform meshes, exactly fitting a spherical-like region, and constant polynomial degree distribution *k*, show *L*^{2} errors of order *k*+1 or *k*+2 for the potential variable, while keeping order *k*+1 for the flux in both configurations. The first case corresponds to RT(*k*) and BDFM(*k*+1) spaces for hexahedral and tetrahedral meshes, respectively, but holding for prismatic elements as well. The second case, further incrementing the order of approximation of the potential variable, holds for the three element topologies. The case of *hp*-adaptive meshes is considered for a problem modelling a porous media flow around a cylindrical horizontal well with elliptical drainage area. The effect of parallelism and static condensation on CPU time reduction is illustrated.

In this paper, we propose a novel unfitted finite element method for the simulation of multiple body contact. The computational mesh is generated independently of the geometry of the interacting solids, which can be arbitrarily complex. The key novelty of the approach is the combination of elements of the CutFEM technology, namely, the enrichment of the solution field via the definition of overlapping fictitious domains with a dedicated penalty-type regularisation of discrete operators and the LaTIn hybrid-mixed formulation of complex interface conditions. Furthermore, the novel P1-P1 discretisation scheme that we propose for the unfitted LaTIn solver is shown to be stable, robust, and optimally convergent with mesh refinement. Finally, the paper introduces a high-performance 3D level set/CutFEM framework for the versatile and robust solution of contact problems involving multiple bodies of complex geometries, with more than 2 bodies interacting at a single point.

We present 2 adaptive refinement techniques, namely, adaptive local refinement and adaptive hierarchical refinement, for isogeometric analysis. An element-wise point of view is adopted, exploiting Bézier extraction, which facilitates the implementation of adaptive refinement in isogeometric analysis. Locally refined and hierarchical T-splines are used for the description of the geometry as well as for the approximation of the solution space in the analysis. The refinement is conducted with the aid of a subdivision operator, which is computed by again exploiting the Bézier extraction operator. The concept and algorithm of an element-based adaptive isogeometric analysis are illustrated. Numerical examples are given to examine the accuracy, the convergence, and the condition number.

This paper presents a framework for r-adaptive quasi-static configurational force (CF) brittle crack propagation, cast within a discontinuous Galerkin (DG) symmetric interior penalty (SIPG) finite element scheme. Cracks are propagated in discrete steps, with a staggered algorithm, along element interfaces, which align themselves with the predicted crack propagation direction. The key novelty of the work is the exploitation of the DG face stiffness terms existing along element interfaces to propagate a crack in a mesh-independent r-adaptive quasi-static fashion, driven by the CF at the crack tip. This adds no new degrees of freedom to the data structure. Additionally, as DG methods have element-specific degrees of freedom, a geometry-driven p-adaptive algorithm is also easily included, allowing for more accurate solutions of the CF on a moving crack front. Further, for nondeterminate systems, we introduce an average boundary condition that restrains rigid body motion, leading to a determinate system. To the authors' knowledge, this is the first time that such a boundary condition has been described. The proposed formulation is validated against single and multiple crack problems with single- and mixed-mode cracks, demonstrating the predictive capabilities of the method.

In this paper, a level-set-based method is presented to deal with the multi-material topology optimization of compliant mechanisms with stress constraints. A novel stress-based multi-material topology optimization model of compliant mechanisms is proposed. In this model, the multi-material level set topology description model and the separable stress interpolation scheme are adopted. The weighted sum method is used to deal with the multi-objective optimization of the output displacement and compliance of compliant mechanisms. The penalty of stresses is also considered in the objective function to control the local stress level in different materials. To solve the optimization problem, the parametric level set method is employed, and the sensitivity analysis is conducted. Application of the method is demonstrated by 2 numerical examples. Results show that the multi-material structures without undesirable de facto hinges can be obtained. The output displacement and compliance of the compliant mechanisms are optimized, and stress constraints in different materials are simultaneously satisfied.

This paper introduces multivariate input-output models to predict the errors and basis dimensions of local parametric Proper Orthogonal Decomposition reduced-order models. We refer to these mappings as the multivariate predictions of local reduced-order model characteristics (MP-LROM) models. We use Gaussian processes and artificial neural networks to construct approximations of these multivariate mappings. Numerical results with a viscous Burgers model illustrate the performance and potential of the machine learning-based regression MP-LROM models to approximate the characteristics of parametric local reduced-order models. The predicted reduced-order model errors are compared against the predictions of the multifidelity correction and reduced-order model error surrogates methods, whereas the predicted reduced-order dimensions are tested against the standard method based on the spectrum of the snapshots matrix. Since the MP-LROM models incorporate more features and elements to construct the probabilistic mappings, they achieve more accurate results. However, for high-dimensional parametric spaces, the MP-LROM models might suffer from the curse of dimensionality. Scalability challenges of MP-LROM models and feasible ways of addressing them are also discussed in this study.
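The regression step at the heart of such error models — learning a map from parameter/feature pairs to a ROM error — can be illustrated with a minimal kernel (Gaussian-process-style) interpolant. Everything below is a hedged sketch on synthetic data: the training set, the kernel length scale, and the invented "error surface" are illustrative assumptions; the actual MP-LROM models use Gaussian processes and neural networks trained on real reduced-order-model errors.

```python
import math

def rbf(x, y, length=0.2):
    # squared-exponential kernel between two parameter points
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2.0 * length ** 2))

def solve(A, b):
    # dense Gaussian elimination with partial pivoting (no external libraries)
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit(X, y, noise=1e-6):
    # kernel interpolant: predict(x) = k(x, X) @ K^{-1} y
    K = [[rbf(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(X)] for i, xi in enumerate(X)]
    alpha = solve(K, y)
    return lambda x: sum(a * rbf(x, xi) for a, xi in zip(alpha, X))

# synthetic "ROM error" data: error grows away from the sampled parameter mu = 0.5
X = [(mu, d) for mu in (0.1, 0.3, 0.5, 0.7, 0.9) for d in (0.2, 0.5, 0.8)]
y = [abs(mu - 0.5) * (1.0 - d) + 0.01 for mu, d in X]
predict = fit(X, y)
print(predict((0.5, 0.8)))  # interpolates the training datum, close to 0.01
```

The same fitted map can then be queried at unsampled parameter values, which is the multiquery use-case the abstract targets.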

The paper deals with the use of model order reduction within a posteriori error estimation procedures in the context of the finite element method. More specifically, it focuses on the constitutive relation error concept, which has been widely used over the last 40 years for FEM verification of computational mechanics models. A technical key point when using the constitutive relation error is the construction of admissible fields, and we propose here to use the proper generalized decomposition to facilitate this task. In addition to making the implementation into commercial FE software easier, it is shown that the use of proper generalized decomposition enables the verification procedure to be optimized and yields accurate upper bounds on the discretization error at a reasonable cost. Numerical illustrations are presented to assess the performance of the proposed approach.

A detailed and systematic analysis is performed on the local and global properties of the recently developed harmonic polynomial cell (HPC) method, a very accurate and efficient field solver for problems governed by the Laplace equation. At the local cell level, a simple rule is identified for the proper choice of harmonic polynomials in the local representation of the velocity potential in cells with symmetry properties. The local solution error, its convergence rate, its dependence on the cell topology, its distribution inside the cell, and its features across cells with different dimensions are carefully examined with relevant findings for HPC numerical implementations. At the global level, the error convergence rate is analytically estimated in terms of error contributions from the boundary conditions and from inside the liquid domain. In most cases, the error associated with boundary conditions dominates the global error. In order to minimize it, Quadtree grid strategies or high-order local expressions of the velocity potential are proposed for cells near critical boundary portions. To model accurately the boundary conditions on rigid or deformable surfaces with generic geometries, 3 different grid strategies are proposed by adopting concepts of immersed boundary method and overlapping grids. They are comparatively studied for a circular rigid cylinder in infinite fluid and for the propagation of a free-surface wave. Then, an immersed boundary strategy, using numerical choices suggested in this paper, is successfully compared against a fully nonlinear boundary element method for the case of a surface-piercing circular cylinder heaving in otherwise calm water.
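The local HPC representation can be made concrete with a small self-contained sketch: on a symmetric 3 × 3 stencil, the potential in a cell is written as a sum of eight harmonic polynomials (consistent with the symmetry rule mentioned above, the imaginary part of z⁴ is excluded — it vanishes at all eight stencil nodes), the coefficients are fitted to nodal values, and the interpolant is evaluated at the cell centre. The stencil spacing and the harmonic test potential exp(x)cos(y) are illustrative assumptions, not taken from the paper.

```python
import math

def solve(A, b):
    # dense Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# eight harmonic polynomials (Re/Im of z^n) for a symmetric 9-point cell
basis = [
    lambda x, y: 1.0,
    lambda x, y: x,
    lambda x, y: y,
    lambda x, y: x * y,
    lambda x, y: x * x - y * y,
    lambda x, y: x ** 3 - 3 * x * y ** 2,
    lambda x, y: 3 * x ** 2 * y - y ** 3,
    lambda x, y: x ** 4 - 6 * x ** 2 * y ** 2 + y ** 4,
]

h = 0.1  # illustrative stencil spacing
nodes = [(-h, -h), (0, -h), (h, -h), (-h, 0), (h, 0), (-h, h), (0, h), (h, h)]
phi = lambda x, y: math.exp(x) * math.cos(y)  # a harmonic test potential

A = [[p(x, y) for p in basis] for (x, y) in nodes]
c = solve(A, [phi(x, y) for (x, y) in nodes])
center = c[0]  # all non-constant basis functions vanish at (0, 0)
print(abs(center - phi(0.0, 0.0)))  # tiny interpolation error at the cell centre
```

The high accuracy at the centre comes from the representation being exactly harmonic, which is the property the paper's local error analysis exploits.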

The paper presents a novel model order reduction technique for large-scale linear parameter-varying (LPV) systems. The approach is based on decoupling the original dynamics into smaller dimensional LPV subsystems that can be independently reduced by parameter-varying reduction methods. The decomposition starts with the construction of a modal transformation that separates the modal subsystems. Hierarchical clustering is then applied to collect the dynamically similar modal subsystems into larger groups. The resulting parameter-varying subsystems are then independently reduced. This approach substantially differs from most of the previously proposed LPV model reduction techniques, since it performs manipulations on the LPV model itself, instead of on a set of linear time-invariant models defined at fixed scheduling parameter values. Therefore, the interpolation, which is often a challenging part in reduction techniques, is inherently solved. The applicability of the developed algorithm is thoroughly investigated and demonstrated by numerical case studies.

This paper focuses on topology optimization for linear transient thermo-mechanical problems. The latter are, for example, encountered for extreme precision tools and instrumentation. Due to the transient nature, a standard adjoint sensitivity analysis will result in a backward transient analysis for the adjoint variables, leading to both storage and computational inefficiencies. A method is proposed that rigorously eliminates the backward transient integration for the adjoint sensitivity analysis. At its basis is a model-order reduction technique, which relies on a reduced thermal modal basis combined with static correction. The modal amplitudes can be readily obtained semi-analytically using simple convolutions. This accurate but reduced-order model is the starting point for an adjoint sensitivity analysis. Via a tactical selection of adjoint variables, the backward transient analysis for the adjoints is completely eliminated, while computational efficiency and consistency are maintained. The effectiveness of the resulting adjoint sensitivities and their application in topology optimization are demonstrated on the basis of several test examples.

We present a three-dimensional vector level set method coupled to a recently developed stable extended finite element method (XFEM). We further investigate a new enrichment approach for XFEM adopting discontinuous linear enrichment functions in place of the asymptotic near-tip functions. Through the vector level set method, level set values for propagating cracks are obtained via simple geometrical operations, eliminating the need for the solution of differential evolution equations. The first XFEM variant ensures optimal convergence rates by means of geometrical enrichment, ie, the use of enriched elements in a fixed volume around the crack front, without giving rise to conditioning problems. The linear enrichment approach significantly simplifies implementation and reduces the computational cost associated with numerical integration, while providing nonoptimal convergence rates similar to standard finite elements. The 2 discretization schemes are tested for different benchmark problems, and their combination with the vector level set method is verified for nonplanar crack propagation problems.

A novel Lagrangian gradient smoothing method (L-GSM) is developed to solve “solid-flow” (flow media with material strength) problems governed by the Lagrangian form of the Navier-Stokes equations. It is a particle-like method, similar to the smoothed particle hydrodynamics (SPH) method but without the so-called tensile instability that has existed in SPH since its inception. The L-GSM uses a gradient smoothing technique to approximate the gradient of the field variables, based on the standard GSM that has been found to work well with Euler grids for general fluids. The Delaunay triangulation algorithm is adopted to update the connectivity of the particles, so that supporting neighboring particles can be determined for accurate gradient approximations. Special techniques are also devised for the treatment of 3 types of boundaries: no-slip solid boundary, free-surface boundary, and periodic boundary. An advanced GSM operation for a better consistency condition is then developed. The tensile stability condition of L-GSM is investigated through von Neumann stability analysis as well as numerical tests. The proposed L-GSM is validated using benchmark examples of incompressible flows, including the Couette flow, Poiseuille flow, and 2D shear-driven cavity. It is then applied to solve a practical problem of solid flows: the natural failure process of soil and the resultant soil flows. The numerical results are compared with theoretical solutions, experimental data, and other numerical results by SPH and FDM to further evaluate the L-GSM performance. It shows that the L-GSM scheme can give very accurate results for all these examples. Both the theoretical analysis and the numerical testing results demonstrate that the proposed L-GSM approach restores first-order accuracy unconditionally and does not suffer from the tensile instability. It is also shown that the L-GSM is much more computationally efficient than SPH, especially when a large number of particles are employed in the simulation.
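The gradient smoothing operation itself is easiest to see in one dimension, where integrating over a smoothing domain around a particle and applying the divergence theorem reduces the gradient to a difference of midpoint-averaged field values. The particle positions and test field below are invented for illustration; the paper's 2D/3D scheme additionally relies on Delaunay connectivity and the advanced consistency-restoring operation, which this sketch omits.

```python
def gsm_gradient(x, u):
    """1D gradient smoothing: for each interior particle i, the smoothing
    domain is [x_{i-1/2}, x_{i+1/2}]; by the divergence theorem the smoothed
    gradient is the difference of the midpoint-averaged field values divided
    by the smoothing-domain length."""
    g = []
    for i in range(1, len(x) - 1):
        xl, xr = 0.5 * (x[i - 1] + x[i]), 0.5 * (x[i] + x[i + 1])
        ul, ur = 0.5 * (u[i - 1] + u[i]), 0.5 * (u[i] + u[i + 1])
        g.append((ur - ul) / (xr - xl))
    return g

x = [0.0, 0.15, 0.32, 0.5, 0.71, 0.88, 1.0]  # irregular particle positions
u = [xi * xi for xi in x]                     # field u = x^2, exact du/dx = 2x
g = gsm_gradient(x, u)
err = max(abs(gi - 2.0 * x[i + 1]) for i, gi in enumerate(g))
print(err)  # small: the operator stays first-order consistent on irregular particles
```

On a uniform particle distribution the same formula collapses to the second-order central difference, which is why accuracy degrades gracefully rather than catastrophically as particles disorder.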

This paper presents a (higher-order) finite element approach for the simulation of heat diffusion and thermoelastic deformations in NC-milling processes. The continuous material removal inherent in the process is taken into account in the simulation via continuous removal-dependent refinements of a paraxial hexahedron base-mesh covering a given workpiece. These refinements rely on isotropic bisections of these hexahedrons, along with subdivisions of the latter into tetrahedrons and pyramids, in correspondence to a milling surface triangulation obtained from the application of the marching cubes algorithm. The resulting mesh is used to define an element-wise characteristic function for the milling-dependent workpiece within the paraxial hexahedron base-mesh. Using this characteristic function, a (higher-order) fictitious domain method is used to compute the heat diffusion and thermoelastic deformations, where the corresponding ansatz spaces are defined for some hexahedron-based refinement of the base-mesh. Numerical experiments, compared with real physical experiments, demonstrate the applicability of the proposed approach to predicting deviations of the milled workpiece from its designed shape caused by thermoelastic deformations during the process.

We study time integration schemes for discontinuous Galerkin discretizations of problems with moving immersed interfaces. Two approaches have been discussed in the literature so far: splitting schemes and space-time methods. Splitting schemes are cheap and easy to implement, but are non-conservative, inherently limited to low orders of accuracy, and require extremely small time steps. Space-time methods, on the other hand, are conservative, allow for large time steps, and generalize to arbitrary orders of accuracy. However, these advantages come at the expense of a severe growth in the size of the systems to be solved. Within this work, we present a generic strategy that combines the advantages of both concepts by evaluating numerical fluxes in a moving reference frame and by making use of a conservative cell-agglomeration strategy. We study the performance of this strategy in combination with backward difference formulas and explicit Runge-Kutta schemes in the context of the scalar transport equation, the Burgers equation, and the heat equation. Our results indicate that higher-order spatial and temporal convergence rates can be achieved without increasing the size of the systems to be solved.

The paper describes the main features of an automatic, three-dimensional Cartesian mesher specifically designed for compressible inviscid/viscous flow solvers based on an immersed boundary technique. The development of a meshing tool capable of conducting non-isotropic cell refinements is a very tiresome task. The major difficulty is to devise, at the pre-design phase, a light but flexible data management scheme, which minimizes memory and CPU resources. In particular, the embedded geometry has to be detected by means of a fast and robust tagging procedure. Cells in proximity of the wall have to be refined in a proper way to adequately resolve large flow gradients. Smooth variation of mesh density between differently refined zones has to be guaranteed to increase the flow solver robustness. A procedure to obtain accurate data on the geometry surfaces should also be foreseen. Here, a robust algorithm is developed to reconstruct a surface triangulation starting from the intersection points between volume cells and the geometry surfaces. The paper attempts to address all the above issues to help readers design their own tools and to suggest a way forward.

In this paper, conserving time-stepping algorithms for frictionless and full stick friction dynamic contact problems are presented. These time integration algorithms have been designed to preserve, at the discrete level, key properties satisfied at the continuum level. Energy-preserving and energy-momentum-preserving algorithms, for frictionless and full stick friction dynamic contact problems respectively, have been designed and implemented within the framework of the direct elimination method, avoiding the drawbacks linked to the use of penalty-based or Lagrange multiplier methods. An assessment of the performance of the resulting formulation is shown in a number of selected and representative numerical examples, under full stick friction and slip frictionless contact conditions. Conservation of the key discrete properties exhibited by the time-stepping algorithms is shown.

This paper presents a high fidelity homogenization method for periodically layered composite structures that accounts for plasticity in the matrix material and quasi-brittle damage in the reinforcing layers, combined with strong geometrical nonlinearities. A set of deliberately chosen internal kinematic variables results in a rigorous representation of the kinematics of the 2 constituents, which in turn allows for complex constitutive laws per constituent to be used directly in the formulation. The model accounts for hyper-elastoplastic behavior in the matrix phase and hyper-elastic behavior in the reinforcement as well as for the bending stiffness of the reinforcement layers. In addition to previously proposed models, the present method includes Lemaitre-type damage for the reinforcement, making it applicable to a wider range of engineering applications. The capability of the proposed method in representing the combined effect of plasticity, damage, and buckling at the microlevel within a homogenized setting is demonstrated by means of direct comparisons to a reference discrete model.

A methodology is proposed for the efficient solution of probabilistic nonconvex constrained optimization problems with uncertain parameters. Statistical properties of the underlying stochastic generator are characterized from an initial statistical sample of function evaluations. A diffusion manifold over the initial set of data points is first identified and an associated basis computed. The joint probability density function of this initial set is estimated using a kernel density model, and an Itô stochastic differential equation (ISDE) is constructed with this model as its invariant measure. This ISDE is adapted to fluctuate around the manifold, yielding additional joint realizations of the uncertain parameters, design variables, and function values, which are obtained as solutions of the ISDE. The expectations in the objective function and constraints are then accurately evaluated without performing additional function evaluations. The methodology brings together novel ideas from manifold learning and stochastic Hamiltonian dynamics to tackle an outstanding challenge in stochastic optimization. Three examples are presented to highlight different aspects of the proposed methodology.

Laminated piezocomposite energy harvesters (LAPEHs) are multilayer arrangements of piezoelectric and nonpiezoelectric materials. Multiple materials, multiple physics, and dynamic analysis need to be considered in their design. Usually these devices are designed for harmonic excitation; however, they are subjected to other types of excitations as well. Thus, a novel topology optimization formulation is developed for designing LAPEHs that considers a combination of harmonic and transient optimization problems, with the aim of designing so-called “multi-entry” devices in which the power generated is the same for different types of excitation. LAPEHs are modeled by the finite element method, and the material model used for the piezoelectric layer is based on a penalization and polarization model, which controls the material distribution and the corresponding polarization. To optimize the RLC circuit, a novel linear interpolation model of the coupled electrical impedance is also introduced to consider different magnitudes of the coupled impedance. The topology optimization problem seeks to maximize the active power generated by the LAPEH at its RLC circuit, to minimize its response time measured as the slope of the power-versus-time curve, and to maximize its stiffness. Numerical examples are shown to illustrate the potential of the method.

The conventional SPH method suffers from low accuracy originating from particle inconsistency. The finite particle method (FPM) is an improved SPH method with higher-order accuracy. However, the numerical accuracy and stability of FPM depend on the uniformity of the particle distribution. With disordered particles, the conventional FPM may suffer from an ill-conditioned corrective matrix, and it is difficult to conduct long-time simulations. A popular and robust particle regularization scheme is the so-called particle shifting technology (PST), which can effectively enhance the accuracy and stability of particle methods. In the context of FPM, PST is analyzed and discussed, and a modified PST (MPST) is proposed. The modified PST saves a great amount of computational cost with respect to the conventional PST and achieves better accuracy and stability. Finally, a coupled FPM, combining MPST and δ-SPH, is developed to simulate a series of viscous flows. The numerical results indicate that MPST is effective in improving the accuracy, stability, and efficiency of PST, and the coupled FPM is shown to be robust for simulating viscous flows, with higher accuracy and stability.

A novel approach, referred to as the homotopy stochastic finite element method, is proposed to solve the eigenvalue problem of a structure associated with some amount of uncertainty based on the homotopy analysis method. In this approach, an infinite multivariate series of the involved random variables is proposed to express the random eigenvalue or even a random eigenvector. The coefficients of the multivariate series are determined using the homotopy analysis method. The convergence domain of the derived series is greatly expanded compared with the Taylor series due to the use of the auxiliary parameter *h*. Therefore, the proposed method is not limited to random parameters with small fluctuation. However, in practice, only single-variable and double-variable approximations are employed to simplify the calculation. The numerical examples show that, with a suitable choice of the auxiliary parameter *h*, the suggested approximations can produce very accurate results and require reduced or similar computational efforts compared with the existing methods.
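The role of the auxiliary parameter *h* in enlarging the convergence domain can be seen on a classical scalar toy problem rather than the paper's stochastic eigenvalue problem. For f(x) = 1/(1+x), the homotopy series has terms u₀ = 1 and u_m = h·x·(1 + h(1+x))^{m-1}, a geometric progression with ratio 1 + h(1+x): the choice h = −1 reproduces the Taylor series (radius |x| < 1), while other choices of h converge well outside that radius. This is a hedged illustration of the mechanism only; the target function and parameter values are assumptions for the demo.

```python
def ham_partial_sum(x, h, terms):
    # Homotopy series for f(x) = 1/(1+x) with auxiliary parameter h:
    # u0 = 1, u_m = h*x*(1 + h*(1+x))**(m-1); h = -1 recovers the Taylor series.
    r = 1.0 + h * (1.0 + x)
    s, t = 1.0, h * x
    for _ in range(terms):
        s += t
        t *= r
    return s

x = 2.0                                # well outside the Taylor radius |x| < 1
exact = 1.0 / (1.0 + x)
taylor = ham_partial_sum(x, -1.0, 30)  # ratio -2: diverges
ham = ham_partial_sum(x, -0.5, 30)     # ratio -0.5: converges to 1/3
print(abs(ham - exact), abs(taylor - exact))
```

The convergence condition |1 + h(1+x)| < 1 makes explicit how tuning *h* trades per-term decay against the admissible range of the random parameter.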

Modelling the entire ductile fracture process remains a challenge. On the one hand, continuous damage models succeed in capturing the initial diffuse damage stage but are not able to represent discontinuities or cracks. On the other hand, discontinuous methods, such as cohesive zones, which model the crack propagation behaviour, are suited to represent the localised damaging process. However, they are unable to represent diffuse damage. Moreover, most of the cohesive models do not capture the triaxiality effect. In this paper, the advantages of the two approaches are combined in a single damage-to-crack transition framework. In a small deformation setting, a nonlocal elastic damage model is associated with a cohesive model in a discontinuous Galerkin finite element framework. A cohesive band model is used to naturally introduce a triaxiality-dependent behaviour inside the cohesive law. Practically, a numerical thickness is introduced to recover a 3D state, which is mandatory to incorporate the in-plane stretch effects. This thickness is evaluated to ensure the energy consistency of the method and is not a new numerical parameter. The traction-separation law is then built from the underlying damage model. The method is numerically shown to capture the effect of stress triaxiality on crack initiation and propagation.

Fictitious domain methods are attractive for shape optimization applications, since they do not require deformed or regenerated meshes. One such recently developed method is the CutFEM approach, which allows crisp boundary representations and for which uniformly well-conditioned system matrices can be guaranteed. Here, we investigate the use of the CutFEM approach for acoustic shape optimization, using as test problem the design of an acoustic horn for favorable impedance-matching properties. The CutFEM approach is used to solve the Helmholtz equation, and the geometry of the horn is implicitly described by a level-set function. To promote smooth algorithmic updates of the geometry, we propose to use the nodal values of the Laplacian of the level-set function as design variables. This strategy also improves the algorithm's convergence rate, counteracts mesh dependence, and, in combination with Tikhonov regularization, controls small details in the optimized designs. An advantage of the proposed method is that the exact derivatives of the discrete objective function can be expressed as boundary integrals, as opposed to traditional methods based on mesh deformations. The resulting horns possess excellent impedance-matching properties and exhibit surprising subwavelength structures, not previously seen, which are possible to capture thanks to the fixed-mesh approach.

Several cases of nonlinear wave propagation are studied numerically in two dimensions within the framework of potential flow. The Laplace equation is solved with the harmonic polynomial cell (HPC) method, which is a field method with high-order accuracy. In the HPC method, the computational domain is divided into overlapping cells. Within each cell, the velocity potential is represented by a sum of harmonic polynomials. Two different methods denoted as immersed boundary (IB) and multigrid (MG) are used to track the free surface. The former treats the free surface as an IB in a fixed Cartesian background grid, while the latter uses a free-surface fitted grid that overlaps with a Cartesian background grid. The simulated cases include several nonlinear wave mechanisms, such as high steepness and shallow-water effects. For one of the cases, a numerical scheme to suppress local wave breaking is introduced. Such a scheme can serve as a practical means to ensure numerical stability in simulations where local breaking is not significant for the result. For all the considered cases, both the IB and MG methods generally give satisfactory agreement with known reference results. Although the two free-surface tracking methods mostly have similar performance, some differences between them are pointed out. These include aspects related to the modeling of particular physical problems as well as their computational efficiency when combined with the HPC method.

The finite analytic method (FAM) is developed to solve 2D steady fluid flows in heterogeneous porous media with full tensor permeability on unstructured grids. The proposed FAM is constructed upon the power-law analytic nodal solution in an angular domain of arbitrary shape. When approaching the grid node joining the subdomains, 3 different flow patterns may exist: power-law flow, linear flow, or stagnant flow. Based on the nodal analytic solution, the triangle-based FAM is proposed. Numerical examples show that the proposed numerical scheme converges much more quickly under cell refinement than traditional methods, typically the weighted harmonic mean method. In practical applications, the grid refinement parameter *n* = 2 or *n* = 3 is recommended, and the relative error of the calculated equivalent permeability will be below 4%, independent of the heterogeneity. In contrast, when using the traditional numerical scheme, the refinement ratio for the grid cell needs to increase dramatically to get an accurate result, especially for strongly heterogeneous porous media.

Proper generalized decomposition (PGD) is often used for multiquery and fast-response simulations. It is a powerful tool alleviating the curse of dimensionality affecting multiparametric partial differential equations. Most implementations of PGD are intrusive extensions based on in-house developed FE solvers. In this work, we propose a nonintrusive PGD scheme using off-the-shelf FE codes (such as certified commercial software) as an external solver. The scheme is implemented and monitored by in-house flow-control codes. A typical implementation is provided with downloadable codes. Moreover, a novel parametric separation strategy for the PGD resolution is presented. The parametric space is split into two- or three-dimensional subspaces, to allow PGD technique solving problems with constrained parametric spaces, achieving higher convergence ratio. Numerical examples are provided. In particular, a practical example in biomechanics is included, with potential application to patient-specific simulation.

An efficient method for generating the mass matrix inverse of structural dynamic problems is presented, which can be tailored to improve the accuracy of target frequency ranges and/or wave contents. The present method bypasses the biorthogonal construction of a kernel inverse mass matrix, which requires special procedures for boundary conditions and free edges or surfaces, and instead constructs the free-free inverse mass matrix using the standard FEM procedure. The various boundary conditions are realized by the method of localized Lagrange multipliers. In particular, the present paper constructs the kernel inverse matrix by using the standard FEM elemental mass matrices. It is shown that the accuracy of the present inverse mass matrix is almost identical to that of a conventional consistent mass matrix or a combination of lumped and consistent mass matrices. Numerical experiments with the proposed inverse mass matrix are conducted to validate its effectiveness when applied to the vibration analysis of bars, beams, and plane stress problems.
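For context, the two classical mass matrices against which the proposed inverse is benchmarked are easy to assemble for a 1D bar with linear elements. The sketch below is generic FEM bookkeeping with assumed unit material data, not the paper's localized-Lagrange-multiplier construction: it shows consistent assembly, row-sum lumping (whose inverse is trivially the reciprocal diagonal), and that both preserve the total mass ρAL.

```python
def bar_mass_matrices(n_el, rho=1.0, area=1.0, length=1.0):
    """Assemble the consistent and row-sum-lumped mass matrices for a uniform
    1D bar with linear elements. Element matrix: (rho*A*le/6) * [[2,1],[1,2]]."""
    le = length / n_el
    n = n_el + 1
    M = [[0.0] * n for _ in range(n)]
    for e in range(n_el):
        me = rho * area * le / 6.0
        for i, j, v in ((e, e, 2), (e, e + 1, 1), (e + 1, e, 1), (e + 1, e + 1, 2)):
            M[i][j] += me * v
    Ml = [[0.0] * n for _ in range(n)]
    for i in range(n):
        Ml[i][i] = sum(M[i])  # row-sum lumping keeps the total mass
    return M, Ml

M, Ml = bar_mass_matrices(4)
total = sum(sum(row) for row in M)
lumped_total = sum(Ml[i][i] for i in range(len(Ml)))
print(total, lumped_total)  # both equal rho*A*L = 1.0
# the lumped inverse is immediate, which is what makes explicit dynamics cheap:
Ml_inv_diag = [1.0 / Ml[i][i] for i in range(len(Ml))]
```

The consistent matrix, by contrast, is banded and its inverse is dense, which is precisely the difficulty that sparse inverse-mass-matrix constructions such as the one in this paper address.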

Gradient-dependent plasticity can be used to achieve mesh-objective results upon loss of well-posedness of the initial/boundary value problem because of the introduction of strain softening, non-associated flow, and geometric nonlinearity. A prominent class of gradient plasticity models considers a dependence of the yield strength on the Laplacian of the hardening parameter, usually an invariant of the plastic strain tensor. This inclusion causes the consistency condition to become a partial differential equation, in addition to the momentum balance. At the internal moving boundary, one has to impose appropriate boundary conditions on the hardening parameter or, equivalently, on the plastic multiplier. This internal boundary condition can be enforced without tracking the elastic-plastic boundary by requiring *C*^{1}-continuity with respect to the plastic multiplier. In this contribution, this continuity has been achieved by using nonuniform rational B-splines as shape functions both for the plastic multiplier and for the displacements. One advantage of this isogeometric analysis approach is that the displacements can be interpolated one order higher, making it consistent with the interpolation of the plastic multiplier. This is different from the approaches that have been exploited previously. The regularising effect of gradient plasticity is shown for 1- and 2-dimensional boundary value problems.

This paper presents a level set–based topology optimization method for the design of an acoustic metamaterial with negative bulk modulus in an acoustic-elastic coupled system. To represent the coupled system, we propose a two-phase material model in which the solid and acoustic phases are mixed. This model allows the coupled system to be represented uniformly and avoids the need to impose the coupling conditions typically required in analyses of acoustic-elastic coupled systems. The effective bulk modulus of the acoustic metamaterial is represented using an S-parameter–based method and the optimization problem is formulated as a level set–based topology optimization. The topological derivatives for the two-phase material model are obtained using the relationship between the topological and shape derivatives. Numerical examples demonstrate the validity of the proposed two-phase material model and topological derivatives, and an optimal unit cell design for an acoustic metamaterial with negative bulk modulus is obtained at a targeted frequency.

A new method of topology optimization is introduced in which a continuous material field is combined with adaptive mesh refinement. Using a continuous material field with different analysis and design meshes allows the method to produce optimal designs that are free of numerical artifacts like checkerboard patterns and material islands. Adaptive mesh refinement is then applied to both meshes to precisely locate the optimal boundary of the final structure. A Helmholtz-type density filter is used to prevent the appearance of small topological features as the mesh refinement proceeds. Results are presented for several test problems, including problems with geometrically complex domain boundaries.
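The Helmholtz-type density filter mentioned in the abstract above can be sketched in a few lines. The following is a minimal 1D illustration (not the authors' implementation), assuming a uniform grid, homogeneous Neumann boundary conditions, and a filter radius `r` that sets the minimum feature size:

```python
import numpy as np

def helmholtz_filter_1d(rho, h, r):
    """Helmholtz-type density filter: solve (I - r^2 d2/dx2) rho_f = rho.

    Finite-difference discretization on a uniform 1D grid with spacing h,
    homogeneous Neumann (zero-flux) conditions at both ends.
    """
    n = len(rho)
    c = (r / h) ** 2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * c
        if i > 0:
            A[i, i - 1] = -c
        if i < n - 1:
            A[i, i + 1] = -c
    # Neumann BC: fold the ghost-node contribution back into the diagonal
    A[0, 0] = 1.0 + c
    A[-1, -1] = 1.0 + c
    return np.linalg.solve(A, rho)

# A spiky density field: features smaller than ~r are smeared out,
# which is how the filter suppresses small topological features.
rho = np.zeros(50)
rho[25] = 1.0
rho_f = helmholtz_filter_1d(rho, h=1.0, r=3.0)
```

Because the discrete operator is symmetric with unit row sums, the filter conserves total material while bounding the gradient of the filtered field, which is what prevents checkerboards and islands as refinement proceeds.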

We explore diffuse formulations of Nitsche's method for consistently imposing Dirichlet boundary conditions on phase-field approximations of sharp domains. Leveraging the properties of the phase-field gradient, we derive the variational formulation of the diffuse Nitsche method by transferring all integrals associated with the Dirichlet boundary from a geometrically sharp surface format in the standard Nitsche method to a geometrically diffuse volumetric format. We also derive conditions for the stability of the discrete system and formulate a diffuse local eigenvalue problem, from which the stabilization parameter can be estimated automatically in each element. We advertise metastable phase-field solutions of the Allen-Cahn problem for transferring complex imaging data into diffuse geometric models. In particular, we discuss the use of mixed meshes, that is, an adaptively refined mesh for the phase-field in the diffuse boundary region and a uniform mesh for the representation of the physics-based solution fields. We illustrate accuracy and convergence properties of the diffuse Nitsche method and demonstrate its advantages over diffuse penalty-type methods. In the context of imaging-based analysis, we show that the diffuse Nitsche method achieves the same accuracy as the standard Nitsche method with sharp surfaces, if the inherent length scales, ie, the interface width of the phase-field, the voxel spacing, and the mesh size, are properly related. We demonstrate the flexibility of the new method by analyzing stresses in a human vertebral body.

This article presents a nonlinear solver combining regression analysis and a multiscale simulation scheme. First, the proposed method repeats microscopic analyses of a local simulation domain, extracted from the entire global domain, to statistically estimate the relation between the value of a dependent variable at a point and the values at surrounding points. This relation is called the regression function. A subsequent global analysis then reveals the behavior of the global domain quickly and at low computational cost, using only coarse-grained points and the regression function; this is accomplished with a multiscale numerical solver called the seamless-domain method. The objective of the study is to solve a nonlinear problem accurately and at low cost by combining the two techniques. We present an example problem of a nonlinear steady-state heat conduction analysis of a heterogeneous material. The proposed model, using fewer than 1000 points, generates a solution with precision similar to that of a standard finite-element solution using hundreds of thousands of nodes. To investigate the relationship between accuracy and computational time, we apply the seamless-domain method under varying conditions, such as the number of iterations of the prior analysis for statistical data learning.
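The idea of learning a regression function from repeated local analyses and then reusing it on a coarse global grid can be illustrated with a toy problem. The sketch below (an assumption-laden stand-in for the seamless-domain method, not the authors' code) uses 1D steady heat conduction in a homogeneous rod, where local solutions are linear, so the learned relation between a centre point and its neighbours should recover equal weights of 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Microscopic" stage: sample many local solutions u(x) = a*x + b of
# 1D steady heat conduction and record (neighbours -> centre) pairs.
X, y = [], []
for _ in range(200):
    a, b = rng.normal(size=2)
    u = lambda x: a * x + b
    X.append([u(-1.0), u(1.0)])   # neighbour values
    y.append(u(0.0))              # centre value
X, y = np.asarray(X), np.asarray(y)

# Least-squares fit of the regression function: centre = w . neighbours
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # expect w ~ [0.5, 0.5]

# "Macroscopic" stage: iterate the learned relation on a coarse grid
# with boundary values u(0)=0, u(1)=1; the fixed point is the exact
# linear temperature profile.
u = np.zeros(11)
u[-1] = 1.0
for _ in range(2000):
    u[1:-1] = w[0] * u[:-2] + w[1] * u[2:]
```

For a heterogeneous, nonlinear material the regression function is no longer a fixed linear stencil, which is where the statistical learning over many microscopic analyses earns its keep.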

We consider an efficient preconditioner for a boundary integral equation (BIE) formulation of the two-dimensional Stokes equations in porous media. While BIEs are well-suited for resolving the complex porous geometry, they lead to a dense linear system of equations that is computationally expensive to solve for large problems. This expense is further amplified when a significant number of iterations is required in an iterative Krylov solver such as the generalized minimal residual method (GMRES). In this paper, we apply a fast inexact direct solver, the inverse fast multipole method, as an efficient preconditioner for GMRES. This solver is based on the framework of ℋ²-matrices and uses low-rank compressions to approximate certain matrix blocks. It has a tunable accuracy *ε* and a computational cost that scales as *O*(*N*). We discuss various numerical benchmarks that validate the accuracy and confirm the efficiency of the proposed method. We demonstrate with several types of boundary conditions that the preconditioner is capable of significantly accelerating the convergence of GMRES when compared to a simple block-diagonal preconditioner, especially for pipe flow problems involving many pores.
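The mechanics of using an inexact direct solver as a GMRES preconditioner can be sketched with standard SciPy tools. The example below is only an analogy: it substitutes an incomplete LU factorization for the inverse fast multipole method and a sparse Poisson matrix for the dense BIE system, but the wiring of the approximate solve into GMRES via a `LinearOperator` is the same:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in system: 2D Poisson matrix (the paper's BIE systems are dense,
# but the preconditioning mechanics in GMRES are identical).
n = 40
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], (n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
b = np.ones(A.shape[0])

# Inexact direct solver as preconditioner: here an incomplete LU
# factorization plays the role the inverse fast multipole method plays
# in the paper (an approximate inverse applied at each iteration).
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)

iters = {"plain": 0, "ilu": 0}
def counter(key):
    def cb(_resnorm):
        iters[key] += 1
    return cb

x0, _ = spla.gmres(A, b, callback=counter("plain"), callback_type="pr_norm")
x1, _ = spla.gmres(A, b, M=M, callback=counter("ilu"), callback_type="pr_norm")
```

The iteration counts collected by the callbacks show the point of the paper in miniature: a good inexact inverse collapses the number of Krylov iterations at the price of a setup cost that is amortized over the solve.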

The Koiter-Newton method has recently demonstrated superior performance for nonlinear analyses of structures compared to traditional path-following strategies. The method follows a predictor-corrector scheme to trace the entire equilibrium path. During a predictor step, a reduced-order model is constructed based on Koiter's asymptotic postbuckling theory, which is followed by a Newton iteration in the corrector phase to regain the equilibrium of forces. In this manuscript, we introduce a robust mixed solid-shell formulation to further enhance the efficiency of stability analyses in various aspects. We show that a Hellinger-Reissner variational formulation facilitates the reduced-order model construction while omitting an expensive evaluation of the inherent fourth-order derivatives of the strain energy. We demonstrate that extremely large step sizes with a reasonable out-of-balance residual can be obtained, with a substantial impact on the total number of steps needed to trace the complete equilibrium path. More importantly, the numerical effort of the corrector phase involving a Newton iteration of the full-order model is drastically reduced, thus revealing the true strength of the proposed formulation. We study a number of problems from engineering and compare the results to the conventional approach in order to highlight the gain in numerical efficiency for stability problems.

No abstract is available for this article.

This paper presents an adaption of periodic boundary conditions (BC), termed tessellation BC. While periodic BC restrict strain localization zones to obey the periodicity of the microstructure, the proposed tessellation BC adjust the periodicity frame to meet the localization zone. Thereby, arbitrarily developing localization zones are permitted. Still, the formulation is intrinsically insusceptible to spurious localization. Additionally, a modification of the Hough transformation is derived, which constitutes an unbiased criterion for the detection of the localization zone. The behavior of the derived formulation is demonstrated by various examples and compared with other BC. It is thereby shown that tessellation BC lead to a reasonable dependence of the effective stress on the localization direction. Furthermore, good convergence of stiffness values with increasing size of the representative volume element is shown, as well as beneficial characteristics in use with strain-softening material.

In goal-oriented adaptivity, the error in the quantity of interest is represented using the error functions of the direct and adjoint problems. This error representation is subsequently bounded above by element-wise error indicators that are used to drive optimal refinements. In this work, we propose to replace, in the error representation, the adjoint problem by an alternative operator. The main advantage of the proposed approach is that, when judiciously selecting such alternative operator, the corresponding upper bound of the error representation becomes sharper, leading to a more efficient goal-oriented adaptivity.

While the method can be applied to a variety of problems, we focus here on two- and three-dimensional (2-D and 3-D) Helmholtz problems. We show via extensive numerical experimentation that the upper bounds provided by the alternative error representations are sharper than the classical ones and lead to a more robust *p*-adaptive process. We also provide guidelines for finding operators delivering sharp error representation upper bounds. We further extend the results to a convection-dominated diffusion problem as well as to problems with discontinuous material coefficients. Finally, we consider a sonic logging-while-drilling problem to illustrate the applicability of the proposed method.

In this work, a fully explicit partitioned method for the simulation of Fluid Structure Interaction (FSI) problems is presented. The fluid domain is modelled with an explicit Particle Finite Element Method (PFEM) based on the hypothesis of weak compressibility. The Lagrangian description of the fluid is particularly effective in the simulation of FSI problems with free surface flows and large structural displacements, since the fluid boundaries are automatically defined by the position of the mesh nodes. A distinctive feature of the proposed FSI strategy is that the solid domain is modelled using the explicit integration FEM in an off-the-shelf commercial software (Abaqus/Explicit). This makes it possible to perform simulations with a complete and advanced description of the structural domain, including advanced structural material models and contact. The structure-to-fluid coupling algorithm is based on a technique derived from Domain Decomposition Methods, namely, the Gravouil and Combescure algorithm. The method allows for arbitrarily large interface displacements using different time incrementations and nonconforming meshes in the different domains, which is an essential feature for the efficiency of an explicit solver involving different materials. The resulting fully explicit and fully Lagrangian finite element approach is particularly appealing for the possibility of its efficient application to a large variety of highly non-linear engineering problems.

To increase the robustness of a Padé-based approximation of parametric solutions to finite element problems, an a priori estimate of the poles is proposed. The resulting original approach is shown to allow for a straightforward, efficient, subsequent Padé-based expansion of the solution vector components, overcoming some of the current convergence and robustness limitations. In particular, this enables the intervals of approximation to be chosen a priori in direct connection with a given choice of Padé approximants. The choice of these approximants, as shown in the present work, is theoretically supported by the Montessus de Ballore theorem, concerning the convergence of a series of approximants with fixed denominator degrees. Key features and originality of the proposed approach are (1) a component-wise expansion that makes it possible to specifically target subsets of the solution field and (2) the a priori, simultaneous choice of the Padé approximants and their associated interval of convergence for an effective and more robust approximation. An academic acoustic case study, a structural-acoustic application, and a larger acoustic problem are presented to demonstrate the potential of the proposed approach.
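The advantage of Padé approximants with a fixed denominator degree, which the Montessus de Ballore theorem addresses, can be seen in a tiny example. The sketch below (a generic illustration, not the paper's frequency-sweep setting) approximates *f*(*x*) = 1/(2 − *x*), whose Taylor series about 0 only converges for |*x*| < 2, and shows that a Padé approximant with denominator degree 1 captures the pole and stays accurate near it:

```python
from scipy.interpolate import pade

# Taylor coefficients of f(x) = 1/(2 - x) about x = 0: a_k = 1 / 2**(k+1).
a = [1.0 / 2 ** (k + 1) for k in range(6)]

# [4/1] Pade approximant: numerator degree 4, denominator degree 1.
# The fixed denominator degree lets the approximant place a pole a priori.
p, q = pade(a, 1)

f = lambda x: 1.0 / (2.0 - x)
x = 1.9                          # near the pole at x = 2

taylor = sum(c * x ** k for k, c in enumerate(a))   # truncated series
approx = p(x) / q(x)                                 # Pade evaluation
```

Here the truncated Taylor series is badly wrong near the pole while the Padé approximant reproduces *f* essentially exactly, since *f* itself is rational of type [0/1]; this is the behaviour the a priori pole estimate exploits when choosing approximants and intervals together.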

In this paper, a new numerical method, the element differential method (EDM), is proposed for solving general thermal-mechanical problems. The key point of the method is the direct differentiation of the shape functions of the Lagrange isoparametric elements used to characterize the geometry and physical variables. A set of analytical expressions for computing the first- and second-order partial derivatives of the shape functions with respect to global coordinates is derived. Based on these expressions, a new collocation method is proposed for establishing the system of equations, in which the equilibrium equations are collocated at nodes inside elements, and the traction equilibrium equations are collocated at interface nodes between elements and at outer surface nodes of the problem. Owing to the use of Lagrange elements, which guarantee that the variation of physical variables is consistent across all elemental nodes, EDM has higher stability than the traditional collocation method. The other main features of EDM are that no mathematical or mechanical principles are required to set up the system of equations and no integrals are involved in forming the coefficients of the system. A number of numerical examples of 2- and 3-dimensional problems are given to demonstrate the correctness and efficiency of the proposed method.
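The core ingredient of the abstract above, pointwise differentiation of isoparametric shape functions with no integration, can be sketched in 1D. The following minimal example (an illustration of the general isoparametric chain rule, not the paper's 2D/3D expressions) differentiates a quadratic Lagrange element and verifies the derivative of a quadratic field exactly:

```python
import numpy as np

def shape(xi):
    """Quadratic (3-node) Lagrange shape functions on xi in [-1, 1]
    and their first derivatives with respect to xi."""
    N = np.array([0.5 * xi * (xi - 1.0), 1.0 - xi ** 2, 0.5 * xi * (xi + 1.0)])
    dN = np.array([xi - 0.5, -2.0 * xi, xi + 0.5])
    return N, dN

def ddx(xi, x_nodes, u_nodes):
    """du/dx at a point inside the element via the chain rule.

    The Jacobian J = dx/dxi comes from differentiating the isoparametric
    map itself, so the derivative is evaluated pointwise, with no
    integrals and no variational principle involved.
    """
    _, dN = shape(xi)
    J = dN @ x_nodes               # dx/dxi
    return (dN @ u_nodes) / J      # (du/dxi) * (dxi/dx)

# Verify on u(x) = x^2 over the element x in [2, 4]: du/dx must equal 2x.
x_nodes = np.array([2.0, 3.0, 4.0])
u_nodes = x_nodes ** 2
```

In the actual method, the analogous second-order derivative expressions are what allow the strong-form equilibrium equations to be collocated directly at nodes inside the elements.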

The conventional approach to constructing quadratic elements for an *n*-sided polygon yields *n*(*n*+1)/2 shape functions, which increases the computational effort. It is well known that serendipity elements based on the isoparametric formulation suffer from mesh distortion. Floater and Lai proposed a systematic way to construct higher-order approximations over arbitrary polygons using generalized barycentric and triangular coordinates. This approach ensures 2*n* shape functions with nodes only on the boundary of the polygon. In this paper, we extend the polygonal splines approach to 3 dimensions and construct serendipity shape functions over hexahedra and convex polyhedra. This is done by expressing the shape functions using the barycentric coordinates and the local tetrahedral coordinates. The quadratic shape functions possess the Kronecker delta property and satisfy constant, linear, and quadratic precision. The accuracy and convergence properties of the quadratic serendipity elements are demonstrated with a series of standard patch tests. The numerical results show that the quadratic serendipity elements pass the patch test, yield optimal convergence rates, and can tolerate *extreme* mesh distortion.
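The properties claimed above, Kronecker delta and quadratic precision with nodes only on the boundary, can be checked concretely in the simplest serendipity setting. The sketch below uses the classical 8-node quadrilateral on the reference square (the 2D textbook analogue, not the paper's polyhedral construction) and verifies those properties numerically:

```python
import numpy as np

# Node order: 4 corners then 4 mid-side nodes of the reference square.
NODES = np.array([(-1, -1), (1, -1), (1, 1), (-1, 1),
                  (0, -1), (1, 0), (0, 1), (-1, 0)], dtype=float)

def q8_shape(xi, eta):
    """8-node quadratic serendipity shape functions on [-1, 1]^2."""
    N = np.empty(8)
    for i, (a, b) in enumerate(NODES):
        if a != 0.0 and b != 0.0:                 # corner node
            N[i] = 0.25 * (1 + a * xi) * (1 + b * eta) * (a * xi + b * eta - 1)
        elif a == 0.0:                            # mid-side node with xi_i = 0
            N[i] = 0.5 * (1 - xi ** 2) * (1 + b * eta)
        else:                                     # mid-side node with eta_i = 0
            N[i] = 0.5 * (1 - eta ** 2) * (1 + a * xi)
    return N

# Quadratic precision: interpolating nodal values of xi^2 + xi*eta must
# reproduce the function exactly at an interior point.
vals = NODES[:, 0] ** 2 + NODES[:, 0] * NODES[:, 1]
N = q8_shape(0.3, -0.7)
```

The 2*n* = 8 boundary-node count here is exactly the economy the polygonal-spline construction generalizes to arbitrary polygons and, in this paper, to polyhedra.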

There is increasing interest in the material point method (MPM) as a means of modelling solid mechanics problems in which very large deformations occur, e.g. in the study of landslides and metal forming; however, some aspects vital to the wider use of the method have to date been ignored, in particular methods for imposing essential boundary conditions in the case where the problem domain boundary does not coincide with the background grid element edges. In this paper, we adapt a simple procedure, originally devised for standard finite elements, for the imposition of essential boundary conditions in the MPM, expanding its capabilities to model boundaries of any inclination. To the authors' knowledge, this is the first time that a method has been proposed that allows arbitrary Dirichlet boundary conditions (zero and nonzero values at any inclination) to be imposed in the MPM. The method presented in this paper differs from other MPM boundary approximation approaches in that (1) the boundaries are independent of the background mesh, (2) artificially stiff regions of material points are avoided, and (3) the method does not rely on *mirroring* of the problem domain to impose symmetry. The main contribution of this work is equally applicable to standard finite elements and the MPM.

Topology optimization using stress constraints and considering uncertainties is a serious challenge, since a reliability problem has to be solved for each stress constraint, for each element in the mesh. In this paper, an alternative way of solving this problem is used, where uncertainty quantification is performed through the first-order perturbation approach, with proper validation by Monte Carlo simulation. Uncertainties are considered in the loading magnitude and direction. The minimum volume problem subjected to local stress constraints is formulated as a robust problem, where the stress constraints are written as a weighted average between their expected value and standard deviation. The augmented Lagrangian method is used for handling the large set of local stress constraints, whereas a gradient-based algorithm is used for handling the bounding constraints. It is shown that even in the presence of small uncertainties in loading direction, different topologies are obtained when compared to a deterministic approach. The effect of correlation between uncertainties in loading magnitude and direction on optimal topologies is also studied, where the main observed result is loss of symmetry in optimal topologies.
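The robust stress measure described above, a weighted combination of the expected value and the standard deviation obtained by first-order perturbation, can be illustrated on a scalar toy response. In the sketch below the closed-form `stress(theta)` is a hypothetical stand-in for a finite element stress evaluation under a random load angle, and the perturbation estimates are validated by Monte Carlo simulation, mirroring the validation strategy of the paper:

```python
import numpy as np

def stress(theta):
    """Hypothetical stress response at a point as a function of the
    load angle theta (stand-in for a finite element solve)."""
    return 100.0 * np.cos(theta) + 40.0 * np.sin(theta)

theta0, sig_theta = 0.3, 0.02    # mean load angle and its std deviation
k = 3.0                          # weight on the standard deviation term

# First-order perturbation: linearize the response about the mean input.
h = 1e-6
dstress = (stress(theta0 + h) - stress(theta0 - h)) / (2 * h)
mean_p = stress(theta0)              # E[stress] to first order
std_p = abs(dstress) * sig_theta     # Std[stress] to first order
g_robust = mean_p + k * std_p        # robust stress measure E + k * Std

# Monte Carlo validation of the perturbation estimates.
rng = np.random.default_rng(1)
samples = stress(rng.normal(theta0, sig_theta, 200_000))
```

Writing the constraint on `g_robust` rather than on `mean_p` is what shifts the optimum away from the deterministic topology even for small input uncertainties, at the cost of one sensitivity evaluation per random variable instead of a full reliability analysis per constraint.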