In this talk, we present a comparative analysis of reliable quadrature techniques for approximating fractional operators, emphasizing error estimates and their effectiveness in preconditioning the Riesz operator. This operator, essential in fractional models such as anomalous diffusion, depends on a parameter that lies between 1 and 2. When the parameter is close to 2, several robust preconditioning methods with linear computational cost exist. However, as the parameter approaches 1, achieving efficient preconditioners with linear complexity becomes more challenging. Previous work approximated the Riesz operator as a fractional power of a discretized Laplacian using the Gauss-Jacobi rule. Recent studies have enhanced this by using advanced quadrature rules, such as Gauss-Laguerre and sinc quadratures, which provide faster convergence. By appropriately selecting the number of quadrature points, both methods generate preconditioners based on sums of a few shifted Laplacian inverses, ensuring high efficiency and accuracy. Numerical tests show that the sinc-based preconditioner is more versatile than the Gauss-Laguerre one, and that both outperform the Gauss-Jacobi approach.
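To make the mechanism concrete, the following minimal sketch (illustrative parameters, not the speakers' implementation) shows how a sinc quadrature turns a fractional power of a symmetric positive definite matrix into a short sum of shifted inverses; in a preconditioner, only a few such terms are kept and each shifted solve is delegated to a fast solver, e.g., multigrid.

```python
import numpy as np

def sinc_frac_inverse(A, beta, N=100, h=0.3):
    """Approximate A**(-beta), 0 < beta < 1, for symmetric positive
    definite A by sinc (truncated trapezoidal) quadrature of
        A**(-beta) = sin(pi*beta)/pi * int_0^inf s**(-beta) (A + s I)^(-1) ds
    after the substitution s = exp(t): a weighted sum of 2N+1 shifted
    inverses (A + exp(k h) I)^(-1)."""
    I = np.eye(A.shape[0])
    F = np.zeros_like(A, dtype=float)
    for k in range(-N, N + 1):
        t = k * h
        F += np.exp((1.0 - beta) * t) * np.linalg.solve(A + np.exp(t) * I, I)
    return (np.sin(np.pi * beta) * h / np.pi) * F

# Check on a 1D discrete Laplacian; for a Riesz operator of order
# alpha in (1, 2) the relevant exponent is beta = alpha/2 in (1/2, 1).
n = 40
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
w, V = np.linalg.eigh(L)
print(np.linalg.norm(sinc_frac_inverse(L, 0.75) - V @ np.diag(w**-0.75) @ V.T, 2))
```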
Reduced order modeling (ROM) techniques are increasingly recognized as essential tools for accelerating simulations in a wide variety of fields. Meanwhile, isogeometric analysis (IGA) has emerged as a powerful technique for discretizing complex problems in engineering and applied sciences, especially in the context of CAD-based simulations. IGA provides a seamless integration of geometry and analysis, enabling high-fidelity representations of complex geometries and efficient simulations of physical phenomena.
This talk explores their synergies, focusing on applications ranging from unfitted discretizations, in particular shells, to metamaterials. The combination of IGA and ROM techniques offers a promising framework to address the challenges associated with high-dimensional parameter spaces and complex geometries. By leveraging the strengths of both approaches, we can achieve significant computational savings while maintaining the accuracy and fidelity of the simulations.
A key result of this synergy is the enhanced ability to perform shape optimization and design exploration more efficiently. The integration of ROM with IGA allows for the rapid evaluation of design variations and sensitivities, enabling a wider design space to be explored and optimal configurations to be identified more effectively.
Many numerical approximations suffer from degradation due to poor mesh adaptation or misalignment. In isogeometric analysis, mesh adaptation is typically performed in two distinct ways. The first is h-refinement, which introduces additional degrees of freedom using dedicated spline constructions such as (truncated) hierarchical B-splines (THB) or locally refined (LR) splines. The second is r-refinement, which redistributes the mesh through reparameterization techniques or the composition of mappings, without increasing the number of degrees of freedom. Although h-refinement is widely used in the literature for its ability to handle complex phenomena, an important question remains: is it always the superior approach? Surprisingly, benchmark comparisons of performance and accuracy challenge this choice. In fact, it turns out that h-refinement alone is often insufficient to address alignment issues, as it primarily conforms to the geometry rather than the underlying problem data. This limitation becomes particularly evident in problems such as the Grad-Shafranov equation, which involves anisotropic diffusion in plasma physics.
Joint work with Angelos Mantzaflaris.
The use of spline Gauss quadrature rules [1] for solving boundary value problems (BVPs) using the Nyström method will be discussed. When solving BVPs, one converts the corresponding partial differential equation inside a domain into a Fredholm integral equation of the second kind on the boundary, in the spirit of the boundary integral equation (BIE) approach. The Fredholm integral equation is then solved using the Nyström method, which involves the use of a particular quadrature rule, thus converting the BIE problem into a linear system. This concept is demonstrated on the 2D Laplace problem over domains with smooth boundaries as well as domains containing corners [2]. The proposed approach is validated on benchmark examples and the results indicate that, for a fixed number of quadrature points (i.e., the same computational effort), the spline Gauss quadratures return an approximation that is one to two orders of magnitude more accurate than the solution obtained with traditional polynomial Gauss rules.
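As a minimal illustration of the Nyström mechanism, here is a hedged sketch with a plain Gauss-Legendre rule standing in for the spline Gauss rules of the talk, on an illustrative smooth kernel:

```python
import numpy as np

# Nystrom sketch for u(x) - int_0^1 k(x, y) u(y) dy = f(x): collocate at
# the quadrature nodes, so the integral operator becomes a matrix.
k = lambda x, y: np.exp(-np.abs(x - y))      # illustrative smooth kernel
f = lambda x: np.ones_like(x)                # illustrative right-hand side

n = 16
y, w = np.polynomial.legendre.leggauss(n)    # nodes/weights on [-1, 1]
y, w = 0.5 * (y + 1.0), 0.5 * w              # map to [0, 1]

A = np.eye(n) - k(y[:, None], y[None, :]) * w[None, :]
u = np.linalg.solve(A, f(y))                 # solution values at the nodes

# The Nystrom method naturally extends the discrete solution continuously:
u_at = lambda x: f(x) + (k(x[:, None], y[None, :]) * w) @ u
print(u_at(np.linspace(0.0, 1.0, 5)))
```

In the setting of the talk, the polynomial Gauss nodes and weights above are replaced by spline Gauss quadrature data.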
Adopting a space-time method instead of a time-stepping scheme incurs a higher computational complexity unless carefully chosen algorithms are employed.
If one is interested in solving the Laplace problem on a d-dimensional cuboid, the fast-diagonalization method computes the solution in O(N^((d+1)/d)) floating-point operations, where N is the number of unknowns. The same method can be used as a preconditioner for a parametrized patch in the IGA context.
Similar methods are available for the parabolic heat equation and also for the Schrödinger equation on cuboids. The talk will give a short overview of these methods.
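A minimal 2D sketch of the fast-diagonalization idea (assuming symmetric univariate stiffness K and mass M; this toy version is dense): one univariate generalized eigendecomposition reduces the tensor-product solve to a pointwise division, and its O(n^3) cost is exactly the O(N^((d+1)/d)) count for d = 2, N = n^2.

```python
import numpy as np
from scipy.linalg import eigh

def fast_diag_2d(K, M, F):
    """Fast-diagonalization solve of (K kron M + M kron K) vec(U) = vec(F),
    i.e. of the matrix equation K U M + M U K = F for symmetric K, M.
    With K V = M V diag(lam) and V^T M V = I, the system diagonalizes."""
    lam, V = eigh(K, M)                        # generalized eigenpairs
    Ft = V.T @ F @ V                           # transform the right-hand side
    Ut = Ft / (lam[:, None] + lam[None, :])    # diagonal solve
    return V @ Ut @ V.T                        # transform back

n = 64
h = 1.0 / (n + 1)
K = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h      # 1D stiffness
M = (4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) * h / 6  # 1D mass
F = np.random.default_rng(0).standard_normal((n, n))
U = fast_diag_2d(K, M, F)
print(np.linalg.norm(K @ U @ M + M @ U @ K - F))                # ~ machine precision
```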
Stokes' flow is a classical problem in fluid dynamics. As such, it has been widely studied and several numerical methods for its solution have been developed. A modern and challenging aspect of this problem is to provide discrete solutions that respect the geometry and the physics of the continuous problem. To this end, many technologies, such as mimetic methods, have been proposed.
In this talk we exploit (high order) Whitney forms to enforce the geometrical and physical features of the continuous problem in a finite element approximation of the VVP (vorticity-velocity-pressure) formulation of Stokes' flow. To this end, we select weights as degrees of freedom for such elements, since these linear functionals are naturally associated with a geometrical subdivision of the domain. This natural duality allows many desired properties to be carried from the continuous to the discrete level. An algorithmic construction of weights, their theoretical features, and numerical results are presented to support this approach.
In this talk, we describe the relationship between the convergence behavior of Isogeometric Analysis (IGA) collocation methods and the associated error within the weighted residual framework. Specifically, we will introduce and analyze an overdetermined collocation approach.
Joint work with Maria Roberta Belardo.
In this talk we consider 3D interior and exterior Helmholtz problems, reformulated in terms of a Boundary Integral Equation (BIE). For their numerical solution, we rely on a collocation Boundary Element Method (BEM) formulated in the general framework of Isogeometric Analysis (IGA-BEM), adopting in particular conforming multi-patch discretizations. As is well known, with BEM as well as IGA-BEM the matrices of the resulting linear system are fully populated and non-symmetric, a drawback that prevents the application of this strategy to large-scale realistic problems. As a possible remedy to reduce the global complexity of the method, we propose a numerical scheme based on the hierarchical matrix (H-matrix) technique. Using a suitable admissibility condition, it starts by hierarchically partitioning the matrix into full- and low-rank blocks. The former are stored and computed in the conventional way, while the latter are approximated by the Adaptive Cross Approximation (ACA) methodology, which successfully compresses the dense matrices of the multi-patch IGA-BEM approach. Furthermore, the cost of the matrix-vector product is reduced, and this allows us to increase the overall computational efficiency of the Generalized Minimal Residual Method (GMRES), adopted for the solution of the linear system. Several numerical examples are given to demonstrate the accuracy and efficiency of the proposed methodology.
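The compression kernel admits a compact illustration; below is a hedged sketch of ACA with partial pivoting (kernel, cluster geometry, and tolerances are illustrative; only individual entries of the block are ever evaluated):

```python
import numpy as np

def aca(entry, m, n, tol=1e-10, max_rank=64):
    """Adaptive Cross Approximation with partial pivoting: approximate an
    m-by-n block as U @ V from a few of its rows and columns, where
    entry(i, j) evaluates a single matrix entry on demand."""
    U, V, used = [], [], set()
    i = 0                                          # first pivot row
    for _ in range(max_rank):
        row = np.array([entry(i, j) for j in range(n)])
        for u, v in zip(U, V):                     # subtract current approximation
            row = row - u[i] * v
        j = int(np.argmax(np.abs(row)))            # pivot column
        if abs(row[j]) < tol:
            break
        col = np.array([entry(r, j) for r in range(m)])
        for u, v in zip(U, V):
            col = col - v[j] * u
        U.append(col / row[j])
        V.append(row)
        used.add(i)
        rem = np.abs(col)
        rem[list(used)] = 0.0
        i = int(np.argmax(rem))                    # next (unused) pivot row
    return np.array(U).T, np.array(V)

# Two well-separated clusters mimic an admissible block: the rank stays small.
x, y = np.linspace(0.0, 1.0, 200), np.linspace(5.0, 6.0, 200)
U, V = aca(lambda i, j: 1.0 / abs(x[i] - y[j]), 200, 200)
A = 1.0 / np.abs(x[:, None] - y[None, :])
print(U.shape[1], np.linalg.norm(U @ V - A) / np.linalg.norm(A))
```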
Joint work with Giuseppe Alessio D'Inverno, Maria Lucia Sampoli, and Alessandra Sestini.
When locally refined B-splines (LRB) were introduced [1], we were asked, ‘Why not simply select all minimal-support B-splines (MSB) on the mesh?’ We responded that partition of unity is ensured when each mesh refinement also splits a tensor product (TP) B-spline.
Patrizi [2] has proved that linear dependence can occur with as few as six active TP B-splines for MSB, whereas at least eight TP B-splines are required for LRB. In these cases, five and seven TP B-splines, respectively, are nested within the support of a larger TP B-spline. Since LRB is a subset of MSB over the LR-mesh, studying linear dependence of MSB provides valuable insight into LRB.
Collect all TP B-splines actively participating in a linear dependency relation and consider the boundary of the union of their supports. We observe that: (1) Regardless of whether MSB or LRB is used, there is always at least one TP B-spline that does not touch the above boundary; (2) In the case of LRB, there is always at least one TP B-spline at nesting level 2, i.e., it is nested within the support of another TP B-spline, which itself is nested.
These observations can serve as useful criteria for detecting potential linear dependence in a collection of LR B-splines and can help trigger additional refinement ensuring an LRB basis.
Solving large-scale linear algebra problems arising from the discretization of partial differential equations (PDEs) is essential in many scientific and engineering applications, such as fluid dynamics, climate modeling, and structural analysis. These problems often lead to massive sparse linear systems that require significant computational power to solve efficiently. Distributed computing is crucial for handling such large-scale problems, as it enables parallel processing across multiple nodes, reducing computation time and memory constraints. Efficient algorithms, such as domain decomposition methods and iterative solvers like Krylov subspace methods, leverage distributed architectures to accelerate convergence and improve scalability. Optimizing these solvers for high-performance computing (HPC) environments ensures that complex simulations can be performed with high accuracy and efficiency. In this talk I will describe the effort that has gone into the construction of the Parallel Sparse Computational Toolkit (PSCToolkit) and how it can be used to enable the solution of large-scale linear algebra problems in a manner agnostic to the choice of discretization method.
Joint work with Pasqua D'Ambra and Salvatore Filippone.
Quadrature rules represent a fundamental aspect in several numerical applications and, in particular, in Isogeometric Analysis (IGA). Using quadrature rules designed for polynomials on triangles as element-wise quadrature rules for smooth splines is a feasible approach. However, this strategy might nullify, or at least significantly reduce, the advantage of smooth splines in IGA because the intrinsic smoothness of the spaces is ignored, resulting in an unnecessarily high computational cost.
In this talk, we identify polynomial quadrature rules on triangles that remain exact for sufficiently smooth spline spaces sharing the same degree, both on the Clough-Tocher 3-split and on the uniform Powell-Sabin 6-split. Our analysis is based on the representation of the considered macro-elements in terms of suitable simplex splines, and offers insights that can be further extended to the three-dimensional case.
Joint work with Carla Manni and Hendrik Speleers.
The mathematical foundations of adaptive methods for the numerical solution of PDEs have been widely studied for several decades. Starting from a given initial mesh, the aim is to increase the accuracy of a discrete solution by iterating the four building blocks of the so-called adaptive loop. At each refinement step, the discrete solution on the current mesh is derived (1), local contributions of some a posteriori error estimator are computed (2), a set of mesh elements is marked for refinement/coarsening (3), and the new mesh for the next iterative step is generated by refining/coarsening (at least) all marked elements (4). It should be noted, however, that in spite of such a long history of adaptivity theory, the application of adaptive methods in 3D is often very complex and expensive. Consequently, even though many research results are of significant theoretical value, they have so far not been significantly exploited in real CAD/CAE industrial processes, where adaptivity is rarely used or driven by heuristic approaches. The talk will discuss the interplay of optimality and efficiency in the context of adaptive isogeometric methods with focus on hierarchical spline constructions and related extensions.
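As a concrete instance of step (3), here is a minimal sketch of Dörfler (bulk) marking, a standard choice in this loop; the vector eta collects the local indicators produced in step (2):

```python
import numpy as np

def dorfler_mark(eta, theta=0.5):
    """Doerfler (bulk) marking: return the indices of a smallest set M of
    elements such that sum_{K in M} eta_K^2 >= theta * sum_K eta_K^2."""
    order = np.argsort(eta)[::-1]               # indicators, largest first
    cumulative = np.cumsum(eta[order] ** 2)
    k = int(np.searchsorted(cumulative, theta * cumulative[-1])) + 1
    return order[:k]

print(dorfler_mark(np.array([0.1, 0.9, 0.3, 0.5])))  # -> [1]: it alone carries >= 50%
```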
The dimension of the space of smooth trivariate polynomial splines on an unstructured tetrahedral partition typically depends on the geometry of the partition, which often complicates spline implementation. A local construction of C1 splines, with degrees of freedom linked solely to the combinatorial properties of the partition, generally requires either a high polynomial degree or the use of special splitting techniques. In this contribution, we focus on constructing C1 splines on a Worsey-Farin refinement, which is obtained by splitting each tetrahedron into 12 smaller tetrahedra. This refinement allows the construction of C1 splines of lower degree, starting at degree three. We demonstrate how high-order super-smoothness inside the original tetrahedra can facilitate spline characterization and how Bernstein-Bézier techniques can be employed to generate basis functions for spline spaces of arbitrary degree.
PSYDAC is a high-performance Python library designed for isogeometric analysis. It is an academic, open-source project created by numerical mathematicians specifically for computational plasma physics applications. It can solve general systems of partial differential equations in weak form, which users define using the domain-specific language provided by SymPDE. It supports finite element exterior calculus (FEEC) with tensor-product spline spaces and handles multi-patch geometries in various ways.
PSYDAC automatically generates Python code for the assembly of user-defined functionals and linear and bilinear forms from the weak formulation of the problem. This Python code is then accelerated to C/Fortran speed using Pyccel. The library also enables large parallel computations on distributed-memory supercomputers using MPI and OpenMP.
In this contribution, we step through two examples of usage, including the problem definition in Python and the parallel scaling on a supercomputer.
Joint work with Julian Owezarek and the PSYDAC developers.
The construction of spline bases over hierarchical T-meshes is fundamentally important for numerous applications, ranging from geometric modeling to isogeometric analysis. The case of Cs smooth splines of degree p = 2s + 1 is particularly attractive, as the dimension of the resulting spaces is well understood and these function spaces have proven useful in various contexts. We explore the generation of these functions as linear combinations of elements from B-systems, which are collections of tensor-product B-splines associated with cross vertices. Specifically, we examine the linear independence of the resulting system of generating functions, thereby generalizing earlier results by Kang, Xu, Chen, and Deng [1] to a more general refinement strategy that produces a broader class of T-meshes.
Joint work with Maodong Pan.
We derive explicit closed-form expressions for the eigenvalues and eigenvectors of the matrices resulting from isogeometric Galerkin discretizations based on outlier-free spline subspaces for the Laplace operator, under different types of homogeneous boundary conditions on bounded intervals. For optimal spline subspaces and specific reduced spline spaces, represented in terms of B-spline-like bases, we show that the corresponding mass and stiffness matrices exhibit a Toeplitz-minus-Hankel or Toeplitz-plus-Hankel structure. Such matrix structure holds for any degree p and implies that the eigenvalues are an explicitly known sampling of the spectral symbol of the Toeplitz part. Moreover, by employing tensor-product arguments, we extend the closed-form property of the eigenvalues and eigenvectors to a d-dimensional box. As a side result, we have an algebraic confirmation that the considered optimal and reduced spline spaces are indeed outlier-free.
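The mechanism is easiest to see in the lowest-degree case, where the Hankel correction vanishes and the structure is purely Toeplitz; a minimal numerical check (illustrative, p = 1 with homogeneous Dirichlet conditions):

```python
import numpy as np

# For p = 1 the (scaled) stiffness matrix is tridiag(-1, 2, -1), a Toeplitz
# matrix with symbol f(theta) = 2 - 2*cos(theta).  Its eigenvalues are the
# exact samples f(j*pi/(n+1)), j = 1, ..., n, with discrete-sine eigenvectors;
# the talk extends this closed-form property to outlier-free spline subspaces
# of arbitrary degree p via the Toeplitz-plus/minus-Hankel structure.
n = 10
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
theta = np.arange(1, n + 1) * np.pi / (n + 1)
print(np.allclose(np.linalg.eigvalsh(T), np.sort(2 - 2 * np.cos(theta))))
```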
Joint work with Carla Manni, Ahmed Ratnani, Stefano Serra-Capizzano, and Hendrik Speleers.
For the PS-12 split introduced by Powell and Sabin in [1] we present an optimal symmetric 4-point quadrature rule and a collection of weighted rules. These are useful for an efficient formation of the linear system arising in Galerkin discretization on this split. We use the S-spline version of simplex splines introduced by Cohen, Lyche, and Riesenfeld in [2], and a global basis based on the theory of minimal determining sets adapted to S-splines on the PS-12 split.
Joint work with Salah Eddargani, Carla Manni, and Hendrik Speleers.
The process of assembling isogeometric Galerkin matrices arising from hierarchical B-spline (HB-spline) discretizations is a topic of active research in view of the associated computational issues, especially when the dimension and the polynomial degree of the basis increase. To address this challenge, different approaches have been investigated. These include specialized quadrature rules, which reduce the number of evaluation points necessary for integration, and efficient Bézier extraction operators, which suitably combine isogeometric analysis with finite element codes. In addition, new assembly methods have been proposed that go beyond classical element-wise algorithms by leveraging the structure of the underlying basis. In this talk, we propose a novel representation of HB-spline system matrices as block-wise Hadamard products, obtained through univariate integrals. We use dedicated data structures to manipulate the sparse tensors involved in the assembly process, in order to reduce the memory footprint and the computational complexity of the method.
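In the simplest single-level, tensor-product setting the principle reads as follows (a hedged sketch: no geometry map, constant coefficient); the talk's hierarchical construction assembles block-wise Hadamard products from the same kind of univariate integrals:

```python
import numpy as np
from scipy.interpolate import BSpline

def univariate_mass(t, p):
    """Univariate B-spline mass matrix M[i, j] = int B_i B_j dx, assembled
    span by span with (p+1)-point Gauss-Legendre quadrature, which is exact
    here since the integrand is piecewise polynomial of degree 2p."""
    n = len(t) - p - 1
    basis = lambda i: BSpline(t, np.eye(n)[i], p, extrapolate=False)
    xg, wg = np.polynomial.legendre.leggauss(p + 1)
    spans = np.unique(t)
    M = np.zeros((n, n))
    for a, b in zip(spans[:-1], spans[1:]):
        x = 0.5 * (b - a) * xg + 0.5 * (a + b)   # Gauss points on the span
        w = 0.5 * (b - a) * wg
        V = np.nan_to_num([basis(i)(x) for i in range(n)])
        M += (V * w) @ V.T
    return M

p = 2
t = np.concatenate(([0.0] * p, np.linspace(0.0, 1.0, 6), [1.0] * p))  # open knots
M1 = univariate_mass(t, p)
M2d = np.kron(M1, M1)   # 2D mass matrix built entirely from univariate integrals
```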
Trimming introduces discontinuities in a spline parametrization. This reduction of continuity violates prerequisites of fast formation and assembly based on weighted quadrature, which usually speeds up simulations tremendously, especially for higher degrees.
This work presents a solution to overcome this issue. The proposed discontinuous weighted quadrature concept incorporates information about trimmed areas into the integration rules for test functions cut by trim curves. Thus, it facilitates applying fast formation and assembly to trimmed spline models. The numerical results demonstrate that treating cut elements becomes the computational bottleneck of the simulation. Therefore, different routines for integrating cut elements are compared.
In the last decade, tensors have shown their potential as valuable tools for various tasks in numerical linear algebra, such as the representation of finite-dimensional operators stemming from the discretization of high-dimensional PDEs. While most of the research has focused on how to compress a given tensor in order to maintain information while reducing the storage demand for its allocation, the solution of linear tensor equations is a less explored avenue. Even though many of the routines available in the literature are based on alternating minimization schemes (ALS), we pursue a different path and utilize Krylov methods instead. The use of Krylov methods in the tensor realm is not new. However, these routines often turn out to be rather expensive in terms of computational cost, and ALS procedures are preferred in practice.
In this talk we show how to enhance Krylov methods for linear tensor equations with a range of randomization-based strategies that remarkably increase the efficiency of these solvers, making them competitive with state-of-the-art ALS schemes. The up-to-date randomized approaches we employ range from sketched Krylov methods with incomplete orthogonalization and structured sketching transformations to streaming algorithms for tensor rounding. The promising performance of our new solver for linear tensor equations is demonstrated by many numerical results.
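To convey the flavor of one ingredient, here is a hedged toy version in the generic (non-tensor) matrix setting: a Krylov basis built with incomplete orthogonalization, repaired by a Gaussian sketch-and-solve least-squares step; all parameters are illustrative and the tensor-equation machinery of the talk is omitted.

```python
import numpy as np

def sketched_krylov_solve(A, b, m=60, k=2, s=400, seed=0):
    """Sketched Krylov sketch: each basis vector is orthogonalized only
    against its k predecessors (incomplete orthogonalization); stability is
    recovered by solving the small sketched least-squares problem
    min_y || S (A V y - b) ||_2 with a Gaussian sketch S of s << n rows."""
    rng = np.random.default_rng(seed)
    n = b.size
    S = rng.standard_normal((s, n)) / np.sqrt(s)
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = A @ V[:, j - 1]
        for i in range(max(0, j - k), j):        # incomplete orthogonalization
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    y, *_ = np.linalg.lstsq(S @ (A @ V), S @ b, rcond=None)
    return V @ y

rng = np.random.default_rng(1)
n = 2000
A = np.eye(n) + rng.standard_normal((n, n)) / (10 * np.sqrt(n))  # well conditioned
b = rng.standard_normal(n)
x = sketched_krylov_solve(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```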
Joint work with Alberto Bucci and Leonardo Robol.
We present a novel quadrature technique for boundary element spline discretizations. Traditional methods classify integrals as weakly singular, nearly singular, or regular, but our approach eliminates the need for this distinction by reducing the classification to just two natural cases: (weakly) singular and non-singular. This is achieved through a smoothly varying quadrature rule that automatically adapts based on the physical distance from singularities in the integral kernels, improving both accuracy and efficiency. Additionally, by integrating over entire B-spline supports rather than individual elements, our method significantly reduces computational cost, particularly for higher-degree splines. We apply this approach to boundary element simulations of Stokes flow, a fundamental problem in fluid dynamics, porous media, and biomechanics, and a stepping stone to more complex models such as the Navier-Stokes equations.
Converting imaging data from different modalities into computational models that are suitable for analysis is a major challenge in the biomedical field. Geometric primitives like tetrahedra and hexahedra are commonly used to create volumetric meshes for both geometric and analytical applications. Isogeometric Analysis (IGA) has revolutionized the field by providing an alternative to conventional finite element methods. By utilizing high-order spline models to describe geometry, a more straightforward and precise method is achieved. Although previous studies, such as NURBSDiff, have made progress in this field, they do not specifically tackle the difficulties associated with volumetric spline fitting. Conventional tensor-product splines, such as B-splines and NURBS, are not efficient in improving geometric accuracy because they cannot be refined locally, forcing a compromise between accuracy and efficiency. The objective of this study is to create a computationally efficient and precise framework for fitting volumetric CAD models that are suitable for IGA to unstructured point clouds. Truncated Hierarchical B-splines (THB-splines) introduce local refinement into tensor-product B-splines. Building upon our existing surface fitting framework, we have expanded the application of differentiable programming to volumetric splines, emphasizing the following key contributions. We have developed a novel differentiable THB-spline module that can be easily integrated into existing computational frameworks. This module is designed specifically for volumetric fitting. We have also conducted a thorough investigation of local refinement strategies, with a focus on optimizing them for volumetric fitting. Additionally, we have extensively tested and validated our module by applying it to 3D imaging data from biological models. Through this testing, we have demonstrated the effectiveness of our module on raw point cloud data. This work not only improves the accuracy of modeling biomedical imaging data but also makes a significant contribution to the broader field of IGA by providing a more sophisticated and efficient method for volumetric CAD modeling.
Splines over triangulations or tetrahedral partitions are useful in many applications, such as finite element analysis, computer aided design, and other engineering problems. For several of these applications, continuous piecewise linear polynomials do not suffice. In some cases, one needs smoother elements for modeling or higher polynomial degrees to increase the approximation order. Smoothness over a triangular partition can be achieved either by using high polynomial degrees or by keeping the degrees low and splitting the triangles into subtriangles.
In this talk we focus on the Clough-Tocher split and the use of the recently introduced C1 simplex-splines [1] as a suitable approximation tool in numerical simulation over general triangulations. The simplex-spline basis was constructed for C1 spline spaces of any degree d ≥ 3 and can also be generalized to any spatial dimension. The C1 connection between adjacent triangles and the corresponding smoothness conditions can be used to build a global basis for the C1 spline space over a general triangulation. A selection of numerical examples shows the optimal approximation power of the proposed approach.
Joint work with Jean-Louis Merrien, Maria Lucia Sampoli, and Hendrik Speleers.
Spline orbifolds are arbitrarily smooth — open or closed — freeform splines defined over the unit sphere, the affine or hyperbolic plane of topological genus 0, 1 or higher, respectively. They are piecewise polynomial over the affine plane and piecewise rational over the sphere and the hyperbolic plane. Over the plane, they are the well-known splines over triangulations, for which we can build spline spaces (for instance by splitting triangles into micro triangles), construct bases and compute dimensions.
I will point out that basically all we know about piecewise polynomial splines over planar triangulations can be carried over to piecewise rational orbifold splines over spherical and hyperbolic triangulations because triangular rational spline orbifolds have homogeneous polynomial representations over spatial triangulations. Fixing their weights, they form linear spaces with Ck conditions that are structurally equal to the ones for polynomial splines.
In this talk, I will review and propose a few ideas to build fast solvers for problems with a natural tensor structure. When discretizing 2D or 3D problems over tensorized domains, the resulting matrices usually have the form of a Kronecker sum. This can be readily exploited to recast the problems as matrix equations (in 2D) or tensor equations (in 3D).
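Schematically, in the 2D case (with a column-major vec and n × n univariate factors A and B):

```latex
(I_n \otimes A + B \otimes I_n)\,\operatorname{vec}(X) = \operatorname{vec}(F)
\quad\Longleftrightarrow\quad
A X + X B^{T} = F ,
```

a Sylvester matrix equation whose solvers can operate on the n × n unknown X rather than on the n^2-dimensional vector of unknowns.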
Under suitable — but somewhat restrictive — assumptions, this enables the creation of fast linear system solvers that work in O(n) time for n × n or n × n × n grids; this should be compared with the "optimal" O(n^d) complexity that would be attained by a classical method that ignores this structure, such as a multigrid scheme. I will describe both classical results in this direction and more recent developments.
Under less stringent assumptions on the geometry or the regularity of the problems, the ideas described can still be powerful tools to devise preconditioners and/or accelerate iterative solvers. I will discuss advantages and limitations of this approach, and outline a few ideas for possible future research lines.
Our study starts from the observation that most hierarchical spline constructions combine systems of functions spanning certain background spaces with proper selection procedures. When the background spaces fulfill certain assumptions and the selection procedure has suitable features, relevant and useful properties of the resulting systems of hierarchical splines can be achieved.
In this talk we present an abstract framework that shows how assumptions regarding the background spaces and features of the selection procedures are related to the resulting properties of the hierarchical spline space so obtained. The assumptions and properties of the selection procedures can be organized into groups that correspond to certain properties of the hierarchical space, and this helps us gain further insights into the relationships and dependencies between the existing constructions.
Joint work with Bert Jüttler, Dominik Mokriš, and Francesca Pelosi.
This talk explores the possibility of extending isogeometric analysis to evolutionary partial differential equations (PDEs) by approximating the time dependence of solutions using splines. This idea has already been explored in the literature, offering an alternative to well-established space-time methods that typically employ discontinuous Galerkin approximation in time. While discontinuous Galerkin methods inherently lead to a sequential time-stepping solving procedure, the computational viability of spline-based temporal discretization crucially depends on the development of efficient and, ideally, parallel solvers. I discuss a class of solvers that exploit the tensor-product structure of spline spaces, achieving high computational efficiency through tensor linear algebra techniques. This presentation will discuss in particular the use of space-time isogeometric analysis for parabolic and hyperbolic PDEs, together with its advantages, limitations, and practical implications.
Joint work with Sara Fraschini, Gabriele Loli, Andrea Moiola, Monica Montardini, and Mattia Tani.
The idea of Generalized Locally Toeplitz (GLT) sequences was introduced as a generalization of both classical Toeplitz sequences and variable-coefficient differential operators. For every sequence of the class, it has been demonstrated that a rigorous description of the asymptotic spectrum can be given in terms of a function (the symbol) that can be easily identified.
This generalizes the notion of a symbol for differential operators (discrete and continuous) or for Toeplitz sequences, where for the latter it is identified through the Fourier coefficients and is related to the classical Fourier Analysis.
For every r, d ≥ 1, the r-block d-level GLT class has nice *-algebra features: it has been proven to be stable under linear combinations, products, and inversion when the sequence that is inverted has a sparsely vanishing symbol (i.e., a symbol whose minimal singular value vanishes at most on a set of zero Lebesgue measure). Furthermore, the GLT *-algebras virtually include any approximation of partial differential equations (PDEs), fractional differential equations (FDEs), and integro-differential equations (IDEs) by local methods (finite differences, finite elements, Isogeometric Analysis, etc.). Based on this, we demonstrate that our results on GLT sequences can be used in a PDE/FDE/IDE setting in various directions, including preconditioning, multigrid, spectral detection of branches, fast 'matrix-less' computation of eigenvalues, stability issues, and challenges such as the use of GLT in tensor, stochastic, and machine learning algorithms. We will also discuss the impact and the further potential of the theory, with special attention to new tools and new directions such as those based on symmetrization tricks, on the extra-dimensional approach, and on blocking structures/operations.
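As the simplest instance (r = d = 1): a finite difference discretization of a variable-coefficient operator such as -(a(x) u'(x))' behaves, up to small-norm and low-rank corrections, like the product sequence

```latex
\{ D_n(a)\,T_n(f) \}_n \;\sim_{\mathrm{GLT}}\; \kappa(x,\theta) = a(x)\,\bigl(2 - 2\cos\theta\bigr),
\qquad
D_n(a) = \operatorname{diag}_{1 \le i \le n} a(i/n),
```

where T_n(f) is the Toeplitz matrix generated by f(θ) = 2 − 2 cos θ; for a continuous positive coefficient a, the eigenvalues are asymptotically distributed as a uniform sampling of κ over [0, 1] × [−π, π].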
In 1D the Nyström scheme is a classical method for the numerical solution of integral equations with smooth kernels, having the attractive feature of naturally producing a continuous extension of the discrete solution. Combined with Gauss-Legendre quadrature rules, it yields efficient formulations which can also be profitably extended to the multivariate setting when domains can be decomposed into hypercubes. Otherwise, for general shapes, the only proposal available in the literature is that given in [1], which, however, is not designed for scattered nodes. Conversely, here we rely on a class of meshless moment-free quadrature rules recently introduced in [2] which are based on B-spline spaces of immersed type. The efficiency of the scheme is increased by an innovative decoupling procedure, separating quadrature nodes from collocation points. Theoretical arguments and experimental evidence confirm that a suitable usage of decoupling is profitable especially for rapidly varying kernels. Experiments on CAD domains will be presented.
Joint work with Bruno Degli Esposti.
We are concerned with fast iterative solvers for Isogeometric Analysis (IGA) with a focus on robustness. Robustness can refer to the geometry and its parametrization, to model parameters, and to discretization parameters, like the grid size, spline degree, and smoothness. The latter two are particularly relevant in the context of IGA. Objects from real-world applications are typically represented as multi-patch geometries. A natural choice to solve linear systems resulting from the discretization of partial differential equations over multi-patch geometries are domain decomposition solvers, like FETI-DP solvers. Their adaptation to IGA is also known as the IsogEometric Tearing and Interconnecting (IETI) method. While the IETI framework allows one to freely choose the solving strategy for the patch-local systems, we restrict ourselves to direct solvers. Our experiments have shown that IETI solvers with direct local solvers are relatively robust with respect to the parametrization of the geometry. We will see that these solvers are not restricted to simple settings of the Poisson equation. On the one hand, we can handle non-matching interfaces, which might be due to non-matching parametrizations, sliding interfaces, or adaptive grid refinements. On the other hand, extensions to other differential equations are possible; this includes linear elasticity equations or Stokes flow equations. We present the most interesting theoretical results and illustrate our findings with the results from numerical experiments.
Consider the Poisson problem on a d-dimensional cube. It is well-known that, if the problem is discretized with linear finite elements on a uniform tensor product mesh, the resulting stiffness matrix can be diagonalized using the Fast Fourier Transform. This fact can be exploited to solve the linear system yielding O(N log N) complexity, where N represents the number of degrees of freedom. Such an approach is referred to as a fast Poisson solver.
In this talk, we show how to generalize this idea to the case of B-splines of arbitrary degree p. The resulting algorithm solves the linear system with O((N + p) log N) complexity. This is achieved by splitting the spline space into an outlier-free subspace and a subspace with low dimension. On the latter subspace, the eigenvectors of the problem are computed numerically. On the former subspace, on the other hand, the eigenvectors are approximated using interpolated sinusoidal functions. The resulting approximated eigendecomposition can be used as a preconditioner for the linear system, yielding extremely fast convergence independently of N and p.
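For p = 1 the classical template is only a few lines (a hedged finite-difference sketch: homogeneous Dirichlet conditions, uniform mesh, using the orthonormal DST-I, which is its own inverse); the construction described in the talk extends this template to arbitrary degree p.

```python
import numpy as np
from scipy.fft import dst

def fast_poisson_1d(b, h):
    """Solve (1/h) tridiag(-1, 2, -1) u = b in O(N log N): the orthonormal
    DST-I diagonalizes this matrix, with eigenvalues (2 - 2 cos(j pi/(N+1)))/h."""
    n = b.size
    lam = (2.0 - 2.0 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))) / h
    return dst(dst(b, type=1, norm='ortho') / lam, type=1, norm='ortho')

n = 1023
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
u = fast_poisson_1d(h * np.pi**2 * np.sin(np.pi * x), h)  # load f = pi^2 sin(pi x)
print(np.max(np.abs(u - np.sin(np.pi * x))))              # O(h^2) discretization error
```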
This talk will focus on the isogeometric version of Finite Element Exterior Calculus (FEEC), a mathematical framework that integrates finite element methods with concepts from differential geometry and algebraic topology. It provides a systematic approach to discretizing partial differential equations, ensuring stability and compatibility across various applications in computational electromagnetism and fluid mechanics. We will discuss the use of (Truncated) Hierarchical B-splines for constructing efficient isogeometric FEEC discretizations, and will provide an overview of some extensions and applications of this approach. We will also discuss another aspect of "fast methods": high-productivity software frameworks that support the development of such methods. In particular, this talk will introduce one such Julia package, Mantis.jl, designed to enable easy prototyping of FEEC discretizations with adaptively-refinable tensor-product and unstructured spline differential forms.
In quasi-magnetostatic problems it is common to use formulations in terms of the magnetic vector potential, which leads to a problem for the curl-curl operator. In this kind of problem the solution is in general not unique, since adding any irrotational function to a solution gives another valid solution. It is then necessary to add a gauging condition to recover uniqueness.
Tree/cotree gauging is a very efficient technique used in finite elements for gauging. It is based on creating a spanning tree on the mesh, i.e., a set of edges that reaches every vertex of the mesh without creating closed loops. The cotree is then formed by all the edges of the mesh that do not belong to the tree, and the solution of the curl-curl problem in the space generated by the cotree is unique. The main advantage with respect to other gauging techniques, such as imposing a zero divergence, either by penalty or through a multiplier, is that it reduces the size of the linear system and the associated matrix is symmetric and positive definite.
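As a minimal sketch of the graph step (edge orientation and the actual curl-curl system are left out), a union-find pass over the edge list splits it into tree and cotree:

```python
def tree_cotree(num_vertices, edges):
    """Split mesh edges into a spanning tree and its cotree, Kruskal-style:
    an edge joining two components is a tree edge, an edge that would close
    a loop is a cotree edge.  `edges` is a list of (v0, v1) vertex pairs."""
    parent = list(range(num_vertices))

    def find(v):                        # root of v's component, path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree, cotree = [], []
    for e, (a, b) in enumerate(edges):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            tree.append(e)
        else:
            cotree.append(e)
    return tree, cotree

# A square with one diagonal: 4 vertices, 5 edges.
tree, cotree = tree_cotree(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
print(tree, cotree)   # [0, 1, 2] [3, 4]: num_vertices - 1 tree edges
```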
It has been shown that, based on the existence of commutative isomorphisms between spline spaces and low order finite elements, the tree/cotree decomposition can be applied for tensor-product B-splines without effort, using the same algorithms existing for FEM.
The generalization to hierarchical splines is not trivial. Indeed, the multilevel structure of hierarchical splines is similar to the existence of hanging nodes in FEM, and the mesh cannot be interpreted as a graph where one can build the spanning tree. In this talk I will show that the tree/cotree technique can be used with hierarchical splines, constructing a suitable tree on each level.
In this work, we present a novel isogeometric topology optimization (TO) method for shell structures that involve complex design domains. These domains often stem from existing designs, which are typically represented as trimmed models containing geometric flaws, rendering them unsuitable for direct analysis or optimization. To address this limitation, we first propose a semi-automatic and scalable pipeline to reparametrize such models into watertight and smooth representations using analysis-suitable unstructured T-splines (ASUTS). Building on this, minimum compliance is studied as the model problem, where the Kirchhoff-Love shell is used to compute the structural response and a generalized Cahn-Hilliard phase-field model is proposed to perform TO. Given that both models are governed by high-order partial differential equations, ASUTS-based isogeometric analysis (IGA) is adopted for the spatial discretization. Moreover, we propose a fast assembly method for unstructured splines to enhance computational efficiency. To demonstrate the efficacy of our approach, we perform several benchmark tests to show that the generalized Cahn-Hilliard model can naturally handle complex topological changes without special treatment. Finally, a couple of real-world engineering structures are studied to show the capability of the proposed method in dealing with complex design domains.
Generative manufacturing applies the power of artificial intelligence (AI) to generate and execute optimal solutions given customer-defined constraints and parameters, such as functional specifications, cost, and lead time, by exploring vast combinations of design and production alternatives based on material and process availability. In this talk, I will present our latest research on combining AI with isogeometric analysis (IGA) for applications in additive manufacturing (AM). This includes a machine learning (ML) framework for inverse design and manufacturing of self-assembling fiber-reinforced composites in 4D printing, IGA-based topology optimization for AM of heat exchangers, as well as data-driven residual deformation prediction to enhance metal component printability and lattice support structure design in the laser powder bed fusion (LPBF) AM process. By speeding up geometry distortion predictions from several hours to mere seconds, our model can be deployed to prevent the generation of infeasible designs. Our ongoing efforts also include developing digital twins to enable rapid prediction of stress-induced build failures in LPBF manufacturing using dynamic neural surrogates and transformers, where reduced order modeling is a key technique to efficiently simulate the underlying physics.