Improving hp-Variational Physics-Informed Neural Networks: A Tensor-driven Framework for Complex Geometries, and Singularly Perturbed and Fluid Flow Problems
Abstract
Scientific machine learning (SciML) combines traditional computational science and physical
modeling with data-driven deep learning techniques to solve complex problems. It generally
involves incorporating physical constraints such as differential equations or experimental data
into neural networks to solve Partial Differential Equations (PDEs). Within SciML, Physics-Informed Neural Networks (PINNs) represent a class of neural networks that solve PDEs by incorporating the PDE residual into the optimization problem
along with boundary constraints. This enables the neural network to obtain solutions within
the domain using spatial and temporal coordinates as inputs. Although these methods have
longer training times compared to traditional numerical methods, they have shown superior
performance in terms of inference times and solving inverse problems, making them an important
subject of study. A significant advancement called hp-Variational Physics-Informed Neural Networks (hp-VPINNs) was introduced, which uses the variational form of the residual
(as used in Finite Element Methods) in the loss formulation. This approach offers two key
advantages: first, the differentiability requirement of the loss functional is reduced, resulting
in lower numerical errors during gradient computation; second, the use of h- and p-refinement
enables the network to capture higher frequency solutions. However, this method faces two
major limitations. First, the training time is comparatively higher, especially when the number
of elements in the domain increases (due to h-refinement), which negates its benefits. Second,
the existing framework cannot handle complex geometries, which is essential for solving
real-world applications.
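To make the variational formulation concrete, consider the Poisson problem -Δu = f with homogeneous Dirichlet boundary conditions (a standard model problem used here only for illustration, not a result from this thesis). Rather than penalizing the strong-form residual pointwise, hp-VPINNs minimize element-wise weak-form residuals tested against a set of test functions:

```latex
% Weak-form residual on element K_e against test function v_k.
% Integration by parts transfers one derivative to v_k, lowering
% the differentiability required of the network solution u_{NN}.
R_{e,k} = \int_{K_e} \nabla u_{NN} \cdot \nabla v_k \, d\mathbf{x}
        - \int_{K_e} f \, v_k \, d\mathbf{x},
\qquad
\mathcal{L}_{\mathrm{var}} = \sum_{e} \sum_{k} \left| R_{e,k} \right|^2 .
```

Here h-refinement increases the number of elements K_e, while p-refinement increases the number of test functions v_k per element, which is what allows the network to capture higher-frequency solutions.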
In this thesis, we address these limitations by improving the hp-VPINNs algorithm and demonstrate its application to various problems as detailed below.

FastVPINNs: A Tensor-Driven Accelerated Framework for Variational Physics-Informed Neural Networks in Complex Domains

To address the main challenges in the existing hp-VPINNs framework, such as the increase in training time with an increasing number of elements
and the inability to handle complex geometries, we have developed FastVPINNs, a tensor-based
VPINNs framework. Using optimized tensor operations, FastVPINNs achieves a 100-fold reduction
in median training time per epoch compared to traditional hp-VPINNs. Further, through
the implementation of Mapped Finite Elements, the framework can effectively handle complex
geometries. Beyond improving upon existing implementations, we demonstrate that with
proper hyperparameter selection, FastVPINNs surpasses conventional PINNs in both speed
and accuracy, particularly for problems with high-frequency solutions. We also demonstrate
the framework’s capability in solving inverse problems, including both constant parameter
identification and spatially-varying parameter estimation for scalar PDEs.
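The core acceleration idea can be sketched as follows: quantities that depend only on the mesh and test functions (gradients at quadrature points, weights, Jacobians) are precomputed once and stacked into tensors, so that the per-element residual loop of classical hp-VPINNs collapses into a single batched contraction each epoch. The sketch below uses NumPy and illustrative shapes; the names and layout are assumptions for exposition, not the FastVPINNs library's actual API.

```python
import numpy as np

# Illustrative shapes (hypothetical, not the library's data layout):
#   n_elem : number of elements (grows under h-refinement)
#   n_test : test functions per element (grows under p-refinement)
#   n_quad : quadrature points per element
n_elem, n_test, n_quad = 4, 5, 16
rng = np.random.default_rng(0)

# Precomputed once per mesh: test-function gradients at quadrature
# points, already scaled by quadrature weights and Jacobians.
grad_test_x = rng.standard_normal((n_elem, n_test, n_quad))
grad_test_y = rng.standard_normal((n_elem, n_test, n_quad))

# Recomputed every epoch: network solution gradients at all
# quadrature points of all elements, from one forward/backward pass.
grad_u_x = rng.standard_normal((n_elem, n_quad))
grad_u_y = rng.standard_normal((n_elem, n_quad))

# One einsum replaces the per-element assembly loop:
# residuals[e, k] = sum_q grad_test[e, k, q] * grad_u[e, q]
residuals = (np.einsum("ekq,eq->ek", grad_test_x, grad_u_x)
             + np.einsum("ekq,eq->ek", grad_test_y, grad_u_y))

loss = np.sum(residuals ** 2)  # variational loss over all elements
print(residuals.shape)  # (4, 5)
```

Because the contraction is a single dense tensor operation, its cost scales smoothly with the number of elements instead of incurring Python-level loop overhead per element, which is the behavior the 100-fold per-epoch speedup refers to.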
FastVPINNs for the Navier-Stokes Equations

Although hp-VPINNs possess significant advantages over PINNs, they have not been extended to solve the incompressible Navier-Stokes equations,
despite PINNs being successfully applied to these problems. This limitation can be attributed
to the slow training times of existing hp-VPINNs algorithms and the complex implementation
challenges associated with hp-VPINNs for flow problems. In this work, we implement the
Navier-Stokes equations using FastVPINNs to solve forward problems such as lid-driven cavity
flow, flow through a channel, Falkner-Skan boundary layer, flow past a cylinder, flow past a
backward-facing step, and Kovasznay flow for Reynolds numbers ranging from 1 to 200 in the
laminar regime. We compare our results with PINNs in terms of accuracy and training time to
demonstrate the significance of this implementation. Our experiments show that FastVPINNs
trains 2.4 times faster than PINNs while achieving comparable accuracy to results reported
in the literature. Additionally, we demonstrate the framework’s capability in solving inverse
problems for the Navier-Stokes equations by successfully identifying Reynolds numbers from
sparse solution observations, highlighting the versatility of our approach.
FastVPINNs for Singularly-Perturbed Problems

Singularly-perturbed problems arise in convection-dominated regimes and are challenging test cases because conventional numerical methods can produce spurious oscillations in their solutions.
Stabilization schemes like Streamline-Upwind Petrov-Galerkin (SUPG) and cross-wind loss
functionals enhance numerical stability. Since SUPG stabilization is proposed in the weak
formulation of PDEs, Variational PINNs are a suitable candidate for solving these problems. In
this work, we explore different stabilization schemes and their effects on singularly-perturbed
problems, comparing the accuracy of our results with the existing literature. We demonstrate
that stabilized VPINNs perform better than PINNs proposed in the literature, both in terms of
training time and accuracy. Additionally, we propose a neural network model that predicts
the SUPG stabilization parameter along with the solution, addressing a challenging task in
conventional methods. We also explore adaptive hard constraint functions for boundary layer
problems, using neural networks to adjust the slope based on diffusion coefficients, improving
accuracy and reducing the need for tuning hyperparameters.
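One way such an adaptive hard constraint can be realized (a minimal sketch under assumed forms; the exact ansatz used in this thesis may differ) is to multiply the network output by a mask that vanishes exactly on the boundary, with a slope parameter controlling how sharply the mask rises near a boundary layer. The slope can then be predicted from the diffusion coefficient rather than hand-tuned.

```python
import math

def boundary_mask(x: float, s: float) -> float:
    """Vanishes exactly at x = 0 and x = 1; rises with slope ~ s
    near each endpoint (illustrative form, not the thesis's exact one)."""
    return (1.0 - math.exp(-s * x)) * (1.0 - math.exp(-s * (1.0 - x)))

def hard_constrained_solution(x: float, nn_output: float, s: float) -> float:
    # u(0) = u(1) = 0 holds exactly, regardless of the network output,
    # so no boundary-loss term is needed in training.
    return boundary_mask(x, s) * nn_output

# A heuristic (assumed here for illustration): pick the slope from the
# diffusion coefficient eps, e.g. s ~ 1/eps, so the mask can follow a
# thin boundary layer without manual hyperparameter tuning.
eps = 1e-2
s = 1.0 / eps
print(hard_constrained_solution(0.0, 1.7, s))   # exactly zero at the boundary
print(hard_constrained_solution(1.0, -0.3, s))  # exactly zero at the boundary
```

In the adaptive variant described above, a small auxiliary network would output s (and, for the SUPG case, the stabilization parameter) as a function of the diffusion coefficient, so the constraint sharpens automatically as the problem becomes more convection-dominated.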
In addition, we present the implementation details of the FastVPINNs library, distributed as a Python pip package and developed using TensorFlow 2.0. The library includes a comprehensive
test suite with unit, integration, and compatibility tests, achieving over 96% code coverage. It
also features CI/CD actions on GitHub for streamlined deployment. Documentation is available
at https://cmgcds.github.io/fastvpinns