Modeling Vortex-Induced Vibrations Using Self-Attention Transformers

Abstract Summary
Vortex-induced vibrations (VIV) of floating structures play an essential role in offshore engineering design. Accurate prediction of the structural response is critical, as vortex shedding behind bluff bodies may lead to continuous degradation of structural performance or even catastrophic failure. Typically, the description of vortex-induced vibrations requires a high-fidelity fluid-structure interaction model coupling the structure's nonlinear dynamics (large displacements) with the turbulent flow of the surrounding fluid. The latter typically involves Computational Fluid Dynamics (CFD) approaches based on solving the Navier-Stokes equations on a fine mesh that must be frequently adapted to the structure's motion. Unfortunately, the resulting model tends to be quite expensive in terms of computational cost, especially in extensive multi-query analyses such as optimization, real-time response, or uncertainty quantification. Such time-consuming tasks often hamper the use of high-fidelity codes built upon physics-based models. A good alternative for overcoming these limitations is the construction of surrogate models, which have become popular across many research fields as efficient proxies for high-fidelity models. Such models have become essential tools for simplifying the analysis and can be very useful in broad industrial applications, yielding predictions at a much lower computational cost than CFD. In this context, data-driven machine learning (ML) models have gained prominence for their potential to combine field or experimental data with high-fidelity simulations and thereby enhance the capability of computational simulations to describe complex physical systems. Several works have been dedicated to constructing predictive data-driven machine learning models that return accurate predictions at low cost. Recently, transformer models built on self-attention have been applied to model dynamical systems, replacing otherwise expensive computational models. Such models have proven able to accurately predict various dynamical systems and to outperform classical methods commonly used in the scientific machine learning literature. In this work, we propose a machine-learning approach based on self-attention transformers to act as a surrogate model for VIV dynamics. We show through numerical experimentation that the surrogate model can yield accurate predictions of the VIV dynamics. More importantly, it enhances the ability to investigate critical aspects of frequently used wake oscillator models.
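
To illustrate the kind of surrogate described above, the sketch below shows how a self-attention encoder could map a window of past cross-flow displacements to the next displacement value. It is a minimal example assuming PyTorch; the layer sizes, window length, single-feature input, and the class name VIVSurrogate are illustrative assumptions, not the architecture or data used in this work.

# Minimal sketch (PyTorch assumed): a self-attention encoder acting as a
# next-step surrogate for a VIV displacement time series. Hyperparameters
# are illustrative, not the authors' published configuration.
import torch
import torch.nn as nn

class VIVSurrogate(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2, window=128):
        super().__init__()
        self.embed = nn.Linear(1, d_model)                      # lift scalar displacement to d_model
        self.pos = nn.Parameter(torch.zeros(window, d_model))   # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)                       # predict the next displacement

    def forward(self, x):                                       # x: (batch, window, 1)
        h = self.embed(x) + self.pos                            # add positional information
        h = self.encoder(h)                                     # self-attention over the time window
        return self.head(h[:, -1, :])                           # read out from the last time step

# Illustrative usage: one gradient step on synthetic data standing in for
# CFD or wake-oscillator trajectories.
model = VIVSurrogate()
x = torch.randn(8, 128, 1)                                      # 8 windows of 128 past samples
y = torch.randn(8, 1)                                           # next-step targets
loss = nn.MSELoss()(model(x), y)
loss.backward()

In practice, such a surrogate would be trained on trajectories generated by the high-fidelity model (or by a wake oscillator model) and then queried at negligible cost in multi-query settings such as optimization or uncertainty quantification.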
Abstract ID: 71
Universidade Federal do Rio de Janeiro (Postdoc),
Université libre de Bruxelles, École Polytechnique de Ouagadougou (Professor),
Universidade Federal Fluminense (Professor),
UFRJ - Universidade Federal do Rio de Janeiro