Deep reinforcement learning-based airfoil design and optimization: An aerodynamic analysis / Scavella, P.; Paolillo, G.; Greco, C. S. - In: AEROSPACE SCIENCE AND TECHNOLOGY. - ISSN 1270-9638. - 167:(2025). [10.1016/j.ast.2025.110638]
Deep reinforcement learning-based airfoil design and optimization: An aerodynamic analysis
Scavella P.; Paolillo G.; Greco C. S.
2025
Abstract
This study explores a deep reinforcement learning (DRL) approach to aerodynamic shape optimization, in which airfoil geometries are evolved from a fixed initial state through a flexible B-spline parameterization. The optimization process interfaces directly with an aerodynamic solver based on Xfoil's viscous–inviscid interaction method, bypassing the need for surrogate models, enabling unbounded exploration of optimal policies, and eliminating the uncertainties associated with surrogate-based techniques. The focus is on developing global optimization strategies for single-point airfoil design via DRL, leveraging the capacity of single-step DRL to guide the exploration and learning process across a range of fluid-dynamic parameters, including Reynolds number, angle of attack, and bypass-transition conditions. Without any explicit or embedded prior knowledge of aerodynamic principles, the system reliably converges to high-performance airfoils under a wide range of flow conditions. Notably, the optimized shapes display characteristics typical of classical high-lift and natural-laminar-flow airfoils, such as extended laminar flow regions, smooth concave pressure recovery, and Stratford-like expansion peaks. Aerodynamic efficiency (lift-to-drag ratio) reaches values as high as 600, with lift coefficients exceeding 4.0 under favorable conditions. Beyond the numerical results, the study shows how classical aerodynamic design strategies can emerge organically from the optimization, even under minimal constraints. The observed sensitivity to Reynolds number, angle of attack, and transition location suggests a robust and adaptable optimization process. While the current implementation is limited to single-point performance, the framework sets the stage for future developments in which increasingly sophisticated design behaviors and multi-objective challenges can be explored.
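
To make the workflow described in the abstract more concrete, the sketch below illustrates one possible realization of a single-step DRL loop coupled to a B-spline airfoil parameterization. It is not the authors' code: the function names, control-point layout, and hyperparameters are hypothetical, the Gaussian/REINFORCE policy is a generic stand-in for the single-step DRL agent, and `evaluate_lift_to_drag` is a placeholder for the actual XFOIL-based viscous–inviscid analysis (which would be run at the prescribed Reynolds number, angle of attack, and transition setting).

```python
"""Minimal sketch (assumptions only) of single-step DRL airfoil optimization."""
import numpy as np
from scipy.interpolate import BSpline

N_CTRL = 6     # control points per surface (hypothetical choice)
DEGREE = 3     # cubic B-splines
RNG = np.random.default_rng(0)

def build_airfoil(ctrl_y_upper, ctrl_y_lower, n_pts=120):
    """Return (x, y_upper, y_lower) surface ordinates from control points.

    A real setup would additionally pin the leading/trailing-edge control
    points at y = 0 to close the shape; this sketch omits that detail.
    """
    n = len(ctrl_y_upper)
    # Clamped knot vector: n + DEGREE + 1 knots, curve hits end control points.
    knots = np.concatenate(([0.0] * DEGREE,
                            np.linspace(0.0, 1.0, n - DEGREE + 1),
                            [1.0] * DEGREE))
    x = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, n_pts)))  # cosine spacing
    y_up = BSpline(knots, np.asarray(ctrl_y_upper), DEGREE)(x)
    y_lo = BSpline(knots, np.asarray(ctrl_y_lower), DEGREE)(x)
    return x, y_up, y_lo

def evaluate_lift_to_drag(x, y_up, y_lo, reynolds=1e6, alpha_deg=3.0):
    """Placeholder for the aerodynamic solver (e.g. an XFOIL run).

    A real implementation would write the coordinates to file, run the
    viscous-inviscid analysis at the given Re / alpha / transition setting,
    and parse Cl and Cd.  Here a smooth toy score keeps the sketch runnable.
    """
    thickness = float(np.max(y_up - y_lo))
    camber = float(np.mean(0.5 * (y_up + y_lo)))
    return 100.0 * camber - 500.0 * (thickness - 0.12) ** 2  # toy reward

def run_single_step_drl(n_iters=200, batch=16, lr=0.05, sigma=0.01):
    """REINFORCE on a Gaussian policy; each episode is one shape evaluation."""
    # Policy mean = control-point ordinates of the fixed initial airfoil.
    mean = np.concatenate([0.06 * np.ones(N_CTRL), -0.06 * np.ones(N_CTRL)])
    for it in range(n_iters):
        actions = mean + sigma * RNG.standard_normal((batch, mean.size))
        rewards = np.empty(batch)
        for i, a in enumerate(actions):
            x, y_up, y_lo = build_airfoil(a[:N_CTRL], a[N_CTRL:])
            rewards[i] = evaluate_lift_to_drag(x, y_up, y_lo)
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # baseline
        # REINFORCE: grad of log N(a | mean, sigma^2 I) w.r.t. mean.
        grad = (adv[:, None] * (actions - mean)).mean(axis=0) / sigma ** 2
        mean += lr * sigma ** 2 * grad  # scaled step for stable updates
        if it % 50 == 0:
            print(f"iter {it:3d}  mean reward {rewards.mean():.3f}")
    return mean

if __name__ == "__main__":
    run_single_step_drl()
```

Because the environment state is the fixed initial geometry, each "episode" collapses to a single action followed by a reward, which is why the policy update above resembles a stochastic search over control-point ordinates; swapping the toy reward for an actual solver call and the Gaussian policy for the paper's DRL agent recovers the structure the abstract describes.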


