Open access publication

Article, 2023

Log-law recovery through reinforcement-learning wall model for large eddy simulation

Physics of Fluids, ISSN 1089-7666 (online), 1070-6631 (print), Volume 35, Issue 5, Page 055122, DOI 10.1063/5.0147570

Contributors

  • Vadrot, Aurélien (ORCID 0000-0003-3107-8110) [1]
  • Yang, Xiang I. A. [2]
  • Bae, Hyunji Jane (ORCID 0000-0001-6789-6209) [3]
  • Abkar, Mahdi (ORCID 0000-0002-6220-870X; corresponding author) [1]

Affiliations

  1. [1] Aarhus University [NORA names: AU Aarhus University; University; Denmark; Europe, EU; Nordic; OECD]
  2. [2] Pennsylvania State University [NORA names: United States; America, North; OECD]
  3. [3] California Institute of Technology [NORA names: United States; America, North; OECD]

Abstract

This paper focuses on the use of reinforcement learning (RL) as a machine-learning (ML) modeling tool for near-wall turbulence. RL has demonstrated its effectiveness in solving high-dimensional problems, especially in domains such as games. Despite this potential, RL is still rarely used for turbulence modeling; to date it has mainly been applied to flow control and optimization. In this work, a new RL wall model (RLWM), called VYBA23, is developed, which uses agents dispersed in the flow near the wall. The model is trained at a single Reynolds number (Re_τ = 10^4) and does not rely on high-fidelity data, since the backpropagation process is driven by a reward rather than an output error. The states of the RLWM, i.e., the agents' representation of the environment, are normalized to remove the dependence on the Reynolds number. The model is tested against another RLWM (BK22) and an equilibrium wall model in a half-channel flow at eleven different Reynolds numbers spanning Re_τ ∈ [180; 10^10]. The effects of varying the agents' parameters, such as the action range, time step, and spacing, are also studied. The results are promising, showing little effect on the average flow field but some effect on the wall-shear stress and velocity fluctuations. This work offers positive prospects for developing RLWMs that can recover physical laws and for extending this class of ML models to more complex flows in the future.
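For reference, the "log law" in the title is the standard logarithmic law of the wall that the wall model is trained to recover. The LaTeX sketch below states it, together with the closure used by a generic equilibrium wall model, which inverts the law for the friction velocity; the constants κ ≈ 0.41 and B ≈ 5.2 and the matching-height notation y_m are conventional textbook values and symbols, not quantities taken from this record.

  % Logarithmic law of the wall, in wall units
  % (U^+ = U/u_tau, y^+ = y u_tau/nu; kappa and B are empirical constants,
  %  conventionally kappa ~ 0.41 and B ~ 5.2):
  \begin{equation}
    U^+ = \frac{1}{\kappa} \ln y^+ + B
  \end{equation}
  % A generic equilibrium wall model inverts this relation at a matching
  % height y_m: given the LES velocity U(y_m), solve implicitly for the
  % friction velocity u_tau, then return the wall-shear stress tau_w:
  \begin{equation}
    \frac{U(y_m)}{u_\tau} = \frac{1}{\kappa} \ln\!\left(\frac{y_m u_\tau}{\nu}\right) + B,
    \qquad \tau_w = \rho\, u_\tau^2 .
  \end{equation}

Recovering this law across Reynolds numbers without retraining is what the normalized agent states are meant to enable.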

Keywords

reinforcement learning, machine learning, wall model, large eddy simulation, near-wall turbulence, Reynolds number, half-channel flow, equilibrium wall model, wall-shear stress fluctuations, velocity fluctuations, average flow field, flow control, optimization, high-dimensional problems, high-fidelity data, backpropagation, agents, physical laws

Funders

  • Danish Agency for Science and Higher Education
  • Office of Naval Research
  • United States Department of the Navy

Data Provider: Digital Science