UFR 3-33 Best Practice Advice
Turbulent flow past a smooth and rigid wall-mounted hemisphere
Key physics
The case UFR 3-33 consists of a smooth rigid hemisphere mounted on a smooth plate and exposed to a turbulent boundary layer.
To characterize the problem, the flow field can be divided into several key flow regions:
- The horseshoe vortex system located just upstream of the body results from the separation of the boundary layer from the ground. This is due to the positive pressure gradient in front of the hemisphere, which acts as a flow barrier. The size and formation of this particular flow structure depend on the properties of the approaching boundary layer such as the turbulence intensity, the velocity distribution and the overall thickness of the boundary layer.
- The stagnation area is located in the lower front of the hemisphere, where the stagnation point is found. Its location depends on the size of the horseshoe vortex system.
- Behind this stagnation area the flow is accelerated (acceleration zone). Strong vorticity is generated in the vicinity of the surface.
- This high level of vorticity leads to a flow detachment from the surface of the hemisphere along a separation line. The position of the separation line is influenced by the properties of the approaching boundary layer. A high level of turbulence intensity upstream of the body moves the separation line downstream.
- After separation the flow forms the recirculation area. Its size and shape depend on the position of the separation line and consequently on the properties of the approaching boundary layer.
- On top of the recirculation area strong shear-layer vorticity is observed, leading to the production of Kelvin-Helmholtz vortices which travel downstream into the wake.
- Further downstream the flow reattaches to the ground in the reattachment area. There the splatting effect occurs, redistributing momentum from the wall-normal direction to the streamwise and spanwise directions.
To fully describe the problem, the unsteady flow features are also highlighted:
- The horseshoe vortex system trails past the hemisphere and forms stable necklace-vortices that stretch out into the wake region.
- The flow detaches from the surface of the hemisphere along the indicated separation line and the vortices roll-up. They interact and sometimes merge with the horseshoe vortices behind the hemisphere. Larger vortical structures appear: Entangled vortical hairpin-structures of different sizes and orientations traveling downstream. Note that smaller hairpin-structures can also be observed in the wake growing from the ground as usual in a turbulent boundary layer.
- The vortex shedding mentioned above is complex, and its type and frequency vary with the location: At the top of the hemisphere, arch-type vortices are observed with a shedding frequency in the range 7.9 Hz ≤ f1 ≤ 10.6 Hz. On the sides of the hemisphere another shedding type is present: von Kármán shedding of vortices occurs at a frequency of f2 = 5.5 Hz. This vortex shedding on the lower sides of the hemisphere involves a pattern of two distinguishable types that alternate in shape and time: The first kind can be described as a quasi-symmetric process where the vortical structures detach in a double-sided symmetric manner. The second kind relates to a quasi-periodic vortex shedding resulting in a single-sided alternating detachment pattern.
Numerical modeling
- Discretization accuracy: In order to perform LES predictions, the spatial and temporal discretizations both have to be at least of second-order accuracy. It is also important that the numerical schemes applied possess low numerical diffusion (and dispersion) properties, in order to resolve all scales and not to damp them out. A predictor-corrector scheme (projection method) of second-order accuracy forms the kernel of the fluid solver. In the predictor step an explicit Runge-Kutta scheme advances the momentum equation in time. This explicit method is chosen because of its accuracy, speed and low memory consumption. The discretization in space is done with a second-order central scheme with a flux blending that includes 5% of a first-order upwind scheme.
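The 95%/5% flux blending mentioned above can be illustrated for a single convected scalar on a uniform 1D grid. This is a minimal sketch with hypothetical function and variable names; the actual solver operates on a 3D curvilinear grid.

```python
import numpy as np

def convective_flux(phi, u, beta=0.05):
    """Blended convective face flux: (1 - beta) * central + beta * upwind.

    phi  : cell-centred scalar values on a uniform 1D grid (illustrative)
    u    : velocities at the interior faces, len(phi) - 1 entries
    beta : fraction of first-order upwind admixed (5% in the text)
    """
    central = 0.5 * (phi[:-1] + phi[1:])          # 2nd-order central face value
    upwind = np.where(u > 0, phi[:-1], phi[1:])   # 1st-order upwind face value
    face = (1.0 - beta) * central + beta * upwind # flux blending
    return u * face
```

The small upwind fraction damps odd-even oscillations of the pure central scheme at the cost of a slight additional numerical diffusion.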
- Grid resolution: The second critical issue for LES is the grid resolution. The mesh near the wall, in the free shear layers and also in the interior of the flow domain has to be fine enough. For wall-resolved LES the recommendations given by Piomelli and Chasnov (1996) should be followed or outperformed, e.g., y+ < 2, Δx+ < 50, Δz+ < 50-150. In the present investigation the grid possesses about 30 million CVs. The first cell center is positioned at a distance of Δz/D = 5 × 10⁻⁵. This resolution was found to be sufficient to resolve the flow accurately at the walls as well as in the free shear layers. Similar to the classical flow around a cylinder, it is important to adequately resolve the region close to the separation point and the evolving shear layer.
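A rough a-priori estimate of the first-cell height needed for a target y+ can be obtained from a flat-plate skin-friction correlation. The correlation and all numbers below are illustrative assumptions for such a pre-grid-generation estimate, not values taken from the UFR setup.

```python
import numpy as np

def first_cell_height(U_inf, L, nu, y_plus=1.0):
    """Estimate the wall-normal height of the first cell for a target y+.

    Uses the flat-plate correlation c_f = 0.026 Re^(-1/7) as a rough
    a-priori guess (illustrative assumption, not the UFR methodology).
    """
    Re = U_inf * L / nu
    cf = 0.026 * Re ** (-1.0 / 7.0)
    tau_w = 0.5 * cf * U_inf ** 2      # wall shear stress divided by density
    u_tau = np.sqrt(tau_w)             # friction velocity
    return y_plus * nu / u_tau         # y+ = u_tau * y / nu solved for y
```

Such an estimate only seeds the grid generation; the achieved y+ must be checked a posteriori from the simulation itself.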
- Grid quality: The third point is the quality of the grid. Smoothness and orthogonality is a very important issue for LES computations. In order to capture separations and reattachments on the hemisphere reliably, the orthogonality of the curvilinear grid in the vicinity of the walls has to be high.
- Inlet boundary condition: At the inlet a 1/7 power law with δ/D = 0.5 and without any perturbations is applied. However, to mimic the targeted boundary layer, perturbations generated by a synthetic turbulence inflow generator are injected as source terms upstream of the hemisphere. These additional perturbations are important to reach a good agreement between the experimental and the LES results. Indeed, as demonstrated in Wood et al. (2016), they directly affect the size of the horseshoe vortex, the position of the separation line and consequently the recirculation area.
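The unperturbed 1/7 power-law inlet profile can be written down directly (a minimal sketch; the function name and arguments are our own):

```python
import numpy as np

def inlet_profile(z, U_inf, delta):
    """Mean streamwise velocity at the inlet: 1/7 power law inside the
    boundary layer of thickness delta, free-stream value above it.
    No perturbations are added here; the synthetic fluctuations are
    injected separately as source terms downstream of the inlet.
    """
    z = np.asarray(z, dtype=float)
    return np.where(z < delta, U_inf * (z / delta) ** (1.0 / 7.0), U_inf)
```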
- Outlet boundary condition: A mix of convective and non-convective outflow boundary conditions is applied. The convective outlet boundary condition is favored, since it allows vortices to leave the integration domain without significant disturbances (Breuer, 2002). The convection velocity is set to the 1/7 power law without perturbations.
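A convective outlet condition solves the transport equation ∂φ/∂t + U_c ∂φ/∂x = 0 for the boundary values. A simple explicit, first-order-upwind update of the boundary value can be sketched as follows (a hypothetical one-dimensional illustration, not the solver's actual implementation):

```python
def convective_outlet(phi_boundary, phi_interior, U_c, dt, dx):
    """Explicit update of the outlet value from d(phi)/dt + U_c d(phi)/dx = 0.

    phi_boundary : value at the outlet face at the old time level
    phi_interior : value one cell inside the domain
    U_c          : convection velocity (e.g., the local 1/7 power-law value)
    """
    c = U_c * dt / dx  # local Courant number; stability requires c <= 1
    return phi_boundary - c * (phi_boundary - phi_interior)
```

Because U_c follows the inlet profile, structures are convected out of the domain at roughly the speed at which they arrive, minimizing spurious reflections.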
Physical modeling
- Wall-resolved LES: As mentioned above the flow in the present test case is turbulent and has a Reynolds number of Re = 50,000. Since in LES a large spectrum of scales is resolved by the numerical method, this methodology is well suited. The near-wall regions are resolved too in order to obtain a reference LES solution. Later, wall functions can be used and compared.
Application uncertainties
Application uncertainties can arise due to:
- CFD inflow condition: The length scales used to generate the turbulent perturbations at the inlet do not depend on the location. In reality they vary with the distance to the wall; thus a constant value represents an approximation.
- Laser-Doppler Anemometry: LDA is a calibration-free measurement system. Nevertheless, some issues should be kept in mind while measuring the flow field. A few of the most important points for the present test case (i.e., wind tunnel measurements) are stated here:
- Low seeding in wind tunnel / data rate: The seeding density depends on the overall size of the wind tunnel. Large test sections suffer from low seeding densities, especially in air applications such as the present test case, which additionally has an open test section. The data rate of droplet measurements also decreases in the near-wall region. To get proper results, the duration of each measurement point has to be long enough to collect sufficient data. In fully automated applications this must be taken into account by adapting the measurement duration in critical regions such as near walls.
- Evaluation of velocity spectra: The low seeding in a wind tunnel also has an impact on the correct measurement of velocity spectra, such as the commonly used power spectral density (PSD) analysis. Since the droplets pass the measurement volume of the LDA system randomly, there is no equidistant time signature of the measured velocity components. In this case so-called sample-and-hold algorithms are used to fill in the missing data: an equidistant time pattern is generated artificially, and the gaps between single measurements are filled by holding the last measured value until the next actual measurement arrives. In this way FFT algorithms, which usually require equidistant time spacing, can be applied and easily integrated into the evaluation of velocity spectra. In flows with very high seeding density the sample-and-hold algorithm has only a minor impact on the measurement results, as the time spacing between single measurements is very small and the algorithm has to fill in only very few values to achieve an equidistant measurement grid. This is completely different for flows with very low seeding density, where there are fewer measurements and the time spacing between single measurements can be rather large. Here the sample-and-hold algorithm fills in more artificial data and the velocity measurement can be biased in a non-physical direction, as described by Benedict et al. (2000) and Broersen et al. (2000). Additionally, Adrian and Yao (1986) have characterized the behavior of the sample-and-hold algorithm: it acts as a first-order low-pass filter with a cut-off frequency of about f_c = ṅ / (2π), where ṅ is the average data rate per second. The benefit of sample-and-hold algorithms is therefore limited to the maximum frequency a study has to reveal, which mainly depends on the data rate that can be achieved in a specific setup.
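The sample-and-hold reconstruction described above can be sketched in a few lines. This is a minimal illustration with our own function name and toy data, not part of any LDA software; a standard FFT-based PSD estimator can then be applied to the equidistant output.

```python
import numpy as np

def sample_and_hold(t_samples, u_samples, dt):
    """Resample randomly timed LDA samples onto an equidistant grid.

    Holds the last measured value (zeroth-order hold) until the next
    actual measurement arrives, so FFT-based PSD estimators become
    applicable. Acts as a first-order low-pass filter with cut-off
    f_c ~ ndot / (2 pi), ndot being the mean data rate (Adrian & Yao).
    """
    t_grid = np.arange(t_samples[0], t_samples[-1], dt)
    # index of the last measurement at or before each grid time
    idx = np.searchsorted(t_samples, t_grid, side="right") - 1
    return t_grid, u_samples[idx]
```

For sparse data (large gaps between bursts) the held values dominate the resampled series, which is exactly the biasing mechanism discussed above.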
- Reflections at the surface cause a poor signal-to-noise ratio
- Seeding: Choice of seeding particles
- Estimation of cross-moments (coincidence measurements, window-functions)
- Constant Temperature Anemometry: CTA is a widely established measurement system but has certain difficulties, which shall be mentioned here:
- Temperature dependency of the measurement
- The calibration involves a non-linear relation between voltage and velocity
- Invasive measurement technique
- No measurement of the flow direction with standard probes
Recommendations for Future Work
- The numerical computations were performed based on a wall-resolving LES. This implies fine-resolved grids leading to a high computational effort. Wall functions should be tested to reduce the resolution and thus decrease the required effort.
- As shown by Schmidt and Breuer (2016), the original formulation of the source terms using the STIG data (presented in Section Synthetic turbulent inflow generator (STIG)) leads to an undesired change of the target autocorrelations and thus to an integral time scale within the numerical simulation differing from the integral time scale defined at the beginning of the STIG generation process. Therefore, an alternative expression of the source terms, based on the ratio between (Φ′)syn and the integral time scale of the inflow T, has been developed and will be employed in future work.
- The flow field highly depends on the turbulent intensity introduced upstream of the hemisphere (see Wood et al., 2016). The LES predictions are done with the synthetic turbulence inflow generator by Klein et al. (2003). The entire synthetic inflow profile is defined by one integral time scale and two integral length scales. The integral scales observed within the boundary layer depend on the distance to the wall. Therefore a segmentation of the synthetically generated flow field into several regions with different integral scales is of interest.
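The proposed segmentation of the synthetic inflow into wall-distance regions with different integral scales could be realized as a piecewise-constant lookup. The band edges and scale values below are purely illustrative assumptions, not measured values from the UFR setup.

```python
import numpy as np

def integral_scale(z, band_edges, scales):
    """Piecewise-constant integral scale over wall-distance bands.

    band_edges : sorted wall-distance values separating the regions
                 (len(scales) == len(band_edges) + 1)
    scales     : one integral scale per region (illustrative values)
    """
    idx = np.searchsorted(band_edges, z, side="right")
    return np.asarray(scales)[idx]
```

Each region would then feed its own length and time scales into the digital-filter generation of Klein et al. (2003), instead of one global set of scales for the whole inflow plane.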
- The case UFR 3-33 with its flow separations and different types of vortex shedding is an appropriate configuration to test and validate new turbulence models or new turbulence wall functions.
Acknowledgments
The work reported here was financially supported by the Deutsche Forschungsgemeinschaft under the contract numbers BR 1847/12-1 and BR 1847/12-2 (Breuer, HSU Hamburg). The large computations were carried out on the German Federal Top-Level Supercomputer SuperMUC at LRZ Munich under the contract number pr84na. Furthermore, the authors want to thank Markus Klein (Universität der Bundeswehr München) for providing the original source code of the digital filter based inflow procedure as starting point of the source term development.
Contributed by: Jens Nikolas Wood, Guillaume De Nayer, Stephan Schmidt, Michael Breuer — Helmut-Schmidt Universität Hamburg
© copyright ERCOFTAC 2024