UFR 3-33 Best Practice Advice


Turbulent flow past a smooth and rigid wall-mounted hemisphere

Front Page

Description

Test Case Studies

Evaluation

Best Practice Advice

References

Semi-confined flows

Underlying Flow Regime 3-33

Best Practice Advice

Key physics

The case UFR 3-33 consists of a smooth rigid hemisphere mounted on a smooth plate and exposed to a turbulent boundary layer.

To characterize the problem, the flow field can be divided into several key flow regions:

  • The horseshoe vortex system located just upstream of the body results from the separation of the boundary layer from the ground. This is due to the positive pressure gradient in front of the hemisphere acting as a flow barrier. The size and formation of this particular flow structure depends on the properties of the approaching boundary layer such as the turbulence intensity, the velocity distribution and the overall thickness of the boundary layer.
  • The stagnation area is located in the lower front of the hemisphere, where the stagnation point is found. Its location depends on the size of the horseshoe vortex system.
  • Behind this stagnation area the flow is accelerated (acceleration zone). Strong vorticity is generated in the vicinity of the surface.
  • This high level of vorticity leads to a flow detachment from the surface of the hemisphere along a separation line. The position of the separation line is influenced by the properties of the approaching boundary layer. A high level of turbulent intensity upstream of the body moves the separation line downstream.
  • After separation the flow forms the recirculation area. Its size and form depend on the position of the separation line and consequently on the properties of the approaching boundary layer.
  • On top of the recirculation area strong shear-layer vorticity is observed, leading to the production of Kelvin-Helmholtz vortices which travel downstream in the wake.
  • Further downstream a reattachment area is present. There the splatting effect occurs, redistributing momentum from the wall-normal direction to the streamwise and spanwise directions.


To fully describe the problem, the unsteady flow features are also highlighted:

  • The horseshoe vortex system trails past the hemisphere and forms stable necklace-vortices that stretch out into the wake region.
  • The flow detaches from the surface of the hemisphere along the indicated separation line and the vortices roll up. They interact and sometimes merge with the horseshoe vortices behind the hemisphere. Larger vortical structures appear: entangled hairpin structures of different sizes and orientations travel downstream. Note that smaller hairpin structures growing from the ground, as usual in a turbulent boundary layer, can also be observed in the wake.
  • The vortex shedding mentioned above is complex and its type and frequency vary with the location: At the top of the hemisphere, arch-type vortices are observed with a shedding frequency in the range 7.9 Hz ≤ f1 ≤ 10.6 Hz. On the sides of the hemisphere another shedding type is present: von Kármán shedding of vortices occurs at a frequency of f2 = 5.5 Hz. This vortex shedding on the lower sides of the hemisphere involves a pattern of two distinguishable types that switch in shape over time: The first kind can be described as a quasi-symmetric process in which the vortical structures detach in a double-sided symmetric manner. The second kind is a quasi-periodic vortex shedding resulting in a single-sided alternating detachment pattern.

Numerical modeling

  • Discretization accuracy: In order to perform LES predictions, the spatial and temporal discretizations are both required to be at least of second-order accuracy. It is also important that the numerical schemes applied possess low numerical diffusion (and dispersion) properties in order to resolve all scales and not to dampen them out. A predictor-corrector scheme (projection method) of second-order accuracy forms the kernel of the fluid solver. In the predictor step an explicit Runge-Kutta scheme advances the momentum equation in time. This explicit method is chosen because of its accuracy, speed and low memory consumption, and the small time steps it requires fit well to the temporal resolution needed for LES. The discretization in space is done with a second-order central scheme with a flux blending including not more than 5% of a first-order upwind scheme.
  • Grid resolution: The second critical issue for LES is the grid resolution. The mesh near the wall, in the free shear layers and also in the interior of the flow domain has to be fine enough. For wall-resolved LES the recommendations given by Piomelli and Chasnov (1996) should be followed or outperformed, e.g., y+ < 2, Δx+ < 50 and Δz+ in the range 50–150. In the present investigation the grid possesses about 30 million CVs. The first cell center is positioned at a distance of Δz/D = 5 × 10⁻⁵ from the wall. This was found to be sufficient to resolve the flow accurately at the walls as well as in the free shear layers. Similar to the classical flow around a cylinder, it is important to adequately resolve the region close to the separation point and the evolving shear layer.
  • Grid quality: The third point is the quality of the grid. Smoothness and orthogonality are very important issues for LES computations. In order to capture flow separations and reattachments on the hemisphere reliably, the orthogonality of the curvilinear grid in the vicinity of the walls has to be high. The grid used in the present case shows in the whole computational domain a high level of the skew quality metric (as defined by Knupp, 2003) close to unity (see Fig. 1), which ensures a high grid quality.
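The flux blending mentioned above can be illustrated by a minimal one-dimensional sketch (all names are illustrative; this is not the actual solver implementation): the convective face value is a weighted combination of a second-order central interpolation and a first-order upwind value.

```python
def blended_face_value(phi_left, phi_right, velocity, beta=0.05):
    """Blend a 2nd-order central and a 1st-order upwind face value.

    beta is the upwind fraction; the text above recommends beta <= 0.05.
    Illustrative sketch only, not the actual solver code.
    """
    central = 0.5 * (phi_left + phi_right)               # 2nd-order central
    upwind = phi_left if velocity >= 0.0 else phi_right  # 1st-order upwind
    return (1.0 - beta) * central + beta * upwind

# Face between cells with phi = 1.0 and 2.0 and positive velocity:
# central = 1.5, upwind = 1.0 -> 0.95 * 1.5 + 0.05 * 1.0 = 1.475
face = blended_face_value(1.0, 2.0, velocity=1.0)
```

The small upwind fraction adds just enough numerical dissipation to stabilize the central scheme without noticeably damping the resolved turbulent scales.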



Fig. 1: Contour levels of the skew quality metric of the present grid.
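As a hedged illustration of what such a metric measures, the following sketch evaluates an orthogonality-based skew quality for a 2-D cell from its two edge vectors. This is a simplified variant; the exact algebraic definition in Knupp (2003) differs in detail.

```python
import math

def skew_quality(e1, e2):
    """Orthogonality-based skew quality of a 2-D cell.

    e1, e2 are the two edge vectors meeting at a cell corner.
    Returns 1.0 for a perfectly orthogonal cell and values < 1
    as the cell becomes skewed (a simplified variant of the
    algebraic metrics in Knupp, 2003).
    """
    cross = abs(e1[0] * e2[1] - e1[1] * e2[0])  # |e1 x e2| = |e1||e2| sin(theta)
    return cross / (math.hypot(*e1) * math.hypot(*e2))

# An orthogonal cell scores 1.0, a 45-degree skewed cell about 0.707:
q_ortho = skew_quality((1.0, 0.0), (0.0, 2.0))
q_skew = skew_quality((1.0, 0.0), (1.0, 1.0))
```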


  • Outlet boundary condition: A mix of convective and non-convective outflow boundary conditions is applied. The convective outlet boundary condition is favored since it allows vortices to leave the integration domain without significant disturbances (Breuer, 2002). Thus, it is applied in all regions where this phenomenon is relevant. The convection velocity is set according to the 1/7 power law without perturbation.
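A convective outlet condition of this kind typically advances the boundary values by the 1-D transport equation ∂φ/∂t + U_c ∂φ/∂x = 0. Below is a minimal explicit sketch under that assumption (illustrative only; Breuer, 2002, describes the actual treatment):

```python
def convective_outlet(phi_boundary, phi_interior, u_conv, dt, dx):
    """Advance the outlet value by d(phi)/dt + U_c * d(phi)/dx = 0,
    discretized with first-order upwind in space and explicit Euler
    in time. Illustrative sketch only.
    """
    return phi_boundary - u_conv * dt / dx * (phi_boundary - phi_interior)

# A disturbance at the boundary is convected out of the domain:
phi_new = convective_outlet(phi_boundary=2.0, phi_interior=1.0,
                            u_conv=1.0, dt=0.1, dx=0.5)
# -> 2.0 - 0.2 * (2.0 - 1.0) = 1.8, relaxing toward the interior value
```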

Physical modeling

  • Wall-resolved LES: As mentioned above, the flow in the present test case is turbulent with a Reynolds number of Re = 50,000. Since LES resolves a large spectrum of scales directly, this methodology is well suited. The near-wall regions are also resolved in order to obtain a reference LES solution. Later, wall functions can be used and compared.
  • Inlet boundary condition: At the inlet a 1/7 power law with δ/D = 0.5 and without any perturbation is applied. However, to mimic the targeted approaching boundary layer, perturbations generated by a synthetic turbulence inflow generator are injected as source terms upstream of the hemisphere. These additional perturbations are important to reach a good agreement between experimental data and LES results. Indeed, as demonstrated in Wood et al. (2016), they directly affect the size of the horseshoe vortex, the position of the separation line and consequently the recirculation area.
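The 1/7 power-law mean profile applied at the inlet can be written down directly; the following sketch uses normalized, illustrative values:

```python
def power_law_profile(z, u_inf, delta, exponent=1.0 / 7.0):
    """Mean streamwise inlet velocity following the 1/7 power law.

    z: wall distance, delta: boundary-layer thickness (delta/D = 0.5
    in the present case). Above z = delta the free-stream value holds.
    Illustrative sketch of the inlet profile described above.
    """
    if z >= delta:
        return u_inf
    return u_inf * (z / delta) ** exponent

# At half the boundary-layer thickness: U * 0.5**(1/7), about 0.906 * U
u_half = power_law_profile(0.5, u_inf=1.0, delta=1.0)
```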

Application uncertainties

Application uncertainties can arise due to:

  • CFD inflow condition: The length scales used to generate the turbulent perturbations for the inlet do not depend on the location. This is not the case in reality and thus represents an approximation.
  • Laser-Doppler anemometry: LDA is a calibration-free measurement system. Some issues should be kept in mind while measuring the flow field. A few of the most important points for the present test case (i.e., wind tunnel measurements) are stated here:
    • Low seeding in wind tunnel / data rate: The seeding density depends on the overall size of the wind tunnel. Larger test sections suffer from low seeding densities, especially in air applications like the present test case, which additionally has an open test section. The data rate of droplet measurements also decreases in the near-wall region. To get proper results, the duration of each measurement point has to be long enough to collect sufficient data. In fully automated applications this must be taken into consideration by adapting the measurement duration in critical regions such as near walls.
    • Evaluation of velocity spectra: The low seeding in a wind tunnel also has an impact on the correct measurement of velocity spectra, such as the commonly used power spectral density (PSD) analysis. Since the droplets pass the measurement volume of the LDA system randomly, the measured velocity components have no equidistant time signature. In this case so-called sample-and-hold algorithms are used to generate an artificial equidistant time pattern, literally filling up the space between single measurements by holding the last measured value until the next actual measurement arrives. In this way the advantages of FFT algorithms, which usually require equidistant time spacing, can be exploited and easily integrated into the evaluation of velocity spectra. In flows with very high seeding density the sample-and-hold algorithm has only a minor impact on the measurement results, as the time spacing between single measurements is very small and the algorithm has to fill up only very few values to achieve an equidistant measurement grid. This is completely different for flows with a very low seeding density, where there are fewer measurements and the time spacing between single measurements can be rather large. Here the sample-and-hold algorithm fills up more artificial data and the velocity measurement can be biased in a non-physical direction, as described by Benedict et al. (2000) and Broersen et al. (2000). Additionally, Adrian and Yao (1986) have shown that the sample-and-hold algorithm acts as a first-order low-pass filter with a cut-off frequency of about fco = ṅ/(2π), where ṅ is the average data rate per second. The benefit of sample-and-hold algorithms is therefore limited by the maximum frequency a study has to reveal, which mainly depends on the data rate that can be achieved in a specific setup. In some cases (as the present one) it is useful to fall back on other measurement devices such as the constant temperature anemometer.
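The reconstruction described above amounts to a zero-order hold of randomly timed LDA samples onto an equidistant grid. A minimal sketch with illustrative burst times and values:

```python
import numpy as np

def sample_and_hold(t_samples, u_samples, t_grid):
    """Resample randomly timed LDA bursts onto an equidistant grid by
    holding the last measured value (zero-order hold). t_samples must
    be sorted; grid points before the first burst get the first value.
    Illustrative sketch of the processing described in the text.
    """
    idx = np.searchsorted(t_samples, t_grid, side="right") - 1
    idx = np.clip(idx, 0, len(u_samples) - 1)
    return u_samples[idx]

# Irregular bursts at t = 0.0, 0.35, 0.8 resampled at dt = 0.25:
t_b = np.array([0.0, 0.35, 0.8])
u_b = np.array([1.0, 2.0, 3.0])
u_eq = sample_and_hold(t_b, u_b, np.arange(0.0, 1.01, 0.25))
# -> [1.0, 1.0, 2.0, 2.0, 3.0]
```

With a mean data rate ṅ, spectra computed from the reconstructed signal are only trustworthy below the Adrian and Yao (1986) cut-off of roughly ṅ/(2π).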
    • Estimation of Reynolds shear stresses: The Reynolds shear stresses are useful to describe the flow physics, but often complicated to measure accurately. In LDA measurements the velocity components are recorded independently, which means that there is no direct correlation between the components in a series of measurements owing to the independent times at which the droplets pass the measurement volume. However, an approximation of the cross-correlations is possible by utilizing coincidence algorithms. These algorithms match the velocity components by using window functions that set a time interval in which the velocity components are considered to be correlated. For sufficient correlations it is necessary to maintain comparable data rates for each velocity component.
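A minimal sketch of such a coincidence algorithm is given below (the window width and all data are illustrative): two independently sampled components are paired whenever their burst times fall within a common window, and the resulting pairs can then be used to estimate u'v'.

```python
def coincidence_pairs(t_u, u, t_v, v, window):
    """Match two independently sampled velocity components: a (u, v)
    pair is formed whenever the burst times differ by at most `window`.
    Each v-sample is matched at most once. Both time lists must be
    sorted. Illustrative sketch of a coincidence-window algorithm.
    """
    pairs = []
    j = 0
    for ti, ui in zip(t_u, u):
        # Skip v-samples that are already too old for this u-sample.
        while j < len(t_v) and t_v[j] < ti - window:
            j += 1
        if j < len(t_v) and abs(t_v[j] - ti) <= window:
            pairs.append((ui, v[j]))
            j += 1
    return pairs

# Component bursts arriving at slightly different times:
p = coincidence_pairs([0.0, 1.0, 2.0], [1.0, 2.0, 3.0],
                      [0.05, 1.4, 2.02], [10.0, 20.0, 30.0],
                      window=0.1)
# -> [(1.0, 10.0), (3.0, 30.0)]; the middle u-sample finds no partner
```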
    • Reflections at surfaces should be minimized by using black varnish or other light-absorbing/diffusing paint. In some setups reflections cannot be avoided. The affected data have to be viewed critically, since the signal-to-noise ratio is often insufficient; in this case the measurements cannot be included in the evaluation of the flow field.
    • The seeding medium used in the present case consists of DEHS droplets, which are a good choice in air flow applications since they are very stable.
  • Constant temperature anemometry: CTA is a widely established measurement system but holds certain difficulties which shall be mentioned here:
    • Temperature dependency of the measurement: While conducting time-consuming measurement series, the wind tunnel and its surroundings tend to heat up gradually due to the energy emitted from the blower. The temperature influence also depends on the size of the test facility and the passive/active cooling system. CTA measuring equipment is very sensitive to temperature changes. It is therefore recommended to measure the room temperature in parallel to the actual velocity measurements in order to compensate the data sets for temperature drift.
    • Calibration process: The calibration process of the CTA system sets the output voltage of the CTA probe in correlation to the calibration velocity. This correlation is highly non-linear. The calibration process should contain a sufficient number of data points to evaluate the best-fit curve between voltage and velocity. The influence of the temperature has to be taken into consideration to avoid systematic errors during the calibration process. Most CTA systems come with a guideline on how to calibrate a specific probe. Nevertheless, it is necessary to check further influences, such as variations in cable length, on the final measurement results.
    • Invasive measurement: CTA measurements are invasive, as the probe must be installed in the flow field. It is necessary to minimize this influence by designing an appropriate probe support. In high-energy flows this can be a problem, as too fragile structures tend to oscillate under the wind loads, causing errors in the velocity measurement.
    • A major disadvantage of standard CTA-probes is their inability to measure the flow direction. The data acquisition is restricted to the velocity magnitude of the flow field.
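The non-linear voltage-velocity correlation mentioned under the calibration process is commonly modeled by King's law, E² = A + B·Uⁿ. The sketch below fits A and B to synthetic calibration data; the exponent n = 0.45 and all numbers are illustrative assumptions, not values from the present experiment.

```python
import numpy as np

def fit_kings_law(u_cal, e_cal, n=0.45):
    """Fit King's law E^2 = A + B * U^n to calibration data
    (n held fixed; A and B obtained by linear least squares).
    Illustrative sketch of a common CTA calibration model.
    """
    x = np.asarray(u_cal) ** n
    m = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(m, np.asarray(e_cal) ** 2, rcond=None)
    return a, b

def velocity_from_voltage(e, a, b, n=0.45):
    """Invert the fitted King's law to obtain the velocity magnitude."""
    return ((e ** 2 - a) / b) ** (1.0 / n)

# Synthetic calibration points generated from A = 1.2, B = 0.8:
u_cal = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
e_cal = np.sqrt(1.2 + 0.8 * u_cal ** 0.45)
a, b = fit_kings_law(u_cal, e_cal)
```

Fitting in the E² variable keeps the problem linear in A and B; the temperature compensation discussed above would enter as a correction of E before the fit.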

Recommendations for future work

  • The numerical computations were performed based on a wall-resolving LES. This implies very fine grids leading to a high computational effort. Wall functions should be tested to reduce the grid resolution and thus to decrease the required effort.
  • As shown by Schmidt and Breuer (2016), the original formulation of the source term using the STIG data (presented in Section Synthetic turbulent inflow generator (STIG)) leads to an undesired change of the target autocorrelations and thus to an integral time scale within the numerical simulation which deviates from the integral time scale defined at the beginning of the STIG generation process. Therefore, an alternative expression of the source term, based on the ratio between (Φ′)syn and the integral time scale of the inflow T, has been developed and will be employed in future work.
  • The flow field highly depends on the turbulent intensity introduced upstream of the hemisphere (see Wood et al., 2016). The LES predictions are done with the synthetic turbulence inflow generator by Klein et al. (2003). The entire synthetic inflow profile is defined by one integral time scale and two integral length scales. The integral scales observed within the boundary layer depend on the distance to the wall. Therefore, a segmentation of the synthetically generated flow field into several regions with different integral scales is of interest.
  • Due to a compromise between the computational effort and the length of the time signals, the previously generated STIG data consist of about 180,000 time steps. The subsequently carried out numerical simulations of the flow require more time steps to deliver statistically converged distributions, leading to repeated re-use of the limited STIG data. In order to avoid this recycling of the STIG data, a direct coupling between the STIG and the numerical simulation within each time step is desirable, generating continuous time signals of the STIG data with a theoretically infinite number of time steps.
  • The case UFR 3-33 with its complex flow phenomena including separation, reattachment and different types of vortex shedding is an appropriate configuration to test and validate new turbulence models or new wall functions.
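The digital-filter approach of Klein et al. (2003) referenced above can be sketched in one dimension: white noise is convolved with Gaussian filter coefficients so that the output carries a prescribed integral scale. This is a simplified illustration; the actual generator works on 3-D fields with one integral time scale and two integral length scales, as stated above.

```python
import numpy as np

def digital_filter_signal(n_steps, length_scale, rng=None):
    """1-D sketch of digital-filter inflow generation (Klein et al.,
    2003): white noise convolved with Gaussian filter coefficients
    yields a zero-mean, unit-variance signal whose autocorrelation
    carries the prescribed integral scale (in grid steps).
    """
    rng = np.random.default_rng(rng)
    n = int(length_scale)
    support = 2 * n                                # filter half-width
    k = np.arange(-support, support + 1)
    b = np.exp(-np.pi * k ** 2 / (2.0 * n ** 2))   # Gaussian coefficients
    b /= np.sqrt(np.sum(b ** 2))                   # unit output variance
    noise = rng.standard_normal(n_steps + 2 * support)
    return np.convolve(noise, b, mode="valid")

# Correlated over roughly `length_scale` steps, approximately N(0, 1):
sig = digital_filter_signal(10_000, length_scale=8, rng=0)
```

Coupling such a generator directly to the flow solver, as recommended above, would remove the need to recycle a finite pre-generated signal.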

Acknowledgments

The work reported here was financially supported by the Deutsche Forschungsgemeinschaft under the contract numbers BR 1847/12-1 and BR 1847/12-2 (Breuer, HSU Hamburg). The large computations were carried out on the German Federal Top-Level Supercomputer SuperMUC at LRZ Munich under the contract number pr84na. Furthermore, the authors want to thank Markus Klein (Universität der Bundeswehr München) for providing the original source code of the digital filter based inflow procedure as the starting point of the source term development.




Contributed by: Jens Nikolas Wood, Guillaume De Nayer, Stephan Schmidt, Michael Breuer — Helmut-Schmidt Universität Hamburg



© copyright ERCOFTAC 2024