UFR 3-33 Best Practice Advice
Latest revision as of 13:50, 12 February 2017
Best Practice Advice
Key physics
The case UFR 3-33 concerns a smooth rigid hemisphere mounted on a smooth plate exposed to a turbulent boundary layer.
To characterize the problem, the flow field can be divided into several key flow regions:
- The horseshoe vortex system located just upstream of the body results from the separation of the boundary layer from the ground. This is due to the positive pressure gradient in front of the hemisphere acting as an obstacle to the flow. The size and formation of this particular flow structure depend on the properties of the approaching boundary layer, such as the turbulence intensity, the velocity distribution and the overall thickness of the boundary layer.
- The stagnation area is located in the lower front of the hemisphere, where the stagnation point is found. Its location depends on the size of the horseshoe vortex system.
- Past this stagnation area the flow is accelerated (acceleration zone). Strong vorticity is generated in the vicinity of the surface.
- The adverse pressure gradient leads to a flow detachment from the surface of the hemisphere along a separation line. The position of the separation line is influenced by the properties of the approaching boundary layer. A high level of turbulent intensity upstream of the body moves the separation line downstream.
- After separation the flow forms the recirculation area. Its size and form depend on the position of the separation line and consequently on the properties of the approaching boundary layer.
- On the top of the recirculation area strong shear layer vorticity is observed leading to the production of Kelvin-Helmholtz vortices which travel downstream in the wake.
- The recirculation zone ends at the reattachment of the separated flow on the ground wall. Here, the splatting effect occurs, redistributing momentum from the wall-normal direction to the streamwise and spanwise directions.
To fully describe the problem, the unsteady flow features are also highlighted:
- The horseshoe vortex system trails past the hemisphere and forms stable necklace-vortices that stretch out into the wake region.
- The flow detaches from the surface of the hemisphere along the separation line (see Fig. 25) and the vortices roll up. They interact and sometimes merge with the horseshoe vortices behind the hemisphere. Larger vortical structures appear: Entangled vortical hairpin-structures of different sizes and orientations travel downstream. Note that smaller hairpin-structures can also be observed in the wake, growing from the ground as usual in a turbulent boundary layer.
- The vortex shedding mentioned above is complex and its type and frequency vary with its location: At the top of the hemisphere, arch-type-vortices are observed with a shedding frequency in the range 0.23 ≤ St1=f1 D / U∞ ≤ 0.31. On the sides of the hemisphere another shedding type is present. Von Karman shedding of vortices occurs at a Strouhal number of St2 ≈ 0.16. This vortex shedding on the lower sides of the hemisphere involves a pattern of two distinguishable types that switch in shape and time: The first kind can be described as a quasi-symmetric process where the vortical structures detach in a double-sided symmetric manner. The second kind relates to a quasi-periodic vortex shedding resulting in a single-sided alternating detachment pattern.
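The Strouhal numbers reported above can be converted into dimensional shedding frequencies via f = St · U∞ / D. A minimal sketch; the free-stream velocity and hemisphere diameter below are assumed illustrative values, not parameters taken from this section:

```python
def shedding_frequency(strouhal, u_inf, diameter):
    """Dimensional shedding frequency from the Strouhal number: f = St * U_inf / D."""
    return strouhal * u_inf / diameter

# Assumed illustrative flow parameters (not from the test case description):
u_inf = 5.0   # free-stream velocity in m/s
d = 0.15      # hemisphere diameter in m

f_arch = shedding_frequency(0.31, u_inf, d)    # arch-type vortices at the top
f_karman = shedding_frequency(0.16, u_inf, d)  # von Karman shedding on the sides
```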
Numerical modeling
- Discretization accuracy: In order to perform LES predictions, it is required that the spatial and temporal discretization are both at least of second-order accuracy. It is also important that the numerical schemes applied possess low numerical diffusion (and dispersion) properties in order to resolve most of the scales and not to damp them out. A predictor-corrector scheme (projection method) of second-order accuracy forms the kernel of the fluid solver. In the predictor step, an explicit Runge-Kutta scheme advances the momentum equation in time. This explicit method is chosen because of its accuracy, speed and low memory consumption. The small time steps fit well to the temporal resolution requirements of the LES approach. The discretization in space is done with a second-order central discretization scheme with a flux blending including not more than 5% of a first-order upwind scheme.
- Grid resolution: The second critical issue to perform LES is the grid resolution. The mesh near the wall, in the free shear layers and also in the interior flow domain has to be fine enough. For wall-resolved LES the recommendations given by Piomelli and Chasnov (1996) should be followed or outperformed, e.g., y+ < 2, Δx+ < 50, Δz+ < 50-150. This is obeyed in the present investigation. The grid possesses about 30 million CVs. The first cell center is positioned at a distance of Δz/D = 5 × 10⁻⁵. This was found to be sufficient to resolve the flow accurately at the walls as well as in the free shear layers. Similar to the classical flow around a cylinder, it is important to adequately resolve the region close to the separation point and the evolving shear layer region.
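To choose a first-cell height that meets such a y+ target before running a simulation, the friction velocity can be estimated from a flat-plate skin-friction correlation. A minimal sketch, assuming the correlation Cf = 0.058 Re_x^(-0.2) and illustrative flow parameters (not those of the actual setup):

```python
import math

def first_cell_height(y_plus, u_inf, nu, re_x):
    """Estimate the wall-normal spacing that yields a target y+,
    using the flat-plate skin-friction correlation Cf = 0.058 * Re_x^(-0.2)."""
    cf = 0.058 * re_x ** -0.2
    u_tau = u_inf * math.sqrt(cf / 2.0)  # friction velocity u_tau = U_inf * sqrt(Cf/2)
    return y_plus * nu / u_tau           # dz = y+ * nu / u_tau

# Assumed illustrative values: air at U_inf = 5 m/s, nu = 1.5e-5 m^2/s
dz = first_cell_height(y_plus=1.0, u_inf=5.0, nu=1.5e-5, re_x=5.0e4)
```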
- Grid quality: The third point is the quality of the grid. Smoothness and orthogonality are very important issues for LES computations. In order to capture flow separations and reattachments on the hemisphere reliably, the orthogonality of the curvilinear grid in the vicinity of the walls has to be high. The grid used in the present case shows in the whole computational domain a high level of the skew quality metric (as defined by Knupp, 2003) close to unity (see Fig. 29), which ensures a high grid quality.
Fig. 29: Contour levels of the skew quality metric of the present grid.
- Outlet boundary condition: A mix of convective and non-convective outflow boundary conditions is applied. The convective outlet boundary condition is favored, allowing vortices to leave the integration domain without significant disturbances (Breuer, 2002). Thus, it is applied in all regions where this phenomenon is relevant. The convection velocity is set according to the 1/7 power law without perturbation.
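The convective outlet condition enforces ∂φ/∂t + Uc ∂φ/∂x = 0 at the boundary. A minimal sketch of one explicit, first-order upwind update of the outlet value for a scalar on a uniform grid (an illustration of the principle, not the actual solver implementation):

```python
def convective_outlet(phi_interior, phi_outlet_old, u_conv, dx, dt):
    """One explicit Euler / first-order upwind update of the outlet value of phi
    from the convective condition d(phi)/dt + Uc * d(phi)/dx = 0:
    phi_out_new = phi_out_old - (Uc*dt/dx) * (phi_out_old - phi_interior)."""
    c = u_conv * dt / dx  # outlet CFL number, should stay below 1
    return phi_outlet_old - c * (phi_outlet_old - phi_interior)

# A uniform field passes through unchanged:
out_same = convective_outlet(1.0, 1.0, u_conv=5.0, dx=0.01, dt=1e-4)
# The outlet value relaxes toward the interior value:
out_relax = convective_outlet(2.0, 1.0, u_conv=5.0, dx=0.01, dt=1e-4)
```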
Physical modeling
- Wall-resolved LES: As mentioned above, the flow in the present test case is turbulent and has a Reynolds number of Re = 50,000. Since in LES a large spectrum of scales is resolved by the numerical method, this methodology is well suited. The near-wall regions are resolved in the study reported here in order to obtain a reference LES solution. Later, wall functions can be used and tested against the reference solution. Based on experiences with comparable test cases such as the wall-mounted cube by Martinuzzi and Tropea (1993), RANS predictions are unlikely to produce satisfactory results. Due to the complex flow separation, reattachment and vortex shedding processes appearing around the curved obstacle, the flow past the wall-mounted hemisphere is even more challenging than the cube case.
- Inlet boundary condition: At the inlet a 1/7 power law with δ/D = 0.5 and without any perturbation is applied. Since the grid in this region is typically quite coarse (as in the present study), the inflow turbulence cannot be superimposed directly at the inlet. However, to mimic the targeted approaching boundary layer, perturbations generated by a synthetic turbulence inflow generator are injected as source terms upstream of the hemisphere. This procedure is strongly recommended for the present setup and is also useful for similar configurations such as the flow past an airfoil (Schmidt and Breuer, 2016). These additional perturbations are important to reach a good agreement between experimental data and LES results. Indeed, as demonstrated in Wood et al. (2016), they directly affect the size of the horseshoe vortex, the position of the separation line and consequently the recirculation area.
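The 1/7 power-law inlet profile mentioned above can be sketched as follows; normalizing all lengths by the diameter D is an assumption made here for illustration:

```python
def power_law_profile(z, delta, u_inf, n=7.0):
    """1/n power-law boundary-layer profile: u(z) = U_inf * (z/delta)^(1/n)
    below the boundary-layer edge, u = U_inf above it."""
    if z >= delta:
        return u_inf
    return u_inf * (z / delta) ** (1.0 / n)

# delta/D = 0.5 as stated for the test case; lengths normalized by D:
u_half = power_law_profile(z=0.25, delta=0.5, u_inf=1.0)  # half the BL height
```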
Uncertainties in the numerical investigation
Application uncertainties can arise due to:
- CFD inflow condition: The length scales used to generate the turbulent perturbations for the inlet do not depend on the location. This is not the case in reality and thus represents an approximation introducing uncertainties.
- Subgrid-scale model: Besides the main numerical issues mentioned in the previous sections (i.e., appropriate resolution, high-quality grid, accuracy of the temporal and spatial discretization), the application of the subgrid-scale model required for LES introduces uncertainties. In order to evaluate the influence of the SGS model on the current case, in Wood et al. (2016) five additional simulations were carried out with different SGS models: the classical Smagorinsky model (1963) with three different Smagorinsky constants (Cs = 0.065, 0.1 and 0.18), the dynamic Smagorinsky model (Germano et al., 1991) and the WALE model (Nicoud and Ducros, 1999). The outcome is as follows: The Smagorinsky model with Cs = 0.065 or 0.1 leads to nearly identical results as the dynamic model. The WALE model predicts similar results as the dynamic model, except in the horseshoe vortex region. Applying the classical Smagorinsky model with Cs = 0.18, several of the characteristic regions show significant differences compared to the dynamic model. Therefore, the classical Smagorinsky model with 0.065 ≤ Cs ≤ 0.1 or the dynamic Smagorinsky model can be used equivalently for the current case. As mentioned before, for the main simulation in the present study the Smagorinsky model with Cs = 0.1 is used.
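For reference, the eddy viscosity of the classical Smagorinsky model is νt = (Cs Δ)² |S| with |S| = sqrt(2 S_ij S_ij) and S_ij the resolved strain-rate tensor. A minimal sketch for a single cell (an illustration of the model, not the solver's actual implementation):

```python
import numpy as np

def smagorinsky_nut(grad_u, delta, cs=0.1):
    """Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S|, with
    |S| = sqrt(2 * S_ij * S_ij) and S_ij = 0.5*(du_i/dx_j + du_j/dx_i)."""
    s = 0.5 * (grad_u + grad_u.T)         # resolved strain-rate tensor
    s_mag = np.sqrt(2.0 * np.sum(s * s))  # characteristic strain rate |S|
    return (cs * delta) ** 2 * s_mag

# Pure shear du/dz = 10 1/s on a cell of size Delta = 0.01 m (assumed values):
g = np.array([[0.0, 0.0, 10.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
nut = smagorinsky_nut(g, delta=0.01, cs=0.1)
```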
Uncertainties in the experimental investigation
- Laser-Doppler anemometry: LDA is a calibration-free measurement system. Some issues should be kept in mind while measuring the flow field. A few of the most important points for the present test case (i.e., wind tunnel measurements) are stated here:
- Low seeding in wind tunnel / data rate: The seeding density depends on the overall size of the wind tunnel. Larger test sections suffer from low seeding densities especially in air applications like the present test case which additionally has an open test section. The data rate of droplet measurements also decreases in the near-wall region. To get proper results, the duration interval of each measurement point has to be long enough to collect sufficient data. In fully automated applications this must be taken into consideration to adapt the measurement duration in critical regions such as walls.
- Evaluation of velocity spectra: The low seeding in a wind tunnel also has an impact on the correct measurement of velocity spectra, such as the commonly used power spectral density (PSD) analysis. Since the droplets pass the measurement volume of the LDA system randomly, there is no equidistant time signature of the measured velocity components. In this case, so-called sample-and-hold algorithms are used to generate an equidistant time pattern artificially, filling the gaps between single measurements by holding the last measured value until the next actual measurement arrives. In this way the advantages of FFT algorithms, which usually require equidistant time spacing, can be exploited and easily integrated into the evaluation of velocity spectra. In flows with a very high seeding density the sample-and-hold algorithm has only a minor impact on the measurement results, since the time spacing between single measurements is very small and the algorithm has to fill in only very few values to achieve an equidistant measurement grid. This is completely different for flows with a very low seeding density, where there are fewer measurements and the time spacing between single measurements can be rather large. Here the sample-and-hold algorithm fills in more artificial data and the velocity measurement can be biased in a non-physical direction, as described by Benedict et al. (2000) and Broersen et al. (2000). Additionally, Adrian and Yao (1986) have shown that the sample-and-hold algorithm acts as a first-order low-pass filter with a cut-off frequency of about fco = ṅ/(2π), where ṅ is the average data rate per second. The benefits of sample-and-hold algorithms are therefore limited by the maximum frequency a study has to resolve, which mainly depends on the data rate that can be achieved in a specific setup.
In some cases (as the present one) it is useful to fall back to other measurement devices such as the constant temperature anemometer.
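The sample-and-hold resampling and the associated low-pass cut-off described above can be sketched as follows (a simplified illustration, not the evaluation code used in the study):

```python
import numpy as np

def sample_and_hold(t_irregular, u_irregular, fs):
    """Resample a randomly-timed LDA velocity series onto an equidistant time
    grid of rate fs by holding the last measured value (zero-order hold)."""
    t_grid = np.arange(0.0, t_irregular[-1], 1.0 / fs)
    # index of the last measurement at or before each grid time
    idx = np.searchsorted(t_irregular, t_grid, side="right") - 1
    idx = np.clip(idx, 0, len(u_irregular) - 1)
    return t_grid, u_irregular[idx]

def cutoff_frequency(mean_data_rate):
    """First-order low-pass cut-off of the hold process (Adrian and Yao, 1986):
    f_co = n_dot / (2*pi)."""
    return mean_data_rate / (2.0 * np.pi)

# Four randomly-timed samples resampled onto a 10 Hz grid:
t = np.array([0.0, 0.1, 0.35, 0.5])
u = np.array([1.0, 2.0, 3.0, 4.0])
t_grid, u_grid = sample_and_hold(t, u, fs=10.0)
```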
- Estimation of Reynolds shear stresses: The Reynolds shear stresses are useful to describe the flow physics, but often complicated to measure accurately. In LDA measurements the velocity components are recorded independently, which means that there is no direct correlation between the components within a series of measurements, owing to the independent time signatures with which the droplets pass the measurement volume. However, an approximation of the cross-correlations is possible by utilizing coincidence algorithms. These algorithms match the velocity components by using window functions that set a time interval within which the velocity components are considered to be correlated. For sufficient correlations it is necessary to maintain comparable data rates for each velocity component.
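A simple coincidence filter as described above can be sketched as follows: for each sample of one channel, the nearest-in-time sample of the other channel is accepted if it lies within the coincidence window (an illustrative simplification; actual LDA software may use more elaborate window functions):

```python
import numpy as np

def coincident_shear_stress(t_u, u, t_w, w, window):
    """Approximate the Reynolds shear stress <u'w'> from two independently
    sampled LDA channels by pairing each u-sample with the nearest-in-time
    w-sample, keeping only pairs within the coincidence window."""
    idx = np.clip(np.searchsorted(t_w, t_u), 1, len(t_w) - 1)
    # choose the nearer of the two neighbouring w-samples
    nearest = np.where(np.abs(t_w[idx] - t_u) < np.abs(t_w[idx - 1] - t_u),
                       idx, idx - 1)
    mask = np.abs(t_w[nearest] - t_u) <= window
    u_m, w_m = u[mask], w[nearest][mask]
    return np.mean((u_m - u_m.mean()) * (w_m - w_m.mean()))

# Perfectly coincident, perfectly correlated channels recover the variance:
t = np.arange(10.0)
sig = np.array([1.0, -1.0] * 5)
uw = coincident_shear_stress(t, sig, t, sig, window=0.1)
```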
- Reflections at surfaces should be minimized by using black varnish or other light-absorbing/diffusing paint. In some setups reflections cannot be avoided. The corresponding data have to be viewed critically, since the signal-to-noise ratio is often insufficient; in this case the measurements cannot be included in the evaluation of the flow field.
- Seeding medium: DEHS droplets are used in the present case. They appear to be a good choice for air flow applications since they are very stable.
- Traverse system: It is essential to estimate the systematic errors introduced by the traverse system. In the present study the operating distances of the LDA probe within the setup are taken into account to estimate this error. As an appropriate evaluation criterion, certain fixed paths were chosen, such as the symmetry line of the flat plate, which coincides with the symmetry plane of the hemisphere (see Fig. 30). The LDA measurement volume is first set to the beginning of the measurement plane (here x/D = -1.5, y/D = 0, z/D = 0). Then, the LDA probe is moved to the maximum traveling distance. For the symmetry line in the x-direction, this refers to the coordinates x/D = 2, y/D = 0, z/D = 0. For the vertical path in the z-direction, this refers to x/D = -1.5, y/D = 0, z/D = 1. At the maximum traveling point of each direction the normal distance (out-of-plane distance) between the reference line and the position of the LDA probe was measured. Following this procedure, an optimal alignment of the traveling path of the measurement volume with the reference lines was found. The maximum relative error along the symmetry plane of the hemisphere is determined to be about 0.4%. In the vertical direction, the maximum relative error is estimated to be about 0.5%.
Fig. 30: Sketch of the systematic error of the LDA traverse system.
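The relative traverse error quoted above is the measured out-of-plane deviation divided by the length of the traveled reference path. A small worked example; only the 0.4% figure and the path from x/D = -1.5 to x/D = 2 come from the text, while the hemisphere diameter is an assumed illustrative value:

```python
def traverse_relative_error(out_of_plane_distance, travel_distance):
    """Relative positioning error: out-of-plane deviation at the end of a
    reference path divided by the length of that path."""
    return out_of_plane_distance / travel_distance

# Streamwise reference path from x/D = -1.5 to x/D = 2, i.e. 3.5 diameters:
d = 0.15                   # hemisphere diameter in m (assumed value)
path = 3.5 * d             # traveled distance in m
deviation = 0.004 * path   # absolute deviation implied by the reported 0.4%
err = traverse_relative_error(deviation, path)
```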
- Constant temperature anemometry: CTA is a widely established measurement system but is subject to certain difficulties which shall be mentioned here:
- Temperature dependency of the measurement: While conducting time-consuming measurement series, the wind tunnel and its surroundings tend to heat up gradually due to the energy emitted from the blower. The temperature influence also depends on the size of the test facility and on its passive/active cooling system. CTA measuring equipment is very sensitive to temperature changes. It is therefore recommended to measure the room temperature in parallel with the actual velocity measurements in order to compensate the data sets for the temperature drift.
- Calibration process: The calibration process of the CTA system sets the output voltage of the CTA probe in correlation to the calibration velocity. This correlation is highly non-linear. The calibration process should contain a sufficient amount of data points to evaluate the best-fit curve between voltage and velocity. The influence of the temperature has to be taken into consideration to avoid systematic errors during the calibration process. Most CTA systems come with a guideline on how to calibrate a specific probe. Nevertheless, it is necessary to check further influences on the final measurement results, such as variations in cable length.
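A common choice for the non-linear voltage-velocity relation is King's law, E² = A + B·Uⁿ, which becomes linear in the transformed variables (Uⁿ, E²). A minimal sketch of fitting and inverting it on synthetic calibration data (King's law is one standard option; the fit procedure actually used in the study is not specified here):

```python
import numpy as np

def fit_kings_law(e_volts, u_calib, n=0.45):
    """Least-squares fit of King's law E^2 = A + B*U^n, linearized in
    (U^n, E^2). Returns the coefficients (A, B)."""
    slope, intercept = np.polyfit(u_calib ** n, e_volts ** 2, 1)
    return intercept, slope

def velocity_from_voltage(e, a, b, n=0.45):
    """Invert King's law: U = ((E^2 - A) / B)^(1/n)."""
    return ((e ** 2 - a) / b) ** (1.0 / n)

# Synthetic calibration data generated from A = 1.2, B = 0.8, n = 0.45:
u_cal = np.linspace(1.0, 10.0, 10)
e_cal = np.sqrt(1.2 + 0.8 * u_cal ** 0.45)
a, b = fit_kings_law(e_cal, u_cal)
```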
- Invasive measurement: CTA measurements are invasive as the probes must be inserted into the flow field. It is necessary to minimize this influence by designing an appropriate probe support. In high-energy flows this can be a problem, since overly fragile supports tend to oscillate under the wind loads, causing errors in the velocity measurement.
- A major disadvantage of standard CTA-probes is their inability to measure the flow direction. The data acquisition is restricted to the velocity magnitude of the flow field.
Recommendations for future work
- The numerical computations were performed based on a wall-resolving LES. This implies very fine grids leading to a high computational effort. Wall functions should be tested to reduce the grid resolution and thus to decrease the required effort.
- As shown by Schmidt and Breuer (2016), the original formulation of the source term using the STIG data (presented in Section Synthetic turbulence inflow generator (STIG)) leads to an undesired change of the target autocorrelations and thus to an integral time scale within the numerical simulation which deviates from the integral time scale defined at the beginning of the STIG generation process. Therefore, an alternative expression of the source term, based on the ratio between (φ′)^syn and the integral time scale of the inflow T,

    S^syn_φ = ∫_V ( (φ′)^syn / T ) dV

has been developed and will be employed in future work.
- The flow field depends strongly on the turbulence intensity introduced upstream of the hemisphere (see Wood et al., 2016). The LES predictions are done with the synthetic turbulence inflow generator by Klein et al. (2003). The entire synthetic inflow profile is defined by one integral time scale and two integral length scales. The integral scales observed within the boundary layer depend on the distance to the wall. Therefore, a segmentation of the synthetically generated flow field into several regions with different integral scales is of interest.
- Due to a compromise between the computational effort and the length of the time signals, the previously generated STIG data consist of about 180,000 time steps. The numerical simulations of the flow carried out subsequently require more time steps to deliver statistically converged distributions, leading to repeated re-use of the limited STIG data. In order to avoid this recycling of the STIG data, a direct coupling between the STIG and the numerical simulation within each time step is desirable to generate continuous time signals of the STIG data with a theoretically infinite number of time steps.
- The case UFR 3-33 with its complex flow phenomena including separation, reattachment and different types of vortex shedding is an appropriate configuration to test and validate new or existing turbulence models or wall functions.
Acknowledgments
The work reported here was in parts financially supported by the Deutsche Forschungsgemeinschaft under the contract numbers BR 1847/12-1 and BR 1847/12-2 (Breuer, HSU Hamburg). The large computations were carried out on the German Federal Top-Level Supercomputer SuperMUC at LRZ Munich under the contract number pr84na. Furthermore, the authors want to thank Markus Klein (Universität der Bundeswehr München) for providing the original source code of the digital filter based inflow procedure as the starting point of the source term development.
Contributed by: Jens Nikolas Wood, Guillaume De Nayer, Stephan Schmidt, Michael Breuer — Helmut-Schmidt Universität Hamburg
© copyright ERCOFTAC 2024