Clustering functions for CFD mesh generation
Before running a computational fluid dynamics (CFD) simulation, we need to spatially discretise the domain into many small control volumes or ‘cells’. To reduce computational cost, we should use more grid points to resolve regions with high gradients, and fewer grid points where the flow field is approximately uniform. In turbomachinery flows, this implies small cells in boundary layers and near blade leading and trailing edges, while in the free stream cells can be a factor of 10 to 100 larger.
Usually, rapid changes in spacing reduce the accuracy of the CFD solution. We not only have to use cell sizes based on the local length scale to be resolved, but also blend smoothly from small to large cells. A rule of thumb is to limit the expansion ratio between adjacent cell sizes to, say, \(\newcommand{\ER}{\mathit{ER}}\ER=\Delta_{i+1}/\Delta_i<1.2\).
In my experience, robust general algorithms for distributing points according to these constraints are surprisingly difficult to implement. There are lots of edge cases and wrong ways to go about the problem.
In this post I present some clustering functions for generating one-dimensional distributions of points suitable for CFD mesh generation. The full Python code is available on my Sourcehut.
Without loss of generality, we consider distributing \(x\) coordinates over the unit interval \(0 \le x \le 1\). After clustering, the distribution can be scaled to run between arbitrary coordinates.
Single-sided
In the single-sided case, we start with a small spacing \(\newcommand{\Dmin}{\Delta_0}\Dmin\) and stretch the grid until a maximum spacing \(\newcommand{\Dmax}{\Delta_\mathrm{max}}\Dmax\) is reached, beyond which the grid spacing is uniform.
A specified number of grid points \(N\) determines which of several forms the distribution will take. Disregarding the \(\Dmax\) constraint for now, the length of the \(N-1\) cells formed by \(N\) grid points stretching at a constant expansion ratio is given by the sum of a geometric series: $$ L = \sum_{n=0}^{N-2} \Dmin \ER^n = \Dmin \frac{1 - \ER^{N-1}}{1-\ER} \tag{I} $$ There are then three possible cases, illustrated in the short sketch after the list:
- \(L<1\) implies that we do not have enough points to generate a clustered distribution that satisfies the expansion ratio constraint;
- \(L=1\) means the expansion ratio and minimum spacing are just sufficient to cover the unit interval with the specified value of \(N\);
- \(L>1\) implies we have more grid points than needed, and the expansion ratio must be reduced to produce a clustered distribution of unit length.
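To make the case analysis concrete, here is a minimal sketch that evaluates Eqn. (I) and reports which case applies; the function name `geometric_length` and the sample values are illustrative choices, not taken from my library code.

```python
def geometric_length(Dmin, ER, N):
    """Length of the N - 1 cells formed by N grid points growing
    geometrically from Dmin at constant expansion ratio ER, Eqn. (I)."""
    return Dmin * (1.0 - ER ** (N - 1)) / (1.0 - ER)

# Example: Dmin = 1e-3, ER = 1.2; N = 30 falls just short of x = 1,
# while N = 31 overshoots it.
for N in (30, 31):
    L = geometric_length(1e-3, 1.2, N)
    if L < 1.0:
        print(N, "case 1: too few points to reach x = 1")
    elif abs(L - 1.0) < 1e-9:
        print(N, "case 2: exactly sufficient")
    else:
        print(N, "case 3: reduce ER until L = 1")
```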
If the maximum spacing constraint is active, then the geometric series for spacings is truncated after \(\newcommand{\Ns}{N_*}\Ns<N\) terms, where $$ \Ns = \left\lfloor \frac{\log \Dmax/\Dmin}{\log \ER} \right\rfloor + 1 $$ The spacing at the end of the expanding section is then $$ \newcommand{\Ds}{\Delta_*}\Ds = \Dmin \ER^{\Ns-1} $$ If we then let the grid spacing vary linearly in the quasi-uniform section, from \(\Ds\) at cell \(n=\Ns-1\) to \(\Dmax\) at the final cell \(n=N-2\), then the total length is $$ L = \Dmin \frac{1 - \ER^{\Ns-1}}{1-\ER} + \frac{1}{2}(N-\Ns)(\Ds + \Dmax) \tag{II} $$ where the first term sums the geometric spacings below \(\Ds\), and the second is the trapezoidal sum of the linearly varying spacings. As with the unconstrained Eqn. (I), there are three cases (a sketch evaluating Eqn. (II) follows the list):
- If \(L<1\) then \(N\) is too low to meet the clustering criteria;
- If \(L=1\) then \(N\) is just sufficient to yield an exact solution;
- If \(L>1\) then we have more grid points than needed.
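For concreteness, a minimal evaluator of Eqn. (II) might look like the following, reusing `geometric_length` from the earlier sketch; the fallback when the cap is never reached (\(\Ns \ge N\)) is my own handling of that edge case.

```python
import math

def capped_length(Dmin, Dmax, ER, N):
    """Total length of N points clustered from Dmin at expansion ratio ER,
    with spacings capped at Dmax, Eqn. (II)."""
    # Number of terms in the geometric series before the cap is reached
    Ns = math.floor(math.log(Dmax / Dmin) / math.log(ER)) + 1
    if Ns >= N:
        # Cap never reached: pure geometric stretching, Eqn. (I)
        return geometric_length(Dmin, ER, N)
    Ds = Dmin * ER ** (Ns - 1)  # spacing at the end of the expanding section
    # Geometric section plus trapezoidal sum over the quasi-uniform section
    return (Dmin * (1.0 - ER ** (Ns - 1)) / (1.0 - ER)
            + 0.5 * (N - Ns) * (Ds + Dmax))
```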
In the third case, we have a choice of what to do with the extra grid points: either expand more slowly away from \(\Dmin\), or reduce \(\Dmax\) to use up more points in the quasi-uniform section. The latter is preferable because it yields fewer small cells, helping with CFD convergence.
My Python code first determines whether the spacing needs to be capped. Then, after checking for case 1, Eqn. (I) or (II) is solved for the parameters that give unit length, with analytic derivatives specified to speed up convergence. In Eqn. (I), we solve for the \(\ER\) that gives \(L=1\). In Eqn. (II), a coarse binary search finds a reduced value of \(\Dmax\) that gives close to unit length, and then the expansion ratio is fine-tuned to get \(L=1\) to high accuracy. The unconstrained solve is sketched below.
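As an illustration of the unconstrained solve, the sketch below runs a Newton iteration for the \(\ER\) that gives \(L=1\) in Eqn. (I), again reusing `geometric_length`; the starting guess and iteration limits are arbitrary choices here, and the full code wraps this in the capping logic described above.

```python
def dlength_dER(Dmin, ER, N):
    """Analytic derivative of Eqn. (I) with respect to ER."""
    m = N - 1
    return Dmin * ((1.0 - ER ** m) - m * ER ** (m - 1) * (1.0 - ER)) / (1.0 - ER) ** 2

def solve_ER(Dmin, N, ER0=1.1, tol=1e-12):
    """Newton iteration for the expansion ratio that gives unit length."""
    ER = ER0
    for _ in range(50):
        err = geometric_length(Dmin, ER, N) - 1.0
        if abs(err) < tol:
            return ER
        ER -= err / dlength_dER(Dmin, ER, N)
    raise RuntimeError("Newton iteration failed to converge")

# Example: Dmin = 1e-3 with N = 60 points gives ER ≈ 1.077
print(solve_ER(1e-3, 60))
```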
If the number of grid points is not specified, then to reduce computational expense we should use the minimum number that satisfies the maximum expansion ratio and spacing caps. I found it impossible to work out the minimum number of grid points analytically, but straightforward to code a loop that tries increasing values of \(N\) until the constraints are satisfied.
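That brute-force search might look like the following sketch, reusing `capped_length` from above; the bound `N_max` is just a safety limit I have added.

```python
def min_points(Dmin, Dmax, ER_max=1.2, N_max=100000):
    """Smallest number of grid points that covers the unit interval while
    respecting both the expansion ratio and maximum spacing caps."""
    for N in range(2, N_max):
        # With both caps active, the achievable length grows with N;
        # the first N that reaches unit length is the minimum.
        if capped_length(Dmin, Dmax, ER_max, N) >= 1.0:
            return N
    raise ValueError("constraints cannot be satisfied within N_max points")
```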
Double-sided
It is possible in principle to construct a double-sided clustering function, with (in general different) small boundary spacings at both \(x=0\) and \(x=1\), by joining two single-sided distributions calculated with the algorithm in the previous section. This is unwieldy, however, because the joining value of \(x\) and the split of \(N\) between the two sides are not known a priori, and finding them requires an expensive multi-variable iteration.
Vinokur (1983) presented a general double-sided clustering function, a scaled segment of \(\tanh\) or \(\tan\), that gives a favourable discretisation with low truncation error. Full equations are in the paper:
Vinokur, M. (1983). “On one-dimensional stretching functions for finite-difference calculations”. J. Comput. Phys. 50(2), pp. 215–234. doi:10.1016/0021-9991(83)90065-7
His algorithms are very fast, because he approximates the required solutions of transcendental equations using polynomial fits. However, due to the approximations, the end spacings are not exactly as prescribed. Furthermore, no account is taken of our expansion ratio or maximum spacing constraints.
My implementation mitigates the approximation error by using a simple fixed-point iteration that scales the input boundary spacings until the output matches the prescribed spacings. When the number of points is not specified, we can enforce any constraints we like in an iteration loop, increasing \(N\) until the maximum expansion ratio and spacing are on target (although we could possibly use fewer points if we used the single-sided method that specifies these constraints directly).
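In outline, the fixed-point correction looks like the sketch below; `vinokur` stands in for the full double-sided function (not reproduced here), assumed to take two boundary spacings and return \(N\) clustered points on the unit interval.

```python
def match_end_spacings(ds0, ds1, N, n_iter=10):
    """Scale the spacings fed to vinokur() until the realised end
    spacings of the returned distribution match the prescribed ones."""
    s0, s1 = ds0, ds1
    for _ in range(n_iter):
        x = vinokur(s0, s1, N)      # hypothetical: N points on [0, 1]
        s0 *= ds0 / (x[1] - x[0])   # correct input by target/actual ratio
        s1 *= ds1 / (x[-1] - x[-2])
    return x
```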
Takeaway
My key takeaway here is: if something is difficult to work out analytically, don’t hesitate to formulate the problem in terms of easier variables that allow explicit evaluation and then brute force the answer!