Thin plate spline
Interpolation/smoothing is an important aspect of PIV. Linear interpolation is the simplest option, but it cannot smooth noise, and for 2D (or 3D) fields it does not extrapolate data beyond the convex hull of the initial data set. Global polynomial interpolation generally produces excessive smoothing for low-order polynomials, or spurious oscillations for high-order polynomials. The cubic spline is an optimal general method to interpolate a 1D data series $f_i$ measured at a set of abscissae $x_i$. The spline interpolation function $s(x)$ is made of piecewise cubic polynomials matched at the measurement abscissae by the condition $s(x_i)=f_i$, with continuity of the first and second derivatives. It can be shown that this spline function minimizes the global curvature $\int s''(x)^2\,dx$. The name 'spline' refers to a flexible wooden strip used in ship design to draw a smooth curved line pinned at a set of reference points. Its equilibrium shape indeed minimizes the elastic energy, which is proportional to the integral of $s''^2$ if $s$ represents a small transverse displacement from the abscissa line.
To deal with noisy data, smoothing splines do not go strictly through the measurement values, but minimise a linear combination of the distance to these values and the curvature, $\sum_i (s(x_i)-f_i)^2 + \rho \int s''(x)^2\,dx$, where $\rho$ is a smoothing parameter. In the limit of small $\rho$, the weight of the distance constraint becomes very strong, so the optimum approaches the pure interpolation spline, with $s(x_i) \to f_i$. In the opposite limit of large $\rho$, the curvature constraint becomes very strong, so the optimum tends to be linear and approaches the least-square linear fit by minimisation of the distance term contribution.
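The effect of the two limits of $\rho$ can be illustrated with a discrete analogue of the smoothing spline, where the curvature integral is replaced by a second-difference penalty. This is a NumPy sketch on synthetic data, for illustration only (it is not the algorithm used in the text, whose implementation is in Matlab):

```python
# Discrete analogue of the 1D smoothing spline: minimise
# ||s - f||^2 + rho * ||D2 s||^2, where D2 is the second-difference
# operator standing in for the curvature integral.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 1.0, n)
f = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(n)  # noisy data

# Second-difference matrix (n-2) x n approximating s''.
D2 = np.zeros((n - 2, n))
for i in range(n - 2):
    D2[i, i:i + 3] = [1.0, -2.0, 1.0]

def smooth(rho):
    # Normal equations of the quadratic functional.
    return np.linalg.solve(np.eye(n) + rho * D2.T @ D2, f)

s_interp = smooth(1e-12)   # rho -> 0: s approaches the data values
s_flat = smooth(1e6)       # rho -> infinity: s approaches a straight line

print(np.max(np.abs(s_interp - f)))        # close to 0: interpolation limit
print(np.max(np.abs(np.diff(s_flat, 2))))  # close to 0: curvature vanishes
```

The two prints show the two limits discussed above: a vanishing deviation from the data for small $\rho$, and a vanishing discrete curvature (a straight line) for large $\rho$.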
Generalisation to multi-dimensional spaces can be performed as products of spline functions along each coordinate. However this choice is not optimal and depends on the coordinate axes. The proper generalisation is the 'thin plate spline' proposed by Duchon [Duchon]. The name refers to a physical analogy with the bending of a thin sheet of metal. Practical algorithms were first developed by Paihua Montes. The method was first used in the field of cartography and first applied to flow measurements by Nguyen Duc and Sommeria (..). We use here a more recent algorithm proposed by Wahba (1990), relying on the standard matrix inversion functions provided by Matlab.
We consider a set of measurement points $\mathbf{r}_i = (x_i, y_i)$, $i = 1, \ldots, N$, and the corresponding set of measurement values $f_i$ for a quantity $f$ (a velocity component for PIV data).
A pure interpolation function $f(\mathbf{r})$ must exactly reach these values at the measurement points, $f(\mathbf{r}_i) = f_i$, while a smoothing function should approach these values within a range smaller than the estimated error bar.
Considering first the case of a pure interpolation, the function must minimise the 'elastic energy', or mean squared curvature, $E = \iint (\Delta f)^2\,dx\,dy$,
with the constraints $f(\mathbf{r}_i) = f_i$. These constraints can also be written as a domain integral with Dirac functions at the measurement points: $\iint f(\mathbf{r})\,\delta(\mathbf{r} - \mathbf{r}_i)\,dx\,dy = f_i$.
This variational problem is solved by introducing a Lagrange parameter $16\pi S_i$ for each constraint (the factor $16\pi$ is put for further convenience). The variation of the energy can be rewritten by a double integration by parts, leading to $\delta E = 2\iint (\Delta^2 f)\,\delta f\,dx\,dy$. The optimum function is thus a solution of the double Laplacian equation with pointwise source terms at the locations $\mathbf{r}_i$: $\Delta^2 f = 8\pi \sum_i S_i\,\delta(\mathbf{r} - \mathbf{r}_i)$.
The solution for a single source point is such that $\Delta^2 f = 0$ everywhere outside the source. Using the axisymmetric form of the Laplacian, $\Delta f = \frac{1}{r}\,\frac{d}{dr}\!\left(r\,\frac{df}{dr}\right)$, we can show that the general axisymmetric solution is $f(r) = A\,r^2 \ln r + B\,r^2 + C \ln r + D$,
whose Laplacian writes $\Delta f = 4A(\ln r + 1) + 4B$.
The coefficient $C$ must be zero to avoid divergence of $f$ at the origin. The constant $D$ can be included in the global additive constant of the final solution. The constant $B$ can be viewed as a change of unit for $r$ (replacing $r$ by $\lambda r$ with $\ln\lambda = B/A$), and can be set to 0 without loss of generality. For a unit source ($S_i = 1$), the integral of $\Delta^2 f$ over a small neighborhood of the source must be equal to $8\pi$. By the divergence theorem, this is also equal to the flux of $\nabla(\Delta f)$ across a small circle around the source, equal to $2\pi r\,\partial_r(\Delta f) = 8\pi A$. This sets $A = 1$. The elementary function is thus $\phi(r) = r^2 \ln r$.
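This elementary solution can be verified numerically: a five-point finite-difference Laplacian applied to $\phi(r) = r^2 \ln r$ should reproduce $4\ln r + 4$ away from the origin. A small NumPy check (the sample point and step size are arbitrary choices):

```python
import numpy as np

def phi(x, y):
    # Elementary thin plate spline function phi = r^2 ln r,
    # written via ln(r^2)/2 to avoid a square root.
    r2 = x * x + y * y
    return 0.5 * r2 * np.log(r2)

x0, y0 = 0.8, 1.1    # arbitrary point away from the source at the origin
h = 1e-3             # finite-difference step

# Five-point stencil approximation of the Laplacian.
lap = (phi(x0 + h, y0) + phi(x0 - h, y0) +
       phi(x0, y0 + h) + phi(x0, y0 - h) - 4.0 * phi(x0, y0)) / h**2
r = np.hypot(x0, y0)
print(lap, 4 * np.log(r) + 4)   # the two values agree
```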
The general form of $f$ is thus $f(\mathbf{r}) = \sum_{i=1}^{N} S_i\,\phi(|\mathbf{r} - \mathbf{r}_i|) + a_0 + a_1 x + a_2 y$ (the affine part carries no elastic energy and no source term),
with $\phi(r) = r^2 \ln r$. The values at the measurement points are $f(\mathbf{r}_j) = \sum_{i=1}^{N} S_i\,\phi(|\mathbf{r}_j - \mathbf{r}_i|) + a_0 + a_1 x_j + a_2 y_j = f_j$.
In other words, defining the vector $\mathbf{S} = (S_1, \ldots, S_N, a_0, a_1, a_2)^T$ of the $N+3$ unknowns, the vector $\mathbf{F} = (f_1, \ldots, f_N)^T$, and the $N \times (N+3)$ matrix $[T]$ with elements $T_{ji} = \phi(|\mathbf{r}_j - \mathbf{r}_i|)$ for $i \le N$, $T_{j,N+1} = 1$, $T_{j,N+2} = x_j$, $T_{j,N+3} = y_j$,
the sources are obtained by solving the matrix equation $[T]\,\mathbf{S} = \mathbf{F}$. These $N$ equations are completed by the three orthogonality conditions $\sum_i S_i = \sum_i S_i x_i = \sum_i S_i y_i = 0$, which ensure a finite elastic energy and make the system square.
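This construction can be sketched in a few lines of NumPy for illustration (the implementation described in the text is in Matlab; names such as `tps_fit` are ours). The $N$ equations are closed with the standard orthogonality conditions $\sum_i S_i = \sum_i S_i x_i = \sum_i S_i y_i = 0$, giving a square $(N+3) \times (N+3)$ system:

```python
import numpy as np

def tps_phi(r):
    # Elementary radial function phi(r) = r^2 ln r, with phi(0) = 0.
    return np.where(r > 0, r**2 * np.log(np.maximum(r, 1e-300)), 0.0)

def tps_fit(xy, f):
    # Build and solve the square interpolation system.
    n = xy.shape[0]
    r = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = tps_phi(r)                        # radial block
    A[:n, n:] = np.hstack([np.ones((n, 1)), xy])  # affine part: 1, x, y
    A[n:, :n] = A[:n, n:].T                       # orthogonality conditions
    coeff = np.linalg.solve(A, np.concatenate([f, np.zeros(3)]))
    return coeff[:n], coeff[n:]                   # sources S_i, (a0, a1, a2)

def tps_eval(q, xy, S, a):
    # Evaluate the interpolant at query points q.
    r = np.linalg.norm(q[:, None, :] - xy[None, :, :], axis=2)
    return tps_phi(r) @ S + a[0] + q @ a[1:]

rng = np.random.default_rng(1)
xy = rng.uniform(0, 1, (40, 2))                  # synthetic scattered points
f = np.sin(3 * xy[:, 0]) * np.cos(2 * xy[:, 1])  # synthetic measured values
S, a = tps_fit(xy, f)
print(np.max(np.abs(tps_eval(xy, xy, S, a) - f)))  # close to 0 at data points
```

The interpolant reproduces the measurement values at the data points to numerical precision, as required for pure interpolation.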
To account for measurement noise, interpolation should involve smoothing, so that the interpolated data can deviate from the measurements at the measurement sites, $f(\mathbf{r}_i) \ne f_i$. This can be performed by minimising a linear combination of the elastic energy and the quadratic deviation from the measurement values, $\sum_i \left(f(\mathbf{r}_i) - f_i\right)^2 + \rho \iint (\Delta f)^2\,dx\,dy$.
The variational problem then leads to the equation $\Delta^2 f = \frac{1}{\rho}\sum_i \left(f_i - f(\mathbf{r}_i)\right)\delta(\mathbf{r} - \mathbf{r}_i)$.
The solution is still obtained as a sum of radial basis functions $\phi$, whose weights now satisfy $f(\mathbf{r}_j) + 8\pi\rho\,S_j = f_j$, i.e. $[T_\rho]\,\mathbf{S} = \mathbf{F}$, with the matrix $[T_\rho] = [T] + 8\pi\rho\,[\tilde I]$, where $[\tilde I]$ is the $N \times N$ identity matrix extended by three columns of 0 (the factor $8\pi$ can be absorbed in the definition of the smoothing parameter $\rho$).
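In code, the smoothing variant only changes the diagonal of the radial block. The following self-contained NumPy sketch (with an arbitrary value $\rho = 10$, the $8\pi$ factor absorbed in $\rho$, and synthetic noisy samples of a plane) illustrates the two limits:

```python
import numpy as np

def tps_phi(r):
    # Elementary function phi(r) = r^2 ln r, with phi(0) = 0.
    return np.where(r > 0, r**2 * np.log(np.maximum(r, 1e-300)), 0.0)

def tps_fit_smooth(xy, f, rho):
    # Same system as pure interpolation, with rho added on the
    # diagonal of the radial block: [T] + rho [I~].
    n = xy.shape[0]
    r = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = tps_phi(r) + rho * np.eye(n)
    A[:n, n:] = np.hstack([np.ones((n, 1)), xy])
    A[n:, :n] = A[:n, n:].T
    coeff = np.linalg.solve(A, np.concatenate([f, np.zeros(3)]))
    return coeff[:n], coeff[n:]

def tps_eval(q, xy, S, a):
    r = np.linalg.norm(q[:, None, :] - xy[None, :, :], axis=2)
    return tps_phi(r) @ S + a[0] + q @ a[1:]

rng = np.random.default_rng(2)
xy = rng.uniform(0, 1, (40, 2))
f = xy[:, 0] + 0.1 * rng.standard_normal(40)   # noisy samples of a plane

S0, a0 = tps_fit_smooth(xy, f, 0.0)     # rho = 0: pure interpolation
S1, a1 = tps_fit_smooth(xy, f, 10.0)    # large rho: near-planar fit
print(np.max(np.abs(tps_eval(xy, xy, S0, a0) - f)))  # vanishes
print(np.max(np.abs(tps_eval(xy, xy, S1, a1) - f)))  # order of the noise
```

With $\rho = 0$ the data are reproduced exactly; with large $\rho$ the fit deviates from the noisy values and its affine part approaches the least-square plane fit.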
Spatial derivatives of the interpolated quantity can be obtained by direct differentiation of the result. For any function $g(r)$ of the radial distance $r = |\mathbf{r} - \mathbf{r}_i|$, $\partial_x g = g'(r)\,(x - x_i)/r$ and $\partial_y g = g'(r)\,(y - y_i)/r$. With $\phi'(r) = 2r\ln r + r$, this yields $\partial_x \phi = (x - x_i)(2\ln r + 1)$, so that $\partial_x f = \sum_i S_i\,(x - x_i)\,(2\ln r_i + 1) + a_1$, and similarly $\partial_y f = \sum_i S_i\,(y - y_i)\,(2\ln r_i + 1) + a_2$.
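The derivative formula can be checked against a centred finite difference for a single source term (NumPy sketch; the source position, weight and evaluation point are arbitrary):

```python
import numpy as np

xi, yi, Si = 0.2, 0.3, 1.5      # one source: position and weight
a1 = 0.7                        # affine coefficient of x

def f(x, y):
    # Single-source interpolant: S phi(r) plus an affine x term.
    r = np.hypot(x - xi, y - yi)
    return Si * r**2 * np.log(r) + a1 * x

def dfdx(x, y):
    # Analytic derivative: S (x - xi)(2 ln r + 1) + a1.
    r = np.hypot(x - xi, y - yi)
    return Si * (x - xi) * (2 * np.log(r) + 1) + a1

x0, y0, h = 1.0, 0.8, 1e-6
fd = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)   # centred finite difference
print(dfdx(x0, y0), fd)                          # the two values agree
```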
The computing cost and memory required to invert the matrix of the linear system increase as the cube of the number of measurement points, so that large data sets need to be partitioned into sub-domains. The interpolation is made independently in each sub-domain. If sub-domains overlap, the average of the different answers is taken.
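A minimal sketch of this partitioning strategy, assuming two sub-domains overlapping in a band (the split positions, the single query point, and all names are illustrative choices, not the text's actual partitioning scheme):

```python
import numpy as np

def tps_phi(r):
    return np.where(r > 0, r**2 * np.log(np.maximum(r, 1e-300)), 0.0)

def tps_fit(xy, f):
    # Interpolation system: radial block plus affine part.
    n = xy.shape[0]
    r = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = tps_phi(r)
    A[:n, n:] = np.hstack([np.ones((n, 1)), xy])
    A[n:, :n] = A[:n, n:].T
    coeff = np.linalg.solve(A, np.concatenate([f, np.zeros(3)]))
    return coeff[:n], coeff[n:]

def tps_eval(q, xy, S, a):
    r = np.linalg.norm(q[:, None, :] - xy[None, :, :], axis=2)
    return tps_phi(r) @ S + a[0] + q @ a[1:]

rng = np.random.default_rng(3)
xy = rng.uniform(0, 1, (200, 2))
f = np.sin(3 * xy[:, 0]) * np.cos(2 * xy[:, 1])

# Two overlapping sub-domains: x < 0.6 and x > 0.4; each gets its own fit.
masks = [xy[:, 0] < 0.6, xy[:, 0] > 0.4]
fits = [(m, *tps_fit(xy[m], f[m])) for m in masks]

# A query point in the overlap band takes the average of both answers.
q = np.array([[0.5, 0.5]])
preds = [tps_eval(q, xy[m], S, a)[0] for m, S, a in fits]
estimate = np.mean(preds)
print(estimate, np.sin(1.5) * np.cos(1.0))   # close to the true value
```

Each sub-domain system here is $\sim 120$ points instead of 200, and the cubic cost of the solve makes this saving decisive for realistic PIV data sets.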