Fast local cross-correlations of images
Dave Hale, Colorado School of Mines

Summary

Consider two multi-dimensional digital signals, each with $N$ samples. For some number of lags $L \ll N$, the cost of computing a single cross-correlation function of these two signals is proportional to $N \times L$. By exploiting several properties of Gaussian windows, we can compute $N$ local cross-correlation functions, again with computational cost proportional to $N \times L$. Here, local means the cross-correlation of signals after applying a Gaussian window centered on a single sample. Computational cost is independent of the size of the window.

Introduction

Cross-correlations are ubiquitous in digital signal processing. We use cross-correlations to estimate relative shifts between two signals, and to compute filters that shape one signal to match another. We use auto-correlations (a special case of cross-correlations) to compute prediction-error filters, and to estimate the orientations of features in multi-dimensional images. In such applications, we must assume that the quantities estimated do not vary significantly for the duration of the signals cross-correlated. But those quantities often do vary, and the variations can be important.

For example, consider the two seismic images displayed in Figures 1 and 2. To simulate the compaction of a hypothetical reservoir located near the center of these images, we warped the image of Figure 1 to obtain the image of Figure 2. The overlying grids in the two figures highlight this warping. If this were a real time-lapse seismic experiment, then the spatially varying displacements in these two figures would be important, as they could be related to strains and stresses near the reservoir. These relationships are complex (e.g., Hatchell and Bourne, 2005), but their analysis begins by estimating displacements like those shown here.

Figure 1: A 2-D seismic image with $315 \times 315$ samples.

Figure 2: The seismic image of Figure 1 after warping to simulate compaction of a hypothetical reservoir located at the center of the image.

Because the displacements in the two images of Figures 1 and 2 vary spatially, they cannot be estimated using a single global cross-correlation of the two images. To estimate these spatially varying displacements, we computed many local cross-correlations, one for every sample in these images. Figure 3 shows a small subset of those local cross-correlations.

How might we compute these local cross-correlations? Conventionally, we might first truncate or taper our signals to zero outside some specified window, and then cross-correlate those windowed signals. We might then compute a suite of local cross-correlations by repeating these window-and-correlate steps for multiple overlapping windows.
Figure 3: A subset of local 2-D cross-correlations of the images in Figures 1 and 2. Shown here are $225 = 15 \times 15$ lags for only $1/225$ of the number of cross-correlations computed. The local correlation window is a two-dimensional Gaussian function with radius $\sigma = 8$ samples.

Again, in conventional practice, we typically choose the number of windows and their shape to reduce computational costs. For example, we might avoid a Gaussian window shape (although that shape is optimal with respect to the uncertainty relation) because the Gaussian function is nowhere zero. We might also choose a small number of windows, because the cost of the repeated window-and-correlate process is proportional to that number.

In this paper, I describe an efficient method for computing a distinct local cross-correlation function for every sample in a multi-dimensional signal. The number of windows equals the number of signal samples, and their shape is Gaussian.

Cross-correlations

The cross-correlation of two sequences $f[j]$ and $g[j]$ for integer lags $l$ is defined by

$$c[l] = (f \star g)[l] \equiv \sum_j f[j]\, g[j+l]. \quad (1)$$

Figure 4 shows an example of this cross-correlation. Note that the number of lags $L = 21$ for which we have computed the cross-correlation is significantly less than the number of samples $N = 101$ in the two sequences $f$ and $g$. This scenario is common in digital signal processing.

Figure 4: Cross-correlation of two sequences $f$ and $g$.

When $L \approx N$, we might more efficiently compute the cross-correlation via fast Fourier transforms, after padding the sequences $f$ and $g$ sufficiently with zeros to avoid aliasing. For $L \ll N$, however, the most efficient way to compute a cross-correlation is to simply evaluate the sum as written in equation 1. In this case, the computational complexity of cross-correlation is clearly $O(N \times L)$. This means, for example, that the cost of cross-correlation doubles if we double either the number of samples $N$ or the number of lags $L$.
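As a concrete illustration (a sketch, not the paper's code), equation 1 can be evaluated directly with NumPy; the function name and symmetric lag convention are illustrative choices. The loop over lags makes the $O(N \times L)$ cost explicit.

    import numpy as np

    def cross_correlate(f, g, lmax):
        """Direct evaluation of equation 1: c[l] = sum_j f[j] g[j+l],
        for lags l = -lmax, ..., lmax."""
        n = len(f)
        lags = np.arange(-lmax, lmax + 1)
        c = np.zeros(len(lags))
        for i, l in enumerate(lags):
            # Sum only over indices j for which both f[j] and g[j+l] exist.
            jlo, jhi = max(0, -l), min(n, n - l)
            c[i] = np.dot(f[jlo:jhi], g[jlo + l:jhi + l])
        return lags, c

    # Example with N = 101 samples and L = 21 lags, as in Figure 4.
    rng = np.random.default_rng(0)
    f = rng.standard_normal(101)
    g = np.roll(f, 3)          # g is (circularly) f delayed by 3 samples
    lags, c = cross_correlate(f, g, lmax=10)
    print(lags[np.argmax(c)])  # the peak lag recovers the 3-sample shift

The same loop-over-lags structure carries over to two dimensions, as equation 2 below shows.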

The generalization to multi-dimensional signals is straightforward. For two-dimensional signals, equation 1 becomes

$$c[l_1, l_2] = \sum_{j_1} \sum_{j_2} f[j_1, j_2]\, g[j_1 + l_1, j_2 + l_2]. \quad (2)$$

In two dimensions, lags have two components, $l_1$ and $l_2$. The computational complexity is again $O(N \times L)$, where $N$ is the total number of samples in the 2-D signals, and $L$ is the number of 2-D lags for which we compute each cross-correlation.

Local cross-correlations

Consider now the local cross-correlation process illustrated in Figure 5. In this example, we have multiplied the sequences $f$ and $g$ by a Gaussian window function defined by

$$w(x) \equiv e^{-x^2/2\sigma^2}, \quad (3)$$

with Fourier transform

$$W(k) \equiv \int_{-\infty}^{\infty} dx\, w(x)\, e^{-ikx} = \sigma \sqrt{2\pi}\, e^{-\sigma^2 k^2/2}.$$

Figure 5 shows the computation of a single local cross-correlation. If we slide the Gaussian window to the right or left, we obtain a different local cross-correlation. Indeed, we could compute $L$ lags of a local cross-correlation for each of the $N$ samples in the sequences $f$ and $g$, by centering a Gaussian window on each of those samples.
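Continuing the sketch above, the single local cross-correlation that Figure 5 illustrates is just equation 1 applied to Gaussian-windowed copies of $f$ and $g$ (again with illustrative names, reusing cross_correlate from before):

    def local_cross_correlate(f, g, k, sigma, lmax):
        """One local cross-correlation: window f and g with the Gaussian of
        equation 3 centered on sample k, then cross-correlate the results."""
        j = np.arange(len(f))
        w = np.exp(-(j - k)**2 / (2.0 * sigma**2))  # equation 3
        return cross_correlate(w * f, w * g, lmax)

    # L = 21 lags of one local cross-correlation, window centered on sample 50.
    lags, ck = local_cross_correlate(f, g, k=50, sigma=8.0, lmax=10)

Repeating this computation with a window centered on every sample is exactly Algorithm 1 below; the point of Algorithm 2 is to obtain the same numbers far more cheaply.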
Figure 5: Cross-correlation of two Gaussian-windowed sequences $f$ and $g$.

We choose a Gaussian window for several reasons:

(1) Optimal resolution. Only the Gaussian window minimizes the resolution product $\Delta x\, \Delta k$, where $\Delta x$ and $\Delta k$ denote consistently defined widths of $w(x)$ and $W(k)$, respectively (e.g., Bracewell, 1978).

(2) Isotropic and separable in N dimensions. Convolution with an N-dimensional Gaussian window (filter) can be performed by applying a sequence of one-dimensional Gaussian filters. Only the Gaussian window is both isotropic and separable (Kannappan and Sahoo, 1992).

(3) Fast recursive-filter implementation. The cost of applying a filter with an approximately Gaussian impulse response is independent of the filter length (Deriche, 1992; van Vliet et al., 1998).

(4) The product of any two shifted Gaussians with equal widths is a Gaussian.

The last three properties (2), (3), and (4) lead to an efficient method for computing local cross-correlations.

Two methods

Although the Gaussian function is nowhere zero, we may in practice truncate to zero each Gaussian window for samples far from its central peak. Let $M$ denote the number of non-zero samples in this truncated Gaussian window. Then Algorithm 1 is a straightforward method for computing local cross-correlations.

Algorithm 1 Simple method
 1: for k = 1, ..., N do
 2:   for j = k - M/2, ..., k + M/2 do
 3:     fk[j] = w[j - k] f[j]
 4:     gk[j] = w[j - k] g[j]
 5:   end for
 6:   for l = -L/2, ..., L/2 do
 7:     c[k, l] = 0
 8:     for j = k - M/2, ..., k + M/2 do
 9:       c[k, l] = c[k, l] + fk[j] gk[j + l]
10:     end for
11:   end for
12: end for

The computational complexity of Algorithm 1 is clearly $O(N \times L \times M)$, which is costly, especially for large windows of multi-dimensional signals. Fortunately, an alternative method with computational complexity of only $O(N \times L)$ is feasible.

To obtain the more efficient algorithm, first define the product $h(x) \equiv w(x - l/2)\, w(x + l/2)$ of two shifted Gaussians, and let $M/\sqrt{2}$ denote the effective non-zero width of this product. [Property (4) above states that this product is Gaussian.]
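Completing the square confirms this and shows where the factor $\sqrt{2}$ comes from:

$$h(x) = e^{-(x - l/2)^2/2\sigma^2}\, e^{-(x + l/2)^2/2\sigma^2} = e^{-l^2/4\sigma^2}\, e^{-x^2/\sigma^2},$$

a Gaussian in $x$ with width $\sigma/\sqrt{2}$, scaled by a factor that depends only on the lag $l$.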

Then rearrange the loops in Algorithm 1 to obtain the equivalent Algorithm 2.

Algorithm 2 Equivalent method
 1: for l = -L/2, ..., L/2 do
 2:   for j = 1, ..., N do
 3:     p[j] = f[j] g[j + l]
 4:   end for
 5:   for j = 1, ..., N do          ⊲ begin shift
 6:     q[j] = p[j - l/2]           ⊲ interpolate for odd l
 7:   end for                       ⊲ end shift
 8:   for k = 1, ..., N do          ⊲ begin Gaussian filter
 9:     c[k, l] = 0
10:     for j = -M/2, ..., M/2 do
11:       c[k, l] = c[k, l] + h[j] q[k + j]
12:     end for
13:   end for                       ⊲ end Gaussian filter
14: end for

As written, Algorithm 2 is more complex and no more efficient than Algorithm 1. However, lines 8 through 13 of Algorithm 2 represent convolution with a Gaussian filter, and property (3) above states that recursive implementations of this filter have computational costs that are independent of the width of the Gaussian window.

Specifically, the computational cost of the 1-D recursive Gaussian filter that we use is approximately $16N$ (not $MN$) multiplications and additions. And for multi-dimensional signals, the separable property (2) of Gaussian windows implies that this cost grows only linearly with the number of dimensions. For example, the cost of applying a 2-D Gaussian filter to $N$ samples of a 2-D image is approximately $32N$ multiplications and additions.

Therefore, the complexity of computing $L$ lags of local correlations is $O(N \times L)$.
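The structure of Algorithm 2 can be sketched in Python as follows. Two hedges: scipy's gaussian_filter1d is a truncated FIR Gaussian, so unlike the recursive filters of Deriche (1992) and van Vliet et al. (1998) its cost still grows with $\sigma$; and a cubic-spline shift stands in for the paper's 8-sample sinc interpolator.

    from scipy.ndimage import gaussian_filter1d, shift as fshift

    def local_cross_correlations(f, g, sigma, lmax):
        """Local cross-correlations c[k, l] for every sample k, following
        Algorithm 2: for each lag l, form the lag product p[j] = f[j] g[j+l],
        shift it by l/2, then filter with the Gaussian h of width sigma/sqrt(2)."""
        n = len(f)
        lags = np.arange(-lmax, lmax + 1)
        c = np.zeros((n, len(lags)))
        for i, l in enumerate(lags):
            p = np.zeros(n)
            jlo, jhi = max(0, -l), min(n, n - l)
            p[jlo:jhi] = f[jlo:jhi] * g[jlo + l:jhi + l]   # lag product
            q = fshift(p, l / 2.0, order=3)                # q[j] = p[j - l/2]
            s = gaussian_filter1d(q, sigma / np.sqrt(2.0), mode="constant")
            # Undo the filter's unit-sum normalization, and apply the
            # lag-dependent scale exp(-l^2/4 sigma^2) of the product h.
            c[:, i] = s * sigma * np.sqrt(np.pi) * np.exp(-l**2 / (4.0 * sigma**2))
        return lags, c

    # All N local cross-correlations at once; compare one of them against the
    # direct window-and-correlate result ck computed earlier.
    lags, c = local_cross_correlations(f, g, sigma=8.0, lmax=10)
    print(np.max(np.abs(c[50] - ck)))  # small: interpolation and truncation error

With a recursive Gaussian filter in place of gaussian_filter1d, the work per lag is a fixed number of operations per sample, independent of $\sigma$, which is what makes the total cost $O(N \times L)$.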

In the 2-D example of Figure 3, we computed almost 100,000 local 2-D cross-correlations for a cost of only about 32 times that of computing a single global 2-D cross-correlation.
For odd lags $l$, the shift in lines 5 through 7 of Algorithm 2 requires interpolation. We perform this interpolation using an 8-sample approximation to the sinc function. Like Gaussian filtering, this shift is separable when applied to multi-dimensional signals.
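For the half-sample case, an 8-tap windowed-sinc interpolator might look as follows; the Hann window and edge clamping are illustrative choices, not taken from the paper.

    def half_sample_shift(p):
        """Interpolate p midway between samples, q[j] ~= p[j + 1/2],
        with an 8-tap windowed-sinc filter."""
        u = np.arange(-3, 5)                         # 8 neighboring samples
        h = np.sinc(u - 0.5) * np.hanning(10)[1:-1]  # Hann-windowed sinc taps
        h /= h.sum()                                 # preserve constant signals
        n, q = len(p), np.zeros_like(p)
        for j in range(n):
            idx = np.clip(j + u, 0, n - 1)           # clamp indices at the edges
            q[j] = np.dot(h, p[idx])
        return q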

Application

The local cross-correlations shown in Figure 3 show significant variations in the bandwidth and orientation of features in the images of Figures 1 and 2. We also observe variation in the locations of the peaks of these local cross-correlations, and those peak locations yield estimates of relative displacements.

To estimate displacements, we first searched each local cross-correlation for the 2-D lag with the largest cross-correlation value. This 2-D lag has two (vertical and horizontal) integer components, and is a crude, quantized estimate of displacement. To refine this estimate, we then computed the location of the peak of a quadratic function least-squares fit to the correlation values for the nine lags nearest to and including the lag with the largest correlation value.
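That nine-lag least-squares fit can be sketched as follows; the quadratic parameterization and names are illustrative.

    def refine_peak(c3x3):
        """Sub-sample peak location for a 3x3 patch of correlation values
        centered on the integer-lag peak, via a least-squares quadratic fit."""
        l1, l2 = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0], indexing="ij")
        l1, l2 = l1.ravel(), l2.ravel()
        # Fit c ~ a + b1*l1 + b2*l2 + d11*l1^2 + d12*l1*l2 + d22*l2^2.
        A = np.column_stack([np.ones(9), l1, l2, l1**2, l1*l2, l2**2])
        a, b1, b2, d11, d12, d22 = np.linalg.lstsq(A, c3x3.ravel(), rcond=None)[0]
        # The fitted quadratic peaks where its gradient vanishes.
        H = np.array([[2*d11, d12], [d12, 2*d22]])
        return np.linalg.solve(H, -np.array([b1, b2]))  # fractional lag offsets

The refined displacement is then the integer peak lag plus this fractional offset, valid when the fitted quadratic is concave (H negative definite) at the peak.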

Figures 6 and 7 show the vertical and horizontal components of our refined estimated displacements. The vertical displacements in Figure 6 are downward (positive) above our hypothetical reservoir and upward (negative) below it, consistent with the warping shown in Figure 2. Figure 7 indicates that lateral displacement left of the reservoir is generally rightward (positive), and lateral displacement right of the reservoir is generally leftward (negative).

Comparing Figures 6 and 7, we observe that estimates of lateral displacements are less consistent than those of vertical displacements. Errors in Figure 7 are largely due to the more or less horizontal orientation of features in our seismic images. As we might expect, estimates of displacements perpendicular to linear or planar features will be more reliable than those of displacements parallel to such features.

This synthetic example illustrates just one application (suggested by J. Rickett, personal communication, 2006) of local cross-correlations. Other applications include estimating and compensating for seismic attenuation and near-surface velocity variations, multi-dimensional prediction filtering, and seismic interferometry.

Figure 6: Vertical displacements (in samples) estimated from local cross-correlations.

Figure 7: Horizontal displacements (in samples) estimated from local cross-correlations.

Acknowledgement

Thanks to James Rickett for suggesting the application of local cross-correlations to the problem of time-lapse imaging of compacting reservoirs, and to Ken Larner for reviewing this abstract.

References

Bracewell, R., 1978, The Fourier transform and its applications (2nd edition): McGraw-Hill.

Deriche, R., 1992, Recursively implementing the Gaussian and its derivatives: Proceedings of the 2nd International Conference on Image Processing, Singapore, 263–267.

Hatchell, P., and Bourne, S., 2005, Rocks under strain: strain-induced time-lapse time shifts are observed for depleting reservoirs: The Leading Edge, 24, 1222–1225.

Kannappan, P., and Sahoo, P. K., 1992, Rotation invariant separable functions are Gaussian: SIAM Journal on Mathematical Analysis, 23, 1342–1351.

van Vliet, L., Young, I., and Verbeek, P., 1998, Recursive Gaussian derivative filters: Proceedings of the International Conference on Pattern Recognition, Brisbane, 509–514.