Robotics Research Technical Report, Vol. 255: Reconstruction from Zero-Crossings in Scale-Space (Classic Reprint)
Author: Robert Hummel
Publisher: Forgotten Books
Release Date: September 27, 2015
Excerpt from Robotics Research Technical Report, Vol. 255: Reconstruction From Zero-Crossings in Scale-Space
A useful representation of signal data, besides being a complete and stable transformation of the information, should make explicit useful features in the data. In computer vision, the one-parameter family of images obtained from the Laplacian of a Gaussian-filtered version of the image, parameterized by the width of the Gaussian, has proven to be a useful data structure for the extraction of feature data. In particular, the zero-crossings of this so-called scale-space data are associated with edges, and were proposed by Marr and others as the basis of a representation of the image data. The question arises as to whether the representation is complete and stable. We survey some of the results and studies related to these questions, and review several papers that attempt reconstructions based on this or related representations. We then formulate a new method for reconstruction from zero-crossings in scale-space, based on minimizing equation error, and present results showing that the reconstruction is possible, but can be unstable. We further show that the method applies when gradient data along the zero-crossings are included in the representation, and demonstrate that the reconstruction is then stable.
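The representation described above can be illustrated with a minimal sketch (not the report's own method): for a 1-D signal, convolving with the second derivative of a Gaussian gives the 1-D analogue of the Laplacian-of-Gaussian response, and the zero-crossings of that response at several widths sigma form a small scale-space. The kernel radius of four sigma and the example step signal are illustrative choices, not values from the report.

```python
import numpy as np

def log_zero_crossings(signal, sigma):
    """Indices where the Laplacian-of-Gaussian response of a 1-D signal
    changes sign (the zero-crossings associated with edges)."""
    # Sample the second derivative of a Gaussian directly, so a single
    # convolution yields d^2/dx^2 (G_sigma * signal).
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g2 = (x**2 - sigma**2) / sigma**4 * np.exp(-x**2 / (2 * sigma**2))
    g2 -= g2.mean()  # enforce zero response to constant signals
    response = np.convolve(signal, g2, mode="same")
    # A zero-crossing lies between adjacent samples of opposite sign.
    return np.nonzero(np.sign(response[:-1]) * np.sign(response[1:]) < 0)[0]

# A step edge between indices 49 and 50 should produce a zero-crossing
# near the edge at every scale.
step = np.concatenate([np.zeros(50), np.ones(50)])
scale_space = {s: log_zero_crossings(step, s) for s in (1.0, 2.0, 4.0)}
```

Tracking how these crossing locations move as sigma varies is what the scale-space data structure makes explicit; the reconstruction question is whether the signal can be recovered from the crossings alone.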
In the fields of signal analysis and image processing, the first stage of a complete pattern recognition system typically applies some numerical process to the digital data. In image processing, for example, it is generally considered useful to extract edges, corners, and textured regions in the image. In other words, the features and salient symbolic information about signal and image data are associated with groups or regions of the data, and typically depend more explicitly on primitive features such as discontinuities, extrema, and local statistical properties of the spatially-sampled data. It seems reasonable, therefore, to transform the initial data to intermediate representations that make the primitive features more accessible to the algorithms for signal analysis. The collection of all intermediate representations, which might include binary edge images, texture measures, and other feature detectors, comprises a representation that replaces the original signal for analysis purposes. This is, for example, the central idea in vision processing behind Marr's "Primal Sketch" or Tenenbaum and Barrow's "Intrinsic Images."
Since all analysis is done on the intermediate representation, the representation should carry all the information necessary for the interpretation of the data. Of course, the main idea is that the relevant information should be more explicit than in the original samples, and that data redundancies should be eliminated. The representation may constitute a data-compressed version of the original sampling, but might as easily contain more bulk data in the attempt to represent different features. Since the representation should contain all the essential information, it should be possible to reconstruct a version of the original signal from it.
About the Publisher
Forgotten Books publishes hundreds of thousands of rare and classic books. Find more at www.forgottenbooks.com
This book is a reproduction of an important historical work. Forgotten Books uses state-of-the-art technology to digitally reconstruct the work, preserving the original format whilst repairing imperfections present in the aged copy. In rare cases, an imperfection in the original, such as a blemish or missing page, may be replicated in our edition. We do, however, repair the vast majority of imperfections successfully; any imperfections that remain are intentionally left to preserve the state of such historical works.