Today, digitization of images is usually achieved using CCD cameras in conjunction with PC components suitable for image acquisition (frame grabber boards, FireWire interfaces, etc.). The chief advantage offered by the CCD camera over the competing CMOS chip lies in its high metrological quality: in particular, its highly linear relationship between the incident light intensity and the digital output signal, which is essential for precision measurement.
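As a rough illustration of how such linearity can be verified, the following sketch fits a straight line to a camera's intensity-to-gray-value response and reports the largest deviation from that line. The calibration values are hypothetical and purely illustrative; the sketch is not a prescribed calibration procedure.

```python
import numpy as np

# Hypothetical calibration data: known relative light intensities and the
# gray values a camera reports for them (illustrative values only).
intensity = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
gray_value = np.array([2.0, 25.8, 51.1, 102.3, 153.0, 204.6, 255.0])

# Fit a straight line: gray = gain * intensity + offset (least squares).
gain, offset = np.polyfit(intensity, gray_value, 1)

# Maximum deviation from the fitted line, expressed in gray levels.
residual = gray_value - (gain * intensity + offset)
print(f"gain = {gain:.2f}, offset = {offset:.2f}")
print(f"max nonlinearity = {np.max(np.abs(residual)):.2f} gray levels")
```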
The software used to detect the measured points in the digitized image data also greatly influences the quality of the results obtained with image processing sensors. A basic distinction can be made between two software concepts: the "edge finder" and the processing of image contours.

With the edge finder, the intersection points of predefined lines in the image with the visible contours of the object are determined. This is repeated successively at different locations within a predetermined evaluation range or window. The result is a number of measured points, which are then combined within the window to form a group. A separate, one-dimensional evaluation is performed for each discrete point, so the comprehensive two-dimensional information contained in the image is not taken into account. This causes problems, especially for measurements in reflected light: interfering contours caused by surface structures, pits and contamination can be detected, and therefore compensated, only under certain conditions.

In the processing of image contours (Fig. 10), the image inside an evaluation window is viewed as a whole surface. Contours are extracted from this image using suitable mathematical algorithms (operators). One measured point is thus allocated to each pixel (picture element) of a contour, and the measured points are lined up in a manner resembling a string of pearls. This makes it possible to detect and filter out interfering influences during measurement without changing the shape of the contours. Several different contours can be distinguished within a single capture range, which is important for practical application. The resolution, and thus the measuring uncertainty, is limited directly by the pixel spacing. In a second step, high-quality systems interpolate within the pixel grid (subpixeling), enabling even higher accuracies [5].
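To make the distinction concrete, the following minimal sketch illustrates the edge-finder principle: gray values are sampled along one predefined scan line, and a single edge point is located where the profile crosses a threshold. The function name, the 50 % threshold criterion and the linear interpolation are generic assumptions, not the algorithm of any particular system; the key property it demonstrates is that each scan line is evaluated in isolation, one-dimensionally.

```python
import numpy as np

def edge_point_on_line(profile, threshold=None):
    """Locate one edge along a 1-D gray-value profile (edge-finder principle).

    Each call evaluates a single predefined scan line in isolation, so the
    two-dimensional context of the image is not used.
    """
    profile = np.asarray(profile, dtype=float)
    if threshold is None:
        # Simple 50 % criterion between the darkest and brightest sample.
        threshold = 0.5 * (profile.min() + profile.max())
    # Find the first pair of neighbouring samples that straddles the threshold.
    above = profile >= threshold
    crossings = np.where(above[1:] != above[:-1])[0]
    if crossings.size == 0:
        return None                       # no edge found on this scan line
    i = crossings[0]
    # Linear interpolation between the two samples around the crossing.
    frac = (threshold - profile[i]) / (profile[i + 1] - profile[i])
    return i + frac                       # edge position in pixel units

# Example: a dark-to-bright transition between samples 4 and 5.
scan = [10, 11, 12, 11, 40, 200, 240, 241, 242]
print(edge_point_on_line(scan))
```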
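By contrast, a contour-oriented evaluation treats the whole window at once. The sketch below extracts all contour pixels of a bright region inside an evaluation window (one measured point per contour pixel) and then refines each point within the pixel grid using the local gray-value gradient, in the spirit of subpixeling. The thresholding, the neighbour criterion and the gradient-based refinement are generic stand-ins for the unspecified operators mentioned in the text; contour ordering and outlier filtering are omitted for brevity.

```python
import numpy as np

def contour_points_subpixel(window, threshold):
    """Extract contour pixels from a 2-D gray-value window and refine them.

    Every contour pixel yields one measured point, which can afterwards be
    filtered without changing the shape of the contour.
    """
    img = np.asarray(window, dtype=float)
    fg = img >= threshold                          # binary object mask

    # A contour pixel is a foreground pixel with at least one background
    # 4-neighbour (padding keeps the window border background).
    padded = np.pad(fg, 1, constant_values=False)
    all_fg_neighbours = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                         padded[1:-1, :-2] & padded[1:-1, 2:])
    contour = fg & ~all_fg_neighbours

    points = []
    h, w = img.shape
    for r, c in zip(*np.nonzero(contour)):
        # Central-difference gray-value gradient (clamped at the window border).
        gy = (img[min(r + 1, h - 1), c] - img[max(r - 1, 0), c]) / 2.0
        gx = (img[r, min(c + 1, w - 1)] - img[r, max(c - 1, 0)]) / 2.0
        g2 = gx * gx + gy * gy
        if g2 == 0:
            points.append((float(r), float(c)))
            continue
        # Newton-style step towards the threshold gray level along the gradient.
        t = (threshold - img[r, c]) / g2
        dy, dx = t * gy, t * gx
        # Limit the correction to half a pixel so the point stays in its cell.
        norm = np.hypot(dy, dx)
        if norm > 0.5:
            dy, dx = dy * 0.5 / norm, dx * 0.5 / norm
        points.append((r + dy, c + dx))
    return points

# Example: a bright square on a dark background inside a 12 x 12 window.
window = np.full((12, 12), 20.0)
window[3:9, 3:9] = 220.0
pts = contour_points_subpixel(window, threshold=120.0)
print(len(pts), "contour points, e.g.:", [f"({p[0]:.2f}, {p[1]:.2f})" for p in pts[:3]])
```

Because the evaluation uses the whole window, several separate contours could be handled in the same capture range, and individual outlier points could be removed from the "string of pearls" without distorting the extracted contour.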