As imaging technology advances, the types of cameras and their interfaces continually evolve to meet the needs of a host of applications. For machine vision applications in the semiconductor, electronics, biotechnology, assembly, and manufacturing industries, where inspection and analysis are key, using the best camera system for the task at hand is crucial to achieving the best image quality. From analog and digital cameras, to progressive scan and interlaced scan formats, to FireWire and GigE interfaces, understanding parameters such as camera types, digital interfaces, power, and software provides a great opportunity to move from imaging novice to imaging expert.
CAMERA TYPES AND THEIR ADVANTAGES
Analog vs. Digital Cameras
On the most general level, cameras can be divided into two types: analog and digital. Analog cameras transmit a continuously variable electronic signal in real time. The frequency and amplitude of this signal are then interpreted by an analog output device as video information. Both the quality of the analog video signal and the way in which it is interpreted affect the resulting video images. This method of data transmission has both pros and cons. Typically, analog cameras are less expensive and less complicated than their digital counterparts, making them cost-effective and simple solutions for common video applications. However, analog cameras have upper limits on both resolution (number of TV lines) and frame rate. For example, one of the most common video signal formats in the United States, NTSC, is limited to about 800 TV lines (typically 525) and 30 frames per second. The PAL standard uses 625 TV lines and a frame rate of 25 frames per second. Analog cameras are also very susceptible to electronic noise, which depends on commonly overlooked factors such as cable length and connector type.
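The analog standards above differ only in their line counts and frame rates. A minimal sketch of the timing arithmetic, using the figures quoted in this section:

```python
# Frame timing for the two analog standards discussed above.
# Line counts and frame rates are taken directly from the text.
NTSC = {"lines": 525, "fps": 30}
PAL = {"lines": 625, "fps": 25}

for name, std in (("NTSC", NTSC), ("PAL", PAL)):
    frame_time_ms = 1000.0 / std["fps"]  # time to transmit one full frame
    print(f"{name}: {std['lines']} lines, {frame_time_ms:.1f} ms per frame")
```

This makes the trade-off concrete: PAL trades a lower frame rate (40 ms per frame vs. about 33.3 ms) for 100 additional TV lines.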
Digital cameras, the newest introduction and steadily becoming the most popular, transmit binary data (a stream of ones and zeros) in the form of an electronic signal. Although the voltage corresponding to the light intensity for a given pixel is continuous, the analog-to-digital conversion process discretizes it and assigns a grayscale value between 0 (black) and 2^N - 1, where N is the number of bits of the encoding. An output device then converts the binary data into video information. Two key differences distinguish digital from analog camera types:
- The digital video signal is exactly the same when it leaves the camera as when it reaches an output device.
- The video signal can only be interpreted in one way.
These differences eliminate errors in both the transmission of the signal and its interpretation by an output device. Compared to their analog counterparts, digital cameras typically offer higher resolution, higher frame rates, less noise, and more features. These advantages come at a price: digital cameras are generally more expensive than analog ones, and feature-packed cameras may involve more complicated setup, even for video systems that require only basic capabilities. Digital cameras are also limited to shorter cable lengths in most cases. Table 1 provides a brief comparison of analog and digital camera types.
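The analog-to-digital quantization step described above can be sketched as follows. The function name and the full-scale voltage `v_max` are illustrative assumptions, not part of any camera API:

```python
def quantize(voltage, v_max=1.0, bits=8):
    """Map a continuous pixel voltage to an N-bit grayscale value.

    0 corresponds to black, 2**bits - 1 to full scale, as described
    in the text. v_max is an assumed full-scale sensor voltage.
    """
    levels = 2 ** bits                           # e.g. 256 levels for 8-bit
    v = min(max(voltage, 0.0), v_max)            # clip to the valid range
    return int(v / v_max * (levels - 1) + 0.5)   # round to the nearest level

print(quantize(0.0))            # black
print(quantize(1.0))            # full scale for an 8-bit encoding
print(quantize(0.5, bits=10))   # mid-scale with a 10-bit encoding
```

Once the value is an integer code, it is reproduced exactly at the output device, which is precisely why the digital signal "can only be interpreted in one way."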
Table 1: Comparison of Analog Camera and Digital Camera Types
| Analog Cameras | Digital Cameras |
|---|---|
| Vertical resolution is limited by the bandwidth of the analog signal | Vertical resolution is not limited; offer high resolution in both horizontal and vertical directions |
| Standard-sized sensors | With no bandwidth limit, offer large numbers of pixels and sensors, resulting in high resolution |
| Computers and capture boards can be used for digitizing, but are not necessary for display | Computer and capture board (in some cases) required to display signal |
| Analog printing and recording easily incorporated into system | Signal can be compressed so user can transmit in low bandwidth |
| Signal is susceptible to noise and interference, which causes loss in quality | Output signal is digital; little signal loss occurs during signal processing |
| Limited frame rates | High frame rates and fast shutters |
Interlaced vs. Progressive Scan Cameras
Camera formats can be divided into interlaced, progressive, area, and line scan. To easily compare, it is best to group them into interlaced vs. progressive and area vs. line. Conventional CCD cameras use interlaced scanning across the sensor. The sensor is divided into two fields: the odd field (rows 1, 3, 5..., etc.) and the even field (rows 2, 4, 6..., etc.). These fields are then integrated to produce a full frame. For example, with a frame rate of 30 frames per second (fps), each field takes 1/60 of a second to read. For most applications, interlaced scanning does not cause a problem. However, some trouble can develop in high-speed applications because by the time the second field is scanned, the object has already moved. This causes ghosting or blurring effects in the resulting image (Figures 1a – 1b). In Figure 1a, notice how TECHSPEC® Man appears skewed when taking his picture with an interlaced scanning sensor.
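The odd/even field readout described above can be sketched with plain lists standing in for image rows. The function names are illustrative; real deinterlacing also has to compensate for the 1/60 s offset between fields, which this toy weave does not:

```python
def split_fields(frame):
    """Split a full frame into its odd and even fields.

    Rows are 1-based in the text (odd field = rows 1, 3, 5, ...),
    which corresponds to 0-based indices 0, 2, 4, ...
    """
    return frame[0::2], frame[1::2]

def weave(odd, even):
    """Recombine the two fields into a full frame (simple weave)."""
    frame = []
    for odd_row, even_row in zip(odd, even):
        frame.append(odd_row)
        frame.append(even_row)
    return frame

frame = [[row] * 4 for row in range(1, 7)]  # a toy 6-row "image"
odd, even = split_fields(frame)
print(weave(odd, even) == frame)            # weaving restores the frame
# For a static scene this round-trips exactly. With fast motion, the
# even field is captured 1/60 s after the odd field, so its rows are
# shifted relative to the odd rows -- the ghosting seen in Figure 1a.
```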
In contrast, progressive scanning solves the high-speed issue by scanning the lines sequentially (rows 1, 2, 3, 4..., etc.). Unfortunately, the output for progressive scanning has not been standardized, so care should be taken when choosing hardware. Some progressive scan cameras offer an analog output signal, but few monitors are able to display the image. For this reason, capture boards are recommended to digitize the analog image for display.
Figure 1a: Ghosting and Blurring of TECHSPEC® Man's High-Speed Movement Using an Interlaced Scanning Sensor
Figure 1b: TECHSPEC® Man's High-Speed Movement Using a Progressive Scanning Sensor