Resolution, one of the most critical parameters in an imaging system, is the system's ability to distinguish fine object detail. Learn how the imaging optics, camera, and illumination affect resolution with real-world application examples. Join Gregory Hollows, Director of Machine Vision Solutions, in this in-depth discussion of the importance of, and the factors governing, an imaging system's resolution. For more information, read our Resolution and Imaging Electronics 101: Camera Resolution for Improved Imaging System Performance application notes.
Hi, I am Greg Hollows, and welcome to the Imaging Lab. We are going to talk about resolution in depth now. Resolution is defined as the smallest object detail that can be seen by the imaging system. One of the things that controls that level of detail is the camera itself and the size of the pixels in the camera. Now, it is really important to understand that that is not the only limiting factor in the system. The optics themselves and the type of lighting that is used also determine the level of resolution that can be obtained. They all have to work together in concert to get the right level of resolution out of the system, or to maximize what your camera is able to obtain. Let's look at the examples in this slide. In the left-hand example, we see two objects that are very close together. If you look closely, you will see that the orange squares have a certain finite distance between them. It is very small, but it is there. In this particular camera system, they get imaged onto the sensor so close together that they create one object that is two pixels in size. It is impossible to determine how close together they are, or whether it is one object or two, because there is not enough information; they have blended together. This is where we hit the limiting resolution of the system. What we do in optics is try to maximize the amount of contrast that we can get at a given resolution, and, when we do the designs, keep enough space between the objects we are looking at, as in the example on the right-hand side of the frame, where we actually see the two objects with space between them. This is a line pair. That is going to be important in a later discussion, when we talk about MTF.
This allows us to express a frequency in line pairs per millimeter at which we can examine the resolution and contrast capabilities of the lens across a wide range. Now, to understand resolution, let's put it into a real-world example. Imagine for a moment that I have a piece of paper with a thousand spots on it, and a camera system with a thousand pixels in an array running along its length. If I were to image the spots as in part A on the slide, they would fall adjacent to each other across that array of pixels, creating a single line one thousand pixels long. We would not see a thousand distinct spots. To see distinct features, each spot needs a pixel of space next to it: one on, one off, alternating. So the limiting resolution, the number of parts we can break the field into and still see distinct features, is five hundred. That is about the limit we can reach with this system. Now, it is important when we think about optics, though: we can do this calculation on the camera side, for how many pixels I have and how finely I can break up the field, but what we are going to find is that not all optics are actually able to resolve to the limiting level of the camera. New high-resolution sensors that have hit the market have really shown this effect in detail. We are going to start looking at some real-world images and examples of test targets and real objects to see how resolution is affected in the system. So, let's look at this first example. We are looking at a couple of light bulbs, and we are going to look at two different sensors. One is a standard-resolution camera nowadays; it used to be considered a high-resolution camera many years ago, with 640 x 480 resolution. You can see it has broken up the field into some very large areas, and we are not able to see all the detail we would like to see on the light bulb itself.
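The one-on, one-off counting argument above is just the Nyquist sampling limit, and it can be sketched in a few lines of code. This is a minimal illustration, not vendor code; the function names and the 5 µm pixel size are our own assumptions for the example.

```python
# Sketch of the transcript's pixel-counting argument: a line pair
# needs at least two pixels (one on, one off), so sampling sets a
# hard ceiling on resolvable detail regardless of the lens.

def limiting_line_pairs(pixel_count: int) -> int:
    """Maximum number of distinct on/off pairs an N-pixel row can sample."""
    return pixel_count // 2

def nyquist_lp_per_mm(pixel_size_um: float) -> float:
    """Image-space Nyquist frequency in line pairs per millimeter:
    one line pair spans two pixels, i.e. 2 * pixel size."""
    pixel_size_mm = pixel_size_um / 1000.0
    return 1.0 / (2.0 * pixel_size_mm)

print(limiting_line_pairs(1000))  # the transcript's 1000-spot example: 500 pairs
print(nyquist_lp_per_mm(5.0))     # an assumed 5 um pixel: 100.0 lp/mm
```

As the transcript notes, this is only the camera-side ceiling; the lens and lighting may stop the system well short of it.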
Now, with the newer cameras that have come out, we are going to look at how much more finely the field is broken up when you go to smaller pixels covering the same area. There is a 1.3-megapixel camera on the right, and as we look at this, we see that the object is broken up into much more detail. Again, we would need the right optics to maximize this capability, but let's see how it works on a target to give you some feedback on that. If you notice in this next slide, on the left and the right there are pictures of a target. This is a 1951 USAF resolution target. It gives us resolution in the horizontal and vertical dimensions with vertical and horizontal lines, respectively. The elements spiral in to higher and higher frequency, so we can see different resolution levels all at the same time. Now, the one thing that jumps right off the page at you is that the resolution on the left is far lower than on the right. Understand that the sensors are the same size in this particular example, as are the object, lighting, and working distance. The only thing that has changed is the spatial resolution on the image side, and that is why we see much greater detail in the right-hand image. That is to be expected as we go to higher-resolution cameras. Now, one caveat: in this system, the lens that was utilized has higher resolution than either of the cameras can use, so we are not limited by the lens in this example. Let's take this a step further now and look at two high-resolution cameras with different lenses. This is actually looking at a UPS label. In that label there is a variety of detail: 1D barcode information, 2D barcode information with some centering circles, and then text. These are very popular applications: read the barcode, read the text, and do a variety of OCR and OCV applications.
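The effect of smaller pixels over the same field of view can be made concrete with a quick calculation. The sketch below assumes the lens is not the limit (as the transcript states for this example); the 100 mm field width and the exact sensor formats are illustrative assumptions, not values from the video.

```python
# Sketch: smallest resolvable object-space feature for two sensors
# imaging the same field of view, assuming lens-unlimited sampling.

def object_space_resolution_mm(fov_width_mm: float, h_pixels: int) -> float:
    """Object-space size of one line pair: the field width divided by
    the number of line pairs the sensor can sample (Nyquist)."""
    line_pairs = h_pixels / 2
    return fov_width_mm / line_pairs

fov = 100.0  # assumed field width in mm
for name, h in [("640 x 480", 640), ("1.3 MP (1280 x 1024)", 1280)]:
    size = object_space_resolution_mm(fov, h)
    print(f"{name}: {size:.4f} mm per line pair")
```

Doubling the horizontal pixel count halves the smallest resolvable feature, which is exactly the extra light-bulb and target detail seen on the right-hand images.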
We are using a two-megapixel camera in both cases, and the lenses have the exact same specifications; if you looked at the specification sheets for each of them, they would be essentially identical. The important thing to take away is that as you look at the close-ups of these images, in the upper corner where the 2D matrix is, you will see wildly different images being produced by the two of them. In this example, the lens on the right is actually exceeding the sensor's capability, while on the left, the optics do not meet the sensor's capability. This is why the data in the 2D matrix runs together. While it may be readable in this application, it will be much less reliable than the one on the right. It is important to understand that two lenses whose specifications look very, very similar do not necessarily perform the same way. Why is this? First off, it could be the design: the lens may not have been designed for this specific application, so it does not perform as well. There might be corners that were cut, or things done to reduce cost in that lens, such as removing elements or limiting its size for other applications, which make it unable to hit that performance level. Or lower-cost manufacturing tolerances may be employed, which produce a stack-up of errors that reduces the quality of any given lens. At the end of the day, resolution is very important to guaranteeing the reliability and repeatability of the system. Just because I can get it done once in the lab does not mean it is going to work perfectly every time afterwards. So, it is important to look at that. Let's take it a step further now. We are going to get into how we can look at the detail of lenses and determine what is going on across the entire field of view. In the next slide, we have an example of a certain type of target. It is called a star target.
You will notice that it is a circle with wedges alternating black and white going toward the center. Like slices of a pie or a pizza, they get narrower and narrower as we approach the center of the image; the frequency goes up because the line spacing is getting tighter. This allows us to evaluate different resolutions continuously over the entire area of the target. In the next slide, we place a variety of stars in different positions on the target itself. This allows us to evaluate resolution in every direction, at a variety of frequencies, all the way through the system, from the center to the corner. What we are going to do now is pull up comparisons between two lenses and look at the center, the edge, and then the corner of the image, and you will see wildly different results. Some things to take away here: the centers of the images are very similar, but when we get to the corner they are very different. You will notice that the resolution on the right-hand side falls off differently in one diagonal direction than in the other. This can be a problem associated with higher-order issues in optics, specifically astigmatism and some other aberrations. You will notice that you would get resolution in one direction, but not the other, with this particular lens. So, if all your defects run one way, it will work; the other way, it may not. Resolution is a critical thing in all of your imaging systems. Next, we are going to look at how we tie contrast together with this, so that you can get a specification at a given resolution and compare lens systems. That is a detailed look at resolution. Next is Contrast.
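The "frequency goes up toward the center" behavior of a star target follows directly from its geometry: with N black/white wedge pairs, one full pair spans an arc of 2πr/N at radius r. The sketch below illustrates this; the 36-pair count and the radii are assumed values for the example, not taken from the target in the video.

```python
import math

# Sketch: local spatial frequency of a star (Siemens) target.
# With n_pairs wedge pairs, one line pair occupies an arc length of
# 2 * pi * r / n_pairs, so frequency rises as the radius shrinks.

def star_frequency_lp_per_mm(n_pairs: int, radius_mm: float) -> float:
    """Line pairs per millimeter along the circle at this radius."""
    return n_pairs / (2 * math.pi * radius_mm)

for r in (10.0, 5.0, 2.0):  # assumed radii, moving toward the center
    print(f"r = {r} mm: {star_frequency_lp_per_mm(36, r):.2f} lp/mm")
```

Because every radius presents a different frequency, a single star probes the lens continuously, which is why placing stars across the field reveals center-versus-corner differences so clearly.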