Vision, Quality Inspection and Manufacturing Optimization for Automation and Production Solutions
AAE (Grauel) pushes technical boundaries. We provide high-tech printing and assembly solutions, and we support smart printing and manufacturing equipment with Industry 4.0 technologies. This series of articles provides background information on how these developments are supported.
First we’d like to say a big thank you to Gregor Fabritius of Isotronic GmbH and Koen Deuss and Will Uijting of VIMEC.
Vision systems are frequently used in production to reduce defects in finished and packed products and to increase production efficiency. Another common application is the use of vision on robotic systems to guide the robot to a pick and place position.
Vision applied in a so-called ‘pick and place’ system
With the rise of Industry 4.0, machines are increasingly equipped with vision systems to further improve quality and to detect opportunities to optimize and stabilize production processes.
In this article we look at the basics that determine a vision application. By understanding the basics of a vision system and its operation, we can better implement and operate (complex) vision systems. The main focus of this article is on 2D visual inspection.
AAE focuses on the integration of high-end vision systems in automated solutions. This requires multi-disciplinary skills, combining mechanical engineers, automation experts, control engineers and vision specialists.
Pharmaceutical vial (10 times magnified)
As an example, a pharmaceutical vial is shown above (10 times magnified), with the defect highlighted in the red area. For these products, the so-called “Japanese quality” standard is well known, requiring extremely small defects to be detected.
In this article we will focus on lighting technology first. We will then look at different camera types and the resolution. Since vision systems are used in production environments, important items to take into consideration are dealt with in the next paragraph. The software and algorithms used for detection of the products and defects are discussed in the “Software and Algorithms” paragraph. The last paragraph deals with the aspects of testing the vision system performance.
One of the most important aspects to evaluate in a vision system is the lighting technique. The main task of lighting is to enhance or create contrast between the object features (e.g., defects) and their surroundings so they become visible for detection.
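To make this concrete, contrast can be quantified. The sketch below (plain Python; the pixel values are hypothetical and not from the article) computes the Michelson contrast of a small set of grayscale values, showing how a suitable lighting technique raises the contrast of a defect region:

```python
# Michelson contrast: (Imax - Imin) / (Imax + Imin) over grayscale values
# (0 = black, 255 = white). Illustrative only; real systems evaluate
# contrast over regions of interest in the image.
def michelson_contrast(pixels):
    i_max, i_min = max(pixels), min(pixels)
    if i_max + i_min == 0:
        return 0.0
    return (i_max - i_min) / (i_max + i_min)

# Hypothetical pixel values: a dark defect on a back-lit (bright) background
# versus the same region under flat, low-contrast illumination.
back_lit = [250, 248, 30, 252, 25, 249]
flat_lit = [120, 118, 100, 122, 105, 119]
print(michelson_contrast(back_lit))   # high contrast: defect clearly visible
print(michelson_contrast(flat_lit))   # low contrast: defect hard to detect
```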
Depending on the object to be inspected, the features that require detection (and those that do not) determine the light source. The main types of lighting include bright field, dark field and back lighting. We look at the different types below.
Dark Field, Bright Field & Back Lighting
The most common lighting techniques are shown below graphically.
Overview of the most common lighting techniques
It is not easy to determine the correct lighting technique, as different situations need to be evaluated (e.g., product tolerances, variation in the geometric shape of the product, defect shape, size and location, and environmental conditions all need to be considered). Defining the correct lighting technique is normally performed by an optical engineer.
Image above shows an Isotronic GmbH vision system, mounted in a glass manufacturing system
The image above shows a vision system with back lighting (red LED) installed in a production machine (pharmaceutical container production machine).
A wide range of standard lighting components are available but dedicated vision suppliers also design their own bespoke lighting units for unique applications.
As noted earlier, the required lighting technique depends on aspects such as object shape, product material and the way light is absorbed, scattered, reflected or transmitted (e.g., the lighting required for a black object may be quite different than for a transparent (flint) glass object).
Glass cartridge illuminated using ‘Dark Field’ (image provided by VIMEC)
The image above shows a (glass) cartridge illuminated using “dark field”. This type of illumination is typically used to detect reflective defects (e.g., cracks, chips, breakage, etc.). The defects will reflect the light and become clearly visible. This lighting technique is less suitable for detection of (dark) dirt or spots.
Collimated vs Diffuse Lighting
Lighting techniques can be further differentiated into collimated and diffuse lighting. Each of these two techniques serves a specific purpose. As a general guideline, a collimated light source provides extra means to enhance the outer silhouette (shape) of a product and thus improves edge detection (inspection accuracy).
For defect detection (e.g., cracks in glass), a diffuse light source may serve better, as the light comes in from various angles, increasing the illumination of the defect.
Overview of different light sources
The two different lighting types are shown graphically above.
By using different lighting techniques, unique images can be obtained, allowing for dedicated inspection. As an example, two (2) images of the same product (taken at the same position) are included. The upper image is better suited for defect detection, while the bottom image is better for (outer) dimension inspection.
Close up image of a syringe tip
Especially when using LED illumination, different patterns can be created (at the same inspection position). By quickly changing the patterns different images can be obtained. Each image provides unique inspection capabilities. We look at patterned lighting in the next paragraph.
Patterned Pulsed Light Sources
When multiple features (defects) are to be found, or geometric and cosmetic inspection are to be combined, different lighting techniques are preferred. Geometric inspection benefits from a dark image (typically requiring a smaller light source), while cosmetic inspection benefits from a brighter image (typically requiring a larger light source).
The image in the previous paragraph shows an example of the differences between two (2) types of illumination, both used for a different purpose. The top image is better suited for cosmetic inspection where the bottom image is better suited for geometric inspection.
This would normally require multiple inspection stations (positions), but another approach involves patterned lighting. Patterned light sources may consist of various smaller lighting units that change the illumination shape, illuminating the object from multiple different angles. Larger panels with switching patterns are also used for enhanced feature inspection.
An example is included below where a single light source can alternate between various lighting patterns enhancing inspection functionality.
Illumination units with different patterns
Camera types, Speed and Resolution
With the lighting in place, we need a camera or sensor to (visually) detect the product and the defects. When looking at camera-based systems, two (2) main types of cameras are available: the area scan camera and the line scan camera. We look at the two types below:
An area scan camera uses an imaging sensor*1 with a specific width and height (e.g., 1280 × 960 pixels) to create a single image. An important term for these cameras is “frame rate”, which indicates the time required to capture one (1) image.
A line scan camera uses a sensor*1 that is long and very narrow (for example, 1 × 2,048 pixels). The image is created while the object rotates or moves across the axis of the sensor. The speed of a line scan camera is normally expressed as a line rate (kHz): the number of lines captured per second, where multiple lines are combined to generate a complete image.
*1 The main sensor types are CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide-Semiconductor).
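The difference between a frame rate and a line rate can be sketched with some back-of-the-envelope arithmetic (the figures below are hypothetical, not from the article):

```python
# Relating acquisition speed to image size for both camera types.
area_frame_rate_hz = 60        # area scan: 60 full frames per second
line_rate_khz = 20             # line scan: 20,000 lines per second

frame_time_s = 1 / area_frame_rate_hz      # time to capture one full frame
rotation_time_s = 0.5                      # product completes one turn in 0.5 s
lines_per_image = int(line_rate_khz * 1000 * rotation_time_s)

print(f"one area-scan frame takes {frame_time_s * 1000:.1f} ms")
print(f"one rotation yields a {lines_per_image}-line image")
```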
An overview of the two camera types is shown below. The top image shows an area scan camera which has taken six (6) individual images (at 60° intervals) for the inspection. The bottom image shows a line scan camera that “scans” the product during rotation. A line scan image looks quite different from an area scan image and may take a little getting used to at first.
Overview of how a scan camera operates
Line scan and area scan cameras can be combined, together with different lighting patterns. Combining these functionalities provides accurate and enhanced inspection possibilities, but the vision system may become very complex and require expertise to design and operate (which is not to be underestimated).
An example is included below showing four (4) line scan cameras (out of a total of 14 cameras) and two (2) area scan cameras (using a semi-transparent mirror to optimize the optical route).
Vision system with line scan and area scan cameras
Getting a clear image
After we have identified the lighting technique and the camera type, we focus on the additional aspects to get a high-quality image. Additional aspects to evaluate to get a clear and sharp image are included below:
Overview of the most important elements when it comes to Vision
The items listed in the image above are frequently referred to and important to understand. We deal with them individually in the next paragraphs.
Resolution & Sensor
The resolution of the camera (either an area scan or line scan camera) is expressed in pixels. An area scan camera has a “rectangle” of pixels, typically specified as 1,920 × 1,200 pixels (2.3 MP camera) or 2,592 × 1,944 pixels (5 MP camera).
A line scan camera has a single line of typically 2,048, 4,096 or 8,192 pixels, referred to as a 2K, 4K or 8K camera.
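The megapixel figure follows directly from the sensor dimensions; a quick check of the numbers above:

```python
# Megapixels = width x height / 1,000,000.
def megapixels(width_px, height_px):
    return width_px * height_px / 1_000_000

print(round(megapixels(1920, 1200), 1))  # 2.3 MP
print(round(megapixels(2592, 1944), 1))  # 5.0 MP
```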
The sensor in the camera converts the light it receives into an electronic signal. The most used sensor types in cameras are CCD and CMOS. Each sensor offers unique properties; depending on the vision application and requirements, one may be better suited than the other (which is better? it’s complicated).
(CCD: Charge-Coupled Device, CMOS: Complementary Metal-Oxide-Semiconductor)
Field of View (FOV)
The field of view (FOV) is the area that the camera can capture; it defines the maximum area that can be inspected. The FOV is determined by the sensor size in the camera and the focal length of the lens.
In some applications we need to inspect 20 mm products, while in others we need to inspect a 30-meter object. This requires a different field of view (FOV).
An overview of the field of view (FOV) and the relation to the sensor is shown below for illustration purposes:
Overview of the field of view (FOV) and the relation to the sensor
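As a first-order sketch (thin lens approximation; the sensor width and distances below are hypothetical), the FOV scales with the working distance divided by the focal length:

```python
# FOV ~= sensor size * working distance / focal length (first-order
# approximation; ignores lens distortion).
def field_of_view_mm(sensor_mm, working_distance_mm, focal_length_mm):
    return sensor_mm * working_distance_mm / focal_length_mm

# A sensor roughly 7.2 mm wide with a 25 mm lens at 300 mm working distance:
fov = field_of_view_mm(7.2, 300, 25)
print(f"horizontal FOV is roughly {fov:.1f} mm")
```

Doubling the working distance doubles the FOV in this model, which is why inspecting a 20 mm product and a 30-meter object calls for very different optical setups.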
Depth of Field
The next important topic to review is the “depth of field” (DOF). This describes how much “depth” of the scene remains within an “acceptable sharpness” range.
A shallow depth of field describes a narrow range in which objects appear in focus, whereas a deep depth of field describes a long range in which objects appear in focus.
As an object is placed closer to or farther away than the set focus distance of a lens, the object blurs and both resolution and contrast suffer. As such, DOF only makes sense if it is defined with an associated resolution and contrast.
Picture of a camera lens
The depth of field can be changed by opening or closing the aperture of the camera (changing the focal length or moving the camera away from the object are also options). Opening the aperture allows more light to enter (permitting shorter exposure times) but reduces the depth of field. The aperture opening is measured on a scale of f-stops (f-stop value).
An example of shallow (left) and deep (right) depth of field is shown below.
Example of shallow (left) and deep (right) depth of field
In the image above the focus of the camera is on the middle syringe. For the image on the left the aperture is open (small f-stop value), giving a shallow DOF. The image on the right has the same focus point but a much larger f-stop value (closed aperture), giving a deep DOF.
Note: “depth of focus” is sometimes referenced, but this is a different concept relating to the image sensor plane. In this article we discuss depth of field only, as this is the most frequently used.
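The effect of the f-stop on DOF can be sketched with a common first-order approximation, DOF ≈ 2·N·c·(m+1)/m², where N is the f-stop, c the circle of confusion and m the magnification (the values below are hypothetical):

```python
# First-order depth-of-field approximation for close-range imaging.
def depth_of_field_mm(f_stop, circle_of_confusion_mm, magnification):
    m = magnification
    return 2 * f_stop * circle_of_confusion_mm * (m + 1) / (m ** 2)

c = 0.005   # 5 um circle of confusion (roughly one pixel)
m = 0.5     # object imaged at half size on the sensor
print(depth_of_field_mm(2.8, c, m))   # aperture open: shallow DOF
print(depth_of_field_mm(11, c, m))    # aperture closed: deeper DOF
```

Closing the aperture from f/2.8 to f/11 roughly quadruples the depth of field in this model, matching the shallow/deep behavior in the syringe example above.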
Regular and Telecentric Lenses
Depending on the camera (and sensor), a specific lens can be selected. There is a relationship between the sensor (size) in the camera and the lens which can be used. Regular lenses come in a wide range of sizes and are referred to in millimeters, such as 25 mm, 35 mm or 60 mm lenses.
Especially in measurement systems, a good and clear image contributes to high quality inspection. But when an object moves in the field of view, the object may appear smaller or larger (at various positions).
Example of how the type of lens affects the image
An example is included above where a camera looks at three products (top view on the left-hand side). The image on the top right shows the camera image when using a 42 mm lens. The syringes on the left and right appear smaller than the middle one, as they are further away from the camera (and thus yield a smaller measurement result).
The image on the bottom right shows the camera image when using a telecentric lens: all three syringes appear identical, even with the two outer ones further away from the camera.
A telecentric lens is typically used to measure without perspective distortion when object positions vary (across the image field in the X and Y directions).
This image shows a vision system with telecentric lenses (image provided by Isotronic GmbH)
Note that for these lenses the focus (working) distance is fixed; the full configuration needs to be moved back and forth to reach the correct focus position.
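The measurement difference between the two lens types can be sketched with a simple pinhole model (all values below are hypothetical): with a regular lens the apparent size scales inversely with distance, while an ideal object-space telecentric lens keeps the magnification constant.

```python
# Pinhole model: image size ~ focal length * object size / distance.
def apparent_size_regular_mm(object_mm, distance_mm, focal_mm=42):
    return focal_mm * object_mm / distance_mm

# Ideal telecentric lens: constant magnification, independent of distance.
def apparent_size_telecentric_mm(object_mm, magnification=0.1):
    return object_mm * magnification

h = 10.0                    # a 10 mm object
for d in (200, 220):        # middle vs outer position
    print(f"regular lens @ {d} mm: {apparent_size_regular_mm(h, d):.2f} mm")
print(f"telecentric: {apparent_size_telecentric_mm(h):.2f} mm at any position")
```

The 10% distance difference directly becomes a 10% size (measurement) error with the regular lens, which is exactly the error a telecentric lens avoids.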
The working distance is the distance between the front of the lens and the target object (where the object is in focus). This becomes especially important when integrating a vision system into an (existing) assembly or production system.
Overview of how Extension Tubes work
A typical way to magnify the obtained image (or to work in limited space) is the use of extension tubes. The magnification of the object can be increased while maintaining the camera position.
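A common rule of thumb (for a lens focused near infinity; the values below are hypothetical) is that an extension tube adds roughly its length divided by the focal length to the magnification:

```python
# Added magnification ~= tube length / focal length (rule of thumb).
def added_magnification(tube_length_mm, focal_length_mm):
    return tube_length_mm / focal_length_mm

base_m = 0.1                     # magnification without the tube
for tube_mm in (10, 20, 40):     # common tube lengths
    m = base_m + added_magnification(tube_mm, 50)   # with a 50 mm lens
    print(f"{tube_mm} mm tube -> magnification of about {m:.2f}")
```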
Environment & Operating Conditions
Apart from a solid vision design (cameras, lenses, lighting, etc.), additional aspects are to be considered as these may have a profound impact on the inspection performance.
Vision performance is often tested on a stationary test bench, but when mechanical handling (vibration, temperature conditions, etc.) and other process parameters are introduced, the results may be quite different.
We will review critical aspects that influence inspection results once installed in a manufacturing environment.
Vision system in a production environment used for glass tube inspection (image provided by Isotronic GmbH)
Product Tolerances & Material
When process parameters (such as product tolerances) are not included in the initial vision tests, the limits for accurate and stable inspection can soon be reached, leaving an unstable system. Slight variations of the product (to be expected in a production environment) may cause reflections or dark spots in the image, leading to inaccurate results.
Glass and transparent plastics are typical examples that provide additional challenges for stable and accurate visual inspection and require dedicated (lighting) solutions. Even with the slightest tolerance changes, the images may appear quite different for the camera.
When products are extremely hot during inspection (e.g., glass or metal inspection, > 700°C), the measurement results differ from those of products at room temperature. Depending on the material, the temperature and the required inspection accuracy, factors such as the coefficient of thermal expansion play a role in the measurement results (when comparing test results to products that have, e.g., cooled off).
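As a sketch of this temperature effect (the values are hypothetical; the linear expansion coefficient of borosilicate glass is roughly 3.3e-6 per kelvin), a hot-end measurement can be scaled back to room temperature using linear thermal expansion:

```python
# L_cold ~= L_hot / (1 + alpha * dT), with alpha the linear expansion
# coefficient and dT the temperature difference.
def to_room_temperature(length_hot_mm, alpha_per_k, t_hot_c, t_room_c=20.0):
    return length_hot_mm / (1 + alpha_per_k * (t_hot_c - t_room_c))

d_hot = 10.450   # hypothetical tube diameter measured at the hot end (mm)
d_cold = to_room_temperature(d_hot, 3.3e-6, 700.0)
print(f"expected diameter at room temperature: {d_cold:.4f} mm")
```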
It is also worth mentioning that there may be a difference in measurement results between online (during production) and offline (quality check) inspection. It can be quite complicated to compare online and offline results (see also the last paragraph, “Testing Vision Performance”).
The environment in which the vision system will be installed may offer interesting challenges. Some vision systems are to be installed in a location that has limited space (e.g., existing production machines). This may prove to be a challenge as, due to the space limitations, not all optimum distances can easily be achieved (working distance, space for cables or space for a telecentric lens, etc.).
Some production environments provide additional challenges to integrate vision systems as factors such as vibration, heat, and environmental lighting play a role.
For example, in a glass production environment, temperatures are to be considered when integrating and operating a vision system, as well as the small working area available for integration.
Visual Inspection technology at the hot end production of pharmaceutical vials (image provided by Isotronic GmbH)
Software and Algorithms
We dealt with vision technology and the environmental conditions in the previous paragraphs but another key factor to understand is the software and algorithms used.
Vision systems transmit raw images at a high frame rate, which takes up a lot of bandwidth and memory. All these images need to be converted to the required image format in real time, and image processing algorithms must be able to process all this data. In addition, vision algorithms are used to extract meaning from the observed pixel patterns in the acquired images.
Some companies, such as Keyence and Cognex, include a dedicated vision library with their systems that they have developed and deploy. Alternatives such as Halcon and OpenCV are also available and can be used for different vision configurations (and parametrized by developers and users). Some vision companies specialize in specific inspection areas and develop their own dedicated algorithms especially suited for that inspection (e.g., VIMEC).
Understanding the workings of an algorithm is important to ensure it is applicable for the intended purpose and will provide the required performance. A general-purpose algorithm from a vision library (e.g., an edge detection algorithm) may not always work for a dedicated product and may even lead to incorrect measurements. An algorithm may also work well for most common defects but completely “miss” a specific one that was not included in the original test sequence; the company working on the vision application may not be aware of all aspects of the product or of the defects that may appear.
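To illustrate what such a general-purpose algorithm does, the sketch below implements a horizontal Sobel edge response in plain Python (no vision library) on a tiny synthetic image; a real system would use an optimized library such as OpenCV or Halcon:

```python
# Horizontal Sobel kernel: responds strongly to vertical edges.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def sobel_x_response(img, r, c):
    """Horizontal gradient at pixel (r, c)."""
    return sum(SOBEL_X[i][j] * img[r - 1 + i][c - 1 + j]
               for i in range(3) for j in range(3))

# 5x5 synthetic image: bright region (200) left, dark region (20) right.
img = [[200, 200, 200, 20, 20] for _ in range(5)]

# Responses along one row: zero in the flat region, strong (negative,
# i.e. bright-to-dark) near the transition.
row = 2
responses = [sobel_x_response(img, row, c) for c in range(1, 4)]
print(responses)
```

An edge detector like this finds the step cleanly on a synthetic image but, as noted above, may behave unexpectedly on a real product with reflections or tolerance variation.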
Testing Vision Performance
Vision systems generate a lot of data, which is (normally) nicely presented on screen. But the “reliability” of the measurement results must be verified; computer screens tend to provide a false sense of confidence.
A well-known method to verify the performance and results of a vision system is a “Measurement System Analysis” (MSA). A useful reference here is the Measurement Systems Analysis Reference Manual, fourth edition. A copy of the manual is available on the internet (see below for the link).
A “Gage R&R” test (ANOVA method) can be used to test the repeatability and reproducibility of the measurement results. Simply put, a %GRR value below 10% indicates that the vision system is suitable for the process.
Gage R&R test results
A value between 10% and 30% indicates that the system may be suitable for inspection, depending on the importance of the aspect measured. A value over 30% indicates that the system is not suitable for use in that process.
Another aspect to review is the difference between the “true” value and the measured value, referred to as the bias. Understanding the difference between measured and “true” values (often from an offline system) can be extremely complex.
Measurement results compared to a caliper measurement (bias verification)
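The %GRR figure expresses how much of the total observed variation comes from the measurement system itself. A simplified sketch (the variance components below are hypothetical, as if already estimated with the ANOVA method from the MSA manual):

```python
import math

# %GRR = 100 * sqrt(GRR variance / total variance), where GRR variance is
# the sum of the repeatability and reproducibility components.
def percent_grr(repeatability_var, reproducibility_var, part_to_part_var):
    grr_var = repeatability_var + reproducibility_var
    total_var = grr_var + part_to_part_var
    return 100 * math.sqrt(grr_var / total_var)

value = percent_grr(0.0003, 0.0001, 0.0496)
print(f"%GRR = {value:.1f}")   # below 10: measurement system acceptable
```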
Stay tuned for more, coming up next! In the meantime, please consider following Grauel, a brand of AAE, on LinkedIn for weekly updates and extra content.
Inkjet; A Flexible Printing Strategy for High Quality Print, Unique Part Identification and Batch Customization, by Ivo Brouwer – Business Developer Production Automation at AAE b.v.
Edmundoptics.com (2021). Available at: https://www.edmundoptics.com/ViewDocument/silhouetting-Illumination-18.pdf (Accessed: 7 April 2021).
AIAG (2021) (MSA) Measurement System Analysis. Aiag.org. Available at: https://www.aiag.org/quality/automotive-core-tools/msa (Accessed: 7 April 2021).
Rubymetrology.com (2021) MSA Reference Manual, 4th Edition. Available at: http://www.rubymetrology.com/add_help_doc/MSA_Reference_Manual_4th_Edition.pdf (Accessed: 7 April 2021).