The global site of the UK's leading magazine for automation, motion engineering and power transmission
How to avoid machine vision’s blind spots

11 April, 2013

Martin Gadsby, director of Optimal Industrial Automation, looks at some of the challenges in “difficult” machine vision applications, and outlines ways to tackle them.

Machine vision has become an essential element of quality and process control in many industries. Recent advances in camera technologies, processing power and software algorithms are allowing users to automate many tasks that would not have been possible a decade ago.

But getting machine vision applications to work reliably and cost-effectively requires considerable skill and experience on the part of the system integrator. Frequently, decisions about the lighting, product presentation, camera fixturing and the operation of the machine vision system can have as much of an impact on its performance as the choice of hardware and analysis technologies.

Product variations

Variable products make machine vision harder. Significant variations in the size or shape of the products being inspected by the system can create problems for a single fixed camera position, for example. Similarly, variations in product colour or surface finish can create challenges for the selection of appropriate lighting. In response to these issues, automatic or manually adjustable camera fixtures can be used to ensure that all the relevant parts of all products are in the image, and in focus, while lighting systems are available which adapt automatically to maintain image quality.

Smarter lighting can help in other ways too. Features that may be invisible under normal room lighting can be highly visible under controlled directional light or backlighting, for example. Beyond this, the use of UV, IR or even thermal cameras can turn apparently impossible features into clearly inspectable images.

Sometimes there are wide natural variations in the appearance of good products. Flexible items, such as confectionery packets, or those using non-synchronised printed foils, can vary significantly from one product to the next, for example. Advanced software approaches, including sophisticated calibration, pattern unwrapping and adaptive tools can help to overcome these issues.

Appropriate product presentation can also simplify image acquisition and processing. For example, if bottles on a line are all oriented with their labels in the same direction, a single camera may be adequate. If the labels are presented in a random orientation, three or four cameras may be needed to ensure that an image of the full label is available. Similarly, precise product fixturing can simplify and increase inspection accuracy.
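The camera-count trade-off for randomly oriented products can be estimated from simple geometry. The sketch below is illustrative only: the useful field of view per camera and the overlap margin are assumed figures, not values from the article.

```python
import math

def cameras_for_full_coverage(useful_fov_deg, overlap_deg=20):
    """Cameras needed so that every point on a cylindrical product's
    circumference (e.g. a bottle label) is seen by at least one camera,
    allowing some overlap between adjacent views for stitching."""
    effective = useful_fov_deg - overlap_deg
    return math.ceil(360 / effective)

# Assuming ~110 degrees of label is legible per view, with 20 degrees
# of overlap, four cameras cover a randomly oriented bottle.
print(cameras_for_full_coverage(110))  # 4
```

If the bottles can instead be oriented mechanically before the camera, the answer collapses to one, which is the point the paragraph above makes.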

In general, time spent optimising the image before the acquisition stage will be repaid many times over during the life of the project, in terms of software development, inspection robustness, and system maintenance requirements.

Image requirements

One of the key factors in determining the architecture of an automated vision system is the pixel resolution needed to achieve the required inspection functions. In industrial applications, the tightest measured tolerance must typically represent 5 to 10 pixels of the acquired image. So a tolerance of ±0.5mm may require a pixel resolution of 100µm. In addition, any feature to be detected must occupy several pixels; single-pixel features are prone to noise and “edge effects” and cannot be detected reliably.

A good rule-of-thumb is that a feature should be 3x3 pixels for detection, so a resolution of around 150µm is needed to detect features 0.5mm in size. Some special processing tools also have their own requirements. Optical character recognition tools typically need individual characters to be 20 to 30 pixels high, for example, so a 12-point typeface would require a resolution of around 200µm.
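The rules of thumb above reduce to simple arithmetic. This sketch encodes them as helper functions; the function names and default pixel counts are my own, chosen to match the figures quoted in the text.

```python
def required_resolution_um(tolerance_mm, pixels_per_tolerance=5):
    """Pixel size (in microns) so that the tightest measured tolerance
    spans at least `pixels_per_tolerance` pixels in the image."""
    return tolerance_mm * 1000 / pixels_per_tolerance

def resolution_for_feature_um(feature_mm, pixels_across=3):
    """Pixel size (in microns) so that a feature of the given size
    spans at least `pixels_across` pixels (the 3x3 detection rule)."""
    return feature_mm * 1000 / pixels_across

# A +/-0.5mm tolerance at 5 pixels per tolerance needs 100um pixels
print(required_resolution_um(0.5))     # 100.0
# A 0.5mm feature at 3 pixels across needs pixels of roughly 150-170um
print(resolution_for_feature_um(0.5))  # ~166.7
```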

Once the resolution is known, it is possible to define the camera and lens combination that will allow this to be achieved over the object to be inspected. Difficulties arise when this leads to very large image requirements. For instance, a 1.5m-wide web may contain visible defects that are 250µm across, so 18,000 pixels are needed to span the width of the web. This equates to a typical image size of more than 200 Megapixels – beyond the capabilities of current industrial camera technology. The solution here is to use several cameras, custom optics and software to select areas of interest, or to use linescan or contact image sensor (CIS) technologies.
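The web example can be worked through numerically. In the sketch below, the 4096-pixel line width and 200-pixel stitching overlap are assumptions for illustration; only the 1.5m web and 250µm defect size come from the text.

```python
import math

def pixels_across_web(web_width_mm, defect_size_mm, pixels_per_defect=3):
    """Total pixels needed across the web so each defect spans
    `pixels_per_defect` pixels."""
    return math.ceil(web_width_mm / defect_size_mm * pixels_per_defect)

def cameras_needed(total_px, camera_width_px, overlap_px=200):
    """Cameras whose fields of view tile the web width, with a small
    overlap between neighbours so no defect falls in a gap."""
    usable = camera_width_px - overlap_px
    return math.ceil((total_px - overlap_px) / usable)

px = pixels_across_web(1500, 0.25)   # 18000 pixels, as in the text
print(px, cameras_needed(px, 4096))  # e.g. five 4096-pixel linescan cameras
```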

In one recent project, my company supplied an array of eight 8-Megapixel cameras integrated with an advanced datalogging system to measure the contents of 96-well assay trays for a pharmaceutical business. The resulting high-resolution images were processed on several PCs, enabling inspection at high production speeds.

Linear speed

The linear speed of moving products and the cycle time available for inspection also play a key role in determining the camera hardware and processing capabilities required to deliver a working system.

If a product is moving continuously, then the acquisition must freeze the movement to avoid motion blur in the image. This can be done using either short exposure times or strobe lighting. In both cases, intense light is needed, and specialised sources are often used to achieve an adequately bright image. With “standard” cameras, it is feasible to operate at exposures of around 0.1ms. For example, a resolution of 5µm and a close-up image 8mm wide would allow a linear speed of around 50mm/s before discernible blurring occurred. Specialised high-speed cameras can offer dramatically faster acquisition.
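The blur limit follows directly from pixel size and exposure: the product must not move more than about one pixel while the shutter is open. A minimal sketch of that calculation, using the figures from the text:

```python
def max_linear_speed_mm_s(pixel_size_um, exposure_ms, max_blur_px=1.0):
    """Fastest product speed that keeps motion during the exposure
    below `max_blur_px` pixels (i.e. no discernible blur)."""
    return (pixel_size_um / 1000.0) * max_blur_px / (exposure_ms / 1000.0)

# 5um pixels and a 0.1ms exposure allow ~50mm/s, as quoted above
print(max_linear_speed_mm_s(5, 0.1))  # 50.0
```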

Once the sensor has been exposed, the data must be transferred from the camera to the processor. In general, high-resolution cameras have lower maximum frame rates, and this is also affected by the data transfer interface. Five to 100 frames per second are typical in the field. Recently, several high-speed camera interfaces have become available, including Camera Link and the more recent CoaXPress standard.
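The frame-rate ceiling imposed by the interface alone is easy to estimate: bytes per frame divided into the link's byte rate. In this sketch the 8-bit mono format and the 6.25Gbit/s figure (one CoaXPress CXP-6 lane) are assumptions for illustration.

```python
def max_frame_rate(width_px, height_px, bytes_per_px, link_gbit_s):
    """Frame-rate ceiling imposed by interface bandwidth alone,
    ignoring protocol overhead."""
    bytes_per_frame = width_px * height_px * bytes_per_px
    return link_gbit_s * 1e9 / 8 / bytes_per_frame

# An ~8-Megapixel 8-bit mono camera on a single 6.25Gbit/s link:
# roughly 90-95 frames per second before the interface saturates
print(max_frame_rate(3840, 2160, 1, 6.25))
```

In practice, protocol overhead and sensor readout time reduce this further, which is why 5 to 100 frames per second remains the typical range in the field.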

The acquired images then have to be processed and analysed. The time taken to achieve this depends on the image content and the algorithms being used. Simple edge-finding or thresholding operations can be executed in sub-millisecond timeframes, even on large images, but advanced pattern recognition, character reading, or blob analysis can extend this by orders of magnitude. Higher speeds and more complex analyses are facilitated by increased processing power, and the most advanced systems use high-powered, intelligent cameras, and multiple, multi-core PCs with image processing distributed across them.
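To make the cost difference concrete, here is a minimal sketch of the two ends of that spectrum: a one-pass threshold, and a simple 4-connected blob count, which visits pixels many times over. This is a toy illustration in pure Python, not a production vision algorithm.

```python
def threshold(img, t):
    """Binarise a 2-D list of grey levels: a cheap, one-pass operation."""
    return [[1 if v >= t else 0 for v in row] for row in img]

def count_blobs(mask):
    """Count 4-connected foreground regions - a minimal blob analysis,
    markedly more expensive than thresholding alone."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]
                while stack:  # flood-fill this region
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y+1, x), (y-1, x), (y, x+1), (y, x-1)]
    return blobs

img = [[10, 200, 10, 10],
       [10, 210, 10, 190],
       [10, 10, 10, 180]]
print(count_blobs(threshold(img, 128)))  # 2 bright regions
```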

The bigger picture

Finally, whatever the technologies involved, the machine vision system must work smoothly with the organisation’s wider production and quality assurance processes. Getting this right requires attention to a host of factors that cover the full scope of the process. At the work-cell level, the system design should minimise the load on operators, reduce the possibility of errors and be reconfigured easily to accommodate product changeovers.

Across the production process, machine vision activities must integrate with other aspects of automation and quality control, from PLCs to check-weighers and label printers. At the management level, the system must store relevant product data in an accessible and regulatory-compliant manner. In the past, integrating machine vision systems in this way required extensive and labour-intensive custom programming, but today the availability of dedicated integration packages – such as Optimal’s synTI print and inspect system – has greatly simplified, accelerated and reduced the cost of such efforts, ensuring that machine vision is seamlessly integrated into the bigger picture.
