
New Artificial Intelligence Error Proofing Features Machine Learning Technology

Case Study from FANUC America Corporation

At a recent show, a FANUC LR Mate 200iD equipped with iRVision 2D Inspection Process checked for defects on a model car.

First, the robot picks up a model car from a ramp and places it in front of the cell for error proofing. 

While the car is in the error proofing station, the robot inspects it with a robot-mounted camera housed in an IP67 enclosure that includes a ring light.  The robot moves in a continuous motion around the car as the camera captures images to perform 11 inspections.  Within two seconds, the robot checks all of the wheels, headlights, doors, roof, license plate, and interior.  Each inspection uses FANUC’s new iRVision AI Error Proofing feature and was trained by presenting examples of good and bad parts.  The red ring light around the camera flashes briefly to highlight the location of each inspection.

Once the inspection is complete, the robot picks up the car and presents it oriented so that the found defect faces the front of the work cell, while a monitor displays an image of the failed inspection.

Finally, the robot returns the car to the back of the staging ramp and the process repeats.  

 

What is error proofing?  Error proofing verifies that a production process happens according to plan.  During production, there are typically known problem areas that result in the creation of a bad part.  Checking for potential issues during production allows manufacturers to scrap or rework unacceptable parts at the beginning of a run and correct issues before a full lot of parts is produced, which saves a significant amount of time and expense.  The new AI Error Proofing tool is designed to check for two distinct situations, and example images of both situations are needed to train the tool. For example, if the tool is used to check for the presence or absence of a welded nut, images of the part with the nut and without the nut are used to train the AI Error Proofing (AI EP) tool.  AI EP is not designed to detect flaws such as scratches or dents that occur in random positions on a part.

FANUC introduced iRVision in 2006 and continues to add new features and functionality each year that make iRVision easier to use and more powerful.  The new AI Error Proofing tool is built into iRVision and adds AI capability without any additional hardware.  Like every iRVision product, AI Error Proofing does not require an additional processor - all processing happens within FANUC’s highly reliable robot controller.  The same processor that controls the robot and its motion performs the vision processing, including the AI Error Proofing function. Since iRVision does not use a PC or smart camera, it does not negatively impact the reliability of a work cell.

What makes FANUC’s new AI Error Proofing artificial intelligence?  By providing multiple examples of good parts and bad parts, the AI Error Proofing tool is able to differentiate between the two during production runs.  During setup, the operator presents multiple examples of work pieces and classifies them into two categories: good and bad. Once the operator classifies the images, the AI Error Proofing feature automatically classifies the parts during production runs.
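To make the two-class, learn-from-examples idea concrete, the short Python sketch below trains a simple good/bad image classifier with scikit-learn. It is only an illustration of the concept - the actual AI Error Proofing tool is proprietary and runs on the robot controller, and the image sizes, pixel values, and helper names here are hypothetical.

# Illustrative sketch only: a two-class "good vs. bad" image classifier
# built with scikit-learn. FANUC's AI Error Proofing is proprietary and
# runs on the robot controller; the images, sizes, and values here are
# hypothetical stand-ins for operator-supplied examples.
import numpy as np
from sklearn.svm import SVC

def build_training_set(good_images, bad_images):
    # Flatten each example image; label class 1 = good, class 2 = bad.
    X = np.array([img.ravel() for img in good_images + bad_images])
    y = np.array([1] * len(good_images) + [2] * len(bad_images))
    return X, y

# Hypothetical 64x64 grayscale examples presented by the operator.
rng = np.random.default_rng(seed=0)
good = [rng.normal(0.8, 0.05, (64, 64)) for _ in range(10)]  # e.g. nut present
bad = [rng.normal(0.3, 0.05, (64, 64)) for _ in range(10)]   # e.g. nut missing

X, y = build_training_set(good, bad)
classifier = SVC(probability=True).fit(X, y)  # "teach" phase at setup

# Production phase: classify a newly captured image as class 1 or class 2.
new_image = rng.normal(0.8, 0.05, (64, 64))
print("Predicted class:", classifier.predict([new_image.ravel()])[0])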

Figure 1 shows an example of AI Error Proofing finding a welded nut on a shock mount bracket.  Examples of the welded nut and the missing nut were used in AI Error Proofing’s learning process.  In this example, class 1 was trained with the nut present and class 2 with the nut missing. Figure 1 shows the welded nut classified as class 1, highlighted in cyan.


Figure 1 Trained AI Error Proofing

Figure 2 shows an example of how the operator classifies examples during setup.  The operator classifies a plastic applicator with a lid as class 1 and one without a lid as class 2.  All class 1 examples are highlighted in cyan and all class 2 examples in orange.


Figure 2 Classification

Figure 3 shows the results of the classifications from Figure 2.  Multiple objects may be classified in the same image; Figure 3’s example shows two different applicators.  The one with a lid is highlighted in cyan and the one without a lid is highlighted in orange. In this case, iRVision’s GPM Locator Tool identifies the location and orientation of each applicator.  Combining the GPM Locator Tool’s pattern matching ability with the AI Error Proofing tool allows parts to be found and classified at the same time in the same image.  The combination of these tools allows the robot to pick plastic applicators from a conveyor and place the ones with a lid into the filling machine and the ones without into a reject bin.
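The locate-then-classify flow can be sketched as follows. The locate_parts, classify_roi, and robot interfaces are hypothetical stand-ins for the GPM Locator Tool, the AI Error Proofing tool, and the robot program; only the routing logic mirrors the description above.

# Hypothetical locate-then-classify loop mirroring the GPM Locator +
# AI Error Proofing combination described above. The locator, classifier,
# and robot interfaces are placeholder stubs, not the iRVision API.
def sort_applicators(image, locate_parts, classify_roi, robot, roi_size=64):
    for x, y, angle in locate_parts(image):          # pattern match: position + orientation
        roi = image[y:y + roi_size, x:x + roi_size]  # crop around the found part
        part_class = classify_roi(roi)               # 1 = lid present, 2 = lid missing
        if part_class == 1:
            robot.pick_and_place((x, y, angle), destination="filling_machine")
        else:
            robot.pick_and_place((x, y, angle), destination="reject_bin")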


Figure 3 Found Results

Because the AI model learns from examples, an operator can easily add images to the library.  During production startup, parts that are incorrectly categorized can be added to the learned data with the correct classification to improve the learned model.

In the current implementation, AI Error Proofing classifies each example as either class 1 or class 2.  If an example does not fall into either class, the output is undetermined, and the image can be added manually to the learned data to improve the model.  Along with the class, a confidence value is also output; the higher the confidence, the more certain the AI Error Proofing tool is that the example fits its assigned class.  Based on a user-defined threshold, the application can be set up to flag inspections with low confidence so the operator can manually add the example to the learned data and improve the learned model.
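The confidence-threshold workflow might look like the following sketch, where the threshold value, function names, and review queue are assumptions for illustration rather than the actual iRVision interface.

# Sketch of routing a classification result by confidence. The threshold,
# names, and queue are hypothetical; they only illustrate the workflow above.
REVIEW_THRESHOLD = 0.80  # user-defined confidence threshold

def handle_result(part_class, confidence, image_id, review_queue):
    # Undetermined or low-confidence results are queued so the operator
    # can relabel them and add them to the learned data.
    if part_class is None or confidence < REVIEW_THRESHOLD:
        review_queue.append(image_id)
        return "needs operator review"
    return "class %d (confidence %.2f)" % (part_class, confidence)

queue = []
print(handle_result(1, 0.95, "img_001", queue))    # confident result, accepted
print(handle_result(2, 0.55, "img_002", queue))    # low confidence, flagged
print(handle_result(None, 0.0, "img_003", queue))  # undetermined, flagged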

Like all iRVision products, AI Error Proofing supports both robot-mounted and fixed-mount cameras.  A robot-mounted camera allows the robot to inspect parts from multiple angles and locations. In many cases, a camera can be added to the tooling to add error proofing functionality with minimal impact on the existing process.  In other instances, it may be more cost-effective to add a new robot to position the camera in different locations around the part. The camera does not have to be robot-mounted - it can be fixed in the work cell to error proof one particular area of the work piece.  Since iRVision can support up to 27 cameras, any combination of robot-mounted and fixed-mount cameras can be used to error proof all the required areas of the work piece.

Companies that use AI Error Proofing do not need an experienced vision engineer to set up the process.  As long as the human eye can detect the differences between parts, AI Error Proofing will also be able to differentiate between work pieces.  AI Error Proofing can even be used in instances where an experienced vision engineer would struggle to do the job with conventional machine vision tools.

Even without AI Error Proofing, an experienced vision engineer may be able to set up an error proofing vision process for many applications using iRVision’s suite of tools - but it often takes a significant amount of time to set up and ensure reliability for the more complicated processes.  Using the AI Error Proofing feature to learn the difference between good and bad parts eliminates the need for an expert vision engineer. It also reduces the complexity of the vision setup, saving time and money during integration and startup.

Proper and consistent lighting is always important in machine vision applications.  With AI Error Proofing, it is less of a concern: by providing examples of the good and bad parts over a range of lighting conditions, AI Error Proofing can learn the difference between the examples and properly differentiate between good and bad work pieces.

FANUC’s iRVision cameras use a fixed focal length lens, which means that the field of view is a function of the selected lens and the distance from the camera to the viewing area.  By selecting the appropriate lens and standoff distance, the field of view required for the error proofing process can be achieved.  Typically, the more of the field of view the area to be error proofed fills, the more reliably AI Error Proofing can classify it.
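As a rough worked example of that relationship, the pinhole-camera approximation below estimates the field of view from the sensor size, focal length, and standoff distance; the numbers are illustrative, not FANUC camera specifications.

# Back-of-the-envelope field-of-view estimate for a fixed-focal-length lens
# (pinhole approximation). The sensor size, focal length, and standoff
# distance are illustrative values, not FANUC specifications.
def field_of_view_mm(sensor_mm, focal_length_mm, standoff_mm):
    # FOV dimension ~= sensor dimension * standoff distance / focal length
    return sensor_mm * standoff_mm / focal_length_mm

# Example: 1/3" sensor (about 4.8 mm x 3.6 mm), 12 mm lens, 500 mm standoff.
width = field_of_view_mm(4.8, 12.0, 500.0)   # ~200 mm
height = field_of_view_mm(3.6, 12.0, 500.0)  # ~150 mm
print("Field of view: about %.0f mm x %.0f mm" % (width, height))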

There is a misconception in machine vision that high-resolution imaging is always required.  In most robotic automation cases, high resolution is simply not necessary, and FANUC’s AI Error Proofing is designed to provide high performance with a standard resolution camera.

In summary, adding error proofing can improve a manufacturing process by catching manufacturing errors early, which increases production efficiency.  FANUC’s new AI Error Proofing iRVision tool makes it easy to add error proofing to any FANUC robot application, providing customers a variety of advantages, including:

  • Reduced lighting and camera resolution requirements.

  • Significantly fewer engineering hours needed to perfect the system.

  • Lower costs compared to traditional methods.

 
