
Robotic Bin Picking Made Simple(r)

The result of “thinking inside the bin” is a design featuring a six-axis Yaskawa robot, an IFM Effector 200 photoelectric distance-measuring sensor, and, mounted to an IAI servo-driven slide, a Cognex In-Sight 8000 camera.


Robotically bin-picking randomly oriented components has long been a challenge, one ordinarily solved by using a 3D vision system.

When Systematix (systematix-inc.com), a systems integrator, was presented with the task of developing an automated system to pick car seat lumbar actuator assemblies from a bin and place them into a wire nest for assembly, its first idea was to use a robot and a 3D sensor.

But then its engineers thought of something. They realized that each actuator in the bin didn’t have to be mapped in all three dimensions; two would suffice. By mounting a 2D camera on a vertical slide, each component could be located in X and Y alone.

Because sheets of cardboard separate the layers of randomly oriented parts, and each divider is removed once the parts on top of it have been picked, the Z axis (i.e., depth) needs to be measured only once per layer.
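The per-layer cycle described above can be sketched as a short simulation. All names here are illustrative, not Systematix’s actual design; the only idea taken from the article is that depth (Z) is sensed once per layer while each part is located in X-Y only.

```python
# Illustrative simulation of the per-layer pick cycle.
# A bin is modeled as a list of layers; each layer is a list of (x, y)
# part positions found by the 2D camera. Layer depth comes from a
# stand-in for the photoelectric distance sensor.

def measure_layer_depth(layer_index, layer_height_mm=50.0):
    """Stand-in for the distance sensor: one Z reading per layer."""
    return layer_index * layer_height_mm

def pick_all(bin_layers):
    """Return the full list of 3D pick poses, sensing Z once per layer."""
    picks = []
    for i, layer in enumerate(bin_layers):
        z = measure_layer_depth(i)   # single Z measurement for the layer
        for (x, y) in layer:         # 2D camera supplies X-Y per part
            picks.append((x, y, z))
        # the cardboard divider would be removed here before the next layer
    return picks

bin_layers = [[(10.0, 20.0), (35.0, 5.0)],  # top layer
              [(12.0, 18.0)]]               # next layer down
print(pick_all(bin_layers))
# → [(10.0, 20.0, 0.0), (35.0, 5.0, 0.0), (12.0, 18.0, 50.0)]
```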

The result of this “thinking inside the bin” is a design featuring a six-axis Yaskawa robot (motoman.com), an IFM Effector 200 photoelectric distance-measuring sensor (ifm.com), and, mounted to an IAI servo-driven slide (intelligentactuator.com), a Cognex (cognex.com) In-Sight 8000 camera. 

The camera uses RedLine, the latest iteration of PatMax, the geometric pattern-matching technology that Cognex first patented in 1996. Up until then, pattern matching technology relied upon a pixel-grid analysis process called normalized correlation. That method looks for statistical similarity between a gray-level model or reference image of an object and portions of the image to determine the object’s X-Y position. PatMax instead learns an object’s geometry from a reference image using a set of boundary curves tied to a pixel grid and then looks for similar shapes in the image without relying on specific gray levels. This approach, now widely used by numerous machine vision companies, greatly improves how accurately an object can be recognized despite differences in angle, size and shading.
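To make the contrast concrete, here is a minimal NumPy sketch of normalized correlation, the pre-PatMax approach the paragraph describes: slide a gray-level template over the image and score statistical similarity at each X-Y offset. (PatMax’s geometric boundary-curve matching is proprietary and is not reproduced here; this is only the older baseline.)

```python
import numpy as np

def normalized_correlation(image, template):
    """Return the (row, col) offset where the template best matches,
    using mean-subtracted normalized cross-correlation."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            window = image[r:r + th, c:c + tw]
            w = window - window.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            score = (w * t).sum() / denom if denom else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Embed a small patch in a larger image and recover its position.
rng = np.random.default_rng(0)
img = rng.random((20, 20))
patch = img[5:9, 7:11].copy()
print(normalized_correlation(img, patch))  # → (5, 7)
```

Because the score depends on pixel gray levels, this method degrades under changes in angle, scale, and lighting, which is exactly the weakness geometric pattern matching was patented to address.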

The system not only gets the job done in the required cycle time, but the long-proven 2D technology was presumably more cost-effective than a full 3D vision approach.

