3D Scanned Objects Help Robots to “See” the Way That Humans Do

Robots and humans don’t see things the same way. That’s not exactly a shocking statement, but some robots operate so efficiently and smoothly that it’s easy to forget that their vision isn’t as good as ours. The main difference is that robots see things very literally – they see exactly what is in front of them, nothing more, while humans have the ability to automatically fill in the parts of objects that are missing from our immediate view.

That’s part of why robots haven’t yet evolved to the point of helping us every day in our homes. A sink full of dirty dishes, for example, would baffle a robot. A human can look at that mess and know exactly what shape a plate or a cup is, even when it’s partly hidden behind other dishes; a robot sees only a partial shape and registers it as unknown. But a new algorithm created by a Duke University graduate student is letting robots fill in those gaps much the way humans do.

Ben Burchfiel

PhD candidate and Intelligent Robot Lab (IRL) member Ben Burchfiel and his thesis advisor and IRL director, Dr. George Konidaris, now an assistant professor of computer science at Brown University, have developed a technology that allows robots to look at an object from a single angle and infer both what it is and what its complete 3D shape should be, even if they’ve never seen it before and can’t see it in its entirety. Burchfiel and Konidaris trained the algorithm on a database of about 4,000 complete 3D scans of common household objects, such as furniture and appliances. Each 3D scan was converted into tens of thousands of voxels, the 3D equivalent of pixels, for easier processing.
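As a rough illustration of that preprocessing step, the sketch below turns a 3D scan (treated here as a simple point cloud) into a fixed-size binary occupancy grid. The 30-voxels-per-side resolution, the point-cloud input format, and the voxelize helper are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def voxelize(points, resolution=30):
    """Convert an Nx3 point cloud into a binary occupancy grid.

    points: (N, 3) array of x, y, z coordinates from a 3D scan.
    resolution: voxels per axis; 30**3 = 27,000 voxels, i.e. "tens of
                thousands" (the exact resolution is an assumption).
    """
    # Normalize the cloud into the unit cube so every object shares a frame.
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    scaled = (points - mins) / (maxs - mins).max()

    # Map each point to a voxel index and mark that voxel as occupied.
    idx = np.clip((scaled * (resolution - 1)).astype(int), 0, resolution - 1)
    grid = np.zeros((resolution, resolution, resolution), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Example: a synthetic spherical "scan" becomes a 27,000-voxel grid.
pts = np.random.randn(5000, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(voxelize(pts).sum(), "occupied voxels out of", 30**3)
```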

The algorithm learned different categories of objects using a variation on a technique called probabilistic principal component analysis. It sifted through the examples in each category and learned both how they vary and what they have in common. So when it sees something it has never encountered before, like an unusual coffee cup, it knows the general characteristics a coffee cup has and can recognize it as one, much the way a human would.
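The sketch below captures that idea in miniature: fit one low-dimensional linear model per category over flattened voxel grids, then classify a new grid by which category’s model explains it best and reconstruct the full shape from that model. It reuses the hypothetical voxel grids from the previous sketch, leans on scikit-learn’s PCA (whose score method uses the probabilistic PCA likelihood), and is a simplification of the idea rather than the authors’ actual Bayesian Eigenobjects implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def train_category_models(grids_by_category, n_components=20):
    """Fit one low-dimensional linear model per object category.

    grids_by_category: hypothetical dict mapping a category name to a list
    of complete voxel grids (each grid from a full 3D scan).
    """
    models = {}
    for name, grids in grids_by_category.items():
        X = np.stack([g.ravel() for g in grids]).astype(float)
        models[name] = PCA(n_components=n_components).fit(X)
    return models

def classify_and_complete(models, partial_grid):
    """Pick the category whose learned subspace best explains the query,
    then reconstruct the full shape from that subspace."""
    x = partial_grid.ravel().astype(float)[None, :]
    best_name, best_score, best_recon = None, -np.inf, None
    for name, pca in models.items():
        score = pca.score(x)  # average log-likelihood under probabilistic PCA
        recon = pca.inverse_transform(pca.transform(x))
        if score > best_score:
            best_name, best_score, best_recon = name, score, recon
    # Threshold the reconstruction back into an occupancy grid.
    completed = (best_recon.reshape(partial_grid.shape) > 0.5).astype(np.uint8)
    return best_name, completed
```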

To test the algorithm, Burchfiel and Konidaris fed it 908 new 3D examples of 10 kinds of household items, viewed only from the top. The algorithm correctly guessed what the objects were, and what their overall three-dimensional shapes should be, about 75 percent of the time, compared to just over 50 percent for the best alternative approach. It also recognized objects that were rotated in unusual ways, something competing methods have struggled to do.
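For context, that 75-percent figure is plain classification accuracy over held-out partial views. A minimal tally, reusing the hypothetical classify_and_complete helper from the sketch above (the test grids and labels are assumptions, not the authors’ dataset), might look like this:

```python
def accuracy(models, test_grids, test_labels):
    """Fraction of held-out partial views whose predicted category is correct."""
    correct = 0
    for grid, label in zip(test_grids, test_labels):
        predicted, _ = classify_and_complete(models, grid)
        correct += int(predicted == label)
    return correct / len(test_labels)

# e.g. accuracy(models, top_view_grids, true_labels) would be ~0.75
# in the experiment described above.
```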

Left: what the robot is shown; center: the robot’s guess of the entire object; right: the actual object.

The technology isn’t perfect yet, though. The algorithm is still tripped up by objects that look alike from a given viewpoint – it might confuse a table with a dresser when viewed from above, for example.

“Overall, we make a mistake a little less than 25 percent of the time, and the best alternative makes a mistake almost half the time, so it is a big improvement,” Burchfiel said. “But it still isn’t ready to move into your house. You don’t want it putting a pillow in the dishwasher.”

Burchfiel and Konidaris are working on improving and scaling up the algorithm; they want robots to be able to distinguish among thousands of objects at a time. The goal is to take robots out of the predictable, ordered environment of a laboratory or assembly line and into the messy, unpredictable environment of a typical home and have them function just as well.

“That has the potential to be invaluable in a lot of robotic applications,” Burchfiel said.

The research has been documented in a paper entitled “Bayesian Eigenobjects: A Unified Framework for 3D Robot Perception,” which you can read here. The research was supported in part by the Defense Advanced Research Projects Agency (DARPA). Discuss in the Robot Vision forum at 3DPB.com.

[Source: Duke University / Images: Burchfiel and Konidaris]

 
