One of the biggest challenges for robots working autonomously is their lack of intuition. In a task like grasping, for example, intuition lets humans look at an object and instantly make sound judgments about its fragility, shape, center of gravity, and the other factors needed to pick it up without dropping or damaging it.
But what is intuition, really, except knowledge built up from the thousands of examples of objects that we encounter in just the first few years of our lives? Professor George Konidaris of Brown University's Department of Computer Science (Brown CS) is a collaborator on recent research led by graduate student Benjamin Burchfiel of Duke University that provides such examples to help robots intuit how to deal with objects they haven't seen before.
Their work ("Bayesian Eigenobjects: A Unified Framework for 3D Robot Perception"), presented last week at the 2017 Robotics: Science and Systems Conference (RSS), trains an algorithm on a dataset of more than 4,000 3D scans of common objects, and then uses probabilistic analysis to categorize them and learn how objects in each category do and do not vary. When the robot encounters a partial view of a new object, it can guess which category the object belongs in, estimate its position in the world, and even imagine what the concealed parts of the object look like. Their method is three times faster than other current techniques, and makes fewer mistakes.
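The core idea behind completing an unseen object from a partial view can be illustrated with a toy sketch in the spirit of eigenfaces: learn a low-dimensional linear basis from complete voxelized objects, then recover a new object's full shape by least-squares projection onto that basis using only its observed voxels. Everything below (grid size, rank, variable names) is illustrative and not taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 200 "objects", each a flattened 8x8x8 voxel
# grid, generated from a 3-dimensional latent space so that a low-rank
# linear basis genuinely explains the data.
n_train, dim, rank = 200, 8 * 8 * 8, 3
latent = rng.normal(size=(n_train, rank))
basis_true = rng.normal(size=(rank, dim))
X = latent @ basis_true  # shape (n_train, dim)

# Learn the "eigenobject" basis via PCA (SVD of mean-centered data).
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
E = Vt[:rank]  # top principal directions, shape (rank, dim)

def complete(partial, observed_mask):
    """Estimate the full voxel grid from the observed voxels only."""
    A = E[:, observed_mask].T                      # (n_observed, rank)
    b = partial[observed_mask] - mean[observed_mask]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)  # fit basis weights
    return mean + coeffs @ E                       # full reconstruction

# Hide half the voxels of a held-out object and reconstruct it.
obj = rng.normal(size=rank) @ basis_true
mask = rng.random(dim) < 0.5
estimate = complete(obj, mask)
err = np.max(np.abs(estimate - obj))
print(f"max reconstruction error: {err:.2e}")
```

Because the held-out object lies in the same low-rank subspace as the training data, the hidden voxels are recovered almost exactly; the actual method additionally reasons probabilistically over object categories and pose, which this linear sketch omits.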
A new article on the research is available at Phys.org.
For more information, contact Brown CS Communication Outreach Specialist Jesse C. Polhemus.