Science – Artificial Intelligence: So that robots can see the world as humans do


Madrid, 29 (Europa Press)

Researchers at the Massachusetts Institute of Technology have developed a machine-learning model to enable robots to understand the basic relationships between objects in a scene.

When humans look at a scene, they see the objects and the relationships between them. On your desk, there may be a laptop to the left of a phone, which is in front of a computer monitor, the Massachusetts Institute of Technology (MIT) explains in a statement.

Many deep-learning models struggle to see the world this way because they do not understand the relationships between individual objects. Without knowing these relationships, a robot designed to help someone in a kitchen would have trouble following a command like “Pick up the spatula to the left of the stove and place it on the cutting board.”

The new model represents individual relationships one at a time, and then aggregates these representations to describe the overall scene. This allows the model to generate more accurate images from text descriptions, even when the scene includes multiple objects arranged in different relationships to one another.
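The idea of scoring each relationship independently and summing the scores can be illustrated with a toy example. The sketch below is purely hypothetical and is not MIT's actual model: it treats each "left of" relationship as a small penalty function on 1-D object positions, aggregates the penalties by summing them, and nudges objects until every relationship is satisfied.

```python
# Hypothetical toy sketch (not the MIT model): compose a scene from
# per-relation constraints, scored one relationship at a time.

def energy(relations, pos, margin=1.0):
    # Aggregate score: sum of per-relation penalties, zero only when
    # every "a is left of b" relationship holds with the given margin.
    return sum(max(0.0, pos[a] - pos[b] + margin) for a, b in relations)

def arrange(relations, objects, lr=0.5, steps=200, margin=1.0):
    # Start all objects at the same spot, then repeatedly nudge the
    # pair involved in any violated relation until all penalties vanish.
    pos = {o: 0.0 for o in objects}
    for _ in range(steps):
        for a, b in relations:
            if pos[a] - pos[b] + margin > 0:  # relation still violated
                pos[a] -= lr   # push a to the left
                pos[b] += lr   # push b to the right
    return pos

# "laptop left of phone" and "phone left of monitor", as in the desk example.
relations = [("laptop", "phone"), ("phone", "monitor")]
layout = arrange(relations, ["laptop", "phone", "monitor"])
```

Because each relationship contributes its own independent term, adding a new constraint is just adding one more penalty to the sum — a rough analogue of how composing per-relation representations lets the model handle scenes with many objects.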

This work could be applied in situations where industrial robots must perform complex multi-step manipulation tasks, such as stacking items in a warehouse or assembling machinery. It also moves the field one step closer to machines that can learn from and interact with their environments the way humans do.

“When I look at a table, I can’t say there is an object at position XYZ. Our minds don’t work that way. In our minds, when we understand a scene, we really understand it based on the relationships between the objects. We think that by building a system that can understand the relationships between objects, we can use that system to manipulate and change our environments more effectively,” says Yilun Du, a PhD student at the Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper.

The research will be presented at the Neural Information Processing Systems conference in December.
