Google's DeepMind AI enables robots to perform novel tasks

Source: IANS

Google has demonstrated its first vision-language-action (VLA) model for robot control, which shows improved generalization capabilities as well as semantic and visual understanding (the grasp of words, sentences, and images) beyond the robotic data it was exposed to.

This includes interpreting new commands and responding to user requests by performing rudimentary reasoning, such as reasoning about object categories or high-level descriptions.

The Robotic Transformer 2 (RT-2) is a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalized instructions for robotic control, according to Google DeepMind, the tech giant's artificial intelligence subsidiary.

A robot that can perform multiple tasks

A traditionally trained robot that can pick up a ball may stumble when asked to pick up a cube. With RT-2's more flexible approach, a robot trained to pick up a ball can figure out how to adjust its extremities to pick up a cube or another toy it has never seen before.

"We also show that incorporating chain-of-thought reasoning allows RT-2 to perform multi-stage semantic reasoning, like deciding which object could be used as an improvised hammer (a rock), or which type of drink is best for a tired person (an energy drink),” said the team behind it.

The latest model builds upon Robotic Transformer 1 (RT-1), which was trained on multi-task demonstrations.

The team ran a series of qualitative and quantitative experiments on RT-2 models across more than 6,000 robotic trials.

The potential of vision-language models

The RT-2 model shows that vision-language models (VLMs) can be transformed into powerful vision-language-action (VLA) models, which can directly control a robot by combining VLM pre-training with robotic data.
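
The article does not explain how robot actions can come out of a language model, so the following is a minimal, hypothetical Python sketch of the general idea behind such a combination: continuous robot actions are discretized into integer tokens so the same text-generating head that produces words can also produce motor commands. The bin count, action dimensions, and function names are illustrative assumptions, not details taken from RT-2.

```python
# Hypothetical sketch (not DeepMind's code): discretize each action dimension
# into a fixed number of bins and emit the bin indices as a token string.
import numpy as np

ACTION_BINS = 256  # assumed bin count, for illustration only

def action_to_tokens(action, low, high, bins=ACTION_BINS):
    """Map a continuous action vector (e.g. end-effector deltas, gripper state)
    to a string of integer tokens a language model could be trained to produce."""
    action = np.clip(action, low, high)
    ids = np.round((action - low) / (high - low) * (bins - 1)).astype(int)
    return " ".join(str(i) for i in ids)

def tokens_to_action(token_str, low, high, bins=ACTION_BINS):
    """Invert the mapping so a robot controller can execute the model's output."""
    ids = np.array([int(t) for t in token_str.split()])
    return low + ids / (bins - 1) * (high - low)

# Example: a 7-D action (position delta, rotation delta, gripper open/close)
low, high = np.full(7, -1.0), np.full(7, 1.0)
tokens = action_to_tokens(np.array([0.1, -0.2, 0.0, 0.3, 0.0, 0.0, 1.0]), low, high)
print(tokens)                          # e.g. "140 102 128 166 128 128 255"
print(tokens_to_action(tokens, low, high))
```

In this framing, training on robotic data amounts to fine-tuning a pre-trained VLM to emit such token strings when shown a camera image and an instruction, which is one way the pre-training and the robot control signal can share a single model.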

"RT-2 is not only a simple and effective modification over existing VLM models, but also shows the promise of building a general-purpose physical robot that can reason, problem solve, and interpret information for performing a diverse range of tasks in the real -world,” said Google DeepMind.

