Researchers from MIT, Harvard and Yale have discovered a paradox in the field of artificial intelligence (AI) training that could represent a breakthrough in accelerating the development of intelligent robots.
They discovered that AI systems that learn in quiet, controlled environments at times outperformed those trained in noisy, unpredictable conditions when deployed in the real world, according to their paper, “The Indoor-Training Effect: unexpected gains from distribution shifts in the transition function.”
It’s like saying that a tennis player who practiced on a quiet court will, in an actual match with windy conditions and a crowd roaring in the background, outperform another player who trained in those real-world conditions.
But this was exactly what the researchers found.
“Surprisingly, we found that under certain conditions, training in a noise-free environment can lead to better performance when tested in a noisy environment,” they wrote in their paper. Here, noise refers to the unpredictable factors an agent encounters when it interacts with the real world.
This “Indoor-Training Effect” challenges conventional wisdom in AI training. The traditional belief is that training a system in the same environment where it will eventually be deployed will result in better performance, because the system already knows what to expect.
The researchers found the opposite. That’s because training the AI in quiet, controlled conditions lets it master the basics, which it can then better apply to the real world. Counterintuitively, this results in better performance than an AI trained in messier conditions akin to the real world.
What It Means for Robotics

This finding can help improve the training of robots. Robots, like those deployed in warehouses, often have to work in a noisy, busy environment. The belief is that for robots to do well in real situations, they need to be trained in similarly busy and noisy settings. But the challenge in training these robots is the difficulty of duplicating real-world conditions in the lab.
The research team’s findings could make the training of robots simpler: Robots can be trained in quiet, calm surroundings, like those in the lab, and then placed in messy, real-world situations, where they can still outperform robots trained in a noisy environment.
They wrote, “robotic systems could be trained in simplified, controlled settings to master essential skills without the interference of noise” and the training still could “enhance the robots’ ability to adapt and perform in real-world conditions where unpredictability and noise are prevalent.”
Researchers further said that these training strategies could lead to “more robust, adaptable robots capable of navigating and executing tasks effectively” in diverse and changing situations.
Traditionally, robots were trained through pre-programming, according to an Nvidia blog post. “These succeeded in predefined environments but struggled with new disturbances or variations and lacked the robustness needed for dynamic real-world applications.” However, human trainers do add “noise and disturbances” during training of the robots so these machines can “learn to react well to unexpected events.”
But techniques to make robots more versatile are being developed. Last month, researchers from MIT created an AI system that could enable warehouse robots to handle odd-shaped packages and navigate crowded spaces without posing a danger to human workers.
The Experiment

The study examined the behavior of AI systems called reinforcement learning agents. These agents learn through trial and error. Traditionally, reinforcement learning agents are trained in conditions that closely match the environment where they are expected to be deployed to perform well.
The team set out to examine this belief. It tested the performance of these agents in three classic Atari games: Pac-Man, Pong and Breakout. The researchers modified the games by adding some uncertainty, or noise.
The researchers divided the agents into two groups: “learnability” agents, which were trained and tested in the same noisy environment, and “generalization” agents, which were trained in a noise-free environment but tested in a noisy one.
The results: Generalization agents outperformed learnability agents in a number of cases, even under high-noise conditions.
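The experimental design above can be sketched in code. The following is not the researchers’ actual setup (they used Atari games); it is a minimal toy illustration of the same train/test split, using tabular Q-learning on a small chain-shaped world where “noise” randomly overrides the agent’s chosen action. The environment, noise model and hyperparameters are all illustrative assumptions.

```python
import random

def step(state, action, n_states, noise):
    # With probability `noise`, the chosen action is replaced by a random
    # one -- a simple stand-in for noise in the transition function.
    if random.random() < noise:
        action = random.choice([-1, 1])
    state = max(0, min(n_states - 1, state + action))
    reward = 1.0 if state == n_states - 1 else 0.0
    return state, reward, state == n_states - 1

def train(noise, n_states=10, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    # Tabular Q-learning; optimistic initial values (1.0) encourage exploration.
    q = {(s, a): 1.0 for s in range(n_states) for a in (-1, 1)}
    for _ in range(episodes):
        s = random.randrange(n_states - 1)  # random non-terminal start state
        for _ in range(4 * n_states):
            if random.random() < eps:
                a = random.choice([-1, 1])
            else:
                a = max((-1, 1), key=lambda x: q[(s, x)])
            s2, r, done = step(s, a, n_states, noise)
            target = r if done else r + gamma * max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
            if done:
                break
    return q

def evaluate(q, noise, n_states=10, episodes=200):
    # Fraction of episodes in which the greedy policy reaches the goal.
    wins = 0
    for _ in range(episodes):
        s = 0
        for _ in range(4 * n_states):
            a = max((-1, 1), key=lambda x: q[(s, x)])
            s, _, done = step(s, a, n_states, noise)
            if done:
                wins += 1
                break
    return wins / episodes

random.seed(0)
TEST_NOISE = 0.3
generalization = train(noise=0.0)        # "indoor" training: noise-free
learnability = train(noise=TEST_NOISE)   # trained in the noisy test setting
print("generalization:", evaluate(generalization, TEST_NOISE))
print("learnability:  ", evaluate(learnability, TEST_NOISE))
```

In an environment this simple, both agents typically learn the task; the point of the sketch is the experimental design (noise-free training evaluated under test-time noise versus matched noisy training), not a reproduction of the paper’s results.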
However, the researchers noted two main limitations of their experiment: they used only Atari games, and they drew their conclusions from classical reinforcement learning methods, so they were unsure whether the findings extend to deep reinforcement learning approaches as well.
The post MIT Discovers AI Training Paradox That Could Boost Robot Intelligence appeared first on PYMNTS.com.