
Using Generalization Techniques to make AI Systems more Versatile

A group at DeepMind called the Open-Ended Learning Team has developed a new way to train AI systems to play games. Instead of exposing agents to millions of prior games, as is done with other game-playing AI systems, the team gives its agents a minimal set of skills, which they use to achieve a simple goal (such as spotting another player in a virtual world) and then build on. The researchers created XLand, a colorful virtual world with a general video game appearance. In it, AI players, which the researchers call agents, set off to achieve a general goal, and as they do so, they acquire skills that they can apply to other goals. The researchers then switch the game around, giving the agents a new goal while allowing them to retain the skills they learned in prior games. The group has written a paper describing its work and posted it on the arXiv preprint server.

One example of the technique involves an agent attempting to reach a part of its world that is too high to climb onto directly and for which there are no access points such as stairs or ramps. While bumbling around, the agent discovers that it can move a flat object into place to serve as a ramp and thus make its way up to where it needs to go. To allow their agents to learn more skills, the researchers created 700,000 scenarios or games in which the agents faced approximately 3.4 million unique tasks. By taking this approach, the agents were able to teach themselves how to play multiple games, such as tag, capture the flag and hide-and-seek. The researchers call their approach endlessly challenging. Another interesting aspect of XLand is that there exists a sort of overlord, an entity that keeps tabs on the agents, notes which skills they are learning, and then generates new games to strengthen those skills. With this approach, the agents will keep learning as long as they are given new tasks.
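The overlord-style training loop described above can be sketched in miniature. The code below is a hypothetical toy illustration, not DeepMind's actual system: skills are reduced to named flags, and the task generator simply builds each new game around skills the agent has plus one it has not yet acquired, so learning continues as long as new tasks arrive.

```python
import random

class Agent:
    """Toy agent: its 'policy' is just the set of skills it has acquired."""
    def __init__(self):
        self.skills = set()

    def attempt(self, task):
        # Succeeds only if it already has every skill the task requires.
        return task["required"] <= self.skills

    def learn_from(self, task):
        # On a failed attempt, the agent picks up one missing required skill.
        missing = task["required"] - self.skills
        if missing:
            self.skills.add(sorted(missing)[0])

def generate_task(agent, skill_pool):
    """'Overlord': builds a task just beyond the agent's current ability."""
    known = [s for s in skill_pool if s in agent.skills]
    unknown = [s for s in skill_pool if s not in agent.skills]
    # Reuse a couple of known skills so earlier learning is retained...
    required = set(random.sample(known, min(2, len(known))))
    if unknown:
        # ...and always include one skill the agent has not learned yet.
        required.add(unknown[0])
    return {"required": required}

# Hypothetical skill names loosely inspired by the article's examples.
SKILLS = ["navigate", "spot_player", "move_object", "build_ramp", "cooperate"]

agent = Agent()
for _ in range(10):
    task = generate_task(agent, SKILLS)
    if not agent.attempt(task):
        agent.learn_from(task)

print(sorted(agent.skills))
```

After a handful of rounds the toy agent has acquired every skill in the pool, because each generated task sits just past its current competence, which is the essence of the "keep learning as long as new tasks arrive" idea.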

In running their virtual world, the researchers found that the agents picked up new skills, generally by accident, found them useful, and then built on them, leading to more advanced behaviors such as experimenting when running out of options, cooperating with other agents and using objects as tools. They suggest their approach is a step toward generally capable algorithms that learn to play new games on their own, skills that might one day be used by autonomous robots.
