DeepMind AI is secretly lurking on the public StarCraft II 1v1 ladder – Ars Technica


Have they taught it to BM yet? —

European players who opt-in have a chance to match up against AlphaStar.


DeepMind AI is secretly lurking on the public StarCraft II 1v1 ladder

Google’s DeepMind is making its state-of-the-art AI play StarCraft II again. We previously saw the AI, called “AlphaStar,” take on pro StarCraft II players in show matches, but now AlphaStar is ready to take on the public and faceroll scrubs on the 1v1 European multiplayer ladder.

Just like last time, AlphaStar is being built with the cooperation of Blizzard (StarCraft II‘s developer), and the official SC2 website has the details of AlphaStar’s new incarnation. The in-game UI now has a “DeepMind opt-in button” on the 1v1 Versus menu, which mixes instances of AlphaStar into the pool of human multiplayer players. AlphaStar will be playing the 1v1 ladder anonymously, so you won’t know whether you’re playing AlphaStar or a human (though I guess you could try asking your opponent). Blizzard says that “having AlphaStar play anonymously helps ensure that it is a controlled test, so that the experimental versions of the agent experience gameplay as close to a normal 1v1 ladder match as possible.” Players will be paired against AlphaStar according to the normal matchmaking rules, and a win or loss will count just as it would against a human.

The post contains a number of implementation details for this new version of AlphaStar, which sounds like a huge improvement over the version that played StarCraft II pros back in January. First, a number of improvements were made to bring the AI’s speed capabilities more in line with a human player. As an AI company, DeepMind’s stated goal with this experiment is to play SC2 on a level playing field and teach an AI things like thinking and long-term planning—basically, strategy. At a very high level, you could say the two big components to any StarCraft victory are “speed” and “strategy.” DeepMind’s previous AI experiments were in turn-based games like Chess and Go, where the speed at which you can move the pieces doesn’t really matter. As a real-time game, speed is a big factor in any SC2 victory, and in previous games, AlphaStar at times demonstrated superhuman speed that gave it an unfair advantage and muddied the experiment results.

A screenshot from the AlphaStar v TLO game in January.


While humans are forced to play StarCraft by moving their fingers over a keyboard and mouse, AlphaStar was wired directly into the game via a Blizzard-made API. For humans, SC2 is about spinning multiple plates at once, like managing your base expansion, positioning units, controlling your armies during battle, and doing this all through the limited in-game camera. With AlphaStar’s direct control of the game, it would be relatively easy to build an AI that wins with god-like speed and multitasking, with superhuman reaction times, perfect control of every unit in the game, and full visibility of everything happening on the map. Limiting AlphaStar’s speed and access to the game is critical to ensuring that any victories are due to a superior strategy.

Blizzard says this new version of AlphaStar now “perceives the game using a camera-like view,” which was not always the case in January. Back then, when AlphaStar played Grzegorz “MaNa” Komincz, the AI bot first won 5-0 with an unfair, unrestricted view of the game. A global view gives the AI more information than a human player can normally take in, along with faster reaction times and easier multitasking. In the single MaNa v AlphaStar game where AlphaStar was more fairly limited by the in-game camera, it lost. In this new version, Blizzard notes that “AlphaStar doesn’t receive information about its opponent unless it is within the camera’s field of view, and it can only move units to locations within its view.”
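The camera restriction Blizzard describes boils down to two checks: filter what the agent can see through the viewport, and reject orders targeting off-screen locations. Here's a toy sketch of that idea; the names (`Rect`, `visible_enemies`, `can_issue_move`) and coordinates are hypothetical, not DeepMind's or Blizzard's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned camera viewport in map coordinates (illustrative)."""
    x: float
    y: float
    w: float
    h: float

    def contains(self, px, py):
        return (self.x <= px <= self.x + self.w
                and self.y <= py <= self.y + self.h)

def visible_enemies(camera, enemies):
    # The agent only "sees" opponents inside the camera's field of view.
    return [e for e in enemies if camera.contains(e[0], e[1])]

def can_issue_move(camera, target):
    # Move orders are restricted to locations inside the current view.
    return camera.contains(*target)

camera = Rect(x=40.0, y=40.0, w=24.0, h=14.0)
enemies = [(50.0, 45.0), (10.0, 90.0)]
print(visible_enemies(camera, enemies))        # only the on-screen enemy
print(can_issue_move(camera, (100.0, 100.0)))  # False: off-screen target
```

Under this kind of restriction, acting on something elsewhere on the map first requires spending actions to move the camera there, just as a human would.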

AlphaStar should also be blocked off from the superhuman unit control it demonstrated in the January matches. A player’s speed at controlling StarCraft is measured in “APM,” or “Actions Per Minute,” where every camera movement, unit click, or base function counts as an action. In January, DeepMind only limited AlphaStar’s APM in five-second increments, which meant it could achieve superhuman burst APM for seconds at a time when it needed to. When a battle starts and there are dozens of units to control, this superhuman burst APM could easily be the difference between victory and defeat. In the new version, the peak APM has been capped. Blizzard says the new APM requirements “are more restrictive than DeepMind’s demonstration matches back in January and have been applied in consultation with pro players.”
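To see why a windowed budget still allows bursts, here's a minimal sketch of a sliding-window action limiter. The class name and the numbers are hypothetical (DeepMind's exact caps aren't given in this article): an agent limited to 22 actions per five-second window averages a human-like ~264 APM, yet can still spend its entire budget in a fraction of a second:

```python
from collections import deque

class APMLimiter:
    """Toy sliding-window action limiter. Names and numbers are
    illustrative, not DeepMind's actual implementation."""

    def __init__(self, max_actions, window_seconds=5.0):
        self.max_actions = max_actions   # action budget per window
        self.window = window_seconds     # window length in seconds
        self.timestamps = deque()        # times of recent actions

    def try_act(self, now):
        # Forget actions that have slid out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True                  # action allowed
        return False                     # budget exhausted

# 22 actions per 5 s is ~264 APM on average, but the whole budget
# can be spent within the first tenth of a second of the window.
limiter = APMLimiter(max_actions=22, window_seconds=5.0)
burst = sum(limiter.try_act(t * 0.001) for t in range(100))
print(burst)  # 22 — all allowed almost instantly
```

Capping peak APM, as the new version reportedly does, amounts to enforcing a much shorter window (or a per-second limit) on top of the average, so front-loaded bursts like this one are no longer possible.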

This new version of AlphaStar also seems a lot more feature complete. It can now play as and against any of the game’s three races, whereas in January it was only trained to play one race, Protoss. There also isn’t just one new version of AlphaStar out there. Blizzard’s post says that “DeepMind will be benchmarking the performance of a number of experimental versions of AlphaStar to enable DeepMind to gather a broad set of results during the testing period.”

DeepMind promises at some point it will release these ladder results in a peer-reviewed scientific paper, along with replays of AlphaStar’s matches. Good luck, everyone! Go out there and win one for Team Human.
