Policy or Value? Loss Function and Playing Strength in AlphaZero-like Self-play
Recently, AlphaZero has achieved outstanding performance in playing Go, Chess, and Shogi. Players in AlphaZero combine Monte Carlo Tree Search with a deep neural network that is trained using self-play. The unified network has a policy head and a value head. During training, AlphaZero minimizes the sum of the policy loss and the value loss. However, it is not clear whether, and under which circumstances, other formulations of the objective function are better. Therefore, in this paper, we perform experiments with combinations of these two optimization targets. Self-play is a computationally intensive method; by using small games, we are able to perform multiple test cases. We use a lightweight open-source reimplementation of AlphaZero on two different games. We investigate optimizing the two targets independently, and also try different combinations (sum and product). Our results indicate that, at least for relatively simple games such as 6x6 Othello and Connect Four, optimizing the sum, as AlphaZero does, performs consistently worse than other objectives, in particular optimizing only the value loss. Moreover, we find that care must be taken in computing the playing strength: tournament Elo ratings differ from training Elo ratings, and training Elo ratings, though cheap to compute and frequently reported, can be misleading and may lead to bias. It is currently not clear how these results transfer to more complex games, and whether there is a phase transition between our setting and the AlphaZero application to Go, where the sum is seemingly the better choice.
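To make the compared objectives concrete, the sketch below shows how the policy loss, the value loss, and their combinations (sum and product) could be computed for a two-headed network. This is a minimal PyTorch-style illustration under stated assumptions, not the paper's actual implementation; the function names and tensor shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def alphazero_losses(policy_logits, value_pred, mcts_policy, game_outcome):
    """Compute the two AlphaZero training targets.

    policy_logits: raw policy-head output, shape (batch, num_moves)
    value_pred:    value-head output in [-1, 1], shape (batch,)
    mcts_policy:   MCTS visit-count distribution pi, shape (batch, num_moves)
    game_outcome:  final game result z for the player to move, shape (batch,)
    """
    # Policy loss: cross-entropy between the MCTS target pi and the network policy p.
    policy_loss = -(mcts_policy * F.log_softmax(policy_logits, dim=1)).sum(dim=1).mean()
    # Value loss: mean squared error between the predicted value v and the outcome z.
    value_loss = F.mse_loss(value_pred, game_outcome)
    return policy_loss, value_loss

def training_objective(policy_loss, value_loss, mode="sum"):
    """The objective variants compared in the paper: policy-only, value-only,
    their sum (the AlphaZero default), and their product."""
    if mode == "policy":
        return policy_loss
    if mode == "value":
        return value_loss
    if mode == "sum":
        return policy_loss + value_loss
    if mode == "product":
        return policy_loss * value_loss
    raise ValueError(f"unknown mode: {mode}")
```

Under this formulation, the paper's value-only variant corresponds to `mode="value"`, i.e. the policy head still exists to guide MCTS, but gradients are driven only by the value target.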