TY - GEN
T1 - Self-supervised Contrastive Learning for Predicting Game Strategies
AU - Lee, Young Jae
AU - Baek, Insung
AU - Jo, Uk
AU - Kim, Jaehoon
AU - Bae, Jinsoo
AU - Jeong, Keewon
AU - Kim, Seoung Bum
N1 - Funding Information:
Acknowledgments. This research was supported by the Agency for Defense Development (UI2100062D).
Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - Many games enjoyed by players primarily rely on a matching system that allows a player to cooperate or compete with other players who have similar scores. However, matching based only on play score can easily cause players to lose interest because it does not consider the opponent’s playstyle or strategy. In this study, we propose a self-supervised contrastive learning framework that enhances the understanding of game replay data to create a more sophisticated matching system. We use actor-critic-based reinforcement learning agents to collect a large amount of replay data. To perform contrastive learning, we define positive pairs as frames sampled from the same replay data and treat frames from different replays as negatives. To evaluate the performance of the proposed framework, we use Facebook ELF, a real-time strategy game, to collect replay data and extract features from pre-trained neural networks. Furthermore, we apply k-means clustering to the extracted features to visually demonstrate that different play patterns and proficiencies can be clustered appropriately. We present our clustering results on replay data and show that the proposed framework captures the nature of data with consecutive frames.
AB - Many games enjoyed by players primarily rely on a matching system that allows a player to cooperate or compete with other players who have similar scores. However, matching based only on play score can easily cause players to lose interest because it does not consider the opponent’s playstyle or strategy. In this study, we propose a self-supervised contrastive learning framework that enhances the understanding of game replay data to create a more sophisticated matching system. We use actor-critic-based reinforcement learning agents to collect a large amount of replay data. To perform contrastive learning, we define positive pairs as frames sampled from the same replay data and treat frames from different replays as negatives. To evaluate the performance of the proposed framework, we use Facebook ELF, a real-time strategy game, to collect replay data and extract features from pre-trained neural networks. Furthermore, we apply k-means clustering to the extracted features to visually demonstrate that different play patterns and proficiencies can be clustered appropriately. We present our clustering results on replay data and show that the proposed framework captures the nature of data with consecutive frames.
KW - Game matching system
KW - Reinforcement learning
KW - Self-supervised contrastive learning
UR - http://www.scopus.com/inward/record.url?scp=85137974388&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-16072-1_10
DO - 10.1007/978-3-031-16072-1_10
M3 - Conference contribution
AN - SCOPUS:85137974388
SN - 9783031160714
T3 - Lecture Notes in Networks and Systems
SP - 136
EP - 147
BT - Intelligent Systems and Applications - Proceedings of the 2022 Intelligent Systems Conference IntelliSys Volume 1
A2 - Arai, Kohei
PB - Springer Science and Business Media Deutschland GmbH
T2 - Intelligent Systems Conference, IntelliSys 2022
Y2 - 1 September 2022 through 2 September 2022
ER -