Motivated by the quality of the work presented in [6], [7], [8], we aim to address the limitations of outdated studies and incomplete analyses of trending ML and RL techniques in game design and development. This paper presents insights into AI implementation in game development, with an emphasis on ML and RL toolkits. It proposes a comprehensive evaluation framework based on a qualitative comparison to support the game development community. In this study, we examine the applications of ML and RL toolkits in gaming, along with their challenges and trends.
The remainder of this paper is organized as follows: Section 2 provides an overview of the evolution of the global gaming industry. Section 3 introduces the fundamental concepts of ML and its sub-fields. Section 4 details the state of the art of available ML and RL toolkits. Section 5 presents our qualitative evaluation methodology, articulated around a specific set of technical criteria. Section 6 outlines the key findings of the evaluation analysis. Finally, Sections 7 and 8 discuss this study and conclude with future work.
learning algorithms [39]. The OpenAI Gym toolkit encompasses a collection of tasks, called environments, ranging from Atari games and board games to 2D and 3D physics simulations for serious games [40]. It is used to train agents by implementing and comparing various ML and RL algorithms through a shared interface. OpenAI Gym is therefore mainly used for standardization and benchmarking purposes.
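As a concrete illustration, the loop below is a minimal sketch of the classic Gym interaction pattern (the four-tuple `step` API of earlier Gym releases); the environment name is one of Gym's standard benchmarks, and the random action stands in for a learned policy:

```python
import gym

# Every Gym environment exposes the same reset/step interface; this
# shared contract is what makes standardized benchmarking possible.
env = gym.make("CartPole-v1")
obs = env.reset()
for _ in range(1000):
    action = env.action_space.sample()          # stand-in for a policy
    obs, reward, done, info = env.step(action)  # classic 4-tuple API
    if done:
        obs = env.reset()
env.close()
```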
In this study, we propose a qualitative evaluation methodology that uses a set of eleven specific technical criteria (see the following subsections). Each candidate ML/RL toolkit introduced in Section 4 is evaluated using the following qualitative data collection techniques: 1) interviews with game design and development experts; 2) technical experimentation and observation; and 3) documentation, including scientific publications and technical reports.
Portability in ML/RL toolkits is the ability to use the same toolkit across different environments. Its prerequisite is a generalized abstraction between the toolkit's logic and its interfaces. When a toolkit with the same functionality must be deployed in several environments, portability becomes the key lever for reducing development cost.
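The sketch below illustrates one way to realize such an abstraction in Python; all class and method names here are hypothetical and not taken from any of the surveyed toolkits:

```python
from abc import ABC, abstractmethod

# Hypothetical abstraction layer: toolkit logic programs only against
# EnvInterface, so the same training code runs on any backend.
class EnvInterface(ABC):
    @abstractmethod
    def reset(self):
        """Return the initial observation."""

    @abstractmethod
    def step(self, action):
        """Return (observation, reward, done, info)."""

class DesktopBackend(EnvInterface):
    def reset(self): return [0.0]
    def step(self, action): return [0.0], 0.0, True, {}

class WebBackend(EnvInterface):
    def reset(self): return [0.0]
    def step(self, action): return [0.0], 0.0, True, {}
```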
Interoperability refers to the capability of different ML/RL toolkits to communicate with one another, and with game engines, freely and easily. Interoperable toolkits exchange information in real time without requiring specialized or behind-the-scenes glue code.
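For example, Unity ML-Agents ships a wrapper that exposes a Unity build through the Gym API, so any Gym-compatible RL library can drive it. The sketch below assumes the `gym-unity` package; exact import paths vary across ML-Agents releases, and the build name is a placeholder:

```python
from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper

# Wrap a compiled Unity environment so it behaves like any Gym env.
unity_env = UnityEnvironment(file_name="3DBall")  # placeholder build path
env = UnityToGymWrapper(unity_env)

obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```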
Learning strategies are the different techniques that ML/RL toolkits and frameworks use to train agents in game design and development. These strategies are realized through machine learning algorithms such as supervised, unsupervised, and reinforcement learning.
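As a minimal example of a reinforcement learning strategy, the sketch below implements a tabular Q-learning update; the state/action sizes and hyperparameters are illustrative and not tied to any particular toolkit:

```python
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))  # value table, one row per state
alpha, gamma = 0.1, 0.99             # learning rate, discount factor

def q_update(state, action, reward, next_state):
    # Temporal-difference update toward the greedy bootstrap target.
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])

q_update(state=0, action=1, reward=1.0, next_state=2)
```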
Table 2 illustrates the outcomes of the proposed qualitative evaluation analysis with respect to the technical criteria detailed in Section 5. It is important to note that OpenAI is an open-source platform, whereas Unity is a commercial platform; nevertheless, Unity offers its ML-Agents as an open-source toolkit. With respect to the proposed set of eleven technical criteria, the Unity ML-Agents toolkit provides full support for most of them, with some limitations regarding Multitask Learning and Learning Strategies. On the other hand, OpenAI and its various tools, PettingZoo, and Google Dopamine suffer from a critical lack of Visual Observations support. Moreover, OpenAI and its tools fail to fully support Multi-Agent Environments.
The Unity ML-Agents toolkit allows multiple cameras to be used for observations per agent, enabling agents to learn to integrate information from multiple visual streams. This technique leverages convolutional neural networks (CNNs) to learn from the input images. The image information from the visual observations provided by the Camera Sensor is transformed into a 3D Tensor that can be fed into the agent's CNN.
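The sketch below shows the kind of CNN encoder such a pipeline feeds, written in PyTorch for illustration; it is not ML-Agents' actual network, and the layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class VisualEncoder(nn.Module):
    """Illustrative encoder for a camera sensor's 3D observation tensor."""
    def __init__(self, channels=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )

    def forward(self, obs):
        # Camera sensors yield (batch, H, W, C); conv layers expect (batch, C, H, W).
        return self.conv(obs.permute(0, 3, 1, 2)).flatten(start_dim=1)

encoder = VisualEncoder()
features = encoder(torch.rand(1, 84, 84, 3))  # one 84x84 RGB observation
```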
Finally, Table 4 provides a summary of common algorithms used in reinforcement learning gaming toolkits. Unity ML-Agents does not support multitask learning directly; however, it offers multiple interacting agents with independent reward signals that share common Behavior Parameters. This lets game developers mimic multitask learning by implementing a single agent model and encoding multiple behaviors using HyperNetworks.
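The sketch below illustrates the HyperNetwork idea in isolation: a small generator network produces the weights of a task-specific head from a task embedding, so a single model can encode several behaviors. All names and dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class HyperHead(nn.Module):
    """A generator produces per-task head weights from a task embedding."""
    def __init__(self, task_dim=8, in_dim=32, out_dim=4):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.gen = nn.Linear(task_dim, in_dim * out_dim + out_dim)

    def forward(self, features, task_embed):
        params = self.gen(task_embed)
        W = params[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = params[self.in_dim * self.out_dim:]
        return features @ W.t() + b  # task-conditioned linear head

head = HyperHead()
out = head(torch.rand(5, 32), torch.rand(8))  # 5 feature rows, one task
```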
To address the limitations of our qualitative evaluation approach, our future work will focus on empirical and quantitative evaluations to verify, validate, and confirm our qualitative findings. A mixed-methods design combining qualitative and quantitative data will involve statistical assessment of existing RL toolkits to measure complexity, CPU and memory usage, scalability, and other relevant software quality attributes.