Finding of the week #267

The Gamepad Skill

During my ongoing literature review, I often discover interesting facts about things I’ve never thought about. Sometimes I can connect these facts with my own observations: the result is often a completely new idea of why things are the way they are. Maybe these ideas are new to you, too. Therefore, I’ll share my new science-based knowledge with you!

This week: This time, I think about my own difficulties when playing a console game, as I am not used to playing games with a gamepad. This observation provides another example of how successfully playing a computer game requires well-trained human skills.

Recently, we rediscovered the PlayStation console in our meeting room and installed God of War on it. The game is a hack-and-slash action game played from a third-person perspective. God of War’s main challenge lies in defeating large groups of enemies or very powerful boss encounters by using the main character’s abilities. These abilities mostly consist of combinable normal and powerful melee attacks as well as damage-avoidance techniques such as dodging and blocking.

In this way, players are challenged to monitor an enemy’s behavior, to avoid taking damage, and to defeat it as quickly as possible. This, however, requires the player to quickly adjust to the situation and to keep moving so as to always face the enemies. As a result, a player’s hand-eye coordination is challenged to react quickly to the gameplay.

As the game is played on the PlayStation, the only available input device is a PlayStation controller. All navigation, interaction, and view controls are mapped to the controller’s various buttons and thumbsticks. While performing actions using the controller’s buttons is not too different from pressing keys on a keyboard, controlling my avatar’s orientation with a tiny thumbstick is very challenging. As a PC gamer, I am used to controlling my perspective and aiming at targets with the mouse, which allows for much more precise control than the thumbsticks.

As a result, although the gameplay itself is not very difficult, it feels very challenging to me as my avatar constantly faces the wrong direction. It also shows how much playing a computer game requires constant practice to automatize and master the interaction methods.

Finding of the week #266

No Virtual Substitute for the Real Device

During my ongoing literature review, I often discover interesting facts about things I’ve never thought about. Sometimes I can connect these facts with my own observations: the result is often a completely new idea of why things are the way they are. Maybe these ideas are new to you, too. Therefore, I’ll share my new science-based knowledge with you!

This week: This time, I think about some examples in which a virtual simulation of a specific learning content cannot achieve a comparable training effect because accurate and sensitive physical interactions are required.

Game-based and simulation-based training applications allow learners to acquire and train new knowledge in an engaging environment. This virtual environment not only provides immediate feedback about the correctness of a user’s inputs, but it also visualizes the learning content in ways that are not possible in the real world. As a result, learners can develop an in-depth understanding of the underlying principles in a highly motivating way.

The training effect can even be increased when the training system implements immersive virtual reality (IVR) by rendering the gameplay on a head-mounted display (HMD). An HMD allows users to visually immerse themselves in a virtual environment by blocking out all visual information from the real world surrounding them. In this way, a learner can experience the feeling of being directly inside the virtual environment. This feeling of presence can increase the training effect, as the knowledge is then presented to the learner in a more natural way.

Although training applications can simulate almost any knowledge and allow for remote knowledge training, some learning contents still need the right hardware to provide haptic feedback for physical training. For instance, it is possible to present trainees with large and complex machines in IVR, allowing them to inspect the machines’ structure and learn about maintenance procedures even though they are just sitting in a classroom. However, training the physical skills to actually disassemble and reassemble such a machine requires haptic feedback, as learners need to know how to use the required tools correctly.

This problem also applies to other learning contents that require sensitive physical interactions. Recently, a friend and I were playing a mobile piano game requiring us to touch the screen at the right moment and with the right number of fingers to get the rhythm and keystrokes right. Thus, this game only allows for rhythm training, not for actual piano training.

We also tried a VR piano training game that was developed by a group of students who attended one of my seminars. While this VR game allows for playful interaction with a virtual keyboard, it still lacks haptic feedback, as it is played using the HTC Vive controllers. However, the virtual environment has the potential to highlight the correct keys in order to guide the user and explain the instrument. Hence, a player can only learn which key has to be pressed to produce a specific note but cannot practice the sensitive physical interactions. Using a real keyboard to interact with the training application would be the best solution, but then the virtual environment would no longer be needed.

As a result, training simulations allow for good declarative knowledge training. However, when the learning content requires physical interaction with a specific device, it becomes very difficult to achieve a good training environment due to the lack of haptic feedback that could substitute for the real device.

Finding of the week #265

Difficulties of Playing Mobile Games

During my ongoing literature review, I often discover interesting facts about things I’ve never thought about. Sometimes I can connect these facts with my own observations: the result is often a completely new idea of why things are the way they are. Maybe these ideas are new to you, too. Therefore, I’ll share my new science-based knowledge with you!

This week: This time, I think about my main issues when playing mobile games. These games mostly rely on the device’s touchscreen as the input method, which frequently results in falsely recognized inputs.

About a month ago, I finally started to play mobile games more frequently. As mobile games are played on cellphones, they can be played almost anywhere as long as I have my mobile device with me. Also, the gameplay of these games is designed to be paused at any time and thus allows for quick and short game sessions. This is especially great as I currently do not have much time to play computer games but still like to continue one of my favorite hobbies.

In contrast to other gaming devices, mobile games mostly use the cellphone’s touchscreen as the core input method. As a result, the interactions have to be designed in a very simple way and be mapped to touch or drag gestures. The way a touchscreen works, however, adds another constraint to the interactions. A user cannot simply keep a finger resting at a specific position to be ready for an upcoming input, as would be possible with traditional input devices. By keeping a finger on the sensor, the game potentially recognizes unintended inputs and cannot be played successfully.

Another problem that can occur is the recognition of wrong inputs. For instance, Fallout Shelter allows a player to change their perspective by touching the screen and “dragging” the scenery around. The same interaction, however, needs to be performed to assign one of the virtual inhabitants of the user’s vault to a new task. As a result, I occasionally experience issues when I want to assign a dweller to a new task, or when I want to change my perspective and accidentally grab one of my inhabitants.

Therefore, as a kind of guideline, it is necessary to carefully decide how a user should interact with the game and how to ensure good usability of the selected interactions. In addition, it is critical to avoid assigning different actions to the same gesture when both can be performed on the same screen, as the sketch below illustrates.
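To make this more concrete, here is a minimal sketch of how such a gesture conflict could be resolved: a drag that starts on a dweller moves that dweller, a drag that starts on empty space pans the camera, and a small movement threshold separates taps from drags. All names (TouchHandler, Camera, Dweller, and so on) are hypothetical illustrations and are not taken from Fallout Shelter’s actual implementation.

```python
DRAG_THRESHOLD = 10  # pixels of movement before a touch counts as a drag


class TouchHandler:
    def __init__(self, camera, dwellers):
        self.camera = camera          # hypothetical object offering pan(dx, dy)
        self.dwellers = dwellers      # hypothetical objects offering contains(), move_to(), select()
        self.active_dweller = None
        self.start = None
        self.dragging = False

    def on_touch_down(self, x, y):
        self.start = (x, y)
        self.dragging = False
        # Hit-test once at touch-down: a drag that starts on a dweller
        # moves that dweller; anything else pans the camera.
        self.active_dweller = next(
            (d for d in self.dwellers if d.contains(x, y)), None
        )

    def on_touch_move(self, x, y):
        dx, dy = x - self.start[0], y - self.start[1]
        if not self.dragging and dx * dx + dy * dy < DRAG_THRESHOLD ** 2:
            return  # ignore jitter until the movement threshold is exceeded
        self.dragging = True
        if self.active_dweller is not None:
            self.active_dweller.move_to(x, y)
        else:
            self.camera.pan(-dx, -dy)  # dragging right moves the scenery right
        self.start = (x, y)

    def on_touch_up(self, x, y):
        # A short tap on a dweller selects it instead of dragging it.
        if not self.dragging and self.active_dweller is not None:
            self.active_dweller.select()
        self.active_dweller = None
        self.start = None
```

The key design choice is to resolve the ambiguity at touch-down (via the hit test) and at the movement threshold, so the same physical gesture never triggers two different actions at once.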

Finding of the week #264

Between Realism and Magic

During my ongoing literature review, I often discover interesting facts about things I’ve never thought about. Sometimes I can connect these facts with my own observations: the result is often a completely new idea of why things are the way they are. Maybe these ideas are new to you, too. Therefore, I’ll share my new science-based knowledge with you!

This week: This time, I think about the power of magic that can turn a realistic simulation into a more convenient experience by bending a few rules for a short amount of time.

Simulation games aim at a realistic representation of real-world knowledge and activities. They utilize equations and facts describing this knowledge to achieve an accurate simulation. At the same time, they provide interaction possibilities that allow players to manipulate the simulation’s outcomes. As a result, players can interact with the game and practice applying the encoded knowledge. For instance, a racing game allows a player to drive virtual racing cars that follow the underlying physical principles, thus giving players the impression of controlling an actual car.

However, time often becomes a critical issue for some simulated contents, as they normally take place over a long period of time. For instance, it takes a spacecraft several days to reach the Moon and a ship several days to cross the Atlantic Ocean. As a result, players would need to play the game for the same amount of time to truly experience a realistic simulation. Aside from requiring a huge time commitment, this would also result in a lot of downtime and boring gameplay. Players would be required to wait for events to occur and would quickly become bored by the game.

This problem can be diminished with the power of game design, which adds interaction techniques allowing for actions that are not possible in the real world. For instance, travel techniques enable players to teleport themselves to distant locations, thus greatly reducing the travel time. Similarly, other techniques implement a time-control function allowing the simulation time to be sped up or slowed down. As a result, players can sail across the Atlantic Ocean within a couple of minutes while the simulation still obeys the physical principles.
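As a rough illustration of how such a time-control function can work, here is a minimal sketch (with made-up class names and numbers) in which the physics update always applies the same equation and only the amount of simulated time per frame is multiplied by a user-chosen scale factor:

```python
# Minimal sketch: time compression in a sailing simulation.
# The physics rule never changes; the "magic" only stretches how much
# simulated time passes during each real frame.

class Ship:
    def __init__(self, speed_knots):
        self.speed_ms = speed_knots * 0.514444  # knots -> metres per second
        self.distance_m = 0.0

    def update(self, dt_seconds):
        # Same physical principle regardless of the time scale.
        self.distance_m += self.speed_ms * dt_seconds


def run(ship, real_seconds, frame_dt=1 / 60, time_scale=1.0):
    """Advance the simulation for `real_seconds` of wall-clock time."""
    for _ in range(int(real_seconds / frame_dt)):
        ship.update(frame_dt * time_scale)  # stretch the simulated time step
    return ship.distance_m


ship = Ship(speed_knots=20)
# A ~900x time compression turns a roughly six-day Atlantic crossing
# into about ten minutes of real play time.
distance = run(ship, real_seconds=10 * 60, time_scale=900)
print(f"{distance / 1000:.0f} km sailed in 10 real minutes")
```

The same multiplier also works in the other direction: a time_scale below 1 slows the simulation down without touching the underlying equations.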

Although those “magical” interactions reduce the overall realism of a simulation, they provide a convenient method to focus only on the critical phases of the simulated knowledge. In the end, it is up to the users whether they want to utilize magical powers to bend some physical principles or whether they prefer an ultra-realistic simulation. In conclusion, computer games allow us to walk between magic and realism without reducing the accuracy of the simulation.