Recently, I read the book "The Body Has a Mind of Its Own: How Body Maps in Your Brain Help You Do (Almost) Everything Better" by Sandra Blakeslee and Matthew Blakeslee. A tweet from a VR designer recommended it:
Highly recommend The Body Has a Mind of Its Own by @bysblakeslee to anyone working in #VR. Homuncular flexibility and the concept of peripersonal space – that we literally think differently about the space we have agency in – have huge importance in #XR interaction design pic.twitter.com/ZSVgFYEABG
— Martin Schubert (@MrSchubert) January 25, 2018
When I first began reading the book, I didn't understand how it would be useful to a VR designer. But as I went on, I quickly understood why it could help anyone working in the VR industry. It's mostly relevant to the future of VR, but since we are shaping that future now, it will become useful sooner than we expect. The book is a popular-science account of research on the sense of proprioception and everything that gravitates around it.
In this blog post, I will share useful insights and VR design lessons that I learned while reading the book.
Introduction and vocabulary:
Unfortunately, before we get into the subject itself, I'll have to cover some vocabulary and concepts that will come in handy later on.
You all know what "muscle memory" is: the memory of how you physically do things (grab an object with your hand, ride a bike). It's not located in the muscles but in the cerebrum.
In the book it's mostly called the "body schema": the perception of your body's position, movement, and capabilities. It's built from your memory and your "body map". The latter is constructed from the data output of your senses (vision and, mostly, proprioception).
Last but not least, it also includes your "body image": how you psychologically see your body ("fat", "slim"...). It is not always identical to how your body actually is.
The brain constructs a mental representation of your body
#1: The perception of the environment and level design: it's not the same as in standard video games (and I'm not talking about scale perception!)
– The book's theory
In chapter 3, the author describes an interesting experiment conducted by Dr. Martin Grunwald. He gave people an exercise: he blindfolded them, handed them objects (spheres, pyramids...), then removed the blindfold and asked them to draw the objects they had been touching. Most could do it easily, but one participant couldn't: her drawing did not represent the object. Why? After some research, Grunwald suspected it might be because of her anorexia, which doesn't seem obvious at first. But there is a connection here:
Many people suffering from anorexia see their body as "fat" (when they are actually skinny). Their body image is shattered: it does not match their actual body map as you and I would see it. There is a conflict between the two, so their understanding of the environment around them is shattered as well. That's why she couldn't figure out the shape of the object with her hands alone. The author explains that we understand our environment based on our own body. If you perceive your body differently from how it is in reality, then your perception of the environment around you is also different, because it's relative: for example, you will perceive an object as smaller if you're taller. But what does this have to do with VR? Bear with me, you will understand soon.
In chapter 7, the author discusses how our brain understands the environment (not proximate space but farther space on a larger scale, for example the size of a room). This is handled by the place cells and grid cells inside the hippocampus. Place cells are context sensitive: they map the space around you relative to the objects surrounding you ("there is a chair on the right; next to it, a table..."). Grid cells map the space around you independently of what is in the environment.
– The design lessons
As I learned in this book, our brain always understands our spatial environment in relation to our own body. It's like the position, rotation, and scale of a child transform in Unity3D: the objects are relative to your body. For our brain, we are the "parent transform" of everything in the environment.
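To make the analogy concrete, here is a minimal sketch (in Python, with hypothetical names) of the "parent transform" idea: positions are expressed relative to the body's origin and scale, so the same object "feels" different depending on body size.

```python
# A minimal sketch of the "parent transform" analogy: the brain stores object
# positions relative to the body, much like Unity child transforms are stored
# relative to their parent. Function and variable names are hypothetical.

def world_to_body_relative(object_pos, body_pos, body_scale=1.0):
    """Express a world-space position relative to the body's origin and scale."""
    return tuple((o - b) / body_scale for o, b in zip(object_pos, body_pos))

# The same chair, perceived by a "larger" body, ends up closer in body units:
chair = (2.0, 0.0, 4.0)
short_player = world_to_body_relative(chair, (0.0, 0.0, 0.0), body_scale=1.0)
tall_player = world_to_body_relative(chair, (0.0, 0.0, 0.0), body_scale=1.2)
```

This is of course a toy model, but it captures why a taller player perceives the same world as smaller: dividing by a larger body scale shrinks every relative distance.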
In VR, we use our own senses. Unlike in a traditional game, we are not viewing a flat 2D screen (well... in a way), which impacts our understanding of the space and how well we find our way in the game. But compared to a normal non-VR game, how well do we perform? It could be tested with the scientific experiment I described: LINK. The results of such a test could give us insight into how to approach VR level design (before building and testing it), making environments more or less complex than their non-VR counterparts.
The second design lesson is contained in this sentence: "you will perceive an object as smaller if you're taller." It can affect many things. Let's imagine a cathedral you made for a VR game. You designed it to give a wow effect when the player looks up and sees the ceiling. You test it and everything is fine, but then you test it with an avatar smaller than yourself, and the ceiling looks much taller: the feeling given by the cathedral is no longer the same. Another example is the feeling of speed in VR. Imagine two players, a tall one and a short one, able to move at the same constant speed in the same environment. Even though the speed is identical, the shorter player will feel they are moving faster.
So what guideline can we deduce from this? If you want to give the exact same feeling to all players, you would have to scale all objects depending on the player's size. But if your game is set in a "real world" setting (i.e., there are identifiable elements like a chair or a table), that would be a very bad idea, on top of being very complicated to implement. Scale perception is so acute in VR that the player will quickly notice an object that is out of scale, and it would be a huge immersion breaker.
So what is the other option? Scale only the tools used by the player! It makes the experience more personalized, adapted, and comfortable. Remember: comfort is king in VR (especially for Oculus). In addition to tool scaling, why not let the player tweak the size themselves a little (but be careful that it doesn't affect your gameplay too much!)? VR is a personal experience; having many options that let the player personalize their experience is a must-have. For example, the locomotion options you can find in Fallout or Skyrim are incredibly satisfying! You can also look at the games made by Croteam, such as The Talos Principle VR, which will give you some insight on that question.
When I speak of tools, I also include the hands and body of the player. For example, Sprint Vector scales your body, movement, and hands according to the player's size, so the resulting in-game "size" is the same for everyone. This approach has two advantages. First, as I said before, the resulting scale and "feel" of the experience is the same for everyone. Second, it addresses a competitive fair-play issue: in this game you swing your arms to move. A tall player's swing would carry them farther than a short player's swing. That's unfair, so the game resizes every player's actual height into a similar in-game avatar, making all players equal from a height point of view. The only things determining your victory are your skill (which you can train) and your fitness (which you can build).
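The normalization idea can be sketched as follows. This is my own reconstruction of the principle, not Sprint Vector's actual code, and the reference height is an assumed value.

```python
# A sketch of height normalization as described above (NOT Sprint Vector's
# actual implementation; the reference height and names are assumptions).
# Every player's motion is rescaled to a common avatar size, so proportionally
# identical arm swings produce the same in-game displacement for everyone.

REFERENCE_HEIGHT_M = 1.75  # common avatar height, an assumed value

def normalized_swing_distance(swing_length_m, player_height_m):
    """Rescale a real-world arm swing to the reference avatar's proportions."""
    return swing_length_m * (REFERENCE_HEIGHT_M / player_height_m)

# A tall and a short player performing proportionally identical swings:
tall = normalized_swing_distance(0.8, 2.0)   # 0.8 m swing, 2.0 m tall
short = normalized_swing_distance(0.6, 1.5)  # 0.6 m swing, 1.5 m tall
```

Both calls yield the same in-game distance, which is exactly the fairness property described above.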
In virtual reality you can be anything: the physical characteristics of the player can be reshaped to fit your gameplay or experience. This can be used not only for gaming but also for scientific research or for eliminating appearance-based bias. For example, Katharine Zaleski used virtual reality during interviews to counter gender bias, and there is research about embodying someone with a different skin color. Maybe the development of virtual reality will have a positive impact on people's empathy and reduce discrimination, as Domna Banakou, Parasuram D. Hanumanthu, and Mel Slater tried to demonstrate in their article.
#2: How the brain analyzes the space around you and how it affects VR space
– The book’s theory
In the 7th chapter, "The bubble around your body", the author explains that the brain treats the spaces around you differently (the science behind this is called "proxemics" if you want to look it up). It's not only cultural; it's also psychological, cognitive and, of course, sensory. Let's start with cognitive space: a space or location that is managed by specific parts of the brain. The author describes two different cognitive spaces: peripersonal and extrapersonal. Their dimensions are unique to each individual, but they exist for all human beings, and their effect is nearly invariant regardless of the person's culture.
| | Peripersonal | Extrapersonal |
|---|---|---|
| Space definition | Arm's length around you | Everything farther away |
| Part of the brain used | Parietal lobe | Temporal lobe |
| That part is also used for | Senses (especially proprioception), spatial understanding, and language processing | Visual memory, language comprehension, emotion association, and face recognition |
| Treated the "same way" in your body map | Yes | No* |
* In most cases. We will see later how it can be treated as part of your body.
But why does our brain create two different spaces and treat them differently? To answer this question, we must look at our evolutionary history, back when we were still cavemen. Back then, humans were mostly trying to survive, and life and death were, perhaps more than now, a matter of seconds. We needed to react quickly to danger. When a human or animal was in or near our peripersonal space, it meant they could kill us, quickly. Having a brain area dedicated to analyzing that space and the objects within it wasn't a luxury: it was vital.
In a more contemporary analogy: when someone tries to hit you, you can move your arm into the correct defensive position in a fraction of a second (if you have reflexes, which I don't). That's partly because when their hand enters your peripersonal space, you can grab it easily, just as you can touch your nose without seeing it. Their hand is treated just like your own in your body map; for your brain, it's as if it were your own hand.
So what is the extrapersonal space for? It's bigger than the peripersonal one, and it relates to how you move on a larger scale. The temporal lobe is vital for analyzing that space. Let's look at that lobe's functions:
Visual memory: you need to remember where you are.
Face recognition and emotion association: when you see someone, you need to quickly answer two questions: "do I know them?" and, if not, "are they angry or aggressive?" The answers were a matter of life and death back in caveman times.
But why isn't the latter handled in peripersonal space? Because when a human is already inside your peripersonal space, it's often too late to assess their intentions (and if they haven't passed through your extrapersonal space before entering your peripersonal space, it means they are sneaking up behind you...).
The most important thing to remember is that these spaces were shaped mostly during our caveman period (they are visceral) and are geared toward detecting a foe or a danger.
So that's how the brain treats space cognitively, but how does it treat it culturally? The American anthropologist Edward Twitchell Hall identified four zones around you: intimate, personal, social, and public. These zones indicate how far a person should be from you.
| | Intimate | Personal | Social | Public |
|---|---|---|---|---|
| Extends from your body | 0.15 to 0.46 m | 0.76 to 1.22 m | 2.1 to 3.7 m | 7.6 m or more |
| For whom | Lover | Friends or family | Acquaintances, strangers, or your boss | Public speaking |
When your cultural zones are not respected, you may feel uncomfortable, threatened, or upset. Because these zones are cultural, they mostly affect social interactions.
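For reference, the zone boundaries above can be turned into a tiny classifier. This is my own simplification: the cutoffs between the listed ranges are assumptions, since Hall's ranges leave gaps.

```python
# A small helper classifying an interpersonal distance into Edward T. Hall's
# proxemic zones, using the boundaries from the table above. The exact cutoffs
# between the listed ranges are my own simplification.

def proxemic_zone(distance_m):
    """Map a distance (in meters) to one of Hall's four zones."""
    if distance_m < 0.46:
        return "intimate"
    elif distance_m < 1.22:
        return "personal"
    elif distance_m < 3.7:
        return "social"
    else:
        return "public"
```

A social VR app could use something like this to decide, for example, which voice attenuation or avatar behavior to apply for each nearby player.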
Here is a little recap schematic:
– The design lessons
This can give us insight into how to organize space in VR. First, let's talk about cultural zones and how they can help us understand and create better social experiences in VR.
Interacting with people is a wonderful experience; that's why social VR is such a big trend, even though the market is small. But it can also be unsettling. When you are in the subway, you avoid contact with strangers because it's uncomfortable: they are not in the right zone.
To prevent the player from feeling bad, some games let them define a "limit zone": inside that area, other players simply disappear, eliminating direct interaction with strangers. This can be implemented in multiple ways. In VRChat, a "game" whose core is interacting with strangers, the developers decided to enable this feature by default. Such a feature also prevents outright bad behavior from other players, like the sexual harassment some players experienced in Echo Arena, whose developers implemented features to address those issues.
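A minimal sketch of such a "limit zone" could look like this. This is an assumption about how the feature might work, not VRChat's actual implementation: avatars closer than the player's chosen radius are simply filtered out of the visible set.

```python
# A minimal sketch of a "limit zone" (personal-space bubble), an assumed
# mechanic rather than any specific game's actual code: avatars closer than
# the player's chosen radius are hidden from view.
import math

def visible_avatars(player_pos, others, limit_radius_m=1.0):
    """Return only the avatars outside the player's limit zone."""
    return [p for p in others if math.dist(player_pos, p) >= limit_radius_m]

# One avatar inside the bubble (hidden), one outside (still visible):
shown = visible_avatars((0.0, 0.0, 0.0), [(0.5, 0.0, 0.0), (2.0, 0.0, 0.0)])
```

In a real engine you would hide the renderer rather than drop the player from a list, but the distance test is the same idea.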
The cultural zones identified by Hall can also help us organize spaces and place players to create meaning and feeling. In the game Rec Room, there is an activity called "3D Charades" (a sort of VR Pictionary). The developers have cleverly arranged the space to support these cultural zones, making the game more comfortable and fun for the other players. Here is a visual schematic of how the space is organized:
The "drawing player" is in green and the "guessing players" are in blue. As you can see, the social spaces of the guessing players overlap each other, which gives them a feeling of complicity and helps build a feeling of equality. The green player, by contrast, stands in the public space, which immediately gives them the role of the "drawing player": a player in that position intuitively knows what to do. It also gives them a feeling of superiority, because the other players keep track of their every action.
Another application of cultural zones is storytelling in VR: think about how you place characters around the player depending on their relationship. It will make the experience more immersive and intense. But keep in mind that cultural space is a theory, a guideline; once you understand it, try to break it! Putting a stranger or a friend in the intimate zone can create strong emotional reactions. One of the greatest examples is FATED: The Silent Oath. In its introduction scene, with no dialogue, we intuitively know who the different characters are and what their relationships are, only from the distances at which they interact.
That's my quick summary of cultural zones. If you want to learn more about social VR, I advise you to watch the video Facebook made on how they created Facebook Spaces. There is also some research Oculus did about communication in VR.
Playtest, level design, and analysis tools.
Let’s talk more about cognitive spaces.
They also help us design spaces in VR, but in a more visceral way. Let's imagine you're making a story-driven game. In one sequence, the player walks along a narrow road on a mountainside, like in the image below:
As the game designer on that project, you worry that a player suffering from acrophobia could get sick. You organize a playtest session and ask all participants whether they are afraid of heights. Afterwards, you are very confused, because only some of the players who suffer from acrophobia got scared. How do you make them less afraid? If you don't understand why these players differ, you can't solve the problem efficiently.
The cognitive space theory can help us.
Why are we afraid of heights? Or more importantly, when and how are we afraid of them? Instinctively, when you walk near a cliff, you stay one or two steps away from the edge. That distance is roughly your peripersonal distance! Our brain created this space to protect us: when we move near a cliff, we unconsciously keep that distance. We now have some clues about why people fear the cliff, but not why some didn't...
Who else isn't afraid of cliffs? Children. You see the same pattern over and over: a child gets close to the edge and the parent quickly pulls them away. But why do all children go so near the cliff? They walk toward it until the edge enters their peripersonal space, and since that distance is relative to arm length, it's very close to the edge. Adults, however, stop well beyond arm's length from the cliff: it's not only their cognition that tells them not to go closer, it's their experience, their awareness, their mind. A child's mind is more "basic": they act on instinct. And the reason why only certain players experienced fear in our playtest is the difference in height: tall players have a larger peripersonal space, so the cliff edge was inside it.
Knowing the basics of proxemics can help game designers analyze such results. The design decision to make depends on the intention of the game; in our example, it makes sense to adjust the width of the road depending on the player's height.
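The playtest analysis above can be sketched in a few lines: approximate each player's peripersonal radius from their height, then check whether the cliff edge falls inside it. The arm-to-height ratio here is a rough anthropometric assumption of mine, not a figure from the book.

```python
# A sketch of the playtest analysis: estimate a player's peripersonal radius
# from their height and check whether the cliff edge falls inside it.
# The 0.44 arm-to-height ratio is an assumed approximation, not from the book.

ARM_TO_HEIGHT_RATIO = 0.44

def peripersonal_radius(height_m):
    """Approximate arm's length (peripersonal radius) from body height."""
    return height_m * ARM_TO_HEIGHT_RATIO

def cliff_feels_threatening(height_m, distance_to_edge_m):
    """True when the cliff edge sits inside the player's peripersonal space."""
    return distance_to_edge_m < peripersonal_radius(height_m)

# Same road width, different players: only the tall one "feels" the edge.
tall_scared = cliff_feels_threatening(1.95, distance_to_edge_m=0.8)
short_scared = cliff_feels_threatening(1.55, distance_to_edge_m=0.8)
```

From there, a designer could widen the road (or offset the walkable path) per player so that the edge stays outside everyone's peripersonal bubble.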
Peripersonal space is, by essence, a protective space. If you want an extremely comfortable, zero-stress experience, remove any movement or effect from that space, because anything at this distance can be interpreted as a threat by the player. For example, in Space Pirate Trainer, projectiles slow down when they get near you, making them easier to dodge. It also prevents you from being surprised and from dodging "instinctively". A disadvantage, you may say? Well, in that game there are many projectiles, so an instinctive dodge could throw you into another projectile. Another example is the game Holopoint: there is no slowdown, because the rhythm is faster and you are targeted by only one projectile at a time. The game is about being in the flow and reacting instinctively. Those opposite design decisions can both be explained by cognitive spaces.
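The slowdown mechanic described above can be sketched like this. It is an assumed reconstruction of the behavior, not Space Pirate Trainer's actual code; the radius and slow factor are made-up parameters.

```python
# A sketch of the peripersonal-space slowdown described above (an assumed
# mechanic, not the game's actual code): a projectile's speed is scaled down
# once it crosses into the player's peripersonal bubble.
import math

def projectile_speed(base_speed, projectile_pos, player_pos,
                     peripersonal_radius_m=0.8, slow_factor=0.25):
    """Slow the projectile once it enters the player's peripersonal space."""
    if math.dist(projectile_pos, player_pos) < peripersonal_radius_m:
        return base_speed * slow_factor
    return base_speed

near = projectile_speed(10.0, (0.5, 0.0, 0.0), (0.0, 0.0, 0.0))  # inside bubble
far = projectile_speed(10.0, (2.0, 0.0, 0.0), (0.0, 0.0, 0.0))   # outside
```

A Holopoint-style design would simply skip this check and keep `base_speed` everywhere, trading comfort for flow.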
Cognitive spaces can also serve as an analysis tool for creating elegant controls in VR games. I understood that while playing Archangel on Oculus Rift. It's a rail shooter that puts you inside the cockpit of a giant robot. Its innovative feature is that you control the robot's arms with your own hands (via motion controls). Think of the movie Pacific Rim. When I played that game, I was really hooked by the concept; I found it very easy to move my robot arms, like controlling a puppet. But a very weird impression stayed with me when I stopped playing, and I couldn't figure out why. When I talked with other players, they said: "the controls are not intuitive at all, I struggled to fight well in the first level!"
That was my impression as well. Was it the difficulty? Impossible: it was the first level. I was puzzled: it's intuitive to move your arms but counter-intuitive to control them correctly. That makes no sense! To analyze this game, we need to step back and look at the bigger picture. The game uses what I call a "deferred 1st person" point of view. Let's look at a real-world example. When do we control robots? The military uses robots such as drones.
The first generation of military robots is controlled with a keyboard, mouse, and HOTAS. More advanced robots are controlled in virtual reality with motion controls. Why is there little to nothing in between? You may argue it's a technology problem; well, we are talking about the military, which has top-ranking technology. Keep in mind that civilian technology is always years behind military technology. And why did "motion gaming" fail in the long run? Many say that the technology of the time wasn't good enough: game controls were imprecise. That's true, but was technology the only factor?
Let's take the example of the Kinect. Most of its games use a "deferred 1st person" point of view, and most have imprecise controls. Then FRU, a third-person platformer (on Kinect, of course), came out, and its controls were rated as "extremely precise". Motion control (without VR) in first person is always deferred. Why is it the least efficient type of control? Your peripersonal space is managed by the parietal lobe, which is not only a sensory hub: it transforms the output data from your senses into motor intentions and actual movements. You also have "multisensory cells" in your brain, whose role is to connect your different senses to each other; more precisely, our senses influence each other. For example, try the "parchment skin illusion", in which sound influences touch.
But what does all of this have to do with that weird sensation in Archangel? Bear with me, we're getting to it. Imagine a simple situation in that game: at the last moment, you see an enemy projectile coming at you, and you move your arm into a defensive position to block it. Here is a schematic of what happens in your brain in "real" 1st person:
As you can see, using your arm in VR is straightforward. Your brain even treats the robot arm as if it were your own arm! Thanks to how our brain subdivides space, many mental processes are "automatized". Here is the mental process for the same situation in "deferred 1st person":
As you can see, the brain needs to work harder to move that robotic arm, because it is not inside your peripersonal space and sits outside our accurate depth-perception zone. Therefore, our proprioceptive sense can't automatically know where that arm is located in space: our brain must compute it explicitly, like any other object. That doesn't mean the game is bad or that you can't master it; it just means you'll have to put more effort into it!
Also, when you are in "real" 1st person, your brain computes the position of the robotic arm with the proprioceptive sense, even if it's not in the same position as your real arm! How is that possible? You will learn how to solve that mystery by reading "The Body Has a Mind of Its Own", or in my next articles!
Here is a tl;dr of the design lessons:
#1: The perception of the environments and Level Design:
It is not the same as in standard video games (and I'm not talking about scale perception!)
- VR influences the spatial information used by the brain, therefore affecting level design.
- Perception of scale is affected by our size.
- Comfort options are not optional.
- Scale the tools used by the player!
- Visual appearance in VR can affect social interaction.
#2 How the brain analyzes space around you and how it affects VR space:
- Beware of cultural zones when developing social VR.
- Storytelling can be reinforced or destroyed with character position.
- Cognitive and cultural space can serve as analysis tools for playtest and game/level design.
- Cognitive space and knowing how the brain works can help analyze and create better game controls and ergonomics.
- Deferred 1st person is bad! Or else, prove me wrong: send me a prototype 😊
That closes the first part of my article. I hope you enjoyed it!
Thanks a lot to Cécile Auer and Romain Trésarrieu for correcting this article.
Thank you very much for reading my post! Don't hesitate to give me feedback about it.
Feel free to comment! I would love to talk about the subject of this post.