Experience on Demand: Predicting Careers in Immersive and Virtual Reality
ENTER- Let’s put the emergence of good VR in some kind of perspective. What’s the time frame we’re looking at? When do you imagine VR starts to seriously compete with real reality for people’s attention?
Jeremy Bailenson (JB) - For immersive VR—we can talk about augmented reality in a second—I don’t think we’re at the point anytime soon, meaning in the next year or two, where you’re going to see people wandering around wearing helmets. VR is fantastic for ten or twenty minutes. The experiences that I’ve seen are mind-blowing, or they’re useful, and those things are not always related, but they’re typically not that long, and not something you want to be doing every day or for hours a day. So I think we’ve got a bit of lead time before we see people wandering around like zombies wearing helmets.
ENTER- Well, let’s look a little bit into the future, maybe two to five years. What sorts of new job titles might VR create? What would you tell someone currently entering university if they want a trajectory into the emerging workspace of VR?
JB- So let me start with a silly answer to your question, and then get to a real answer. I told you I was just giving a book talk inside virtual reality—and I have a pretty weak vestibular system myself. I get dizzy pretty easily. And I was giving this book talk wearing the helmet, and there’s all these people watching me. It’s important to both speak and gesture, but after about ten minutes I decided I couldn’t stay in anymore. I was feeling a little bit wonky. So I had one of my colleagues put on the helmet and wear the hand controllers. He was gesturing for me while I was speaking! So there might be a market for expert “gesturers” to outsource virtual reality conversations. That’s said in jest, but it was pretty interesting to have somebody controlling my gestures while I was doing the speaking.
ENTER- You mean like VR “signing”?
JB- Well sure, there’s actually some neat applications. I just reviewed a grant for automatic teaching of ASL, which I think is a neat application—because you can look down and see from the first person if your arm and hands are moving in the way they should be.
But in terms of real training: obviously every human who wants to enter the workforce needs to learn how to program. When people come to me at Stanford asking what they should be doing, my first answer is: however many programming classes you plan on taking, double it. The same way I had to learn calculus in high school, programming is just something that needs to be part of everyone’s regimen. That’s number one.
On the VR side, there is no such thing as virtual reality. My lab is not in the computer science department; we’re in the Department of Communication. People often ask, why is this tech lab in the comm department and not in computer science? One of the reasons is that VR is really three separate things. It’s tracking, rendering, and display.
I’ll go over those. Tracking is a fancy word for measuring your body movement: in order to make virtual reality respond to your body, you’ve got to track what the body’s doing. That’s mostly computer vision and sensing—figuring out where a person’s body is, whether her mouth is open, what direction she’s looking. So that’s one type of skill set. The second is rendering: drawing the virtual world to reflect those tracked movements. That’s mostly graphics and algorithms, and would be more of your traditional programming.
The third area is display. That’s more about how the eyes work, how the ears work, and understanding hardware and perception.
So those are three separate buckets of skills: one is sensing; another is programming and graphics; and the third is literally soldering together hardware so that people can experience these new senses.
ENTER- And of course there’ll be a need for people with skills in storytelling and cinematography.
JB- Absolutely. What I’d love to see is people who begin their journey as storytellers thinking about the medium of VR. I think where you’re seeing some of the challenges is, you’ve got cinematographers and storytellers who’ve basically conquered a medium—whether it be the written word or film—bringing those templates to VR and finding out they don’t really work so well.
ENTER- So what do you suppose the production economics will look like in all this? Will this evolve into a full studio model? Will it be more of a curriculum development model? Something completely new we haven’t really seen before?
JB- If I knew the answer to that, I’d be famous and rich. What you’re seeing in the marketplace right now is a bit of a struggle, because producing 360º video—which is basically look around and you see a video playing no matter where you look—is cheap to do at this point. It used to be expensive; in 2013 you had to buy all these cameras. Now anyone can buy a camera for $100 and film pretty good 360º video.
ENTER- The problem with that is it’s not interactive. You can’t reach out and grab objects. You can’t say something and have the scene change in response to where you’re looking or to what you said.
JB- A year ago, when people asked me what’s preventing VR from being pervasive, I used to say tracking: the ability to know someone’s position in the room, and their body movements. It’s really hard to do that. But now that there’s big money behind VR, I’m just stunned at the progress we’ve had in terms of making those devices more accurate and cheaper, and working even outdoors. The roadblock now is content. What we really need is more meaningful content, and the ability to democratize the production of content such that anyone—the same way that anyone can upload a YouTube video—can create a VR world that feels good.
ENTER- It’s got to be animation though, doesn’t it? Because you can’t really have virtual interactions with items in the real world—like, say, in my office.
JB- Well, that’s where we get into augmented reality, like Google Glass. And that’s one of the challenges. Everyone talks about AR, but you don’t see many people wearing goggles. For AR to work, it’s got to, as you pointed out, interact with the objects and the people in a space. That’s something called “computer vision”: the ability of a camera system or other sensing system to recognize what objects are, and to follow them as they move. Big corporations have been chasing computer vision for decades, but I don’t think we’re going to see quicker progress there, because that’s not a supply-and-demand thing.
About Jeremy Bailenson
Bailenson is founding director of Stanford University’s Virtual Human Interaction Lab, Thomas More Storke Professor in the Department of Communication, Professor (by courtesy) of Education, Professor (by courtesy) in the Program in Symbolic Systems, a Senior Fellow at the Woods Institute for the Environment, and a Faculty Leader at Stanford’s Center for Longevity. He earned a B.A. cum laude from the University of Michigan in 1994 and a Ph.D. in cognitive psychology from Northwestern University in 1999. He spent four years at the University of California, Santa Barbara as a Post-Doctoral Fellow and then an Assistant Research Professor.
Bailenson studies the psychology of Virtual Reality (VR), in particular how virtual experiences lead to changes in perceptions of self and others. His lab builds and studies systems that allow people to meet in virtual space, and explores the changes in the nature of social interaction. His most recent research focuses on how VR can transform education, environmental conservation, empathy, and health.
CONTINUE TO PART 2: IMMERSIVE REALITY AND CLIMATE ACTIVISM, A STORY FROM PALAU