Have you ever talked to your computer or smartphone? Maybe you’ve seen a coworker, friend or relative do it. It was likely in the form of a question, asking for some basic information, like the location of the best nearby pizza place or the start time of tonight’s sporting event. Soon, however, you may find yourself having entirely different interactions with your device – even learning its name, favorite color and what it thinks about while you are away.

It is now possible to interact with computers in ways that seemed beyond our dreams a few decades ago. Witness the huge success of applications as diverse as Siri, Apple’s voice-response personal assistant, and, more recently, the Pokémon Go augmented reality video game. These apps, and many others, enable technology to enhance people’s lives, jobs and recreation.

Yet the potential for future progress goes well beyond the newest novelty game or gadget. When these technologies are properly merged, computers can become virtual companions, performing many roles and tasks that require awareness of physical surroundings as well as human needs, preferences and even personality. In the near future, they could help us create virtual teachers, coaches, trainers, therapists and nurses, among others. These characters are not meant to replace human beings, but to enhance people’s lives, especially in places where real people who perform these roles are hard to find.

This is serious next-level augmented reality, allowing a machine to understand and react to you as you exist in the real physical world. My colleagues and I focus on breaking the fourth wall of human-computer interaction, letting you and the computer talk to each other – about yourselves.

Bringing computers to life

Our goal is to help people build rapport with virtual characters and to analyze the importance of “natural interaction” – interaction without controllers, keyboards, mice, text or additional screens.

To make the technology relatable, we created a Harry Potter “clone” by using IBM’s Watson artificial intelligence systems and our own in-house software. Through a microphone, you could ask our virtual Harry anything about his life, provided there was a reference for it in one of the seven books.

Since then we have also built a museum guide who helps visually impaired people experience art. Our prototype character, named Sara, resides in a gallery in Querétaro, Mexico, where visitors can talk to her and ask about the artwork on display.

We also created a “Jeopardy”-style game host, with whom you can play the popular trivia game filled with questions about our university. You talk to the character as if he were a real host, choosing the category you want to play and answering questions.

A college freshman interacts with the game show host for the first time. Interactive Systems Group, The University of Texas at El Paso, CC BY-ND

We even have our own virtual tour guide at the Interactive Systems Group laboratory at UTEP. She answers any questions our hundreds of yearly visitors may have, or asks the researchers to help her out when a question is too tough.

Our most advanced project is a survival scenario where you need to talk, gesture and interact with a virtual character to survive on a deserted island for a fictional week (about an hour in real time). You befriend the character, build a fire, go fishing, find water and shelter, and escape other dangers until you get rescued, using just your voice and full-body gesture tracking.

A researcher interacts through speech and gesture with Adriana, the jungle survival virtual character. Interactive Systems Group, The University of Texas at El Paso, CC BY-ND

Understanding humans

These projects are fun to “play” for a reason. To build human-like characters, we have to understand people – how we move, talk and gesture, and what it all means when put together. That understanding doesn’t happen in an instant, so our projects are designed to be fun and engaging enough to keep people interested in the interaction for a long time.

We try to make participants forget that sensors and cameras hidden in the room are helping our characters read their body posture and listen to their words. While people interact, we analyze how they behave, looking for different reactions to controlled changes in the characters’ personalities, gestures, speech tones and rhythms, and even small details like breathing, blinking and gaze movement.

The next step is clearly to bring these characters out of their flat screens and virtual worlds, either by having people join them in their virtual environments through virtual reality, or by making the characters appear present in the real world through augmented reality.

A student talks to Merlin, a character that recognizes speech and interacts in virtual reality. Inmerssion, CC BY-ND

We’re building on capabilities – particularly graphic enhancements – that have been around for several years. Several GPS-based games, like Pokémon Go, are available for mobile devices. Microsoft’s Kinect system for Xbox lets players virtually try on different articles of clothing, or adds an exotic location as the background of a video of the person, making it appear as if they were there.

More advanced systems can alter our perspective of the world more subtly – and yet more powerfully. For example, people can now touch, manipulate and even feel virtual objects. There are devices that can simulate smells, making visual scenes of beaches or forests far more immersive. Some systems even let a user choose how certain foods taste through a combination of visual effects and smell augmentation.

A vast and growing potential

All these are but rough sketches of what augmented reality technology could one day allow. So far, most work is still heavily centered on video games, but many fields – such as health care, education, military simulation and training, and architecture – are already using it for professional purposes.

For now, most of these devices operate independently from one another, rather than as a whole ecosystem. What would happen if we combined haptic (touch), smell, taste, visuals and geospatial information at the same time? And then what if we add in a virtual companion to share the experience with?

Unfortunately, it’s common for new technology to be met with fear, or portrayed as dangerous – as in movies like “The Matrix,” “Her” or “Ex Machina,” where people live in a dystopian virtual reality world, fall in love with their computers or get killed by robots designed to be indistinguishable from humans. But there is great potential, too.

One of the most common questions we get is about the potential misuse of our research, or if it is possible for the computers to attain a will of their own – think “I, Robot” and the “Terminator” movies, where the machines are actually built and operating in the physical world. I would like to think that our research as a community will be used to create incredible experiences, fun and engaging scenarios, and to help people in their daily lives. To that end, if you ask any of our characters if they are planning to take over the world, they will tease you and check their calendar out loud before saying, “No, I won’t.”

Iván Gris, Post-Doctoral Fellow, Technology and Commercialization Partnership, University of Texas at El Paso.

This article first appeared on The Conversation.