Natural input on film


Natural input is the future of interfaces – from air gestures to facial gestures, from touch input to voice input. Apple appears to be pursuing a natural input strategy across the evolution of its products – from iOS gestures and Siri voice commands to patent filings for facial gesture recognition that would let software respond to a user’s emotional state.

Hardware manufacturers are seeking new ways to differentiate their products as we move away from abstracted towards more direct ways of interacting with applications and systems. This shortens the learning curve and enables more intuitive and efficient modes of interaction. Early mouse users had to be taught to move the mouse on a table top rather than placing it directly on the screen. Leap Motion’s system evolved from its inventor’s frustration with the arcane combinations of selections and mouse clicks needed to shape 3D models on screen – manipulations that would take seconds with a piece of clay.

What other forms of input might the future hold? How can emerging technologies be applied to create natural input interfaces?

If we make the experience of interacting with computing devices closer to how we interact with people, animals and physical objects, we move towards a more cohesive and natural model of interaction. This is a model where we continue to point at, grab and move objects on screen as if they were physical artifacts, or speak to interfaces as if they were obedient servants. It is also a model that embraces a more subtle, nuanced language of non-verbal communication – tone of voice, posture and even genetic identifiers.

Examining interfaces in science fiction, the studio has created a ‘future ethnographic study’* and analysis of how we might use natural input interfaces tomorrow.

Posture

Star Wars, 1977. [Film] George Lucas, USA: 20th Century Fox

The practice drone tracks the user’s posture, looking for undefended parts of the body. It may use a motion sensor to target moving objects unless line of sight is blocked by bright light sources.

Voice

Blade Runner, 1982. [Film] Ridley Scott, USA: Warner Brothers

The user interacts with a system for examining three-dimensional photographs via voice input. The user can ask the system to pan, zoom in on and enhance areas of the photograph, and can also request a print of a selected area.

Emotion (including engagement, eye gaze, paralanguage, clothing)

Scott, T. (2012) Prometheus Viral – “Quiet Eye”. [video online]

Software analyses video input of a user for facial gestures, gaze aversion, voice set and voice quality. The combination of inputs provides an analysis of the emotional state of the user, including honesty, coercion, excitement, malintent and anxiety. The software also analyses clothing and accessories worn by the user, which may imply other character traits; in this example it detects a cross pendant worn by the subject, which may signify some of the subject’s beliefs. Facial recognition confirms the identity of the user against a database of similarly named individuals.
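To make the fusion of inputs concrete, here is a minimal sketch of how such a system might combine its modality readings into a single estimate – the modality names, scores and weights below are hypothetical illustrations, not anything specified in the film:

```python
# Hypothetical multimodal fusion sketch. All modality names, scores and
# weights are illustrative assumptions, not a real system or API.
from dataclasses import dataclass


@dataclass
class ModalityReading:
    name: str      # e.g. "facial_gesture", "gaze_aversion", "voice_quality"
    score: float   # normalised 0.0-1.0 estimate of, say, anxiety
    weight: float  # contribution of this modality to the fused estimate


def fuse(readings: list[ModalityReading]) -> float:
    """Weighted average of the per-modality estimates."""
    total_weight = sum(r.weight for r in readings)
    if total_weight == 0:
        return 0.0
    return sum(r.score * r.weight for r in readings) / total_weight


readings = [
    ModalityReading("facial_gesture", 0.7, 0.4),
    ModalityReading("gaze_aversion", 0.9, 0.3),
    ModalityReading("voice_quality", 0.5, 0.3),
]
print(f"estimated anxiety: {fuse(readings):.2f}")  # estimated anxiety: 0.70
```

A real system would of course derive each score from a classifier over the raw video and audio; the weighted average merely illustrates the idea of several weak signals combining into one judgement.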

Blade Runner, 1982. [Film] Ridley Scott, USA: Warner Brothers

The hardware helps an interviewer analyse a subject’s emotional response through involuntary iris dilation and changes in respiration.

2001: A Space Odyssey, 1968. [Film] Stanley Kubrick, USA: Metro-Goldwyn-Mayer

The system uses visual and voice input to interface with the user and analyse their emotional state via voice set and quality, posture and respiration. The system also uses visual input to lip read when audio is unavailable.

Genetics

Gattaca, 1997. [Film] Andrew Niccol, USA: Columbia Pictures

An electronic barrier authenticates users via DNA analysis against a database of approved users.

Minority Report, 2002. [Film] Steven Spielberg, USA: 20th Century Fox

Retina sensors identify the user to present targeted advertising messages and seamlessly approve payments for services such as public transportation.

Gesture

Minority Report, 2002. [Film] Steven Spielberg, USA: 20th Century Fox

The user stands in a designated zone to interact with the system, manipulating two-dimensional projections on a virtual panoramic screen. This is a ‘work station’ for focussed, non-casual computing. Perhaps the situated aspect is due to the complexity and expense of the technology, or the need for security and access to the database. The ‘light gloves’ worn by the user seem to indicate ‘master user’ status, preventing interference from others in the space.

Reportedly, the actor found filming these scenes so tiring that he needed to take breaks after only a few minutes.

Iron Man 2, 2010. [Film] Jon Favreau, USA: Paramount Pictures

The user manipulates three-dimensional projected gestural interfaces distributed throughout a dedicated workspace.

Prometheus, 2012. [Film] Ridley Scott, USA: 20th Century Fox

The user interacts with a projected gestural interface in an attentive seated position. The complex layering may be a special setting for android users with faster cognitive processing capability than humans.

*This post is inspired by Joe Malia’s great study of video conference systems on film.