Hard as it is to believe, the film Johnny Mnemonic is twenty years old. The irredeemably cheesy action thriller mixed urban settings and an Orwellian techno-future in a medium/low-budget romp that today appears more than a little campy. But look a little closer, and it actually seems far more prophetic than its Keanu Reeves-starring cousin, The Matrix.
Facebook last year purchased VR headset maker Oculus for $2 billion. “Dataglove” technology has been commonplace for over two decades now, and even Microsoft’s Xbox lets you manipulate objects and play games using gestures and body motions via its Kinect add-on. How long will it be before the headset/dataglove/immersive 3-D environment from the movie (and the William Gibson short story of the same name, on which the film is based) becomes a reality, rather than just the wild imaginings of some academic researcher? Jaron Lanier, the computer scientist most famously identified with the concept of “virtual reality,” is credited with popularizing the term after becoming the first entrepreneur to sell dataglove technology commercially, almost a decade before the film version of Johnny Mnemonic was released.
If we step back just before that time, to 1981, when William Gibson’s eponymous short story was published, the idea of a “virtual reality” was just a gleam in the eyes of a few science fiction authors, tech enthusiasts and future visionaries. The technology needed to make Gibson’s imaginings work was barely conceivable, let alone possible. The movie Blade Runner was about to be released, taking the sci-fi fantasy production design of Star Wars and extrapolating it to the Los Angeles of 2019. But even in Blade Runner, the concept of virtual reality didn’t exist (although the ideas of biohacking and human memory storage, both of which feature prominently in Johnny Mnemonic, did). The computers of the film were only slightly more sophisticated than the technology of the early 1980s allowed, particularly in terms of memory, speed, computing power and voice recognition. Aside from Blade Runner’s ‘replicants,’ the ability of machines to run “A.I.,” or artificial intelligence, and to create synthetic “worlds” (as opposed to real-life cybernetic organisms) appeared to be rather limited.
It’s worth noting that Douglas Trumbull, the man responsible for most of the special effects in Blade Runner, went on afterward to direct the film Brainstorm, with Christopher Walken and Natalie Wood. That movie told the story of a technology company inventing headsets much like those in Johnny Mnemonic, the same sort of gear later evangelized by Jaron Lanier. Although the headsets in the film created a simulation of reality through video sequences, the experiences were not interactive: there were no datagloves, and the viewer merely experienced pre-made content passively, albeit in a hyper-realistic format vivid enough to produce intense physiological reactions. The film was not a commercial success and was nearly scuttled when Natalie Wood died during its production.
While these movies were being made, Jaron Lanier worked at Atari with Thomas Zimmerman, the inventor of the dataglove. When Atari split into two separate companies, Lanier left to focus on a programming language for virtual reality in a new venture he called VPL. Unable to find commercial success with the development tools and the early dataglove technology, Lanier sold the venture to 3-D graphics pioneer Silicon Graphics and continued to do research as a consultant for them into the 1990s and beyond.
So far, the commercial applications of dataglove-style manipulation have surfaced mainly in the “tele-immersion” and “telepresence” technology sold and marketed today by such companies as Polycom, HP and Cisco.
In the meantime, headset technology has become a reality, with products such as Google Glass, Google Cardboard and Samsung Gear VR actually coming onto the market in the wake of pioneering projects such as Virtuality, Sega VR and Nintendo’s Virtual Boy. Sony is planning a headset accessory for the PlayStation 4 called Project Morpheus (perhaps in a nod to Keanu Reeves’ better-known sci-fi franchise), while Google is backing a more ambitious effort called Magic Leap, which plans to “augment” reality by superimposing computer-generated imagery over the user’s view of the real world to provide extra information about people or their surroundings.
The third leg of the system that will likely enable the worlds seen in Johnny Mnemonic to become a reality is the software. From the early wireframe environments developed by Evans & Sutherland for the military in the 1960s to the real-time 3-D engines pioneered by John Carmack’s id Software (Carmack is now the Chief Technology Officer of Oculus) for first-person shooters like Doom and Quake, the ability of machines to generate vast, complex 3-D worlds instantaneously has advanced nearly to the point where computer-generated human figures cannot be distinguished from live actors. Real-world spatial and physical conditions have been replicated so closely that only subtle atmospheric or lighting artifacts prevent people from mistaking artificial renderings for the real thing.
The image of Johnny Mnemonic wearing a head-mounted display and two datagloves may be the closest approximation of an average Internet user (or white-collar professional) of the year 2030 that the movies have dreamed up in the last two decades. On the twentieth anniversary of the movie’s release, it is worth noting that its imagery, and even its storyline, which shrewdly predicts the near-impossibility of true data privacy, may turn out to have more foresight than that of any intervening cinematic sci-fi depiction; the film may ultimately prove more prescient than the viewers who saw it in theaters ever dared to imagine.