“Transformation” is no longer just a buzzword. In the world of technology, transformation is a daily occurrence, impacting our lives in previously unforeseen ways. At Distillery, we’re continually fascinated and energized by the countless ways these transformations are changing what’s possible — not only for the apps and websites we’re building, but for how we interact with our environment during our day-to-day lives, and how businesses and services are continuing to evolve.
Twenty years ago, few could’ve imagined that we’d now have telephones that could double as augmented reality devices and personal assistants, endless piles of data collected from even our most mundane actions on our mobile devices and laptops, thermostats and light switches we talk to, pavement that reports real-time traffic data, and the ability to diagnose depression based simply on tracking which Instagram filters you favor. While we still don’t live in the world of the Jetsons or The Matrix, we’re getting closer every day.
With transformation as our theme, we’re kicking off a blog series exploring some of the key arenas in which transformation is daily changing our world. Welcome to the first installment: augmented reality.
First, let’s get clear on how augmented reality is distinct from virtual reality. In overview: virtual reality (VR) immerses users in a fully simulated, computer-generated environment, while augmented reality (AR) overlays digital content and information onto the user’s view of the real, physical world.
VR has been in the public eye for decades. Flight simulators, arcade games, VR goggles, and the 1999 film The Matrix ushered in the exciting concept of a fully simulated world. To much of the modern world’s population, however, AR feels entirely new. The reality is that while AR has been quietly developing for decades, the technological advances of the past five years have at last pushed it into the mainstream and made it available for widespread public consumption. Through the seemingly endless magic of their smartphones, the general public now has ready access to AR; as a result, businesses and services are exploring how they can use AR to enhance their customer or user experience.
Before we examine AR’s present and future, however, it’s illuminating to look back at its roots. Where and how did it originate? And what did it take for us to understand its promise and harness its potential?
The history of AR is necessarily intermingled with that of VR. Precursors to AR were around as far back as the 1950s, when pilots wore heads-up displays (HUDs) that projected simple flight data into their line of sight. Then, in 1957, cinematographer Morton Heilig built the Sensorama Simulator, a 3D video machine that allowed users to ride a virtual motorbike down the roadway, not only seeing the sights and hearing the sounds, but feeling the wind and the vibrations of the road and smelling the pavement and the scenery. He patented the Sensorama in 1962, envisioning countless practical applications for training students, workers, and the military. But the invention failed to take off. The corporations he pitched proved unready for Heilig’s vision. And of the five films he produced to showcase the Sensorama, only the one featuring a virtual belly dancer, the tantalizing sounds of finger cymbals, and the smell of cheap perfume drew any real interest from potential investors. In the end, the Sensorama’s only success was as a short-lived arcade game.
Heilig simultaneously invented the Telesphere Mask, patented in 1960. It was the first-ever head-mounted display (HMD), providing a full-color 3D viewing experience and stereo sound at a time when the majority of households still had black-and-white TVs. Despite being the earliest precursor of today’s VR and AR headsets, Heilig’s Telesphere Mask was also a failure.
In other words, when these pioneering VR and AR devices first came on the world stage, the world stage shooed them off. Heilig was quite literally ahead of his time. As of press time, the original Sensorama machine is still listed for sale on Morton Heilig’s official webpage.
In 1965, professor and computer scientist Ivan Sutherland was the first to clearly conceptualize AR when he described the “ultimate display,” a “room within which the computer can control the existence of matter.” The immersive displays that he described would “give us a chance to gain familiarity with concepts not realizable in the physical world.” Sutherland had already made his mark in 1962 with Sketchpad, widely acknowledged as CAD’s forerunner. Then, in 1968, he developed what came to be known as the “Sword of Damocles.” An incredibly heavy and intimidating-looking HMD, it had to be suspended from the ceiling of the lab to be worn by the user. It displayed stereoscopic output from a computer program comprising a virtual environment of very basic vector images; to shift perspective, the user moved his or her head. The images were projected on top of what users saw through their otherwise clear lenses, effectively uniting the virtual and physical worlds.
The next big AR mile marker was computer artist Myron Krueger’s work on creating interactive environments (e.g., 1969’s Glowflow and 1970’s Metaplay). Among his most notable projects, Videoplace (1975) was an “artificial reality laboratory” notable for using a system of cameras, projectors, hardware, and onscreen silhouettes to allow users in separate rooms to interact with one another.
Sutherland and Krueger were the visionaries who showed the scientific and artistic communities the baseline of what was possible with AR. People began to take notice.
In 1978, while still in high school, original cyborg Steve Mann began experimenting with wearable AR computer technology. When Google Glass started selling its prototype in 2013, Mann had already been wearing some form of computerized eyewear for 35 years. The EyeTap device he developed in 1999 is a seeing aid that — by enabling the eye to function as both a display and a camera — allows users to see the world in high-dynamic-range (HDR) vision. In a show of ultimate commitment, Mann had his own invention permanently attached to his skull.
AR first arrived on our TV screens in 1982, when real-time weather radar images were superimposed on virtual maps of the earth to create the AR weather visualizations we know so well today. Finally, in 1990, AR was bestowed with its modern-day name, when Boeing researchers Thomas Caudell and David Mizell coined the term “augmented reality” to describe their proposed solution for giving workers HMDs that projected aircraft assembly wiring instructions onto reusable boards.
From there, exploration of AR’s practical possibilities exploded. In 1992, Columbia University’s first Knowledge-based Augmented Reality for Maintenance Assistance (KARMA) experiment took the form of an HMD that transmitted graphics and callouts to help users with laser printer maintenance. In the same year, the US Air Force developed the first fully immersive AR system: operators wore an upper-body exoskeleton that controlled robot arms and binocular magnifiers that made the robot arms appear to be the operators’ arms. In 1994, the University of North Carolina at Chapel Hill developed an AR medical application that enabled physicians to observe fetuses inside pregnant patients. CyberCode — the first AR system that used 2D visual markers to create landmarks — was developed by Sony researchers in 1996. In 1998, Sportvision cast the first yellow first-down marker onto the field during a live NFL game.
Through all this development, however, AR had remained largely out of reach for most developers; regular people simply couldn’t afford the technology. That finally changed in 1999, when Hirokazu Kato created the open-source ARToolKit, a software library that uses video tracking to calculate the camera’s position and orientation relative to physical markers in real time, letting developers align virtual imagery with the real world using nothing more than an ordinary camera. From that time forward, AR innovations arose in countless areas, including gaming (2000’s ARQuake), apps (Wikitude’s 2008 travel app was first to let you point your phone’s camera at a location to obtain useful information about it), mobile (2004 saw the first AR system on a consumer mobile phone), toys (LEGO unveiled DIGITAL BOX in 2009), and the web (2009’s FLARToolKit at last gave web developers the ability to display AR content). Google Glass was unveiled in 2012, letting the world more clearly glimpse the potential for wearable AR technologies. Given controversy about privacy concerns, however, the initial launch sputtered rather horrifically. Glass has only recently found new life in enterprise applications in health care, manufacturing, and journalism.
To tie our history together with our present, let’s examine the lesson Google’s leadership learned rather expensively back in 2013 with Google Glass, because it was the same lesson that poor, underappreciated Heilig (remember the Sensorama and the Telesphere Mask?) learned back in the early ’60s: AR only succeeds when the general public is ready to accept it, when individuals and businesses can see practical applications for it that make financial sense, and when the general public can afford the baseline technology required to use it. Today, that’s where we at last seem to be. Only 50+ years too late for poor Heilig.
Niantic’s 2016 release of the award-winning Pokémon Go app launched AR permanently into the public eye, convincing unforeseen hordes to go outside seeking invisible creatures. Today, AR applications are proliferating, and the possibilities are truly endless. For your edification and inspiration, a selection of illustrative examples:
Disney is taking AR in several directions, including the ability to bring characters from your coloring book to 3D life, a Magic Bench that lets you sit next to your favorite Disney personalities, and interactive experiences that let you complete important missions for Star Wars’ Rebel forces.
Beyond gaming, AR apps allow you to put dinosaurs in your own backyard, create your own private snowstorm, or “try on” tattoos.
Given the ever-expanding capabilities of consumer electronics, the past five years have seen countless AR innovations. The existing technology in today’s most popular smartphones already serves as a sufficient platform for many AR apps, but devices will continue to evolve along with newly available technology (e.g., Google’s Tango platform).
Beyond smartphones, AR HUDs, HMDs, and smart glasses are under development or already available in nearly every market segment. There are sports-themed models, enterprise-focused models, and consumer-focused models. There are lower-cost models that let you turn your smartphone into an AR device. Most AR headsets and smart glasses are still more expensive than the average consumer can afford, however. Fortunately, the existing AR capabilities of smartphones should keep the public interested while AR product manufacturers work to address the concerns about battery life, connectivity, and app availability that keep many consumers from being ready to invest. To expedite software development, many manufacturers offer SDKs directly on their product sites. In addition, Apple has released an AR SDK to developers, and AR SDKs are widely available for Android development (including our old friend ARToolKit).
To be successful, today’s businesses need to identify and capitalize on the “next big thing” capturing the public’s interest. AR is absolutely here to stay, and it holds massive financial promise: technology industry advisors Digi-Capital’s 2017 report on AR/VR growth predicts that by 2021, AR could take $83B of a total $108B combined AR/VR market. And poor Heilig, who could elicit only quarters from the arcade users of his ground-breaking Sensorama, must be rolling over in his grave.
Interested in learning more about the potential of integrating AR capabilities into your app idea? Let us know!
Sergei Prokopenko, Distillery’s Chief Information Officer, has been a member of Distillery’s technical staff since 2009. As CIO, Sergei provides leadership for the continued development of an innovative, robust, and secure information technology environment throughout the company. Prior to becoming CIO in 2015, he was one of Distillery’s lead software developers, responsible for developing and maintaining IT solutions of varying scale.