Week 5: Mixed Reality; Virtually Augmented

We are bombarded with electromagnetic radiation, oscillated by particle-wave vibration and charged with kinesthetic feedback loops. The Macro & Micro worlds are unified, and we are all part of a collective unconscious experiencing itself subjectively. I gotta lay off the Electro-delics… But is the experience of ‘reality’ the construction of a sensory illusion based on the relation of mass to intelligence? Why then do we lose ourselves in Virtuality and bring Digital Augmentation ever closer to our hands and eyeballs?
The machine-mind interface has its crude beginnings…

 

Let’s skip the metaphysical debates about the virtual, augmented and simulated versions of reality. Instead I will look at the concepts and implications that technological convergence & augmentation have for various industries. I’m deeply fascinated with augmentation as the most prescient development in mind-machine interfacing, so I’ll also look at some of the emerging technologies and make some near-future predictions about this exciting field. This post’s a big one.

Historically, Virtual Reality has not penetrated the mainstream market for a few reasons: the set-ups are often large, cumbersome and expensive; they require narrow, specialised training to implement, develop and operate; and latency issues can cause disorientation. So along the way we have seen some Failed Virtual Reality Devices (Goldmeier, 2009), but they paved the way for ideas in Augmented Reality. Today virtual reality occupies two realms: a) advanced room-sized training simulators, such as those for military personnel with arrays of worn sensors and pilots in haptic-simulated cockpits (and equivalents in other industries); and b) immersive gaming with Head Mounted Displays like the Oculus Rift, where games can be designed or hacked for stereoscopy (Bonus: check out the Grandma Freaking Out while wearing one). Below is the almost ultimate combination of both worlds…

If Virtuality is the enclosure of the senses, then Augmentation (Reality v1.5) is confluent, dynamic layering.
Broadly speaking, there are three types of Augmentation technology:

1) Mobile devices as ‘windows’ that mediate and enhance physical objects/places (e.g. tablets)
2) Room-sized set-ups with projectors and sensors where the body directly controls the interaction (e.g. Microsoft Kinect)
3) Hybrid or wearable systems like Head Mounted or Portable Displays (e.g. Google Glass)

A lot of the video material here comes from these highly recommended sources: Augmented Engineering, Augmented Reality Overview and ARLab. My mind was blown by how much incredible work is being developed. I’ll share some of it here.

1) Mobile devices carry a plethora of powerful technologies (touch, camera, GPS, GPU, wireless, motion sensors, etc.) that provide the perfect ecology for augmented displays. Commonly this involves GPS data or using the camera to ‘window’ real-world objects and places with layers of 2D information or 3D models. This in-depth Interview with Qualcomm’s Vice-President of Development demonstrates R&D with publishing (4:30) and toys (11:31). The key points here are that augmented apps are branded and dependent on ‘Context Awareness’ (8:40) and well-programmed algorithms and databases (11:00). Are augmented desktop displays the future?
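
To make the ‘window’ idea concrete, here is a minimal marker-based sketch in Python, using the legacy cv2.aruco functions from opencv-contrib-python (versions before 4.7; newer releases moved to an ArucoDetector class). It only illustrates the general technique; commercial SDKs like Qualcomm’s use their own recognition pipelines and content databases.

```python
# Minimal marker-based AR sketch: detect an ArUco marker in the webcam feed
# and overlay a text label at its location. Requires opencv-contrib-python (<4.7).
import cv2

dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters_create()

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=parameters)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        for marker_corners, marker_id in zip(corners, ids.flatten()):
            x, y = marker_corners[0][0]  # top-left corner of the marker
            # In a real app this label would come from a content database
            cv2.putText(frame, f"object #{marker_id}", (int(x), int(y) - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("AR window", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```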

2) Embodied interaction is at the forefront here. The Kinect has been used by companies, prosumers and artists to reconfigure and expand interactions in a fixed space. This can include extending existing surfaces or displays, using Body Gestures to Control a UI with depth sensing (see LightSpace below), and projection-mapping visuals onto fixed or moving objects. The potential for this technology is limited only by the refinement of the input data, programming skill and one’s imagination. I want a Dedicated Holo-room.
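
To give a flavour of how depth data becomes input, here is a toy sketch where a simulated depth frame stands in for a Kinect feed (a real set-up would read frames through a driver such as libfreenect or the Kinect SDK, which is not assumed here): a large enough blob of pixels within arm’s reach counts as a ‘push’.

```python
# Toy depth-gesture sketch: a simulated depth frame (millimetres) stands in
# for a Kinect feed; a blob of pixels closer than a threshold counts as a "push".
import numpy as np

PUSH_THRESHOLD_MM = 600      # anything nearer than 60 cm
MIN_BLOB_PIXELS = 500        # ignore tiny specks of sensor noise

def detect_push(depth_frame_mm: np.ndarray) -> bool:
    """Return True if a sufficiently large region is within push range."""
    near_pixels = np.count_nonzero(
        (depth_frame_mm > 0) & (depth_frame_mm < PUSH_THRESHOLD_MM))
    return near_pixels >= MIN_BLOB_PIXELS

# Simulated 480x640 frame: background at ~2 m, a "hand" patch at ~50 cm
frame = np.full((480, 640), 2000, dtype=np.uint16)
frame[200:260, 300:360] = 500
print("push detected:", detect_push(frame))   # -> True
```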

3) Hybrid systems are harder to pin down than either fixed or mobile augmentation, as they usually combine both in developing wearable technology. At present these include Clear Screen Displays (e.g. on Car Windshields (Hollister, 2012)) and Head Mounted Displays like Google Glass, which function as a Heads Up Display of real-time information streams from internet/sensor-enabled apps (still limited), with voice activation and life-logging. As always there are Privacy Concerns (Smith, 2013), even without the proposed facial recognition and the potential for advertisements. Such technology was pioneered by the ‘father of AR and wearable computers’, Steve Mann, and his EyeTap.


While there are Impending Social Consequences (Havens, 2013) and (more than) 7 Ways AR Will Improve Our Lives (Drell, 2012), convergence is at the heart of these ideas for augmentation: not only in the combination of an array of technologies and sensor inputs, but also in ‘context aware’ firmware that can filter information from convergent databases of reference points and integrate various applications. At this early stage, building a vast collective database of referent objects is an impossible feat, let alone coding complex algorithms to filter context-aware sensor input (this would probably require a combination of eye-tracking, direct neural interfacing or some other yet-to-be-developed technology), so company-specific databases are here to stay.
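
To make ‘context awareness’ a little more concrete, here is a hypothetical sketch of how an app might prune a company-specific database of reference objects using coarse context signals (a location tag derived from GPS, a roughly detected object category, the time of day) before attempting any expensive image matching. All the names and fields are invented for illustration.

```python
# Hypothetical context filter: narrow a (company-specific) reference database
# using coarse context signals before any expensive image matching is attempted.
from dataclasses import dataclass

@dataclass
class ReferenceObject:
    name: str
    location_tag: str    # coarse GPS cell / venue the object lives in
    category: str        # e.g. "poster", "toy", "packaging"
    active_hours: range  # hours of the day the overlay is relevant

DATABASE = [
    ReferenceObject("cinema poster", "city-centre", "poster", range(9, 24)),
    ReferenceObject("cereal box promo", "supermarket", "packaging", range(8, 22)),
    ReferenceObject("museum exhibit A", "museum", "poster", range(10, 17)),
]

def candidates(location_tag: str, detected_category: str, hour: int):
    """Return only the reference objects worth matching in this context."""
    return [obj for obj in DATABASE
            if obj.location_tag == location_tag
            and obj.category == detected_category
            and hour in obj.active_hours]

print([o.name for o in candidates("city-centre", "poster", hour=20)])
# -> ['cinema poster']
```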

Now here is a look at some of today’s developing technologies that show promise for AR:

Eye Tracking: there are Quite A Few Uses (Rimerman, 2010) for eye tracking already: supplementing the keyboard/mouse interface and helping disabled people, website heat-maps, glasses-free 3D displays, control over smartphone functions (e.g. screen on/off, scrolling, media control) and vehicle safety. I see the potential for a revolutionary GUI design that integrates ‘clicks’ (e.g. by winking) into a HUD on transparent screens/projections for seamless, natural control. Perhaps like this Early Prototype (O’Brien, 2011).
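
As a rough sketch of how wink ‘clicks’ might work at the software level, assuming a tracker’s SDK already streams gaze coordinates and per-eye open/closed flags (real SDKs differ, so treat the sample format as hypothetical):

```python
# Sketch of gaze-plus-wink selection over hypothetical tracker samples.
# Each sample: (x, y, left_eye_open, right_eye_open); a real eye tracker's
# SDK would supply these at 30-120 Hz.
WINK_FRAMES = 3   # one eye closed for at least this many consecutive samples

def find_wink_clicks(samples):
    """Yield (x, y) gaze positions where a deliberate wink (exactly one eye
    shut) is held long enough to count as a click."""
    run = 0
    for x, y, left_open, right_open in samples:
        winking = left_open != right_open   # exactly one eye closed
        run = run + 1 if winking else 0
        if run == WINK_FRAMES:
            yield (x, y)

stream = [
    (100, 200, True, True),
    (310, 180, True, False),   # right eye closes over a button...
    (312, 181, True, False),
    (311, 182, True, False),   # ...held for 3 samples -> click
    (400, 300, True, True),
]
print(list(find_wink_clicks(stream)))   # -> [(311, 182)]
```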

3D Gesture Chip (Leber, 2012): uses an electrical field to accurately measure hand gestures in three dimensions. These chips could be embedded in smartphones, watches and other devices, opening up a range of possibilities for hands-free GUI control at very low power (around 90% less than camera-based tracking). The Leap Motion controller uses IR cameras to demonstrate this mobile potential very well:
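
The chip handles the sensing; the application still has to turn a short trail of (x, y, z) hand positions into a gesture. A minimal sketch of that step, assuming the sensor already delivers such coordinates:

```python
# Minimal swipe classifier over a trail of (x, y, z) hand positions (metres),
# assuming the gesture chip / Leap-style sensor already gives us coordinates.
def classify_swipe(trail, min_travel=0.15):
    """Label the dominant axis of motion as a swipe direction, or None."""
    dx = trail[-1][0] - trail[0][0]
    dy = trail[-1][1] - trail[0][1]
    dz = trail[-1][2] - trail[0][2]
    moves = {"right" if dx > 0 else "left": abs(dx),
             "up" if dy > 0 else "down": abs(dy),
             "push" if dz < 0 else "pull": abs(dz)}
    direction, travel = max(moves.items(), key=lambda kv: kv[1])
    return direction if travel >= min_travel else None

# Hand moves ~20 cm to the left with little vertical/depth change
trail = [(0.30, 0.10, 0.40), (0.22, 0.11, 0.41), (0.10, 0.10, 0.40)]
print(classify_swipe(trail))   # -> 'left'
```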

Mind Control: by measuring the electrical signals of neurons, EEG readings (perhaps combined with other sensors) can be used to directly control devices without movement, touch or voice activation, particularly for the disabled. Here are the crude beginnings of the neural interface. Apart from development and use in neuroscience, there are many Consumer Brain-Computer Interfaces, and even Samsung are working on a Tablet Controlled by EEG (Young, 2013).
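
As a very rough sketch of the signal-processing side, assuming a consumer headset’s SDK already hands us a one-second window of raw EEG samples: compute the power in the alpha band (8-12 Hz) and fire a command when it crosses a per-user threshold (relaxing with eyes closed raises alpha power, a common demo trigger).

```python
# Toy EEG trigger: alpha-band (8-12 Hz) power from a 1-second window of samples.
# A consumer headset SDK would supply the raw samples; here they are simulated.
import numpy as np

FS = 256  # samples per second (typical for consumer EEG headsets)

def alpha_power(window: np.ndarray, fs: int = FS) -> float:
    """Mean spectral power in the 8-12 Hz band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    band = (freqs >= 8) & (freqs <= 12)
    return float(spectrum[band].mean())

# Simulated signals: noise only vs. noise plus a 10 Hz "relaxation" rhythm
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
noise = rng.normal(0, 1, FS)
relaxed = noise + 5 * np.sin(2 * np.pi * 10 * t)

threshold = 2 * alpha_power(noise)        # would be calibrated per user
print("trigger:", alpha_power(relaxed) > threshold)   # -> True
```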

Volumetric Displays: project a light field within a given volume to represent objects in three dimensions. Still under development, approaches include the scattering, emission or relaying of illumination with lasers, LEDs or plasma, typically on a rotating element enclosed in clear material. Here Microsoft have developed a Touch-Interactive Volumetric Display (Hollister, 2012):
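
For a sense of how a swept-volume display turns a 3D model into something a spinning surface can show (one common approach, not necessarily what Microsoft’s prototype does), here is a sketch that bins voxels into angular slices, one per rotation angle.

```python
# Swept-volume slicing sketch: resample a voxel cloud into angular slices so a
# rotating screen can flash each slice as it sweeps through that angle.
import math

def slice_index(x: float, y: float, num_slices: int) -> int:
    """Map a voxel's (x, y) position to the rotation slice that displays it."""
    angle = math.atan2(y, x) % (2 * math.pi)
    return int(angle / (2 * math.pi) * num_slices)

def build_slices(voxels, num_slices=360):
    """Group voxels (x, y, z, brightness) by the angular slice they belong to."""
    slices = [[] for _ in range(num_slices)]
    for x, y, z, brightness in voxels:
        r = math.hypot(x, y)   # radial position on the spinning surface
        slices[slice_index(x, y, num_slices)].append((r, z, brightness))
    return slices

# A few voxels of a toy model
voxels = [(0.1, 0.0, 0.2, 255), (0.0, 0.1, 0.3, 200), (-0.1, 0.0, 0.2, 180)]
slices = build_slices(voxels)
print(sum(1 for s in slices if s), "of 360 slices are non-empty")   # -> 3
```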

Though I doubt we will have scenery-sized projections/hallucinations, with some of the aforementioned technologies converged, the future of Augmented Reality might resemble this video:

AR Mann – to the rescue of reality! (Image: ‘Evolution of WearableComputing…’ [cropped] – Glogger, Wikimedia Commons)

References
Drell, Lauren (2012) ‘7 Ways Augmented Reality Will Improve Your Life’ Mashable <http://mashable.com/2012/12/19/augmented-reality-city/>
Havens, John (2013) ‘The Impending Social Consequences of Augmented Reality’ Mashable <http://mashable.com/2013/02/08/augmented-reality-future/>
Hollister, Sean (2012) ‘Pioneer’s laser-projected car HUD lets you drive like Robocop’ The Verge <http://www.theverge.com/2012/5/9/3010623/pioneers-laser-projected-car-hud-lets-you-drive-like-robocop>
Leber, Jessica (2012) ‘A New Chip to bring 3D gesture control to Smartphones’ MIT Technology Review <http://www.technologyreview.com/news/507161/a-new-chip-to-bring-3-d-gesture-control-to-smartphones/>
O’Brien, Terrence (2011) ‘Eye-tracking microdisplay delivers Terminator Vision’ Engadget <http://www.engadget.com/2011/04/20/eye-tracking-microdisplay-delivers-terminator-vision-distracts/>
Rimerman, Deane (2010) ‘6 Ways Eye Tracking is Changing the Web’ ReadWrite <http://readwrite.com/2010/08/09/6_ways_eye_tracking_will_redefine_the_web>
Smith, Paul (2013) ‘Google Glass in privacy battle’ Australian Financial Review <http://www.afr.com/p/technology/google_glass_in_privacy_battle_Fb7F66HUH9YEvdGhoj5DKL>
Young, Susan (2013) ‘Samsung Demos a Tablet Controlled by your Brain’ MIT Technology Review <http://www.technologyreview.com/news/513861/samsung-demos-a-tablet-controlled-by-your-brain/>

Media
Glogger ‘Evolution of WearableComputing and Augmediated Reality…’ Wikimedia Commons <http://commons.wikimedia.org/wiki/File:Evolution_of_WearableComputing_and_AugmediatedReality_in_everyday_life.png>
Hollister, Sean ‘Microsoft builds a 3D Hologram you can touch’ The Verge <http://www.theverge.com/2012/1/7/2688635/microsoft-research-vermeer-3d-hologram-
Leapmotion ‘Introducing the Leap Motion’ Youtube <http://www.youtube.com/watch?v=_d6KuiuteIA>
Manycure ‘Augmented Reality: SC Clip’ <https://soundcloud.com/manycure/manycure-augmented-reality-sc>
Matsuda, Keiichi ‘Augmented City 3d’ Vimeo <https://vimeo.com/14294054>
Microsoft Research ‘Applied Sciences Group: Interactive Displays’ Youtube <http://www.youtube.com/watch?v=oGa1Q7NvsI0>
Microsoft Research ‘LightSpace from Microsoft Research’ Youtube <http://www.youtube.com/watch?v=xx5kBqxyaHE>
Missfeldt, Martin (2013) ‘How Google Glass Works’ <http://www.brille-kaufen.org/en/googleglass/>
Playstation 3 ‘Most Insane Immersive Experience Ever’ Youtube <http://www.youtube.com/watch?v=VrgWH1KUDt4&playnext=1&list=PL556954B159B32B06>
Rivot, Paul ‘My 90 year old grandmother tries the Oculus Rift’ Youtube <http://www.youtube.com/watch?feature=player_embedded&v=pAC5SeNH8jw>
