
Outfitting real-time cockpit environments with adaptive perceptual skins

Rewriting perception itself through motion, interaction, and cinematic systems
Overview

Biologically Encoded Augmented Reality – Systems Integration and Geometric Reform reimagines AR not as a visual overlay problem, but as a perceptual systems challenge. Traditional AR floods the center of vision, competing for attention and fragmenting experience. This work asks a more radical question: what if AR expanded perception instead of interrupting it?

 

The insight was biological. Human vision already processes meaning in parallel—especially through motion-sensitive pathways in the far periphery. By targeting low-level mechanisms such as motion-induced position shift and peripheral edge reconstruction, semantic information can be encoded directly into perception itself, bypassing conscious focus entirely.

 

To prove this, a real-time, adaptive “perceptual skin” was built using multi-camera environmental capture, stabilized signal pipelines, and GPU-driven display systems that choreographed motion stimuli precisely into peripheral vision—without ever pulling gaze.

 

The result reframes AR from interface design to perceptual architecture.

Not adding information to reality—but teaching reality itself to speak.

Challenge

Designing for perception, not screens

Augmented reality has traditionally been framed as an overlay problem: more pixels, more layers, more UI competing for attention in the center of the frame. But human perception doesn’t work that way. Our visual system is already multiplexed—processing motion, context, and meaning far beyond conscious focus.

The challenge was radical: Could AR deliver complex, semantic information without ever entering the center of vision? Could motion itself become the interface—readable, expressive, and cinematic—without interrupting narrative flow or demanding attention?

This project set out to solve what felt impossible: expand perceptual bandwidth without adding visual clutter, and design an experience that feels less like “UI” and more like an invisible extension of human perception.

Motion is a language the body already understands

Instead of designing for attention, this work designs around it. The key insight: far-peripheral vision isn’t empty—it’s optimized for motion, rhythm, and change.

By leveraging low-level visual phenomena (motion-induced position shift, peripheral motion sensitivity), information can be encoded biologically, not symbolically. Characters, shapes, and meaning are no longer drawn—they are felt through motion paths that the visual system reconstructs automatically.
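
The project's actual stimuli are not reproduced on this page, but motion-induced position shift has a canonical laboratory stimulus: a Gabor patch whose carrier drifts while its envelope stays put, so the patch appears displaced in the direction of its internal motion. The sketch below (Python/NumPy, with purely illustrative parameter values, not the project's code) generates such a sequence.

```python
import numpy as np

def drifting_gabor(size=128, sigma=20.0, wavelength=24.0,
                   orientation_deg=0.0, speed_cyc_per_frame=0.05,
                   n_frames=60):
    """Frames of a Gabor patch with a drifting carrier and a static
    Gaussian envelope -- the classic motion-induced-position-shift
    stimulus (the envelope appears shifted toward the drift direction)."""
    ys, xs = np.mgrid[-size // 2:size // 2, -size // 2:size // 2].astype(float)
    theta = np.deg2rad(orientation_deg)
    u = xs * np.cos(theta) + ys * np.sin(theta)          # axis of carrier modulation
    envelope = np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))  # static window
    frames = []
    for t in range(n_frames):
        phase = 2.0 * np.pi * speed_cyc_per_frame * t     # drifting carrier phase
        carrier = np.cos(2.0 * np.pi * u / wavelength - phase)
        frames.append(0.5 + 0.5 * envelope * carrier)     # luminance in [0, 1]
    return np.stack(frames)

frames = drifting_gabor()
print(frames.shape)  # (60, 128, 128)
```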

This reframes interaction entirely. The “user” doesn’t click, tap, or look. They perceive. The interface disappears, replaced by a perceptual system that operates in parallel with conscious experience—exactly the kind of invisible, intuitive interaction ECXD strives to build.

Insight

Cinematic motion as an interactive system

The execution combined vision science with cinematic production pipelines to create a motion-driven perceptual codex:

  • Motion systems translated symbols and abstract forms into animated trajectories perceived through static apertures in far-peripheral vision.
     

  • Cinematic environments were captured using multi-camera rigs (pedestrian and automotive POVs), then composited with adaptive motion stimuli that blended seamlessly into real-world scenes.
     

  • Interactive perception was tested in immersive, multi-screen environments where participants identified symbols without shifting gaze—validating motion as a viable semantic channel.
     

  • Visual storytelling tools (After Effects, Premiere, GPU-driven display systems) were used not for spectacle, but for precision: timing, rhythm, and perceptual continuity.
     

This wasn’t just a prototype—it was a system, capable of scaling from abstract research to real-world cinematic environments where attention is precious and narrative flow matters.

Execution

A new grammar for immersive experiences

Developed as part of doctoral research at the MIT Media Lab, this work introduces a foundational shift in how immersive experiences can be designed:

  • It demonstrates that motion can carry meaning, not just emotion.
     

  • It reframes AR from “overlaying content” to orchestrating perception.
     

  • It opens new directions for cinematic UX where information lives in the margins—supporting story, emotion, and context without breaking immersion.
     

While rooted in research, the implications extend directly to ECXD’s mission:

content-forward interfaces, cinematic interaction, and immersive systems that feel effortless, human, and alive.

This project doesn’t just ask how we design better interfaces—it asks how we design experiences that disappear into perception itself.

Impact

Information floods the center of our visual field and often saturates the focus of our attention, yet there are parallel channels in the visual system constantly and unconsciously processing our environment.  There is a dormant potential to activate these channels to challenge the limits of perception.

 

This demonstration argues for the development and application of real-time, adaptive perceptual skins, projected onto the observer’s immediate environment and computationally generated to target and stimulate key visual processing pathways. By activating biological processes responsible for edge detection, curve estimation, and image motion, I propose that these perceptual skins can change the observer’s proprioception as they move through dynamic environments.


Stimulus projections to affect observer perception of self-motion through space-time [live field test, central channel, raw]. Sequence 1: computationally generating sunlight on a rainy day. Sequence 2: gimbal correction for observer proprioception.

[Demo videos: GREEN_1 field test, high cognitive load demos, Adventure Mode process, Hyperdrive process, Zero Blind Spot process]

A motion-based perceptual codex

ECXD competency: Motion Systems · Systems Thinking

The system translates symbols, shapes, and meaning into motion-induced perceptual events that appear in far-peripheral vision without gaze shift.

Key components:

  • Motion systems that encode semantic information into perceptual trajectories
     

  • Cinematic timing tuned to persistence of vision and perceptual thresholds
     

  • Multi-aperture motion choreography for complex symbol construction
     

  • Scene-adaptive integration that blends motion into real environments
     

Rather than rendering UI, the system orchestrates perception.
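
The translation step described in the components above is easiest to see sketched out. The following is a hypothetical, heavily simplified encoder (Python; every name and parameter is illustrative, not the project's implementation): it converts a symbol's stroke polyline into a time-ordered schedule of local motion vectors, one active aperture at a time, which a display loop would then render as moving texture inside otherwise static apertures.

```python
import numpy as np

def encode_symbol_as_motion(stroke_points, n_apertures=8, frames_per_segment=12):
    """Hypothetical encoder: turn a symbol's stroke (a polyline of (x, y)
    points) into a schedule of per-frame motion vectors. Each aperture
    only ever displays a motion direction and speed; the observer's
    visual system integrates the sequence into a shape."""
    stroke = np.asarray(stroke_points, dtype=float)
    segments = stroke[1:] - stroke[:-1]                 # displacement per stroke segment
    schedule = []                                       # (frame, aperture_id, dx, dy)
    frame = 0
    for i, seg in enumerate(segments):
        aperture_id = i % n_apertures                   # round-robin aperture assignment
        step = seg / frames_per_segment                 # constant velocity within a segment
        for _ in range(frames_per_segment):
            schedule.append((frame, aperture_id, step[0], step[1]))
            frame += 1
    return schedule

# Example: an "L"-shaped stroke (down, then right)
schedule = encode_symbol_as_motion([(0, 0), (0, -3), (2, -3)])
print(schedule[:3])
```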

“This isn’t animation on top of reality—it’s motion woven into it.”

From lab experiment to cinematic system

ECXD competency: Prototyping · Cinematic UX · Interaction

  • Captured high-resolution pedestrian and automotive POV footage
     

  • Built a multi-screen immersive environment simulating real-world attention load
     

  • Designed and animated motion-driven perceptual symbols
     

  • Adapted motion stimuli dynamically to scene color, contrast, and movement (see the sketch after the tools list below)
     

  • Validated through longitudinal human studies measuring recognition, accuracy, and learning
     

Tools spanned research and production:

MATLAB · After Effects · Premiere · GPU display systems · Eye tracking
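
The scene-adaptation step listed above deserves one concrete illustration. The original pipeline used the tools just listed; the sketch below is only a schematic stand-in (Python/NumPy, hypothetical function name and gain), showing the general idea of re-centering a stimulus on the local scene luminance and scaling its contrast to the local contrast so the motion cue blends in rather than reading as an overlay.

```python
import numpy as np

def adapt_stimulus_to_scene(stimulus, scene_patch, contrast_gain=0.6):
    """Illustrative scene adaptation: shift a grayscale stimulus to the
    local scene luminance and scale its modulation to a fraction of the
    patch's own contrast. Both inputs are float arrays in [0, 1] of
    equal shape."""
    mean_lum = scene_patch.mean()
    local_contrast = scene_patch.std()
    # Zero-mean stimulus modulation, rescaled relative to local contrast
    modulation = (stimulus - stimulus.mean()) * contrast_gain * (
        local_contrast / (stimulus.std() + 1e-6))
    return np.clip(mean_lum + modulation, 0.0, 1.0)
```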

“We used cinematic tools not to decorate the experience—

but to tune perception itself.”

Explicit Mapping to ECXD Job Requirements

Side-by-Side Alignment

Each ECXD job requirement is paired with how this project demonstrates it:

  • Design immersive, cinematic experiences → Uses motion as a narrative and semantic layer that preserves immersion rather than breaking it

  • Strong motion systems thinking → Motion is treated as a system: timing, velocity, spatial choreography, and perceptual effect are engineered holistically

  • Interaction design beyond traditional UI → Interaction occurs through perception, not clicks or gaze, aligning with invisible, intuitive interaction paradigms

  • Cross-disciplinary collaboration → Integrates vision science, computation, cinematic production, and UX design

  • Prototype complex systems end-to-end → From stimulus design → environmental capture → real-time adaptation → human validation

  • Innovate at the edge of storytelling & technology → Expands how meaning can be delivered in immersive environments without explicit interfaces

  • Design for real-world constraints → Tested in dynamic, high-load environments (walking, driving) rather than idealized lab conditions

Why This Matters for ECXD

This project doesn’t just demonstrate skill—it demonstrates taste, restraint, and systems-level thinking.

It aligns directly with ECXD’s mission to:

  • Blend motion, interaction, and storytelling
     

  • Create content-forward, cinematic experiences
     

  • Design systems that feel inevitable, intuitive, and human
     

“The best experience design doesn’t ask for attention.

It earns it—quietly.”

 



Evaluating adaptive cues in dynamic scenes requires a carefully controlled environment that is both parameterized and repeatable. The study environment includes three use cases: walking, driving in a rural environment, and driving in a suburban environment. Source videos for the driving sequences were captured on three 4K wide-angle cameras.

Environmental capture and stimulus projection (below): (a) sensor array, (b) projection system, (c) interior topical mapping

Raw footage from natural environments was captured with a custom 12 DOF rig consisting of three GoPro HERO7 cameras, each recording 4K at 60 FPS with a 69.5° vertical and 118.2° horizontal FOV and encoding with the H.265 (HEVC) codec.
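
As a rough sanity check (not taken from the original write-up, and ignoring wide-angle lens distortion, so only an average figure), those capture specifications work out to roughly 1.85 arcmin of visual angle per recorded pixel horizontally and 1.93 arcmin vertically:

```python
# Back-of-the-envelope angular resolution of the capture rig,
# assuming pixels map linearly onto the stated field of view.
h_fov_deg, h_px = 118.2, 3840      # GoPro HERO7 horizontal FOV and 4K width
v_fov_deg, v_px = 69.5, 2160       # vertical FOV and 4K height

arcmin_per_px_h = h_fov_deg * 60 / h_px   # ~1.85 arcmin per captured pixel
arcmin_per_px_v = v_fov_deg * 60 / v_px   # ~1.93 arcmin per captured pixel
print(round(arcmin_per_px_h, 2), round(arcmin_per_px_v, 2))
```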

Processing

Even with on-board image stabilization, the raw videos contain significant jitter. Before capture, registration marks were placed at several locations in the vehicle interior; the raw videos were then imported into After Effects, where the motion of the marks was tracked and used to stabilize the footage. After stabilization, the three views were aligned with a corner-pin effect, and the results were rendered and re-encoded with the H.265 (HEVC) codec.
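
The stabilization itself was done in After Effects as described above. For readers who prefer code, a rough equivalent of the same idea, tracking the registration marks and warping each frame back onto the reference, might look like the Python/OpenCV sketch below (file names and mark coordinates are placeholders; the subsequent corner-pin alignment of the three views could likewise be done with cv2.getPerspectiveTransform and cv2.warpPerspective).

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("center_raw.mp4")            # placeholder file name
ok, ref = cap.read()
ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)

# Registration-mark positions in the reference frame (x, y), picked by hand
marks = np.array([[410, 900], [1500, 880], [2600, 910], [3300, 940]], dtype=np.float32)

writer = cv2.VideoWriter("center_stabilized.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), 60,
                         (ref.shape[1], ref.shape[0]))
prev_gray, prev_pts = ref_gray, marks.reshape(-1, 1, 2)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track the marks from the previous frame with pyramidal Lucas-Kanade
    pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    good = status.ravel() == 1
    # Similarity transform mapping current mark positions back onto the reference
    M, _ = cv2.estimateAffinePartial2D(pts[good], marks.reshape(-1, 1, 2)[good])
    if M is None:                                   # tracking failed; fall back to identity
        M = np.eye(2, 3, dtype=np.float32)
    stabilized = cv2.warpAffine(frame, M, (frame.shape[1], frame.shape[0]))
    writer.write(stabilized)
    prev_gray, prev_pts = gray, pts
writer.release()
```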

Once the footage is stabilized, the interior of the cockpit is isolated and used to mask the prepared stimuli.
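
A minimal sketch of that masking step, assuming a pre-made binary mask of the cockpit interior (file names are placeholders; confining the stimulus to the interior or to everything outside it is just a matter of inverting the mask):

```python
import cv2
import numpy as np

frame = cv2.imread("stabilized_frame.png").astype(np.float32) / 255.0
stimulus = cv2.imread("stimulus_frame.png").astype(np.float32) / 255.0
mask = cv2.imread("interior_mask.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
mask = mask[..., None]                       # broadcast the mask over color channels

# Composite: stimulus appears only where the mask is white
composite = frame * (1.0 - mask) + stimulus * mask
cv2.imwrite("composite.png", (composite * 255).astype(np.uint8))
```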

Data Capture

High-resolution LCD displays surrounded the observer: the center display parallel to the observer’s coronal plane, with the left and right displays abutting it and forming a 100° angle at each seam. The observer was seated 31.5 inches from the center screen, so that each pixel subtended 1.10 arcmin of the visual field, with no restraining mechanism to fix head or body motion. The three displays were mosaicked into a single surface using an NVIDIA Quadro RTX 4000 graphics card (driver version 431.02).
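
As a quick check of the stated geometry (illustrative arithmetic, not from the original analysis), a 1.10 arcmin per-pixel subtense at a 31.5-inch viewing distance implies a pixel pitch of about 0.26 mm, i.e. roughly a 99 PPI panel:

```python
import math

viewing_distance_mm = 31.5 * 25.4                 # 31.5 in = 800.1 mm
subtense_rad = math.radians(1.10 / 60.0)          # 1.10 arcmin in radians
pixel_pitch_mm = 2 * viewing_distance_mm * math.tan(subtense_rad / 2)
print(round(pixel_pitch_mm, 3))                   # ~0.256 mm per pixel
```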

Viewing Environment Architecture