
CASE STUDY

Biologically Encoded Augmented Reality

Using the human visual system as a display engine

Biologically Encoded Augmented Reality explores a radical alternative to conventional AR: what if information didn’t demand attention, but quietly inhabited perception itself? Instead of crowding the fovea—the most saturated and fragile part of human vision—this work asks whether meaning can live in the far periphery, a region long dismissed as incapable of semantic detail.


The project reframes peripheral vision not as low-resolution, but as differently encoded—optimized for motion and time rather than spatial precision. By leveraging motion-induced position shift, static pixels are transformed into perceptual trajectories, allowing symbols to be written through motion and reconstructed by the visual system without eye movement or interface overlays.
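
To give a feel for the underlying mechanism, here is a minimal illustrative sketch (not the project’s actual code; the function name and gain value are assumptions): motion-induced position shift can be approximated, to first order, as a perceived offset proportional to the velocity of the texture drifting behind a static element.

# Minimal sketch of motion-induced position shift: the perceived location of a
# static patch is pulled in the direction of the texture drifting behind it.
# The gain is a hypothetical constant, not a measured value from this work.
import numpy as np

def perceived_position(true_pos_deg, drift_velocity_deg_per_s, gain_s=0.05):
    # true_pos_deg: (x, y) physical position in degrees of visual angle
    # drift_velocity_deg_per_s: (vx, vy) drift of the background texture
    # gain_s: illustrative gain converting velocity into a positional offset
    pos = np.asarray(true_pos_deg, dtype=float)
    vel = np.asarray(drift_velocity_deg_per_s, dtype=float)
    return pos + gain_s * vel

# A patch fixed at 55 deg eccentricity appears displaced along the drift direction.
print(perceived_position((55.0, 0.0), (0.0, 8.0)))   # -> [55.   0.4]

Chained over time, such offsets trace out a perceived trajectory, which is what allows motion to carry symbolic form.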


Through rigorous psychophysical testing, gaze-tracked validation, and real-world simulations, the system demonstrates that users can rapidly learn and reliably interpret a new peripheral visual language—even under cognitive load.


This work proposes a new medium for AR: one where biology is not a constraint, but the computation itself.

AR doesn’t need more pixels. It needs better biology. 

INTERACTION DESIGN
ANIMATION & MOTION SYSTEMS
  • Designed a novel human–system interaction model rooted in perception
  • Zero-input, gaze-free information decoding
  • Interaction through neural interpretation, not gestures or UI
  • Motion as a semantic carrier, not decoration
  • Parametric control of temporal frequency, velocity, and perceptual salience
  • Animation driving meaning at the sensory level

CINEMATIC UX
  • Off-screen storytelling (information revealed without focal attention)
  • Temporal sequencing instead of spatial density
  • Designed for lived environments, not screens

The Challenge

How do you add information without stealing attention?

Modern AR systems overload the center of vision—the same place humans already use to read, recognize faces, and navigate. Peripheral vision covers over 90% of the visual field, yet is dismissed as incapable of carrying semantic detail. Vision science suggested that letters, symbols, or abstract information could not be perceived beyond ~50° eccentricity, especially without eye movements.

The challenge was to design an AR system that could deliver complex information in the far periphery, remain readable during real-world movement, and operate without increasing display size, resolution, or visual clutter. By all conventional models, this should not have been possible.

The Insight

Peripheral vision doesn’t see poorly; it sees differently.

Instead of fighting the limits of spatial resolution, this work exploited a different perceptual pathway: motion encoding. Through a phenomenon known as motion-induced position shift, static pixels can be made to appear to move when patterned noise flows beneath them.

The key realization was that motion itself could become a symbolic language. By encoding letterforms as motion trajectories and revealing them through tiny, static apertures, the visual system reconstructs shapes internally. The eye never moves. The display never changes shape. Perception does the work.
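
As a concrete, hypothetical sketch of that encoding (the function, frame rate, and the crude letter path below are assumptions, not the study’s materials), a letterform can be reduced to an ordered stroke path and replayed as a per-frame sequence of drift vectors for the noise behind one fixed aperture:

# Hypothetical sketch: convert a stroke path into per-frame drift vectors.
# Only local motion inside a static aperture is ever shown; the visual system
# integrates that motion into a perceived trajectory, i.e. the symbol's shape.
import numpy as np

def stroke_to_drift_vectors(stroke_points_deg, frame_rate_hz=60.0):
    # stroke_points_deg: ordered (x, y) points of one stroke, in degrees
    # returns one (vx, vy) drift velocity per frame, in degrees per second
    pts = np.asarray(stroke_points_deg, dtype=float)
    per_frame_steps = np.diff(pts, axis=0)     # displacement between frames
    return per_frame_steps * frame_rate_hz     # displacement -> velocity

# A crude "L": move down, then to the right. Each row is one frame's drift.
L_stroke = [(0.0, 0.0), (0.0, -0.2), (0.0, -0.4), (0.2, -0.4), (0.4, -0.4)]
print(stroke_to_drift_vectors(L_stroke))

Driving the background noise with these vectors is what “writes” the symbol into perception while the aperture itself stays perfectly still.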

Execution

Designing a language the brain already knows how to read.

A complete perceptual encoding system was built and tested (a rough sketch of the stimulus construction appears at the end of this section):

  • Symbolic forms decomposed into stroke-based motion paths
  • Gaussian and Gabor noise fields animated behind fixed apertures
  • Apertures as small as 0.64° of visual angle, positioned beyond 50° eccentricity
  • Gaze-tracked experiments ensuring zero fixation drift
  • Real-world simulations: walking pedestrians, automotive cockpits, urban scenes

Participants learned to recognize symbols quickly and accurately—even as environmental complexity increased. Information appeared to “materialize” in the periphery, readable without distraction or conscious effort.

This was not a visualization trick. It was a new display modality.
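
Below is a rough sketch of how such a stimulus can be assembled (illustrative only: the sizes, random seed, and helper names are assumptions, and a real implementation would map pixels to degrees for the ~0.64° aperture and drive the drift from stroke trajectories rather than a constant vector):

# Illustrative sketch: a drifting Gaussian noise field seen through a fixed
# circular aperture. The drift here is a constant vector per frame; in the
# system described above it would follow a symbol's stroke trajectory.
import numpy as np

def circular_aperture(size_px, radius_px):
    # Binary mask: True inside a centered circle, False outside.
    y, x = np.ogrid[:size_px, :size_px]
    c = (size_px - 1) / 2.0
    return ((x - c) ** 2 + (y - c) ** 2) <= radius_px ** 2

def drifted_frame(noise, shift_px):
    # Shift the noise image by whole pixels (wrap-around) to simulate drift.
    return np.roll(noise, shift=shift_px, axis=(0, 1))

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))          # Gaussian noise field
mask = circular_aperture(64, radius_px=10)     # stands in for a tiny aperture

# 30 frames of noise drifting rightward behind the static aperture.
frames = [drifted_frame(noise, (0, 2 * t)) * mask for t in range(30)]
print(len(frames), frames[0].shape)            # -> 30 (64, 64)

Nothing outside the aperture is ever drawn, so no global shape is physically presented; only the local motion is.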

Impact

A new paradigm for AR, UX, and attention.

This research demonstrates that:

  • Semantic information can be reliably delivered through far peripheral vision
  • Attention, not resolution, is the true bottleneck in AR
  • The human visual system can be treated as a compression and reconstruction engine

The work challenges long-standing assumptions in vision science and interaction design, opening new pathways for:

  • Cinematic AR overlays
  • Navigation and situational awareness systems
  • Ambient, always-on information design
  • Attention-safe interfaces for mobility and immersive media

In short: this changes how we think about where—and how—interfaces can exist.
