
Invisible Ink

Augmented Reality Without Devices

A project that doesn’t ask users to look harder — it asks their biology to do the work.

Invisible Ink is an exploration of augmented reality that doesn’t add layers to the world, but activates the ones already inside us. The project asks an “impossible” question: can visual information be embedded so deeply into an image that it bypasses conscious perception and only reveals itself through the mechanics of human vision?

Drawing from perceptual science, the work treats the eye not as a passive receiver, but as an active decoding system. By leveraging spatial frequency, color adaptation, and edge-sensitive neural processing, Invisible Ink encodes meaning that only resolves as the viewer’s retina and brain adapt over time. Executed through motion-based, retina-centric visual demos, the experience unfolds cinematically—without UI, devices, or overlays—relying instead on perception itself as the interface.

Its impact lies less in awards than in its provocation: AR doesn’t have to be worn, held, or seen to exist. Sometimes the most powerful augmentation isn’t added to reality—it’s revealed by how we see it.

Challenge

How do you hide information in plain sight — and reveal it without adding anything to the world?

Traditional AR overlays add layers: graphics, HUDs, interfaces, hardware. Invisible Ink explored the inverse problem. Could meaning be embedded directly into the visual field — encoded so subtly that it bypasses conscious perception, yet emerges through the act of looking itself?

The “impossible” constraint:

No screens.

No devices.

No visible symbols.

Only the human visual system.

This wasn’t about designing content for the eye — it was about designing content with the eye.

Insight

The retina is already a processor. AR doesn’t need hardware — it needs choreography.

Invisible Ink was driven by a realization drawn from vision science and perceptual psychology: the eye does not passively receive images. It actively interprets spatial frequency, contrast, and color over time. That process can be designed against.

By exploiting edge sensitivity, chromatic adaptation, and spatial frequency filtering, visual information could be encoded in a way that appears invisible — until the viewer’s perceptual system reconstructs it. The content doesn’t “appear”; it resolves.
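
To ground the mechanism, here is a minimal sketch of one way spatial-frequency encoding can hide content, in the spirit of classic hybrid images: the hidden signal rides in the low spatial frequencies beneath a high-frequency carrier, so it only resolves under blur, distance, or adaptation. The file paths and blur radius are illustrative assumptions, not project assets.

```python
import numpy as np
from PIL import Image, ImageFilter

def encode_hybrid(carrier_path: str, hidden_path: str, radius: float = 8.0) -> Image.Image:
    """Blend the high frequencies of a carrier with the low frequencies of a hidden image."""
    carrier = Image.open(carrier_path).convert("L")
    hidden = Image.open(hidden_path).convert("L").resize(carrier.size)

    # Low-pass: Gaussian blur keeps only the coarse structure of the hidden image.
    low = np.asarray(hidden.filter(ImageFilter.GaussianBlur(radius)), dtype=np.float32)

    # High-pass: the carrier minus its own blur keeps only fine detail.
    carrier_arr = np.asarray(carrier, dtype=np.float32)
    blurred = np.asarray(carrier.filter(ImageFilter.GaussianBlur(radius)), dtype=np.float32)
    high = carrier_arr - blurred

    # Up close, the eye resolves the high-frequency carrier; at a distance,
    # or as fine detail fades, the low-frequency hidden content takes over.
    return Image.fromarray(np.clip(low + high, 0, 255).astype(np.uint8))

encode_hybrid("carrier.png", "hidden.png").save("hybrid.png")
```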

Meaning emerges not through attention, but through adaptation.

This reframes AR as something biological, not technological — a system where perception itself becomes the interface.

Execution

Motion as ink. Perception as the decoder.

Invisible Ink was executed as a series of motion-based visual experiments that use time, contrast, and frequency as narrative tools.

  • High-resolution visual stimuli engineered to sit below conscious recognition thresholds
  • Motion-driven reveals that activate retinal and cortical processing over time
  • Training sequences and demos showing how invisible content resolves into form through sustained viewing (a minimal sketch of one such sequence follows this list)
  • Perceptual choreography rather than UI — no explicit interaction, only experiential discovery
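
As noted above, here is a minimal sketch of one such training sequence: a negative-afterimage reveal driven by chromatic adaptation, the same mechanism behind the well-known color-inverted Mona Lisa demo. The source path, fixation marker, and timing are illustrative assumptions, not project assets.

```python
from PIL import Image, ImageDraw, ImageOps

def build_training_frames(source_path: str) -> list[Image.Image]:
    """Return an adaptation frame and a neutral test frame for an afterimage reveal."""
    img = Image.open(source_path).convert("RGB")

    # Adaptation frame: color-inverted source with a central fixation dot.
    adapt = ImageOps.invert(img)
    draw = ImageDraw.Draw(adapt)
    cx, cy = adapt.width // 2, adapt.height // 2
    draw.ellipse((cx - 4, cy - 4, cx + 4, cy + 4), fill="black")

    # Test frame: a grayscale version; the adapted retina supplies the color.
    test = ImageOps.grayscale(img).convert("RGB")
    return [adapt, test]

adapt, test = build_training_frames("scene.png")
# Hold the adaptation frame roughly 20-30 seconds, then cut to the test frame.
adapt.save("frame_adapt.png")
test.save("frame_test.png")
```

Cutting from the adaptation frame to the neutral test frame is the entire "interaction": the viewer's adapted photoreceptors supply the complementary color, so the reveal happens in the retina, not on the screen.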

The experience unfolds cinematically. Viewers don’t “use” it — they encounter it. The visuals behave like latent signals, revealing themselves only when perception catches up.

This is interaction without buttons. Storytelling without text. Motion as narrative infrastructure.

Impact

A conceptual shift: AR doesn’t have to be visible to be real.

Invisible Ink didn’t aim for spectacle or commercialization. Its impact lives upstream — influencing how we think about perception-first experiences, sensory design, and the future of augmented media.

  • Demonstrates biologically encoded AR — augmentation without devices
  • Challenges the assumption that AR must add visual clutter to be effective
  • Positions perception itself as a creative medium
  • Bridges vision science, motion design, and experiential storytelling

It opens a door to AR that is ambient, invisible, and emotionally resonant — an idea that aligns deeply with how cinematic experiences work at Netflix: felt before they’re understood.

Explicit ECXD Mapping

🎛 Interaction

  • Interaction is implicit, not mechanical
  • The viewer’s visual system is the interaction model
  • Engagement emerges through perception, not input
  • Aligns with ECXD’s interest in non-traditional, content-forward interaction models

The user doesn’t click. They perceive.

Invisible Ink sits at the exact intersection ECXD cares about:

  • Perception as platform
  • Motion as storytelling
  • Interfaces that disappear
  • Experiences that feel cinematic, not instructional

It’s not a feature.

It’s not a UI.

It’s a new way of thinking about how meaning reaches an audience.


Cinematic UX

  • Time-based reveals function like narrative beats
  • Visual information unfolds the way tension does in film
  • The experience rewards sustained attention rather than immediate clarity
  • Prioritizes mood, emergence, and perceptual payoff

The interface behaves like a scene, not a screen.


The eye is a highly sensitive and programmable array of sensors. By presenting stimuli to discrete regions of the retina, we encode neural pathways with spatially modulated information, exploring the many dimensions of our unique visual experience. Invisible Ink builds on the spatial frequency theory of vision to program the color adaptation of edge-sensitive neurons, allowing us to manipulate our perception of a scene using only the visual system and its aftereffects.
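
Programming "the color adaptation of edge-sensitive neurons" matches the classic McCollough effect, an orientation-contingent color aftereffect. Below is a minimal sketch of that kind of stimulus, assuming that mechanism; the sizes, colors, and bar periods are illustrative choices.

```python
import numpy as np
from PIL import Image

def grating(size: int = 512, period: int = 32, horizontal: bool = True,
            color: tuple = (255, 0, 0)) -> Image.Image:
    """Square-wave grating: colored bars alternating with black bars."""
    stripe = (np.arange(size) // (period // 2)) % 2 == 0  # alternating bar mask
    if horizontal:
        mask = np.tile(stripe[:, None], (1, size))   # bars vary along rows
    else:
        mask = np.tile(stripe[None, :], (size, 1))   # bars vary along columns
    arr = np.zeros((size, size, 3), dtype=np.uint8)
    arr[mask] = color
    return Image.fromarray(arr)

# Training pair: alternate, e.g., red-horizontal and green-vertical gratings
# for several minutes to adapt orientation-tuned, color-opponent neurons.
grating(horizontal=True,  color=(255, 0, 0)).save("train_horiz_red.png")
grating(horizontal=False, color=(0, 255, 0)).save("train_vert_green.png")

# Test: achromatic gratings. After training, horizontal bars tend to look
# greenish and vertical bars pinkish; the color is supplied by the visual
# system itself, not by the image.
grating(horizontal=True,  color=(255, 255, 255)).save("test_horiz.png")
grating(horizontal=False, color=(255, 255, 255)).save("test_vert.png")
```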
