CASE STUDY

Peripheral Semantics

Encoding Meaning at the Edge of Human Perception
First Principles

Peripheral Semantics investigates a core tension of immersive experience design: how to convey meaning without hijacking attention. As screens proliferate across vehicles, AR headsets, and spatial interfaces, traditional UX concentrates information in the fovea, forcing users to look away from the world—and, at times, from safety-critical tasks. Vision science has long argued that this is unavoidable: beyond ~40° eccentricity, static symbols dissolve, leaving the periphery suitable only for crude alerts. This research overturns that assumption.

 

Peripheral vision does not parse form; it is exquisitely tuned to motion. When letters, numbers, and abstract concepts are translated into choreographed motion trajectories—leveraging motion-induced position shift—the brain can reconstruct the symbols without gaze shifts or explicit interaction. Meaning unfolds over time, not space.

 

Implemented as a motion-first interaction system validated in dynamic, real-world scenes, Peripheral Semantics demonstrates that semantic information can live outside conscious attention. It reframes UX as perceptual orchestration—proving that the future of interaction isn’t louder or brighter, but quieter, timed, and written directly into human perception.

Challenge

How do you deliver meaning without stealing attention?

Every screen fights for the center of our vision. In cars, AR headsets, and immersive environments, that fight becomes dangerous — or at least exhausting. Traditional UX patterns overload the fovea, forcing users to look away from what matters most.

The “impossible” problem:

Can complex, symbolic information be communicated without asking the user to look at it?

Vision science says no. Static symbols disappear in the far periphery. Cortical scaling makes text unreadable beyond ~40°. Peripheral UI is supposed to be limited to crude signals: flicker, color, urgency — never language.

Peripheral Semantics set out to challenge that assumption.

Insight

The periphery doesn’t read shapes. It reads motion.

What others treated as a limitation became the opportunity.

Peripheral vision is not for form — it’s for change. Motion dominates the far visual field, and certain psychophysical effects (specifically motion-induced position shift) cause static apertures to appear as if they’re moving — even when they’re not.

The insight:

If symbols are translated into motion paths, the brain reconstructs meaning without gaze shift.

Letters, numbers, and abstract concepts don’t need to be drawn —

they can be performed.

By encoding symbols as motion trajectories inside tiny static apertures, meaning emerges cinematically over time, rather than spatially all at once.
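To make this concrete, here is a minimal Python sketch of the idea, assuming a toy codex and timing values chosen purely for illustration (the stroke coordinates, 60 Hz frame rate, and one-second stroke duration are not the study's parameters): a symbol is stored as stroke polylines, and each stroke is resampled into a per-frame trajectory that is performed over time rather than drawn.

```python
import numpy as np

# Hypothetical toy codex: each symbol is a list of strokes, each stroke a short
# polyline in normalized (x, y) coordinates. Values are illustrative only.
TOY_CODEX = {
    "4": [
        [(0.0, 1.0), (0.0, 0.5), (1.0, 0.5)],   # stroke 1: down, then right
        [(1.0, 1.0), (1.0, 0.0)],               # stroke 2: vertical bar
    ],
}

def stroke_to_motion(stroke, frames):
    """Resample a stroke polyline into per-frame positions along its path.

    The positions are never drawn; they drive the motion of a texture behind a
    static aperture, so the 'shape' exists only as a trajectory over time.
    """
    pts = np.asarray(stroke, dtype=float)
    seg_len = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])
    samples = np.linspace(0.0, cum[-1], frames)
    x = np.interp(samples, cum, pts[:, 0])
    y = np.interp(samples, cum, pts[:, 1])
    return np.stack([x, y], axis=1)          # shape: (frames, 2)

# One second per stroke at an assumed 60 Hz: meaning unfolds over time, not space.
trajectories = [stroke_to_motion(s, frames=60) for s in TOY_CODEX["4"]]
```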

[Video: system3.mp4]
Execution

A motion-first interaction system encoded directly into perception.
 

Peripheral Semantics is a modular motion system that translates symbols into motion trajectories rendered inside tiny static apertures placed in the far periphery.

Core components:

  • Motion Codex: Letters, numbers, and abstract concepts decomposed into motion strokes

  • Temporal Sequencing: Multi-stroke symbols assembled over time using persistence of vision

  • Spatial Compression: Complex meaning conveyed in static apertures that occupy <1° of the visual field, while perceived forms span more than 10°

  • Environmental Adaptation: Stimuli dynamically integrated into real-world scenes

The result is an interaction layer that communicates meaning without asking users to divert gaze or attention, expanding the cognitive bandwidth available to immersive systems (a minimal data-model sketch of these components follows below).
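A minimal sketch of how these four components might fit together, assuming hypothetical names and defaults (Aperture, CodexSymbol, Presentation, and the illustrative 0.64° diameter are not the system's actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Aperture:
    center_deg: tuple              # placement in the visual field (x, y eccentricity), degrees
    diameter_deg: float = 0.64     # static window occupying under 1 degree

@dataclass
class CodexSymbol:
    name: str
    strokes: list                  # Motion Codex: each stroke is a motion path, not a glyph

@dataclass
class Presentation:
    symbol: CodexSymbol
    apertures: list = field(default_factory=list)   # one aperture per (consolidated) stroke
    stroke_duration_s: float = 1.0                   # Temporal Sequencing: strokes play in order
    background: str = "scene"                        # Environmental Adaptation: gray vs. live scene

    def schedule(self):
        """Return (start_time, aperture, stroke) triples, one stroke at a time."""
        return [
            (i * self.stroke_duration_s, ap, st)
            for i, (ap, st) in enumerate(zip(self.apertures, self.symbol.strokes))
        ]
```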

Impact

Expanding the accessible cognitive bandwidth of AR, HUDs, and immersive systems:

  • Semantic information can live outside conscious attention

  • Motion can act as a primary carrier of meaning

  • UX systems can be co-designed with human perception

The work reframed augmented reality not as an overlay problem, but as a perceptual orchestration problem — where timing, motion, and biology are part of the design surface.

Codex Overview

Translating motion to meaning

This research expands upon prior work by presenting the following contributions:

  1. Translation of alphanumeric and abstract symbols into motion-modulated static aperture stimuli which:

    • are detectable with high accuracy at eccentricities greater than 50° from fixation

    • occupy small regions of the visual field (less than 1°)

    • are scalable to symbols containing multiple strokes, for communicating increasingly complex semantic information

  2. Adaptation of visual stimuli for deployment in complex natural environments, such as from a pedestrian or automotive cockpit point of view.

The unique advantage of this approach is the compression of highly semantic visual information into small static apertures, overcoming cortical scaling to efficiently communicate characters and symbols to receptive fields at greater than 50° eccentricity.

[Video: 2sym_3v2.mp4]

As a simplification, any object that is static or unchanging in our periphery can be treated as effectively invisible through habituation. Motion is the catalyst to perception in the outer regions of the retina. A simple way to overcome the imperceptibility of static forms in the far periphery is to apply a pattern of Gaussian local motion to a character or symbol, such as Schäffel’s “Luminance Looming” effect. However, presenting symbols at increasing eccentricities in the visual field must account for cortical scaling, which at far peripheral locations beyond 40° would exceed a factor of 3. For the application of peripheral information delivery, this factor is highly impractical. To overcome cortical scaling, semantic forms are parsed into strokes and conveyed through motion-induced position shifts (MIPS) from small, static apertures. In the viewing environment developed for experimentation, aperture diameters were set at 35 px, or 0.64° of the observer’s visual field. Using multiple MIPS apertures for multi-stroke symbols, this approach can theoretically articulate almost any conceivable character or symbol within fractions of a degree of the visual field.
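As a rough worked example of the 35 px ≈ 0.64° conversion, assuming an illustrative pixel pitch and viewing distance rather than the study's actual display geometry:

```python
import math

def visual_angle_deg(size_mm, distance_mm):
    """Visual angle subtended by an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(size_mm / (2 * distance_mm)))

# Assumed display geometry (illustrative, not the study's hardware):
pixel_pitch_mm = 0.182          # roughly a 140 ppi panel
viewing_distance_mm = 570       # ~57 cm, a common psychophysics viewing distance
aperture_px = 35

aperture_mm = aperture_px * pixel_pitch_mm
print(round(visual_angle_deg(aperture_mm, viewing_distance_mm), 2))  # ~0.64 degrees
```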

Codex block design

As a demonstration of first principles, twelve unique symbols were transformed into motion-modulated static aperture codex blocks: the characters A, B, C, D, 2, 3, 4, and 5, in addition to abstract forms such as boat, stick figure, flower, and tree. Each symbol was first parsed into its constituent stroke paths. An image of white Gaussian noise was then animated along each stroke path behind an aperture to generate a perceived position shift. For legibility, the strokes in the diagrams here are color-coded according to their order in sequence: 1st (green), 2nd (light blue), 3rd (dark blue), and 4th (orange). Note that sequential strokes with overlapping end-points and start-points are consolidated into one aperture (for example, the second stroke of B).
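A minimal sketch of this rendering step, assuming NumPy and illustrative frame and aperture sizes (the 17 px radius approximates the 35 px aperture diameter; nothing here is the study's actual rendering code): a white Gaussian noise image is translated along the stroke path while a small circular aperture stays fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_frame(noise, offset_xy, aperture_center, aperture_radius, frame_size):
    """Show translated Gaussian noise only inside a small, static circular aperture."""
    h, w = frame_size
    frame = np.full((h, w), 0.5)                                # mid-gray surround
    dy, dx = int(round(offset_xy[1])), int(round(offset_xy[0]))
    shifted = np.roll(np.roll(noise, dy, axis=0), dx, axis=1)   # noise carried along the stroke path
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (yy - aperture_center[1])**2 + (xx - aperture_center[0])**2 <= aperture_radius**2
    frame[mask] = shifted[mask]                                 # the aperture itself never moves
    return frame

frame_size = (256, 256)
noise = rng.normal(0.5, 0.15, frame_size).clip(0, 1)            # white Gaussian noise carrier

# Drifting the noise along a stroke trajectory behind the static window produces a
# motion-induced position shift: the aperture's contents appear displaced along the path.
frames = [render_frame(noise, (t, 0), aperture_center=(128, 128),
                       aperture_radius=17, frame_size=frame_size)
          for t in range(30)]
```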

[Figure: color-coded stroke diagrams for the codex blocks]

In the case of the symbol boat, there immediately arises a need to express multi-stroke vector trajectories in order to communicate a complex symbol within a single arc of perception. To accomplish this, I generated arrays of MIPS apertures, with the number of elements corresponding to the number of strokes necessary to impart the minimum information for symbol recognition.

What is important to note here is the displacement of the additive perceived shifts relative to the actual static structure. The most valuable asset of this approach is the compression of the physical loci into a confined visual footprint: in other words, a small span of the x-y coordinate plane, or of the pixel pitch of digital display technologies. In the constructed study environment, each aperture spanned 0.64° of the observer’s visual field, roughly the size of your pinky fingernail held at arm’s length.

Apertures were activated consecutively in time and, when inactive, presented static grain. The spatial arrangement of each array ensured that every aperture was positioned relative to the previous stroke. This becomes especially valuable for future applications on the limited display real estate of head-mounted consumer technologies.
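A small sketch of this sequencing and placement logic, with hypothetical function names and an assumed one-stroke-at-a-time schedule (timings and coordinates are illustrative):

```python
def aperture_states(num_apertures, t, stroke_duration=1.0):
    """Which aperture is animating at time t; all others present static grain."""
    active = int(t // stroke_duration)
    if active >= num_apertures:
        return ["static_grain"] * num_apertures
    return ["animating" if i == active else "static_grain" for i in range(num_apertures)]

def place_apertures(strokes, origin=(0.0, 0.0)):
    """Chain apertures so each is positioned relative to where the previous stroke ends."""
    centers, cursor = [], origin
    for stroke in strokes:
        centers.append(cursor)
        end = stroke[-1]                                   # last point of the stroke trajectory
        cursor = (cursor[0] + end[0], cursor[1] + end[1])  # offset by the perceived shift
    return centers
```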

[Video: 2016_10 part 1 - perceived motion.mp4]

Increasing Environmental Complexity

How detectable are far peripheral semantic cues in increasingly dynamic environments?
Challenge

The impossible problem wasn’t just about seeing information — it was about perceiving meaning in motion. Traditional visual systems and interfaces perform well in controlled conditions: grayscale backgrounds, static focus, predictable stimuli. But what happens when the environment gets messy — chaotic, dynamic, unpredictable? Could far-peripheral semantic cues still be detected reliably when the world around them is constantly changing? The task was to break free from sterile lab conditions and evaluate whether observers could still learn and adapt to the new perceptual interface as complexity rose.

Insight

Most perceptual systems stumble when the background churns — contrast shifts, motion intrudes, and attention fractures. But what if the system itself could learn to predict and persevere through complexity? By embedding codex blocks — units of semantic information — into progressively dynamic visual environments, the experiment tapped into a basic truth: exposure fosters adaptability. Instead of deterioration, detection accuracy remained resilient, and detection speed continued to improve even as scenes became chaotic. This pointed to a deeper learning mechanism, not just raw sensory processing.

Execution

I engineered a multistage perceptual experimental framework and platform that traversed the visual complexity spectrum.

🎬 Phase 1 — Control: Static central fixation against a uniform gray background.
🎬 Phase 2 — Low Load Dynamic: Codex blocks were embedded into real-world natural scene videos — changes in brightness, hue, and motion introduced unpredictability.
🎬 Phase 3 — High Load Dynamic: Successive codex blocks in continuous, uncontrolled video sequences, mimicking real-life scenarios (such as pedestrian motion cues or automotive cockpit views). 

Participants were tested for:

  • Detection accuracy

  • Speed of recognition

  • Adaptation performance across increasing scene complexity

Visual trials were repeated, and performance was tracked across both controlled and high-load sequences.
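For illustration, a minimal sketch of how such a trial loop might log the two headline metrics, assuming hypothetical present_stimulus and wait_for_response hooks (this is not the experiment's actual codebase):

```python
import time
import statistics

PHASES = ["control", "low_load_dynamic", "high_load_dynamic"]

def run_trial(present_stimulus, wait_for_response, correct_symbol):
    """One trial: show a codex block, time the response, score accuracy."""
    onset = time.monotonic()
    present_stimulus()                        # hypothetical hook that renders the codex block
    reported = wait_for_response()            # hypothetical hook, e.g., keypress naming the symbol
    rt = time.monotonic() - onset
    return {"correct": reported == correct_symbol, "rt_s": rt}

def summarize(trials):
    """Detection accuracy, mean detection time, and RT variance for one phase."""
    rts = [t["rt_s"] for t in trials]
    return {
        "accuracy": sum(t["correct"] for t in trials) / len(trials),
        "mean_rt_s": statistics.mean(rts),
        "rt_variance": statistics.variance(rts) if len(rts) > 1 else 0.0,
    }
```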

Impact

Rather than collapsing under environmental pressure, subjects maintained high detection accuracy moving from static conditions into chaotic scenes. More than that — detection speeds improved with continued exposure. Variance in response times decreased, showing that the perceptual system wasn’t just coping — it was learning. Continuity between control and dynamic trials revealed a priming effect: early low-load exposure prepared participants for success in high-load conditions. These findings not only validated the feasibility of delivering peripheral semantic cues in real-world environments but also laid groundwork for adaptive perceptual interfaces that perform robustly amid sensory noise — a striking step toward truly intelligent human–machine perceptual systems.

Study Design

The focus of the experimentation conducted to evaluate the codex was two-fold:

  1. To measure detection metrics in a standard psychophysical setting, and

  2. To establish the feasibility of implementing this approach in dynamic real-world environments.

The study series first presented codex blocks against a standard 50% gray background with static central fixation (“control”). Next, in the low-load series, each codex block was evaluated with the same dynamic natural scene presented centrally, to normalize the effects of increasing scene complexity on codex perceptibility. Finally, three high-load environments were tested, in which codex blocks were presented in sequence over continuous video clips, with no normalization for variations in the scene.

[Image: Annotation-2020-09-10-192902.png]
Study Outcomes

Results show very little reduction in detection accuracy when transitioning from a 50% gray background with static fixation cues to a natural scene with uncontrolled changes in brightness, hue, and contrast. This table is a breakdown of accuracy metrics over successive trials in each study series.

It shows mean detection accuracy dropped at first to levels comparable with the first trial in the control series, but rebounded just as quickly (see high-load series, trial 2 on the right-hand side of the table). 

Detection speeds continue to improve despite increasing scene complexity

A subset of participants from the control series continued to complete trials in increasingly complex visual environments. One group (n=2) completed a low-load series and two high-load series (progressing from pedestrian through automotive cockpit POV trial sequences), while another group (n=3) completed the automotive cockpit POV series in addition to the control. Comparing these two groups allows us to evaluate whether continued exposure to the codex in environments of gradually increasing complexity resulted in improved detection rates.

[Image: results7.jpg]

Both subjects who completed the low-load series in the pedestrian POV environment continued to improve their reported detection times over successive trials, despite the dramatic shift from a static 50% gray background with static fixation to a dynamic natural scene. Notably, fitted trends show close continuity throughout the series, especially near the end of the control trials and the beginning of the low-load series.

In addition to the continued downward trends in the low-load case, both subjects exhibited a substantial decrease in the variance of recorded detection times. These outcomes from the low-load series represent the first demonstration of far-peripheral complex symbol recognition in dynamic visual environments. High accuracy rates and continued improvements in detection speed in the low-load series build a strong foundation for the feasibility and practicality of implementing motion-modulated peripheral stimuli for semantic information delivery outside of controlled psychophysical environments.
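A small sketch of the kind of per-subject analysis described above (a linear trend fitted over successive trials, plus response-time variance), using invented illustrative numbers rather than the recorded data:

```python
import numpy as np

def trend_and_variance(detection_times_s):
    """Fit a linear trend to per-trial detection times and report slope and variance."""
    trials = np.arange(1, len(detection_times_s) + 1)
    slope, intercept = np.polyfit(trials, detection_times_s, deg=1)
    return {
        "slope_s_per_trial": float(slope),   # negative slope = detection speeds still improving
        "variance_s2": float(np.var(detection_times_s, ddof=1)),
    }

# Illustrative numbers only (not measured data): a control series followed by a
# low-load series, showing the kind of continuity and shrinking variance discussed above.
control = [1.9, 1.7, 1.6, 1.5, 1.4]
low_load = [1.5, 1.4, 1.3, 1.3, 1.2]
print(trend_and_variance(control), trend_and_variance(low_load))
```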

[Image: Annotation-2020-09-10-194539.png]