Spatial Signals | 7.13.2025
Weekly insights into the convergence of spatial computing + AI
Experimenting with an afternoon post :)
Here’s this week’s ‘Spatial Signals’ — a snapshot of what matters most in Spatial Computing + AI.
This week’s TLDR:
Perk up and pay attention, because the world is changing. Fast.
First time reader? Sign up here.
(1) Apple plans five new Vision products by 2028
Over the next three years, Apple plans to roll out five new Vision products—ranging from lightweight smart glasses to high-end mixed reality headsets.
This isn’t just a refresh cycle. It’s a full-spectrum play to own both ends of the Spatial Computing market: accessibility and aspiration.
It starts this year with an updated Vision Pro, featuring the M4 chip and comfort improvements to keep momentum alive. But what’s coming next is far more telling.
In 2027, Apple will launch its first mass-market smart glasses—audio-first, with voice, gesture, and camera input but no display. Think AirPods, but for spatial awareness.
Alongside those, Apple plans Vision Air, a lighter, thinner, more affordable AR/VR headset built with a plastic-magnesium frame and powered by future iPhone chips. It’s designed to bring spatial computing to a broader audience without the Vision Pro’s bulk.
By 2028, we’ll see Vision Pro 2, a full redesign with Apple’s next-generation silicon and a lighter form factor. And finally, a high-end XR Glasses product leveraging waveguides and LCoS displays for true AR capabilities—positioned as Apple’s premium, AI-integrated vision for head-worn computing.
So what?
Apple isn’t building toward one killer device—it’s building toward a multi-tiered ecosystem of spatial products, ranging from casual wearables to immersive headsets. This mirrors its approach to the iPhone, iPad, and Mac: different devices, shared OS, unified ecosystem.
For developers, this means planning for fragmentation—apps, content, and interfaces will need to flex across radically different form factors and user expectations.
For enterprises, it opens pathways from lightweight consumer wearables to industrial-grade AR/VR without switching ecosystems.
For consumers, it signals Apple’s first serious push to bring spatial computing into the mainstream—through both high-end ambition and everyday utility.
Spatial computing isn’t headed for a single form factor. It’s evolving into a full hardware category. And Apple is laying the groundwork to own it—end to end.
(2) From pixels to photonics — the next display revolution is (actually) holographic
A new class of display technology is emerging—one that doesn’t rely on flat projections, waveguides, or visual tricks, but instead shapes light itself into real, volumetric 3D.
That’s the breakthrough behind Swave Photonics, a company building true holographic displays designed for the Spatial + AI era. Their technology doesn’t simulate depth—it creates it, sculpting light into high-fidelity, interactive 3D environments with no need for waveguides or visual workarounds.
At the heart of this is Holographic eXtended Reality (HXR)—a platform that uses diffractive photonics on CMOS chips to achieve the world’s smallest pixel.
These pixels don’t just project light—they shape it. The result is natural depth, clarity, and realism that could finally dissolve the boundary between digital and physical space.
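For a rough sense of why pixel size matters so much here, there's a textbook rule of thumb from diffraction optics: the angle over which a pixelated holographic display can steer light is limited by its pixel pitch. The relation and the numbers below are illustrative, not a Swave specification:

```latex
% Approximate maximum steering (diffraction) half-angle for a display
% with pixel pitch p at wavelength \lambda -- a standard approximation,
% not a published Swave figure.
\theta_{\max} \approx \arcsin\!\left(\frac{\lambda}{2p}\right)
% Illustrative values at \lambda = 532\,\text{nm} (green light):
%   p = 4\,\mu\text{m}   (typical LCoS pixel)  \;\Rightarrow\; \theta_{\max} \approx 3.8^{\circ}
%   p = 0.5\,\mu\text{m} (sub-micron pixel)    \;\Rightarrow\; \theta_{\max} \approx 32^{\circ}
```

The smaller the pixel, the wider the cone of angles the display can address—which is why shrinking the pixel is the key to holography that fills a useful field of view.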
Swave’s approach is manufacturing-friendly, leveraging CMOS processes for scalability and affordability. It’s already been recognized with a CES 2025 Innovation Award and has secured follow-on funding from Samsung Ventures and IAG Capital Partners—clear signals this is no longer theoretical tech. It’s heading for commercialization.
And the timing couldn’t be better. As spatial computing and AI converge, the demand for more natural, immersive interfaces is accelerating. HXR isn’t just about seeing 3D—it’s about interacting with it in real space, across smaller form factors, with higher fidelity, and less fatigue.
So what?
This points to a future where screens dissolve and light becomes the interface. AI-generated worlds won’t just appear on displays—they’ll inhabit space itself, viewable from any angle, in true depth. XR becomes less about tricking perception and more about aligning with it.
If spatial computing is going to feel natural, it has to look natural first. And that means displays built not for pixels—but for light, depth, and reality.
(3) AI can now navigate 3D worlds through language, not maps
Robots are gaining the ability to explore unfamiliar spaces not with rigid maps or GPS pings, but by simply understanding your words—like a friend following directions through a crowded room.
That’s the bold promise of GRaMMaR (from Google DeepMind), a groundbreaking framework fusing Neural Radiance Fields (NeRFs) with vision-language reasoning to let AI agents navigate open-world 3D environments using natural language alone.
Gone are the days of pre-mapped routes or scripted paths. GRaMMaR builds a dynamic, radiance-based 3D model of the surroundings in real time—capturing light, textures, and object relationships—then anchors spoken instructions directly into that spatial context. Tell it, “Weave around the coffee table and head to the window,” and it grasps the semantics, spatial affordances, and relational cues to plot a path that's adaptive and intuitive.
It’s not just following commands; it’s interpreting intent, reasoning through visuals and words like a human would. This shines in unpredictable settings where traditional systems crumble—no need for exhaustive data labeling or controlled labs.
Tests on benchmarks like Room-to-Room and Room-across-Room show GRaMMaR crushing prior methods, generalizing to novel spaces with fluid, human-like navigation.
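The core loop described above, grounding a spoken instruction into a spatial representation and then planning a path through it, can be sketched in a few lines. The toy below is a deliberately simplified stand-in, not GRaMMaR itself: a 2D occupancy grid and keyword matching take the place of the NeRF scene model and the vision-language encoder, and every name in it is illustrative.

```python
# Toy illustration only: a 2D grid and keyword matching stand in for the
# NeRF scene model and vision-language reasoning described above.
# None of these names come from GRaMMaR; they are placeholders.
from collections import deque

# A tiny "scene": 0 = free space, 1 = obstacle, with a few named objects.
GRID = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
OBJECTS = {"coffee table": (1, 1), "window": (4, 4), "door": (0, 4)}


def ground_instruction(instruction: str) -> tuple:
    """Map a natural-language instruction onto a goal location.
    A real system would use a vision-language model; here we simply
    treat the last named object in the sentence as the destination."""
    text = instruction.lower()
    mentions = [(text.find(name), cell) for name, cell in OBJECTS.items() if name in text]
    if not mentions:
        raise ValueError("No known object mentioned in the instruction.")
    return max(mentions)[1]  # object mentioned last = destination


def plan_path(start, goal):
    """Breadth-first search over free cells, a stand-in for planning a
    trajectory through a learned 3D representation of the space."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            break
        r, c = current
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            in_bounds = 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
            if in_bounds and GRID[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = current
                frontier.append(nxt)
    path, node = [], goal
    while node is not None:  # walk back from goal to start
        path.append(node)
        node = came_from[node]
    return path[::-1]


if __name__ == "__main__":
    goal = ground_instruction("Weave around the coffee table and head to the window")
    print(plan_path(start=(0, 0), goal=goal))
```

The real system operates over continuous 3D radiance fields and free-form language rather than a grid and keywords, but the division of labor is the same: one component grounds words into the spatial model, another plans motion through it.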
So what?
This flips robotics from rigid automation to conversational collaboration, unlocking AI that truly "gets" the physical world.
For spatial computing, it means smarter agents in homes, warehouses, or cities—robots that assist with everyday tasks via simple dialogue, or AR systems that guide you through unfamiliar terrain without apps or beacons.
But as navigation becomes linguistic, the deeper shift is toward embodied intelligence: AI that doesn't just move through space—it comprehends why, adapting to our descriptions and desires.
Because when AI can see the world the way we describe it, navigation isn’t just about getting from A to B. It’s about understanding why we’re going there at all.
The implications?
For business…
Are you designing your environments for machines that follow rules—or machines that understand context?
That’s the shift on the horizon as AI moves beyond rigid automation into contextual, language-driven navigation.
GRaMMaR offers a glimpse of what’s coming: robots and spatial agents that no longer require fixed maps or pre-defined routes. Instead, they interpret human instructions, reason through 3D environments, and adapt their behavior in real time.
This isn’t about giving machines more data—it’s about giving them understanding. AI that comprehends spatial relationships through language will fundamentally reshape how we think about operations, logistics, and human-machine collaboration.
Warehouses, factories, hospitals, campuses, and cities won’t just be mapped for machines—they’ll be navigated through intent, not coordinates. The future isn’t a grid of waypoints—it’s a shared space where humans and machines collaborate through words, not code.
The opportunity:
Unlock greater flexibility in robotics without re-mapping or manual programming.
Enable conversational workflows across logistics, manufacturing, healthcare, and public spaces.
Design environments where machines interpret and adapt to natural human behavior—not the other way around.
Key questions for your team:
How much time, cost, and complexity do your current systems waste on static, rule-based navigation?
What becomes possible when robots and AI can adapt dynamically to the spaces they’re in?
Are your teams ready to shift from programming workflows… to collaborating with intelligence?
The future of spatial systems won’t run on fixed paths. It will run on context, collaboration, and conversation—between humans, machines, and the spaces we share.
For life…
What happens when light itself becomes the interface?
We’re moving toward a future where digital experiences won’t live behind glass. They’ll inhabit space—shaped not by pixels, but by light.
With true holography on the horizon, the boundary between the digital and physical worlds begins to blur. Displays won’t just show images; they’ll sculpt presence. Information won’t sit on a screen; it will occupy space around us, moving as we move, revealing itself from every angle.
This is the promise of technology like Swave’s HXR: displays that no longer simulate depth but create it—through light, photonics, and the world’s smallest pixels. It’s a shift that redefines how we see, interact, and co-exist with digital content.
Think less of headsets and more of presence. Less of screens and more of surfaces—walls, windows, air itself—capable of hosting volumetric information. The world becomes a canvas, and the line between real and rendered starts to dissolve.
But this shift isn’t just technological. It’s experiential.
If the spaces we inhabit can host light-sculpted realities alongside physical ones, what happens to how we perceive home, work, learning, and connection? Do we retreat into these layered environments—or does the world itself become richer through them?
When digital presence moves from device to dimension, we won’t just look at technology.
We’ll live with it—woven into the fabric of the spaces we move through every day.
Because the future of computing isn’t flat. It’s volumetric, luminous, and alive in light.
Favorite Content of the Week
X Post | Drones are getting crazy good/powerful… bring on the home deliveries!