Spatial Signals | 6.15.2025
Weekly insights into the convergence of spatial + AI
Welcome to our first ‘Spatial Signals’: a weekly snapshot of what matters most in Spatial + AI.
It includes:
2-3 key market signals & observations
2 questions to help you digest & implement (one for work and one for life)
My favorite piece of content this week (demos, articles, memes, announcements, etc.)
This week's post is on the juicier side... A ton happened these past few weeks, so gotta play catch-up!
This week’s TLDR:
Headsets/glasses are catching a second (third?) wind
Meta + Anduril poised to dominate front-line AR (first with the military, then in industrial settings)
Apple quietly solves major blockers to enterprise adoption: shared devices and collaboration
Snap + Niantic team up to build the AR Cloud (shared, contextual, intelligent real world experiences via VPS + 3D maps)
Real-time avatars are getting scary good—led by Apple’s Personas and Google Beam
4D GSplats have arrived, and the implications are mind/time bending... e.g. What happens when a memory becomes a place?
Alright, perk up and enjoy, because things are accelerating...
First time reader? Sign up here.
1) Glasses & headsets are having a moment
After years of hype and false starts, spatial computing hardware is regaining some momentum (and utility). Starting with what I think could become industry-defining… (at least in the enterprise).
Meta + Anduril poised to dominate front-line AR
Defense tech meets next-gen XR—and the enterprise should take notes.
Meta and Anduril have partnered on EagleEye, a rugged AR headset and helmet system for the U.S. Army’s IVAS Next program, set to deploy in 2025. The device fuses Meta’s Reality Labs optics, sensors, and Llama-powered AI with Anduril’s Lattice OS—a battlefield-proven command platform.
The result: real-time overlays of drone feeds, threat detection, teammate positions, and tactical guidance—directly into the soldier’s field of view.
But this goes far beyond combat. EagleEye is quietly laying the foundation for industrial AR: hardened, high-performance glasses that can thrive in the harshest field environments. Lattice doesn’t just visualize; it contextualizes. It knows what you're seeing—a vehicle, a structure, a hazard—and produces insights and guidance on the fly.
This is a blueprint for the next phase of spatial computing—where AR becomes a mission-critical interface for energy, construction, logistics, and defense. The combination of Meta’s hardware and Anduril’s software will offer real-time awareness, instant updates, and deep integration with operational data.
The enterprise should watch this closely: EagleEye hints at what’s coming for every frontline role that needs hands-free access to live digital context, aka an interface for digital twins.
Apple quietly solves major blockers to enterprise AR: shared devices and shared context.
With visionOS 26, Apple introduced Team Device Sharing, a seemingly simple but critical feature.
A single Vision Pro headset can now be used by multiple people without needing to reconfigure settings. Your eye calibration, hand data, and accessibility preferences travel with your iPhone and transfer seamlessly between devices. This small update has big consequences: it turns the Vision Pro from a personal gadget into shared infrastructure—perfect for design teams, hospitals, training centers, and field operations.
This is a major sign that Apple is serious about enabling enterprise-scale use cases.
Real-time, local spatial collaboration is finally productized. The most magical use case in spatial computing has always been collaboration: multiple people occupying the same digital space, manipulating 3D content as if it were real.
That dream—once only possible through duct-taped prototypes and niche apps—is now baked into Apple’s core OS. With Shared Spatial Experiences, teams can review designs, brainstorm, or experience content together in the same physical or remote space.
That Nike design workflow from Part I of this essay (The Fate of Apple’s Vision Pro)? It’s now possible. And when real-time avatars (Personas) become lifelike and expressive? That’s when we start seeing a headset on every desk…
Snap + Niantic team up to build the AR Cloud (shared, contextual, intelligent, localized experiences via VPS + 3D maps)
Snap announced that its first consumer smart glasses, Specs, will ship in 2026. Thinner, lighter, and with a wider field of view than the 2024 developer Spectacles, Specs run Snap OS and integrate AI support from OpenAI and Google’s Gemini—enabling features like live translation, cooking help, pool coaching, and interactive games.
Crucially, Snap has partnered with Niantic Spatial in a multi-year deal to build a shared, semantic AI-powered 3D map of the world.
Using Niantic’s VPS, and soon a "Large Geospatial Model," developers can anchor spatial AR experiences with centimeter-level precision.
This means Stories, quests, or virtual objects can persist at the same spot for everyone—even across time—as the map grows through crowdsourced user scans.
What this means for builders: Specs aren’t just solo gadgets—they’re shared spatial computers. You can build multiplayer tours, collaborative design tools, or synchronized location-based games. And since AR scenes live in the same mapped world, the same experience shows up for every user—whether friends standing next to each other or strangers across the globe.
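For intuition, here's a deliberately simplified sketch of what that shared-map contract could look like. To be clear, this is my own hypothetical illustration, not Snap's or Niantic's actual API (the names SharedARCloud, localize, place, and query_nearby are invented): the key idea is that a VPS resolves a camera view into a pose in one shared world frame, and content stored against that frame resolves to the same physical spot for every user.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple  # (x, y, z) in meters, in the shared map's world frame
    rotation: tuple  # orientation as a quaternion (x, y, z, w)

@dataclass
class AnchoredContent:
    anchor_pose: Pose
    payload: dict  # e.g. {"type": "quest_marker", "text": "Start here"}

class SharedARCloud:
    """Toy stand-in for a persistent, crowd-built spatial map service."""

    def __init__(self):
        self._content = []  # in a real service this persists across sessions and users

    def localize(self, camera_image) -> Pose:
        # A real VPS matches the image against the 3D map to recover the
        # device's pose with centimeter-level precision; here it's a stub.
        raise NotImplementedError("placeholder for visual positioning")

    def place(self, pose: Pose, payload: dict) -> None:
        # Content is stored in world coordinates, not per-device coordinates,
        # which is what makes it shared and persistent.
        self._content.append(AnchoredContent(pose, payload))

    def query_nearby(self, pose: Pose, radius_m: float = 25.0) -> list:
        # Any user who localizes into the same map resolves the same objects.
        return [c for c in self._content
                if _dist(c.anchor_pose.position, pose.position) <= radius_m]

def _dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
```

The design choice that matters: persistence lives in the map's coordinate frame, so "the same spot for everyone, even across time" falls out of localization rather than networking.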
2) Real-time avatars are getting scary good—led by Apple’s Personas and Google Beam
At WWDC, Apple unveiled a huge leap in digital presence: upgraded Personas for Vision Pro, which scan your face in about 20 seconds and generate a shockingly lifelike avatar—complete with realistic skin, blinking, side profiles, and reflections in your glasses.
Over at Google I/O, Google Beam (formerly Project Starline) showcased an even more radical idea: photorealistic avatars animated by Gemini AI, capable of holding conversations, responding to prompts, and replicating your presence—even when you’re not there.
The tech behind Personas is wild: Apple’s using machine learning and something akin to 3D video pixels to rebuild your presence in space, lighting, and motion. It runs locally on the headset, creating an impressively fluid version of “you” without needing cloud computing. And while there are still some robotic moments, this is a massive leap forward.
For prosumers and builders, this opens up surreal new possibilities: AI-powered meetings where your avatar delivers the pitch, training sessions that run on your cloned likeness, and spatial social apps where everyone looks... exactly like themselves. We’re not just watching avatars improve—we’re watching the performance of self go realtime.
3) 4D GSplats have arrived, and the implications are mind/time bending...
A breakthrough demo from 4DV.ai, built on a technique called 4D Gaussian Splatting, turns your everyday videos—like phone clips—into fully immersive, interactive 3D scenes you can explore in your browser. You can pause them, move around, even scrub forward and backward through time, making it feel more like stepping into a memory than watching one.
Here’s how it works (in simple terms): it treats your video as a cluster of tiny 3D “splats”—think of them as colorful floating dots with shape and transparency. By adding a fourth dimension—time—these splats are linked across frames so they move and evolve naturally. Under the hood, the system figures out how each splat shifts in space and time, then “projects” them to your screen at different angles and moments, blending them to recreate the scene realistically.
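To make the "splats plus time" idea concrete, here's a toy Python sketch. It's my own illustration of the concept, not 4DV.ai's actual pipeline (which optimizes anisotropic Gaussians from real footage): each splat is a soft colored blob whose position is a function of time, and rendering projects every blob through a pinhole camera at a chosen moment t, then alpha-blends them back to front.

```python
import numpy as np

class Splat4D:
    """One soft, colored 3D blob whose position is a function of time."""

    def __init__(self, center, velocity, radius, color, opacity):
        self.center = np.asarray(center, float)      # 3D position at t = 0
        self.velocity = np.asarray(velocity, float)  # linear motion; real systems learn richer deformations
        self.radius = radius                         # isotropic size; real splats use full 3D covariances
        self.color = np.asarray(color, float)        # RGB in [0, 1]
        self.opacity = opacity                       # peak alpha in [0, 1]

    def position_at(self, t):
        # The fourth dimension: where this splat sits at time t.
        return self.center + self.velocity * t

def render(splats, t, width=64, height=64, focal=60.0):
    """Render the scene at time t with a pinhole camera at the origin, looking down +z."""
    image = np.zeros((height, width, 3))
    ys, xs = np.mgrid[0:height, 0:width]
    # Back-to-front: composite the farthest splats first so nearer ones cover them.
    for s in sorted(splats, key=lambda sp: -sp.position_at(t)[2]):
        x, y, z = s.position_at(t)
        if z <= 0:
            continue  # behind the camera at this moment
        px = focal * x / z + width / 2   # pinhole projection to pixel coords
        py = focal * y / z + height / 2
        sigma = focal * s.radius / z     # apparent size shrinks with depth
        # Soft Gaussian footprint on the image plane, scaled by opacity.
        footprint = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
        alpha = (s.opacity * footprint)[..., None]
        image = alpha * s.color + (1 - alpha) * image
    return image

# "Scrubbing time" is just re-rendering the same splats at a different t.
scene = [
    Splat4D([0.0, 0.0, 5.0], [0.5, 0.0, 0.0], 0.3, [1, 0, 0], 0.9),
    Splat4D([0.5, 0.2, 6.0], [0.0, -0.2, 0.0], 0.4, [0, 0, 1], 0.8),
]
frame_then = render(scene, t=0.0)
frame_now = render(scene, t=1.0)
```

The point of the toy: time is not a sequence of frames to decode but a query parameter on a volume, which is why you can pause, orbit, and scrub freely.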
No special glasses or headsets needed—just an ordinary browser, and suddenly your video becomes an explorable volume that plays like a time machine.
But here’s the real idea underneath the tech: maybe the past isn’t as gone as we think. Maybe memory, like reality, is something we can move through, not just replay. And if our memories become places, perhaps our stories are meant to be explored/experienced, not just told.
So what are the implications?
For business…
How would your business change if your frontline workers could see & know everything—with perfect context, guidance, and shared awareness?
With AI-powered AR glasses becoming lighter, smarter, and fully standalone, we’re entering a world where knowledge isn’t just transferred—it’s embedded into the environment. A technician sees a live overlay of repair instructions. A new employee watches procedures unfold spatially. A remote expert joins as a floating voice with annotations in real time. Context isn’t explained—it’s lived.
But most workflows today weren’t designed for this level of situational intelligence. Training manuals, static PDFs, and siloed systems can’t keep up with the immediacy that spatial computing unlocks. The biggest risk isn’t missing the tech—it’s failing to rethink the flow. If the interface is now the world itself, then your business must start designing for experiences, not just tasks.
The opportunity? Start small. Map your most critical workflows—repairs, inspections, onboarding—and ask: what would this look like if the worker could see everything? Then build from there. The future of productivity is ambient, assistive, and always one glance ahead.
For life…
What happens when the boundary between memory and simulation dissolves—when the past becomes explorable, revisable, and no longer just... over?
With technologies like 4D Gaussian Splatting, we can now step back into our memories—pause them, rotate them, even walk around inside them. Moments that were once fleeting become spatial, persistent, and infinitely revisitable. The past stops fading and instead becomes a kind of software—interactive, explorable, and ready to be remixed.
But when memory becomes a place we can edit, re-enter, and reframe, do we start remembering what actually happened—or just the version we like best? If every moment can be relived from the perfect angle, do we lose the quiet finality that makes life feel real? As simulation starts to overwrite recollection, our emotional timelines risk becoming endlessly loopable but never fully lived.
Maybe the answer is to treat these immersive memories not as replacements, but as rituals—spaces to revisit not for control, but for understanding. When the past becomes explorable, we may need to practice letting go all over again.
Favorite Content of the Week
If you enjoyed… please consider sharing with a friend :-) We’d greatly appreciate your support toward this mission & helping the community grow!