Wearing the Future: Notes from Mexico City with Ray-Ban Meta Glasses

I’ve spent the past few weeks in Mexico City using a pair of Ray-Ban Meta glasses: part camera, part assistant, part translator. Something about the experience keeps pulling me back to the same thought: we may already be living in the early stages of a post-mobile-phone world.

I purchased the glasses so I could ask questions and get help with Spanish translations. Their most obvious feature, snapping quick street photos without pulling out my phone, was an added bonus that felt surprisingly natural. The first-person perspective created a different kind of intimacy: unfiltered, unposed, and fully embedded in the moment. Using voice commands felt like having a low-key assistant riding along with me, without the awkwardness of holding up a phone and announcing to the world that I needed help.

What surprised me most, though, wasn’t the functionality. It was the feel. The technology didn’t demand my attention. It waited. It was simply there, watching, listening, ready to help if I asked, silent if I didn’t. The glasses weren’t a replacement for a phone. They were something else entirely. More subtle. More ambient.

And that’s when the bigger shift started to take shape in my mind. We’re moving past an era where technology lives in rectangles we reach for. Ambient AI is coming, slowly but steadily. These are tools that live with us, not on us. Tools that observe, learn, and assist without needing to be tapped, clicked, or swiped. In the case of these glasses, AI doesn’t just respond to queries. It begins to understand the context around them, including location, language, visual cues, and tone of voice.

That said, this is still version one. The glasses are entry level. The camera is decent, but not especially sharp. Stills and video are serviceable but not built for detail or dynamic range. There’s no heads-up display yet. But we’ve seen this curve before. The Oakley version of the glasses offers a slight step up in capability, and future versions will inevitably become smaller, smarter, and more powerful. The technology will fade further into the background while becoming more deeply embedded in daily life. We are not far from ambient devices that are nearly invisible and fully capable.

Using these glasses in a city layered with noise, history, and motion has been a kind of test run. A way to see how it feels to let Meta AI ride shotgun for a while. Not to dominate the experience, but to accompany it. And so far, what’s most striking isn’t what the technology does. It’s what it suggests. A near future where interaction becomes frictionless and where AI’s strength comes not from visibility but from its quiet, constant presence.

This shift isn’t happening in isolation. OpenAI has already signaled its ambition to move beyond screens: its collaboration with Jony Ive’s design firm, LoveFrom, and its acquisition of io, the hardware startup Ive co-founded, point directly toward a new kind of AI device. Wearable, screenless, and built around awareness instead of interface. Rumors suggest a 2027 release for what could become the first truly ambient AI product, something that listens, watches, and responds in real time to the world you move through. Not because you tapped it, but because it understands the situation.

Other companies have tried. The Humane AI Pin, developed by former Apple engineers, attempted to break free from the phone. It promised hands-free computing through a wearable that projected text onto your palm and responded to your voice. The idea was bold. The execution, unfortunately, fell short. The device felt clunky and underpowered, more proof of concept than breakthrough. Another product, the Limitless Pendant, is even quieter: worn like jewelry, it is designed to transcribe and summarize conversations throughout your day. It leans further into ambient functionality, but again, adoption depends on whether people find it helpful or intrusive. These experiments raise the right questions. They remind us how difficult it is to create invisible tech that feels both useful and welcome in the real world.

In contrast, Meta’s smart glasses are not trying to replace your phone entirely. They are trying to blend in. They don’t ask you to change behavior. They offer something lighter and more gradual, like subtle assistance, on-the-go capture, and ambient context. And perhaps that is why they work, at least for now.

But none of this unfolds without consequences. As these ambient tools become smaller, more capable, and more common, most people will never be asked if they want to participate. They will simply be observed. Wearing AI-enabled glasses might feel like a personal decision, something the wearer controls. But that presence turns everyone nearby into potential data, including the shopkeeper, the commuter, the child in the background, and the stranger walking past on the street. They did not opt in. They cannot easily opt out.

This is more than a privacy issue. It is a shift in the social contract. The systems of consent we have now, from terms and conditions to app permissions, were built for screen-based tools. They assume interaction. They assume choice. But ambient AI operates in public, often without announcement, and its functionality depends on being quietly present.

So we find ourselves with new questions. What does privacy mean when computation lives in a pair of glasses? Who decides what is stored, analyzed, or forgotten? And how do we begin to set boundaries for technologies designed not to be noticed?

These are not technical issues. They are civic ones. And they need to be asked before ambient AI becomes as unremarkable as Wi-Fi or street cameras. It is easy to be swept up in what this technology can do, especially when it is packaged as effortless, intuitive, and near-invisible. But ease should not outpace care. If we are going to build a future filled with tools that see, listen, and assist without being invited, then we have to start talking about how we want to live with them, and who gets to decide what those rules look like.

I did not expect a pair of glasses to spark these reflections, but walking through Mexico City with a camera in my eye and a voice assistant in my ear, I started to feel that shift. Not just a change in how we capture or translate, but in how we relate to technology itself. The post-phone world may not arrive with a bang. It may arrive like this, quietly, waiting at eye level, learning, listening, and slowly becoming part of the way we move through the world.

Richard Cawood

Richard is an award-winning portrait photographer, creative media professional, and educator currently based in Dubai, UAE.

http://www.2ndLightPhotography.com