The Everything-UI: How LLMs Are Rewriting the Interface of the Internet
Why the next big platform shift won’t just be about AI, but about how we experience it.
In my previous article, Building for Everyone: The UX Challenge of Consumer AI, I argued that as AI tools move from niche prototypes to mass-market apps, the single biggest barrier is not model size or dataset quality, but interface quality.
If a system cannot deliver fast, fluid, intuitive interaction, it does not matter how smart the model is or how broad its access. It will fail to capture everyday users.
Now, we are entering a new frontier. With the launch of ChatGPT Atlas, we are beginning to see how large language models (LLMs) are not just helping us search or summarize information. They are becoming the interface to the web itself.
In this essay, I want to explore what that means: how the interface of computing is being rewritten around language, and why, despite this technological leap, one truth remains constant: great UX still wins.
The Window Learns to Think
ChatGPT Atlas just launched, and it feels like the beginning of something big. It is a browser where you don’t really browse. You type what you want, and the LLM goes out into the web to find, summarize, and structure it for you. You can still open individual websites, but the LLM has quietly become the interface to the World Wide Web.
It is an early glimpse of a new paradigm. For decades, the browser was our window to the internet. Now, the window itself is learning to think.
From Mobile Apps to Model Interfaces
Today, the smartphone is our dominant computing device. And if the web is being reshaped by LLMs, mobile interfaces will not be far behind. Imagine if, instead of tapping and swiping through apps, you described what you wanted, and an on-device model rendered it instantly on screen, like Siri, but actually useful.
It is easy to picture: you tell your phone, “show me my unread messages, then summarize today’s market news,” and it dynamically composes the screens, fetching the right content and layout on demand. The LLM becomes not just a voice assistant but the UX layer itself.
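To make that idea concrete, here is a minimal sketch of what an “LLM as the UX layer” loop could look like: the user states an intent, a model returns a declarative screen description, and the client simply renders whatever comes back. Everything in it is hypothetical; the UISpec shape, llmComposeScreen, and the stubbed fakeOnDeviceModel stand in for whatever on-device runtime and schema a real system would use.

```typescript
// Hypothetical sketch: the LLM composes the screen, the client only renders it.
// None of these names correspond to a real Atlas or on-device API.

type UISpec = {
  title: string;
  blocks: Array<
    | { kind: "list"; items: string[] }
    | { kind: "summary"; text: string }
  >;
};

// Stand-in for an on-device model call that returns a declarative
// screen description instead of plain chat text.
async function llmComposeScreen(intent: string): Promise<UISpec> {
  const prompt = `Return a JSON UISpec for this request: "${intent}"`;
  const raw = await fakeOnDeviceModel(prompt); // assumed model runtime
  return JSON.parse(raw) as UISpec;
}

// The client has no fixed screens; it renders whatever layout the model proposes.
function render(spec: UISpec): void {
  console.log(`== ${spec.title} ==`);
  for (const block of spec.blocks) {
    if (block.kind === "list") block.items.forEach((item) => console.log(`- ${item}`));
    else console.log(block.text);
  }
}

// Placeholder model so the sketch runs end to end.
async function fakeOnDeviceModel(_prompt: string): Promise<string> {
  return JSON.stringify({
    title: "Today",
    blocks: [
      { kind: "list", items: ["3 unread messages"] },
      { kind: "summary", text: "Markets flat; tech up slightly." },
    ],
  });
}

llmComposeScreen("show my unread messages, then summarize market news").then(render);
```

The design point is that layout becomes model output rather than a hand-built set of screens: the app ships a renderer and a contract, and the model decides what the user sees next.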
Even in gaming, I have seen early pitches where generative models produce what appears on screen frame by frame: worlds rendered in real time, pitched at 120 frames per second, based on what the model “knows” should happen next. It sounds futuristic, but early versions of the underlying technology already exist, and they are improving fast.
I’m Torn Because Good UX Still Wins
But I am torn. History has shown that elegant abstractions do not always win; great user experience does.
In 2011, Facebook bet heavily on HTML5 for its mobile app. The idea was ambitious: one codebase to run everywhere. But the app was slow, clunky, and frustrating. Scrolling lagged. Transitions stuttered. Users hated it.
When Facebook switched to fully native apps in 2012, Objective-C on iOS and Java on Android, everything changed. The UI was fluid, load times dropped, and engagement soared. Native performance simply felt better.
That lesson still holds: smoothness, speed, and tactile feedback matter.
ChatGPT Atlas, as exciting as it is, reminds me of Facebook’s HTML5 app in 2011: visionary, but not yet delightful. It is a glimpse of the future through a foggy window.
The Next Paradigm: The Everything-UI
If the 2010s were about native apps, what wins in the 2030s? How do we evolve from a unified, text-based chat interface into something that satisfies our deeper needs, such as play, communication, creativity, and exploration?
The challenge is not technical. It is experiential.
An “everything-UI” would need to support gaming, messaging, social feeds, video watching, and personalized creation, all seamlessly mediated by an LLM.
But that raises new questions:
How do intellectual properties like Fortnite exist in a world without fixed app ecosystems?
How do creators and influencers maintain reach when LLM-driven algorithms define discovery?
What happens to distribution when the interface itself curates what we see?
We are standing at the edge of another platform shift. Atlas is the proof of concept, the HTML5 Facebook of this era. Somewhere ahead lies the native equivalent, an LLM-driven interface that feels instant, alive, and deeply human.
The future is not only about smarter models. It is about how those models reshape what interacting with technology feels like.
Because even in the age of AI, good UX still wins.

