Liquid AI and AMD: Building the Future of Edge AI

The story of artificial intelligence in recent years has largely been dominated by cloud-based giants—OpenAI, Google, Anthropic, Microsoft—deploying vast, server-side models accessed via APIs. Yet running AI in the cloud, for all its power, comes at a price: high infrastructure costs, latency, and persistent questions around privacy.
Liquid AI, an MIT spin-out with roots in cutting-edge research on efficient foundation models, is taking a different path. By focusing exclusively on edge AI—models that run locally on laptops, smartphones, and even wearables—the company wants to make generative AI faster, more private, and far less expensive to deploy.
A significant step in this journey came with Liquid AI’s collaboration with AMD, announced this summer. Their Liquid Edge AI Platform (LEAP) now runs natively on AMD’s Ryzen™ and Ryzen AI™ processors, a partnership that could unlock new opportunities for developers worldwide.
Privacy and Zero Cloud Costs
“The LEAP-AMD collaboration will enable developers to build apps using state-of-the-art language models and other generative AI models that run fully local on AMD laptops and hardware,” said Mathias Lechner, Chief Technology Officer of Liquid AI. “This means there is 100% privacy and zero cloud costs involved compared to solutions that use API models from providers such as OpenAI or Google.”
The point here is crucial. Today, most AI apps rely on calls to centralised servers where data is processed, raising concerns about both cost and privacy. Each query comes with a price tag, while sensitive data is transmitted over the internet to third-party providers. By contrast, LEAP allows developers to keep everything on the device. No data leaves the user’s laptop or phone, and the developer avoids the spiralling costs of cloud hosting.
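The cost argument is easy to quantify with a back-of-envelope calculation. The sketch below uses entirely hypothetical figures (user counts, query volumes, and a notional per-token price, none of which come from Liquid AI or any provider) to show why cloud inference bills scale with usage while on-device inference does not:

```python
# Back-of-envelope comparison of cloud-API vs on-device inference costs.
# All figures are illustrative assumptions, not real pricing.

def cloud_cost(users, queries_per_day, tokens_per_query,
               price_per_million_tokens, days=30):
    """Monthly cloud-API bill: every token processed is billed to the developer."""
    tokens = users * queries_per_day * tokens_per_query * days
    return tokens / 1_000_000 * price_per_million_tokens

# Hypothetical app: 10,000 users, 20 queries/day, ~1,000 tokens per query,
# at an assumed $1 per million tokens.
monthly = cloud_cost(users=10_000, queries_per_day=20,
                     tokens_per_query=1_000, price_per_million_tokens=1.00)
print(f"Cloud inference: ${monthly:,.0f}/month")  # $6,000/month, scaling linearly with users
print("On-device inference: $0/month")            # compute runs on the user's own hardware
```

The exact numbers matter less than the shape of the curve: the cloud bill grows linearly with users and usage, while the marginal cost of an on-device query is zero.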
One of the first demonstrations of this approach was Liquid Lexify, a grammar and writing improvement tool similar to Grammarly. Built during a hackathon, Lexify ran entirely on-device on iOS and Android. “As mentioned above, the advantage of using LEAP is that the developer has zero infrastructure cost and the user full privacy (no data is transmitted, everything is computed and stays on the device),” Lechner explained.
From Productivity to Gaming and Creativity
Lechner emphasises that the platform is not tied to any single vertical. “Using LEAP app developers can build productivity, gaming, creativity and other apps, ie. the developer has full flexibility, we do not restrict them,” he said.
This flexibility matters because edge AI is still in its infancy, and its biggest breakthroughs may come from unexpected places. Just as the smartphone app economy gave rise to billion-dollar businesses few could have predicted—Instagram, Uber, TikTok—Liquid AI is betting that giving developers freedom and low-cost tools will spur innovation.
Research at the Core
Unlike many “edge AI” startups that focus primarily on packaging existing models for deployment, Liquid AI stresses its research credentials. “The advantage that Liquid AI has over other ‘edge AI’ companies is that we have a strong research team with years of experience on building efficient foundation models and the team is focused exclusively on that,” Lechner said. “Thus, we are in a unique position to build models specifically tailored for exactly the edge AI use cases.”
This reflects the company’s MIT heritage. Its founding team has been closely associated with the development of liquid neural networks—a new class of AI architecture designed to be efficient, adaptive, and scalable in ways that traditional transformer models are not. For edge applications, where memory and compute power are constrained compared to data centres, these innovations could prove decisive.
Vertical Integration Through AMD
Beyond its research pedigree, the AMD partnership gives Liquid AI an opportunity to integrate hardware and software more tightly than many rivals can. “On top of that, with partnering with AMD we can achieve almost full vertical integration and further optimise to maximise quality and efficiency,” Lechner explained.
Vertical integration is an idea that has proven powerful in the tech industry before. Apple famously succeeded by designing both hardware and software for the iPhone in tandem, achieving levels of optimisation competitors struggled to match. Liquid AI and AMD hope for a similar effect in edge AI—models designed to run specifically on AMD chips, taking advantage of the architecture for maximum performance per watt.
Scaling Across Devices
Liquid AI’s ambitions don’t stop at laptops. “Our goal is to build the best local models at every scale, from wearables (smart glasses) that process the video feed, to productivity apps running on a laptop,” said Lechner.
The wearable use case is particularly striking. Smart glasses, for example, generate a constant stream of video data that must be processed quickly and privately. Sending that video to the cloud would not only consume bandwidth and battery but also create privacy risks. Running the analysis locally—identifying objects, translating text in real time, or providing augmented reality overlays—makes the experience faster, safer, and more viable.
At the other end of the spectrum, LEAP can support fully featured productivity tools on laptops, where AMD’s Ryzen processors can provide the necessary computational muscle. Between these extremes, Liquid AI sees opportunities in gaming, creative applications, and more.
The Broader Context: Edge AI vs Cloud AI
To understand the importance of Liquid AI’s work, it helps to see the broader landscape. Today’s generative AI boom has been powered largely by cloud-based models. When you use ChatGPT, Gemini, or Claude, your query is routed to a distant data centre where a massive model processes it before sending the response back.
This model works, but it is expensive and slow relative to what’s possible on-device. Every interaction requires an internet connection, raising costs for developers and privacy concerns for users. In some cases—military, healthcare, finance—those concerns are deal-breakers.
Edge AI promises an alternative. Advances in model efficiency and chip design mean that increasingly capable models can now run locally. The trade-off is scale: no laptop or phone can match the sheer size of GPT-4. But smaller, more efficient models can still power useful applications, and in many cases, faster response time and guaranteed privacy are worth more than raw scale.
It is into this space that Liquid AI, with its LEAP platform and AMD collaboration, is stepping.
Fundraising and Momentum
Liquid AI’s approach has already attracted investor backing. In December 2024, the company announced a successful fundraising round to accelerate development. The funds, Lechner noted, are helping expand both research and platform capabilities as LEAP moves from early experiments to wider adoption.
That momentum matters, because the edge AI sector is increasingly crowded. Competitors range from startups packaging open-source models to tech giants experimenting with mobile inference. Differentiation will depend not just on technology but also on ecosystem—how easy developers find it to build and deploy applications, and how much performance can be squeezed from each device.
Why AMD?
AMD’s role in this story is also worth considering. Long seen as the underdog to Intel in CPUs, AMD has enjoyed a renaissance over the past decade, with its Ryzen processors now widely respected for performance and energy efficiency. By adding integrated AI accelerators into its latest chips, AMD has positioned itself as a key player in the AI hardware race.
For AMD, partnering with Liquid AI provides a showcase: a way to demonstrate not just raw performance benchmarks, but real-world applications. For Liquid AI, it provides credibility, scale, and access to optimisations that only come with deep hardware integration.
What Comes Next
While Lechner was careful not to reveal specifics about upcoming launches, the trajectory is clear. Liquid AI will continue refining its models, expanding support for more devices, and engaging with developers through hackathons and community-building.
If the promise holds, the result could be a new wave of applications that blend the best of generative AI with the strengths of edge computing: speed, privacy, and cost-effectiveness.
Conclusion
In the grand arc of AI development, the story of the past five years has been one of scaling models ever larger in the cloud. The next five may be about scaling them down—making them efficient, local, and integrated into everyday devices.
Liquid AI, with its research-led approach and AMD collaboration, is betting that the future of AI is not in distant data centres but in the laptops, phones, and wearables we use every day.
As Lechner put it: “Our goal is to build the best local models at every scale, from wearables (smart glasses) that process the video feed, to productivity apps running on a laptop.”
If they succeed, the age of edge AI may be closer than we think.