Powerful intelligence. Everywhere.

Our ultra-efficient multimodal models are turning the promise of an AI-powered world into reality. Optimized for CPUs, GPUs, and NPUs, they enable privacy-, latency-, and security-critical applications everywhere, not just in the cloud.

AI for everyone

Maximum intelligence. Minimum compute.

Enterprise Solutions

End-to-end custom AI tailored to your business

We deliver full-scale custom AI solutions tailored to your business's needs, hardware, and data. Our process for developing efficient models is rooted in our proprietary device-aware model architecture search, letting us quickly deliver the best-fit model for latency-, privacy-, and security-critical needs, whether on device, in the cloud, or hybrid.

Startup Solutions

Own your moat with specialized AI

Amplify your startup’s growth and create a competitive advantage with customized LFMs. Through our startup program, selected startups gain access to our full stack, along with guidance from our product and engineering teams, to specialize and deploy the best model for your business.

Developer Solutions

Specialize and deploy LFMs anywhere

Through our developer tools and community, we’re making building, specializing, and deploying highly efficient, powerful AI accessible to everyone, from devs just getting started to experts building at scale.

Efficient AI that delivers. Everywhere you need it.

Liquid Neural Networks

A fundamentally different architecture built for real-world intelligence

Our Liquid Foundation Models (LFMs) leverage liquid neural networks, inspired by dynamical systems and signal processing, to process complex, sequential, and multimodal data with superior reasoning and flexibility. The result is faster, more efficient AI without compromising capability or performance.
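For the curious, the "liquid" in liquid neural networks refers to the liquid time-constant formulation from the published research the architecture builds on. The sketch below follows that literature's notation and is background only, not a description of LFM internals.

```latex
% Liquid time-constant (LTC) cell as described in the research literature
% the architecture draws on; notation is from that work, not LFM internals.
% The hidden state x(t) evolves with an input-dependent time constant:
\[
  \frac{dx(t)}{dt}
    = -\left[\frac{1}{\tau} + f\bigl(x(t), I(t), t, \theta\bigr)\right] x(t)
      + f\bigl(x(t), I(t), t, \theta\bigr)\, A
\]
% Here \tau is a base time constant, I(t) the input, f a learned nonlinearity
% with parameters \theta, and A a learned bias vector; because f multiplies
% x(t), the effective time constant adapts to the input at every step.
```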

Liquid Foundation Models

The most efficient models on the market

Our LFMs are purpose-built for efficiency, speed, and real-world deployment on any device. From wearables to robotics, phones, laptops, cars, and more, LFMs run seamlessly on GPUs, CPUs, or NPUs while delivering best-in-class performance. So you get AI that works, everywhere you need it.

Our LFM2 family spans a range of modalities and parameter sizes and is rapidly customizable to deliver AI that’s just right for your use case.
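For developers who want to try the family hands-on, here is a minimal sketch of loading one of the LFM2 text checkpoints listed below through the standard Hugging Face transformers interface. It assumes a recent transformers release with LFM2 support; the prompt and generation settings are illustrative only.

```python
# A minimal sketch of loading an LFM2 text checkpoint with Hugging Face
# transformers. Assumes a recent transformers release that includes LFM2
# support; the prompt and generation settings are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"  # any of the text checkpoints listed below

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "On-device AI matters because"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding of a short continuation; tune max_new_tokens as needed.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern applies to the other text checkpoints; the vision-language and audio models use their own input processors, so check each model card for the exact loading steps.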

LEAP & Apollo

Efficient on-device intelligence for everyone

With LEAP, our developer platform for building, specializing, and deploying on-device AI, and Apollo, a lightweight app for vibe-checking small language models directly on your phone, we’re making on-device AI accessible to everyone, from beginners to experts.

LiquidAI/LFM2-1.2B · Text · 1.2B · Updated July 10
LiquidAI/LFM2-Audio-1.5B · Audio · 1.5B · Updated October 1
LiquidAI/LFM2-350M · Text · 350M · Updated July 10
LiquidAI/LFM2-VL-1.6B · Image · 1.6B · Updated August 12
LiquidAI/LFM2-700M · Text · 700M · Updated July 10
LiquidAI/LFM2-VL-450M · Image · 450M · Updated August 12
LiquidAI/LFM2-350M-ENJP-MT · Text · 350M · Updated September 4
LiquidAI/LFM2-1.2B-RAG · Text · 1.2B · Updated September 18
LiquidAI/LFM2-1.2B · Text · 1.2B · Updated September 25
LiquidAI/LFM2-350M · Text · 350M · Updated August 25
LiquidAI/LFM2-700M · Text · 700M · Updated September 7
LiquidAI/LFM2-1.2B · Text · 1.2B · Updated July 1

News

Keep up with Liquid

Ready to experience Liquid AI?

Power your business, workflows, and engineers with Liquid AI.
