The world’s best-in-class language model in English, Arabic, and Japanese, with native support for French, German, and Spanish, optimized to be the substrate for private enterprise chat, code, fast instruction following, and agentic workflows.
Today, we release LFM-7B, a new best-in-class language model. LFM-7B is designed for exceptional chat capabilities, including in languages such as Arabic and Japanese. Powered by the Liquid Foundation Model (LFM) architecture, it combines a low memory footprint with fast inference, which makes it cost-efficient to fine-tune for specific use cases (such as smart chatbots and document generation) and to deploy on-premises or directly on devices.
We unveil LFM-7B, the best-performing model in its size class on the market.
LFM-7B uses a non-transformer Liquid Foundation Model architecture, delivering high throughput with the lowest memory footprint in its size class.
LFM-7B is the natural choice of a language model for local deployment and for latency-bound, cost-constrained tasks.
LFM-7B is the world’s best-in-class multilingual language model in English, Arabic, and Japanese.
Try LFM-7B today on Liquid Playground, and soon on OpenRouter, Perplexity Playground, Lambda API, and AWS Marketplace.
LFM-7B comes with inference and customization stacks for enterprises. Get in touch with us to learn more.
LFM-7B is specifically optimized for response quality, accuracy, and usefulness. To assess its chat capabilities, we leverage a diverse jury of frontier LLMs to compare responses generated by LFM-7B against other models in the 7B-8B parameter category. This approach reduces the individual biases of any single judge and produces more reliable comparisons.
We compared answers to English prompts spanning curated business use cases (such as instruction following), questions from Arena-Hard-Auto (Li et al.), and real-world conversations (Zheng et al.). Thanks to our comprehensive preference alignment process, LFM-7B outperforms every LLM in the same size category.
The following head-to-head evaluation shows the proportion of times the LLM jury preferred answers generated by LFM-7B over those from other models, using the exact same English prompts described above.
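For readers who want to run this kind of evaluation themselves, below is a minimal sketch of a pairwise LLM-jury comparison. It assumes an OpenAI-compatible client; the judge prompt, model names, and aggregation rule are illustrative assumptions, not our actual evaluation pipeline.

```python
# Illustrative sketch of a pairwise LLM-jury comparison.
# Assumes an OpenAI-compatible API; the judge prompt and the
# aggregation rule are hypothetical, not Liquid AI's actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_PROMPT = """You are an impartial judge. Given a user prompt and two
answers (A and B), reply with exactly "A" or "B" for the better answer,
judging quality, accuracy, and usefulness."""

def judge_pair(judge_model: str, prompt: str, answer_a: str, answer_b: str) -> str:
    """Ask one juror which answer it prefers; returns 'A' or 'B'."""
    resp = client.chat.completions.create(
        model=judge_model,
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": f"Prompt:\n{prompt}\n\nAnswer A:\n{answer_a}\n\nAnswer B:\n{answer_b}"},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

def jury_win_rate(jury: list[str], samples: list[dict]) -> float:
    """Fraction of (prompt, juror) votes preferring model A over model B."""
    wins = total = 0
    for s in samples:
        for judge_model in jury:
            # Randomizing the A/B order per vote would further reduce position bias.
            vote = judge_pair(judge_model, s["prompt"], s["model_a"], s["model_b"])
            wins += vote == "A"
            total += 1
    return wins / total
```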
LFM-7B retains the expansive knowledge and reasoning capabilities of our other models. In addition to enhanced conversational skills, it also showcases improved coding and instruction-following abilities.
The following scores were obtained on standard automated benchmarks, using EleutherAI’s Language Model Evaluation Harness v0.4.5. We only compare post-trained models.
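As a point of reference, a run with the harness’s Python API looks roughly like the sketch below. The model ID and task list are placeholders, not the exact configuration behind our reported scores.

```python
# Minimal sketch: scoring a post-trained model with EleutherAI's
# lm-evaluation-harness (v0.4.x Python API). The model ID and task
# list below are placeholders, not the exact configuration we used.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                     # Hugging Face backend
    model_args="pretrained=org/model-7b-instruct",  # hypothetical model ID
    tasks=["gsm8k", "mmlu"],                        # example benchmark tasks
    batch_size=8,
)

for task, metrics in results["results"].items():
    print(task, metrics)
```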
LFM-7B supports English, Spanish, French, German, Chinese, Arabic, Japanese, and Korean. While evaluating our models, we observed that automated benchmarks like MMMLU introduce confounding factors (e.g., world knowledge) and do not require any writing skill in the target language. Arena evaluations, on the other hand, specifically measure the ability to produce grammatically correct and relevant answers. This is why we built language-specific arenas in Arabic and Japanese to assess model quality in a fair and relevant manner.
For the Arabic arena, we use a curated subset of real-world conversations (Zheng et al.) in Arabic. LFM-7B is fluent in Arabic and significantly preferred over other models in the same size category.
For the Japanese arena, we use a combination of ELYZA-tasks-100 (Sasaki et al.) and real-world prompts curated by our partner ITOCHU-CTC. This creates a diverse set of prompts representative of business use cases. LFM-7B also leads our Japanese arena by a significant margin.
Like our previous models, LFM-7B has a minimal memory footprint compared to other architectures.
The memory efficiency of LFM-7B allows for several key features, including long-context understanding, energy-efficient inference, and high-throughput deployments on local devices. LFM-7B can also be efficiently customized to any knowledge or task using our on-premise fine-tuning stack. Consequently, LFM-7B significantly increases value for end users in applications such as private enterprise chat, secure code generation, fast instruction following, long document analysis, energy-efficient on-device AI assistants, and multi-step agentic workflows.
In addition to processing long input contexts efficiently, LFM-7B can retrieve from and reason over long contexts effectively. We validated this across all stages of development via our specialized internal long-context evals. We also evaluate the long-context ability of LFM-7B via two public long-context benchmarks: RULER (Hsieh et al.) and LongBench v2 (Bai et al.). With RULER, a context length is considered “effective” when its corresponding score is higher than 85.6. By this criterion, LFM-7B has an effective context length of 32k.
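To make the criterion concrete, the sketch below applies the 85.6 threshold to per-length RULER scores. The scores in the example are made-up placeholders, not LFM-7B’s actual results.

```python
# Sketch: deriving an "effective context length" from per-length RULER
# scores. A length counts as effective when its score exceeds 85.6
# (the threshold cited above). These scores are made-up placeholders.
ruler_scores = {4_096: 95.0, 8_192: 93.0, 16_384: 90.0, 32_768: 87.0, 65_536: 80.0}
THRESHOLD = 85.6

def effective_context_length(scores: dict[int, float]) -> int:
    """Largest length such that it and all shorter lengths beat the threshold."""
    effective = 0
    for length in sorted(scores):
        if scores[length] > THRESHOLD:
            effective = length
        else:
            break
    return effective

print(effective_context_length(ruler_scores))  # -> 32768 with these placeholders
```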
To chat with LFMs, go to Playground.liquid.ai.
Coming soon: OpenRouter, Perplexity Playground, Lambda API, and AWS Marketplace.
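Once these endpoints are live, querying LFM-7B should look like a standard OpenAI-compatible chat call. The sketch below targets OpenRouter as an example; the model ID is an assumption, so check the provider’s catalog before use.

```python
# Hedged sketch: querying LFM-7B through an OpenAI-compatible endpoint
# such as OpenRouter once the model is listed there. The model ID
# "liquid/lfm-7b" is an assumption; check the provider's catalog.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",
)

resp = client.chat.completions.create(
    model="liquid/lfm-7b",  # hypothetical model ID
    messages=[{"role": "user", "content": "Summarize this contract clause in Japanese."}],
)
print(resp.choices[0].message.content)
```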
If your enterprise has use cases that need the efficiency and high throughput of our LFMs to do more with less, get in touch with us to discuss licensing or purchasing our models.
If our mission aligns with your personal goals and ambitions, we invite you to join our team and drive this vision forward. We are very early on this journey and actively innovating across various aspects of foundation model development and deployment.
We invite enthusiastic users to share their experiences and criticism, and to join our red-teaming efforts to continuously refine the capabilities of our models. Send your feedback here.
Yes. Get in touch with our team to license or purchase LFMs from our library of best-in-class models.
LFMs also come with two software stacks for deployment and customization: 1) LFM inference stack and 2) LFM customization stack. We currently prioritize working with clients on enabling edge and on-prem use cases. Connect with our team to learn more about our business model.
Yes. We have built an on-prem LFM customization stack available for purchase to enterprises. LFMs can be rapidly fine-tuned, specialized, and optimized for local, private, safety-critical, and latency-bound enterprise applications, all within the security of your enterprise firewall.
LFM-7B is the world’s best-in-class multilingual language model in English, Arabic, and Japanese. It also natively supports Spanish, French, German, Chinese, and Korean.
Read more about our research on various aspects of LFMs here, and follow us on X and LinkedIn.