THE CHALLENGE

A retail leader relied on large vision-language models (VLMs) to catalog products, but the models were slow, generic, and costly to fine-tune, creating deployment bottlenecks.

Key Obstacles:

  • Slow inference: Even quantized models lagged in production
  • Poor specialization: Generic models struggled with structured product data extraction
  • Complex deployment: Months of tuning needed to reach target accuracy

OUR SOLUTION

Liquid fine-tuned smaller, specialized VLMs for cataloging, using our Edge SDK to optimize both inference speed and accuracy.
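The case study doesn't publish the Edge SDK's API, so the sketch below uses open-source stand-ins (Hugging Face transformers with SmolVLM as a hypothetical small instruct VLM, and a hypothetical product.jpg catalog photo) purely to illustrate the structured-extraction pattern a fine-tuned cataloging model would serve: prompt a compact VLM with a product image and a fixed attribute schema, then parse the JSON it returns.

```python
# Illustrative sketch only: the production system used Liquid's fine-tuned VLMs
# and Edge SDK. Here an open-weight small VLM (SmolVLM) stands in to show the
# structured product-extraction pattern described above.
import json

import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL_ID = "HuggingFaceTB/SmolVLM-Instruct"  # stand-in; any small instruct VLM

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForVision2Seq.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

# Ask for a fixed JSON schema so downstream catalog code can parse the output.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {
                "type": "text",
                "text": "Extract product attributes from this image as JSON "
                        "with keys: title, brand, category, color, material.",
            },
        ],
    }
]

image = Image.open("product.jpg")  # hypothetical catalog photo
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=256)

# The decoded string includes the prompt; keep only the assistant's reply.
decoded = processor.batch_decode(generated, skip_special_tokens=True)[0]
reply = decoded.split("Assistant:")[-1].strip()

try:
    attributes = json.loads(reply)
    print(attributes)
except json.JSONDecodeError:
    # A generic model often breaks the schema; fine-tuning targets exactly this.
    print("Non-JSON output:\n", reply)
```

Constraining the output to a fixed attribute schema is what lets a smaller, specialized model compete with larger generic ones here: the task is narrow extraction rather than open-ended description.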

THE RESULTS

Faster, more accurate cataloging with 65% lower deployment time.

  • 65% faster time-to-production
  • Higher accuracy than larger generic models
  • 50% lower compute/memory needs
  • Seamless pipeline from fine-tuning to deployment
