All Stories
Vespa on ARM64
Vespa is now released as a multiplatform container image, supporting both x86_64 and ARM64.
Photo by Arnaud Mariat on Unsplash
Building Billion-Scale Vector Search - part one
How fast is fast? Many consider the blink of an eye, around 100-250ms, to be plenty fast.
Pre-trained models on Vespa Cloud
Vespa Cloud now provides pre-trained ML models for your applications.
Text embedding made simple
Vespa now lets you create a production-quality semantic search application from scratch in minutes.
Photo by Ilya Pavlov on Unsplash
Vespa Newsletter, September 2022
Advances in Vespa features and performance include rank-phase statistics, detailed rank performance analysis, new query and trace applications, and a new training video!
Photo by Joshua Sortino on Unsplash
Will new vector databases dislodge traditional search engines?
Doug Turnbull asks an interesting question on LinkedIn: will new vector databases dislodge traditional search engines?
IR evaluation metrics with uncertainty estimates
Comparing different metrics and their uncertainty on the passage ranking dataset.
Photo by Arnold Francisca on Unsplash
Summer internship at Vespa
After the 2022 summer internship, our intern summarizes what he worked on and his experience at Vespa.
Photo by israel palacio on Unsplash
Managed Vector Search using Vespa Cloud
This blog post describes how your organization can unlock the full potential of multimodal AI-powered vector representations using Vespa, the industry-leading open-source big data serving engine.
Photo by Scott Graham on Unsplash
Vespa Newsletter, June 2022
Advances in Vespa features and performance include ANN with configurable filtering, fuzzy matching, and native embedding support. Also see pyvespa’s new experimental ranking module!
Photo by Claudio Schwarz on Unsplash