All Stories
Matryoshka 🤝 Binary vectors: Slash vector search costs with Vespa
Announcing Matryoshka embeddings (dimension flexibility) and binary quantization in Vespa, and how these features slash vector search costs.
Vespa Newsletter, April 2024
Advances in Vespa features and performance include a new SPLADE embedder, float16 support for ONNX models, new Cohere guides, and support for using ColBERT with long texts.
Farfetch: Scaling recommendations with Vespa
E-commerce platform Farfetch explains how they use Vespa to scale their online recommendation system.
Marqo chooses Vespa
Vector search experts Marqo choose Vespa as their vector database after benchmarking it against Milvus, OpenSearch, Weaviate, Redis, and Qdrant.
Migrating to the Vespa Search Engine
In this post, I detail how we at Stanby addressed the challenges of our existing search system by migrating to Vespa.
Perspectives on R in RAG
In this blog post, I share perspectives on the R in RAG.
Scaling vector search using Cohere binary embeddings and Vespa
Three comprehensive guides to using the Cohere Embed v3 binary embeddings with Vespa.
The Singapore Government Pair Search
The Singaporean government deploys state-of-the-art semantic search, leveraging Vespa to search every word ever said in Parliament.
Announcing Vespa Long-Context ColBERT
Announcing long-context ColBERT, which gives ColBERT a larger context for scoring and simplifies long-document RAG applications.
Embedding flexibility in Vespa
Why did Vespa score "Exceptional" on Embedding Flexibility in GigaOm's report on Vector Databases?