Bonnie Chase
Director of Product Marketing

Introducing Vespa Voice — Your Signal for What’s Next in AI-Driven Search Infrastructure

We’re excited to launch Vespa Voice, the podcast where AI leaders, search pioneers, and enterprise innovators converge to explore the future of AI. Each episode features conversations with those building what’s next in agentic AI, retrieval-augmented generation (RAG), hybrid search, and scalable enterprise applications. Whether you’re an engineer shaping the future of search, a CTO leading digital transformation, or a CIO evolving your data strategy, this is the podcast for you.


Episode 1: Vector Databases - Defining the Leaders in a Rapidly Growing Field

Our first episode features Whit Walters, Field CTO at GigaOm, who joins us to break down the latest GigaOm Sonar Report — and explain why Vespa was recognized as both a leader and a fast mover in the rapidly growing vector database landscape. With decades of experience spanning IBM, Oracle, and Google Cloud, Whit brings sharp insight into what sets true leaders apart. Tune in to learn:

  • What “fast mover” really means in GigaOm’s analysis
  • Why scalability isn’t just about speed — it’s about cost predictability
  • The trends shaping the future of vector databases and AI-native infrastructure

Whit also unpacks the challenges traditional databases face in adding vector search and why purpose-built platforms like Vespa have a clear edge in performance, relevance, and innovation velocity.

Listen to Episode 1 now.


What’s Next

In future episodes of Vespa Voice, we’ll explore:

  • Cutting-edge applications in search and retrieval
  • How real companies are using Vespa in production
  • Trends in enterprise AI, from fine-tuned models to RAG pipelines

If you’re evaluating vector databases, this is a conversation you don’t want to miss. Subscribe to our YouTube channel for future episodes, and read GigaOm’s Sonar Report.

Vespa is a platform for developing and running real-time AI-driven applications for search, recommendation, personalization and retrieval-augmented generation (RAG). Vespa supports both MRL and BQL by enabling highly efficient storage and processing of embeddings, which are crucial for AI applications that deal with large data sets. With Vespa, you can query, organize, and make inferences in vectors, tensors, text and structured data. Vespa can scale to billions of constantly changing data items and thousands of queries per second, with latencies below 100 milliseconds. It’s available as a managed service and open source. Learn more about Vespa here.
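As a rough illustration of the hybrid querying described above, here is a sketch of a Vespa YQL query that combines lexical text matching with an approximate nearest-neighbor search over an embedding field. The document type, field, and query-tensor names (`doc`, `embedding`, `q_embedding`) are hypothetical placeholders, not from any specific application:

```sql
select * from doc
where userQuery() or ({targetHits: 100}nearestNeighbor(embedding, q_embedding))
```

In a sketch like this, the query tensor `q_embedding` is supplied with the request, and a rank profile defined in the application's schema determines how the lexical and vector similarity scores are combined into a final relevance score.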
