Kristian Aune
Head of Customer Success, Vespa.ai

Vespa Newsletter, November 2021

In the previous update, we mentioned the Vespa CLI, nearest neighbor search performance improvements, paged tensor attributes, mTLS, improved feed performance, and the SentencePiece embedder. This time, we have the following updates:

Schema Inheritance

In applications with multiple document types it is often convenient to put common fields in shared parent document types to avoid duplication. This is done by declaring that the document type in a schema inherits other types.

However, this only inherits the document type itself, not the other elements of the schema, such as rank profiles and fields defined outside the document. From Vespa 7.487.27 onwards, a schema can also inherit another schema. It will then include all the content of the parent schema, not just the document type part.

In Vespa 7.498.22, we also added support for letting structs inherit each other; see #19949.
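As a sketch (the schema, field, and rank profile names below are illustrative, not from a real application), a child schema inheriting a parent schema might look like:

```
# parent.sd - hypothetical shared schema
schema parent {
    document parent {
        field title type string {
            indexing: summary | index
        }
    }
    rank-profile common_rank {
        first-phase {
            expression: nativeRank(title)
        }
    }
}

# child.sd - inherits the document type, and with schema
# inheritance also the rank profile, from parent
schema child inherits parent {
    document child inherits parent {
        field author type string {
            indexing: summary | index
        }
    }
}
```

With only document inheritance, child would get the title field but not common_rank; schema inheritance brings both.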

Improved data dump performance

The visit operation is used to export data in batch from a Vespa instance. In November, we added features to increase throughput when visiting large amounts of data:

  • Streaming HTTP responses enables higher throughput, particularly where the client has high latency to the Vespa instance.
  • Slicing lets you partition the selected document space and iterate over the slices in parallel using multiple clients to get linear scaling with the number of clients.
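As a sketch using the /document/v1 API (namespace, document type, and cluster names below are illustrative), two clients could each visit one slice of a two-way partition, with streamed responses:

```
# Client 0 visits the first slice
GET /document/v1/mynamespace/music/docid?cluster=mycluster&stream=true&slices=2&sliceId=0

# Client 1 visits the second slice, in parallel
GET /document/v1/mynamespace/music/docid?cluster=mycluster&stream=true&slices=2&sliceId=1
```

Each slice covers a disjoint part of the document space, so running one client per slice scales throughput with the number of clients.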

Matching all your documents

Vespa now has a true query item, simplifying queries that match all documents, like select * from sources music, books where true.
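Sent through the query API, such a query might look like the following (the hits value is illustrative):

```json
{
    "yql": "select * from sources music, books where true",
    "hits": 10
}
```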

More query performance tuning

More configuration options have been added for query performance tuning:

  • min-hits-per-thread
  • termwise-limit
  • num-search-partitions

These address various aspects of query and document matching; see the schema reference for details.
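These options are set per rank profile in the schema. A sketch follows; the values are illustrative only, not tuning recommendations:

```
rank-profile tuned {
    # Number of partitions the document space on each node
    # is split into for multi-threaded matching
    num-search-partitions: 8

    # Minimum number of hits each match thread should produce
    min-hits-per-thread: 1000

    # Controls when term-wise query evaluation is used
    termwise-limit: 0.05

    first-phase {
        expression: nativeRank
    }
}
```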

Faster deployment

Vespa application packages can become large, especially when you want to use modern large ML models. Such applications will now deploy faster, due to a series of optimizations we have made over the last few months. Distribution to content nodes is faster, and rank profiles are evaluated in parallel using multiple threads - we have measured an 8x improvement on some complex applications.

Hamming distance

Bitwise Hamming distance is now supported as a mathematical operation in ranking expressions, in addition to being a distance metric option in nearest neighbor searches.
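A sketch of both uses, with illustrative field names, and assuming query(q) is declared as a matching int8 tensor in a query profile type:

```
field embedding type tensor<int8>(x[16]) {
    indexing: attribute
    attribute {
        # Use hamming as the distance metric for nearest neighbor search
        distance-metric: hamming
    }
}

rank-profile hamming_rank {
    first-phase {
        # Sum of bitwise hamming distances over the int8 cells
        expression: sum(hamming(attribute(embedding), query(q)))
    }
}
```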

The neural search paradigm shift

On November 8, Jo Kristian Bergum from the Vespa team presented From research to production - bringing the neural search paradigm shift to production at Glasgow University. The slides are available here.


About Vespa: Largely developed by Yahoo engineers, Vespa is an open source big data processing and serving engine. It’s in use by many products, such as Yahoo News, Yahoo Sports, Yahoo Finance, and the Yahoo Ad Platform. Thanks to feedback and contributions from the community, Vespa continues to grow.