Inside Vector Database Quantization: Product, Binary, and Scalar | S2 E23

How AI Is Built

When you store vectors as 32-bit floats, every dimension costs 4 bytes.
With roughly 1,000 dimensions per vector and millions of vectors, costs explode.
A simple chatbot can cost thousands of dollars per month just to store and search through its vectors.
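To put numbers on it (illustrative figures, not from the episode), a million 1,024-dimensional float32 vectors already take about 4 GB before any index overhead:

```python
# Back-of-the-envelope storage cost for raw float32 vectors.
# Illustrative numbers: 1M vectors, 1,024 dimensions.
num_vectors = 1_000_000
dims = 1024
bytes_per_float32 = 4

total_gb = num_vectors * dims * bytes_per_float32 / 1e9
print(f"{total_gb:.1f} GB")  # ~4.1 GB, before any index or replication overhead
```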
*The Fix: Quantization*
Think of it like image compression. JPEGs look almost as good as raw photos but take up far less space. Quantization does the same for vectors.
Today we continue our series on search with Zain Hasan, a former ML engineer at Weaviate and now a Senior AI/ML Engineer at Together. We talk about the different types of quantization, when and how to use them, and their tradeoffs.
*Three Ways to Quantize:*
1. Binary Quantization
- Turn each number into just a 0 or 1
- Ask: "Is this dimension positive or negative?"
- Works great at 1000+ dimensions
- Cuts memory by ~97% (32 bits down to 1 bit per dimension)
- Best for normally distributed data (see the sketch below)
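A minimal NumPy sketch of the idea, assuming sign-based binarization and Hamming distance for comparison (not the exact scheme of any particular database):

```python
import numpy as np

def binary_quantize(vectors: np.ndarray) -> np.ndarray:
    """Keep only the sign of each dimension, packed 8 dims per byte."""
    bits = (vectors > 0).astype(np.uint8)  # 1 if positive, 0 otherwise
    return np.packbits(bits, axis=1)       # 32 bits per dim -> 1 bit per dim

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> int:
    """Similarity search over binary codes: count differing bits."""
    return int(np.unpackbits(code_a ^ code_b).sum())

rng = np.random.default_rng(0)
vectors = rng.standard_normal((2, 1024)).astype(np.float32)
codes = binary_quantize(vectors)           # shape (2, 128): 128 bytes per vector
print(hamming_distance(codes[0], codes[1]))
```

Packing 1,024 dimensions into 128 bytes is where the ~97% saving comes from: each vector shrinks from 4,096 bytes to 128.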
2. Product Quantization
- Split each vector into chunks
- Cluster similar chunks with k-means
- Store cluster IDs instead of the full numbers
- Good when binary quantization fails
- More complex, but flexible (see the sketch below)
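A minimal sketch of product quantization using scikit-learn's KMeans; the chunk and centroid counts are illustrative, and a real implementation would also compute distances directly against the codebooks:

```python
import numpy as np
from sklearn.cluster import KMeans

def train_codebooks(vectors, num_chunks=8, num_centroids=256):
    """Fit one k-means codebook per vector chunk."""
    chunks = np.split(vectors, num_chunks, axis=1)
    return [KMeans(n_clusters=num_centroids, n_init=4, random_state=0).fit(c)
            for c in chunks]

def encode(vectors, codebooks):
    """Replace each chunk with the ID of its nearest centroid (1 byte each)."""
    chunks = np.split(vectors, len(codebooks), axis=1)
    ids = [km.predict(c) for km, c in zip(codebooks, chunks)]
    return np.stack(ids, axis=1).astype(np.uint8)

rng = np.random.default_rng(0)
data = rng.standard_normal((5000, 64)).astype(np.float32)
codebooks = train_codebooks(data)   # 8 chunks of 8 dims, 256 centroids each
codes = encode(data, codebooks)     # 256 bytes per vector -> 8 bytes
```

With 256 centroids per chunk, each chunk ID fits in one byte, so a 64-dimensional float32 vector (256 bytes) compresses to 8 bytes.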
3. Scalar Quantization
- Use 8 bits per dimension instead of 32
- A simple middle ground
- Keeps more precision than binary
- Smaller savings than binary (4x instead of 32x; see the sketch below)
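A minimal sketch of scalar quantization to 8 bits, assuming a per-dimension min/max calibration over the dataset:

```python
import numpy as np

def scalar_quantize(vectors: np.ndarray):
    """Map float32 values to uint8 over each dimension's observed range."""
    lo = vectors.min(axis=0)
    scale = (vectors.max(axis=0) - lo) / 255.0   # assumes every dim has spread
    codes = np.round((vectors - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Approximate reconstruction; the rounding error is the lost precision."""
    return codes.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 128)).astype(np.float32)
codes, lo, scale = scalar_quantize(data)         # 4 bytes -> 1 byte per dim
print(np.abs(data - dequantize(codes, lo, scale)).max())  # worst-case error
```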
*Key Quotes:*
"Vector databases are pretty much the commercialization and the productization of representation learning."
"I think quantization, it builds on the assumption that there is still noise in the embeddings. And if I'm looking, it's pretty similar as well to the thought of Matryoshka embeddings that I can reduce the dimensionality."
"Going from text to multimedia in vector databases is really simple."
"Vector databases allow you to take all the advances that are happening in machine learning and now just simply turn a switch and use them for your application."
*Zain Hasan:*
[**LinkedIn**]( / zainhas )
[**X (Twitter)**](x.com/zainhasan6)
[**Weaviate**](weaviate.io/)
[**Together**](www.together.ai/)
*Nicolay Gerold:*
[**LinkedIn**]( / nicolay-gerold )
[**X (Twitter)**]( / nicolaygerold )
vector databases, quantization, hybrid search, multi-vector support, representation learning, cost reduction, memory optimization, multimodal recommender systems, brain-computer interfaces, weather prediction models, AI applications
