Table of Contents
- 1. About Qdrant
  - Features
    - Filtering and Payload
    - Hybrid Search with Sparse Vectors
    - Vector Quantization and On-Disk Storage
    - Distributed Deployment
    - Highlighted Features
  - Integrations
- 2. Quick Start
  - 1. Download and Run
    - Install qdrant-client
    - Docker
  - 2. Initialize the Client
  - 3. Create a Collection
  - 4. Add Vectors
  - 5. Run a Query
  - 6. Add a Filter
- 3. Qdrant Examples
1. About Qdrant
Qdrant - High-performance, massive-scale Vector Database for the next generation of AI.
qdrant (read: quadrant)
- Website: https://qdrant.tech
- GitHub: https://github.com/qdrant/qdrant
- qdrant-client: https://github.com/qdrant/qdrant-client
- Examples: https://github.com/qdrant/examples
- Qdrant Cloud: https://cloud.qdrant.io
Features
Filtering and Payload
Qdrant can attach any JSON payloads to vectors, allowing for both the storage and filtering of data based on the values in these payloads. Payload supports a wide range of data types and query conditions, including keyword matching, full-text filtering, numerical ranges, geo-locations, and more.
Filtering conditions can be combined in various ways, including should, must, and must_not clauses, ensuring that you can implement any desired business logic on top of similarity matching.
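As an illustration, a filter combining these clauses might look like the following with the Python client; the field names and values here are made up for the example.

```python
from qdrant_client.http.models import FieldCondition, Filter, MatchValue, Range

# Hypothetical conditions on hypothetical payload fields:
# the city must be "Berlin", a price below 100 is preferred,
# and anything in the "archive" category is excluded.
example_filter = Filter(
    must=[FieldCondition(key="city", match=MatchValue(value="Berlin"))],
    should=[FieldCondition(key="price", range=Range(lt=100))],
    must_not=[FieldCondition(key="category", match=MatchValue(value="archive"))],
)
```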
Hybrid Search with Sparse Vectors
To address the limitations of vector embeddings when searching for specific keywords, Qdrant introduces support for sparse vectors in addition to the regular dense ones.
Sparse vectors can be viewed as a generalisation of BM25 or TF-IDF ranking. They enable you to harness the capabilities of transformer-based neural networks to weigh individual tokens effectively.
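A minimal sketch of a collection that holds both dense and sparse vectors with the Python client; the collection name, vector names, and dense size are illustrative, and a recent qdrant-client with sparse-vector support is assumed.

```python
from qdrant_client import QdrantClient
from qdrant_client.http.models import Distance, SparseVectorParams, VectorParams

client = QdrantClient(url="http://localhost:6333")

# Hypothetical hybrid collection: one named dense vector plus one named sparse vector.
client.create_collection(
    collection_name="hybrid_collection",
    vectors_config={"dense": VectorParams(size=384, distance=Distance.COSINE)},
    sparse_vectors_config={"sparse": SparseVectorParams()},
)
```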
Vector Quantization and On-Disk Storage
Qdrant provides multiple options to make vector search cheaper and more resource-efficient. Built-in vector quantization reduces RAM usage by up to 97% and dynamically manages the trade-off between search speed and precision.
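A sketch of how this can be configured per collection with the Python client; the parameters below are illustrative, not tuning advice.

```python
from qdrant_client import QdrantClient
from qdrant_client.http.models import (
    Distance,
    ScalarQuantization,
    ScalarQuantizationConfig,
    ScalarType,
    VectorParams,
)

client = QdrantClient(url="http://localhost:6333")

# Hypothetical collection: original float32 vectors stay on disk,
# int8-quantized copies are kept in RAM for fast search.
client.create_collection(
    collection_name="quantized_collection",
    vectors_config=VectorParams(size=768, distance=Distance.COSINE, on_disk=True),
    quantization_config=ScalarQuantization(
        scalar=ScalarQuantizationConfig(type=ScalarType.INT8, always_ram=True)
    ),
)
```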
Distributed Deployment
Qdrant offers comprehensive horizontal scaling support through two key mechanisms:
- Size expansion via sharding and throughput enhancement via replication
- Zero-downtime rolling updates and seamless dynamic scaling of the collections
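In a multi-node cluster these settings are applied per collection; a hedged sketch with the Python client, where the shard and replica counts are purely illustrative, is shown below.

```python
from qdrant_client import QdrantClient
from qdrant_client.http.models import Distance, VectorParams

client = QdrantClient(url="http://localhost:6333")

# Hypothetical collection spread across a cluster:
# 4 shards for capacity, each shard kept in 2 copies for throughput and availability.
client.create_collection(
    collection_name="distributed_collection",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
    shard_number=4,
    replication_factor=2,
)
```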
Highlighted Features
- Query Planning and Payload Indexes - leverages stored payload information to optimize query execution strategy.
- SIMD Hardware Acceleration - utilizes modern CPU x86-x64 and Neon architectures to deliver better performance.
- Async I/O - uses io_uring to maximize disk throughput utilization even on network-attached storage.
- Write-Ahead Logging - ensures data persistence with update confirmation, even during power outages.
There are three ways to use Qdrant:
- Run a Docker image if you don’t have a Python development environment. Set up a local Qdrant server and storage in a few moments.
- Get the Python client if you’re familiar with Python. Just pip install qdrant-client. The client also supports an in-memory database.
- Spin up a Qdrant Cloud cluster: the recommended method to run Qdrant in production.
Read the Quickstart to set up your first instance.
Recommended workflow
Integrations
Examples and/or documentation of Qdrant integrations:
- Cohere (blogpost on building a QA app with Cohere and Qdrant) - Use Cohere embeddings with Qdrant
- DocArray - Use Qdrant as a document store in DocArray
- Haystack - Use Qdrant as a document store with Haystack (blogpost).
- LangChain (blogpost) - Use Qdrant as a memory backend for LangChain.
- LlamaIndex - Use Qdrant as a Vector Store with LlamaIndex.
- OpenAI - ChatGPT retrieval plugin - Use Qdrant as a memory backend for ChatGPT
- Microsoft Semantic Kernel - Use Qdrant as persistent memory with Semantic Kernel
2. Quick Start
https://qdrant.tech/documentation/quick-start/
1. Download and Run
Install qdrant-client
pip install qdrant-client
Docker
First, download the latest Qdrant image from Dockerhub:
docker pull qdrant/qdrant
Then, run the service:
docker run -p 6333:6333 -p 6334:6334 \
    -v $(pwd)/qdrant_storage:/qdrant/storage:z \
    qdrant/qdrant
Under the default configuration, all data will be stored in the ./qdrant_storage directory. This will also be the only directory that both the container and the host machine can see.
Qdrant is now accessible:
- REST API: localhost:6333
- Web UI: localhost:6333/dashboard
- GRPC API: localhost:6334
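As a quick sanity check, you can hit the REST API from Python; a small sketch using the requests library (the same information is visible in the Web UI):

```python
import requests

# List the existing collections via the REST API (empty right after a fresh start).
response = requests.get("http://localhost:6333/collections")
print(response.json())
```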
2. Initialize the Client
from qdrant_client import QdrantClient

client = QdrantClient("localhost", port=6333)
or
client = QdrantClient(url="http://localhost:6333")
Initialize from local storage:
client = QdrantClient(path="path/to/db")
# or
client = QdrantClient(":memory:")
By default, Qdrant starts with no encryption or authentication.
This means anyone with network access to your machine can access your Qdrant container instance.
Please read Security carefully for details on how to secure your instance.
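If you do enable API-key authentication (for example with Qdrant Cloud), the Python client accepts the key directly; the URL and key below are placeholders.

```python
from qdrant_client import QdrantClient

# Placeholder URL and API key; substitute the values of your own secured instance.
client = QdrantClient(
    url="https://your-qdrant-instance:6333",
    api_key="your-api-key",
)
```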
3. Create a Collection
You will be storing all of your vector data in a Qdrant collection. Let’s call it test_collection.
This collection will be using a dot product distance metric to compare vectors.
from qdrant_client.http.models import Distance, VectorParams

client.create_collection(
    collection_name="test_collection",
    vectors_config=VectorParams(size=4, distance=Distance.DOT),
)
The TypeScript and Rust examples in the official Quickstart use async/await syntax, so they should be used in an async block; the Java examples are enclosed within a try/catch block.
4. Add Vectors
Let’s now add a few vectors with a payload. Payloads are other data you want to associate with the vector:
from qdrant_client.http.models import PointStruct

operation_info = client.upsert(
    collection_name="test_collection",
    wait=True,
    points=[
        PointStruct(id=1, vector=[0.05, 0.61, 0.76, 0.74], payload={"city": "Berlin"}),
        PointStruct(id=2, vector=[0.19, 0.81, 0.75, 0.11], payload={"city": "London"}),
        PointStruct(id=3, vector=[0.36, 0.55, 0.47, 0.94], payload={"city": "Moscow"}),
        PointStruct(id=4, vector=[0.18, 0.01, 0.85, 0.80], payload={"city": "New York"}),
        PointStruct(id=5, vector=[0.24, 0.18, 0.22, 0.44], payload={"city": "Beijing"}),
        PointStruct(id=6, vector=[0.35, 0.08, 0.11, 0.44], payload={"city": "Mumbai"}),
    ],
)

print(operation_info)
Response:
operation_id=0 status=<UpdateStatus.COMPLETED: 'completed'>
5. Run a Query
Let’s ask a basic question - which of our stored vectors are most similar to the query vector [0.2, 0.1, 0.9, 0.7]?
search_result = client.search(
    collection_name="test_collection",
    query_vector=[0.2, 0.1, 0.9, 0.7],
    limit=3,
)

print(search_result)
Response:
ScoredPoint(id=4, version=0, score=1.362, payload={"city": "New York"}, vector=None),
ScoredPoint(id=1, version=0, score=1.273, payload={"city": "Berlin"}, vector=None),
ScoredPoint(id=3, version=0, score=1.208, payload={"city": "Moscow"}, vector=None)
The results are returned in decreasing similarity order.
Note that payload and vector data are missing in these results by default. See payload and vector in the result for how to enable them.
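For example, the same query can ask for payloads and vectors explicitly; this sketch reuses the client and collection from above.

```python
# Return payloads and stored vectors alongside each hit.
search_result = client.search(
    collection_name="test_collection",
    query_vector=[0.2, 0.1, 0.9, 0.7],
    with_payload=True,
    with_vectors=True,
    limit=3,
)
print(search_result)
```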
6. Add a Filter
We can narrow down the results further by filtering by payload.
Let’s find the closest results that include “London”.
from qdrant_client.http.models import Filter, FieldCondition, MatchValue

search_result = client.search(
    collection_name="test_collection",
    query_vector=[0.2, 0.1, 0.9, 0.7],
    query_filter=Filter(
        must=[FieldCondition(key="city", match=MatchValue(value="London"))]
    ),
    with_payload=True,
    limit=3,
)

print(search_result)
Response:
ScoredPoint(id=2, version=0, score=0.871, payload={"city": "London"}, vector=None)
To make filtered search fast on real datasets, we highly recommend creating payload indexes!
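For the city field used in this example, such an index could be created like this (a minimal sketch reusing the client from above):

```python
# Index the "city" payload field as a keyword so the filter above stays fast.
client.create_payload_index(
    collection_name="test_collection",
    field_name="city",
    field_schema="keyword",
)
```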
You have just conducted vector search. You loaded vectors into a database and queried the database with a vector of your own.
Qdrant found the closest results and presented you with a similarity score.
3. Qdrant Examples
This repo contains a collection of tutorials, demos, and how-to guides on how to use Qdrant and adjacent technologies.
Example | Description | Technologies |
---|---|---|
Huggingface Spaces with Qdrant | Host a public demo quickly for your similarity app with HF Spaces and Qdrant Cloud | HF Spaces, CLIP, semantic image search |
QA which is always updated: Recency and Cohere using Llama Index | Notebook demonstrating how to keep your QA system using up-to-date information | Llama Index, OpenAI Embeddings, Cohere Reranker |
Qdrant 101 - Getting Started | Introduction to semantic search and the recommendation API of Qdrant | NumPy and Faker |
Qdrant 101 - Text Data | Introduction to the intersection of Vector Databases and Natural Language Processing | transformers, datasets, GPT-2, Sentence Transformers, PyTorch |
Qdrant 101 - Audio Data | Introduction to audio data, audio embeddings, and music recommendation systems | transformers, librosa, openl3, panns_inference, streamlit, datasets, PyTorch |
Ecommerce - reverse image search | Notebook demonstrating how to implement a reverse image search for ecommerce | CLIP, semantic image search, Sentence-Transformers |
Serverless Semantic Search | Get a semantic page search without setting up a server | Rust, AWS lambda, Cohere embedding |
Basic RAG | Basic RAG pipeline with Qdrant and OpenAI SDKs | OpenAI, Qdrant, FastEmbed |
Step-back prompting in Langchain RAG | Step-back prompting for RAG, implemented in Langchain | OpenAI, Qdrant, Cohere, Langchain |
Collaborative Filtering and MovieLens | A notebook demonstrating how to build a collaborative filtering system using Qdrant | Sparse Vectors, Qdrant |
Use semantic search to navigate your codebase | Implement semantic search application for code search task | Qdrant, Python, sentence-transformers, Jina |
2024-03-27 (Wed)