
Qdrant Vector Database

January 22, 2026


We’ve added Qdrant support to MTD Cloud, enabling teams to run a production-ready, open-source vector database directly on Kubernetes for RAG, semantic search, and recommendation workloads.

Why Qdrant on MTD Cloud

Most AI prototypes start fast, but they often get stuck when you need repeatable operations: reliable storage, filtering, multi-tenant separation, predictable performance, and a clean deployment path across dev/test/prod. With Qdrant on MTD Cloud, you get a standardized vector storage foundation that’s easy to adopt across teams—without vendor lock-in.

Vector search only becomes valuable in production when you can combine semantic similarity with real-world constraints: tenant boundaries, document sources, access levels, time ranges, product categories, and more. Qdrant is built around vectors plus payload metadata, which makes it a strong fit for production RAG where filtering is non-negotiable.
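To make that concrete, here is a minimal sketch (plain Python, no client library) of how a Qdrant search request body combines a query vector with metadata constraints. The field names `team` and `access` are illustrative examples, not a fixed schema:

```python
import json

def build_filtered_search(query_vector, limit, must_conditions):
    """Build a Qdrant search request body that combines semantic
    similarity (the query vector) with hard metadata constraints."""
    return {
        "vector": query_vector,
        "limit": limit,
        "filter": {
            # All "must" conditions have to hold, like an AND clause.
            "must": [
                {"key": key, "match": {"value": value}}
                for key, value in must_conditions.items()
            ]
        },
    }

body = build_filtered_search(
    query_vector=[0.1, 0.2, 0.3, 0.4],
    limit=5,
    must_conditions={"team": "security", "access": "internal"},
)
print(json.dumps(body, indent=2))
```

Sending this body to the collection's `points/search` endpoint restricts the top-K results to points whose payload satisfies every condition, which is exactly the tenant- and access-boundary behavior described above.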

MTD Cloud adds the missing platform layer: Kubernetes-native operations, consistent deployment patterns, and a predictable way to scale from a single use case to a shared AI foundation.


What’s included

  • Kubernetes-ready deployment pattern

    Deploy Qdrant as a first-class component within your clusters using standard platform workflows.

  • Production storage foundations

    Persistent storage and deployment baselines designed for real workloads (not “toy” demos).

  • RAG-friendly data model

    Store vectors with payload metadata to support filtered retrieval and traceability.

  • Multi-team adoption

    Clean onboarding model that fits existing namespaces, RBAC patterns, and team workflows.

  • Integration-ready usage

    Simple API patterns for creating collections, upserting points, and searching top-K results.


Best-fit use cases

Qdrant support is ideal for:

  • RAG assistants grounded in internal policies, runbooks, and technical docs

  • Semantic search across knowledge bases, tickets, wikis, and documentation portals

  • Multi-tenant AI platforms where each team/client needs data separation

  • Recommendations & similarity use cases (content, products, documents, templates)

  • AI feature add-ons inside existing apps (search, classification, summarization with citations)


Quick start

Install Qdrant (illustrative Helm example) and verify it's running, then create a collection, upsert points, and run a filtered search:

# Install (example with Helm chart pattern):
helm repo add qdrant https://qdrant.github.io/qdrant-helm
helm repo update
helm upgrade -i qdrant qdrant/qdrant \
  --namespace ai-platform --create-namespace
kubectl -n ai-platform get pods

# Create a collection (define embedding size + distance metric):
curl -X PUT "http://qdrant.ai-platform.svc.cluster.local:6333/collections/docs" \
  -H "Content-Type: application/json" \
  -d '{"vectors": {"size": 1536, "distance": "Cosine"}}'

# Upsert vectors with metadata payload (source/team/ACL tags, etc.).
# Note: point vectors must match the collection's configured size (1536 above);
# the 4-dimensional vectors below are shortened placeholders for readability.
curl -X PUT "http://qdrant.ai-platform.svc.cluster.local:6333/collections/docs/points?wait=true" \
  -H "Content-Type: application/json" \
  -d '{
    "points": [
      {
        "id": 1,
        "vector": [0.1, 0.2, 0.3, 0.4],
        "payload": { "source": "policy", "team": "security", "access": "internal" }
      }
    ]
  }'

# Run a filtered similarity search (top-K with metadata constraints):
curl -X POST "http://qdrant.ai-platform.svc.cluster.local:6333/collections/docs/points/search" \
  -H "Content-Type: application/json" \
  -d '{
    "vector": [0.1, 0.2, 0.3, 0.4],
    "limit": 5,
    "filter": {
      "must": [
        { "key": "team", "match": { "value": "security" } },
        { "key": "access", "match": { "value": "internal" } }
      ]
    }
  }'
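The collection above is configured with "Cosine" distance. As a quick sanity check on what that scoring means, cosine similarity is the dot product of two vectors divided by the product of their norms, so vectors pointing in the same direction score near 1.0 regardless of magnitude:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: dot(a, b) / (|a| * |b|).
    ~1.0 = same direction, 0.0 = orthogonal, -1.0 = opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

v = [0.1, 0.2, 0.3, 0.4]
print(cosine_similarity(v, v))            # ~1.0: identical vectors
print(cosine_similarity([1, 0], [0, 1]))  # 0.0: orthogonal vectors
```

Because the metric ignores vector magnitude, it pairs well with text embeddings, where direction carries the semantic signal.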

Conclusion

With Qdrant on MTD Cloud, teams can move from “RAG demos” to production-grade AI features using an open-source, Kubernetes-native vector database that supports the realities of enterprise data: filtering, governance, and multi-team adoption.

Instead of each team building their own vector storage stack, you now have a shared, standardized foundation for semantic search and retrieval, accelerating delivery while keeping operations and data control consistent across environments.
