VBase · Vector Database

Vector search, served.

VBase is a managed vector database for RAG, semantic search, recommendations, and agentic memory. Provision in a second, run hybrid queries with metadata filters, snapshot on a schedule — all behind a single endpoint and a drop-in client.

< 1s
Provisioning
Hybrid
Dense + sparse
pymilvus
Drop-in client
Collections / tenant
Rolling your own

K8s clusters. Backup scripts. HA replicas. Tenant isolation. Six weeks of ops.

VBase

One API call. Serverless by default. Backups, auto-scaling, and pymilvus drop-in — ready in under a second.

Everything you stopped
wanting to run yourself.

Serverless vector collections you can provision in a second, the enterprise operations you'd otherwise build yourself, and a drop-in client for the tools you already use.

Serverless · Live

Instant pools, per-tenant isolation

Shared serverless pools provision a new collection in under a second with database-level isolation and per-tenant quotas. Scale to tens of millions of vectors without touching a cluster manifest. Dedicated clusters for strict physical isolation are on the roadmap.

Operate · Live

Backups, tenants, and TLS, done

Backup policies with cron schedules and retention counts. One-call restore to the same or a new collection. Multi-tenant isolation via Keycloak OIDC and DRN-based ABAC. TLS on every endpoint by default.
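A policy of roughly this shape pairs a cron schedule with a retention count — field names here are illustrative, not the exact proto:

```json
{
  "policy_name": "nightly",
  "schedule": "0 3 * * *",
  "retention_count": 7,
  "collection": "papers"
}
```

Here `0 3 * * *` fires at 03:00 every day, and `retention_count: 7` keeps only the seven most recent backups.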

Compatible · Live

Drop-in client. Zero rewrites.

AUTOINDEX, IVF_FLAT, HNSW, hybrid dense + sparse search, dynamic fields, and metadata filters — all available through the standard pymilvus, Go, Java, and Node clients. Swap the URI and your existing code runs unchanged.
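Hybrid search fuses a dense (embedding) ranking with a sparse (keyword-style) ranking into one result list; pymilvus ships rankers such as `RRFRanker` for exactly this. A plain-Python sketch of the reciprocal-rank fusion idea — an illustration of the technique, not VBase internals:

```python
def rrf(rankings, k=60):
    """Reciprocal-rank fusion: score(d) = sum of 1 / (k + rank(d)) across lists."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["doc3", "doc1", "doc2"]    # nearest neighbours on the vector field
sparse = ["doc1", "doc4", "doc3"]   # keyword-style sparse matches
print(rrf([dense, sparse]))         # docs ranked well in both lists win
```

Documents that appear near the top of both lists (like `doc1`) outrank documents that dominate only one.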

The lifecycle

Provision. Ingest. Query. Back up.

Step 01

Provision

`CreateCluster` for dedicated (~48s), `AllocateDatabase` for shared (~1s). Both hand back a pymilvus-ready endpoint and a bearer token.
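A minimal sketch of the shared-pool call, assuming a JSON body over the REST surface — the field names below are hypothetical, and the real message shape lives in the dodil-vbase proto:

```python
# Hypothetical AllocateDatabase request body -- illustrative field names only.
allocate_request = {
    "tenant_id": "acme",            # drives per-tenant isolation and quotas
    "database_name": "papers-dev",  # becomes db_name in pymilvus
}

# The actual call needs a live API; shown commented for shape only:
# resp = requests.post(f"{api_base}/v1/databases", json=allocate_request,
#                      headers={"Authorization": f"Bearer {token}"})

# A successful response hands back a pymilvus-ready endpoint plus token:
endpoint = "https://vbase-ct-a1b2c3.cloud-svc.dodil.io:443"  # example shape
print(endpoint)
```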

Step 02

Ingest

Create collections, build indexes (AUTOINDEX / IVF_FLAT / HNSW), and insert vectors via standard pymilvus — no custom SDK, no lock-in.
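The insert payload is plain dicts, one per entity: the vector field plus any dynamic metadata fields. A sketch under those assumptions (the `client.insert` call itself needs a live endpoint, so it is shown commented):

```python
import random

dim = 1536  # matches the dimension passed to create_collection
rows = [
    {
        "vector": [random.random() for _ in range(dim)],  # your embedding
        "title": f"paper-{i}",                            # dynamic field
        "year": 2020 + i,                                 # filterable metadata
    }
    for i in range(3)
]

# With a connected MilvusClient (see the compatibility snippet below):
# client.insert(collection_name="papers", data=rows)
print(len(rows), len(rows[0]["vector"]))
```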

Step 03

Query

Dense, sparse, hybrid, or metadata-filtered — every query shape you expect from a production vector DB. The engine handles scaling under load; you just send the request.
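For intuition, "top-K under the COSINE metric" is just a sort by cosine distance. Here is a brute-force sketch in plain Python — the exact ranking that an index like HNSW approximates at scale, not VBase internals:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity: the distance COSINE-metric queries rank by."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

def top_k(query, vectors, k):
    """Exact top-K by cosine distance, nearest first."""
    ranked = sorted(vectors, key=lambda name: cosine_distance(query, vectors[name]))
    return ranked[:k]

docs = {"a": [1.0, 0.0], "b": [0.9, 0.1], "c": [0.0, 1.0]}
print(top_k([1.0, 0.05], docs, k=2))  # "a" and "b" point the same way as the query
```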

Step 04

Back up

Policy-driven backups with cron schedules and retention counts. Restore to the same or a new cluster with a single call. Nothing to configure on-cluster.

Just pymilvus. And a few REST calls for ops.

Drop-in client compatibility means no SDK rewrites. The operational surface — clusters, backups, scaling — lives behind a small REST/gRPC API.

from pymilvus import MilvusClient

# Point pymilvus at your VBase endpoint — same client, same API.
client = MilvusClient(
    uri="https://vbase-ct-a1b2c3.cloud-svc.dodil.io:443",
    token="Bearer <keycloak_token>",
    db_name="default",
)

client.create_collection(
    collection_name="papers",
    dimension=1536,
    metric_type="COSINE",
    auto_id=True,
)
Try it

Three tabs. The whole mental model.

Provision contrasts dedicated vs shared with simulated log streams so you can feel the provisioning time difference. Query lets you click a live 2D vector space to see top-K nearest neighbours and their distances. Collection is a tuner — dimension, index type, metric — that generates the exact pymilvus call you'd paste into a notebook.

Shared pools provision in under a second. Dedicated clusters (coming soon) take about a minute — same API surface, stricter isolation.
Demo runs entirely in your browser — provisioning times and API shapes are lifted from the real dodil-vbase proto.

Questions, answered.

Stop babysitting infrastructure. Start shipping.

Join Early Access for $500 in credits, priority onboarding, and a direct line to the database team. Your first collection is one API call away.

Powered by Milvus 2.6
DODIL · From data to intelligence.
© 2026 Circle Technologies Pte Ltd. All rights reserved. Built for the AI era.