One S3-compatible bucket. Three things you can do with every object that lands in it: search it by meaning (RAG), transform it with an AI pipeline, or analyze it across millions of siblings — all without standing up a single extra service.
Store and retrieve objects by key.
K3 isn’t a vector database with extras. It’s a bucket where every object is searchable by meaning, processable by any AI pipeline, and analyzable across millions of siblings — using the same rules engine, the same auth, and the same storage you already point your S3 SDK at.
Six industries, one bucket. Every card shows the time saved and the launch-day pipelines that power it — all shipping today, all in the same Augmented S3.
The platform underneath the trio. Every capability below is shared across RAG, Transform, and Analyze — wired into the same S3-compatible bucket you’d point your existing tooling at. No staging, no migration, no glue code.
Drop the file in via the S3 API you already use. K3 re-signs the request with your org credentials, persists the object, and queues a discovery event the moment it lands.
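Because the upload path is plain S3, step one is just your existing SDK pointed at a different endpoint. A minimal sketch with boto3; the endpoint URL, bucket, key, and credential values are placeholders, not K3's real ones:

```python
import boto3

# Point the standard S3 SDK at K3 instead of AWS. Endpoint and
# credentials below are placeholders: use whatever your K3
# deployment hands you.
s3 = boto3.client(
    "s3",
    endpoint_url="https://k3.example.com",   # hypothetical K3 endpoint
    aws_access_key_id="YOUR_ORG_KEY",
    aws_secret_access_key="YOUR_ORG_SECRET",
)

# A plain PutObject. K3 re-signs with your org credentials,
# persists the object, and queues the discovery event on landing.
with open("msa.pdf", "rb") as body:
    s3.put_object(Bucket="docs", Key="contracts/2024/msa.pdf", Body=body)

# Retrieval is the same S3 call you already use.
obj = s3.get_object(Bucket="docs", Key="contracts/2024/msa.pdf")
```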
K3 matches the new object against your rules — glob patterns, MIME types, size limits — and queues an ingest job for each match.
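The matching semantics are easy to picture in code. The criteria below (a glob over the key, a MIME allowlist, a size ceiling) come straight from the description above; the field names and example values are illustrative, not K3's actual rule schema:

```python
from fnmatch import fnmatch

# Illustrative rule records. The field names are assumptions, but
# the match criteria (glob, MIME type, size limit) are K3's.
RULES = [
    {"name": "pdf-rag", "glob": "contracts/*/*.pdf",
     "mime": {"application/pdf"}, "max_bytes": 50_000_000},
    {"name": "notes", "glob": "notes/*.md",
     "mime": {"text/markdown"}, "max_bytes": 1_000_000},
]

def matching_rules(key: str, mime: str, size: int):
    """Yield every rule a new object satisfies; each match
    becomes one queued ingest job."""
    for rule in RULES:
        if (fnmatch(key, rule["glob"])
                and mime in rule["mime"]
                and size <= rule["max_bytes"]):
            yield rule

for rule in matching_rules("contracts/2024/msa.pdf",
                           "application/pdf", 2_400_000):
    print(f"queue ingest job for rule {rule['name']!r}")
```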
Scriptum chunks, normalizes, and embeds the content. Vectors land in your bucket’s vector index; job status is tracked end-to-end so you can see exactly where every object is.
Queries embed via Scriptum, fan out across collections, fuse dense and sparse hits with reciprocal rank fusion (RRF), and optionally rerank for the highest-quality top-K.
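RRF itself fits in a few lines. This is the standard formulation (a document's fused score is the sum of 1 / (k + rank) over every ranked list it appears in, with k conventionally set to 60), shown as a sketch rather than K3's internal code:

```python
def rrf_fuse(result_lists, k=60):
    """Reciprocal rank fusion: a document's fused score is the sum
    of 1 / (k + rank) over every ranked list it appears in."""
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["doc-7", "doc-2", "doc-9"]    # vector hits, best first
sparse = ["doc-2", "doc-4", "doc-7"]   # BM25 hits, best first
print(rrf_fuse([dense, sparse]))       # ['doc-2', 'doc-7', 'doc-4', 'doc-9']
```

Documents that appear near the top of both lists (here doc-2 and doc-7) rise above ones that score well in only one, which is exactly why RRF works as a fusion step between dense and sparse retrieval.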
Wire-compatible with the S3 API for upload, REST for everything else. Your existing tooling and SDKs stay the same.
A small bucket, a tiny ranker, and the same modes the real VectorSearch RPC exposes. Toggle Auto, Vector, or Hybrid to see how K3 routes the same query through dense embeddings, sparse BM25, or both fused with RRF.
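As a request, toggling those modes might look like the sketch below. The endpoint path, payload fields, and token are guesses for illustration; only the mode names and the VectorSearch RPC come from K3:

```python
import requests

# Hypothetical REST shape for the VectorSearch RPC. The path and
# field names are illustrative; the modes are K3's.
def search(query: str, mode: str = "auto", top_k: int = 10):
    resp = requests.post(
        "https://k3.example.com/v1/buckets/docs/search",  # placeholder URL
        headers={"Authorization": "Bearer YOUR_TOKEN"},
        json={"query": query, "mode": mode, "top_k": top_k},
    )
    resp.raise_for_status()
    return resp.json()

# Same query, three routings: dense only, dense and sparse fused,
# or let K3 decide.
for mode in ("vector", "hybrid", "auto"):
    hits = search("termination clauses in 2024 MSAs", mode=mode)
```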
K3 isn’t a research artifact. The API, the console, the deployment story, and reliable processing under load are already wired up — drop it in your cluster or run it on Dodil Cloud.
Every operation in K3 — buckets, sources, ingest rules, search, ACL, presigned URLs — has a first-class API. Wire it into your platform with the SDK of your choice; the same surface that powers the console is what you build against.
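One concrete example: since the upload path speaks S3, a presigned upload URL can come straight out of the SDK you already use (endpoint placeholder as before; the REST calls for rules, search, and ACLs are not sketched here):

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://k3.example.com")  # placeholder

# A presigned PUT: hand this URL to a browser or service that holds
# no credentials, and the upload still lands in the bucket and flows
# through the same rules engine.
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "docs", "Key": "uploads/report.pdf"},
    ExpiresIn=900,  # valid for 15 minutes
)
```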
A working web console for managing buckets, ingest rules, vector collections, search, and access policies. Same auth, same multi-tenancy. No need to build your own UI before you can ship.
Deploys cleanly into your Kubernetes cluster — autoscaling, health checks, hardened images, secrets handled. Or skip the setup entirely and run it on Dodil Cloud.
Async ingestion with durable job tracking and self-healing under load. Every upload is queued, traced, and visible end-to-end — jobs don’t get lost when a worker restarts or a downstream service blips.
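From the client side, that visibility can be as simple as polling a job until it settles. The /v1/jobs path and the status values below are assumptions for illustration, not K3's documented API:

```python
import time
import requests

BASE = "https://k3.example.com"  # placeholder endpoint

def wait_for_ingest(job_id: str, token: str, interval: float = 2.0):
    """Poll a hypothetical job-status endpoint until the ingest job
    settles. Durable tracking means the job id stays valid even if a
    worker restarts mid-job."""
    while True:
        resp = requests.get(
            f"{BASE}/v1/jobs/{job_id}",
            headers={"Authorization": f"Bearer {token}"},
        )
        resp.raise_for_status()
        job = resp.json()
        if job["status"] in ("succeeded", "failed"):  # assumed states
            return job
        time.sleep(interval)
```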