Dodil Cloud

Augmented S3
for the AI era.

One bucket where every object is searchable by meaning, transformable by any AI pipeline, and analyzable across millions of siblings.

RAG
Search by meaning
Transform
AI on every upload
Analyze
Insights at bucket scale
The Challenge

AI Agents are ready.
Infrastructure isn’t.

The gap isn’t model capability. It’s the data plumbing underneath every agent.

62%
Experimenting
23%
Scaling
Only 23% of orgs have moved AI agents from pilot to production.
Source: McKinsey Global Survey on AI, 2025
  • Knowledge in silos. Agents can't retrieve it.
  • Every upload re-stitches the same five pipelines.
  • Cross-object insight needs a whole second stack.
Market

A $4.4T opportunity.
Here’s the slice we go after.

McKinsey puts AI’s annual productivity opportunity at $4.4T. AI infra spend is the fastest-growing segment in cloud — and the wedge is an Augmented S3 that does RAG, transform, and analyze natively. That is exactly where K3 sits.

Anchor: McKinsey Global Survey on AI, 2025
TAM
$1.37T
in 2026

Worldwide AI infrastructure spend

Gartner forecasts total worldwide AI spending at $2.5T in 2026, with $1.37T concentrated in AI infrastructure — compute, storage, networking, and platform services purpose-built for AI workloads.

Source: Gartner, AI Spending Forecast (Sept 2025)
SAM
$6.5B
in 2026, growing 25%+ CAGR

Vector DB + RAG infrastructure

K3's wedge sits at the intersection of vector databases ($3.2B in 2026) and RAG infrastructure ($3.3B in 2026) — the two segments K3 collapses into one S3-compatible bucket. Combined growth rate: 25%+ CAGR through 2030.

Source: MarketsAndMarkets · Grand View Research, 2026 forecasts
SOM
$10M ARR
reachable in 24 months

Sovereign deployments — UK, MEA, EU

Bottom-up: 5 paying customers + 5 named enterprise POCs (Crédit Agricole, PepsiCo, NBE, Rubix) + 40+ early-access pipeline, against a $13.5B MEA sovereign cloud market and €100B European sovereign cloud opportunity by 2031.

Source: internal bottom-up · Gartner Sovereign Cloud IaaS, 2026
Solution

Four products.
One platform.

Storage, compute, orchestration, and retrieval — designed to compose into one bucket. Built on the same control plane, the same auth, and the same multi-tenant isolation.

K3

Augmented S3.

An S3-compatible bucket that searches, transforms, and analyzes every object — RAG, AI pipelines on every upload, and cross-object analysis, all without leaving the bucket.

Ignite

Serverless functions for AI agents.

A FaaS runtime designed so AI agents can author, test, and call their own tools. Python and Rust, MCP-enabled, ~200ms cold starts, massively parallel by default.

Scriptum

Workflows that compile.

A purpose-built DSL for LLM-driven workflows — 15 composable primitives, full type checking, resumable threads, first-class human-in-the-loop. The language K3 transforms run on.

VBase

Managed vector database.

Drop-in pymilvus compatibility, instant serverless pools, automated backups, multi-tenant isolation. The vector index every K3 retrieval call hits.

Hero product

K3 — three jobs,
one bucket.

K3 is Augmented S3 — RAG, transform, and analyze, plus an embedded warehouse that pipelines offload into (HTAP soon). Open it in a spreadsheet or ask the built-in support agent.

RAG

Find what's relevant, instantly.

Hybrid semantic search across your bucket — dense vectors, BM25, multimodal queries (text, image, audio, video), and reranking. The full RAG stack, just sitting there waiting for a query.
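
How K3 fuses its dense and BM25 rankings before reranking isn't specified here; reciprocal rank fusion (RRF) is one common technique hybrid search engines use, sketched below for illustration only:

```python
# Reciprocal Rank Fusion (RRF): one common way to merge a dense-vector
# ranking with a BM25 ranking before a reranker runs. K3's actual
# fusion method is not documented here; this is an illustrative sketch.

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked lists of document IDs into one.

    Each document scores sum(1 / (k + rank)) over every list it
    appears in; a higher combined score means a better fused rank.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["doc_a", "doc_b", "doc_c"]   # nearest neighbors by embedding
bm25 = ["doc_c", "doc_a", "doc_d"]    # best keyword matches
fused = rrf_fuse([dense, bm25])
# doc_a ranks high in both lists, so it leads the fused ranking
```

Documents that score well under both retrieval modes rise to the top, which is why hybrid search catches semantic variants that a keyword-only ranking misses.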

Transform

Run an AI pipeline on every upload.

Trigger a Scriptum pipeline on every object that matches your rules — summarize, redact, caption, transcribe, classify. Outputs land back in the same bucket under a derived prefix like `summaries/`.

Analyze

Make sense of the whole bucket.

Run Scriptum pipelines across many objects at once — outputs offload into K3's embedded warehouse (HTAP soon), queryable in SQL or the Excel interface alongside the bucket itself.

Warehouse
Pipelines offload to embedded warehouse · HTAP soon
Excel interface
Open the warehouse as a spreadsheet — pivot, filter, drill-through
Support agent
MCP-aware, answers across bucket + warehouse with citations
S3-compatible
Drop-in for any S3 SDK
<10ms
Hybrid search latency
1 bucket
Three modes, one rules engine
Outcomes

Where K3 saves time today.

Six industries. Same bucket. Same fifteen pipelines, all shipping day one.

Legal
M&A diligence
6 months
a weekend

Hybrid search catches semantic clause variants keyword review misses entirely.

Contract extraction · OCR + Layout · Entities + PII · Summarization
Finance
Compliance + AP
Weeks per audit query
seconds

Every answer carries a citation chain: policy → regulation → guidance.

Invoice/Receipt parsing · Table extraction · Translation · Summarization
Insurance
Claims + fraud
30-day claim cycle
hours

Auto-PII redaction and fraud signal scoring run inline at ingest, not weeks later.

OCR + Layout · Entities + PII · Image understanding · Sentiment + Intent
Engineering
Defect intelligence
30% of engineer time searching
minutes per query

Defect patterns auto-cluster across years of FMEA reports — root causes that no single engineer would have spotted.

OCR + Layout · Code intelligence · Image understanding · Classification
Tech / AI
RAG-as-a-service
6-month RAG build
1 day

Multi-tenant + sovereign by default — your enterprise customers get data residency for free.

Modality-routed embedding · Summarization · Resume/CV parsing · Classification
Defense
Source material triage
Weeks of manual triage
hours

Foreign-language source material auto-translated and indexed alongside English; multiple classification levels handled out of one bucket.

ASR + Diarization · Translation · OCR + Layout · Image understanding · Classification
Companion stack

K3 doesn’t sit alone.

The same control plane runs three more products that compose with it — compute for the agents that build on it, the language they orchestrate in, and the vector index every retrieval call hits.

Ignite

Serverless functions for AI agents.

A FaaS runtime designed so AI agents can author, test, and call their own tools — Python and Rust, ~200ms cold starts, MCP-enabled, massively parallel by default.

Scriptum

LLM workflows that compile.

A purpose-built DSL with 15 composable primitives, full compile-time type checking, resumable threads, and first-class human-in-the-loop. The language K3 uses for every Transform and Analyze pipeline.

VBase

Managed vector database.

Drop-in pymilvus compatibility, instant serverless pools, automated backups, multi-tenant isolation. Powered by Milvus 2.6 and exposed inside every K3 bucket as the underlying vector index.

Competition

One bucket, where everyone else needs four boxes.

The capabilities investors care about, scored across the patterns customers actually compare us to.

Capability
Dodil
K3 + Ignite + Scriptum + VBase
AWS S3 + Pinecone + Lambda
Hyperscaler patchwork
MongoDB Atlas Vector
Managed DB vendor
Vercel + LangChain + Supabase
AI app stack
Self-built (S3 + Milvus + glue)
DIY
S3-compatible storage
Drop-in for any existing S3 SDK
Hybrid semantic search
Dense + BM25 + reranking, in the bucket
AI transform on every upload
Summarize, redact, caption, classify, extract
Cross-bucket analyze pipelines
Anomaly detection, batch scoring, datasets
Workflow language with type checking
Compile-time safety for LLM workflows
Multi-tenant by design
Org isolation across storage, search, processing, credentials
Sovereign / on-prem deploy
Run the full stack in your own datacenter
Built in
Partial / external
Not available

Our Cloud

Sovereign infrastructure for AI workloads.

Your data plane runs in your datacenter. Our control plane is in London — expanding to the Middle East and Asia.

Compute + storage + networking tuned for AI workloads
Multi-tenant isolation with Dodil IAM across every service
Regional data planes, one unified control plane
London · Active
Middle East
Asia
Sovereign AI Cloud

Run K3 on our cloud.
Or run it in yours.

K3 is the same Augmented S3 either way. Drop the data plane into your own datacenter, keep every object in-country, and we manage the upgrades, observability, and pipelines. The control plane sits in London today; the data plane sits wherever your compliance officer wants it.

In-Country Data Plane
Datasets + indexes stay in your datacenter
Knowledge Base
Built-in vector storage & RAG
Model Deployment
Run LLMs on your infrastructure
Sovereignty
Control plane in London today; more regions soon
  • Run frontier models and agent workloads within your borders.
  • Keep datasets and indexes inside your datacenter (data plane).
  • Cloud-grade UX with managed upgrades and observability.
  • Control plane hosted in London today; expanding to more regions soon.

Traction

Demand is signed.
The bottleneck is capacity.

Five paying customers. Forty-plus on the early-access list. Five enterprise POCs across four countries — all on the sovereign K3 deployment path.

5
Paying Customers
Sovereign-first buyers
40+
Early Access
Active signups
5
Enterprise POCs
Regional leaders

Enterprise POCs

CA
Crédit Agricole
France / MEA · Banking

Compliance research + auditor-grade citation chains across policies, regulations, and guidance — running on K3 in their own jurisdiction.

P
PepsiCo
MEA Region · Engineering / Manufacturing

Defect intelligence and supply-chain knowledge retrieval across decades of QA reports — local data plane, no upstream egress.

NBE
NBE
Egypt · Banking

AI-powered customer service and fraud signal scoring — sovereign deployment for the National Bank of Egypt.

R
Rubix Consulting
Middle East · Consulting / Tech

Multi-tenant K3 deployments across Rubix's regional client base — one platform, many sovereign data planes.

TEAM

Veterans of the stacks we’re rebuilding.

SR
Seemo Rizk
CEO / CTO
3-time startup founder · 10 years of infrastructure engineering
YT
Yaguang Tang
Head of Infra & DevOps
Ex Tencent · Canonical · 15 years scaling cloud infrastructure
MA
Mohamed Abouzeed
Hardware & Network Lead
15+ years leading data center operations
KR
Karim Ramadan
Partnership Lead
Ex Google Cloud · Microsoft Azure · 10 years of enterprise sales
DM
David Mikheev
Rust Engineer
5 years shipping production Rust across crypto and distributed systems
SB
Sofia Bedrikhina
Chief of Staff

Investment Opportunity

More demand than we can fulfill.

Raising
$2.5M
Seed Round
Pre-Money
$12M
USD Valuation
Instrument
SAFE
Standard Terms

Use of Funds

~70%
Capacity
  • Sovereign GPU footprint to close the next 20 enterprise POCs
  • Multi-region data plane expansion (UK → MEA → EU)
  • Storage & networking buildout for K3 production tenants
  • Platform reliability and 24/7 sovereign-tier SLAs
~30%
Go-to-market
  • Developer relations & community
  • Enterprise sales & partnerships
  • Regional market expansion (MEA)
  • Brand awareness & content
What $2.5M unlocks
18 months of sovereign capacity to convert the 5 signed POCs and close 20+ more at $250K–500K ARR each. Path to $5M–$10M ARR at the next round.

The Challenge: Demand Exceeds Capacity

We have more demand than we can fulfill. 5 Enterprise POCs, 40+ early access signups, and a growing waitlist. This round is about scaling infrastructure to capture the opportunity — not finding product-market fit.

Thank You

Building Augmented S3 for the next decade of AI.

Get in Touch
seemo@dodil.io
DODIL
Augmented S3 for the AI era.
DODIL · From data to intelligence.
© 2026 Circle Technologies Pte Ltd. All rights reserved. Built for the AI era.