Ignite
Serverless Functions

Ultra-fast serverless functions for the AI era. Execute code in milliseconds, scale from zero to thousands, and pay only for what you use.

  • Sub-200ms cold starts with pre-warmed containers.
  • Auto-scale from 0 to 1000+ concurrent executions.
  • Deploy Python functions now; Node.js, Go, and Rust coming soon.
  • Integrates seamlessly with VBase and Objects.
Demo: write code → deploy → auto-scale on demand.
Ignite Notes

Simple, granular explanations

A walkthrough of how Ignite works, backed by the DODIL stack.

Step 1
Write a tiny handler
Start with the smallest unit of product value: a single function that accepts input and returns a response. No servers to maintain, no framework decisions to revisit later. You can ship a new capability in minutes and iterate without infrastructure work.
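As a sketch, that smallest unit is just a plain function that takes an event and returns a response. The handler name and fields below are illustrative, not part of the Ignite API; on Ignite you would wrap it with the `@ignite.function` decorator shown later on this page.

```python
def greet(event):
    """A minimal handler: read input, compute, return a response dict."""
    name = event.get("name", "world")
    return {"status": 200, "body": f"Hello, {name}!"}
```

Because the handler is a pure function of its input, it is trivial to unit-test locally before deploying.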
Step 2
Deploy in seconds
Deployment is a product action, not an infrastructure project. Publish once, then choose how it should live: persistent for customer‑facing routes, TTL for experiments, or ephemeral for one‑off jobs and migrations.
Step 3
Pick the right tier
Resource tiers map cleanly to user outcomes. Small tiers are great for lightweight features, larger tiers for data processing and ML. You can ship a new feature with a safe default and upgrade tiers as usage grows.
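One way to keep that mapping explicit is a small lookup table from feature class to tier. The tier names and limits below are assumptions for illustration, not official Ignite tiers; only the `memory`/`timeout` parameter shapes come from the deploy example on this page.

```python
# Illustrative tier table: names and limits are assumptions, not official tiers.
TIERS = {
    "small":  {"memory": "256MB", "timeout": "10s"},   # lightweight features
    "medium": {"memory": "512MB", "timeout": "30s"},   # typical API work
    "large":  {"memory": "2GB",   "timeout": "120s"},  # data processing / ML
}

# Associate feature classes with tiers so cost stays a product decision.
FEATURE_CLASS_TIER = {
    "webhook": "small",
    "api": "medium",
    "inference": "large",
}

def tier_for(feature_class):
    """Pick a tier for a feature class, with 'small' as the safe default."""
    return TIERS[FEATURE_CLASS_TIER.get(feature_class, "small")]
```

Upgrading a feature class as usage grows is then a one-line change to the table rather than a per-function edit.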
Step 4
Run sync or async
Use synchronous calls for product surfaces that require immediate feedback, and async for background work. Live logs and status tracking make it easy for support and ops to answer customer questions quickly.
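The two invocation styles can be sketched in plain Python. Here a thread pool stands in for the platform's async invocation; the real call shape is platform-specific, so treat the submit/track pattern (not the names) as the point.

```python
from concurrent.futures import ThreadPoolExecutor

def handler(event):
    return {"ok": True, "items": len(event.get("items", []))}

# Synchronous: block until the result is ready (product surfaces that need
# immediate feedback).
sync_result = handler({"items": [1, 2, 3]})

# Asynchronous: submit, keep a handle, and check status later (background
# work). A thread pool stands in for the platform's async invocation here.
with ThreadPoolExecutor() as pool:
    future = pool.submit(handler, {"items": [1, 2]})
    async_result = future.result()  # in real use, poll status or await later
```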
Narrative
A common story: a team ships a new feature, a campaign drives a traffic spike, and performance becomes the headline instead of the product. Ignite flips that story. You ship in hours, scale without re‑architecting, and keep response times consistent even when demand is unpredictable.
Who it’s for
Product teams shipping new capabilities fast, growth teams running experiments, and platform teams who want a safe default for bursty workloads.
When to use
APIs, webhooks, data transforms, AI inference, and scheduled jobs that need to scale without on‑call overhead.
When not to use
Long‑running services with always‑on state. Keep those in persistent services and use Ignite for the elastic edges.
Ignite Research

Briefs, patterns, and technical notes

Short reads that document how we build, deploy, and scale Ignite in production.

Performance · Latency
Cold starts that feel instant
Ignite is optimized for the user’s first request. Warm pools and image pre‑pulling minimize wait time, so new features feel responsive from day one. This reduces drop‑off for new flows and improves perceived product quality.
Scaling · Reliability
Predictable scaling under load
Scale behavior is predictable for launches and spikes. Keep a warm baseline for steady traffic, then let the platform expand automatically when campaigns, announcements, or integrations drive bursts.
Resources · Cost
Resource tiers you can reason about
Clear tiers make cost and performance a product decision, not a guess. You can associate tiers with feature classes and keep pricing aligned with user value.
Observability · Developer Experience
Built‑in logs and results
Every run keeps logs, outputs, and status so teams can debug, replay, and improve reliability without extra tooling. This reduces support overhead and speeds up incident response.
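The run record the platform keeps can be approximated in a few lines. This is a local illustration of the idea, not the Ignite storage API: each run captures logs, output, status, and duration in one structure.

```python
import time
import traceback

def run_with_record(fn, event):
    """Wrap a handler call and capture logs, output, and status for the run,
    mirroring the kind of record the platform keeps automatically."""
    record = {"logs": [], "status": "running", "output": None}
    log = record["logs"].append
    start = time.time()
    try:
        record["output"] = fn(event, log)
        record["status"] = "succeeded"
    except Exception:
        record["logs"].append(traceback.format_exc())
        record["status"] = "failed"
    record["duration_s"] = round(time.time() - start, 3)
    return record

def handler(event, log):
    log(f"processing {event['id']}")
    return {"id": event["id"], "ok": True}
```

With records like this, replaying a failed run is just calling the handler again with the stored event.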

Deploy in seconds

# handler.py
from dodil import ignite

# `download` and `model` are assumed to be provided elsewhere, e.g. an
# Objects helper and a model loaded at import time.

@ignite.function(
    memory="512MB",
    timeout="30s",
    regions=["LON-1"]
)
def process_image(event):
    """Auto-scales from 0 to 1000 concurrent executions"""
    image = download(event.bucket, event.key)
    result = model.predict(image)
    return {"labels": result.labels}

# Deploy: dodil ignite deploy handler.py
# Done. Live at: https://ignite.dodil.io/process-image-a1b2c3

Common use cases

From simple APIs to complex data pipelines.

API Backends

Build RESTful and GraphQL APIs that scale automatically from zero to thousands of requests per second.

Ideal for endpoints that see unpredictable traffic: launches, partner integrations, and regional rollouts. Keep latency consistent without pre‑provisioning.
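A REST-style backend on functions often reduces to a small method-and-path dispatch. The router below is a self-contained illustration of that pattern; the route names and event shape are assumptions, not an Ignite API.

```python
ROUTES = {}

def route(method, path):
    """Register a handler for a (method, path) pair."""
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("GET", "/health")
def health(event):
    return {"status": 200, "body": "ok"}

@route("POST", "/orders")
def create_order(event):
    return {"status": 201, "body": {"order_id": event["body"]["sku"]}}

def dispatch(event):
    """Route an incoming request event to its handler, or 404."""
    fn = ROUTES.get((event["method"], event["path"]))
    if fn is None:
        return {"status": 404, "body": "not found"}
    return fn(event)
```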

Mini case
A growth team launches a referral program. Traffic spikes 12x in 48 hours, the API stays responsive, and the team doesn’t pause the launch to tune infrastructure.

Data Processing

Process streams, transform files, and analyze data in real-time as events occur.

Use Ignite to clean, enrich, and classify data at the moment it arrives. Keep pipelines modular and ship new transforms without re‑wiring your stack.
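A transform of this kind is typically a small pure function per event, which is what keeps pipelines modular. The field names and classification rule below are illustrative assumptions, not a real taxonomy.

```python
def enrich(event):
    """Clean and classify one raw analytics event (illustrative fields)."""
    name = event.get("name", "").strip().lower()
    category = "revenue" if name.startswith("purchase") else "engagement"
    return {**event, "name": name, "category": category}
```

Because each transform is a pure function, adding a new one to the pipeline does not require re-wiring the stages around it.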

Mini case
A product analytics team adds a new event taxonomy. They deploy a transform function in minutes and backfill without waiting for a platform sprint.

Automation

Schedule tasks, trigger workflows, and automate infrastructure responses to system events.

Great for operational workflows that should be reliable but not always running: scheduled reports, cleanup jobs, and remediation tasks.

Mini case
Support escalations create a ticket and trigger a diagnostic run. The workflow completes in the background and posts results back to the thread.

Webhooks

Receive and process webhooks from external services with built-in verification and retry logic.

Handle payment events, CRM updates, and partner integrations without maintaining dedicated servers. Keep retries safe and idempotent.
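The two properties named above, signature verification and idempotent retries, can be sketched with the standard library. This is a minimal illustration, not Ignite's built-in verification: the secret is a placeholder, and the in-memory `_seen` set stands in for durable storage.

```python
import hashlib
import hmac

SECRET = b"whsec_example"  # illustrative shared secret, not a real credential
_seen = set()              # processed event ids; use durable storage in production

def verify_signature(payload: bytes, signature: str) -> bool:
    """Check an HMAC-SHA256 signature in constant time."""
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def handle_webhook(payload: bytes, signature: str, event_id: str):
    """Reject forgeries; make retries of the same event a safe no-op."""
    if not verify_signature(payload, signature):
        return {"status": 401}
    if event_id in _seen:  # a retried delivery replays harmlessly
        return {"status": 200, "duplicate": True}
    _seen.add(event_id)
    return {"status": 200, "duplicate": False}
```

Keying idempotency on the sender's event id means month-end delivery bursts and provider retries both converge on a single ledger write.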

Mini case
A billing integration receives bursts of events at month‑end. Ignite absorbs the spike and keeps the ledger in sync without manual scaling.
DODIL · From data to intelligence.
© 2026 Circle Technologies Pte Ltd. All rights reserved. Built for the AI era.