A single-threaded agent that processes one task at a time is a bottleneck waiting to happen. The moment two requests arrive simultaneously, one of them waits. The moment a task takes longer than expected, everything behind it stalls. The scheduler decouples task submission from task execution. Clients get an immediate response. Workers pick up tasks when they are ready. The queue absorbs bursts without dropping work.

Why the Scheduler Matters

In production, agents need to handle concurrent requests, survive restarts mid-task, and scale across multiple workers without losing work. That requires a queue between the HTTP layer and the execution layer.
| Without a scheduler | With Bindu scheduler |
| --- | --- |
| Tasks execute synchronously in the request thread | Tasks are queued and executed asynchronously |
| Concurrent requests block each other | Workers process tasks in parallel |
| A restart loses in-flight tasks | Redis-backed queue survives restarts |
| Scaling requires custom coordination | Multiple workers share a single queue |
| Fine for local scripts | Required for production agents |
That is the shift: the scheduler makes task execution independent of task submission, so the agent can scale without changing any application code.
Bindu defaults to an in-memory scheduler for local development. Switch to Redis for production. The scheduler backend is configured in agent_config.json — your agent code does not change.

How the Bindu Scheduler Works

The scheduler sits between the TaskManager and the worker pool. When a task is submitted, the TaskManager enqueues the task ID. Workers dequeue task IDs and execute them. The storage layer holds the full task data.
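That flow can be sketched in-process with a plain asyncio queue standing in for the scheduler and a dict standing in for the storage layer. This is a simplification for illustration; the function names (`submit`, `worker`) are not Bindu's API:

```python
import asyncio
import uuid

async def submit(queue, storage, payload):
    """Enqueue a task and return its ID immediately, before execution."""
    task_id = str(uuid.uuid4())
    storage[task_id] = {"payload": payload, "state": "submitted"}
    await queue.put(task_id)        # only the ID goes on the queue
    return task_id

async def worker(queue, storage):
    """Dequeue task IDs and execute them, reading full data from storage."""
    while True:
        task_id = await queue.get()
        task = storage[task_id]
        task["result"] = task["payload"].upper()   # stand-in for real work
        task["state"] = "completed"
        queue.task_done()

async def main():
    queue = asyncio.Queue()         # the scheduler: holds only task IDs
    storage = {}                    # the storage layer: holds full task data
    workers = [asyncio.create_task(worker(queue, storage)) for _ in range(2)]
    ids = [await submit(queue, storage, text) for text in ("hello", "world")]
    await queue.join()              # wait until every queued task is processed
    for w in workers:
        w.cancel()
    return [storage[i]["result"] for i in ids]

results = asyncio.run(main())
print(results)  # ['HELLO', 'WORLD']
```

Note that `submit` returns before any work happens; the workers drain the queue on their own schedule, which is exactly the decoupling the scheduler provides.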

The Scheduling Model

Non-blocking

Task submission returns immediately. Execution happens asynchronously in a worker.

Concurrent

Multiple workers pull from the same queue and execute tasks in parallel.

Durable

Redis-backed queue survives agent restarts. In-flight tasks are not lost.

Backends

Bindu supports two scheduler backends:
| | Memory | Redis |
| --- | --- | --- |
| Setup | None | Requires Redis instance |
| Durability | Lost on restart | Survives restarts |
| Multi-worker | Single process only | Distributed workers |
| Use case | Local development | Production |
| Config | `"type": "memory"` | `"type": "redis"` |

Configuration

The scheduler is configured in agent_config.json. Switching backends requires no changes to your agent code.

Memory (Development)

{
  "scheduler": {
    "type": "memory"
  }
}
The in-memory scheduler uses an asyncio queue inside the agent process. It is fast and requires no external dependencies, but tasks are lost if the process stops.

Redis (Production)

{
  "scheduler": {
    "type": "redis",
    "url": "redis://localhost:6379/0"
  }
}
Or via environment variable:
SCHEDULER__TYPE=redis
SCHEDULER__URL=redis://localhost:6379/0
The SCHEDULER__URL environment variable takes precedence over agent_config.json. Use environment variables in production to keep connection strings out of config files.
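The precedence rule can be sketched as a small lookup: check the environment first, then fall back to the config file. This helper is illustrative, not Bindu's actual loader:

```python
import json
import os

def scheduler_url(config: dict):
    """Environment variable wins; agent_config.json is the fallback."""
    return os.environ.get("SCHEDULER__URL") or config.get("scheduler", {}).get("url")

config = json.loads('{"scheduler": {"type": "redis", "url": "redis://localhost:6379/0"}}')

print(scheduler_url(config))        # redis://localhost:6379/0 (from config)
os.environ["SCHEDULER__URL"] = "redis://prod-redis:6379/0"
print(scheduler_url(config))        # redis://prod-redis:6379/0 (env wins)
```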

The Task Execution Lifecycle

Here is how the scheduler fits into the full task lifecycle. The client never waits for execution: it submits, receives a task_id, and then polls or relies on push notifications to learn when the task is done.
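The submit-then-poll pattern can be simulated in-process. `ToyAgent` below is a hypothetical stand-in, not Bindu's client API; the point is that `submit` returns before the work runs:

```python
import asyncio
import itertools

class ToyAgent:
    """Minimal stand-in: submission returns at once; a background task runs later."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.tasks = {}

    def submit(self, prompt: str) -> int:
        task_id = next(self._ids)
        self.tasks[task_id] = {"state": "working"}
        asyncio.get_running_loop().create_task(self._run(task_id, prompt))
        return task_id                      # returned before execution starts

    async def _run(self, task_id, prompt):
        await asyncio.sleep(0.05)           # pretend this is a slow model call
        self.tasks[task_id] = {"state": "completed", "result": prompt[::-1]}

    def status(self, task_id) -> str:
        return self.tasks[task_id]["state"]

async def main():
    agent = ToyAgent()
    task_id = agent.submit("hello")
    first = agent.status(task_id)           # "working": submit did not block
    while agent.status(task_id) != "completed":
        await asyncio.sleep(0.01)           # the client polls
    return first, agent.tasks[task_id]["result"]

print(asyncio.run(main()))  # ('working', 'olleh')
```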

Worker Concurrency

By default, Bindu runs a single worker. You can increase concurrency by configuring the number of workers:
{
  "scheduler": {
    "type": "redis",
    "url": "redis://localhost:6379/0",
    "workers": 4
  }
}
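The effect of the worker count can be demonstrated with a toy pool that tracks how many tasks are in flight at once (a sketch, not Bindu's worker implementation):

```python
import asyncio

async def run_pool(n_workers: int, n_tasks: int) -> int:
    """Process n_tasks with n_workers; return the peak number of concurrent tasks."""
    queue = asyncio.Queue()
    running = 0
    peak = 0

    async def worker():
        nonlocal running, peak
        while True:
            await queue.get()
            running += 1
            peak = max(peak, running)
            await asyncio.sleep(0.02)   # simulated task work
            running -= 1
            queue.task_done()

    workers = [asyncio.create_task(worker()) for _ in range(n_workers)]
    for i in range(n_tasks):
        queue.put_nowait(i)
    await queue.join()
    for w in workers:
        w.cancel()
    return peak

print(asyncio.run(run_pool(1, 8)))  # 1 — a single worker runs tasks one at a time
print(asyncio.run(run_pool(4, 8)))  # 4 — four workers keep four tasks in flight
```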
With Redis, you can also run multiple agent instances pointing at the same queue. Each instance runs its own worker pool, and tasks are distributed across all of them automatically.
# Instance 1
SCHEDULER__URL=redis://redis:6379/0 uv run python main.py

# Instance 2 (same Redis, same queue)
SCHEDULER__URL=redis://redis:6379/0 uv run python main.py
Both instances compete for tasks from the same queue. Redis ensures each task is delivered to exactly one worker.
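With Redis, exactly-once delivery comes from the atomicity of the pop operation; an asyncio queue has the same property and can stand in for it here. Two simulated "instances" compete for ten tasks, and every task is handed to exactly one of them:

```python
import asyncio

async def worker(name, queue, deliveries):
    # queue.get() is atomic: each item is handed to exactly one waiting worker
    while True:
        task_id = await queue.get()
        deliveries.append((task_id, name))
        queue.task_done()

async def main():
    queue = asyncio.Queue()
    deliveries = []
    # two "instances" competing for the same queue
    workers = [asyncio.create_task(worker(n, queue, deliveries))
               for n in ("instance-1", "instance-2")]
    for task_id in range(10):
        queue.put_nowait(task_id)
    await queue.join()
    for w in workers:
        w.cancel()
    return deliveries

deliveries = asyncio.run(main())
# every task ID appears exactly once, whichever instance picked it up
assert sorted(t for t, _ in deliveries) == list(range(10))
```

Which instance gets which task is not guaranteed; only the exactly-once property is.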

Redis Setup

For production deployments, you need a running Redis instance.

Docker (Quick Start)

docker run -d \
  --name bindu-redis \
  -p 6379:6379 \
  redis:7-alpine

Environment Variables

SCHEDULER__TYPE=redis
SCHEDULER__URL=redis://localhost:6379/0

With Authentication

SCHEDULER__URL=redis://:your-password@localhost:6379/0

Combining Storage and Scheduler

In production, you typically run both PostgreSQL and Redis together:
{
  "storage": {
    "type": "postgres",
    "url": "postgresql://bindu:secret@localhost:5432/bindu"
  },
  "scheduler": {
    "type": "redis",
    "url": "redis://localhost:6379/0"
  }
}
Or with environment variables:
STORAGE__TYPE=postgres
STORAGE__URL=postgresql://bindu:secret@localhost:5432/bindu
SCHEDULER__TYPE=redis
SCHEDULER__URL=redis://localhost:6379/0
The queue holds task IDs. The database holds task data. Workers read from both. This separation means the queue stays lean and fast while the database handles the heavy state.

Real-World Use Cases

When many tasks arrive at once, the queue absorbs the burst. Workers process at their own pace without dropping requests or blocking the HTTP layer.
A task that takes minutes to complete does not block the agent from accepting new requests. The worker handles it in the background while the HTTP server stays responsive.
With Redis, tasks that were queued but not yet started survive an agent restart. When the agent comes back up, workers pick up where the queue left off.
Deploy multiple agent instances behind a load balancer, all pointing at the same Redis queue. Tasks are distributed across instances automatically. No coordination code required.

Security Best Practices

Use Environment Variables

Keep Redis connection strings out of agent_config.json. Use SCHEDULER__URL as an environment variable and exclude it from version control.

Enable Redis Auth

In production, configure Redis with a password and use TLS if the connection crosses a network boundary.

The scheduler is what separates receiving work from doing work — so your agent can always say yes, even when it’s busy.