Why the Scheduler Matters
In production, agents need to handle concurrent requests, survive restarts mid-task, and scale across multiple workers without losing work. That requires a queue between the HTTP layer and the execution layer.

| Without a scheduler | With Bindu scheduler |
|---|---|
| Tasks execute synchronously in the request thread | Tasks are queued and executed asynchronously |
| Concurrent requests block each other | Workers process tasks in parallel |
| A restart loses in-flight tasks | Redis-backed queue survives restarts |
| Scaling requires custom coordination | Multiple workers share a single queue |
| Fine for local scripts | Required for production agents |
Bindu defaults to an in-memory scheduler for local development. Switch to Redis for production. The scheduler backend is configured in agent_config.json — your agent code does not change.

How the Bindu Scheduler Works
The scheduler sits between the TaskManager and the worker pool. When a task is submitted, the TaskManager enqueues the task ID. Workers dequeue task IDs and execute them. The storage layer holds the full task data.

The Scheduling Model
Non-blocking
Task submission returns immediately. Execution happens asynchronously in a worker.
Concurrent
Multiple workers pull from the same queue and execute tasks in parallel.
Durable
Redis-backed queue survives agent restarts. In-flight tasks are not lost.
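The scheduling model above can be sketched in a few lines of asyncio. This is an illustrative toy, not Bindu's implementation: the `worker` function, `task_store` dict, and task IDs are all made up for the example. It shows non-blocking submission (enqueue the ID, return immediately) and two concurrent workers draining one queue.

```python
import asyncio

async def worker(queue: asyncio.Queue, task_store: dict) -> None:
    # Workers dequeue task IDs and look up the full task data in storage.
    while True:
        task_id = await queue.get()
        task_store[task_id]["status"] = "completed"
        queue.task_done()

async def main() -> dict:
    queue: asyncio.Queue = asyncio.Queue()
    task_store: dict = {}

    # Submission is non-blocking: record the task, enqueue its ID, move on.
    for task_id in ("t1", "t2", "t3"):
        task_store[task_id] = {"status": "submitted"}
        queue.put_nowait(task_id)

    # Two workers pull from the same queue and run in parallel.
    workers = [asyncio.create_task(worker(queue, task_store)) for _ in range(2)]
    await queue.join()  # wait until every enqueued task is done
    for w in workers:
        w.cancel()
    return task_store

results = asyncio.run(main())
```

Swapping the in-memory `asyncio.Queue` for a Redis list is what makes the same model durable and multi-process.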
Backends
Bindu supports two scheduler backends:

| | Memory | Redis |
|---|---|---|
| Setup | None | Requires Redis instance |
| Durability | Lost on restart | Survives restarts |
| Multi-worker | Single process only | Distributed workers |
| Use case | Local development | Production |
| Config | "type": "memory" | "type": "redis" |
Configuration
The scheduler is configured in agent_config.json. Switching backends requires no changes to your agent code.
Memory (Development)
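A minimal memory-backend entry might look like the following. Only the "type": "memory" value comes from the table above; the surrounding "scheduler" key is an assumption about the agent_config.json layout:

```json
{
  "scheduler": {
    "type": "memory"
  }
}
```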
Redis (Production)
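A Redis-backend entry might look like the following. The "type": "redis" value comes from the table above; the "url" key and connection string are assumptions, shown here only to illustrate the shape such a config could take:

```json
{
  "scheduler": {
    "type": "redis",
    "url": "redis://localhost:6379/0"
  }
}
```

In production, prefer the SCHEDULER__URL environment variable over hardcoding the URL here.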
The SCHEDULER__URL environment variable takes precedence over agent_config.json. Use environment variables in production to keep connection strings out of config files.

The Task Execution Lifecycle
Here is how the scheduler fits into the full task lifecycle: the client never waits for execution. It submits, gets a task_id, and polls or uses push notifications to know when the task is done.
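The client-side polling pattern can be sketched as follows. The `get_status` callable is a stand-in for whatever status lookup your deployment exposes (e.g. an HTTP GET against the agent); the stub below fakes a task that completes on the third poll so the example is self-contained.

```python
import time

def poll_until_done(task_id, get_status, interval=0.01, timeout=5.0):
    # Poll the task status until it reaches a terminal state or times out.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(task_id)
        if status in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")

# Stub standing in for a real status call; "completes" after three polls.
calls = {"n": 0}
def fake_status(task_id):
    calls["n"] += 1
    return "completed" if calls["n"] >= 3 else "working"

result = poll_until_done("task-123", fake_status)
```

Push notifications replace this loop entirely: the agent calls you back when the task reaches a terminal state.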
Worker Concurrency
By default, Bindu runs a single worker. You can increase concurrency by configuring the number of workers.

Redis Setup
For production deployments, you need a running Redis instance.

Docker (Quick Start)
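The quickest way to get a local Redis running is Docker (the container name here is illustrative):

```shell
# Start Redis 7 in the background, exposing the default port 6379.
docker run -d --name bindu-redis -p 6379:6379 redis:7
```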
Environment Variables
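SCHEDULER__URL is the variable named earlier in this page; the host, port, and /0 database suffix below are placeholder values:

```shell
# Point the scheduler at Redis; this overrides agent_config.json.
export SCHEDULER__URL=redis://localhost:6379/0
```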
With Authentication
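When Redis requires a password, it goes in the connection URL. The rediss:// scheme is the standard way to request TLS; whether your Redis endpoint accepts it depends on how the server is configured. Host, port, and password below are placeholders:

```shell
# Password plus TLS (rediss://) for connections that cross a network boundary.
export SCHEDULER__URL=rediss://:your-password@redis.example.com:6380/0
```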
Combining Storage and Scheduler
In production, you typically run both PostgreSQL and Redis together.

Real-World Use Cases
Handling request bursts
When many tasks arrive at once, the queue absorbs the burst. Workers process at
their own pace without dropping requests or blocking the HTTP layer.
Long-running tasks
A task that takes minutes to complete does not block the agent from accepting new
requests. The worker handles it in the background while the HTTP server stays
responsive.
Surviving restarts
With Redis, tasks that were queued but not yet started survive an agent restart.
When the agent comes back up, workers pick up where the queue left off.
Horizontal scaling
Deploy multiple agent instances behind a load balancer, all pointing at the same
Redis queue. Tasks are distributed across instances automatically. No coordination
code required.
Security Best Practices
Use Environment Variables
Keep Redis connection strings out of agent_config.json. Use SCHEDULER__URL as an environment variable and exclude it from version control.

Enable Redis Auth
In production, configure Redis with a password and use TLS if the connection
crosses a network boundary.