Redis Scheduler
Redis is a high-performance, in-memory data structure store that Bindu uses as a distributed task scheduler. It enables efficient task queuing, distribution, and coordination across multiple agent instances in production deployments.
Why Use Redis?
1. Distributed Task Scheduling
Redis enables true distributed task processing:
- Multiple workers can consume tasks from the same queue
- Load balancing automatically distributes work across agent instances
- No duplicate processing - each task is consumed by exactly one worker
- Horizontal scaling - add more workers to increase throughput
2. High Performance & Low Latency
Redis is optimized for speed:
- Sub-millisecond latency for most operations
- In-memory operations - no disk I/O bottlenecks
- Efficient data structures - lists, sets, sorted sets optimized for queuing
- Pipelining support - batch multiple operations for maximum throughput
3. Blocking Pop Operations
Redis’s `BLPOP` (blocking left pop) is ideal for task queues:
- No polling overhead - workers block until tasks arrive
- Instant task delivery - tasks are processed immediately
- Resource efficient - no CPU wasted on empty queue checks
- Fair distribution - tasks distributed round-robin to waiting workers
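The blocking-pop behavior above can be sketched with redis-py. This is an illustrative worker loop, not Bindu's actual implementation; the queue key `bindu:tasks` is a hypothetical name.

```python
import json

try:
    import redis  # requires the redis-py package
except ImportError:  # let the sketch load even without redis-py installed
    redis = None

QUEUE_KEY = "bindu:tasks"  # hypothetical queue name

def decode_task(raw: bytes) -> dict:
    """Tasks arrive as JSON-encoded bytes."""
    return json.loads(raw)

def worker_loop(client) -> None:
    """Block until a task arrives -- no polling, no wasted CPU."""
    while True:
        # timeout=0 means wait forever; returns (key, value) when a task lands
        _key, raw = client.blpop(QUEUE_KEY, timeout=0)
        task = decode_task(raw)
        print(f"processing task {task.get('id')}")

if __name__ == "__main__" and redis is not None:
    worker_loop(redis.Redis(host="localhost", port=6379, db=0))
```

Because `blpop` parks the connection server-side, idle workers consume no CPU, and Redis hands each arriving task to exactly one waiting worker.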
4. Multi-Process & Multi-Worker Support
Redis enables process-level parallelism:
- Multiple Python processes can share the same queue
- Cross-machine coordination - workers on different servers
- Kubernetes-friendly - perfect for pod-based deployments
- Zero shared memory - no GIL (Global Interpreter Lock) limitations
5. Reliability & Fault Tolerance
Redis provides production-grade reliability:
- Persistence options - RDB snapshots and AOF (Append-Only File) logging
- Replication - primary-replica setup for high availability
- Sentinel - automatic failover and monitoring
- Redis Cluster - sharding for massive scale
- Connection pooling - efficient connection reuse
6. Operational Simplicity
Redis is easy to operate:
- Simple deployment - single binary, minimal configuration
- Rich monitoring - built-in `INFO` command and Redis CLI tools
- Active community - extensive documentation and support
- Cloud-native - available as managed service (AWS ElastiCache, Redis Cloud, etc.)
When to Use Redis Scheduler
✅ Use Redis Scheduler when:
- Running multiple agent instances/workers
- Need to distribute tasks across pods/processes
- Require low-latency task processing
- Scaling horizontally in Kubernetes or cloud environments
- Building high-throughput agent systems
- Need reliable task queuing with automatic retries
❌ Avoid Redis Scheduler when:
- Single-instance deployments (use the in-memory scheduler)
- Tasks don’t need distribution (use local queue)
- Extremely complex workflow orchestration (consider Temporal, Airflow)
Architecture
Bindu’s Redis scheduler uses a producer-consumer pattern.
How It Works
- Task Submission: API instances push tasks to a Redis list using `LPUSH`
- Task Distribution: Worker instances block on `BRPOP` waiting for tasks
- Atomic Consumption: Redis ensures each task goes to exactly one worker
- Task Processing: Worker executes the task (run, pause, resume, cancel)
- Completion: Worker updates task state in storage (PostgreSQL)
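The submission and consumption steps above can be sketched as follows. Key and field names (`bindu:tasks`, `operation`, `payload`) are illustrative assumptions, not Bindu's actual wire format.

```python
import json
import uuid

try:
    import redis  # requires the redis-py package
except ImportError:
    redis = None

QUEUE_KEY = "bindu:tasks"  # hypothetical key name

def make_task(operation: str, payload: dict) -> dict:
    return {"id": str(uuid.uuid4()), "operation": operation, "payload": payload}

def submit_task(client, task: dict) -> None:
    # Step 1: the API instance pushes to the left end of the list
    client.lpush(QUEUE_KEY, json.dumps(task))

def consume_task(client) -> dict:
    # Steps 2-3: BRPOP blocks until a task is available, and delivery is
    # atomic -- exactly one worker receives each task.
    _key, raw = client.brpop(QUEUE_KEY)
    return json.loads(raw)
```

Pairing `LPUSH` with `BRPOP` (opposite ends of the list) yields FIFO ordering across all producers and consumers.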
Configuration
Environment Variables
Example Configuration
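Bindu's actual setting names are not reproduced here; the snippet below is a hedged sketch of reading scheduler configuration from environment variables, with illustrative variable names and defaults.

```python
import os

def load_redis_config() -> dict:
    """Read scheduler settings from the environment.

    The variable names below are illustrative, not Bindu's actual keys.
    """
    return {
        "url": os.environ.get("REDIS_URL", "redis://localhost:6379/0"),
        "queue_key": os.environ.get("REDIS_QUEUE_KEY", "bindu:tasks"),
        "max_connections": int(os.environ.get("REDIS_MAX_CONNECTIONS", "10")),
    }
```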
Task Operations
Redis scheduler supports all Bindu task operations:
1. Run Task
2. Pause Task
3. Resume Task
4. Cancel Task
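One way the four operations above might travel through the queue is as small JSON messages routed to handlers on the worker side. This is a sketch under assumed field names (`task_id`, `operation`), not Bindu's actual message schema.

```python
import json

VALID_OPERATIONS = {"run", "pause", "resume", "cancel"}

def encode_operation(task_id: str, operation: str) -> str:
    """Encode a task operation as a JSON queue message."""
    if operation not in VALID_OPERATIONS:
        raise ValueError(f"unknown operation: {operation}")
    return json.dumps({"task_id": task_id, "operation": operation})

def dispatch(raw: str, handlers: dict) -> None:
    """Worker side: route a decoded message to the matching handler."""
    message = json.loads(raw)
    handlers[message["operation"]](message["task_id"])
```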
Data Structures
Task Queue (List)
Redis uses a List for the task queue:
- FIFO (First-In-First-Out) ordering
- Atomic push/pop operations
- Blocking pop for efficient waiting
- Simple and fast
Task Encoding
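A minimal sketch of the JSON round-trip: Redis stores and returns bytes, so encoding and decoding happen at the queue boundary. Field names are assumptions, not Bindu's actual schema.

```python
import json

def encode_task(task: dict) -> bytes:
    # json.dumps produces a str; Redis stores and returns bytes
    return json.dumps(task, separators=(",", ":")).encode("utf-8")

def decode_task(raw: bytes) -> dict:
    return json.loads(raw.decode("utf-8"))
```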
Tasks are JSON-encoded before pushing to Redis.
Performance Characteristics
Throughput
Redis can handle:
- 100,000+ operations/second on a single instance
- Millions of tasks/day with proper configuration
- Sub-millisecond latency for most operations
Scalability
- Vertical: Single Redis instance can handle most workloads
- Horizontal: Redis Cluster for sharding across multiple nodes
- Workers: Add unlimited worker instances for parallel processing
Resource Usage
- Memory: ~1KB per task (depends on message size)
- CPU: Minimal - Redis is single-threaded but extremely efficient
- Network: Low bandwidth - only task metadata transferred
Reliability Features
1. Connection Pooling
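A connection pool can be sketched with redis-py's `ConnectionPool`; the settings below are illustrative values, not Bindu's actual configuration.

```python
try:
    import redis  # requires the redis-py package
except ImportError:
    redis = None

POOL_SETTINGS = {  # illustrative values, not Bindu's actual config
    "host": "localhost",
    "port": 6379,
    "db": 0,
    "max_connections": 20,
}

def make_client():
    """All clients created from one pool share and reuse connections."""
    pool = redis.ConnectionPool(**POOL_SETTINGS)
    return redis.Redis(connection_pool=pool)
```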
Reuses connections for efficiency.
2. Automatic Retries
Retries on transient failures.
3. Graceful Degradation
Handles Redis unavailability:
- Connection errors logged
- Tasks can be retried
- Workers reconnect automatically
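The retry-and-reconnect behavior described above can be sketched as a wrapper with exponential backoff. A real implementation would catch `redis.exceptions.ConnectionError`; the builtin `ConnectionError` is used here so the sketch stays stdlib-only, and the delay values are illustrative.

```python
import time

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 8.0) -> list:
    """Exponential backoff schedule: 0.5s, 1s, 2s, ... capped at `cap`."""
    return [min(cap, base * (2 ** i)) for i in range(attempts)]

def with_retries(operation, attempts: int = 4, base: float = 0.5):
    """Run `operation`, retrying transient connection failures with backoff."""
    delays = backoff_delays(attempts, base=base)
    for i, delay in enumerate(delays):
        try:
            return operation()
        except ConnectionError:
            if i == len(delays) - 1:
                raise  # exhausted all attempts
            time.sleep(delay)
```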
4. Persistence Options
RDB (Redis Database Backup) snapshots and AOF (Append-Only File) logging can be enabled in redis.conf for durability.
Monitoring & Observability
Redis Metrics
Monitor these key metrics: queue depth (`LLEN`), memory usage, connected clients, and operations per second.
Bindu Logging
Redis scheduler logs all operations.
Health Checks
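A health check can combine a `PING` round-trip with a queue-depth reading. This is a sketch; the key name and depth threshold are illustrative assumptions.

```python
try:
    import redis  # requires the redis-py package
except ImportError:
    redis = None

QUEUE_KEY = "bindu:tasks"   # hypothetical key name
MAX_HEALTHY_DEPTH = 1000    # illustrative backlog threshold

def evaluate(ping_ok: bool, queue_depth: int) -> dict:
    """Pure health verdict, easy to unit-test without a server."""
    healthy = ping_ok and queue_depth < MAX_HEALTHY_DEPTH
    return {"healthy": healthy, "queue_depth": queue_depth}

def check_health(client) -> dict:
    try:
        ping_ok = bool(client.ping())  # round-trip to Redis
    except Exception:
        ping_ok = False
    depth = client.llen(QUEUE_KEY) if ping_ok else -1
    return evaluate(ping_ok, depth)
```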
Alerting
Set up alerts for:
- Queue depth > threshold (backlog building up)
- Redis memory > 80% (risk of eviction)
- Connection errors (Redis unavailable)
- Worker lag (slow task processing)
Production Deployment
1. Redis Setup
Run Redis with Docker for local development or single-node deployments.
2. High Availability
Redis Sentinel provides automatic failover and monitoring for high availability.
3. Security
Enable Authentication: set `requirepass` in redis.conf (or use ACL users).
- Run Redis in a private network
- Use firewall rules to restrict access
- Enable Redis ACLs for fine-grained permissions
4. Scaling Workers
Run workers as a Kubernetes Deployment; increase `replicas` to add more consumers on the same queue.
Troubleshooting
Common Issues
Queue Building Up
Tasks are arriving faster than workers process them. Check worker count and per-task latency, and scale out workers if queue depth keeps growing.
Task Loss
`BRPOP` removes a task from the queue before processing, so a worker crash mid-task can lose it. Enable AOF persistence and consider a reliable-queue pattern (`RPOPLPUSH` into a per-worker processing list).
Comparison with Alternatives
| Feature | Redis | RabbitMQ | Kafka | SQS |
|---|---|---|---|---|
| Latency | ⚡ Sub-ms | ⚡ Low | ⚠️ Higher | ⚠️ Variable |
| Throughput | ✅ Very High | ✅ High | ✅ Extreme | ⚠️ Moderate |
| Persistence | ⚠️ Optional | ✅ Yes | ✅ Yes | ✅ Yes |
| Complexity | ✅ Simple | ⚠️ Moderate | ❌ Complex | ✅ Simple |
| Ordering | ✅ FIFO | ✅ FIFO | ✅ Partition-level | ⚠️ Best-effort |
| Best for | Fast queues | Complex routing | Event streaming | AWS-native |
Advanced Patterns
1. Priority Queues
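One common approach: keep one list per priority and rely on `BRPOP` checking its keys left to right, so higher-priority queues always drain first. Key names are hypothetical.

```python
import json

try:
    import redis  # requires the redis-py package
except ImportError:
    redis = None

# Hypothetical key names; BRPOP checks keys in the order given, so listing
# the high-priority queue first means it is always drained first.
PRIORITY_KEYS = ["bindu:tasks:high", "bindu:tasks:normal", "bindu:tasks:low"]

def queue_for(priority: str) -> str:
    key = f"bindu:tasks:{priority}"
    if key not in PRIORITY_KEYS:
        raise ValueError(f"unknown priority: {priority}")
    return key

def submit(client, task: dict, priority: str = "normal") -> None:
    client.lpush(queue_for(priority), json.dumps(task))

def consume(client) -> dict:
    _key, raw = client.brpop(PRIORITY_KEYS)  # highest priority wins
    return json.loads(raw)
```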
Use multiple queues, one per priority level.
2. Dead Letter Queue
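A dead letter queue can be sketched as: retry a failed task a few times, then park it on a separate list annotated with the error. Key names and the attempt limit are illustrative assumptions.

```python
import json
import time

try:
    import redis  # requires the redis-py package
except ImportError:
    redis = None

DLQ_KEY = "bindu:tasks:dead"  # hypothetical key name
MAX_ATTEMPTS = 3              # illustrative retry limit

def dead_letter_entry(task: dict, error: str) -> dict:
    """Annotate a failed task so it can be inspected and replayed later."""
    return {**task, "error": error, "failed_at": time.time()}

def handle_failure(client, task: dict, error: str) -> None:
    task["attempts"] = task.get("attempts", 0) + 1
    if task["attempts"] >= MAX_ATTEMPTS:
        client.lpush(DLQ_KEY, json.dumps(dead_letter_entry(task, error)))
    else:
        client.lpush("bindu:tasks", json.dumps(task))  # requeue for retry
```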
Handle failed tasks by retrying a few times, then parking them for inspection.
3. Rate Limiting
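A simple rate limiter uses a fixed-window counter: `INCR` a per-second key and reject once it passes the cap. The key prefix and limit below are illustrative.

```python
import time

try:
    import redis  # requires the redis-py package
except ImportError:
    redis = None

LIMIT_PER_SECOND = 100  # illustrative cap

def window_key(now: float) -> str:
    """One counter key per one-second window."""
    return f"bindu:rate:{int(now)}"

def try_acquire(client, now: float = None) -> bool:
    """Fixed-window limiter: INCR the window counter, reject past the cap."""
    now = time.time() if now is None else now
    key = window_key(now)
    count = client.incr(key)
    if count == 1:
        client.expire(key, 2)  # let stale windows expire on their own
    return count <= LIMIT_PER_SECOND
```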
Cap the rate at which tasks are accepted or processed.
4. Task Deduplication
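Deduplication can be sketched with `SET ... NX EX`: hash the task content, and only the first submitter to set the marker key gets to enqueue. Key names and the TTL are illustrative assumptions.

```python
import hashlib
import json

try:
    import redis  # requires the redis-py package
except ImportError:
    redis = None

DEDUP_TTL = 3600  # seconds; illustrative window

def task_fingerprint(task: dict) -> str:
    """Stable content hash -- key order must not change the fingerprint."""
    blob = json.dumps(task, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def submit_once(client, task: dict) -> bool:
    """SET NX marks the fingerprint; only the first submission wins."""
    marker = f"bindu:dedup:{task_fingerprint(task)}"
    if client.set(marker, "1", nx=True, ex=DEDUP_TTL):
        client.lpush("bindu:tasks", json.dumps(task))
        return True
    return False  # duplicate within the TTL window, dropped
```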
Prevent the same task from being enqueued twice.
Best Practices
- Enable Persistence: Use AOF for durability
- Monitor Queue Depth: Alert on backlog growth
- Set Memory Limits: Configure `maxmemory` and an eviction policy
- Use Connection Pooling: Reuse connections efficiently
- Implement Retries: Handle transient failures gracefully
- Scale Workers: Match worker count to task volume
- Secure Redis: Use passwords, TLS, and network isolation
- Regular Backups: Snapshot Redis data periodically
- Test Failover: Verify Sentinel/Cluster works as expected
- Profile Tasks: Optimize slow task handlers
Getting Started
1. Install Redis
2. Start Redis
3. Configure Bindu
4. Run Workers
5. Monitor Queue
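The steps above can be smoke-tested end to end with a short script: push one task, check the queue depth, and pop it back. The URL, key, and message fields are illustrative assumptions.

```python
import json

try:
    import redis  # requires the redis-py package
except ImportError:
    redis = None

def task_message(task_id: str) -> str:
    return json.dumps({"id": task_id, "operation": "run"})

def smoke_test(url: str = "redis://localhost:6379/0") -> None:
    """Push one task, report the queue depth, then pop the task back."""
    client = redis.Redis.from_url(url)
    client.lpush("bindu:tasks", task_message("smoke"))
    print("queue depth:", client.llen("bindu:tasks"))
    _key, raw = client.brpop("bindu:tasks", timeout=5)
    print("popped:", json.loads(raw))

if __name__ == "__main__" and redis is not None:
    smoke_test()
```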
Conclusion
Redis is the recommended task scheduler for distributed Bindu deployments. It provides:
- ✅ Performance: Sub-millisecond latency and high throughput
- ✅ Scalability: Horizontal scaling with unlimited workers
- ✅ Reliability: Persistence, replication, and automatic failover
- ✅ Simplicity: Easy to deploy, operate, and monitor
- ✅ Efficiency: Blocking operations eliminate polling overhead
Next Steps: