Why Architecture Matters
In Key Concepts, you saw how task states like `submitted`, `input-required`, and `completed` make interactive workflows possible. The architecture is what makes those states real in a running system.
| Flat application model | Bindu layered architecture |
|---|---|
| Request handling, execution, and storage blur together | Each layer has a clear job in the lifecycle |
| Scaling usually means rewriting core pieces | Storage, queueing, and workers can evolve independently |
| Observability is bolted on late | Traces, LLM observability, and metrics are part of the runtime |
| Protocol support becomes tightly coupled to business logic | Protocol, security, orchestration, and execution stay separated |
| Hard to reason about what happens after a message arrives | The request flow is explicit from client to artifact |
When a message creates a task, that task moves through several layers, not just one server endpoint. The architecture matters because each layer is responsible for part of that lifecycle.
How Bindu Architecture Works
Bindu is organized into protocol, security, application, orchestration, storage, agent, and observability layers. Each one participates in turning a message into a task and a task into a result.
The System Layout
The layered structure is what lets Bindu stay simple on the surface while still handling protocol, identity, execution, and scaling concerns underneath.
Layered
Protocol, security, orchestration, storage, and observability each live in their own part of the system.
Task-Centered
TaskManager sits in the middle because task state is the thing the rest of the system coordinates around.
Scalable
Storage backends, queues, workers, and agent frameworks can change without changing the whole model.
The Lifecycle: Accept, Execute, Return
Under the hood, every request moves through three practical stages.
Accept
A client sends a `message/send` request. The protocol and security layers handle the request first, then the application layer validates it and passes it to TaskManager. TaskManager creates the task, stores it with state `submitted`, puts the task ID on the queue, and returns the task immediately.
Execute
A worker dequeues the task, fetches the task details, and moves the task into `working`. The worker then calls your agent. That agent may use frameworks such as Agno, LangChain, CrewAI, or LlamaIndex, plus skills and tool integrations.
Return
Once the agent returns a result, TaskManager saves the artifact, updates the task state, and makes the finished task available through retrieval APIs and notifications.
The request flow summary stays the same:
- Phase 1: Submit (0-50ms) - Client sends `message/send` -> Auth validates -> TaskManager creates task -> Returns `task_id` immediately
- Phase 2: Execute (async) - Worker dequeues -> Runs your agent -> Updates state (`working` -> `input-required` or `completed`)
- Phase 3: Retrieve (anytime) - Client polls with `tasks/get` -> Gets current state + artifacts
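The three phases can be sketched as a toy, in-memory simulation. Everything here (the function names, the task dict shape) is illustrative rather than Bindu's actual API; it only shows how `submitted`, `working`, and `completed` hand off between accept, execute, and return.

```python
# Toy simulation of the submit -> execute -> retrieve lifecycle.
# Not Bindu's real API; names and shapes are illustrative only.
from collections import deque
from uuid import uuid4

tasks: dict = {}       # stands in for the storage layer
queue: deque = deque() # stands in for the task queue

def message_send(message: str) -> str:
    """Phase 1: accept the message, create the task, return immediately."""
    task_id = str(uuid4())
    tasks[task_id] = {"state": "submitted", "message": message, "artifact": None}
    queue.append(task_id)
    return task_id  # the client gets the ID before any work happens

def run_worker(agent) -> None:
    """Phase 2: dequeue, mark working, call the agent, save the artifact."""
    task_id = queue.popleft()
    task = tasks[task_id]
    task["state"] = "working"
    task["artifact"] = agent(task["message"])
    task["state"] = "completed"

def tasks_get(task_id: str) -> dict:
    """Phase 3: the client polls for current state and artifacts."""
    return tasks[task_id]

tid = message_send("create sunset caption")
print(tasks_get(tid)["state"])            # submitted: work has not started yet
run_worker(lambda msg: f"Caption: {msg}")
print(tasks_get(tid)["state"])            # completed
```

The key property the sketch shows is that phase 1 returns before phase 2 runs: the client holds a task ID while the work is still queued.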
Core Components
The architecture is easier to reason about when the layers are spelled out directly.
Protocol Layer
- A2A Protocol - Agent-to-agent communication (task lifecycle, context management)
- AP2 Protocol - Commerce extensions (payment mandates, cart management)
- X402 Protocol - Micropayments (cryptographic signatures, multi-currency)
Security And Identity Layer
- Authentication - Auth0, OAuth2, API Keys, Mutual TLS
- DID (Decentralized Identity) - Unique, verifiable agent identity
- PKI - RSA/ECDSA key generation, signature verification
Application Layer
- BinduApplication - Starlette-based web server with async/await, WebSocket support
- Request Router - Routes to `/agent/card`, `/agent/skills`, `/tasks/*`, `/contexts/*`
- Schema Validator - Validates request structure and types
Orchestration Layer
- TaskManager - Central coordinator that creates tasks, manages state, coordinates workers
- Task Queue - Memory (dev) or Redis (prod) for distributed task scheduling
- Worker Pool - Executes tasks asynchronously, handles retries and timeouts
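Retry handling in the worker pool can be sketched roughly as a bounded loop around the agent call. `execute_with_retries` and the flaky agent below are hypothetical names for illustration; Bindu's actual retry and timeout logic may differ.

```python
# Illustrative retry loop for a worker (not Bindu's actual Worker Pool API).
def execute_with_retries(task: str, agent, max_retries: int = 3) -> str:
    last_error = None
    for attempt in range(max_retries):
        try:
            return agent(task)        # success: return the agent's result
        except Exception as exc:      # failure: remember it and try again
            last_error = exc
    # retries exhausted: surface the failure so the task can be marked failed
    raise RuntimeError(f"task failed after {max_retries} attempts") from last_error

# A stand-in agent that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_agent(task: str) -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient failure")
    return f"done: {task}"

result = execute_with_retries("summarize", flaky_agent)
```

Because the retry loop lives in the worker, transient agent failures never corrupt task state: the task either produces an artifact or is surfaced as failed.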
Storage Layer
- Memory Storage (dev) - In-memory dictionaries for tasks, contexts, artifacts
- PostgreSQL (prod) - ACID compliance, relational queries, JSON support
- Redis Cache - Session storage, rate limiting, pub/sub notifications
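The reason storage backends can change without touching the task model is that orchestration code talks to a narrow interface. A minimal sketch, with illustrative names rather than Bindu's real classes:

```python
# Sketch of backend-independent storage. Names are illustrative, not Bindu's
# actual classes; the point is that callers depend only on the interface.
from typing import Protocol

class TaskStorage(Protocol):
    def save(self, task_id: str, task: dict) -> None: ...
    def load(self, task_id: str) -> dict: ...

class MemoryStorage:
    """Dev backend: plain dictionaries, nothing survives a restart."""
    def __init__(self) -> None:
        self._tasks: dict = {}
    def save(self, task_id: str, task: dict) -> None:
        self._tasks[task_id] = task
    def load(self, task_id: str) -> dict:
        return self._tasks[task_id]

def mark_completed(storage: TaskStorage, task_id: str) -> None:
    """Orchestration code sees only the interface, never the backend."""
    task = storage.load(task_id)
    task["state"] = "completed"
    storage.save(task_id, task)

store = MemoryStorage()
store.save("t1", {"state": "working"})
mark_completed(store, "t1")
```

Swapping in a PostgreSQL-backed class with the same two methods would leave `mark_completed` untouched, which is the "dev to prod without rewriting" property the table at the top describes.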
Agent Layer
- Framework Agnostic - Works with Agno, LangChain, CrewAI, LlamaIndex
- Skills Registry - Defines agent capabilities via the `/agent/skills` endpoint
- Tool Integrations - 113+ built-in toolkits for data, code, web, APIs
Observability Layer
- Distributed Tracing - Jaeger/OTLP tracks requests across all components
- LLM Observability - Phoenix/Langfuse monitors token usage, latency, cost
- Metrics - Request rate, task duration, error rate, queue depth, worker utilization
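Conceptually, the tracing side amounts to recording how long each layer holds a request. A toy stand-in (Bindu itself uses Jaeger/OTLP and Phoenix/Langfuse, not this collector):

```python
# Toy span recorder to illustrate per-layer timing. Real deployments would
# emit these spans to Jaeger/OTLP instead of a local dict.
import time
from contextlib import contextmanager

metrics: dict = {}

@contextmanager
def span(name: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        metrics[name] = time.perf_counter() - start  # record layer duration

# Nested spans mirror the request path: the outer request span encloses
# the time spent in each layer it passes through.
with span("request"):
    with span("auth"):
        time.sleep(0.001)
    with span("agent"):
        time.sleep(0.001)
```

Because spans nest, the outer `request` duration always bounds the inner layers, which is what lets you see which layer is slow instead of guessing.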
The system works because these layers stay distinct. Protocol is not storage. Storage is not orchestration. Orchestration is not the agent itself.
The Value Of Layered Architecture
The architecture is designed around a few practical goals.
- Simplicity - Wrap any agent with minimal code
- Scalability - From localhost to distributed cloud
- Reliability - Built-in error handling and recovery
- Observability - Complete visibility into operations
- Security - Authentication and identity built-in
- Standards - Protocol-first design (`A2A`, `AP2`, `X402`)
Real-World Use Cases
Following one request through the system
When you send “create sunset caption”, the request hits the Protocol Layer, is authenticated by the Security Layer, validated in the Application Layer, turned into a task by TaskManager, executed by a worker, and returned as a completed task with an artifact.
Interactive conversations with paused work
If the agent asks “which platform?”, the task does not disappear. TaskManager updates it to `input-required`, stores that state, and lets the same task continue when the user answers.
Scaling from local development to production
In development, the same architecture can run with in-memory storage and queues. In production, those pieces can move to PostgreSQL and Redis without changing the task model.
Observing the whole path
Tracing and metrics sit alongside execution so you can see requests move through the server, manager, worker, and agent instead of guessing which layer is slow or failing.
Related
- /bindu/introduction/key-concepts
- /bindu/concepts/task-first-pattern
- /bindu/concepts/protocol