This page is a conceptual tour. By the end, you will understand why Bindu uses a sidecar, how the two halves talk to each other, and what the tradeoffs are. You will not write any code here — that comes in the Quickstart.

The Multi-Language Problem

You want to build agent intelligence. What you keep getting dragged into is infrastructure. You ship a solid TypeScript handler. Then someone asks for Kotlin. Then Python. Suddenly the real work is no longer prompts, tools, or reasoning. It is rebuilding auth, DID identity, A2A protocol handling, x402 payments, scheduling, storage, and service plumbing for every language. That is the infrastructure trap. The part that should be reusable becomes the part you rewrite the most.

The Goal: Language-Agnostic Agents

The goal is simple: make the developer write only the brain and let one reusable runtime provide the body. That means a language-agnostic architecture where the public API stays the same, the infrastructure stays centralized, and teams can move between TypeScript, Kotlin, and Python without losing features or behavior. The contract should feel boring in the best way:
bindufy(config, handler)  # the handler runs in your process
Same function name. Same config shape. Same outcome. Different language, same microservice.
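To make the contract concrete, here is a minimal sketch of what that one call looks like in TypeScript. The import and config fields are assumptions for illustration; the Quickstart shows the real package name and options.

```typescript
// Hypothetical import — the actual SDK package name may differ:
// import { bindufy } from "bindu-sdk";

type Message = { role: "user" | "assistant"; content: string };

// The only part you write: the brain.
async function handler(messages: Message[]): Promise<string> {
  const last = messages[messages.length - 1];
  return `Echo: ${last.content}`;
}

// One call turns the handler into a full microservice:
// bindufy({ name: "echo-agent" }, handler);
```

Everything else in this document describes what happens behind that single commented-out line.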

The gRPC Sidecar Architecture

Now that you know the problem, here is how Bindu solves it. Bindu uses the sidecar model. Think of it as two halves of one system: The SDK is the driver. It owns your logic, your framework choices, and your handler. The Python Core is the engine. It owns the infrastructure: config validation, DID generation, auth, x402, scheduling, storage, manifest creation, and the A2A-facing HTTP server. The bridge between them is gRPC: a high-performance, open-source RPC framework built on HTTP/2 and Protocol Buffers. Bindu uses it here instead of standard REST for three reasons:
  • Strict typing keeps the SDKs and core aligned on one contract.
  • Low latency keeps the local transport overhead tiny compared with the LLM call.
  • Bidirectional calls let the SDK register with the core, then let the core call back into the SDK when work arrives.
The gRPC layer stays out of your way. You do not write proto files, manually boot a second service, or think about serialization. You call bindufy(), write the handler, and the SDK wires up the rest.

The Big Picture

Here is how the two halves sit side by side. One process owns the logic. The other owns the infrastructure.
Their TypeScript code                    Bindu Core (Python, auto-started)
+---------------------+                  +----------------------------+
|                     |                  |                            |
|  OpenAI SDK         |  1. Register     |  Config validation         |
|  LangChain          | ------gRPC-----> |  DID key generation        |
|  Any framework      |                  |  Auth (Hydra OAuth2)       |
|                     |                  |  x402 payment setup        |
|  handler(messages)  |  2. Execute      |  Manifest creation         |
|  <------gRPC--------|<---------------- |  Scheduler + Storage       |
|                     |                  |  HTTP/A2A server (:3773)   |
+---------------------+                  +----------------------------+
        SDK process                                  Core process
     (developer's language)                    (Python, invisible)
Two processes. One terminal. You see your app. The SDK quietly manages the Python child process.

Why Two Processes?

This is a fair question, so let's address it directly: the alternatives are worse.

Option A: Rewrite the core in every language. DID, auth, x402, scheduler, storage, A2A in TypeScript, then Kotlin, then Python, then whatever comes next. Every bug gets fixed multiple times.

Option B: Keep one core, put a clean wire protocol in front of it, and let thin SDKs translate between the developer and that core.

Bindu chooses Option B. The sidecar is the boundary, and gRPC is the wire.

What Actually Happens

Now that you understand the shape of the architecture, here is what happens at runtime in three quick beats:

1. The SDK starts the Bindu Core as a child process. The Python core handles DID, auth, x402, scheduling, storage, and the HTTP server. You do not manually run a second service. The SDK detects how to launch the core and spawns it.

2. The SDK registers the agent over gRPC. It sends config, skills, and a callback address to the core. The core runs the full bindufy pipeline and starts the A2A HTTP server.

3. When messages arrive, the core calls the SDK's handler over gRPC. A client sends an A2A message to :3773. The core receives it, builds task context, and calls your handler back over gRPC.
Client --HTTP--> Bindu Core --gRPC--> TypeScript Handler --> OpenAI
        :3773    (Python)    :3774      (your code)

        DID, Auth, x402                 Just the handler.
        Scheduler, Storage              That's all you write.
        A2A protocol
That is the full lifecycle: start, register, handle. If you want to see the detailed step-by-step of what happens during startup and message processing, the Agent Implementation page walks through every phase.
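For a feel of the first hop in that diagram, here is a hedged sketch of a client posting an A2A message to :3773. The JSON-RPC method name and message shape follow the public A2A protocol but are assumptions here; verify the exact schema against the API Reference.

```typescript
// Build an A2A-style JSON-RPC request body. Field names follow the
// public A2A spec, but treat them as assumptions for this sketch.
function buildA2ARequest(text: string) {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "message/send",
    params: {
      message: { role: "user", parts: [{ kind: "text", text }] },
    },
  };
}

// POST it to the core's HTTP/A2A server on :3773 (requires Node 18+
// for the global fetch).
async function sendMessage(text: string) {
  const res = await fetch("http://localhost:3773/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildA2ARequest(text)),
  });
  return res.json();
}
```

The client never learns which language the handler behind the port is written in; it only ever sees the core's HTTP surface.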

Two Services, Two Directions

The sidecar works because calls move in both directions. The SDK talks to the core during startup. The core talks back to the SDK during execution. Let’s look at each side. BinduService (lives in the Python core on :3774) — the SDK calls this to register and manage its agent:
Method            What it does
RegisterAgent     "Here is my config, skills, and callback address. Turn me into a microservice."
Heartbeat         "I am still alive." (every 30 seconds)
UnregisterAgent   "I am shutting down. Clean up."
AgentHandler (lives in the SDK on a dynamic port) — the core calls this when work arrives:
Method            What it does
HandleMessages    "A user sent this message. Run your handler and give me the response."
GetCapabilities   "What can you do?"
HealthCheck       "Are you still there?"
This is exactly why Bindu does not use plain REST for this boundary. Both sides need typed contracts and both sides need to initiate calls cleanly. The API Reference documents every message and field if you want the full details.
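To illustrate the SDK-to-core direction, here is a sketch of the 30-second Heartbeat loop an SDK might run. The client interface is an assumption for this sketch; a real SDK would generate its stub from the shared gRPC contract.

```typescript
// Hypothetical client stub shape — in a real SDK this is generated
// from the shared gRPC contract rather than written by hand.
interface BinduServiceClient {
  heartbeat(agentId: string): Promise<void>;
}

function startHeartbeat(
  client: BinduServiceClient,
  agentId: string,
  intervalMs = 30_000, // matches the documented 30-second cadence
): NodeJS.Timeout {
  return setInterval(() => {
    client.heartbeat(agentId).catch((err) => {
      // A failed heartbeat is logged, not fatal; the core decides
      // when to consider the agent gone.
      console.warn("heartbeat failed:", err);
    });
  }, intervalMs);
}
```

Because the loop lives in the SDK, the developer never writes it; it starts automatically after RegisterAgent succeeds.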

Python vs gRPC Agents

From the outside, both models look the same. Inside, the transport path differs. This table shows you exactly where they diverge and where they are identical.
                   Python Agent                gRPC Agent
Developer calls    bindufy(config, handler)    bindufy(config, handler) (identical)
Handler runs in    Same process as core        Separate process
Core started by    bindufy() directly          SDK spawns as child process
Communication      In-process function call    gRPC over localhost
Latency overhead   0 ms                        1–5 ms
Language           Python only                 Any language with gRPC
DID, auth, x402    Full support                Full support (identical)
Skills             Loaded from filesystem      Sent as data during registration
Streaming          Supported                   Supported
From the outside, there is no visible difference. The agent card looks the same. The DID is generated the same way. The A2A responses have the same structure. The artifacts carry the same DID signatures. A client cannot tell whether the agent behind :3773 is Python, TypeScript, or Kotlin.

Real Examples

If you want to see these ideas in working code, here are three examples that each use the same bindufy() pattern in different languages and frameworks:

TypeScript + OpenAI

GPT-4o agent with one bindufy() call

TypeScript + LangChain

LangChain.js research assistant

Kotlin + OpenAI

Kotlin agent with the same pattern

Known Limitations

The sidecar model already gives you full parity for core infrastructure. The current gaps are about transport security and edge-case resilience. None of these will affect you during local development, but they are worth knowing about before you deploy to production.
  • gRPC connections use grpc.insecure_channel, so traffic between the core and SDK is unencrypted. This is acceptable for now because the core and SDK run on the same machine (localhost), and the SDK spawns the core as a child process, so there is no network exposure. TLS/mTLS support is planned for remote deployments.
  • If the SDK process crashes mid-execution, the GrpcAgentClient does not retry. The task fails, and the agent must be re-registered. ManifestWorker catches the gRPC UNAVAILABLE error and marks the task as failed; on restart, the SDK calls RegisterAgent again.
  • Each GrpcAgentClient creates a single gRPC channel lazily on first use, so under high concurrency all calls share one channel. This is fine for most agents because gRPC multiplexes well via HTTP/2; at extremely high concurrency, connection pooling would reduce contention.
  • The /metrics endpoint reports HTTP request metrics but not gRPC call metrics, so you cannot see HandleMessages latency, error rates, or call counts in the dashboard. Workaround: check the core log output, which includes timing information for each handler call.
  • If you run two instances of the same TypeScript agent, each one registers separately with a different callback address. There is no built-in routing to spread load across instances. Workaround: use a reverse proxy such as Envoy in front of the SDK instances, and register the proxy address as the callback.
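Until auto-reconnection lands, an SDK-side wrapper can approximate it. This sketch retries a registration call with exponential backoff; the register function is a hypothetical stand-in for the SDK's RegisterAgent call, not a Bindu API.

```typescript
// Retry a registration call with exponential backoff. The `register`
// argument is hypothetical — a stand-in for the SDK's RegisterAgent
// call, which may fail with UNAVAILABLE while the core restarts.
async function registerWithBackoff(
  register: () => Promise<void>,
  maxAttempts = 5,
  baseDelayMs = 500,
): Promise<void> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await register();
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // out of attempts
      const delay = baseDelayMs * 2 ** attempt;   // 500, 1000, 2000, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Wrapping startup this way keeps the agent's own code untouched: the handler stays pure, and the resilience logic lives at the registration boundary.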

Feature Comparison

This is the complete picture of what works today and what is still on the roadmap:
Feature                              Python Agents       gRPC Agents
Unary responses                      works               works
Streaming responses                  works               works
DID identity                         works               works
x402 payments                        works               works
State transitions (input-required)   works               works
Skills                               works               works
Health checks                        works               works
Multi-language                       Python only         any language
Latency overhead                     0 ms                1–5 ms
TLS                                  N/A (in-process)    not implemented
Auto-reconnection                    N/A (in-process)    not implemented
The driver/engine split holds up well in practice. gRPC agents have full parity with Python agents for identity, auth, payments, skills, streaming, and the A2A protocol. The remaining gaps are confined to transport security hardening and distributed resilience.

Next Steps

You now have the conceptual map. From here, pick the path that matches where you are:

Quickstart

Build your first gRPC agent in 10 minutes

Agent Implementation

Handler patterns, state transitions, and how the bridge works under the hood

Custom SDKs

Build SDKs for other languages

API Reference

Services, messages, ports, and env vars
Escape the infrastructure trap by keeping your logic entirely decoupled from identity, protocols, and routing.