Overview
This is the fastest path to the aha moment. You will build a simple TypeScript agent, call bindufy(), and watch the Bindu Core sidecar turn it into a real microservice.
You write the driver. The sidecar brings the engine: DID identity, A2A protocol compliance, x402 payment support, scheduling, storage, and an HTTP server.
Time to complete: ~10 minutes
DID Identity: The core generates and manages agent identity for you.
A2A Protocol: Your handler is exposed as a production-ready A2A service.
Payments: x402 support can be added without changing the transport model.
Scheduling + Storage: The sidecar handles task orchestration and persistence.
Prerequisites
Before you start, make sure the local machine can run both halves of the sidecar model.
Node.js: Version 18 or higher
Python: Version 3.12+ with Bindu installed
OpenAI API Key: Get one at platform.openai.com/api-keys
Terminal: Basic command line knowledge
Install Bindu Python Core
The TypeScript SDK needs the Bindu Core installed on the machine:
pip install bindu
# or with uv:
uv pip install bindu
The SDK launches the Python core automatically as a child process. You never start the sidecar manually during normal SDK use.
Step 1: Create Your Project
Start with a clean project directory:
mkdir my-first-agent
cd my-first-agent
npm init -y
Step 2: Install Dependencies
Next, install the SDK and your app dependencies.
npm install @bindu/sdk openai dotenv
npm install -D tsx typescript
What each package does:
@bindu/sdk - Bindu TypeScript SDK (gRPC, registration, sidecar lifecycle)
openai - OpenAI Node.js SDK
dotenv - Loads environment variables from .env
tsx - TypeScript executor (dev dependency)
typescript - TypeScript compiler (dev dependency)
Step 3: Create Your Environment File
Create a .env file in your project root:
Add your OpenAI API key:
OPENAI_API_KEY=sk-your-openai-api-key-here
OPENAI_MODEL=gpt-4o
Never commit your .env file to git. Add it to .gitignore: echo ".env" >> .gitignore
Step 4: Create a Skill Definition
Now give the sidecar something structured to advertise.
mkdir -p skills/question-answering
Create skills/question-answering/skill.yaml:
name: question-answering
description: General question answering using GPT-4o
tags:
  - qa
  - assistant
  - general-knowledge
input_modes:
  - text/plain
output_modes:
  - text/plain
version: 1.0.0
author: dev@example.com
You can also use Markdown format (SKILL.md) instead of YAML. Both formats are supported.
Step 5: Write Your Agent Code
Here is the minimal pattern: your code defines the driver, and bindufy() attaches the engine. Create index.ts:
import { bindufy, ChatMessage } from "@bindu/sdk";
import OpenAI from "openai";
import * as dotenv from "dotenv";

// Load environment variables
dotenv.config();

// Initialize OpenAI client
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Bindufy your agent - one call, full microservice
bindufy(
  {
    // Required: Agent identity
    author: "dev@example.com",
    name: "my-first-agent",
    description: "A helpful assistant powered by GPT-4o",
    version: "1.0.0",

    // Required: Deployment configuration
    deployment: {
      url: "http://localhost:3773",
      expose: true,
      cors_origins: ["http://localhost:5173"],
    },

    // Optional: Skills
    skills: ["skills/question-answering"],
  },
  async (messages: ChatMessage[]) => {
    const response = await openai.chat.completions.create({
      model: process.env.OPENAI_MODEL || "gpt-4o",
      messages: messages.map((m) => ({
        role: m.role as "user" | "assistant" | "system",
        content: m.content,
      })),
    });
    return response.choices[0].message.content || "";
  }
);
Step 6: Run Your Agent
Start the app:
npx tsx index.ts
You should see output like this:
[Bindu SDK] Starting Bindu core...
[Bindu SDK] Bindu core is ready on :3774
[Bindu SDK] AgentHandler gRPC server started on :50052
[Bindu SDK] Registering agent with Bindu core...
[Bindu SDK]
[Bindu SDK] Agent registered successfully!
[Bindu SDK] Agent ID: 91547067-c183-e0fd-c150-27a3ca4135ed
[Bindu SDK] DID: did:bindu:dev_at_example_com:my-first-agent:91547067...
[Bindu SDK] A2A URL: http://localhost:3773
[Bindu SDK]
[Bindu SDK] Waiting for messages...
Your agent is now running as a full microservice.
Step 7: Verify It Works
With the sidecar running, verify the public surface.
Send a message
curl -s -X POST http://localhost:3773 \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "message/send",
"params": {
"message": {
"role": "user",
"parts": [{"kind": "text", "text": "What is the capital of France?"}],
"messageId": "msg-001",
"contextId": "ctx-001",
"taskId": "task-001",
"kind": "message"
},
"configuration": {
"acceptedOutputModes": ["text/plain"],
"blocking": true
}
},
"id": "test-1"
}' | python3 -m json.tool
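The same message/send call can also be issued from TypeScript with the built-in fetch (Node 18+). This sketch just rebuilds the request body from the curl example above; the network call is left commented out because it needs the agent from Step 6 running:

```typescript
// The A2A JSON-RPC request body from the curl example, as a typed object.
const request = {
  jsonrpc: "2.0",
  method: "message/send",
  params: {
    message: {
      role: "user",
      parts: [{ kind: "text", text: "What is the capital of France?" }],
      messageId: "msg-001",
      contextId: "ctx-001",
      taskId: "task-001",
      kind: "message",
    },
    configuration: {
      acceptedOutputModes: ["text/plain"],
      blocking: true,
    },
  },
  id: "test-1",
};

// With the agent running on :3773:
// const res = await fetch("http://localhost:3773", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(request),
// });
// console.log(await res.json());
```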
Check the agent card
curl -s http://localhost:3773/.well-known/agent.json | python3 -m json.tool
This returns your agent's A2A card with DID, skills, and capabilities.
Check health
curl -s http://localhost:3773/health
What Just Happened?
Now that the agent is working, here is the sequence the SDK handled for you.
Launched the Python core
Spawned bindu serve --grpc as a child process
Started a gRPC server
For your handler on a dynamic port (e.g., :50052)
Read your skill files
Loaded skills/question-answering/skill.yaml
Registered your agent
Called RegisterAgent on the core with your config
Core ran full bindufy logic
Generated deterministic agent ID from SHA256(author:name)
Created Ed25519 DID keys
Set up authentication (Hydra OAuth2)
Created manifest with GrpcAgentClient as handler
Started HTTP/A2A server on :3773
Returned registration result
Agent ID, DID, and A2A URL
Started heartbeat loop
Pings core every 30 seconds to signal liveness
In other words, your code became the driver, and the sidecar brought the engine online around it.
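The deterministic-ID step above can be sketched in a few lines. This is illustrative only: the core hashes SHA256(author:name), but the digest-to-ID formatting here is an assumption, so the output need not match the ID shown in Step 6.

```typescript
import { createHash } from "node:crypto";

// Illustrative sketch: hash "author:name" and format part of the digest
// as a UUID-like ID. The real core's formatting may differ.
function deriveAgentId(author: string, name: string): string {
  const hex = createHash("sha256").update(`${author}:${name}`).digest("hex");
  return [
    hex.slice(0, 8),
    hex.slice(8, 12),
    hex.slice(12, 16),
    hex.slice(16, 20),
    hex.slice(20, 32),
  ].join("-");
}

// The same author + name always yields the same ID, so re-registering
// an unchanged agent is idempotent.
console.log(deriveAgentId("dev@example.com", "my-first-agent"));
```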
Understanding the Message Flow
Once startup is done, every request follows the same path:
1. Client -> HTTP POST to :3773 (A2A protocol)
2. Bindu Core receives request
3. TaskManager creates task, Scheduler queues it
4. ManifestWorker picks up task, builds message history
5. Worker calls manifest.run(messages)
-> This is GrpcAgentClient - makes gRPC call to your TypeScript process
6. TypeScript SDK receives HandleMessages on :50052
7. SDK calls your handler(messages) - the async function you wrote
8. Your handler calls OpenAI GPT-4o API
9. OpenAI returns response
10. SDK sends response back over gRPC
11. Worker processes result, creates DID-signed artifact
12. Client receives A2A JSON-RPC response
The gRPC overhead is ~1-5ms. The rest is the LLM call time.
Project Structure
At this point, your project should look like this:
my-first-agent/
|-- index.ts
|-- package.json
|-- .env
|-- .gitignore
|-- skills/
| `-- question-answering/
| `-- skill.yaml
`-- node_modules/
Next Steps
The quickstart proves the sidecar works. The next step is learning how to drive it well.
Agent Implementation: Learn how handlers, state transitions, skills, and debugging work
Custom SDKs: Build SDKs for other languages
API Reference: Review services, messages, ports, and env vars
Overview: Revisit the sidecar architecture and limitations
Multi-Turn Conversations
One of the first useful patterns is keeping the task open when the driver needs more context.
bindufy(config, async (messages: ChatMessage[]) => {
  if (messages.length === 1) {
    return {
      state: "input-required",
      prompt: "Could you be more specific about what you're looking for?",
    };
  }

  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: messages.map((m) => ({
      role: m.role as "user" | "assistant" | "system",
      content: m.content,
    })),
  });
  return response.choices[0].message.content || "";
});
The task stays open after input-required. When the user sends a follow-up, your handler is called again with the full conversation history.
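To make that contract concrete, here is the shape of the history across two turns. The exact ChatMessage fields come from @bindu/sdk; whether the input-required prompt is replayed as an assistant turn is an assumption for illustration, not documented behavior.

```typescript
// Hypothetical two-turn history as the handler might see it.
type Msg = { role: "user" | "assistant" | "system"; content: string };

// Turn 1: a single message, so the handler returns input-required.
const turn1: Msg[] = [{ role: "user", content: "Help me plan a trip" }];

// Turn 2: the follow-up arrives with the full history, so the handler
// can answer normally.
const turn2: Msg[] = [
  { role: "user", content: "Help me plan a trip" },
  {
    role: "assistant",
    content: "Could you be more specific about what you're looking for?",
  },
  { role: "user", content: "A weekend in Paris on a budget" },
];
```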
Adding Payments
The same sidecar can also enforce payment before your handler runs:
bindufy(
  {
    author: "dev@example.com",
    name: "premium-agent",
    deployment: { url: "http://localhost:3773", expose: true },
    execution_cost: {
      amount: "1000000",
      token: "USDC",
      network: "base-sepolia",
      pay_to_address: "0xYourWalletAddress",
    },
  },
  async (messages) => {
    return "Premium response!";
  }
);
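The amount is denominated in the token's smallest unit; USDC uses 6 decimal places, so "1000000" above is 1 USDC. A tiny helper (hypothetical, not part of the SDK) makes that conversion explicit:

```typescript
// USDC has 6 decimals, so on-chain amounts are micro-USDC strings.
// Hypothetical helper; @bindu/sdk does not ship this.
function usdcToBaseUnits(usd: number): string {
  return Math.round(usd * 1_000_000).toString();
}

usdcToBaseUnits(1);    // "1000000" — the amount in the config above
usdcToBaseUnits(0.25); // "250000"
```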
Using LangChain
If the driver changes frameworks, the sidecar does not care.
npm install @langchain/openai
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o" });

bindufy(config, async (messages: ChatMessage[]) => {
  const response = await llm.invoke(
    messages.map((m) => ({
      role: m.role,
      content: m.content,
    }))
  );
  return typeof response.content === "string"
    ? response.content
    : JSON.stringify(response.content);
});
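The ternary at the end exists because LangChain's response.content can be either a plain string or an array of content parts, while an A2A handler must return a string. Pulled out as a standalone helper for clarity:

```typescript
// LangChain message content is a string for plain text responses, or an
// array of content parts for richer outputs; either way the handler
// must hand a string back to the sidecar.
type Content = string | Array<{ type: string; text?: string }>;

function normalizeContent(content: Content): string {
  return typeof content === "string" ? content : JSON.stringify(content);
}

normalizeContent("Paris");
normalizeContent([{ type: "text", text: "Paris" }]);
```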
Production Deployment
When you move beyond local testing, you still keep the same driver/sidecar split.
OPENAI_API_KEY=sk-prod-...
STORAGE_TYPE=postgres
DATABASE_URL=postgresql://user:pass@host/db
SCHEDULER_TYPE=redis
REDIS_URL=redis://host:6379
DEPLOYMENT__URL=https://your-agent.example.com
DEPLOYMENT__EXPOSE=true
FROM node:18-alpine
RUN apk add --no-cache python3 py3-pip
RUN pip3 install bindu
WORKDIR /app
COPY package*.json ./
# Install dev dependencies too: tsx (a devDependency) runs the agent
RUN npm ci
COPY . .
EXPOSE 3773 3774
CMD ["npx", "tsx", "index.ts"]
docker build -t my-agent .
docker run -p 3773:3773 -p 3774:3774 --env-file .env my-agent
Troubleshooting
"Bindu not found"
pip install bindu
bindu --version
"Port 3773 already in use"
lsof -ti:3773 -ti:3774 | xargs kill 2> /dev/null
"OPENAI_API_KEY not set"
cat .env
# Should show: OPENAI_API_KEY=sk-...
Agent starts but no response
Common issues:
Invalid API key
Model not available on your OpenAI plan
Rate limiting
Network connectivity
"Registration failed"
Common causes:
Missing author or name in config
Invalid deployment.url format
Port conflicts
Ports Reference
Port     Protocol  Purpose
:3773    HTTP      A2A protocol server (clients connect here)
:3774    gRPC      Bindu core registration (SDK connects here)
:50052   gRPC      AgentHandler (core calls your handler here)
The handler port (:50052) is auto-assigned by default. You can override it with callbackPort in your config.
Complete Example
import { bindufy, ChatMessage } from "@bindu/sdk";
import OpenAI from "openai";
import * as dotenv from "dotenv";

dotenv.config();

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

bindufy(
  {
    author: "dev@example.com",
    name: "my-first-agent",
    description: "A helpful assistant powered by GPT-4o",
    version: "1.0.0",
    deployment: {
      url: "http://localhost:3773",
      expose: true,
      cors_origins: ["http://localhost:5173"],
    },
    skills: ["skills/question-answering"],
  },
  async (messages: ChatMessage[]) => {
    const response = await openai.chat.completions.create({
      model: process.env.OPENAI_MODEL || "gpt-4o",
      messages: messages.map((m) => ({
        role: m.role as "user" | "assistant" | "system",
        content: m.content,
      })),
    });
    return response.choices[0].message.content || "";
  }
);
Get your first language-agnostic agent running in minutes without writing a single line of infrastructure code.