Overview
This guide walks you through creating your first TypeScript agent with Bindu. You’ll build a simple OpenAI-powered assistant and transform it into a full A2A-compliant microservice with one bindufy() call.
What you’ll get:
DID identity
A2A protocol compliance
x402 payment support
Task scheduling and storage
Authentication ready
Production-ready HTTP server
Time to complete: ~10 minutes
Prerequisites
Before you begin, make sure you have:
Node.js: version 18 or higher
Python: version 3.12+ with Bindu installed
OpenAI API key: get one at platform.openai.com/api-keys
Terminal: basic command-line knowledge
Install Bindu Python Core
The TypeScript SDK requires the Bindu Python core to be installed:
pip install bindu
# or with uv:
uv pip install bindu
The SDK automatically launches the Python core as a background process — you never start it manually.
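Conceptually, launching the core is an ordinary child-process spawn. The sketch below shows the idea using Node's child_process module; the command (bindu serve --grpc) comes from this guide, but the spawn options and logging are illustrative assumptions, not the SDK's actual implementation.

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// The command the SDK runs to launch the Python core: `bindu serve --grpc`.
// Factored into a function so it can be inspected and tested.
function coreCommand(): { cmd: string; args: string[] } {
  return { cmd: "bindu", args: ["serve", "--grpc"] };
}

// Illustrative sketch: launch the core as a background child process and
// inherit its stdout/stderr. The real SDK also adds readiness checks,
// heartbeats, and cleanup on exit.
function launchCore(): ChildProcess {
  const { cmd, args } = coreCommand();
  const child = spawn(cmd, args, { stdio: "inherit" });
  child.on("exit", (code) => {
    console.error(`[bindu-core] exited with code ${code}`);
  });
  return child;
}
```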
Step 1: Create Your Project
Create a new directory for your agent:
mkdir my-first-agent
cd my-first-agent
Initialize a new Node.js project:
npm init -y
Step 2: Install Dependencies
Install the Bindu SDK and OpenAI SDK:
npm install @bindu/sdk openai dotenv
npm install -D tsx typescript
What each package does:
@bindu/sdk — Bindu TypeScript SDK (handles gRPC, registration, etc.)
openai — OpenAI Node.js SDK
dotenv — Loads environment variables from .env
tsx — TypeScript executor (dev dependency)
typescript — TypeScript compiler (dev dependency)
Step 3: Create Your Environment File
Create a .env file in your project root:
Add your OpenAI API key:
OPENAI_API_KEY=sk-your-openai-api-key-here
OPENAI_MODEL=gpt-4o
Never commit your .env file to git. Add it to .gitignore:
echo ".env" >> .gitignore
Step 4: Create a Skill Definition
Skills define what your agent can do. Create a skills directory:
mkdir -p skills/question-answering
Create skills/question-answering/skill.yaml:
name: question-answering
description: General question answering using GPT-4o
tags:
  - qa
  - assistant
  - general-knowledge
input_modes:
  - text/plain
output_modes:
  - text/plain
version: 1.0.0
author: dev@example.com
You can also use Markdown format (SKILL.md) instead of YAML. Both formats are supported.
Step 5: Write Your Agent Code
Create index.ts in your project root:
import { bindufy, ChatMessage } from "@bindu/sdk";
import OpenAI from "openai";
import * as dotenv from "dotenv";

// Load environment variables
dotenv.config();

// Initialize OpenAI client
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Bindufy your agent: one call, full microservice
bindufy(
  {
    // Required: agent identity
    author: "dev@example.com",
    name: "my-first-agent",
    description: "A helpful assistant powered by GPT-4o",
    version: "1.0.0",

    // Required: deployment configuration
    deployment: {
      url: "http://localhost:3773",
      expose: true,
      cors_origins: ["http://localhost:5173"],
    },

    // Optional: skills
    skills: ["skills/question-answering"],
  },
  async (messages: ChatMessage[]) => {
    // This handler is called every time a message arrives via A2A.
    // messages = [{ role: "user", content: "..." }, { role: "assistant", content: "..." }, ...]
    const response = await openai.chat.completions.create({
      model: process.env.OPENAI_MODEL || "gpt-4o",
      messages: messages.map((m) => ({
        role: m.role as "user" | "assistant" | "system",
        content: m.content,
      })),
    });
    return response.choices[0].message.content || "";
  }
);
Step 6: Run Your Agent
Start your agent:
npx tsx index.ts
You should see output like this:
[Bindu SDK] Starting Bindu core...
[Bindu SDK] Bindu core is ready on :3774
[Bindu SDK] AgentHandler gRPC server started on :50052
[Bindu SDK] Registering agent with Bindu core...
[Bindu SDK]
[Bindu SDK] Agent registered successfully!
[Bindu SDK] Agent ID: 91547067-c183-e0fd-c150-27a3ca4135ed
[Bindu SDK] DID: did:bindu:dev_at_example_com:my-first-agent:91547067...
[Bindu SDK] A2A URL: http://localhost:3773
[Bindu SDK]
[Bindu SDK] Waiting for messages...
Your agent is now running as a full microservice!
Step 7: Test Your Agent
Open a new terminal and test your agent with curl:
Send a message
curl -s -X POST http://localhost:3773 \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "message/send",
"params": {
"message": {
"role": "user",
"parts": [{"kind": "text", "text": "What is the capital of France?"}],
"messageId": "msg-001",
"contextId": "ctx-001",
"taskId": "task-001",
"kind": "message"
},
"configuration": {
"acceptedOutputModes": ["text/plain"],
"blocking": true
}
},
"id": "test-1"
}' | python3 -m json.tool
Check the agent card
curl -s http://localhost:3773/.well-known/agent.json | python3 -m json.tool
This returns your agent’s A2A card with DID, skills, and capabilities.
Check health
curl -s http://localhost:3773/health
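If you prefer scripting the test over curl, the same JSON-RPC request can be built and sent from TypeScript. This sketch mirrors the curl payload above; it assumes the agent from Step 6 is running on localhost:3773 and uses the global fetch available in Node 18+.

```typescript
// Build the A2A JSON-RPC body shown in the curl example above.
function buildSendMessage(text: string, id: string) {
  return {
    jsonrpc: "2.0",
    method: "message/send",
    params: {
      message: {
        role: "user",
        parts: [{ kind: "text", text }],
        messageId: `msg-${id}`,
        contextId: `ctx-${id}`,
        taskId: `task-${id}`,
        kind: "message",
      },
      configuration: { acceptedOutputModes: ["text/plain"], blocking: true },
    },
    id: `test-${id}`,
  };
}

// POST the request to the local agent and return the parsed JSON response.
async function ask(text: string): Promise<unknown> {
  const res = await fetch("http://localhost:3773", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildSendMessage(text, "001")),
  });
  return res.json();
}
```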
What Just Happened?
When you called bindufy(), the SDK:
1. Launched the Python core: spawned bindu serve --grpc as a child process.
2. Started a gRPC server for your handler on a dynamic port (e.g., :50052).
3. Read your skill files: loaded skills/question-answering/skill.yaml.
4. Registered your agent: called RegisterAgent on the core with your config.
5. The core ran the full bindufy logic:
   - Generated a deterministic agent ID from SHA256(author:name)
   - Created Ed25519 DID keys
   - Set up authentication (Hydra OAuth2)
   - Created a manifest with GrpcAgentClient as the handler
   - Started the HTTP/A2A server on :3773
6. Returned the registration result: agent ID, DID, and A2A URL.
7. Started a heartbeat loop: pings the core every 30 seconds to signal liveness.
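The deterministic agent ID can be sketched with Node's crypto module. Hashing author:name with SHA-256 is stated in this guide; formatting the digest's first 16 bytes as a UUID-style string is an assumption here, so the exact output of the core may differ.

```typescript
import { createHash } from "node:crypto";

// Hash "author:name" with SHA-256 and format the first 16 bytes as a
// UUID-style identifier. Illustrative: the core's exact formatting may vary.
function agentId(author: string, name: string): string {
  const hex = createHash("sha256").update(`${author}:${name}`).digest("hex");
  return [
    hex.slice(0, 8),
    hex.slice(8, 12),
    hex.slice(12, 16),
    hex.slice(16, 20),
    hex.slice(20, 32),
  ].join("-");
}
```

Because the ID depends only on author and name, re-registering the same agent always yields the same identity instead of creating duplicates.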
Understanding the Message Flow
When a client sends a message to your agent:
1. Client → HTTP POST to :3773 (A2A protocol)
2. Bindu Core receives request
3. TaskManager creates task, Scheduler queues it
4. ManifestWorker picks up task, builds message history
5. Worker calls manifest.run(messages)
└── This is GrpcAgentClient — makes gRPC call to your TypeScript process
6. TypeScript SDK receives HandleMessages on :50052
7. SDK calls your handler(messages) — the async function you wrote
8. Your handler calls OpenAI GPT-4o API
9. OpenAI returns response
10. SDK sends response back over gRPC
11. Worker processes result, creates DID-signed artifact
12. Client receives A2A JSON-RPC response
The gRPC overhead is ~1-5ms. The rest is the LLM call time.
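To verify that almost all latency comes from the LLM call rather than the gRPC hop, you can wrap your handler with a timer before passing it to bindufy(). This is a generic sketch; the Msg and Handler types below are local stand-ins for the SDK's ChatMessage-based handler signature.

```typescript
// Local stand-ins for the handler signature used throughout this guide.
type Msg = { role: string; content: string };
type Handler = (messages: Msg[]) => Promise<string | object>;

// Wrap a handler so every invocation logs its wall-clock duration.
function timed(handler: Handler): Handler {
  return async (messages) => {
    const start = performance.now();
    try {
      return await handler(messages);
    } finally {
      const ms = (performance.now() - start).toFixed(1);
      console.log(`[timing] handler took ${ms}ms for ${messages.length} message(s)`);
    }
  };
}
```

Usage: bindufy(config, timed(async (messages) => { ... })). The logged time will be dominated by the OpenAI request.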
Project Structure
Your project should now look like this:
my-first-agent/
├── index.ts # Your agent code
├── package.json # Dependencies
├── .env # Environment variables (git-ignored)
├── .gitignore # Git ignore file
├── skills/
│ └── question-answering/
│ └── skill.yaml # Skill definition
└── node_modules/ # Installed packages
Next Steps
Add Multi-Turn Conversations: learn how to handle follow-up questions
Add Payments: enable x402 payment requirements
Use LangChain: switch to the LangChain.js framework
Deploy to Production: deploy your agent to the cloud
Multi-Turn Conversations
Sometimes your agent needs more information before answering. Return a state transition:
bindufy(config, async (messages: ChatMessage[]) => {
  if (messages.length === 1) {
    // First message: ask for clarification
    return {
      state: "input-required",
      prompt: "Could you be more specific about what you're looking for?",
    };
  }

  // Follow-up message: now we have enough context
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: messages.map((m) => ({
      role: m.role as "user" | "assistant" | "system",
      content: m.content,
    })),
  });
  return response.choices[0].message.content || "";
});
The task stays open after input-required. When the user sends a follow-up, your handler is called again with the full conversation history.
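On the client side, a follow-up is just another message/send that reuses the contextId and taskId from the first request, so the core can append it to the open task's history. The sketch below mirrors the curl payload from Step 7; the helper name and ID formatting are illustrative.

```typescript
// Build a follow-up message that continues an existing task and context.
function buildFollowUp(text: string, contextId: string, taskId: string, n: number) {
  return {
    jsonrpc: "2.0",
    method: "message/send",
    params: {
      message: {
        role: "user",
        parts: [{ kind: "text", text }],
        messageId: `msg-${String(n).padStart(3, "0")}`,
        contextId, // same context as the first message
        taskId,    // same still-open task
        kind: "message",
      },
      configuration: { acceptedOutputModes: ["text/plain"], blocking: true },
    },
    id: `test-${n}`,
  };
}
```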
Adding Payments
Require payment before your agent executes:
bindufy(
  {
    author: "dev@example.com",
    name: "premium-agent",
    deployment: { url: "http://localhost:3773", expose: true },

    // x402 payment configuration
    execution_cost: {
      amount: "1000000", // 1 USDC (6 decimals)
      token: "USDC",
      network: "base-sepolia",
      pay_to_address: "0xYourWalletAddress",
    },
  },
  async (messages) => {
    // This handler only runs AFTER payment is verified
    return "Premium response!";
  }
);
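Since amount is expressed in base units (USDC has 6 decimals, so "1000000" means 1 USDC), a small helper avoids off-by-a-factor-of-a-million mistakes. This helper is not part of the SDK; it is a sketch using BigInt string math.

```typescript
// Convert a human-readable USDC amount ("1", "0.5") into base units.
// USDC has 6 decimals, so "1" -> "1000000" and "0.5" -> "500000".
function toBaseUnits(amount: string, decimals = 6): string {
  const [whole, frac = ""] = amount.split(".");
  if (frac.length > decimals) throw new Error("too many decimal places");
  const padded = frac.padEnd(decimals, "0");
  return BigInt(whole + padded).toString();
}
```

For example, execution_cost.amount could be written as toBaseUnits("1") instead of the raw string "1000000".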
Using LangChain
Switch to LangChain.js instead of the OpenAI SDK:
npm install @langchain/openai
Update your handler:
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({ model: "gpt-4o" });

bindufy(config, async (messages: ChatMessage[]) => {
  const response = await llm.invoke(
    messages.map((m) => ({
      role: m.role,
      content: m.content,
    }))
  );
  return typeof response.content === "string"
    ? response.content
    : JSON.stringify(response.content);
});
Production Deployment
Environment Variables
For production, set these environment variables:
# Required
OPENAI_API_KEY=sk-prod-...
# Storage (use PostgreSQL in production)
STORAGE_TYPE=postgres
DATABASE_URL=postgresql://user:pass@host/db
# Scheduler (use Redis in production)
SCHEDULER_TYPE=redis
REDIS_URL=redis://host:6379
# Deployment
DEPLOYMENT__URL=https://your-agent.example.com
DEPLOYMENT__EXPOSE=true
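The double-underscore names suggest nested config keys (DEPLOYMENT__URL maps to deployment.url). If you need to reproduce that mapping in your own tooling, a minimal parser looks like this; note that the exact convention the Bindu core uses is an assumption here.

```typescript
// Turn FOO__BAR=x style env vars into a nested, lower-cased object:
// { DEPLOYMENT__URL: "https://..." } -> { deployment: { url: "https://..." } }
function nestEnv(env: Record<string, string>): Record<string, any> {
  const out: Record<string, any> = {};
  for (const [key, value] of Object.entries(env)) {
    const parts = key.toLowerCase().split("__");
    let node = out;
    for (const part of parts.slice(0, -1)) {
      node = node[part] ??= {}; // create intermediate objects as needed
    }
    node[parts[parts.length - 1]] = value;
  }
  return out;
}
```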
Docker Deployment
Create a Dockerfile:
FROM node:18-alpine
# Install Python and Bindu
RUN apk add --no-cache python3 py3-pip
# Newer Alpine images mark the system Python as externally managed
RUN pip3 install --break-system-packages bindu
WORKDIR /app
COPY package*.json ./
# tsx is a dev dependency needed at runtime, so install devDependencies too
RUN npm ci
COPY . .
EXPOSE 3773 3774
CMD [ "npx" , "tsx" , "index.ts" ]
Build and run:
docker build -t my-agent .
docker run -p 3773:3773 -p 3774:3774 --env-file .env my-agent
Troubleshooting
“Bindu not found”
Install the Python package:
pip install bindu
Verify it’s installed:
pip show bindu
“Port 3773 already in use”
Kill existing processes:
lsof -ti:3773 -ti:3774 | xargs kill 2>/dev/null
“OPENAI_API_KEY not set”
Make sure your .env file exists and has a valid key:
cat .env
# Should show: OPENAI_API_KEY=sk-...
Agent starts but no response
Check the terminal for error logs. Common issues:
Invalid API key
Model not available on your OpenAI plan
Rate limiting
Network connectivity
“Registration failed”
The core rejected your config. Check the [bindu-core] log lines for details. Common causes:
Missing author or name in config
Invalid deployment.url format
Port conflicts
Ports Reference
Port     Protocol   Purpose
:3773    HTTP       A2A protocol server (clients connect here)
:3774    gRPC       Bindu core registration (SDK connects here)
:50052   gRPC       AgentHandler (core calls your handler here)
The handler port (:50052) is auto-assigned by default. You can override it with callbackPort in your config.
Complete Example
Here’s the complete working example from this guide:
import { bindufy, ChatMessage } from "@bindu/sdk";
import OpenAI from "openai";
import * as dotenv from "dotenv";

dotenv.config();

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

bindufy(
  {
    author: "dev@example.com",
    name: "my-first-agent",
    description: "A helpful assistant powered by GPT-4o",
    version: "1.0.0",
    deployment: {
      url: "http://localhost:3773",
      expose: true,
      cors_origins: ["http://localhost:5173"],
    },
    skills: ["skills/question-answering"],
  },
  async (messages: ChatMessage[]) => {
    const response = await openai.chat.completions.create({
      model: process.env.OPENAI_MODEL || "gpt-4o",
      messages: messages.map((m) => ({
        role: m.role as "user" | "assistant" | "system",
        content: m.content,
      })),
    });
    return response.choices[0].message.content || "";
  }
);
What You’ve Learned
How to install and set up the Bindu TypeScript SDK
How to create a skill definition
How to write a handler function
How to call bindufy() to create a microservice
How to test your agent with curl
How the message flow works internally
How to add multi-turn conversations
How to add payment requirements
Additional Resources
TypeScript SDK Reference: complete API documentation
Architecture Deep Dive: how gRPC agents work internally
Example: LangChain Agent, a full LangChain.js example
Building SDKs: create SDKs for other languages