This is the fastest path to the aha moment. You will build a simple TypeScript agent, call bindufy(), and watch the Bindu Core sidecar turn it into a real microservice. You write the driver; the sidecar brings the engine: DID identity, A2A protocol compliance, x402 payment support, scheduling, storage, and an HTTP server.

Time to complete: ~10 minutes

DID Identity

The core generates and manages agent identity for you.

A2A Protocol

Your handler is exposed as a production-ready A2A service.

Payments

x402 support can be added without changing the transport model.

Scheduling + Storage

The sidecar handles task orchestration and persistence.

Prerequisites

Before you start, make sure your local machine can run both halves of the sidecar model: a JavaScript runtime for your code and a Python runtime for the core.

Node.js

Version 18 or higher

Python

Version 3.12+ with Bindu installed

OpenAI API Key

Get one at platform.openai.com/api-keys

Terminal

Basic command line knowledge

Install Bindu Python Core

The TypeScript SDK needs the Bindu Core installed on the machine. This is the engine half of the sidecar — your SDK will launch it automatically, but it needs to be available first.
pip install bindu
# or with uv:
uv pip install bindu
The SDK launches the Python core automatically as a child process. You never start the sidecar manually during normal SDK use.
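The SDK performs this launch for you, but if you want a clearer failure message when the core is missing, you can check up front that the bindu CLI is on PATH. This helper is an illustrative sketch, not part of the SDK:

```typescript
import { spawnSync } from "node:child_process";

// Illustrative preflight check (not part of the Bindu SDK): returns true if
// the given CLI responds to --version, i.e. the Python core is installed.
function coreInstalled(cmd: string = "bindu"): boolean {
  const result = spawnSync(cmd, ["--version"], { encoding: "utf8" });
  return result.status === 0;
}

if (!coreInstalled()) {
  console.error("Bindu core not found on PATH. Run `pip install bindu` first.");
}
```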

Step 1: Create Your Project

Start with a clean project directory. Nothing special here — just a standard Node.js project.
mkdir my-first-agent
cd my-first-agent
npm init -y

Step 2: Install Dependencies

Now install the packages your agent needs. The Bindu SDK handles the sidecar lifecycle, and you can use any LLM library you like alongside it.
npm install @bindu/sdk openai dotenv
npm install -D tsx typescript @types/node
  • @bindu/sdk — Bindu TypeScript SDK (gRPC, registration, sidecar lifecycle)
  • openai — OpenAI Node.js SDK
  • dotenv — Loads environment variables from .env
  • tsx — TypeScript executor (dev dependency)
  • typescript — TypeScript compiler (dev dependency)
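Optionally, add a dev script so npm run dev starts the agent you will create in Step 5 (a convenience only; this guide invokes npx tsx directly):

```json
{
  "scripts": {
    "dev": "tsx index.ts"
  }
}
```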

Step 3: Create Your Environment File

Your agent needs an API key to talk to OpenAI. Store it in a .env file so it stays out of your code.
touch .env
Then add your key and preferred model:
OPENAI_API_KEY=sk-your-openai-api-key-here
OPENAI_MODEL=gpt-4o
Never commit your .env file to git.
echo ".env" >> .gitignore

Step 4: Create a Skill Definition

Skills tell the A2A protocol what your agent can do. Think of them as the agent’s resume — other agents and clients read this to decide whether to talk to yours.
mkdir -p skills/question-answering
Create skills/question-answering/skill.yaml:
name: question-answering
description: General question answering using GPT-4o
tags:
  - qa
  - assistant
  - general-knowledge
input_modes:
  - text/plain
output_modes:
  - text/plain
version: 1.0.0
author: dev@example.com
You can also use Markdown format (SKILL.md) instead of YAML. Both formats are supported.

Step 5: Write Your Agent Code

This is the core of what you are building. Your code defines the driver — the handler function that receives messages and returns responses. bindufy() attaches the engine around it. Create index.ts:
import { bindufy, ChatMessage } from "@bindu/sdk";
import OpenAI from "openai";
import * as dotenv from "dotenv";

dotenv.config();

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

bindufy(
  {
    author: "dev@example.com",
    name: "my-first-agent",
    description: "A helpful assistant powered by GPT-4o",
    version: "1.0.0",
    deployment: {
      url: "http://localhost:3773",
      expose: true,
      cors_origins: ["http://localhost:5173"],
    },
    skills: ["skills/question-answering"],
  },
  async (messages: ChatMessage[]) => {
    const response = await openai.chat.completions.create({
      model: process.env.OPENAI_MODEL || "gpt-4o",
      messages: messages.map((m) => ({
        role: m.role as "user" | "assistant" | "system",
        content: m.content,
      })),
    });

    return response.choices[0].message.content || "";
  }
);
Notice what you did not write: no server setup, no DID key generation, no authentication middleware, no protocol handling. That is the sidecar doing its job.
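The role/content mapping inside the handler is a pure transformation, so it can be factored out and unit-tested on its own. A sketch, with a local mirror of the ChatMessage shape (the names here are illustrative, not SDK exports):

```typescript
// Local mirror of the message shapes involved (illustrative types only).
type ChatMessage = { role: string; content: string };
type OpenAIChatMessage = {
  role: "user" | "assistant" | "system";
  content: string;
};

// Same transformation as the inline .map() in the handler above:
// narrow the role string and carry the content through unchanged.
function toOpenAIMessages(messages: ChatMessage[]): OpenAIChatMessage[] {
  return messages.map((m) => ({
    role: m.role as OpenAIChatMessage["role"],
    content: m.content,
  }));
}
```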

Step 6: Run Your Agent

Everything is in place. Start your agent with a single command:
npx tsx index.ts
You should see output like this:
[Bindu SDK] Starting Bindu core...
[Bindu SDK] Bindu core is ready on :3774
[Bindu SDK] AgentHandler gRPC server started on :XXXXX
[Bindu SDK] Registering agent with Bindu core...
[Bindu SDK]
[Bindu SDK] Agent registered successfully!
[Bindu SDK]   Agent ID:  91547067-c183-e0fd-c150-27a3ca4135ed
[Bindu SDK]   DID:       did:bindu:dev_at_example_com:my-first-agent:91547067...
[Bindu SDK]   A2A URL:   http://localhost:3773
[Bindu SDK]
[Bindu SDK] Waiting for messages...
Your agent is now running as a full microservice.
That is a lot of infrastructure that just appeared from one function call. Your agent has a DID identity, an A2A-compliant HTTP server, authentication, and is ready to receive messages.

Step 7: Verify It Works

Let’s make sure everything is working by sending your first message. Open a new terminal and try these commands.

Send a message

curl -s -X POST http://localhost:3773 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "message/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"kind": "text", "text": "What is the capital of France?"}],
        "messageId": "11111111-1111-1111-1111-111111111111",
        "contextId": "22222222-2222-2222-2222-222222222222",
        "taskId": "33333333-3333-3333-3333-333333333333",
        "kind": "message"
      },
      "configuration": {
        "acceptedOutputModes": ["text/plain"],
        "blocking": true
      }
    },
    "id": "11111111-1111-1111-1111-111111111111"
  }' | python3 -m json.tool
messageId, contextId, and taskId are required fields in the A2A Message schema. You must pass valid UUIDs. Omitting any of them will result in a validation error from the core.
If everything is working, you should see a JSON response with the answer. That response traveled from your curl command, through the A2A HTTP server, across gRPC to your TypeScript handler, out to OpenAI, and back the same way.
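If you prefer to verify from code rather than curl, the same JSON-RPC payload can be built and sent from a short TypeScript script. This is a sketch against the endpoint shown above; buildSendMessage and ask are illustrative names, not SDK functions:

```typescript
import { randomUUID } from "node:crypto";

// Build the same JSON-RPC payload the curl example sends, with fresh UUIDs
// for the required messageId, contextId, and taskId fields.
function buildSendMessage(text: string) {
  return {
    jsonrpc: "2.0",
    method: "message/send",
    params: {
      message: {
        role: "user",
        parts: [{ kind: "text", text }],
        messageId: randomUUID(),
        contextId: randomUUID(),
        taskId: randomUUID(),
        kind: "message",
      },
      configuration: {
        acceptedOutputModes: ["text/plain"],
        blocking: true,
      },
    },
    id: randomUUID(),
  };
}

// POST the payload to the running agent from Step 6 and return the response.
async function ask(text: string): Promise<unknown> {
  const res = await fetch("http://localhost:3773", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildSendMessage(text)),
  });
  return res.json();
}
```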

Check the agent card

Every Bindu agent exposes a machine-readable identity card. This is how other agents discover what yours can do.
curl -s http://localhost:3773/.well-known/agent.json | python3 -m json.tool

Check health

curl -s http://localhost:3773/health

What Just Happened?

Your agent is running. Let’s slow down and understand what bindufy() actually did behind the scenes, because quite a lot happened in that one function call.
1. Launched the Python core: spawned bindu serve --grpc as a child process.
2. Started a gRPC server: hosts your handler on a dynamic, OS-assigned port.
3. Read your skill files: loaded skills/question-answering/skill.yaml.
4. Registered your agent: called RegisterAgent on the core with your config.
5. Core ran full bindufy logic:
  • Generated a deterministic agent ID from SHA256(author:name)
  • Created Ed25519 DID keys
  • Set up authentication (Hydra OAuth2)
  • Created a manifest with GrpcAgentClient as the handler
  • Started the HTTP/A2A server on :3773
6. Returned the registration result: agent ID, DID, and A2A URL.
7. Started the heartbeat loop: pings the core every 30 seconds to signal liveness.
Your code became the driver, and the sidecar brought the engine online around it.
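Step 5's deterministic ID means re-registering the same author and name always yields the same agent. A rough sketch of what deriving a UUID-shaped ID from SHA256(author:name) can look like (illustrative only; the core's exact derivation may differ):

```typescript
import { createHash } from "node:crypto";

// Illustrative only: hash "author:name" with SHA-256 and slice the hex
// digest into the 8-4-4-4-12 grouping of a UUID. Same inputs, same ID.
function deterministicAgentId(author: string, name: string): string {
  const hex = createHash("sha256").update(`${author}:${name}`).digest("hex");
  return [
    hex.slice(0, 8),
    hex.slice(8, 12),
    hex.slice(12, 16),
    hex.slice(16, 20),
    hex.slice(20, 32),
  ].join("-");
}
```

Because the ID is a pure function of author and name, restarting or redeploying the agent never creates a duplicate identity.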

Project Structure

Here is what your project looks like now:
my-first-agent/
├── index.ts
├── package.json
├── .env
├── .gitignore
├── skills/
│   └── question-answering/
│       └── skill.yaml
└── node_modules/

Troubleshooting

If something did not work, check the common issues below. Most problems come from missing prerequisites or port conflicts.

Bindu core not found:
pip install bindu
bindu --version

Ports 3773/3774 already in use:
lsof -ti:3773 -ti:3774 | xargs kill 2>/dev/null

API key not loaded:
cat .env
# Should show: OPENAI_API_KEY=sk-...

If the OpenAI call fails, common causes:
  • Invalid API key
  • Model not available on your OpenAI plan
  • Rate limiting
  • Network connectivity

If agent registration fails, common causes:
  • Missing author or name in config
  • Invalid deployment.url format
  • Port conflicts

Next Steps

You have a working agent. From here, the natural next question is: how do I make it smarter?

Agent Implementation

Handler patterns (multi-turn, LangChain, payments), state transitions, configuration, and how the bridge works under the hood

Custom SDKs

Build SDKs for Rust, Go, Swift, or any language with gRPC

API Reference

The complete gRPC contract: services, messages, ports, and env vars

Overview

Revisit the sidecar architecture, tradeoffs, and current limitations