Multi-agent blog writing system using a LangGraph workflow.

Documentation Index
Fetch the complete documentation index at: https://docs.getbindu.com/llms.txt
Use this file to discover all available pages before exploring further.
Code
Create main.py with the code below, or save it directly from your editor.
from bindu.penguin.bindufy import bindufy
from graph import build_graph
from schemas import AgentResponse

graph = build_graph()


def handler(messages):
    try:
        # Handle possible dict wrapper
        if isinstance(messages, dict) and "messages" in messages:
            messages = messages["messages"]
        if not messages:
            raise ValueError("No messages received")
        last_message = messages[-1]
        # Support both formats:
        # 1) [{"role": "user", "content": "..."}]
        # 2) ["plain string"]
        if isinstance(last_message, dict):
            query = last_message.get("content", "")
        else:
            query = str(last_message)
        result = graph.invoke({
            "topic": query,
            "plan": None,
            "sections": [],
            "final": None,
        })
        return result["final"]
    except Exception as e:
        # Surface the failure reason through the response schema
        return AgentResponse(
            answer="Agent execution failed.",
            reasoning=str(e),
        )


config = {
    "author": "amritanshu9973@gmail.com",
    "name": "langgraph_blog_writing_agent",
    "deployment": {
        "url": "http://localhost:3773",
        "expose": True,
        "cors_origins": ["*"],
    },
    "skills": ["skills/blog_writing_agent"],
}

bindufy(config, handler)
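The handler accepts both message shapes listed in the comments above. The parsing logic can be exercised in isolation with a standalone helper that mirrors it (`extract_query` is a hypothetical name used only for this sketch; it is not part of the agent):

```python
def extract_query(messages):
    """Mirror of the handler's input normalization (illustrative only)."""
    # Unwrap {"messages": [...]} if the payload arrives as a dict
    if isinstance(messages, dict) and "messages" in messages:
        messages = messages["messages"]
    if not messages:
        raise ValueError("No messages received")
    last_message = messages[-1]
    # Dict messages carry the text under "content"; plain strings pass through
    if isinstance(last_message, dict):
        return last_message.get("content", "")
    return str(last_message)


print(extract_query([{"role": "user", "content": "Explain RAG"}]))  # Explain RAG
print(extract_query(["Explain RAG"]))                               # Explain RAG
print(extract_query({"messages": ["Explain RAG"]}))                 # Explain RAG
```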
Additional Files
Create these supporting files in the same directory:

graph.py
from __future__ import annotations

import operator
import os
from typing import TypedDict, List, Annotated, Literal, Optional

from pydantic import BaseModel, Field
from langgraph.graph import StateGraph, START, END
from langgraph.types import Send
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage
from dotenv import load_dotenv

load_dotenv()
class Task(BaseModel):
    id: int
    title: str
    goal: str = Field(
        ...,
        description="One sentence describing what the reader should be able to do/understand after this section.",
    )
    bullets: List[str] = Field(
        ...,
        min_length=3,
        max_length=5,
        description="3–5 concrete, non-overlapping subpoints to cover in this section.",
    )
    target_words: int = Field(
        ...,
        description="Target word count for this section (300–450).",
    )
    section_type: Literal[
        "intro", "core", "examples", "checklist", "common_mistakes", "conclusion"
    ] = Field(
        ...,
        description="Use 'common_mistakes' exactly once in the plan.",
    )


class Plan(BaseModel):
    blog_title: str
    audience: str = Field(..., description="Who this blog is for.")
    tone: str = Field(..., description="Writing tone (e.g., practical, crisp).")
    tasks: List[Task]


class State(TypedDict):
    topic: str
    plan: Optional[Plan]
    sections: Annotated[List[str], operator.add]  # reducer concatenates worker outputs
    final: Optional[str]


llm = ChatOpenAI(
    model="openai/gpt-oss-120b",  # or any OpenRouter-supported model
    openai_api_key=os.getenv("OPENROUTER_API_KEY"),
    openai_api_base="https://openrouter.ai/api/v1",
)
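The `Annotated[List[str], operator.add]` reducer on `sections` is what lets parallel workers write to the same state key without clobbering each other: LangGraph merges each branch's partial update by applying the reducer. The merge itself is plain list concatenation, which can be seen directly (a minimal illustration of the reducer function alone, not of LangGraph):

```python
import operator

# LangGraph calls the reducer as reducer(existing_value, new_update);
# with operator.add on lists, that is simple concatenation.
merged = operator.add(["## Intro ..."], ["## Core concepts ..."])
print(merged)  # ['## Intro ...', '## Core concepts ...']

# Three parallel workers each returning {"sections": [one_section]}
# therefore accumulate in arrival order:
state_sections = []
for update in (["## Intro"], ["## Examples"], ["## Conclusion"]):
    state_sections = operator.add(state_sections, update)
print(len(state_sections))  # 3
```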
def orchestrator(state: State) -> dict:
    planner = llm.with_structured_output(Plan)
    plan = planner.invoke(
        [
            SystemMessage(
                content=(
                    "You are a senior technical writer and developer advocate. Your job is to produce a "
                    "highly actionable outline for a technical blog post.\n\n"
                    "Hard requirements:\n"
                    "- Create 5–7 sections (tasks) that fit a technical blog.\n"
                    "- Each section must include:\n"
                    "  1) goal (1 sentence: what the reader can do/understand after the section)\n"
                    "  2) 3–5 bullets that are concrete, specific, and non-overlapping\n"
                    "  3) target word count (300–450)\n"
                    "- Include EXACTLY ONE section with section_type='common_mistakes'.\n\n"
                    "Make it technical (not generic):\n"
                    "- Assume the reader is a developer; use correct terminology.\n"
                    "- Prefer design/engineering structure: problem → intuition → approach → implementation → "
                    "trade-offs → testing/observability → conclusion.\n"
                    "- Bullets must be actionable and testable (e.g., 'Show a minimal code snippet for X', "
                    "'Explain why Y fails under Z condition', 'Add a checklist for production readiness').\n"
                    "- Explicitly include at least ONE of the following somewhere in the plan (as bullets):\n"
                    "  * a minimal working example (MWE) or code sketch\n"
                    "  * edge cases / failure modes\n"
                    "  * performance/cost considerations\n"
                    "  * security/privacy considerations (if relevant)\n"
                    "  * debugging tips / observability (logs, metrics, traces)\n"
                    "- Avoid vague bullets like 'Explain X' or 'Discuss Y'. Every bullet should state what "
                    "to build/compare/measure/verify.\n\n"
                    "Ordering guidance:\n"
                    "- Start with a crisp intro and problem framing.\n"
                    "- Build core concepts before advanced details.\n"
                    "- Include one section for common mistakes and how to avoid them.\n"
                    "- End with a practical summary/checklist and next steps.\n\n"
                    "Output must strictly match the Plan schema."
                )
            ),
            HumanMessage(content=f"Topic: {state['topic']}"),
        ]
    )
    return {"plan": plan}
def fanout(state: State):
    # One Send per task fans the plan out to parallel worker invocations
    return [
        Send(
            "worker",
            {"task": task, "topic": state["topic"], "plan": state["plan"]},
        )
        for task in state["plan"].tasks
    ]
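`Send` routes one payload per task to the `worker` node, so the fan-out width follows the plan. With stand-in task objects (`FakeTask` and `FakePlan` are hypothetical stubs used only to show the shape of the payloads, minus the langgraph `Send` wrapper), the comprehension produces:

```python
from dataclasses import dataclass


@dataclass
class FakeTask:  # stand-in for the pydantic Task model
    id: int
    title: str


@dataclass
class FakePlan:  # stand-in for Plan
    tasks: list


plan = FakePlan(tasks=[FakeTask(1, "Intro"), FakeTask(2, "Core"), FakeTask(3, "Wrap-up")])

# Same shape as fanout(), minus the Send wrapper:
payloads = [
    {"task": task, "topic": "RAG", "plan": plan}
    for task in plan.tasks
]
print(len(payloads))              # 3 -> three parallel worker invocations
print(payloads[0]["task"].title)  # Intro
```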
def worker(payload: dict) -> dict:
    task = payload["task"]
    topic = payload["topic"]
    plan = payload["plan"]
    bullets_text = "\n- " + "\n- ".join(task.bullets)
    section_md = llm.invoke(
        [
            SystemMessage(
                content=(
                    "You are a senior technical writer and developer advocate. Write ONE section of a technical blog post in Markdown.\n\n"
                    "Hard constraints:\n"
                    "- Follow the provided Goal and cover ALL Bullets in order (do not skip or merge bullets).\n"
                    "- Stay close to the Target words (±15%).\n"
                    "- Output ONLY the section content in Markdown (no blog title H1, no extra commentary).\n\n"
                    "Technical quality bar:\n"
                    "- Be precise and implementation-oriented (developers should be able to apply it).\n"
                    "- Prefer concrete details over abstractions: APIs, data structures, protocols, and exact terms.\n"
                    "- When relevant, include at least one of:\n"
                    "  * a small code snippet (minimal, correct, and idiomatic)\n"
                    "  * a tiny example input/output\n"
                    "  * a checklist of steps\n"
                    "  * a diagram described in text (e.g., 'Flow: A -> B -> C')\n"
                    "- Explain trade-offs briefly (performance, cost, complexity, reliability).\n"
                    "- Call out edge cases / failure modes and what to do about them.\n"
                    "- If you mention a best practice, add the 'why' in one sentence.\n\n"
                    "Markdown style:\n"
                    "- Start with a '## <Section Title>' heading.\n"
                    "- Use short paragraphs, bullet lists where helpful, and code fences for code.\n"
                    "- Avoid fluff. Avoid marketing language.\n"
                    "- If you include code, keep it focused on the bullet being addressed.\n"
                )
            ),
            HumanMessage(
                content=(
                    f"Blog: {plan.blog_title}\n"
                    f"Audience: {plan.audience}\n"
                    f"Tone: {plan.tone}\n"
                    f"Topic: {topic}\n\n"
                    f"Section: {task.title}\n"
                    f"Section type: {task.section_type}\n"
                    f"Goal: {task.goal}\n"
                    f"Target words: {task.target_words}\n"
                    f"Bullets:{bullets_text}\n"
                )
            ),
        ]
    ).content.strip()
    return {"sections": [section_md]}
def reducer(state: State) -> dict:
    title = state["plan"].blog_title
    body = "\n\n".join(state["sections"]).strip()
    final_md = f"# {title}\n\n{body}\n"
    return {"final": final_md}
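The reducer is pure string assembly: an H1 built from the plan's title, then the accumulated worker sections joined by blank lines. Its output shape can be checked with literal section strings (a standalone sketch of the same assembly):

```python
# Two worker outputs, each already a Markdown section with an H2 heading
sections = ["## Intro\n\nWhy RAG matters.", "## Conclusion\n\nNext steps."]
title = "Understanding RAG"

body = "\n\n".join(sections).strip()
final_md = f"# {title}\n\n{body}\n"

print(final_md.splitlines()[0])  # '# Understanding RAG'
print(final_md.count("## "))     # 2 -> both section headings survive the join
```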
# -----------------------------
# Graph wiring
# -----------------------------
def build_graph():
    g = StateGraph(State)
    g.add_node("orchestrator", orchestrator)
    g.add_node("worker", worker)
    g.add_node("reducer", reducer)
    g.add_edge(START, "orchestrator")
    # fanout returns Send objects, so this edge is a dynamic fan-out to "worker"
    g.add_conditional_edges("orchestrator", fanout, ["worker"])
    g.add_edge("worker", "reducer")
    g.add_edge("reducer", END)
    return g.compile()
schemas.py
from typing import Optional

from pydantic import BaseModel


class AgentResponse(BaseModel):
    answer: Optional[str]
    reasoning: Optional[str] = None
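AgentResponse is the error envelope the handler falls back to when the graph fails. A quick check of how it serializes (using pydantic v2's `model_dump`; the error string below is illustrative):

```python
from typing import Optional

from pydantic import BaseModel


class AgentResponse(BaseModel):
    answer: Optional[str]
    reasoning: Optional[str] = None


resp = AgentResponse(
    answer="Agent execution failed.",
    reasoning="ValueError: No messages received",  # example of str(e) from the handler
)
print(resp.model_dump())
```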
Skill Configuration
Create skills/blog_writing_agent/skill.yaml:
# LangGraph Structured Technical Blog Agent
# Production-grade technical blog generation agent

id: langgraph-structured-blog-writer
name: LangGraph Structured Technical Blog Agent
version: 1.0.0
author: amritanshu9973@gmail.com

description: |
  A production-grade technical blog generation agent built with LangGraph
  and an OpenRouter-hosted model (openai/gpt-oss-120b).

  Architecture:
  - Orchestrator: Generates a strictly validated blog outline using a Pydantic schema (Plan + Task).
  - Worker Nodes: Independently generate detailed Markdown sections based on structured goals.
  - Reducer: Combines all generated sections into a cohesive final blog post.

  Core Capabilities:
  - Schema-enforced structured planning
  - 5–7 section automatic decomposition
  - Map-reduce parallel section writing
  - Developer-focused technical precision
  - Markdown-first clean formatting
  - Explicit inclusion of edge cases, trade-offs, and implementation details

  This agent produces consistently structured, engineering-grade blog posts.

tags:
  - langgraph
  - openrouter
  - gpt-oss-120b
  - technical-writing
  - structured-output
  - pydantic
  - map-reduce
  - markdown
  - blog-generation
  - orchestration

input_modes:
  - application/json

output_modes:
  - application/json

examples:
  - "How does Retrieval-Augmented Generation (RAG) work?"
  - "Design a production-ready microservices architecture"
  - "Deep dive into Kubernetes scheduling"
  - "How to implement distributed tracing in Python"
  - "Understanding vector databases in AI systems"

capabilities_detail:
  structured_planning:
    supported: true
    description: "Uses strict Pydantic schema (Plan + Task) for deterministic outline generation."
  section_schema_validation:
    supported: true
    description: "Each section includes goal, 3–5 actionable bullets, target word count, and type constraints."
  map_reduce_execution:
    supported: true
    description: "Uses LangGraph fan-out workers to generate sections and a reducer to merge outputs."
  markdown_output:
    supported: true
    description: "Produces clean, production-ready Markdown with headings and code blocks."
  technical_depth_enforcement:
    supported: true
    description: "Requires implementation-level detail, trade-offs, edge cases, and debugging insights."
  openai_backend:
    supported: true
    description: "Powered by openai/gpt-oss-120b via ChatOpenAI pointed at the OpenRouter API."
  deterministic_structure:
    supported: true
    description: "Ensures exactly one 'common_mistakes' section and 5–7 total sections."

requirements:
  packages:
    - "langgraph>=0.2.0"
    - "langchain-openai>=0.2.0"
    - "pydantic>=2.0.0"
    - "python-dotenv>=1.0.0"
    - "bindu>=0.1.0"
  system:
    - python_311_or_higher
  api_keys:
    - OPENROUTER_API_KEY

performance:
  avg_processing_time_ms: 45000
  max_concurrent_requests: 2
  context_window_tokens: 128000
  scalability: horizontal

assessment:
  keywords:
    - blog
    - technical
    - writing
    - langgraph
    - structured
    - markdown
    - openrouter
    - pydantic
    - map-reduce
  specializations:
    - domain: technical-writing
      confidence_boost: 0.5
    - domain: blog-generation
      confidence_boost: 0.4
    - domain: structured-output
      confidence_boost: 0.3
  anti_patterns:
    - "creative writing"
    - "marketing content"
    - "non-technical explanations"
    - "casual tone"
    - "generic advice"
  complexity_indicators:
    simple:
      - "write about"
      - "explain"
      - "how to"
    medium:
      - "design architecture"
      - "deep dive into"
      - "implement"
    complex:
      - "comprehensive guide"
      - "production-ready"
      - "distributed systems"
How It Works

Agent Roles
- Orchestrator: Breaks the topic into a structured plan with sections and word counts
- Workers: Write individual sections in parallel with specific technical depth
- Reducer: Aggregates sections into the final cohesive Markdown article

Execution Flow
- The orchestrator creates a detailed plan with specific tasks
- Fanout distributes tasks to parallel workers
- Workers write sections simultaneously, enforcing the plan's constraints
- The reducer combines sections into the final article and returns a cohesive blog post

State Fields
- topic: User input for the blog topic
- plan: Structured outline with sections and requirements
- sections: Individual written sections from workers
- final: Completed Markdown article
Dependencies
uv init
uv add bindu langgraph langchain-openai pydantic python-dotenv
Environment Setup
Create a .env file:
OPENROUTER_API_KEY=your_openrouter_api_key_here
Run
uv run main.py
Example prompts:
- "How does Retrieval-Augmented Generation (RAG) work?"
- "Design a production-ready microservices architecture"
- "Deep dive into Kubernetes scheduling"
Example API Calls
Message Send Request
{
"jsonrpc": "2.0",
"method": "message/send",
"params": {
"message": {
"role": "user",
"kind": "message",
"messageId": "9f11c870-5616-49ad-b187-d93cbb100001",
"contextId": "9f11c870-5616-49ad-b187-d93cbb100002",
"taskId": "9f11c870-5616-49ad-b187-d93cbb100003",
"parts": [
{
"kind": "text",
"text": "How does Retrieval-Augmented Generation (RAG) work?"
}
]
},
"skillId": "langgraph-structured-blog-writer",
"configuration": {
"acceptedOutputModes": ["application/json"]
}
},
"id": "9f11c870-5616-49ad-b187-d93cbb100003"
}
Task Get Request
{
"jsonrpc": "2.0",
"method": "tasks/get",
"params": {
"taskId": "9f11c870-5616-49ad-b187-d93cbb100003"
},
"id": "9f11c870-5616-49ad-b187-d93cbb100004"
}
Frontend Setup
# Clone the Bindu repository
git clone https://github.com/GetBindu/Bindu
# Navigate to the frontend directory
cd Bindu/frontend
# Install dependencies
npm install
# Start frontend development server
npm run dev