How It Works
Process
- Orchestrator Broadcasts - Sends an assessment request to multiple agents
- Agents Self-Assess - Each agent evaluates skill matching, performance, load, and cost
- Orchestrator Ranks - Responses are scored using weighted factors
- Best Agent Selected - The highest-scoring agent receives the task
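A minimal sketch of this round trip is shown below, assuming each agent exposes an assess() method and the orchestrator simply takes the highest-scoring acceptance; both are illustrative placeholders, not a specific SDK API.

```python
# Minimal sketch of the negotiation round trip. The agents list and the
# assess() method are illustrative placeholders, not a specific SDK API.

def negotiate(agents, request):
    """Broadcast an assessment request, rank replies, return the best agent."""
    responses = [(agent, agent.assess(request)) for agent in agents]  # broadcast + self-assess
    accepted = [pair for pair in responses if pair[1]["accepted"]]
    if not accepted:
        return None  # nobody accepted; the caller decides what to do next
    # Rank by the overall score and hand the task to the winner.
    best_agent, _ = max(accepted, key=lambda pair: pair[1]["score"])
    return best_agent
```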
Scoring Formula
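The exact formula isn't reproduced here, but given the weighted factors above and the subscores returned in each response, a plausible reading is a normalized weighted sum; the factor names and default weights below are assumptions for illustration.

```python
# Illustrative weighted-sum scoring; the factor names and default weights are
# assumptions, not the orchestrator's exact formula.

DEFAULT_WEIGHTS = {"skill": 0.4, "performance": 0.3, "load": 0.2, "cost": 0.1}

def overall_score(subscores, weights=None):
    """Combine per-factor subscores (each 0-1) into a single 0-1 score."""
    weights = weights or DEFAULT_WEIGHTS
    total = sum(weights.values())
    return sum(weights[f] * subscores.get(f, 0.0) for f in weights) / total

print(overall_score({"skill": 0.9, "performance": 0.8, "load": 0.7, "cost": 0.85}))  # 0.825
```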
Assessment API
Request
- task_summary - Brief description of the task
- task_details - Detailed requirements (optional)
- input_mime_types - Expected input formats
- output_mime_types - Expected output formats
- max_latency_ms - Maximum acceptable latency
- max_cost_amount - Budget constraint
- min_score - Minimum confidence threshold
- weights - Custom scoring weights (optional)
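Put together, a request built from these fields might look like the following; every value is invented for the example, and the weight keys are assumed.

```python
# Example assessment request built from the documented fields; all values
# (and the weight keys) are invented for illustration.
assessment_request = {
    "task_summary": "Summarize a quarterly sales report",
    "task_details": "Highlight regional revenue trends and notable outliers",
    "input_mime_types": ["application/pdf"],
    "output_mime_types": ["text/markdown"],
    "max_latency_ms": 30_000,
    "max_cost_amount": 0.50,
    "min_score": 0.6,
    "weights": {"skill": 0.4, "performance": 0.3, "load": 0.2, "cost": 0.1},
}
```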
Response
- accepted - Whether agent can handle the task
- score - Overall confidence score (0-1)
- confidence - Agent’s self-assessed confidence
- skill_matches - Matched skills with reasoning
- latency_estimate_ms - Expected processing time
- queue_depth - Current task queue size
- subscores - Breakdown of scoring factors
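An illustrative response using these fields is sketched below; the values and the shape of the skill_matches entries are assumptions.

```python
# Example assessment response built from the documented fields; values and the
# shape of the skill_matches entries are assumptions.
assessment_response = {
    "accepted": True,
    "score": 0.82,                  # overall confidence score (0-1)
    "confidence": 0.9,              # agent's self-assessed confidence
    "skill_matches": [
        {"skill": "document-summarization", "reason": "matches task_summary"},
    ],
    "latency_estimate_ms": 12_000,  # expected processing time
    "queue_depth": 2,               # tasks already waiting
    "subscores": {"skill": 0.9, "performance": 0.8, "load": 0.7, "cost": 0.85},
}
```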
Configuration
Enable Negotiation
Environment Variables
Use Cases
Multi-Agent Translation
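As a rough example of translation routing, the orchestrator could describe the language pair in the request and let candidate agents report whether their declared skills match; the payload below is invented.

```python
# Hypothetical translation request: broadcast to translation-capable agents,
# then pick the one whose skill_matches best cover the language pair.
translation_request = {
    "task_summary": "Translate a product brochure from English to Japanese",
    "input_mime_types": ["text/plain"],
    "output_mime_types": ["text/plain"],
    "min_score": 0.7,   # ignore weak matches up front
}
```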
Cost Optimization
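For cost-sensitive routing, one option is to combine a hard budget cap (max_cost_amount) with optional weights skewed toward cost; the values below are illustrative only.

```python
# Hypothetical cost-optimized request: a hard budget cap plus weights that
# favor cheaper agents over faster or more specialized ones.
cost_optimized_request = {
    "task_summary": "Transcribe a 10-minute audio clip",
    "max_cost_amount": 0.10,   # budget constraint
    "min_score": 0.5,
    "weights": {"skill": 0.2, "performance": 0.1, "load": 0.1, "cost": 0.6},
}
```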
Best Practices
For Agent Developers
- Accurate self-assessment - Don’t over-claim capabilities
- Honest scoring - Return realistic confidence scores
- Update skills - Keep skill metadata current
- Monitor performance - Track actual vs. estimated latency (see the sketch below)
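A small sketch of what honest self-assessment can look like in practice: the latency estimate comes from measured history and queue_depth reflects real load. The bookkeeping variables are placeholders, not framework state.

```python
# Sketch of honest agent-side self-assessment: the latency estimate comes from
# measured history and queue_depth reflects real load. The bookkeeping
# variables here are placeholders, not framework state.

recent_latencies_ms = [11_200, 13_500, 12_300]   # measured from past runs
task_queue = ["job-41", "job-42"]                # currently pending work

def assess(request):
    estimate = sum(recent_latencies_ms) / len(recent_latencies_ms)
    within_budget = estimate <= request.get("max_latency_ms", float("inf"))
    return {
        "accepted": within_budget,
        "confidence": 0.8 if within_budget else 0.2,  # realistic, not inflated
        "latency_estimate_ms": int(estimate),
        "queue_depth": len(task_queue),               # report real load
    }
```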
For Orchestrators
- Query multiple agents - Get diverse options
- Set minimum thresholds - Filter low-quality matches
- Custom weights - Adjust for your priorities
- Handle rejections - Have fallback strategies, as in the sketch below
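Putting these together, an orchestrator might filter on a minimum score and fall back when every agent rejects or scores too low; the assess() calls and the run_locally() helper below are illustrative placeholders.

```python
# Sketch of orchestrator-side selection with a quality bar and a fallback path.
# The assess() calls and run_locally() helper are illustrative placeholders.

MIN_SCORE = 0.6  # minimum threshold to filter low-quality matches

def run_locally(request):
    """Hypothetical fallback when no remote agent qualifies."""
    return {"status": "handled-locally", "task": request["task_summary"]}

def dispatch(agents, request):
    responses = [(agent, agent.assess(request)) for agent in agents]
    qualified = [
        (agent, resp) for agent, resp in responses
        if resp["accepted"] and resp["score"] >= MIN_SCORE
    ]
    if not qualified:
        return run_locally(request)   # every agent rejected or scored too low
    best_agent, _ = max(qualified, key=lambda pair: pair[1]["score"])
    return best_agent
```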