Agent directories have always been built for humans. You open a web page, you scan a list, you click through profiles, you compare features. That's how Agentry started — and it still works for the people who use it every day.
But the next wave of "users" won't be people. They'll be AI agents — LangChain orchestrators, Claude tool-use chains, AWS Bedrock agents, custom AutoGPT setups — tasked with finding and calling other agents. And agents don't browse web pages. They read structured data.
Today, Agentry is shipping five machine-discovery layers that make every agent in our directory programmatically findable, queryable, and callable — by other agents.
The Shift: From Human Browsing to Agent Discovery
When a developer needs a customer service bot, they Google it, browse a directory, read some reviews, and make a call. When an AI agent needs a customer service bot, the process is fundamentally different.
An orchestrator agent doesn't have eyes. It doesn't click buttons. It needs a URL it can fetch, a schema it can parse, and structured metadata it can evaluate. If your agent isn't discoverable through machine-readable endpoints, it doesn't exist in the agentic world — no matter how good your landing page looks.
This is the gap we just closed. Agentry now serves two audiences simultaneously: humans who browse our directory to find agents, and machines that query our APIs to discover them. Same directory, same listings, same trust scores — but now with five distinct machine-readable surfaces that conform to the emerging standards for agent discovery.
Five Discovery Layers, One Directory
Each layer serves a different consumer at a different stage of the discovery process. Here's what we shipped and why.
1. /llms.txt — The AI-Readable Front Door
Think of robots.txt, but for AI models. The llms.txt standard is a plain-text markdown file at the root of a domain that tells any LLM what the site does, what APIs exist, and how to use them.
Agentry now serves this file at both https://agentry.com/llms.txt and https://api.agentry.com/llms.txt. When an LLM or AI agent encounters our domain — during retrieval-augmented generation, tool-use exploration, or autonomous browsing — it can consult this file to understand the site. It may learn that Agentry is an agent directory, that we have a public API, and which endpoints to call.
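For reference, here's the general shape the llms.txt standard prescribes: an H1, a blockquote summary, and sections of annotated links. This excerpt is illustrative, not a copy of our live file:

```markdown
# Agentry

> Agentry is a directory of AI agents, searchable by humans and by other agents.

## API
- [Search agents](https://api.agentry.com/api/agents/search): structured search with category and trust-score filters
- [Public A2A registry](https://api.agentry.com/api/agents/public): A2A Agent Cards with full-text search and pagination
```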
2. /.well-known/agents.json — The API Descriptor
A machine-readable API capability file at https://api.agentry.com/.well-known/agents.json. This follows the emerging agents.json standard — a JSON file that describes all API capabilities with full parameter schemas, return types, and usage examples.
An agent reads this file and instantly knows how to call the Agentry API. No documentation parsing. No guesswork. The file describes all 6 API capabilities — from searching agents to filtering by category to retrieving individual agent cards — with enough detail for an LLM to construct valid API calls on the first try.
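To make that concrete, here's a Python sketch of how an agent could turn one capability entry into a valid request URL. The descriptor below is abbreviated and its field names (`baseUrl`, `capabilities`, `params`) are assumptions for illustration; the live /.well-known/agents.json is authoritative:

```python
from urllib.parse import urlencode

# Abbreviated, hypothetical agents.json descriptor -- field names are
# illustrative, not copied from the real file.
descriptor = {
    "baseUrl": "https://api.agentry.com",
    "capabilities": [
        {
            "name": "search_agents",
            "method": "GET",
            "path": "/api/agents/search",
            "params": ["q", "category", "min_trust_score"],
        }
    ],
}

def build_call(descriptor, capability_name, **kwargs):
    """Construct a request URL from a capability entry, keeping only declared params."""
    cap = next(c for c in descriptor["capabilities"] if c["name"] == capability_name)
    query = {k: v for k, v in kwargs.items() if k in cap["params"]}
    return cap["method"], descriptor["baseUrl"] + cap["path"] + "?" + urlencode(query)

method, url = build_call(descriptor, "search_agents", q="customer service", category="Support")
print(method, url)
# GET https://api.agentry.com/api/agents/search?q=customer+service&category=Support
```

The point is that the descriptor carries enough structure that the agent never has to guess parameter names.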
3. /api/agents/public — A2A-Compatible Open Discovery
This is the big one. An open discovery endpoint at https://api.agentry.com/api/agents/public that follows the proposed A2A Registry standard from Google A2A Discussion #741.
This endpoint returns proper A2A Agent Cards for every agent in our directory — complete with name, description, capabilities, skills, provider info, and Agentry-specific metadata like trust scores and pricing. It supports full-text search via ?q=, category filtering, and cursor-based pagination. Any A2A-compliant client can query it.
4. /api/agents/search — Structured Search API
A purpose-built search endpoint at https://api.agentry.com/api/agents/search that returns JSON with full metadata: name, description, category, pricing model, trust score, integrations, and A2A/MCP support flags. Optimized for programmatic consumption with query parameters for filtering by category, pricing model, and minimum trust score.
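As a sketch of what programmatic consumption looks like, the snippet below filters and ranks a hypothetical search response by trust score. The response shape and field names (`results`, `trust_score`, `a2a_supported`) are assumptions for illustration:

```python
import json

# Hypothetical response body from /api/agents/search.
payload = json.loads("""
{
  "results": [
    {"name": "HelpDeskBot", "category": "Customer Service", "pricing_model": "subscription",
     "trust_score": 91, "a2a_supported": true, "mcp_supported": false},
    {"name": "TicketTriage", "category": "Customer Service", "pricing_model": "usage-based",
     "trust_score": 74, "a2a_supported": true, "mcp_supported": true}
  ]
}
""")

# An orchestrator can apply its own policy on top of the structured fields:
# require a minimum trust score, then prefer the highest-scoring match.
candidates = [a for a in payload["results"] if a["trust_score"] >= 70]
best = max(candidates, key=lambda a: a["trust_score"])
print(best["name"])  # HelpDeskBot
```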
5. MCP Registry — Native Discovery for MCP Clients
Agentry is now listed in the official MCP Registry (currently in preview) at registry.modelcontextprotocol.io as io.github.cthulhutoo/agentry. MCP clients and downstream aggregators can discover Agentry through the official MCP Registry, which provides a standardized API for server metadata. The MCP integration exposes agent search, category browsing, and agent card retrieval as MCP tools.
What This Means for Agent Developers
If you've listed your agent on Agentry, it's now discoverable by other agents — not just humans. Here's what that looks like in practice:
- A LangChain agent searching for "customer service automation" will find your listing via the /api/agents/public endpoint, parse your Agent Card, and evaluate whether your capabilities match its task.
- An MCP client like Claude or Cursor, looking for agent tools, will find Agentry in the MCP Registry and query your capabilities through our search tools.
- An enterprise orchestrator evaluating agent options will call /api/agents/search, rank results by trust score, and recommend your agent — all without a human in the loop.
Your paid listing (Pro, Featured, or Premium) now serves double duty. It's visible to humans browsing the directory and to machines querying the API. The same trust score that ranks you in the web directory ranks you in programmatic search results.
Here's how another agent would query the API to find your listing:
```bash
# Find customer service agents via A2A discovery
curl "https://api.agentry.com/api/agents/public?q=customer+service&top=5"

# Search by category
curl "https://api.agentry.com/api/agents/search?q=analytics&category=Data+%26+Analytics"

# Get a specific agent's full card
curl "https://api.agentry.com/api/agents/public?q=SupportBot+Pro&top=1"
```
What This Means for Businesses
If you're deploying AI agents in your workflows — whether for customer support, sales outreach, data analysis, or internal operations — this matters for three reasons.
Your agents can now discover specialized tools programmatically. Instead of a human researcher manually evaluating agent options, your orchestrator agent can query Agentry's API, compare trust scores and capabilities, and surface the best candidates automatically.
No more manual research loops. When your workflow needs a new capability — say, a document-processing agent or a compliance-checking agent — your existing agents can search for it, evaluate options based on structured metadata (trust score, pricing model, supported features), and recommend the best fit. The evaluation criteria are in the data, not locked in someone's head.
Enterprise registries can sync with Agentry's public endpoint. If you maintain an internal agent catalog — a list of approved third-party agents your teams can use — you can now pull directly from Agentry's /api/agents/public endpoint to populate and refresh that catalog. Filter by minimum trust score, required capabilities, or specific categories. The data is structured, paginated, and ready for automated ingestion.
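A minimal sync loop might look like the following sketch, with the HTTP fetch stubbed out with canned pages and the pagination field names (`items`, `next_cursor`) assumed for illustration:

```python
# Stubbed pages standing in for paginated /api/agents/public responses;
# a real client would issue HTTP GETs and pass the cursor as a query parameter.
PAGES = {
    None: {"items": [{"name": "A", "trust_score": 88}], "next_cursor": "p2"},
    "p2": {"items": [{"name": "B", "trust_score": 65}], "next_cursor": None},
}

def fetch_page(cursor):
    return PAGES[cursor]

def sync_catalog(min_trust=70):
    """Walk all pages, keeping only agents above the trust-score floor."""
    catalog, cursor = [], None
    while True:
        page = fetch_page(cursor)
        catalog += [a for a in page["items"] if a["trust_score"] >= min_trust]
        cursor = page["next_cursor"]
        if cursor is None:
            return catalog

print(sync_catalog())  # [{'name': 'A', 'trust_score': 88}]
```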
How Agent Discovery Actually Works
Let's walk through a concrete scenario. An orchestrator agent is given the task: "Find me a sales outreach AI agent with email integration."
Step 1: The agent may first consult agentry.com/llms.txt, which tells it that Agentry is an agent directory with a public API.
Step 2: The agent reads api.agentry.com/.well-known/agents.json. It parses the API capability descriptor and understands the available endpoints, parameters, and response formats.
Step 3: The agent calls GET /api/agents/search?q=sales+outreach&category=Sales+%26+Marketing. It receives a structured JSON response with matching agents.
Step 4: The agent evaluates results based on trust_score, pricing_model, key_features, and whether the agent supports the required email integration.
Step 5: The agent returns a recommendation to the user — or, if authorized, calls the chosen agent directly via its A2A endpoint.
Here's what the response from /api/agents/public looks like for a single agent card, including the x-agentry metadata extension:
```json
{
  "name": "OutreachPilot",
  "description": "AI sales outreach agent that drafts personalized cold emails, manages follow-up sequences, and integrates with Gmail, Outlook, and HubSpot.",
  "provider": {
    "organization": "PilotAI Inc.",
    "url": "https://outreachpilot.ai"
  },
  "version": "3.2.1",
  "supportedInterfaces": [
    {
      "url": "https://api.outreachpilot.ai/a2a/v1",
      "protocolBinding": "JSONRPC",
      "protocolVersion": "1.0"
    }
  ],
  "capabilities": {
    "streaming": true,
    "pushNotifications": true,
    "stateTransitionHistory": false
  },
  "skills": [
    {
      "id": "cold-email-draft",
      "name": "Cold Email Drafting",
      "description": "Generates personalized cold outreach emails based on prospect data and ICP criteria.",
      "tags": ["sales", "outreach", "email", "cold-email"]
    },
    {
      "id": "follow-up-sequence",
      "name": "Follow-Up Sequence Manager",
      "description": "Manages multi-step follow-up sequences with configurable timing and escalation.",
      "tags": ["sales", "follow-up", "automation"]
    }
  ],
  "x-agentry": {
    "trust_score": 87,
    "category": "Sales & Marketing",
    "pricing_model": "usage-based",
    "listing_tier": "featured",
    "integrations": ["Gmail", "Outlook", "HubSpot", "Salesforce"],
    "a2a_verified": true,
    "mcp_supported": true,
    "listing_url": "https://agentry.com/agents/outreachpilot"
  }
}
```
Notice the x-agentry extension. This is where Agentry adds value beyond the base A2A spec: trust scores, pricing metadata, integration lists, and verification flags that help an evaluating agent make a decision — not just discover an option.
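As a sketch of that decision step, here's how an orchestrator might screen a card like the one above against its own policy. The card is abbreviated here, and the trust-score floor and required integration are illustrative choices, not part of any spec:

```python
# Abbreviated card, keeping only the fields this check reads.
card = {
    "name": "OutreachPilot",
    "skills": [{"id": "cold-email-draft"}, {"id": "follow-up-sequence"}],
    "x-agentry": {
        "trust_score": 87,
        "a2a_verified": True,
        "integrations": ["Gmail", "Outlook", "HubSpot", "Salesforce"],
    },
}

def acceptable(card, min_trust=80, required_integration="Gmail"):
    """Require a trust-score floor, a verified A2A endpoint, and one integration."""
    meta = card.get("x-agentry", {})
    return (
        meta.get("trust_score", 0) >= min_trust
        and meta.get("a2a_verified", False)
        and required_integration in meta.get("integrations", [])
    )

print(acceptable(card))  # True
```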
The Standards We Support
Agentry's machine-discovery layers are built on four emerging standards. Each operates at a different layer of the stack. As nullpath.com put it: "MCP is discovery, A2A is communication, x402 is money. Different layers."
| Standard | Created By | Purpose | Agentry Implementation |
|---|---|---|---|
| llms.txt | llmstxt.org | Plain-text site descriptor for AI models | Served at /llms.txt on both agentry.com and api.agentry.com |
| agents.json | Community standard | Machine-readable API capability descriptor | Full API schema at /.well-known/agents.json with 6 capabilities |
| A2A Protocol | Google / Linux Foundation | Agent-to-agent communication and discovery | A2A Registry endpoint at /api/agents/public with full Agent Cards |
| MCP | Anthropic | Tool and context discovery for LLM clients | Listed in MCP Registry (preview) as io.github.cthulhutoo/agentry |
These standards are complementary, not competing. An agent might discover Agentry via MCP, learn the API structure from agents.json, query the A2A Registry endpoint for matching agents, and then communicate with a discovered agent directly over A2A. Each standard handles one layer of the interaction.
Get Your Agent Listed
If you're building an AI agent, listing it on Agentry now means it's discoverable by both humans and machines — across all five discovery layers.
Here's how to maximize your visibility:
- Add an A2A Agent Card to your agent at /.well-known/agent.json. Agents with valid A2A cards get a +25 trust score boost and higher placement in both human and programmatic search results. See our step-by-step guide.
- Include detailed skills with examples. The skills array in your Agent Card is what other agents use to decide whether to route tasks to you. Vague descriptions get skipped; specific, example-rich descriptions get called.
- Declare MCP support if your agent exposes MCP tools. Agents that support both A2A and MCP are accessible to the widest range of clients and orchestrators.
- Keep your listing metadata current. Pricing model, integration list, and capability flags all feed into the structured data that machines evaluate. Stale data means missed opportunities.
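If you're adding an A2A Agent Card, a minimal /.well-known/agent.json could look like the sketch below. Field names mirror the Agent Card example earlier in this post; treat the A2A specification and our guide as the authoritative schema:

```json
{
  "name": "YourAgent",
  "description": "One or two specific sentences about what the agent does.",
  "version": "1.0.0",
  "supportedInterfaces": [
    {
      "url": "https://api.youragent.example/a2a/v1",
      "protocolBinding": "JSONRPC",
      "protocolVersion": "1.0"
    }
  ],
  "skills": [
    {
      "id": "example-skill",
      "name": "Example Skill",
      "description": "Concrete description with an example input and output, so routing agents can match tasks to it.",
      "tags": ["category", "capability"]
    }
  ]
}
```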
Listing is free. Pro, Featured, and Premium tiers increase your visibility in both human-facing and machine-facing search results. Submit your agent here.
Make Your Agent Machine-Discoverable
List your agent on Agentry and reach both human decision-makers and AI orchestrators. Agents with A2A cards and MCP support rank higher in programmatic discovery.