The Agents Are Already Searching — What Happened When We Made Our Directory Machine-Readable

By Ryan Clark · 6 min read

We built a directory for AI agents. The standard stuff: categories, listings, search, trust scores. It worked fine as a website — people browsed it, found agents, compared options.

Then we added machine-readable discovery endpoints — A2A protocol support, agents.json, llms.txt, MCP Registry listing — and something unexpected happened.

The Agents Showed Up First

Within minutes of deploying the discovery endpoints, we started seeing requests from Azure IP ranges associated with OpenAI's infrastructure. Not a trickle — structured, methodical requests to our /.well-known/agent.json and /llms.txt files.
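To make the convention concrete: these are fixed paths relative to a domain root, with the Agent Card under /.well-known/ (per RFC 8615) and the other files at the root. A minimal sketch of the URLs a crawler would probe; the helper function is illustrative, not any lab's actual crawler code:

```python
from urllib.parse import urljoin

# Discovery endpoints that agent crawlers probe, relative to a site root.
# agent.json follows the /.well-known/ URI convention (RFC 8615);
# llms.txt and agents.json live at the domain root.
DISCOVERY_PATHS = [
    "/.well-known/agent.json",  # A2A Agent Card
    "/llms.txt",                # LLM-oriented site summary
    "/agents.json",             # machine-readable API description
]

def discovery_urls(base_url: str) -> list[str]:
    """Return the full discovery URLs an agent would probe for a site."""
    return [urljoin(base_url, path) for path in DISCOVERY_PATHS]

print(discovery_urls("https://example.com"))
```

Because the paths are fixed, a crawler needs nothing but the domain name to start probing, which is exactly why the traffic showed up before any announcement.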

Within 24 hours, the picture got clearer. Requests from OpenAI, from infrastructure consistent with Anthropic's Claude, from what appeared to be an MCP client, and from LangChain-based systems. The machine traffic arrived faster and more consistently than the human traffic ever did.

We hadn't announced anything yet. We hadn't posted about it. We just put structured data at well-known URLs, and the agents found it.

What This Tells Us

AI systems are already actively reading the web for structured agent data. They're looking for agents.json, llms.txt, A2A Agent Cards — the same discovery protocols we had just implemented. This isn't theoretical. It isn't a whitepaper prediction. The agent-to-agent discovery layer is forming right now, and most of the industry hasn't noticed.

Think about what that means. Major AI labs have already built infrastructure that probes domains for machine-readable agent metadata. They're not waiting for a standard to be finalized and ratified by committee. They're reading the endpoints today, building indexes, and presumably using that data to improve tool-use and agent routing.

The parallel to early web search is hard to ignore. In 1994, if you put up a web page, WebCrawler found it before most humans did. We're at that moment again — except the "crawlers" are AI systems, and instead of HTML they're reading JSON-LD, Agent Cards, and structured capability descriptors.

The Discovery Gap

Right now, finding an AI agent is a human problem solved with human tools. You Google "best AI customer service agent," you read a listicle, you ask your network, you maybe check a directory. It works, but it's slow, subjective, and doesn't scale.

But here's the thing: agents need to find other agents too. An orchestration system that needs a specialized tool. A multi-agent workflow that needs to delegate a subtask. An enterprise platform that needs to dynamically compose agent pipelines. These systems can't Google it. They can't read a blog post and make a judgment call.

They need structured, queryable, machine-readable registries. They need to be able to ask: "What agents exist that can handle document processing, support A2A, and cost less than $0.50 per task?" And they need an answer in milliseconds, not minutes.
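A sketch of what answering that question could look like, assuming a simple in-memory registry. The record shape and field names below are hypothetical, invented for illustration, not Agentry's actual schema:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    capabilities: set[str]
    protocols: set[str]
    cost_per_task: float  # USD

def find_agents(registry, capability, protocol, max_cost):
    """Filter a registry for agents matching a capability, protocol, and budget."""
    return [
        a for a in registry
        if capability in a.capabilities
        and protocol in a.protocols
        and a.cost_per_task <= max_cost
    ]

# Hypothetical registry entries.
registry = [
    AgentRecord("doc-parser", {"document-processing"}, {"A2A", "MCP"}, 0.30),
    AgentRecord("ocr-pro", {"document-processing"}, {"A2A"}, 0.80),
    AgentRecord("chat-bot", {"customer-service"}, {"A2A"}, 0.10),
]

matches = find_agents(registry, "document-processing", "A2A", 0.50)
print([a.name for a in matches])  # → ['doc-parser']
```

The point isn't the filter logic, which is trivial; it's that the question is answerable at all. No listicle or blog post supports this kind of query.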

That gap — between how humans discover agents and how machines need to discover agents — is the thing most people building in this space are underestimating. The human-facing directory is table stakes. The machine-facing registry is the infrastructure that actually enables the agent economy.

DNS for Agents, Not Yellow Pages

There's an analogy that keeps coming back in our internal discussions: the real value of a directory might not be the website — the Yellow Pages — but the structured data layer underneath — the DNS.

The Yellow Pages was a human interface. You flipped through it, you found a plumber, you made a call. Useful, but fundamentally limited by human attention. DNS, on the other hand, is invisible infrastructure. No one "uses" DNS in the way they use a directory. But DNS is what makes the entire internet work — a machine-readable, programmatically queryable layer that resolves names to addresses billions of times per second.

When an agent can programmatically query "show me all customer service agents that support A2A and cost less than $1 per resolution," that's a fundamentally different thing than a human browsing a list. It's the difference between the Yellow Pages and DNS. One is a product. The other is infrastructure.

We think the real opportunity for agent directories isn't the website. It's becoming the resolution layer — the system that agents query to find other agents, automatically, at scale, using open protocols.

The Protocols Are Converging

What makes this moment interesting isn't any single protocol. It's that four complementary standards are emerging in the same window: A2A for agent-to-agent communication and Agent Cards, MCP for tool discovery, agents.json for describing a site's API, and llms.txt for giving LLMs a readable summary of a site.

The GetStream guide on agent protocols does a good job mapping how these layers interact: MCP handles tool discovery, A2A handles agent communication, and files like llms.txt and agents.json handle the initial "what is this site and what can it do?" question.

These protocols aren't competing. They're complementary layers of the same stack. An agent might discover a registry via llms.txt, understand its API via agents.json, query it for A2A Agent Cards, and then communicate with a discovered agent over A2A directly. Each protocol handles one step of the interaction.
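A toy sketch of the first two steps of that chain: scanning an llms.txt file for machine-readable resources, then reading an agents.json description. Both payloads below are invented for illustration, and the link-extraction logic is deliberately naive:

```python
import json

# A hypothetical llms.txt snippet: markdown-style links an agent can scan
# for machine-readable resources (format per the llms.txt proposal).
LLMS_TXT = """\
# Example Registry

- [Agent API description](https://example.com/agents.json)
- [A2A Agent Card](https://example.com/.well-known/agent.json)
"""

def extract_links(llms_txt: str) -> list[str]:
    """Pull the URLs out of an llms.txt file's markdown links (naive parse)."""
    return [
        line.split("](")[1].rstrip(")")
        for line in llms_txt.splitlines()
        if "](" in line
    ]

# A hypothetical agents.json body describing one queryable endpoint.
AGENTS_JSON = json.dumps({
    "name": "Example Registry",
    "endpoints": [{"path": "/api/agents", "method": "GET"}],
})

links = extract_links(LLMS_TXT)
api = json.loads(AGENTS_JSON)
print(links)
print(api["endpoints"][0]["path"])
```

From there, an agent would query the registry endpoint for Agent Cards and open an A2A session with whichever agent it selects: each layer hands off cleanly to the next.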

The infrastructure for agent discovery is being laid right now. Not in a spec committee. In production, by the systems actually doing the discovering.

What We're Building Toward

Agentry serves all of these protocols from one registry. You list your agent once, and it's discoverable via A2A Agent Cards, queryable through a structured search API, described in agents.json, readable via llms.txt, and findable in the MCP Registry.

Free to list. Free to query. The goal isn't to be a gatekeeper — it's to be useful infrastructure. Open protocols, open data, no lock-in.

We don't know exactly what the agent economy looks like in two years. But we know it needs a resolution layer. It needs somewhere agents can go to find other agents — reliably, programmatically, and using the protocols that are already being adopted. That's what we're building.

Get Involved

If you're building an agent, list it. It's free, and your agent becomes discoverable by both humans browsing the directory and machines querying the API. Add an A2A Agent Card to your agent for a trust score boost and richer discovery metadata.
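For reference, a minimal Agent Card might look like the sketch below. The field names follow the published A2A spec at the time of writing, but check the current spec before shipping; all values here are placeholders for an example agent:

```python
import json

# A minimal A2A Agent Card, served at /.well-known/agent.json.
# Field names follow the published A2A spec at the time of writing;
# the values are placeholders, not a real agent.
agent_card = {
    "name": "Example Support Agent",
    "description": "Answers customer-service questions over A2A.",
    "url": "https://agent.example.com/a2a",
    "version": "1.0.0",
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "answer-question",
            "name": "Answer question",
            "description": "Resolve a customer question from the knowledge base.",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```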

If you want to go deeper on A2A and how agent discovery works at the protocol level, read our explainer on the A2A protocol.

And if you're curious about what exactly we shipped on the machine-discovery side, our previous post covers all five discovery layers in detail.

List Your Agent for Free

Make your agent discoverable by both humans and machines. One listing, five discovery protocols. No lock-in.

List Your Agent · Read the A2A Explainer