Your site is probably Bot-Aware today. Not Agent-Ready. The distinction isn't just semantic: it decides whether Claude, ChatGPT, Perplexity, or a commercial agent can discover your APIs, read your prices, trigger a payment, and close a transaction without a human clicking. In 2026, that's what separates sites capturing agentic traffic from sites invisible to the new buyers.
This guide explains the 5 categories of agent-readiness checks, what each one means concretely, and how to scan your site for free in under 6 seconds with Scan Agent Ready. No theory. A checklist.

Why "AI Agents" Is a Different Problem From "AI SEO"
AI SEO is making sure your content gets cited inside ChatGPT, Perplexity, and Gemini answers. You're optimizing for the human who asked the question.
Agent readiness is making sure an autonomous agent (an AI assistant, a commercial agent, a programmatic crawler) can discover your services, understand your APIs, pay for your product, and trigger an action without human intervention. You're optimizing for the machine that acts on behalf of the human.
Two related but distinct problems. You can rank well inside ChatGPT and still be completely opaque to an agent trying to book, buy, or integrate your service. Most sites in 2026 sit in that gap: citable, but not executable.
The 4 Levels of Agent Readiness
xSeek scores sites on 4 progressive tiers. Each tier unlocks another layer of agentic interaction.
- Level 1: Web-discoverable. robots.txt, sitemap.xml, basic metadata. The minimum to exist on the traditional web.
- Level 2: Bot-Aware. The site knows about AI bots, exposes specific rules, signals preferences via Content Signals. This is where most well-maintained sites live in 2026.
- Level 3: Agent-Readable / Agent-Friendly. Agent discovery active: MCP Server Card, API Catalog (RFC 9727), documented OAuth 2.0. An agent can understand what you offer and how to connect to it.
- Level 4: Agent-Native. Agent commerce active: ACP, AP2, MPP, UCP, x402. An agent can buy, pay, and negotiate without a human in the loop.
Going from Level 2 to Level 4 typically takes a few weeks to a couple of months of focused engineering. But 95% of sites have never audited their position. That's the first diagnostic to run.
Category 1: Agent Commerce (5 Checks)
The top of the pyramid. If your business model involves transactions (e-commerce, SaaS, marketplace), these protocols decide whether an agent can buy directly.
- ACP (Agent Commerce Protocol): discovery document announcing that the site supports agentic commerce. Without ACP, the agent doesn't even know it can attempt a transaction.
- AP2 (Agent Payment Protocol via A2A Agent Card): lets an agent present payment credentials and negotiate terms.
- MPP (Machine Payment Protocol): discovery layer for the payment methods the site accepts.
- UCP (Unified Commerce Profile): unified profile describing products, prices, return policy, supported currencies.
- x402 payments: native HTTP payment protocol built on the HTTP 402 status code, designed for agent-to-server transactions (see the sketch below).
For most content sites (blogs, media, brand awareness), these 5 checks don't apply and a fail is expected. For a transactional site (SaaS, e-commerce, services), failing all 5 means you're invisible to agentic commerce.
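To make the x402 pattern concrete, here is a minimal sketch of what an agent-facing paid endpoint might do, assuming an Express backend. The header and field names (X-PAYMENT, maxAmountRequired, payTo, and so on) are illustrative assumptions modeled on public x402 material, not a verified copy of the spec; check the current protocol documentation before implementing.

```typescript
import express from "express";

const app = express();

// Paid resource: without a payment payload, answer 402 and describe how to pay.
app.get("/api/reports/:id", (req, res) => {
  // Hypothetical header carrying the agent's signed payment payload.
  const paymentProof = req.header("X-PAYMENT");

  if (!paymentProof) {
    return res.status(402).json({
      // Field names below are illustrative assumptions, not normative.
      x402Version: 1,
      accepts: [
        {
          scheme: "exact",                  // "pay exactly this amount"
          network: "base",                  // settlement network (placeholder)
          asset: "USDC",
          maxAmountRequired: "10000",       // smallest-unit amount, e.g. 0.01 USDC
          payTo: "0xYourReceivingAddress",  // placeholder recipient
          resource: `/api/reports/${req.params.id}`,
        },
      ],
    });
  }

  // A real integration would verify and settle the payment (for example via a
  // facilitator service) before releasing the resource.
  res.json({ id: req.params.id, content: "paid resource" });
});

app.listen(3000);
```

The key idea: 402 stops being a dead end and becomes a machine-readable price tag the agent can act on.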
Category 2: Agent Discovery (7 Checks)
The core of Level 3 readiness. How an AI agent finds, understands, and connects to your services.
- MCP Server Card: the file at /.well-known/mcp/server-card.json announcing your site's MCP (Model Context Protocol) capabilities. Since 2025, this has become the de facto standard for exposing tools to AI agents.
- API Catalog (RFC 9727): machine-readable announcement of every public API your site exposes. Official IETF standard.
- WebMCP: WebMCP tools detectable on page load, for client-side agent-page interaction.
- A2A Agent Card: agent card describing your service's capabilities as an agent itself (for Agent-to-Agent architectures).
- Agent Skills: index of the skills your site exposes to agents.
- OAuth 2.0 Discovery (RFC 8414): OAuth authorization metadata so an agent can obtain an access token.
- OAuth Protected Resource: metadata for resources protected by OAuth.
If you fail on the MCP Server Card and API Catalog but pass on OAuth, you're halfway to Level 3. Adding those two well-known files usually takes half a day of dev work.
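As a rough illustration of that half-day of work, here is a sketch of both files served from an Express app. The MCP server-card fields are assumptions based on how the check is described above; the API catalog uses the linkset shape (RFC 9264) that RFC 9727 builds on, simplified. All names and URLs are placeholders.

```typescript
import express from "express";

const app = express();

// MCP Server Card: tells agents where and how to reach your MCP server.
// Field names here are illustrative; align them with your MCP tooling.
app.get("/.well-known/mcp/server-card.json", (_req, res) => {
  res.json({
    name: "Example SaaS MCP Server",
    description: "Pricing, availability, and booking tools exposed over MCP.",
    endpoint: "https://mcp.example.com", // where agents should connect (placeholder)
    transport: "streamable-http",
    version: "1.0.0",
  });
});

// API Catalog (RFC 9727): a linkset pointing to your API descriptions and docs.
app.get("/.well-known/api-catalog", (_req, res) => {
  const catalog = {
    linkset: [
      {
        anchor: "https://www.example.com/",
        "service-desc": [
          { href: "https://www.example.com/api/openapi.json", type: "application/vnd.oai.openapi+json" },
        ],
        "service-doc": [
          { href: "https://www.example.com/docs/api", type: "text/html" },
        ],
      },
    ],
  };
  res.setHeader("Content-Type", "application/linkset+json");
  res.send(JSON.stringify(catalog));
});

app.listen(3000);
```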
Category 3: Discoverability (3 Checks)
The base layer. Fail here and your site isn't even Level 1.
- sitemap.xml: valid structure, current, accessible. AI agents scan sitemaps to map your content.
- robots.txt: valid file exposing your access rules. Has to exist, has to be clean.
- Link headers (RFC 8288): HTTP Link headers exposing relations useful to agents (api-catalog, service-doc, etc.).
Most modern CMS platforms handle these 3 by default. If you fail, you've probably inherited a misconfiguration from a theme or plugin.
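For the Link headers check, a few lines of Express middleware are usually enough. This is a minimal sketch: the api-catalog relation comes from RFC 9727, service-doc from RFC 8631, and the target paths are placeholders.

```typescript
import express from "express";

const app = express();

// Advertise agent-relevant link relations on every response (RFC 8288).
app.use((_req, res, next) => {
  res.append(
    "Link",
    '</.well-known/api-catalog>; rel="api-catalog", </docs/api>; rel="service-doc"'
  );
  next();
});
```

You can verify the result with curl -sI against your homepage and look for the Link header in the response.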
Category 4: Bot Access Control (3 Checks)
Not every AI agent is welcome. This category checks that you communicate your preferences clearly.
- Web Bot Auth: directory authenticating allowed bots. Informational check (no penalty if missing), but useful when you want to distinguish legitimate bots from scrapers.
- Content Signals: signals in robots.txt indicating how your content can be used (AI training, search, indexing). A recent standard backed by Cloudflare and adopted by the major AI engines.
- AI bot rules in robots.txt: you can either have bot-specific rules (User-agent: GPTBot, User-agent: PerplexityBot, etc.) or generic wildcards that apply to everyone. Both pass.
For most B2B sites, the goal is to explicitly allow the AI bots tied to engines you want citations from (GPTBot, OAI-SearchBot, PerplexityBot, ClaudeBot, Google-Extended) and block unidentified scrapers.
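Here is what that can look like in practice: a robots.txt sketch combining explicit AI bot rules with Content Signals. The Content-Signal line follows the syntax of the Cloudflare-backed proposal as published in 2025; treat the exact directive names as something to verify against the current spec, and adapt the allow/disallow choices to your own policy.

```
# Explicitly allow the AI crawlers you want citations and agent traffic from
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

# Default rules for everyone else, with Content Signals stating how the
# content may be used (search yes, AI answers yes, AI training no)
User-agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml
```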
Category 5: Content Accessibility (1 Check)
- Markdown content negotiation: ability to serve a clean Markdown version of your pages when an agent sends an Accept: text/markdown header. AI agents process Markdown far more efficiently than HTML loaded with styles, scripts, and trackers.
This is one of the most advanced checks in the entire audit. Very few sites pass it in 2026, but it's a real competitive edge: an agent that gets a clean Markdown version of your pricing page extracts your offer correctly, while the competitor whose equivalent page returns 200 KB of HTML soup gets misread or skipped.
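A minimal sketch of how that negotiation can work, assuming an Express app and assuming you already produce a Markdown rendition of each page (getMarkdownFor below is a hypothetical helper, not a real library function):

```typescript
import express from "express";

const app = express();

// Hypothetical helper: return the pre-rendered Markdown for a path, or null.
// In practice this could read a build artifact, query your CMS, or convert
// the HTML on the fly.
async function getMarkdownFor(path: string): Promise<string | null> {
  return null; // placeholder
}

app.use(async (req, res, next) => {
  // req.accepts() respects the Accept header's quality values; if the agent
  // prefers text/markdown over text/html, serve the Markdown rendition.
  if (req.accepts(["text/html", "text/markdown"]) === "text/markdown") {
    const markdown = await getMarkdownFor(req.path);
    if (markdown) {
      res.type("text/markdown").send(markdown);
      return;
    }
  }
  next(); // fall through to the normal HTML rendering
});
```

The same logic ports to a Next.js middleware or route handler; the only real work is producing the Markdown rendition itself.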
How to Read Your Bot-Aware (Level 2) Score
If your scan looks like 9 passed, 1 info, 9 failed with "Level 2 · Bot-Aware," here's how to read it.
What's solid: sitemap, robots.txt, Link headers, Content Signals, AI bot rules, MCP Server Card, OAuth 2.0 Discovery, OAuth Protected Resource, API Catalog. You're already above the median 2026 web.
What's missing: the 5 agentic commerce protocols (ACP, AP2, MPP, UCP, x402), WebMCP, A2A Agent Card, Agent Skills, and Markdown negotiation.
Priority action plan to reach Level 3:
- Add an A2A Agent Card. A JSON document describing your service as an agent (a sketch follows after this list). Plug-and-play since 2025 for sites on a modern stack.
- Publish an Agent Skills index. Machine-readable list of the actions your site can execute for an agent.
- Enable Markdown negotiation. When an agent sends Accept: text/markdown, your server returns a clean version. A few lines of middleware on Next.js or Express.
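For the first item, here is a hedged sketch of what an A2A Agent Card endpoint might return. The well-known path and field names follow the public A2A drafts as of 2025 and should be checked against the current spec; everything else is placeholder content.

```typescript
import express from "express";

const app = express();

// A2A Agent Card: a JSON document describing your service as an agent.
app.get("/.well-known/agent.json", (_req, res) => {
  res.json({
    name: "Example Booking Service",
    description: "Checks availability and books appointments on behalf of other agents.",
    url: "https://agent.example.com/a2a", // A2A endpoint (placeholder)
    version: "1.0.0",
    capabilities: { streaming: false },
    defaultInputModes: ["text/plain"],
    defaultOutputModes: ["application/json"],
    skills: [
      {
        id: "check-availability",
        name: "Check availability",
        description: "Returns open time slots for a given service and date range.",
      },
    ],
  });
});
```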
To reach Level 4 (Agent-Native), you need at least ACP + a payment protocol (AP2 or x402) + UCP. That's a 4 to 8 week backend project, justifiable when your revenue genuinely depends on agent-driven transactions.
Why Audit Now and Not in 12 Months
Three concrete reasons.
First mover, first cited. AI agents memorize sites where they can act successfully. An agent that completes one transaction on your site comes back. An agent that fails goes to the competitor and won't reconsider you until the next model iteration.
Technical competition is still low. In 2026, fewer than 5% of B2B sites have implemented MCP, ACP, or A2A protocols. A 2-to-4-week effort puts you in the top 5%. In 18 months, this will be table stakes and you'll be catching up, not getting ahead.
Agent-driven budgets are exploding. 2026 estimates put the agentic commerce market at $200B by 2030. A meaningful fraction of that traffic will never pass through a human looking at a screen. If your site isn't discoverable to the agents spending those budgets, you're literally invisible to the money.
How to Scan Your Site
xSeek's Scan Agent Ready tool is free, requires no signup, and delivers a report in under 6 seconds. You get:
- Your current readiness level (1 to 4)
- Number of passed, info, and failed checks
- Detail on every check with what it means
- A prioritized action plan to reach the next level
Drop in your URL. Read the report. Hand it to your dev with "here's what's missing." It's the fastest diagnostic that exists for this question in 2026.
FAQ
What does it mean for a website to be ready for AI agents?
Being ready for AI agents means an autonomous agent (Claude, ChatGPT, Perplexity, or a dedicated commercial agent) can discover your services, understand your APIs, and execute actions (read prices, book, buy, integrate) without human intervention. xSeek measures this readiness on 4 progressive levels: Web-discoverable, Bot-Aware, Agent-Readable/Agent-Friendly, and Agent-Native.
What's the difference between AI SEO and agent readiness?
AI SEO optimizes your content so a human asking ChatGPT a question gets an answer that cites you. Agent readiness optimizes your infrastructure so an autonomous agent (with no human in the loop) finds, understands, and uses your services. You can be strong on AI SEO and weak on agent readiness, or vice versa.
What is the Bot-Aware level?
Bot-Aware is Level 2 of 4 in xSeek's framework. Your site recognizes AI bots, exposes specific rules (e.g., via Content Signals in robots.txt), and signals preferences. Most well-maintained 2026 sites sit at this level. Level 3 (Agent-Readable) is where real differentiation begins.
Do I need Level 4 if I don't run e-commerce?
No. The agent commerce protocols (ACP, AP2, MPP, UCP, x402) target transactional sites. A blog, media site, or brand awareness site should aim for Level 3 (Agent-Readable) with strong MCP, OAuth, and API catalog coverage. Level 4 is only relevant when you sell something an agent should be able to buy directly.
What is MCP and why does it matter?
MCP (Model Context Protocol) is the open standard backed by Anthropic since 2024 for exposing tools and resources to AI agents. An MCP Server Card at /.well-known/mcp/server-card.json announces what your site can do for an agent. In 2026, it's the foundation of agentic discovery, the same way robots.txt was the foundation of web discovery in 2010.
How much does it cost to reach Level 3?
For most sites on a modern stack (Next.js, Vercel, Cloudflare), going from Level 2 to Level 3 takes 1 to 2 weeks of backend dev work. Estimated cost: $5,000 to $15,000 CAD depending on your existing stack. For Level 4, plan for 4 to 8 weeks and $30,000 to $80,000 CAD depending on how many protocols you need to implement.
How do I scan my site for free?
Go to https://www.xseek.io/fr/outils/scan-agent-ready, enter your URL, and you'll get a report in under 6 seconds. No signup required. The report includes your current level, detail on all 19 checks, and the prioritized action plan to reach the next level.
What does the API Catalog (RFC 9727) do?
The API Catalog is a /.well-known/api-catalog file that lists every public API on your site in a machine-readable format. An AI agent can read it to understand which data it can fetch and which actions it can trigger. It's the agentic equivalent of an "API Documentation" page, but designed for machines instead of human developers.
