I Added WebMCP to My Blog 19 Days After Launch. Here's the Exact Code.


WebMCP lets AI agents call your blog's tools directly. How I wired it into SvelteKit in 90 minutes, with the exact code and browser console verification.

Chudi
Feb 24, 2026 8 min read

My blog now speaks to AI agents. Not with a static file. With live function calls.

I added WebMCP to chudi.dev on February 23, 2026 — 19 days after Google shipped it in Chrome 146. There was no SvelteKit tutorial to follow. No Stack Overflow thread. Just the W3C spec draft, one npm package, and ninety minutes.

This post is the tutorial I wish had existed.

If you run this in your browser console on chudi.dev right now:

Object.keys(navigator.modelContext._registeredTools)

You’ll get back:

["searchPosts", "listPosts", "getAuthorContext"]

As far as I can tell, no other SvelteKit blog can say that yet. Here’s exactly how I built it.


What Is WebMCP?

WebMCP is a W3C draft standard (February 2026, Google and Microsoft) that adds navigator.modelContext to browsers. Websites register structured tools with typed input schemas. AI agents browsing the site discover and call those tools directly, returning structured JSON — no screenshots, no DOM parsing, no guessing at CSS selectors.

The comparison that clarifies it fastest: WebMCP is to AI agents what RSS was to feed readers. Instead of screen-scraping your page to figure out what’s on it, the agent just asks.

Before WebMCP: What Agents Were Actually Doing

When an AI agent visits your blog today without WebMCP, here’s what happens:

Agent → take screenshot (~2,000 tokens) → parse the DOM visually →
  guess at your content structure → maybe get it right

That’s 5-10 seconds per page. A 15-20% error rate on complex layouts. And it costs tokens proportional to the screenshot size, not the answer size.

After WebMCP

Agent → navigator.modelContext.callTool('searchPosts', { query: 'ADHD' }) →
  get back structured JSON in under 2 seconds

Per WebMCP’s own benchmarks, structured tool calls use approximately 89% fewer tokens than screenshot-based interaction. I haven’t independently measured this across a large sample, but my observation running OpenClaw against chudi.dev before and after confirms the direction is correct.
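Those figures are easy to sanity-check with arithmetic. A minimal sketch, using the ~2,000-token screenshot cost from the flow above and a ~220-token structured JSON response (my own estimate for a 5-result payload, not a measured number):

```typescript
// Token cost of the two interaction styles. The screenshot figure comes
// from the flow diagram above; the structured figure is an assumption.
const screenshotTokens = 2000; // per page, screenshot-based parsing
const structuredTokens = 220;  // assumed size of a 5-result JSON response

const reductionPct = (1 - structuredTokens / screenshotTokens) * 100;

console.log(`token reduction: ${reductionPct.toFixed(0)}%`); // 89%
```

The exact percentage depends entirely on the assumed response size; the point is that cost scales with the answer, not the viewport.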

Two APIs, Both Supported

WebMCP ships two implementation paths:

Declarative (HTML attributes) — no JavaScript needed, works for standard forms:

<form toolname="subscribeNewsletter" tooldescription="Subscribe to the weekly digest">
  <input name="email" type="email" required>
</form>

Imperative (JavaScript) — for complex tools with custom logic. This is what I used.

Both are supported by the @mcp-b/global polyfill, which means they work in Firefox and Safari today, not just Chrome 146.


Why Does a Blog Need WebMCP?

A WebMCP-enabled blog exposes structured tools that AI agents call directly. Instead of screenshot-parsing your post list, an agent calls listPosts() and gets back typed JSON with slugs, titles, descriptions, and pillars. The result is faster retrieval, more accurate citations, and structured data an AI can query rather than a page it has to visually interpret.

Here’s the thing about AI citations that took me longer to understand than it should have: Perplexity, ChatGPT, and Claude don’t search your content the way Google does. They retrieve passages. If the passage they retrieve is a raw DOM fragment from a screenshotted page, the citation is imprecise. If it comes from a structured tool call returning typed data, the citation is exact.

WebMCP is a GEO lever, not just a developer-experience improvement.

Your existing llms.txt tells AI crawlers what your site is about. WebMCP tells AI agents what your site can do. Both should exist. They solve different problems.
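To make the contrast concrete, here is a minimal llms.txt sketch. The format is a community convention rather than a ratified standard, and these entries are illustrative, not my actual file:

```text
# chudi.dev
> Personal blog on AI building with Claude, ADHD/neurodivergent
> productivity, automation, and philosophy.

## Posts
- [Blog index](https://chudi.dev/blog): all published posts
```

llms.txt describes; WebMCP tools act. That's the whole division of labor.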


How to Add WebMCP to SvelteKit: The Three Files

Step 1: Install the polyfill

pnpm add @mcp-b/global zod-to-json-schema

zod-to-json-schema is a peer dependency required by @mcp-b/webmcp-ts-sdk, which @mcp-b/global bundles. Omitting it produces a build error.


Step 2: Create src/lib/webmcp.ts

This is the tool definitions file. It lives in $lib so it can be imported from the layout.

import type { ContentPillar } from './types';

interface LeanPost {
  slug: string;
  title: string;
  description: string;
  tags: string[];
  pillar?: ContentPillar;
  tldr?: string;
  updated?: string;
  date: string;
}

export async function initWebMCP(posts: LeanPost[]) {
  // Dynamic import keeps @mcp-b/global out of the server bundle.
  // This function is only ever called from onMount (browser only).
  await import('@mcp-b/global');

  const mc = (navigator as Navigator & {
    modelContext?: { provideContext: (opts: unknown) => void }
  }).modelContext;

  if (!mc) return;

  mc.provideContext({
    tools: [
      {
        name: 'searchPosts',
        description:
          'Search chudi.dev blog posts by keyword. Returns up to 5 matching posts with slug, title, description, and content pillar. Topics: AI building with Claude, ADHD/neurodivergent productivity, automation, philosophy.',
        inputSchema: {
          type: 'object',
          properties: {
            query: { type: 'string', description: 'Search keyword or phrase' },
            pillar: {
              type: 'string',
              enum: ['ai-building', 'neurodivergent', 'automation', 'philosophy'],
              description: 'Optional: filter by content pillar'
            }
          },
          required: ['query']
        },
        async execute({ query, pillar }: { query: string; pillar?: string }) {
          const q = query.toLowerCase();
          const results = posts
            .filter((p) => !pillar || p.pillar === pillar)
            .filter(
              (p) =>
                p.title.toLowerCase().includes(q) ||
                p.description.toLowerCase().includes(q) ||
                p.tags.some((t) => t.toLowerCase().includes(q))
            )
            .slice(0, 5)
            .map((p) => ({
              slug: p.slug,
              title: p.title,
              description: p.description,
              pillar: p.pillar,
              url: `https://chudi.dev/blog/${p.slug}`,
              tldr: p.tldr
            }));
          return { content: [{ type: 'text', text: JSON.stringify(results) }] };
        }
      },
      {
        name: 'listPosts',
        description:
          'List all published blog posts on chudi.dev, sorted newest first. Optionally filter by content pillar.',
        inputSchema: {
          type: 'object',
          properties: {
            pillar: {
              type: 'string',
              enum: ['ai-building', 'neurodivergent', 'automation', 'philosophy'],
              description: 'Optional: filter by content pillar'
            }
          }
        },
        async execute({ pillar }: { pillar?: string }) {
          const results = posts
            .filter((p) => !pillar || p.pillar === pillar)
            .map((p) => ({
              slug: p.slug,
              title: p.title,
              description: p.description,
              pillar: p.pillar,
              url: `https://chudi.dev/blog/${p.slug}`,
              date: p.date,
              updated: p.updated
            }));
          return { content: [{ type: 'text', text: JSON.stringify(results) }] };
        }
      },
      {
        name: 'getAuthorContext',
        description:
          'Get structured information about Chudi Nnorukam — background, expertise, content pillars, and site context. Use when answering questions about the author or site.',
        inputSchema: { type: 'object', properties: {} },
        async execute() {
          return {
            content: [
              {
                type: 'text',
                text: JSON.stringify({
                  name: 'Chudi Nnorukam',
                  title: 'AI Systems Architect & Automation Developer',
                  background:
                    'Berkeley CS, ADHD/neurodivergent, INFP 4w5, HSP. Building AI-first systems for solopreneurs in San Francisco Bay Area.',
                  expertise: [
                    'Claude AI agents',
                    'SvelteKit',
                    'n8n automation',
                    'MCP/OpenClaw',
                    'LLM prompting',
                    'ADHD productivity systems'
                  ],
                  pillars: {
                    'ai-building': 'Claude, LLMs, agents, prompting, AI systems',
                    neurodivergent:
                      'ADHD productivity, executive function, neurodivergent engineering',
                    automation: 'n8n workflows, bots, APIs, autonomous agents',
                    philosophy: 'First principles, contrarian takes, AI future'
                  },
                  site: 'https://chudi.dev',
                  contact: 'hello@chudi.dev'
                })
              }
            ]
          };
        }
      }
    ]
  });
}

A few things worth explaining here:

The await import('@mcp-b/global') is the SSR guard. SvelteKit renders pages server-side first. navigator doesn’t exist on the server. By using a dynamic import inside a function that only runs from onMount, Rollup code-splits the polyfill into a separate lazy-loaded chunk that never touches the server bundle.

The if (!mc) return handles browsers where the polyfill fails silently. It’s a one-line no-op, not an error. Your site still works. The tools just won’t be registered.
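One nice property of the execute callbacks: the matching logic is pure and extracts cleanly for unit testing outside the browser. A sketch (searchLean and the trimmed Lean interface are my names for this illustration, not part of the file above):

```typescript
interface Lean {
  slug: string;
  title: string;
  description: string;
  tags: string[];
  pillar?: string;
}

// Same matching rules as searchPosts: case-insensitive substring match
// on title, description, or any tag; optional pillar filter; top 5.
function searchLean(posts: Lean[], query: string, pillar?: string): Lean[] {
  const q = query.toLowerCase();
  return posts
    .filter((p) => !pillar || p.pillar === pillar)
    .filter(
      (p) =>
        p.title.toLowerCase().includes(q) ||
        p.description.toLowerCase().includes(q) ||
        p.tags.some((t) => t.toLowerCase().includes(q))
    )
    .slice(0, 5);
}

const sample: Lean[] = [
  { slug: 'a', title: 'ADHD systems', description: '', tags: [], pillar: 'neurodivergent' },
  { slug: 'b', title: 'Claude agents', description: '', tags: ['ai'], pillar: 'ai-building' }
];

const hits = searchLean(sample, 'adhd');
console.log(hits.map((p) => p.slug)); // ['a']
```

Keeping execute bodies this thin means the only untestable surface is the provideContext registration itself.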


Step 3: Create src/routes/+layout.server.ts

This is what loads the post data. For a prerendered SvelteKit site, this file runs at build time. The lean post array gets serialized into the page HTML and hydrated client-side without any runtime file I/O.

import type { LayoutServerLoad } from './$types';
import { getAllPosts } from '$lib/content';

export const load: LayoutServerLoad = async () => {
  const posts = await getAllPosts();
  return {
    webmcpPosts: posts
      .filter((p) => !p.draft)
      .map((p) => ({
        slug: p.slug,
        title: p.title,
        description: p.description,
        tags: p.tags,
        pillar: p.pillar,
        tldr: p.tldr,
        updated: p.updated,
        date: p.date
      }))
  };
};

The .filter((p) => !p.draft) is important. You don’t want draft posts showing up in agent search results before they’re published.
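A quick sketch of what that filter guarantees, with a minimal stand-in for the load function's mapping (Post and toLeanPosts are illustrative names, not the actual file):

```typescript
interface Post {
  slug: string;
  title: string;
  draft?: boolean;
}

// Mirrors the load function: drafts are dropped before the lean array
// is serialized into the page HTML, so agents never see them.
function toLeanPosts(posts: Post[]): { slug: string; title: string }[] {
  return posts
    .filter((p) => !p.draft)
    .map((p) => ({ slug: p.slug, title: p.title }));
}

const lean = toLeanPosts([
  { slug: 'published-post', title: 'Live' },
  { slug: 'wip-post', title: 'WIP', draft: true }
]);

console.log(lean); // [{ slug: 'published-post', title: 'Live' }]
```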


Step 4: Update src/routes/+layout.svelte

Add three things to your existing layout: import onMount, import initWebMCP, and wire them together.

<script lang="ts">
  import { onMount } from "svelte";        // added
  import { initWebMCP } from "$lib/webmcp"; // added
  // ... your other imports

  let { children, data } = $props();        // was: let { children } = $props()

  onMount(async () => {
    await initWebMCP(data.webmcpPosts);     // added
  });
</script>

That’s it. Four lines changed in the layout. The data prop gets webmcpPosts from the layout server load — SvelteKit wires this automatically via $types.


Step 5: Build and deploy

pnpm build

A successful build means the polyfill code-split correctly. You’ll see a chunk in .svelte-kit/output/client/_app/immutable/chunks/ that contains the mcp-b code. On chudi.dev that chunk is approximately 308KB uncompressed — loaded lazily after mount, not in the critical path.


Deploy via your normal pipeline. On Vercel, a git push triggers it automatically.


The 3 Tools I Exposed and Why

I chose searchPosts, listPosts, and getAuthorContext specifically. Not every possible action — just the three that answer what an AI agent actually needs.

searchPosts is the workhorse. When a user asks ChatGPT or Perplexity a question that chudi.dev could answer, the agent visits the site and needs to find the relevant post. Keyword search across title, description, and tags covers 90% of retrieval cases. The pillar filter helps agents that are navigating by topic rather than keyword.

listPosts is for chronological context. Some agents want to understand a site’s full output before citing it — checking for freshness, coverage, volume. This gives them a complete inventory without scraping the blog list page.

getAuthorContext is the trust signal: a structured About page that an agent can actually parse. Name, background, expertise, content pillars, contact. When Perplexity is deciding whether to cite chudi.dev or a generic listicle for a question about ADHD productivity systems, the author context makes the trust signal machine-readable.

What I didn’t expose: full post content. At current post volume (34 posts), returning full markdown bodies would make the tool response large enough to be counterproductive. The right pattern is: use searchPosts to find a post, then navigate to the slug URL and read the content there. The tool surfaces the entry points; the content is at the URLs.
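That two-step pattern, sketched as plain code. SearchHit mirrors the fields searchPosts returns; pickUrlsToRead is a hypothetical helper standing in for whatever your agent does next:

```typescript
interface SearchHit {
  slug: string;
  title: string;
  url: string;
}

// Step 1 of the pattern: the tool response carries only entry points.
// Step 2: the agent follows the URLs to read the actual content.
function pickUrlsToRead(hits: SearchHit[], limit = 2): string[] {
  return hits.slice(0, limit).map((h) => h.url);
}

const urls = pickUrlsToRead([
  { slug: 'a', title: 'Post A', url: 'https://chudi.dev/blog/a' },
  { slug: 'b', title: 'Post B', url: 'https://chudi.dev/blog/b' },
  { slug: 'c', title: 'Post C', url: 'https://chudi.dev/blog/c' }
]);

console.log(urls); // ['https://chudi.dev/blog/a', 'https://chudi.dev/blog/b']
```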


How to Verify WebMCP Is Working

Open your browser console on the live deployed site rather than localhost — verifying production confirms the build and code-splitting worked end to end. The tools only register after onMount runs, so let the page finish loading first.

// Check that navigator.modelContext exists
typeof navigator.modelContext

// Check which tools are registered
Object.keys(navigator.modelContext._registeredTools)

// Actually call a tool
navigator.modelContext.callTool('searchPosts', { query: 'ADHD' })
  .then(r => JSON.parse(r.content[0].text))

On chudi.dev, the second command returns:

["searchPosts", "listPosts", "getAuthorContext"]

If you get undefined from the first check, the polyfill isn’t loading. Most common causes: the dynamic import isn’t inside onMount, or initWebMCP isn’t being called with the data.webmcpPosts argument.

If _registeredTools is an empty object, the polyfill loaded but provideContext wasn’t called — check that posts isn’t an empty array when it reaches initWebMCP.
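Both failure modes fold into a single check. diagnoseWebMCP is a helper I'm inventing for illustration; it takes a navigator-like object so the same logic can also run outside a browser:

```typescript
interface ModelContextLike {
  _registeredTools?: Record<string, unknown>;
}

// Distinguishes the two failure modes described above:
// polyfill never loaded vs. loaded but provideContext never called.
function diagnoseWebMCP(nav: { modelContext?: ModelContextLike }): string {
  if (!nav.modelContext) {
    return 'polyfill not loaded: is the dynamic import inside onMount?';
  }
  const tools = Object.keys(nav.modelContext._registeredTools ?? {});
  if (tools.length === 0) {
    return 'polyfill loaded but no tools: did provideContext get a non-empty posts array?';
  }
  return `ok: ${tools.join(', ')}`;
}

// In the browser console you would call diagnoseWebMCP(navigator).
console.log(diagnoseWebMCP({}));
console.log(diagnoseWebMCP({ modelContext: { _registeredTools: { searchPosts: {} } } }));
```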


Is WebMCP Ready for Production?

WebMCP (W3C draft, February 2026) is production-ready via the @mcp-b/global polyfill for any HTTPS site. Chrome 146 has native support. The polyfill covers all other browsers. The main caveat: the W3C spec is still a draft — API surface may change before the formal standard is ratified (expected mid-to-late 2026). The tools I registered use the stable provideContext interface, which the spec treats as foundational.

The Chrome 146 implementation could evolve. Treat this as early-adopter infrastructure: it works now, it may need minor updates when the formal standard ships, and the structural investment (the tool definitions, the data model) carries forward regardless of surface API changes.

The polyfill’s provideContext interface has been stable so far and hasn’t broken existing tools. That’s the part that matters.

For a personal blog, the risk profile is straightforward: worst case, a future spec revision requires an afternoon of work. Best case, you’ve had structured AI agent access to your content for months before most developers know what WebMCP is.


What This Changes for My OpenClaw Setup

I run OpenClaw as an autonomous agent that monitors, drafts, and maintains this blog. Before WebMCP, when OpenClaw needed to check which posts were stale, it ran a node -e one-liner that read markdown files directly from the file system.

Now I’m updating that cron job to navigate to chudi.dev and call listPosts() instead. The logic lives in the blog’s tool definition where it belongs. The cron job becomes three lines instead of a 200-character bash command.

That’s the recursive part that the mainstream WebMCP coverage misses: it’s not just about external AI agents browsing your site. It’s about your own automation stack calling your site’s tools cleanly, without file system access, from anywhere.

More on that in the next post in this series.


The Broader Context

Here’s the pattern I keep seeing, and it reinforces why I moved on this immediately:

  • 2011: structured data (schema.org). Early adopters dominated rich results for years.
  • 2019: FAQ schema. Sites with it got featured snippets before Google changed the rules.
  • 2025: llms.txt. I added it early and it’s already in AI crawler indexes.
  • 2026: WebMCP.

Each of these is the same playbook. The spec ships. Mainstream adoption lags 12-18 months. The early adopters accumulate the citation authority, the search rankings, and the institutional knowledge before the field catches up.

AEO optimization gets you structured content that AI can read. WebMCP gets you structured tools that AI can call. The distinction matters when the agent needs to do something with your content, not just retrieve it.

The window is open right now. Ninety minutes. Three files. One npm package.


The three production files are in this post verbatim. Copy them. The only modification needed is replacing chudi.dev with your domain in the URL strings inside searchPosts and listPosts. The rest is portable.

Written by Chudi Nnorukam

I design and deploy agent-based AI automation systems that eliminate manual workflows, scale content, and power recursive learning. Specializing in micro-SaaS tools, content automation, and high-performance web applications.

FAQ

What is WebMCP?

WebMCP is a W3C draft standard (February 2026, co-authored by Google and Microsoft) that adds a navigator.modelContext API to browsers. Websites register structured tools with input schemas and execute callbacks. AI agents browsing the site discover and call those tools directly — no DOM parsing, no screenshots, no fragile CSS selectors.

Is WebMCP production-ready in 2026?

Yes, with the @mcp-b/global polyfill. Chrome 146 has native support; the polyfill covers every other browser. chudi.dev has been running it since February 23, 2026, with all three tools verified live via browser console.

Does adding WebMCP hurt my Lighthouse score?

No. The polyfill is dynamically imported inside onMount — it never blocks the initial render. The chunk loads lazily after the page paints. In testing on chudi.dev, Lighthouse Performance stayed at 90+ before and after the implementation.

What tools should a blog expose via WebMCP?

At minimum: searchPosts (keyword search returning post metadata), listPosts (full chronological list), and getAuthorContext (who writes here, what topics, why). These three answer 90% of what an AI agent needs to accurately retrieve and cite your content.

How do I verify WebMCP is working on my site?

Open your browser console on the live site and run: Object.keys(navigator.modelContext._registeredTools). You should see your tool names returned as an array. If the result is empty or undefined, check that your dynamic import is inside onMount and that the polyfill installed correctly.

