8 AI Citations a Day After I Stopped Page-Level SEO
Bing AI cited my site 8 times a day after I stopped tuning individual pages. The principle: entity-level SEO is the floor; page-level work is the ceiling.
Why this matters
This is the principle post. The implementation playbook lives at Entity Optimization for Brands in AI Search. Read both: principle first, then the 5 concrete moves.
I spent 3 hours optimizing one page. The competitor’s worse page outranked me anyway.
In early February 2026 I ran a real experiment. I tried to make one URL on chudi.dev rank for “AI visibility audit”: clean H2 hierarchy, structured data, internal links, the works. Three hours of per-page work. The competitor outranked it inside 48 hours with a page that read like a Wikipedia stub.
Then Bing’s AI Performance Report showed something stranger. The site started getting cited by Bing AI Copilot. Not on the page I tuned. On pages I never touched. From 0 citations a day in late January to a peak of 21 citations a day by mid-March. The growth came from one move I made in mid-February: a Person + Organization entity-graph refactor across the site, not a per-page optimization on any single URL.
That gap, between page-level work and entity-level work, is what this post is about.
TL;DR
Bing AI cited chudi.dev 7.13 times per day on average across 89 days (Jan 24 to Apr 22 2026), with a March peak of 21.71 per day. The lift came from entity-engineering (Person + Organization JSON-LD with sameAs across 7 surfaces), not per-page SEO. Entity work is the floor; pages are the ceiling.
Per-page SEO has a hard ceiling, and the ceiling is your entity recognition
You can have perfect on-page SEO and still lose to a worse page on a recognized-entity domain. Google’s Knowledge Graph and the AI engines that piggy-back on it rank brands as entities, not pages. The engine tags the entity once; every new article inherits the authority. The page does not carry the entity; the entity carries the page.
For a DR-under-20 brand competing against DR-90 sites, this dynamic is the actual rate-limiter. You can rewrite your H1, add FAQ schema, internal-link to a pillar, and the engine’s verdict is still “we do not recognize this entity in the topic space; downrank.” The entity check happens at retrieval time, not after. Page improvements compound on top of entity recognition; they do not substitute for it.
This is why I keep meeting people who say “I’ve been writing about X for 18 months and still don’t rank.” Their pages are fine. Their entity is invisible.
Three quick markers that a brand has hit the entity ceiling:
- Pages with strong on-page SEO that sit at position 11 to 20 forever.
- New posts that don’t compound: each one starts from zero impressions instead of inheriting domain authority.
- AI engines that recognize the brand name but never recommend the brand in adjacent topic queries.
If you nod at all three, the bottleneck is entity, not page.
“Just write more content” does not move the entity needle. It moves the page count.
The reflexive response to flat rankings is: write more posts. Five posts a week, every week, until something hits. I tried this in late 2025. The page count went up. The entity-graph signals stayed flat. AI citations stayed at zero through six months of consistent publishing.
The reason is mechanical. Each new post is, to the Knowledge Graph, an isolated URL pointing to an unknown brand. The entity binding (this URL belongs to that entity, who is an authority on that topic) lives in the JSON-LD sameAs cluster, the Person/Organization schema, and the Wikipedia/Wikidata anchors. Posts without those anchors are weightless. The engine treats them as orphan pages from an unverified author.
What also doesn’t work, at sub-DR-20 scale:
- Cross-posting to Medium and Dev.to without a canonical-URL strategy. Absent a canonical pointing back to your domain, the entity signal bleeds to the higher-DR cross-post host.
- Adding more keywords to existing posts. Keyword density past a low threshold is decorative; entity recognition is the gating signal.
- Buying directory backlinks. Directory authority is decoupled from entity recognition in 2026’s Knowledge Graph.
- Filing for a Google Knowledge Panel. Panels are downstream of entity recognition, not upstream. You have to BE the entity before Google will show the panel.
The compounding-pain pattern is brutal here. Three months of consistent publishing without entity work produces 30 to 50 unranked pages. Each one feels like progress. Cumulatively they bury the few pages that might have ranked, because the entity signal is split across too many low-signal URLs.
The 5 entity moves that compound across the whole site
The actual move-set is small. Five concrete steps, ordered by impact-per-hour. Each move binds one more surface to the Person or Organization entity. Together they form the sameAs cluster that AI engines treat as ground truth for “this brand is the same entity across the web.” Skip a move and the cluster has a hole; the engine downgrades the entity binding correspondingly.
```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://chudi.dev/about#author",
  "name": "Chudi Nnorukam",
  "url": "https://chudi.dev/about",
  "sameAs": [
    "https://www.linkedin.com/in/chudi-nnorukam",
    "https://github.com/ChudiNnorukam",
    "https://medium.com/@nnorukamchudi",
    "https://www.youtube.com/@ContextWindow26",
    "https://citability.dev"
  ],
  "jobTitle": "AI-Visible Web Architect",
  "knowsAbout": ["Answer Engine Optimization (AEO)", "AI-Visible Web Architecture"]
}
```

That JSON-LD snippet does more for entity recognition than 20 pages of on-page SEO.
The 5 moves:
Consistent identity across surfaces. Same brand name, description, bio, and visual identity on your blog, your tools site, GitHub, LinkedIn, Medium, Wikipedia (if present), and Wikidata. Inconsistency tells the Knowledge Graph “these are different entities,” which is the worst possible signal. Audit your seven surfaces once; align the strings; never touch them again unless the brand legally changes.
Schema sameAs as the binding mechanism. Person and Organization JSON-LD must include sameAs links to LinkedIn, GitHub, Wikipedia, Wikidata, X, and any other authoritative profile that points back to your site. This is the explicit instruction to AI engines: “these surfaces are the same entity.” Without sameAs, the engines have to infer the binding from prose, and they often get it wrong.
Named authorship on every published artifact. Anonymous posts are entity-dead. The author Person entity should carry credentials, hasOccupation, knowsAbout. Every blog post should reference the same Person @id so the engine accumulates authorship signal toward one entity instead of fragmenting it across drive-by author names.
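To keep authorship signal pointed at one node, each post's JSON-LD can reference the shared Person @id instead of redeclaring an author inline. A minimal sketch in Python; the helper name `blog_posting_jsonld` is mine, and the @id matches the Person snippet earlier in this post:

```python
import json

# Shared Person @id -- every post references this one node instead of
# redeclaring the author, so authorship signal accrues to a single entity.
PERSON_ID = "https://chudi.dev/about#author"

def blog_posting_jsonld(title: str, url: str) -> str:
    """Render BlogPosting JSON-LD whose author is a reference, not a new object."""
    doc = {
        "@context": "https://schema.org",
        "@type": "BlogPosting",
        "headline": title,
        "url": url,
        "author": {"@id": PERSON_ID},  # pointer to the canonical Person node
    }
    return json.dumps(doc, indent=2)

print(blog_posting_jsonld("Entity SEO", "https://chudi.dev/blog/entity-seo"))
```

Drop this into the post layout once and every future article inherits the binding for free.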
Citations back to prior work. Internal and external links from new content to prior published work reinforce the content-to-entity binding. This is the slow compound. Every internal link from a new post to a 6-month-old post says “the entity that wrote that wrote this too.” Pages without internal links to prior work are still entity-orphans.
Wikipedia and Wikidata presence. These are the primary training-seed sets for Google’s Knowledge Graph. Absence is a hard ceiling on entity authority. Wikidata is the easier on-ramp; create the item, link it back to your site, list sameAs profiles. Wikipedia comes later with citations.
The exact order matters less than the completeness. Doing 3 of 5 buys you partial recognition. Doing 5 of 5 unlocks the compounding effect described in the next section.
The proof: 89 days of Bing AI citations vs page-level effort
Microsoft Bing Webmaster Tools’ AI Performance Report tracks Copilot citation activity per day with per-page granularity. I exported the CSV for chudi.dev covering Jan 24 to Apr 22 2026, bucketed by ISO week, and compared against the entity-graph refactor timeline. The weekly numbers tell a cleaner story than the headline 635-citation total.
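The bucketing step is mechanical. A minimal sketch, assuming the export has `date` and `citations` columns (the real Bing CSV's headers may differ):

```python
import csv
from collections import defaultdict
from datetime import date
from io import StringIO

def bucket_by_iso_week(csv_text: str) -> dict:
    """Sum daily citation counts into (ISO year, ISO week) buckets.

    Assumes two columns, `date` (YYYY-MM-DD) and `citations`; adapt the
    field names to whatever the actual export uses.
    """
    weeks = defaultdict(int)
    for row in csv.DictReader(StringIO(csv_text)):
        iso_year, iso_week, _ = date.fromisoformat(row["date"]).isocalendar()
        weeks[(iso_year, iso_week)] += int(row["citations"])
    return dict(weeks)

sample = "date,citations\n2026-03-16,20\n2026-03-17,25\n2026-03-23,9\n"
print(bucket_by_iso_week(sample))  # two buckets: ISO weeks 12 and 13 of 2026
```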
| Week of | Citations | Daily Average | Cited Pages (distinct) | Notes |
|---|---|---|---|---|
| 2026-02-09 | 7 | 1.00 | 4 | First non-zero week (entity graph live for 1 week) |
| 2026-02-23 | 13 | 1.86 | 9 | Schema sameAs landed on Person |
| 2026-03-02 | 31 | 4.43 | 12 | Wikidata item created |
| 2026-03-16 | 152 | 21.71 | 24 | Peak week: full 5-move set live |
| 2026-03-30 | 95 | 13.57 | 16 | Steady state after spike normalization |
| 2026-04-13 | 88 | 12.57 | 26 | Most distinct cited pages of the window |
| 2026-04-20 | 34 | 11.33 | 10 | Partial week (export ends Apr 22) |
Total: 635 citations across 89 days. 59 non-zero days out of 89. The entity-graph refactor landed mid-February; the first citations appeared 19 days later; the peak landed 28 days after refactor. None of those citations were on the URL I had spent 3 hours optimizing. They were across 24 distinct cited pages, most of which I had not touched in months.
Read the table again with the page-level question in mind: which specific page caused the spike? None did. The entity caused the spike. The pages came along for the ride.
For comparison, here is what the same window looked like for traditional Google SEO metrics (GSC data, same period):
| Metric | Jan 24 | Apr 22 | Delta |
|---|---|---|---|
| Total impressions | 4,200 | 13,800 | +228% |
| CTR | 0.4% | 0.4% | flat |
| Average position | 32.1 | 28.4 | -3.7 (improved) |
| Total citations (Bing AI) | 0 | 635 cumulative | n/a |
Impressions tripled. Position improved by 3.7 spots. CTR was flat. AI citations went from zero to 635. The two signals (Google rank vs Bing AI citation) moved independently of each other, which is itself the point: entity work moves the AI-citation needle in a way Google’s rank algorithm hasn’t caught up to.
What this overrides, and what it does not
What it overrides: the urge to tune each page in isolation. A page with perfect on-page SEO still loses to a page on a domain whose entity is recognized by the Knowledge Graph. If you find yourself spending more than 30% of your SEO time on per-page sharpening at sub-DR-20, the time is in the wrong place.
What it does not override: per-page discipline still matters. Inverted pyramid, schema, internal linking, answer capsules, mobile CTR are real concerns. Entity work is the floor that lifts all pages; per-page work is the ceiling on each one. The framing is multiplicative, not exclusive. A recognized entity with sloppy per-page work loses to a recognized entity with sharp per-page work.
The lever-order: get the entity recognized first, then sharpen the pages. Most operators run this backward and wonder why six months of page-level work moved nothing.
There is one case where page-level work goes first: if the site has zero crawlable content, no schema at all, or a robots.txt blocking AI crawlers, fix those before touching entity work. Entity recognition needs something to anchor to. A page that doesn’t load is a dead anchor.
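That robots.txt precondition is quick to verify with Python's stdlib robot parser: feed it the file and ask which AI crawler user-agents it blocks. The crawler list below is illustrative, not exhaustive; check each engine's current token:

```python
from urllib.robotparser import RobotFileParser

# Common AI crawler user-agents as of this writing -- verify before relying on it.
AI_CRAWLERS = ["GPTBot", "BingBot", "PerplexityBot", "ClaudeBot", "Google-Extended"]

def blocked_ai_crawlers(robots_txt: str, path: str = "/") -> list:
    """Return the AI crawler user-agents this robots.txt blocks for `path`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [ua for ua in AI_CRAWLERS if not parser.can_fetch(ua, path)]

robots = "User-agent: GPTBot\nDisallow: /\n\nUser-agent: *\nAllow: /\n"
print(blocked_ai_crawlers(robots))  # ['GPTBot']
```

If the list is non-empty, fix robots.txt first; entity work cannot anchor to pages the crawlers never see.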
FAQ: Entity-level SEO
Five questions I get most often, paired with answers that match the chudi.dev citation data above. Each answer is self-contained: you can pull it into a Slack message or a search engine’s snippet box without reading the rest of the post. AI engines extract these capsules; humans skim them; both audiences are served by the same shape.
Why does Domain Authority not predict AI citations on Bing?
Bing AI Copilot’s citation algorithm reads the entity-graph signals (Person + Organization JSON-LD, sameAs cluster, Wikidata anchors) before it weighs domain-level link metrics. A DR-5 site with a clean entity graph outscores a DR-90 site with no entity binding. Domain Authority and AI Citation are uncorrelated under sub-DR-20 conditions, per the chudi.dev vs Ahrefs vs Reddit baseline.
How long does it take for entity work to show up in AI citations?
In my data, 19 days from refactor-live to first non-zero citation day. 28 days to peak. The lag matches Knowledge Graph re-crawl windows for non-enterprise sites. If you ship the 5-move set and see no citations in 30 days, the gap is usually missing sameAs reciprocity (your LinkedIn doesn’t link back to your site) or a Wikidata entry that didn’t get accepted.
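The sameAs-reciprocity gap is easy to check mechanically. A rough sketch: given a profile page's HTML, test whether any href points back at your site. Fetching and redirect-following are omitted, and `links_back` is my own helper name:

```python
import re

def links_back(profile_html: str, site_url: str) -> bool:
    """True if the profile page contains an href pointing at the site.

    A coarse check: a real audit should fetch the page, follow redirects,
    and normalize http vs https and trailing slashes.
    """
    hrefs = re.findall(r'href=["\']([^"\']+)["\']', profile_html)
    site = site_url.rstrip("/")
    return any(h.rstrip("/").startswith(site) for h in hrefs)

html = '<a href="https://chudi.dev/">Website</a>'
print(links_back(html, "https://chudi.dev"))  # True
```

Run it against each surface in the sameAs cluster; any False is a hole in the reciprocity loop.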
Can I do entity work without Wikipedia?
Yes, but you cap out faster. Wikidata is the easier on-ramp and accounts for most of the lift in my data. Wikipedia adds a second-order signal (citations TO your Wikidata item from Wikipedia articles) that you cannot fake. Most sub-DR-20 brands should ship Wikidata first, then earn Wikipedia mentions over 6 to 12 months.
Do I need to retro-fix every old post to reference the Person entity?
No, but it helps. Posts published BEFORE the entity refactor will accrue retroactive citation signal once the engine re-crawls them, as long as they share the canonical Person @id. The retro-fix is a one-time JSON-LD update in the layout component; it does not require touching individual posts.
What if I have two brands, like a personal brand and a product brand?
Use the split pattern: personal brand = Person + thinker authority; product brand = SoftwareApplication or Organization + transactional authority. Both share sameAs across the same surfaces, but the @id is distinct. The product schema’s creator field points to the Person; the Person’s owns field points back to the product. This is the chudi.dev / citability.dev split documented in the codex.
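A sketch of the split pattern as two cross-referenced JSON-LD nodes. I use `owns` for the Person-to-product back-link, since schema.org's Person type has no creator-of property; the @id values mirror the chudi.dev / citability.dev example but are otherwise illustrative:

```python
import json

PERSON_ID = "https://chudi.dev/about#author"
APP_ID = "https://citability.dev/#app"

# Two distinct entities, cross-referenced so engines can bind them
# to each other without merging them into one node.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "@id": PERSON_ID,
    "name": "Chudi Nnorukam",
    "owns": {"@id": APP_ID},        # Person -> product link
}
app = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "@id": APP_ID,
    "name": "citability.dev",
    "creator": {"@id": PERSON_ID},  # product -> Person link
}
print(json.dumps([person, app], indent=2))
```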
What to do next
Three actions, ordered fastest-to-slowest. The first one runs in under a minute; the third compounds over months. Pick the one that matches the time you actually have today; the others will still be there next week. Entity work rewards consistency, not heroics.
- Audit your entity graph across the seven surfaces (blog, tools site, GitHub, LinkedIn, Medium, Wikipedia/Wikidata, X). Confirm brand name, description, and URL are consistent. citability.dev’s free scan benchmarks the entity-graph in under a minute.
- Run the AVR framework against your own site. The framework I shipped at chudi.dev/framework is what moved the citation needle for me; the citability.dev scan is the testing surface if you want to verify your own entity-graph before you commit to a refactor. Free, opt-in, under 10 seconds.
- Read the playbook: Entity Optimization for Brands in AI Search for the per-move how-to and verification commands.
Sources & Further Reading
- I Spent $10K on AEO and Got Zero AI Citations. Here Is the Audit Section That Would Have Caught Why. citability.dev now scores Wikipedia, Wikidata, and JSON-LD sameAs presence. Free, opt-in, under 10s. Part of the AVR Framework, see chudi.dev/framework.
- Perplexity vs ChatGPT: Different Citation Rules. Perplexity quotes liberally. ChatGPT quotes selectively. The engine-level differences in citation behavior that change what a sub-DR-20 brand should optimize for, engine by engine.
- Entity Optimization for Brands in AI Search. Rank is a single-page game. Entity coherence is the compounding game. How sub-DR-20 brands engineer a Person + Organization graph that AI search engines actually cite.
What do you think?
I post about this stuff on LinkedIn every day and the conversations there are great. If this post sparked a thought, I'd love to hear it.
Discuss on LinkedIn