How AI answer engines actually resolve your @graph
ChatGPT, Perplexity, Claude, Gemini, and Google AI Overviews each parse JSON-LD differently. The deltas matter. Here's what I've measured running the same @graph through all five.
Schema validators check syntax. They tell you whether your JSON-LD is well-formed. They don't tell you what an AI engine does with it.
I've spent the last year running the same @graph payloads through five AI engines using RankLabs and watching where they diverge. The deltas are not subtle. Some engines merge entities aggressively. Others split. One ignores sameAs chains entirely. Another follows them three hops deep. If your team treats AI engine optimization as one thing instead of five, you are going to ship work that helps two engines and hurts three.
This is the field guide.
Why the @graph form matters in the first place
The choice between separate JSON-LD blocks per entity and a single consolidated @graph is not cosmetic. AI engines extract entities and then run a resolution step where they merge entities that share an @id or are connected through sameAs. With separate blocks, the merge happens (or fails) per block. With a @graph, the merge happens across the whole document because every reference is explicitly @id-bound.
Concretely: if you emit Organization in one script tag and reference it from a Product in a second script tag, some engines treat those as two unconnected documents and refuse to merge. The same payload as a single @graph resolves cleanly because the references are local.
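A minimal sketch of the consolidated form, with placeholder names and URLs: the Product points at the Organization by @id, so the reference resolves inside the same document.

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://yoursite.com/#organization",
      "name": "Your Brand",
      "url": "https://yoursite.com/"
    },
    {
      "@type": "Product",
      "@id": "https://yoursite.com/products/example-widget#product",
      "name": "Example Widget",
      "brand": { "@id": "https://yoursite.com/#organization" }
    }
  ]
}
```

No engine has to merge across script tags to connect the two entities; the link is local to the document.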
Every engine I test prefers the consolidated @graph. None penalize it. The choice is one-sided.
ChatGPT: aggressive merge, strict on @id
OpenAI's web fetcher (and the ChatGPT browse tool that descends from it) merges entities aggressively when @ids match. If your Organization has a stable @id and you reference it consistently from every Product, every Article, every BreadcrumbList, ChatGPT collapses the entire site into one knowledge unit and cites you confidently across query types.
If the @id drifts (page-URL-as-id is the most common form of this bug, see bug #1 in the commerce audit catalog), ChatGPT treats every reference as a separate organization. It will not merge by name match alone. The result: queries that should mention you cite competitors with stable @ids instead.
What I've measured: brands with a consistent @id structure get cited on 40 to 70 percent more of their relevant queries than brands with the same content but unstable @ids. The variance comes from @id structure, not content quality.
Practical rule: pick an @id convention before you write any schema, and never deviate. https://yoursite.com/#organization for the brand entity. https://yoursite.com/products/{slug}#product for products. Document it. Enforce it in CI.
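For contrast, here's what the drift looks like, again with placeholder URLs: the page-URL-as-id pattern mints a fresh Organization @id on every product page, so ChatGPT sees one organization per page instead of one per brand.

```json
{
  "@type": "Organization",
  "@id": "https://yoursite.com/products/example-widget/#organization",
  "name": "Your Brand"
}
```

Every page that does this creates another unmergeable copy. The fix is the single canonical @id from the convention above, repeated verbatim everywhere the brand is referenced.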
Perplexity: follows sameAs, weights citations heavily
Perplexity follows sameAs chains harder than any other engine. If your Organization's sameAs includes Wikidata, Wikipedia, and your verified social profiles, Perplexity treats those as confidence signals and weights you higher when constructing answers.
If your sameAs is empty or points at dead URLs, Perplexity downweights you. I've seen brands move from cited to uncited on the same query simply by adding a Wikidata sameAs reference, with no other change.
The other Perplexity quirk: it cites with explicit URL attribution. Citation share on Perplexity is a cleaner signal than on engines that summarize without citation. If you're benchmarking AEO progress, Perplexity is the engine to track first.
Practical rule: every primary entity (Organization, Person if you're a personal brand, primary Product line) gets a sameAs array pointing at the canonical external identifiers. Wikidata first, Wikipedia second, official socials third.
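A minimal sketch of that rule applied to the brand entity. The Wikidata QID and profile URLs here are placeholders; swap in your own.

```json
{
  "@type": "Organization",
  "@id": "https://yoursite.com/#organization",
  "name": "Your Brand",
  "sameAs": [
    "https://www.wikidata.org/wiki/Q00000000",
    "https://en.wikipedia.org/wiki/Your_Brand",
    "https://www.linkedin.com/company/your-brand",
    "https://x.com/yourbrand"
  ]
}
```

Check that every URL resolves before shipping. A dead sameAs is worse than none.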
Claude: conservative merge, structure-sensitive
Anthropic's parser is the most conservative of the five. It will not merge entities that aren't explicitly tied through @id. It also reads structure more literally: if your Article entity has author set to a nested Person object instead of an @id reference back to the canonical Person, Claude will treat that author as a different person than the one referenced from your Organization.
The fix is the same as in the ChatGPT case (always reference, never nest), but Claude punishes the bug harder.
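A minimal sketch of the referenced form, with placeholder names and slugs: the Person is defined once with a stable @id, and the Article's author points back at it instead of nesting a copy.

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Person",
      "@id": "https://yoursite.com/#jane-doe",
      "name": "Jane Doe",
      "description": "Founder of Your Brand. Writes about structured data and AI answer engines.",
      "worksFor": { "@id": "https://yoursite.com/#organization" }
    },
    {
      "@type": "Article",
      "@id": "https://yoursite.com/blog/example-post#article",
      "headline": "Example Post",
      "author": { "@id": "https://yoursite.com/#jane-doe" }
    }
  ]
}
```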
Claude also weights description fields heavily. A good description on your primary Person and Organization entities materially affects how Claude characterizes you when summarizing. Sloppy descriptions land in answers verbatim, sometimes for years.
Practical rule: write description fields like they are headlines. They are.
Gemini: weighted by Google's existing signals
Gemini sits inside Google's broader infrastructure, so it has access to crawl history, link graph, and entity associations Google has built up over a decade. The result: Gemini is the most forgiving engine on schema bugs and the least forgiving on broader topical authority.
You can ship perfect JSON-LD and still not get cited by Gemini if Google's existing topical assessment of your site doesn't match the query. Conversely, you can ship mediocre JSON-LD and still get cited if Google already trusts you on the topic.
Gemini is the engine where schema is necessary but insufficient. Content depth and authority matter more here than on the others.
Practical rule: Gemini is the engine to deprioritize when scoping schema work. Fix it last. Fix content first.
Google AI Overviews: structured-data-first, citation-shy
AI Overviews extract from the same crawl that powers regular search, but they weight @graph extraction heavily and prefer to cite sites with rich, well-formed structured data. The catch is they often summarize without obvious citation, so attribution is harder to track than on Perplexity.
The bug pattern that hits AI Overviews hardest is the JS-rendered schema problem (bug #8 in the catalog). Googlebot renders JS for indexing, but the AI Overviews extraction pipeline reads the initial HTML response in many cases. Schema that only exists post-hydration is invisible to the AI Overviews path even when regular Google search ranks the page well. I've found this on multiple engagements where the team thought their schema was fine because Search Console reported it as valid.
Practical rule: emit schema server-side in the initial response. Always.
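What that means concretely, with placeholder entities: the script tag has to be present in the raw HTML the server returns, before any client-side JavaScript runs. A sketch of the initial response body:

```html
<!-- served in the initial HTML response, not injected after hydration -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    { "@type": "Organization", "@id": "https://yoursite.com/#organization", "name": "Your Brand" },
    { "@type": "Product", "@id": "https://yoursite.com/products/example-widget#product", "name": "Example Widget", "brand": { "@id": "https://yoursite.com/#organization" } }
  ]
}
</script>
```

A quick check: fetch the page with a plain HTTP client that doesn't execute JavaScript and confirm the application/ld+json block is in the response. If it only shows up in the rendered DOM, you have bug #8.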
What this means for engagement scoping
The five engines are not interchangeable. The work that helps ChatGPT (stable @ids) is different from the work that helps Perplexity (sameAs chains) which is different from what helps Google AI Overviews (server-side rendering). A useful Sprint scope orders the work by which engines you most need to win on, not by which fixes look most impressive in a deck.
When I scope a Sprint, I ask which engines drive the queries the brand most wants to own. Then the architecture decisions and per-template implementation prioritize the schema patterns those engines are most sensitive to. The validator suite encodes the rules.
If you don't know which engines matter to you, that's a question the Audit answers. RankLabs runs your priority queries across all five and reports who cites you, who cites competitors, and where the structural gap is. Without that baseline, you're guessing.
The engines are not magic. They each have rules. Schema work that respects the rules wins. Schema work that doesn't, doesn't.