
AI Narrative Consistency Score: How Consistently AI Describes Your Brand

UltraScout AI Proprietary Metrics Series
Yuliya Halavachova · Founder & Principal Data Scientist at UltraScout AI

Yuliya developed the AI Narrative Consistency Score to address a problem distinct from citation rate: brands being cited consistently but described inconsistently. Her semantic analysis methodology for comparing AI brand descriptions across platforms is unique to UltraScout AI's research programme.

Your brand is cited on 60% of tracked queries. Good. But ask ChatGPT to describe your positioning and it says "enterprise-grade solution." Ask Perplexity the same question and it says "affordable small business tool." Ask Gemini and it focuses on a product feature you discontinued two years ago. You are being cited — but different AI platforms are telling entirely different stories about who you are. The AI Narrative Consistency Score (ANCS) measures the severity of this fragmentation.

The Core Insight

"If ChatGPT calls you a premium agency and Perplexity calls you a budget tool, your brand narrative is broken in AI. The ANCS quantifies that gap."

— Yuliya Halavachova, UltraScout AI

1. ANCS vs. AI Brand Stability Index: A Critical Distinction

Two metrics sound related but measure fundamentally different things:

AI Brand Stability Index (ABSI)

Measures how consistently your brand is cited over time. Tracks citation rate variance. Answers: "Are you reliably present in AI responses?"

AI Narrative Consistency Score (ANCS)

Measures whether your brand is described consistently across platforms and query types. Tracks semantic similarity of descriptions. Answers: "When AI mentions you, does it describe you correctly and consistently?"

A brand can have excellent ABSI (cited on 80% of queries consistently) but terrible ANCS (described differently by every platform). From a revenue perspective, this is a different kind of crisis: buyers are hearing about you, but they are hearing contradictory things. This confusion undermines trust and slows purchase decisions.

Why Narrative Inconsistency Undermines Buyer Trust

Brand trust in B2B purchasing is built on predictability. A buyer who researches your brand across three AI platforms and receives three different characterisations cannot form a coherent mental model of your offering. They face cognitive dissonance — and cognitive dissonance in a purchase decision typically resolves in favour of delay or competitor selection.

The specific risk scenarios:

  • Price tier inconsistency: A buyer who sees "premium enterprise" from ChatGPT starts a budget evaluation using Perplexity and sees "affordable." They question which is true — and may suspect deceptive pricing practices rather than acknowledging AI inconsistency.
  • Audience mismatch: A CMO looking for a board-ready analytics solution uses Claude and hears "technical tool for data teams." They move on without realising your product serves executives too.
  • Discontinued feature prominence: An AI platform trained on older data describes a feature you removed 18 months ago as a core differentiator. A buyer evaluates you on the basis of that feature, requests a demo expecting to see it, and discovers it no longer exists. Trust is broken before a conversation has started.

2. The ANCS Formula

AI Narrative Consistency Score

ANCS = (1 − Average Semantic Distance between descriptions across platforms) × 100

Semantic Distance = cosine distance between vector representations of brand descriptions on each platform pair

Average Semantic Distance = mean pairwise semantic distance across all platform combinations

Score range: 0 – 100 (100 = perfectly identical descriptions across all platforms)
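The formula can be sketched in a few lines of Python. This is an illustrative implementation, not UltraScout AI's production code; the `cosine_distance` and `ancs` helpers and the platform labels are hypothetical.

```python
from itertools import combinations
from math import sqrt

def cosine_distance(u, v):
    """Cosine distance = 1 - cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return 1.0 - dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def ancs(embeddings):
    """ANCS = (1 - mean pairwise cosine distance) * 100.

    embeddings maps platform name -> description vector. With identical
    descriptions the mean distance is 0 and the score is 100.
    """
    pairs = list(combinations(embeddings.values(), 2))
    mean_distance = sum(cosine_distance(u, v) for u, v in pairs) / len(pairs)
    return (1.0 - mean_distance) * 100.0
```

In real usage the vectors would come from a sentence embedding model; the dictionary keys are just labels for the platform each description came from.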

Score Ranges

  • Above 80 (Highly Consistent): AI platforms describe your brand in substantially the same way. Buyer experience is coherent regardless of platform used.
  • 60 – 80 (Minor Inconsistencies): Some variation in emphasis or attribute weighting, but the core narrative is intact. Monitor and address specific divergences.
  • Below 60 (Fragmented Brand Narrative): AI platforms are telling materially different stories about your brand. Significant buyer trust and conversion risk.
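The bands map mechanically to score thresholds. A minimal helper, assuming boundary handling that places exactly 80 and exactly 60 in the middle band (the published ranges leave the edges ambiguous):

```python
def ancs_band(score):
    """Map an ANCS score to the interpretation bands described above.

    Boundary handling is an assumption: scores of exactly 60 and 80
    fall in the "60 - 80" band.
    """
    if score > 80:
        return "Highly Consistent"
    if score >= 60:
        return "Minor Inconsistencies"
    return "Fragmented Brand Narrative"
```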

The Semantic Distance Methodology

Semantic distance is calculated using embedding vector representations of brand descriptions. In practice, UltraScout AI's ANCS methodology follows this process:

1. Query collection: Ask each AI platform a standardised set of brand description prompts: "Describe [Brand]," "What type of company is [Brand]?", "Who is [Brand] designed for?", "What is [Brand] known for?", "How does [Brand] compare to competitors in terms of price tier?"
2. Description extraction: Collect the verbatim AI response for each prompt on each platform (5 prompts × 6 platforms = 30 descriptions per brand).
3. Vector embedding: Convert each description to a high-dimensional vector using a sentence embedding model (UltraScout uses a multilingual model to handle platform language variations).
4. Pairwise distance calculation: Calculate cosine distance for each platform pair combination (C(6,2) = 15 pairs). Average across all pairs for each prompt.
5. ANCS calculation: Apply the formula. Optionally weight specific prompts by importance — "price tier" inconsistency typically receives higher weight for brands in price-competitive categories.
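The five steps above can be sketched end-to-end. The prompt labels, weights, and toy embedding below are hypothetical stand-ins; a real pipeline would pass a sentence-embedding model's encode function as `embed`.

```python
from itertools import combinations
from math import sqrt

# Hypothetical prompt labels and weights; "price_tier" is weighted double,
# as the text suggests for price-competitive categories.
PROMPT_WEIGHTS = {
    "describe": 1.0,
    "company_type": 1.0,
    "audience": 1.0,
    "known_for": 1.0,
    "price_tier": 2.0,
}

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return 1.0 - dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def weighted_ancs(descriptions, embed, weights=PROMPT_WEIGHTS):
    """descriptions[prompt][platform] -> verbatim response text.

    embed(text) -> vector. Implements steps 3-5: embed each description,
    average pairwise cosine distance per prompt, then take the
    weighted mean of per-prompt scores.
    """
    total, weight_sum = 0.0, 0.0
    for prompt, w in weights.items():
        vectors = [embed(text) for text in descriptions[prompt].values()]
        pairs = list(combinations(vectors, 2))
        mean_dist = sum(cosine_distance(u, v) for u, v in pairs) / len(pairs)
        total += w * (1.0 - mean_dist) * 100.0
        weight_sum += w
    return total / weight_sum

# Toy embedding for illustration only; a real pipeline would use a
# multilingual sentence embedding model here.
def toy_embed(text):
    return [1.0, 0.0] if "revenue" in text else [0.0, 1.0]
```

Because price-tier divergence is weighted double, a brand that agrees everywhere except price tier scores materially lower than one with the same disagreement on a single-weight prompt.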

Qualitative Layer

The semantic distance score is supplemented by human review of specific divergences. A score of 62 with the divergence concentrated in price-tier descriptions requires different intervention than a score of 62 with divergence spread evenly across all attributes. UltraScout AI's ANCS reports include both the quantitative score and a qualitative narrative gap analysis.

3. The Five Key Narrative Attributes AI Describes

UltraScout AI's research identifies five core attributes that AI platforms describe when characterising a brand. Narrative consistency must be measured and managed across all five:

1. Positioning

How AI describes your market position and category leadership. "Market leader in X," "specialist in Y," "challenger brand disrupting Z." Inconsistency here fragments your competitive narrative.

Common divergence: One platform describes category leadership; another describes niche specialist positioning. Both may be accurate historically, but the inconsistency confuses buyers.

2. Price Tier

How AI characterises your pricing relative to the market. "Premium," "enterprise," "mid-market," "affordable," "budget-friendly." This is the most damaging inconsistency category.

Common divergence: Brands that repositioned from SMB to enterprise, or vice versa, often have inconsistent price tier descriptions because different platforms reference content from different periods of the brand's history.

3. Target Audience

Which buyers AI thinks your product is for. Functional role (CTOs, marketers), company size (SMB, mid-market, enterprise), industry verticals.

Common divergence: AI platforms trained on different content vintages may describe different ICP profiles as the brand has evolved. A brand that started serving startups and now serves enterprise may be described inconsistently across platforms.

4. Key Differentiators

The specific capabilities or qualities AI associates with your brand's competitive advantage.

Common divergence: Platforms emphasise different differentiators based on which content or third-party sources they weighted most heavily. One platform may cite integrations; another cites customer support; another cites a specific feature.

5. Primary Use Cases

What AI says buyers use your product for. The functional jobs-to-be-done associated with your brand.

Common divergence: Platforms may describe different primary use cases based on which queries they were trained or retrieved for. A platform that sees your brand mostly in "automation" queries will describe you as an automation tool; another that sees you in "analytics" queries will describe you as an analytics tool.

4. Which Platforms Diverge Most and Why

Typical divergence level by platform pair, with the primary reason:

  • ChatGPT vs. Perplexity (Highest): Training-data model vs. real-time web model. ChatGPT reflects historical training; Perplexity reflects the current web. For brands that have evolved, this creates temporal inconsistency — old positioning vs. new positioning.
  • ChatGPT vs. Gemini (High): Different training data sources and Google Search integration. Gemini inherits Google's entity understanding, which may differ from ChatGPT's training data patterns.
  • Perplexity vs. Claude (Moderate-High): Recency vs. depth weighting. Perplexity surfaces fresh sources; Claude synthesises training data for depth. Fresh content may tell a different story than historical synthesis.
  • Gemini vs. Copilot (Moderate): Google vs. Microsoft index. Differences in which third-party sources dominate each index affect brand characterisation.
  • Claude vs. ChatGPT (Lower): Both are training-data models with similar input sources for well-established brands. Divergence is typically smaller unless training cutoffs differ significantly.
  • Grok vs. any other platform (Highest): Real-time X/Twitter data introduces unique social-signal-driven brand characterisation. Brands described very differently on social media than on the corporate web may show extreme divergence between Grok and other platforms.

5. Case Study: Marketing Technology Company — Narrative Fragmentation After Repositioning

Case Study: MarTech Platform — Closing the Repositioning Narrative Gap

Profile: Marketing automation platform that repositioned from "email marketing tool" to "revenue intelligence platform" in early 2025. Brand refresh, new website, new messaging. Strong internal alignment on new positioning. Confident the market understood the shift.

ANCS Audit (October 2025 — 8 months post-reposition):

The audit collected brand descriptions from 6 platforms across 5 narrative attributes. Representative descriptions on the most critical attribute — positioning — revealed significant fragmentation:

How each platform described the brand's positioning:

  • ChatGPT: "[Brand] is primarily an email marketing platform for small and mid-sized businesses, offering automation features and campaign management."
  • Perplexity: "[Brand] has recently repositioned as a revenue intelligence platform, integrating marketing automation with sales pipeline data."
  • Gemini: "[Brand] is an email marketing and automation tool, competing with Mailchimp and HubSpot in the SMB space."
  • Claude: "[Brand] provides marketing automation services with a focus on B2B lead nurturing and email campaigns."
  • Copilot: "[Brand] is a revenue intelligence platform that connects marketing attribution to CRM data for revenue teams."
  • Grok: "[Brand] recently announced a major platform rebrand — discussions on X suggest mixed reception of their pivot from email tool to revenue platform."

ANCS Score: 41 — Fragmented Brand Narrative

Root Cause Analysis:

  • ChatGPT, Gemini, and Claude were still primarily trained on pre-reposition content (emails, blog posts, directory listings describing the old positioning)
  • Perplexity and Copilot had partially picked up the new positioning from updated web content — but only partially, because not all pages had been comprehensively updated
  • Grok reflected the social media narrative, which included both excited early adopters of the new positioning and confused existing customers who associated the brand with its original identity
  • No structured data on the new website explicitly stated "formerly known for X, now positioned as Y" — so AI platforms had no unified signal about the transition

Narrative Consistency Programme:

  • Created an explicit "About [Brand]" anchor page with structured Organisation schema defining the brand's current positioning, category, target audience, and primary use cases — using vocabulary chosen to embed into AI training/retrieval patterns
  • Updated all product pages, case studies, and comparison pages with new positioning language — eliminating all references to "email marketing" as the primary description (replaced with "revenue intelligence" as primary, email as a feature)
  • Published a "Brand Evolution" article explicitly explaining the repositioning — designed to be cited by AI platforms when explaining the transition context
  • Updated llms.txt with a structured brand description following the new positioning taxonomy exactly
  • Ran a press outreach programme generating 14 new third-party mentions describing the brand under its new positioning — giving AI platforms consistent external validation of the new narrative
  • Updated all third-party directory listings (G2, Capterra, Clutch, Crunchbase) with new positioning language — these are heavily weighted sources in AI citation models

ANCS Results After 6 Months:

Metric (before → after 6 months):

  • ANCS Score: 41 → 74
  • Positioning attribute score: 29 (most fragmented) → 71
  • Price tier attribute score: 58 → 82
  • Target audience attribute score: 52 → 79
  • ChatGPT positioning description: "Email marketing platform" → "Revenue intelligence platform with marketing automation"
  • Sales-qualified leads from AI-attributed sources: Baseline → +29% (more aligned with new ICP)

The ANCS improvement correlated with a shift in the quality — not just quantity — of AI-attributed leads. Leads arriving post-ANCS improvement were more aligned with the new ICP (revenue teams, not email marketers), converting at 1.8× the rate of pre-programme AI-attributed leads.

6. Strategies for Enforcing Narrative Consistency

Strategy 1: Create a Canonical Brand Description Infrastructure

AI platforms learn your brand narrative from the sum of content they access about you. A canonical brand description infrastructure ensures that every touchpoint reinforces the same narrative:

Brand Description Infrastructure Checklist

  • About page with Organisation schema: Machine-readable description of your brand using schema.org/Organization — name, description, category, foundingDate, numberOfEmployees, url
  • llms.txt brand summary: First section of your llms.txt should include a concise brand description paragraph using your canonical positioning language
  • Consistent "About [Brand]" paragraph: A single canonical paragraph describing your brand used verbatim on About page, media kit, third-party listings, and press release boilerplates
  • Third-party listing consistency: G2, Capterra, Clutch, Crunchbase, LinkedIn company page — all should use identical positioning language in the company description field
  • Wikipedia / knowledge base entry: If your brand has a Wikipedia presence or industry knowledge base entry, ensure it reflects current positioning. AI platforms weight these heavily for brand characterisation.
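A machine-readable About-page block of the kind the first checklist item describes can be generated as JSON-LD. The company name and field values below are hypothetical; the property names follow schema.org/Organization.

```python
import json

# Hypothetical brand values; property names follow schema.org/Organization.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "description": ("Example Co is a revenue intelligence platform that connects "
                    "marketing attribution to CRM data for revenue teams."),
    "foundingDate": "2018",
    "numberOfEmployees": {"@type": "QuantitativeValue", "value": 120},
    "url": "https://www.example.com",
}

# Embed the output on the About page inside <script type="application/ld+json">.
print(json.dumps(organization, indent=2))
```

The `description` string should be the same canonical paragraph used verbatim across the media kit, listings, and press boilerplate, so every machine-readable touchpoint repeats one narrative.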

Strategy 2: Use Authoritative Structured Data for Brand Attributes

Schema.org Organisation markup should be extended beyond basic fields to include brand attributes AI platforms use for characterisation:

Extended Organisation Schema for ANCS

  • knowsAbout: List your primary topic domains — establishes topical authority for AI citation
  • hasOfferCatalog: Link to your product/service offerings — establishes what you sell and for whom
  • priceRange: Explicit price tier signal — prevents AI mischaracterisation of your positioning
  • audienceType / targetAudience: Explicitly state your buyer audience — prevents audience mismatch descriptions
  • slogan: Your brand tagline in structured data form — reinforces positioning signal
  • sameAs: All social profiles and third-party listings — consolidates entity signals into unified brand understanding

Strategy 3: Build a Consistent Third-Party Citation Layer

AI platforms weight third-party sources heavily for brand characterisation. If review platforms, press coverage, and industry directories all describe you with the same positioning language, AI platforms have strong consistent signals to draw from. If they vary — because you have not managed your positioning across these channels — AI descriptions will vary.

A brand narrative enforcement programme should include:

  • Annual review and update of all third-party directory listings
  • Press release boilerplate that contains canonical brand description language — every press release published online adds an external citation of your positioning
  • Media kit with canonical brand description paragraph for journalists — earned media that uses your language directly reinforces ANCS
  • Managed review generation that encourages customers to describe your product in ways that reinforce key positioning attributes (not scripted, but informed by what AI needs to hear)
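Drift across third-party listings can be spot-checked before it feeds AI characterisation. A rough sketch using difflib's string similarity as a cheap lexical proxy for semantic distance; the canonical paragraph, listing texts, and 0.6 threshold are all hypothetical:

```python
import difflib

# Hypothetical canonical paragraph and scraped listing description fields.
CANONICAL = ("Example Co is a revenue intelligence platform that connects "
             "marketing attribution to CRM data for revenue teams.")

LISTINGS = {
    "g2": "Example Co is a revenue intelligence platform for revenue teams.",
    "capterra": "Example Co is an email marketing tool for SMBs.",
}

def drift_report(canonical, listings, threshold=0.6):
    """Flag listings whose description has drifted from the canonical copy.

    SequenceMatcher catches only lexical divergence; an embedding-based
    distance (as in the ANCS methodology) would also catch paraphrased drift.
    """
    report = {}
    for source, text in listings.items():
        ratio = difflib.SequenceMatcher(None, canonical.lower(), text.lower()).ratio()
        report[source] = {"similarity": round(ratio, 2), "drifted": ratio < threshold}
    return report
```

In this sketch the stale "email marketing tool" listing falls below the threshold and would be queued for the annual listing update.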

Strategy 4: Monitor and Remediate Temporal Divergence

The ChatGPT vs. Perplexity divergence pattern (historical training vs. current web) is particularly persistent because training data cutoffs mean ChatGPT may not "know" about recent positioning changes for months after they occur. The only remediation is:

  • Ensure current positioning is represented in high-authority, highly-linked content that will be prominent in the next training cycle
  • Prioritise third-party sources (not just owned content) that describe the new positioning — these are weighted more heavily than self-descriptions in training data
  • Use the "Brand Evolution" content pattern: explicitly published content that acknowledges previous positioning and describes the transition — AI models read this as authoritative guidance on the brand's current identity

Key Takeaway

AI narrative inconsistency is not a marketing problem — it is a revenue problem. Every buyer who receives contradictory AI descriptions of your brand faces a friction point in their purchase journey. The ANCS measures that friction quantitatively, and — critically — it provides a platform-by-platform and attribute-by-attribute breakdown that makes remediation actionable rather than aspirational. A brand cannot control exactly what every AI platform says about it, but it can build the canonical infrastructure that gives AI platforms no reason to diverge.

Audit your AI Narrative Consistency Score

UltraScout AI's ANCS audit collects brand descriptions from all six major AI platforms, calculates semantic distance across five narrative attributes, and delivers a platform-by-platform narrative gap report with prioritised remediation recommendations.

References

  • Halavachova, Y. (2026). "AI Narrative Consistency Score: Measuring Semantic Brand Coherence Across AI Platforms." UltraScout AI Research Series.
  • UltraScout AI. (2026). "Platform Narrative Divergence Analysis: Brand Description Consistency Study, 2025–2026." Internal Research Report.
  • Reimers, N., & Gurevych, I. (2019). "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks." EMNLP 2019. Semantic similarity methodology foundation.
  • schema.org. (2026). "Organization Schema Documentation." schema.org/Organization. Structured data reference for brand attribute encoding.
  • Aggarwal, P., et al. (2024). "GEO: Generative Engine Optimization." arXiv:2311.09735.