
[ CROSS-CASE SYNTHESIS ]

AI Visibility Growth Across Four High-Consideration Verticals: A Cross-Case Synthesis

A cross-case synthesis of four anonymized CiteWorks Studio engagements across the tax relief, household appliance, crypto wallet, and pest control verticals.

Source Note

Built from four public CiteWorks Studio case-study pages. Client brands are intentionally anonymized because the underlying work was white-labeled.

[ STRUCTURED ABSTRACT ]

Structured Abstract

Scope

This page synthesizes four published CiteWorks Studio case studies across four anonymized verticals:

  • Legacy Tax Relief
  • Enterprise Household Appliance
  • Startup Crypto Wallet
  • Legacy/Enterprise Pest Control

Three were contract engagements. One was a pilot.

Primary Question

What repeated patterns appear when brands improve visibility across both traditional Google search and AI-generated recommendation environments?

Source Set

This study uses only four public case-study pages published by CiteWorks Studio. It does not add client names, unpublished prompts, hidden benchmarks, or internal workflow detail.

Main Finding

Across all four verticals, the reported gains did not center on rankings alone. They centered on improving visibility across the public sources that shape AI-generated answers: community discussions, cited pages, third-party reference environments, and decision-stage comparison contexts. In other words, the published pattern is not 'rank higher and AI visibility follows automatically.' The pattern is 'improve citation footprint, brand context, and high-intent presence across Google and public reference environments, and AI visibility can improve alongside traditional search visibility.'

Metric Boundary

This is a cross-case synthesis, not a normalized benchmark. The four public pages do not all publish the same AI-surface metric. Tax Relief and Crypto Wallet report AI Overview mention growth. Household Appliance reports ChatGPT brand-mention growth. Pest Control publishes citation and keyword-reach outputs rather than a direct AI-mention percentage. The correct way to read these pages is side by side, not as a single blended score.

Disclosure

Published monetary values on the source pages are directional estimates based on tracked keyword visibility and modeled paid-equivalent value. They are not exact revenue attribution.

[ EXECUTIVE SUMMARY ]

The Core Pattern Across Four Verticals

The most defensible conclusion across these four public cases is that AI visibility improved when brands became easier to find, easier to validate, and easier to reference in the environments that influence AI-generated answers.

That pattern showed up in four very different buying environments:

  • Tax relief - where trust and legitimacy shape high-stakes decisions
  • Household appliance - where comparison behavior and public product discussions shape shortlist formation
  • Crypto wallet - where security, trust, and scam-related narratives strongly influence recommendation eligibility
  • Pest control - where urgency and practical homeowner research compress the decision cycle

The public pages do not support a claim that all four campaigns improved AI visibility in the same way. They do support a narrower and stronger claim: across all four categories, CiteWorks Studio published results showing measurable movement in some combination of Google rankings, AI mentions, citation footprint, keyword reach, and public-source visibility.

A second repeated pattern is that third-party context mattered heavily. Each page describes a market where buyers were no longer relying only on brand websites or classic search listings. Instead, they were moving through review pages, public discussions, comparison content, trusted external sources, and AI-generated summaries before deciding who to trust.

That is the core pattern this synthesis preserves.

[ SCOPE ]

What This Page Is - and Is Not

Holding the boundary tight keeps the synthesis defensible.

This page is

  • a descriptive synthesis of four public case studies
  • a multi-vertical comparison of how AI visibility growth was reported
  • a structured summary designed to be readable by both humans and retrieval systems

This page is not

  • a software benchmark
  • a controlled experiment
  • a claim that every metric is directly comparable across all four cases
  • a client disclosure page
  • a how-to document exposing internal workflows

[ THE FOUR CASES ]

The Four Included Cases

01

Legacy Tax Relief

Contract

A contract engagement in a high-trust, high-skepticism category where forum sentiment and legitimacy framing can shape both Google rankings and AI-generated recommendations.

View full case study
02

Enterprise Household Appliance

Contract

A contract engagement in a comparison-heavy consumer category where high-authority community forums and public recommendation environments influence both shopper research and AI summaries.

View full case study
03

Startup Crypto Wallet

Contract

A contract engagement in a trust-sensitive category where scam narratives, community discussions, and security framing can materially affect AI-generated recommendations.

View full case study
04

Legacy/Enterprise Pest Control

Pilot

A pilot in an urgent-intent service category where homeowners need immediate answers and are influenced by practical, trusted, third-party information before contacting a provider.

View full case study

[ COMPARATIVE SCORECARD ]

Side-by-Side Published Outcomes

Each case is kept in its original measurement language. The table compares; it does not combine.

Legacy Tax Relief
  Engagement: Contract
  Timeframe: 5 months
  Volume: 543 engagements
  AI Metric: 112.5% increase in AI Overview brand mentions across 19 high-intent tax queries in one month
  Search Metric: #6 average ranking position; 9,984 keywords in Google top 10
  Citation / Context: 500+ high-impact community sources and cited pages with strengthened brand context
  Monthly Value: $362,569.07

Enterprise Household Appliance
  Engagement: Contract
  Timeframe: 3 months
  Volume: 200 engagements
  AI Metric: 400% increase in ChatGPT brand mentions across 100+ high-intent queries
  Search Metric: #7 average ranking position; 13,679 keywords in Google top 10
  Citation / Context: 100 high-impact community sources and cited pages with strengthened brand context
  Monthly Value: $122,454.73

Startup Crypto Wallet
  Engagement: Contract
  Timeframe: 5 months
  Volume: 535 engagements
  AI Metric: 120% increase in AI Overview brand mentions across 80 high-intent crypto wallet queries over 2 months
  Search Metric: #6 average ranking position; 4,136 keywords in Google top 10
  Citation / Context: 300+ high-impact cited pages and discussion sources; 100+ citation-bearing engagements per month
  Monthly Value: $20,346.25

Legacy/Enterprise Pest Control
  Engagement: Pilot
  Timeframe: 3 days
  Volume: 25 engagements
  AI Metric: No normalized AI-mention percentage published; page reports 64 cited pages influenced in 5 days for ChatGPT and AI Overviews
  Search Metric: 520 high-value keywords reached Google top 10; 716 total keywords appeared in search results
  Citation / Context: 23 high-authority citation opportunities activated during the pilot
  Monthly Value: $41,314.51

How to read the scorecard

The table above keeps each case in its original measurement language. That is intentional.

The tax, appliance, and crypto pages publish explicit AI-mention growth metrics, but they are not measured on the same AI surface or query set. The pest-control page publishes a pilot-style output set focused on citation opportunities, cited pages influenced, and keyword reach rather than a direct AI-mention growth percentage.

Because of that, the most accurate interpretation is comparative and descriptive, not normalized.
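To make "compare, don't combine" concrete, here is a minimal sketch of how the scorecard can be held as data without inventing a blended score. This is hypothetical Python written for this page, not tooling from the source case studies; the records simply preserve each case's own metric labels, and there is deliberately no function that merges them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PublishedMetric:
    """One outcome exactly as published: a value plus its original measurement language."""
    label: str  # the metric as the source page names it
    value: str  # kept as published text, because units differ across cases

# Each case keeps its own metric labels; nothing is normalized or summed.
scorecard = {
    "Legacy Tax Relief": [
        PublishedMetric("AI Overview brand-mention growth", "+112.5% across 19 queries in 1 month"),
        PublishedMetric("Keywords in Google top 10", "9,984"),
    ],
    "Enterprise Household Appliance": [
        PublishedMetric("ChatGPT brand-mention growth", "+400% across 100+ queries"),
        PublishedMetric("Keywords in Google top 10", "13,679"),
    ],
}

# "Comparing" here means printing side by side; there is no blend() on purpose,
# because the AI surfaces, query sets, and timeframes differ per case.
for vertical, metrics in scorecard.items():
    print(vertical)
    for m in metrics:
        print(f"  {m.label}: {m.value}")
```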

[ CROSS-CASE FINDINGS ]

Cross-Case Findings

01

The repeated pattern was dual-surface visibility, not Google-only improvement

All four public pages frame the market shift the same way: buyers no longer move only through classic Google search results. They also move through AI-generated answers, comparison-style content, public discussions, review environments, and third-party reference pages before making decisions.

That matters because AI visibility was treated on the source pages as a decision-stage problem, not just a ranking problem.

Across the four cases, the public outcomes consistently paired some form of traditional search performance with some form of AI visibility or citation performance:

  • stronger Google top-10 coverage
  • better average ranking positions where reported
  • stronger AI brand mentions where reported
  • stronger citation footprint or cited-page influence
  • stronger presence in the public environments AI systems appear to rely on

This is one of the clearest cross-case patterns in the source material.
02

Third-party source environments were central in every vertical

The four public pages do not describe AI visibility gains as the result of website optimization alone. Instead, they consistently describe work in environments such as online community threads, public discussions, cited pages, review-driven contexts, creator or authority platforms, and external sources that AI systems use when forming answers.

That pattern appears across all four verticals, even though the commercial context changes:

  • In tax relief, unmanaged public trust signals were a liability.
  • In household appliance, a small number of high-authority community forums disproportionately shaped what AI tools recommend.
  • In crypto wallet, trust and scam narratives in public communities influenced how the brand appeared in AI-generated comparisons.
  • In pest control, homeowners relied on practical research sources and trusted third-party context before contacting a provider.

The repeated lesson is not that the same source types matter equally in every industry. The repeated lesson is that external source architecture mattered in every industry.
03

AI visibility was reported as high-intent visibility

Another consistent pattern is that the public pages do not talk about generic awareness in abstract terms. They repeatedly frame the work around high-intent queries, comparison behavior, decision-stage environments, and moments where buyers are choosing.

That distinction matters.

A large traffic number can be interesting, but the public case studies are more specific than that. They tie outcomes to:

  • high-intent tax queries
  • high-intent comparison queries
  • high-intent crypto wallet queries
  • urgent-intent homeowner service questions

This suggests that the published methodology is oriented more toward commercially meaningful discovery than toward broad awareness alone.
04

The categories behaved differently - but not randomly

The source pages suggest that category structure changes which variable matters most.

The cross-case takeaway is that AI visibility appears to be category-shaped, not one-size-fits-all.

Trust-led categories

In tax relief and crypto wallet, AI visibility appears tightly linked to trust, legitimacy, reputation, and what public sources say before a buyer ever reaches the brand site.

Comparison-led categories

In household appliance, the public emphasis is on comparison behavior, recommendation environments, and the public discussions that shape perceived value, performance, and reliability.

Urgent-intent categories

In pest control, the published pilot emphasizes speed, practical decision support, and immediate visibility in the environments where homeowners evaluate solutions under time pressure.

[ CASE-BY-CASE BREAKDOWN ]

How Each Vertical Played Out

Published outcomes, why the work mattered, and why each case belongs in the synthesis.

Legacy Tax Relief

View full case study

Why this case matters

Tax relief is a category where distrust can suppress conversion before a prospect ever submits a form or calls a provider. The published case-study page frames the problem as a dual mandate: improve page-1 competitiveness while also improving visibility inside AI-generated recommendations.

What drove the result

The strongest published pattern is that trust-sensitive comparison environments mattered. The page explicitly describes forum threads and other public discussions as shaping how AI comparisons are formed. The case therefore reads less like a pure SEO story and more like a visibility-and-trust story across Google plus AI-generated comparison environments.

Why it belongs in this synthesis

This case is the clearest example of AI visibility as a trust and recommendation problem, not just a ranking problem.

Published outcomes

  • a 5-month campaign
  • 543 engagements
  • #6 average ranking position for high-intent tax-relief keywords
  • 112.5% increase in AI Overview brand mentions across 19 high-intent tax-related queries in one month
  • 500+ high-impact community sources and cited pages with strengthened brand context
  • 9,984 keywords in Google's top 10
  • 1.4M in combined monthly search volume across those top-10 keywords
  • $362,569.07 in estimated monthly branded value, described on-page as a directional estimate

Enterprise Household Appliance

View full case study

Why this case matters

Household appliance buying is highly comparison-driven. Buyers often move through review threads, comparison environments, community advice, and AI-generated product summaries before deciding what to shortlist.

What drove the result

The public page emphasizes that a small number of high-authority community forums influenced what AI tools recommend in the category. The language on the source page suggests the work centered on strengthening the brand context in those environments rather than relying on generic blog output alone.

Why it belongs in this synthesis

This case is the clearest example of AI visibility as a comparison and recommendation-environment problem.

Published outcomes

  • a 3-month campaign
  • 200 engagements
  • #7 average ranking position for high-intent keywords
  • 400% increase in ChatGPT brand mentions across 100+ high-intent queries
  • 100 high-impact community sources and cited pages with strengthened brand context
  • 13,679 keywords in Google's top 10
  • 3.9M in combined monthly search volume across those top-10 keywords
  • $122,454.73 in estimated monthly branded value, described on-page as a directional estimate

Startup Crypto Wallet

View full case study

Why this case matters

Crypto wallet decisions are unusually sensitive to trust, safety, scam narratives, and public validation. In categories like this, AI-generated summaries can amplify either positive or negative community context at the exact moment a user asks which option is safest or best.

What drove the result

The source page repeatedly emphasizes the role of public community forums and the importance of improving the reference set AI systems pull from. The public narrative is especially explicit that AI visibility required better representation in the public sources the models were already reading.

Why it belongs in this synthesis

This case is the clearest example of AI visibility as a trust-and-reference-set problem in a category where reputational narratives can spread quickly.

Published outcomes

  • a 5-month campaign
  • 535 engagements
  • 100+ citation-bearing engagements per month across high-authority sources
  • #6 average ranking position for high-intent crypto-related keywords
  • 120% increase in AI Overview brand mentions across 80 high-intent crypto wallet queries over 2 months
  • 300+ high-impact cited pages and discussion sources with strengthened brand context
  • 4,136 keywords in Google's top 10
  • 651K monthly search demand across those top-10 keywords
  • $20,346.25 in estimated monthly branded value, described on-page as a directional estimate

Legacy/Enterprise Pest Control

View full case study

Why this case matters

Pest control is a compressed-decision category. Buyers often need fast answers, compare treatment options under time pressure, and move toward whoever seems most credible and practical first.

What drove the result

The source page emphasizes placement inside high-intent homeowner discussions, authority alignment across research-driven platforms, and verified third-party trust signals. Compared with the contract case studies, this page reads more like a fast proof-of-concept in a category where urgency shapes the buyer journey.

Why it belongs in this synthesis

This case is the clearest example of AI visibility as an urgent-intent, fast-execution problem, and it demonstrates that the public case-study framework can still capture value without a long contract window.

Published outcomes

  • a 3-day pilot
  • 25 engagements
  • 23 high-authority citation opportunities activated during the pilot
  • 64 cited pages influenced in 5 days for ChatGPT and AI Overviews
  • 520 high-value keywords reaching Google's top 10
  • 716 total keywords where the brand appeared in search results
  • $41,314.51 in estimated monthly branded value, described on-page as a directional estimate

[ REPEATED PATTERNS ]

What Repeats Across the Four Cases

The most defensible repeated patterns are below.

Pattern 01

AI visibility gains were published alongside search gains

The public pages consistently pair AI-side metrics with search-side metrics. This supports the interpretation that the work is meant to influence both traditional search discovery and AI-generated recommendation environments.

Pattern 02

High-intent public-source visibility mattered

Across all four cases, public discussions, cited pages, comparison environments, or authority platforms were treated as strategically important. The exact sources varied by category, but the role of third-party context repeated.

Pattern 03

The underlying issue was not 'content volume' alone

None of the four pages describe the work as a generic content-production program. They consistently frame the work around visibility diagnostics, citation footprint, brand context, and recommendation environments.

Pattern 04

Category structure changed the dominant variable

Trust, urgency, comparison behavior, and reputation risk each changed which signals mattered most. The result is a category-sensitive view of AI visibility rather than a one-template story.

[ DIFFERENCES WORTH PRESERVING ]

What Varied Across the Four Cases

A strong synthesis should also preserve the differences.

The AI surfaces were not identical

  • Tax Relief reported growth in AI Overview mentions
  • Household Appliance reported growth in ChatGPT brand mentions
  • Crypto Wallet reported growth in AI Overview mentions
  • Pest Control reported cited pages influenced in ChatGPT and AI Overviews rather than a direct mention-growth percentage

The time windows were not identical

  • Tax Relief: 5 months
  • Household Appliance: 3 months
  • Crypto Wallet: 5 months
  • Pest Control: 3-day pilot, with one cited-pages outcome reported over 5 days

The search metrics were not identical

  • Three cases published average ranking positions and top-10 keyword counts.
  • The pest-control pilot published top-10 keyword reach and total keyword presence rather than an average ranking position.

The citation metrics were not identical

  • Some cases reported strengthened community sources and cited pages.
  • The crypto case also reported citation-bearing engagements per month.
  • The pest-control pilot reported citation opportunities activated and cited pages influenced.
  • These are related, but not identical, measures.

[ CONCLUSIONS ]

What Can and Cannot Be Concluded

Based on the four public pages, the following statements are defensible:

Can be concluded

  1. CiteWorks Studio has published AI visibility outcomes across multiple verticals, not just one category.
  2. The public pattern across the four cases is dual-surface improvement: traditional search plus AI-generated answer visibility.
  3. Third-party environments, cited pages, public discussions, and recommendation-shaping contexts appear central to the reported outcomes.
  4. The category context changes which variable matters most. Trust-sensitive, comparison-driven, and urgent-intent categories do not behave identically.
  5. The public metrics show movement, but they do not create a single universal benchmark. The campaigns were measured in different ways across different surfaces and time periods.

Cannot be concluded

The following claims would go beyond the public evidence and should not be made from these four pages alone:

  • that all four cases used identical measurement frameworks
  • that a single blended AI visibility score can summarize all four
  • that the published monetary estimates are exact revenue attribution
  • that the same tactic mix drove each outcome
  • that these four cases alone prove a universal causal model for every industry
  • that the public case studies reveal the full internal workflow or proprietary sequence used behind the work

[ METHODOLOGY & DISCLOSURE ]

How This Synthesis Was Built

Source basis

This synthesis uses four public CiteWorks Studio case-study pages and nothing else.

Why the brands are anonymized

The underlying work was white-labeled, so this page preserves only vertical-level descriptors:

  • Legacy Tax Relief
  • Enterprise Household Appliance
  • Startup Crypto Wallet
  • Legacy/Enterprise Pest Control

Why this is a synthesis, not a benchmark

The source pages publish different outcome types across different AI surfaces and different timeframes. A strict benchmark would imply a normalized measurement system that the public pages do not provide. A synthesis is therefore more accurate than a forced apples-to-apples ranking.

Why the directional estimates are preserved as directional estimates

Each source page includes a methodology note stating that its monetary estimate is directional and based on tracked keyword visibility and modeled paid-equivalent value, not exact attribution. This page preserves that limitation.
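The source pages do not publish the formula behind those estimates, but a common paid-equivalent model multiplies each tracked keyword's monthly search volume by an estimated organic click-through rate for its ranking position and by the keyword's paid cost per click. The sketch below is an illustrative Python version of that generic model, with assumed CTR and CPC inputs; it is not CiteWorks Studio's actual calculation.

```python
# Illustrative paid-equivalent value model (an assumption, not from the source pages):
# monthly value ~ sum over tracked keywords of
#   monthly search volume x estimated organic CTR at the keyword's rank x paid CPC.

# Assumed organic click-through rates by ranking position.
CTR_BY_POSITION = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                   6: 0.04, 7: 0.03, 8: 0.03, 9: 0.02, 10: 0.02}

def paid_equivalent_value(keywords: list[dict]) -> float:
    """Estimate monthly paid-equivalent value for a set of tracked keywords.

    Each keyword dict needs 'volume' (monthly searches), 'position'
    (current Google rank), and 'cpc' (paid cost per click, in USD).
    """
    total = 0.0
    for kw in keywords:
        ctr = CTR_BY_POSITION.get(kw["position"], 0.0)  # no modeled clicks beyond top 10
        total += kw["volume"] * ctr * kw["cpc"]
    return total

# Hypothetical inputs; real values would come from rank tracking and ads data.
tracked = [
    {"volume": 12_000, "position": 6, "cpc": 14.50},
    {"volume": 3_400, "position": 2, "cpc": 22.00},
]
print(f"${paid_equivalent_value(tracked):,.2f} estimated monthly paid-equivalent value")
```

Read any number produced this way as directional: the CTR curve, the CPC data, and the tracked keyword set all carry modeling assumptions, which is exactly the limitation the source pages disclose.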

Why some metrics are not summed

Not every published metric shares the same unit or measurement logic. For example:

  • AI Overview mention growth and ChatGPT brand-mention growth are not the same metric
  • strengthened cited pages and activated citation opportunities are not the same metric
  • a 3-day pilot is not the same as a 5-month engagement

Where the units differ, this page compares rather than combines.

[ FAQ ]

Questions About This Synthesis

How is AI visibility measured on the source pages?

On the public source pages, AI visibility is reflected through measures such as brand mentions in AI-generated answers, citation-related visibility, AI Share of Voice framing, and the public-source environments that influence how AI systems describe a brand.

Why publish a cross-case synthesis instead of four separate recaps?

Because the goal here is not just to repeat four isolated stories. It is to show the repeated pattern that appears across multiple categories while keeping the measurement limits visible. A synthesis is more defensible than pretending the four cases are directly interchangeable.

Why are the AI metrics not normalized into a single score?

Because the source pages themselves publish different AI-surface metrics. This page preserves the original reporting logic instead of rewriting the data into a synthetic metric that was never published.

Are the published monetary values exact revenue figures?

No. On the source pages, those values are described as directional estimates based on tracked keyword visibility and modeled paid-equivalent value. They are not exact revenue attribution.

Does this page reveal the internal workflows behind the results?

No. It intentionally stops at the level of public outcomes, category context, and high-level visibility logic.

What is the strongest repeated lesson across the four cases?

The strongest repeated lesson is that brands do not compete only on their own websites anymore. They also compete in the public environments that shape Google results, AI-generated summaries, comparison behavior, and trust formation.

Which categories are most exposed to AI visibility problems?

All four show sensitivity, but the public pages suggest that trust-sensitive and comparison-sensitive categories are especially exposed when unmanaged third-party context shapes how AI systems describe a brand.

Do all four categories respond to the same playbook?

No. The public pages suggest the opposite: category structure changes which signals matter most. That is why the repeated pattern is strategic, not formulaic.

[ FINAL TAKEAWAY ]

The safest and strongest way to read these four public case studies is this:

AI visibility growth appears strongest when brands improve not only where they rank, but also where they are cited, discussed, validated, and recommended across the public environments that shape modern buying decisions.

That is the repeated pattern this synthesis can support.

Request an AI Visibility Audit

Understand how your brand currently appears across Google rankings, AI-generated answers, citation environments, and high-intent decision-stage searches.
