Growth Newsletter #323
Last week, we made the case that AI search is the first truly disruptive distribution platform since mobile. And that the zero-sum game of SEO is giving way to something startups can actually win.
This week, Zach Boyette, Managing Partner @ Saturation, is back with the playbook.
This week's tactics
How to Be the Answer: 5 Levers for AI Search Visibility
By Zach Boyette, Managing Partner @ Saturation
SEO was zero-sum. AI search isn't.
For twenty years, search was a cage match. Hundreds of companies fighting for three spots. If you weren't in the top three organic results, you were effectively invisible. Everyone targeted the same keywords, built the same kinds of pages, competed for the same finite inventory.
Why was it so brutal? Because the search engine had almost no context to work with. The average Google query is 3.4 words (Semrush). "Best CRM for startups." The engine returns the same ten links to everyone who types that phrase. Three winners, everyone else gets nothing.
AI search broke that constraint.
The average AI prompt is around 23 words (Semrush, 80M clickstream records). Nearly seven times longer. But length isn't the point. Context is. Users don't type keywords. They describe situations, constraints, and goals in ways they never would in a search bar.
A million people asking about "the best CRM" no longer get the same answer. They get different answers because they have different context. The inventory of slots isn't three. It's functionally unlimited.
Today, this is driven by query specificity and the long tail. Users who describe their situation in detail get answers matched to that detail. Tomorrow, as LLM memory and personalization deepen, even identical text queries will produce different answers for different users based on conversation history and stored preferences. The addressable inventory grows further still.
This is what makes AI search a genuine blue ocean. You don't have to beat the incumbent for position 1. You have to be the right answer for a specific user asking a specific question. That's a game startups can win.
How to claim the new inventory
Here are five levers for claiming that inventory. They're not sequential steps. They're levers you pull based on where you are and what's holding you back. Some companies need all five. Some need two.
Lever 1: Clarity — Make your brand machine-readable
AI models need to understand what you do, who you serve, and how you're different. Not in a tagline. In plain, structured, retrievable language across every surface you control.
If an LLM can't parse what your company does from your website, nothing downstream matters.
In practice:
- Your homepage, product pages, and about page should answer core questions in plain language: What does this company do? Who is it for? How is it different? Don't bury this in brand copy or clever headlines. State it directly.
- Structure content with clear heading hierarchies. Pages structured in 120–180 word sections between headings earn 70% more LLM citations than pages with sparse or inconsistent structure.
- Use comparison tables where relevant. Structured tables increase AI citation rates by approximately 2.5x compared to the same information in paragraph form. Gloroots has a great example of this.
- Maintain absolute consistency. If your product page says you serve "mid-market SaaS companies" but your blog says "startups and enterprises," you've introduced ambiguity. AI models lower confidence in brands with contradictory positioning.
One overlooked high-ROI move: your help center and product documentation. Most companies treat docs as a support cost center. But help center content is well-positioned for AI citation because it naturally matches the long-tail, specific questions users ask AI: "Does this product integrate with X?" "Can I use this for Y?" These are exactly the prompts where startups can win.
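The section-length guideline above (120–180 words between headings) is easy to audit programmatically. Here's a minimal sketch that splits markdown-style content on headings and flags sections outside the target range. The thresholds and heading format are assumptions from the guideline above, not a standard; tune them to your own content.

```python
import re

def audit_sections(markdown_text, lo=120, hi=180):
    """Split markdown content on ATX headings (#, ##, ...) and flag
    sections whose word counts fall outside the target range."""
    # re.split with a capture group keeps the headings in the result,
    # so parts alternates: [preamble, heading, body, heading, body, ...]
    parts = re.split(r"^(#{1,6}\s+.*)$", markdown_text, flags=re.MULTILINE)
    report = []
    for heading, body in zip(parts[1::2], parts[2::2]):
        words = len(body.split())
        status = "ok" if lo <= words <= hi else "outside target"
        report.append((heading.lstrip("# ").strip(), words, status))
    return report
```

Run it against your key pages and rework anything flagged: very long sections are hard for a model to extract from, and very short ones rarely contain a self-contained answer.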
Lever 2: Positioning — Your brand story, everywhere
This lever isn't about "AI positioning" as a standalone exercise. It's about having exceptional positioning and messaging at the company level, then delivering it consistently across every surface where AI models encounter your brand.
In traditional SEO, you could rank #1 with mediocre brand positioning if your content and backlinks were strong. SEO was a closed-loop system. You played within Google's rules, on one stage. Your brand only had to perform in that vacuum. AI search breaks that loop entirely. The model pulls from training data, search results, social media, reviews, forums, press, help docs. Every surface where your brand appears (or doesn't) feeds its understanding. With SEO, you could hide a weak brand behind good technical execution. With AI, there's nowhere to run.
In practice:
- Get your core positioning sharp at the company level first. Who you serve, what outcome you deliver, why you're different. Then make sure that story is expressed consistently across your website, documentation, social profiles, and anywhere else your brand shows up.
- Create content that explicitly connects your brand to the categories and problems you want AI to associate you with. This reads like basic marketing, but most websites are surprisingly vague about the outcomes they deliver.
- Build "versus" and "alternative" content. If a user asks "X vs Y," the model looks for content that directly addresses that comparison.
What's still unclear: How quickly models update brand representations. RAG-based answers can shift within days. Training-data-based answers can take months.
Lever 3: Off-site presence — Win the away game
Research from AirOps found that 85% of brand mentions in AI-generated answers come from third-party sources, not the brand's own website. Your "away game" matters more than your "home game."
This is where startups have a structural advantage. In SEO, off-site authority required years of domain authority and backlink building. With AI search, a new company can get mentioned on URLs that LLMs already trust and appear in answers almost immediately. The barrier is relevance and authenticity, not accumulated SEO equity.
In practice:
- SEO/backlinks: Get included in comparison articles, listicles, and review roundups that AI models already cite. Nearly 90% of third-party brand mentions in AI answers come from these structured formats. Find the pages that appear for your target prompts and get on them.
- Social/communities: Contribute useful content to communities where your audience asks questions. Reddit and Quora responses surface frequently in AI retrieval. Substantive answers from domain experts, not drive-by product mentions.
- PR/editorial: Earn press and editorial coverage. AI models weigh editorial sources heavily because they signal independent validation.
- Partnerships/co-marketing: Build partnerships where complementary brands mention you. Integration pages, guest posts, co-authored research. Each creates a retrievable node AI models can pull from.
Notice the pattern. These bullets map to different organizational functions: SEO, social, PR, partnerships. This is why AI search requires cross-functional coordination in a way that traditional SEO never did. Old SEO lived in one team. AEO touches brand, content, PR, partnerships, and product. No single function owns it, which means no single function can execute it.
This is the thesis behind Saturation: AI search requires a cross-functional discipline that needs a coordinating layer, not just an SEO team with a new mandate.
What varies by model: Perplexity leans on recent web content. Claude draws more from training data. Others blend both. Diversified off-site presence hedges across all of them.
Lever 4: Content structure — Format for AI citation
Even if AI models find your content, they won't cite it if they can't extract a clean, self-contained answer.
In practice:
- Lead every section with a direct answer in 40–60 words before elaborating. AI models extract from the first 100 words far more often than from buried conclusions.
- Write self-contained paragraphs. If an AI model pulls a single paragraph from your page, does it communicate something useful on its own?
- Include specific data, numbers, and attributable claims. Adding statistics to content improves AI visibility by 41%, the single most effective optimization technique tested (GEO optimization research).
- Use markdown-style tables for comparisons. Structured tabular data gets parsed with high accuracy by LLMs.
A critical caveat: fully AI-generated content does not perform well in AI search. 82% of articles cited by AI models are human-written (Graphite, 2026). If you're tempted to scale AEO with AI-written content farms, the data says it won't work. AI-assisted content with significant human editing is a different story and may be effective, but the research on that is still emerging.
Lever 5: Measurement — Reconnaissance and attribution
AEO measurement has two distinct layers, and conflating them causes confusion.
The first layer is reconnaissance: understanding how AI models currently represent your brand. This is low-effort, high-signal, and available to anyone right now. Run your highest-value prompts through major AI models. Document whether you appear, how you're described, and which competitors show up. This takes 2–3 hours and gives you more signal than any tool currently offers. It's not "measurement" in the analytics sense. It's intelligence gathering. You should be doing this monthly at a minimum.
The second layer is attribution: connecting AI search visibility to pipeline and revenue at scale. This is genuinely hard. No equivalent of Google Search Console exists for AI search. You can't see impressions, clicks, or referral paths the way you can with traditional search. Tools are emerging (Profound, Otterly, AirOps) but the space is early.
Don't let the difficulty of attribution stop you from doing reconnaissance. They're different activities with different maturity levels.
In practice:
- Track which third-party URLs get cited when AI mentions your category. These are your highest-leverage targets for Lever 3.
- Monitor for inaccuracies. If an AI model misrepresents your product, that's a clarity problem (Lever 1) or a third-party information problem (Lever 3). Fix the source; the model eventually self-corrects.
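Tracking cited URLs reduces to a frequency count. A minimal sketch, assuming you've collected source URLs from AI answers (e.g. Perplexity's citation lists) across your tracked prompts:

```python
from collections import Counter
from urllib.parse import urlparse

def top_cited_domains(cited_urls, n=5):
    """Rank the third-party domains AI answers cite most often.

    `cited_urls` is a flat list of source URLs collected from AI
    answers across your tracked prompts.
    """
    domains = Counter(urlparse(url).netloc for url in cited_urls)
    return domains.most_common(n)
```

The domains at the top of that list are your Lever 3 target list: the pages models already trust for your category.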
Where to start
Not every lever matters equally for every company. Here's how to prioritize:
If AI models can't accurately describe what you do: Start with Lever 1 (Clarity). High-leverage, low-cost, entirely within your control. Most companies are surprisingly bad at clearly stating what they do in structured, retrievable language. The help center tactic alone can shift your visibility.
If you have clarity but aren't showing up in answers: Lever 3 (Off-site presence) is likely your biggest unlock. Audit what AI says about your category, identify the pages that get cited, and pursue inclusion. This takes more effort but moves the needle most.
If you're showing up but getting misrepresented: Lever 2 (Positioning). Your brand story isn't consistent enough across sources for models to synthesize a clean narrative.
Regardless of where you are: Run reconnaissance (Lever 5) monthly. Even 30 minutes builds pattern recognition about how AI models see your brand.
Don't bother with these yet: hyper-technical schema optimization, LLMs.txt files, model-specific prompt engineering. They're real tactics, but they're optimizations on top of the fundamentals.
Real results
In our work with clients at Saturation, this framework is producing measurable outcomes:
Bluepeak Fiber Internet: After restructuring site content for machine readability and launching a targeted off-site campaign, they went from appearing in 0% of relevant AI queries to being recommended in over 40% within 90 days.
Home Nation: By rewriting product documentation and help center content for AI-citation readiness, they saw a 3.2x increase in AI-referred traffic within 60 days.
About Saturation
Saturation is our new AI search agency. We help startups and growth-stage companies build visibility across AI models, combining the technical infrastructure of AEO with the cross-functional brand strategy that AI search demands.
If AI search matters to your business, book a free audit and we'll show you where you stand across major AI models.
Wrapping up
Stay tuned for Part 3: How to turn AI citations into a compounding engine.
Zach will be back next week to break down the content cadences, monitoring stack, and mechanics required to build an AI search flywheel.
Thanks y'all!