The Selection Economy: How AEO and GEO Are Redefining Brand Authority
Larry Tucker
March 27, 2026
12 minutes read
Search is still where questions begin, but discovery is increasingly resolved inside an answer layer. Your brand isn’t always encountered as a page someone chooses to visit. More often, it shows up as a sentence someone accepts.
That shift is bigger than a feature release. When the interface answers more completely, fewer people leave to compare options, click through to sources, or explore beyond the first screen. We’re moving into a Selection Economy—an environment where brands compete to be included, cited, and recommended inside generated responses, not merely to “rank well.”
In this article, I’m going to look at what’s driving the change, what it does to consideration and measurement, and what brands can do to stay legible (and credible) when discovery is increasingly mediated by generative systems.
The interface changed, so the economics are changing
Google has been explicit about pushing Search toward a conversational flow: AI Overviews that invite follow-up questions, and a deeper “AI Mode” experience. Whether you love that direction or not, it changes the economic model of discovery. The more the page answers, the less incentive there is to click.
Pew Research Center’s analysis of U.S. browsing data makes the behavioral shift hard to ignore. When an AI summary appeared, users clicked a traditional result link in 8% of visits—versus 15% when no AI summary appeared. Clicks on links inside the summary itself were rarer still: 1% of visits. Pew also found that users ended their browsing session after visiting a page with an AI summary 26% of the time, compared with 16% on pages with only traditional results.
Pic. % of Google searches that resulted in users taking specific actions (Source).
It’s tempting to reduce this to “SEO is dead,” but that’s not the right diagnosis. What’s happening is more specific: the unit of competition is moving from the click to the citation and the recommendation.
From visibility to veracity
For years, marketers treated clicks as a workable proxy for intent. In a ranked-list world, the click was a visible choice. In a generated-answer world, that proxy breaks. The model can satisfy the intent without sending you the visit, and it can still influence the decision without ever naming you.
That’s why AEO (answer engine optimization) and GEO (generative engine optimization) matter. Not because they’re the newest acronyms, but because they describe a new gate. Generative systems don’t only sort results; they synthesize a stance. If your claims can’t be verified across credible sources, the model either hedges, omits, or defaults to something it trusts more.
This is where “brand authority” starts to feel less like a story you tell and more like a record you can prove. The systems making selections are effectively asking:
Are the brand’s key claims consistent across reputable sources?
Do those sources agree on the entity (name, product, category, differentiators)?
Can the claim be traced back to something structured, specific, and stable?
When those conditions are met, models cite you confidently. When they aren’t, you may be present on the web and still absent from the answer.
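One way to make a claim "structured, specific, and stable" is schema.org markup published alongside your own pages. Here is a minimal sketch in Python that emits JSON-LD for a brand entity; the brand name, URLs, and claim text are hypothetical placeholders, not a prescription:

```python
import json

# Hypothetical brand entity expressed as schema.org JSON-LD.
# A consistent name, URL, and description across surfaces give
# answer engines a stable record to verify claims against.
entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",                   # must match listings and press coverage
    "url": "https://example.com",
    "description": "Maker of X for Y teams",   # the core claim, stated plainly
    "sameAs": [                                # ties the entity together across surfaces
        "https://www.linkedin.com/company/example-brand",
        "https://en.wikipedia.org/wiki/Example_Brand",
    ],
}

print(json.dumps(entity, indent=2))
```

The point isn't the markup itself; it's that every field repeats exactly what your other surfaces say, so a model resolving the entity never has to guess.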
Consideration is collapsing, and “page one” is no longer the battlefield
Traditional funnels assumed a visible comparison step: the user browses options, evaluates tradeoffs, and clicks deeper. Generative systems still compare, but they do it behind the curtain. They ingest multiple sources, compress the tradeoffs, then output a narrative with a small shortlist or a single recommendation.
That creates a winner-take-most dynamic that’s easy to underestimate. A brand can be “in the market” and still be missing at the moment a buyer asks the question that triggers selection.
There’s also a second-order effect: answer layers are drifting into more commercially meaningful queries. Semrush’s analysis of 10M+ keywords across 2025 shows AI Overviews stabilizing at around 16% of queries by late 2025 after earlier volatility. The larger point is that coverage isn’t limited to definitions and trivia. As the feature expands, it becomes a brand-defense issue, not an “upper funnel” experiment.
Pic. Share of keywords triggering AI Overview (Source).
If you’re leading growth, brand, or performance, this reframes the job. You’re no longer competing for a slot in a list. You’re competing to be the system’s final answer in the moment that matters.
Semantic authority: making your brand legible to machines
SEO used to reward the shape of a page. GEO rewards the shape of a brand’s footprint.
The brands that show up consistently in answer layers tend to share one advantage: they are easier to verify. Their core claims are repeated coherently across high-trust sources, and they’re expressed in ways that machines can parse without guessing.
A practical way to organize this is what I call a claim graph:
Core claims: What you want to be known for—category, outcomes, constraints, proof. Not ten messages. Two or three that you can defend.
Evidence nodes: Third-party validation, standards, methodology pages, leadership credentials, transparent comparisons, documentation. These are the assets that make a claim feel solid instead of promotional.
Consistency surfaces: Where the brand appears across the open web: publisher coverage, listings, industry knowledge bases, community citations, partner pages, structured data.
Update discipline: A cadence for refreshing what must stay current (pricing, availability, compliance, product lines) without rewriting everything into vague language.
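As a thought experiment, the claim graph above can be sketched as a small data model. The class names, fields, and thresholds here are illustrative assumptions, not a real tool:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A core claim plus the evidence nodes and consistency surfaces behind it."""
    statement: str                                  # what you want machines to repeat
    evidence: list = field(default_factory=list)    # third-party validation, docs, methodology
    surfaces: list = field(default_factory=list)    # where the claim appears on the open web
    last_reviewed: str = ""                         # update discipline: when it was last checked

    def is_corroborated(self, min_evidence=2, min_surfaces=3):
        # A claim is only "solid" if it is both evidenced and
        # consistently repeated across the open web.
        return len(self.evidence) >= min_evidence and len(self.surfaces) >= min_surfaces

claim = Claim(
    statement="Example Brand's platform reduces setup time for mid-market teams",
    evidence=["https://example.com/methodology", "https://analyst.example/report"],
    surfaces=["publisher article", "industry knowledge base", "partner page"],
    last_reviewed="2026-03",
)
print(claim.is_corroborated())  # True: 2 evidence nodes, 3 consistency surfaces
```

A claim with strong messaging but empty evidence and surface lists fails the check, which is exactly the "present on the web but absent from the answer" failure mode described above.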
This is where media and creative strategy meet. Strong messaging still matters. But messaging that can’t be corroborated becomes harder for machines to repeat.
A new scoreboard: measuring inclusion, accuracy, and recommendation
Most organizations still report what the legacy funnel made easy to report: impressions, clicks, sessions, rankings. Those aren’t useless, but they no longer tell you whether you’re being selected or whether you’re being described correctly.
A better approach is to focus on three things that map to how answer layers behave. Think of it as an Inclusion Index: a simple way to track whether you’re showing up, being described accurately, and getting recommended when the questions that matter are asked:
Inclusion: when the questions that define your category are asked, does the answer mention you at all?
Accuracy: when you do appear, are your claims, category, and differentiators described correctly?
Recommendation: how often are you the option the system actually suggests, rather than one name in a list?
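To make the Inclusion Index concrete, here is a minimal sketch of how it might be scored from a manual audit of answer-engine responses. The query set, field names, and scoring choices are all assumptions for illustration:

```python
# Each record is one audited answer-engine response to a question
# that matters for the brand (queries here are hypothetical).
audits = [
    {"query": "best X for mid-market teams", "included": True,  "accurate": True,  "recommended": True},
    {"query": "X vs Y comparison",           "included": True,  "accurate": False, "recommended": False},
    {"query": "how to choose an X vendor",   "included": False, "accurate": False, "recommended": False},
]

def inclusion_index(audits):
    """Rates for: showing up, being described correctly, and being recommended."""
    n = len(audits)
    inclusion = sum(a["included"] for a in audits) / n
    # Accuracy only counts responses where the brand actually appeared.
    appeared = [a for a in audits if a["included"]]
    accuracy = sum(a["accurate"] for a in appeared) / len(appeared) if appeared else 0.0
    recommendation = sum(a["recommended"] for a in audits) / n
    return {k: round(v, 2) for k, v in
            {"inclusion": inclusion, "accuracy": accuracy, "recommendation": recommendation}.items()}

print(inclusion_index(audits))  # {'inclusion': 0.67, 'accuracy': 0.5, 'recommendation': 0.33}
```

Even a small hand-scored sample like this surfaces omission and misdescription long before they show up in revenue.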
There are signs that platforms recognize the pressure they’ve created. Google, for example, is iterating on how sources are shown in AI answers, including more prominent source indicators and grouped previews.
Referral traffic from AI assistants is also becoming trackable at scale. Similarweb’s 2025 Generative AI report estimates that AI platforms drove over 1.1B referral visits in June 2025, up 357% year over year.
The implication is not to “chase AI traffic” but to accept that selection is happening whether you measure it or not. If you don’t have a way to spot omission, misattribution, or competitor defaulting, you’ll diagnose problems late.
The governance fight is now explicit
This shift isn’t only a product evolution. It’s also a conflict over who gets to summarize the web and under what terms.
For instance, in February 2026, the European Publishers Council filed an antitrust complaint with the EU focused on Google’s AI Overviews, arguing that AI-generated summaries use publisher content without effective consent or compensation and undermine the economics of journalism.
Regulatory intervention, publisher negotiations, platform UI tweaks—however it plays out, the rules aren't settling down anytime soon. Brands that anchor their discovery strategy to a single platform will be the first to feel every shift.
The Open Garden approach: portability over black-box dependence
Meanwhile, walled gardens are getting better at ingesting brand value without returning much in the way of traffic. That’s the zero-click reality in its most practical form: you can shape the buyer’s decision and still never see the visit.
The counter-move is content portability. Build your core narrative so it’s technically robust—structured, corroborated, and consistent enough that it travels cleanly across generative engines. If you leave your story to a black-box algorithm, you’re accepting whatever interpretation it decides to synthesize. If you own the underlying data structures and evidence trails, you give those systems far less room to improvise.
This is the logic behind AI Digital’s Open Garden model: an engine-agnostic way to strengthen what machines can verify, so your authority holds up across ecosystems rather than being trapped inside one platform’s ruleset.
Open Garden works because it treats “being selected” as a cross-platform outcome. Instead of optimizing for one engine’s quirks, it focuses on the durable layer underneath: entity consistency, evidence-backed claims, and a footprint that can be referenced without distortion. When answer layers shift UI, weighting, or citation behavior, that foundation still holds, so the brand narrative stays stable even as the distribution mechanics change.
Closing thoughts: signal, not verdict
There’s a mistake I see brands making already: treating generated answers as a verdict. They’re not. They’re a signal—about what the ecosystem believes is true, which sources it trusts, and which narratives are easiest to justify.
That’s also why this moment is an opportunity. If you can make your claims clearer, more consistent, and more provable than your competitors’, you don’t just gain traffic. You gain default status in the places where decisions begin.
So yes, keep doing the basics well. But add a new discipline alongside them: treat semantic authority as brand infrastructure. In the Selection Economy, that infrastructure is what keeps you visible when visibility is no longer the point.
If you want a practical way to approach that shift, AI Digital’s Open Garden model is built for it—engine-agnostic, evidence-led, and designed around content portability so your core narrative stays consistent across generative systems as the rules keep changing. If you’d like to talk through what this could look like for your category (and where your brand is currently being selected, omitted, or misframed), get in touch, and we’ll compare notes.