Foot Traffic Attribution: Measuring Real-World Impact in a Privacy-First Era
January 21, 2026
17 min read
Proving the real-world impact of digital ads has become one of the biggest challenges for omnichannel marketers, especially as privacy regulations tighten and online metrics fall short of capturing in-store behavior. In this article, we unpack foot traffic attribution: what it really measures, how it works, and how to use it responsibly.
Digital advertising has never been short on numbers. Clicks, views, video completion rates, platform “lift,” on-platform conversions—most of it arrives neatly packaged, confidently attributed, and suspiciously frictionless.
Meanwhile, the real world keeps doing what it does: people visit stores when it’s convenient, when a friend recommends something, when there’s a promotion, when the weather changes, when they pass a location, or when a need finally becomes urgent. The gap between tidy digital reporting and messy offline behavior is exactly where foot traffic attribution has found its moment.
And it is a moment. Even in the U.S., ecommerce accounted for 16.4% of total retail sales in Q3 2025—meaning most retail spend still happens outside the browser.
This is why foot traffic attribution is suddenly showing up everywhere: retail, automotive, QSR, telecom, big-box, even “brand” campaigns that used to accept fuzzier measurement. Done well, it gives you a practical way to connect exposure to real-world visits. Done poorly, it becomes a new flavor of measurement inflation, complete with black-box methodology and dashboards that feel precise without actually being trustworthy.
💡 If you want a broader lens on why measurement is getting rebuilt right now, AI Digital’s 2026 Media Trends Report is a useful companion.
The framing that matters for 2026: foot traffic attribution is signal, not verdict. It can help you steer. It cannot “prove” causality by itself. It becomes far more valuable when you treat it like directional evidence, pair it with incrementality, and pressure-test the assumptions underneath.
⚡ ESW’s research on US shopping behavior found that 23% of shoppers research products online and then buy in-store. That’s exactly the gap foot traffic attribution tries to cover—digital influence that doesn’t end in a click.
Foot traffic attribution (also called footfall attribution or footfall measurement) is a measurement approach that estimates whether people who were exposed to ads later visited a real-world location.
At a high level, it tries to answer questions like:
Did this campaign increase in-store visits?
Which channels drove the strongest incremental lift in visits?
What audiences were most likely to show up after exposure?
How did timing (daypart, day of week, proximity) affect store visitation?
The important distinction is that foot traffic attribution isn’t trying to track an online action like a click or a purchase event. It’s attempting to connect media exposure to offline movement, usually using privacy-safe location signals and statistical comparison (not deterministic “this person definitely went because of this ad”).
That’s also why it’s often misunderstood. In many decks, foot traffic attribution gets presented like online conversion tracking: ad shown → visit happens → credit assigned. Real-world behavior isn’t that clean. Good footfall measurement approaches admit uncertainty, quantify noise, and lean heavily on control groups and baselines.
Why it’s suddenly everywhere (and why that’s not always a good thing)
Foot traffic attribution is everywhere for three reasons:
Offline impact is back in focus. Retail and service brands need evidence that upper- and mid-funnel media is doing more than generating clicks.
Channels like CTV and DOOH grew up. They deliver real reach, but their performance story depends on connecting exposure to downstream behavior.
Signal loss pushed marketers toward probabilistic measurement. With consent requirements and identifier changes, measurement is shifting from user-level certainty to cohort-level inference.
The risk is obvious: when a method becomes popular, vendors race to make it look easy. That’s how you get dashboards that promise precision they can’t honestly deliver, and “lift” numbers that don’t survive basic scrutiny.
💡 If you want a critique of black-box measurement culture more broadly, AI Digital’s “The biggest AI blind spot in advertising” is aligned with the skepticism you should bring to any vendor model.
How foot traffic attribution works
Here’s the practical version: a foot traffic attribution system tries to (1) define a valid visit, (2) identify who was exposed to ads, (3) match exposure to visits using privacy-safe linking, and (4) estimate incremental impact using controls.
Data inputs: where the “visit” signal comes from
A “visit” typically comes from one or more location-related signals, such as:
Device location services (GPS-level signals when available)
Wi-Fi and Bluetooth proximity signals (context-dependent)
App-derived location events (when users have granted permissions)
Aggregated location panels (opted-in cohorts managed by providers)
Quality varies wildly depending on permissions, device settings, app behavior, and the provider’s filtering. A mature measurement partner will talk openly about coverage gaps and confidence thresholds rather than pretending every environment is equally measurable.
Defining a valid visit: geofences, polygons, dwell time, exclusions
“Visited the store” sounds simple until you try to define it.
Most systems use:
Geofences or polygons: A boundary around a location (polygons are often more accurate than circular fences).
Dwell time thresholds: A minimum time on-site to reduce drive-by false positives.
Exclusions: Rules to filter out employees, neighboring businesses, shared parking lots, or “visits” that occur too frequently.
The difference between a good and bad footfall measurement setup often comes down to the visit definition. If a vendor can’t explain theirs clearly, treat their results as marketing.
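To make the definitional point concrete, here is a minimal sketch of a visit validator: a ray-casting point-in-polygon test plus a dwell-time threshold. The function names, the 5-minute threshold, and the ping format are illustrative, not any vendor's actual logic.

```python
from datetime import datetime, timedelta

def point_in_polygon(lon, lat, polygon):
    """Ray-casting parity test: is (lon, lat) inside the polygon boundary?
    polygon is a list of (lon, lat) vertices."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if (yi > lat) != (yj > lat):
            # Longitude where the edge crosses this latitude
            x_cross = xi + (lat - yi) * (xj - xi) / (yj - yi)
            if lon < x_cross:
                inside = not inside
        j = i
    return inside

def is_valid_visit(pings, polygon, min_dwell=timedelta(minutes=5)):
    """A visit counts only if in-polygon pings span the dwell threshold,
    which filters out drive-by false positives."""
    inside = [t for t, lat, lon in pings if point_in_polygon(lon, lat, polygon)]
    return bool(inside) and (max(inside) - min(inside)) >= min_dwell
```

A real setup layers exclusions on top of this (employees, neighboring tenants, implausibly frequent visitors), which is exactly the part vendors should be able to explain.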
{{26-Foot-Traffic-Attribution-1="/tables"}}
Exposure matching: how ads are linked to devices or people
This is where things get delicate.
Exposure can be known at different levels depending on channel:
Mobile/in-app: Device-level exposure may be available, but consent and platform rules shape what’s linkable.
CTV: Exposure is often at the household or device level, then modeled to likely visitors.
DOOH: Exposure is inferred from time and place (who was near a screen during a window), then matched probabilistically.
Matching approaches often rely on identity graphs, hashed identifiers, or cohort-level aggregation. The more a partner talks about “deterministic matching” without acknowledging consent and coverage, the more you should interrogate the methodology.
{{26-Foot-Traffic-Attribution-2="/tables"}}
Attribution windows and decay
Foot traffic attribution depends on an attribution window, which is simply the time after exposure during which a visit can “count.”
Short windows reduce noise but risk missing consideration-driven visits. Longer windows capture more behavior but increase the chance you’re counting unrelated trips.
Better systems:
Use different windows by category (QSR vs furniture is not the same purchase cycle)
Apply decay (a visit 1 day after exposure is weighted more than a visit 12 days later)
Let you run sensitivity checks across windows
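The window-plus-decay idea above can be sketched in a few lines, assuming an exponential decay. The 7-day window and 2-day half-life are illustrative defaults, not standard values; a real system would tune them by category.

```python
from datetime import datetime, timedelta

def decayed_credit(exposure_time, visit_time, window_days=7, half_life_days=2.0):
    """Credit a visit to an exposure only inside the attribution window;
    the weight decays with a half-life, so a day-1 visit counts more
    than a day-6 visit."""
    lag = visit_time - exposure_time
    if lag < timedelta(0) or lag > timedelta(days=window_days):
        return 0.0  # visit before exposure, or outside the window
    lag_days = lag.total_seconds() / 86400
    return 0.5 ** (lag_days / half_life_days)
```

Running the same data through several `window_days` values is the sensitivity check mentioned above: if lift collapses when the window shifts by a couple of days, the result is fragile.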
Control groups, incrementality, and lift
This is the core of credibility: you need a comparison group.
A basic structure looks like this:
Exposed group: People/devices/households that were served ads
Control group: Similar people/devices/households that were not served ads (or are held out)
Then you compare visit rates. The difference is your estimated lift.
Without a real control, foot traffic attribution drifts into correlation. With a good control, it becomes a real measurement tool.
⚡ If a vendor can’t explain the control, you’re not looking at attribution, you’re looking at storytelling.
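The exposed-vs-control arithmetic itself is simple; the credibility lives entirely in how the control group was built. A minimal sketch of the lift calculation (names illustrative):

```python
def visit_lift(exposed_visitors, exposed_total, control_visitors, control_total):
    """Percent lift in visit rate for the exposed group vs the control,
    plus the implied number of incremental visits."""
    exposed_rate = exposed_visitors / exposed_total
    control_rate = control_visitors / control_total
    incremental_rate = exposed_rate - control_rate
    lift_pct = 100 * incremental_rate / control_rate
    incremental_visits = incremental_rate * exposed_total
    return lift_pct, incremental_visits
```

For example, a 6% exposed visit rate against a 5% control rate is a 20% lift; on 10,000 exposed devices that implies roughly 100 incremental visits.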
Outputs: what you actually get in a report
Most foot traffic attribution reporting includes:
Attributed visits (sometimes modeled)
Incremental visits (the number that matters more)
Lift (percent increase vs control/baseline)
Cost per visit and ideally cost per incremental visit
Breakdowns by channel, audience, geography, creative, daypart
{{26-Foot-Traffic-Attribution-3="/tables"}}
⚡ If a report shows lift but hides match rate, it’s incomplete by design. You can’t judge signal strength if you never see coverage and baselines.
Treat “attributed visits” as descriptive and “incremental visits” as decision-grade (assuming the test design holds up).
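The gap between descriptive and decision-grade metrics can be made explicit. A minimal sketch, with illustrative names, of why cost per visit and cost per incremental visit tell different stories:

```python
def visit_cost_metrics(spend, attributed_visits, incremental_visits):
    """Cost per visit vs cost per incremental visit. The spread between
    them shows how many 'attributed' visits would likely have happened
    anyway (the baseline share)."""
    cpv = spend / attributed_visits
    cpiv = spend / incremental_visits if incremental_visits > 0 else float("inf")
    baseline_share = 1 - incremental_visits / attributed_visits
    return {
        "cost_per_visit": cpv,
        "cost_per_incremental_visit": cpiv,
        "baseline_share_of_attributed_visits": baseline_share,
    }
```

With $50,000 of spend, 10,000 attributed visits, and 2,000 incremental visits, cost per visit is $5 but cost per incremental visit is $25, and 80% of attributed visits were baseline traffic.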
Channels that commonly use foot traffic attribution
Foot traffic attribution shows up most often in channels where reach is high and clicks are not the main point.
CTV and streaming TV
CTV is the poster child for foot traffic attribution because it has two qualities marketers love:
Big-screen attention and scale
Digital-style targeting and optimization
Streaming has grown to the point where it’s reshaping baseline media planning. Nielsen reported streaming represented 46.7% of TV viewership in November 2025, surpassing broadcast and cable combined.
⚡ CTV also keeps growing as a budget line item—eMarketer estimated US CTV ad spending at $33.35B in 2025 and rising to $46.89B by 2028. That scale makes foot traffic attribution tempting, but it also makes sloppy audience modeling more expensive.
But measurement is not straightforward:
Exposure can be household-level, not person-level
Visits are individual behavior, not household behavior
Matching depends on identity graphs, privacy-safe panels, or modeled linkages
Match rates vary by environment and consent
This is why CTV footfall attribution should be read directionally unless you have strong controls and transparency.
Programmatic DOOH
Programmatic DOOH is another natural home for foot traffic attribution because it’s inherently location-adjacent: screens exist in real places, and many campaigns are planned around proximity.
The measurement catch is that DOOH often relies on modeled exposure:
You rarely know exactly who looked at a screen
You infer likely exposure based on time, place, and movement patterns
Then you compare visitation patterns in exposed vs unexposed cohorts
DOOH’s growth is one reason foot traffic attribution keeps expanding. OAAA reported U.S. out-of-home advertising revenue surpassed $9.1B in 2024, and digital OOH accounted for 34% of total OOH ad spend.
Mobile and in-app
Mobile is still where foot traffic attribution feels most intuitive, because the device is physically present and can generate location signals (when users allow it).
The tradeoffs:
Consent and platform rules shape what you can observe
Location permissions vary by app category and user trust
Bidstream/location “shortcuts” can be noisy or non-compliant depending on sourcing
The best mobile foot traffic setups typically use:
Clear consent-based data sourcing
Conservative visit definitions
Strong baselines and holdouts
Cross-validation with POS or store-level trends
Foot traffic attribution vs other measurement models
Foot traffic attribution adds an offline dimension to marketing measurement, and it differs in approach from traditional digital attribution models. Below we compare it with a few common measurement frameworks:
Foot traffic vs last-click attribution
“Last-click” attribution gives all the credit to the final ad a customer clicked before converting—best suited to online actions like e-commerce purchases or form fills. Foot traffic attribution usually has no clicks to work with: it links ad exposure (impressions) to a later store visit, treating the visit as the “conversion event,” so it can capture offline impact that last-click analytics would miss.
Foot traffic attribution also tends to use multi-touch or rules-based crediting (fractional credit, channel hierarchies, time-based weighting), though some setups can mimic “last-touch” by crediting the last ad seen within the attribution window. Because in-store visits are often influenced by multiple factors, marketers often pair footfall attribution with lift tests to sanity-check results.
Foot traffic vs online conversion attribution
Online conversion attribution tracks digital actions (add-to-cart, purchase, signup) and ties them to marketing touchpoints using signals like cookies, click IDs, or logins. Foot traffic attribution tracks real-world actions—store visits—using location signals and often device IDs. Both can use multi-touch models, but the data sources (and what counts as a “conversion”) are different.
The other key difference is certainty and value. An online purchase is a confirmed transaction with a clear revenue amount. A store visit is a proxy metric: it suggests intent, not a guaranteed sale, and its value is usually estimated (conversion rate × average order value). That’s why foot traffic is often used as a faster read when sales data isn’t available immediately, then validated against revenue later.
In omnichannel retail, they work best together: foot traffic shows store impact, online attribution shows e-commerce impact—and combined, they give a more complete view of how campaigns drive results across channels.
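The "estimated value" point above reduces to one line of arithmetic, which is worth writing down because the inputs (in-store conversion rate, average order value) are assumptions doing all the work:

```python
def estimated_visit_value(incremental_visits, visit_to_purchase_rate, avg_order_value):
    """Rough revenue proxy for store visits: incremental visits scaled by
    an assumed in-store conversion rate and average order value."""
    return incremental_visits * visit_to_purchase_rate * avg_order_value
```

For example, 2,000 incremental visits at an assumed 25% purchase rate and a $40 average order works out to a $20,000 revenue proxy, and validating that assumption against POS data later is the whole point.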
Foot traffic vs incrementality studies
Incrementality studies measure causal impact by comparing a test group (exposed) to a control group (unexposed)—often via audience splits or geo holdouts—to quantify lift in conversions or store visits. This isn’t an either/or with foot traffic attribution; it’s what makes foot traffic results more trustworthy.
Foot traffic attribution matches ad exposure to store visits, but without a control baseline it’s correlation, not causation. The strongest setups bake in lift testing (e.g., reporting a % lift in visits for exposed vs. control). Standalone lift tests can be more rigorous, but they’re typically slower and costlier, while attribution platforms can provide ongoing, granular readouts (which ads/placements drove visits).
Best practice is using both: incrementality to validate overall lift, and attribution to understand where that lift came from.
The biggest problems with foot traffic attribution
Foot traffic attribution works best when you treat it as applied statistics, not as a reporting feature.
Here are the failure points that show up most often.
Privacy regulation and sensitive location data
Location data is sensitive because it can reveal patterns about someone’s life, even when names aren’t attached.
Regulators increasingly treat precise geolocation as sensitive. For example, California’s privacy framework explicitly lists precise geolocation as sensitive personal information.
Platforms also tightened expectations. Apple’s App Tracking Transparency (ATT) requires apps to ask permission to track activity across other companies’ apps and websites, which affects identifier availability and measurement strategies.
The result: privacy-first foot traffic attribution needs consent-aware sourcing, aggregation, and defensible governance. Anything that sounds like “we track everyone everywhere” is not just risky—it’s increasingly unrealistic.
Signal loss and match-rate decline
Even without regulation, the practical reality is that match rates aren’t what they used to be.
Common drivers:
users declining tracking prompts
reduced identifier persistence
limited cross-app visibility
fragmentation across devices and households
AppsFlyer’s analysis shows opt-in rates vary by market, with the U.S. at 44% as of Q1 2024, while global opt-in was around 50% in their reporting.
You don’t need to panic about these numbers. You do need to design measurement that remains useful when coverage is incomplete.
Black-box methodology and “lift” theater
“Lift” is easy to sell and surprisingly easy to manufacture.
Watch for:
unclear control group design
proprietary scoring that can’t be audited
reporting that hides sample sizes, baselines, or confidence ranges
one number presented as universal truth
This is where the “signal not verdict” framing matters most. If the vendor won’t show their math, don’t bet your budget on their output.
💡 AI Digital’s “Advertising Intelligence” is a helpful piece for thinking about measurement as a system rather than a dashboard.
The halo effect and over-attribution
The halo effect is real: ads can increase general awareness, store consideration, and future behavior that doesn’t show up neatly inside a window.
But it can also be misused.
A sloppy model can “find” lift where none exists because:
stores are in high-traffic areas
exposed audiences already shop more
campaigns run during promotions or seasonal spikes
competitor activity shifts demand
If you don’t control for confounders, your model can end up attributing the world to your media.
Walled gardens and partial visibility
Omnichannel measurement is often stitched together from incomplete views:
some platforms report aggregated lift but don’t share raw signals
retail media has strong closed-loop data inside retailer ecosystems, but not always outside them
cross-channel frequency and overlap are hard to unify without clean-room style approaches
This is another reason to treat foot traffic attribution as one input, not as your single source of truth.
How to read foot traffic attribution data correctly
The difference between “useful measurement” and “expensive noise” is often the reader, not the report.
Here’s how to read foot traffic attribution like a skeptic who still wants value.
Look for direction, not precision
If your dashboard says lift is 12.37%, treat that precision as cosmetic.
The better mindset:
Is lift consistently positive across tests?
Does it vary logically by distance, daypart, or audience?
Does it replicate when you run similar campaigns?
Consistency beats decimal points.
Ask how “visits” are defined
You should be able to answer, in plain language:
What counts as a visit?
What gets excluded?
How are multi-tenant locations handled?
How do you filter employees and frequent visitors?
If the vendor can’t explain this, they can’t defend their numbers.
Understand match rate, baselines, and noise
Any report worth trusting should disclose:
match rate or coverage proxy
baseline visit rate for control groups
sample size thresholds
how often results are statistically ambiguous
If you only see lift and cost-per-visit, you’re missing the conditions that make those numbers meaningful.
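Given those disclosures, flagging statistically ambiguous results is straightforward. A minimal sketch using a two-proportion z-test, stdlib only (the inputs and threshold are illustrative):

```python
import math

def lift_significance(exposed_visitors, exposed_total,
                      control_visitors, control_total):
    """Two-proportion z-test on visit rates; returns (z, two-sided p-value).
    A report that hides sample sizes and baselines can't support this check."""
    p1 = exposed_visitors / exposed_total
    p2 = control_visitors / control_total
    pooled = (exposed_visitors + control_visitors) / (exposed_total + control_total)
    se = math.sqrt(pooled * (1 - pooled) * (1 / exposed_total + 1 / control_total))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p under the normal
    return z, p_value
```

A 6% vs 5% visit rate on 10,000 devices per arm clears a conventional threshold; the same rates on a few hundred devices would not, which is exactly the ambiguity a good report should surface.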
Watch for confounders: promos, seasonality, store density
Before you credit media, check what else happened:
price promotions
loyalty events
store openings/closures
competitor activity
weather anomalies
local events that spike traffic
Foot traffic is not a lab environment. Your analysis has to reflect that.
Use cost-per-incremental-visit carefully
“Cost per visit” is often misleading because it can include visits that would have happened anyway.
The more decision-grade metric is:
Cost per incremental visit (tied to lift vs control)
Even then, it’s only useful if your control design and visit definition are solid. Otherwise you’re optimizing to a number that looks accountable but isn’t stable.
When foot traffic attribution makes sense (and when it doesn’t)
Foot traffic attribution works best when the business question is practical and the environment is measurable.
Best-fit use cases
Foot traffic attribution tends to perform well when:
you have many locations (enough volume for signal)
visits are a meaningful proxy for revenue (QSR, convenience, specialty retail)
campaigns run with clear flighting and geo structure
you can tie outcomes to other data (POS, CRM cohorts, loyalty)
It’s especially helpful for comparing:
CTV vs mobile vs DOOH contribution
audience segments (new movers, high-intent shoppers, lapsed visitors)
creative variants that drive action vs awareness-only messaging
When to avoid it or treat it as directional only
Be cautious when:
you have few locations or low visit volume
stores sit in dense retail zones where attribution is noisy
the consideration cycle is long and attribution windows become guesswork
you can’t get transparency on methodology, controls, or sourcing
In these cases, foot traffic attribution can still be useful, but only if you treat it as directional and validate with other measurement approaches.
Best practices for using foot traffic attribution in 2026
If you want foot traffic attribution to hold up in 2026, treat it as part of a system. The goal is not “a number you can report.” The goal is a repeatable way to learn what’s working, with enough transparency that teams can trust the conclusions and improve the next test.
Use clean rooms and aggregated reporting where possible
Privacy-first measurement is moving toward aggregation and controlled environments. Google’s Privacy Sandbox direction is one example of the broader shift: measurement APIs designed to generate useful reporting while limiting sensitive data sharing.
In practice, you’ll see more:
Clean-room based analysis for cross-platform questions. This is most valuable when you need to understand overlap and contribution across channels, without moving raw user-level data around. Expect more “join inside a controlled environment” workflows and fewer spreadsheets of device IDs.
Cohort reporting instead of user-level trails. Cohorts are not a compromise. They are often the only defensible way to report outcomes at scale in a consent-aware world. The real skill is choosing cohorts that are actionable (e.g., geography + exposure intensity + audience segment) rather than so broad they blur everything.
Modeled outcomes with explicit uncertainty ranges. Modeling is not the enemy. Unacknowledged modeling is. If an output is modeled, it should come with clear caveats: what the model assumes, what it cannot see, and what changes would break comparability month to month.
Two practical moves make this approach stronger:
Standardize your aggregation thresholds. Decide what “safe to report” means internally (minimum audience size, minimum store count, minimum visits) so teams do not cherry-pick granular views that look exciting but are not stable.
Separate measurement from activation. You can still optimize toward signals, but keep a clean line between what you use for bidding/targeting and what you use for evaluation. Otherwise you risk reinforcing the model’s blind spots.
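The first move, standardized aggregation thresholds, can be enforced in code rather than left to judgment. A minimal sketch, with illustrative thresholds you would replace with your own internal standards:

```python
# Illustrative internal "safe to report" thresholds, not industry standards.
MIN_AUDIENCE = 1000
MIN_STORES = 5
MIN_VISITS = 50

def safe_to_report(rows):
    """Suppress breakdowns that fall below aggregation thresholds, so teams
    can't cherry-pick granular views that look exciting but aren't stable."""
    return [
        r for r in rows
        if r["audience"] >= MIN_AUDIENCE
        and r["stores"] >= MIN_STORES
        and r["visits"] >= MIN_VISITS
    ]
```

Applying the filter centrally, before dashboards render, keeps the rule consistent across teams.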
⚡ A privacy-first stack is not “less data.” It’s more discipline about what data is allowed to mean.
Demand transparency from measurement partners
A basic vendor transparency checklist should include:
clear visit definitions (including exclusions)
control group methodology
match rate/coverage disclosure
deduplication logic across devices and channels
how they handle store density and multi-tenant geographies
whether results include confidence ranges or ambiguity flags
If you’re relying on a partner that won’t answer these, you’re outsourcing accountability.
To make this actionable, ask partners to show you four things, in plain language:
A visit definition you can defend. What counts as a visit? What is excluded? How do they handle parking lots, malls, and shared entrances? If they cannot explain this quickly, the output is not decision-grade.
A control you can explain to non-analysts. “We compare exposed to similar unexposed audiences” is not enough. You should know how similarity is established and what prevents biased comparisons (for example, heavy shoppers being more likely to get served).
A stability story. Will the methodology stay consistent across quarters? If they change filters, panels, or identity logic, how will they flag it? Without this, you will mistake measurement drift for performance improvement.
A disclosure culture. Do they surface “inconclusive” outcomes, or do they force every test into a positive-looking number? Honest ambiguity is a feature, not a failure.
{{26-Foot-Traffic-Attribution-4="/tables"}}
Combine foot traffic with other signals (POS, CRM, brand lift)
Foot traffic alone is rarely the finish line. A visit is a proxy. Sometimes it is a strong proxy (QSR). Sometimes it is a weak one (high-consideration categories). The best stacks use foot traffic to narrow uncertainty, then connect it to signals closer to business impact.
The strongest measurement stacks connect foot traffic attribution with:
POS revenue trends (even if aggregated). You may not be able to tie a specific visit to a specific purchase, and you often do not need to. If foot traffic lift shows up with no change in store-level sales trends across comparable locations, that’s a signal worth investigating.
Loyalty identifiers (where permissions allow). Loyalty can help answer the questions foot traffic alone can’t: new vs existing customers, frequency changes, and whether the campaign drove different behavior or simply more of the same customers.
Store-level conversion proxies (appointments, calls, sign-ups). These are especially useful for automotive, telecom, healthcare, and services. They often move before revenue does, and they can validate whether “more visits” are meaningful visits.
Brand lift or search lift signals (for upper funnel). If a campaign is intended to build demand, you should expect to see supporting signals. Store visits rising while branded search and brand recall remain flat can still happen, but it is worth pressure-testing.
This is also where retail media becomes interesting: it can link exposure to purchase inside retailer ecosystems, while footfall measurement helps you understand broader store movement and halo effects across channels.
A simple way to operationalize this is to define a measurement stack per campaign type:
Brand-heavy: foot traffic lift + search lift + geo holdout where possible
⚡ Retail media keeps pulling measurement gravity toward “closed loops”—eMarketer projected US retail media ad spend at $58.79B in 2025 and $69.33B in 2026. Use it as a purchase-confirmation layer, then use footfall measurement to understand the broader halo outside the retailer’s walls.
Test, calibrate, and keep a measurement ledger
Make this boring on purpose. You want consistency: log every test the same way, with the hypothesis, setup, flight dates, attribution window, control design, result, and caveats.
That ledger becomes your protection against chasing random lift spikes and calling it strategy.
To make the ledger genuinely useful, add two more fields:
“Decision taken” (what you changed because of the result)
“What would change my mind” (what evidence would contradict this conclusion)
This forces measurement to do its job: support decisions and reduce repeat debates.
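A ledger entry can be as simple as a typed record. The field names below are illustrative, a sketch of one way to structure it, including the two fields suggested above:

```python
from dataclasses import dataclass

@dataclass
class LedgerEntry:
    """One row in a measurement ledger; field names are illustrative."""
    campaign: str
    hypothesis: str
    window_days: int
    control_design: str
    lift_pct: float
    conclusive: bool                # honest ambiguity is allowed
    decision_taken: str             # what you changed because of the result
    what_would_change_my_mind: str  # evidence that would contradict this
```

Even a spreadsheet with these columns works; the structure matters more than the tooling.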
💡 For a broader view of AI Digital’s ecosystem approach across inventory types, Smart Supply’s DSP-agnostic, supply-side model is useful context.
The future of foot traffic attribution
Looking ahead, foot traffic attribution is set to get more capable, but also harder to execute well. As consumer behavior and the ad-tech ecosystem evolve, the way we connect ads to offline visits will keep shifting. Key trends to watch:
More sophisticated data and AI enhancements: AI and machine learning should improve both accuracy and usefulness. Expect better noise reduction in location signals (fewer false positives), stronger confidence scoring for “true” visits, and models that automatically adjust touchpoint weighting for offline outcomes. We’ll also see more “multi-signal” approaches that blend GPS with Wi-Fi/Bluetooth and, where available, inputs like POS events or other sensors to build a richer view of real movement patterns.
New data sources (wearables, connected cars, IoT): Measurement won’t be limited to smartphones. Wearables can add additional location signals, and connected cars may enable privacy-permitted analysis of whether exposure influenced a stop or route. Broader IoT and venue-level sensors (for example, aggregated mall footfall counters) may increasingly support lift reads when paired with exposure data.
In-store technologies for deeper attribution: Today’s measurement often ends at “did they visit.” That may expand toward privacy-safe in-store engagement signals, such as aggregated department traffic via beacons, RFID, or anonymous flow mapping. Retailers may also integrate loyalty and POS data through hashes or clean rooms to connect exposure to transaction outcomes, narrowing the gap between visit attribution and sales attribution.
Privacy-first measurement and clean rooms: As third-party tracking continues to shrink, attribution will lean more on consented first-party data and secure matching. Clean rooms and aggregated reporting will become more common, alongside privacy-preserving techniques (like differential privacy) that reduce device-level exposure while still producing directional lift. Panel-based models and modeled extrapolation will likely play a bigger role.
Standardization and verification: Foot traffic metrics still lack consistent standards across vendors. Over time, expect clearer definitions (what counts as a visit, dwell-time thresholds, match rules) and more third-party auditing against ground truth where possible (door counters, store systems). That would increase comparability and trust.
Deeper integration with retail media and OOH: Store-visit measurement is increasingly packaged into retail media networks and DOOH offerings. The workflow should get simpler: “toggle on store visits” inside buying platforms, rather than running separate studies. Cross-channel integrations (CTV to store visits, OOH to store visits) should become more turnkey.
Growing importance as an outcome metric: As teams move from proxy metrics to business outcomes, cost-per-visit, incremental visits, and visit lift will show up more routinely alongside online conversions. Over time, this becomes less a separate discipline and more a standard part of omnichannel measurement.
Overall, foot traffic attribution’s future looks promising but more complex. The winners will be the teams who treat it as directional measurement, demand transparency, and pair attribution with privacy-safe incrementality so “visits attributed” becomes “visits credibly caused.”
Conclusion: foot traffic attribution is a signal, not a verdict
Foot traffic attribution is most valuable when you use it responsibly, with context and a healthy amount of skepticism. That means treating it as a way to reduce uncertainty, not eliminate it. You are looking for patterns you can defend: lift vs a credible control, sensible visit definitions, and results that hold up when you change windows, geographies, or creative.
Used that way, foot traffic attribution becomes a strategic input. It helps you decide where to invest, what to test next, and which parts of your media plan are most likely to move real-world behavior. It should also trigger better questions: Are we measuring incrementality, or correlation? Are we over-crediting convenience and under-crediting influence? What would we need to see to change our minds?
If you want help building a drive-to-store measurement approach that can stand up to scrutiny, get in touch with us at AI Digital. We support advertisers with a DSP-agnostic model built for transparency and cross-platform execution, anchored in our Open Garden framework. Depending on what you need, that can include managed service planning and optimization across channels, supply-side curation through Smart Supply to improve inventory quality and reduce inefficiencies, and Elevate, our intelligence platform designed to help teams make faster, better-informed optimization decisions across platforms.
Questions? We have answers
Can foot traffic attribution prove ROI?
Foot traffic attribution (also called footfall attribution or footfall measurement) can support an ROI narrative, but it rarely proves ROI on its own because in-store visits are a proxy, not revenue. You get closer when you connect visit lift to downstream signals like POS trends, loyalty data, appointments, or retail media purchase data, and when your audience modeling and controls are strong enough to argue incrementality rather than correlation.
How should brands think about attribution windows?
Attribution windows should match real consumer behavior, not what makes the report look best. Short windows reduce noise but miss longer consideration cycles; longer windows capture more delayed in-store visits but increase confounding. For many categories, a 7–14 day window is a reasonable starting point. Then validate it by running sensitivity checks (e.g., 3-, 7-, and 14-day windows) to see whether lift is stable across windows and across channels such as connected TV (CTV).
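To make that sensitivity check concrete, here is a minimal Python sketch, assuming hypothetical in-house data structures (a dict of exposure timestamps and a dict of visit timestamps), not any measurement vendor's API. It computes the exposed-minus-control visit rate for each candidate window:

```python
from datetime import timedelta

def visit_lift(exposures, visits, window_days):
    """Share of users with a store visit within window_days after ad exposure.

    exposures: dict of user_id -> exposure timestamp
    visits:    dict of user_id -> list of visit timestamps
    """
    if not exposures:
        return 0.0
    window_secs = timedelta(days=window_days).total_seconds()
    converted = sum(
        any(0 <= (v - ts).total_seconds() <= window_secs
            for v in visits.get(uid, []))
        for uid, ts in exposures.items()
    )
    return converted / len(exposures)

def window_sensitivity(exposed, control, visits, windows=(3, 7, 14)):
    """Lift (exposed visit rate minus control visit rate) per window."""
    return {
        d: visit_lift(exposed, visits, d) - visit_lift(control, visits, d)
        for d in windows
    }
```

If the lift swings sharply or changes sign between the 3-, 7-, and 14-day windows, treat the result as unstable learning rather than picking whichever window flatters the report.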
Is foot traffic attribution better suited for brand or performance campaigns?
It can work for both, but it's usually most helpful where ads don't have a natural click-to-conversion path; connected TV (CTV) advertising and programmatic DOOH are common examples. For brand campaigns, it provides directional evidence of behavior change; for performance campaigns, it helps quantify store response when online conversions don't capture the full story, especially when paired with retail media or store-level conversion proxies.
How should foot traffic data be used in optimization decisions?
Use foot traffic attribution to guide testing and budget shifts, not to micromanage day-to-day bidding as if it were deterministic conversion data. The most reliable optimization signals are patterns that repeat: consistent lift by audience segment, geography, frequency, creative, or channel, backed by clear footfall measurement rules and a credible control. If results are volatile, treat them as learning, tighten the visit definition or controls, and test again rather than “optimizing” noise.
How does seasonality affect foot traffic attribution?
Seasonality can overwhelm attribution signals because store traffic changes for reasons unrelated to ads—holidays, weather, promotions, local events, and competitive activity. The best defense is using matched controls, comparing against prior baselines, and avoiding conclusions from short, noisy windows. If you see lift during peak periods, pressure-test it with holdouts or comparable geos to make sure you’re not just measuring seasonal demand.
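One standard way to express that defense is a difference-in-differences comparison against matched control geographies. The sketch below is illustrative only; the inputs are assumed aggregate visit counts, and real deployments would add confidence intervals and geo-matching logic:

```python
def seasonality_adjusted_lift(test_pre, test_post, ctrl_pre, ctrl_post):
    """Difference-in-differences on aggregate visit counts.

    test_pre / test_post: visits in exposed geos before / during the campaign
    ctrl_pre / ctrl_post: visits in matched control geos over the same periods

    Normalizing each group by its own pre-period baseline removes seasonal
    demand that moves both groups alike (holidays, weather, local events),
    leaving the relative lift attributable to the campaign.
    """
    test_change = test_post / test_pre
    ctrl_change = ctrl_post / ctrl_pre
    return test_change / ctrl_change - 1.0
```

For example, if exposed geos grew 32% during a holiday period while matched control geos grew 10%, the seasonality-adjusted lift is about 20%, not 32%.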
Can foot traffic attribution be used for non-retail businesses?
Yes—foot traffic attribution can apply to any physical destination where a visit is meaningful, including restaurants, gyms, dealerships, banks, clinics, and entertainment venues. The key is defining valid visits properly (dwell time and exclusions), choosing realistic attribution windows, and pairing visit lift with downstream signals (leads, appointments, calls, membership sign-ups) so audience modeling doesn’t stop at “they showed up,” but helps explain whether the visits were valuable.
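As an illustration of what a "valid visit" definition can look like in practice, the following Python sketch applies dwell-time bounds and an exclusion list. All thresholds and names here are hypothetical and would need tuning per venue type (a QSR visit looks very different from a dealership visit):

```python
def is_valid_visit(user_id, entry, exit_, *,
                   min_dwell_min=2, max_dwell_min=240,
                   excluded_ids=frozenset()):
    """Decide whether a raw device observation counts as a store visit.

    - min_dwell_min screens out pass-bys (walking or driving past the door)
    - max_dwell_min screens out likely employees and residents
    - excluded_ids drops known staff or test devices
    Thresholds are illustrative, not industry standards.
    """
    if user_id in excluded_ids:
        return False
    dwell_min = (exit_ - entry).total_seconds() / 60
    return min_dwell_min <= dwell_min <= max_dwell_min
```

Writing the definition down explicitly like this also makes it auditable: you can rerun the same campaign data under stricter or looser dwell bounds and see how much the reported visit lift depends on the definition itself.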
Have other questions?
If you have more questions, contact us so we can help.