Blog Tables

| Action | What | How to implement |
| --- | --- | --- |
| Buy from certified supply | Prefer TAG-certified partners. | Require certification in IOs; enable DSP filters that prefer certified sellers; verify status quarterly. |
| Turn on pre-bid quality controls | IVT, suitability, viewability, and attention filters. | Set thresholds per campaign; enable MFA avoidance; block risky categories before bidding. |
| Authenticate the supply chain | ads.txt/app-ads.txt, sellers.json, and schain. | Restrict to authorized sellers; prune unauthorized resellers; reject inventory without a valid chain. |
| Avoid MFA and low-impact domains | Sidestep sites that game surface metrics. | Use curated allowlists; run domain/app audits; apply attention/time-in-view benchmarks; reroute spend when lift is flat. |
| Demand device authentication in CTV | Prevent device and SSAI spoofing. | Favor publishers with watermark/device attestation and SSAI transparency; prefer curated CTV PMPs. |
| Shorten supply paths (SPO/QPO) | Fewer hops, fees, and risks. | Prefer direct SSP routes; compare take rates; remove intermediaries that don’t add quality. |
| Monitor and react in near-real time | Catch suspicious spikes early. | Build IVT and anomaly dashboards; auto-block offenders; run post-campaign forensics on suspect traffic. |
| Write protection into contracts | Remedies when fraud slips through. | Include IVT credit/make-good clauses; require log-level transparency on demand. |
| Elevate outcomes over vanity metrics | Judge buys by attention and lift. | Hold weekly reviews of lift/attention; move budget to sources that prove incremental impact. |
| Audit on a cadence | Keep the ecosystem honest. | Run quarterly audits; rotate allowlists/blacklists; pressure-test partners with controlled buys. |
| Action | What | How to implement |
| --- | --- | --- |
| Make first-party data the spine | Collect consented site/app/CRM data. | Offer clear value exchange; log consent; unify data in a CDP; implement server-side tagging for durability. |
| Adopt clean rooms for collaboration | Privacy-safe audience matching and measurement. | Define schemas with partners; run overlap and conversion/lift queries; QA outputs against ground truth regularly. |
| Pair contextual with publisher signals | Reach relevant readers without IDs. | Use page-level classification; activate publisher first-party audiences; buy curated PMPs with transparent metadata. |
| Test Privacy Sandbox APIs pragmatically | Trial Topics, Protected Audiences, and related tools. | Run controlled A/Bs; monitor CPM/CPA/attention deltas; keep Sandbox tests in parallel with proven tactics. |
| Use interoperable IDs where permitted | Extend reach with compliant identifiers. | Deploy only with consent and clear governance; monitor quality and opt-out handling closely. |
| Modernize measurement | Capture impact without cookie precision. | Combine MMM, geo-matched lift, always-on experiments, and post-view incrementality; use attention/time-in-view as diagnostics. |
| Control frequency and reach without cookies | Manage exposure at person/household level. | Lean on publisher graphs and clean-room frequency; use device-level controls in CTV; reconcile overlaps in reporting. |
| Tighten data governance | Compliance you can prove. | Maintain a CMP; document data uses and retention; audit partners for purpose limitation and deletion pathways. |
| Model | What you pay for | Billing trigger | Best for | Strengths |
| --- | --- | --- | --- | --- |
| CPM | Every served impression | Impression is served | Broad reach and predictable delivery | Simple to plan; widest inventory access; easy to scale |
| vCPM | Viewable impressions only | Provider confirms viewability per standard | Quality-screened reach; viewability-sensitive buys | Reduces pay-for-waste; aligns cost to on-screen opportunity |
| CPC | Clicks | Valid click occurs | Traffic/DR when clicks predict conversions | You pay for interaction, not passive exposure; easy to compare landing-page quality |
| CPA | Defined actions (lead, sale, install) | Conversion fires | Clear, outcome-tied campaigns | Risk shifts away from advertiser; tightly aligned to ROAS/CPA goals |
| Attention-based CPM | Attention (e.g., attentive seconds or an AU score) | Attention threshold/measurement achieved (e.g., AU-CPM) | Quality exposure and outcome-linked delivery; branding & mid-funnel | Pays for attention quality, not just presence; improves comparability across formats |
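The models above become easier to compare when every spend is normalized to an effective CPM (cost per 1,000 served impressions). A minimal sketch; all delivery numbers and prices below are hypothetical, chosen only to illustrate the arithmetic:

```python
def cpm_cost(impressions, cpm):
    """Cost under CPM-style billing: price per 1,000 billed impressions."""
    return impressions / 1000 * cpm

def effective_cpm(total_cost, impressions):
    """eCPM: normalize any model's spend to a per-1,000-served basis."""
    return total_cost / impressions * 1000

# Hypothetical flight: 1M impressions, 65% viewability, 0.08% CTR, 40 conversions.
impressions, viewable, clicks, conversions = 1_000_000, 650_000, 800, 40

cost_cpm  = cpm_cost(impressions, cpm=2.50)   # pay for every served impression
cost_vcpm = cpm_cost(viewable, cpm=3.50)      # pay only for viewable impressions
cost_cpc  = clicks * 1.80                     # pay per valid click
cost_cpa  = conversions * 35.00               # pay per conversion

for label, cost in [("CPM", cost_cpm), ("vCPM", cost_vcpm),
                    ("CPC", cost_cpc), ("CPA", cost_cpa)]:
    print(f"{label}: spend ${cost:,.0f} -> eCPM ${effective_cpm(cost, impressions):.2f}")
```

Run against the same delivery log, eCPM shows whether a click- or conversion-priced deal is actually cheaper per exposure than a flat CPM buy.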
| Metric | What it is | Why it matters | How to use |
| --- | --- | --- | --- |
| Reach & frequency (unique users, average frequency) | How many unique people you reached and how often each person saw your ad on average. | Reach builds familiarity; frequency reinforces it. Too little doesn’t stick; too much wastes budget and annoys users. | Set caps by funnel stage (lower for prospecting, higher for remarketing). Monitor unique reach so spend isn’t concentrated on the same users. |
| Viewability (MRC: ≥50% pixels/1s display; ≥50%/2s video) | Confirms an ad had the opportunity to be seen. | Paying for non-viewable impressions is waste. | Buy on vCPM or set viewability floors; prioritize placements and formats that consistently clear your threshold. |
| Time-in-view (seconds) | How long a viewable ad remained on screen. | More seconds generally increase message encoding and downstream action. | Track by domain/placement/format; favor inventory that delivers more seconds at equal or lower effective cost. |
| Attention metrics (attentive sec/1000, AU score, eyes-on rate) | Quality signals estimating whether a human likely noticed the ad (combining dwell time, viewability, screen coverage, scroll speed, audibility, eye-tracking models). | Attention correlates more closely with brand/performance outcomes than viewability alone. | Activate pre-bid attention segments and optimize toward attentive time or attention scores, not just served/viewable impressions. |
| Brand safety/suitability & IVT (incident and block rates) | Verification that ads ran in appropriate contexts and reached humans (invalid traffic protection). | Misplacement harms brands; bots distort results and waste budget. | Enforce suitability rules, use TAG-certified partners, monitor post-bid incidents and IVT rates; maintain allowlists/blacklists. |
| Engagement (CTR, interaction rate, video completion rate) | Observable user actions—clicks, hovers/expands, plays/completions. | Indicates creative resonance and mid-funnel interest. | Compare by creative and placement. Treat CTR as directional; validate with post-click quality (bounce rate, time on site). |
| Outcomes (brand lift, conversion lift, post-view assists, sales proxies) | Impact on real goals—awareness/consideration shifts, incremental conversions, modeled revenue. | Proves value beyond clicks and credits exposures that influenced behavior. | Run brand/conversion lift studies, track post-view assists, and use incrementality tests (geo or PSA holdouts). |
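The MRC viewability thresholds quoted above reduce to a two-condition check per impression. A sketch with hypothetical measurement records:

```python
def is_viewable(pixels_in_view, seconds_in_view, fmt):
    """MRC standard: >=50% of pixels in view for >=1s (display) or >=2s (video)."""
    min_seconds = 2.0 if fmt == "video" else 1.0
    return pixels_in_view >= 0.5 and seconds_in_view >= min_seconds

# Hypothetical measured impressions from a verification log.
log = [
    {"pixels": 0.80, "seconds": 3.2, "fmt": "display"},  # counts as viewable
    {"pixels": 0.40, "seconds": 5.0, "fmt": "display"},  # fails the 50% pixel test
    {"pixels": 0.90, "seconds": 1.5, "fmt": "video"},    # too brief for video
]
viewable_rate = sum(is_viewable(i["pixels"], i["seconds"], i["fmt"]) for i in log) / len(log)
print(f"viewable rate: {viewable_rate:.0%}")
```

A viewable rate computed this way is what a vCPM deal or a viewability floor would key on.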
| Dimension | Display | Search | Native |
| --- | --- | --- | --- |
| Primary goal | Reach + awareness, retargeting, incremental consideration | Capture expressed demand; harvest intent | Mid-funnel education and soft conversion within content |
| Placement | Across the open web & apps (publisher pages, apps, out-stream) | Search engine results pages (SERPs) | In-feed or in-content units that match the look/feel of the page |
| Targeting method | Contextual, first-party & partner audiences, lookalikes, retargeting, geo, device; increasing use of attention pre-bid | Keyword & query intent, audiences, geo, device | Contextual + interest/behavioral; often first-party publisher signals |
| Creative format | Static banners, HTML5 rich media, interactive, video, interstitials | Text & product/listing ads; some image extensions | Headline + image/video styled to the feed; sponsored content |
| Strengths | Massive scale, flexible pricing, visual storytelling, cross-device reach | High intent, direct response efficiency, measurable | High engagement within content, lower disruption, strong mid-funnel |
| Best use case | Build familiarity, retarget site/app visitors, launch, drive incremental branded search | Capture bottom-funnel demand, conquesting, lead gen | Educate with content, drive consideration, distribute thought leadership |
| Attention / user mindset | Browsing/scrolling; attention varies by format & context | Task-oriented; users seeking answers | Content-consuming; engagement rises when disclosure is clear |
| Dimension | Banner ads | Display ads (umbrella category) |
| --- | --- | --- |
| Scope and formats | Classic rectangular IAB units (e.g., 300×250, 728×90, 300×600, 320×50/100). | Encompasses banners plus rich media (expandable/interactive), native display (in-feed), interstitials (full-screen in-app), and out-stream/in-feed video. |
| Creative capability | Static or light HTML5 animation with headline, image, logo, CTA. | Adds product carousels, dynamic creative, interactive states (expand/hover/swipe), and full video—better for storytelling and deeper engagement. |
| Placement behavior | Renders in predefined rectangles within a page/app layout. | Can expand over the page (polite expand), fill the screen at natural breaks (interstitials), flow inside feeds (native), or play within content blocks (out-stream video). |
| Buying and pricing | Typically programmatic CPM/vCPM; efficient reach and testing at lower average CPMs. | Also programmatic, often at higher CPMs for larger canvases/higher viewability; many high-impact units via curated PMPs or programmatic guaranteed. |
| Measurement nuance | Optimized to viewability, CTR, post-view outcomes. | Adds time-in-view, interaction/expansion rate, video completion rate—useful for attention and mid-funnel effects. |
| When to use | Efficient scale, broad retargeting, rapid multivariate testing across sizes/placements. | Brand storytelling, product demos, attention-qualified reach, and moments where motion/interaction improves outcomes. |
| Rule of thumb | All banners are display… | …but not all display is a banner—use banners for scale, layer richer formats when the objective and context warrant more attention. |
Establish a Regular Reporting Cadence
Don't just set and forget. Regularly review performance dashboards to identify winning creatives, targeting segments, and placements.
Analyze for Insights, Not Just Numbers
Look beyond the surface-level display advertising statistics. Why did Creative A outperform Creative B? Which contextual keyword groups drove the most engaged traffic? Use these insights to inform your next round of tests.
Manage Frequency and Combat Fatigue
Even the best ad becomes ineffective if shown too often. Set frequency caps and have a plan to refresh your creative library every few weeks to maintain a good CTR for display ads throughout the campaign lifecycle.
Implement Supply-Path Optimization (SPO)
SPO is the practice of identifying the most efficient and highest-quality paths to purchase ad inventory. It cuts out wasteful intermediaries, ensuring your ads appear on premium, brand-safe sites that foster user trust and engagement.
Leverage Programmatic Platforms with AI
Modern Demand-Side Platforms (DSPs) use AI to analyze billions of data points to bid on the most valuable impressions in real-time. They can automatically prioritize placements that have a proven history of high viewability and engagement for your specific goals, systematically improving your average CTR for programmatic display.
Embrace Contextual Targeting
Place your ads on web pages based on the content's meaning and themes, not on the user's past behavior. An ad for running shoes on a fitness blog is inherently relevant, leading to a higher competitive CTR.
Leverage AI-Powered Predictive Audiences
Use machine learning to analyze your first-party data and identify high-value users who resemble your best customers. AI can process thousands of signals to find new audiences likely to engage, ensuring your banner impressions are served to the most receptive users.
A/B Test Everything
Don't rely on guesswork. Systematically test headlines, value propositions, calls-to-action (CTAs), and color schemes. A simple change from "Learn More" to "Get Your Free Guide" can significantly lift your average CTR for banner ads.
Prioritize Value and Clarity
Within the first two seconds, a user should understand what you're offering and why it benefits them. Use clear, concise copy and high-quality, relevant visuals.
Incorporate Motion and Rich Media
Static banners blend into the background. Animated GIFs, HTML5, and interactive elements (like polls or hover effects) can dramatically increase visibility and engagement, often doubling your CTR for display ads compared to static images.
Mobile (Smartphones)
Mobile continues to dominate both impression volume and engagement rates. The tactile nature of a touchscreen, combined with larger, more immersive ad formats, leads to a display ad CTR that is typically 20-35% higher than on desktop. The average click-through rate for banner ads on mobile often falls in the 0.08% - 0.12% range. However, this comes with a caveat: accidental clicks can inflate this number, so it's vital to monitor post-click engagement metrics like bounce rate and time-on-site to gauge true quality.
Desktop
While desktop display ads CTR is generally lower, often around 0.05% - 0.08%, it frequently drives higher-value actions. The desktop environment is associated with more deliberate, research-driven behavior, especially in B2B and high-consideration purchases. Users are less prone to accidental clicks, making desktop a key channel for driving qualified leads and conversions, even with a lower initial click-through rate for display ads.
Tablet
Tablet performance often splits the difference, with a CTR for display ads that mirrors or slightly exceeds desktop. The larger screen than a phone allows for more detailed creative, while the touch interface maintains a level of interactivity. This makes tablets a strong performer in verticals like retail and travel, where visual appeal is key.
Connected TV (CTV)
CTV represents a paradigm shift. Traditional display ad metrics like CTR are less relevant in a lean-back environment, and new engagement metrics are emerging in their place.
Standard Banners (e.g., 300x250, 728x90)
These workhorses of display advertising deliver the most volume but the lowest engagement. The average CTR for banner ads in this category is typically 0.04% - 0.06%. Their strength lies in broad-reach and frequency-building, not high engagement.
Native Ads
By seamlessly blending into the surrounding content, native ads overcome banner blindness. The average click through rate for display ads in a native format can be 2-3x higher than standard banners, often ranging from 0.08% to 0.15%. They are perceived as less intrusive and more trustworthy by users.
Rich Media & Interactive Ads
Featuring elements like expandable banners, video players, or in-ad games, these formats command attention. It's not uncommon for high-quality rich media units to achieve a CTR for display ads of 0.15% - 0.3% or more, as they offer a value-exchange to the user.
Video Ads (Out-Stream)
Auto-playing video in a display placement is a powerful engagement driver. The average click-through rate for out-stream video ads can vary widely but often sits between 0.1% and 0.25%, with completion rates being an equally important metric.
Choosing the right format is a strategic decision that hinges on your campaign objectives.
By Industry
The industry average CTR for display ads varies dramatically. Sectors with high-intent users, like Finance and B2B/SaaS, often report higher averages (0.08% - 0.12%), as users are actively researching solutions. In contrast, more established eCommerce brand-awareness campaigns might see lower rates (0.04% - 0.07%).
By Device
Mobile-centric campaigns typically see a display ads CTR that is 20-30% higher than desktop equivalents, driven by thumb-scroll behavior and larger, more intrusive ad placements on smaller screens.
By Campaign Goal
A campaign optimized for viewability and brand lift will logically have a lower average CTR for banner ads than one laser-focused on driving traffic.
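Every CTR figure in this section comes from the same ratio of clicks to impressions. A one-function sketch, with hypothetical delivery numbers chosen to land in the mobile range quoted above:

```python
def ctr(clicks, impressions):
    """Click-through rate, expressed as a percentage."""
    return clicks / impressions * 100

# Hypothetical mobile flight: 100 clicks on 100,000 impressions.
print(f"{ctr(100, 100_000):.2f}%")  # 0.10%, inside the 0.08-0.12% mobile band
```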
| Tier | # accounts | Channels in pilot | Success criteria |
| --- | --- | --- | --- |
| Tier 1 | 50 | LinkedIn + open-web display/video + (optional) CTV | ≥70% account reach; meetings in ≥25% of accounts |
| Tier 2 | 200 | Open-web display/native; contextual video | ≥40% account reach; 15% lift in targeted site sessions |
| Control | 50 | Holdout (no ABM media) | Baseline for incrementality on engagement & pipeline |

Fig. Example ABM pilot tiers.
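The success criteria in the tier table are simple per-tier ratios, so pilot results can be scored mechanically. A sketch; the pilot numbers below are hypothetical:

```python
def tier_met(accounts, reached, accounts_with_meetings, reach_goal, meeting_goal):
    """Check a tier against its account-reach and meeting-rate success criteria."""
    return (reached / accounts >= reach_goal
            and accounts_with_meetings / accounts >= meeting_goal)

# Tier 1: 50 accounts; goals are >=70% reach and meetings in >=25% of accounts.
print(tier_met(50, 38, 14, 0.70, 0.25))  # 76% reach, 28% meeting rate -> True
```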
| Layer | Question answered | Example metrics | Primary owner |
| --- | --- | --- | --- |
| Coverage | Are we reaching the buying group? | % accounts reached; unique exposed roles; freq by account | Media |
| Engagement quality | Are they interacting with substance? | High-intent page views; repeat cadence; asset completions | Digital/Content |
| Progression | Are accounts moving stages? | Researching→Evaluating rate; meetings; SAL/SQL creation | RevOps |
| Revenue | Are we creating value? | Pipeline $, win rate, velocity, revenue from targeted accounts | Leadership |

Fig. ABM measurement layers.
| Aspect | Traditional ABM | Programmatic ABM |
| --- | --- | --- |
| Targeting unit | Named accounts and known contacts. | Named accounts mapped to cookies, devices, emails, and publisher IDs to reach the wider buying group (e.g., The Trade Desk). |
| Personalization | Bespoke content and human outreach. | Audience rules and dynamic creative tailored by account, industry, seniority, or stage, sequenced over time (e.g., The Trade Desk). |
| Scale and coverage | Dozens of accounts with deep treatment. | Hundreds of accounts with controlled reach and frequency, plus always-on nurturing across channels (e.g., The Trade Desk). |
| Measurement | Meetings, opportunities, and qualitative sales feedback. | Daily account-level reach and frequency, site behavior from target accounts, and contribution to pipeline and revenue (e.g., The Trade Desk). |
| Channels | Email, events, SDR, direct mail, content hubs. | Open-web display/video/native, LinkedIn, audio, and CTV to lift attention inside named accounts (e.g., LinkedIn). |

Fig. Traditional vs. programmatic ABM.
| Feature | Traditional Programmatic Advertising | Standalone Native Advertising | Programmatic Native (Hybrid) |
| --- | --- | --- | --- |
| Automation & Scale | High. Fully automated real-time bidding across vast ad exchanges enables massive, efficient scale. | Low. Often requires manual negotiation and integration with individual publishers, limiting scale. | High. Combines the automation of programmatic bidding with the native format, achieving scale without sacrificing format quality. |
| Targeting Precision | High. Uses extensive user data (demographics, behavior, retargeting) for precise audience targeting. | Moderate. Primarily relies on contextual and publisher-level targeting, with limited user-level data. | High. Leverages programmatic's data-driven audience targeting while adding a layer of contextual relevance from the native environment. |
| Cost Efficiency & ROI | Moderate. Low CPMs but often suffers from poor viewability and engagement, reducing overall ROI. | Low. Premium pricing due to manual processes and high-quality placements, often resulting in high CPAs. | High. Achieves the engagement rates of native ads with the cost-efficient scaling of programmatic, optimizing overall ROI. |
| Creative Flexibility | Low. Typically confined to standard IAB display ad sizes and formats. | High. Offers custom creative formats tailored to each publisher's unique layout and audience. | Moderate-High. Uses dynamic native templates that maintain a consistent, platform-appropriate look across multiple publishers at scale. |

Fig. Traditional programmatic vs. standalone native vs. programmatic native.
| Feature | Programmatic Advertising | Native Advertising |
| --- | --- | --- |
| Definition | The automated process of buying and selling ad space in real-time | An ad format designed to blend seamlessly with a platform's content |
| Primary Focus | Efficiency, scale, and data-driven targeting | User experience, contextual relevance, and non-disruption |
| How It Works | Uses DSPs, SSPs, and ad exchanges for real-time bidding | Ad creative is built to mimic the style of the publisher's content |
| Common Formats | Can include display banners, video, audio, and native ads | In-feed units, content recommendation widgets, and promoted listings |

Fig. Programmatic vs. native advertising.
| Feature | Traditional TV | OTT Advertising |
| --- | --- | --- |
| Targeting | Broad demographics (age, gender) | Behavioral, interest-based, custom audiences |
| Measurement | Panel-based estimates | Exact impression counts & attribution |
| Ad delivery | Same ad to all viewers | Different ads to different households |
| Geographic reach | DMA-level | Zip code or household-level |
| Completion rates | 65-70% | 90-95% |
| Attribution | Correlation studies | Direct tracking to conversions |
Fig. OTT vs traditional TV comparison.
| Platform | Audience profile | Best for | Approx. minimum spend | Key advantage |
| --- | --- | --- | --- | --- |
| Hulu | 18-49, diverse, current content fans | Broad reach campaigns | $25,000+ | Robust targeting options |
| Amazon Freevee | Budget-conscious, varied demographics | E-commerce brands | $10,000+ | Amazon purchase data |
| Pluto TV | Older viewers, traditional TV fans | Mass market awareness | $5,000+ | Low entry cost |
| Roku | Cord-cutters, tech-savvy | Tech & streaming products | $15,000+ | Device + platform reach |
| Peacock | Sports & news viewers | Live event tie-ins | $25,000+ | NBCUniversal content |
Fig. Major OTT platform comparison.
| Ad format | Typical length | Completion rate | Best use case | Est. cost premium |
| --- | --- | --- | --- | --- |
| Pre-roll | 15-30 seconds | 94% | Brand awareness | Standard |
| Mid-roll | 15-60 seconds | 97% | Deep engagement | +10-15% |
| Interactive | 30+ seconds | 89% | Product education | +25-30% |
| Pause ads | Static | N/A | Gentle reminders | -20% |
| Shoppable | 15-30 seconds | 91% | Direct response | +30-40% |
Fig. OTT ad format performance guide.
| Budget component | % of total | $50K campaign | $100K campaign | Notes |
| --- | --- | --- | --- | --- |
| Media spend | 75-80% | $37,500-40,000 | $75,000-80,000 | Actual ad inventory |
| Creative development | 10-15% | $5,000-7,500 | $10,000-15,000 | Multiple formats needed |
| Platform/Tech fees | 5-8% | $2,500-4,000 | $5,000-8,000 | Programmatic costs |
| Measurement/Analytics | 3-5% | $1,500-2,500 | $3,000-5,000 | Attribution tools |
| Management | 2-5% | $1,000-2,500 | $2,000-5,000 | Agency or internal |
Fig. Example OTT campaign budget breakdown.
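The split above applies to any total budget once you pick a point inside each range. A sketch; the exact shares are a judgment call, chosen here from within the table's ranges so they sum to exactly 100%:

```python
SPLIT = {
    "media_spend": 0.78,         # 75-80% band
    "creative": 0.12,            # 10-15% band
    "platform_tech_fees": 0.05,  # 5-8% band
    "measurement": 0.03,         # 3-5% band
    "management": 0.02,          # 2-5% band
}
assert abs(sum(SPLIT.values()) - 1.0) < 1e-9  # shares must cover the whole budget

def allocate(total_budget):
    """Split a campaign budget across line items by fixed shares."""
    return {item: round(total_budget * share) for item, share in SPLIT.items()}

print(allocate(50_000))  # media lands at $39,000, inside the table's $37,500-40,000 band
```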
| Campaign goal | Primary KPIs | Secondary KPIs | Attribution window | Success benchmark |
| --- | --- | --- | --- | --- |
| Brand Awareness | Reach, Frequency | Completion Rate, Brand Lift | 30 days | 3+ frequency, 80% reach |
| Lead Generation | Cost Per Lead, Form Fills | View-through Rate | 14 days | <$50 CPL |
| App Installs | Install Rate, CPI | Post-install Events | 7 days | 2-3% install rate |
| Direct Sales | ROAS, Conversions | Cart Additions | 3-7 days | 3:1 ROAS |
| Video Views | VCR, Engagement | Share Rate | 1 day | 95%+ completion |
Fig. Example OTT metrics by campaign goal.
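Two of the benchmarks above, CPL under $50 and 3:1 ROAS, are direct ratios of spend to results. A sketch with hypothetical campaign numbers:

```python
def cost_per_lead(spend, leads):
    """CPL: total spend divided by leads generated."""
    return spend / leads

def roas(revenue, spend):
    """Return on ad spend: revenue generated per dollar spent."""
    return revenue / spend

# Hypothetical lead-gen flight and direct-sales flight.
print(cost_per_lead(9_000, 200))  # $45 per lead -> beats the <$50 CPL benchmark
print(roas(68_000, 20_000))       # 3.4 -> clears the 3:1 ROAS benchmark
```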
| Capability | Integrated platform (e.g., Amazon DSP) | Independent DSP (e.g., The Trade Desk) | Why it matters |
| --- | --- | --- | --- |
| Authenticated identity | Native, logged-in audiences at scale | Interoperates via IDs and clean rooms | Identity depth drives match rates and measurement quality |
| Premium CTV access | Direct access to owned/partner inventory | Access via exchanges, deals | Inventory exclusivity can gate reach and formats |
| Closed-loop measurement | Native commerce signals | Modeled outcomes + partner data | Proves sales impact faster; shifts budget confidence |
| Commercial control | End-to-end pricing levers | Discrete fees with partners | Price/fee leverage influences net CPM and ROI |
| AI/automation | Full-funnel automation tied to first-party data | Algorithmic bidding across open web | Where automation “sees” more data, it usually wins speed |
Fig. What integrated stacks control vs. independents.
| Metric | Q2 2024 Performance | YoY Change / Context |
| --- | --- | --- |
| Advertising revenue | $15.7B | +22% YoY |
| Net sales | $167.7B | +13% YoY |
| AWS (cloud services) | $30.9B | +17.5% YoY |
| Subscription services | $12.2B | +11% YoY |
| 3rd-party seller services | $40.3B | +10% YoY |
Fig. Amazon Q2 by the numbers (Source).
| Standard / framework | What it enables | Why it matters | Quick win |
| --- | --- | --- | --- |
| Transaction ID | End-to-end deal/impression traceability | Auditable paths; easier debugging | Make transaction IDs mandatory on all curated deals |
| ads.txt / app-ads.txt | Authorized sellers list | Cuts spoofing; clarifies who can sell | Block non-authorized supply; re-verify weekly |
| sellers.json | Seller identity transparency | Exposes intermediaries and resellers | Prefer direct sellers; flag unknown nodes |
| schain (SupplyChain object) | Node-by-node path disclosure | Detects extra hops and duplicates | Set a max-hop policy; penalize hidden resellers |
| OpenRTB extensions (e.g., supply metadata) | Richer, standardized context | Better pre-bid decisioning | Require key fields; reject incomplete bids |
| IAB LEAP APIs (live events) | Real-time event signaling | CTV/live readiness; lower mismatch | Pilot on one live event; monitor latency vs. fill |
Fig. Standards that clean the supply path.
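Two of the quick wins above, blocking non-authorized supply and enforcing a max-hop policy, can be combined into a single check on a bid's SupplyChain object. A sketch: the node fields `asi`, `sid`, and `complete` follow the IAB SupplyChain object, but the authorized-seller list and all values below are hypothetical:

```python
# Authorized (domain, seller-ID) pairs, as you would compile them from your
# publishers' ads.txt files. These entries are made up for illustration.
AUTHORIZED = {("exchange.example.com", "pub-1234"), ("ssp.example.net", "5678")}
MAX_HOPS = 2  # the max-hop policy from the schain row above

def supply_path_ok(schain):
    """Reject incomplete paths, paths that are too long, or unauthorized sellers."""
    nodes = schain.get("nodes", [])
    if schain.get("complete") != 1 or len(nodes) > MAX_HOPS:
        return False
    return all((n.get("asi"), n.get("sid")) in AUTHORIZED for n in nodes)

direct = {"complete": 1, "nodes": [{"asi": "exchange.example.com", "sid": "pub-1234"}]}
hidden = {"complete": 0, "nodes": [{"asi": "exchange.example.com", "sid": "pub-1234"}]}
print(supply_path_ok(direct), supply_path_ok(hidden))  # True False
```

In practice a DSP applies logic like this pre-bid; the same rule can also run post-bid to audit where spend actually flowed.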
| Feature | Description |
| --- | --- |
| Interactivity | Enables users to take action directly within the ad — tapping, swiping, dragging, hovering, expanding, exploring products, or playing elements. |
| Animation & Motion Graphics | Uses movement to capture attention and guide visual focus toward key elements and CTAs. |
| Multimedia Support | Integrates video, audio, 3D models, maps, carousels, and shoppable components for deeper engagement. |
| Dynamic Layouts | Layouts adapt or expand based on user interaction, including expandable banners, floating units, and responsive HTML5 elements. |
| Advanced Tracking & Analytics | Measures behavioral signals like dwell time, interaction rate, expansion rate, gesture tracking, and video completion. |
| Cross-Device Compatibility | Works seamlessly across mobile, desktop, tablet, apps, and connected TV with responsive formatting. |
| Real-Time Personalization | Updates creative elements in real time based on user data, context, or live feeds for higher relevance. |
| Programmatic Enablement | Easily integrated with programmatic platforms for scalable delivery, targeting, optimization, and testing. |
| Factor | Rich Media Ads | Standard Display Ads |
| --- | --- | --- |
| User Engagement | High engagement driven by interactions (taps, swipes, expansions, video plays, exploration). Users actively participate in the experience. | Very low engagement. Mostly passive viewing with only clicks as a measurable action. |
| User Experience | Immersive, dynamic, and responsive. Feels closer to a mini-app or micro-experience. | Static and passive. Limited ability to capture or hold attention. |
| Visibility & Attention | Higher visibility due to movement, animation, expansion, and adaptive layouts. Reduces banner blindness. | Low visibility; easily ignored. Strongly affected by banner blindness and ad clutter. |
| Performance Metrics | Deep analytics: interaction rate, dwell time, gesture tracking, expansion rate, video completion, multi-event funnels. | Basic metrics: impressions, clicks, CTR. Limited behavioral insight. |
| Creative Capabilities | Supports video, animation, carousels, 3D, maps, shoppable elements, real-time data feeds, and interactive layouts. | Limited to static images, basic text, and simple HTML. Very restricted creative options. |
| Conversion Potential | Higher conversion potential due to engagement loops and more persuasive, immersive storytelling. | Lower conversion potential; relies entirely on click-through. |
| Cross-Device Experience | Fully responsive across mobile, apps, desktop, tablet, and CTV. | Basic responsiveness; often resized versions of the same static banner. |
| Brand Recall | Strong brand recall thanks to interactivity and rich media content. | Weak brand recall; often forgotten immediately after viewing. |
| Interaction mechanism | Description |
| --- | --- |
| Clicking and tapping | These actions trigger expansions, open product cards, reveal hidden elements, or launch quick-view videos. Even a small tap signals curiosity, deepening user involvement. |
| Swiping | Swipe-based navigation is one of the most popular behaviors in rich media advertising, especially for mobile rich media. Carousels, galleries, and multi-screen storytelling feel intuitive because they replicate familiar social and app interactions. |
| Hover and rollover behavior | On desktop devices, rich media ads respond to hover states by revealing specs, animations, or secondary CTAs. These micro-interactions keep the user’s attention without forcing a commitment. |
| Dragging and rotating | Drag-to-rotate 3D product views and interactive hotspots transform rich media content into a hands-on exploration space. This simulates the tactile experience that standard display ads can never achieve. |
| Dynamic behavior | Description |
| --- | --- |
| Real-time animations | Subtle animations — product slides, text fades, micro-movement — catch the eye without overwhelming the viewer. They create a sense of motion that naturally draws engagement. |
| Adaptive layouts | Rich media ads expand, shrink, reposition, or transform based on user behavior. A compact teaser can grow into a full-screen interactive canvas within a single gesture. |
| Context-aware elements | Some rich media formats update in real time based on weather, location, product availability, or user intent signals. This personal relevance increases both engagement and conversion rates. |
| Dynamic content flows | Auto-playing silent video snippets, sequential storytelling modules, or scroll-driven parallax effects keep users exploring for longer — turning an ad into an interactive journey instead of a static message. |
| Campaign Goal | Recommended Rich Media Formats | Why These Formats Work |
| --- | --- | --- |
| Awareness | • Takeovers<br>• Interstitials<br>• Pushdown units<br>• Large expandable banners<br>• High-impact Lightbox ads | Maximize visibility and time-in-view; ideal for brand launches or big reach-focused campaigns. |
| Engagement | • Multi-directional units<br>• Swipeable carousels<br>• Hotspot/3D product demos<br>• Lightbox galleries<br>• Expandable banners | Encourage user actions (tap, swipe, drag); ideal for mid-funnel interaction and deeper engagement. |
| Conversions | • Dynamic product banners<br>• Shoppable rich media<br>• Interactive video ads<br>• Expandable product cards<br>• In-banner mini landing pages | Reduce friction by allowing product exploration within the ad; drive users toward purchase decisions. |
| Retargeting | • In-banner video<br>• Dynamic carousels<br>• Product feed–powered units<br>• Quick-view expandable panels | Deliver personalized reminders and recommendations; ideal for re-engaging high-intent audiences. |
| Rich media format | Idea | Why this increases conversions |
| --- | --- | --- |
| Gamified ads | A juice brand creates a mini-game inside an expandable rich media ad where users swipe to match ingredients (e.g., mango + mint = new summer blend). Each correct match reveals a fun animation and leads to a CTA offering a discount. | Games trigger dopamine through reward loops. A simple challenge holds attention 3–5x longer than a static ad and creates a positive emotional association with the product. |
| Sliders (before/after) | A beauty brand shows a drag-to-reveal transformation: “Before” on the left, “After” on the right. As users slide, product benefits appear as hotspots. | Before/after visuals provide proof. This format builds trust quickly — ideal for products where improvement is visible. |
| 360° interactive product views | A sportswear brand allows users to rotate a 3D sneaker, zoom into textures, and tap hotspots to view cushioning or grip technology. | 360-degree views mimic the in-store experience — building confidence and reducing hesitation for high-consideration purchases. |
| Countdown timers | A fashion retailer’s rich media ad features a ticking countdown and dynamically displays items that are selling out in real time. | Timers create urgency and FOMO. Users take action immediately instead of saving the page for later. |
| Weather or location-triggered creatives | Users in Rome see “Weekend Escapes: Amalfi Coast.” Users in Berlin see “Fly to Greece from €39.” The creative updates automatically based on location. | Location relevance makes the ad feel personalized, reducing friction and increasing conversion rates. |
Metric
What It Measures in Rich Media Ads
How It Differs From Standard Display
Engagement Rate
Tracks taps, swipes, hovers, expansions, video plays, and other meaningful interactions.
Standard display rarely measures engagement beyond clicks; gesture-level data is absent.
Interaction Rate
Counts total interaction events per impression, including multiple actions per user.
Standard display only counts one action — a click — and ignores micro-interactions.
Dwell Time
Measures how long users stay inside or interact with the ad experience.
Standard display offers no time-in-ad metric; exposure time cannot be measured.
Expansion Rate
Shows how many users intentionally expanded an ad (for expandable units).
Standard banners do not expand, so this metric does not exist for static units.
Video Engagement
Tracks play rate, completion rate, drop-off points, and time watched inside the ad.
Standard display often tracks only clicks on video thumbnails, not in-ad video behavior.
Conversions (In-Ad & Post-Click)
Measures actions like add-to-cart, sign-ups, product exploration, and post-click conversions.
Standard display focuses mainly on post-click conversions and lacks in-ad action tracking.
View-Through Engagement (VTE)
Tracks conversions or high-value actions taken after viewing the ad but without clicking.
Standard display relies heavily on CTR and rarely accounts for non-click conversion influence.
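As a rough illustration, the engagement and interaction rates above can be derived from a raw event log. This is a sketch under stated assumptions: the event names and the `(impression_id, event_type)` log shape are invented for the example, not any vendor's actual schema.

```python
# Illustrative sketch: deriving rich media metrics from a raw event log.
# Event names and the (impression_id, event_type) shape are assumptions.

INTERACTION_EVENTS = {"tap", "swipe", "hover", "expand", "video_play"}

def rich_media_metrics(events):
    """events: list of (impression_id, event_type) tuples."""
    impressions = {i for i, e in events if e == "impression"}
    interactions = [(i, e) for i, e in events if e in INTERACTION_EVENTS]
    engaged = {i for i, _ in interactions}          # impressions with >= 1 interaction
    n = len(impressions) or 1
    return {
        "engagement_rate": len(engaged) / n,        # unique engaged impressions
        "interaction_rate": len(interactions) / n,  # all events, incl. repeats per user
        "expansion_rate": sum(1 for _, e in interactions if e == "expand") / n,
    }

log = [(1, "impression"), (2, "impression"), (3, "impression"),
       (1, "tap"), (1, "expand"), (2, "swipe")]
metrics = rich_media_metrics(log)  # engagement 2/3, interaction 1.0, expansion 1/3
```

Note how interaction rate can exceed engagement rate because one user can fire several events per impression, which is exactly the distinction the table draws between the two metrics.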
Network
Type
Audience reach
Supported ad formats
Payment model
Minimum traffic requirement
Best for
Google AdSense / Google Ad Manager
Hybrid (Self-serve + Managed)
Global / Very broad
Display, native, video, mobile
CPM / CPC / Dynamic bidding
None
Both
Media.net
Self-serve
Broad (content-rich markets)
Display, contextual, native
CPM / CPC
Low
Publishers & Advertisers
PropellerAds
Self-serve
Global (strong in emerging markets)
Display, pop-under, push
CPM / CPC / CPA
Low
Advertisers & Publishers (performance)
AdPushup
Managed
Mid–large publishers (via SSPs)
Display, native, header bidding
CPM
Medium
Publishers
Ezoic
Self-serve + Managed support
Global
Display, native, video, mobile
CPM / RPM
Low–Medium
Publishers (all sizes)
Monumetric / Mediavine / Raptive
Managed / Premium
Premium mid–large publishers
Display, native, video
CPM / RPM
High (50k–100k+ monthly)
Premium publishers
Amazon Publisher Services (APS)
Managed
Global (commerce-heavy)
Display, native, video
CPM / Dynamic
Low–Medium
Publishers & Advertisers
AdRoll
Self-serve + Managed
Global
Display, native, dynamic product ads, video
CPM / CPC / CPA
Low–Medium
Advertisers
Criteo
Managed
Global (retail-focused)
Display, native, dynamic retargeting, video
CPM / CPC / CPA
Medium–High
Advertisers
BidVertiser / HilltopAds / ReklamStore
Self-serve
Global (incl. emerging regions)
Display, native, push, pop-under
CPM / CPC / CPA
Low
SMBs (both sides)
SmartyAds
Self-serve + Managed
Global
Display, video, mobile, native
CPM (RTB)
None
Both (programmatic control)
Category
RTB (Real-Time Bidding)
Programmatic Buying (Overall Ecosystem)
Core definition
Auction-based buying where each ad impression is bid on in real time.
The broader automated system that includes RTB plus non-auction deal types.
How it works
Real-time bidding happens in 40–100 ms with DSPs competing in live auctions.
Automation used to buy digital ads across multiple deal models.
Deal types included
Only auction-based buying (open RTB, PMP, header bidding).
RTB + programmatic guaranteed + preferred deals + private marketplace.
Pricing model
Dynamic pricing determined by first-price auctions.
Dynamic (RTB) or fixed (PG, PD).
Control for advertisers
High control at impression level (audience, bid price, context).
Varies by deal type; less granular in guaranteed deals.
Transparency
Varies by exchange; RTB supply paths differ in transparency.
Generally higher in guaranteed/direct programmatic deals.
Use cases
Scale, performance, user-level targeting, mobile RTB, CTV RTB.
Premium inventory, brand campaigns, long-term deals + RTB for reach.
Data usage
Relies on real-time signals, contextual data, algorithmic bidding.
Uses same data sources but not always in real-time auctions.
Primary tools
DSPs, SSPs, RTB software, ad exchanges.
DSPs, SSPs, DMP/CDPs, PG systems, PMPs.
Relationship
RTB is a subset of programmatic.
Programmatic is the umbrella that includes RTB.
Best for
Performance, efficiency, dynamic optimization, user-level bidding.
Premium access, stable CPMs, controlled environments + RTB for scale.
Category
First-Price Auction
Second-Price Auction
How it works
Winning bidder pays exactly the amount they bid.
Winning bidder pays slightly above the second-highest bid.
Adoption in 2026
Standard across nearly all RTB ad exchanges, DSPs, and SSPs.
Largely deprecated; used only in limited or legacy marketplaces.
Pricing behavior
Pricing is more transparent and predictable for publishers.
Pricing can fluctuate because the final cost doesn’t match the bidder’s offer.
Impact on advertisers
Requires bid-shading and smarter bidding strategies to avoid overpaying.
Historically cheaper for buyers due to the discounted final price.
Impact on publishers
Higher and more stable revenue; reduces auction gaming.
Lower revenue potential; publishers had less control.
AI role
DSPs use machine-learning bid shading, value prediction, and pacing control.
Minimal AI complexity since bidding was simplified.
Best suited for
Competitive RTB environments (CTV, mobile in-app, premium display).
Rare exceptions where transparency matters less or legacy tech persists.
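The pricing difference between the two auction types is easy to see in code. A simplified sketch follows; the bid values and the flat shading factor are illustrative assumptions, and real exchanges add price floors, currency handling, and ML-driven shading rather than a fixed multiplier.

```python
# Sketch of the two clearing rules described above, plus naive bid shading.

def first_price(bids):
    """Winner pays exactly their own bid."""
    winner = max(bids)
    return winner, winner

def second_price(bids, increment=0.01):
    """Winner pays just above the runner-up's bid."""
    top, runner_up = sorted(bids, reverse=True)[:2]
    return top, runner_up + increment

def shaded_bid(estimated_value, shade=0.8):
    """In first-price auctions, buyers bid below their estimated value to
    avoid overpaying; DSPs learn the shade factor per placement with ML."""
    return estimated_value * shade

bids = [5.00, 4.20, 3.10]   # CPM bids
first_price(bids)           # winner bid 5.00 pays 5.00
second_price(bids)          # winner bid 5.00 pays ~4.21
```

This is why the table says first-price "requires bid-shading and smarter bidding strategies": without shading, the winner always pays their full stated value.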
Platform
Category
Key Strengths
Best For
Google Display & Video 360 (DV360)
DSP
Massive reach, YouTube inventory, GMP data integrations, advanced bidding & fraud protection
Enterprise advertisers, cross-channel RTB, video & CTV scale
The Trade Desk
DSP
Omnichannel RTB, UID 2.0 identity, detailed reporting, AI-driven optimization
Precision targeting, CTV, audio, global RTB scale
Xandr (Microsoft Advertising)
DSP + Ad Exchange
Premium video, broadcaster deals, customizable bidding strategies, CTV strength
CTV buying, premium video, retail media
Yahoo DSP
DSP
Native + video formats, proprietary audience data, competitive CPMs
Mid-market advertisers, performance RTB campaigns
Amazon DSP
DSP
Exclusive Amazon audiences, Fire TV inventory, retail media integrations
Commerce-led brands, upper-funnel campaigns tied to purchase behavior
Magnite
SSP + Exchange
Large independent SSP, strong video/CTV supply, transparent auctions
High-quality CTV/video, transparent supply paths
PubMatic
SSP + Exchange
Global RTB infrastructure, transparency, mobile in-app strength, PMP deals
Mobile & video scaling, premium programmatic deals
Without Smart Supply
With Smart Supply
Fragmented supply chain with inconsistent inventory quality
Streamlined supply paths curated for transparency and verified quality
High exposure to invalid traffic, bots, and spoofed domains
AI-driven fraud screening that blocks unsafe inventory before bidding
Slow bid responses due to multiple intermediaries and long hops
Faster, more efficient auction participation with reduced latency
Higher CPMs caused by hidden fees and unnecessary resellers
Cleaner supply and fewer middlemen, leading to more stable, efficient CPMs
Difficulty predicting performance or maintaining delivery consistency
Reliable pacing, stable performance, and improved win rates across RTB buying
Limited visibility into which SSPs and exchanges deliver true value
AI-based ranking of SSPs and exchanges, highlighting the strongest inventory sources
Wasted budget on low-value impressions and non-engaged audiences
Budget directed toward high-quality, high-engagement inventory with better outcomes
Reactive brand-safety measures applied after damage is done
Proactive protection that avoids unsafe environments before bids are placed
Struggle to maintain ROI as competition increases
Smarter bidding, higher-quality impressions, and stronger long-term ROI
Ad Format
Typical CPM Range (USD)
Notes
Display (Banner / Native)
$1 – $3
Cheapest inventory; increases with deeper targeting or premium placements.
Mobile In-App
$2 – $5
Higher due to engagement and device-level data.
Video (Outstream / Instream)
$6 – $12
Strong demand; CPMs rise with viewability guarantees.
Social Video / Short-Form
$5 – $10
Depends on platform competition and audience segment.
Audio Programmatic
$4 – $8
Varies by region and contextual relevance.
Connected TV (CTV)
$20 – $40+
Premium inventory; limited supply and high audience value.
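Since CPM is cost per 1,000 impressions, the ranges above translate directly into reach for a given budget. A quick arithmetic sketch, using example budget figures:

```python
# CPM arithmetic: cost per mille = cost per 1,000 impressions.

def impressions_for_budget(budget, cpm):
    """How many impressions a budget buys at a given CPM."""
    return int(budget / cpm * 1000)

def cost_of_impressions(impressions, cpm):
    """What a given volume of impressions costs at a given CPM."""
    return impressions / 1000 * cpm

# $10,000 at a $2.50 display CPM vs. a $30 CTV CPM (example figures):
impressions_for_budget(10_000, 2.50)   # 4,000,000 display impressions
impressions_for_budget(10_000, 30.00)  # 333,333 CTV impressions
```

The gap illustrates the trade-off in the table: CTV buys roughly a tenth of the impressions per dollar, but each impression carries far higher audience value.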
Driver
What’s changing
Why native video benefits
User behaviour
People spend most of their time in feeds, apps and short-form video environments
Native video is built for feeds and mobile, so it fits how people actually consume content
Privacy & signals
Third-party cookies and easy cross-site identifiers are disappearing
Contextual and content-integrated formats like native are easier to target and measure without invasive IDs
Media performance pressure
Marketers are under pressure to prove attention, brand lift and incrementality, not just impressions
Native video can be bought on CPCV and evaluated on attention and brand impact metrics
Creative expectations
Audiences expect ads to feel like content, not generic spots
Native lets brands tell platform-native stories that feel closer to creator content than to traditional commercials
Fig. Drivers of native video growth in 2026.
Format
Traditional video ads
Native video ads
What it means for marketers
Role in the experience
Sit before or inside content as a separate “commercial break”
Sit inside feeds, articles or apps as part of the content experience
Native works best when the ad feels like another piece of content, not a detour
Viewer control
Often non-skippable or only skippable after a few seconds
Usually scrollable or dismissible, viewers choose to stop and watch
Success depends on winning attention, not forcing exposure
UX fit
TV-style creative dropped into digital environments
Creative adapted to each platform’s look, feel and behaviour
Native demands platform-specific creative, not one-size-fits-all assets
Typical sweet spot
Broad reach, short logo/message blasts
Attention, storytelling, mid-funnel influence and incremental performance
Use pre-roll for blunt reach; use native to build understanding and preference
Fig. Native vs traditional video at a glance.
Format
Primary placements
Best for
Creative notes
Pre-roll native video
Publisher video players, streaming clips
Guaranteed exposure before specific content categories
Keep it tight, style it like the surrounding show or channel, avoid “TV ad dumped online” feeling
In-feed video
Social feeds, news feeds, content recommendations
Scale, quick storytelling, driving traffic or lightweight action
Design as if it were an organic post first, ad second; strong opening frame is critical
In-article / in-read video
Mid-article placements on publisher sites
Mid-funnel education, explainers, how-to content
Lead with a clear headline, assume sound-off, use subtitles to carry the story
Platform-specific native (FB/IG/TikTok/LinkedIn)
Social and professional networks
Audience-specific narratives and retargeting
Match visual language of each platform; lean into creator-style production where appropriate
Fig. Native video formats cheat sheet.
Layer
What it handles
Why it matters for native video
Supply-side platforms (SSPs)
Aggregate native inventory from publishers, apps and native networks
Give you scale across in-feed, in-article and outstream placements without one-off deals
Demand-side platforms (DSPs)
Audience targeting, bidding, pacing, frequency and optimisation
Let you run native alongside other channels, apply AI models and optimise to CPCV, CPA or ROAS
Creatives & templates
Video files, thumbnails, headlines, descriptions and CTAs
Ensure your native ads render correctly in each environment and allow dynamic creative optimisation
Measurement & verification
Viewability, AVOC, brand lift, attention scores, fraud checks
Proves your native video was actually seen and drove real impact, and protects spend from fraud and low-quality placements
Fig. Programmatic native video building blocks.
Metric
What it tells you
When to prioritise it
Practical use
Viewability
Whether the ad had a real chance to be seen
Always; baseline quality control
Exclude low-viewability placements, negotiate guarantees with partners
VTR / completion rate
How far people get through your video
When optimising creative and story arc
Identify drop-off points and re-edit the opening or pacing
CPCV
How much a completed view costs
When you buy on view-based outcomes
Compare channels and formats on efficiency, not just CPM
Attention / watch time
How long people actively stay with your ad
When the story and message are complex
Use as a mid-funnel KPI and as a proxy for deeper processing
Fig. Native video metrics quick reference.
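The metrics in the table reduce to simple ratios over campaign totals. A minimal sketch, with illustrative numbers:

```python
# Sketch of the native video KPIs above, computed from campaign totals.
# All input numbers below are illustrative examples.

def native_video_kpis(spend, impressions, starts, completes, viewable):
    return {
        "viewability": viewable / impressions,  # share with a real chance to be seen
        "vtr": completes / starts,              # completion rate among started views
        "cpcv": spend / completes,              # cost per completed view
    }

kpis = native_video_kpis(spend=5_000, impressions=1_000_000,
                         starts=400_000, completes=120_000, viewable=700_000)
# viewability 0.70, VTR 0.30, CPCV ~ $0.042
```

As the table suggests, CPCV is the fairest way to compare channels that charge very different CPMs: a cheap placement nobody finishes can still be expensive per completed view.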
Factor
Standard display banners
Native ads on content sites
Visual attention
Often ignored due to banner blindness
Integrated into content layout, more likely to be seen
User experience
Interruptive, clearly separate from content
Feels like part of the experience
Typical CTR range
~0.05–0.1% in many verticals
~0.2–0.6% on recommendation widgets, higher in niche
Perceived value
“Ad slot” that users mentally filter out
Useful or interesting content that earns the click
Fig. Native vs banner at a glance.
Platform type
Primary goal
Ideal use cases
Example platforms
Content discovery & recommendation
Scale top/mid-funnel content traffic
Driving traffic to articles, advertorials, product pages
Taboola, Outbrain, Revcontent, MGID
Programmatic native DSPs / SSPs
Omnichannel buying and optimization
Running native alongside display, video, CTV via one DSP
StackAdapt, TripleLift, Sharethrough
Sponsored content & storytelling
Deep engagement and brand building
Sponsored series, explainers, in-depth stories
Nativo, Pressboard
Newsletter & email-native platforms
High-intent, niche audiences via email
Thought leadership, B2B offers, subscription funnels
Paved
Fig. Types of native advertising platforms.
Platform
Core strength
Best suited for
Taboola
Massive open-web reach & content discovery
Performance content, large-scale awareness
Outbrain
Premium publisher focus
Brand-sensitive campaigns, quality-first buyers
Nativo
In-feed branded content experiences
Storytelling in complex or regulated categories
Pressboard
Branded content analytics & workflow
Measuring and scaling publisher content deals
Revcontent
Performance-focused recommendation widgets
Aggressive testing and cost-efficient traffic
MGID
Global performance native network
International scale and multi-language campaigns
TripleLift
Programmatic native & video SSP
DSP-driven buying with strong creative fit
Criteo Native
Commerce-native and retargeting
Retailers and e-commerce product promotion
Paved
Newsletter sponsorship marketplace
Niche audiences and inbox-focused strategies
StackAdapt
AI-driven omnichannel DSP
Sophisticated, cross-channel native programs
Fig. Snapshot of the top 10 native ad platforms.
Decision area
Questions to ask
What “good” looks like
Audience & publishers
Does this platform reach my core buyers and key sites?
Clear publisher list, strong vertical and geo match
Formats & funnel fit
Do formats map cleanly to awareness, consideration, conversion?
Mix of content, video, and product formats that fit your journey
Targeting & AI
Can it use first-party data and optimize to my KPIs?
Conversion-based bidding, lookalikes, contextual options
Brand safety & quality
How do we control where ads appear and filter low quality?
Site-level reports, third-party verification, robust blocks
Reporting & integration
Can we easily plug data into our analytics stack and Elevate?
APIs, exports, and flexible reporting granularity
Fig. Platform selection checklist.
Metric
Typical native range / behaviour
What to watch for
CTR
~0.2–0.6% on recommendation units; higher in niche
Rising CTR with stable or improving conversion rate
Time on site / page
Usually higher than standard display traffic
Depth of engagement vs bounce rate
CPC vs search/social
Often lower than search; competitive with social
Cost trends as you scale budgets
CPA / ROAS
Competitive or better when integrated with retargeting
CPA stability and ROAS when you scale and diversify
Fig. Performance benchmarks for native campaigns.
Step
What happens
Key questions to answer
1. Define audience & objective
Align on business goal and target segment
Who do we need to reach, and what should they do?
2. Configure in DSP
Set targeting, budgets, pacing, and inventory preferences
Where should we run, and under what constraints?
3. Real-time bidding & routing
DSP evaluates bid requests and bids on matching impressions
Is this impression worth buying right now?
4. Delivery & frequency control
Winning creative is served, with caps to avoid overexposure
Have we reached this household too often already?
5. Measurement & optimization
Exposure logs are tied to outcomes and used to refine the plan
What’s working, what isn’t, and where should we shift?
Fig. Programmatic TV workflow summary.
Type
Where it runs
Main strengths
Typical use cases
Programmatic CTV
Smart TVs, streaming devices, CTV apps
Precise targeting, strong identity, rich reporting
Prospecting, retargeting, full-funnel campaigns
OTT programmatic
Streaming apps on TV, mobile, desktop
Cross-screen reach, flexible formats
Video reach extension, multi-screen storytelling
Addressable TV
Set-top boxes, smart TV OS, VOD in linear environments
Household-level targeting on traditional TV feeds
High-value audiences in premium TV environments
Programmatic linear TV
Broadcast and cable linear feeds with addressable slots
Scale plus data overlay in live scheduled content
National moments, live events, incremental reach
Fig. Types of programmatic TV and where they fit.
Dimension
Traditional TV
Programmatic TV advertising
Why it matters
Buying method
Manual IOs, upfronts, and scatter
Automated buying via DSPs and programmatic platforms
Less admin, faster changes, more agility
Targeting
Broad demos and program-level targeting
Audience, behavioral, and contextual targeting
Less waste, higher relevance
Optimization
Limited mid-flight changes
Ongoing, data-driven optimization in near real time
Better performance during the campaign
Measurement
Panels, GRPs, and brand-lift studies
Impression-level logs, cross-device attribution, outcomes
TV closer to digital performance standards
Fig. Programmatic vs traditional TV at a glance.
Component
What it measures / does
Example use
Where it helps most
Platform analytics
Delivery, completion, basic engagement
See which apps, times, and creatives deliver views
Day-to-day optimization and pacing
ACR / device graphs
Who saw which ad across which devices and screens
Link a CTV impression to mobile or desktop behavior
Cross-device reach and frequency management
Attribution partners
Incremental outcomes, contribution of TV vs other channels
Quantify lift in sales or leads after TV exposure
Budget decisions and channel valuation
Conversion APIs
Server-to-server logging of conversions and offline events
Capture purchases from CRM or POS into attribution
Closing the loop when cookies or IDs are weak
Fig. Measurement toolkit for programmatic TV.
Component
Primary role
Impact on programmatic TV
Open Garden
DSP-agnostic, multi-platform buying framework
Access to 15+ DSPs and broad CTV/OTT supply without bias
Smart Supply
Premium supply curation and SPO
Higher-quality inventory, fewer fees, more working media
Elevate
Cross-channel planning, optimization, and attribution
Aligns TV buying with business KPIs and outcome reporting
Fig. How Open Garden, Smart Supply, and Elevate work together.
Layer
Who controls it
Typical formats
Best used for
Smart TV OS (Samsung, LG, Google TV)
TV manufacturer / OS owner
Homescreen banners, sponsored tiles, system video, OS-owned FAST
Big-screen awareness, discovery, platform-level targeting
Streaming apps (Hulu, YouTube, Freevee, Pluto TV, etc.)
Individual publishers / networks
Pre-roll and mid-roll video, pause ads, app-specific takeovers
Mid-funnel engagement, reach within specific content or genres
Fig. OS vs app-level inventory.
Dimension
Traditional linear TV
Smart TV advertising
Targeting
Broad demos based on programme and channel
Addressable audiences based on device and viewing behaviour
Buying focus
Programmes, dayparts, network mixes
Audiences, devices and OS environments
Formats
15–30s spot breaks inside scheduled programming
Homescreen units, navigation tiles, in-app CTV video, interactive/shoppable
Measurement
Panel-based ratings, modelled impact
Device-level delivery, digital-style attribution and lift studies
Fig. Smart TV vs traditional TV at a glance.
Format
Where it appears
Best for
Example use case
Homescreen placements
Main OS homescreen
Reach, launches, “always-on” brand presence
New series launch, seasonal retail push
In-navigation units
Menus, search results, recommendation rails
Influencing choice at the moment of selection
Sponsored “Kids & Family” rail for a family SVOD app
In-app CTV video ads
AVOD and FAST streams on Smart TVs
Mid-funnel engagement and scale
30-second spot in a FAST channel for a DTC brand
Interactive and shoppable formats
Within CTV ad breaks or OS-owned experiences
Driving direct actions and measurable responses
Remote-controlled shoppable ad for a retail offer
Fig. Smart TV ad formats and their role.
Platform
Headline footprint (US)
Signature data asset
Standout format / surface
Roku
~37% CTV device share; 90M+ streaming households
Account-based household graph, streaming behaviour
Action Ads with remote-based responses and Roku Channel UI
Samsung Ads
~67–68M Samsung Smart TVs; 77M+ active devices
ACR across linear + streaming, cross-device data
Tizen homescreen mastheads and Samsung TV Plus FAST
LG Ad Solutions
~45M LG Smart TVs; 200M+ globally
Global ACR dataset from webOS TVs
High-impact LG webOS home screen placements
Google TV / Android TV
Tens of millions of devices; strong YouTube CTV reach
Logged-in Google identity and YouTube behaviour
Sponsored rows on Google TV and YouTube on TV formats
Fig. Major Smart TV platforms side-by-side.
Challenge
What it looks like in practice
How a Smart TV partner helps
OS and device fragmentation
Multiple buys across Samsung, LG, Roku, Fire TV, apps and DSPs
Consolidates planning, buying and frequency across platforms
Limited homescreen access
Difficulty securing hero and navigation units at key moments
Uses relationships and packages to unlock premium placements
Measurement and walled gardens
Disconnected reports, no single view of reach or outcomes
Builds a unified KPI and reporting layer across data sources
Smart TV–specific creative needs
Linear spots reused without optimisation for UI or interactivity
Guides creative formats, placements and testing for each OS
Fig. Challenges and how a partner helps.
Channel group
Typical role in the funnel
Strengths
TV & CTV
Awareness, broad reach
Mass reach, high impact storytelling
Digital display
Mid-funnel, retargeting
Scalable reach, flexible formats
Search & shopping
Lower-funnel, intent capture
High intent, strong measurability
Social & creators
Consideration, engagement
Rich formats, social proof, community
Retail & commerce
Lower-funnel, point-of-sale impact
Shopper data, proximity to purchase
Audio & podcasts
Consideration, brand affinity
Intimate environment, commuting moments
OOH & DOOH
Awareness, local impact
High visibility, geotargeting opportunities
Fig. At a glance: where media money goes.
Component
What it covers
Why it matters
Objectives & KPIs
Business goals, media goals, success metrics
Aligns everyone on what “good” looks like
Audience & segments
Who you’re targeting and how they differ
Focuses spend on the most valuable people
Channel strategy
Role of each channel and format
Prevents overlap and clarifies the funnel
Budget allocation
Spend by channel, region, audience
Makes trade-offs explicit
Timing & flighting
Start dates, peaks, pauses, seasonality
Matches media pressure to demand and context
Measurement plan
Data sources, models, experiments
Ensures learnings and accountability
Fig. Core components of a media plan.
Aspect
Media planning
Media buying
Main focus
Strategy and alignment with business goals
Executing and optimising the plan
Timing
Mostly pre-campaign
In-flight and post-campaign
Core questions
Who, where, when, how much, and why?
How do we buy it, at what price, and how do we keep improving?
Typical outputs
Media plan, channel mix, budget split, forecast
IOs, DSP setups, bids, pacing rules, actual delivery
Key skills
Research, forecasting, scenario modeling, stakeholder alignment
Negotiation, platform fluency, bid strategies, data analysis
Data focus
Audience insights, historical performance, market trends
Live performance data, clearing prices, quality metrics
Success measured by
Fit of plan to objectives, expected reach/ROI
Actual results: ROI/ROAS, CPA, cost per incremental outcome
Fig. Media planning vs media buying.
Goal type
Example KPI
Example metric(s)
Awareness
On-target reach
% of target reached, GRPs, impressions
Consideration
Quality engagement
Video completion rate, engaged sessions
Conversion
Cost per acquisition
CPA, leads, sales, app installs
Efficiency
Profitability of spend
ROAS, cost per incremental conversion
Fig. Media planning goals, KPIs and common metrics.
Step
Key decisions
Primary owners
Choose buying platforms
Which DSPs, ad platforms, and direct partners
Media buyer, ad ops
Negotiate & select
Deal types, inventory, pricing, quality controls
Buyer, vendor reps
Activate campaigns
Trafficking, tracking, bid strategies, pacing
Buyer, ad ops
Optimize in-flight
Budget reallocation, targeting, creative testing
Buyer, analyst
Measure & report
Performance vs plan, insights, next-step actions
Buyer, planner, data
Fig. Media buying process overview.
Type
Primary channels
Strengths
Typical use cases
Traditional
Linear TV, radio, print, OOH
Mass reach, cultural impact
Brand launches, seasonal brand campaigns
Digital
Display, search, social, online video
Precision, rich data, fast optimization
Always-on performance, retargeting
Programmatic
Display, video, CTV, audio, DOOH
Automation, scale, granular controls
Cross-publisher reach, dynamic optimization
In-house vs agency model
All of the above
Control (in-house) vs expertise (agency)
Hybrid setups, global + local orchestration
Fig. Types of media planning and buying.
Stage
What Happens
Who Is Involved
Purpose / Why It Matters
Ad Rendered
Creative loads on the user's device.
Publisher, SSP, Ad Server
Ensures the impression is actually delivered.
Viewability Measurement
Systems check if the ad was viewable using MRC standards.
Verification Vendors (IAS, DV, MOAT), MRC
Determines meaningful visibility for optimization and billing.
Invalid Traffic (IVT) Detection
Algorithms detect bots, spoofed environments, or anomalies.
Fraud-Detection Vendors, Verification Partners
Protects advertisers from fraudulent or low-quality impressions.
Brand Safety & Suitability Checks
Context is scanned for suitability or sensitive content.
Brand Safety Vendors, Contextual Intelligence Providers
Ensures ads appear in safe, compliant environments.
Outcome Measurement
Impressions, clicks, conversions, attention metrics are logged.
DSPs, Analytics Platforms, Attribution Partners
Provides performance data for decisioning and optimization.
Feedback Loop into DSP
Performance data updates DSP algorithms for future bidding.
DSP Optimization Engine
Improves bidding accuracy, relevance, and ROI.
Phase
What Happens
Inventory Selection & Access
Smart Supply taps into direct SSP partnerships to secure high-quality, scalable inventory. Deal IDs are custom-built for each campaign’s requirements.
Traffic Filtering & AI Optimization
Using machine learning and real-time analytics, Smart Supply filters out low-quality supply, invalid traffic (IVT), and brand-unsafe placements, ensuring only clean, relevant inventory enters the bidding pool.
Real-Time Performance Adjustments
As the campaign runs, Smart Supply dynamically adjusts deals — reallocating budget, optimizing bids and supply paths based on performance signals to maximize KPI success and minimize waste.
Factor
Pre-Bid Fraud Prevention
Post-Bid Fraud Prevention
When it happens
Before the bid is placed — during the evaluation of the bid request.
After the ad is served and impression data is analyzed.
Primary purpose
Prevent advertisers from bidding on invalid, fraudulent, or unsafe impressions.
Validate delivered impressions and identify fraud that slipped through pre-bid filters.
Key benefit
Saves budget by avoiding fraudulent inventory upfront.
Ensures accurate reporting, billing, and performance attribution.
Main techniques
Device fingerprinting, domain/app verification, bot probability scoring, supply-path validation (schain), viewability prediction.
Traffic pattern analysis, post-impression behavior checks, IVT classification, attribution validation.
AI involvement
Real-time machine learning models assess fraud risk within milliseconds before bidding.
AI examines impression logs, user engagement anomalies, and cross-channel behaviors to detect fraud retrospectively.
Common fraud types intercepted
Domain spoofing, app spoofing, bot traffic, invalid bid requests, misrepresented supply paths, hidden resellers.
Click farms, attribution fraud, ad stacking, pixel stuffing, sophisticated bot networks, post-delivery manipulation.
Who uses it
DSPs, SSPs, exchanges, verification vendors.
DSPs, verification vendors, analytics platforms, auditors.
Impact on performance
Reduces wasted impressions and improves supply-path quality.
Improves reporting accuracy, protects optimization models, and ensures clean performance data.
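A heavily simplified sketch of the pre-bid side of this table: score each bid request before bidding and skip anything risky. The features, weights, seller list, and threshold are all assumptions for illustration; production systems use vendor pre-bid segments and ML models evaluated in milliseconds.

```python
# Illustrative pre-bid risk scoring. Weights and threshold are assumptions.

AUTHORIZED_SELLERS = {"pub123", "resellerA"}   # e.g., built from ads.txt / sellers.json

def prebid_risk(request):
    score = 0.0
    if request["seller_id"] not in AUTHORIZED_SELLERS:
        score += 0.5                            # unauthorized supply path (schain check)
    if request["declared_domain"] != request["detected_domain"]:
        score += 0.4                            # possible domain spoofing
    score += request["bot_probability"]         # vendor bot-probability signal
    return score

def should_bid(request, threshold=0.5):
    """Return True only for requests below the risk threshold."""
    return prebid_risk(request) < threshold

clean = {"seller_id": "pub123", "declared_domain": "news.example",
         "detected_domain": "news.example", "bot_probability": 0.05}
should_bid(clean)  # True: clean request enters the bidding pool
```

The post-bid side of the table is the complement: the same kinds of signals are analyzed on delivered impressions to catch whatever slipped through, then fed back to tighten these filters.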
Aspect
YouTube (Standard YouTube Advertising)
YouTube TV Advertising
Platform type
User-generated content (UGC) and creator-led video platform
OTT / Connected TV service with live TV, VOD, and DVR
Content environment
Mix of creator videos, shorts, podcasts, and user uploads
Premium, professionally produced TV content (live channels + on-demand)
Viewing device
Primarily mobile, desktop, and tablet (CTV optional)
TV screen first (Connected TVs, streaming devices)
Ad formats
Skippable & non-skippable in-stream, bumper ads, Shorts ads
Non-skippable TV-style video ads
Ad length
6–30 seconds (varies by format)
Typically 15–30 seconds, TV-standard
Viewer intent
Lean-forward, browsing, multitasking
Lean-back, appointment viewing
Brand safety & quality
Varies by channel and creator
High brand safety due to curated TV inventory
Targeting
Google audience signals, interests, behaviors
Household-level targeting + TV viewing data
Measurement focus
Clicks, views, engagement, conversions
Reach, frequency, incremental lift, completion rates
Buying method
Google Ads auction-based buying
Reserved and programmatic CTV buys
Use case for advertisers
Performance, awareness, creator alignment
Premium reach, upper-funnel branding, TV-like impact
Step 1: Plan & target ↓
Step 2: Buy inventory ↓
Step 3: Deliver ads ↓
Step 4: Measure & optimize ↓
Define campaign goals (reach, awareness, incremental TV reach) and select household-level targeting such as location and audience segments.
Access YouTube TV inventory through programmatic CTV buying or reserved placements to secure premium live and on-demand TV ad slots.
Run non-skippable 15–30 second video ads within natural TV ad breaks on connected TVs.
Track reach, frequency, completed views, and lift, then optimize delivery and creative for better performance.
Aspect
Google Ads
DV360
Primary use
Standard YouTube (UGC) video advertising
YouTube TV and premium CTV advertising
Access to YouTube TV inventory
Limited or none
Full access to YouTube TV inventory
Content environment
Creator-led and user-generated videos
Live TV channels + on-demand TV content
Buying model
Auction-based buying
Programmatic + reserved TV-style buying
Ad formats
Skippable, non-skippable, bumper ads
Non-skippable 15–30s TV ads
Targeting level
User-level interests and behaviors
Household-level TV targeting
Reach & frequency control
Basic
Advanced TV-grade reach & frequency management
Measurement focus
Views, clicks, conversions
Reach, frequency, completed views, lift studies
Brand safety & quality
Varies by channel
High brand safety, curated TV inventory
Best for
Performance, demand generation, scalable video
Brand awareness, incremental TV reach, premium CTV
Treating YouTube TV like performance video
YouTube TV is not designed for clicks or immediate conversions. Optimizing toward CTR or last-click attribution leads to misleading conclusions and underestimates real impact. This channel works best when success is defined by reach, frequency, completion, and lift, not direct response metrics.
Over-targeting and shrinking scale
Narrow audience definitions may feel safer, but on YouTube TV they often restrict delivery and inflate CPMs. Household-level TV inventory needs room to scale. Starting broader and refining later usually delivers better efficiency.
Using non–TV-ready creative
Ads built for mobile or social feeds often underperform on the big screen. Small text, slow branding, or weak audio cues reduce effectiveness. YouTube TV creative should assume sound on, lean-back viewing, and branding in the first few seconds.
Ignoring frequency management
Without clear frequency caps, campaigns can overserve the same households or fail to build sufficient repetition. TV effectiveness depends on controlled exposure, not just impressions.
Launching without measurement planning 
Waiting until after launch to think about measurement is a costly mistake. Brand lift studies, reach benchmarks, and attribution frameworks should be set before campaigns go live, especially since clicks are limited.
Expecting unlimited premium inventory
Live sports, prime-time programming, and seasonal moments have finite supply. Late planning can lead to underdelivery or higher prices. Early forecasting and flexibility across CTV inventory are critical.
Evaluating YouTube TV in isolation
YouTube TV performs best as part of a broader CTV and digital video strategy. Judging it alone—without considering incremental reach or cross-channel impact—often undervalues its role in the media mix.
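The frequency-management pitfall above comes down to enforcing a per-household cap before serving. A minimal sketch of that check, with hypothetical household IDs and an illustrative weekly cap (not any ad server's real API):

```python
from collections import defaultdict

class FrequencyCapper:
    """Tracks impressions per household and blocks over-delivery."""
    def __init__(self, cap_per_week):
        self.cap = cap_per_week
        self.counts = defaultdict(int)

    def should_serve(self, household_id):
        return self.counts[household_id] < self.cap

    def record(self, household_id):
        self.counts[household_id] += 1

capper = FrequencyCapper(cap_per_week=3)
served = []
for hh in ["HH-1", "HH-1", "HH-1", "HH-1", "HH-2"]:
    if capper.should_serve(hh):
        capper.record(hh)
        served.append(hh)

print(served)  # HH-1 serves 3 times then caps; HH-2 serves once
```

Real household-level capping additionally has to deduplicate devices back to one household, which is exactly the "TV-grade reach & frequency management" advantage the DV360 comparison above refers to.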
Approach
How it works
Strengths
Limitations
Traditional personalization
Rule-based segments (e.g., demographics, basic behavior) with manually defined if/then logic.
Simple to understand, easy to launch small pilots.
Hard to scale; brittle rules; limited to broad averages; slow to update.
AI-driven personalization
Machine learning models continuously adjust experiences based on real-time and historical data.
Scales to millions of users; adapts quickly; uncovers micro-segments and patterns humans miss.
Requires good data, governance, and careful monitoring to avoid bias or “black box” issues.
Hybrid approach
Combines human-defined rules with AI scores and predictions.
Balances control with automation; easier stakeholder buy-in.
Needs clear ownership so rules and models don’t conflict or duplicate effort.
Fig. Traditional vs AI-driven personalization.
Data type
Examples
What it’s good for
First-party behavioral data
Page views, clicks, scroll depth, app events.
Understanding intent and interest in near real time.
Transactional data
Orders, returns, subscriptions, renewals.
Calculating value, lifecycle stage, and next-best-offer opportunities.
Zero-party data
Preference centers, surveys, quizzes.
Capturing explicit preferences and constraints customers want you to use.
Contextual & device data
Region/DMA, device type, time of day.
Tailoring timing, format, and creative to the situation.
Marketing & media signals
Campaign source, channel, creative ID.
Measuring which campaigns and messages actually drive outcomes.
Customer support & product data
Tickets, chatbot logs, NPS, usage metrics.
Spotting friction, churn risk, and opportunities for proactive outreach.
Fig. Data inputs for AI models.
Feature
What it does
Where it shows up
Dynamic customer profiles
Maintain a live view of each person’s behavior, value, and lifecycle stage.
CDPs, CRM views, profile dashboards.
Predictive analytics & intent modeling
Forecast who is likely to convert, churn, or respond to a specific offer.
Scoring models, audience builders, next-best-action engines.
Recommendation engines
Suggest products, content, or features for each individual.
On-site carousels, in-app modules, email blocks, “up next” rails.
Automated journey orchestration
Sequence touchpoints and move people between journeys based on behavior.
Marketing automation platforms, journey orchestration tools.
Omnichannel personalization
Keep experiences and decisions consistent across channels.
Web, app, email, SMS, ads, CTV, in-product experiences.
Fig. Key features of AI-driven personalization.
Benefit
How it shows up
Strengths
Higher engagement & relevance
More people open, click, watch, and interact.
Higher CTR, time on site, content depth.
Increased conversions & revenue
More purchases, upgrades, and renewals from the same or lower spend.
Conversion rate, average order value, sales velocity.
Better satisfaction & retention
Fewer cancellations and complaints; stronger loyalty.
NPS, churn rate, repeat purchase rate.
Efficient use of data & automation
Less manual segmentation and campaign setup; faster iteration.
Time saved per campaign, number of automated journeys.
Scalability
Ability to handle more users, SKUs, and channels without linear headcount growth.
Growth in active users or SKUs without drops in performance.
Fig. Business benefits of AI-driven personalization.
Category of AI personalization tool
What it does
Examples of tools
Customer data platforms (CDPs)
Ingest, unify, and activate customer data from multiple sources to create a single customer profile that other systems use for segmentation, modeling, and personalization.
Twilio Segment, mParticle, Tealium Customer Data Hub, Adobe Real-Time CDP, Salesforce Data Cloud
Recommendation engines
Use machine learning to suggest products, content, or offers based on a user’s behavior, preferences, and similarity to other users, powering “you may also like” and “because you viewed” experiences.
Dynamic Yield, Nosto, Algolia Recommend, Bloomreach Discovery, LimeSpot
Journey orchestration platforms
Coordinate and automate multi-step, cross-channel customer journeys by deciding when to trigger messages, which paths to take, and which channel to use based on each person’s real-time behavior and profile.
Braze (Canvas Flow), Iterable, Adobe Journey Optimizer, Salesforce Marketing Cloud Journey Builder, Pega Customer Decision Hub
Dynamic creative & content personalization (DCO / experience optimization)
Break creative and content into modular elements (headlines, images, CTAs, offers) and use AI to assemble the best combination for each user or impression across web, email, and ads.
Smartly.io (DCO), Jivox, Clinch, Google Marketing Platform (Studio + DV360 for DCO), Movable Ink
Marketing automation platforms with embedded AI
Automate recurring campaigns (email, SMS, in-app) while using AI to score leads, recommend segments, optimize send time, and suggest next-best actions.
HubSpot Marketing Hub, Adobe Marketo Engage, Salesforce Marketing Cloud, Klaviyo, Mailchimp
AI-powered advertising platforms (DSPs / ad platforms)
Use AI for targeting, bidding, budget allocation, and creative optimization in paid media, deciding which impressions to buy, at what price, and with which message to hit KPIs like conversions or ROAS.
Google Ads (incl. Performance Max), Meta Ads (Advantage+), The Trade Desk, Google Display & Video 360 (DV360), Amazon DSP, StackAdapt
Fig. AI personalization tools.
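As an illustration of the recommendation-engine category above, a toy item-to-item co-occurrence model ("because you viewed" style). The session data is fabricated and this is a sketch of the general technique, not any listed vendor's implementation:

```python
from collections import Counter
from itertools import combinations

# Toy sessions: sets of items a user interacted with together
sessions = [
    {"sneakers", "socks", "cap"},
    {"sneakers", "socks"},
    {"sneakers", "jacket"},
]

# Count how often each ordered item pair co-occurs in a session
co_occurrence = Counter()
for items in sessions:
    for a, b in combinations(sorted(items), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(item, k=2):
    """Items most often seen alongside `item`, best first."""
    scores = {b: n for (a, b), n in co_occurrence.items() if a == item}
    return [b for b, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]

print(recommend("sneakers"))  # "socks" ranks first (co-occurs twice)
```

Production engines layer embeddings, recency weighting, and business rules on top, but pairwise co-occurrence is the intuition behind "you may also like" rails.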
Strategic Purpose
Why It Matters
What CPM Helps You Do
Comparing Channels
Linear, streaming, and digital video each price impressions differently. CPM creates a shared currency across all of them.
Benchmark TV CPM, connected TV CPM, and digital video CPM to determine where ad dollars work hardest.
Setting Budgets
Planners need a predictable cost model to estimate how many viewers they can reach within a given spend.
Model spend scenarios, forecast impressions, and allocate budgets efficiently across publishers and platforms.
Measuring Efficiency
A lower CPM doesn’t always mean better performance—audience quality matters.
Evaluate value per dollar spent by analyzing CPM alongside engagement, frequency, and ROI.
Forecasting Revenue Impact
Advertisers want to understand how impression volume and audience scale translate into sales outcomes.
Estimate how CPM-driven reach supports conversion modeling and revenue forecasts in TV and CTV campaigns.
Factor
How It Works
Impact on CPM
Channel Type: Linear TV
Impressions are estimated via panel-based measurement; broad reach across scheduled programming.
Lower CPM due to broad audiences and lower measurement granularity.
Channel Type: CTV
Impressions are verified at device level when the ad renders; supports precise targeting and segmentation.
Higher CPM due to verified delivery, premium inventory, and audience-level targeting.
Metric
What It Measures
When It’s Useful
How It Relates to CPM
CPM (Cost per Thousand Impressions)
Cost of delivering 1,000 ad impressions.
Planning reach, budgeting, comparing linear vs CTV costs
CPM sets the baseline cost—all other TV metrics build on the audience CPM delivers
CPV (Cost per View)
Cost each time a viewer watches your ad (fully or to a key threshold)
Evaluating engagement, especially in CTV where completion rates reach 90–98%
A low CPM doesn’t guarantee strong CPV; a higher CPM might deliver better-quality viewers who actually watch the ad
CPA (Cost per Acquisition)
Cost of driving a desired action (signup, purchase, install).
Performance and lower-funnel optimization.
CPM influences CPA indirectly: targeted impressions may cost more (higher CPM) but reduce CPA if the audience is high-intent
ROAS (Return on Ad Spend)
Revenue generated per $1 spent on ads
Measuring campaign profitability.
ROAS depends heavily on CPM because impression cost determines how efficiently the campaign brings users into the funnel. Strong CPM efficiency helps maximize ROAS.
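The metric relationships in the table above reduce to simple arithmetic. A minimal sketch with illustrative numbers (not benchmarks):

```python
def cpm(spend, impressions):
    """Cost per thousand impressions."""
    return spend / impressions * 1000

def cpv(spend, completed_views):
    """Cost per completed view."""
    return spend / completed_views

def cpa(spend, acquisitions):
    """Cost per acquisition."""
    return spend / acquisitions

def roas(revenue, spend):
    """Revenue returned per $1 of ad spend."""
    return revenue / spend

# Example: a $20,000 CTV flight delivering 500,000 impressions
spend, impressions = 20_000, 500_000
print(cpm(spend, impressions))  # 40.0 -> $40 CPM
print(cpv(spend, 450_000))      # ~0.044 -> high completion keeps CPV low
print(cpa(spend, 400))          # 50.0 -> $50 per acquisition
print(roas(60_000, spend))      # 3.0 -> $3 back per $1 spent
```

Running the same numbers at a lower CPM but a weaker completion or conversion rate makes the table's point concrete: the cheapest impressions are not automatically the most efficient ones.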
Aspect
Traditional OOH
Programmatic DOOH (pDOOH)
Buying process
Manual, reservation-based buying with long lead times
Automated, platform-based buying using programmatic media workflows
Media flexibility
Fixed placements and schedules once booked
Flexible activation with real-time campaign adjustments
Targeting approach
Broad, location-only targeting
Audience- and context-driven targeting using data signals
Campaign optimization
Minimal optimization during the campaign
Continuous optimization based on performance and real-world conditions
Creative delivery
Static creative or limited rotation
Dynamic creative adapted to context and audience signals
Measurement & reporting
Estimated reach and impressions
Enhanced measurement with delivery logs and attribution signals
Role in media strategy
Primarily awareness-focused
Integrated programmatic out of home within data-driven strategies
Implement Dynamic Content Optimization (DCO)
DCO technology automatically customizes ad creative in real time for different audience segments. It can swap out images, messages, and offers based on a user's location, demographics, past browsing behavior, or the weather, creating a uniquely relevant experience that drives a higher average click-through rate for display ads.
Use Data-Driven Personalization
Incorporate a user's name (where appropriate and privacy-compliant), recently viewed products, or local information. This level of relevance demonstrates that you understand the user's needs, making a click far more likely.
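Both tips above boil down to rules over modular creative elements. A minimal sketch of that assembly step — the signals, copy, and asset names are all hypothetical:

```python
def assemble_creative(user):
    """Pick headline, image, and offer from simple signal rules."""
    headline = (
        "Rainy-day deals near you" if user.get("weather") == "rain"
        else "Today's top picks"
    )
    # Fall back to a generic hero image when no browsing signal exists
    image = user.get("last_viewed", "default_hero.jpg")
    offer = (
        "10% off your next order" if user.get("is_returning")
        else "Free shipping on your first order"
    )
    return {"headline": headline, "image": image, "offer": offer}

ad = assemble_creative(
    {"weather": "rain", "last_viewed": "boots.jpg", "is_returning": True}
)
print(ad["headline"])  # Rainy-day deals near you
```

Real DCO platforms replace these hand-written rules with models that learn which combination performs best per impression, but the modular-elements structure is the same.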
What you can usually get
What it helps answer
What you do not get
Aggregated overlap counts
“How much audience overlap exists?”
The other party’s user-level audience list
Aggregated conversion linkage
“Did exposure correlate with outcomes?”
Raw impression logs or raw purchase rows
Indexed audience profiles
“What attributes index high among converters?”
Direct access to partner identity graphs
Cohort-level performance
“Which segments performed best?”
One-to-one user matching you can export
Fig. Clean room outputs vs what you never get.
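The "aggregated counts, never user-level lists" contract in the table above can be sketched as an output gate with a minimum-audience floor. The threshold and hashed IDs here are illustrative; real clean rooms set their own aggregation rules:

```python
# Each party keeps its own hashed IDs; only aggregates leave the room.
brand_users = {"u1", "u2", "u3", "u4", "u5"}
publisher_users = {"u3", "u4", "u5", "u6"}

MIN_AGGREGATE = 3  # illustrative k-anonymity-style floor

def overlap_count(a, b):
    """Return the overlap size, or None if too small to release."""
    n = len(a & b)
    return n if n >= MIN_AGGREGATE else None

print(overlap_count(brand_users, publisher_users))  # 3 -> releasable
print(overlap_count(brand_users, {"u1"}))           # None -> suppressed
```

Note that the callers never see *which* users overlap — only the count, and only when it clears the floor. That is precisely the distinction the "what you do not get" column draws.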
Driver
What breaks in old workflows
What clean rooms enable
Cookie instability and signal loss
Incomplete attribution paths and reach estimates
Privacy-safe matching for measurement and overlap
Privacy laws and consent boundaries
Risky data sharing, unclear legal exposure
Governed collaboration with constrained outputs
Walled gardens and platform control
Limited cross-platform visibility
Approved analysis inside controlled environments
Fig. Drivers and what they change operationally.
Type
Best for
Tradeoffs to expect
Typical stakeholders
Walled-garden clean rooms
Deep measurement inside one platform
Siloed view; limited portability
Brand, agency, platform partner
Independent/neutral clean rooms
Partner collaboration across multiple publishers/retailers
More setup; identity and governance complexity
Brand, agency, publishers/retailers
Cloud-based frameworks
Enterprises standardizing collaboration as infrastructure
Requires technical ownership; longer ramp
Data/IT, analytics, privacy, marketing ops
Fig. Types of clean rooms and how to choose.
Use case
What you can measure
KPIs that usually make sense
What not to promise
Measurement and attribution
Exposure-to-outcome linkage, lift, deduped reach
Incremental conversions, deduped reach, cost per incremental outcome
Perfect user-level attribution everywhere
Audience overlap and insights
Overlap sizing, audience indexing
Net-new reach potential, overlap %, segment performance
A portable identity graph
Publisher/partner collaboration
Joint planning and closed-loop reporting
Partner contribution, overlap efficiency, publisher lift
Full transparency into partner raw data
Fig. Use case to KPI mapping.
Readiness signal
What “yes” looks like
What “not yet” looks like
What to do first
Data foundation
Stable conversion events + usable first-party IDs
Messy event tracking and inconsistent taxonomy
Fix measurement hygiene and naming conventions
Partner environment
You have partners willing to collaborate
Key partners can’t or won’t support collaboration
Start with one high-value partner and a narrow use case
Business question
A decision worth changing budget/strategy
Vague “we want insights” exploration
Define one question tied to a real spend decision
Resourcing
Analytics + governance support exists
No owner, no privacy review, no ops process
Assign owner and define access + output rules
Fig. “Do we need a clean room?” quick decision rubric.
Inventory Source
Description
Typical Environments
Key Advantages
Trade-Offs
CTV & Streaming Platforms
Ads inserted into TV-like content such as shows, movies, and FAST channels
Smart TVs, streaming apps, broadcaster VOD
Highest attention, sound-on, full-screen, very high completion rates (85–95%+)
Higher CPMs, limited supply
Premium Publisher Video (OLV)
In-stream ads within editorial or long-form video content
News, entertainment, lifestyle publishers
Strong brand safety, contextual relevance, scalable reach
Performance varies by player size and placement
Video Platforms (e.g. YouTube)
Platform-controlled in-stream formats
Desktop, mobile, connected TV apps
Massive scale, standardized formats, strong intent signals
Less transparency, platform-specific measurement
Programmatic Video Exchanges
Aggregated in-stream inventory accessed via DSPs
Open web, apps, premium video players
Targeting flexibility, frequency control, cross-channel buying
Quality varies, requires strict inventory controls
Broadcaster & TV Network Apps
Digital extensions of linear TV inventory
Smart TV apps, OTT platforms
TV-grade content, strong brand lift, familiar buying models
More rigid pricing and deal structures
Metric Category
Key Metrics
What They Indicate
Typical Benchmarks
Delivery & Exposure
Impressions, Reach, Frequency
How many users were exposed and how often
Stable, predictable delivery
Viewability
Player-based viewability
Whether the ad was actually viewable on screen
Higher than out-stream by design
Engagement
Video Completion Rate (VCR), Quartiles (25%, 50%, 75%, 100%)
Depth of viewing and attention
60–80% (OLV), 85–95%+ (CTV)
Audio Signals
Sound-on rate
Whether messaging was fully experienced
Very high on CTV and premium instream
Brand Impact
Ad recall, brand lift, consideration lift
Upper- and mid-funnel effectiveness
Stronger than out-stream formats
Attribution Signals
Incremental reach, household frequency, lift studies
Contribution to business outcomes
More reliable than scroll-based formats
Quality & Safety
Invalid traffic (IVT), fraud detection
Trustworthiness of impressions
Significantly lower IVT vs out-stream
Factor
In-Stream Video Ads
Out-Stream Video Ads
User intent
Intentional viewing
Incidental exposure
Attention mode
Lean-in (focused)
Scroll-driven (fragmented)
Audio state
Sound-on by default
Autoplay muted
Attention stability
Time-based
Behavior-based
Factor
In-Stream Ads
Out-Stream Ads
Typical environment
CTV, streaming, premium video players
Articles, feeds, native placements
Content adjacency
Professional video content
Mixed editorial or social content
Brand perception
High-quality, TV-like
Highly variable
Brand safety control
Stronger, deal-based
Requires stricter controls
Factor
In-Stream
Out-Stream
Available scale
Limited, premium supply
Very high
Incremental reach
Moderate
Strong
Frequency control
Deterministic (especially CTV)
More fragmented
Cross-site duplication risk
Low
Higher
Metric
In-Stream Video
Out-Stream Video
Completion rate (VCR)
60–95%+ (env. dependent)
Low and inconsistent
Engagement driver
Time watched
Time-in-view
Playback continuity
Linear
Starts/stops with scroll
Creative tolerance
Longer formats viable
Short formats required
Cost Factor
In-Stream
Out-Stream
Typical CPM
Higher
Lower
Cost per completed view
Predictable
Variable
Cost per attentive second
Competitive
Often inefficient
Waste risk
Lower
Higher without controls
Measurement Aspect
In-Stream
Out-Stream
Impression logic
Player-based
Viewability-triggered
Exposure certainty
High
Probabilistic
Attribution strength
Stronger (CTV, OLV)
Weaker, assist-based
Fraud/IVT risk
Lower
Higher on open web
Format
Typical locations
Average dwell / exposure time
Core strengths
Best-fit use cases
Digital screens in stations and transit hubs
Metro stations, train stations, bus terminals, entrances, platforms
5–15 minutes per visit, repeated 3–5× per week
High dwell time, large-format visibility, predictable foot traffic, strong recall
Brand awareness, retail launches, local presence, upper-funnel reinforcement
In-vehicle digital displays
Buses, trams, subways, commuter trains, ride-share fleets
15–45 minutes per trip with limited distractions
Long continuous exposure, route-based targeting, strong message retention
Product education, sequential messaging, citywide reach with frequency control
Mobile-enabled and dynamic placements
Stations, vehicles, transit corridors, nearby sidewalks
Immediate + delayed exposure via mobile follow-up
Action-driven engagement, real-time creative updates
Store visits, app installs, promotions, retail and QSR activation
Aspect
Transit Advertising
Digital Transit Advertising
Primary locations
Buses, trams, subways, trains, stations
Public transport vehicles and transit hubs
Audience context
Commuters on fixed routes and schedules
Commuters in high-frequency, repeat exposure settings
Attention dynamics
High dwell time, habitual exposure
High dwell time + repeated daily exposure
Format types
Static posters, wraps, some digital screens
Digital screens inside vehicles and stations
Targeting capability
Minimal (location-based only)
Geo, time-of-day, route-based, commuter patterns
Measurement & data
Limited (reach estimates, OTS)
Impressions + mobility data + route-level analytics
Flexibility & optimization
Low (fixed placements, long lead times)
High (real-time scheduling tied to transit flows)
Role in media mix
Awareness and brand presence
Bridge between offline reach and data-driven planning
Best use cases
Mass awareness, city-wide visibility
Reaching commuters with contextual, repeat messaging
Inventory quality lens
What you verify
Evidence you request
Environment and context
The location and venue match the brief
Screen/venue list with IDs, address-level detail, venue taxonomy
Screen and presentation
Ads are visible, rendered correctly, and played as planned
Screen specs, loop/share-of-voice details, creative QA confirmation
Data and accountability
Delivery and audience reporting is defensible
Proof-of-play logs, impression methodology summary, reconciliation rules
Fig. What inventory quality means in practice.
Environment type
Viewer mode
Best job for the ads
Transit hubs
High repetition, mixed dwell
Fast awareness + brand cues, directional prompts
Retail/in-store
Purchase-minded, short dwell
Offers, price anchors, “find it here” prompts
Office/residential lobbies
Predictable patterns
Frequency building, reminders, launches
Roadside large-format
High speed, low dwell
One idea, bold brand, minimal text
Fig. Environment-to-message fit.
Validation check
Why it matters
What ‘good’ looks like
Loop length / share of voice
Determines true frequency
Clear loop length, share stated, consistent delivery
Creative rendering
Prevents cut-off or unreadable ads
QA proof on the actual screen spec
Uptime / downtime handling
Avoids invisible impressions
Downtime logs + makegood rules stated upfront
Proof of play
Turns delivery into an auditable receipt
Timestamped logs at screen level
Fig. Screen and playback validation checklist.
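Loop length and share of voice translate directly into expected plays per dwell, which is why the checklist above insists both be stated. A minimal sketch of the arithmetic, with illustrative values:

```python
def expected_plays(dwell_seconds, loop_seconds, slots_owned, slots_in_loop):
    """Expected number of times one viewer sees the ad during a dwell."""
    loops_seen = dwell_seconds / loop_seconds
    share_of_voice = slots_owned / slots_in_loop
    return loops_seen * share_of_voice

# 10-minute platform dwell, 120-second loop, 1 of 8 slots owned
print(expected_plays(600, 120, 1, 8))  # 0.625 plays per visit
```

A vendor quoting "your ad plays every loop" while selling 1 of 8 slots in a 2-minute loop implies well under one exposure per short dwell — exactly the kind of claim proof-of-play logs let you audit.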
Access model
Operational friction
Quality control strength
Best use case
Many direct buys
High
Varies by media owner
Flagship screens, bespoke activations
Multiple marketplaces
Medium–high
Inconsistent
Broad coverage when standards are strict
Consolidated programmatic access
Lower
More consistent
Scaled campaigns needing fast optimization
Curated deal paths
Lower
Strongest
Performance-first buying with guardrails
Fig. Access model scorecard.
Factor
Fragmented access
Consolidated access
Optimization speed
Slower due to multiple workflows
Faster with unified controls
Reporting clarity
Often inconsistent
More consistent and comparable
Verification coverage
Varies by partner
More standardized
Fee transparency
Harder to see
Easier to manage
Risk of duplicates
Higher
Lower
Fig. Fragmented vs consolidated access.
Standard
Minimum requirement
What to ask for
Transparent inventory
Screen-level disclosure
Screen list + venue IDs + taxonomy
Verified playback
Proof-of-play access
Logs, reconciliation rules, makegoods
Reliable audience data
Methodology clarity
Data source + update cadence + definitions
Scalable access
Clean supply paths
Fewer hops, consistent reporting surfaces
Proven outcomes
Measurement plan pre-launch
KPI framework + lift method + readout format
Fig. Standards decision-makers should demand.
What the viewer does during the game
What it usually signals
Best “in-sync” response
What to measure
Searches the brand/product name
Curiosity + active consideration
Search intercept + landing page built for game-night traffic
Brand search lift, CTR to key pages, engaged sessions
Opens social (TikTok/IG/X)
Wants context, commentary, proof
Short-form cutdowns + creator-style explainers + retargeting
View-through + saves/shares + site quality from social
Scans QR / clicks on-screen prompt
High intent, needs low friction
Simple offer, short path, pre-filled checkout where possible
Completion rate, drop-off step, incremental conversions
Checks reviews/Reddit/YouTube
Trust check
“Why this / why now” proof points + FAQs + UGC
Time on proof pages, repeat visits, assisted conversions
Fig. Second-screen intent map.
Signal
What it tells you
How to read it without fooling yourself
Best paired validation
Brand search lift
Demand created in the moment
Separate brand vs. product vs. competitor drift
Geo/time controls + historical baselines
On-site engagement quality
Whether clicks meant anything
Watch for “cheap traffic” spikes with no depth
Engaged sessions, key-path completion, new vs. returning
Retail/commerce signals
Intent that’s closer to revenue
Don’t treat ROAS as the only truth in a tentpole
Basket adds, store locator, coupon saves, CRM matches
Incremental outcomes
Proof the moment moved behavior
Requires holdouts or credible comparisons
Lift tests, matched markets, clean-room/aggregated studies
Fig. The 2026 scoreboard for the liquid layer.
What breaks in practice
Why it happens
TV-led fix
Fragmented messages across touchpoints
Channels optimize to their own incentives
Anchor one “master narrative” in TV, then modularize companions
Upper funnel feels weak or “soft”
Digital skews toward captured intent
Use TV to create shared context; let digital harvest and deepen
Attribution arguments stall decisions
Signal loss + walled garden reporting
Move to a layered measurement stack (delivery→response→outcomes→lift)
Frequency waste and fatigue
Ad-supported supply + limited unified frequency views
Use addressable/CTV pacing rules and rotation to control wear-out
Fig. Omnichannel breaks: symptoms and structural fixes.
Format
Best job in pharma
Optimize for
Typical risk
Linear TV
Fast national scale and credibility
Reach, cost-efficient scale
Household over-frequency
Connected TV
Incremental reach + sequencing
Incremental reach, completion, controlled frequency
Treating it like click-optimized digital video
Addressable TV
Frequency control + message rotation
Pacing, rotation, geo/access tailoring
Over-assuming “targeting” equals condition targeting
Fig. Video architecture roles by format.
Layer
What you look at
What it answers
Delivery truth
Reach, frequency, incremental reach
“Who did we actually reach, and how often?”
Behavioral response
Search lift, site engagement, branded queries
“Did exposure create observable intent?”
Outcomes
New scripts, refills, persistence proxies (where feasible)
“Did behavior translate into outcomes?”
Incrementality
Geo tests, holdouts, matched markets
“What happened because of TV (not with TV)?”
Fig. Privacy-first measurement stack.
Asset type
Purpose
Where it runs
Master narrative spot
Establish the category/brand frame
Linear + broad CTV
Modular variants
Emphasize different angles by proxy/region/stage
CTV + addressable rotations
“Handoff” assets
Answer questions TV cannot fit
Landing pages, FAQ, support content
Proof/support units
Reduce anxiety and friction
Site, search, HCP resources (as appropriate)
Fig. Creative system: master narrative + modules.
Dimension
Traditional geofencing
Addressable geofencing
Targeting unit
Devices in an area
Households/addresses (then devices linked to them)
Typical geometry
Radius around POI
Address-level polygons / property boundaries
Best for
Broad local awareness, conquesting at scale
High-intent, known audiences, sequencing, local-to-national consistency
Channel mix
Mostly mobile display
CTV + mobile + display + desktop/tablet, sometimes DOOH/audio
Waste risk
Higher (passersby, commuters, employees)
Lower (audience defined before delivery)
Measurement
Often directional (visits, clicks)
Stronger options (holdouts, matched sales, incrementality)
Fig. Traditional vs addressable geofencing.
Step
What you do
What you get
Define the addressable audience
Start with addresses (trade area, CRM/loyalty, modeled households) and apply suppressions
A “who we actually want” list, before media starts
Match & resolve identity
Link households to privacy-safe IDs/devices (CTV, mobile, desktop/tablet)
Reachable households across screens
Activate with rules
Set frequency, sequencing, exclusions, channel roles
A plan that behaves consistently (not channel-by-channel chaos)
Measure with guardrails
Define visits, windows, holdouts, and what “success” means
Results you can explain—and repeat
Fig. From input to activation.
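The four steps above can be sketched end to end — audience definition with suppressions, then a household-to-device match. All IDs and the device map are hypothetical:

```python
# Step 1: define the addressable audience, minus suppressions
trade_area = {"HH-101", "HH-102", "HH-103", "HH-104"}
suppress = {"HH-103"}  # e.g., employees or recent purchasers
audience = trade_area - suppress

# Step 2: resolve households to reachable devices across screens
household_devices = {
    "HH-101": ["ctv-1", "mobile-1"],
    "HH-102": ["mobile-2"],
    "HH-104": [],  # household with no matchable devices
}
reachable = {
    hh: devices for hh, devices in household_devices.items()
    if hh in audience and devices
}

# Steps 3-4 would attach frequency/sequencing rules and holdout
# assignments per household before any media runs.
print(sorted(reachable))  # ['HH-101', 'HH-102']
```

The key property the table emphasizes survives in even this toy version: the "who we actually want" list exists, with suppressions applied, before a single impression is bought.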
Industry
Best audience anchor
Best channel mix
Primary success metric
Retail / brick-and-mortar
Trade area households + lapsed buyers
CTV for attention → mobile/display for nudges
Incremental store visits or matched sales
Auto / dealerships
In-market modeled households + conquest segments
CTV + search retargeting + display
Dealer visits, leads, or booked test drives
Healthcare / pharma
Consent-forward segments + strict exclusions
CTV + contextual + controlled mobile
Lift-based outcomes (not individual inference)
Finance / real estate
Homeowner/renter segments + life-stage signals
CTV + display + site retargeting
Qualified leads, appointment starts
QSR / franchises
Near-location households + daypart intent
Mobile + CTV + DOOH overlays
Visit lift and offer redemption rate
Fig. Industry fit matrix.
Confidence tier
What you can responsibly claim
What you need in the setup
High
“We drove incremental lift”
Holdout/geo-holdout + tight visit rules
Medium
“We saw measurable movement consistent with impact”
Strong exposure logging + exclusions + stable baselines
Directional
“We observed signals worth testing further”
Clean reporting, but no control—use as learning only
Fig. Measurement confidence tiers.
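The "high confidence" tier rests on a holdout comparison, and the core lift arithmetic is simple. A sketch with illustrative numbers:

```python
def incremental_lift(exposed_conv, exposed_n, holdout_conv, holdout_n):
    """Relative lift of the exposed group over the holdout baseline."""
    exposed_rate = exposed_conv / exposed_n
    holdout_rate = holdout_conv / holdout_n
    return (exposed_rate - holdout_rate) / holdout_rate

# 2.4% exposed conversion rate vs 2.0% in the holdout
print(round(incremental_lift(240, 10_000, 100, 5_000), 2))  # 0.2 -> +20% lift
```

What separates the tiers is not this formula but the setup around it: without a genuine holdout, the baseline term is an assumption, and the claim has to drop to "directional."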
Method
Strength
Weakness
Best use
DMA/ZIP targeting
Scale
Low intent
Awareness, broad local coverage
Radius geofencing
Easy proximity
High noise
Quick local pushes, conquesting at scale
Contextual/location context
Privacy-friendly
Less precise
Brand-safe reach, lightweight targeting
Interest-based
Scalable
Not place-tied
Prospecting where location is secondary
Addressable geofencing
Precision + measurement
Requires data + setup
High-intent acquisition, store lift, cross-device sequencing
Fig. Addressable geofencing vs other location-based targeting methods.
Device reach
Multi-DRM support (e.g., Widevine, FairPlay, PlayReady) dictates which devices and operating systems a platform can certify as secure playback environments. Broad DRM compatibility influences both audience scale and advertiser confidence because brands expect OTT DRM solutions to secure rich media across Smart TVs, mobile, and connected devices.
Regional expansion
Regulatory regimes vary by market. Europe’s GDPR, for example, imposes privacy constraints that intersect with measurement strategies, making DRM-enabled server-side control essential for compliant attribution and cross-border scaling.
Creative formats
The lack of deep client-side hooks in OTT and DRM streams compels planners to prioritize formats that perform well without invasive tracking—such as unskippable ads, interactive prompts tied to session-level events, and contextual creative aligned with content themes.
Inventory availability and premium demand
DRM adoption is often a gating factor for eligibility with premium advertisers. Buyers allocate larger budgets to secure, brand-safe inventory backed by robust OTT rights management and controlled playback environments because these reduce fraud risk, unauthorized ad removal, and piracy-related dilution of ad impact.
Context signal
What it tells you
Best used for
Topic & subtopic
What the content is broadly about
Scalable prospecting and relevance at scale
Entities
Specific brands, products, locations, people mentioned
Competitive conquesting, category adjacency, precision relevance
Sentiment
Whether the content is positive, neutral, or negative
Brand protection and “right moment” messaging
Suitability tier
How risky the environment is for your brand
Controlling adjacency, tightening quality without killing reach
Fig. Contextual signals and what they tell you.
Pipeline stage
Typical inputs
What to QA before scaling
Analyze content
Page text, headline, metadata, transcript/CC
Misclassification, keyword traps, missing context
Assign label
Taxonomy + sentiment + suitability rules
Segment definitions, exclusions, tier thresholds
Activate
Deals, pre-bid filters, bidder inputs, PMPs
Coverage, overlap, supply concentration
Measure
Context-tier reporting + lift/holdouts
Incrementality, variance, creative-context mismatch
Fig. Contextual advertising workflow steps.
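The analyze → label → activate → measure pipeline above can be sketched with a toy keyword classifier. The taxonomy, blocklist, and tier names are all hypothetical; production systems use NLP models and standardized suitability frameworks:

```python
TAXONOMY = {
    "finance": {"mortgage", "interest", "loan"},
    "travel": {"flight", "hotel", "itinerary"},
}
BLOCKLIST = {"crash", "lawsuit"}  # illustrative suitability terms

def classify(page_text):
    """Assign a topic label and a suitability tier to a page."""
    words = set(page_text.lower().split())
    topic = max(TAXONOMY, key=lambda t: len(TAXONOMY[t] & words))
    if not any(TAXONOMY[t] & words for t in TAXONOMY):
        topic = "unknown"
    tier = "high_risk" if words & BLOCKLIST else "suitable"
    return topic, tier

print(classify("compare mortgage interest rates for your first loan"))
# ('finance', 'suitable')
```

Even this toy version shows the QA traps the table warns about: bare keyword matching misclassifies pages where a blocked word appears in a harmless sense, which is why misclassification and keyword traps get checked before scaling.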
Environment
Why contextual fits
What to watch
Research-driven content
Clear problem/solution intent
Broad categories hiding mixed intent
Premium editorial
Strong signals + credibility transfer
Suitability rules that over-block news
Streaming/CTV
Identity is partial; context is dependable
Metadata quality, limited transparency
In-app/in-game
Scales without IDs in many cases
“App category” being treated as a full context
Fig. Where contextual works best.
Type
How the context is determined
Best-fit objective
Search-based contextual ads
Query intent + SERP context
Capture demand when intent is explicit
Display contextual ads
Page/app classification + suitability
Scalable relevance and prospecting
Native contextual advertising
Surrounding content + format alignment
Mid-funnel engagement and consideration
Video/CTV contextual advertising
Metadata/transcripts/genre + suitability
Reach with brand control in streaming environments
Fig. Contextual advertising types at a glance.
Dimension
Contextual targeting
Audience-based targeting
What it means in practice
Primary signal
Content environment (topic, sentiment, suitability)
User identity/behavior (IDs, segments, lookalikes)
Context is resilient when IDs degrade
Privacy exposure
Lower (less dependence on personal data)
Higher (depends on consent, data sharing, policy shifts)
Legal and operational load can rise quickly
Reach limitations
Inventory in chosen contexts
Match rates + segment availability
Audience reach can collapse if IDs disappear
Control over environment
High (you define where you show up)
Variable
Context is a governance lever
Best-fit objectives
Relevance, brand alignment, intent moments
Retargeting, lifecycle, loyalty
Use identity where you can defend it
Fig. Differences between contextual and audience-based targeting.
Capability
Tizen OS (Samsung)
Google TV (Android TV OS)
Why it matters to advertisers
UI entry point
Samsung-led home screen and services (Smart Hub / One UI Tizen)
Content-first interface layer that sits on Android TV OS
Home screen design shapes discovery and premium placements (e.g., sponsored tiles vs Masthead)
Cross-app discovery
Strong, but Samsung-centered (content curated in Samsung’s environment)
Built for cross-app aggregation and watchlist-style discovery
Recommendation rails influence organic viewing, app selection, and ad adjacency
App ecosystem
Samsung TV apps via Samsung’s Smart Hub/app environment; availability can vary by region
Google Play Store for Android TV; broad “apps & games” ecosystem
App coverage changes reachable audiences and the mix of streaming vs FAST viewing
Voice + smart home
SmartThings integration (TV as an IoT hub/control point)
Google Assistant on TV devices; broader Google home ecosystem
Impacts household identity continuity and cross-device behavior
Performance consistency
More predictable within Samsung’s lineup (single OEM controlling stack)
Varies by OEM hardware and update approach across many brands
Performance affects session length, churn, and how often viewers stay on OS-native surfaces
Data + ads linkage
Samsung Ads + TV usage signals (ACR-based viewership insights/targeting)
Google identity + Google Ads/YouTube extensions via Google TV network
Determines targeting options, measurement paths, and retargeting feasibility
Fig. Tizen smart TV vs Google TV.
Discovery lever
Tizen OS (Samsung)
Google TV (Android TV OS)
Default orientation
Samsung services + partner rails
Content-first aggregation across apps
Personalization anchor
Samsung account/services behavior (varies by setup)
Google account signals when signed in
What to watch next
Often steered by Samsung hubs + promoted services
Strong emphasis on unified recommendations + watchlist
Fig. Discovery logic, Tizen vs Android: how each OS organizes attention.
App category
What to verify
Why it affects delivery
Top streamers
Availability by country, model year, and OS version
Missing “must-haves” shifts viewing time and reachable audiences.
Regional broadcasters
Local OTT/news apps, sports rights apps
Regional gaps can cause underdelivery against geo targets.
FAST services
Whether FAST is OS-integrated or just an app
Integration influences time spent in ad-supported environments.
Niche subscriptions
Fitness, kids, international programming
Audiences can concentrate here—small gaps become big reach gaps.
Fig. App ecosystem due diligence by market.
Decision you’re making
Tizen / Samsung-led path
Google TV / Google-led path
What to ask your partner
Where the impression runs
OEM home screen + Samsung-owned environments
Google TV network eligible surfaces
“Show me the exact placement taxonomy.”
Identity backbone
Device + OEM account (plus ACR-style signals where enabled)
Google account ecosystem (where enabled)
“What happens when users aren’t signed in?”
Measurement reality
OEM reporting + lift/incrementality options
Google reporting + YouTube/Google stack alignment
“How do you model when signals are missing?”
Optimization loop
OEM-centric (surface + device usage)
Google-centric (campaign + audience logic)
“What levers are actually controllable?”
Fig. Buying and measurement map (what you’re really choosing).
Topic
Google TV
Samsung/Tizen
Buyer implication
Ad personalization controls
Account-based controls (e.g., My Ad Center)
TV-level settings vary by model/region
Opt-outs can shrink deterministic audiences.
Signed-in dependency
Many features improve when signed in
Samsung account strengthens continuity
Treat sign-in rate as a planning variable.
Consent + regulation pressure
Platform policies + regional rules
Device-level tracking scrutiny rising
Your measurement plan must survive signal loss.
Fig. Privacy controls quick reference.
2026 shift
Where it breaks first in real execution
What to put in place
Cure periods sunset / no-cure states expand
Tag drift, vendor toggles, “we’ll fix it next sprint”
Pre-launch privacy QA + post-launch monitoring (alerts for rule changes)
Universal opt-out signals become operational
Opt-out honored on-site but dropped downstream
End-to-end GPC/opt-out propagation tests + loggable proof
“Sensitive” categories expand and sharpen
Location-heavy tactics, cross-device graphs, modeled segments
High-scrutiny workflow for sensitive inputs + clear vendor boundaries
Fragmentation gets worse before it gets better
Separate IDs, inconsistent reporting, measurement gaps
Unified measurement plan + clear assumptions per channel
Fig. What changed, where it breaks, what to do.
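The "opt-out honored on-site but dropped downstream" failure is testable in code. Below is a toy propagation check; the hop functions and field names (`site_layer`, `tag_layer`, `vendor_payload`) are hypothetical stand-ins for your real stack—an actual test would replay a tagged request through production layers and inspect logged payloads.

```python
# Toy end-to-end check for opt-out propagation: assert the signal
# survives each hop. All layer names and fields here are hypothetical.

def site_layer(request):
    request["gpc"] = True  # user's browser sent a Global Privacy Control signal
    return request

def tag_layer(request):
    # Correct behavior: copy the opt-out signal into what vendors receive
    return {"vendor_payload": {"opt_out": request.get("gpc", False)}}

def audit_propagation(request):
    """Return True if the opt-out signal survives site -> tag -> vendor."""
    downstream = tag_layer(site_layer(request))
    return downstream["vendor_payload"]["opt_out"] is True

print(audit_propagation({}))  # True — a hop that drops the flag fails here
```

The point is loggable proof: each run produces an artifact showing the flag entered the chain and exited it intact, which is the evidence regulators and auditors increasingly expect.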
CTV/OTT touchpoint
What data is often involved
Fast “risk check”
Household/IP-based targeting
IP/household identifiers, inferred household attributes
Can you explain how opt-out changes targeting and measurement?
Cross-device/identity matching
Device graphs, probabilistic signals, partner IDs
What’s shared vs. only used internally—and how is it audited?
Location-influenced segments
Precise/approx location signals, venue proximity
Is “precise” location present anywhere in the chain?
Foot-traffic / offline attribution
Exposure matching + location visit logic
What’s consented, aggregated, and retained—and for how long?
Fig. OTT/CTV privacy hotspots.
Ask this
Good answer sounds like
Red flag
“Show how opt-out is enforced downstream.”
Clear propagation steps + logs + test method
“We comply” with no mechanics
“What’s your definition of sale/share in practice?”
Specific data flows + subprocessor clarity
“Depends” without documentation
“How do you handle drift?”
Monitoring + change management + alerts
“We review quarterly”
“What proof can you provide today?”
Repeatable tests + artifacts you can keep
“Trust us”
Fig. Vendor proof checklist.
Sector
Example Uses
Recent Evidence
Digital marketing & retail
Targeted ads, competitor targeting
Advertising guides and stats on walk-in attribution
Logistics & transportation
Fleet tracking, asset monitoring
Industry market reports
Government & elections
Election vehicle tracking, polling station mapping
Deployment in Mumbai and Palamu
Public safety & smart cities
Traffic control, alerts
Smart city research
Security & compliance
Restricted area alerts
Safety tech documentation
Consumer IoT
Home automation, pet monitoring
Definitions and use cases
Specialized (drones, events)
No-fly enforcement, event messaging
General industry descriptions
Key Benefit
What It Delivers
Real-World Impact / Use Case
High-precision targeting
Targets users only when they enter a specific physical area (store, venue, block, competitor location)
Reduces wasted impressions and improves relevance compared to ZIP-level or demographic targeting
Intent-driven engagement
Reaches users when physical proximity signals intent (shopping, attending an event, visiting a competitor)
Higher engagement rates due to real-time, context-aware messaging
Foot traffic attribution
Connects ad exposure inside a geofence to actual store or venue visits
Enables measurable lift analysis and clearer ROI between digital ads and offline conversions
Event engagement
Triggers messages inside venues (stadiums, conferences, festivals) based on zones or entry points
Improves attendee navigation, sponsor visibility, and on-site interaction
Competitive conquesting
Targets users visiting competitor locations with alternative offers
Influences decision-making at the moment of competitive consideration
Local conversion lift
Delivers timely offers when users are close enough to act immediately
Increases walk-ins, redemptions, and short-path conversions
Operational insights
Collects data on visits, dwell time, and repeat location behavior
Supports campaign optimization, audience segmentation, and budget allocation
Improved ROI efficiency
Focuses spend on high-intent, location-qualified audiences
Lowers cost per visit and improves overall campaign efficiency
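The geofence trigger behind most of the benefits above boils down to a distance check. Here is a minimal sketch using the haversine great-circle formula; the store coordinates and 200 m fence radius are made-up values for illustration.

```python
# Minimal geofence check: is a reported location inside a circular fence?
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(point, center, radius_m):
    """True if the reported point falls inside the circular fence."""
    return haversine_m(point[0], point[1], center[0], center[1]) <= radius_m

store = (40.7580, -73.9855)             # hypothetical store location
nearby = (40.7585, -73.9850)            # roughly 70 m away
print(in_geofence(nearby, store, 200))  # True -> trigger the creative
```

In production this check runs on the device or in the ad server against consented location signals; the radius is a tuning lever—too tight and you miss intent, too wide and you dilute relevance.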
Model type
How credit is assigned
Built-in bias
Best used for
Last-click
100% to final tracked touch
Over-rewards closers (brand search, retargeting)
Quick “what closed it?” diagnostics
First-click
100% to first tracked touch
Over-rewards discovery, ignores nurture
Awareness/discovery mapping
Multi-touch (rule-based)
Fractional credit across touches via a fixed rule
Inherits your assumptions
A readable baseline you can explain
Multi-touch (data-driven)
Credit based on observed contribution patterns
Becomes a black box if inputs are messy
Adaptive weighting at scale (when volume is strong)
Fig. Single-touch vs multi-touch.
Pipeline step
What you need
Fast QA question
Collect touchpoints
Platform logs + site/app + CRM events
“Do we consistently capture impressions/clicks/events by channel?”
Resolve identity
First-party IDs; optional modeled stitching
“What % of conversions can we tie to a durable identifier?”
Build conversion paths
Lookback window + dedupe rules
“Are we double-counting the same conversion across systems?”
Apply model
Chosen weighting logic
“Would a different model flip our budget conclusion?”
Output decisions
Channel/campaign/creative cuts & tests
“Can a human explain the ‘why’ in one minute?”
Fig. MTA pipeline and QA checks.
Model
Credit rule
Best used when
Watch-outs
Linear
Equal share to every touch
You want a clean baseline
Treats low-intent and high-intent touches the same
Time-decay
More weight closer to conversion
Shorter cycles; recency matters
Can quietly reward retargeting and saturation
U-shaped
Extra weight to first + last
Clear intro + close steps
Middle touches can get under-valued
W-shaped
Weight to first + milestone + last
B2B stages are reliable
Milestone quality decides model quality
Data-driven
Weights learned from patterns
Enough volume + stable tagging
Biased inputs → biased outputs; trust drops if unauditable
Fig. Model cheat sheet.
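The rule-based models in the cheat sheet are just weighting functions over a touchpoint path. A sketch of three of them follows; the 40/20/40 U-shape split and the 7-day half-life are common conventions, not fixed standards.

```python
# Rule-based attribution credit: each function returns weights summing to 1.

def linear(n):
    """Equal share to every touch."""
    return [1 / n] * n

def time_decay(days_before_conv, half_life=7.0):
    """More credit to touches closer to conversion (exponential decay)."""
    raw = [0.5 ** (d / half_life) for d in days_before_conv]
    total = sum(raw)
    return [w / total for w in raw]

def u_shaped(n, ends=0.4):
    """40% each to first and last touch; remainder split across the middle."""
    if n == 1:
        return [1.0]
    if n == 2:
        return [0.5, 0.5]
    mid = (1 - 2 * ends) / (n - 2)
    return [ends] + [mid] * (n - 2) + [ends]

touches = ["display", "email", "search"]            # hypothetical 3-touch path
print([round(w, 2) for w in u_shaped(len(touches))])  # [0.4, 0.2, 0.4]
print([round(w, 2) for w in time_decay([10, 3, 0])])  # last touch heaviest
```

Running both models over the same path is a cheap version of the "would a different model flip our budget conclusion?" QA question from the pipeline table.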
Method
What it answers best
Best time horizon
What to pair it with
MTA
“What worked inside the journey?”
Weekly/in-flight
Holdouts + sanity checks
MMM
“What drove outcomes over time?”
Quarterly planning
MTA for tactical steering
Incrementality tests
“Did it cause lift?”
Test windows
MTA/MMM to target what to test
Fig. MTA vs MMM vs incrementality.
Feature
Traditional Personalization
Hyper-Personalization
Data Sources
Limited to CRM, past purchases, demographics
Real-time behavioral, context, device, location
Execution Speed
Campaign-level, scheduled updates
Real-time, instantaneous adaptation
Personalization Scope
Static segments
Individual-level tailored experiences
Technology
Rule-based systems
AI-driven, machine learning, predictive analytics
Impact on CX & Performance
Moderate relevance and engagement
Higher relevance, engagement, and ROI potential
Industry
Primary Use Cases
Key Data Signals Used
Business Impact
Retail & E-commerce
Dynamic product recommendations, personalized promotions, adaptive pricing
Browsing behavior, purchase history, cart activity, location, device
Higher conversion rates, increased average order value, improved repeat purchases
Financial Services
Personalized product offers, contextual financial advice, predictive cross-sell
Transaction data, spending patterns, lifecycle stage, risk indicators
Increased product adoption, stronger customer trust, higher engagement
Travel & Hospitality
Tailored destination offers, loyalty personalization, dynamic upsell
Search behavior, travel history, booking intent, seasonality
Higher booking conversion, increased ancillary revenue, improved guest satisfaction
Healthcare
Personalized patient communication, care reminders, content personalization
Appointment history, treatment data, engagement signals (privacy-compliant)
Improved adherence, better patient experience, reduced no-shows
Telecommunications
Churn prevention, plan recommendations, usage-based offers
Network usage, device data, contract status, engagement trends
Reduced churn, higher retention, more efficient upsell
Media & Entertainment
Content recommendations, personalized ad delivery, audience segmentation
Viewing behavior, content preferences, time-of-day, device type
Higher engagement, longer session duration, improved ad performance
Advertising & Martech
Real-time audience activation, dynamic creative optimization, media personalization
First-party data, behavioral signals, contextual data, performance feedback
Higher ROAS, reduced wasted spend, more efficient media utilization
Aspect
IPTV
OTT
Network control
Operates on closed, managed IP networks controlled by telecoms or private operators, allowing traffic prioritization and enforced service levels
Delivered over the open public internet, with no control over last-mile networks; optimization is software-based rather than network-based
Infrastructure
Requires dedicated operator infrastructure (headend, managed backbone, middleware, often set-top boxes), resulting in higher fixed and operational costs
Built on cloud platforms and third-party CDNs, enabling faster deployment, lower upfront investment, and modular scaling
Latency and QoS
Consistently low latency and predictable quality of service, especially for live and linear TV, due to network-level prioritization
Variable latency and QoS depending on geography, congestion, and device; adaptive bitrate mitigates but does not eliminate variability
Device dependency
Typically restricted to operator-approved devices (set-top boxes or controlled apps), simplifying QA but limiting flexibility
Device-agnostic by design, supporting smart TVs, mobile, desktop, streaming devices, and consoles without operator hardware
Geographic scalability
Expansion limited by network footprint, regulatory constraints, and capital investment, making cross-border scaling slower
Highly scalable across regions, with rapid market entry enabled by CDN coverage and app-store distribution
IPTV
OTT
Managed by telecom or ISP operators, often tied to existing customer accounts
Direct-to-consumer billing via apps, platforms, or app stores
Commonly bundled with broadband, mobile, or voice services
Typically standalone, with optional bundles or add-ons
High predictability and lower churn due to contracts and bundled services
Higher churn risk, requiring continuous acquisition and retention efforts
Limited pricing experimentation, slower plan iteration
Highly flexible pricing, including tiers, trials, promotions, and hybrids
Slower, dependent on operator rollout and infrastructure
Rapid launch and iteration, often globally from day one
IPTV
OTT
Typically network- or headend-based, less flexible
Server-side or app-based ad insertion, enabling real-time decisioning
Slower optimization cycles, limited cross-device measurement
Real-time optimization and granular measurement, aligned with CTV buying
Limited and operator-dependent
Built for programmatic and performance-driven advertising
IPTV
OTT
Rights limited by network footprint and carriage agreements
Multi-device and multi-region rights frameworks
Access controlled via closed networks and approved devices
Access enforced through DRM and application-level controls
Geographic expansion constrained by infrastructure and regulation
Rapid international expansion via licensing and CDNs
Strong suitability for exclusive carriage and live broadcast rights
Flexible syndication and platform partnerships
Primarily subscription-led monetization
Multiple monetization paths including ads and hybrids
Hybrid IPTV/OTT platforms
Many broadcasters, telecoms, and pay-TV operators now deploy hybrid platforms that combine IPTV’s managed delivery with OTT’s application-based flexibility. Linear channels and premium live content may still be delivered via IPTV to guarantee quality and reliability, while on-demand libraries, catch-up TV, and ad-supported content are delivered via OTT apps over the public internet.
This hybrid approach allows operators to protect core subscription revenue while expanding reach and monetization options. From a platform strategy perspective, it enables controlled inventory management on the IPTV side and scalable, programmatic monetization on the OTT side—bridging closed and open environments without forcing a single delivery model.
Smart TVs and app ecosystems
Smart TVs have accelerated convergence by becoming the unifying interface for both IPTV and OTT consumption. Operator-provided IPTV services now coexist alongside OTT apps within the same home screen, blurring the distinction for viewers even when the underlying delivery mechanisms differ.
For content owners and advertisers, this means delivery model matters less to the user than the experience itself, while backend decisions still determine monetization, measurement, and control. App ecosystems enable faster feature deployment, richer personalization, and tighter integration with advertising technologies—capabilities historically associated with OTT but increasingly layered onto IPTV-driven services.
The role of CTV
Connected TV sits at the center of IPTV–OTT convergence. CTV environments aggregate linear IPTV feeds, broadcaster apps, FAST channels, and global streaming platforms into a single viewing context. As a result, advertising strategies are shifting from delivery-model-centric to screen-centric, focusing on reach, addressability, and outcomes rather than whether content is technically IPTV or OTT.
Aspect
CTV Advertising
CTV Media Buying
Focus
Strategic planning and creative execution
Operational execution and inventory management
Primary responsibility
Marketing directors, brand managers, strategists
Media buyers, programmatic traders, campaign managers
Key activities
Audience research, creative development, campaign positioning
Platform selection, deal negotiation, bid management, pacing control
Success metrics
Brand lift, awareness, consideration, overall ROAS
CPM efficiency, reach vs frequency balance, supply path cost, viewability
Decision-making
What message to communicate and to whom
Where to buy inventory and at what cost
Timeframe
Long-term campaign planning cycles
Day-to-day optimization and real-time adjustments
Fig. CTV media buying vs CTV advertising.
Signal to check
Why it matters
What to do next
App-level transparency
If you can’t see the app, you can’t validate quality
Require app reporting in PMPs; downgrade or exclude opaque supply
Seller identity (direct vs reseller)
Reselling can add fees and increase spoofing risk
Prefer direct sellers; build seller allowlists
Deal ID coverage
“CTV” can mean wildly different packages
Map each deal ID to explicit apps/publishers and rules
Verification compatibility
Some CTV environments support stronger signals than others
Confirm IAS/DV/Pixalate support and what they can actually measure
Fig. Supply path quick audit.
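The "direct vs reseller" check lends itself to automation. Here is a minimal sketch over ads.txt-style records—the three required fields per the IAB spec are exchange domain, seller account ID, and DIRECT/RESELLER—with made-up sample entries.

```python
# Parse ads.txt-style records and surface DIRECT sellers for an allowlist.
# Sample entries below are fictional.

def parse_ads_txt(text):
    """Parse ads.txt lines into (exchange domain, account id, relationship)."""
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            records.append((fields[0].lower(), fields[1], fields[2].upper()))
    return records

def direct_sellers(records):
    """Exchanges authorized as DIRECT — candidates for a seller allowlist."""
    return {domain for domain, _, rel in records if rel == "DIRECT"}

sample = """
exchange-a.com, 12345, DIRECT, abc123  # hypothetical entries
exchange-b.com, 67890, RESELLER
"""
print(direct_sellers(parse_ads_txt(sample)))  # {'exchange-a.com'}
```

Run this against each publisher in a deal, diff the output against the sellers your DSP reports, and anything sold outside the DIRECT set becomes a candidate for downgrade or exclusion.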
Deal type
Best for
Tradeoff to accept
Open auction
Fast testing, flexible scaling
Higher variance in transparency and quality
PMP
Quality controls, negotiated access
Not guaranteed; floors can raise costs
Preferred deal
Consistency without full commitment
Less flexibility than auction; still not guaranteed
Programmatic guaranteed / direct IO
Predictable delivery, premium adjacency
Less agility; commitments can reduce learning
Fig. Deal types selection cheat sheet.
Question you need answered
Best-fit approach
What it won’t tell you
“Did the ad run in a quality environment?”
Transparency + verification
Whether it changed outcomes
“Did people receive the message?”
Completion + brand lift surveys
Incremental sales or visits
“Did it drive incremental impact?”
Holdouts / geo lift / experiments
Creative diagnostics at a granular level
“How does CTV interact with other channels?”
MMM + cross-channel attribution
Individual-level causality without testing
Fig. Measurement methods by the question you’re trying to answer.
What you see
Likely cause
Fix
High frequency, low reach
Targeting too tight or supply too narrow
Expand supply, relax targeting, set distribution goals
Under-delivery
Deal inventory too constrained
Add backup supply paths; renegotiate floors/terms
“Great” completion, weak results
Measurement misfit or creative wear-out
Add lift testing; refresh creative; validate reach quality
Opaque reporting
Buying through bundles/resellers
Require app transparency; use allowlists; renegotiate deal terms
Fig. Troubleshooting map for common CTV buying problems.
Tactic
What makes it attractive
What it usually depends on
Why it gets risky fast
What to switch to first
Geo-fencing at venue level
Tight relevance, simple story
Precise device location + high-frequency signals
Often triggers “precise geolocation” handling and complex vendor chains
Broader geo + contextual placement + local creative
Visit-based attribution
“Proof” of impact
Device IDs + location observation
Makes “location” feel like identity, not market
Incrementality tests + market-level lift
Foot-traffic reporting
Easy to sell internally
Location feeds stitched across partners
Hard to validate sourcing, consent, and downstream transfers
KPI basket + modeled lift + store-level outcomes
“Visited X” audience segments
Retargeting power
Location history over time
Creates sensitive inferences + retention risk
Intent/context segments + first-party audiences
Fig. Quick risk map for common tactics.
What you’re doing
What it looks like in-market
Data typically required
MODPA exposure (practical)
Safer way to keep the intent
Targeting
DMA/ZIP/city targeting, local publisher packages, radius around a market
Broad geography, contextual signals
Often lower—if you avoid precise device location
Lean into contextual/local supply + broader geo
Measurement
Store lift, trade-area analysis, incrementality testing
Market-level outcomes, test/control design
Medium—depends on methodology
Geo holdouts, modeled lift, blended truth sets
Reporting
“Foot traffic,” visitation dashboards, path-to-store stories
Frequently device-level location + IDs
Often highest—common to rely on precise location
Aggregate market insights + privacy-first measurement
Fig. “Location-based” isn’t one thing.
What you need to know
A strong answer sounds like
Red flags
What to do next
Data sourcing
Clear sources, documented permissions, limited partners
“Proprietary,” “trusted partners,” no documentation
Require a written data-flow summary
Precision
Explicitly distinguishes broad geo vs precise geolocation
Won’t define “precise,” won’t share thresholds
Re-scope to broader geo/contextual
Sensitive-data handling
Treats precise location as sensitive, strict controls
“We don’t think it applies to us”
Pause device-level tactics
Sale/transfer
Clear position on sale/sharing, contract language aligns
Vague licensing language, unclear downstream use
Legal/compliance review
Privacy signals
Can explain how they honor preference signals (e.g., GPP)
“We’re working on it”
Make support a requirement, not a nice-to-have
Retention/deletion
Specific retention window + deletion process
“We retain for analytics” (no timeframe)
Set retention limits in contract
Fig. Vendor diligence scorecard.
Layer
What it includes
Output you can use
Identity signals
Logins, hashed email/phone, MAIDs (where permitted), IP/household hints, contextual patterns
A set of input “keys” that may or may not be durable by environment
Graph logic
Deterministic joins + probabilistic models + household clustering
A person/household view with confidence levels
Activation + measurement
DSP activation, suppression/sequencing rules, clean rooms, measurement partners
Reach/frequency control + multi-device reporting (directional unless validated)
Fig. How cross-device targeting actually works.
Approach
Typical identifiers
Best used for
Main tradeoff
Deterministic
Logged-in IDs, hashed emails/phones, first-party IDs, MAIDs
Sequencing, suppression, first-party outcome measurement
High confidence, limited scale
Probabilistic
Modeled device/behavior/context patterns, IP-based inference (with caveats)
Reach extension, directional analysis
More scale, more uncertainty
Household graph
Shared household/account/device clusters
CTV planning, dedupe, frequency management
Great for households, weaker for person-level precision
Fig. Comparing deterministic, probabilistic, and household-based targeting.
Environment
What’s reliable now
What’s fragile
What to plan for
Logged-in apps/CTV
Platform/account IDs, first-party signals
Data portability across partners
Walled-garden reporting constraints + household logic
Open web browsers
First-party relationships, contextual signals
Cross-site cookies/IDs
Treat ID coverage as uneven; keep a strong contextual plan
Measurement workflows
Clean-room matching, modeled lift, first-party outcomes
One-size-fits-all attribution
Directional models unless validated via holdouts/calibration
Fig. What cross-device targeting looks like in a cookieless environment.
Factor
Single-device targeting
Cross-device targeting
Audience definition
Cookie/device ID-based
Person/household-based (graph)
Frequency control
Per device → often inflated
Deduplicated across devices
Sequencing
Hard to coordinate
Designed for sequential messaging
Measurement
Higher duplication, last-touch bias
Better dedupe; still needs incrementality
Scale
Can be broad but unstable
Depends on identity signals + partners
Privacy durability
Often weaker (third-party dependency)
Stronger when based on consented first-party signals
Best for
Simple retargeting, device-specific apps
Omnichannel reach, CTV coordination, controlled exposure
Fig. Cross-device targeting vs single-device targeting.
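The frequency row in the table is easy to demonstrate: per-device counts understate household exposure until devices are grouped through an identity graph. The device-to-household map below is a toy example.

```python
# Per-device vs deduplicated household frequency. The graph is made up.
from collections import Counter

DEVICE_TO_HOUSEHOLD = {   # a toy identity graph
    "phone-1": "hh-A",
    "ctv-1": "hh-A",
    "laptop-1": "hh-A",
    "phone-2": "hh-B",
}

impressions = ["phone-1", "ctv-1", "phone-1", "laptop-1", "phone-2"]

per_device = Counter(impressions)
per_household = Counter(DEVICE_TO_HOUSEHOLD[d] for d in impressions)

print(dict(per_device))     # phone-1 alone looks like frequency 2...
print(dict(per_household))  # ...but household hh-A actually saw 4 exposures
```

This is why device-level frequency caps of "3" can quietly become a household frequency of 9+ across phone, laptop, and CTV—and why dedupe belongs in planning, not just reporting.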
Coordination pattern
What it does
When it’s the best choice
KPI to watch
Suppression
Reduces duplicate exposure across devices
You’re seeing frequency complaints or wasted retargeting
Deduped reach + frequency-to-convert curve
Sequencing
Controls message order across screens
You have distinct creative stages (story → proof → offer)
Completion rate + assisted conversions
Outcome assist
Uses digital devices to capture actions CTV triggers
CTV drives demand but doesn’t close
Lift in site/CRM outcomes vs control
Fig. Common cross-device coordination patterns across CTV and digital.
Failure mode
How it shows up
Practical fix
Over-crediting
CTV “wins” every conversion inside long windows
Shorten windows, exclude retargeting, add holdouts
Graph mismatch
Sequence collapses, suppression leaks
Apply confidence thresholds + test match quality by use case
Walled-garden opacity
Strong results, weak explainability
Plan separate measurement lanes + compare at objective level
Consent fragility
Sudden drop in addressability
Build fallback paths (contextual, cohort/household, modeled lift)
Fig. Where cross-device targeting breaks down—and how to fix it.
Environment
What you gain
What can go wrong
Buyer guardrails
In-app
Strong engagement formats (rewarded, interstitial, native), richer app context
App/SDK spoofing, incentivized traffic, murky placement visibility on low-quality supply
app-ads.txt checks, allowlists, strict IVT filtering, placement-level reporting
Mobile web
Familiar web workflows, easier URL-level inspection, simpler landing path
Cookie/consent constraints, domain spoofing, MFA-style inventory, viewability variance
ads.txt + sellers.json checks, domain allowlists, viewability thresholds, supply-path controls
Fig. In-app vs mobile web—what changes for buyers.
Deal type
How pricing works
Best for
Open auction
Auction CPM (dynamic)
Broad reach, testing, scalable performance once quality is controlled
Private marketplace (PMP)
Auction within a restricted seller set
Cleaner supply, more predictable placements, often stronger brand safety controls
Preferred deal
Fixed CPM, no guarantee of volume
You want first look at supply without committing to delivery
Programmatic guaranteed
Fixed CPM, guaranteed delivery
You need certainty (high-impact placements, tentpole moments, strict frequency planning)
Topic
Mobile programmatic
Desktop programmatic
Inventory environments
Heavy mix of in-app SDK supply + mobile web
Mostly web environments; app inventory is less central
User identification
Device identifiers may exist (AAID), but iOS tracking is permissioned; higher reliance on privacy-safe IDs, cohorts, contextual
Cookies and first-party IDs historically stronger; cookie constraints still apply but the tooling is more mature
Measurement and attribution
App installs/events need MMPs + OS attribution APIs; post-view is contentious; SKAN/AdAttributionKit constraints shape optimization
More stable web conversion paths (when tags and consent allow); clearer click-to-conversion journeys for many advertisers
Fraud risks
App spoofing, SDK spoofing, IVT, MFA-like patterns in apps, incentivized traffic, emulator farms
Bot traffic, domain spoofing, MFA sites, cookie stuffing; generally easier to inspect page-level signals
Fig. Mobile programmatic vs desktop programmatic.
Signal type
What it’s good for
Where it fits best
Risk note
First-party audiences
Highest intent + LTV control
Retargeting, suppression, value-based bidding
Needs clean consent + event hygiene
Device IDs (where available)
Frequency, app event optimization
Android UA + re-engagement
Fragile if users reset/limit ID access
Publisher/contextual
Scalable prospecting with fewer privacy headaches
Upper funnel + mid funnel
Requires strong taxonomy + placement controls
Modeled/cohort signals
Bridging gaps when IDs are limited
iOS prospecting + incrementality
Must validate with experiments, not just attribution
Fig. Targeting ladder for programmatic mobile advertising.
Goal
Primary measurement
What to validate with
App installs + early events
OS attribution (iOS) + MMP event logging
Creative tests + geo/audience holdouts
Mobile web conversions
Click-based tracking + consented analytics
Incrementality or matched-market tests
Re-engagement/retargeting
First-party cohorts + short windows
Holdout tests (retargeting is the usual over-credit zone)
Offline impact (stores, calls)
Lift studies (geo) + multi-signal triangulation
Control groups + seasonality checks
Fig. Measurement method chooser (use-case to method).
Channel
What “completion” usually reflects
What can distort it
Best paired with
CTV/OTT
Lean-back viewing; ad has a fair chance to play through
Transparency gaps, fraud/IVT, “TV off” waste
App transparency + fraud controls + frequency
Programmatic video
Player + placement behavior (autoplay, outstream, viewability)
Long-tail junk, poor measurability, low viewability
Viewability/measurability + supply controls
Social video
Scroll behavior; opening seconds dominate
Accidental plays, mixed completion definitions
Thumbstop rate + retention curve + CPCV
DOOH
Often modelled exposure + confirmed playout
Movement, sightlines, attention modelling assumptions
Playout verification + location/time context
Fig. What “completion” tends to mean by channel.
What you know
Use this formula
Why this is useful
Total spend + completed views
CPCV = Spend ÷ Completed views
The cleanest “what did we pay for full message delivery?” number.
CPM + completion rate
CPCV ≈ (CPM ÷ 1000) ÷ Completion rate
Helps you forecast CPCV before launch and diagnose whether pricing or completion is the issue.
CPV + completion rate
CPCV ≈ CPV ÷ Completion rate
Useful for social/online video buys where CPV is the primary buying unit.
Fig. CPCV calculation cheat sheet.
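The three formulas in the cheat sheet reduce to a few lines of arithmetic. A minimal sketch, where the dollar figures in the examples are hypothetical illustration values, not benchmarks:

```python
def cpcv_from_spend(spend: float, completed_views: int) -> float:
    """CPCV = Spend / Completed views."""
    return spend / completed_views

def cpcv_from_cpm(cpm: float, completion_rate: float) -> float:
    """CPCV ~= (CPM / 1000) / Completion rate."""
    return (cpm / 1000) / completion_rate

def cpcv_from_cpv(cpv: float, completion_rate: float) -> float:
    """CPCV ~= CPV / Completion rate."""
    return cpv / completion_rate

# Hypothetical: $5,000 spend buying 125,000 completed views
print(round(cpcv_from_spend(5000, 125_000), 4))  # 0.04
# Hypothetical: an $8 CPM with a 40% completion rate
print(round(cpcv_from_cpm(8.0, 0.40), 4))        # 0.02
```

The second helper is the useful pre-launch one: plug in a quoted CPM and an expected completion rate to forecast CPCV before spending anything.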
Metric
What you’re paying for
Best use
Classic mistake
CPV
A defined “view” threshold
Buying scalable attention starts
Treating a “view” as full message delivery
CPCV
Completion of the message
Storytelling, sequencing, message discipline
Optimizing CPCV while ignoring reach/quality
CPM
1,000 impressions
Planning reach & frequency
Assuming impressions = attention
CPA
A conversion event
Lower-funnel efficiency
Expecting video delivery metrics to replace CPA
Fig. CPCV vs CPV vs CPM vs CPA.
Platform
What counts as a “view”
Why it matters to CPCV
YouTube (Google Ads)
Typically 30 seconds (or full ad if &lt;30s); some formats use 10 seconds
CPV can be meaningful attention, but still not completion (Google Help)
Meta (Facebook/Instagram)
3 seconds (or nearly full length if shorter)
CPV often measures initial attention only (Facebook)
TikTok Ads
2-second video views (also tracks 6-second and 100% plays)
Fast swipe environments make “view” a very light threshold (TikTok For Business)
LinkedIn
2+ continuous seconds with 50% on screen (MRC-aligned)
View is closer to in-view attention, still not completion (LinkedIn)
Fig. Platform “view” thresholds (why CPV ≠ CPCV in practice).
Lever
What it usually improves
How to validate
Risk to watch
Creative opening + pacing
Completion rate
Compare CPCV + completion by creative ID
“Winner” is actually just shorter
Supply/inventory controls
Viewability + measurability + stable completion
CPCV by placement/app/site
Cheap completion from low-quality long tail
Audience fit
Attention quality
CPCV by segment + downstream KPIs
Over-targeting reduces scale
Frequency/sequencing
Efficiency over time
CPCV by frequency bucket
CPCV looks good because you spammed the same people
Fig. Optimization levers and what they change.
Native element
What it looks like on mobile
Why it affects performance
Layout match
Same card/tile structure as surrounding content
Reduces “this is an ad” friction and earns a longer glance
Interaction match
Swipe/tap patterns feel normal in the placement
Users don’t need to learn a new UI to engage
Context match
The topic/task aligns with what the user is doing
Relevance does the heavy lifting before persuasion
Clear disclosure
“Sponsored/Ad” is visible and legible
Trust improves follow-through and reduces low-quality clicks
Fig. What makes an ad ‘native’ on mobile.
Format
Best when the user is…
Primary KPI to judge it
In-feed
Scrolling casually
Engaged sessions + CTR (not CTR alone)
Story-native video
Consuming vertical video quickly
Completion rate + swipe-through/conversion
In-map
Looking for where to go next
Store visit proxy / direction clicks / lead actions
In-game native
Playing and open to value exchange
Post-view engagement + downstream conversion
Commerce-native
Browsing products
Add-to-cart / purchase conversion rate
Fig. Format selection cheat sheet.
Benefit claim
What to measure
What confirms it’s real
Higher engagement
CTR + engaged sessions + scroll depth
Clicks that stay and move deeper
Stronger conversion
CVR + CPA/ROAS
Conversions that hold under scaling
Reduced fatigue
Frequency vs CTR/CVR trend
Performance that doesn’t collapse at higher reach
Ad-blocker resilience
Delivery stability + match rate
Less volatility than comparable web display buys
Fig. Benefits that are measurable.
Factor
Mobile native in apps
Mobile native on the web
User mindset
Habitual, session-based
Intent bursts, task-based
Best-performing native formats
In-feed, stories, interactive, in-app commerce
In-article, in-feed publisher tiles, recommendation modules
Click behavior
More taps, more in-platform actions
More “bounce risk” if landing is slow
Measurement constraints
Platform rules, privacy frameworks
Cookie limits, browser privacy controls, ad blockers
Conversion path
Deep links, in-app checkout, app events
Mobile landing pages, web checkout, forms
Fig. Mobile native in apps vs. on the web.
Business goal
Best-fit strategy
Best-fit channels
Primary KPI
Control spend by market and reduce waste
Geotargeting
CTV, paid social, search, programmatic display
Incremental lift by market / CPA by market
Capture place-based intent (stores, venues, competitors)
Geofencing
In-app, mobile web, DOOH, paid social
Qualified visits / cost per incremental visit
Drive immediate action (calls, directions, bookings)
Mobile and in-app activation
In-app, paid social, local search
Calls, directions, bookings, conversion rate
Scale premium reach with geographic control
Location-powered CTV
CTV + mobile assist
Reach/frequency + lift vs holdout
Keep messaging locally relevant at scale
Location-based social and digital
Paid social, video, display
CTR/engagement + downstream lift
Fig. Location strategy selector.
Signal type
Typical granularity
Best use
Practical caution
GPS/device sensor signals
High
Visit qualification, POI modeling, trade areas
Treat as sensitive; require strong consent + governance
IP-based / household geo
Medium
Market-level control, service-area targeting (CTV)
Less precise; don’t oversell “store visit” claims
Declared location (profiles, shipping, sign-up)
Medium
Segmentation, market planning, personalization
Stale data risk; needs refresh logic
First-party location events (app/store systems)
Varies
Loyalty linking, conversion modeling, suppression
Data hygiene and taxonomy consistency matter most
Fig. Location signal types and when to use them.
Method
What it answers
Best for
Use when…
Holdout (audience or geo)
“What changed because of ads?”
Incrementality
You can design a clean control group
Geo experiment (matched markets)
“Did market A outperform market B?”
Local + CTV programs
You have enough scale per market
Visit lift (qualified visits)
“Did visits increase among exposed?”
Retail/QSR optimization
Vendor methodology is transparent and conservative
MMM / modeled lift
“How did channels contribute over time?”
Omnichannel budgeting
You need strategic allocation, not week-to-week tuning
Fig. Measurement methods that actually hold up.
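The holdout row above turns into a single ratio once you have exposed and control conversion counts. A sketch of the relative-lift calculation, with hypothetical counts:

```python
def incremental_lift(exposed_conv: int, exposed_n: int,
                     holdout_conv: int, holdout_n: int) -> float:
    """Relative lift = (exposed CVR - holdout CVR) / holdout CVR."""
    exposed_cvr = exposed_conv / exposed_n
    holdout_cvr = holdout_conv / holdout_n
    return (exposed_cvr - holdout_cvr) / holdout_cvr

# Hypothetical: 1.2% CVR among exposed users vs 1.0% CVR in the holdout
print(round(incremental_lift(1200, 100_000, 500, 50_000), 2))  # 0.2 -> 20% lift
```

In practice you would also want a significance test on the two rates before acting on the number; the point here is only that lift is a comparison against a control, not a platform-reported attribution figure.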
Legacy dependency
Why it breaks
2026-ready replacement
What “good” looks like
Device-level IDs as the backbone
Opt-outs + platform limits reduce consistency
First-party identity + authenticated publisher IDs
Higher match quality, fewer “unknown” users, cleaner suppression
Cookie-style retargeting logic
Cross-environment paths are fragmenting
Contextual + intent signals + modeled reach
Stable performance without needing 1:1 tracking
Single-source attribution
Walled gardens + modeling create parallel truths
Triangulation: platform reporting + lift + MMM
Results hold up when audited from multiple angles
“More data = better”
Privacy enforcement increases risk
Data minimization + consented collection
Higher trust, fewer compliance surprises, clearer governance
Fig. What’s breaking, what replaces it.
Workflow step
Let AI run?
Human control point
Success metric
Bidding/pacing
Yes
Guardrails: CAC/ROAS floors, budget caps
Stability + fewer cost spikes
Creative rotation
Yes
Define “kill rules” and test design
Faster learning velocity
Audience expansion
Conditional
Approve inputs, exclude sensitive segments
Incremental lift, not just CTR
Attribution interpretation
No
Use triangulation and sanity checks
Decisions survive audit and holdouts
Fig. AI in mobile ops: automate vs supervise.
Channel
What it’s great at
Mobile’s connective job
KPI to validate
CTV
Reach and attention
Capture follow-through (search, site, app)
Lift in qualified visits/conversions
Retail media
Close-to-checkout intent
Extend demand off-site + drive repeat
Incremental revenue, new-to-brand lift
DOOH
High-context exposure
Proximity validation + sequencing
Trade-area visit lift
Social
Scalable engagement
Turn attention into owned signals
First-party capture + incremental conversions
Fig. Mobile as the omnichannel connector.
Signal type
Strength
Limitation
Best use
First-party events (app/site/CRM)
Highest intent and control
Needs strong tagging and governance
Retention, reactivation, high-LTV acquisition
Authenticated publisher IDs
More stable reach than device IDs
Coverage varies by publisher
Prospecting with quality frequency control
Contextual (app/session/content)
Privacy-safe and scalable
Can be too broad without creative alignment
Mid-funnel and incremental reach expansion
Location/proximity (aggregated)
Links media to offline outcomes
Must be handled carefully for privacy
Trade-area lift, store visit incrementality
Fig. Signals ladder (what mobile can still use when identity fades).
Method
Best at answering
Works best when
Watch-outs
Platform reporting
Directional campaign health
Paired with independent checks
Modeled conversions vary by platform
Lift/holdout tests
Causal impact
Stable budgets, clean splits
Requires patience and discipline
MMM
Budget allocation across channels
Longer time horizons
Not a day-to-day steering wheel
Geo experiments
Offline impact
Clear trade areas, consistent media
Requires careful design to avoid noise
Fig. Measurement toolkit.
Seller of record
What you typically get
Common limitations
What to ask before you buy
Network / programmer
Premium adjacency, familiar TV packaging
Data overlays can be limited
What targeting is actually available on this placement?
vMVPD distributor
Stronger household controls in some cases
Reporting may differ by device/feed
Are frequency caps enforced at household level?
Programmatic marketplace
Flexible activation, faster optimization
Transparency can vary
What’s the app/site list and how is it verified?
Managed service partner
Operational lift and guardrails
Less direct control
What measurement method will be used, and what are the inputs?
Fig. Your levers change depending on who sells the inventory.
Model
Viewer behavior
Best for
Watch-outs
vMVPD
Live channel surfing, appointment viewing
Live reach with more modern controls
Rights vary, and levers depend on who sells the inventory
MVPD (cable/satellite)
Linear viewing with legacy distribution
Broad linear reach, some local patterns
Less flexible targeting and slower optimization loops
SVOD
On-demand, title-led viewing
Premium on-demand attention, lighter ad loads
Limited live moments; inventory often tied to platform rules
FAST
Free, channelized library viewing
Efficient reach extension and frequency
Supply quality varies; requires tighter guardrails
Fig. Decision framework: choosing the right video model.
Question to answer
Why it matters
What ‘good’ looks like
What is your primary KPI?
Prevents “reach-only” planning
A single KPI plus 1–2 support metrics
How will you estimate incrementality?
Avoids false attribution
Test/control, geo split, or matched-market approach
What’s the attribution window?
Aligns exposure to outcome
Window matched to purchase cycle and channel role
How will you handle deduplication?
Controls frequency waste
Cross-platform dedupe where possible, plus conservative caps
Fig. Measurement checklist for performance-minded vMVPD buying.
Buying path
Use when
Strength
Common pitfall
Direct / upfront-style
You need predictable premium live adjacency
Consistency and clear placement context
You accept linear-like reporting when you needed outcome proof
Programmatic guaranteed / PMP
You want control with flexibility
Better levers for pacing and targeting
You assume all “PMP” inventory is equally transparent
Biddable programmatic
You need optimization and efficiency
Fast learning cycles and scale
You chase cheap CPMs and lose supply quality
Hybrid (direct + programmatic)
You need both certainty and performance
Balances reach and learning
You don’t dedupe frequency across paths
Fig. Buying paths for vMVPD and what each is best at.
Format
What you’re really buying
Best used for
Measurement strength
Broadcast (local stations)
Market-wide reach + trusted programming
Fast awareness in a DMA, seasonal surges, credibility
Strong for reach/frequency; outcomes need a test plan
Local cable
Zone coverage + repetition
Trade-area reinforcement, multi-location retail, coverage efficiency
Moderate; improves with tracked CTAs + lift testing
Sponsorships
Association + repeated presence
Owning a local moment (weather/traffic/sports), memorability
Strong for recall and consistency; outcome proof needs design
CTV/local streaming
Targeted reach + frequency control
Incremental reach, tighter geo, measurable response paths
Strongest for response signals; lift validation still recommended
Fig. Local TV ad formats at a glance.
Planning dimension
Linear local TV
CTV/local streaming
What to watch
Primary strength
Broad reach quickly
Precision + frequency control
Don’t force CTV to “do everything” alone
Targeting
Program/daypart + broad geo
Geo + audience signals
Over-targeting can shrink scale and raise cost
Buying currency
Ratings-based (GRPs/CPP)
Impression-based (CPM)
Reporting won’t match unless you align KPIs
Measurement
Delivery proof; outcomes via incrementality
Delivery + response signals; outcomes via lift
Avoid last-click thinking in both
Fig. Linear TV vs. CTV for local campaigns.
Cost lever
Pushes cost up
Keeps cost sane
Market size + competition
Larger DMAs, high-demand windows
Smaller markets, steadier demand periods
Programming + daypart
Live sports, primetime, top news
Off-peak dayparts, balanced mix
Targeting tightness
Narrow segments, strict geo constraints
Start broader, tighten after learnings
Seasonality
Holidays, political windows, major events
Shoulder periods, planned bursts
Fig. What drives local TV cost.
Buying path
When it fits best
What to ask for
Common pitfall
Direct to local stations
Premium programming, sponsorships, speed-to-reach
Clear inventory details, makegood policy, reporting scope
Treating delivery as proof of business lift
Agency/managed service
Multi-market complexity, hybrid plans, measurement design
Fee transparency, test plan, reporting cadence
Vague “success metrics” and inconsistent reporting
Programmatic/CTV platforms
Precision, faster iteration, measurable CTAs
App/site transparency, frequency controls, brand safety
Chasing cheap CPMs and accepting low-quality supply
Fig. Three buying paths and what to ask for.
Step
What you produce
Proof you should have
Typical owner
Goals
Primary + secondary goal
KPI definitions + thresholds
Marketing lead
Audience
Trade area + segments + exclusions
Coverage map + reach estimate
Media lead
Creative
Master spot + local variants
CTA tracking tested
Creative lead
Budget + flighting
Flight plan + reach/frequency target
Pacing rules + benchmarks
Media lead
Measurement
Response + lift plan
Holdout/matched markets plan (if needed)
Analytics lead
Fig. Campaign build checklist with “proof points”.
Video hosting and content management
Video files are securely stored and organized within a centralized VOD system, allowing businesses to manage large content libraries, metadata, and publishing rules across regions and devices.
Encoding and adaptive streaming
Uploaded videos are automatically encoded into multiple formats and bitrates, ensuring consistent playback quality across bandwidth conditions, devices, and screen sizes—from mobile to CTV.
CDN-based delivery
Content is distributed through global content delivery networks, enabling fast, reliable playback at scale while minimizing latency and buffering for on-demand video services.
Playback and user experience layers
White-label players and apps allow companies to fully control branding, UI, and user journeys, whether on web, mobile apps, or OTT/CTV platforms.
Access control and security
Business-grade VOD platforms include DRM, tokenized URLs, geo-restriction, and authentication, protecting premium VOD content and supporting compliant distribution.
Monetization layers
Modern platforms support SVOD, AVOD, TVOD, FAST, and hybrid monetization, enabling pricing flexibility, ad insertion, subscription management, and transactional workflows.
Analytics and performance measurement
Beyond views, VOD analytics track engagement, completion rates, churn signals, ad performance, and conversion outcomes, allowing businesses to optimize content and revenue strategy over time.
Business Benefit
VOD Streaming Platforms
Third-Party / Consumer Video Platforms
Content ownership & control
Full ownership of video assets, distribution rights, access rules, and IP protection (DRM, geo-blocking, authentication)
Platform-controlled distribution; content subject to policy changes, takedowns, or algorithm shifts
Monetization flexibility
Supports SVOD, AVOD, TVOD, FAST, hybrid models, dynamic pricing, and regional monetization
Limited or fixed monetization models; revenue shares dictated by the platform
Data & audience insights
First-party data access: viewer behavior, engagement depth, retention, churn, conversion attribution
Aggregated or restricted analytics; limited access to user-level data
Brand consistency
White-label players and apps with full control over UI, UX, messaging, and customer journey
Platform branding, competing content, and ads dilute brand experience
Scalability & growth
Built for global delivery, OTT/CTV expansion, content library growth, and evolving business models
Optimized for reach, not long-term scalability or owned ecosystems
Revenue predictability
Direct customer relationships enable recurring revenue, pricing control, and LTV optimization
Revenue dependent on ad markets, algorithms, and platform policies
Integration with business stack
Integrates with CRM, CDP, marketing automation, ad-tech, and analytics systems
Limited or no integration with enterprise data and monetization stacks
Strategic risk
Low platform dependency; infrastructure owned or contractually controlled
High dependency on third-party platforms and external policy changes
Dimension
VOD Platforms
OTT Platforms
Primary role
Backend infrastructure for video delivery and monetization
Consumer-facing distribution and viewing environments
Audience relationship
Direct, first-party relationship owned by the business
Platform-mediated relationship controlled by the OTT ecosystem
Content control
Full control over hosting, access rules, DRM, and availability
Content subject to platform rules, app policies, and storefront requirements
Monetization logic
SVOD, AVOD, TVOD, FAST, hybrid models configured by the business
Monetization constrained by OTT platform policies and revenue shares
Data ownership
First-party data: engagement, churn, conversion, attribution
Limited or aggregated data provided by the platform
Brand experience
White-label players and apps with full UX control
Platform-level UI, navigation, and discovery
Scalability
Designed to scale across web, mobile, OTT, and CTV simultaneously
Scales audience reach, not backend control
Business use
Infrastructure layer powering video strategy
Distribution layer extending reach to viewers
Dimension
VOD Platforms
YouTube & Social Video Platforms
Distribution control
Business-controlled access, pricing, and availability
Algorithm-driven reach and visibility
Monetization models
SVOD, AVOD, TVOD, FAST, hybrid
Platform-defined ad revenue sharing
Revenue predictability
High—subscriptions and direct payments
Low—dependent on ad demand and algorithms
Data ownership
First-party audience and performance data
Limited, aggregated platform analytics
Brand experience
White-label, fully branded environments
Platform UI with competing content and ads
Audience relationship
Direct, owned, and portable
Platform-mediated and non-transferable
Scalability
Designed for long-term growth across OTT and CTV
Optimized for reach, not ownership
Strategic risk
Low platform dependency
High dependency on external platforms
Platform Category
Pricing Model
Bandwidth & Storage
Monetization Fees
White-Label Capabilities
Analytics Depth
Support Level
Scalability Profile
SMB-focused VOD platforms
Flat monthly plans with usage tiers
Limited bandwidth and storage; overage fees common
Often charge transaction or revenue share fees
Partial or limited white-label
Basic engagement metrics
Standard or self-service support
Suitable for small audiences; scaling increases costs quickly
White-label VOD platforms for brands
Base platform fee + usage-based costs
Moderate to high limits; scalable with predictable pricing
Low or no revenue share; payment processing fees apply
Full white-label across players and apps
Advanced viewer, monetization, and retention analytics
Dedicated or priority support
Designed to scale across web, OTT, and CTV
Enterprise-focused VOD platforms
Custom contracts based on volume and features
High or unlimited usage negotiated at contract level
Typically no revenue share; enterprise licensing
White-label optional or internal-facing
Deep analytics with integrations (CRM, BI tools)
Enterprise-grade SLAs and account management
Built for global scale and internal distribution
OTT-first / distribution-led platforms
Revenue share or ad-based pricing
Usage optimized for CTV delivery
Ad revenue share is common
Limited white-label at platform level
Ad and reach-focused analytics
Platform-level support
Strong audience scale; limited backend flexibility
Creator-focused VOD platforms
Low entry pricing with revenue-based fees
Entry-level limits; scales with audience size
Platform takes a percentage of subscriptions or sales
Brand customization limited but improving
Creator-centric metrics (subs, churn)
Community or tiered support
Scales revenue faster than infrastructure control
Automated encoding pipelines
Modern OTT platforms support cloud-based transcoding with multi-bitrate outputs to enable adaptive streaming.
Metadata and search optimization
Accurate tagging improves discoverability, which directly affects engagement rates.
Role-based permissions
Enterprise workflows require granular access controls for editorial, marketing, and operations teams.
Engagement metrics
Session duration, completion rate, buffering ratio, and device breakdown. Industry research shows that even small improvements in startup time (under 2 seconds) correlate with higher retention rates.
Revenue attribution
Track ARPU (average revenue per user), LTV (lifetime value), and ad yield metrics.
First-party data ownership
In privacy-regulated markets, control over first-party audience data is a competitive advantage. Streaming decisions increasingly rely on data-informed programming and monetization strategy.
99.9%–99.99% uptime SLAs
99.99% uptime equates to approximately 52 minutes of downtime per year.
24/7 monitoring and incident response
Live sports and global events require real-time escalation frameworks.
Dedicated account management
Enterprise clients often require proactive performance reviews and infrastructure optimization.
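The SLA arithmetic above (99.99% uptime ≈ 52 minutes of downtime per year) is easy to verify: multiply the allowed downtime fraction by the minutes in a non-leap year.

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(uptime: float) -> float:
    """Minutes of downtime an SLA tier permits per (non-leap) year."""
    return (1 - uptime) * MINUTES_PER_YEAR

print(round(downtime_minutes_per_year(0.9999), 1))  # ~52.6 minutes ("four nines")
print(round(downtime_minutes_per_year(0.999), 1))   # ~525.6 minutes ("three nines")
```

The order-of-magnitude gap between three and four nines is why the extra decimal in an SLA is worth negotiating for live events.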
Layer
What it controls
Why it matters
Content + rights
What you can show, where, and when
Determines reach, exclusivity, and pricing power
Product experience
How fans discover, watch, and return
Determines retention, churn, and brand equity
Ad + measurement
How inventory is packaged and verified
Determines revenue, trust, and repeat budgets
Fig. Layers, what they control, and why it matters.
Product goal
Monetization bias
What usually makes it work
Reach and discovery
AVOD / FAST
Frictionless access + strong programming cadence
Superfan retention
SVOD
Reliable live access + personalization + archives
Tentpole extraction
TVOD + sponsorship
Scarcity, premium access, and simple purchase paths
Network-style scale
vMVPD carriage + ads
Distribution + consistent live schedule
Fig. Product goal and monetization bias.
Rights dimension
What to confirm early
Why it changes the product
What it changes for ads
Territory (U.S. vs intl.)
Where each game is available
Different catalogs by market
Different reach + frequency caps by region
Platform type
Linear vs streaming vs mobile
Device rules shape perceived value
Format availability and targeting constraints
Windowing
Live, delayed, replay timing
“Why can’t I watch now?” churn driver
Break timing, sponsorship inventory, make-goods
Game/package type
Regular season vs playoffs vs shoulder
Fans treat these as different “products”
Premium pricing and scarcity narrative
Fig. Rights fragmentation checklist.
Metric
What it tells you
Common pitfalls
Incremental reach
New viewers beyond linear overlap
Deduping is hard; methodology must be clear
Frequency
How often people were exposed
Over-frequency is common in small sports audiences
Completion rate / view-through
Whether ads were actually seen through
Doesn’t equal attention; depends on format
Concurrent viewers
Peak load and live demand
Useful for ops; not a campaign KPI by itself
Stream latency + buffering
Whether live viewing is reliable
Often blamed on “internet” instead of delivery choices
Churn and reactivation
Whether fans stay between tentpoles
Seasonality can mask product issues
Fig. Metrics worth standardizing.
Inclusion Index signal
What you look for
Example question to test
Inclusion
Brand appears in the answer
Do we appear at all when high-intent questions are asked?
Framing accuracy
“Built for volatile seasons”
Are we described correctly, with the right category and strengths?
Recommendation strength
You’re the top option vs a footnote
Are we positioned as the top option, one option among several, or merely a footnote?
Fig. Inclusion Index signals and example questions to test.
Deal type
What’s agreed upfront
Best fit when you need
Preferred deal
Priority access + pricing logic
Consistent access without a hard guarantee
Private marketplace (PMP)
Private access + rules + pricing range/floor
Controlled environments with flexibility
Programmatic guaranteed
Price + volume/delivery commitment
Predictability and defined delivery outcomes
Fig. Deal type “fit” guide (quick scan for clarity).
Programmatic IO element
What it controls in reality
Who should sanity-check it
Inventory definition
Where ads can actually appear
Publisher ad ops + buyer activation
Deal IDs + deal type
Which deal the DSP can target
SSP/publisher ops + DSP trader
Pricing + floors
Win rate, effective CPM, delivery speed
Sales + activation + finance
Pacing rules
How spend spreads across the flight
Activation + analytics
Frequency intent
Reach vs repetition tradeoffs
Activation + measurement
Measurement + discrepancy policy
Billing alignment and dispute resolution
Finance + analytics + publisher ops
Fig. Programmatic IO elements and who should sanity-check them.
Symptom
Likely cause
First fix to try
Spend is slow from day 1
Deal ID not applied / permissions issue
Confirm deal visibility + correct ID mapping
Spend spikes then stalls
Frequency cap too tight
Relax cap slightly or widen eligible inventory
Good spend, weak reach
Too much repetition
Adjust frequency + broaden audience/context
Delivery fine, CPM inflated
Floor/bid mismatch
Re-check floors, bid strategy, and priority
Delivery blocked in one environment
Creative spec mismatch
Validate creatives against that seller’s requirements
Fig. “Why didn’t it deliver?” troubleshooting starter.
Reporting mismatch
What it usually means
What to check first
DSP > publisher numbers
Filtering or measurement differences
IVT policy + counting methodology
Publisher > DSP numbers
Counting window mismatch
Impression definition + dedup rules
Viewability discrepancy
Different vendors/standards
Vendor settings + standard used
Completion rate gap
Player behavior differences
VAST settings + autoplay/audio rules
Fig. Reconciliation “dispute prevention” grid.
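Before any of the mismatches above turn into a billing dispute, it helps to quantify the gap the same way on both sides. A sketch, where the counts and any tolerance threshold are illustrative assumptions (tolerances are negotiated per contract, not standardized):

```python
def discrepancy_pct(dsp_count: int, publisher_count: int) -> float:
    """Gap between DSP and publisher impression counts,
    expressed as a percentage of the publisher's number."""
    return (dsp_count - publisher_count) / publisher_count * 100

# Hypothetical: DSP reports 1,020,000 impressions, publisher reports 1,000,000
gap = discrepancy_pct(1_020_000, 1_000_000)
print(round(gap, 1))  # 2.0 (DSP counts 2% higher)
```

Agreeing in the IO on which side's count bills, and at what gap percentage a review is triggered, prevents most reconciliation arguments.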
Factor
Traditional IO
Programmatic IO
How terms are executed
Mostly manual trafficking + ad ops interpretation
Platform-configured rules tied to deal IDs
Speed to launch
Often slower due to back-and-forth and rework
Faster once standard fields and setup patterns exist
Risk of mismatched setup
Higher (two sides configure separately)
Lower (more structured, increasingly standardized sync)
Best fit
Custom sponsorships, one-offs, non-standard placements
PMP/preferred/guaranteed deals, repeatable premium buying
Auditability
Depends heavily on documentation quality
Stronger because terms map to platform settings
Fig. Programmatic IOs vs. traditional IOs.
Channel
What changes operationally
IO clause to make explicit
CTV
Creative rules, pod logic, device variability
Ad duration, acceptance criteria, completion definition
DOOH
Location/daypart specificity
Eligible locations + dayparts + proof-of-play expectations
Omnichannel
Mixed reporting sources and cross-channel frequency
Unified suitability rules + reconciliation hierarchy
Fig. Channel-specific clauses worth adding to the IO.
Metric
What it answers
Best used for
Common trap
eCPM
“How valuable were the impressions we served?”
Yield tests, format/placement comparisons
Looks great while unfilled requests grow
rCPM
“How much revenue did each request opportunity generate?”
Partner comparison, revenue efficiency
Can hide why performance changed unless segmented
Fill rate
“How many requests became impressions?”
Detecting lost volume, delivery/demand gaps
Chasing 100% fill by lowering standards
Fig. What each metric answers.
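All three metrics come from the same three inputs: revenue, requests, and impressions. A sketch with hypothetical figures:

```python
def ecpm(revenue: float, impressions: int) -> float:
    """Revenue per 1,000 served impressions."""
    return revenue / impressions * 1000

def rcpm(revenue: float, requests: int) -> float:
    """Revenue per 1,000 ad requests, served or not."""
    return revenue / requests * 1000

def fill_rate(impressions: int, requests: int) -> float:
    """Share of requests that became impressions."""
    return impressions / requests

# Hypothetical: $2,100 revenue, 1,000,000 requests, 600,000 impressions
print(round(ecpm(2100, 600_000), 2))           # 3.5
print(round(rcpm(2100, 1_000_000), 2))         # 2.1
print(round(fill_rate(600_000, 1_000_000), 2)) # 0.6
```

Note that rCPM already bakes fill rate in, which is why it catches the "great eCPM, shrinking volume" trap that eCPM alone hides.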
Scenario
Requests
Impressions
Resulting rCPM signal
High coverage
1,000,000
850,000
Request pipeline is healthy
Medium coverage
1,000,000
650,000
Pricing or delivery needs review
Low coverage
1,000,000
350,000
Demand mismatch or timeouts likely
Fig. Same revenue, different story.
Scenario
eCPM (per 1,000 impressions)
Fill rate
rCPM (per 1,000 requests)
A: strong yield, weak coverage
$6.00
30%
$1.80
B: moderate yield, strong coverage
$3.50
70%
$2.45
Fig. Key relationship scenarios.
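The relationship the scenarios illustrate is rCPM ≈ eCPM × fill rate, which is easy to check against the table:

```python
# (eCPM, fill rate) pairs from the two scenarios above
scenarios = {
    "A: strong yield, weak coverage": (6.00, 0.30),
    "B: moderate yield, strong coverage": (3.50, 0.70),
}

# rCPM ~= eCPM x fill rate
rcpm_by_scenario = {name: round(e * f, 2) for name, (e, f) in scenarios.items()}
print(rcpm_by_scenario)  # A -> 1.8, B -> 2.45
```

Scenario B earns more per request despite the lower eCPM, which is the whole argument for using rCPM when comparing demand partners.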
Lever
Most likely to raise
Most likely to harm
Notes
Raise floors
eCPM
Fill rate, rCPM
Segment floors by geo/device
Add demand partners
Fill rate, eCPM
Latency, viewability
Measure incremental lift
Reduce timeouts/latency
Fill rate, rCPM
—
Often the fastest “hidden” win
Improve viewability
eCPM
—
Protect UX to keep it durable
Fig. Which lever moves what?
If you’re trying to…
Primary metric
Guardrail metric
Why
Improve yield on existing impressions
eCPM
Fill rate
Don’t win yield by losing volume
Compare partners fairly
rCPM
eCPM
Avoid high-price / low-coverage traps
Find lost revenue potential
Fill rate
rCPM
Separate “more coverage” from “better money”
Explain a revenue swing
rCPM
eCPM + Fill
Identify whether yield or coverage moved
Fig. Decision framework in one glance.
What changes
Traditional search
Conversational discovery
What it means for marketers
Input format
Keywords
A brief with constraints
Optimize for decision questions, not query volume
Intent clarity
Often ambiguous
Usually explicit
Treat prompts as “requirements,” not “interest”
Outcome pattern
Many clicks, mixed quality
Fewer clicks, higher intent
Expect fewer sessions, higher conversion density
Role of content
Ranks pages
Builds the shortlist
Product truth and attributes become performance levers
Fig. Search intent vs conversational intent at a glance.
Shortlist signal
What it looks like in prompts
How brands should respond
Owner
Attribute completeness
“for small apartments”, “wide feet”, “quiet”, “hypoallergenic”
Standardize attributes + fill gaps across variants
Ecommerce + PIM
Availability truth
“can I get it this week?”
Real-time or near-real-time stock + delivery windows
Ops + ecommerce
Pricing clarity
“under $180”, “best value”
Consistent pricing + clear promo logic
Merch + finance
Policy confidence
“easy returns?”, “warranty?”
Plain-language returns, warranty, support info
CX + support
Fig. Shortlist signals and how brands earn inclusion.
Core function
What it produces
What to watch
Data collection and unification
Normalized events + usable IDs/signals
Duplicate events, inconsistent naming, missing consent flags
Audience segmentation
Reusable traits + segments with windows
Overlapping segments, unclear exclusions, stale recency rules
Enrichment and profiling
Extra context for targeting/analysis
“Trait creep” (using enrichment beyond its purpose)
Activation (campaign execution)
Destination-ready audiences
Sync lag, mismatched segment counts, frequency leakage
Analytics and reporting
Overlap/frequency/decay insights
“Pretty dashboards” that don’t change decisions
Fig. Core functions at a glance.
Data type
Common DMP inputs
Best used for
First-party
Site/app events, CRM exports, purchase signals
Retargeting, suppression, owned audience building
Second-party
Partner cohorts, publisher segments
Co-marketing, trusted enrichment, incremental reach tests
Third-party
Aggregated segments, contextual cohorts
Prospecting support, directional tests, early planning signals
Fig. Types of data a DMP works with.
Platform
Primary job
Best for
Where it struggles
DMP
Audience segmentation + activation
Programmatic targeting, suppression, cross-channel audiences
Identity loss, long-term customer view
CDP
Customer unification + lifecycle orchestration
Owned channels, retention journeys
Paid media activation without integrations
DSP
Media buying execution
Bidding, pacing, creative rotation
Building clean audience logic from scratch
DCR
Privacy-safe matching + analysis
Partner measurement, walled-garden collaboration
Day-to-day activation workflows
Fig. DMP vs CDP vs DSP vs DCR.
Criterion
Why it matters
Questions to ask
Integrations
A DMP without activation is a database
Which DSPs/social/retail tools have native connectors? How often do segments sync?
Identity and matching
Determines usable scale
What identifiers are supported? How are match rates measured and reported?
Taxonomy and segmentation
Keeps audience logic consistent
Can you version segments and see their composition?
Governance
Reduces compliance risk
Can you enforce consent/purpose and retention? Are there audit logs?
Reporting
Turns usage into learning
Can you see overlap, frequency risk, and segment decay?
Fig. How to choose the right DMP.
Identity mode
Where it shows up most
Strength
Practical limitation
Consent-based cookies
Open web display, some measurement
Familiar workflows
Coverage varies; retention/consent constraints
Mobile ad IDs (MAIDs)
In-app advertising
Useful for app ecosystems
Opt-outs and platform policies reduce stability
Publisher IDs (PPIDs)
CTV + premium publishers
Strong within publisher environments
Not universal; often walled to specific partners
Contextual/cohort signals
Open web + privacy-led targeting
Resilient without IDs
Less precise; needs strong creative + testing discipline
Fig. Identity modes you’ll actually operate in (2026).
Category
DMP
DSP
Primary job
Build and manage audiences from data
Buy ads and optimize delivery in real time
Main outputs
Segments (audiences), IDs/attributes, suppression lists
Impressions, reach/frequency, conversions, lift, pacing
Typical users
Data/marketing ops, analytics, audience strategists
Media buyers, performance teams, agencies
Best at
Audience organization + activation support
Execution + optimization across channels
2026 reality check
Less dependent on third-party data than it used to be (and often replaced by CDPs/clean rooms)
Still core to programmatic buying across display/video/CTV/audio/DOOH
Fig. Quick comparison table: DMP vs DSP.
Data type
What it usually includes
Best used for
First-party data
Site/app events, CRM traits, purchase history, consented IDs
High-confidence segmentation, suppression, lifecycle audiences
Second-party data
Partner-shared insights under contract
Expansion into adjacent audiences with clearer provenance
Third-party data
Aggregated segments from external providers
Limited use cases; treat as directional inputs and validate lift
Fig. What each data type is good for in a DMP.
Hygiene rule
What to document
What it prevents
One-sentence definition
“Who is in the audience and why”
Teams activating the wrong segment
Freshness window
Lookback + expiry
Stale intent targeting
Suppression logic
Who must be excluded
Paying to reach people you already converted
Owner + usage
Who owns it and where it runs
Segment sprawl and duplicates
Fig. DMP segment hygiene rules.
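The hygiene rules above lend themselves to being encoded directly on the segment record, so the definition, freshness window, suppression logic, and owner travel with the segment. A minimal Python sketch; the schema and field names are assumptions for illustration, not a real DMP API:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Segment:
    name: str
    definition: str     # one-sentence "who is in it and why"
    lookback_days: int  # freshness window: how far back behavior counts
    expires: date       # hard expiry to prevent stale intent targeting
    suppress: list      # audiences that must be excluded
    owner: str          # accountable team
    destinations: list  # where the segment is allowed to run

    def is_stale(self, today: date) -> bool:
        """True once the segment has passed its expiry date."""
        return today > self.expires

cart_abandoners = Segment(
    name="cart_abandoners_7d",
    definition="Users who added to cart but did not purchase in the last 7 days",
    lookback_days=7,
    expires=date(2026, 3, 31),
    suppress=["recent_purchasers_30d"],  # don't pay to reach converters
    owner="lifecycle-marketing",
    destinations=["dsp_main", "social_paid"],
)
print(cart_abandoners.is_stale(date(2026, 4, 1)))  # True: past expiry
```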
DSP lever
What it controls
Common pitfall
Bidding strategy
Price vs volume vs efficiency
Chasing cheap CPMs that don’t convert
Pacing
How spend distributes over time
Overspending early before learning stabilizes
Frequency caps
How often people see ads
Wasting budget on repeated exposure
Supply selection
Open exchange vs deals vs curated
Quality issues when controls are too broad
Fig. DSP levers that change outcomes.
Decision point
DMP
DSP
Core asset
Audience segments
Campaign execution + performance data
Optimization focus
Audience quality and governance
Bids, supply, creative, pacing, outcomes
“Win” looks like
Usable, reusable audiences
Improved CPA/ROAS, reach efficiency, lift
Replacement pressure in 2026
Higher (often displaced by CDPs/clean rooms)
Lower (still required for programmatic scale)
Fig. DSP vs DMP: Key differences.
What moves
What can go wrong
What to validate
Segment definition
Different teams interpret it differently
One owner, one definition, consistent naming
Match mechanism
“Activated” segment becomes fuzzier than expected
Match rate ranges by channel/device/browser
Suppression lists
Suppression fails in some environments
Suppression coverage by channel + timing
Refresh cadence
Audience updates lag behind behavior
How quickly membership updates downstream
Fig. The DMP → DSP handoff checklist.
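One lightweight way to validate the match-mechanism row of the checklist is to compare activated audience size against the source segment, per channel. A sketch with invented numbers and an arbitrary 40% alert threshold (real thresholds vary by channel and identifier):

```python
def match_rate(activated: int, source: int) -> float:
    """Share of the source segment that survived activation in a channel."""
    return activated / source

source_size = 500_000  # segment size in the DMP
activated = {           # hypothetical post-sync counts per destination
    "display": 310_000,
    "ctv": 120_000,
    "social": 260_000,
}

for channel, n in activated.items():
    rate = match_rate(n, source_size)
    flag = "OK" if rate >= 0.40 else "investigate"
    print(f"{channel}: {rate:.0%} match -> {flag}")
```

A low rate in one channel (CTV here) is exactly the "fuzzier than expected" failure mode the checklist warns about.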
Situation
Recommended stack
What to measure first
Performance-first, lean team
DSP-first
CPA/ROAS stability + supply quality controls
Brand + performance teams arguing about audiences
DSP + audience governance layer
Reuse rate of segments + suppression impact
Strict privacy + strong first-party data
DSP + CDP/warehouse + clean room + lightweight governance
Incrementality, match coverage, auditability
Fig. Which stack fits which organization.
Privacy Limits and Reduced Addressability
As privacy regulations and platform changes erode third-party identifiers, marketers face diminishing addressable audiences. Modern digital advertising increasingly relies on privacy-safe identifiers and consent-based data models, forcing a pivot from hyper-targeted performance tactics toward broad, identity-safe exposure strategies that fuel awareness at scale.

This shift means that fewer signals are available to target individual intent deep in the funnel, making the mental availability and familiarity established through awareness campaigns more critical for later performance metrics.
Rising Acquisition Costs and Competitive Noise

In 2026, consolidated data show that consumers typically require 6–7 exposures before brand awareness is meaningfully established and begins to influence decisions. Brands that underinvest in awareness face:

  • Higher cost per acquisition in competitive performance channels
  • Lower incremental reach as audiences fragment across platforms
  • A plateau in performance campaigns as efficiency dries up

Analysts find that awareness contributes to long-term growth by lowering future acquisition costs and strengthening retention, loyalty, and advocacy.

Competitive Advantage in a Fragmented Media Ecosystem

Marketers increasingly report brand awareness as a primary growth driver rather than a secondary objective. In recent 2026 CMO surveys, over 60% of senior marketers track awareness as a strategic KPI, reflecting a shift away from short-term ROI toward durable brand equity.

In volatile markets, strong awareness functions as a competitive moat—brands that are recognized and trusted benefit from:

  • Higher likelihood of inclusion in the consideration set
  • Enhanced customer preference and repeat purchases
  • Stronger pricing power and resilience against market fluctuations
Awareness Lowers Future Performance Costs
Effective brand awareness campaigns create a mental availability effect that makes subsequent performance campaigns more efficient. When audiences already recognize your brand, conversion rates improve and cost per sale declines because consumers enter performance touchpoints with lower psychological friction and higher trust.
Incremental Reach
Measure how many unique individuals your campaign reaches above and beyond baseline exposure or overlapping audiences. Unique reach—distinguishing distinct users rather than raw impressions—is increasingly recognized as a superior gauge of real audience coverage.
Frequency Control
Not just frequency as a number of exposures but effective frequency: how often your target audience sees the message before a memory trace is encoded without inducing ad fatigue. Classic marketing research shows that repeated exposure increases recall, particularly when spaced appropriately.
Viewability and Attention Proxies
Viewable impressions (ads actually in view) and attention proxies (e.g., time-in-view, scroll depth) signal quality of exposures rather than passive loads. These proxies are becoming industry standards for upper-funnel campaigns as measurement matures beyond impressions alone.
Brand Lift and Recall Metrics
Survey-based brand lift studies (often run through DSP or platform measurement partners) track changes in ad recall, brand favorability, or purchase intent after exposure. These direct measurements of perception shift are arguably the strongest evidence that a targeting option truly builds awareness.
Branded Search Lift
Tracking increases in branded search queries post-campaign offers a real behavioral signal that awareness is translating into active recall and interest.
CPM Efficiency 
Cost per mille (CPM) remains important, but cost should be weighed against quality of delivery (e.g., viewable CPM, attention-adjusted CPM), not just the cheapest price per thousand delivered.
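Viewable CPM simply re-bases cost on in-view impressions rather than served ones, which is why it can look worse than raw CPM while being a more honest price. A quick illustration with made-up numbers:

```python
def cpm(cost: float, impressions: int) -> float:
    """Cost per 1,000 served impressions."""
    return cost / impressions * 1000

def vcpm(cost: float, viewable_impressions: int) -> float:
    """Cost per 1,000 *viewable* impressions."""
    return cost / viewable_impressions * 1000

cost = 500.0
served = 100_000
viewable = 60_000  # 60% viewability

print(f"CPM:  ${cpm(cost, served):.2f}")     # $5.00
print(f"vCPM: ${vcpm(cost, viewable):.2f}")  # $8.33: the real price of an in-view exposure
```

The cheapest raw CPM often carries the worst vCPM once viewability is factored in.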
Trade-Off
Impact on Awareness Targeting
Scale vs. Control
Broad reach vs. curated placements
Automation vs. Transparency
Predictive delivery vs. insight clarity
Audience Precision vs. Inventory Quality
Narrow focus vs. widespread visibility
Proprietary Algorithm vs. Open Auction
Platform priorities vs. cross-publisher optimization
Branded Search Lift Indicator
After awareness campaigns run on YouTube or Google Display, increases in branded search queries act as a behavioral proxy for improved mental availability.
Demand Harvesting Layer
Search captures incremental traffic generated by awareness activity across video and display environments.
Consideration Reinforcement
When consumers move from awareness to evaluation, Search ads reinforce credibility and visibility at a high-intent moment.
Closed (Walled Garden) Ecosystem
Inventory is largely limited to Google-owned and partner properties. Omnichannel expansion beyond this ecosystem requires additional platforms.
Limited Cross-Channel Frequency Management
Frequency capping works within Google’s environments but does not extend across the broader open web, CTV exchanges, or retail media.
Reduced Supply Transparency
Compared to open programmatic platforms, advertisers have less granular visibility into supply path optimization and certain placement-level controls.
Algorithmic Black-Box Optimization
Automation increases efficiency but reduces insight into granular bidding logic and audience weighting decisions.
Omnichannel Constraints
Advanced awareness strategies involving premium CTV inventory, digital out-of-home, audio, or retail media networks often require programmatic platforms outside Google’s infrastructure.
1. Contextual Targeting
Contextual targeting places ads based on the content of the page or environment, not the identity of the user. Advanced contextual systems analyze keywords, sentiment, semantic meaning, and even video transcripts to determine relevance. For brand awareness, contextual targeting ensures message alignment with relevant environments while remaining privacy-compliant. In a privacy-first era, contextual has regained strategic importance as a scalable method for reaching high-intent audiences without personal identifiers.
2. Geo & Proximity Targeting
Geo targeting enables campaigns to reach users by country, region, city, ZIP code, or designated market area. Proximity targeting (often called geofencing) allows brands to target audiences within specific radiuses around physical locations.

This is particularly powerful for retail, events, hospitality, automotive, and franchise-based brands.
3. Audience Targeting

Audience targeting uses first-party data, third-party segments, and modeled lookalikes to reach users based on demographics, interests, or purchase signals.

In awareness campaigns, audience layers are typically broader than performance segments. Instead of hyper-narrow filters, brands often combine:

  • Interest clusters
  • Demographic overlays
  • Lookalike expansions
  • CRM-based suppression lists

The objective is reach expansion with controlled relevance, not micro-segmentation.

4. Behavioral & Intent Targeting
Behavioral targeting leverages browsing history, content consumption patterns, and observed digital behavior to infer likely interests or future purchase signals. Intent targeting goes deeper by identifying signals of active research behavior—useful when brands want to build awareness among high-value prospect pools before competitors capture attention.

In awareness strategies, behavioral layers can be used to prioritize exposure among likely buyers while maintaining broader reach.
5. Device & Environment Targeting

Programmatic enables targeting based on:

  • Desktop vs. mobile vs. tablet
  • Connected TV (CTV) environments
  • Smart TVs and streaming devices
  • In-app vs. mobile web
  • Audio streaming platforms

For brand awareness, CTV and premium video environments are especially effective due to higher attention and completion rates compared to standard display placements.

6. Inventory Targeting

Unlike closed ecosystems, programmatic allows advertisers to select:

  • Open exchange inventory
  • Private marketplace (PMP) deals
  • Curated publisher lists
  • Premium direct programmatic agreements

Inventory targeting helps brands align awareness campaigns with premium publishers or high-attention environments to strengthen brand perception.

7. Frequency & Reach Management

One of the strongest advantages of programmatic is cross-channel frequency control. Brands can:

  • Cap frequency at user or household level
  • Optimize toward incremental reach
  • Reduce overlap between channels
  • Avoid overexposure and creative fatigue

Effective awareness is not about maximum impressions—it is about optimal reach with controlled repetition.
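User-level frequency capping is conceptually simple: count exposures per user and stop serving at the cap, while unique reach is just the count of distinct users touched. A toy sketch (in practice this logic runs inside the DSP, not in campaign code):

```python
from collections import Counter

CAP = 3                          # maximum exposures per user
exposures: Counter = Counter()   # user_id -> impressions served

def should_serve(user_id: str) -> bool:
    """Serve only if the user is under the frequency cap."""
    if exposures[user_id] >= CAP:
        return False             # over-exposed: skip to protect budget
    exposures[user_id] += 1
    return True

bid_requests = ["u1", "u2", "u1", "u1", "u3", "u1", "u2"]
served = [u for u in bid_requests if should_serve(u)]

print("impressions served:", len(served))  # 6 (u1 is capped at 3)
print("unique reach:", len(exposures))     # 3
```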

Operational Complexity
Campaign setup, optimization, and supply management require experienced media teams or agency support.
Technical Cost Layers
DSP fees, data fees, verification costs, and technology overhead can increase effective CPMs.
Inventory Quality Variability
Open exchange environments require strong brand safety controls and verification layers.
Learning Curve for Optimization
Understanding incremental reach, supply path optimization, and attention metrics requires advanced measurement discipline.
Fragmentation Risk
Without unified planning, campaigns can become dispersed across too many channels without clear performance alignment.
Startups & Challengers: Build Mental Availability

Early-stage brands face a visibility deficit. The objective is not precision—it is penetration.

AI Digital’s awareness frameworks for startups prioritize:

  • Broad contextual environments
  • Affinity-based audiences
  • YouTube or scalable display reach
  • Controlled but meaningful frequency

Over-segmentation at this stage reduces growth velocity. The best targeting option for achieving brand awareness for challengers is typically one that maximizes efficient, scalable reach, not micro-targeting.

Goal: Establish recognition in category entry points.
Growth Brands: Scale Awareness Efficiently

Growth-stage companies already have some brand recall. The constraint shifts from awareness creation to efficient expansion.

AI Digital typically recommends:

  • Layered contextual + audience modeling
  • Programmatic CTV expansion for incremental reach
  • Cross-channel frequency control
  • Branded search lift tracking

At this stage, targeting for brand awareness must balance breadth and efficiency. The focus becomes incremental reach beyond existing audiences, not repeated exposure to the same pools.

Goal: Increase share of voice while lowering long-term acquisition costs.
Enterprise Brands: Sustain Dominance

Mature brands operate in competitive environments where awareness protects market share.

Enterprise-level targeting often includes:

  • Omnichannel programmatic orchestration (CTV, premium display, DOOH, audio)
  • Private marketplace deals with premium publishers
  • Multi-market frequency modeling
  • Unified measurement frameworks

At this level, the best targeting option for achieving brand awareness is rarely a single platform. It is a coordinated ecosystem designed to maintain brand dominance across screens.

Goal: Reinforce mental availability and suppress competitor encroachment.
Small Budgets: Precision Within Reach Constraints

When budgets are constrained, AI Digital recommends:

  • Geographic concentration
  • Contextual alignment over broad audience stacking
  • Single primary channel dominance (e.g., YouTube or high-quality display)
  • Strict frequency control
Small budgets cannot afford channel fragmentation. Awareness must be concentrated, not dispersed.
Mid-Scale Campaigns: Balanced Expansion

With moderate investment levels, brands can:

  • Combine Google Ads and programmatic
  • Expand into CTV or premium display
  • Optimize for incremental reach
  • Introduce brand lift measurement
At this level, targeting strategy transitions from basic reach to controlled expansion.
Enterprise Budgets: Multi-Market Orchestration

Large budgets unlock:

  • Cross-market synchronization
  • Advanced supply path optimization
  • Unified frequency management across DSPs
  • Attention and brand lift modeling
Here, awareness becomes a sustained investment strategy, not a campaign cycle.
B2C
  • Broad reach across display, video, and CTV
  • Emotion-driven storytelling
  • Higher exposure frequency
  • Retail and geo-layer integration
Shorter sales cycles allow awareness to transition into performance rapidly.
B2B
  • Account-based targeting
  • Industry and job-function segmentation
  • Thought leadership content
  • Longer exposure windows
Longer sales cycles require sustained awareness presence across months—not bursts.
High-Consideration Categories

Automotive, SaaS, financial services, real estate:

  • Multi-touch exposure
  • Cross-device reinforcement
  • Video + premium inventory alignment
Trust building requires repetition over time.
Impulse & Low-Consideration Products

CPG, fashion accessories, entertainment:

  • Broad, high-frequency exposure
  • Contextual lifestyle alignment
  • Seasonal amplification
Scale and salience matter more than segmentation depth.
Weaker angle
Stronger angle
Why it lands better
Example direction
Product innovation
Financial payback
Connects to real farm pressure
“Helps protect margin per acre”
Feature lists
Operational outcomes
Easier to evaluate fast
“Cuts waste” / “improves efficiency”
Brand promise alone
Proof plus relevance
Trust needs evidence in a tight market
“Built for volatile seasons”
Broad awareness copy
Decision-stage utility
Better fit for pressured buyers
“What to do when input costs move”
Fig. How agricultural messaging needs to change under margin pressure.
Volatility trigger
What shifts on the farm
What it changes in media
What the message should emphasize
Fertilizer price spikes
Input economics tighten
Timing and geo priorities
Efficiency, payback, risk reduction
Shipping or supply disruption
Purchase windows compress
Flighting and channel pacing
Availability, reliability, planning support
Crop-mix changes
Demand shifts by crop and region
Audience mix and regional spend
Relevance to current acreage decisions
Margin pressure
Scrutiny increases
Less tolerance for wasted impressions
Per-acre value, yield protection, ROI
Fig. How volatility reshapes agricultural marketing decisions.
Media Buying has evolved from purchasing broad demographic blocs to bidding on individual impressions in real time.
Through programmatic platforms, ads are now served based on a user's immediate context and predicted intent, not just their age or gender. This shift maximizes media efficiency by ensuring budget is allocated toward the most receptive individuals.
Creative Strategy is being revolutionized by Dynamic Creative Optimization (DCO).
Instead of a single "hero" ad, marketers now deploy modular creative components—headlines, images, CTAs—that are automatically assembled into thousands of unique ad variations. This means the messaging for a cold audience can be educational, while a retargeting ad showcases the exact product a user viewed, all from the same campaign framework.
Measurement and Analytics have moved beyond vanity metrics to focus on attributing business outcomes to specific personalized interactions.
Marketers can now track how a dynamic email influences a later website conversion or how a personalized ad impacts customer lifetime value, creating a closed-loop system for understanding true marketing ROI.
Relationship between Brands and Audiences is being redefined as a value exchange.
Consumers willingly provide data in return for more relevant, useful advertising experiences. This forces brands to act less like broadcasters and more like service providers, building trust through consistent personalization across every touchpoint.
Advanced Segmentation
Instead of static groups like "women aged 25-40," AI can create dynamic micro-segments in real-time, such as "users who abandoned a cart containing high-value electronics in the last 6 hours and have previously browsed product reviews."
Predictive Analytics
This is where the true power lies. ML models analyze historical and real-time data to forecast future behavior. They can predict a customer's lifetime value (LTV), their churn risk, and their product affinity—what they are most likely to want next. This allows marketers to be proactive, serving content that anticipates a need before the customer even explicitly states it.
First-Party Data
This is your most valuable asset, collected directly from customer interactions. It includes purchase history, website and app behavior (pages viewed, time spent, items clicked), email engagement (opens, clicks), and customer service records.
Zero-Party Data
This is information a customer proactively and intentionally shares with you, such as preferences selected in a profile, survey responses, or stated communication preferences.
Contextual Data
Real-time signals like a user's geographic location, local weather, the device they're using, and the time of day provide crucial context that makes personalization feel immediate and relevant.
Capability
Elevate delivers
Smart Supply delivers
Result
Planning & forecasting
AI planning assistant; predictive budgets
KPI-aligned deal options
Faster plans, clearer bets
In-flight optimization
Impact-scored changes; budget reallocation
Direct SPO routes; IVT controls
Spend shifts land on quality
Measurement & insight
Ask Elevate; MTA/MMM alignment
Deal-level transparency
Trustworthy readouts
Frequency & sequencing
Cross-channel guardrails
Path consolidation
Lower overlap, steadier reach
Transparency & control
Open Garden governance
Fewer hops; auditable paths
Less waste, more control
Fig. How Elevate and Smart Supply complement each other.
Channel
Best signals
Creative tactic
Primary KPI
Watch-outs
Social / programmatic
Recent 1P actions, platform engagement, context
Modular ads; auto-placements
Revenue/CPA
Frequency, overlap, MFA paths
Search
Query intent, audience lists, device/time
RSA assets; PMax cross-surface
Conv. value/ROAS
Rising CPC; value-based bidding
Display / contextual
Page semantics, sentiment, visual cues
DCO image/copy swaps
Qualified visits/assisted conv.
Poor placements if uncapped
Email / CRM
Lifecycle events, RFM, churn risk
Triggered offers; product picks
Revenue per send/retention
Over-emailing; stale segments
OTT / CTV
Household reach, site visits, sales match
Sequenced video; cut-downs
Incremental site visits/sales
Cross-screen frequency
Retail media
SKU-level purchase/browse
Sponsored listings; shoppable
Sales/ROAS
Over-bidding hero SKUs
In-app
Install/retention cohorts, session depth
Rewarded video timing
LTV/CAC
Fatigue; mis-timed interstitials
Fig. Personalization signals, tactics, and KPIs by channel.
Component
Key inputs
Model/logic output
Execution layer examples
Primary owner
Data collection & unification
1P/0P events, context, consent flags
Clean features (recency, affinity, product vectors)
CDP + feature store; clean room joins
Data/marketing ops
ML & predictive analytics
Features + labels (conversions, LTV)
Propensity, LTV, uplift, next-best action scores
Model registry; scheduled retrains
Data science
Real-time optimization & personalization
Scores + live auction/context
Bids, budgets, audiences, creative variants
Smart Bidding, PMax, DCO, DSP algos
Media/performance
Continuous learning loop
Impression→outcome feedback
Updated weights, refreshed cohorts
Lift tests, MMM, model drift checks
Cross-functional
Fig. The AI personalization stack, component by component.
Dimension
Old world (cookies)
Pressure forcing change
AI-era replacement
Identity
Cross-site third-party cookies
State privacy laws, browser limits, user controls
First-party IDs, clean rooms, cohort/on-device APIs
Targeting
Broad segments, rule-based retargeting
Low data quality, consent risk
Predictive audiences from consented + contextual signals
Optimization
Manual, weekly changes
Scale/complexity outgrew human tuning
Auction-time bidding and real-time allocation
Measurement
User-level stitching across sites
Signal loss, attribution gaps
Incrementality tests + MMM 2.0 + modeled reach
Privacy posture
Collect first, govern later
Regulatory liability, trust erosion
Data minimization, purpose limits, transparent controls
Fig. From the cookie era to AI-era replacements.
Capability
Why it matters
What to ask a vendor
Data reconciliation
Trust in numbers
“Show parity vs. source by account, currency and timezone for last 30 days.”
Explainability
Action you can defend
“Open one rec and show features/weights, expected impact and risk.”
Activation safety
Change with guardrails
“Demo policy checks, approvals, rollback; provide audit log.”
Forecasting & elasticity
Spend where ROI rises
“Run a what-if (+10%/-10%) and compare to realized results.”
Quality & supply control
Preserve working media
“Prove MFA/domain exclusions and deal-level attention deltas.”
Fig. Vendor capabilities and the questions that test them.
Channel
Critical signals
Recommended actions
Primary outcome
Social
Attention, creative fatigue, sentiment
Promote winning edits, cap freq, reply/route issues
Lower CPA, higher CTR/VTR
Search
Query trends, auction insights, landing fit
Add/trim themes, adjust bids/budgets, fix LPs
Higher conv. at stable CPA
Display & video
Viewability, attention by domain/deal
Curate supply, sequence, set freq by audience
Incremental reach & lift
Fig. Channel signals and recommended actions.
Aspect
Advertising intelligence
Marketing intelligence
Primary cadence
Scope
Paid media execution and creative rotation
Market, customer, product and pricing insights
Hourly–daily vs. monthly–quarterly
Data inputs
Platform logs (search, social, CTV, retail), quality/attention, first-party events, margin
Research, CRM, web analytics, surveys, market data
Near real time vs. periodic
Output
Ranked, guardrailed actions: budget shifts, bids, creative swaps, supply choices
Strategy, segmentation, positioning, budget envelopes
Activation vs. planning
Success marker
CPA/ROAS, attention, incremental sales
LTV, retention, share growth
Execution vs. direction
Fig. Advertising intelligence vs. marketing intelligence.
Layer
Key inputs
Typical metrics
Decisions unlocked
Media
Impressions, clicks, cost, viewability, completion
CPA, ROAS, CPM, VCR
Budget reallocation, bid strategy
Creative
Variants, hooks, formats, captions
Attention, CTR, fatigue, lift
Rotate/refresh, tailor by audience
Audience
First-party segments, intent, R/F
Conv. rate, new-to-file, LTV proxy
Targeting, suppression, daypart
Quality & context
Suitability, IVT, supply path data
SIVT, attention by domain/deal
Inventory curation, deal selection
Outcomes
Conversions, revenue, margin, incrementality
CAC, contribution margin, iROAS
Scale winners, pause waste
Fig. Intelligence layers and the decisions they unlock.
Metric
Calculation
Good benchmark
What it tells you
CPA (Cost Per Acquisition)
Total Campaign Cost ÷ Conversions
<$50 for D2C, <$200 for B2B
Efficiency of customer acquisition
ROAS (Return on Ad Spend)
Revenue Generated ÷ Ad Spend
3:1 minimum, 5:1+ optimal
Revenue efficiency of campaigns
Incremental Lift
(Test Group Sales - Control Group Sales) ÷ Control Group Sales
15-25% for awareness, 5-10% for DR
True campaign impact
Attention Score
AI-weighted engagement signals (0-100)
65+ for premium inventory
Quality of viewer engagement
Fig. ROI metrics comparison.
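The table's three formulas are straightforward to operationalize. A small sketch using hypothetical campaign numbers (the benchmarks quoted in the table are the article's, not derived here):

```python
def cpa(total_cost: float, conversions: int) -> float:
    """Cost per acquisition: total campaign cost / conversions."""
    return total_cost / conversions

def roas(revenue: float, ad_spend: float) -> float:
    """Return on ad spend: revenue generated / ad spend."""
    return revenue / ad_spend

def incremental_lift(test_sales: float, control_sales: float) -> float:
    """(test - control) / control, from a holdout experiment."""
    return (test_sales - control_sales) / control_sales

print(cpa(10_000, 250))                # 40.0 -> under the <$50 D2C benchmark
print(roas(45_000, 10_000))            # 4.5  -> between the 3:1 floor and 5:1 target
print(incremental_lift(1_180, 1_000))  # 0.18 -> 18% lift, in the awareness range
```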
Platform/Tool
Performance improvement
Meta's Advantage+ Suite

• 32% CPA reduction

• 17% ROAS increase

Case study: Allbirds, 28% CPA reduction

[Source]

Google's Smart Bidding

• 30% average CPA reduction vs. manual bidding

Case study: ForRent.com: 37% CPA decrease

[Sources 1, 2]

Fig. Impact of AI on advertising efficiency.
Feature
AI-powered optimization
Manual optimization
Speed of optimization
Real-time, automatic
Delayed, manual analysis
Data inputs
20+ signals, cross-channel
Limited, often single-channel
Creative testing
Predictive, pre-flight & in-flight
Post-campaign, slow
Budget allocation
Dynamic, based on live performance
Static, pre-set
Learning/Improvement
Continuous, model-driven
Campaign-by-campaign
Fig. AI-driven optimization vs. manual methods.
Platform
Focus & features
Measurement & integration
TVScientific
- CTV ad platform focused on business outcomes
- Unified reporting dashboard
- Attention & exposure tracking
- Robust conversion tracking
- Google Analytics integration
- Tracks exposure, conversions (e.g., CPA, ROAS)
- Deterministic matching of ad views to conversions
Vibe.co
- CTV ad platform for awareness, leads, and sales
- Outcome-driven channel
- Measurement across all screens
- Integrates with Airbridge
- Unified KPI dashboard
- Tracks LTV, CPI, ROAS
Bazaarvoice Vibe
- Content marketing with creator/user-generated content
- Unified reporting
- Campaign, creator, and channel attribution
- Tracks reach to revenue
- Proves ROI across all touchpoints
Snapit AI's VIBE
- All-in-one marketing intelligence
- Unified dashboard
- Performance tracking across channels
- Social media & customer interaction metrics
- Holistic marketing view
Fig. Examples of CTV & marketing platforms: Features & measurement comparison.
Business alignment
Technical infrastructure

• Can you draw a direct line from every media metric to a specific business outcome?

• Are you using industry-specific benchmarks rather than generic completion rates?

• Do you track incremental business impact through holdout testing?

• Can you compare performance across all DSPs with standardized metrics?

• Are you measuring CTV impact beyond last-click (7-28 day windows)?

• Are you tracking engagement quality beyond binary completion rates?

Strategic optimization
Client value

• Can you predict which creative variants will perform before launch?

• Do you optimize for audience quality and likelihood to convert?

• Can you shift budgets automatically based on real-time performance signals?

• Can you explain campaign results in terms a CFO would understand?

• Do you provide strategic insights for future campaigns, not just reports?

• Can you benchmark performance against industry and competitors?

Fig. The 2025 CTV measurement checklist for agencies.
Layer
Primary user
Core purpose
Key controls
Typical pricing
Common examples
DSP
Advertisers / agencies
Plan, target, bid, measure
Audiences, frequency, pacing, supply path, PMPs/PG
CPM (optimize to CPC/CPA); tech/service fees
DV360, The Trade Desk, Amazon DSP, Yahoo DSP, Adobe, Roku OneView
SSP
Publishers / media owners
Package and sell inventory; maximize yield
Floors, brand safety, creative/category blocks, deal priority
First-price auctions; SSP take rates; PG terms
Magnite, PubMatic, Index Exchange, OpenX, Microsoft Monetize
Ad exchange
Marketplace infrastructure
Run RTB auctions; route bids
Auction type, deal IDs, reporting
Market-clearing price per impression; exchange fees
Google Authorized Buyers (AdX), OpenX Exchange, Index Exchange
Fig. DSP vs SSP vs ad exchange at a glance.
Step
What happens
Initiated by
Standards/signals
1
Campaign rules set (budget, audiences, caps)
Advertiser via DSP
Frequency rules, deal IDs
2
Impression described and offered
Publisher via SSP/ad server
OpenRTB request, ads.txt/sellers.json
3
Bid evaluation and response
DSP
Bid price, creative eligibility
4
Auction and winner selection
Ad exchange
First-price auction, timeout SLAs
5
Enforcement and render
SSP/ad server
Floors, brand safety, priority
6
Logging and optimization
Both sides
Log-level data, attribution, SPO
Fig. How they work together: the impression flow.
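The six-step impression flow above can be sketched as a toy model. This is an illustrative simplification, not any real DSP or SSP API; the `Bid` class and `run_auction` function are hypothetical names, and a real auction involves OpenRTB requests, timeouts, and many more checks.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    dsp: str
    price_cpm: float   # bid price in CPM (step 3)
    creative_ok: bool  # passed creative/category eligibility checks

def run_auction(bids, floor_cpm):
    """Step 4: first-price auction — highest eligible bid at or above the floor wins."""
    eligible = [b for b in bids if b.creative_ok and b.price_cpm >= floor_cpm]
    if not eligible:
        return None  # no fill; the SSP/ad server may fall back to other demand
    winner = max(eligible, key=lambda b: b.price_cpm)
    # In a first-price auction the winner pays its own bid, not the second price.
    return winner

bids = [Bid("DSP-A", 4.20, True), Bid("DSP-B", 5.10, True), Bid("DSP-C", 6.00, False)]
winner = run_auction(bids, floor_cpm=3.50)
print(winner.dsp, winner.price_cpm)  # DSP-C is excluded on creative grounds, so DSP-B wins
```

Note that DSP-C bid highest but fails the creative eligibility check (step 5's enforcement applied pre-auction here for simplicity), which is why DSP-B clears at its own price.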
Challenge
What it looks like
Who owns the fix
Quick fix
Longer-term fix
Fee opacity
Unknown deltas; unclear take rates
Buyer + seller
Demand LLD + sellers.json in IOs
Consolidate paths, SPO contracts
Fraud/MFA
High IVT; low-quality domains/apps
Buyer + SSP
Pre-bid verification; inclusion lists
Curated supply; MFA thresholds in MSAs
Duplicate auctions
Self-competition; volatile CPMs
Buyer + SSP
Prefer direct sellers; cap SSP count
Enforce cross-exchange IDs; SPO rules
Identity shifts
Reach loss; freq. spikes
Buyer
First-party data; contextual
Clean rooms; conversion APIs
Concentration risk
Single-vendor dependency
Both
Backup vendor path
Multi-stack resilience, portability
Fig. Challenges and fixes cheat sheet.
Approach
How decisions are made
Pros
Limitations
Manual direct buying
Humans negotiate IOs and placements
High control over specific deals
Slow, labour-intensive, limited scalability
Programmatic (rules-based)
Buyers set fixed rules and bid strategies
Automation of basic tasks, broader reach
Still requires frequent manual tuning
AI-assisted programmatic buying
Models optimise bids and audiences in real time
Higher efficiency and performance potential
Requires good data, guardrails, and oversight
Fig. Manual vs programmatic vs AI-assisted programmatic.
Platform type
Primary owner
Core role in the stack
Typical examples
DSP
Advertiser / agency
Buy impressions and manage campaigns across inventory
The Trade Desk, DV360, Amazon DSP
SSP
Publisher / media owner
Package and monetise inventory via auctions and deals
Magnite, Microsoft Monetize, Index Exchange
Ad exchange
Neutral marketplace
Run auctions between buy-side and sell-side platforms
Google AdX, Index, Xandr
DMP / CDP
Advertiser or publisher
Build and activate audience segments from data
Lotame, Adobe Audience Manager, Salesforce Data 360
Ad network
Network / intermediary
Aggregate and resell inventory as curated packages
Criteo, AdRoll, Outbrain
Full-stack platform
Enterprise tech provider
Combine demand, supply, and data tools in one environment
Adobe Advertising, StackAdapt, Amobee/Nexxen
Fig. Core platform types and roles.
Category
Platform
Best for
Key strengths
DSP
The Trade Desk
Enterprise and advanced programmatic buyers
Independent, omnichannel, strong in CTV and identity
DSP
Google Display & Video 360 (DV360)
Brands deep in Google Marketing Platform
Ties into Google stack, YouTube, strong cross-channel planning
DSP
Amazon DSP
Retail, ecommerce, and brand campaigns tied to shopping data
Uses Amazon shopper data; runs on and off Amazon
DSP
Simpli.fi
Local, mid-market, and agency workflows
Good for local targeting, flexible buying models
DSP
MediaMath (by Infillion)
Buyers wanting a configurable, composable DSP
Longstanding programmatic tech, flexible integrations
SSP / Exchange
Magnite
Premium publishers, especially video and CTV
Large independent sell-side platform, strong in CTV
SSP / Exchange
Microsoft Monetize SSP (Xandr)
Publishers tapping Microsoft and third-party demand
SSP plus marketplace tools in Microsoft’s ad stack
SSP / Exchange
Index Exchange
Publishers and buyers wanting transparent auctions
Global exchange with detailed supply path data
SSP / Exchange
Google Ad Manager / AdX
Publishers and buyers needing Google-scale liquidity
Major exchange integrated with Google’s ad server
DMP
Lotame
Marketers and publishers using anonymous audience data
Longstanding DMP; evolving into data collaboration tools
DMP
Adobe Audience Manager
Brands already on Adobe Experience Cloud
Strong for unifying audiences and activating segments
CDP
Twilio Segment
Product-led and digital brands needing flexible CDP
Real-time data capture, profiles, and wide integrations
CDP
Tealium
Organisations needing real-time, vendor-neutral data activation
Strong event collection, identity resolution, and governance
CDP
Salesforce Data 360 (CDP)
Salesforce-centric enterprises
Single customer view across sales, service, and marketing
Ad network / Commerce
Criteo
Commerce and retail-focused advertisers
Commerce media, dynamic retargeting, retailer inventory
Ad network
AdRoll
SMB and ecommerce brands
Accessible cross-channel retargeting and prospecting
Native / Network
Outbrain
Brands using native and content-style placements
Large native network across premium publishers
Full-stack
Adobe Advertising
Enterprise brands on Adobe’s stack
DSP plus tight integration with Adobe Analytics and CDP
Full-stack
StackAdapt
Agencies and mid-market buyers
Omnichannel self-serve, AI optimisation, strong education/support
Full-stack (video-first)
Amobee / Nexxen
TV and CTV-heavy advertisers
Unified TV + digital video focus, end-to-end stack
Fig. Comparative table of the programmatic ad platforms.
Criterion
Questions to ask the vendor
Warning signs to watch for
Budget alignment
What are your minimums and pricing model?
Vague answers on fees or “all-in CPM” with no breakdown
Data integration & reporting
How do you ingest first-party data, and what reports can I export?
Limited APIs, no log-level access, siloed dashboards
Brand safety & transparency
Which verification tools and supply-path reports do you support?
No independent verification, opaque auction logic
Cross-channel capability
Which channels are truly native, and which are resold from others?
Patchwork of add-ons, no unified frequency capping
Privacy compliance
How do you handle consent, data retention, and user rights?
No clear documentation or legal sign-off on data handling
Fig. Fast checklist for platform evaluation.
AI capability
Where it typically runs
What it should improve
What you still own as a team
Predictive bidding & pacing
DSP algorithms, optimisation layers
Efficiency, CPA/ROAS, budget utilisation
KPI definitions, guardrails, exclusions
Audience discovery & scoring
DSP, CDP, data platforms
Target quality, reach, incrementality
Audience strategy, data quality and governance
Creative and performance intelligence
DCO engines, creative analytics tools
Engagement rates, creative fatigue
Brand guidelines, messaging strategy
Fraud detection & brand safety
SSPs, exchanges, verification partners
IVT reduction, suitability, waste control
Risk tolerance, blocklists, escalation paths
Fig. Where AI lives in a modern programmatic stack.
Dimension
Adtech
Martech
Primary data
Main KPI
Purpose
Paid reach
Owned engagement
Audience & signals
Incremental reach
Core tools
DSP, SSP, ad server
CDP, CRM, CMS
First-party + context
LTV/ROAS
Where it runs
Web/app, CTV, DOOH
Website/app, email
Consented IDs
Conversion rate
Decision loop
Bid & serve
Segment & message
Clean room joins
Revenue lift
Fig. Adtech vs martech snapshot.
Component
Used by
Main job
Key inputs
Primary KPI
DSP
Advertiser/agency
Buy & optimise
Audiences, budget, goals
CPA/ROAS
SSP
Publisher
Package & auction
Placements, floors
Yield/eCPM
Ad server
Both sides
Deliver & log
Creatives, caps
Viewability
DMP/CDP
Advertiser
Build segments
First-party signals
Match rate
Exchange
Marketplace
Clear auctions
Bid requests/deals
Win rate
Fig. Who uses what in the stack.
Deal type
Control
Predictability
Typical use
Pricing
Open auction
Low
Low
Scale, testing
Dynamic
Private marketplace
Medium
Medium
Quality + control
Dynamic
Preferred deal
Medium
Medium-high
Priority access
Fixed/dynamic
Programmatic guaranteed
High
High
Premium CTV/video
Fixed
Fig. Programmatic deal types at a glance.
Signal
Source
Used by
Purpose
Placement meta
Publisher/SSP
DSP
Fit & quality
Context/category
Page/app
DSP
Brand suitability
Device/geo
Request
DSP
Relevance
Deal ID
PMP/PG
DSP/exchange
Priority rules
Fig. RTB request essentials.
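The signals in the table map onto fields in an OpenRTB-style bid request. The sketch below is a heavily simplified example shaped after OpenRTB 2.x field names (`imp`, `site`, `device`, `pmp`); real requests carry many more fields, and the specific values here are invented for illustration.

```python
import json

# Simplified bid request illustrating the four signal groups above.
bid_request = {
    "id": "req-123",
    "imp": [{
        "id": "1",
        "banner": {"w": 300, "h": 250},          # placement meta: slot size/format
        "bidfloor": 2.50,
        "pmp": {"deals": [{"id": "deal-abc"}]},  # deal ID driving PMP/PG priority rules
    }],
    "site": {"cat": ["IAB19"], "page": "https://example.com/article"},  # context/category
    "device": {"ua": "Mozilla/5.0 ...", "geo": {"country": "US"}},      # device/geo
}
print(json.dumps(bid_request, indent=2))
```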
Approach
Horizon
Needs
Best for
Watch-outs
Pixel/S2S
Short
Tags/feeds
Fast readouts
Bias/duplication
Experiments
Mid
Design/control
Incrementality
Sample size
MMM
Long
History & spends
Mix & budget
Granularity
Clean room
Mid
Partners & IDs
Closed-loop
Access rules
Attention
Short
Viewability/eyes-on
Creative/media fit
Standard variance
Fig. Measurement methods matrix.
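For the experiments row, the core incrementality read is a simple lift calculation against a holdout. The sketch below assumes clean treated/control splits of equal quality; the numbers are invented, and a real readout would also report confidence intervals and check sample size, as the matrix's watch-out notes.

```python
def incremental_lift(treated_conv, treated_n, control_conv, control_n):
    """Relative lift: (treated conversion rate - control rate) / control rate."""
    cr_t = treated_conv / treated_n
    cr_c = control_conv / control_n
    return (cr_t - cr_c) / cr_c

# Hypothetical campaign: 540 conversions among 100k exposed vs 450 among 100k held out.
lift = incremental_lift(treated_conv=540, treated_n=100_000,
                        control_conv=450, control_n=100_000)
print(f"{lift:.0%}")  # 20%
```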
Benefits
Challenges
Scalable reach across open web, apps, CTV
Privacy and identity changes
Advanced targeting and personalisation
Ad fraud and brand safety risks
Budget efficiency and ROI optimisation
Fragmentation and complex integrations
Cross-channel orchestration (CTV, DOOH, in-app)
Rising costs and auction competition
Transparent controls (frequency, placements)
Measurement and attribution complexity
Faster execution and automation
Data governance and consent management
Flexible deal types (open, PMP, PG)
Supply-path opacity and fees
Continuous testing and learning
Talent and operational maturity gaps
Access to premium inventory at scale
Walled gardens and limited interoperability
Clean-room and first-party data activation
Signal loss from cookie/ID constraints
Benefit
Why it matters
Best uses
KPIs to track
Higher-quality engagement and CTR
Feels like content, so people choose to interact rather than ignore it.
Driving qualified traffic and upper–mid funnel actions.
CTR, click quality, dwell time, scroll depth
Better user experience and trust
Matches the publisher’s tone and layout, lowering ad fatigue and skepticism.
Brand building on premium publishers.
Brand lift, ad recall, favorability, attention time
Privacy-safe relevance (context + first-party)
Delivers targeting without third-party IDs; aligns message to what the user is reading or watching now.
Cookieless planning, regulated categories.
Context category performance, first-party audience response, CPA
Scale with control
Programmatic native reaches widely while adapting creative to each placement.
National campaigns that still need fit and flexibility.
Reach, frequency, site/app mix, effective CPM
Brand safety and suitability
Context controls and verification keep ads in suitable environments.
Sensitive categories and reputation-conscious brands.
Viewability, invalid traffic, suitability pass rate
Cost efficiency and durable ROI
Higher intent and longer attention reduce wasted impressions and lower effective CPA.
Always-on acquisition and retargeting.
CPA/ROAS, assisted conversions, time on site
Cross-channel integration (incl. CTV)
Native patterns now exist on web, apps, and TV UIs; they work with video for lift.
Launches and storytelling that need broad, consistent presence.
Incremental reach, brand lift, cross-screen overlap
Data-driven and testable
Headlines, images, and context can be A/B tested and optimized in flight.
Continuous improvement across creative and placements.
Variant win rates, creative fatigue, cost per quality visit
Stronger storytelling canvas
Articles, video, and light interactivity let you teach, not just tell.
Consideration plays and authority building.
Dwell time, completion rate, saves/shares
Future-ready (cookieless, AI, attention)
Works with privacy-safe signals, benefits from AI optimization, and proves attention.
Long-term planning and measurement modernization.
Attention minutes, suitability-compliant reach, incrementality/MMM outputs
Fig. 10 major benefits of native ads.
Primary objective
Best-fit native formats
Plan to
Measure with
Awareness
Publisher studio features; CTV home/menu native; in-feed
Maximize quality reach & time-in-experience
Viewable reach, attention/dwell, brand lift
Consideration/engagement
In-feed native; sponsored articles; interactive
Drive qualified sessions & repeat exposure
CTR, scroll depth, time on page, site journeys
Conversions
Feed-based product native; retargeting; commerce/native
Move to action efficiently
CPA/ROAS, conversion rate, incrementality/MMM
Fig. Objectives → formats → KPIs.
Type
Typical placements
Primary aim
Best for
Creative must-have
Large-format
Freeway digitals, landmark LEDs (e.g., Times Square)
Reach & fame
Launches, cultural tentpoles, entertainment
Big type, high contrast, one idea visible at speed
Place-based
Airports, gyms, offices, campuses, restaurants, transit
Context & dwell
Mid-funnel education, app/site traffic
Venue-relevant copy, time/daypart variants
Point-of-purchase (POP)
In-store endcaps, checkout, gas pumps
Last-mile influence
Basket size, promo redemption, add-on items
Offer/price, product visual near shelf
Fig. DOOH types side-by-side.
Feature
Traditional OOH
DOOH
Content flexibility
Single printed creative for entire campaign
Multiple creatives, animations, and video in rotation
Campaign adjustments
Fixed for weeks/months once installed
Updated instantly, paused or modified in real-time
Targeting
Basic location and estimated traffic counts
Data-driven targeting by location, time, audience demographics
Measurement
Estimated reach based on traffic data
Precise impression counts, dwell time, attribution to actions
Creative capability
Static image only
Motion graphics, interactive elements, data-triggered content
Lead time
Weeks to months for booking and production
Hours to days for programmatic campaigns
Cost structure
Long-term commitments, printing costs
Flexible budgets, no printing costs
Contextual relevance
Same message regardless of conditions
Adapts to weather, time, events, or triggers
Fig. Benefits of DOOH.
Challenge
Benefit
Fragmented supply & measurement across owners
Big, unskippable reach in high-traffic real-world environments
Creative must be legible at speed & distance
Dynamic, data-triggered creative that adapts by time, place, and weather
Proving true incrementality vs. correlation
Measurable outcomes like footfall, sales lift, and brand lift
Managing frequency & duplication across channels
Omnichannel buying & reporting in familiar DSPs
Extra workflow to set up triggers & versions
Fast activation & easy mid-flight swaps and optimization
Criteria
Native Ads
Display Ads
User Experience
Non-disruptive; designed to feel like a natural part of the content
Interruptive; designed to capture attention outside of primary content
Click-Through Rate (CTR)
Typically higher due to relevance and non-intrusive format
Typically lower due to banner blindness and user habituation
Cost Model
Often higher CPC/CPM due to premium placement and better engagement
Generally lower CPM; cost-effective for broad reach
Best For
Building brand affinity, content distribution, and high-intent conversions
Mass brand awareness, retargeting, and direct-response campaigns
Resistance to Ad Blockers
High resistance; blends with organic content and often goes undetected
Low resistance; easily identified and blocked by ad-blocking software
Model
Native Ads
Display Ads
CPC (Cost-Per-Click)
Higher cost, but typically better quality traffic and engagement
Lower cost, effective for driving site traffic and conversions
CPM (Cost-Per-Mille)
Premium CPM rates for high-quality, contextual placements
Lower CPM rates, ideal for mass reach and brand awareness
CPA (Cost-Per-Acquisition)
Can be highly efficient due to engaged, qualified audience
Often used in retargeting campaigns for conversion optimization
Objective
Native Ads
Display Ads
Branding
Excellent for building trust and brand affinity through content
Ideal for massive reach and frequency across the web
Performance
Highly effective for leads and conversions from engaged users
Best for retargeting and direct-response campaigns
Funnel Stage
Display Ad Role
Native Ad Role
Combined Outcome
Awareness
Broad-reach video & banner ads to generate initial interest.
Promoted content on premium sites to build early brand trust
Casts a wide net while establishing quality perception
Consideration
Retargeting ads reminding users of viewed products
In-feed native banners offering detailed guides or webinars
Nurtures leads with both reminders and valuable education
Conversion
High-impact display ads with strong CTAs for final pushes
Sponsored recommendations in trusted content environments
Closes the sale by being both persuasive and contextually relevant
Traditional display
Rich media ads
User interaction
Click only
Click, swipe, expand, play, hover, choose
Typical CTR
~0.1% for standard banners
0.14–0.44%+ for rich banners; up to multi-x lifts in advanced formats
Attention
1–2 seconds glance
Often 10–90+ seconds of active engagement
Measurement
Impressions, clicks
Full interaction maps, dwell time, drop-off
Creative options
Static/animated images
Video, AR, 360°, carousels, games, shoppable
Programmatic
Fully supported
Broad support across display, OLV, CTV, native
Format
Primary goal
Best funnel stage
Typical channels
Expandable banners
Deeper exploration
Awareness → consideration
Web display, in-app display
Interactive video ads
Time spent, branded actions
Awareness → consideration
OLV, CTV, YouTube, streaming apps
Carousel (swipe) ads
Product discovery, ROAS
Mid-funnel → conversion
Meta, LinkedIn, programmatic native
Playable & gamified ads
Qualified engagement, installs
Mid-funnel → conversion
Mobile in-app, gaming networks
360° & immersive (AR/VR) ads
Try-before-you-buy, wow factor
Awareness → consideration
Snapchat, Instagram, TikTok, WebAR
Shoppable ads
Direct sales
Conversion
Social, retail media, CTV, OLV
Native rich media ads
Education, thought leadership
Upper → mid-funnel
Premium publishers, native networks
Dynamic rich media retargeting
Recovery, upsell, cross-sell
Mid-funnel → conversion
Display, social, open web
Interactive infographics
Education, data storytelling
Awareness → consideration
Publisher hubs, content hubs
Social AR & story ads
Social engagement, brand lift
Awareness → consideration
Snapchat, Instagram, TikTok
Fig. Rich media format cheat sheet.
Aspect
Static banner
Expandable banner
Why it matters
First impression
Single frame, limited space
Teaser first, deeper panel on interaction
Lets you stay lightweight but add depth
User behaviour
Click or ignore
Expand, browse, then click
Captures mid-funnel interest, not just clicks
Creative canvas
One visual and short copy
Space for galleries, video, copy blocks, forms
Better for complex products or multi-offers
Key metric
CTR only
Expand rate, dwell time, secondary actions, then CTR
Gives optimization levers beyond “did they click?”
Fig. Expandable banners vs static banners.
Shoppable format
Typical use case
Example placements
Primary KPI
Tagged social posts/stories
Single product or small range
Instagram, TikTok, Facebook
Purchases, cost per purchase
Shoppable video
Product in context, reviews
YouTube, TikTok, CTV commerce units
Click-through, conversion rate
Product feed display units
Always-on catalog promotion
Retail media, GDN, open web
ROAS, revenue per impression
Live shopping events
Launches, drops, bundles
TikTok Live, Instagram Live, retail
GMV, average order value, watch time
Fig. Shoppable formats at a glance.
Factor
Programmatic display
Native advertising
Best use case
What it is
A buying method that automates impression-level bidding and delivery across formats (display, video, audio, DOOH, CTV).
A format strategy where ads match the design, location, and behavior of surrounding content (e.g., in-feed, branded content).
Treat “programmatic” as the rails; “native” as the look and feel.
Creative/UX
Standard IAB sizes, rich media, out-stream video; built for quick testing and rotation (incl. DCO).
Headlines, thumbnail, brand label, and disclosure rendered to fit the feed or page style (often via OpenRTB Native).
Rapid iteration vs. deeper message absorption.
Buying paths
Open exchange, PMPs, programmatic guaranteed—managed in a DSP.
Same programmatic rails (open/PMP/PG) plus native specialists; relies on structured native fields.
One plan can run both and unify frequency/safety.
Primary KPIs
Reach, on-target %, viewability/attention, CPC/CPA/ROAS, incremental sales/reach.
Time on content, scroll depth, branded content engagement, brand lift, assisted conversions.
Pick KPIs that reflect how users consume each format.
Strengths
Scale, speed, standardization, granular controls (frequency, brand safety, SPO).
Context fit, editorial adjacency, mid-funnel education and consideration.
Fast testing at scale vs. storytelling within content.
Watch-outs
Creative fatigue; variable context quality without curation; needs strong verification.
Disclosure must be clear and proximate; execution quality varies by publisher; longer production cycles for premium content.
Scenario
Recommendation
Why this path works
Build fast awareness with control (reach, frequency, brand-lift)
Use programmatic display/CTV/DOOH with unified frequency caps and curated supply (PMP/PG)
Scales quickly across premium inventory while keeping suitability and duplication in check; easy to add third-party brand-lift
Drive applications, leads, or sales (conversion KPIs)
Use programmatic performance/retargeting, powered by clean conversion signals (pixel/CRM) and multiple creative/DCO variants
Algorithms allocate spend to high-propensity audiences and best-performing creatives; retargeting captures known intent
Control
Where it runs
Use it for
Risk if omitted
Pre-bid suitability/IVT filters
DSP/exchange
Block unsafe/MFA/IVT before spend
Higher fraud, wasted impressions
ads.txt / sellers.json / schain checks
DSP/SSP verification
Authorized selling and path transparency
Arbitrage, spoofed inventory
Allowlists/curated deals
DSP setup
Critical flights, premium contexts
Inconsistent environment quality
Post-bid verification & attention
Third-party verification
Ongoing QA and optimization inputs
No feedback loop to improve buys
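The ads.txt check in the table can be automated. Below is a minimal sketch of parsing a publisher's ads.txt per the IAB's comma-separated line format (`<ad system domain>, <seller account ID>, <DIRECT|RESELLER>[, <certification authority ID>]`) and verifying a seller; the sample file content is invented, and production verification would also fetch the file over HTTPS and cross-check sellers.json.

```python
def parse_ads_txt(text):
    """Parse ads.txt lines into (ad_system_domain, seller_id, relationship) tuples."""
    entries = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            entries.append((fields[0].lower(), fields[1], fields[2].upper()))
    return entries

def is_authorized(entries, ad_system, seller_id):
    return any(d == ad_system and s == seller_id for d, s, _ in entries)

sample = """# hypothetical publisher ads.txt
google.com, pub-1234567890, DIRECT, f08c47fec0942fa0
examplessp.com, 5678, RESELLER
"""
entries = parse_ads_txt(sample)
print(is_authorized(entries, "examplessp.com", "5678"))  # True
```

A buyer-side workflow would run this against every domain on the plan and reject any supply path whose seller ID is absent, which is the "authorized selling" control in the table.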
Path
Use when
Main advantages
Common watch-outs
Open exchange
You need fast scale and price discovery
Broad reach, learning velocity
Supply duplication, variable quality
Private marketplace (PMP)
You want curated environments or publisher/retailer data
Higher suitability, clearer fees
Limited scale, higher floors
Programmatic guaranteed (PG)
You need reserved impressions and predictable delivery
Guaranteed placement, stable pacing
Less price flexibility, longer setup
Signal in the request
What it tells the buyer
Common source
Gotchas to watch
Placement & size
Where the ad renders and in what dimensions
Publisher ad server / SSP
Misdeclared size or stacked/hidden slots
Page/app context
Content category and environment
SSP taxonomy, URL/app bundle
Weak or missing page semantics
Device & UA hints
Form factor and capabilities
Structured UA / device graph
Inconsistent UA parsing across browsers
Location & connection
Approximate geo, bandwidth
IP-derived geo, SDK
VPNs, carrier NATs, shared IPs
Allowed formats
Display, video, native, audio
SSP / ad server
Format mismatch with your creative set
Privacy/consent flags
What data can be used
GPP string, TCF, COPPA flag
Missing or invalid consent propagation
Comparison channel
Where audio shines vs this channel
Where the other channel wins
How to use them together
Display (banners/web & in-app)
Reaches people during commutes, chores, and workouts; higher attentive seconds and common 80–90%+ completion on streams; less cluttered listening environment.
Lowest-cost impressions with immediate clicks and on-screen formats (native, rich media).
Run audio for dependable message delivery, then retarget exposed listeners with display for the click. Cap cross-channel frequency.
Online video (OLV)
Covers more daily moments when watching isn’t feasible; adds frequency without visual fatigue; keeps the story alive between video exposures.
Sight, sound, and motion for product demos and rich storytelling when the screen is in view.
Use audio to shape frequency around video peaks (e.g., 1 video → 2 audio within 72 hours). Deduplicate reach across both.
Connected TV (CTV)
Extends big-screen messages into daily routines at lower cost; adds efficient incremental reach among light-TV viewers and younger audiences.
Lean-back, premium screen experience with cinematic creative and household-level targeting at scale.
Plan in the same DSP, set sequence rules (CTV first exposure, audio follow-ups), and manage household-level frequency.
Social
Fewer visual distractions; podcast host reads transfer trust from creator to brand; strong for mid-funnel reinforcement.
Rapid creative testing, shoppable formats, and precise on-platform conversion tools.
Use audio for credibility and habit moments; retarget audio-exposed users with social for visual reminder or conversion; keep messaging consistent.
Checklist item
What to include / do
Brief
Objective, audience, markets, flight dates, KPIs, budget split by format, deal list.
Assets
Final WAV/MP3 masters, loudness-normalized per publisher spec; companion/CTA card files with click tracking; vanity URL/code list.
Targeting
Geo, device, daypart, contextual lists, first-party segments, exclusions.
Suitability
Content categories to avoid; episode-level filters for podcasts.
Delivery controls
Frequency caps (daily/weekly), pacing, pod position preferences (pre/mid/post-roll), bid ceilings.
Measurement
IAB v2.2 confirmation for podcast partners; VAST events tested; brand-lift booked; attribution pixels live; deduplicated reporting configured.
Privacy
Consent string passing verified; data-use notes for first-party segments documented.
Go-live QA
Test plays, companion rendering, click-throughs, event fires; confirm costs and pacing after the first 24–48 hours.
Optimization cadence
Weekly creative and supply reviews; pre-agreed mid-flight reallocation rules (e.g., shift 15–25% to top-quartile placements).
Environment
Suggested daily cap
Suggested weekly cap
Notes
In-stream music
1–2 per person
3–4 per person
Even pacing; rotate variants
Podcasts (DAI)
1 per person
2–3 per person
Mix mid-/pre-roll; vary scripts
Smart speaker
1 per household
2 per household
One clear, speakable CTA
Live/linear radio
1–2 per hour
6–8 per week
Prefer shorter, more frequent breaks
Fig. Frequency & pacing guardrails.
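Enforcing caps like those above is a small bookkeeping problem. The sketch below is an illustrative in-memory model (the `FrequencyCapper` class is hypothetical, and real systems track caps per identity graph or household ID, not a simple string):

```python
from collections import defaultdict
from datetime import date, timedelta

class FrequencyCapper:
    """Tracks impressions per user and enforces daily and rolling 7-day caps."""
    def __init__(self, daily_cap, weekly_cap):
        self.daily_cap, self.weekly_cap = daily_cap, weekly_cap
        self.log = defaultdict(list)  # user_id -> list of impression dates

    def can_serve(self, user_id, today):
        week_start = today - timedelta(days=6)
        hits = self.log[user_id]
        daily = sum(1 for d in hits if d == today)
        weekly = sum(1 for d in hits if d >= week_start)
        return daily < self.daily_cap and weekly < self.weekly_cap

    def record(self, user_id, today):
        self.log[user_id].append(today)

cap = FrequencyCapper(daily_cap=1, weekly_cap=3)  # podcast-style guardrails
today = date(2025, 1, 6)
print(cap.can_serve("u1", today))  # True
cap.record("u1", today)
print(cap.can_serve("u1", today))  # False: daily cap of 1 is already hit
```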
Format
Where it runs
Best for
Targeting & context
Common KPIs / notes
In-stream audio (music & digital radio)
Ad-supported music streams and digital radio apps (Spotify, Pandora/SiriusXM, iHeart)
Efficient reach, consistent delivery, frequency shaping alongside CTV/video
First-party contexts (genre, mood, activity), device (mobile, in-car, smart speaker), daypart, geo
Listen/completion rate, reach & frequency, companion clicks, site/app visits via attribution
Podcasts (DAI host-read or produced)
Podcast networks and marketplaces via dynamic ad insertion
Trust, niche audiences, mid- to lower-funnel outcomes
Show/episode-level context using transcripts & suitability signals; audience and geo layers
IAB-compliant delivery counts, brand lift, vanity URL/code usage, site/app visits
Smart speaker / voice-interactive
Alexa, Google Assistant, other voice platforms; some mobile “voice to act” formats
Hands-free engagement, simple actions (send info, set reminder)
Household context, time of day, content category; voice intent prompts
Voice interactions, follow-up events, brand lift; household-level attribution where supported
In-app radio (live/linear streams)
Live station streams inside radio apps plus curated digital stations
Scale with strong daypart control; live news/sports alignment
Station/genre, daypart, geo; device (mobile/in-car)
Completion rate, reach & frequency, site/app visits; manage ad-load and cap frequency
Fig. Programmatic audio ad formats (quick reference).
Goal
What to measure
Tool/example
Where to use
Delivery quality
Starts, quartiles, completes
VAST events / IAB v2.2
In-stream + podcasts
Brand impact
Awareness/consideration/intent lift
Publisher brand-lift study
At least one major line
Site/app outcomes
Post-exposure visits/events
Platform analytics + pixels
Streams + podcasts
Cross-media value
Deduped reach/frequency vs. CTV/video
Cross-media reporting
Omnichannel wrap-ups
Fig. Measurement & attribution cheat sheet.
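For the delivery-quality row, completion rate falls straight out of VAST-style quartile event counts. A minimal sketch, assuming the event names follow VAST conventions (`start`, `firstQuartile`, `midpoint`, `thirdQuartile`, `complete`) and the counts are invented:

```python
def completion_rate(events):
    """Share of started plays that fired the `complete` event."""
    starts = events.get("start", 0)
    if starts == 0:
        return 0.0  # avoid division by zero when nothing started
    return events.get("complete", 0) / starts

events = {"start": 1000, "firstQuartile": 920, "midpoint": 880,
          "thirdQuartile": 850, "complete": 830}
print(f"{completion_rate(events):.1%}")  # 83.0%
```

Watching where the quartile counts drop (920 → 880 → 850 here) also flags creative fatigue points mid-spot, not just the final completion figure.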
Week
Task
Owner
Output
1
Lock audience & KPI
Marketing + Analytics
1-page brief with success metric
2–3
Data onboarding & ID resolution
Data/MarTech
Hashed lists; ID spine verified
3–4
Deals live (CTV + one DSP)
Media team/agency
PG/PMP contracts; frequency plan
5–8
Flight + lift test
Media + Analytics
Interim read (reach, on-target, EVR/CPCV)
9–10
Clean-room matchback
Analytics + Partner
Incrementality report & scale plan
Fig. Pilot timeline (first-quarter quick start).
Channel
Primary KPI
Also track
Notes
CTV/OTT
Cost per completed view (CPCV)
Deduped reach, on-target rate, site/quote/sales
Use SSAI creative IDs; cap HH frequency
Mobile in-app
Cost per incremental app action (CPIA)
Post-view conversions via clean room
Tight geo/time windows for relevance
Display/online video
Effective frequency achieved
Cohort brand lift; MMM contribution
Govern frequency at identity level
DOOH
Exposure-to-visit rate (EVR)
Geo-lift, time-of-day response
Use mobile panels/clean-room matchbacks
Fig. Channel KPIs you can own in the brief.
Step
What to do
Why it helps
Define the audience spine
Build consented first-party segments (customers, lapsed buyers, look-alikes). Resolve them to durable IDs with an identity partner or publisher IDs in a data clean room so no raw PII changes hands.
You can activate and measure without exposing personal data.
Pipe the audience into programmatic buying tools
Send the audience spine to your DSP and CTV partners. Buy inventory via PMPs or PG for quality control, with open exchange RTB as a supplement when needed.
Automation finds scale quickly; private deals keep quality predictable.
Control frequency and creative at the person/household level
Use the ID spine for cross-channel frequency management and dynamic creative rules.
Reduces waste and improves relevance; avoids over-exposing the same people.
Measure outcomes and feed them back
Run incrementality tests and clean-room matchbacks to sales, quotes, or bookings. Feed results into your DSP and creative stack to reweight audiences and rotate creative.
Spend shifts to what works without waiting for the next planning cycle.
Fig. Audience activation workflow.
Deal type
Use when
Pros
Watch-outs
Programmatic guaranteed (PG)
Fixed supply & KPI certainty
Quality + delivery control
Less flexibility mid-flight
Private marketplace (PMP)
You want quality + some scale
Better transparency & pricing
Inventory can be finite
Open exchange RTB
Incremental scale
Cost discovery; breadth
Supply path hygiene; signal loss
Direct/household addressable (CTV/MVPD)
Deterministic HH targeting
Strong ID, SSAI logs
Fragmented workflows if unmanaged
Fig. Activation deal-type decision guide.
Data source
Example signals
Identity step
Activation route
First-party (CRM, web, app)
purchases, recency, product interest
Hash/tokenize; resolve to person/HH in graph
DSP, CTV/MVPD via clean room or platform IDs
Publisher/platform IDs
login/subscriber identifiers
Overlap in clean room
PG/PMP deals; curated marketplaces
Partner data (2P/3P)
in-market, lifestyle, location
Map to durable IDs with interoperability checks
Audience extension across web/app/CTV
Fig. Audience data → identity spine → activation map.
Channel type
Number of major platforms
Measurement challenge
Attribution complexity
Streaming video
~10–12 platforms often >1% share of TV usage (varies by month)
Cross-platform frequency capping; identity fragmentation
Multiple conversion paths across apps/devices; limited log-level parity
Social media
~6 dominant platforms globally (FB, IG, X, YT, TikTok, Pinterest)
Walled-garden data silos; privacy/signal loss
Last-click bias; limited user-level export for unified models
Retail media
Dozens in U.S.; 200+ globally
Closed-loop sales vs. upper-funnel brand metrics
In-store vs. online attribution; retailer-specific methodologies
Audio/podcast
Highly fragmented across many apps (YouTube/Spotify/Apple lead)
Limited viewability/attention metrics; inconsistent identifiers
Delayed conversions; MMM reliance; sparse in-channel pixels
Fig. Media fragmentation impact on measurement.
Funnel stage
Traditional metrics
Outcome-driven metrics
Awareness
Reach, Impressions, CPM
Brand lift, Aided/unaided awareness, Attention scores
Consideration
Click-through rate, Engagement rate
Cross-channel traffic lift, Audience expansion, Search volume increase
Conversion
Clicks, Viewability, Completion rate
Cost per acquisition, ROAS, Incremental revenue
Fig. Traditional vs. outcome-driven funnel metrics.
Aspect
Martech (owned & lifecycle)
Adtech (paid media)
Primary objective
Engage, convert, retain known audiences
Acquire/reach audiences via paid channels
Typical data
First-party profiles (CRM/CDP), consent & events
Ad IDs, contextual signals, modeled audiences
Core systems
CRM, CDP, marketing automation, CMS, analytics
DSP, SSP, ad server, exchanges, verification
Where it operates
Email/SMS, web/app personalization, loyalty, on-site
Display/video/CTV/retail media, paid social/search
Common KPIs
Repeat purchase rate, LTV, conversion rate, churn
Reach, CPM/CPC, incremental lift, ROAS
Example outputs
Triggered journeys, segments, experiments, dashboards
Deal IDs, bids, pacing/flight plans, brand safety reports
Fig. Martech vs adtech (cheat sheet).
Benefit
What it changes
Typical features
Teams impacted
Example KPI lift to watch
Improved efficiency
Fewer manual steps, faster launches
Automation, templates, integrations
Marketing ops, channel owners
Time-to-launch, hours saved, error rate
Data-driven insights
Better decisions from first-party data
CDP/CRM, analytics, experimentation
Growth, analytics, product
Win rate, next-best-action accuracy
Better customer experience
Relevant, timely interactions
Journey orchestration, personalization
Lifecycle, CRM, CX
CTR, CVR, CSAT/NPS
ROI improvement
Smarter spend allocation
Attribution (MMM/MTA), incrementality
Finance, media, leadership
CAC, ROAS, revenue per user
Fig. Benefits at a glance.
Category
Core tools
Primary users
Key outputs
Management tools
CRM, CDP, marketing automation, CMS/DXP, DAM
Marketing ops, CRM admins, content
Unified profiles, segments, content, journeys
Social & content optimization
Social schedulers, listening, SEO/CMP, editors
Social, content, brand
Posts/calendars, briefs, optimized pages
Analytics & insights
Web/product analytics, BI, MMM/MTA, testing
Analytics, growth, leadership
Dashboards, lift studies, budget guidance
Advertising & programmatic
DSP, SSP, exchanges, ad server, verification
Media buyers, agencies
Deals, pacing, brand safety, performance logs
Fig. Tool categories and outputs.
Challenge
Symptoms in your org
First diagnostics
First fixes
Complexity & integration
Duplicate tools, manual CSVs, inconsistent counts
Map tools/data flows; audit overlaps
Consolidate, standardize IDs/events; integrate to CDP/CRM spine
Data privacy & compliance
Consent mismatches, slow deletions, audit risk
Trace consent through stack; test DSAR latency
Implement CMP hooks; automate erasure; log lineage
High costs & ROI
Underused licenses, unclear payback
License/utilization report; KPI tie-backs
Decommission, renegotiate; set stack-level KPIs & experiments
Skills gap & training
Single-point failures, slow launches
Skills matrix; feature adoption review
Playbooks, cross-training, 30/60/90 enablement plan
Fig. Challenges, symptoms, first diagnostics, first fixes.
Layer
Purpose
Owner(s)
Common pitfalls
What “good” looks like
Data & identity
Unify consented profiles and events
Data/CRM
Fragmented IDs, stale events
One profile, freshness SLAs, privacy logs
Experience
Convert and capture signals (site/app/store)
Product/UX
Slow pages, weak search
Fast PDPs, relevant search, low drop-off
Media
Create/capture demand (search, social, RMNs, CTV)
Growth/Media
Proxy ROAS chasing
KPI ladder, incrementality cadence
Measurement
Prove impact; guide scaling
Analytics/Finance
Last-click bias
MMM + geo/holdout tests + clean rooms
Governance
Permissions, safety, standards
Legal/RevOps
Shadow tools, unclear ownership
RACI, review gates, exportable logs
Fig. The digital retail marketing system at a glance.
Surface
Why it matters
Primary job
Proof method
Search
Highest intent
Demand capture
Query-level revenue, incrementality
RMNs
Near shelf; first-party
Digital shelf, trade funds
Retailer clean room, new-to-brand
Social/Creators
Daily reach & trust
Discovery, social proof
Codes/links, MMM contribution
CTV/Streaming
Scalable attention
Prospecting, store lift
Geo/household lift, site/store visits
Email/CRM
Owned compounding
Retention, LTV
Cohort LTV, RPR, churn deltas
Fig. Where attention, spend, and proof intersect.
Type
Core tactics
Key integrations
How to prove it
Online
SEO/SEM, marketplaces, social commerce
PDP schema, reviews, payments
Conversion rate, AOV, new-to-file
Omnichannel
BOPIS/curbside, ship-from-store, clienteling
POS, inventory, loyalty
Store lift from digital, repeat rate
Phygital/in-store
AR try-ons, guided selling, digital signage
Store apps, loyalty IDs
Basket add-on, time-to-decision
Fig. Three retail marketing types mapped to tactics & proof.
Strategy
Critical inputs
Expected outputs
Data-driven optimization & predictive
Truth sets, feature store
Budget shifts, segment priorities
AI-powered optimization & automation
Guardrails, creative pool
Faster pacing, lower waste
Privacy-first, cookieless
First-party data, clean room
Stable targeting, verified lift
Personalization at scale
Merch rules, product graph
Higher CVR/AOV, lower decision time
Omnichannel integration
Unified inventory, offers
BOPIS usage, store-credited sales
Community & influencers
Creator fit matrix
Assisted revenue, sustained reach
Leveraging RMNs
SKU-level margins, promo calendar
Digital shelf share, incremental sales
CTV & streaming
Audience packages, geo test plan
Incremental reach/sales, halo to RMNs
Fig. Strategy → inputs → outputs.
Outcome you buy on
Primary KPI
Guardrails to watch
Typical channels
Notes
Purchase
ROAS, CPA
LTV/CAC, payback period
Search, retail media, CTV, social
Closed-loop sales on RMNs tighten feedback
Qualified lead
CPL, lead acceptance rate
Pipeline velocity, SQO rate
Search, social, B2B partner/affiliate
Score leads in CRM to avoid cheap but low-quality volume
App install
CPI, day-7 retention
Cohort LTV, event depth
Paid social, programmatic, influencer
Optimize to post-install actions, not installs alone
Content signup
Cost per subscriber
Churn, email engagement
Paid social, native, influencer
Use double opt-in to lift quality
Store visit
Cost per visit
Visit-to-sale rate
CTV, Waze, local social
Use geo-holdouts to confirm lift
Fig. Outcome-to-metric map.
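The "guardrails to watch" column is where most scaling mistakes happen: a cheap CPA can still fail the LTV/CAC and payback tests. A minimal gating sketch, with hypothetical thresholds (a 3:1 ratio and 12-month payback are common rules of thumb, not universal targets):

```python
def ltv_cac_ratio(ltv: float, cac: float) -> float:
    """Customer lifetime value relative to acquisition cost."""
    return ltv / cac

def payback_months(cac: float, monthly_margin: float) -> float:
    """Months of contribution margin needed to recover CAC."""
    return cac / monthly_margin

def ok_to_scale(ltv, cac, monthly_margin, min_ratio=3.0, max_payback=12):
    """Gate budget increases on both guardrails, not CPA alone."""
    return (ltv_cac_ratio(ltv, cac) >= min_ratio
            and payback_months(cac, monthly_margin) <= max_payback)

# Hypothetical channel: $300 LTV, $90 CAC, $10/mo margin
print(ok_to_scale(ltv=300, cac=90, monthly_margin=10))  # ratio 3.33, payback 9 → True
```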
Channel
Best at
Weak at
Use when
Avoid when
Search
High-intent capture
Creating demand
You have clear queries/offer fit
Category is immature; few queries exist
Paid social
Demand creation, rapid testing
Deep intent
Launching offers/creatives quickly
You lack creative variety
Retail media (RMN)
Closed-loop sales
Upper-funnel storytelling
You sell via that retailer
Your category isn’t on the retailer
CTV
Scaled reach with digital measurement
Hyper-granular intent
You need new-to-brand growth
You can’t measure lift/visits
Affiliate/partners
Efficient incremental sales
Pure reach
You can pay on outcomes
You can’t track fairly
Influencer
Trust transfer
Broad frequency
Your category is community-driven
You require strict message control
Fig. Channels and their best jobs-to-be-done.
Step
“Average” pattern
Best-in-class pattern
Impact you’re aiming for
Entry
Account wall
Guest by default + optional sign-in
+X% checkout starts
Forms
20–25 fields, manual entry
12–14 fields, auto-fill, address lookup
− form time; − abandonment
Payment
Card only
Wallets (Apple/Google/Shop/PayPal) + saved cards
+ mobile conversion
Errors
Inline but vague
Clear, persistent, field-level guidance
− retries; + completion
Summary
Hidden until last step
Sticky order summary + shipping ETA
+ trust; − surprises
Fig. Checkout UX: from average to best-in-class.
Component
Examples
Data hooks
QC notes
Headlines
Benefit, offer, urgency
Geo, weather, inventory
Keep to 40–60 chars; brand voice check
Body lines
Proof points, feature bullets
Price, promo calendar
Avoid repeating headline nouns
Visuals
Product angles, lifestyle, UGC
Catalog feed, color/size
Crop safe zones for all placements
CTAs
“Shop now”, “Build your plan”
Stock status, lead depth
A/B test verbs vs. outcomes
Legal
Pricing footers, disclosures
Promo rules
Auto-append for regulated SKUs
Fig. DCO modular asset library blueprint.
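At runtime, DCO stitches these modules together from live signals and enforces the QC notes automatically. A toy sketch of that assembly step — the module names, signals, and 60-character limit are illustrative, not a real DCO vendor's schema:

```python
def assemble_ad(modules: dict, signals: dict) -> dict:
    """Assemble one creative variant from modular assets + data hooks."""
    headline = modules["headline"].format(**signals)
    if len(headline) > 60:  # QC gate from the asset library rules
        raise ValueError("headline exceeds 60-char QC limit")
    # CTA swaps on a live inventory signal
    cta = modules["cta_in_stock"] if signals["in_stock"] else modules["cta_waitlist"]
    return {"headline": headline, "body": modules["body"], "cta": cta}

modules = {
    "headline": "Free delivery in {city} this week",
    "body": "Rated 4.8/5 by 2,000+ customers",
    "cta_in_stock": "Shop now",
    "cta_waitlist": "Join the waitlist",
}
ad = assemble_ad(modules, {"city": "Austin", "in_stock": True})
print(ad["headline"], "|", ad["cta"])
```

The point of the QC gate: a geo token like `{city}` can silently blow past character limits for long city names, so length checks belong in code, not in the brief.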
Method
Where to use it
Pros
Cons
Decision rule
Geo holdout
CTV, retail media, OOH
Real-world, less cookie reliance
Needs scale; seasonality risk
Use for upper/mid-funnel
Audience holdout
Social, search
Precise, fast reads
Potential contamination
Use when IDs/signals are strong
Conversion-lift experiment
Walled gardens
Native tooling, clean setup
Platform-bound
Run quarterly for big platforms
Interrupted time series
Always-on channels
Uses history you already have
Confounds if many changes
Use to sanity-check MMM
Synthetic control
Mixed media
Robust counterfactuals
Complex modeling
Use for high-stakes tests
Fig. Incrementality methods at a glance.
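Whichever method you pick, the headline number is the same: relative lift of the exposed group over its counterfactual. A minimal geo-holdout read, with hypothetical region counts (real tests also need significance checks and seasonality controls, per the watch-outs above):

```python
def lift(test_rate: float, control_rate: float) -> float:
    """Relative lift: (test − control) / control."""
    return (test_rate - control_rate) / control_rate

# Hypothetical geo holdout: exposed regions vs matched holdout regions
test_rate = 4_200 / 180_000     # conversions / population, exposed geos
control_rate = 3_000 / 150_000  # same metric, holdout geos
print(f"lift = {lift(test_rate, control_rate):.1%}")  # → lift = 16.7%
```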
Touchpoint
Micro-conversion
Value offered
Consent status
PDP
“Email my cart”
Save items + price alerts
Email + marketing opt-in
Blog
Lead magnet download
Guide/checklist
Email + content prefs
Checkout
Account-lite
Faster returns
Account consent
Post-purchase
Loyalty enrolment
Points + perks
Loyalty consent
Fig. Consent-first data plan.
Hypothesis
Page/placement
Est. impact
Status
Move proof near CTA lifts CR
Checkout
High
Queued
Shorter headline improves scan
Landing hero
Medium
In test
Add wallet pay reduces drop-off
Payment step
High
Signed off
Price anchoring clarifies value
Pricing page
Medium
Complete
Fig. Test backlog snapshot.
Element
Purpose
Quick test
KPI to watch
Headline
Value clarity
5-second test
Bounce rate
Subhead
Why act now
Scroll to 50%
Time to first click
Primary CTA
Make action obvious
Single dominant CTA
CTA CTR
Visual
Relevance cue
Product-in-context swap
Hero interactions
Proof block
Reduce risk
Insert star/rating near CTA
Form start rate
Risk reversal
Safety net
Add returns/guarantee inline
Checkout start rate
Fig. Above-the-fold blueprint.
Metric
What it is / measures
How to use in 2026
Conversion rate (CR)
% completing a defined action
Track by funnel step; segment by source/device; fix biggest drop first
Cost per acquisition (CPA)
Media cost per conversion
Compare channels; set guardrails; validate with lift tests
Customer acquisition cost (CAC)
All-in cost per new customer
Use for payback planning; track by cohort; allow higher CAC for high-LTV
Click-through rate (CTR)
% of impressions that become clicks
Diagnose message–audience fit; pair with post-click CR/CPA
Return on ad spend (ROAS)
Revenue ÷ ad spend
Monitor blended/marginal; split new vs returning; confirm with tests/MMM
Lifetime value (LTV)
Net revenue per customer over time
Model by cohort; use predictive early reads; set LTV:CAC targets
LTV:CAC ratio
Value vs cost balance
Gate scaling; set thresholds by segment/risk
Retention conversions
Repeat, renewal, reactivation
Measure by original source; time nudges to reorder windows
Engagement score
Weighted behaviours predicting conversion
Prioritise audiences; trigger micro-conversions or sales handoffs
Multi-touch attribution (MTA)
Contribution across touchpoints
Use where identity is strong; never as sole budgeting input
Media mix modelling (MMM)
Channel impact via aggregates
Quarterly planning; quantify diminishing returns; reconcile
Fig. Metrics cheat sheet.
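The ROAS row advises monitoring "blended/marginal," and the difference is worth making concrete: blended ROAS averages over all spend, while marginal ROAS measures only the last increment — the number that should actually gate the next budget decision. A sketch with hypothetical spend/revenue points:

```python
def blended_roas(revenue: float, spend: float) -> float:
    """Total revenue over total spend."""
    return revenue / spend

def marginal_roas(spend_points: list, revenue_points: list) -> float:
    """ROAS of the last spend increment: Δrevenue / Δspend."""
    d_spend = spend_points[-1] - spend_points[-2]
    d_revenue = revenue_points[-1] - revenue_points[-2]
    return d_revenue / d_spend

spend = [10_000, 20_000, 30_000]       # cumulative spend levels
revenue = [50_000, 85_000, 105_000]    # cumulative attributed revenue

print(blended_roas(revenue[-1], spend[-1]))  # → 3.5
print(marginal_roas(spend, revenue))         # → 2.0
```

Here the blend still looks healthy at 3.5x, but the last $10k only returned 2x — diminishing returns that a blended view hides.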
Method
Best for
Cadence
Caution
Platforms (conversions)
Daily optimizations
Daily
Modeled/partial views
Experiments (lift)
Causality
Monthly/quarterly
Sample/power needed
MMM
Budget reallocation
Quarterly
Needs stable inputs
Finance actuals
Revenue truth
Monthly
Lag vs. marketing data
Fig. Measurement triangulation cheat sheet.
Layer
Primary tool
Backup/alt
Purpose
Analytics
GA4
Mixpanel
Events & funnels
Data pipe
Snowplow
RudderStack
Clean event transport
CDP
Segment
mParticle
Profiles & audiences
Testing
VWO
Optimizely
Experiments & flags
Fig. Stack map at a glance.
Model
Full form
How access works
Examples
SVOD
Subscription video on demand
Recurring subscription
Netflix, Disney+, Max
AVOD
Advertising video on demand
Free/low price with ads
Tubi, Pluto TV, Freevee
TVOD
Transactional video on demand
Rent or buy per title
Apple TV (Store), Prime Video Store
Fig. VOD at a glance.
Tier
Price position
Ads
Best for
Ad-free
Highest
None
Premium experience/loyalty
With ads
Mid/low
Light ad load
Broader reach/price-sensitive users
Annual
Discounted
Depends on tier
Long-term retention
Bundle
Variable
Mixed
Cross-product value seekers
Fig. Common SVOD tier types.
Break type
Typical duration
Placement
Common objective
Pre-roll
5–30s
Before content
Awareness
Mid-roll
15–60s
During content
Consideration
Post-roll
5–15s
After content
Reminders/retargeting
Bumpers
5–6s
Anywhere
Reinforcement
Fig. AVOD ad breaks at a glance.
Transaction
Start window
Viewing window
Ownership
Rent (new)
~30 days to start
~48 hours to finish
None after window
Rent (catalog)
~30 days to start
~48 hours to finish
None after window
PVOD rent
Short theatrical-adjacent
~48 hours to finish
None after window
EST (buy)
Immediate
Unlimited replays
Permanent (license-based ownership)
Fig. TVOD access windows.
Model
How you pay
SVOD
Recurring fee (monthly/annual); some services offer a lower-priced with ads tier
AVOD
Free or lower-priced access; the “cost” is watching ads
TVOD
One-off rental or buy (EST); no ongoing commitment
Fig. AVOD vs SVOD vs TVOD payment model.
Model
What the viewer experiences
SVOD
Full catalogue access; ad-free on premium tier, ads on lower tier; binge-friendly
AVOD
On-demand catalogue with ad breaks (pre/mid/post-roll); frequency controls vary by app
TVOD
Title-by-title access; rentals with viewing windows or permanent ownership (EST)
Fig. AVOD vs SVOD vs TVOD user experience.
Model
How the platform earns
SVOD
Subscription ARPU; optional ad revenue on with ads tier; churn/retention critical
AVOD
Advertising (direct + programmatic); data-driven targeting; CPM-driven margin
TVOD
Transaction margin per title (rental/buy) + occasional PVOD premiums; hit-driven
Fig. AVOD vs SVOD vs TVOD revenue generation.
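The SVOD-vs-AVOD economics above reduce to a per-user arithmetic: a subscription fee versus impressions times CPM. A back-of-envelope sketch, with hypothetical viewing hours, ad load, and CPM (actual ad loads and CPMs vary widely by platform):

```python
def svod_arpu(monthly_fee: float) -> float:
    """Subscription ARPU is simply the fee (ignoring ad-tier ad revenue)."""
    return monthly_fee

def avod_arpu(hours_viewed: float, ads_per_hour: float, cpm: float) -> float:
    """Ad revenue per user per month: impressions × CPM / 1000."""
    impressions = hours_viewed * ads_per_hour
    return impressions * cpm / 1000

# Hypothetical viewer: 20 hrs/month, 12 ads/hr, $25 CPM
print(avod_arpu(20, 12, 25))  # → 6.0, vs e.g. a $7.99 ad-free fee
```

This is why ad-supported tiers hinge on watch time: every extra hour of viewing directly raises AVOD ARPU, while SVOD ARPU is fixed at the fee.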
Capability
What it does
Typical jobs
Primary inputs
Primary outputs
Machine learning
Finds patterns, predicts outcomes
Propensity scoring, bid/budget optimization
First-party events, campaign logs
Scores, recommended bids/budgets
Natural language processing
Understands & generates text
Copy drafts, intent/sentiment, routing
Chats, emails, reviews
Summaries, tone-fit variants, intents
Predictive analytics
Forecasts KPIs
LTV/churn forecasting, pacing
Historical performance, cohorts
Forecasts, early-warning alerts
Generative models
Create text/imagery
Ad/landing page/email variants
Briefs, style guides
Brand-checked creative options
Fig. AI-based digital marketing capabilities mapped to marketing jobs.
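To make the "propensity scoring" row concrete: at its simplest, a propensity model is a weighted sum of first-party events squashed through a logistic function. The features, weights, and threshold below are entirely illustrative — production models are learned from campaign logs, not hand-set:

```python
import math

def propensity(features: dict, weights: dict, bias: float) -> float:
    """Logistic propensity score from first-party event features."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

# Hypothetical learned weights: cart adds matter most, recency decays
weights = {"sessions_30d": 0.20, "cart_adds_30d": 0.80, "days_since_visit": -0.10}
user = {"sessions_30d": 5, "cart_adds_30d": 2, "days_since_visit": 3}

score = propensity(user, weights, bias=-2.0)
print(round(score, 3))  # → 0.574
```

The score then feeds the jobs in the table: rank audiences for bidding, or trigger a journey when the score crosses a threshold.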
Area
What improves
Example KPIs to watch
Efficiency & cost
Fewer manual steps
Cost per ticket, time-to-first-response, cost per optimization
Personalization
Relevance at scale
Repeat rate, AOV, CLV, on-site engagement
Real-time optimization
In-flight course correction
eCPA/eROAS, frequency waste, view-through conversions
ROI
Spend tied to outcomes
Incremental revenue, contribution margin, payback period
Fig. Where AI creates value (with example KPIs).
Task
Examples
What to evaluate
Content generation
LLM assistants; marketer writers
Brand controls, red-team features, export formats
Images & video
Firefly, Runway, Midjourney
Rights/attribution, editability, render time
Analytics & SEO
Elevate; GA4; Semrush/Ahrefs
KPI alignment, explainability, channel connectors
Automation & CRM
HubSpot, SFMC, Marketo, Braze
Governance, audience sync, rate limits/SLAs
Fig. Tooling map (task → examples → evaluation criteria).
Pitfall
How it shows up
Mitigation
Proxy metrics
Optimizing to clicks/views
Define KPI hierarchy; instrument incrementality
Data gaps
Broken tags, sparse events
Feature store + freshness SLAs; backfills
Opaque vendors
Black-box decisions
Interoperability clauses; exportable logs
Voice drift
Generic copy/images
Style guide prompts; human review gates
Fig. Common pitfalls and mitigation of AI in digital advertising.
Item
Detail
Best for
Prestige originals, deep catalog, national-scale premium reach
Core features
Strong recommendations; ad tier supports 1080p, two streams, downloads
Plans & pricing
$7.99 (Standard with ads) · $17.99 (Standard) · $24.99 (Premium 4K)
Ad tier
Yes (Standard with ads)
Downloads
Yes (including on ad tier)
Live sports/events
Limited specials; not a primary sports hub
Devices
Broad (smart TVs, sticks, consoles, mobile, web)
Why marketers care
Premium, brand-safe context; building tools via Netflix Ads Suite
Fig. Netflix: quick facts.
Item
Detail
Best for
Big-tent entertainment tied to Amazon’s retail data
Core features
Default ad-supported experience; optional Ad Free add-on; TNF and event programming
Plans & pricing
Prime $14.99/mo or $139/yr (with ads) · Ad Free add-on +$2.99/mo
Ad tier
Yes (default); optional Ad Free
Live sports
Yes—Thursday Night Football and more
Devices
Ubiquitous; deep smart-TV and stick penetration
Why marketers care
Commerce/retail signals for ROAS and new-to-brand measurement
Fig. Amazon Prime Video: quick facts.
Item
Detail
Best for
Current-season TV + flexible upgrade to live TV
Core features
Next-day network episodes; originals; Disney bundle options
Plans & pricing
$11.99/mo (with ads, from Oct 21, 2025) · $18.99/mo (No Ads) · Hulu + Live TV ~ $89.99/mo (varies)
Ad tier
Yes
Live TV
Available via Hulu + Live TV (unlimited DVR)
Devices
Broad support across living-room and mobile ecosystems
Why marketers care
Broad genre mix; Disney sales/measurement stack and ESPN adjacencies via bundles
Fig. Hulu: quick facts.
Item
Detail
Best for
Four-quadrant franchises (Disney, Marvel, Star Wars, Pixar) and family co-viewing
Core features
Robust bundles (Disney+ + Hulu; + ESPN Select/Unlimited); 4K on many titles
Plans & pricing
$11.99/mo (with ads) · $18.99/mo (Premium ad-free; $189.99/yr) · Bundles from $12.99–$19.99/mo (Disney/Hulu) and $29.99–$38.99/mo with ESPN options
Ad tier
Yes
Live sports
Via ESPN bundle tiers (Select/Unlimited)
Devices
Widely available; strong family profile support
Why marketers care
High household penetration; franchise launch windows; bundle flexibility for sports adjacency
Fig. Disney+: quick facts.
Item
Detail
Best for
Prestige originals; select live sports windows
Core features
Friday Night Baseball; MLS Season Pass sold separately; new Apple TV+ × Peacock bundle
Plans & pricing
Apple TV+ $12.99/mo · MLS Season Pass (separate) · Apple TV+ × Peacock bundle $14.99–$19.99/mo
Ad tier
Limited (sports sponsor inventory)
Live sports
MLB Fridays on Apple TV+; MLS via Season Pass
Devices
Apple TV app on Apple/Samsung/LG/Roku/Fire TV/PS/Xbox and more
Why marketers care
High-polish sports and originals for curated sponsorships/integrations
Fig. Apple TV+: quick facts.
Item
Detail
Best for
HBO tentpoles plus live sports via B/R Sports
Core features
B/R Sports included with Standard/Ultimate (NBA, MLB, NHL, March Madness, U.S. Soccer, more)
Plans & pricing
$9.99/mo (Basic with Ads) · $16.99/mo (Standard) · $20.99/mo (Ultimate)
Ad tier
Yes (Basic with Ads)
Live sports
Yes (Standard/Ultimate only)
Devices
Broad living-room coverage
Why marketers care
Event-driven sports reach alongside prestige series in one environment
Fig. Max (formerly HBO Max): quick facts.
Item
Detail
Best for
NFL on CBS (in-market), UEFA, and a broad family library
Core features
CBS, Nickelodeon, Comedy Central, Paramount films; Showtime integrated in top tier
Plans & pricing
$7.99/mo (Essential with ads) · $11.99/mo (with SHOWTIME)
Ad tier
Yes (Essential)
Live sports
NFL (in-market); UEFA Champions/Europa
Devices
Broad support across living-room and mobile
Why marketers care
Reliable sports plus family co-viewing; pairs well with Pluto TV for FAST reach
Fig. Paramount+: quick facts.
Platform
Best for
Key features (2025 highlights)
U.S. plans & pricing
Ad tier?
Live sports?
Netflix
Prestige originals and depth
Large originals slate; strong recommendation engine; ad tier supports 1080p, 2 streams, downloads
$7.99 (Standard with ads), $17.99 (Standard), $24.99 (Premium 4K)
Yes
Limited/special events
Amazon Prime Video
“Big tent” entertainment + commerce tie-ins
Default ad-supported; shoppable/interactive formats via Amazon Ads; TNF and event programming
$14.99/mo (Prime incl. ads) or $139/yr; +$2.99/mo for Ad Free
Yes (default)
Yes—TNF and more
Hulu
Current-season TV + flexible live TV upgrade
Next-day network episodes; originals; Disney bundle options
$11.99/mo (with ads), $18.99/mo (No Ads); Hulu + Live TV ~$89.99/mo
Yes
Via Hulu + Live TV
Disney+
Family franchises (Disney/Marvel/Star Wars/Pixar)
Robust bundles (Disney+ + Hulu; + ESPN options); 4K on many titles
$11.99 (with ads), $18.99 (Premium ad-free; $189.99/yr); bundle pricing varies
Yes
Through ESPN bundles
Apple TV+
Prestige originals + weekly baseball
Award-winning originals; Friday Night Baseball; MLS Season Pass (separate); new Apple TV+ × Peacock bundle
$12.99/mo (Apple TV+); MLS Season Pass separate; Apple TV+ × Peacock $14.99–$19.99/mo
Limited (sports sponsor inventory)
Yes—MLB Fridays; MLS as add-on
Max (HBO Max)
HBO tentpoles + B/R Sports
HBO series/films; B/R Sports included on Standard/Ultimate (NBA/MLB/NHL/March Madness etc.)
$9.99 (Basic w/ ads), $16.99 (Standard), $20.99 (Ultimate)
Yes (Basic)
Yes—on Standard/Ultimate
Paramount+
NFL on CBS & UEFA + broad library
CBS, Nickelodeon, Comedy Central; NFL (in-market) simulcasts; UEFA rights; Showtime integrated on top tier
$7.99 (Essential w/ ads), $11.99 (with SHOWTIME)
Yes
Yes—NFL, UEFA
Fig. Top subscription over the top platforms (U.S., 2025). Pricing listed is U.S. list price as of late 2025 and may change.
Item
Detail
Best for
The largest TV-screen reach across formats (long-form, live, Shorts)
Core features
#1 share of U.S. TV watch-time in many 2025 months; powerful biddable ad products
Plans & pricing
Free with ads (Premium optional for ad-free)
Ad tier
Yes (default experience)
Live sports
Yes (creator/live streams; leagues/events vary)
Devices
Universal on smart TVs and sticks
Why marketers care
Fastest path to mass CTV reach; logged-in signals and flexible formats
Fig. YouTube (CTV app): quick facts.
Item
Detail
Best for
Free, lean-back viewing with huge library and FAST channels
Core features
On-demand movies/series + FAST; increasing Tubi Originals
Plans & pricing
Free with ads
Scale signal
100M+ MAUs milestone (May 2025)
Devices
Broad coverage; quick-start without sign-in
Why marketers care
Low-CPM reach at scale; Fox sales/measurement stack
Fig. Tubi: quick facts.
Item
Detail
Best for
Live sports + NBC/Bravo next-day + originals
Core features
Sunday Night Football, Premier League, WWE; strong unscripted slate
Plans & pricing
Premium (with ads) $10.99/mo · Premium Plus (mostly ad-free) $16.99/mo
Ad tier
Yes (Premium)
Live sports
Yes (NFL SNF, EPL; more windows coming)
Devices
Broad support; deep Comcast/Xfinity promos
Why marketers care
Weekend sports tentpoles and co-viewing at mainstream price points
Fig. Peacock (Premium with ads): quick facts.
Item
Detail
Best for
100% free FAST with a traditional “channel guide” feel
Core features
Hundreds of themed linear channels; on-demand library; CBS News/Sports HQ
Price to viewer
Free with ads
Scale signal
Widely cited ~80M MAUs (latest public snapshot, 2023)
Live sports
Limited (CBS Sports HQ and themed sports channels)
Devices
Preloaded on many TVs; easy channel-surfing UX
Why marketers care
Genre/context buys at national scale; funnels discovery into Paramount+
Fig. Pluto TV: quick facts.
Item
Detail
Best for
Free movies/TV, live channels, and everyday utility on Roku devices
Core features
On-demand + FAST; Roku Originals; kids/family hubs
Price to viewer
Free with ads
Scale signal
Roku platform reach ~145M U.S. households (Q4 2024); TRC benefits from footprint
Devices
One tap on Roku TVs/players; also web/mobile
Why marketers care
Targeting/measurement via Roku identity and OneView; efficient incremental reach
Fig. The Roku Channel: quick facts.
Platform
Best for
Standout content/rights
Price (viewer)
Notable reach signal
Why it’s a top choice
YouTube (CTV app)
Global video platform, dominant on TV screens
Everything from long-form talk, news, music, gaming to live streams
Free with ads (Premium optional)
#1 share of U.S. TV watch-time in many 2025 months
Massive, habitual viewing on the big screen
Tubi
Free on-demand + FAST channels (Fox)
Huge library across genres; growing Tubi Originals
Free with ads
100M+ MAUs (2025 milestone)
Lean-back, zero-cost viewing with breadth
Peacock (Premium w/ ads)
NBCU streaming with heavy live sports
Sunday Night Football, Premier League, WWE; NBC/Bravo next-day
$10.99/mo (ad-supported), $16.99 ad-free
~41M paid subs (Q2 2025)
Sports + unscripted = reliable engagement
Pluto TV
100% free FAST (Paramount)
Hundreds of themed linear channels; CBS News/Sports HQ; on-demand catalog
Free with ads
~80M MAUs (last widely cited snapshot, 2023)
True “channel-surf” experience without cable
The Roku Channel
Free movies/TV + live channels
Broad catalog; Roku Originals; kids/family hubs
Free with ads
Benefits from Roku’s platform footprint (145M+ U.S. HHs reached by Roku)
One click away on Roku TVs/players; everyday utility
Fig. Top AVOD platforms (free or ad-supported OTT). Peacock appears here as an ad-supported option (not free). The others are free-to-view with ads. Reach metrics reflect the most recent public figures commonly cited in 2025.
Step
What happens
Data or tech involved
Outputs to watch
1. Collect first-party data
Site/app events, POS, loyalty and CRM stitched into shopper profiles
Identity resolution, consent management, product catalog
Addressable audiences; compliant data foundation
2. Build ad platform
Package onsite, offsite, and in-store inventory; enable buying UI
Ad server/auction, DSP/SSP pipes, screen CMS, clean room
Surfaces and specs brands can activate
3. Target audiences
Brands choose segments, budgets, bids, and goals
Audience builder, look-alikes, eligibility rules
Reach, eligible impressions, forecasted ROAS
4. Deliver ads
Render sponsored listings, display/video, CTV, and in-store loops
Real-time decisioning, page/app integration, DOOH players
Viewable impressions, CTR/PDP views, store coverage
5. Measure outcomes
Tie exposure to orders online and in store
Sales matching, incrementality tests, clean-room joins
Revenue, ROAS & iROAS, new-to-brand, basket lift
Fig. Five-step RMN flow at a glance.
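Step 5 — tying exposure to orders — is typically a clean-room join: hashed IDs matched within an attribution window. A minimal sketch of that matchback logic, with hypothetical IDs and a 14-day window (real retailer methodologies differ on windows and dedup rules):

```python
from datetime import datetime, timedelta

def matchbacks(exposures: list, orders: list, window_days: int = 14) -> list:
    """Join ad exposures to orders on hashed ID within an attribution window."""
    window = timedelta(days=window_days)
    matched = []
    for order in orders:
        for exp in exposures:
            # Order must share an ID and occur 0–window days after exposure
            if exp["id"] == order["id"] and timedelta(0) <= order["ts"] - exp["ts"] <= window:
                matched.append(order)
                break
    return matched

exposures = [{"id": "h1", "ts": datetime(2025, 3, 1)}]
orders = [
    {"id": "h1", "ts": datetime(2025, 3, 10), "value": 42.0},  # matched
    {"id": "h2", "ts": datetime(2025, 3, 10), "value": 15.0},  # never exposed
]
print(len(matchbacks(exposures, orders)))  # → 1
```

Summing `value` over matched orders gives attributed revenue; dividing by spend gives the ROAS the RMN reports back. Incrementality (iROAS) then requires comparing against an unexposed holdout, not just matched orders.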
Channel
Primary objective
Typical placements
Targeting data
Best used for
Core KPIs
Onsite
Win the digital shelf
Sponsored listings, PDP/display, search
Real-time browse/search + purchase
Conversion, share of results, trade support
PDP views, add-to-cart, orders, ROAS
Offsite
Scale to the open web/CTV with retail data
Display/video, social, CTV via DSP
Retailer audiences ported to partners
Upper–mid funnel with sales tie-back
Reach, VTR, assisted orders, iROAS
In-store
Influence at the shelf
Endcap/aisle screens, audio, checkout
Store/location, daypart, loyalty
Launches, promos, seasonal features
Store lift, units per store, geo iROAS
Fig. Onsite vs offsite vs in-store (quick chooser).
Retailer (RMN name)
Ad offering
Unique strengths
Amazon (Amazon Ads)
Sponsored products in search, display banners on Amazon.com, video ads on Fire TV and Twitch, Amazon DSP for offsite programmatic, audio ads via Alexa
Scale & data: Amazon commands approximately 75% of U.S. retail media ad share, making it the undisputed leader. Decades of Prime customer data create unparalleled targeting precision. Its ad business spans the full funnel with closed-loop insights, forming a new advertising triopoly alongside Google and Meta.
Walmart (Walmart Connect)
Onsite search and sponsored products, display and native ads on Walmart.com, in-store TV and radio through Walmart TV network, offsite ads via Walmart DSP powered by The Trade Desk, partnerships with platforms like TikTok, Snap, and Roku
Omnichannel reach: Walmart ranks second in U.S. retail media, leveraging its massive physical store footprint alongside digital presence. With 150 million weekly store visitors and growing streaming partnerships, Walmart offers both local in-store impact and national digital scale.
Target (Roundel)
Display ads and sponsored listings on Target.com and app, programmatic offsite ads using Target audience data, in-store signage and audio, Roundel Media Studio self-serve platform
Guest data & brand partnerships: Target's loyalty and credit card data enable precise targeting of "Target guests." Known for cross-channel campaigns that integrate with Target's brand marketing, Roundel provides transparency and control. Target's media business grew approximately 13% in 2024.
Instacart (Instacart Ads)
Sponsored product placements in Instacart marketplace, shoppable video and display ads within the app, partnerships for CTV ads using grocery data
Shopper intent & multi-retail reach: Instacart's network covers multiple grocery retailers, allowing CPG brands to reach high-intent shoppers across many stores through one platform. Advertising represents a huge revenue source for Instacart. Its Roku partnership enables ads that connect viewing with grocery purchases for closed-loop measurement.
Kroger (Kroger Precision Marketing)
Onsite ads on Kroger's grocery sites and apps, personalized coupons via loyalty program, in-store digital screens and audio, offsite advertising using 84.51° first-party data with CTV and publisher partners
First-party sales data: Kroger's loyalty card data connects individual shoppers to both online and in-store purchases. This enables highly precise CPG targeting and closed-loop attribution. Kroger's RMN reportedly yields 50%+ profit margins, turning grocery's thin-margin model into lucrative media business.
eBay (eBay Ads)
Promoted listings in eBay search results, display ads on eBay website and app, advertising opportunities via eBay's ad network
Commerce marketplace data: As a major marketplace, eBay possesses extensive data on product search and purchase trends, especially in long-tail and collectible categories. eBay has been expanding its retail media network with technology improvements, offering strength in niches like motors and collectibles.
DSP (Demand-Side Platform)
SSP (Supply-Side Platform)
Ad Exchange
The "buyer's tool." Advertisers use a DSP to purchase ad inventory. Its primary goal is to help buyers find the most valuable impressions at the best price.
The "seller's tool." Publishers use an SSP to sell their available ad space. Its primary goal is to help publishers maximize their revenue for every impression.
The "digital marketplace." This is the neutral, technological platform where the buying and selling happen. It facilitates the RTB auction among multiple DSPs (buyers) and SSPs (sellers).
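The marketplace mechanics above can be sketched in a few lines. Below is a simplified second-price auction of the kind ad exchanges commonly run between DSP bids and a publisher's price floor; the `Bid` class, the `run_auction` helper, and all prices are illustrative, not any real platform's API.

```python
# Simplified sketch of an exchange-side second-price auction.
# Names and numbers are illustrative, not a real platform API.
from dataclasses import dataclass

@dataclass
class Bid:
    dsp: str      # which buyer's DSP submitted the bid
    cpm: float    # bid price per thousand impressions

def run_auction(bids, floor_cpm):
    """Return (winning DSP, clearing CPM) or None if no bid meets the floor."""
    eligible = [b for b in bids if b.cpm >= floor_cpm]
    if not eligible:
        return None
    ranked = sorted(eligible, key=lambda b: b.cpm, reverse=True)
    winner = ranked[0]
    # Winner pays the second-highest price (or the floor), plus a tiny increment.
    runner_up = ranked[1].cpm if len(ranked) > 1 else floor_cpm
    return winner.dsp, round(runner_up + 0.01, 2)

bids = [Bid("DSP-A", 4.50), Bid("DSP-B", 6.20), Bid("DSP-C", 5.10)]
result = run_auction(bids, floor_cpm=2.00)  # DSP-B wins, pays just above 5.10
```

Real exchanges layer deal types, floors, and fees on top of this, but the buyer-vs-seller price discovery is the core of it.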
Evaluation Factor
Why does it matter?
What to ask?
Channel & Inventory Access
Not all DSP platforms offer equal access to premium inventory. Your chosen platform must support the channels central to your omnichannel advertising strategy.
Do you specialize in CTV, mobile in-app, or DOOH? Can you provide direct deals with premium publishers? Ensure their strengths align with your media plan.
Data Integration & Identity Solutions
The cornerstone of modern DSP advertising is data. With the deprecation of third-party cookies, how a platform handles identity is critical.
How easily can I onboard my first-party data? What alternative ID solutions (e.g., Unified ID 2.0) do you support? What contextual targeting tools are available?
AI & Automation Capabilities
Basic RTB is table stakes. The real benefits of a DSP come from sophisticated AI that optimizes campaigns faster and at greater scale than manual trading allows.
Do you offer predictive bidding and budget pacing? How does your AI automate audience discovery and creative optimization? Request case studies showing performance lifts.
Transparency & Reporting
You cannot optimize what you cannot measure. Transparent reporting is essential for data-driven decision-making and proving ROI.
What level of granularity is available in campaign reports? Are fees and auction dynamics clear? Can I track performance across the entire funnel?
Pricing Model & Total Cost
DSP costs can vary significantly and impact your overall media efficiency.
What is your fee structure (e.g., percentage-of-spend, fixed CPM)? Are there hidden costs for data integration or support?
Ease of Use & Support
A powerful DSP is useless if your team cannot operate it effectively.
Is the interface intuitive for your team's skill level? What level of onboarding and strategic support is provided? Is it a self-service or managed model?
Aspect
AdTech
MarTech
Primary Purpose
Reaching new, anonymous audiences at scale
Nurturing known audiences to build loyalty and drive revenue
Core Focus
Efficiently buying attention and driving initial engagement
Managing the entire customer lifecycle and delivering personalized experiences
Key Channels
Paid Media (Display ads, Video, Social Media Ads, Native Advertising)
Owned Media (Website, Email, SMS, CRM, Mobile App, Social Profiles)
Data Foundation
Relies on third-party cookies, device IDs, and contextual data for targeting. Increasingly shifting to first-party data
Built on deterministic, first-party data (e.g., email, phone number) from direct customer interactions
Key Technologies
Demand-Side Platforms (DSPs), Supply-Side Platforms (SSPs), Ad Exchanges
CRM, Marketing Automation, Customer Data Platforms (CDPs), Analytics Platforms
Performance KPIs
Reach, Frequency, Click-Through Rate (CTR), Cost-Per-Click (CPC), Cost-Per-Acquisition (CPA)
Customer Lifetime Value (LTV), Conversion Rate, Engagement Rate, Lead Velocity, Churn Rate
Relationship Model
Short-term, transactional focus on the immediate conversion
Long-term, relational focus on nurturing and retaining the customer
Category
Purpose
Key Tools / Platforms (Examples)
How It Supports Advertising
Demand-Side Platforms (DSPs)
Automate media buying for advertisers across multiple ad exchanges
Google Display & Video 360, The Trade Desk, Amazon DSP
Enables programmatic ad buying, targeting specific audiences in real time
Supply-Side Platforms (SSPs)
Help publishers manage, sell, and optimize ad inventory
Google Ad Manager, Magnite, PubMatic
Maximizes revenue from ads by auctioning inventory to multiple buyers
Ad Exchanges
Marketplaces connecting DSPs and SSPs for real-time bidding (RTB)
Google AdX, OpenX, Xandr
Facilitate transparent, competitive ad auctions between buyers and sellers
Data Management Platforms (DMPs)
Aggregate and analyze audience data from multiple sources
Lotame, Oracle BlueKai, Nielsen DMP
Create detailed audience segments for targeted ad campaigns
Ad Networks
Aggregate inventory from publishers and sell it to advertisers
Google Ads Network, Taboola, Outbrain
Simplify ad buying and expand reach across publisher sites
Attribution & Analytics Tools
Measure ad performance and ROI across channels
Adjust, AppsFlyer, Neustar
Track which ads drive conversions and optimize campaign effectiveness
Retargeting Platforms
Serve personalized ads to users who’ve interacted with a brand before
Criteo, AdRoll, Meta Ads Retargeting
Increases conversion rates by re-engaging warm audiences
Verification & Brand Safety Tools
Ensure ads appear in suitable environments and prevent fraud
DoubleVerify, Integral Ad Science (IAS), Moat
Protects brand reputation and verifies viewability and authenticity
Creative Management Platforms (CMPs)
Manage, design, and scale dynamic ad creatives
Celtra, Bannerflow, Hunch
Streamlines creative production and enables A/B testing for ads
Core Platform Categories
Examples and Results
CRM & Marketing Automation
Platforms like Salesforce, HubSpot, and Marketo form the operational backbone, storing customer data and automating personalized communication flows. Companies using marketing automation for lead management see a 10% increase in revenue within six to nine months.
Customer Data Platforms (CDPs)
Solutions like Segment, Tealium, and mParticle unify customer data from multiple sources, creating comprehensive customer profiles. Organizations using CDPs achieve 2.3 times higher customer satisfaction scores through improved personalization.
Content Management & Experience Platforms
WordPress, Contentful, and Optimizely empower teams to create, manage, and personalize digital content. Websites with personalized content experiences see an average 20% increase in sales conversions.
Analytics & Attribution
Google Analytics, Adobe Analytics, and Mixpanel provide insights into customer behavior and campaign performance. Companies that leverage data-driven attribution models improve marketing ROI by 15-30% through better budget allocation.
Email & Conversion Rate Optimization
Tools like Mailchimp, Klaviyo, and Unbounce optimize conversion points across the customer journey.
Feature
Traditional OOH (Static)
Digital Out-of-Home (DOOH) (Dynamic)
Core Nature
Static, physical printed media.
Dynamic, digital screens powered by software
Content
Fixed for weeks or months. A single message
Dynamic, changeable in real-time. Can run multiple ads in a loop
Flexibility & Agility
Low. Changing creative requires a physical print and installation crew
High. Content can be updated instantly via a cloud-based Content Management System (CMS) from anywhere.
Targeting & Context
Broad, location-based. Audience is assumed by location.
Can use real-time data (time, weather, traffic, live events, social feeds) to trigger relevant ads
Audience Engagement
One-way broadcast. Passive viewing.
Can integrate with mobile via QR codes, NFC, and use motion sensors or touchscreens.
Measurement & Analytics
Estimated impressions based on traffic/location data. Difficult to prove impact.
Far more measurable: provides proof-of-play logs and can be paired with mobile location data for attribution.
Cost Structure
High upfront production and installation. Long-term lease (months/years).
Lower/no production cost for digital files. Flexible buying (dayparts, weeks, days)
Creative Impact
Relies on a single, powerful visual and message
Allows for motion, video, animation, and storytelling sequences. Higher recall due to movement.
Sustainability
Physical prints often use vinyl and other materials that end up in landfills.
Digital screens consume electricity but eliminate physical waste from frequent print cycles
Campaign Duration
Long-term (ideal for brand-building over months).
Short-term & tactical (ideal for promotions, events, time-sensitive offers).
Campaign Objective
Primary Goal
Essential KPIs to Track
Awareness & Reach
Increase brand visibility and attract a new audience.
• Impressions
• Click-Through Rate (CTR)
• Share of Voice
• Brand Lift (Unaided/Aided Awareness)
Conversion & Performance
Drive specific, valuable actions that contribute to revenue.
• Conversion Rate (CVR)
• Cost Per Acquisition (CPA)
• Return on Ad Spend (ROAS)
• Website Engagement Rate (GA4)
Retention & Loyalty
Foster long-term customer relationships and increase lifetime value.
• Customer Lifetime Value (CLV)
• Net Promoter Score (NPS)
• Customer Retention Rate (CRR)
• Repeat Purchase Rate
Unaided Awareness
Aided Awareness
This measures the percentage of your target audience that can spontaneously name or recognize your brand without any prompting. It is a powerful indicator of your brand's top-of-mind presence and market leadership. Due to its unprompted nature, it is less prone to bias and is considered a strong indicator of brand strength.
This gauges the percentage of people who recognize your brand when given a specific prompt (e.g., "Which of these brands are you familiar with?"). While more susceptible to bias than unaided awareness, it provides crucial data on your brand's overall footprint and potential for growth within a broader market.
Tracking Too Many KPIs
The "data overload" dilemma. When you track every possible metric, you focus on none. This dilutes attention and resources away from the key performance indicators that truly matter to your business objectives.
Ignoring Attribution
Relying solely on last-click attribution is one of the costliest mistakes. It massively overvalues bottom-funnel tactics and undervalues vital top-of-funnel activities like brand awareness, leading to skewed CPA and an unbalanced marketing strategy.
Focusing on Vanity Metrics 
Prioritizing numbers that look good (like social media likes) over those that drive business outcomes (like conversion rate) creates a false sense of success and misallocates budget.
Setting and Forgetting
KPIs are not static. Failing to regularly review and refine your marketing performance indicators in line with changing business goals or market conditions means you could be optimizing for the wrong targets.
Working with Data Silos
Viewing channel-specific KPIs (like a single social platform's metrics) in isolation prevents you from understanding the synergistic effect of your cross-channel strategy and its collective impact on customer lifetime value (CLV).
Audio Ads
15–30 seconds
Between songs on Spotify Free tier
Genre, mood, playlist, device, listening behavior, location
Video Ads
15–30 seconds
Full-screen in-app during active sessions
Same as audio ads + demographic targeting
Sponsored Sessions
User chooses to watch a video
Unlock ad-free listening for 30–60 minutes
Contextual + behavioral targeting
Podcast Ads
Pre-roll, mid-roll, post-roll
During Spotify podcasts
Show-specific targeting, listener demographics, interests
Display Ads
Banner, homepage takeover, overlay
App homepage, playlist pages, search pages
Device, location, demographics, playlist type
Video Takeovers
15–30 seconds
App startup or playlist launch
Broad reach targeting, ideal for awareness campaigns
Stage
What it does
Typical inputs
Common pitfall
Signals
Collects usable, consented clues
Site/app events, CRM, context
Collecting more data than you can govern
Interpretation
Turns signals into intent
Segments, propensities
Overfitting to weak or stale signals
Decision
Chooses audience + message + bid
Rules + models
Too many micro-segments to scale
Delivery
Serves the ad in-channel
DSP/social/RMN/CTV
Frequency creep and wasted impressions
Learning
Improves future decisions
Lift tests, MMM, modeled conversions
Optimizing to platform-only metrics
Fig. Personalization pipeline at a glance.
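The Interpretation and Decision stages above can be sketched as a small score-plus-rules function: signals become an intent score, and the score picks a segment, message, and bid. All signal names, weights, thresholds, and segment labels below are invented for illustration.

```python
# Toy sketch of the signals -> interpretation -> decision pipeline stages.
# Weights, thresholds, and segment names are illustrative assumptions.
def interpret(signals):
    """Turn raw consented signals into a coarse intent score in [0, 1]."""
    score = 0.0
    if signals.get("viewed_product"):
        score += 0.4
    if signals.get("crm_match"):
        score += 0.3
    if signals.get("category_context"):
        score += 0.2
    return min(score, 1.0)

def decide(signals, base_bid_cpm=3.0):
    """Choose audience tier, message angle, and bid from the intent score."""
    intent = interpret(signals)
    if intent >= 0.6:
        return {"segment": "high-intent", "message": "offer", "bid": base_bid_cpm * 1.5}
    if intent >= 0.3:
        return {"segment": "mid-intent", "message": "proof", "bid": base_bid_cpm}
    return {"segment": "prospecting", "message": "awareness", "bid": base_bid_cpm * 0.5}

decision = decide({"viewed_product": True, "crm_match": True})
```

Note how few rules it takes to express a workable tiering. The "too many micro-segments" pitfall in the table is what happens when this function grows hundreds of branches.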
Channel
Why personalization works here
Best lever to use
Biggest risk
Retail media
Logged-in, high-intent signals
Product + category relevance
Over-indexing on last-click
CTV
Household context + broad reach
Frequency + sequencing
Overserving small audiences
Paid social
Fast algorithmic learning loops
Creative variants + hooks
Limited portability of learnings
Programmatic display/OLV
Multi-signal targeting at scale
Context + supply quality
Measurement noise + waste
Email/lifecycle
First-party relationship + triggers
Journey sequencing
Over-messaging and churn
Fig. Which channels benefit most and why.
Benefit
What it looks like in practice
Metrics that prove it
What to avoid
Relevance & engagement
Higher attention, better click quality
Engaged sessions, CTR (directional), view-through
Treating CTR as success on its own
Performance (ROAS/CPA/LTV)
Better conversion efficiency
Incremental ROAS, CAC/CPA, LTV:CAC
Only reporting platform-attributed ROAS
Customer experience
Less friction, better fit
Repeat rate, unsub rate, NPS/CSAT (if available)
“Creepy” precision that hurts trust
Media efficiency
Less waste + better working media
Reach vs frequency, viewability, cost per incremental outcome
Optimizing to cheapest inventory
Fig. Benefits → what to measure.
Type
Primary signal
Where it works best
What to watch
Demographic & geographic
Age, income bands, location, region
Broad prospecting, local offers
Can be too generic; easy to over-assume intent
Interest & behavior-based
Content consumed, category activity, modeled intent
Social, in-app, online video
Signal decay, platform differences, consent limits
Retargeting & remarketing
First-party recency (site/app/email), product views
Commerce, lifecycle, CTV household follow-ups
Frequency fatigue, identity gaps, measurement noise
Contextual
Page/app/video context, keywords, genre
Open web, online video, CTV
Needs strong creative mapping to context
Journey-based
Stage, triggers, sequence logic
Email/SMS + paid media orchestration
Requires clean event tracking and rules discipline
Fig. Types of personalized ads at a glance.
Approach
Best for
Strengths
Blind spots
Platform reporting
Fast directional results inside one publisher
Cleanest view inside the platform
Hard to compare across publishers; limited transparency
View-through attribution (household/cross-device)
Performance readouts tied to exposed households
Connects exposure to later actions
Susceptible to over-crediting without incrementality
Incrementality lift tests
Proving causal impact
Closest thing to “truth” for outcomes
Requires design rigor, holdouts, and enough scale
MMM
Budget allocation and long-term ROI
Holistic view across channels
Less granular; slower feedback loop
Clean room analysis
Privacy-safe matching and measurement
Better governance; controlled joins
Still depends on match rates and shared taxonomies
Fig. Comparison table: common measurement approaches in CTV
Item to define
Example decision
Why it matters
Impression definition
Served vs started vs viewable
Stops conflicting “delivery” numbers between sources
Completion rule
100% watched vs quartiles
Keeps CPCV and completion rate meaningful
Attribution window
3/7/14-day view-through
Prevents inflated conversions and inconsistent reporting
Source of truth
Verification vendor vs DSP logs
Avoids post-campaign arguments about “real” results
Fig. What to lock before launch.
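In code terms, locking the attribution window means: only conversions landing within N days of a prior exposure get credit, and each household is credited at most once. The data shapes and the `attributed_conversions` helper below are hypothetical, but the logic is the rule the table asks you to define before launch.

```python
# Sketch of a locked attribution-window rule.
# Data shapes ({household_id: [datetime, ...]}) are illustrative assumptions.
from datetime import datetime, timedelta

def attributed_conversions(exposures, conversions, window_days=7):
    """Count conversions that land within `window_days` after an exposure."""
    window = timedelta(days=window_days)
    credited = 0
    for hh, conv_times in conversions.items():
        exp_times = exposures.get(hh, [])
        for c in conv_times:
            if any(timedelta(0) <= c - e <= window for e in exp_times):
                credited += 1
                break  # credit at most one conversion per household
    return credited

exposures = {"hh1": [datetime(2026, 1, 1)], "hh2": [datetime(2026, 1, 1)]}
conversions = {"hh1": [datetime(2026, 1, 5)], "hh2": [datetime(2026, 1, 20)]}
n = attributed_conversions(exposures, conversions, window_days=7)  # hh2 falls outside
```

Widening `window_days` after launch is exactly how "inflated conversions and inconsistent reporting" creep in, which is why the table says to lock it up front.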
Metric
What it tells you
Common misread
Reach / unique households
How many homes saw the ad at least once
Mistaking platform reach for deduped reach
Frequency
Average exposures per reached household
Focusing on averages and missing heavy overexposure
Completion rate
Percent of starts that finish
Treating completion as attention or persuasion
ROAS/CPA
Outcomes per dollar spent
Treating attribution as “proof” without incrementality
Fig. CTV KPIs translation: what it really means in CTV.
Campaign goal
Primary KPI
Good target to start
Adjust when…
Awareness
Reach + frequency
Frequency cap to avoid saturation; watch distribution
Reach stalls and frequency climbs quickly
Consideration
Completion + site visits
High completion plus consistent view-through
Completion is high but downstream actions are flat
Performance
CPA/ROAS + lift
CPA/ROAS steady with incremental lift
Attributed results rise but lift is weak or absent
Retention
Frequency + incremental revenue
Controlled frequency to known customers
Frequency increases but repeat purchase doesn’t
Fig. Benchmarks vs targets: a practical starting point.
Mistake
What it looks like
Fast fix
Optimizing to cheap CPM
Spend shifts to opaque apps; results get noisy
Set an “eligible impression” standard and block low-quality supply
Treating VTC as truth
ROAS spikes with no real business lift
Tighten windows and validate with a lift test
No deduplication plan
Great reach per platform, poor unique reach overall
Use cross-media measurement or dedupe logic across buys
Ignoring frequency distribution
“Avg frequency = 6” but some homes see 25+
Review histograms and cap aggressively where needed
Fig. Common measurement mistakes and fast fixes.
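The frequency-distribution mistake in the last row is easy to catch from impression logs: the average hides the tail. A minimal sketch (the `frequency_report` helper and cap value are illustrative):

```python
# Flag households whose exposure count blows past the cap, which an
# "average frequency" readout hides. Helper name and cap are illustrative.
from collections import Counter

def frequency_report(impression_log, cap=10):
    """impression_log: list of household IDs, one entry per served impression."""
    freq = Counter(impression_log)
    avg = len(impression_log) / len(freq)
    over_cap = {hh: n for hh, n in freq.items() if n > cap}
    return {"avg_frequency": round(avg, 1), "over_cap": over_cap}

# One household absorbs 25 of 30 impressions, yet the average looks tame.
log = ["hh1"] * 25 + ["hh2"] * 3 + ["hh3"] * 2
report = frequency_report(log, cap=10)
```

This is the "review histograms and cap aggressively" fix in miniature: act on the `over_cap` tail, not the average.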
Aspect
Digital Audio Advertising
Traditional Radio Advertising
Audience targeting
Highly granular targeting based on interests, listening behavior, location, device, and time of day
Broad demographic targeting based on station and time slot
Personalization
Dynamic, personalized audio ads that can change by audience segment or context
One static ad message for all listeners
Measurement & tracking
Full performance tracking (impressions, reach, frequency, completion rates, attribution signals)
Limited measurement, mainly estimated reach and GRPs
Buying model
Programmatic and flexible, with real-time optimization and budget control
Fixed schedules and upfront buys
Scale & reach
Scales across streaming apps, podcasts, digital radio, smart speakers, and in-car audio
Limited to terrestrial broadcast coverage
Listener context
Reaches listeners during on-demand, high-attention moments
Reaches listeners during live broadcasts with less control over context
Optimization
Continuous optimization based on performance data
Minimal optimization once the campaign is live
ROI transparency
Clear visibility into results, supporting both brand and performance goals
ROI often inferred rather than directly measured
Signal
Often available?
What it tells you
Common limitation
App / bundle name
Sometimes
Where the ad claims to run
Often missing or inconsistent
Device type / OS
Usually
Environment and format assumptions
Can be spoofed
Supply chain path (SCO)
Sometimes
Who touched the impression
Not always present or audited
Log-level playback evidence
Rare
Whether delivery matches reality
Often withheld or delayed
Fig. What you can verify vs what you can’t.
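The supply-chain signal in the third row can be partially verified yourself: ads.txt files are plain text, so checking whether a seller account is authorized takes a few lines. The file contents below are made-up examples, and a real validation pass should also consult sellers.json and the bid request's schain object.

```python
# Sketch of an ads.txt-style authorization check.
# The ads.txt contents here are invented examples, not real publisher records.
def parse_ads_txt(text):
    """Parse 'domain, account_id, relationship' records, ignoring comments."""
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        parts = [p.strip() for p in line.split(",")]
        if len(parts) >= 3:
            records.append({"domain": parts[0].lower(),
                            "account_id": parts[1],
                            "relationship": parts[2].upper()})
    return records

def is_authorized(records, ad_system, account_id):
    """True if the (ad system, seller account) pair appears in ads.txt."""
    return any(r["domain"] == ad_system.lower() and r["account_id"] == account_id
               for r in records)

ads_txt = """
exampleexchange.com, 12345, DIRECT  # hypothetical entry
resellerhub.net, 98765, RESELLER
"""
records = parse_ads_txt(ads_txt)
ok = is_authorized(records, "exampleexchange.com", "12345")
```

An impression arriving from a seller account that fails this check is exactly the "unauthorized reseller" case the supply-chain row warns about.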
Fraud scheme
What it looks like in reporting
Why it’s dangerous
First-line defense
Device spoofing
“Premium CTV” delivery from odd devices/OS mixes
You pay CTV CPMs for non-CTV traffic
Require app transparency + device validation signals
SSAI spoofing
Very high completion rates with thin app detail
Playback signals can be simulated or masked
Demand SSAI transparency + verification coverage
App spoofing/misrepresentation
Big delivery to “known” apps that you can’t confirm
You think you’re buying premium, but you aren’t
App allowlists + bundle validation
Botnets/automated traffic
Stable volume, unnatural dayparts/geos
Fake viewers contaminate performance learning
IVT filtering + anomaly monitoring
Reselling/supply path manipulation
“Direct” supply with unexpected intermediaries
Accountability disappears; fraud hides in hops
SPO + authorized-seller enforcement
Fig. CTV fraud schemes at a glance.
What gets distorted
What you see
What’s actually happening
Why it matters
Reach
More “unique households” than expected
Fake IDs inflate scale
Planning decisions get overconfident
Frequency
“Controlled” frequency that still feels saturated
Duplicated/unstable IDs
Waste rises while incrementality drops
Completion rate
Near-perfect video completions
Automation or SSAI spoofing
Optimizers chase fake quality
Attribution
Conversions that don’t match business reality
Bots pollute exposure paths
ROI debates become unwinnable internally
Fig. Impact map: how fraud damages ROI.
Stage
What to do
What to require
What to block
Pre-bid
Filter risky supply before the auction
App transparency, seller authorization, SPO rules
Unknown bundles, suspicious “direct” sellers
In-flight
Watch anomalies and tighten fast
Frequent quality reporting, quick block workflow
Delivery spikes, implausible patterns
Post-bid
Audit and claw back where possible
Log-level diagnostics, refund/makegood terms
Repeat offenders, unverifiable inventory
Fig. Prevention controls by stage.
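The in-flight stage above amounts to anomaly detection on delivery. A minimal z-score check on hourly volume, with illustrative thresholds (production IVT systems use far richer models than this):

```python
# Flag an app/bundle whose hourly impression volume jumps far outside its
# historical mean. Threshold and sample numbers are illustrative only.
from statistics import mean, stdev

def is_anomalous(history, current, z_threshold=3.0):
    """history: past hourly impression counts for one app or bundle."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    z = (current - mu) / sigma
    return z > z_threshold

history = [1000, 1100, 950, 1050, 980, 1020]
flag = is_anomalous(history, current=5000)  # a 5x spike should trip the check
```

Wiring a check like this to an auto-block workflow is the "delivery spikes, implausible patterns" control from the in-flight row.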
Shift
What’s changing
Implication for advertisers
Streaming overtakes linear
More TV time is spent in streaming apps than on broadcast or cable.
Plans need to treat CTV as the core reach driver, not an experimental add-on.
Ad-supported tiers surge
Viewers increasingly choose lower-priced, ad-supported streaming plans.
There is more premium, big-screen ad inventory with better targeting.
Measurement gets granular
Cross-device attribution and incrementality testing become standard.
TV budgets can be justified using the same language as performance channels.
AI in planning & creative
AI tools shape media mix, creative variants, and pacing decisions.
Marketers can test more ideas faster, but need clear guardrails and goals.
Fig. Macro shifts reshaping TV in 2026.
Industry
Role of TV/CTV today
Typical objectives
CTV advantages
Retail & ecommerce
Always-on presence around promotions and tentpoles.
Incremental sales, new-to-brand buyers.
Shoppable formats and retail data links.
Streaming & entertainment
Launches, new seasons, sports and tentpole events.
Subscriber growth, viewing hours.
Precise audience segments by taste.
Automotive
Launches, consideration, and dealer support.
Test drives, online configurator visits.
In-market targeting and dealer attribution.
Finance & insurance
Trust-building and product education.
Applications, quotes, account sign-ups.
Household-level segmentation and ROAS.
Fig. Industries leading TV & CTV spend.
Brand
Category
Primary TV/CTV goal
Notable CTV angle
Amazon
Retail & streaming
Drive retail sales, Prime sign-ups, and brand equity.
Prime Video + retail data for closed-loop measurement.
Disney (incl. Hulu/ESPN)
Entertainment & sports
Grow streaming subs and event tune-in.
Deep sports portfolio and family content.
P&G
CPG
Maintain category leadership and mental availability.
Household-level targeting via retail partners.
Toyota / Ford
Automotive
Launch new models and push in-market audiences to dealers.
In-market targeting and dealership attribution via CTV.
Fig. Snapshot of top TV advertisers in 2026.
Platform
Core strengths for advertisers
When it’s especially useful
YouTube on TV screens
Massive reach, strong under-35 audience, granular targeting.
When you need scale plus performance-style reporting.
Disney (Hulu/Disney+/ESPN)
Premium scripted content and live sports.
Brand-safe reach and family/sports environments.
Amazon Prime Video
Logged-in shoppers plus retail measurement.
When you want TV to feed directly into retail sales.
Roku / The Roku Channel
OS-level placements and broad FAST footprint.
Incremental reach and advanced attribution use cases.
Fig. Comparing major CTV platforms.
KPI
Why it matters
Interaction rate (IR)
This is the foundational metric for interactive CTV. It measures the percentage of impressions that resulted in a deliberate action (click, choice, QR scan, explore). Unlike views, interaction rate signals active intent, making it a stronger indicator of message resonance and creative clarity.
Primary action completion rate
Not all interactions are equal. Track how many viewers complete the intended action—such as visiting a landing page, requesting an offer, or adding a product to a cart. This KPI connects creative design directly to business goals and helps identify where friction still exists.
Time earned / depth of engagement
Interactive ads often extend attention beyond the base ad length. Measuring how long viewers stay engaged—or how many steps they complete—reveals whether interactivity is adding value or creating drop-off. Deeper engagement typically correlates with stronger recall and consideration lift.
Post-ad outcomes (lift metrics)
Look beyond the TV screen. Metrics like site visits, search lift, sign-ups, and conversions attributed to interactive exposure show whether engagement translated into downstream action. These outcomes distinguish interesting ads from effective ads.
Cross-device impact
Because interactive CTV often activates follow-up journeys on mobile or desktop, cross-device KPIs matter. Track assisted conversions, retargeting performance, and sequential message lift to understand how CTV interactions influence behavior across channels.
Cost per action (CPA), not just CPM
Interactive CTV reframes efficiency. While CPM provides buying context, cost per qualified action (engaged visit, completed interaction, conversion) is the KPI that reflects ROI. Campaigns optimized to CPA rather than impressions consistently deliver clearer business value.
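The KPI arithmetic above is straightforward; the discipline is reporting cost per action alongside CPM rather than instead of it. A sketch with invented campaign numbers:

```python
# Interactive CTV KPI math: interaction rate, action completion rate,
# and cost per qualified action next to CPM. All inputs are invented.
def interactive_ctv_kpis(impressions, interactions, completed_actions, spend):
    return {
        "interaction_rate": interactions / impressions,
        "action_completion_rate": completed_actions / interactions,
        "cpm": spend / impressions * 1000,       # cost per thousand impressions
        "cost_per_action": spend / completed_actions,
    }

kpis = interactive_ctv_kpis(impressions=200_000, interactions=4_000,
                            completed_actions=800, spend=6_000.0)
```

A $30 CPM can look expensive until the $7.50 cost per completed action sits next to it, which is the reframing the last row argues for.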
Dimension
Classic web retargeting
CTV retargeting
What to watch
Identity anchor
Browser cookies / mobile ad IDs (MAIDs)
Household + identity graph
Match rate and suppression speed
Ad environment
Mixed quality open web + apps
Premium streaming inventory
Supply path transparency matters
Creative format
Click-friendly units
Lean-back video storytelling
CTA design (QR, vanity URL, timing)
Measurement
Click + last-touch heavy
View-through + lift-friendly
Define success before launch
Fig. CTV retargeting vs classic web retargeting.
Intent tier
Typical signals
Best first message
Best second message
High
Cart/checkout start, pricing page
Friction removal + reassurance
Offer/urgency (light touch)
Medium
PDP/category views, repeat visits
Differentiation + proof
Objection handling
Warm CRM
Past buyers, loyalty members
“What’s new” + value reminder
Cross-sell/upsell with benefit focus
Lapsed
No purchase in X days
Re-intro + progress since last time
Win-back hook + easy re-entry
Fig. Audience tiers and best-fit messaging.
Exposure
Job of the ad
What to show
CTA style
1
Reconnect
Core value + category fit
“Learn more” / “See options”
2
Resolve
Proof, reviews, shipping/returns, guarantees
“Get details” / “Compare”
3
Close
Offer or urgency + simple next step
QR + short prompt (“Finish checkout”)
Fig. Sequencing blueprint (what to run on exposure 1, 2, 3).
Metric/model
Best for
When it misleads
What to pair it with
Completion rate
Creative quality check
Doesn’t prove impact
Visit rate or lift
Visit rate
Re-engagement signal
Can inflate from existing intent
Control/holdout where possible
View-through conversions
Shorter purchase cycles
Over-credits “would’ve happened anyway”
Tight windows + suppression
Incremental lift
Proving causality
Needs volume + clean setup
Conversion + ROAS reporting
Fig. Measurement model selection guide.
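The incremental-lift row reduces to comparing exposed and holdout conversion rates. A minimal sketch with illustrative numbers (a real test also needs randomized holdouts, enough volume, and significance checks):

```python
# Core lift arithmetic: exposed vs holdout conversion rates.
# Input counts are invented for illustration.
def incremental_lift(exposed_conv, exposed_n, holdout_conv, holdout_n):
    treated_rate = exposed_conv / exposed_n
    control_rate = holdout_conv / holdout_n
    lift = (treated_rate - control_rate) / control_rate   # relative lift
    incremental = (treated_rate - control_rate) * exposed_n
    return {"treated_rate": treated_rate, "control_rate": control_rate,
            "relative_lift": lift, "incremental_conversions": incremental}

result = incremental_lift(exposed_conv=600, exposed_n=50_000,
                          holdout_conv=100, holdout_n=10_000)
```

Here 600 attributed conversions shrink to roughly 100 incremental ones, which is precisely the gap between "attributed results rise" and "lift is weak" that the pitfalls table flags.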
Pitfall
What it looks like
Root cause
Fix in one move
Over-targeting
High frequency, low reach
Too many filters
Add a medium-intent tier
Broad caps
Stable avg freq, angry comments/low lift
Tail overexposure
Cap the tail + rotate creative
Poor match
Under-delivery
Weak identity resolution
Improve first-party capture + onboarding
Weak attribution
“It performed great” but no business movement
Wrong KPI
Predefine success + run lift
Fig. Pitfalls and fast fixes.
Dimension
Linear TV Remnant Inventory
CTV Remnant Inventory
How remnant inventory emerges
Arises from unsold broadcast or cable ad slots after upfront and scatter commitments
Arises from unfilled streaming ad impressions left over after direct-sold and programmatic guaranteed deals
Buying timing
Typically purchased very close to airtime, often days or hours in advance
Can be accessed closer to real time through programmatic or platform-based buying
Placement control
Limited control over program context, daypart, or exact slot position
Greater control through content categories, app-level placement, or audience signals
Audience targeting
Relies primarily on broad demographic estimates and ratings
Uses household-level or device-based targeting and first-party data
Pricing dynamics
Lower CPMs than premium linear buys, but highly volatile
CPMs vary widely; often lower than premium CTV but higher than linear remnant
Delivery guarantees
Minimal or no guarantees on impressions or reach
Soft delivery expectations possible, but still less predictable than premium CTV
Measurement and reporting
Based on panel-based ratings and estimated reach
Digital-style reporting with impression-level logs and frequency controls
Best-fit use cases
Incremental reach, frequency extension, or short-term budget utilization
Scalable reach with improved targeting for performance or test-and-learn campaigns
The Drawback
Why? 
Limited and inconsistent scale is one of the most common challenges.
Because remnant inventory is defined by what remains unsold, availability fluctuates based on demand cycles, seasonality, and broader market conditions. Advertisers cannot rely on remnant ads to deliver consistent volume week over week, particularly during high-demand periods such as Q4, election cycles, or major live events.
Delivery is inherently inconsistent and difficult to forecast.
Unlike premium or upfront media buying, remnant TV advertising typically comes with no guaranteed impression volume, reach, or frequency. Campaigns may overdeliver in one market and underdeliver in another, making pacing and optimization more complex—especially for teams accountable to fixed performance targets.
Audience composition can be unpredictable
In linear TV, remnant ads often rely on broad demographic estimates rather than precise audience controls. In CTV environments, targeting may be more advanced, but remnant availability can still shift quickly as platforms rebalance demand. This variability increases the risk of inefficient impressions when reach quality matters more than reach volume.
Remnant advertising also depends on weak or uneven market demand.
When advertiser demand is strong, remnant supply contracts. When demand softens, remnant inventory expands—but that expansion often reflects broader market uncertainty rather than incremental opportunity. As a result, remnant TV cannot be scaled reliably as a long-term growth lever.
Forecasting and planning are inherently constrained. 
Because remnant inventory appears late in the buying cycle, it is difficult to model outcomes accurately in advance. This makes remnant TV poorly suited for campaigns where reach guarantees, brand safety controls, or strict delivery timelines are non-negotiable.
Factor
Upfront TV Buying
Scatter TV Buying
Remnant TV Advertising
Primary objective
Secure premium inventory and guaranteed reach
Adjust plans closer to airtime based on demand
Monetize leftover inventory efficiently
Cost level
Highest CPMs due to premium positioning
Mid-to-high CPMs depending on demand
Lowest CPMs; often 30–60% cheaper than premium
Inventory type
Premium inventory with high-demand programming
Mix of premium and standard inventory
Unsold remnant inventory
Delivery guarantees
Strong guarantees on impressions, reach, and placement
Partial guarantees depending on deal terms
No delivery or placement guarantees
Scalability
High and predictable
Moderate and demand-dependent
Limited and inconsistent
Flexibility
Low; long-term commitments required
Moderate; short-term adjustments possible
High; fast activation and short commitments
Planning window
Months in advance
Weeks to days before airtime
Days or hours before airtime
Placement control
High control over networks, programs, and dayparts
Moderate control
Low control over exact ad slots
Audience predictability
High and ratings-based
Moderate and variable
Low and unpredictable
Best-fit use cases
Brand launches, tentpole campaigns
Seasonal pushes, reallocations
Cost-efficient reach, testing, gap filling
Risk profile
Low risk, high commitment
Medium risk, balanced flexibility
Higher volatility, lower financial risk
Visit rule component
What it controls
What “good” looks like
What goes wrong if weak
Polygon boundary
Location accuracy
Hand-drawn polygons that match the storefront footprint
Parking lots + neighboring tenants inflate visits
Dwell-time threshold
Drive-bys
Threshold varies by venue type (e.g., QSR vs big-box)
“Visits” become pass-throughs
Repeat-visit suppression
Employee/regular bias
Frequency caps + employee filtering logic
Loyalists and staff dominate lift
Time-of-day exclusions
Operational noise
Exclude closed hours / delivery windows when needed
False positives spike overnight
Fig. Valid visit definition checklist.
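The visit rules in the checklist above can be combined into a single validity check. The sketch below is illustrative, not any vendor's actual logic: the polygon test is a plain ray-casting algorithm, and the dwell threshold and open-hours window are hypothetical defaults you would tune per venue type.

```python
from datetime import datetime

def point_in_polygon(lat, lon, polygon):
    """Ray-casting test: is (lat, lon) inside the storefront polygon?
    polygon is a list of (lat, lon) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        la1, lo1 = polygon[i]
        la2, lo2 = polygon[(i + 1) % n]
        if (lo1 > lon) != (lo2 > lon):
            # latitude where this edge crosses the point's longitude
            x = la1 + (lon - lo1) * (la2 - la1) / (lo2 - lo1)
            if x > lat:
                inside = not inside
    return inside

def is_valid_visit(pings, polygon, min_dwell_s=120, open_hours=(8, 22)):
    """A stream of (timestamp, lat, lon) pings counts as a visit only if
    pings fall inside the polygon, during open hours, and span the dwell
    threshold — filtering drive-bys and overnight false positives."""
    inside = [t for (t, la, lo) in pings
              if point_in_polygon(la, lo, polygon)
              and open_hours[0] <= t.hour < open_hours[1]]
    if not inside:
        return False
    dwell = (max(inside) - min(inside)).total_seconds()
    return dwell >= min_dwell_s
```

A single drive-by ping fails the dwell check; a 3 a.m. ping fails the open-hours exclusion, even if it lands inside the polygon.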
Channel
Exposure granularity
Typical matching approach
Common interpretation mistake
Mobile/in-app
Device-level (when permitted)
MAIDs / consented IDs + location events
Treating “matched” as “complete coverage”
CTV
Household/device graph level
Household graphs + modeled visitor likelihood
Assuming person-level certainty from household exposure
DOOH
Inferred presence
Time/place cohorts + probabilistic matching
Confusing “near the screen” with “saw the ad”
Retail media
Platform-reported cohorts
Closed-loop + retailer identity layer
Treating retailer measurement as cross-channel truth
Fig. Exposure and matching by channel
Metric
What it really tells you
Best use
Easy trap
Attributed visits
“Visits happened after exposure”
Directional pattern-spotting
Mistaking it for causality
Incremental visits
“Visits above expected baseline”
Budget decisions + channel comparison
Ignoring confidence/variance
Lift %
Relative difference vs control
Compare segments/geos
Celebrating small lifts on tiny samples
Cost per incremental visit
Spend efficiency vs baseline
Portfolio optimization
Optimizing a fragile control design
Fig. How to interpret common foot traffic metrics.
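The arithmetic behind the table's metrics is simple; the traps are in what the numbers are allowed to mean. A minimal sketch, deliberately omitting the confidence/variance handling the table warns you not to ignore:

```python
def foot_traffic_metrics(exposed_visits, exposed_n,
                         control_visits, control_n, spend):
    """Compute incremental visits, lift %, and cost per incremental visit
    from exposed vs. control group counts. Simplified: no variance or
    significance testing, which real budget decisions require."""
    exposed_rate = exposed_visits / exposed_n
    control_rate = control_visits / control_n
    expected_baseline = control_rate * exposed_n      # visits expected with no ads
    incremental = exposed_visits - expected_baseline  # visits above baseline
    lift_pct = (exposed_rate - control_rate) / control_rate * 100
    cpiv = spend / incremental if incremental > 0 else float("inf")
    return {"incremental_visits": incremental,
            "lift_pct": lift_pct,
            "cost_per_incremental_visit": cpiv}
```

For example, 600 visits from 10,000 exposed devices against 500 from 10,000 controls yields 100 incremental visits and 20% lift; whether that lift is real depends on sample size, which this sketch does not check.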
What to ask
Minimum acceptable answer
Red flag
Why it matters
“Show me the visit definition.”
Polygons + dwell + exclusions explained clearly
“Proprietary logic” with no detail
Visit rules drive the whole outcome
“How is the control built?”
Match criteria + holdout method + bias controls
Control = “everyone else”
Lift becomes correlation
“What’s match rate/coverage?”
Clear coverage proxy + where it breaks
Only lift, no coverage
You can’t judge reliability
“How do you dedupe cross-channel?”
One person/household counted once
Separate channel dashboards
Double-counted “wins”
Fig. Vendor transparency questions that protect your results.
Aspect
Addressable TV
Connected TV
2025 market status
53% consider "must-buy"
$33.48B projected spend
Audience trend
Declining with cord-cutting
68.4% population already streaming
Best for
Household targeting in premium content
Individual targeting with measurement
Entry cost
$50,000-100,000 minimum
$5,000-10,000 minimum
Key strength
Precision within traditional TV
Growing reach + digital capabilities
Fig. Addressable TV vs Connected TV platform comparison at a glance.
Targeting feature
Addressable TV
Connected TV
Granularity
Household level
Individual/device level
Data sources
Cable/satellite subscriber data
Behavioral, app usage, cross-device
Geographic precision
DMA/zip code level
IP address/neighborhood level
Audience insights
Limited to household demographics
Detailed viewing patterns per user
Retargeting ability
Not available
Full cross-device retargeting
Fig. Targeting capabilities of addressable TV vs CTV.
Feature
Connected TV (CTV)
Addressable TV
Delivery Method
Internet-based streaming to smart TVs and connected devices
Cable/satellite set-top boxes to traditional TVs
Targeting Precision
Individual user or device-level targeting with behavioral data
Household-level targeting based on provider data
Audience Reach
Rapidly growing, projected to reach $42B by 2028
Limited to pay-TV subscribers, shrinking due to cord-cutting
Measurement
Real-time analytics with digital attribution
Improved over linear TV but limited to aggregate data
Cost Structure
Generally lower CPMs, accessible to more advertisers
Premium pricing for targeted reach
Ad Formats
Interactive, shoppable, QR-enabled
Traditional 15/30/60-second spots
Buying Process
Primarily programmatic with real-time optimization
Direct from providers, some programmatic emerging
Inventory Sources
Streaming services, FAST channels, apps
Cable and satellite provider inventory
Frequency Control
Challenging due to platform fragmentation
Easier within single provider ecosystem
Creative Flexibility
Supports dynamic creative and personalization
Standard creative across targeted households
Fig. Differences between addressable TV vs connected TV.
Future indicator
Addressable TV
Connected TV
2025-2028 growth
Limited by cord-cutting
Double-digit growth expected
AI integration
Emerging capabilities
Advanced implementation
Inventory expansion
Shrinking base
FAST channels + AVOD growth
Programmatic adoption
Growing but limited
80%+ already programmatic
Industry investment
Defensive innovation
Aggressive expansion
Fig. The future of addressable TV and CTV.
Network
Avg. primetime viewers (2024/2025)
Typical CPM (estimated)
Primary audience
Fox News
2.4 million (2025) [Source]
$35 - $50+
Adults 50+, conservative-leaning
ESPN
1.9 million (2025) [Source]
$40 - $65
Sports fans, men 25-54, increasingly diverse viewership
MSNBC
906,000 (2025) [Source]
$25 - $40
Adults 50+, progressive-leaning
CNN
480,000 (2025) [Source]
$25 - $40
Adults 35+, general news consumers, focus on digital growth
HGTV
628,000 (2025) [Source]
$20 - $30
Women 25-54, homeowners, co-viewing audience
Fig. Top cable networks by viewership, estimated CPM, and primary audience.
Aspect
Traditional buying
Programmatic buying
Process
Manual negotiations with sales reps
Automated platform-based buying
Pricing
Rate card negotiations
Dynamic, market-driven pricing
Targeting
Demographics and dayparts
Behavioral and household-level
Optimization
Post-campaign analysis
Real-time adjustments
Minimum investment
$5,000-$10,000 per market
Flexible, often lower minimums
Lead time
2-4 weeks
24-48 hours
Fig. Traditional vs. programmatic cable TV buying
Metric category
Traditional metrics
Modern metrics
Business impact
Reach
GRPs, Impressions
Incremental unique reach, Cross-screen measurement
Market penetration
Engagement
Viewability
Second-by-second attention, Social amplification
Brand consideration
Attribution
Post-campaign surveys
Multi-touch modeling, Real-time dashboards
Revenue contribution
ROI
Cost per point
Cost per acquisition, ROAS
Profit margin impact
Fig. Cable TV campaign metrics and KPIs.
Feature
CDPs
Data clean rooms
Primary role
Unify first-party customer data
Enable privacy-safe audience matching
Use case
Audience segmentation & activation
Cross-platform measurement & attribution
Data access
Owned by advertiser
Shared securely between advertiser/publisher
Key benefit
Better targeting precision
Privacy-safe visibility across walled gardens
Fig. CDPs vs. data clean rooms in OTT.
Format
Performance benchmark
Why it matters
QR code overlays
Up to 70% scan rate
Strong bridge to mobile engagement
Shoppable ads (Roku × Walmart)
3× higher sales vs. standard ads
Direct commerce impact
Interactive creative (Innovid)
+192 seconds extra engagement time
Deeper brand interaction
Connected TV commerce usage
~33% of U.S. viewers purchased via CTV
Shows the TV purchase journey is real
Fig. Interactive and shoppable ad performance benchmarks.
Device
Action
Description
Smart TV
See an ad
A viewer notices a brand ad while streaming their favorite show.
Smartphone
Look up the brand
Curious, they grab their phone to search for the product and check reviews.
Tablet
Engage with social content
The next day, they see a retargeted video ad on social media and tap to learn more.
Laptop
Visit brand site
Later in the week, they visit the brand’s website, browsing product details and FAQs.
Phone (email)
Redeem offer
They receive a promo code via email and click through on their phone.
Desktop
Complete purchase
At home, they finalize the purchase on their desktop using the discount.
Fig. Sample illustration of how campaigns guide users step by step across devices. 
Media planning
Media buying
Defines communication objectives, budgets, channels, and KPIs.
Converts the plan into specific deals, placements, and contracts.
Builds the cross-media allocation (how much to TV vs. other video).
Negotiates rates, positioning, and terms; selects partners and platforms.
Establishes audience strategy, reach/frequency goals, and measurement framework.
Activates and stewards campaigns (traffic, pacing, optimization, makegoods).
Delivers the flowchart/phasing and testing roadmap.
Reconciles and reports on currency delivery, quality, and outcomes.
Fig. TV planning and buying.
Type
Best for
Pricing & guarantees
Watchouts
One tip
Upfronts / NewFronts
Guaranteed access to high-demand programming; predictable national reach
Impressions/GRPs with audience guarantees and makegoods; negotiated once
Limited flexibility; tight cancellation windows
Lock your primary currency and a secondary verification view in the IO; push for pod position/sponsorships
Scatter
Agility around launches and late-breaking hits
Market-driven CPMs; delivery can be guaranteed, terms vary
Availability risk for marquee content
Keep a hybrid posture: guaranteed base + scatter reserve
Remnant
Cost-efficient reach extension and light testing
Discounted, dynamic CPMs; limited control; rare guarantees
Less control of context, timing, and pod position
Use allowlists/suitability filters; treat as top-up, not foundation
Programmatic
Audience precision and mid-flight control
Auction CPMs or programmatic guaranteed; targeted CPMs higher but reduce waste
Supply-path complexity, brand safety, IVT risk
Use curated PMPs with app/channel disclosure; set global frequency caps
Fig. Types of TV buys at a glance.
Provider
2025/26 status (high-level)
Strengths
Typical use
Nielsen ONE (Big Data + Panel)
MRC-accredited big data + panel methodology
Panel-calibrated measurement at big-data scale
Primary national transaction currency
VideoAmp
JIC-certified (use-case specific)
Cross-platform dedup from commingled STB/smart TV data
Alternative transaction currency and planning
iSpot
JIC-certified (use-case specific)
Rapid readouts; TV + streaming verification
Secondary view and outcome linkage
Comscore
JIC-certified (expanded personified demos)
Scale from STB/smart TV data
Planning and supplementary reporting
Fig. Currencies & use cases (2025/26)
Phase
What to confirm
Owner
Pre-flight creative
Claims substantiation; S&P clearance; CALM loudness; captions
Brand + Creative + Legal
Contracts/IOs
Transaction currency; IVT thresholds; makegood terms; data rights
Brand + Agency
Trafficking
Correct copy mapping; Ad-ID/ISCI; QR/URL checks
Agency + Publishers
In-flight
Delivery vs. plan; deduped reach/frequency; IVT/suitability
Agency (weekly)
Post-campaign
Makegoods; incremental reach; outcome KPIs; learnings
Brand + Agency
Fig. Compliance checklist by phase.
Dimension
Traditional video buying
Programmatic video buying
Buying method
Manual IOs and upfronts
Automated, impression-level via DSPs
Speed to market
Days–weeks (negotiation, trafficking)
Minutes–hours (activate, auto-optimize)
Targeting
Broad demos (e.g., A25–54, network/daypart)
Granular: first-party, retail, contextual, geos, devices
Optimization
Limited, flight-level changes
Real-time bidding, pacing, and creative rotation
Pricing
Fixed CPMs, package rates
Dynamic/first-price auctions; vCPM/CPCV options
Inventory access
Specific networks/shows/sites
Thousands of publishers, CTV apps, PMPs, PG
Frequency control
Coarse, channel-by-channel
Cross-publisher caps; household-level on CTV
Measurement
GRPs, post buys, panel estimates
VCR, viewability, AVOC, on-target %, conversions, lift
Transparency
Placement lists after the fact
Site/app logs, supply-path disclosure (ads.txt, sellers.json)
Brand safety
Upfront standards per seller
Pre-bid suitability filters, inclusion lists, verification
Creative flexibility
Fixed cuts per buy
Multiple variants; DCO and sequencing
Attribution
Limited to reach/GRP models
Click/view-through, incrementality, cross-device
Supply path control
Single seller relationship
SPO: direct paths, PMP/PG, reseller pruning
Reporting cadence
Weekly–monthly recaps
Live dashboards; 15–60 min refresh typical
Fig. Traditional vs programmatic video: quick comparison.
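The "cross-publisher caps; household-level on CTV" row above implies a shared counter keyed to a household ID with a rolling window. A minimal sketch of that mechanism (cap and window values are illustrative, not a recommendation):

```python
from collections import defaultdict

class FrequencyCap:
    """Cross-publisher frequency cap: count impressions per household ID
    within a rolling window and suppress bids once the cap is reached."""
    def __init__(self, cap=3, window_s=86400):
        self.cap = cap
        self.window_s = window_s
        self.log = defaultdict(list)  # household_id -> impression timestamps

    def should_bid(self, household_id, now):
        # Drop timestamps that have aged out of the window, then compare.
        recent = [t for t in self.log[household_id] if now - t < self.window_s]
        self.log[household_id] = recent
        return len(recent) < self.cap

    def record(self, household_id, now):
        self.log[household_id].append(now)
```

Because the counter is keyed to the household rather than a single publisher's cookie, two CTV apps hitting the same household both decrement the same budget — the control linear buys can only approximate channel by channel.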
Area
2024 result
Context / notes
True ad spend efficiency
43.9% of spend entering a DSP now reaches consumers
+7.9 percentage points vs prior; ≈ +$79 per $1,000
MFA spend
Average down to 6.2%; median down to 1.1%
From 15% (avg) and 10% (median) in 2023
Publisher count
22,634 domains/apps
Down from 44,000; reflects more refined, safer placements
Supply partner optimization
Average SSPs used remains just above 19
Indicates room to streamline supply paths
Log-level data (LLD) access
Most suppliers now provide LLD; some limits persist
New providers in Q3 2024 LLD Register: Adlook, Equative, TripleLift, Viant, Yahoo
Fig. ANA programmatic benchmarks 2024 (Source).
Format
Best for
Primary KPIs
Typical use cases
Watch-outs
CTV / in-stream
Broad reach, recall
VCR, AVOC, on-target reach, household frequency
Launches, sequential 15s/30s storytelling, TV extension
Cap HH frequency; pod position / competitive separation
Outstream
Incremental reach, mid-funnel
Viewable completion, time in view
Mobile feeds, in-article placements
Sound-off design (captions); confirm it’s truly outstream
Native / in-feed
Discovery, engagement
Scroll depth, engagement, CPCV
Publisher feeds, recommendation units
Creative fit with context; avoid “banner in disguise”
Interactive / shoppable
Action, lower-funnel assist
Interaction/scan rate, add-to-cart/visit attribution
QR overlays, product galleries, offers
Don’t sacrifice completion for clicks; measure both
Fig. Format selection matrix (goal × format × KPIs).
Area
Metrics to include
Eligibility & quality
Viewability (MRC-defined), AVOC, IVT rate
Delivery
Reach (deduplicated), on-target %, household frequency (CTV)
Performance
VCR and quartiles, interaction/scan rate, vCPM, CPCV
Outcomes
Brand lift, conversions/ROAS, VTC (declared window), incrementality
Fig. Quick checklist to standardize reporting.
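To make the checklist concrete, here is how the core metrics reduce to arithmetic over raw delivery counts. A simplified sketch: the real MRC definitions add timing and pixel-in-view conditions that raw counts alone don't capture.

```python
def video_report(impressions, viewable, completes,
                 audible_viewable_completes, spend, ivt_impressions):
    """Standardized report from raw counts:
    viewability %, IVT rate, VCR, AVOC, vCPM, and CPCV."""
    return {
        "viewability_pct": 100 * viewable / impressions,
        "ivt_rate_pct": 100 * ivt_impressions / impressions,
        "vcr_pct": 100 * completes / impressions,
        "avoc_pct": 100 * audible_viewable_completes / impressions,
        "vcpm": spend / viewable * 1000,  # cost per 1,000 viewable impressions
        "cpcv": spend / completes,        # cost per completed view
    }
```

Note that vCPM divides by viewable impressions, not served ones — the same spend looks more expensive, but the denominator is the exposure you actually bought.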
Acronym
Stands for
Definition
CTV
Connected TV
Internet-delivered TV on big screens.
OTT
Over-the-top
Streaming content delivered via internet, any device.
OLV
Online video
Video ads on web and in apps (non-CTV).
DSP
Demand-side platform
Buyer tech to bid, target, and optimize.
SSP
Supply-side platform
Seller tech to package and auction inventory.
RTB
Real-time bidding
Auctioning ad impressions in milliseconds.
OpenRTB
Open Real-Time Bidding
Industry protocol for RTB requests/bids.
PMP
Private marketplace
Invitation-only, biddable premium inventory.
PG
Programmatic guaranteed
Reserved inventory at fixed price via APIs.
SPO
Supply path optimization
Steering spend to efficient, trusted paths.
MFA
Made-for-advertising
Low-quality sites built to farm ad revenue.
VAST
Video Ad Serving Template
Standard for serving/tracking video ads.
OM / OMID
Open Measurement Interface Definition
Standardized viewability/IVT measurement.
SSAI
Server-side ad insertion
Ads stitched into streams server-side.
DAI
Dynamic ad insertion
Targeted ad replacement in video streams.
ACR
Automatic content recognition
TV content detection for measurement/targeting.
IVT
Invalid traffic
Non-human or fraudulent ad activity.
GIVT/SIVT
General/Sophisticated invalid traffic
Basic vs. advanced fraud types.
VCR
Video completion rate
% of ads watched to completion.
AVOC
Audible and viewable on completion
Quality exposure composite metric.
vCPM
Viewable CPM
Cost per thousand viewable impressions.
CPCV
Cost per completed view
Cost only when video completes.
CPV
Cost per view
Cost when a view threshold is met.
CTR
Click-through rate
Clicks divided by impressions.
CPA
Cost per acquisition
Cost per desired action (e.g., signup).
ROAS
Return on ad spend
Revenue divided by ad spend.
KPI
Key performance indicator
Primary metric of success.
DCO
Dynamic creative optimization
Auto-variant testing/personalization of ads.
CMP
Consent management platform
Manages user consent signals.
LLD
Log-level data
Impression-level transaction logs.
IAB
Interactive Advertising Bureau
Digital ad standards and research body.
MRC
Media Rating Council
Accreditation and measurement standards.
TAG
Trustworthy Accountability Group
Anti-fraud and brand safety standards.
ANA
Association of National Advertisers
Advertiser trade group; benchmarking.
Fig. Acronyms quick reference.
Metric
Figure
Why it matters
Share of U.S. digital display that’s programmatic
91.3%
Confirms automation is the default buying motion
Real-time scale handled by a major DSP
~15M ad queries/sec
Justifies AI for impression-level decisions
Leaders who say AI is essential for DSPs/SSPs
52%
Signals AI is now table stakes
U.S. CTV ad spend (2025)
~$33.35B
Shows the channel where AI orchestration matters
GenAI in creative (video)
86% use/plan
Indicates mainstream creative adoption
Fig. Programmatic + AI at a glance.
Use case
What the model does
Primary levers you can set
Typical KPI effect
Auction-time bidding
Predicts outcome; prices the impression
Goal, ROAS/CPA target, budgets
Lower CPA / higher conv. rate
Dynamic pricing
Avoids overpaying in first-price auctions
Floors, pacing rules, win-rate guardrails
Stable CPM, efficient wins
Audience expansion
Finds lookalikes/micro-segments
Seeds, exclusions, LTV tiers
More reach at steady CPA
Contextual AI
Classifies pages/apps without IDs
Category allow/deny, brand safety
Relevance under signal loss
Fraud/suitability
Flags IVT and unsafe supply
Pre-bid filters, blocklists, curation
Less waste, fewer brand risks
Attribution feedback
Reweights spend across channels
DDA/experiments, frequency caps
Budget shifts to incremental paths
Fig. Core AI use cases inside a DSP.
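The "dynamic pricing" row — avoiding overpaying in first-price auctions with win-rate guardrails — can be illustrated with a toy bid-shading controller. Real DSPs fit learned models of the clearing-price distribution; this sketch just nudges a multiplicative factor toward a target win rate, with all numbers hypothetical.

```python
class BidShader:
    """Toy bid shading: bid a fraction of predicted value and adjust that
    fraction from the observed win rate. A stand-in for a DSP's learned
    shading model, not an actual implementation."""
    def __init__(self, factor=0.8, target_win_rate=0.5, step=0.02):
        self.factor = factor          # fraction of value we bid
        self.target = target_win_rate # win-rate guardrail
        self.step = step              # adjustment per feedback cycle

    def bid(self, predicted_value, floor):
        # Never bid below the publisher's floor.
        return max(predicted_value * self.factor, floor)

    def update(self, win_rate):
        # Winning too often means we're overpaying: shade harder.
        # Losing too often means we're shading too much: ease off.
        if win_rate > self.target:
            self.factor = max(self.factor - self.step, 0.1)
        else:
            self.factor = min(self.factor + self.step, 1.0)
```

The levers in the table map directly: the floor enters `bid()`, and the win-rate guardrail drives `update()`.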
Stage
AI can help you…
Output to test
Human review to require
Concepting
Generate angles/benefit frames
5–10 copy angles
Brand voice, claims, positioning
Asset build
Produce image/video variants
Thumbs, short videos
Visual brand rules, rights/IP
Assembly (DCO)
Slot best headline/visual/CTA by audience
Modular ad combos
Guardrails, exclusions, geo/legal
Rotation
Swap in higher-lift combos
Winner sets per segment
Creative fatigue checks
Learn
Summarize why a variant won
Insight cards
Next test design, hypothesis sanity
Fig. GenAI in the creative workflow.
Method
Best for
Strengths
Watch-outs
Data-driven attribution (DDA)
Always-on budget steering
Uses impression-level paths
Needs quality identity/signals
Incrementality tests (geo/cell)
Proving lift by channel/tactic
Causal, straightforward to explain
Requires holdouts; time cost
MMM (modern/bayesian)
Long-term, cross-channel planning
Handles offline + macro factors
Coarser granularity; cadence
Conversion lift studies
Specific partners/walled gardens
Partner-verified; fast reads
Often siloed; extrapolation risk
Path analytics
Creative/sequence insights
Explains why paths work
Don’t mistake correlation for cause
Fig. The measurement toolbox (and when to use each).
Framework
What it governs
DSP/AI implications
Actions for advertisers
GDPR (EU)
Lawful basis, consent, profiling
Consent signals flow through the ad chain
Run CMP; log purposes/vendors; respect choices
CCPA/CPRA (CA)
Sale/sharing opt-outs; ADMT rulemaking
Limit cross-context behavioral ads when opted out
Honor GPC; maintain opt-out pipes; update notices
IAB TCF
Standardizing consent strings
Vendors read/write standardized signals
Keep vendor list current; audit adherence
FTC guidance
Deceptive practices (e.g., reviews/claims)
Guardrails on endorsements & AI-generated content
Disclose; substantiate claims; moderate UGC
Fig. Privacy & compliance quick map.
Feature
Traditional (Linear) TV
Streaming TV
Targeting
Broad demographics, time slots
Household-level, behavioral data
Measurement
Nielsen ratings, estimated reach
Real-time analytics, conversion tracking
Cost structure
Fixed CPMs ($30-45 national)
Variable pricing, performance-based options
Audience reach
Mass simultaneous viewing
Targeted, on-demand viewers
Ad formats
15, 30, 60-second spots
Flexible lengths, interactive options
Campaign flexibility
Limited after launch
Real-time optimization possible
Fig. Traditional vs. streaming TV advertising comparison.
Cost category
Price range
Notes
Commercial production
$5,000 - $500,000+
Varies by complexity and market
National broadcast CPMs
$30 - $45
Prime time slots command premium
Local TV spots
$200 - $1,500 per spot
Depends on market size and daypart
CTV/Streaming CPMs
$20 - $40
More efficient targeting offsets cost
Minimum campaign budget
$10,000 - $100,000+
National campaigns require significant investment
Post-production edits
$500 - $5,000
Required for different lengths/markets
Fig. TV advertising costs by type.
Device type
Creative best practices
CTV / large screen
  • Lead with visual story (assume sound-on)
  • Brand reveal within first 3 seconds
  • QR code placement in final 5 seconds
  • Minimize text overlays (viewing distance consideration)
Mobile OTT
  • Assume sound-off viewing (use captions)
  • Vertical format when possible (9:16)
  • Interactive elements in thumb-friendly zones
  • Front-load key message (first 3 seconds critical)
Desktop / laptop
  • Balance for both focused and background viewing
  • Include companion banner opportunities
  • Clickable elements throughout
  • Consider workplace-appropriate content
Fig. Creative best practices by device type.
Aspect
OTT
CTV
Definition
Content delivery method via internet
Internet-connected television devices
Devices
All devices (mobile, desktop, tablet, TV)
TV screens only (smart TVs, streaming devices)
Screen size
5" to 75"+ (varies widely)
32" to 75"+ (consistently large)
Viewing context
On-the-go, multi-tasking, individual
Lean-back, focused, often communal
Session length
10-30 minutes typical
120+ minutes typical
Sound
Often muted (mobile)
Usually with full audio
Ad formats
Interactive, clickable, companion units
Traditional video spots, emerging QR/shoppable
Completion rates
70-85% average
90-100% average
Targeting
Device-level, individual behavior
Household-level, viewing patterns
Measurement
Direct click-through, app installs
View-through attribution, brand lift
Best for
Direct response, app installs, immediate action
Brand awareness, premium messaging, consideration
Fig. Summary table: OTT vs CTV.
Industry
Typical attribution window
Primary KPIs
Conversion path
E-commerce/Retail
7-14 days
ROAS, Revenue per impression
CTV → Mobile → Purchase
Automotive
30-90 days
Dealer visits, Test drives
CTV → Desktop research → Dealer
Financial Services
14-30 days
Account opens, Applications
CTV → Mobile app → Sign-up
Travel/Hospitality
21-45 days
Bookings, Search queries
CTV → Multi-device → Booking
QSR/Food Delivery
1-3 days
App orders, Store visits
CTV → Mobile → Order
Fig. Attribution windows by industry.
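Operationally, an attribution window is just a date filter applied between exposure and conversion. A minimal sketch using the upper bounds from the table (the industry keys and defaults are illustrative; real systems also dedupe across channels and touchpoints):

```python
from datetime import datetime, timedelta

# Upper bounds from the table above, in days (illustrative mapping).
ATTRIBUTION_WINDOWS = {
    "ecommerce": 14, "automotive": 90, "financial": 30,
    "travel": 45, "qsr": 3,
}

def is_attributable(exposure_time, conversion_time, industry):
    """A conversion counts toward a CTV exposure only if it lands after
    the exposure and inside the industry's lookback window."""
    window = timedelta(days=ATTRIBUTION_WINDOWS[industry])
    delta = conversion_time - exposure_time
    return timedelta(0) <= delta <= window
```

The same purchase nine days after exposure is attributable for e-commerce but would be far inside the window for automotive, which is why cross-industry ROAS comparisons need window-matched baselines.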
Feature
Addressable TV
Linear TV
OTT
CTV
Delivery mechanism
Linear + digital
Traditional broadcast
Internet-based
Internet-connected device
Targeting
Household-level
Broad demographic
User/device-level
Household or device-level
Measurement
Impressions, outcomes
GRPs, limited metrics
Completion, clicks
Cross-device, granular
Creative flexibility
Multiple versions
One-size-fits-all
Dynamic or static
Highly customizable
Use case
Hybrid campaigns
Mass awareness
Streaming audiences
Premium digital TV viewers
Fig. Comparing addressable TV to linear, OTT, and CTV at a glance.
Step
What to consider
Choose a platform or partner
Linear MVPD, CTV/OTT platform, or DSP with programmatic access
Set campaign goals
Define KPIs: reach, lift, ROI, frequency control
Align with broader strategy
Integrate with linear TV and digital buys for holistic planning
Prepare creative assets
Versioning, modular templates, messaging tailored to segments
Budget and timeline
Account for media, creative, tech fees, and post-campaign measurement
Fig. How to get started with addressable TV.
Feature
Linear TV
OTT
CTV
Delivery mechanism
Cable/Satellite/Antenna
Internet (any device)
Internet (TV screens only)
Viewing device
Traditional TV
Phone, tablet, computer, TV
Smart TV or connected TV
Ad targeting
Broad demographic
Device-specific targeting
Household-level precision
Measurement
Nielsen ratings
Digital metrics
Cross-device attribution
Ad skipping
DVR fast-forward
Often skippable
Mostly non-skippable; 90%+ completion rates
Fig. CTV vs. OTT vs. Linear TV comparison.
Targeting type
What it does
Example use case
Typical lift
Demographic
Age, income, education, family status
Baby products to new parents
80-90% on-target delivery
Behavioral
Based on viewing habits
Fitness ads to workout content viewers
2-3x engagement
Geographic
ZIP code precision
Local restaurant promotions
40% lower CPA
First-party
Your customer data
Upsell to existing customers
3-5x higher engagement
Contextual
Content-based matching
Travel ads during travel shows
25% higher engagement
Fig. CTV targeting capabilities.
Advantages of linear TV
Disadvantages of linear TV
Mass reach: millions reached in a single airing
Declining viewership, especially among younger audiences
High co-viewing for shared exposure
Limited targeting (broad demographics only)
Brand safety in regulated content
Measurement gaps (panel-based, estimates)
Viewers accept ad interruptions
High entry costs for national buys
Lower CPMs ($10–$15)
Audience skews older (70% of impressions 55+)
Fig. Advantages and disadvantages of linear TV.
Advantages of CTV
Disadvantages of CTV
Precise targeting by household, device, or user profile
Fragmented ecosystem across devices and platforms
Flexible, programmatic buying with scalable budgets
Higher CPMs ($30–$50) compared to linear
Interactive ad formats (QR codes, shoppable, pause ads)
Measurement inconsistencies—only ~32% unify cross-screen metrics
High completion rates (~95%) and attention
Ad fraud risk (nearly half of Fortune 500 CTV ads appeared on questionable sites in 2023)
Strong ROI: 65% of marketers report increased sales via CTV
Limited reach vs. linear for older demographics and live tentpole events
Lower ad loads (4–6 min/hr vs. 12–16 on linear)
Harder to achieve scale on a single platform
Fig. Advantages and disadvantages of CTV advertising.
Dimension
Linear TV
Connected TV (CTV)
Content delivery
Scheduled via cable/broadcast
On-demand streaming via internet
Viewing experience
Ad loads: 12–16 min/hr
Ad loads: 4–6 min/hr
Targeting
Broad demographics, regions
Household- or individual-level targeting
Measurement
Panel-based (Nielsen)
Digital logs, attribution, frequency capping
Ad formats
15/30/60 sec spots
Interactive, shoppable, dynamic creative
Cost
CPMs: $10–$15, bulk buys
CPMs: $30–$50, programmatic flexibility
Fig. Key differences between CTV and linear TV.
Component
Who uses it
Primary role
Typical controls
Key outputs
DSP
Advertiser/agency
Evaluate impressions, bid, deliver
Goals, audiences, bids, pacing, frequency, creative
Impressions, clicks, conversions, cost, ROAS
SSP
Publisher
Package/sell inventory
Floors, brand safety, deal IDs, yield rules
Bid requests, auction results, revenue
Ad exchange
Neutral
Run auctions/route demand
Auction rules, eligibility, deal enforcement
Winning bid/creative, clearing price
DMP/CDP
Advertiser/publisher
Build/activate audiences
Segment creation, identity rules, enrichment
Audience segments, targeting signals
Ad server
Publisher/advertiser
Serve/track creative
Priority rules, trafficking, QA
Served impressions, delivery logs, discrepancy reports
Fig. Programmatic stack at a glance.
Step
What happens
Who acts
Notes
1. Bid request
Impression described and sent
SSP/ad server
Context, device, allowed signals included
2. Fan-out
Request routed to buyers
Exchange
Open auction and any eligible deal IDs
3. Scoring
Opportunity valued; bid composed
DSP
Targeting + predictive models + creative check
4. Auction
Highest eligible bid wins
Exchange
First-price with floors/deal terms applied
5. Render
Winning creative served; performance logged
DSP & SSP
Used for pacing and optimization
Fig. RTB auction—request to render.
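Steps 3–4 of the flow — scoring against a floor and clearing a first-price auction — reduce to a few lines. A simplified sketch (real exchanges also enforce deal eligibility, timeouts, and creative checks):

```python
def first_price_auction(bids, floor):
    """First-price auction over a dict of {buyer: bid_price}: bids below
    the floor are ineligible; the highest eligible bid wins and pays its
    own price (no second-price reduction)."""
    eligible = [(price, buyer) for buyer, price in bids.items()
                if price >= floor]
    if not eligible:
        return None  # impression goes unfilled
    price, buyer = max(eligible)
    return {"winner": buyer, "clearing_price": price}
```

In first-price, the winner pays exactly what it bid — which is why the bid-shading logic described earlier exists on the DSP side.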
Model
Pricing
Access
Guarantees
Best for
Watch-outs
Open exchange RTB
Dynamic (auction)
Broad
None
Prospecting, scale, retargeting
Brand safety, supply variance
PMP (invite-only RTB)
Dynamic (auction + floors)
Curated buyers
None
Quality reach, category alignment
Volume may be limited
Preferred deal
Fixed CPM
Priority before auction
None
Predictable price without reservation
Missed volume if price too high
Programmatic direct (guaranteed)
Fixed CPM
One-to-one
Yes (delivery/placement)
Launches, CTV reservations, sponsorships
Less flexible; higher CPMs
Fig. Buying models—quick cheat sheet.
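The cheat sheet implies a priority order when several access models compete for the same impression: guaranteed deals first, then preferred deals at fixed CPM, then PMP, then open exchange. A simplified waterfall sketch (real ad servers also weight pacing, yield rules, and deal terms):

```python
# Deal-type priority from the cheat sheet, highest first.
PRIORITY = ["guaranteed", "preferred", "pmp", "open"]

def resolve_demand(candidates):
    """Pick the winning demand source: sort eligible candidates by deal
    priority tier, then by price within a tier."""
    eligible = [c for c in candidates if c.get("eligible", True)]
    eligible.sort(key=lambda c: (PRIORITY.index(c["type"]), -c["price"]))
    return eligible[0] if eligible else None
```

Note the consequence: a preferred deal at a lower fixed CPM can beat a higher open-exchange bid, because priority is resolved before price within the waterfall.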
Lever
Applies to
What it does
Owner
When to use
ads.txt / app-ads.txt
Web/app supply
Verifies authorized sellers
Publisher/SSP
Always-on hygiene
sellers.json
Exchanges/SSPs
Reveals intermediaries
Exchange/SSP
Supply-path vetting
Allow/deny lists
All programmatic rails
Constrain where ads run
Advertiser/DSP
New brands, sensitive categories
Brand-safety/IVT filters
All rails
Block risky or invalid traffic
Advertiser/DSP
Performance + reputation protection
Deal IDs
PMP/direct
Bind terms to impressions
Publisher/SSP
Quality access with control
SPO routing
RTB paths
Prefer efficient routes
Advertiser/DSP
Reduce hops/fees, raise quality
Fig. Transparency & control levers.
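The ads.txt lever above is checkable in code: the file is plain text, one comma-separated record per line (seller domain, account ID, DIRECT/RESELLER relationship, optional certification authority ID). A minimal parser and authorization check, with sample records as illustrative values:

```python
def parse_ads_txt(text):
    """Parse ads.txt records, skipping comments, blanks, and variable
    lines like 'contact=' or 'subdomain='."""
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip trailing comments
        if not line or "=" in line:            # skip variables
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            records.append({"domain": fields[0].lower(),
                            "account_id": fields[1],
                            "relationship": fields[2].upper(),
                            "cert_id": fields[3] if len(fields) > 3 else None})
    return records

def is_authorized(records, ssp_domain, account_id):
    """Check a bid's claimed seller against the publisher's ads.txt."""
    return any(r["domain"] == ssp_domain.lower()
               and r["account_id"] == account_id
               for r in records)
```

In practice you would fetch `https://<publisher-domain>/ads.txt` on a schedule, cache the parsed records, and reject bid requests whose seller isn't authorized — the "always-on hygiene" the table recommends.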
Dimension
Programmatic (umbrella)
RTB (auction method)
What it means in practice
Scope
All automated buying methods executed via DSP/SSP/exchange pipes
Auction-based buying only (open exchange + PMPs)
Treat RTB as one rail inside a larger toolkit
Buying types
Direct/guaranteed, preferred deals, PMPs, open exchange
PMPs and open exchange auctions
Choose model by goal: certainty vs scale
Pricing
Fixed (direct/preferred) or dynamic (auctions)
Dynamic only, per impression
Budgets flex with market conditions
Guarantees
Available via programmatic direct
None in auctions
Use direct when delivery must be assured
Data use
Audience building, creative logic, cross-channel frequency, measurement
Impression-level valuation and bidding using the same signals
Data informs both; RTB applies it at bid time
Optimization
Multi-channel pacing and allocation; test-and-learn across deals
Bid shading, floor navigation, real-time bid and creative selection
Different levers, same goal: efficient outcomes
Transparency/control
Higher with direct and curated deals; known supply and terms
Varies by exchange and path; stronger controls needed
Combine allowlists/PMPs with open exchange reach
Inventory access
Premium, curated, and broad via the same stack
Broadest reach via open exchange; curated via PMPs
Mix premium reservations with auction reach
Best for
Brand launches, sponsorships, guaranteed CTV, regulated contexts
Prospecting at scale, retargeting, event- or geo-responsive bursts
Most plans benefit from a hybrid mix
Key risks
Limited scale at fixed prices; higher CPMs for premium
Brand-safety diligence, price volatility, supply-path variance
Mitigate with PMPs, SPO, verification, and guardrails
Fig. Difference between programmatic and RTB: a quick snapshot.
| Channel | Common KPIs | Typical signals | Useful attribution |
| --- | --- | --- | --- |
| Display | CTR, CPA, viewability | Contextual, first-party audiences | MTA, incrementality tests |
| Video/CTV | Completion rate, reach, ROAS | Household/device graphs, content metadata | Geo/control, matched sales |
| Audio | Listen-through, lift, CPA | Contextual, device, daypart | Promo codes, geo/control |
| Mobile in-app | CPI/CPA, ROAS | ID-based or SKAN-like frameworks | Cohort ROAS, MMM |
| DOOH | Reach, footfall lift | Venue/location, time | Mobility panels, geo/control |

Fig. Channel KPIs & measurement map.
| Dimension | What it covers |
| --- | --- |
| Where it runs | Publisher video players; YouTube/large video platforms; social feeds; outstream in articles/feeds |
| Formats | Skippable/non-skippable in-stream; 6s bumper; vertical/square mobile cuts; native/outstream; shoppable/interactive |
| Buying methods | Direct in walled gardens; programmatic via DSPs (open auction, PMP, PG) |
| Targeting | First-party audiences; interest/behavior; contextual; lookalikes; video-viewer/site retargeting |
| Optimization levers | Frequency caps; creative rotation & sequencing; daypart/device/placement controls; real-time budget shifts |
| Measurement | Viewability & quartiles; VTR/CTR; clicks & post-view conversions; brand lift; third-party verification |
| Role in plan | Full-funnel: awareness & completion on premium video; DR via clickable units; complements CTV for reach/recall |

Fig. OLV key characteristics at a glance.
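One of the optimization levers above, frequency capping, reduces to a sliding-window counter per user. A minimal Python sketch, assuming a single-process in-memory store (real systems use distributed counters keyed by user or household ID; the class and method names are illustrative):

```python
import time
from collections import defaultdict

class FrequencyCap:
    """Toy per-user frequency cap: allow at most `max_impressions`
    per `window_seconds` sliding window. Illustrative only."""
    def __init__(self, max_impressions, window_seconds):
        self.max = max_impressions
        self.window = window_seconds
        self.log = defaultdict(list)  # user_id -> impression timestamps

    def allow(self, user_id, now=None):
        now = time.time() if now is None else now
        # Drop impressions that have aged out of the window.
        hits = [t for t in self.log[user_id] if now - t < self.window]
        self.log[user_id] = hits
        if len(hits) >= self.max:
            return False  # cap reached: suppress the bid
        hits.append(now)
        return True
```

A cap of 2 impressions per hour admits the first two requests for a user, rejects the third, and admits again once the hour rolls over.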
| Format | Best for | Strengths | Watchouts | Primary KPIs |
| --- | --- | --- | --- | --- |
| In-stream (pre/mid/post) | Scale + context in video | High completion; premium adjacencies | Higher floors on non-skippable; hook matters | Quartiles, VTR, CPCV |
| Outstream | Incremental reach on open web | Viewability-gated; vast supply | Must win attention without sound | Viewability %, CPCV, scroll depth |
| Social video | Engagement + DR | Native UI, rich signals, commerce | Fast creative fatigue | VTR, CTR, add-to-cart/on-platform actions |
| Native video | Lower-friction discovery | Matches editorial feel | Headline/thumbnail must carry | Engagement time, completes |
| Shoppable video | Lower-funnel action | Click-to-product; path-to-purchase | Landing page parity critical | Product clicks, ATC, conversion rate |

Fig. OLV formats: when to use what.
| Tactic | Use when | Bid model pairing | Notes |
| --- | --- | --- | --- |
| Contextual packages | Need brand-safe topical alignment | CPM / CPCV | Great for mid-upper funnel scale |
| Interest/behavior | Prospecting into likely intenders | CPM / CPV | Refresh segments quarterly |
| First-party lookalikes | Expand from proven customers | CPM | Keep seed lists clean & recent |
| Viewer/site retargeting | Nurture people who engaged | CPCV / CPV | Use recency windows & sequencing |
| Deal IDs (PMP/PG) | Need stable volume/quality | CPM | Priority access; negotiated floors |

Fig. Targeting & bidding: tactic → when & how.
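The bid models paired above price different events. Assuming the standard definitions (CPM: cost per 1,000 impressions; CPV: cost per view; CPCV: cost per completed view), the arithmetic is:

```python
def cpm(spend, impressions):
    """Cost per 1,000 impressions."""
    return spend * 1000 / impressions

def cpv(spend, views):
    """Cost per view."""
    return spend / views

def cpcv(spend, completed_views):
    """Cost per completed view (100% quartile)."""
    return spend / completed_views

# e.g. $500 buying 100,000 impressions, 40,000 views, 25,000 completes
# gives a $5.00 CPM, $0.0125 CPV, and $0.02 CPCV.
```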
| Funnel stage | Primary metrics | Early signals to watch | Typical decision you’ll make |
| --- | --- | --- | --- |
| Attention | Viewability, 25%/50% quartiles | Start rate, time-in-view | Tighten supply, raise viewability floors |
| Engagement | 75%/100% completes, VTR | Skip rate in first 5s | Re-cut openings; shorten length |
| Action | CTR, site visits, add-to-cart | CPCV/CPV trends by segment | Shift budget to efficient audiences |
| Outcome | Conversions, CPA/ROAS, lift | Post-view vs. post-click mix | Validate with lift; scale winners |
| Cross-screen | Deduped reach, freq distribution | Incremental reach vs. CTV | Rebalance OLV/CTV to hit target freq |

Fig. Measurement ladder: metric → decision.
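The ladder above maps metrics to decisions, and the mapping can be sketched as a rule chain walked top-down, where the first failing stage determines the action. The thresholds and key names below are illustrative assumptions, not benchmarks:

```python
def measurement_ladder_action(metrics):
    """Toy top-down rule chain mirroring the measurement ladder.
    Thresholds are illustrative assumptions only."""
    if metrics["viewability"] < 0.70:          # Attention stage failing
        return "Tighten supply, raise viewability floors"
    if metrics["skip_rate_5s"] > 0.50:         # Engagement stage failing
        return "Re-cut openings; shorten length"
    if metrics["cpcv"] > metrics["target_cpcv"]:  # Action stage inefficient
        return "Shift budget to efficient audiences"
    return "Validate with lift; scale winners"    # Outcome stage healthy
```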
| Benefit | What it delivers | Proof point(s) |
| --- | --- | --- |
| Precision targeting & personalization | Impression-level decisions + dynamic creative | Personalization links to ~5–15% revenue lift and ~10–30% ROI lift; retargeting CTR ≈10× standard display. |
| Efficiency & automation | Software plans, bids, paces across channels | 43.9% of every $1,000 entering a DSP now reaches consumers (+$79 YoY). |
| Transparency & real-time reporting | Placement-level visibility; standardized verification | ads.txt, sellers.json, and OM SDK unify verification and supply checks. |
| Improved ROI & cost-effectiveness | More working media, lower waste | MFA spend down from 15% to 6.2% (2023→2024). |

Fig. Benefits of programmatic at a glance.
| Format | Primary surfaces | Typical objective | Common quality signals |
| --- | --- | --- | --- |
| Display | Open web, in-app | Efficient reach, retargeting | Viewability, domain/app quality |
| Video & CTV | Web/mobile video, AVOD/FAST apps | Premium reach, storytelling | VCR, deduped reach |
| Native | In-feed, recommendation units | Mid-funnel education, qualified visits | Time-in-view, scroll depth |
| Audio | Music streaming, digital radio, podcasts | Screen-free reach, recall | Listen-through rate |
| DOOH | Billboards, transit, place-based | Local presence, contextual moments | Proof-of-play, modeled reach |

Fig. Format cheat-sheet.
| Layer | Primary role | Key artifacts |
| --- | --- | --- |
| DSP (buy side) | Evaluate bid requests, decide price/creative, execute buys | Audiences, pacing/frequency, logs |
| SSP (sell side) | Package inventory, apply floors/blocks, route demand | Floors, deals, yield tools |
| Ad exchange | Clear auctions, enforce deal priority, return markup | OpenRTB, win/loss notices |
| DMP | Build ad-addressable segments (often anonymous IDs) | Taxonomy, lookalikes |
| CDP | Unify person-level first-party data with consent | Profiles, identity graph |
| Clean room | Privacy-safe data matching and analysis | Hashed ID joins, reach/lift |

Fig. Core programmatic technologies and what they do.
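The DSP/SSP/exchange handshake in the table runs over OpenRTB. A heavily simplified Python sketch of a bid request and a toy DSP responder; only a handful of OpenRTB 2.x fields are shown, and the `respond` function, its `value_cpm` parameter, and the example domain are illustrative assumptions:

```python
# Simplified OpenRTB 2.x-style bid request (most fields omitted).
bid_request = {
    "id": "req-123",
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}, "bidfloor": 1.20}],
    "site": {"domain": "news.example.com"},
    "device": {"ua": "Mozilla/5.0"},
}

def respond(request, value_cpm=2.50):
    """Toy DSP responder: bid on the first impression only if our
    valuation clears the SSP's floor; otherwise no-bid."""
    imp = request["imp"][0]
    if value_cpm < imp.get("bidfloor", 0):
        return None  # no-bid
    return {
        "id": request["id"],
        "seatbid": [{"bid": [{"id": "bid-1", "impid": imp["id"], "price": value_cpm}]}],
    }
```

In practice the DSP evaluates every `imp` object, attaches creative markup (`adm`), and returns within the exchange's timeout, but the floor check above is the core gate.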
| Model | Access | Pricing | Delivery guarantee | Typical packaging |
| --- | --- | --- | --- | --- |
| RTB (open auction) | Open, broad | Dynamic (first-price) | No | Open exchange |
| PMP (private auction) | Invite-only | Dynamic with floors | No | Curated bundles/deal IDs |
| Preferred deal | 1:1 access (“first look”) | Fixed CPM | No | Deal ID with pre-set rate |
| Programmatic guaranteed | 1:1 reserved | Fixed CPM | Yes (reserved) | Automated reservation |

Fig. Buying models side-by-side.
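The four models above also imply a clearing order inside the exchange: reserved and preferred deals pre-empt auctions, and price decides only within a tier. A toy Python sketch (the tier names, their ordering, and the dict shape are illustrative; actual priority logic varies by exchange):

```python
# Lower number = higher priority; assumed tiers for illustration.
PRIORITY = {"guaranteed": 0, "preferred": 1, "pmp": 2, "open": 3}

def clear(bids):
    """Toy exchange clearing: sort by deal-priority tier first,
    then by highest price within a tier (first-price)."""
    if not bids:
        return None
    ranked = sorted(bids, key=lambda b: (PRIORITY[b["model"]], -b["price"]))
    return ranked[0]
```

So a preferred deal at a $4 fixed CPM still beats an open-auction bid of $9, which is exactly the certainty-vs-scale trade-off the table describes.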
| Metric | Stat / insight | Source |
| --- | --- | --- |
| Share of US streaming subscribers on ad-supported tiers | 46% | eMarketer |
| Share of net new SVOD subscribers from ad-supported plans (past 9 quarters) | 71% | eMarketer |
| Share of Q1 2025 gross subscriber additions from ad-supported plans | 57% (down 1 pp from Q1 2024) | eMarketer |
| Netflix ad-supported tier users (global MAUs) | 94M (up from 70M six months earlier) | eMarketer |
| Disney’s ad-supported footprint (Disney+, Hulu, ESPN+) | 164M MAUs (up from 157M four months earlier) | eMarketer |
| Amazon Prime Video ad-supported viewers (US only) | 130M (following Jan 2024 shift to default ads) | eMarketer |

Fig. Ad-supported streaming by the numbers (2024–2025).
| Blind spot | Key issues | Business impact | AI Digital solution |
| --- | --- | --- | --- |
| Lack of transparency in AI models | Platforms own AI models and train on proprietary data; brands have little visibility into decision-making; "walled gardens" restrict data access | Inefficient ad spend; limited strategic control; eroded consumer trust; potential budget mismanagement | Open Garden framework providing complete transparency, DSP-agnostic execution, and cross-platform data & insights |
| Optimizing ads vs. optimizing impact | AI excels at short-term metrics but may struggle with brand building; consumers can detect AI-generated content; efficiency might come at the cost of authenticity | Short-term gains at the expense of brand health; potential loss of authentic connection; reduced effectiveness in storytelling | Smart Supply offering human oversight of AI recommendations, custom KPI alignment beyond clicks, and brand-safe inventory verification |
| The illusion of personalization | Segment optimization rebranded as personalization; first-party data infrastructure challenges; personalization vs. surveillance concerns | Potential mismatch between promise and reality; privacy concerns affecting consumer trust; cost barriers for smaller businesses | Elevate platform featuring real-time AI + human intelligence, first-party data activation, and ethical personalization strategies |
| AI-driven efficiency vs. decision-making | AI shifting from tool to decision-maker; black-box optimization like Google Performance Max; human oversight limitations | Strategic control loss; difficulty questioning AI outputs; inability to measure granular impact; potential brand damage from mistakes | Managed Service with human strategists overseeing AI, custom KPI optimization, and complete campaign transparency |

Fig. 1. Summary of AI blind spots in advertising.
| Dimension | Walled garden advantage | Walled garden limitation | Strategic impact |
| --- | --- | --- | --- |
| Audience access | Massive, engaged user bases | Limited visibility beyond platform | Reach without understanding |
| Data control | Sophisticated targeting tools | Data remains siloed within platform | Fragmented customer view |
| Measurement | Detailed in-platform metrics | Inconsistent cross-platform standards | Difficult performance comparison |
| Intelligence | Platform-specific insights | Limited data portability | Restricted strategic learning |
| Optimization | Powerful automated tools | Black-box algorithms | Reduced marketer control |

Fig. 2. Strategic trade-offs in walled garden advertising.
| Core issue | Platform priority | Walled garden limitation | Real-world example |
| --- | --- | --- | --- |
| Attribution opacity | Claiming maximum credit for conversions | Limited visibility into true conversion paths | Meta's and TikTok's conflicting attribution models after iOS privacy updates |
| Data restrictions | Maintaining proprietary data control | Inability to combine platform data with other sources | Amazon DSP's limitations on detailed performance-data exports |
| Cross-channel blind spots | Keeping advertisers within the ecosystem | Fragmented view of the customer journey | YouTube/DV360 campaigns lacking integration with non-Google platforms |
| Black-box algorithms | Optimizing for platform revenue | Reduced control over campaign execution | Self-serve platforms using opaque ML models with little advertiser input |
| Performance reporting | Presenting the platform in the best light | Discrepancies between platform-reported and independently measured results | Consistently higher performance metrics in platform reports vs. third-party measurement |

Fig. 1. The walled garden misalignment: platform interests vs. advertiser needs.
| Key dimension | Challenge | Strategic imperative |
| --- | --- | --- |
| ROAS volatility | Softer returns across digital channels | Shift from soft KPIs to measurable revenue impact |
| Media planning | Static plans no longer effective | Develop agile, modular approaches adaptable to changing conditions |
| Brand/performance | Traditional division dissolving | Create full-funnel strategies balancing long-term equity with short-term conversion |
| Capability | Key features | Benefits |
| --- | --- | --- |
| Performance data | Elevate forecasting tool; vertical-specific insights; historical data from past economic turbulence; "cascade planning" functionality; real-time adaptation | Agility to adjust campaign strategy based on performance; shows which media channels work best to drive efficient, effective performance; confident budget reallocation; reduced reaction time to market shifts; dataset from 10,000+ campaigns; cuts response time from weeks to minutes |
| Smart, premium supply | AI-powered analysis; human strategic oversight; Smart Supply solution; contextual intelligence engine | Reaches people most likely to buy; avoids wasting impressions and budget on poor-performing placements; context-aligned messaging; 25+ billion bid requests analyzed daily; 18% improvement in working-media efficiency; 26% increase in engagement during recessions |
| Full-funnel accountability | Links awareness campaigns to lower-funnel outcomes; tests whether ads actually drive new business; measures brand-perception changes; "Ask Elevate" AI chat assistant | Upper-funnel-to-outcome connection; sentiment-shift tracking; personalized messaging; helps balance immediate sales vs. long-term brand building; natural-language data queries; true business-impact measurement |
| Open Garden approach | Cross-platform and channel planning; not locked into specific platforms; unified cross-platform reach; performance-based ad placement; coverage across all inventory sources | Shows exactly where money is spent; reduces complexity across channels; rapid budget reallocation; eliminates platform-specific commitments while keeping optimization and agility; full visibility into spending; retains the ability to pivot across platforms, since spend isn't tied to a single one |

Fig. 1. How AI Digital helps during economic uncertainty.
| Trend | What it means for marketers |
| --- | --- |
| Supply & demand lines are blurring | Platforms from Google (P-Max) to Microsoft are merging optimization and inventory in one opaque box. Expect more bundled “best available” media where the algorithm, not the trader, decides channel and publisher mix. |
| Walled gardens get taller | Microsoft’s O&O set now spans Bing, Xbox, Outlook, Edge, and LinkedIn, which just launched revenue-sharing video programs to lure creators and ad dollars. (Business Insider) |
| Retail & commerce media shape strategy | Microsoft’s Curate lets retailers and data owners package first-party segments, an echo of Amazon’s and Walmart’s approaches. Agencies must master seller-defined audiences as well as buyer-side tactics. |
| AI oversight becomes critical | Closed AI bidding means fewer levers for traders. Independent verification, incrementality testing, and commercial guardrails rise in importance. |

Fig. 1. Platform trends and their implications.
| Metric | Connected TV (CTV) | Linear TV |
| --- | --- | --- |
| Video Completion Rate | 94.5% | 70% |
| Purchase Rate After Ad | 23% | 12% |
| Ad Attention Rate | 57% (prefer CTV ads) | 54.5% |
| Viewer Reach (U.S.) | 85% of households | 228 million viewers |