Every metric, model, and method in ecommerce marketing.
105 terms
A controlled experiment comparing two ad variations (A vs B) with equal traffic split to determine which performs better on a specific metric. Requires statistical significance before declaring a winner, which typically means enough conversions to be confident the difference isn't random. Simple and reliable, but slow. At typical ecommerce conversion rates, an A/B test often needs tens of thousands of impressions per variant to reach significance on modest lifts. For brands testing 20+ creatives monthly, sequential A/B testing is too slow.
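The significance check can be sketched as a minimal two-proportion z-test; the function name and the variant counts below are illustrative, not a standard library API:

```python
from math import sqrt

def ab_significant(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Two-proportion z-test: is the CVR difference between variants
    A and B unlikely to be random at ~95% confidence?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return abs(z) >= z_crit, z

# A 1% vs 2% CVR split at 5,000 impressions each clears the bar;
# a 1.0% vs 1.1% split at the same volume does not.
```

Running it shows why small lifts need large samples: doubling CVR is detectable at a few thousand impressions per arm, but a 10% relative lift at the same volume is indistinguishable from noise.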
Ad spend divided by ad-attributed revenue, expressed as a percentage. The inverse of ROAS. A 25% ACOS means you spend $0.25 in ads for every $1 in revenue (equivalent to 4x ROAS). Commonly used in Amazon advertising but applicable across channels. ACOS below your contribution margin means advertising is profitable; above it means you're losing money on ad-driven sales. Target ACOS should be set relative to margin, not arbitrary benchmarks.
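The ACOS/ROAS arithmetic and the margin check can be written in a few lines of Python (function names are ours, chosen for illustration):

```python
def acos(ad_spend, ad_revenue):
    """ACOS as a fraction: ad spend per dollar of ad-attributed revenue."""
    return ad_spend / ad_revenue

def roas(ad_spend, ad_revenue):
    """ROAS: revenue per dollar of ad spend (the inverse of ACOS)."""
    return ad_revenue / ad_spend

def ads_profitable(acos_value, contribution_margin):
    # Ad-driven sales are profitable only while ACOS stays
    # below contribution margin.
    return acos_value < contribution_margin

# $25 spend on $100 ad revenue: 25% ACOS, equivalently 4x ROAS.
```

With a 40% contribution margin, a 25% ACOS is profitable and a 45% ACOS loses money on every ad-driven sale, which is why targets should be set relative to margin rather than benchmarks.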
The average number of times each person in your target audience has seen your ad. Calculated as impressions divided by reach. Moderate weekly frequencies on Meta typically begin triggering creative fatigue for prospecting audiences. Retargeting can tolerate higher frequency because the audience already has purchase intent. Monitoring frequency at the ad set level is critical - account-level frequency averages hide individual audience oversaturation.
Meta's fully automated campaign type that uses ML to optimize creative selection, audience targeting, and placement simultaneously from a single campaign structure. Requires minimal manual setup: upload creatives, set a budget, and the algorithm does the rest. On average, brands see lower CPA versus manual campaigns, but with limited transparency into what's working. Key limitation: you can't see which audiences or placements are driving results, making it harder to learn and iterate. Best used alongside manual campaigns, not as a replacement.
Commerce driven by AI shopping agents that research, evaluate, and recommend products on behalf of consumers — bypassing traditional search, ads, and storefronts entirely. When a user asks ChatGPT 'best running shoes for flat feet,' the AI agent evaluates product data, reviews, and brand authority to make a recommendation without the user ever seeing a Google result or clicking an ad. Brands optimizing for agent commerce today have a structural first-mover advantage because agent mention patterns compound over time.
Cresva's shared memory system where all 7 AI agents store and retrieve institutional knowledge — brand constraints (e.g., '$65 CAC cap'), user preferences (e.g., 'weekly Slack reports'), past decisions, learned performance patterns, and competitive insights. Powered by the Maya agent. When you tell one agent about a budget limit, every other agent knows immediately. Memory persists across sessions and compounds over time, creating a knowledge base unique to your brand.
The frequency at which AI agents mention or recommend your brand when prompted with category-relevant queries. Measured by systematically probing AI platforms (ChatGPT, Perplexity, Claude, Gemini, Google AI) with purchase-intent queries and tracking brand appearances. A brand with a 34% mention rate in 'best protein powder' queries appears in roughly 1 in 3 agent responses. Unlike ad impressions, agent mentions carry implicit endorsement — the AI is recommending you, not just showing your ad.
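A minimal sketch of the mention-rate calculation, assuming you have already collected the text of agent responses from category probes (the responses and brand names below are invented):

```python
def mention_rate(responses, brand):
    """Share of agent responses that mention the brand.
    `responses` is a list of response texts from purchase-intent probes."""
    hits = sum(1 for r in responses if brand.lower() in r.lower())
    return hits / len(responses)

# Three hypothetical responses to a "best protein powder" probe:
probes = [
    "Top picks: AcmeWhey and BrandX both score well on purity.",
    "BrandX is the budget option most reviewers recommend.",
    "For recovery, AcmeWhey's isolate is the usual suggestion.",
]
```

Here AcmeWhey appears in 2 of 3 responses, a 67% mention rate. A production tracker would also need entity resolution (brand aliases, misspellings) rather than plain substring matching.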
Coordinating multiple specialized AI agents to work together on complex tasks — routing questions to the right agent, sharing context between agents, resolving conflicts when agents disagree, and synthesizing insights across domains. When you ask 'should I increase Meta spend?', the orchestrator routes to Parker (attribution check), Felix (forecast impact), Sam (scenario modeling), and synthesizes their perspectives into a unified recommendation.
A composite metric (0-100) measuring how well-optimized a brand's product data, structured markup, reviews, and content are for AI agent discoverability. Factors include: product title quality (do titles contain parseable attributes?), schema markup completeness, review density and sentiment, description depth, brand authority signals, product attribute coverage, and image quality. Higher scores correlate with higher agent mention rates.
The percentage of relevant AI agent conversations in your category where your brand is mentioned or recommended, compared to competitors. If agents mention your brand in 22 out of 100 'best moisturizer' queries, your agent share of voice is 22%. Unlike traditional share of voice (measured by ad spend or media mentions), agent share of voice is earned through product quality, data structure, and brand authority — not bought through media budgets.
How often and how favorably your brand appears when AI agents answer product-related queries. The agent-era equivalent of search engine visibility, but fundamentally different: there are no rankings, no ads, no paid placements. Visibility is determined by the quality of your product data, the strength of your reviews, the presence of structured markup, and your overall brand authority in the agent's training data and retrieval sources. Tracked by probing AI platforms hourly with category queries.
Revenue from purchases influenced by AI agent recommendations. Currently invisible to standard analytics because the user journey — ask AI agent → get recommendation → Google the brand → buy — appears as 'branded search' or 'direct traffic' in GA4. The only way to measure it is through agent probing (tracking your mention rate) combined with correlation analysis against unexplained revenue spikes.
The consumer behavior pattern of asking AI agents for product recommendations instead of browsing search results, marketplaces, or social media. A user practicing agentic shopping says 'find me a lightweight laptop under $1,200 with all-day battery' to ChatGPT instead of searching Google or scrolling Amazon. This behavior is growing at 10x the rate of traditional search and produces no ad impressions, no click data, and no trackable UTM parameters. The purchase looks like direct traffic in analytics.
When an AI model generates plausible-sounding but factually incorrect information. In agent commerce, hallucinations manifest as recommending products that don't exist, citing incorrect prices, inventing features, or attributing reviews to the wrong brand. Products with complete, structured, easily-verifiable data reduce hallucination risk because the agent has authoritative facts to ground its response. Brands with sparse product data are more likely to be hallucinated about — or worse, confused with competitors.
An autonomous AI system — ChatGPT, Perplexity, Google Gemini, Claude, or similar — that evaluates products, compares options, and makes purchase recommendations based on user queries. Unlike search engines that show links, shopping agents synthesize information from multiple sources and make a direct recommendation. They evaluate product data, reviews, brand authority, pricing, and availability to generate a response. There are no paid placements; recommendations are based on the agent's assessment of product quality and relevance.
Automatically identifying statistically unexpected changes in marketing metrics: sudden CPA spikes, unusual CTR drops, spend anomalies, or conversion volume changes that deviate from expected patterns. Catches problems hours or days before manual review would. Uses statistical models to establish 'normal' ranges for each metric and flags deviations beyond a confidence threshold. Critical for brands managing high daily spend where a day of undetected problems can waste thousands.
The average dollar amount spent per transaction. Calculated as total revenue divided by total orders. Higher AOV means you can afford higher CPA while maintaining profitability. AOV varies by channel (Google Shopping often has higher AOV than Meta because of purchase intent), by creative (product bundles drive higher AOV than single-product ads), and by audience (returning customers typically have 20-30% higher AOV than first-time buyers).
The time period after an ad interaction (click or view) during which a conversion is credited to that ad. Meta defaults to 7-day click, 1-day view. Google Ads defaults vary by campaign type. Longer windows capture more conversions but also capture more that would have happened anyway. Shortening attribution windows is one of the simplest ways to reduce overclaiming and get closer to true incremental performance. Testing different windows and comparing results reveals how much of your reported performance depends on generous window settings.
When a target audience has been shown ads so frequently that incremental reach approaches zero and each additional impression drives diminishing returns. Detected by the simultaneous pattern of rising frequency, declining CTR, and increasing CPA. Smaller audiences saturate faster than larger ones — a niche audience of a few hundred thousand reaches saturation at far less monthly spend than a broad audience of millions. The cure is audience expansion, creative refresh, or strategic spend reduction — not doubling down.
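The three-signal detection pattern described above can be expressed as a simple period-over-period check; the dict fields and sample values are illustrative, not a production detector:

```python
def saturation_signal(prev, curr):
    """Flag audience saturation when frequency rises while CTR falls
    and CPA rises between two reporting periods.
    Each period is a dict with 'frequency', 'ctr', and 'cpa' keys."""
    return (curr["frequency"] > prev["frequency"]
            and curr["ctr"] < prev["ctr"]
            and curr["cpa"] > prev["cpa"])

# Hypothetical ad-set metrics for two consecutive weeks:
week_1 = {"frequency": 2.1, "ctr": 0.012, "cpa": 38.0}
week_2 = {"frequency": 3.4, "ctr": 0.008, "cpa": 52.0}
```

All three signals must move together: rising CPA alone could be seasonality, and falling CTR alone could be creative fatigue rather than saturation.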
Total revenue divided by total ad spend across all channels. A top-level health metric that shows overall advertising efficiency but hides which channels are driving incremental value and which are free-riding. A brand with a 4x blended ROAS might have Meta at 6x and Google Display at 1.2x, but blended ROAS won't tell you that. Useful as a directional metric for month-over-month trending, but should never be used for channel-level budget allocation decisions.
Running ads with minimal audience restrictions, letting the platform's ML algorithm find converters from a wide pool. Increasingly effective as Meta Advantage+, Google PMax, and TikTok Smart+ mature their bidding algorithms. Counterintuitive: restricting audience often hurts performance because it limits the algorithm's optimization surface. Broad targeting with strong creative typically outperforms narrow targeting with average creative at scale. Works best above $5K/month per campaign where the algorithm has enough conversion data to optimize.
The process of distributing ad spend across channels, campaigns, and audiences based on expected incremental return. Should be dynamic and updated weekly based on current performance data, not fixed in quarterly planning cycles. Optimal allocation requires understanding the diminishing returns curve for each channel and finding the spend level where marginal CPA across all channels is equalized. A well-optimized allocation improves blended ROAS without increasing total spend.
The rate at which your daily or monthly budget is being spent relative to plan. Underpacing means you're leaving potential revenue on the table. Overpacing means you'll run out of budget before the period ends, missing late-period opportunities. Platform algorithms handle daily pacing, but monthly and quarterly pacing requires manual oversight. Budget pacing should account for day-of-week and time-of-month performance patterns.
Total cost to acquire one new customer, including ad spend, creative production, agency fees, and marketing tools. Calculated as total marketing spend divided by new customers acquired. Lower CAC means more efficient growth. Must be compared against LTV to ensure profitability: if CAC exceeds LTV, you're losing money on every customer. Benchmarks vary wildly by vertical, AOV, and category — premium categories typically run higher CAC than mass-market.
The ratio of lifetime value to customer acquisition cost, commonly written LTV:CAC. A 3:1 ratio (LTV is 3x CAC) is the common benchmark for healthy unit economics. Below 3:1 suggests you're either overspending on acquisition or your product doesn't retain well. Above 5:1 suggests you're underinvesting in growth and could scale faster. The ratio should be calculated at the channel level to identify which acquisition channels produce the most valuable customers, not just the cheapest ones.
The percentage distribution of ad spend across advertising platforms. A typical DTC ecommerce channel mix might be 50% Meta, 25% Google, 15% TikTok, 10% other. The optimal mix varies by brand, product category, AOV, and customer demographics. The right channel mix changes over time as channels mature, costs shift, and audience behavior evolves. Brands that lock into a static channel mix leave money on the table.
Grouping customers by acquisition date (or other shared characteristic) and tracking their behavior over time. Reveals whether newer cohorts are more or less valuable than older ones. A declining cohort LTV curve means your targeting is getting less efficient or product-market fit is weakening. An improving curve means your acquisition strategy is finding better customers. Essential for accurate LTV calculation and for detecting problems before they show up in top-line metrics.
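A stripped-down cohort LTV curve in pure Python, assuming each order has already been tagged with its customer's cohort month and a months-since-acquisition offset (the tuple layout is our simplification of a real orders table):

```python
from collections import defaultdict

def cohort_ltv(orders):
    """Average cumulative revenue per customer, keyed by acquisition
    cohort and months since acquisition. `orders` is a list of
    (customer_id, cohort_month, months_since_acq, revenue) tuples."""
    revenue = defaultdict(float)   # (cohort, month_offset) -> revenue
    customers = defaultdict(set)   # cohort -> customer ids
    for cust, cohort, offset, rev in orders:
        revenue[(cohort, offset)] += rev
        customers[cohort].add(cust)
    curves = {}
    for cohort, custs in customers.items():
        cum, curve = 0.0, {}
        for offset in sorted(o for (c, o) in revenue if c == cohort):
            cum += revenue[(cohort, offset)]
            curve[offset] = cum / len(custs)   # cumulative LTV per customer
        curves[cohort] = curve
    return curves
```

Comparing the resulting curves across cohorts is what reveals whether newer cohorts are tracking above or below older ones at the same age.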
The architecture by which AI models improve continuously as they learn from every marketing decision and its outcome. Named after compound interest because the improvement rate accelerates: each cycle of prediction, observation, and model update makes the next prediction more accurate. A compound learning system at month 6 reflects more learned context than at month 1 because it has observed many decision-outcome pairs specific to your brand. In the agent commerce era, compound learning also applies to agent visibility optimization — the system learns which product data structures, descriptions, and attributes correlate with higher agent mention rates, and continuously refines recommendations. This is Cresva's core differentiator: static AI degrades over time while compound learning systems improve across both ad performance and agent discoverability.
A statistical range around a forecast that quantifies uncertainty. A 90% confidence interval means the actual result will fall within that range 90% of the time. Wider intervals signal higher uncertainty and should make you more cautious about committing budget. A forecast of '$500K revenue, 90% CI: $420K-$580K' is far more useful than '$500K revenue' alone because it tells you how much to trust the prediction. Narrower confidence intervals over time indicate the model is learning and improving.
The maximum amount of text an LLM can consider at once when generating a response. GPT-4o: 128K tokens (~96K words). Claude: 200K tokens (~150K words). Larger context windows mean agents can evaluate more products simultaneously in a single query. When a user asks 'compare the top 5 protein powders,' the agent needs enough context window to hold all 5 product descriptions, reviews, and specifications at once.
Revenue minus variable costs (COGS, shipping, payment processing, returns) expressed as a percentage. The true margin available to cover fixed costs and marketing spend. A product with 70% contribution margin can afford much higher CAC than one at 30%. Performance marketers should optimize toward contribution-margin-adjusted ROAS rather than raw ROAS to ensure every dollar of ad spend is generating actual profit, not just revenue.
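The margin-adjusted view can be sketched with two small functions (names are ours, not a standard metric API):

```python
def contribution_margin(revenue, variable_costs):
    """Contribution margin as a fraction of revenue."""
    return (revenue - variable_costs) / revenue

def margin_adjusted_roas(roas, margin):
    """ROAS restated as contribution profit per ad dollar.
    A value above 1.0 means ad spend generates actual profit."""
    return roas * margin
```

For example, a 4x raw ROAS at a 70% contribution margin yields $2.80 of contribution profit per ad dollar, while the same 4x ROAS at a 20% margin yields only $0.80 and is losing money.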
Purchases that originate from natural language conversations with AI agents, chatbots, or voice assistants — no search query, no ad click, no product listing page. The buyer describes what they need in plain language, the AI evaluates options, and a recommendation (or direct purchase) follows. Differs from traditional ecommerce in that the AI agent acts as a trusted intermediary, and the brand has zero control over how they're presented. Product data quality becomes the primary lever for conversion.
A platform-run experiment (available on Meta, Google, and TikTok) that measures incremental conversions by randomly splitting audiences into test and control groups at the platform level. More statistically rigorous than basic A/B tests because it uses platform-level randomization. However, still controlled by the platform, which creates potential conflicts of interest. Best used as a validation tool alongside independent incrementality measurement rather than as the sole source of truth.
The percentage of visitors or ad clickers who complete a desired action (purchase, signup, lead form). Calculated as conversions divided by clicks or sessions times 100. Ecommerce average is 2-3% but varies wildly by traffic source (3-5% for branded search, 0.5-1.5% for cold social traffic), device (desktop converts 2x higher than mobile for most categories), and price point (sub-$50 products convert 2-3x higher than $200+ products).
Meta's server-side tracking solution that sends conversion events directly from your server to Meta's ad system. Improves data accuracy, signal quality, and match rates compared to pixel-only tracking. Should run alongside the Meta Pixel in a redundant setup with deduplication to maximize event coverage. Properly implemented CAPI improves Meta's reported ROAS through better event matching and reduces CPA by improving the algorithm's optimization signal quality.
The average cost to generate one conversion, whether that's a purchase, signup, lead, or other defined action. Calculated as total ad spend divided by total conversions. The core efficiency metric for performance marketing. Important to distinguish between platform-reported CPA (based on attributed conversions, often inflated) and true incremental CPA (based on conversions your ads actually caused). A 'low' CPA that's based on overclaimed conversions is actually a mirage.
The cost per 1,000 ad impressions. A media cost metric that reflects how expensive it is to reach your target audience. Varies significantly by platform (TikTok generally cheapest, LinkedIn most expensive for B2B), audience targeting (broad is cheaper, narrow is pricier), seasonality (Q4 CPMs run substantially higher than Q1), and competitive intensity. Rising CPMs without rising conversion rates is a clear signal to either improve creative, adjust targeting, or reallocate budget.
The decline in ad performance that occurs when a target audience sees the same creative asset too many times. Detected through rising frequency paired with declining CTR, increasing CPA, and dropping ROAS. The fatigue curve varies by format: static images fatigue faster than video, and UGC-style content tends to last longer than polished studio content. Early detection is critical because performance degrades exponentially once fatigue sets in. Most brands detect fatigue late.
A proprietary Cresva concept: the decomposed DNA of ad creative broken into constituent elements — hook type (question, statistic, visual shock), visual style (UGC, studio, lifestyle), copy angle (benefit, problem-solution, social proof), offer structure (discount, free shipping, bundle), and CTA format (shop now, learn more, limited time). By analyzing which genome combinations correlate with performance across thousands of ads, Olivia can predict creative performance before launch and recommend specific element combinations.
How frequently new creative assets are introduced to replace fatigued ones. DTC brands need to refresh primary creatives regularly and maintain a steady pipeline of fresh creatives. The required refresh rate increases with spend level: higher spend drives higher frequency against the same audiences, which means more creative variety is needed to sustain performance.
Insights and model priors derived from analyzing anonymized, aggregated performance data across many brands. What works for one fashion brand often applies to others in the category. AI models trained on cross-brand data can make informed predictions for new brands from day one rather than starting from scratch. This creates a network effect where every brand on a platform contributes to the collective intelligence, and every brand benefits from it.
The percentage of people who click an ad after seeing it, calculated as clicks divided by impressions times 100. A proxy for creative relevance and audience targeting accuracy. Meta feed ads average 0.9-1.5% CTR, Google Search ads average 3-6%, and display averages 0.3-0.5%. Declining CTR at stable frequency suggests creative fatigue. Declining CTR at rising frequency confirms it. A high CTR with low conversion rate points to a landing page or offer problem, not an ad problem.
Revenue driven by channels that traditional analytics structurally cannot track — particularly AI agent recommendations that appear as 'direct traffic' or 'branded search' in GA4. When ChatGPT recommends your product and the user googles your brand name to buy, GA4 credits 'branded search' — the AI agent's influence is invisible. The dark funnel isn't a tracking gap you can fix with better UTMs; it's a structural limitation of click-based analytics in an agent-driven world.
A secure environment where two or more parties (typically a brand and an ad platform or publisher) can match and analyze their combined datasets without either party seeing the other's raw data. Used for audience matching, measurement, and attribution in a privacy-safe way. Meta's Advanced Analytics, Google's Ads Data Hub, and Amazon Marketing Cloud are major clean room environments. Increasingly important as user-level tracking becomes restricted.
Combining data from multiple sources (Meta Ads, Google Ads, TikTok Ads, Shopify, GA4, email platforms) into a single, consistent dataset with standardized naming conventions, unified timestamps, and reconciled metrics. Eliminates the discrepancies that occur when each platform reports slightly different numbers. Without unification, comparing Meta ROAS to Google ROAS is comparing apples to oranges because each platform defines conversions, attribution, and revenue differently.
An algorithmic attribution model offered by platforms like Google that uses machine learning to assign conversion credit based on observed path patterns. More accurate than rules-based models but still limited to the platform's own data and biased toward the platform's channels. Google's DDA will naturally favor Google touchpoints. Best used as one input among many rather than as a single source of truth for cross-channel budget decisions.
The phenomenon where additional ad spend produces progressively less incremental revenue. Every channel has a saturation curve, and spending past the optimal point wastes budget. A channel producing $5 in revenue per $1 at $50K/month spend might only produce $2.50 per $1 at $150K/month. Past the inflection point of the S-curve, marginal returns decline with every additional dollar; the optimal spend level is where marginal return falls to your minimum acceptable return. Most brands overspend on their 'best' channel because they don't model diminishing returns.
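An illustrative saturation curve makes this concrete. The Hill-type functional form and all parameter values below are made up for the example, not fitted to real data:

```python
def channel_revenue(spend, max_revenue=500_000.0, half_sat=100_000.0):
    """Toy saturation curve: revenue approaches max_revenue as spend
    grows; half_sat is the spend level that yields half the maximum."""
    return max_revenue * spend / (spend + half_sat)

def marginal_roas(spend, step=1_000.0):
    """Revenue generated by the next `step` dollars of spend."""
    return (channel_revenue(spend + step) - channel_revenue(spend)) / step
```

On this curve, marginal ROAS at $50K/month is well above marginal ROAS at $150K/month even though total revenue keeps rising, which is exactly the pattern that makes average ROAS a misleading guide to the next dollar of spend.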
Automated assembly of ad creatives from component parts (headlines, images, CTAs, body copy) by the ad platform's algorithm. Meta's Advantage+ Creative and Google's responsive ads are DCO systems. Useful for scaling variations without manual design work, but reduces creative control and makes it harder to learn what's actually working because the platform mixes components opaquely. Best for testing broad messaging directions before investing in full production of winning concepts.
A numerical representation of text in a high-dimensional vector space where semantically similar items are positioned close together. 'Running shoes for flat feet' and 'supportive athletic footwear for low arches' would have similar embeddings despite different words. AI agents use embeddings to match product data to user queries by semantic meaning, not just keyword overlap. This is why natural, descriptive product content outperforms keyword-stuffed titles in agent commerce.
Google's privacy-safe tracking solution that supplements existing conversion tags by sending hashed first-party customer data (email, phone, address) from your website to Google. Improves conversion measurement accuracy by matching conversions that would otherwise be lost due to cookie restrictions. Available for both Google Ads and GA4. Implementation requires passing hashed customer data at the point of conversion, either through gtag.js, Google Tag Manager, or the Google Ads API.
The process of selecting, transforming, and creating input variables (features) that help machine learning models make better predictions. In marketing AI, features include spend by channel, day of week, time since last creative refresh, audience saturation level, competitive CPM index, and hundreds more. The quality of features matters more than the complexity of the model. Good feature engineering is why specialized marketing AI outperforms general-purpose tools.
Structuring product titles, descriptions, attributes, and schema markup so AI agents can accurately parse, evaluate, and recommend your products. Unlike SEO (optimizing for search algorithms) or feed optimization for Google Shopping (optimizing for ad relevance), agent feed optimization targets the evaluation criteria AI agents use: attribute completeness, specification clarity, comparison-ready data points, and semantic richness. A product titled 'Protein Powder 2lb Chocolate' scores far lower than 'Premium Whey Isolate Protein Powder, Chocolate, 30g Protein, 2lb, Clean Label, Muscle Recovery.'
Training a pre-existing LLM on domain-specific data to improve its performance for particular tasks. Cresva's agents are fine-tuned on marketing performance data, attribution patterns, and creative analysis — making them dramatically more accurate for ecommerce decisions than general-purpose models. A fine-tuned model can distinguish between creative fatigue and audience saturation from the same ROAS decline pattern, something a general model would miss.
An attribution model that gives 100% of conversion credit to the first touchpoint in the customer journey. Overvalues awareness channels and ignores everything that happens between discovery and purchase. Rarely used as a primary model but useful as a comparison point against last-click to understand the full spectrum of channel contribution.
Data collected directly from your customers through owned touchpoints: purchase history, email engagement, site behavior, loyalty program activity, and customer service interactions. Increasingly the most valuable data asset as third-party cookies disappear, iOS tracking restrictions expand, and platform-provided data degrades. Brands with strong first-party data strategies (email collection, account creation, loyalty programs) have a structural advantage in targeting, personalization, and attribution accuracy.
A limit on how many times a single user sees your ad within a time period. Prevents creative fatigue, wasted impressions, and negative brand perception from overexposure. Lower caps for prospecting, higher caps tolerated for retargeting. Platform-level frequency caps are blunt instruments; campaign-level caps give more control. Without caps, algorithms optimize for cheap impressions — often by repeatedly showing ads to the same engaged users until they stop engaging.
Google's current analytics platform, built around an event-based data model. Includes ML-powered predictive metrics, cross-platform tracking, and BigQuery integration. Key limitations: aggressive data sampling at high volumes, limited historical lookback, and — critically in 2026 — a structural inability to track AI-agent-referred traffic. When a user asks ChatGPT for a product recommendation, googles the brand name, and buys, GA4 attributes it to 'branded search' or 'direct traffic.' The AI agent's influence is completely invisible. This dark funnel blind spot means GA4 systematically understates the value of agent commerce and overstates the value of branded search.
An incrementality testing method that uses matched geographic regions as test and control groups. One region receives ads while a statistically similar region does not. Particularly useful when user-level holdout tests aren't feasible (e.g., due to iOS restrictions or cross-device complexity). Requires careful market matching and enough regional volume to be statistically meaningful. Works well for measuring the incremental impact of channel-level spend changes like pausing Meta in one region while keeping it active in another.
The percentage of viewers who watch 50% or more of a video ad. Measures whether your creative sustains attention after the initial hook. A high hook rate with a low hold rate indicates a strong opening but weak middle content. For direct response ads, hold rate correlates with conversion intent - people who watch most of your video are significantly more likely to click and purchase.
A controlled experiment where a portion of the target audience is deliberately excluded from seeing ads, creating a control group. By comparing conversion rates between the exposed group and the holdout group, you can measure the true incremental impact of your advertising. The most straightforward incrementality test to run. Requires enough volume to achieve statistical significance and a long enough test window (typically 2-4 weeks) to capture full purchase cycles.
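The lift arithmetic for a holdout test can be sketched as follows; the group sizes and conversion counts are invented:

```python
def incremental_lift(exposed_conv, exposed_n, holdout_conv, holdout_n):
    """Incremental lift from a holdout experiment: exposed-group CVR
    minus holdout (control) CVR, plus the implied number of
    conversions the ads actually caused in the exposed group."""
    cvr_exposed = exposed_conv / exposed_n
    cvr_holdout = holdout_conv / holdout_n
    lift = cvr_exposed - cvr_holdout
    incremental_conversions = lift * exposed_n
    return lift, incremental_conversions

# 3% CVR exposed vs 2% CVR holdout: ads caused roughly a third
# of the exposed group's conversions.
```

With 300 exposed conversions but only a 1-point CVR lift over holdout, about 100 conversions are incremental; the other 200 would have happened anyway, which is the overclaiming that platform attribution hides.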
The percentage of viewers who watch past the first 3 seconds of a video ad. The single most important leading indicator of video creative quality. If people aren't stopping to watch, nothing else matters - your message, offer, and CTA are irrelevant. Benchmark hook rates vary by platform: TikTok generally runs higher than Meta feed, with YouTube pre-roll lower still. Improving hook rate is usually the highest-leverage creative optimization you can make.
The true return on ad spend after removing conversions that would have happened without any advertising. Measured through holdout testing, geo-lift studies, or conversion lift experiments. Always lower than platform-reported ROAS because platforms count organic conversions as ad-driven. For example, if Meta reports a 5x ROAS but 30% of those conversions were organic, your iROAS is actually 3.5x. This metric is the single most important number for budget allocation decisions because it tells you the real incremental value of each dollar spent.
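The 5x/30% example above reduces to a one-line adjustment. This is a simplification for illustration: in practice the organic share comes out of a lift experiment, not a fixed assumption:

```python
def iroas(platform_roas, organic_share):
    """Incremental ROAS: platform-reported ROAS discounted by the
    share of attributed conversions that would have happened anyway
    (as estimated from a holdout or geo-lift test)."""
    return platform_roas * (1 - organic_share)

# Meta reports 5x, lift testing shows 30% of attributed
# conversions were organic: true iROAS is 3.5x.
```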
The true causal lift in conversions directly attributable to advertising. It measures what would NOT have happened without ad exposure. The gold standard of attribution accuracy because it answers the fundamental question: did this ad actually cause this sale, or would the customer have bought anyway? Measured through controlled experiments like holdout tests, geo-lift studies, and conversion lift studies. Without incrementality measurement, you're optimizing toward a number that includes conversions your ads didn't cause.
The accumulated knowledge about a brand's performance patterns, constraints, past decisions, and learned lessons that persists across all agent interactions and compounds over time. Unlike chat history (which is session-based), institutional memory captures durable facts: 'TikTok CAC is consistently above target,' 'Q4 Meta CPMs rise 40% by week 3 of November,' 'the founder prefers weekly Slack reports over email.' After 6 months, a brand's institutional memory contains thousands of interconnected insights that make every agent dramatically smarter.
Apple's App Tracking Transparency framework, launched April 2021, requiring apps to get explicit user permission before tracking activity across other apps and websites. The majority of users opt out. Devastated Meta's tracking accuracy, substantially reduced retargeting pool sizes on iOS, and degraded conversion reporting reliability. Forced the entire industry toward server-side tracking, probabilistic modeling, and first-party data strategies. The single largest disruption to digital advertising measurement in the past decade.
A neural network trained on massive text datasets that generates human-like responses to natural language prompts. GPT-4o, Claude, Gemini, and Llama are prominent examples. LLMs power AI shopping agents, making them the engine behind agent commerce. For brands, LLMs are the new gatekeepers: they decide which products to recommend based on the data available to them. Unlike search algorithms that rank links, LLMs synthesize information and make direct recommendations.
An attribution model that gives 100% of conversion credit to the last touchpoint before purchase. Still the default in many analytics setups. Systematically overvalues bottom-funnel channels like branded search and retargeting while undervaluing awareness and consideration channels like paid social prospecting, YouTube, and display. A brand running last-click attribution will consistently underspend on top-of-funnel and overspend on branded search, creating the illusion that brand search is highly efficient when it's actually capturing demand generated elsewhere.
The total revenue (or profit) a customer generates over their entire relationship with the brand. Calculated by multiplying average order value by purchase frequency by average customer lifespan. High-LTV brands (subscription, consumables, fashion with high repeat rates) can afford higher acquisition costs. A brand with $200 LTV and $60 CAC has a 3.3x LTV:CAC ratio, which is generally considered healthy. LTV should be calculated on a cohort basis to detect trends over time.
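The calculation reduces to a product of three terms; a sketch using the $200 LTV / $60 CAC example above, with hypothetical order values:

```python
def lifetime_value(avg_order_value, orders_per_year, years_retained):
    """LTV = AOV x purchase frequency x average customer lifespan."""
    return avg_order_value * orders_per_year * years_retained

ltv = lifetime_value(avg_order_value=50, orders_per_year=2, years_retained=2)
cac = 60
ratio = ltv / cac  # LTV:CAC, ~3.3x in this example
```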
The cost of acquiring one additional customer at the current spend level. Different from average CPA because it reflects the cost of the next conversion, not the average of all conversions. As you increase spend, marginal CPA rises due to diminishing returns. The optimal spend level is where marginal CPA equals your target CPA or where marginal CPA across channels is equalized. This is the single most useful metric for budget allocation.
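In practice, marginal CPA is estimated from the deltas between two spend levels; a sketch with hypothetical numbers:

```python
def marginal_cpa(spend_before, customers_before, spend_after, customers_after):
    """Cost of the *next* customer: delta spend / delta customers."""
    return (spend_after - spend_before) / (customers_after - customers_before)

# Hypothetical: raising monthly spend $10k -> $12k added 20 customers.
avg_cpa = 12_000 / 145                             # average ~ $83
m_cpa = marginal_cpa(10_000, 125, 12_000, 145)     # marginal = $100
# Average CPA looks fine, but each additional customer cost $100 --
# if target CPA is $90, the last $2k of spend was unprofitable.
```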
A statistical approach that uses regression analysis on historical data to estimate the contribution of each marketing channel to business outcomes. Works at an aggregate level (not user-level) making it privacy-safe and resilient to tracking changes. Takes into account external factors like seasonality, promotions, and economic conditions. Typically requires multiple years of historical data and works best for high-spend brands across multiple channels. Slower to implement than MTA but provides a more holistic and unbiased view of channel effectiveness.
Total revenue divided by total marketing spend, including non-advertising costs like email platform fees, SEO tools, creative production, and agency retainers. Provides a holistic view of marketing ROI that ROAS misses, because it reflects the true cost of customer acquisition across all channels rather than ad spend alone. A declining MER with stable ROAS suggests rising non-ad costs are eating into overall marketing efficiency.
The gradual degradation of a machine learning model's accuracy over time as real-world conditions change. In marketing, model drift happens when consumer behavior shifts, competition changes, platform algorithms update, or seasonal patterns evolve. A model trained on Q1 data will perform increasingly poorly through Q2-Q4 without retraining. Compound learning systems counteract drift through continuous feedback loops. Static models suffer from drift silently until performance degrades noticeably.
A system design where multiple specialized AI agents collaborate on complex tasks, each bringing domain expertise. Cresva uses 7 specialized agents: Maya (memory and institutional knowledge), Felix (forecasting and predictions), Sam (strategy and scenario modeling), Parker (attribution and incrementality), Dana (data quality and reconciliation), Dex (delivery and reporting), and Olivia (creative analysis and optimization). Unlike a single monolithic model, each agent is expert in its domain and shares context with all others through a unified memory layer. When Parker detects Meta overclaiming by 34%, Felix automatically adjusts forecasts and Sam recalculates budget scenarios — all within seconds.
An adaptive testing method that dynamically shifts traffic toward better-performing creative variants while still exploring new options. Unlike A/B testing which splits traffic 50/50 until the test ends, bandit testing automatically reduces exposure to underperformers in real-time. Finds winners faster than traditional A/B tests by balancing exploitation (showing what works) with exploration (trying new things). Named after the slot machine problem in statistics.
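One common bandit strategy is Thompson sampling: sample a plausible conversion rate for each variant from its Beta posterior and serve the variant whose sample is highest. A minimal sketch with hypothetical creative variants and made-up win/loss counts:

```python
import random

def thompson_pick(arms):
    """arms: {variant: (conversions, non_conversions)} observed so far.
    Sample a plausible CVR per arm from its Beta posterior; serve the
    arm whose sample is highest. Strong arms win most draws, but weak
    arms still get occasional exposure (exploration)."""
    best, best_sample = None, -1.0
    for name, (wins, losses) in arms.items():
        sample = random.betavariate(wins + 1, losses + 1)
        if sample > best_sample:
            best, best_sample = name, sample
    return best

arms = {"hook_A": (30, 970), "hook_B": (45, 955), "hook_C": (2, 98)}
choice = thompson_pick(arms)  # usually hook_B, but not always
```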
An attribution model that distributes conversion credit across multiple touchpoints in the customer journey rather than giving all credit to a single interaction. Common models include linear (equal credit to all touches), time-decay (more credit to recent touches), position-based (40% to first and last touch, 20% split across middle), and data-driven (algorithmically weighted). MTA is better than last-click but still relies on trackable digital touchpoints, meaning it misses offline influence, word-of-mouth, and impressions that don't result in clicks.
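The 40/20/40 position-based split can be sketched in a few lines; the journey below is hypothetical:

```python
def position_based_credit(touchpoints):
    """40% to first touch, 40% to last, remaining 20% split across the middle.
    touchpoints: ordered list of channels in the customer journey."""
    n = len(touchpoints)
    if n == 1:
        return {touchpoints[0]: 1.0}
    if n == 2:
        return {touchpoints[0]: 0.5, touchpoints[1]: 0.5}
    middle_share = 0.2 / (n - 2)
    credit = {t: middle_share for t in touchpoints[1:-1]}
    credit[touchpoints[0]] = 0.4
    credit[touchpoints[-1]] = 0.4
    return credit

journey = ["meta_prospecting", "youtube", "email", "branded_search"]
credit = position_based_credit(journey)
# First and last touch get 40% each; youtube and email split the rest
```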
The number of days until a customer's cumulative purchases exceed their acquisition cost. A 30-day payback period means you recover CAC within one month. Critical for cash flow planning: a brand with a 90-day payback period and $100K/month in new customer spend needs $300K in working capital just to fund acquisition. Shorter payback periods enable faster scaling because you can reinvest recovered CAC into acquiring more customers sooner.
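Given a cohort's cumulative revenue curve, the payback day is just the first day the curve crosses CAC; a sketch with a hypothetical repeat-purchase curve:

```python
def payback_days(cac, cumulative_revenue_by_day):
    """First day on which a cohort's cumulative revenue covers its CAC.
    cumulative_revenue_by_day: list where index i = cumulative $ by day i."""
    for day, revenue in enumerate(cumulative_revenue_by_day):
        if revenue >= cac:
            return day
    return None  # cohort has not paid back yet

# Hypothetical cohort: $60 CAC, repeat purchases accruing over a week
curve = [0, 25, 25, 40, 40, 55, 70, 70]
payback_days(60, curve)  # -> 6
```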
Google's AI-driven campaign type that runs across all Google surfaces — Search, Display, YouTube, Discover, Gmail, and Maps — from a single campaign. Uses Google's ML to optimize creative assets, audiences, and bidding across surfaces in real-time. Black-box optimization: you provide assets and goals, Google decides where and how to show them. Reporting is limited — you can't see which search terms triggered your ads or which placements drove conversions. Typically replaces Smart Shopping and captures a meaningful share of total Google spend for ecommerce brands.
The gap between what an ad platform reports as conversions and the true incremental conversions your ads actually caused. Typically inflated across Meta, Google, and TikTok. Happens because platforms use broad attribution windows, count view-through conversions generously, and take credit for conversions that were already going to happen. A brand running with a meaningful overclaim rate is effectively misallocating spend based on phantom conversions. The only way to quantify overclaim is through controlled incrementality testing.
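Once an incrementality test has produced a true conversion count, the overclaim rate is a one-line calculation; the numbers below are hypothetical:

```python
def overclaim_rate(platform_reported, incremental):
    """Share of platform-reported conversions the ads didn't actually cause."""
    return (platform_reported - incremental) / platform_reported

# Platform reports 1,000 conversions; a lift test finds only 660 incremental
overclaim_rate(1000, 660)  # -> 0.34, i.e. 34% overclaim
```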
Asking customers 'how did you hear about us?' after purchase to capture self-reported channel influence. Captures channels that digital attribution misses entirely: podcast ads, word-of-mouth, influencer content, TikTok organic, and offline touchpoints. Biased by recency and salience (customers remember what's top of mind, not what actually influenced them), but directionally valuable as a complement to click-based and incrementality-based attribution. Best implemented as a required field at checkout with a well-designed dropdown.
AI agents that detect issues and surface recommendations before you ask. Rather than waiting for you to notice a ROAS drop and investigate, proactive agents monitor 24/7 and alert you: 'Meta ROAS dropped 18% over 3 days — creative fatigue detected on your top 2 ads. Recommended: refresh creatives, here are 3 angles based on your best performers.' Cresva's agents check for anomalies, budget pacing issues, creative fatigue, and competitive shifts continuously and post findings to your team chat.
A forward-looking forecast of expected return on ad spend, generated by ML models using historical performance data, creative quality signals, audience saturation levels, and competitive dynamics. Unlike reported ROAS (backward-looking and inflated), pROAS tells you what to expect before you spend. A campaign with a pROAS of 2.1x in its current state versus 3.4x with a creative refresh gives you a clear decision framework. Felix generates pROAS forecasts for every campaign daily.
A technique where an LLM retrieves real-time data from external sources (product databases, review sites, brand websites) before generating a response. This is how AI shopping agents pull current pricing, availability, and product specifications rather than relying solely on training data. Brands with well-structured product data and schema markup are more easily retrieved by RAG systems, leading to more accurate and favorable agent recommendations.
The total number of unique people who saw your ad at least once. Different from impressions, which counts total views including repeats. Impressions divided by reach gives average frequency. Monitoring reach alongside spend reveals whether increased budget is finding new people or just hitting the same audience harder. Flattening reach at rising spend is an early indicator of audience saturation.
Predicting future revenue based on historical patterns, current trends, seasonality, channel performance, and external factors. AI-powered forecasting outperforms manual spreadsheet methods, especially over longer horizons. Accurate forecasting is foundational for budget planning, inventory management, and cash flow decisions. The best forecasting models account for channel-level saturation curves, creative fatigue rates, competitive dynamics, and macroeconomic indicators rather than simple trend extrapolation.
Revenue generated per dollar spent on advertising. A 4x ROAS means $4 in revenue for every $1 in ad spend. Platform-reported ROAS is typically inflated because ad platforms take credit for conversions they didn't cause. True ROAS can only be measured through incrementality testing, which compares results against a holdout group that received no ads. Most ecommerce brands discover their real ROAS is meaningfully lower than what Meta or Google reports.
The typical shape of ad spend efficiency when plotted on a graph. At low spend, returns are minimal (the learning phase where the algorithm has insufficient data). At mid-range spend, efficiency peaks (the sweet spot). At high spend, diminishing returns set in as the audience saturates. Every channel, campaign, and audience has its own S-curve with a different optimal spend level. Finding and staying in the sweet spot of each curve is the core challenge of budget allocation.
The spend level at which a channel or audience can no longer produce meaningful incremental returns. Beyond this point, additional spend primarily drives frequency against the same users rather than reaching new potential customers. Saturation varies by channel, audience size, creative variety, and seasonality. A niche audience of 500K people saturates much faster than a broad audience of 20M. Detecting saturation early prevents wasted spend.
Simulating different budget allocation scenarios to predict outcomes before committing real spend. For example: 'What happens if I shift $20K from Google Search to Meta prospecting?' or 'What if I increase total spend 30% for Black Friday?' Reduces risk by testing decisions mathematically against historical patterns and forecasting models before any money moves. The difference between reactive optimization and proactive strategy.
Running thousands of hypothetical budget allocations, channel mixes, or creative strategies through ML models to predict outcomes before committing real spend. 'What happens if I shift $20K from Google Search to Meta prospecting?' generates a probability distribution of outcomes in seconds. Reduces risk by testing decisions mathematically against historical patterns, diminishing returns curves, and competitive dynamics. Sam runs elasticity-based scenario simulations with confidence intervals.
Structured data (JSON-LD Product schema) on product pages that helps AI agents extract precise product attributes — price, availability, ratings, specifications, brand, material, size — without scraping and parsing HTML. Products with complete schema markup are more likely to be recommended by AI agents because the agent can evaluate them with higher confidence. Essential fields: name, description, price, availability, aggregateRating, brand, sku, and all relevant product attributes for your category.
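A minimal illustrative JSON-LD block covering the essential fields above, for a hypothetical product:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Hydrating Face Cream",
  "description": "Fragrance-free moisturizer for dry, sensitive skin. 50ml.",
  "sku": "HFC-50",
  "brand": { "@type": "Brand", "name": "Example Brand" },
  "offers": {
    "@type": "Offer",
    "price": "24.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "218"
  }
}
```

Embedded in a `<script type="application/ld+json">` tag, this lets an agent read price, stock status, and ratings directly instead of parsing page HTML.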
Recurring patterns in ad performance tied to time periods. Includes annual patterns (Black Friday, Q4 surge, January slump, summer slowdown), monthly patterns (payday effects, end-of-month budget flushes), weekly patterns (higher conversion on weekdays for B2B, weekends for impulse purchases), and even intra-day patterns. Must be accounted for in both forecasting and budget allocation. Ignoring seasonality leads to panic during predictable dips and overconfidence during predictable peaks.
Sending conversion data directly from your server to ad platforms, bypassing browser-level limitations like ad blockers, cookie restrictions, and iOS privacy features. More reliable than pixel-based tracking because it's not affected by client-side interference. Required for accurate data in the post-iOS 14.5 era. Implementations include Meta's Conversions API, Google's Enhanced Conversions, and TikTok's Events API. Should run alongside client-side pixels for maximum data coverage.
Apple's privacy-preserving attribution framework for iOS app install campaigns. Provides aggregated, delayed conversion data without user-level identifiers. Limited to 64 possible conversion values and imposes random time delays on postbacks. Makes granular optimization difficult but is the only sanctioned attribution method for iOS app campaigns. Requires careful conversion value schema design to extract maximum signal from limited data slots.
The threshold at which experimental results are unlikely to have occurred by random chance. Conventionally set at 95% confidence (p-value < 0.05). Running A/B tests or budget changes before reaching statistical significance leads to false conclusions and wasted spend. The required sample size depends on the expected effect size and baseline conversion rate. Small differences in performance require much larger samples to detect reliably. Rushing to conclusions is one of the most expensive mistakes in performance marketing.
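The sample-size relationship can be made concrete with the standard two-proportion approximation; a sketch assuming 95% confidence and 80% power (z ≈ 1.96 and 0.84):

```python
import math

def sample_size_per_variant(p_base, mde_rel, z_alpha=1.96, z_beta=0.84):
    """Approximate n per variant for a two-proportion test.
    p_base: baseline conversion rate; mde_rel: minimum detectable
    relative lift (0.10 = +10%). Standard normal-approximation formula."""
    p1, p2 = p_base, p_base * (1 + mde_rel)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting a 10% lift on a 2% baseline needs tens of thousands of
# visitors per variant; a 50% lift needs far fewer.
sample_size_per_variant(0.02, 0.10), sample_size_per_variant(0.02, 0.50)
```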
A statistical method used in geo-lift testing that creates a mathematically constructed 'control' region by weighting a combination of non-test regions to match the test region's pre-test behavior. More accurate than simply comparing one city to another because it accounts for unique regional characteristics. The synthetic control effectively answers 'what would have happened in the test region if we hadn't changed anything?' by creating a virtual counterfactual from real data.
Small data files placed on users' browsers by domains other than the website being visited, historically used to track users across the web for ad targeting and attribution. Being phased out by browser restrictions (Safari and Firefox already block them, Chrome is implementing restrictions). Their deprecation has degraded retargeting accuracy, reduced attribution reliability, and increased the importance of first-party data and server-side tracking.
The percentage of people who stop scrolling when your ad appears in their feed. Measured as 3-second video views divided by impressions on Meta, or similar metrics on other platforms. Distinct from hook rate in that it measures the initial attention grab before the viewer has processed any content. High thumb-stop, low hook rate means your thumbnail or first frame is compelling but the content immediately disappoints.
TikTok's automated campaign optimization system, analogous to Meta's ASC and Google's PMax. Uses TikTok's algorithm to optimize creative selection, audience targeting, and bidding from a simplified campaign setup. Particularly effective for creative-first brands because TikTok's algorithm heavily weights creative quality and engagement metrics (watch time, shares, comments) in its optimization. Early results show lower CPA versus manual campaigns, but requires a steady pipeline of fresh short-form video creative to maintain performance.
A statistical method that analyzes sequential data points (ad spend, revenue, conversions) over time to identify trends, seasonal patterns, and cyclical behavior. The foundation of most forecasting models. Common techniques include ARIMA, Prophet, and LSTM neural networks. For ecommerce advertising, time series analysis reveals hidden patterns like the lag between Meta spend increases and Shopify revenue impact, or the creative fatigue cycle for video ads.
The basic unit of text that LLMs process — roughly 4 characters or ¾ of a word. 'Protein powder' is 3 tokens. Matters for agent commerce because AI agents have processing budgets: product descriptions consuming fewer tokens while conveying more information are evaluated more efficiently. Concise, attribute-rich product data performs better than verbose marketing copy when parsed by AI agents.
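The 4-characters-per-token rule of thumb gives a quick way to compare descriptions; a rough heuristic only, since real tokenizers vary by model, and the product copy below is hypothetical:

```python
def rough_token_count(text):
    """Heuristic estimate: ~4 characters per token for English text."""
    return max(1, round(len(text) / 4))

terse = "Vegan protein powder, 1kg, chocolate, 25g protein per serving"
verbose = ("Our incredible, best-in-class vegan protein powder comes in a "
           "generous one-kilogram bag with a delicious chocolate flavor "
           "and twenty-five grams of protein in every single serving")

# Same attributes, very different token budgets
rough_token_count(terse), rough_token_count(verbose)
```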
Ad creative that looks and feels like organic content created by real users rather than polished brand advertisements. Includes customer testimonials, unboxing videos, product reviews, and 'day in my life' style content. Consistently outperforms studio-shot creative on social platforms because it matches the native content format. Typically drives lower CPA than traditional brand creative on Meta and TikTok. UGC outperforms in prospecting but often underperforms polished creative in retargeting.
Tags added to destination URLs to track traffic sources in analytics tools. The five standard parameters: utm_source (platform), utm_medium (channel type), utm_campaign (campaign name), utm_content (ad variation), and utm_term (keyword). Inconsistent UTM naming is the number one cause of messy attribution data. Establish a naming convention upfront and enforce it with templates. Missing or incorrect UTMs create 'direct/none' traffic in analytics that makes attribution impossible.
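One way to enforce a convention is to generate tagged URLs from a helper instead of typing parameters by hand; a sketch (the lowercase/hyphen convention and example values are illustrative, and the helper assumes the base URL has no existing query string):

```python
from urllib.parse import urlencode, urlparse, urlunparse

def tag_url(base_url, source, medium, campaign, content=None, term=None):
    """Append the five standard UTM parameters, normalized to
    lowercase with hyphens so analytics rows don't fragment."""
    params = {"utm_source": source, "utm_medium": medium,
              "utm_campaign": campaign}
    if content:
        params["utm_content"] = content
    if term:
        params["utm_term"] = term
    normalized = {k: v.lower().replace(" ", "-") for k, v in params.items()}
    return urlunparse(urlparse(base_url)._replace(query=urlencode(normalized)))

tag_url("https://example.com/product", "meta", "paid-social",
        "BFCM 2024", content="ugc-video-01")
```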
Searching by semantic similarity rather than keyword matching. Instead of finding documents containing the exact words 'budget moisturizer for dry skin,' vector search finds products whose meaning is closest to that intent — including products described as 'affordable hydrating cream for sensitive, dehydrated complexions.' AI shopping agents use vector search to match user queries to products, which is why keyword-stuffed titles perform worse than naturally descriptive ones.
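Under the hood this is usually nearest-neighbor search over embedding vectors by cosine similarity; a toy sketch with hypothetical 3-dimensional vectors (real systems use embeddings of hundreds of dimensions produced by a model):

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction (same meaning)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings; in practice these come from an embedding model
query = [0.9, 0.1, 0.4]   # "budget moisturizer for dry skin"
products = {
    "affordable hydrating cream": [0.8, 0.2, 0.5],
    "matte lipstick":             [0.1, 0.9, 0.2],
}
best = max(products, key=lambda name: cosine(query, products[name]))
# -> "affordable hydrating cream", despite sharing no keywords
```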
Credits a conversion to an ad that was viewed but not clicked. Common on display, video, and social platforms. The attribution window varies by platform: Meta defaults to 1-day view-through, Google Display can go up to 30 days. Often overcounts because simply viewing an ad in a feed doesn't mean it caused the purchase. A user who was already going to buy might see your ad, never click it, and purchase directly; that sale still gets counted as an ad-driven conversion. Narrowing view-through windows or excluding them entirely gives a more conservative (and more accurate) picture.
The practice of adjusting budget allocation across channels every week based on current performance data rather than waiting for monthly or quarterly reviews. Performance shifts constantly due to competitive dynamics, creative fatigue, audience saturation, and seasonal patterns. Brands that rebalance weekly capture opportunities sooner than those on monthly cycles. Even small weekly shifts between channels compound into significant efficiency gains over a quarter.
Want to see these concepts in action? Read the guides or explore the methodology.