How agents work

One question. Seven agents collaborate.

The mechanism behind a Cresva recommendation. How a question routes through the workforce, what each agent contributes, and why the answer compounds rather than plateaus.

The architectural choice

Single AI plateaus. A workforce compounds.

A single model is a generalist. Ask it to forecast revenue, debias attribution, score creative fatigue, and write a recommendation, and it does each task at roughly 70 percent of what a specialist would achieve. Accuracy plateaus there, regardless of how much data you feed it.

Cresva splits the work. Each agent owns one function and shares one memory. Felix gets better at forecasting because Felix only forecasts. Parker gets better at debiasing because Parker only debiases. The shared memory means specialization does not fragment context, and the shared context means the workforce answers as one even though the agents reason as seven.

Specialization creates mastery. Coordination creates intelligence.

Inside a recommendation

"Should we shift budget?" What happens next.

A single question routes through seven agents in sequence. Each contributes one slice. The recommendation reads as a decision, not a dashboard.

You asked

Should we shift 20 percent of budget from Google to Meta for Q4?

Step 1
Maya: Recalled

Pulls user context. CAC ceiling at $65, margin floor 15%, Q4 historically +40% revenue. Surfaces past Meta vs Google discussions.

Conversations scanned, relevant context retrieved

Step 2
Dana: Fetched

Fetches the last 90 days across Meta, Google, Shopify. Reconciles spend and revenue. Meta ROAS 3.2x reported, Google 2.9x reported.

3 platforms queried, 2.4M data points processed

Step 3
Parker: Verified

Applies platform debiasing. True incrementality: Meta 2.4x not 3.2x, Google 2.6x not 2.9x. Meta overclaiming by 33%.

Holdout calibration data, last 12 weeks

Step 4
Olivia: Flagged

Top three Meta creatives showing fatigue, CTR down 23% in 14 days. New variant pipeline empty. Risk: CAC spike if Meta share grows.

127 creatives scanned, fatigue confidence 89%

Step 5
Felix: Forecast

Q4 forecast at current 60/40 allocation: $2.1M revenue, $58 CAC. With 70/30 shift: $2.05M revenue, $71 CAC, breaches the cap.

Elasticity-based scenario simulation, 78% confidence

Step 6
Sam: Tested

Tests 1,247 scenarios. The 70/30 shift breaches the CAC cap with 87% probability. Recommends hold and creative refresh first.

1,247 scenarios tested, constraint violations flagged

Step 7
Dex: Delivered

Compiles the response. Formats for Slack delivery per the user's preference. Logs the recommendation in the brand memory store.

Per-recipient formatting applied, response sent

Recommendation
87% confidence

Hold the current 60/40 allocation. Refresh Meta creatives first. Re-run the analysis in three weeks.

  • 70/30 shift puts CAC at $71, above the $65 cap
  • Revenue drops $50K versus the current allocation
  • Meta's reported "3.2x ROAS" is 2.4x after debiasing
  • Top three Meta creatives are fatiguing, CTR down 23 percent
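
The sequence above can be sketched as a pipeline in which every agent reads and extends one shared context. This is a minimal illustration under stated assumptions: the agent names and figures come from the walkthrough, but the function bodies are placeholders, not Cresva's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Shared memory: every agent reads it and appends one slice."""
    question: str
    findings: dict = field(default_factory=dict)

# Placeholder agents; each owns one function and writes one slice.
def maya(ctx):    ctx.findings["constraints"] = {"cac_ceiling": 65, "margin_floor": 0.15}
def dana(ctx):    ctx.findings["reported_roas"] = {"meta": 3.2, "google": 2.9}
def parker(ctx):  ctx.findings["true_roas"] = {"meta": 2.4, "google": 2.6}
def olivia(ctx):  ctx.findings["creative_fatigue"] = {"meta_ctr_drop": 0.23}
def felix(ctx):   ctx.findings["forecast"] = {"70/30": {"revenue": 2_050_000, "cac": 71}}
def sam(ctx):     ctx.findings["verdict"] = "hold"  # CAC of 71 breaches the 65 cap
def dex(ctx):     ctx.findings["delivered"] = True

PIPELINE = [maya, dana, parker, olivia, felix, sam, dex]

def answer(question: str) -> Context:
    ctx = Context(question)
    for agent in PIPELINE:  # strict sequence: each step builds on the last
        agent(ctx)
    return ctx

result = answer("Should we shift 20% of budget from Google to Meta for Q4?")
```

The design choice the sketch captures: specialization lives in the functions, coordination lives in the one `Context` object they all share.
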

What gets remembered

Constraints stay learned. Decisions stay logged.

Maya extracts what matters from every conversation. Constraints, patterns, history. Surfaced when the next agent needs it.

Constraints

  • CAC ceiling: $65

    Stated in conversation #234

  • Margin floor: 15%

    Set during onboarding

  • Weekend budget: −20%

    Preference from Q2 review

Patterns

  • Q4 revenue multiplier: 1.42x

    Three years of historical data

  • Meta creative fatigue: 21 days avg

    127 creatives analyzed

  • Google CPC trend: +3% / month

    14 months of bid data

History

  • TikTok test, Aug 2025: Failed, $12K loss

    Conversation #456

  • UGC campaign, Sep 2025: 2.1x ROAS

    Performance tracked end to end

  • Black Friday 2024: $340K revenue

    Reconciled against Shopify

Human-approval workflow on every memory write. Nothing learned without a sign-off, and everything written can be inspected, edited, or removed.
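
The approval gate can be sketched as a store where writes land in a pending queue until a human signs off. A minimal sketch; `MemoryStore` and its method names are hypothetical, not Cresva's API.

```python
class MemoryStore:
    """Memory writes queue for human sign-off; nothing is learned silently."""

    def __init__(self):
        self.entries = {}   # approved memories: inspectable and editable
        self.pending = []   # proposed writes awaiting approval

    def propose(self, key, value, source):
        """An agent proposes a memory; it is not yet learned."""
        self.pending.append({"key": key, "value": value, "source": source})

    def approve(self, index):
        """Human sign-off moves a proposal into the live store."""
        entry = self.pending.pop(index)
        self.entries[entry["key"]] = entry
        return entry

    def remove(self, key):
        """Everything written can be inspected, edited, or removed."""
        self.entries.pop(key, None)

store = MemoryStore()
store.propose("cac_ceiling", 65, source="conversation #234")
store.approve(0)  # nothing learned without this step
```
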

Compound learning

Every prediction tracks against reality. Misses become signal.

The target curve that compound learning is built to deliver: 78 percent forecast accuracy in month one, 91 percent by month nine. Not because the model is smarter, but because it has learned your patterns.

1

Measure.

Every prediction has a timestamp. Every outcome is recorded. The delta between expected and actual is the learning signal.

2

Identify the cause.

External factor like a competitor sale or platform change? Seasonality? Bad data? The cause determines the fix.

3

Update the model.

Model weights adjust. Elasticity curves recalibrate. Confidence intervals tighten. The next prediction starts closer.
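
The three steps above can be sketched as one loop iteration: measure the miss, attribute it, nudge the model. A minimal numeric sketch assuming a trivial one-weight model; `learning_step` and the learning rate are illustrative, not Cresva's recalibration.

```python
def learning_step(predicted: float, actual: float, weight: float,
                  lr: float = 0.1) -> tuple[float, float]:
    """One pass of the loop: measure the miss, then move the model toward it."""
    delta = actual - predicted  # 1. measure: the delta is the learning signal
    # 2. identify the cause would happen here (competitor sale, seasonality,
    #    bad data); this sketch assumes the miss is attributable to the model.
    weight += lr * delta        # 3. update: the next prediction starts closer
    return weight, abs(delta)

# Hypothetical outcomes: each cycle shrinks the error versus the first miss.
weight, err = 1.0, None
for actual in [1.3, 1.25, 1.28]:
    predicted = weight          # trivial model: the weight is the prediction
    weight, err = learning_step(predicted, actual, weight)
```
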

Forecast accuracy, by month (chart): the architecture target rises 13 points from M1 to M9, across the 9-month pilot window.

Conflict resolution

When agents disagree. Confidence wins.

Multi-agent systems fail loudly when they disagree quietly. Cresva surfaces conflicts instead of hiding them. When Felix and Sam reach different conclusions on the same question, the recommendation includes both calls, the confidence on each, and the reason the workforce sided with one of them.

Confidence is not a marketing number. Each agent reports an interval based on backtested accuracy and the data window backing the call. The agent with higher confidence and tighter context wins by default. The dissenting view is shown in the response, not buried.

Worked example
Felix

Q4 forecast: $2.1M revenue at 60/40 allocation.

78% confidence, 3-month data window

Sam

70/30 shift breaches the CAC cap with 87% probability.

87% confidence, 1,247 scenarios tested

Resolution
Sam wins on confidence

The user sees both numbers. The recommendation goes with Sam's call. Felix's forecast is shown alongside, with the score that explains why it was the second-place answer.
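
The resolution rule can be sketched in a few lines: rank the competing calls by backtested confidence, ship the top call as the recommendation, and attach the dissent rather than discarding it. A minimal sketch; `Call` and `resolve` are illustrative names, not Cresva's internals.

```python
from dataclasses import dataclass

@dataclass
class Call:
    agent: str
    claim: str
    confidence: float  # backtested, over the agent's data window

def resolve(calls: list[Call]) -> dict:
    """Higher confidence wins by default; dissenting views ship with the answer."""
    ranked = sorted(calls, key=lambda c: c.confidence, reverse=True)
    return {"recommendation": ranked[0], "also_shown": ranked[1:]}

felix = Call("Felix", "Q4: $2.1M revenue at 60/40 allocation", 0.78)
sam   = Call("Sam",   "70/30 shift breaches the CAC cap", 0.87)
result = resolve([felix, sam])  # Sam's call wins; Felix's is shown alongside
```
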

Ready when you are

See an agent in action, or run a 14-day pilot.

Each agent has a deep dive. The pilot connects all seven to your real accounts.

Looking for the deep dive? See how Felix forecasts, how Parker debiases, or how Maya remembers.