About ROASt

The intelligence behind ROASt Labs.

A deterministic pipeline that scores campaigns, allocates budget, and applies platform-safe constraints — with full transparency at every step. No LLM involved in budget decisions.

Getting Started

ROASt Labs works with your existing Google, Microsoft, and Meta Ads accounts. Here’s how to go from sign-up to your first optimisation.

1. Connect your ad accounts

Head to Accounts and click Connect for each platform. You’ll authenticate via OAuth — ROASt never sees your password. Once connected, an initial sync pulls 90 days of campaign data: budgets, spend, conversions, impression share, and bid targets.

Supports Google Ads, Microsoft Ads, and Meta Ads. You can connect one or all three.

2. Review your portfolios

Your campaigns are grouped into portfolios — each with its own budget, target ROAS/CPA, and optimisation settings. Set a monthly budget for each portfolio and choose a preset to start with:

  • Conservative: small steps, tight caps
  • Balanced: the default, good for most accounts
  • Aggressive: larger swings, faster

3. Set up goals (optional)

By default, the engine optimises against total platform-reported revenue. If you want to focus on specific conversion actions — leads, form submits, or purchases from a particular source — create a custom Goal in the Goals tab and assign it to your portfolio. The engine then scores entirely against those filtered conversions.

4. Run your first optimisation

Click Optimise on any portfolio. The engine scores every campaign and produces a set of staged recommendations — budget changes and bid target adjustments. Nothing is pushed to your ad accounts yet.

Review the recommendations, check the reasoning breakdown for each campaign, then click Execute when you’re ready. ROASt confirms every change landed via readback verification.

5. Configure automation

Once you’re comfortable with the recommendations, head to Automation to schedule nightly runs. We recommend starting with dry-run mode — the engine will stage recommendations and log results every night, but won’t push anything. Switch to auto-execute when you’re ready.

A global kill switch is always available to halt all automated execution instantly.

6. Monitor & tune

Use the Dashboard for a performance overview, pacing health, and account diagnostics. The engine’s calibration loop adjusts parameters automatically as data accumulates. For on-demand analysis, ask Flume — the built-in AI assistant — to generate reports, review performance, or diagnose issues.

Pipeline overview

ROASt Labs analyses your last 91 days of daily campaign data and runs a staged pipeline to produce budget and target recommendations. The pipeline is fully deterministic — every decision is the product of signal maths, not a model that guesses.

Signals → Score → Constrain → Allocate → Marginal reallocation → Reserve handshake → Platform adapter → Staged recommendations

AI is used only for Flume (insights, narratives, reports) — never for budget or target decisions.

1. Performance Dashboard

The Dashboard is your command centre — a single view of portfolio health, system status, and actionable diagnostics across all connected accounts.

System status

At-a-glance cards for Accounts, Portfolios, Audit, Platform Mix, and Events — each with a status badge and quick link to the relevant tab.

Pacing health

Pacing gauges show month-to-date vs target spend for each portfolio. Colour-coded: green (on track), amber (slightly off), red (significantly ahead or behind).

Account Health score

A composite score across six dimensions: tracking health, signal strength, pacing health, budget coverage, utilisation, and automation readiness. Each dimension is scored 0–100 with a weighted contribution to the overall score.
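As a sketch of how a weighted composite like this can be computed (the per-dimension weights below are invented for the example; ROASt's actual weights aren't published here):

```python
# Hypothetical dimension weights -- illustrative only, summing to 1.0.
WEIGHTS = {
    "tracking_health": 0.25,
    "signal_strength": 0.20,
    "pacing_health": 0.15,
    "budget_coverage": 0.15,
    "utilisation": 0.15,
    "automation_readiness": 0.10,
}

def account_health(scores):
    """Combine six 0-100 dimension scores into one weighted composite."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)
```

With every dimension at 80 the composite is also 80; a weak dimension drags the total down in proportion to its weight.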

Revenue reconciliation

Compares platform-reported revenue with GA4 analytics data (when connected) to surface discrepancies between attribution models.

2. The Optimiser

The Optimiser scores each campaign in a portfolio and allocates the total budget in proportion to those scores. It also recommends target bid adjustments (tROAS / tCPA) where impression share data suggests a ranking constraint rather than a budget constraint.

Click Optimise on any portfolio to run the pipeline manually. Results are staged — every proposed change is visible before anything is pushed to your ad accounts. You can re-run and adjust as many times as you like before executing. Use Run All to stage all portfolios at once.

Score decomposition: Every campaign’s score is broken down into its constituent signals — you can see exactly how much ROAS, confidence, trend, IS data, and each other signal contributed. No black box.

Portfolio controls

Min / Max allocation % — floor and ceiling on each campaign’s share of portfolio budget
Max budget step per run — limits single-run budget change as a % of current daily budget
tROAS floor & ceiling — hard guardrails on target ROAS recommendations
Max target change % — velocity cap on tROAS / tCPA adjustments per run
Recency half-life — how quickly older data loses weight. Shorter = more reactive
Attribution lag days — recent days treated as incomplete for revenue signals

Presets (Conservative, Balanced, Aggressive, Turbo) provide a starting point. A Personalised option analyses your portfolio’s data and suggests optimal settings.
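A minimal sketch of how score-proportional allocation can interact with the min/max and step controls above. The threshold values are illustrative defaults, and the real engine also renormalises after clamping (omitted here for brevity):

```python
def allocate(budget, scores, current,
             min_pct=0.05, max_pct=0.60, max_step=0.25):
    """Score-proportional split with a floor/ceiling on each campaign's
    share and a per-run step cap. Values are illustrative defaults."""
    total = sum(scores.values())
    out = {}
    for c, s in scores.items():
        share = min(max(s / total, min_pct), max_pct)  # min/max allocation %
        target = budget * share
        lo = current[c] * (1 - max_step)               # max budget step per run
        hi = current[c] * (1 + max_step)
        out[c] = min(max(target, lo), hi)
    return out
```

For a 100-unit portfolio with scores 3:1, the stronger campaign's 75% raw share is clamped to the 60% ceiling, while the weaker one's 25-unit target is lifted to its step floor.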

Signals

Every signal is computed from your last 91 days of daily campaign data. Recent days are weighted more heavily via exponential decay (configurable half-life, default 14 days).

Recency-weighted ROAS

Revenue divided by spend, with more weight on recent performance. The primary scoring signal — campaigns with stronger recent ROAS receive a proportionally larger budget share.
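A sketch of the exponential recency weighting described above, using the documented default 14-day half-life (the series layout, newest first, is an assumption of the example):

```python
def recency_weighted_roas(revenue, spend, half_life=14.0):
    """Recency-weighted ROAS sketch. `revenue` and `spend` are daily
    series, newest first; a day's weight halves every `half_life` days."""
    weights = [0.5 ** (age / half_life) for age in range(len(revenue))]
    weighted_rev = sum(w * r for w, r in zip(weights, revenue))
    weighted_spend = sum(w * s for w, s in zip(weights, spend))
    return weighted_rev / weighted_spend if weighted_spend else 0.0
```

Uniform performance yields the plain ROAS; a stronger recent day pulls the weighted figure above the simple average, which is exactly the tilt the scorer wants.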

IS Lost to Budget

Impression share lost because the daily budget runs out. Campaigns with high IS Lost to Budget have proven demand going unserved — they receive a growth tilt boost proportional to their efficiency.

IS Lost to Rank

Impression share lost because bids are too low. Rather than adding budget, the engine routes these campaigns to target nudges — adjusting tROAS or tCPA to improve auction rank.

Confidence

A composite of data coverage (40%), conversion volume (35%), and ROAS stability (25%). Low-confidence campaigns blend toward the portfolio average so they are neither over-allocated nor starved.
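The composite uses the weights stated above; the linear blend toward the portfolio average is an illustrative choice, not the documented formula:

```python
def confidence(coverage, volume, stability):
    """Composite confidence: data coverage 40%, conversion volume 35%,
    ROAS stability 25%. Inputs are assumed normalised to 0-1."""
    return 0.40 * coverage + 0.35 * volume + 0.25 * stability

def blended_score(raw_score, portfolio_avg, conf):
    """Shrink toward the portfolio average as confidence falls
    (linear blend chosen for illustration)."""
    return conf * raw_score + (1.0 - conf) * portfolio_avg
```

At zero confidence a campaign scores exactly the portfolio average, so it is neither over-allocated nor starved; at full confidence its raw score stands.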

Learning protection

New campaigns with limited data are guaranteed a minimum allocation — at least 50% of the portfolio average — so they can gather enough signal to score fairly.

Trend momentum

Compares recent 7-day ROAS against the prior 14-day ROAS, with winsorisation to cap outliers. Dampened by volatility and conversion consistency to avoid chasing spikes.
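A simplified sketch of the winsorised ratio; the cap value is invented for the example, and the volatility/consistency dampening is omitted:

```python
def trend_momentum(recent_7d_roas, prior_14d_roas, cap=2.0):
    """Ratio of recent to prior ROAS, winsorised into [1/cap, cap] so a
    single spike cannot dominate. The cap is an illustrative value."""
    if prior_14d_roas <= 0:
        return 1.0  # no usable baseline: treat momentum as neutral
    ratio = recent_7d_roas / prior_14d_roas
    return min(max(ratio, 1.0 / cap), cap)
```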

Diminishing returns

Detects when spend has grown but ROAS has fallen — a saturation signal. Dampens allocation by up to 50%. Disabled during promotional events to avoid misidentifying promo effects.

Attribution lag discounting

The most recent days have incomplete conversion data. Revenue from recent days is scaled by a completeness curve so the engine doesn’t over-index on partial data. Default lag: 7 days, configurable.
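As one possible shape for such a curve (the linear ramp below is an assumption; the docs don't specify the actual curve):

```python
def completeness_weight(age_days, lag_days=7):
    """Illustrative linear completeness ramp: today (age 0) carries the
    least weight, and days older than `lag_days` count in full."""
    return min(1.0, (age_days + 1) / (lag_days + 1))
```

Today's partial revenue contributes only a fraction of its face value; anything past the lag window is treated as final.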

Anomaly detection

A pre-pass flags tracking outages (spend with zero revenue for 3+ days), ROAS spikes (>3× campaign median), and data ceiling breaches. Flagged rows are downweighted by 80% before scoring.

Cumulative change tracking

The engine tracks how much each campaign’s budget has changed over a rolling window to prevent excessive flickering. An asymptotic throttle reduces change velocity as cumulative changes approach the platform maximum.

Platform rolling-window maximums:

  • Google: 60% over 7 days
  • Meta: 45% over 10 days
  • Microsoft: 60% over 7 days

If the portfolio budget itself changes by more than 5%, the change history resets to give the engine fresh room to move.
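A sketch of how an asymptotic throttle can work with the window maximums above; the linear-headroom form is an illustrative stand-in for the actual throttle curve:

```python
PLATFORM_WINDOWS = {  # (max cumulative change, rolling window in days)
    "google": (0.60, 7),
    "meta": (0.45, 10),
    "microsoft": (0.60, 7),
}

def throttled_step(proposed_step, cumulative_change, platform):
    """Scale a proposed budget step by the headroom remaining within the
    platform's rolling window (linear form chosen for illustration)."""
    max_change, _window_days = PLATFORM_WINDOWS[platform]
    headroom = max(0.0, 1.0 - cumulative_change / max_change)
    return proposed_step * headroom
```

A campaign that has already moved halfway to the Google maximum gets half its proposed step; one at the maximum gets nothing until the window rolls forward.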

Readback verification: After every execution, ROASt reads back the values from the platform API to confirm that the changes actually landed. Any discrepancy — rejected changes, API errors, or rounding differences — is logged and surfaced in the Optimiser Logs tab.

3. The Pacer

The Pacer tracks your portfolio’s monthly budget and distributes what’s remaining across the rest of the month, recalculating every time you run the Optimiser. It ensures spend is shaped intentionally — not just front-loaded by default.

Flat

Equal daily allocation for every remaining day.

Front-loaded

More spend in the first half of the month, ramping down toward month-end.

End-loaded

Reserves more budget for the second half of the month.

Modelled

Weights future days using your portfolio’s own historical day-of-week and week-of-month patterns, combined with impression share data.

You can pin individual days with a custom multiplier — click any future bar in the Pacing chart — to override the mode weight for that day. Pins persist across mode changes.
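The modes above can be sketched as weight schedules over the remaining days; the front/end ramps below are illustrative linear weights, not the Pacer's actual shapes:

```python
def daily_targets(remaining_budget, days_left, mode="flat", pins=None):
    """Split the remaining monthly budget across remaining days.
    `pins` maps a future day index to a custom multiplier, mirroring
    the pinned-day feature."""
    if mode == "flat":
        weights = [1.0] * days_left
    elif mode == "front":
        weights = [float(days_left - i) for i in range(days_left)]
    else:  # "end"
        weights = [float(i + 1) for i in range(days_left)]
    for i, mult in (pins or {}).items():
        weights[i] *= mult
    total = sum(weights)
    return [remaining_budget * w / total for w in weights]
```

Whatever the mode or pins, the daily targets always sum to exactly the remaining budget.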

Flex Reserve: A configurable portion of the daily budget (0–25%, default 10%) is held back before base allocation. At the end of each optimiser run, campaigns bid for reserve funds based on signal strength. The reserve deploys via a sigmoid liquidation curve: strict early in the month, progressively relaxing toward month-end to prevent underspend.
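A sigmoid liquidation curve of the kind described can be sketched as follows; the midpoint and steepness are invented parameters, not ROASt's actual values:

```python
import math

def reserve_release_fraction(day, days_in_month=30,
                             midpoint=0.7, steepness=10.0):
    """Fraction of the flex reserve eligible to deploy on a given day:
    near zero early in the month, close to one near month-end."""
    progress = day / days_in_month
    return 1.0 / (1.0 + math.exp(-steepness * (progress - midpoint)))
```

Early in the month the gate is nearly shut; by the final days almost the whole reserve is eligible to deploy, which is what prevents underspend.
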

4. Platform Adapters

After the core pipeline scores and allocates budget, platform-specific adapters apply constraints appropriate to each platform’s learning mechanics and API behaviour. The pipeline is platform-agnostic — adapters are the final safety layer.

Google Ads

  • Learning phase lock — campaigns in Google’s learning phase are capped at a 5% budget step
  • Spend capacity ceiling — if a campaign is only spending a fraction of its budget, recommendations are capped at 1.15× recent average spend
  • Compound stepping — large increases are broken into sequential steps, one per run
  • Confidence-based velocity — high-confidence campaigns get the full step; low-confidence ones are dampened

Meta Ads

  • ASC dwell lock — Advantage+ Shopping campaigns have a 48-hour minimum between budget changes
  • Learning stage awareness — campaigns in learning or learning_limited have reduced budget steps
  • Audience saturation guard — frequency data detects saturation; increases are capped when triggered
  • Creative quality dampening — campaigns with low quality or engagement rankings have increases dampened

Microsoft Ads

  • Ghost budget prevention — campaigns spending less than 50% of budget over 14 days are frozen at current budget
  • Conservative velocity — Microsoft’s smaller auction volume means more conservative dampening than Google
  • Compound stepping — same as Google: large increases broken into sequential steps
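Two of the Google constraints above can be sketched as a simple clamp; the 5% learning-phase cap and 1.15× spend ceiling come from the list, while the 0.8 underspend threshold is an invented example value:

```python
def google_budget_cap(current_budget, recommended, recent_avg_spend,
                      in_learning_phase, underspend_threshold=0.8):
    """Sketch of the learning phase lock and spend capacity ceiling.
    The underspend threshold is illustrative, not documented."""
    capped = recommended
    if in_learning_phase:
        capped = min(capped, current_budget * 1.05)   # learning phase lock
    if recent_avg_spend < underspend_threshold * current_budget:
        capped = min(capped, recent_avg_spend * 1.15)  # spend capacity ceiling
    return capped
```

A fully spending campaign outside the learning phase passes through untouched; an underspending one is capped just above its demonstrated spend.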

5. Budget Modeller

The Modeller is a what-if planning tool for portfolio-level budget allocation. It answers: given a total budget across these portfolios, what’s the optimal split to maximise revenue?

Select an objective, choose the portfolios to include, set a total budget, and click Model. The Modeller computes an optimal allocation using a marginal ROAS greedy algorithm — each budget increment goes to whichever portfolio has the highest marginal return at its current level.
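The greedy loop can be sketched in a few lines; the marginal-return curves here are toy functions assumed to be decreasing, standing in for the curves the Modeller fits from your data:

```python
def greedy_allocate(total_budget, marginal_curves, increment=100.0):
    """Marginal-ROAS greedy sketch. `marginal_curves` maps portfolio ->
    a function giving marginal revenue per unit spend at a spend level.
    Each increment goes to the portfolio with the highest marginal
    return at its current allocation."""
    alloc = {p: 0.0 for p in marginal_curves}
    for _ in range(int(total_budget / increment)):
        best = max(marginal_curves,
                   key=lambda p: marginal_curves[p](alloc[p]))
        alloc[best] += increment
    return alloc
```

With two toy curves, the increments flow to the steeper portfolio until its marginal return falls to the other's level, then start splitting.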

Efficiency curve

Current vs optimised split across budget levels. Three modes: Revenue, ROAS, Marginal ROAS. The marginal view shows where each unit of spend hits diminishing returns.

Revenue uplift waterfall

Per-portfolio revenue change from current to optimised allocation — so you can see exactly where the gain comes from.

Spend ceiling

Each portfolio’s addressable ceiling is estimated from impression share data. The Modeller won’t push beyond what the portfolio can absorb.

Stepped ramp

Apply changes immediately or use Step Over N Days to ramp budgets gradually — the nightly scheduler applies one step per night.

6. Goals & Conversion Mapping

By default, every campaign’s optimisation is based on its total platform-reported revenue and orders. Custom Goals let you define the specific conversion actions that matter for each portfolio — and the engine scores entirely against those.

Conversion actions are synced from your ad platforms during account connection. You create named goals that group one or more actions across platforms — for example, a “Qualified Leads” goal might combine Google’s “Form Submit” and Meta’s “Lead” action.

When a portfolio has a custom goal assigned, all engine signals — ROAS, trend, confidence, diminishing returns — are computed from the filtered conversion data.

GA4 integration: You can also link a Google Analytics 4 property as an alternative conversion source. GA4-sourced goals use analytics data instead of platform-reported conversions — useful for unifying measurement across platforms or for accounts where platform tracking is unreliable.

7. Flume AI

Flume is the built-in AI assistant for insights, analysis, and report generation. It has read access to your campaign data, portfolio settings, and engine outputs — but it never makes budget decisions. All budget and target recommendations are produced by the deterministic pipeline.

Ask Flume questions in natural language, or choose from pre-built prompts to generate structured reports:

Performance Review

Spend, ROAS, trends, top movers, budget concentration, and recommendations across all portfolios.

Pacing Review

Month-to-date pacing health, projected month-end spend, and portfolios materially off plan.

Recommendations

Highest-impact optimisation opportunities with supporting evidence and prioritised actions.

Account Health Check

Multi-dimensional health score breakdown with per-dimension diagnostics and fix suggestions.

Reports are saved to the Documents sidebar and can be exported as HTML or PDF. Flume usage is metered per plan tier.

8. Events & Promotions

The Events system lets you schedule promotional periods and settings changes on a calendar, with automatic activation and deactivation managed by the scheduler. Drag an event template onto the calendar, set the dates, and the engine handles the rest.

Promo mode

Activates the engine’s full promotional lifecycle: snapshots current settings, applies turbo settings, enables intraday runs (2-hour interval), and on deactivation restores the snapshot with a 4-day post-promo cooldown.

Settings override mode

Merges a preset and/or specific setting overrides onto portfolio settings for the duration. Useful for seasonal adjustments, testing strategies, or temporarily tightening guardrails.

Promo-aware scoring: During events, the engine blends historical signals with live promo-period signals. On day 1 it uses historical performance weighted toward recent winners. As promo data accumulates over 7 days, it progressively takes over — preventing the engine from flying blind or over-correcting.
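A sketch of that progressive handover; the linear ramp is an illustrative stand-in for the actual blending curve:

```python
def blended_signal(historical, live, promo_day, ramp_days=7):
    """Promo-aware blend sketch: weight shifts from historical to live
    promo data over `ramp_days` (linear ramp chosen for illustration)."""
    w = min(1.0, promo_day / ramp_days)
    return (1.0 - w) * historical + w * live
```

At the start of the event the engine leans on history; by the end of the ramp, live promo performance fully drives the signal.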

While an event is active, affected portfolios’ settings are locked — manual changes are blocked and the UI shows a banner with the event name and dates.

9. Automation & Scheduling

The scheduler automates optimiser runs without manual intervention. Each portfolio can be configured independently — nightly runs, intraday runs, auto-execute on or off.

Nightly runs

Each portfolio can be configured with a nightly run time (default 23:50). The scheduler stages recommendations and optionally auto-executes them. Dry run mode stages but never executes.

Intraday runs

For portfolios that need faster reaction times, intraday runs can be enabled with a configurable interval (minimum 1 hour). An optional time window restricts runs to business hours.

Transition plans

When you apply a budget change via the Modeller’s Step Over N Days option, the scheduler creates a transition plan. Each night, one step is applied toward the target budget.

Safety features
  • Kill switch — immediately halts all automated execution across all portfolios
  • Dry run mode — stages recommendations and logs results, but never pushes to ad platforms
  • Concurrency guard — only one sync or optimiser job per user per platform at a time
  • Execution notifications — email digests after each batch run with success/failure breakdown and pacing alerts
  • Automatic backups — database backup before each nightly batch (local 7 days, remote 30 days)

10. Audit & Diagnostics

The Audit tab (Mission Control) surfaces conversion tracking issues and account health problems before they affect optimisation quality.

Conversion tracking audit

A 14-check diagnostic that detects double-counting, broken tags, zero-value conversions, and stale actions. Each issue includes severity, affected campaigns, and a one-click fix where possible.

Account health breakdown

Six dimensions scored independently: tracking health, signal strength, pacing health, budget coverage, utilisation, and automation readiness. The composite score gives you a single number to track over time.

Recommended actions

Prioritised suggestions based on audit findings — fix broken conversion tracking, address budget gaps, resolve pacing issues, or connect missing accounts.

Throttle diagnostics

Per-campaign view of cumulative change and remaining headroom within the platform’s rolling window. See exactly why a campaign’s budget didn’t move as much as expected.

11. Calibration

The engine maintains a per-portfolio feedback loop that compares its predictions against actual outcomes and adjusts parameters over time. Shrinkage blends learned values toward safe defaults until evidence accumulates (5–20 observations), and staleness decay with a 14-day half-life reverts dormant portfolios so they restart cautiously. Calibration is fully automatic and runs as part of the nightly scheduler.

1. Snapshot

When recommendations are staged, the engine records predicted ROAS, spend utilisation, IS Lost to Budget, and confidence.

2. Observe

Waits for attribution to complete. Dynamic window: small changes use base lag; large increases add days for smart bidding to ramp.

3. Tune

Compares predicted vs actual and updates calibration biases via EWMA. Adjusts spend ceiling, ROAS floor, confidence dampening, and saturation steepness.
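The EWMA update and shrinkage can be sketched as follows; the smoothing factor and the 20-observation full-trust point are illustrative choices within the 5–20 range the docs cite:

```python
def ewma_update(bias, predicted, actual, alpha=0.2):
    """EWMA bias update sketch: nudge the stored bias toward the latest
    prediction error. `alpha` is an illustrative smoothing factor."""
    return (1.0 - alpha) * bias + alpha * (actual - predicted)

def shrunk(learned, default, n_obs, n_full=20):
    """Shrinkage sketch: blend a learned parameter toward its safe
    default until evidence accumulates (full trust at `n_full`)."""
    w = min(1.0, n_obs / n_full)
    return w * learned + (1.0 - w) * default
```

A portfolio with no observations falls back entirely to the safe default; accurate predictions leave the bias untouched, while persistent error accumulates into a correction.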

12. Security & Trust

ROASt Labs manages real ad budgets — so trust and transparency are foundational, not optional.

Staged before executed

Every recommendation is staged and visible before anything is pushed to your ad accounts. You review, adjust, and approve. Nothing happens behind your back.

Readback verification

After every execution, ROASt reads back the values from the platform API to confirm that the changes actually landed. Discrepancies are logged and surfaced.

Token encryption

All OAuth tokens are encrypted at rest using AES-256-GCM. Tokens are decrypted only in memory for the duration of an API call.

Automatic backups

Database is backed up before every nightly batch. Local backups retained for 7 days, remote backups for 30 days. Full export/import is available for data portability.

No LLM in budget decisions

AI (Flume) is used for insights, narratives, and report generation only. Every budget and target recommendation is the product of deterministic signal maths — reproducible, auditable, and explainable.

Ready to see it in action?

Connect your ad accounts and ROASt will show you exactly what it would recommend — before you execute anything.

Open ROASt Labs