Guide · January 20, 2026 · 5 min read

How to Set Up AI-Powered Metric Monitoring in 5 Minutes

The Problem With Traditional Alerting

You've set up metric alerts before. Revenue drops below $X — fire an alert. Churn rate exceeds Y% — send a Slack notification. Pipeline coverage falls under 3x — email the VP of Sales.

These alerts answer exactly one question: "Did a number cross a threshold?" And then they leave you with the harder question: "Why?"

What typically follows is a scramble. Someone opens a dashboard. Someone else writes a SQL query. A third person pulls data into a spreadsheet. The alert created urgency, but it created zero understanding. By the time the team has diagnosed the cause, hours or days have passed.

AI-powered monitoring changes this equation. Instead of alerting you to a problem and leaving you to investigate, Fig detects the anomaly and automatically runs Root Cause Analysis to tell you why it happened. The output isn't "Revenue dropped" — it's "Revenue dropped 12%, driven primarily by a decline in Enterprise deal count, which was caused by a drop in SQL-to-Opportunity conversion that began on February 3rd."

Here's how to set it up.

Step 1: Connect Your Data Sources

Fig connects directly to your data warehouse — Snowflake, BigQuery, Redshift, Databricks, or PostgreSQL. The connection takes about 60 seconds.

What you'll need:

  • A read-only database user (Fig never writes to your warehouse)
  • Connection credentials (host, port, database, schema)
  • Network access (whitelist Fig's IP range or use a secure tunnel)
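To make the checklist concrete, here is a minimal sketch of what those connection settings look like gathered together. The field names and the validation helper are illustrative assumptions, not Fig's actual configuration schema:

```python
# Hypothetical shape of the warehouse connection settings described above.
# Field names are illustrative assumptions, not Fig's actual API.
REQUIRED_FIELDS = ("host", "port", "database", "schema", "user")

def missing_fields(cfg: dict) -> list:
    """Return the required fields that are absent (empty list means ready)."""
    return [f for f in REQUIRED_FIELDS if not cfg.get(f)]

cfg = {
    "host": "warehouse.example.com",  # illustrative hostname
    "port": 5432,
    "database": "analytics",
    "schema": "public",
    "user": "fig_readonly",           # read-only user: Fig never writes
}

assert missing_fields(cfg) == []
```

The key operational detail is the read-only user: granting Fig only SELECT privileges means the connection can never modify warehouse data.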

Once connected, Fig scans your schema and presents a catalog of available tables and columns. You don't need to build any data models or transformations at this stage — Fig works with the tables you already have.

A note on data readiness: You don't need perfect data to start. Fig works with the metrics you already track. If you have a revenue table, a pipeline table, and a marketing metrics table, that's enough to set up meaningful monitoring. You can add more data sources and refine your knowledge graph over time.

Step 2: Define Your Metrics and Relationships

This is where Fig differs from a traditional alerting tool. You're not just defining a metric — you're defining how it connects to other metrics in your business.

Defining a metric is straightforward:

  • Select the table and column
  • Define the aggregation (SUM, COUNT, AVG, etc.)
  • Define the time grain (daily, weekly, monthly)
  • Add any filters (e.g., status = 'closed_won')
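The four choices above amount to a small, declarative definition. A sketch of that definition as a data structure (the class and field names are assumptions for illustration, not Fig's actual schema):

```python
from dataclasses import dataclass, field

# Illustrative sketch of a metric definition. Field names are assumptions,
# not Fig's actual configuration format.
@dataclass
class MetricDefinition:
    name: str
    table: str
    column: str
    aggregation: str           # SUM, COUNT, AVG, ...
    time_grain: str            # daily, weekly, monthly
    filters: dict = field(default_factory=dict)

    def to_sql(self) -> str:
        """Render the query this definition implies."""
        where = " AND ".join(f"{k} = '{v}'" for k, v in self.filters.items())
        clause = f" WHERE {where}" if where else ""
        return f"SELECT {self.aggregation}({self.column}) FROM {self.table}{clause}"

revenue = MetricDefinition(
    name="revenue",
    table="deals",
    column="amount",
    aggregation="SUM",
    time_grain="monthly",
    filters={"status": "closed_won"},
)
```

The point of the sketch: a metric is just metadata about a query, which is why defining one takes seconds rather than requiring a modeling project.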

Mapping relationships is where the intelligence comes in. For each metric, you specify what drives it:

  • Revenue is driven by Deal Count and Average Deal Size
  • Deal Count is driven by Opportunity Count and Win Rate
  • Opportunity Count is driven by SQL Count and SQL-to-Opportunity Conversion Rate

Fig helps here by suggesting relationships based on your data structure and common business patterns. You validate and adjust these suggestions. The result is a knowledge graph — a causal map of your business metrics.
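The driver relationships above can be sketched as a simple adjacency map, with a traversal that collects everything upstream of a metric. The metric names mirror the examples in the text; the structure itself is an illustration, not Fig's internal representation:

```python
# The example driver relationships, sketched as an adjacency map.
# Structure is illustrative, not Fig's internal representation.
DRIVERS = {
    "revenue": ["deal_count", "avg_deal_size"],
    "deal_count": ["opportunity_count", "win_rate"],
    "opportunity_count": ["sql_count", "sql_to_opp_conversion"],
}

def upstream_drivers(metric: str) -> list:
    """Walk the graph breadth-first and collect every upstream driver."""
    found, queue = [], list(DRIVERS.get(metric, []))
    while queue:
        m = queue.pop(0)
        if m not in found:
            found.append(m)
            queue.extend(DRIVERS.get(m, []))
    return found
```

Calling `upstream_drivers("revenue")` walks all the way down to `sql_count` and `sql_to_opp_conversion`, which is exactly the path a Root Cause Analysis needs available.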

For most companies, the initial knowledge graph covers 15-30 metrics and can be built in 15-20 minutes. It doesn't need to be comprehensive on day one. Start with the metrics you care about most (usually revenue and pipeline) and their immediate drivers.

Step 3: Create a Monitor

A monitor is a combination of three things: what to watch, how to evaluate it, and when to check.

What to watch: Select one or more metrics from your catalog. You can monitor individual metrics or create composite monitors that watch a group of related metrics.

How to evaluate: Fig offers three detection modes:

  • Anomaly Detection: Flag when a metric value falls outside its expected range, accounting for seasonality, trends, and day-of-week patterns. This is the most common mode — it catches deviations without requiring you to set a specific threshold.
  • Threshold: Flag when a metric crosses a specific value (e.g., churn rate exceeds 5%). Use this for metrics with hard business rules.
  • Trend: Flag when a metric shows a sustained directional trend over a defined period (e.g., win rate declining for 4 consecutive weeks). Use this for slow-moving problems.
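The three modes can be illustrated with deliberately simplified checks. Fig's real detectors account for seasonality, trend, and day-of-week patterns; these sketches only show the shape of each decision:

```python
# Deliberately simplified sketches of the three detection modes.
# The real detectors model seasonality and trend; these do not.

def threshold_breach(value: float, limit: float) -> bool:
    """Threshold mode: flag when a metric crosses a hard limit."""
    return value > limit

def sustained_decline(values: list, periods: int = 4) -> bool:
    """Trend mode: flag a decline in each of the last N periods."""
    recent = values[-(periods + 1):]
    return len(recent) == periods + 1 and all(
        b < a for a, b in zip(recent, recent[1:])
    )

def anomaly(value: float, history: list, tolerance: float = 0.2) -> bool:
    """Anomaly mode (crude): flag when a value deviates more than
    `tolerance` from the mean of comparable historical periods."""
    expected = sum(history) / len(history)
    return abs(value - expected) / expected > tolerance
```

In practice the anomaly mode's "expected range" comes from a model rather than a flat mean, which is what lets it catch deviations without a hand-set threshold.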

When to check: Set the evaluation frequency — hourly, daily, or weekly. For most business metrics, daily evaluation strikes the right balance between responsiveness and noise.

Step 4: Configure Auto-RCA

This is the step that transforms monitoring from "alerting" to "diagnosis." For each monitor, you can enable automatic Root Cause Analysis.

When auto-RCA is enabled, every time the monitor detects an anomaly, Fig automatically:

  1. Runs Root Cause Analysis through the knowledge graph, tracing the causal chain from the affected metric upstream through its drivers
  2. Runs Concentration Analysis to determine whether the anomaly is concentrated in specific entities (customers, regions, products) or distributed broadly
  3. Generates a narrative summary explaining the causal chain in plain language
  4. Delivers the results to your configured notification channel — Slack, email, or the Fig dashboard

The entire process — detection, diagnosis, and notification — happens without any human intervention. Your team receives not an alert, but an explanation.

Step 5: Set Up Notifications

Fig delivers monitor results through multiple channels:

  • Slack: Post to a channel with a summary and link to the full analysis. This is the most common setup — the team sees the diagnosis in their normal workflow.
  • Email: Send a formatted digest with the anomaly details and RCA results.
  • Dashboard: All monitor results are available in the Fig dashboard with full drill-down capability.

You can configure different notification rules for different severity levels. A minor anomaly might post to a Slack channel. A major deviation might page the relevant team lead.
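Severity-based routing can be sketched as a simple tiered rule. The channel names and thresholds here are assumptions for illustration, not Fig's configuration format:

```python
# Illustrative severity-based routing. Channel names and thresholds are
# assumptions, not Fig's actual configuration format.
def route_notification(deviation: float) -> str:
    """Pick a channel based on how far the metric deviated (fractional)."""
    severity = abs(deviation)
    if severity >= 0.25:
        return "page:team-lead"         # major deviation: page someone
    if severity >= 0.10:
        return "slack:#metrics-alerts"  # notable: post to Slack
    return "dashboard"                  # minor: dashboard only
```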

What the Output Looks Like

When a monitor fires, the notification includes:

The detection: "Monthly revenue is 12% below expected value for the current period."

The diagnosis: "Root Cause Analysis identified the primary driver as a decline in Enterprise deal count (down 18%), specifically driven by a drop in SQL-to-Opportunity conversion rate from 34% to 21%, beginning approximately February 3rd."

The concentration: "68% of the revenue impact is concentrated in the North America region. Within NA, 3 accounts that were expected to expand did not, contributing $420K of the $580K shortfall."

The context: Links to the full analysis, historical trend of the affected metrics, and the knowledge graph path that was traversed.
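The concentration figure in the example output can be reproduced with a small sketch: given each entity's contribution to the shortfall, compute the share held by the top entities. The regional numbers below are constructed to match the $580K example and are purely illustrative:

```python
# Sketch of the concentration check: what fraction of the total impact
# sits in the top entities? Numbers match the example and are illustrative.
def concentration(impacts: dict, top_n: int = 1) -> float:
    """Fraction of total impact contributed by the top N entities."""
    total = sum(impacts.values())
    top = sorted(impacts.values(), reverse=True)[:top_n]
    return sum(top) / total

# Regional breakdown of the $580K shortfall from the example output.
regional_impact = {"north_america": 394_400, "emea": 120_000, "apac": 65_600}
```

Here `concentration(regional_impact)` returns 0.68: the 68% North America concentration reported in the notification.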

This is the difference between monitoring and intelligence. Traditional monitoring tells you the house is on fire. AI-powered monitoring tells you which room, what caused it, and whether it's spreading.

Why Monitoring Without Diagnosis Is Just Noise

Every data team has experienced alert fatigue. You set up 50 alerts, and within a month, you're ignoring most of them. The problem isn't that the alerts are wrong — it's that they create work without creating understanding.

Each alert requires an investigation. Each investigation takes hours. When you have more alerts than investigation capacity, you start triaging — which really means ignoring the ones that don't seem urgent. And the ones that don't seem urgent today become the crisis next month.

Auto-RCA breaks this cycle. When every alert comes with a diagnosis, the response time drops from hours to minutes. You're not investigating — you're reviewing an analysis and deciding what to do about it. The cognitive cost of each alert drops by an order of magnitude, which means you can actually pay attention to all of them.

The setup takes 5 minutes. The time it saves compounds every day. Start with your most important metric, enable auto-RCA, and see what you learn in the first week. Most teams are surprised by what they find — not because the problems are new, but because they're finally visible.

Ready to see Fig in action?

Start with free credits. Connect your data warehouse. See your first causal analysis in minutes.

Start With Free Credits →