Analysis · February 15, 2026 · 7 min read

Why Your BI Dashboard Can't Tell You Why Revenue Dropped

The Monday Morning Panic

It's 9:03 AM on a Monday. Your VP of Sales opens the revenue dashboard, and there it is: a 12% drop in monthly recurring revenue compared to the prior period. The Slack messages start flying. "What happened?" "Is this a data issue?" "Can someone pull the numbers by segment?"

What follows is a scene that plays out at thousands of companies every week. An analyst opens a SQL editor. They start slicing the data — by region, by product line, by customer segment, by sales rep. They build a pivot table. They check if it's a churn issue or a new business issue. They look at deal stages. They compare cohorts. Two days later, they have an answer — or more accurately, they have a hypothesis that seems to fit.

This isn't a failure of your team. It's a failure of the tool.

What Dashboards Actually Do

BI dashboards are exceptional at one thing: showing you what happened. They aggregate, filter, and visualize historical data. They answer questions like "What was revenue last month?" and "How many deals closed in Q4?" with precision and clarity.

But there's a category of question that dashboards structurally cannot answer: Why did this happen?

The reason is architectural. A dashboard is a window into a single table or a pre-joined set of tables. It shows you metrics in isolation or in simple, predefined groupings. When revenue drops, the dashboard can show you that it dropped. It can even show you that it dropped more in the Enterprise segment than in Mid-Market. But it cannot tell you whether the Enterprise drop was caused by longer sales cycles, lower win rates, smaller deal sizes, or a spike in churn — at least not without someone manually investigating each of those hypotheses.

That investigation is what analysts spend their time on. And the gap between "seeing the problem" and "understanding the problem" is where organizations lose days.

The Manual Investigation Spiral

Let's walk through the 12% revenue drop scenario in detail. Here's what a typical investigation looks like:

  • Hour 1-2: The analyst confirms the drop is real (not a data pipeline issue). They slice revenue by segment, region, and product line. They find that the drop is concentrated in North America Enterprise accounts.
  • Hour 3-5: They dig into North America Enterprise. Is it new business or expansion revenue? It turns out both are down, but new business is down more. They start looking at pipeline metrics — deal count, average deal size, win rate, sales cycle length.
  • Hour 6-8: Win rate looks stable. Deal count is down. They shift to marketing metrics — MQL volume, conversion rates from MQL to SQL to Opportunity. MQL volume is actually up, but conversion from SQL to Opportunity dropped.
  • Day 2: They realize the SQL-to-Opportunity conversion drop coincided with a change in the qualification criteria two weeks ago. The new criteria filtered out a segment of leads that historically converted at a decent rate. Mystery solved — probably.

That's 12+ hours of skilled analyst time to trace a causal chain that spans four different data domains: revenue, sales pipeline, marketing funnel, and operational process changes.
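The first step in that timeline, slicing the drop by dimension to see where it concentrates, amounts to a group-and-diff over period revenue. A minimal sketch, using made-up figures rather than the scenario's actual numbers:

```python
# Attribute the total period-over-period revenue change to values of one
# dimension (segment, region, etc.). Rows and figures are illustrative.

def contribution_by(rows, key):
    """Sum the revenue change per distinct value of `key`."""
    out = {}
    for r in rows:
        out[r[key]] = out.get(r[key], 0) + (r["this_period"] - r["prior_period"])
    return out

rows = [
    {"region": "NA",   "segment": "Enterprise", "prior_period": 900, "this_period": 760},
    {"region": "NA",   "segment": "Mid-Market", "prior_period": 400, "this_period": 395},
    {"region": "EMEA", "segment": "Enterprise", "prior_period": 500, "this_period": 505},
]
print(contribution_by(rows, "segment"))
# {'Enterprise': -135, 'Mid-Market': -5}
```

The analyst repeats this for every dimension and every funnel metric, which is exactly why the manual version takes hours rather than seconds.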

What Causal Analysis Actually Looks Like

Now consider the same scenario with a system that understands how metrics relate to each other causally.

A knowledge graph maps the relationships between your business metrics. Revenue is driven by deal count and average deal size. Deal count is driven by opportunity count and win rate. Opportunity count is driven by SQL count and SQL-to-Opportunity conversion rate. SQL count is driven by MQL count and MQL-to-SQL conversion rate. Each of these relationships has a measured direction, a quantified strength, and a known lag time.
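Structurally, that graph is just a set of directed edges, each annotated with direction, strength, and lag. A minimal sketch, with hypothetical metric names and illustrative weights (not Fig's actual schema):

```python
# Each edge reads: (driver, outcome) -> direction, strength, lag in days.
# Values are illustrative placeholders, not measured relationships.

METRIC_GRAPH = {
    ("deal_count", "revenue"):                {"direction": +1, "strength": 0.90, "lag_days": 0},
    ("avg_deal_size", "revenue"):             {"direction": +1, "strength": 0.70, "lag_days": 0},
    ("opportunity_count", "deal_count"):      {"direction": +1, "strength": 0.80, "lag_days": 30},
    ("win_rate", "deal_count"):               {"direction": +1, "strength": 0.80, "lag_days": 30},
    ("sql_count", "opportunity_count"):       {"direction": +1, "strength": 0.85, "lag_days": 14},
    ("sql_to_opp_rate", "opportunity_count"): {"direction": +1, "strength": 0.85, "lag_days": 14},
    ("mql_count", "sql_count"):               {"direction": +1, "strength": 0.90, "lag_days": 7},
    ("mql_to_sql_rate", "sql_count"):         {"direction": +1, "strength": 0.90, "lag_days": 7},
}

def drivers_of(metric):
    """Return the upstream metrics that directly drive `metric`."""
    return [driver for (driver, outcome) in METRIC_GRAPH if outcome == metric]
```

The point of the annotations is that a traversal can ask not just "what drives revenue?" but "which driver moved, by how much, and when should that movement show up downstream?"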

When revenue drops 12%, Fig's Root Cause Analysis algorithm doesn't start from scratch. It traverses the knowledge graph, checking each upstream metric for anomalies and quantifying how much each one contributed to the downstream change. In roughly 30 seconds, it produces a causal chain:

  1. Revenue dropped 12% — concentrated in North America Enterprise
  2. New business revenue drove 78% of the decline — expansion revenue was roughly flat
  3. Deal count dropped 18% — win rate and deal size were stable
  4. Opportunity count dropped 22% — driven by fewer SQLs converting to Opportunities
  5. SQL-to-Opportunity conversion dropped from 34% to 21% — starting February 3rd

The analyst didn't have to hypothesize. They didn't have to open five different dashboards. The causal chain was traced automatically because the relationships between metrics were already mapped.
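The traversal itself can be sketched in a few lines. This toy version hard-codes anomaly scores as period-over-period changes and follows the largest anomalous driver upstream; a real system would compute those scores from time series and weigh contribution, not just magnitude:

```python
# Toy root-cause traversal. Graph shape and all numbers are illustrative.

GRAPH = {  # outcome -> direct drivers
    "revenue": ["deal_count", "avg_deal_size"],
    "deal_count": ["opportunity_count", "win_rate"],
    "opportunity_count": ["sql_count", "sql_to_opp_rate"],
    "sql_count": ["mql_count", "mql_to_sql_rate"],
}

# Period-over-period fractional change per metric (negative = dropped).
CHANGES = {
    "revenue": -0.12, "deal_count": -0.18, "avg_deal_size": -0.01,
    "opportunity_count": -0.22, "win_rate": 0.00,
    "sql_count": 0.02, "sql_to_opp_rate": -0.38,
    "mql_count": 0.05, "mql_to_sql_rate": 0.01,
}

def trace_root_cause(metric, threshold=0.05):
    """Follow the most-changed anomalous driver upstream until none qualifies."""
    chain = [metric]
    while metric in GRAPH:
        driver = max(GRAPH[metric], key=lambda m: abs(CHANGES[m]))
        if abs(CHANGES[driver]) < threshold:
            break  # upstream metrics look normal; stop here
        chain.append(driver)
        metric = driver
    return chain

print(trace_root_cause("revenue"))
# ['revenue', 'deal_count', 'opportunity_count', 'sql_to_opp_rate']
```

Note how the stable metrics (win rate, MQL volume) are pruned automatically at each level, which is the work the analyst otherwise does by hand.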

Concentration Analysis: Is This Systemic or Isolated?

Here's a question that dashboards struggle with even more than "why": Is this problem concentrated or widespread?

When revenue drops 12%, it matters enormously whether that drop is spread across 200 customers or concentrated in 3 large accounts. The first is a systemic issue — something about your market, product, or go-to-market motion has shifted. The second is an account management problem.

Fig's Concentration Analysis applies a Pareto-style decomposition to any metric movement. For the revenue drop, it might reveal that 68% of the decline is attributable to just 2 accounts — both of which had contracts up for renewal and chose not to expand. That's not a market problem. That's a renewal execution problem in your largest accounts.

Or it might reveal the opposite: the drop is distributed across 47 accounts with no single account contributing more than 4%. That's a signal of something structural — a pricing issue, a competitive shift, or a product gap.

The point is that the shape of the decline matters as much as the size. Dashboards show you the size. Concentration analysis shows you the shape.
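The Pareto-style decomposition behind this is straightforward to sketch: rank the accounts that declined by how much they declined, then count how many it takes to explain a given share of the total drop. Account names and figures below are made up:

```python
# Minimal concentration check over per-account revenue deltas (illustrative).

def concentration(deltas, share=0.68):
    """Return the fewest declining accounts whose combined decline
    reaches `share` of the total decline."""
    drops = sorted(d for d in deltas.values() if d < 0)  # most negative first
    total = sum(drops)
    running, n = 0.0, 0
    for d in drops:
        running += d
        n += 1
        if running / total >= share:
            break
    return n

account_deltas = {"acme": -400_000, "globex": -280_000, "initech": -15_000,
                  "umbrella": -12_000, "hooli": -8_000, "stark": +30_000}
print(concentration(account_deltas))  # 2 -> concentrated, not systemic
```

A small count relative to the number of declining accounts signals an isolated (account-level) problem; a large one signals something structural.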

Why This Gap Exists

BI tools were designed in an era when the hard problem was getting data into a readable format. And they solved that problem extraordinarily well. Looker, Tableau, Power BI — these are excellent tools for data visualization and reporting.

But visualization is not analysis. Reporting is not diagnosis. The implicit assumption in every BI dashboard is that a human will look at the chart and figure out what it means. That works when the causal chain is short and obvious. It breaks down when the chain spans multiple business functions, involves non-obvious lag times, or requires comparing dozens of possible explanations.

The missing piece is not better charts. It's a model of how your business actually works — which metrics drive which other metrics, with what strength, and with what delay. That model is the knowledge graph, and it's what makes automated causal analysis possible.

What Changes When You Can Answer "Why" in 30 Seconds

The operational impact is significant. When root cause analysis takes 2 days, organizations tend to investigate only the biggest problems. Smaller anomalies — a 5% dip here, a trending decline there — go uninvestigated because the cost of investigation exceeds the perceived urgency.

When root cause analysis takes 30 seconds, the threshold for investigation drops to zero. Every anomaly gets a causal explanation. Patterns that would have taken months to notice through manual analysis become visible in real time.

This isn't about replacing analysts. It's about redirecting their time from finding problems to solving them. The analyst who spent 12 hours tracing the revenue drop could instead spend that time working with the sales team to evaluate the qualification criteria change and design a better approach.

The dashboard tells you the house is on fire. Causal analysis tells you which room, what started it, and how fast it's spreading. Both are necessary. But only one of them helps you decide what to do next.

Ready to see Fig in action?

Start with free credits. Connect your data warehouse. See your first causal analysis in minutes.
