Why We Built Scoop Analytics the Way We Did

When a 200-line markdown file erases $285 billion in market value overnight, you know something fundamental just shifted. Not just stock prices: the entire logic of how we build and sell software.

I've Been Thinking About That $285 Billion Crash, and It's Why We Built Scoop the Way We Did

I watched the SaaS apocalypse unfold last week like everyone else. Thomson Reuters down 16%. RELX cratering 14%. LegalZoom losing 20% in a single day. And I kept thinking: this isn't really about Anthropic's legal contract review plugin. This is about a reckoning that's been coming for years.

Here's what nobody's saying clearly enough: the crash revealed that the emperor has no clothes, and the emperor is the per-seat licensing model that built a trillion-dollar industry.

Why Does This Matter to Someone Building Analytics Software?

Because I've been wrestling with these exact questions for the past three years while building Scoop Analytics.

Not "how do we add AI features to our BI tool?" That's the bolt-on approach that just killed $285 billion. The real question is: if AI fundamentally changes how humans interact with data, what should analytics software even be?

Let me back up. When we started Scoop, every competitor was following the same playbook: build dashboards, charge per seat, require IT involvement for everything, make the data team the bottleneck. The entire BI industry ran on the assumption that humans are the interface to data.

But I kept seeing the same pattern in customer conversations. Operations leaders would say: "Our dashboard shows revenue dropped 15%. Great. Now what?" The analyst would spend 4 hours testing hypotheses manually. By the time they found the answer, the problem had gotten worse.

That's not an analytics problem. That's an investigation problem.

  
    


What's the Difference Between Monitoring and Investigation?

Monitoring tells you what happened. Investigation tells you why it happened and what to do about it.

Every BI tool on the market is fundamentally a monitoring system. They're brilliant at showing you metrics, trends, and thresholds. They're terrible at answering: "Why did our operational efficiency drop 23% last month?"

Because answering "why" requires testing multiple hypotheses simultaneously. It requires:

  • Temporal analysis (when did the change start?)
  • Segment comparison (which teams are affected?)
  • Correlation discovery (what else changed at the same time?)
  • Pattern recognition across dozens of variables
  • Synthesizing findings into a coherent narrative

No dashboard does that. Dashboards show you one query result. Then another. Then another. You're the one connecting the dots manually.
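
If you want to picture what "testing multiple hypotheses simultaneously" means, here's a minimal sketch in Python. To be clear: this is not Scoop's engine, and the date, revenue, and segment columns are hypothetical stand-ins. The shape is what matters: every hypothesis runs over the same data, and the output is one synthesized answer instead of a pile of charts.

```python
# A minimal sketch of multi-hypothesis investigation, not Scoop's
# actual engine. The "date", "revenue", and "segment" columns are
# hypothetical stand-ins for whatever your data actually contains.
import pandas as pd

def when_did_it_start(df: pd.DataFrame) -> str:
    """Temporal analysis: find the first sharp week-over-week drop."""
    weekly = df.resample("W", on="date")["revenue"].sum()
    drops = weekly.pct_change() < -0.10
    return f"First >10% drop: {drops.idxmax().date()}" if drops.any() else "No sharp drop"

def which_segments(df: pd.DataFrame) -> str:
    """Segment comparison: which segment declined the most?"""
    cutoff = df["date"].median()
    before = df[df["date"] < cutoff].groupby("segment")["revenue"].sum()
    after = df[df["date"] >= cutoff].groupby("segment")["revenue"].sum()
    return f"Hardest-hit segment: {((after - before) / before).idxmin()}"

def what_else_moved(df: pd.DataFrame) -> str:
    """Correlation discovery: which other metric moves with revenue?"""
    corr = df.select_dtypes("number").corr()["revenue"].drop("revenue")
    top = corr.abs().idxmax()
    return f"Strongest correlate: {top} ({corr[top]:+.2f})"

def investigate(df: pd.DataFrame) -> str:
    """Run every hypothesis over the same data, synthesize one answer."""
    return "\n".join(f"- {hypothesis(df)}" for hypothesis in
                     (when_did_it_start, which_segments, what_else_moved))
```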

This is exactly what Nate B Jones highlighted in his analysis of the SaaS crash—the difference between UI-first and agentic-first architecture. Monitoring tools are UI-first. They assume a human navigates through screens clicking on things. Investigation requires agentic-first thinking.

Why Did We Build Scoop as an Investigation Platform Instead of a Dashboard Tool?

Because I saw the writing on the wall about the per-seat model years ago.

Think about the economics: if one AI agent can do the work that previously required 10 analysts with 10 separate BI licenses, you don't lose the value of the data—you lose nine seats of revenue. That's the $285 billion problem.

But here's what everyone's missing: the solution isn't to build AI-powered dashboards. It's to rethink what analytics software should be in an AI-native world.

We made three architectural decisions at Scoop that I'm watching play out in real time with this market crash:

Decision 1: Investigation Over Monitoring

When someone asks Scoop "Why did revenue drop last month?", we don't show them a chart. We run an investigation.

The system automatically:

  • Tests 5-10 hypotheses simultaneously
  • Explores temporal patterns, segment variations, correlations
  • Identifies the specific mobile checkout bug affecting iPhone users in the Northeast
  • Calculates exact financial impact: $430K lost
  • Provides specific remediation steps

Time to answer: 45 seconds.

Compare that to the traditional approach: pull data (30 minutes), create pivot tables (45 minutes), build charts (30 minutes), test hypotheses one by one (2-3 hours), still not sure what's wrong.

This is what agentic-first means in practice. The AI isn't decorating an existing workflow—it's fundamentally rethinking how investigation works.

Decision 2: Multi-Hypothesis Testing as Core Architecture

Here's where I think about the "articulation problem" Nate mentioned in his analysis—the gap between what a human asks for and what they actually need.

When a VP of Operations says "I need to understand our efficiency drop," that sentence contains maybe 1% of the information needed to find the real answer. The other 99% is buried in:

  • Which processes matter most right now
  • What "efficiency" means in their specific context
  • Which exceptions are normal vs. anomalous
  • How this quarter differs from last
  • What they're really trying to decide

A traditional BI tool makes them articulate all of that upfront. Choose your dimensions. Build your query. Create your visualization. Iterate manually.

Scoop runs the investigation they would have run if they knew exactly what to look for.

We test hypotheses they didn't know to ask. We find patterns across 20+ dimensions simultaneously—combinations no human could manually explore. We synthesize findings into specific recommendations with confidence levels.
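
To put a number on "no human could manually explore": with 20 dimensions, just counting the small combinations shows why. A quick bit of arithmetic:

```python
# Why manual exploration breaks down: the number of dimension
# combinations grows combinatorially with the dimension count.
from math import comb

dimensions = 20
print(comb(dimensions, 2))  # 190 two-way combinations to check
print(comb(dimensions, 3))  # 1,140 three-way combinations
```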

This is possible because we built a three-layer AI architecture from day one:

Layer 1: Automatic Data Preparation (invisible to users)

  • Cleans data, handles missing values
  • Engineers features, bins variables
  • Normalizes for comparison

Layer 2: Sophisticated ML Execution (the real work)

  • J48 decision trees that can grow to 800+ nodes
  • EM clustering finding natural segments
  • Pattern recognition across dozens of variables simultaneously

Layer 3: Business Translation (what users see)

  • Converts technical output to plain English
  • "High-risk customers have 3 key traits..." instead of dumping an 800-node tree
  • Specific recommendations with quantified impact
  • Financial projections and confidence scores

Most BI tools bolt AI onto existing dashboards. We built investigation intelligence from the ground up.
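
For the technically curious, here's a toy illustration of that three-layer shape. It is emphatically not our production code: scikit-learn's DecisionTreeClassifier and GaussianMixture stand in for the J48 trees (J48 is the open-source implementation of C4.5) and EM clustering, and every column name is made up.

```python
# Toy illustration of the three-layer shape. sklearn stands in for
# the actual J48/EM implementations; all column names are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.mixture import GaussianMixture

def layer1_prepare(df: pd.DataFrame) -> pd.DataFrame:
    """Layer 1: automatic prep (impute, engineer, bin, normalize)."""
    df = df.copy()
    num = df.select_dtypes("number").columns
    df[num] = df[num].fillna(df[num].median())            # handle missing values
    df["tenure_bucket"] = pd.qcut(df["tenure_days"], 4,   # bin a raw variable
                                  labels=False, duplicates="drop")
    df[num] = (df[num] - df[num].mean()) / df[num].std()  # normalize for comparison
    return df

def layer2_model(df: pd.DataFrame, target: str):
    """Layer 2: ML execution (a decision tree plus EM clustering)."""
    X = df.select_dtypes("number").drop(columns=[target], errors="ignore")
    tree = DecisionTreeClassifier(max_depth=8).fit(X, df[target])
    segments = GaussianMixture(n_components=3).fit_predict(X)  # EM under the hood
    return tree, list(X.columns), segments

def layer3_translate(tree, feature_names) -> str:
    """Layer 3: turn model internals into one plain-English sentence."""
    ranked = sorted(zip(feature_names, tree.feature_importances_),
                    key=lambda pair: -pair[1])[:3]
    traits = ", ".join(name for name, _ in ranked)
    return f"High-risk customers have 3 key traits: {traits}."
```

Layer 3 is the only part users ever see: one sentence, not an 800-node tree.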

Decision 3: Pricing for Value, Not Seats

This is where the SaaS apocalypse gets personal for every software company.

We don't charge per seat. We never did. Because I knew the per-seat model was fundamentally incompatible with AI-native workflows.

Think about what KPMG did to Grant Thornton—demanded a 14% fee reduction based purely on the existence of AI capabilities. Not because they deployed AI. Not because they automated the audit. Just because everyone now knows these tasks can be done more cheaply.

That negotiating tactic works in every knowledge work fee structure. Legal fees. Consulting fees. Implementation fees. And BI license fees.

When an AI investigation agent can answer in 45 seconds what previously took an analyst 4 hours, why would anyone pay for the analyst's seat license?

The data still has value. The investigation capability has value. But the assumption that value scales linearly with human headcount? That's broken forever.

What Did We Get Right? What Are We Still Figuring Out?

We got the architecture right. Scoop is agentic-first, not bolt-on AI. When the market shifts from UI-first to agentic-first (which is happening right now), we don't need to rebuild. We're already there.

We got the pricing model right. We charge for value delivered, not seats occupied. When AI makes investigations 100x faster, our customers don't suddenly owe us 1/100th the revenue.

We got the investigation paradigm right. The future of analytics isn't better dashboards—it's automated investigation that finds patterns humans miss.

What we're still figuring out: The same thing every AI-native company is wrestling with—how fast can we move before we outrun our customers' ability to adapt?

Here's something I think about constantly: We built Scoop to solve problems most companies don't know they have yet.

Operations leaders are still stuck in monitoring mode. They've invested millions in Tableau dashboards. They've trained teams on Power BI. They've built their entire analytical workflow around the assumption that humans navigate UIs to find insights.

And then a 200-line markdown file shows them there's a completely different way to work.

What Does the Articulation Problem Mean for Analytics?

This is the part that keeps me up at night—and it's the same challenge every software company faces in the AI transition.

Can we build systems that understand what people actually need, not just what they ask for?

A business user says: "Show me customers at risk of churning."

What they actually need:

  • Predictive scores on which customers will churn
  • Explanation of why each customer is at risk
  • Specific intervention recommendations
  • ROI calculation for prevention efforts
  • Prioritization based on customer value
  • Timeline for when to act

Traditional BI makes them manually specify all of that. Build separate queries. Create multiple dashboards. Connect the dots themselves.

Scoop runs the complete investigation automatically. We've encoded years of data science expertise into the investigation process (I'll sketch the prioritization math right after this list). When someone asks about churn risk, we know to:

  • Run ML models predicting probability
  • Explain predictions with multi-factor analysis
  • Calculate financial impact of intervention
  • Recommend specific actions with confidence levels
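
To ground the financial-impact step, the prioritization logic is roughly: expected value of intervening = churn probability × account value × expected save rate, minus the cost of the intervention. Here's that math as a sketch with hypothetical numbers, not actual Scoop output:

```python
# Back-of-the-envelope intervention math with hypothetical numbers,
# not Scoop's actual model output.
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    churn_prob: float      # from the Layer 2 model
    annual_value: float    # contract value at risk

SAVE_RATE = 0.40           # assumed: interventions save 40% of at-risk accounts
INTERVENTION_COST = 500.0  # assumed: cost of a retention campaign per account

def expected_roi(c: Customer) -> float:
    """Expected value of intervening: prob * value * save rate - cost."""
    return c.churn_prob * c.annual_value * SAVE_RATE - INTERVENTION_COST

customers = [
    Customer("Acme", churn_prob=0.72, annual_value=48_000),
    Customer("Globex", churn_prob=0.35, annual_value=12_000),
    Customer("Initech", churn_prob=0.90, annual_value=2_400),
]

# Prioritize by expected ROI, not raw churn probability: Initech is the
# likeliest to churn, but Acme is where intervention pays off most.
for c in sorted(customers, key=expected_roi, reverse=True):
    print(f"{c.name:8s} expected ROI of intervening: ${expected_roi(c):,.0f}")
```

Notice the ordering: the account most likely to churn is not the one most worth saving. That's why prioritization by customer value matters, not just probability.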

But here's the honest truth: we're not perfect at this. Nobody is yet.

The gap between "what the human asked" and "what would actually help them" is the hardest problem in software. It's harder than building the AI. It's harder than fixing the pricing model. It's harder than rethinking architecture.

Because it requires understanding context that isn't in the question. Industry norms. Company culture. Unstated priorities. The real decision they're trying to make.

What Should Analytics Leaders Do Right Now?

I've been thinking about this a lot since the crash. Not just as someone building analytics software, but as someone who talks to operations leaders every week who are trying to figure out what to do.

Here's what I'm seeing work:

Stop bolting AI onto existing workflows. If you're using ChatGPT to proofread reports you could have written anyway, you're decorating a structural problem. If you're adding Copilot to your BI tool but your analytical process looks exactly like it did two years ago, you're not adapting—you're procrastinating.

Rethink the workflow from scratch. Ask: "If I could investigate any business question in 45 seconds instead of 4 hours, how would I work differently?" Not "how would I work faster"—how would I work differently?

Identify investigation triggers. Make a list of questions that monitoring can't answer:

  • Why did [metric] change?
  • What's different about high performers vs. low performers?
  • What factors predict [outcome]?
  • Which customers will churn/expand/convert?
  • Where are bottlenecks in [process]?

Calculate your investigation ROI. How many ad-hoc analyses does your team run per month? How long does each take? What's the cost when questions go unanswered because there's no time?

For most companies, the math is stark: 40 analyses per month × 4 hours each × $85/hour = $13,600/month in analyst time. Plus the opportunity cost of the 65% of questions that never get answered.
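
If you want to run that math on your own numbers, it's a few lines of Python. The figures below are this article's placeholders; swap in yours:

```python
# The same back-of-envelope math with the article's placeholder
# numbers. Swap in your own values.
analyses_per_month = 40
hours_per_analysis = 4
analyst_hourly_rate = 85
answered_fraction = 0.35   # 65% of questions never get answered

analyst_cost = analyses_per_month * hours_per_analysis * analyst_hourly_rate
print(f"Analyst time: ${analyst_cost:,}/month")          # $13,600/month

# If those 40 analyses are only 35% of the questions asked, roughly
# 74 more go unanswered every month.
unanswered = analyses_per_month / answered_fraction - analyses_per_month
print(f"Unanswered questions: ~{unanswered:.0f}/month")
```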

Start with high-impact use cases. Don't boil the ocean. Pick three investigation scenarios with immediate business impact. For operations leaders: process bottleneck investigation, quality issue root cause analysis, resource optimization.

What Keeps Me Up at Night?

The same thing that should worry every SaaS company: the window for transformation keeps compressing.

Nate mentioned in his analysis that asking an AI model right now to help you figure out how to use AI will give you advice that's 6 months out of date. Even the AI can't keep up with itself.

That's the pace we're operating at. Opus 4.6 drops. Twenty minutes later, Codex drops. The same week, OpenAI launches Frontier. The capabilities we're building on today will be obsolete foundations tomorrow.

And here's the brutal part: customers need time to adapt, but the market doesn't care.

We can build investigation-grade analytics that finds root causes in 45 seconds. We can deploy ML models that predict outcomes with 89% accuracy. We can explain complex decision trees in plain English.

But if operations leaders are still thinking in terms of dashboards and manual analysis, if they're still asking "can you build me a report" instead of "investigate why this happened," we're speaking different languages.

The SaaS companies that crashed this week? They have the same problem at a different scale. Their data is valuable. Their accountability edge is real. But if they can't help customers transition from UI-first to agentic-first thinking, they'll bolt AI onto dying platforms and wonder why the market didn't reward them.

What Are We Building Next?

I think about the KPMG precedent a lot. Not because they automated their audit—they didn't. They used the existence of AI as leverage to renegotiate fees.

That playbook spreads like wildfire through every knowledge work industry. And it means two things for analytics:

First: Investigation capabilities become table stakes, not differentiators. Every analytics platform will claim to "use AI." The question becomes: which ones actually rethought their architecture vs. which ones bolted AI onto dashboards?

Second: The articulation problem becomes the real battleground. Which platform best understands what you actually need when you ask a vague business question?

We're working on both. Making investigation capabilities more accessible. Building systems that better understand context and intent. Helping customers transition from monitoring to investigation thinking.

But I'll be honest: I don't know if we're moving fast enough. Nobody does. The pace is incomprehensible.

The Question Nobody Wants to Ask

Here's what I think about late at night when I'm working on Scoop:

What if we built exactly the right product at exactly the right time, and the market still isn't ready?

Investigation-grade analytics. Multi-hypothesis testing. Three-layer AI architecture. Pricing that scales with value, not seats. We made all the right architectural decisions.

But adoption requires customers to fundamentally rethink how they work with data. To stop asking for dashboards and start asking for investigations. To trust AI to test hypotheses they didn't know to consider.

That's a bigger shift than any software architecture change.

The SaaS companies losing hundreds of billions in market value right now? They have the opposite problem. They have massive adoption of the wrong architecture. Millions of users trained on UI-first workflows. Revenue models built on per-seat assumptions. Product roadmaps optimized for incremental dashboard improvements.

They need to rebuild while the building is on fire and the stock price is cratering.

We have the right architecture with a much smaller customer base. We need to help the market catch up to what we built.

I don't know which problem is harder to solve.

  
    


What I Know for Sure

The 200-line markdown file didn't decide who wins and loses. It just compressed a 5-year transition into 48 hours.

The per-seat SaaS model is broken. The data and accountability underneath aren't. And every software company—including Scoop—needs to figure out how to deliver value in an agentic-first world without destroying revenue in the transition.

Investigation beats monitoring. Agentic-first beats bolt-on AI. Multi-hypothesis testing beats single queries. These aren't predictions—they're observations about what already works.

The question isn't whether the market shifts from monitoring to investigation. It's already happening. The question is: which companies adapt fast enough, and which customers are ready to change how they work?

I built Scoop betting that investigation-grade analytics is the future. That operations leaders want to understand why things happen, not just see what happened. That 45 seconds beats 4 hours. That finding patterns across 20+ dimensions simultaneously beats manual pivot table hell.

I still believe that. But watching $285 billion evaporate because of a markdown file makes me think differently about how fast "the future" arrives—and whether we're all ready for it.

The clock isn't stopping. The transition isn't slowing down. And the difference between companies that survive and companies that crash isn't just product architecture—it's whether they can help customers fundamentally rethink how they work.

That's what keeps me building. And what keeps me up at night.

What are you rethinking about how you work with data?


Brad Peters

At Scoop, we make it simple for ops teams to turn data into insights. With tools to connect, blend, and present data effortlessly, we cut out the noise so you can focus on decisions—not the tech behind them.
