I've Been Thinking About That $285 Billion Crash: This Is Why We Built Scoop the Way We Did
Nate B Jones just published an analysis of the SaaS apocalypse that's been rattling around in my head all week. Not because of the $285 billion in market value that evaporated—though that number is staggering—but because of what he identified as the real problem: the difference between bolting AI onto existing workflows versus fundamentally rethinking how work gets done.
That's been the central tension at Scoop for three years. And watching this market crash crystallized something I've been struggling to articulate.
The Question Nobody's Asking
Here's what struck me about Nate's analysis: he identified what he calls the "articulation problem"—the gap between what someone asks for and what they actually need.
When a VP of Sales says "I need a better way to track pipeline," that sentence contains maybe 1% of the information required to build something useful. The other 99% is buried in how the team actually works, what their unspoken conventions are, which exceptions matter, how this quarter differs from last.
That's exactly the problem we face in analytics.
A business user asks: "Why did revenue drop 15% last month?"
What they actually need isn't a chart showing the drop. They need:
- Which specific segments drove the decline
- When exactly the inflection point occurred
- What changed in the business at that time
- Whether it's temporary or structural
- What specific actions would reverse it
- Confidence levels on each finding
Traditional BI tools make users manually specify every dimension of that investigation. Build the queries. Create the dashboards. Connect the dots themselves. They're UI-first systems that assume a human will navigate through screens.
We built Scoop to be investigation-first: to automatically test the hypotheses a business user would run if they knew exactly what to look for.
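To make that concrete: the answer to a "why" question is a structured object, not a chart. Here's a minimal sketch of the shape I mean, in Python. The names are illustrative, not Scoop's actual schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Finding:
    """One tested hypothesis and what the data showed."""
    hypothesis: str      # e.g. "the decline is concentrated in one segment"
    evidence: str        # plain-English summary of the supporting data
    confidence: float    # 0.0-1.0: how strongly the data supports it

@dataclass
class Investigation:
    """The real answer to 'why did revenue drop 15%?' --
    every dimension from the list above, not just a chart."""
    question: str
    driving_segments: list[str]     # which segments drove the decline
    inflection_point: date          # when the drop actually started
    concurrent_changes: list[str]   # what changed in the business then
    structural: bool                # temporary blip or structural shift?
    recommended_actions: list[str]  # what would reverse it
    findings: list[Finding] = field(default_factory=list)
```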
Why "Investigation" Instead of "Dashboard"?
This is where Nate's analysis about agentic-first vs bolt-on AI really resonated with how we think about analytics.
Every BI platform is fundamentally a monitoring system. Brilliant at showing what happened. Terrible at explaining why.
When someone asks "Why did operational efficiency drop 23%?", monitoring tools show you a declining chart. Maybe a breakdown by region. Then you manually test hypotheses one by one:
- Pull data from multiple systems: 30 minutes
- Create pivot tables: 45 minutes
- Build comparison charts: 30 minutes
- Test theories manually: 2-3 hours
- Still not certain about root cause
We built Scoop to run that entire investigation automatically in 45 seconds.
The system tests 5-10 hypotheses simultaneously:
- Temporal patterns (when did it start?)
- Segment variations (which teams were affected?)
- Correlation analysis (what else changed?)
- Process bottleneck identification
- Resource constraint detection
Then it synthesizes findings into: "Efficiency dropped due to staffing changes in the Northeast region affecting the quality control process, causing a 34% increase in rework cycles. Impact: $430K in lost productivity. Recommend immediate training intervention."
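In code terms, the loop looks something like this. It's a toy sketch: the hypothesis tests are stubbed to canned results, where in the real system each one is a statistical test against the customer's data, and none of these names are our actual internals:

```python
from concurrent.futures import ThreadPoolExecutor

# Each test takes the raw data and returns (claim, confidence, detail).
# Stubbed here -- in practice each is a real statistical test.
def temporal_pattern(data):  return ("started week of Mar 3", 0.91, "sharp trend break")
def segment_variation(data): return ("Northeast region only", 0.88, "other regions flat")
def correlation_scan(data):  return ("rework cycles up 34%", 0.84, "co-moves with efficiency")

HYPOTHESES = [temporal_pattern, segment_variation, correlation_scan]

def investigate(data, min_confidence=0.8):
    # Run every hypothesis test at once instead of one by one.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda test: test(data), HYPOTHESES))
    # Keep only findings the data actually supports, strongest first.
    supported = sorted(
        (r for r in results if r[1] >= min_confidence),
        key=lambda r: r[1], reverse=True,
    )
    # Synthesize into one plain-English answer.
    return "; ".join(f"{claim} (confidence {conf:.0%})" for claim, conf, _ in supported)

print(investigate(data={}))
```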
That's not a dashboard with AI features bolted on. That's rethinking what analytics software should do.
The Three-Layer Architecture We Bet On
Here's where we made a fundamental architectural decision that I keep thinking about in light of Nate's "bolting on vs rebuilding" framework.
Most BI tools are adding AI features to existing dashboards. Chat interfaces that generate SQL queries. Smart suggestions for visualizations. That's bolt-on thinking.
We built a three-layer AI architecture from scratch:
Layer 1: Automatic Data Preparation (invisible)
- Handles data quality, missing values, feature engineering
- Business users never think about this
Layer 2: Sophisticated ML Execution (the real work)
- J48 decision trees analyzing patterns across dozens of variables
- EM clustering finding segments humans miss
- Multi-hypothesis testing running coordinated investigations
Layer 3: Business Translation (what users see)
- Converts complex ML output to plain English
- "High-risk customers share 3 traits..." instead of dumping statistical trees
- Specific recommendations with confidence scores
This architecture assumes AI does the investigation work. Humans make decisions based on findings.
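If you compressed those three layers into a few lines, they would look roughly like this. I'm using scikit-learn's CART decision tree and GaussianMixture (which is fit by EM) as open-source stand-ins for J48 and EM clustering, and the data and feature names are invented:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier
from sklearn.mixture import GaussianMixture

# Layer 1: automatic data preparation (invisible to the user).
def prepare(X):
    return SimpleImputer(strategy="median").fit_transform(X)

# Layer 2: ML execution. CART and GaussianMixture stand in
# for J48 and EM clustering respectively.
def execute(X, y):
    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    segments = GaussianMixture(n_components=2).fit_predict(X)
    return tree, segments

# Layer 3: business translation -- plain English, never a raw tree dump.
def translate(tree, feature_names):
    top = feature_names[tree.tree_.feature[0]]  # feature at the root split
    return f"High-risk customers are split first on '{top}' (top driver)."

X = np.array([[1.0, 200], [np.nan, 180], [3.0, 90], [4.0, 60], [2.0, 150], [5.0, 40]])
y = np.array([0, 0, 1, 1, 0, 1])
tree, segments = execute(prepare(X), y)
print(translate(tree, ["support_tickets", "usage_minutes"]))
```

The point of the layering is that only translate()'s output ever reaches the user; everything above it stays invisible.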
What We're Still Figuring Out
Nate made a point about the articulation problem that hit home: even with sophisticated AI, there's still a gap between vague business questions and actionable insights.
We're wrestling with this constantly. How much context can we automatically infer? When should we ask clarifying questions? How do we learn what "better" means in each customer's specific context?
Right now, we're good at investigation when the question is clear: "Why did churn increase?" or "What predicts deal closure?"
We're still learning how to handle: "Help me understand what's happening with our business" or "Something feels off in our operations."
That requires understanding not just data patterns, but business context, organizational priorities, unstated concerns. The kind of implicit knowledge that experienced analysts bring.
Can AI learn that? We're building toward it, but we're not there yet.
The Pricing Model We're Rethinking
Here's something we're actively struggling with: how do you price investigation capabilities?
We started with per-seat licensing like everyone else. But as investigations get faster and more automated, that model feels increasingly misaligned with value delivered.
If an AI investigation answers in 45 seconds what previously took 4 hours of analyst time, should that cost more (because it's more sophisticated) or less (because it requires less human effort)?
We're experimenting with different models. Value-based pricing. Investigation credits. Hybrid approaches. Honestly, we don't have this figured out yet.
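Here's the back-of-envelope arithmetic behind that uncertainty, with every number invented for illustration:

```python
# Toy comparison of the models we're testing, all numbers made up.
ANALYST_HOURLY = 75        # assumed loaded cost of an analyst hour
HOURS_SAVED = 4            # manual investigation time replaced
VALUE_PER_INVESTIGATION = ANALYST_HOURLY * HOURS_SAVED   # $300 of labor displaced

def per_seat(seats, price=150):
    return seats * price               # flat fee, ignores usage entirely

def per_credit(investigations, price=5):
    return investigations * price      # scales with actual usage

def value_share(investigations, take_rate=0.10):
    return investigations * VALUE_PER_INVESTIGATION * take_rate

# A 20-seat team running 500 investigations a month:
for model in (per_seat(20), per_credit(500), value_share(500)):
    print(f"${model:,.0f}/month")
```

Same month of usage, three prices ranging from $2,500 to $15,000. That spread is the problem.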
But watching the market wake up to the per-seat problem makes me think we're asking the right questions, even if we haven't found the right answers.
What Keeps Me Up at Night
The same thing Nate identified about the software industry broadly: customers need time to adapt, but the market doesn't care.
We can build investigation-grade analytics that finds root causes in 45 seconds. We can deploy ML models with 89% accuracy. We can explain 800-node decision trees in plain English.
But if operations leaders are still thinking in terms of "build me a dashboard" instead of "investigate why this happened," we're speaking different languages.
This isn't a product problem. It's an adoption problem.
The market shift from monitoring to investigation is inevitable. Multi-hypothesis testing beats manual analysis. Automated investigation beats dashboard clicking. The question is timing.
Are we too early? Are we helping customers transition fast enough? Are we solving problems they don't know they have yet?
I don't know. Nobody does.
What I'm Confident About
Investigation beats monitoring. This isn't a prediction—it's an observation about what already works.
When you can answer "why did this happen?" in 45 seconds instead of 4 hours, everything changes:
- You test more hypotheses
- You catch problems earlier
- You find patterns you'd never manually explore
- You make decisions based on evidence, not intuition
The companies that adapt to investigation-first thinking will move faster than competitors stuck in monitoring mode.
The question isn't whether that transition happens. It's whether we're building the right tools to enable it, and whether the market is ready to change how it works.
That's what I'm thinking about while building Scoop. And what Nate's analysis crystallized for me about the broader software transformation we're all navigating.
The difference between bolting on AI and rebuilding for an AI-native world isn't just architecture. It's whether you're solving tomorrow's problems or just decorating today's.