What Sol Rashidi’s Data Gets Right About AI Failure — And What It Means for Your Governance
“Without data, you’re just another person with an opinion.”
— W. Edwards Deming
Sol Rashidi has seen more AI deployments fail than most practitioners will ever see attempted. As Chief Data Officer at organizations including Sony Music and Estée Lauder, she analyzed what went wrong across hundreds of failed AI implementations — not from the outside, as a researcher or consultant, but from the inside, as the person responsible for making AI work.
Her conclusion is sharper than most governance frameworks are willing to state: the reason AI projects fail isn’t technical. It’s organizational. And the specific organizational failures she identified map precisely to the governance gaps that most mid-market organizations haven’t addressed.
Her research has been cited throughout the Rovers insights — in posts on → data quality, on → pilot-to-production gaps, and on → collaborative governance. But her work deserves its own post: taken together, her findings constitute a practical diagnostic for AI governance failure that mid-market organizations can use directly.
The Core Finding: Data Is Not the Technical Problem
Rashidi’s analysis consistently surfaces data quality as the primary blocker preventing AI from reaching production. Not model performance. Not algorithm selection. Not technical infrastructure.
Data.
The typical failure pattern: a pilot is built on carefully curated historical data. It works beautifully. The business case is compelling. Leadership approves the budget to scale. Then someone tries to run the model on production data — and discovers that the real-world data bears no resemblance to the curated pilot data. Missing fields. Inconsistent definitions across systems. Integration gaps nobody anticipated.
The $400K pilot becomes a $1.2M data remediation project. The six-week development timeline becomes a nine-month delay.
What makes this finding significant isn’t that data quality matters — that’s widely understood. What’s significant is Rashidi’s observation about when organizations discover the data problem. Almost universally, they discover it during deployment, not before. The two-to-four-week data assessment that should precede the pilot gets skipped under pressure to show progress quickly. The resulting discovery during deployment costs months and multiples of the original investment.
The → data-first sequence — assess data readiness for the specific use case before building anything — is the governance practice her research most directly validates.
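What that assessment looks like varies by use case, but the core move is mechanical: compare the data the pilot was built on against the data the model will actually run on. Here is a minimal sketch in Python, assuming both datasets are available as pandas DataFrames; the specific checks and the 5% null threshold are illustrative assumptions, not part of Rashidi’s research or any particular framework.

```python
import pandas as pd

def data_readiness_report(pilot: pd.DataFrame,
                          production: pd.DataFrame,
                          max_null_rate: float = 0.05) -> list[str]:
    """Compare curated pilot data against raw production data and
    return a list of blockers found. Run before any model is built."""
    issues = []

    # Missing fields: columns the pilot relies on that production lacks.
    missing = set(pilot.columns) - set(production.columns)
    if missing:
        issues.append(f"missing in production: {sorted(missing)}")

    shared = set(pilot.columns) & set(production.columns)
    for col in sorted(shared):
        # Inconsistent definitions across systems often surface as dtype drift.
        if pilot[col].dtype != production[col].dtype:
            issues.append(f"type mismatch on {col!r}: "
                          f"pilot={pilot[col].dtype}, prod={production[col].dtype}")
        # Fields hand-filled for the pilot are often sparse in reality.
        null_rate = production[col].isna().mean()
        if null_rate > max_null_rate:
            issues.append(f"{col!r} is {null_rate:.0%} null in production")

    return issues
```

If the report comes back non-empty, the remediation work gets scoped before the pilot is approved — not discovered at the $1.2M stage.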
The Second Finding: Human Capital, Not Technology, Determines Deployment Success
Rashidi’s analysis of AI deployments that succeeded technically but failed organizationally is particularly instructive for AI governance design.
Her consistent observation: organizations that deployed AI without addressing the human dynamics — team alignment, cross-functional ownership, the fear and resistance of the people whose work the AI changes — saw their technical successes produce operational failures. The AI worked. The people didn’t adopt it. The business value never materialized.
This finding directly validates the organizational readiness dimension of AI governance — → Layer 0 of the CAGF framework — that most governance frameworks treat as a soft precondition rather than a hard requirement. In Rashidi’s analysis, it’s not soft at all. It’s the determinant of whether technical governance investments produce business outcomes.
The implication for governance design: organizational readiness — leadership alignment, change management, cross-functional collaboration capability — needs to be assessed and addressed before AI governance frameworks are implemented, not as an afterthought when adoption fails to materialize.
The Third Finding: Use Case Selection Is Governance, Not Strategy
One of Rashidi’s most actionable observations concerns which AI initiatives organizations choose to pursue. Her research found that organizations that allowed enthusiasm, board pressure, or competitive anxiety to drive use case selection — choosing ambitious, high-visibility initiatives rather than realistic, high-probability ones — consistently hit deployment walls that better use case selection would have avoided.
Choosing an AI use case that matches your organization’s current data readiness, technical capability, and governance maturity is a governance decision, not just a strategic one. The → use case selection framework that CAGF includes addresses this directly: before any initiative enters development, the governance process evaluates whether the organization is actually ready to deploy this specific AI successfully.
Rashidi’s framing is memorable: “Stop obsessing over which tool to buy or which language model to choose. Spend your time picking the right use case — one you can realistically deploy given your organization’s current state.”
That’s governance in practice.
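To make the gate concrete: the sketch below scores a candidate use case on the readiness dimensions this post names and gates on the weakest one. The 1–5 scale, the floor of 3, and the example use cases are illustrative assumptions, not the actual CAGF selection framework linked above.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    data_readiness: int        # 1-5: is the data foundation adequate today?
    technical_capability: int  # 1-5: can the current team and stack deliver it?
    governance_maturity: int   # 1-5: are ownership and oversight in place?

def clears_the_gate(uc: UseCase, floor: int = 3) -> bool:
    """Gate on the weakest dimension, not the average: a high-visibility
    use case with a 1 on data readiness still fails, however strategic."""
    return min(uc.data_readiness,
               uc.technical_capability,
               uc.governance_maturity) >= floor

candidates = [
    UseCase("Board-requested generative chatbot", 2, 3, 2),
    UseCase("Invoice-matching automation", 4, 4, 3),
]
print([uc.name for uc in candidates if clears_the_gate(uc)])
# -> ['Invoice-matching automation']
```

Gating on the minimum rather than a weighted average reflects Rashidi’s deployment-wall observation: one unready dimension is enough to stall the whole initiative, no matter how strong the others are.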
The Fourth Finding: The Technology Will Work If You Do the Human Work First
Rashidi’s most quotable observation — “the technology will work if you do the human work first” — is also her most governance-relevant one.
The human work she describes includes:
- Team alignment: do the people who need to collaborate on this AI initiative share understanding of the goal and commitment to the outcome?
- Stakeholder buy-in: have the people affected by this AI been part of defining what success looks like?
- Realistic use case selection: is this an initiative the organization can actually execute, or is it aspirational relative to current capability?
- Data readiness: is the data foundation adequate for this specific use case, or is there cleanup work that needs to happen first?
All four of these are governance questions. All four are what well-designed → AI governance maturity assessments evaluate before organizations commit resources to AI development.
The organizations building governance that addresses these questions before they build AI are the ones Rashidi’s research would predict to succeed. The organizations that skip the human work in the pressure to show AI progress are the ones her data consistently shows failing.
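For organizations that want to enforce that sequencing rather than just document it, the four questions can become literal gates in an AI intake process. A minimal sketch, with wording condensed from the list above; the gate names and the all-or-nothing rule are assumptions for illustration:

```python
HUMAN_WORK_GATES = {
    "team_alignment": "Shared understanding of the goal and commitment to the outcome?",
    "stakeholder_buy_in": "Were the people affected part of defining success?",
    "realistic_use_case": "Executable at current capability, not aspirational?",
    "data_readiness": "Is the data foundation adequate for this specific use case?",
}

def clear_to_build(answers: dict[str, bool]) -> bool:
    """All four gates must pass before resources are committed;
    a single 'no' sends the initiative back for the human work first."""
    failed = [gate for gate in HUMAN_WORK_GATES if not answers.get(gate, False)]
    for gate in failed:
        print(f"blocked on {gate}: {HUMAN_WORK_GATES[gate]}")
    return not failed
```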
The Practical Implication
Rashidi’s research doesn’t prescribe a governance framework. But it validates the governance principles that effective mid-market AI governance is built around:
- Data foundation first, model second
- Organizational readiness before framework implementation
- Use case selection based on current state, not aspiration
- Human dynamics addressed proactively, not reactively
If your AI governance addresses these four dimensions — in that sequence — you’re building what Rashidi’s research shows to be the foundation of successful deployment. If your governance is primarily a policy and compliance framework that skips these dimensions, you have the documentation without the substance.
The Monday Morning Question
Before your next AI initiative enters development, can your organization answer Rashidi’s four questions (team alignment, stakeholder buy-in, realistic use case selection, data readiness), or is it planning to discover the answers during deployment?
“Data really powers everything that we do.”
— Jeff Weiner
