Last week, I sat in a boardroom with the CTO of a Fortune 500 company as he canceled a multi-million-dollar AI implementation mid-contract.
The vendor was stunned.
The purchasing team was relieved.
And all it took was a 15-minute conversation using my AI BS Detector framework.
While most of my frameworks stay locked in my paid consulting vault, I've decided to share this one publicly. Why? Because I'm tired of watching good companies burn money on AI theater.
The 7 Questions That Expose AI Snake Oil
Here's the exact script I use when evaluating AI vendors:
1. "What specific problem are you solving that couldn't be addressed with simple rules-based automation?"
This question immediately separates actual AI applications from rebranded if/then statements.
In this case, the vendor's "predictive intelligence platform" turned out to be basic regression analysis wrapped in buzzwords.
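To make the distinction concrete, here's a minimal sketch of what "basic regression analysis wrapped in buzzwords" often amounts to. The data and the churn-score framing are invented for illustration; the point is that the entire "platform" can be a closed-form least-squares fit you could write yourself in a dozen lines:

```python
# Illustrative only: the kind of "predictive intelligence" that is
# really just ordinary least-squares regression. Data is made up.

def fit_ols(xs, ys):
    """Fit y = a + b*x by ordinary least squares (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# "Training" on months-of-tenure vs. a churn-risk score (hypothetical numbers)
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = fit_ols(xs, ys)
predict = lambda x: a + b * x  # the whole "prediction engine"
```

If a vendor can't explain why their approach beats something like this, question 1 has done its job.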
2. "Show me examples of edge cases where your system fails."
Real AI systems have limitations. Vendors who claim 99% accuracy across all scenarios are selling fantasy.
This vendor couldn't provide examples because they'd never tested edge cases. Red flag.
3. "What percentage of your clients achieve the results shown in your case studies within the first 6 months?"
Here's where things got interesting. After some uncomfortable shifting, the answer was "about 15%."
The case studies represented outliers, not typical outcomes.
4. "What's required from our team to make this work?"
The glossy brochure said "minimal implementation effort." The reality? They needed:
- 3 dedicated data engineers for 6+ months
- Complete reorganization of existing data pipelines
- Manual review of 20% of all outputs
These hidden costs would have doubled the effective price.
5. "Walk me through how your system would handle [specific business scenario]."
I described a common but messy scenario their system would encounter. Their response: "We'd need to discuss customizations for that specific use case."
Translation: Their out-of-the-box solution couldn't handle real-world complexity.
6. "What would a simpler, non-AI approach to this problem look like?"
The most revealing question. After some prodding, they admitted a rules-based system could deliver 80% of the value at 20% of the cost.
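What does that 80%-of-the-value rules-based system look like in practice? A hedged sketch, with a ticket-triage example and thresholds invented purely for illustration: three transparent rules, auditable by anyone on the team, no model required.

```python
# Illustrative only: an explicit rules-based baseline to benchmark
# any "AI" vendor against. Rules and field names are invented.

def triage_ticket(ticket):
    """Route a support ticket with three transparent business rules."""
    if "outage" in ticket["subject"].lower():
        return "urgent"
    if ticket["customer_tier"] == "enterprise":
        return "high"
    return "normal"

print(triage_ticket({"subject": "Partial outage in EU", "customer_tier": "free"}))  # prints "urgent"
```

If a vendor's system can't clearly outperform a baseline like this on your own data, you've found your answer.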
7. "What specific metrics should we use to evaluate success, and how quickly will we see them?"
They couldn't define concrete success metrics beyond vague "efficiency improvements."
The $3.2 Million Decision
After this conversation, the CTO paused the implementation and requested a smaller pilot with clear success metrics.
The vendor's unwillingness to agree to performance-based payment terms told us everything we needed to know.
Final outcome: Contract canceled, alternative solution implemented at 15% of the original cost, with 90% of the functionality.
The Bigger Problem
This isn't an isolated case. I've used this framework with dozens of companies and consistently found:
- 60% of "AI solutions" are basic automation with inflated price tags
- 25% are genuinely using AI but can't demonstrate clear ROI
- 10% are solid technical solutions solving the wrong business problems
- Only 5% deliver genuine AI-driven value worth their cost
Why I'm Sharing This
Most consultants guard their frameworks like state secrets. I'm taking a different approach.
The free content I share demonstrates how I think and work. If you find value in it, you'll understand why clients pay for my deeper expertise.
And if this framework helps you avoid even one bad AI investment, it's worth sharing.
What's Coming Next
While this framework is free, my paid subscribers receive:
- The complete AI Pitch Deck Decoder with annotated examples
- My Tech Simplification Calculator for identifying overspending
- The AI Implementation Tracker with real-world benchmarks
- Weekly case studies of real AI implementations gone right and wrong
But even if you never subscribe, keep using this free framework. It might save your company millions too.
Your Turn
Have you encountered AI snake oil in your organization? Reply with your experience, and I'll analyze one case study in a future post.
If this was helpful, consider subscribing for weekly insights on cutting through tech hype and building solutions that actually work. Or don't – this framework is yours either way.
Thanks, Pranjal, for sharing this valuable framework questionnaire.
"Good data in, good output from AI" – I have read this statement multiple times, and data cleaning is the basic first step for any organisation before thinking about AI.
Don't companies have data analysts/data engineers who can do this data cleaning? Why do most companies fail to get the desired results from their data?
How do ontology and knowledge graphs play a role in getting good data?
Please share your thoughts/experiences on data cleaning, ontology, and knowledge graphs.
(Maybe in a new article.)
Thank you