Why Deloitte's $291K AI Mistake Shows Students Need Better AI Training (2025)

A $291,000 blunder reveals that even experts falter without genuine dialogue, and the lesson is just as relevant for students.

Deloitte, one of the Big Four consulting giants, recently returned $291,000 to the Australian government after admitting that it had used ChatGPT to generate a compliance review filled with glaring inaccuracies. The report wasn’t just off the mark: it contained fake references, made-up citations, and fabricated court rulings. Christopher Rudge, an academic at the University of Sydney, described the document as riddled with "hallucinations," claims with no support in any real source.

But here’s where it gets controversial: this isn’t some student cheating on a homework task with an AI shortcut. This was a multi-billion-dollar consultancy outsourcing its professional judgment to an algorithm. The outcome? Exactly what happens when highly paid experts stop engaging their critical thinking: a polished package of nonsense dressed up in professional formatting.

It’s crucial to understand that AI itself wasn’t the culprit here. The tool did what it was asked to do. The failure was on the consultants’ part—they didn’t know how to think with the AI, treating it like a "black box" that just spits out answers. This echoes Paulo Freire’s idea of "banking education," where knowledge is merely deposited and retrieved without questioning or dialogue.

This Deloitte incident should be a wake-up call for educators everywhere. If top-level professionals are offloading cognitive tasks to AI without meaningful engagement, why should we expect students to approach AI any differently?

The Problem of Institutional Hypocrisy

Unfortunately, we’re entering an era where nearly everyone uses AI but few openly admit it. Educational institutions are no exception. Anthropic’s 2025 education report highlighted a striking contradiction: professors rate AI-assisted grading as the "least effective" use of AI in education, yet nearly half (48.9%) of their AI-powered grading is fully automated—essentially letting algorithms do the work educators are paid to perform. Professors are also using AI to develop detailed teaching materials and course content.

Then, paradoxically, these same institutions punish students for engaging in the very behaviors they model for themselves. The message to students is clear: AI use is acceptable and even expected for professionals, but it’s considered academic dishonesty for learners. This kind of hypocrisy doesn’t just erode academic integrity; it teaches students that the goal isn’t mastering the tool but hiding their AI use.

What Cognitive Outsourcing Looks Like

Repeated, uncritical use of large language models (LLMs) has been linked to weaker neural connectivity. A recent MIT study found that frequent AI users often couldn’t accurately quote their own work or claim ownership of it, and reported low levels of intellectual engagement. The researchers call this phenomenon "cognitive debt."

This ties back to Freire’s banking metaphor: each time someone deposits a prompt and withdraws an answer without critical thought, the brain skips the work of building the neural pathways associated with deep understanding and analysis. Over just four months, consistent LLM users in the study showed declining performance on neural, linguistic, and behavioral measures.

Students who rely on AI like a copy-paste machine struggle to explain why they chose particular evidence or examples. When pressed, many blank out or become defensive. Their work might look polished on the surface, but underneath is a lack of authentic thinking—no connection between conclusions and broader concepts, and little curiosity beyond meeting the assignment criteria.

This isn’t a sign of laziness; it’s a symptom of disengagement. When students don’t invest in the cognitive process, they become indifferent to the accuracy or quality of their output.

The Power of Dialogic Engagement

On the flip side, students who learn dialogic engagement with AI demonstrate strikingly different behavior. They ask thoughtful follow-up questions during discussions, defend their reasoning when challenged, critically evaluate each other's arguments with evidence they personally vetted, and recognize the limits of their conclusions. They don’t stop at the assignment’s minimum; they want to explore further.

The key difference is their approach. Instead of treating AI as a magic "answer machine," they run an ongoing interrogation: testing, critiquing, and refining the AI’s outputs. For example, rather than stopping at "Write an analysis of symbolism in The Great Gatsby," they first ask the AI for an analysis, then challenge it: "What assumptions underlie this interpretation? How might it be flawed? What would a historian of the 1920s say? Can I relate these themes to my own experience?"
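To make that loop concrete, here is a minimal sketch in Python of what dialogic prompting can look like in code. It assumes the OpenAI Python SDK and a gpt-4o-mini model purely for illustration; any chat-style LLM API follows the same pattern, and the prompts are the hypothetical Gatsby examples above, not a prescribed curriculum.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(history: list[dict]) -> str:
    """Send the whole conversation so far and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=history,
    )
    return response.choices[0].message.content


# The naive, one-shot request a disengaged user would stop at...
history = [{"role": "user",
            "content": "Write an analysis of symbolism in The Great Gatsby."}]
draft = ask(history)
history.append({"role": "assistant", "content": draft})

# ...versus dialogic engagement: interrogate the draft instead of submitting it.
follow_ups = [
    "What assumptions underlie this interpretation?",
    "How might this reading be flawed or incomplete?",
    "What would a historian of the 1920s say about these claims?",
]
for question in follow_ups:
    history.append({"role": "user", "content": question})
    reply = ask(history)
    history.append({"role": "assistant", "content": reply})
    print(f"\n--- {question} ---\n{reply}")
```

The code is the easy part; the step that matters happens off-screen, where the student decides which of the model’s answers survive scrutiny and which get discarded.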

This kind of interaction takes more time and effort because it works like genuine dialogue: an exchange in which human expertise and the model’s information processing test and sharpen each other.

What Educators Need to Do

We can’t lecture students about AI ethics and then automate our own grading or feedback behind closed doors. Penalizing students for behaviors we ourselves practice is counterproductive.

Teachers must model dialogic engagement by being transparent about their own AI use. If they employ AI to draft assignments, organize data, or generate feedback, they should openly demonstrate the process—sharing the questions they asked, the AI outputs they rejected, their reasoning, and the expertise they contributed. The goal isn’t to hide AI usage for a competitive edge but to promote thoughtful, transparent, and responsible use.

A Wake-Up Call for All

Deloitte’s $291,000 error isn’t an isolated incident; it’s a glimpse of a workplace where AI is ubiquitous but users lack the skills or motivation to think critically alongside it. If we fail to teach students how to engage AI dialogically, we risk graduating professionals who produce rubbish reports, can’t articulate their own reasoning, and carry so much cognitive debt that they depend on AI for even basic tasks.

Let’s stop pretending students won’t use these tools. Instead, teach them that true expertise lies in interrogating AI outputs, recognizing their limits, and applying human judgment at every step.

Hand a powerful LLM to someone who doesn’t think critically, and you get results like Deloitte’s flawed report. Give it to someone who truly knows how to think, and you amplify their knowledge exponentially. This is a pivotal moment—if we don’t act now, we risk an entire generation learning that thinking is optional.


References:

  • Bent, D., et al. (2025). Anthropic Education Report: How Educators Use Claude. Anthropic.
  • Kosmyna, N., et al. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task. arXiv.
  • Melisa, R., et al. (2025). Critical Thinking in the Age of AI: A Systematic Review of AI's Effects on Higher Education. Educational Process: International Journal.