When AI "hallucinates": Deloitte, governments, and the price of letting our guard down

At this point, it is safe to say that generative AI is the most powerful general-purpose tool of recent years. Very few sectors remain untouched by it, and consulting is no exception.

The major consulting firms have incorporated generative AI to speed up report preparation: summarizing literature, drafting first versions, or generating citations automatically. But recent cases involving Deloitte in Australia and Canada show the dark side of that shortcut: when too much is delegated to the machine and human oversight slackens, AI does not merely make mistakes, it can fabricate reality, and do so in multi-million-dollar documents meant to guide public policy.

In Australia, Deloitte prepared a 237-page report for the Department of Employment and Workplace Relations on IT systems that impose automatic penalties in the welfare system. The document, for which the government paid 440,000 Australian dollars, contained an invented quotation from a federal court judgment and references to academic works that never existed. Chris Rudge, a researcher in health and welfare law at the University of Sydney, identified up to 20 errors in the report and argued that colleagues' work had been used as mere "tokens of legitimacy", cited but not read, while the false judicial quotation amounted to misrepresenting the law to the government itself. After reviewing the text and confirming that some references were incorrect, Deloitte agreed to refund the final payment, and the report was republished with changes: the fictitious citations were removed and a disclosure was added stating that Azure OpenAI had been used to help draft it.

Weeks later, a strikingly similar case surfaced in Canada. According to reporting by Fortune, the government of the province of Newfoundland and Labrador commissioned Deloitte to produce a 526-page health report advising the Department of Health and Community Services on issues such as virtual care, incentives to retain staff, and the impact of the COVID-19 pandemic on healthcare workers, against a backdrop of nurse and doctor shortages. The study, released in May, cost the government just under 1.6 million Canadian dollars. An investigation by The Independent found fabricated citations pointing to non-existent academic articles, studies attributed to researchers who never wrote them, and pairings of authors who had never published together.

The official responses have not been identical. In Australia, the government disclosed that the report had been reviewed and partially reimbursed. In Canada, Deloitte Canada maintains that it stands by the report's recommendations and that AI was not used to write it, only to support a limited number of research citations, which it is now correcting. In both cases, the official line insists that the citation errors do not alter the substantive content or the conclusions of the documents. Beyond the technical dispute, both episodes illustrate the central risk of misusing AI in high-stakes contexts: a statistical tool prone to hallucinations is mistaken for a reliable source of truth.

Since the dawn of humanity, people have sought to live more comfortably, and a tool that can, in seconds, solve most (though not all) of your problems clearly moves in that direction. But that comfort should not make us forget to treat the technology for what it is: a powerful tool that, like any powerful tool, demands responsible use. The Deloitte cases may be the first warning sign of a larger problem: governments making strategic decisions on the basis of reports that offer not evidence, but statistical fiction with the appearance of authority.


Sources:

[1] https://apnews.com/article/australia-ai-errors-deloitte-ab54858680ffc4ae6555b31c8fb987f3

[2] https://fortune.com/2025/11/25/deloitte-caught-fabricated-ai-generated-research-million-dollar-report-canada-government/