In Melbourne, an experienced defence lawyer accepted responsibility after legal documents in a murder case were found to contain fake quotes and references to non-existent court rulings, all generated by artificial intelligence.
The case unfolded in the Supreme Court of Victoria.
Rishi Nathwani, a King’s Counsel, filed submissions on behalf of a teenager accused of murder.
However, Justice James Elliott's associates were unable to verify the citations, prompting the defence to admit the sources were fabricated.
The mistake caused a 24-hour delay before the case could proceed; when the hearing resumed, Justice Elliott found the teenager not guilty of murder because of mental impairment.
Justice Elliott rebuked counsel, saying such lapses are unacceptable because the court must be able to rely on the accuracy of the submissions made to it.
He pointed out that guidelines issued by the court last year require lawyers to independently verify any AI-generated content before using it in legal proceedings.
The error wasn't limited to the defence. Prosecutor Daniel Porceddu also missed the inaccuracies, accepting the defence's flawed submissions without verifying them; the fabricated material included invented quotes from a parliamentary speech and citations of Supreme Court rulings that do not exist.
This incident follows similar issues overseas. In the U.S. in 2023, lawyers were fined for submitting AI-produced legal research containing invented cases.
Courts in the U.K. have also warned that misleading material, when presented as legitimate, could amount to contempt of court or, at worst, perverting the course of justice.
Legal observers say the case is a warning sign: AI tools can assist legal research, but only if their outputs are carefully checked. The episode is a reminder of the risks of trusting technology without scrutiny.
