Does AI belong in the courtroom? A Texas judge doesn't think so.

A federal judge in Texas has ordered lawyers to keep AI-reliant legal briefs out of his courtroom. But one legal technology executive says AI has been lurking in litigation for years and curbing it may prove a major challenge.

"We try and educate judges about this stuff," Andy Wilson, CEO of legal technology company Logikcull, said. "Just knowing what I know, I think it’ll be a futile effort."

The order, issued by U.S. District Judge Brantley Starr of the Northern District of Texas, appears to be the first of its kind, mandating that lawyers who file documents on his docket certify the filings as either free of content produced by large language model (LLM) AI tools – like OpenAI's ChatGPT, Harvey.AI, and Google Bard – or as reviewed by a human for accuracy.

"My order is an attempt to keep the pros of generative AI while managing the cons," Judge Starr told Yahoo Finance. "But judges are reactive and resolve what is put before us. So we'll never be as cutting edge as the innovations we eventually face."

Starr said one of the many pros of legal AI is that it can search through mountains of data. The main con he sees is the tendency for systems to "hallucinate" by fabricating case citations and quotations. Hallucinations are scenarios where AI-generated text appears plausible but is factually, semantically, or syntactically incorrect.

Starr explained in a post to the court’s website that there’s also no way to hold a machine to the ethical requirements of practicing law or to ensure that the technology’s creators have avoided programming their personal prejudices, biases, and beliefs into the systems.

"As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States…or the truth," the judge wrote.

OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology & the Law Subcommittee hearing titled 'Oversight of A.I.: Rules for Artificial Intelligence' on Capitol Hill in Washington, U.S., May 16, 2023.  REUTERS/Elizabeth Frantz

Rebecca Johnson, public affairs director for the State Bar of Texas, said the organization had not taken a position on AI use in the legal profession. Other bar associations including the American Bar Association and New York State Bar Association also said they had not taken official positions on the use of AI.

The ABA, however, passed a resolution in February urging AI developers and users to maintain human oversight and control and take accountability for harm that their AI tools cause.

The bar is also urging Congress and government authorities to consider those standards in passing AI legislation and regulation.

The New York State Bar Association, for its part, is studying the impacts of AI beyond LLMs. Its communications director, Susan DeSantis, said the association is also looking into how facial recognition and AI in digital finance and currency are affecting the legal profession.

Standing Order issued by US District Court Judge Brantley Starr, Northern District of Texas, Dallas Division

LLMs like ChatGPT, Wilson said, have already enabled systems like Logikcull's to identify nuances in communication that once required human judgment. Wilson says Logikcull can comb through terabytes of electronically stored documents, databases, video, emails, and Slack messages, and then red-flag data that is relevant to legal queries.

Still, he added, no one has yet answered Judge Starr's concern that machines can't be held accountable to the ethical requirements of practicing law.

"The ethical use of AI is something that’s not talked about enough," Wilson said, cautioning that personally sensitive data like a person's likeness or voice is no longer safe from fabrication.

Starr's order came days after reports highlighting a New York lawyer who submitted bogus AI-generated case citations in court filings while defending his client.

For now, Judge Starr says he hopes his order will make lawyers aware that generative AI can create false statements, and that they can avoid sanctions for making false statements by checking AI’s work.

"These platforms are incredibly powerful and have many uses in the law…" Judge Starr wrote. "But legal briefing is not one of them."

Alexis Keenan is a legal reporter for Yahoo Finance. Follow Alexis on Twitter @alexiskweed.
