Judge Won’t Punish Michael Cohen For Relying on Artificial Intelligence

A Manhattan judge on Wednesday declined to impose sanctions on Michael D. Cohen, the onetime fixer for former President Donald J. Trump, after he mistakenly gave his lawyer fake legal citations concocted by Google Bard, an artificial intelligence program, for a motion the lawyer was preparing on Mr. Cohen’s behalf.

The lawyer, David M. Schwartz, cited the bogus cases in his motion, which was filed in Federal District Court.

The judge, Jesse M. Furman, said the episode was embarrassing and unfortunate, but he had accepted Mr. Cohen’s explanation that he did not understand how Google Bard worked and did not mean to mislead Mr. Schwartz. The judge also said he had not found that Mr. Schwartz had acted in bad faith.

“Indeed, it would have been downright irrational for him to provide fake cases for Schwartz to include in the motion knowing they were fake,” Judge Furman wrote of Mr. Cohen, a former lawyer who has been disbarred, given the probability that Mr. Schwartz, the government or the court would discover the problem, “with potentially serious adverse consequences for Cohen himself.”

The issue arose in Mr. Cohen’s criminal case over tax evasion and campaign finance violations committed on Mr. Trump’s behalf. Mr. Cohen pleaded guilty in 2018 and served time in prison. After his release, having complied with the conditions of his supervision, he asked the court for an early end to its oversight of his case.

Judge Furman had denied three earlier such requests by Mr. Cohen. In his latest request, his lawyer, Mr. Schwartz, pointed out that his client testified for two days last fall in New York State’s civil fraud trial of Mr. Trump. Mr. Cohen’s “willingness to come forward and provide truthful accounts,” Mr. Schwartz argued, “demonstrates an exceptional level of remorse and a commitment to upholding the law.”

But Judge Furman said that Mr. Cohen’s testimony in the state trial “actually provides reason to deny his motion, not to grant it.” The judge cited Mr. Cohen’s testimony in the state civil trial in which he admitted that he had lied in federal court when he pleaded guilty to tax evasion, which he now says he did not commit.

A lawyer for Mr. Cohen did not immediately respond to a request for comment on Judge Furman’s ruling.

Mr. Cohen’s credibility will be at the heart of Mr. Trump’s first criminal trial, scheduled to start in mid-April in Manhattan. Mr. Cohen, one of the prosecution’s star witnesses, was involved in the hush-money deal at the center of that case, brought by the Manhattan district attorney’s office. Mr. Trump’s lawyers might try to seize on Mr. Cohen’s inconsistent statements at the civil fraud trial, and possibly even Judge Furman’s ruling, to paint him as a liar. But the district attorney’s office is likely to counter that Mr. Cohen told many of his earlier lies on Mr. Trump’s behalf, and that he has told a consistent story about the hush-money deal for years.

The judge overseeing the civil fraud trial, Arthur F. Engoron, had said that he found Mr. Cohen’s testimony “credible,” and imposed a crushing $454 million judgment on Mr. Trump.

It was in preparing that request to end the court’s supervision of his case that Mr. Cohen sought to assist his lawyer, Mr. Schwartz.

Mr. Cohen said in a sworn declaration in December that he had not kept up with “emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like ChatGPT, could show citations and descriptions that looked real but actually were not.”

Mr. Cohen also said he had not realized Mr. Schwartz “would drop the cases into his submission wholesale without even confirming that they existed.”

Mr. Cohen asked Judge Furman to exercise “discretion and mercy.”

The case is one of several that have surfaced in the Manhattan federal court in the past year in which the use of artificial intelligence has tainted court filings. Nationally, there have been at least 15 cases in which lawyers or litigants representing themselves were believed to have used chatbots for legal research that wound its way into court filings, according to Eugene Volokh, a law professor at U.C.L.A. who has written about artificial intelligence and the law.

The issue exploded into public view last year after Judge P. Kevin Castel, also of Manhattan federal court, fined two lawyers $5,000 after they admitted to submitting a brief filled with nonexistent decisions and legal citations that had been generated by ChatGPT.

A series of similar cases in federal courts in Manhattan followed.

In one, a lawyer acknowledged citing a “nonexistent case” — Matter of Bourguignon v. Coordinated Behavioral Health Services, Inc. — that she said was “suggested by ChatGPT” after her own research failed to turn up a decision to support an argument she was making. In January, the U.S. Court of Appeals for the Second Circuit referred her to a court panel that investigates complaints against attorneys.

And in another case, Judge Paul A. Engelmayer of Federal District Court chastised a law firm in Auburn, N.Y., that openly admitted it had used ChatGPT to bolster a request for attorney’s fees in a lawsuit against New York City’s Department of Education.

Judge Engelmayer said the firm’s “invocation of ChatGPT as support for its aggressive fee bid is utterly and unusually unpersuasive.”

The cases highlight the legal profession’s challenges as lawyers increasingly rely on chatbots to prepare legal briefs. Artificial intelligence programs like ChatGPT and Bard (now known as Gemini) generate realistic-sounding responses by predicting, piece by piece, which text is most likely to follow the text that came before.
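To illustrate that mechanism in miniature, the sketch below builds a toy "bigram" model in Python: it predicts each next word only from the word immediately before it. The training sentences and everything else here are invented for illustration, and real chatbots use far larger neural networks, but the core idea is the same. The program samples likely continuations with no check against any database of real facts or real court decisions, which is why the output can read as plausible while being entirely fabricated.

```python
import random
from collections import defaultdict

# Toy training corpus (invented for illustration; real models train on
# billions of documents, not a few sentences).
corpus = (
    "the court held that the motion was denied . "
    "the court held that the appeal was granted . "
    "the motion was granted by the district court ."
).split()

# Count, for each word, which words follow it (a bigram model).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=12):
    """Sample text by repeatedly guessing a likely next word."""
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # pick a plausible continuation
        out.append(word)
    return " ".join(out)

# The output is fluent-looking recombination, not retrieval of facts:
# nothing guarantees a generated "holding" corresponds to a real case.
print(generate("the"))
```

Run a few times, the sketch recombines its training fragments into new sentences that sound like legal prose. At the scale of a modern chatbot, the same sampling process can yield citations that look real but were never checked against any reporter of decisions.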

Mr. Cohen, in his declaration, wrote that he had understood Bard to be “a supercharged search engine” that in the past he had used to obtain accurate information. The cases that he found and passed along to Mr. Schwartz appear to have been “hallucinations” — a term used to refer to chatbot-generated inaccuracies.

The episode became public in December when Judge Furman said in an order that he could not find any of the three decisions Mr. Schwartz had cited in his motion. He ordered Mr. Schwartz to provide him with copies of the decisions or “a thorough explanation of how the motion came to cite cases that do not exist and what role, if any, Mr. Cohen played.”

Mr. Schwartz, in his own declaration, said he had not independently reviewed the cases Mr. Cohen provided because Mr. Cohen had indicated another lawyer was providing him with suggestions for the motion.

“I sincerely apologize to the court for not checking these cases personally before submitting them to the court,” Mr. Schwartz wrote.

Barry Kamins, a lawyer for Mr. Schwartz, said Wednesday, “We are gratified that the court viewed this mistake as one that was not made in bad faith by Mr. Schwartz.”

Ben Protess contributed reporting.
