Artificial Intelligence and the Courts

A judge in New York imposed sanctions on two attorneys and their law firm for submitting a legal brief containing six fictitious case citations generated by ChatGPT, an artificial intelligence (AI) chatbot.

The judge, P. Kevin Castel, found that the lawyers acted in bad faith, consciously avoiding the truth and presenting misleading statements to the court. Consequently, they were fined a total of $5,000.

The law firm, in response, disagreed with the court’s assessment, maintaining that the mistake was a good-faith error: a failure to appreciate that a technology like ChatGPT could fabricate cases.

Using ChatGPT

Steven Schwartz, one of the attorneys, had earlier admitted that he used ChatGPT to help research a legal brief for a client’s personal injury case against a Colombian airline, and that he unknowingly included the AI-generated false citations in the filing.

The airline’s lawyers first alerted the court in March that they could not locate the cases cited in the brief. They also argued that, regardless of how the brief was researched, dismissal of the personal injury case was warranted; the judge ultimately granted the airline’s motion to dismiss on the grounds that the suit was filed too late.

Ethical Responsibility

The judge clarified that while there is nothing inherently improper about lawyers using AI for assistance, they have an ethical responsibility to ensure the accuracy of their filings. In this case, the lawyers continued to stand by the false citations even after their authenticity was called into question. The sanctions order also requires the lawyers to notify the real judges who were falsely identified as the authors of the fabricated cases.

Conclusion

As we move into the new world of AI-assisted language tools, and even AI image generators that are becoming concerningly realistic, new legal issues are bound to arise. It stands to reason, however, that attorneys should not use AI to write court submissions, especially without proofreading and cite-checking the results.