Chatbot and legal practice: when the AI is wrong
Possible legal sanctions for the lawyer who used the OpenAI chatbot in the case of a client injured in flight
In the age of artificial intelligence, many people wonder whether this technology could somehow replace humans in the workplace. However, as a recent legal case illustrates, this is not necessarily true of all professions.
Schwartz, an attorney at a major law firm, recently enlisted the help of ChatGPT, an advanced chatbot developed by OpenAI, to draft a legal document. The work was part of a case brought by his client, Mata, who sued the Colombian airline Avianca after being injured in the knee by a service trolley during a flight.
This would appear to be a model example of how AI can assist professionals in their work. Instead, it became a controversial case because of false information contained in the legal document drafted with the chatbot.
To persuade the judge not to dismiss the case, Schwartz used ChatGPT to research precedents similar to Mata's. The result was a document citing at least a dozen prior cases, including "Varghese v. China Southern Airlines," "Martinez v. Delta Airlines," and "Miller v. United Airlines." None of these cases, however, ultimately turned out to be authentic.
Although Schwartz questioned the chatbot about the veracity of the information, ChatGPT insisted that the cases it cited were real and could be found in "reputable legal databases" such as Westlaw and LexisNexis. The opposing counsel, however, disputed these claims.
For example, the "Varghese v. China Southern Airlines Co." case cited in the document does not exist, although it contained a reference to the real case "Zicherman v. Korean Air Lines Co." This suggests that ChatGPT fabricated information, putting Schwartz in a delicate position that could lead to legal sanctions.
Schwartz defended himself by stating that he was unaware the information provided by ChatGPT could be false. The judge overseeing the case, however, did not find this explanation convincing.
Schwartz's case raises significant questions about the readiness of AI to perform tasks autonomously. Despite technological advances, ChatGPT and other chatbots still require human supervision to function reliably. This situation is an example of how blind faith in AI can lead to unexpected problems.
05/29/2023 16:09
Editorial AI