
Chatbot and legal practice: when the AI is wrong

Possible legal sanctions for the lawyer who used OpenAI's chatbot in the case of a client injured on a flight


In the age of artificial intelligence, many people wonder whether this technology could somehow replace humans in the workplace. However, as a recent legal case illustrates, this is not necessarily true of all professions.

Schwartz, an attorney at a major law firm, recently enlisted the help of ChatGPT, an advanced chatbot developed by OpenAI, to draft a legal document. This work was part of the case of Mata, who sued the Colombian airline Avianca after being injured in the knee by a service trolley during a flight.

This might appear to be a model example of how AI can assist professionals in their work. Instead, it became a controversial case because of false information contained in the legal document drafted with the chatbot.

To persuade the judge not to dismiss the case, Schwartz used ChatGPT to search for previous cases similar to Mata's. The result was a document citing at least a dozen precedents, including "Varghese v. China Southern Airlines," "Martinez v. Delta Airlines," and "Miller v. United Airlines." None of these cases, however, turned out to be authentic.

Although Schwartz questioned the chatbot about the veracity of the information, ChatGPT insisted that the cases it cited were real and could be found in "reputable legal databases" such as Westlaw and LexisNexis. Opposing counsel, however, disputed these claims.

For example, the "Varghese v. China Southern Airlines Co" case cited in the document does not exist, although it contained a reference to the real "Zicherman v. Korean Air Lines Co" case. This indicates that ChatGPT provided false information, putting Schwartz in a delicate situation that could lead to legal penalties.

Schwartz defended himself by stating that he was unaware the information provided by ChatGPT could be false. This explanation, however, did not convince the judge in charge of the case.

Schwartz's case raises significant questions about the readiness of AI to perform tasks autonomously. Despite technological progress, ChatGPT and other chatbots still require human supervision to function properly. This situation is an example of how blind faith in AI can lead to unexpected problems.


05/29/2023 16:09

Marco Verro

