
Chatbots and legal practice: when the AI gets it wrong

Possible legal sanctions for the lawyer who relied on OpenAI's chatbot in the case of a client injured during a flight


In the age of artificial intelligence, many wonder whether the technology could replace humans in the workplace. As a recent legal case illustrates, however, that is not yet true of every profession.

Schwartz, an attorney at a New York law firm, recently enlisted the help of ChatGPT, the chatbot developed by OpenAI, to draft a legal document. The filing was part of the case of Mata, a passenger who sued the Colombian airline Avianca after a serving cart struck his knee during a flight.

This would appear to be a model example of how AI can assist professionals in their work. Instead, it turned into a cautionary tale because of false information contained in the document the chatbot helped draft.

To persuade the judge not to dismiss the case, Schwartz used ChatGPT to research precedents similar to Mata's. The result was a brief citing at least a dozen prior cases, including "Varghese v. China Southern Airlines," "Martinez v. Delta Airlines," and "Miller v. United Airlines." None of them turned out to be authentic.

When Schwartz questioned the chatbot about the veracity of the citations, it insisted the cases were real and could be found in "reputable legal databases" such as Westlaw and LexisNexis. Avianca's defense attorneys, however, were unable to locate them.

For example, the "Varghese v. China Southern Airlines Co." opinion cited in the document does not exist, although it referenced the real case "Zicherman v. Korean Air Lines Co." In short, ChatGPT fabricated its sources, putting Schwartz in a delicate position that could lead to legal sanctions.

Schwartz defended himself by saying he did not know that the information provided by ChatGPT could be false. That explanation did not convince the judge assigned to the case.

Schwartz's case raises significant questions about how ready AI is to perform tasks autonomously. Despite rapid technological progress, ChatGPT and other chatbots still require human supervision to be used safely, and this episode shows how blind faith in AI can lead to unexpected problems.
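The practical lesson is to treat every citation a chatbot produces as unverified until it is checked against a primary source. Below is a minimal Python sketch of such a first-pass check using the free CourtListener search API; the v3 endpoint and parameters are assumptions based on its 2023 public documentation, and a name match is only a coarse filter, so a lawyer would still need to read the actual opinion.

```python
import requests

def case_exists(case_name: str) -> bool:
    """Return True if the case name matches at least one opinion in
    CourtListener's database of US court decisions.

    Assumption: the v3 search endpoint accepts a quoted query string
    via 'q' and 'type=o' for opinions; check current docs before use.
    """
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v3/search/",
        params={"q": f'"{case_name}"', "type": "o"},
        timeout=10,
    )
    resp.raise_for_status()
    # The search response includes a 'count' of matching opinions.
    return resp.json().get("count", 0) > 0

# Citations from the brief in Mata v. Avianca:
for name in ("Varghese v. China Southern Airlines",
             "Martinez v. Delta Airlines",
             "Miller v. United Airlines"):
    print(name, "->", "found" if case_exists(name) else "NOT FOUND")
```

Even a rough check like this could have flagged the fabricated citations before the document ever reached the court.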


05/29/2023 16:09

Editorial AI
