A Personal Injury, Workers' Compensation and Defense Base Act Law Firm Fighting for the Injured.

How has Artificial Intelligence Impacted the Legal Field?

What is artificial intelligence?

Artificial Intelligence (AI) is the science of programming machines, especially computer systems, to think and reason like human beings. These systems have been programmed to perform tasks such as solving problems, answering everyday questions, and even working through math equations.

Generative Artificial Intelligence is a type of AI in which the machine perceives and classifies information to produce new and original content, including images, music, video, art and design, and text.

There are many generative artificial intelligence services. A few of the most common are ChatGPT, Microsoft's Copilot, Claude, and Google Bard. Some are targeted at the legal profession, such as LexisNexis' AI product, which can generate the first draft of a legal document and analyze a judge's past decisions to tailor a filing to that particular judge.

The Supreme Court’s “2023 Year-End Report on the Federal Judiciary” by Chief Justice Roberts addresses the impact of AI technology on the legal field. “Law professors report with both awe and angst that AI apparently can earn Bs on law school assignments and even pass the bar exam. Legal research may soon be unimaginable without it. AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike. But just as obviously it risks invading privacy interests and dehumanizing the law,” says Chief Justice Roberts in the report.[1]

What is unique about lawyers using AI as opposed to other professions?

Lawyers are bound by special rules that govern their profession, which may come from their state bar’s ethical rules (i.e. the licensing body’s rules) or from state or federal law.

As attorneys based in Florida, we must always follow The Florida Bar's ethics rules. There are many rules of professional conduct, but the ones we must especially keep in mind when using AI are the following:

Rule 4-8.4: Misconduct

Rule 4-1.1: Competence

Rule 4-1.3: Diligence

Rule 4-1.6: Confidentiality of Information

Rule 4-3.3: Candor Toward the Tribunal

Rule 4-8.4 Misconduct states:

A lawyer shall not:

(a) violate or attempt to violate the Rules of Professional Conduct, knowingly assist or induce another to do so, or do so through the acts of another;

(c) engage in conduct involving dishonesty, fraud, deceit, or misrepresentation, except that it shall not be professional misconduct for a lawyer for a criminal law enforcement agency or regulatory agency to advise others about or to supervise another in an undercover investigation, unless prohibited by law or rule, and it shall not be professional misconduct for a lawyer employed in a capacity other than as a lawyer by a criminal law enforcement agency or regulatory agency to participate in an undercover investigation, unless prohibited by law or rule;

Lawyers must be careful when using these AI tools; it is imperative to fact-check all information used in their cases. Failing to fact-check can inadvertently result in an attorney engaging in conduct involving dishonesty, fraud, deceit, or misrepresentation.


Rule 4-1.1 Competence states:

A lawyer must provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation.

When using an AI tool such as ChatGPT, it is important to cross-check the information with Westlaw or Lexis to ensure it is accurate; this is how attorneys provide competent representation.


Rule 4-1.3: Diligence states:

A lawyer shall act with reasonable diligence and promptness in representing a client.

Attorneys should be diligent in their case research. Cutting corners by using these AI tools without reading the case law will inevitably result in errors in a client's case.


Rule 4-1.6: Confidentiality of Information states:

(a) Consent Required to Reveal Information. A lawyer must not reveal information relating to a client's representation except as stated in subdivisions (b), (c), and (d), unless the client gives informed consent.

Lawyers must be careful with the information they put into these AI systems. Lawyers can use generative artificial intelligence as a tool, but they must protect the confidentiality of their clients' cases. Client information needs to be protected; therefore, attorneys should avoid entering data such as Social Security numbers, dates of birth, and addresses into these AI systems.


Rule 4-3.3: Candor Toward the Tribunal states:

(a) False Evidence; Duty to Disclose. A lawyer shall not knowingly: (1) make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal by the lawyer;


In the case of People v. Crabill[2], a Colorado attorney was suspended for using fake case law citations in a motion. He had used ChatGPT to help write the motion, and ChatGPT provided fake citations. To make matters worse, the attorney lied to the judge by blaming a legal intern for providing the fake citations. The Presiding Disciplinary Judge approved the parties' stipulation to discipline and suspended Zachariah C. Crabill for one year and one day, with ninety days to be served and the remainder stayed upon Crabill's successful completion of a two-year period of probation, with conditions.

In the case of Mata v. Avianca, Inc.[3], the law firm of Levidow, Levidow & Oberman P.C. was found to have "abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question." The Court found bad faith on the part of the respondents for acts of conscious avoidance and false and misleading statements to the Court, and it imposed sanctions on the firm.

In the case of Park v. Kim[4], a New York plaintiff's attorney requested and received two extensions to file a pleading, but then filed it past the deadline. The pleading cited case law that the Court was unable to find, and the Court ordered counsel to provide a copy of the case. At that point, the attorney admitted to having used ChatGPT. This attorney clearly failed to investigate the case further and submitted a non-existent case to the Court, violating the ethical rules. The attorney now faces possible sanctions. It is important to confirm the validity of cases provided by artificial intelligence tools.

In the case of United States v. Michael Cohen[5], attorney David M. Schwartz provided the Court with a motion that cited AI-generated cases. Schwartz was seeking an early end to Michael Cohen's court supervision after Cohen had served time in jail. Cohen, Donald Trump's former lawyer, had pleaded guilty in 2018 to tax evasion and lying to Congress, and was disbarred. Cohen provided Schwartz with the fake cases, which he had obtained from Google Bard, and it appears both failed to verify the information. The Judge did not order any sanctions.

Is there a way for lawyers to use artificial intelligence in an ethical manner?

Yes! The cases discussed above are cautionary tales from which lawyers can learn to use AI appropriately. As technology progresses, artificial intelligence can be integrated into the legal field. Lawyers can use AI tools like ChatGPT, so long as they keep the aforementioned ethical rules in mind. Lawyers must protect the confidentiality of client information, conduct diligent research, and double-check their work. Case citations should be verified, and any case law provided by these tools should be investigated and analyzed before it is inserted into a motion.

What does this mean for the use of AI in the Defense Base Act practice area?

At the Annual Longshore Conference held this year in New Orleans, practitioners in the Defense Base Act field gathered to discuss current topics in our area of practice. The Honorable Administrative Appeals Judge Buzzard from the Benefits Review Board – the appellate body for DBA claims – presented on Ethical Considerations in Utilizing AI in Appellate Litigation. Ultimately, the presentation pointed out that AI is here to stay and can be an incredible tool for the legal professional if used ethically – just like every other tool and decision that lawyers wield.

By: Jennifer Corbalan

 

[1] 2023 Year-End Report on the Federal Judiciary (supremecourt.gov), page 5.

[2] No. 23PDJ067, 2023 WL 8111898 (Colo. Discipl. Nov. 22, 2023).

[3] 678 F. Supp. 3d 443 (S.D.N.Y. 2023).

[4] 91 F.4th 610 (2d Cir. 2024).

[5] No. 18-CR-602, 2023 WL 1193604 (S.D.N.Y. Mar. 20, 2024).
