* The curious case of Des Plaines resident Graciela Dela Torre, Nippon Life Insurance Company and ChatGPT via the International Business Times…
Dela Torre and her lawyer initially pursued a disability claim against Nippon Life Insurance Company after she suffered carpal tunnel and tennis elbow on the job in August 2019. She later stopped qualifying as disabled in November 2021, then sued and ultimately obtained a settlement, the court papers say.
As part of that agreement, she waived any future claims against the insurer in January 2024. When she tried to revisit the matter about a year later, her lawyer allegedly told her it could not be reopened, prompting her to seek guidance from ChatGPT and ask whether she had been ‘gaslighted’ by her attorney. […]
After consulting the chatbot, Dela Torre submitted a pro se filing on 22 January 2025 seeking to reopen the settled case, the insurer alleges. A [federal] judge ruled on 13 February 2025 that she could not reopen the case, but the filings did not stop there, according to the complaint.
Instead, the court papers [in the lawsuit filed by Nippon Life Insurance Company] say she went on to bring a new suit against Nippon that remains pending. The insurer alleges ChatGPT produced at least 44 filings connected to her efforts, including a document citing the fabricated case ‘Carr v. Gateway, Inc. 9′. The complaint argues that the supposed precedent is nowhere to be found outside the bot’s output and Dela Torre’s submissions. It states: ‘It only exists in Dela Torre’s papers and the “mind” of ChatGPT.’
Lots more in there, so take a look.
Nippon’s lawsuit is here. Wow.
* From Michael Stanisci…
Nippon settled a long-term disability lawsuit in January 2024. The claimant signed a full release. Case dismissed with prejudice.
A year later, she changed her mind. Her attorney told her the release was enforceable and the case was closed. She uploaded his response to ChatGPT and asked if she was being gaslighted. ChatGPT said yes.
Then ChatGPT drafted her motions, generated her legal arguments, conducted her legal research, and helped her file them in federal court. One filing cited a case that doesn’t exist. It only appears in ChatGPT’s output and her court papers.
By the time Nippon filed this new lawsuit, she had submitted more than 60 documents across two cases, nearly all drafted with ChatGPT’s assistance. Nippon says the cost of defending a settled case has reached approximately $300,000.
The three claims are tortious interference with a contract, abuse of process, and unlicensed practice of law under Illinois statute. Nippon is seeking $10 million in punitive damages and a permanent injunction barring OpenAI from practicing law in the state. […]
When an AI drafts your pleadings, analyzes your case, tells you to fire your lawyer, and generates the legal strategy you take to federal court, is that the practice of law?
And if it is, does the company that built the tool bear responsibility for what it produces?
Courts are going to have to answer that. This may be one of the first cases where they try.
* Back to IBT…
Dela Torre is described in the reporting as a senior logistics coordinator, and she is not named as a defendant in the insurer’s lawsuit against OpenAI. OpenAI has rejected the allegations, with a spokesperson quoted as saying: ‘This complaint lacks any merit whatsoever.’
* An update from Stanisci…
Just recently, a federal lawsuit was filed in Chicago alleging that ChatGPT practiced law without a license.
THE NEXT DAY
The New York State Senate moved a bill that would make exactly that kind of conduct explicitly illegal, and give users the right to sue over it.
The bill is Senate Bill S7263, introduced by Senator Kristen Gonzalez. It passed the Internet and Technology Committee on a 6-0 vote. It prohibits AI chatbots from impersonating licensed professionals, including lawyers, doctors, and therapists, and bars them from providing substantive legal or medical advice. It requires operators to clearly disclose that users are talking to an AI.
The part worth paying close attention to: that disclosure is not a safe harbor. Under this bill, telling someone they’re talking to a machine doesn’t protect the operator from liability if the machine then acts like a lawyer and causes harm. The bill creates a private right of action. Users can sue. Damages and attorney’s fees are on the table. […]
But the policy direction is clear. Legislators aren’t just asking for transparency anymore. They’re asking who pays when an AI gives bad legal advice and someone relies on it. That’s a different question, and it has real consequences for every company deploying a chatbot that touches legal, medical, or financial matters.
* From the bill’s synopsis…
This bill would prohibit a chatbot from giving substantive responses, information, or advice, or taking any action which, if taken by a natural person, would constitute unauthorized practice or unauthorized use of a professional title as a crime in relation to professions whose licensure is governed by the education law or the judiciary law.
Proprietors may not waive or disclaim this liability by notifying consumers that they are interacting with a non-human chatbot system. A person may bring a civil action to recover damages, and if the proprietor has willfully violated this section, costs, attorney’s fees and other costs of litigation. Proprietors utilizing chatbots shall provide clear, conspicuous and explicit notice to users that they are interacting with an artificial intelligence chatbot program. […]
JUSTIFICATION: […]
This bill prohibits proprietors of A.I. chatbots from permitting the chatbot to give substantive responses, information, or advice or take any action which, if taken by a natural person, would constitute unauthorized practice or unauthorized use of a professional title as a crime in relation to professions whose licensure is governed by the education law and judiciary law. This bill ensures professional advice is provided only by licensed human professionals and not by artificial intelligence or chatbots.
* Illinois has a much narrower law on its books. From an IDFPR press release…
Governor JB Pritzker signed legislation on Friday that protects patients by limiting the use of artificial intelligence (AI) in therapy and psychotherapy services. The Wellness and Oversight for Psychological Resources Act prohibits anyone from using AI to provide mental health and therapeutic decision-making, while allowing the use of AI for administrative and supplementary support services for licensed behavioral health professionals. This will protect patients from unregulated and unqualified AI products, while also protecting the jobs of Illinois’ thousands of qualified behavioral health providers. This will also protect vulnerable children amid the rising concerns over AI chatbot use in youth mental health services.
Your thoughts?
- Steve Polite - Monday, Mar 9, 26 @ 12:09 pm:
This is a complex issue, but on the face of it, New York’s bill seems prudent. If we are going to require a professional license from a human being and hold that individual accountable to professional standards, then AI should be held to those same standards, with a right to sue when it falls short of them.
- Stephanie Kollmann - Monday, Mar 9, 26 @ 12:18 pm:
I shudder to think what the industry is spending to defeat basic regulations like S7263.
- DuPage Saint - Monday, Mar 9, 26 @ 1:27 pm:
Absolutely should be banned as practicing law without a license, and any medical advice that results in injury should be subject to medical malpractice. Can you imagine if AI is handing out medical advice during the next pandemic?
- Candy Dogood - Monday, Mar 9, 26 @ 1:45 pm:
===then AI should be held to those same standards with the right to sue when it doesn’t. ===
The tools we’re discussing here aren’t sentient and do not have personhood. There is a person behind the curtain who must be held accountable for building an advanced computer program that practices law.
I think it is important for the courts to understand that, and as they establish a precedent, that precedent cannot be having ChatGPT “sit for the bar.”