* Reuters…
Judges and attorneys in Illinois are allowed to use artificial intelligence tools in their work under certain conditions, the state’s top court said in a new policy.
The use of AI by lawyers, judges, self-represented litigants, clerks and other court officials “may be expected, should not be discouraged, and is authorized provided it complies with legal and ethical standards,” the court said in a policy issued on Wednesday that takes effect Jan. 1. […]
The policy does not require lawyers to disclose the use of AI in filings. An accompanying fact sheet said attorneys will be subject to sanctions for submitting “legally or factually unfounded pleadings.”
Other courts have mandated disclosure of AI, and judges in several have sanctioned lawyers whose AI-assisted filings contained errors or made-up case citations. […]
Lawyers and judges who use AI technology are accountable for the final work product and must review all AI-generated content, the court said. AI tools also must protect sensitive information.
From the court’s policy…
The Court acknowledges the necessity of safe AI use, adhering to laws and regulations concerning privacy and confidentiality. AI applications must not compromise sensitive information, such as confidential communications, personal identifying information (PII), protected health information (PHI), justice and public safety data, security-related information, or information conflicting with judicial conduct standards or eroding public trust.
This policy reflects the Illinois Supreme Court’s commitment to upholding foundational principles while exploring the potential benefits of new AI technologies in a dynamic landscape. The Court will regularly reassess policies as these technologies evolve, prioritizing public trust and confidence in the judiciary and the administration of justice. Judges remain ultimately responsible for their decisions, irrespective of technological advancements.
Click here for the fact sheet.
- Walker - Thursday, Dec 19, 24 @ 2:01 pm:
No effective way to police it anyway
- TheInvisibleMan - Thursday, Dec 19, 24 @ 2:13 pm:
Like many areas of society, AI even in a primitive state is probably already going to surpass the lowest rungs on the human intelligence ladder that exist right now. I see no problem with this, but then again I also don’t feel threatened by AI.
Lawyer: “Doctor, before you performed the autopsy, did you check for a pulse?”
Witness: “No.”
Lawyer: “Did you check for blood pressure?”
Witness: “No.”
Lawyer: “Did you check for breathing?”
Witness: “No.”
Lawyer: “So, then it is possible that the patient was alive when you began the autopsy?”
Witness: “No.”
Lawyer: “How can you be so sure, Doctor?”
Witness: “Because his brain was sitting on my desk in a jar.”
Lawyer: “But could the patient have still been alive nevertheless?”
Witness: “Yes, it is possible that he could have been alive and practicing law somewhere.”
- Suburban Mom - Thursday, Dec 19, 24 @ 2:24 pm:
from the fact sheet:
===Confidentiality - If using a public generative AI tool (like ChatGPT), your input prompt is being handed over to the technology. Ensure you do not compromise sensitive information. Do not input any information such as (non-exhaustive list): confidential or privileged information; personal identifying information; protected health information; justice and public safety information;===
Which basically means you can’t use it to write any court filings because you can’t put any facts into it.
Which is, actually, the correct outcome, because ChatGPT and other solutions ingest your data and train on it. And while there are commercial solutions for sale that say they don’t train on your data, you need to be really dang sure you understand the underlying architecture of a) the LLM you’re using and b) the data processing setup you’re using. You also need to be thoroughly versed in the contract with the provider AND literate enough with AI technology to understand when AI contracts are promising things that are not possible: you CANNOT remove data once it’s been entered into a training set, even accidentally. You have to retrain the whole model, at a cost of millions or billions of dollars.
- TheInvisibleMan - Thursday, Dec 19, 24 @ 2:41 pm:
“Which basically means you can’t use it to write any court filings”
GPT is just the consumer version of AI.
I can train a private AI on my personal stash of data on my own servers in the cloud, and then package it up and install the model on my local device to have it be completely self-contained and invisible to the world.
There are many other options besides ChatGPT, and I assume that’s why the court used the umbrella statement of AI to ensure to include the entire universe of AI options available, instead of the more narrowly defined ChatGPT consumer AI product.
- @misterjayem - Thursday, Dec 19, 24 @ 3:01 pm:
It’s just like the old computing axiom says:
– MrJM
- Nope. - Thursday, Dec 19, 24 @ 4:38 pm:
I can train a private AI on my personal stash of data on my own servers in the cloud, and then package it up and install the model on my local device to have it be completely self-contained and invisible to the world.
Oh please. There is no conceivable way the minuscule data set you put in your server constitutes AI. That’s part of the problem surrounding this issue. People don’t even know what AI is or what constitutes its makeup.
- hisgirlfriday - Thursday, Dec 19, 24 @ 4:40 pm:
Lawyers have been using AI in legal documents for years with Spell Check/Grammar Check in word processing software and through various case search programs and online search.
Besides just being a luddite who thinks AI companies are overhyped and pushing scammy products on people, I am concerned with how using ChatGPT-like stuff in legal work goes against preservation of legal privileges and puts sensitive personal info (HIPAA-protected stuff, personal identification info) at risk of disclosure and distribution beyond what is normally allowed.