Spring 2024 Research Labs
Generative AI Dos & Don'ts
Don't let the wonders of modern technology get to your head.
Start
Sign In Sheet
OR SIGN IN HERE: https://go.fiu.edu/labsignin
Agenda
Today we'll cover:
Why this is important
Things to do with AI
Things not to do with AI
Extra Info & End
Why is this important?
AI messes up both accidentally *and* on purpose.
A few lawyers have been busted for citing nonexistent case law. On March 8, a Florida attorney was suspended from the U.S. District Court for the Middle District of Florida. He was found to have violated the Florida Rules of Professional Conduct by filing pleadings that contained frivolous legal arguments based on fake cases. Opposing counsel is the one who spotted it.
Why is this important?
AI messes up both accidentally *and* on purpose.
And of course, there's more!
- New York:
- 2 experienced lawyers submitted a legal brief with SIX fake cases and were fined along with their firm.
- A special education law firm used ChatGPT to estimate fees sought from NYC after representing a child.
- The AI estimated a $113k fee; a federal judge rejected it and awarded $53k.
- Colorado
- New lawyer (less than 5 years' experience) suspended for incorrect citations *and* made-up cases from ChatGPT.
- Excuse: “This is all so new to me,” he said. “I just had no idea what to do and no idea who to turn to.”
- Massachusetts
- Lawyer submitted 3 separate pleadings that relied on fictitious and nonexistent cases.
- Blamed interns when he couldn't explain their inclusion. (rude.)
The point: Lawyers at all experience levels are messing up by not double-checking AI-generated work product.
Why is this important?
This example of a Google Gemini flub is from Axios (click this text to see the article). In an effort to avoid bias, an overcorrection was made. This level of control over output is happening with text, too, and companies are trying to make it less apparent and to fix it. The point: In the context of research, this kind of bias produces bad results. We need to research how the law *is*, not how the law should be or could be.
Principles to support my upcoming suggestions
The tech is developing and so are the policies regarding use.
As with humans, incomplete data sets create bias.
These AI models only "know" what they've learned, and only share it the way they've been told to.
These types of policy may be constantly changing:
- New ToS for each version of an AI product
- Investor agreements pertaining to AI-generated IP use
- Attorney ethics rules
AI models trained on the open web or other non-specialized information are going to be flawed for legal research. Also, what you put in might be added to the training data and made available to third parties. And the amount of information a model provides, and the way it says it, have been decided by someone, somewhere. It's not an oracle.
Generative AI "wants" to give you an answer, even if it doesn't have complete information. This creates bias. Much like your bias if you only know one 'side' of a conflict!
The "Dos"
Platforms like Westlaw and Lexis are adopting AI tools. Some are generative; some are just enhanced search engines. These are trained on legal materials (good!). Other tools may be trained on the open web and then tweaked to filter results.
Ensure client confidentiality by asking questions.
Keep an eye on ethics opinions in your state.
Review generated documents very, very carefully.
Understand what the model is "trained" on.
Review opposing counsel documents very, very carefully.
Experiment with low-stakes tasks, like drafting emails and marketing.
If your firm has forms, templates, or precedent, use them first. Build your own library of forms and templates you like so you can stay consistent, because generative AI may give you something different each time you ask.
Last note on "Dos" -
The "Don'ts"
Plug confidential client info into insecure AI systems.
Substitute it for your own careful judgment.
Don't do any of these things:
Use it to create works that are unprotectable or subject to other agreements.
Assume all case law cited by opposing counsel is real
Use it like a search engine.
Use general generative AI tools to find the law.
A private investment company may specifically require that any IP created by a company be free of certain IP pitfalls, like open-source code and generative AI output, because such works aren't as readily protectable under U.S. IP law.
Example:
Helpful reassurance
AI isn't ready to replace you.
It can save time on low stakes tasks.
A good answer will show its sources.
Take time to learn about prompt engineering and other ways to refine how you work with AI. It won't replace your job, but someone who knows how to use it with speed, accuracy, and integrity might.
Drafting emails and client letters, and even creating marketing materials, can be done in less than half the time. Save time on these kinds of things; mistakes here will seldom carry dire consequences.
Lexis is a good example: when you have its generative AI create something for you or answer a question, you'll get a list of supporting resources. You should still check them, though.
Get More Help
Need more assistance? You can contact us in a variety of ways!
Call or Text Us
Appointments
Email Us
Stop by the Reference Desk
Make a Research Appointment
Research Labs Template
Katelyn Golsby
Created on April 5, 2024