Hagan and her team have just finished phase one of their research into generative AI and access to justice.
“Our lab at Stanford, the Legal Design Lab, has been really focused on understanding, both from community members’ perspective and from those who provide frontline legal help at legal aid and court help centers: is generative AI worth the hype?” Hagan says. “Do people really have an appetite to use it for legal problem-solving scenarios, and do frontline providers think that AI can be helpful either behind the scenes or as they deliver services?”
To get a picture of this, the Design Lab conducted many interviews and placed people in simulated scenarios in which they used AI tools for legal help. Hagan wants to identify the most promising use cases for AI from the perspective of legal professionals, and to think through the best ways to assess quality and risk. In the second phase, the team will launch research and development projects based on the initial findings and create national networks to build shared AI infrastructure.
As a legal services lawyer representing low-income people in New York City for 22 years, Nori says he has “tried everything that is out there,” including building a dozen GPT tools to help tenants with their legal problems and to test out use cases for generative AI. He also worked with the company Josef to create a tool for a nonprofit service called Housing Court Answers, which fields about 50,000 calls from New York City tenants each year.
“We built a copilot model for these operators so that the gen AI can be in the background like a really experienced supervisor and can feed answers to the person who can then relay them to the caller as a first step,” Nori says. “Eventually, maybe the gen AI can stand on its own and it can just be a chat bot.
“And I think if this works, this could revolutionize the triage part of legal services, which is such a huge expenditure of resources and time in probably 100% of the legal services nonprofits across the country,” he continues.
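Sketched loosely in code, that copilot pattern might look something like the following. This is purely an illustration under assumed names and logic, not the tool Nori built with Josef or the actual Housing Court Answers system:

```python
# Illustrative sketch only: a hypothetical copilot loop in which generative AI
# drafts an answer for the hotline operator, who reviews or edits it before
# anything is relayed to the caller. Names and logic are placeholders.
from dataclasses import dataclass

@dataclass
class Suggestion:
    draft_answer: str   # AI-drafted response for the operator to review
    sources: list[str]  # reference material the draft drew on

def draft_suggestion(caller_question: str, knowledge_base: dict[str, str]) -> Suggestion:
    """Hypothetical stand-in for a call to a generative AI service."""
    # A real deployment would prompt an LLM with the question plus retrieved
    # housing-law reference material; here we only simulate that step.
    matched = [text for topic, text in knowledge_base.items()
               if topic in caller_question.lower()]
    draft = matched[0] if matched else "No confident answer; escalate to a supervisor."
    return Suggestion(draft_answer=draft, sources=matched)

def operator_review(suggestion: Suggestion) -> str:
    """The human stays in the loop: the operator accepts, edits, or rejects."""
    print("AI draft:", suggestion.draft_answer)
    edited = input("Edit the answer (or press Enter to accept): ").strip()
    return edited or suggestion.draft_answer

if __name__ == "__main__":
    kb = {"evict": "Tenants generally must receive written notice before an eviction case can be filed."}
    suggestion = draft_suggestion("My landlord says I'm being evicted tomorrow", kb)
    print("Relayed to caller:", operator_review(suggestion))
```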
Nori worries that lawyers will stall the progress of AI tools by measuring them against a “perfect” system, rather than against the actual level of service currently provided to low-income Americans.
Hagan has put a lot of thought into how to lab-test AI tools effectively and efficiently while keeping things moving forward responsibly. She explains that the focus should be on two measures: can the AI system match or beat the quality of the best humans doing the work today, and can it match or beat the efficiency of current practices?
“And once we see the AI system in the lab setting matching or beating the best available human quality at that task…then it’s time to go to the copilot phase where we’re rolling it out with a human strongly in the loop using this AI, but having a lot of ability to spot problems [and] record those, correct the AI, but putting it in the field because we are not going to know performance issues or risk issues until we start doing controlled pilots in the field,” Hagan continues.
Then, Hagan explains, with a human working closely with the AI, we can truly measure how common errors, bias and “hallucinations” are, rather than just speculating.
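As a rough illustration of what that kind of pilot-phase measurement could look like, the sketch below logs reviewer corrections and compares the AI’s error rate to a human baseline. The field names and threshold are assumptions for illustration, not the Legal Design Lab’s actual protocol:

```python
# Illustrative sketch only: one way a controlled pilot could log reviewer
# corrections so error and "hallucination" rates can be measured against a
# human baseline. Field names and the threshold are assumptions.
from dataclasses import dataclass

@dataclass
class PilotLog:
    reviewed: int = 0
    errors: int = 0          # drafts the reviewer judged legally or factually wrong
    hallucinations: int = 0  # drafts citing rules or cases that do not exist

    def record(self, correct: bool, hallucinated: bool) -> None:
        self.reviewed += 1
        if not correct:
            self.errors += 1
        if hallucinated:
            self.hallucinations += 1

    def error_rate(self) -> float:
        return self.errors / self.reviewed if self.reviewed else 0.0

def ready_for_wider_rollout(log: PilotLog, human_baseline_error_rate: float) -> bool:
    """Loosely mirrors Hagan's framing: the AI should match or beat the best
    available human quality before moving past a tightly supervised pilot."""
    return log.reviewed > 0 and log.error_rate() <= human_baseline_error_rate

# Example: 200 reviewed drafts with 6 errors and 1 hallucination, against an
# assumed 5% human-baseline error rate.
log = PilotLog()
for i in range(200):
    log.record(correct=(i % 34 != 0), hallucinated=(i == 0))
print(log.error_rate(), ready_for_wider_rollout(log, 0.05))
```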
Talk Justice episodes are available online and on Spotify, Stitcher, Apple and other popular podcast apps. The podcast is sponsored by LSC’s Leaders Council.
Legal Services Corporation (LSC) is an independent nonprofit established by Congress in 1974. For 50 years, LSC has provided financial support for civil legal aid to low-income Americans. The Corporation currently provides funding to 131 independent nonprofit legal aid programs in every state, the District of Columbia, and U.S. territories.