Michigan Law
Three alumni whose careers have placed them at the forefront of law and technology returned to campus recently to share their perspectives and advice in the panel “AI, the Future of the Legal Profession, and You.”
David Caragliano, ’09, global head of ads safety at Google; Clarissa Cerda, ’92, chief legal officer and secretary at Pindrop Security Inc.; and Mary Snapp, ’84, former deputy general counsel and recently retired vice president of strategic initiatives in Microsoft’s Office of Corporate External and Legal Affairs, answered questions from an audience of law students about the transformative impact AI is having on the academic and professional legal landscape.
“The pace of change driven by artificial intelligence is truly remarkable, and nowhere is that change more profound than in the intersection of law and technology,” said Neel U. Sukhatme, David A. Breach Dean of Law and professor of law, in his introductory remarks.
“AI is reshaping everything from legal research and contract analysis to advocacy and access to justice. As we navigate this transformation, it’s important to recognize that with these tremendous opportunities come new ethical and professional responsibilities.”
The panel discussion was moderated by Ashish Prasad, adjunct professor at Michigan Law. The audience asked a number of questions, including:
What lessons have you learned that would help students prepare to leverage technology to advance their careers?
Snapp: You don’t have credibility as a lawyer unless you understand the technology that you’re helping to support. The most important thing I can tell you is not to just leverage the technology; learn the basics of the technology.
Caragliano: In a couple of years, the tool will be something entirely different. Think about the skills that it takes to be a good lawyer: how to read closely and think critically. Discover how you learn best and become comfortable with being a beginner. Also, technology is not neutral; it’s all about ethics. It’s packed with all kinds of human biases, values, and assumptions. So keep that in mind, and question it. The hardest question, particularly from a policy perspective, is not whether something is legal but whether it is right.
Cerda: There are three interdisciplinary skills I use every single day in navigating the world of law and technology: legal reasoning, technological literacy, and ethical judgment. Those skill sets are what will keep you at the forefront of things. In addition, there are three personal attributes I rely upon: being proactive, curious, and a lifelong learner. These are what will keep you relevant.
Mary, at Microsoft, what safeguards did you see to prevent bias within the data produced by generative AI?
Snapp: We established an executive committee in 2016 to consider principles of use of AI and established the Office of Responsible AI (ORA) in 2019 to review ethical and safety concerns related to AI. Generative AI emerged at the end of 2022.
This internal team—lawyers, researchers, engineers, anthropologists—sits in the general counsel’s office and focuses on ensuring that Microsoft’s products and services using AI are responsible and safe. They develop standards for use and review as new products and services emerge. Their job is to understand the uses of artificial intelligence, particularly generative AI, in the product space and categorize what is referred to as “sensitive use” cases. These could include potential harm to an individual or to fundamental human rights, as well as those related to bias.
This ORA team works with engineers in each product group to identify those sensitive use cases for products and services under development. The ORA team then begins a real iterative process, which takes place alongside the engineers to try to mitigate that risk.
Other members of that team conduct a red teaming process, where they literally try to break the product to make it do what it’s not supposed to do. All of this is done internally before the product ships to help ensure that it is used responsibly and safely.
What are your thoughts about policymaking to ensure accountability instead of relying on businesses to take regulation into their own hands?
Caragliano: How do you balance risks with opportunities? I’ve seen basically two flavors of regulation here. You’ve got the EU AI Act, a precedent-setting, large omnibus law that tries to regulate all possible uses of AI, and it does that in what it calls a risk-based, proportionate approach. Another approach, as we’re starting to see in the U.S., is a more vertical one, focused on specific harms, like the use of AI for deception in the election context.
To enable innovation, it makes sense to focus on the use, especially high-risk uses, rather than the technology.
Industry standards are incredibly important because different industries have different nuances. I work on digital advertising. Using generative AI in a digital ads context can be harmful, but it’s very different from using it to make a medical diagnosis, for example.
Regulation should reflect that.
You need interoperable standards for it to work. Still, you don’t want an overly rigid rule that prevents platforms like ours from innovating, evolving, and keeping up with scammers, bad actors, and adversaries.
What will we see in the coming years regarding policy issues that we’ll have to battle with?
Cerda: The biggest challenge is ensuring that policies have the impact you want them to have. As you examine some of these laws, they either become outdated too quickly or cannot be future-proofed.
I also think that when you’re trying to protect citizens or consumers, oftentimes the laws you put in place to protect them end up protecting the fraudsters. So I think there needs to be more policy on adversarial testing, because it’s the only way to uphold the privacy laws in the United States.