MSU Law captivated by prominent Harvard professor analyzing artificial intelligence

Harvard Law School Professor Ruth L. Okediji discusses future advances in artificial intelligence at MSU’s third annual Dean’s Speaker Series.

Photo courtesy of MSU Law

MSU College of Law hosted the first in a series of talks by renowned legal and technology leaders examining the current and future impact of artificial intelligence (AI). Kicking off the third annual Dean’s Speaker Series was Harvard Law School Professor Ruth L. Okediji, who shared her expertise on how future advances in artificial intelligence will affect the U.S. and global communities.

Welcoming Professor Okediji and engaging with her in a question-and-answer session was MSU Law Dean Linda Greene, who has brought speakers of national and international prominence to her three Dean’s Speaker Series programs since 2021. By way of introduction, Professor Okediji shared her journey to academia with the large audience of faculty, staff, and students. She talked about her upbringing as the daughter of two academic parents, surrounded by strong influences that pushed her toward a successful law career even though she had not planned on going to law school. She found her calling the first time she stepped into a classroom.

“I walked into that law school classroom to teach an intellectual property (IP) class. Unknown to me, and despite much trepidation, I stepped into my destiny. The only way to describe it is that I came alive in a way I cannot fully express. Everything about me – heart, mind, and spirit – felt fully engaged and energized. I deeply wanted my students to appreciate the power and role of law in shaping human flourishing. I had never felt so strongly the urgency for others to care about and to learn how to pursue the paths of justice.  From that day on, I was gone. It was like falling in love at first sight and never looking back. I knew then that law teaching was my calling.”

An important highlight of her career was watching her ideas about IP law debated in international fora and becoming policy in many countries. Persuading political leaders to make difficult policy choices to improve human development outcomes has been “immensely satisfying,” Okediji observed. “People want lives that are meaningful; they long for conditions that offer hope for the future of their children and communities. Law plays a crucial role in fostering and, ultimately, in delivering hope.”

Another rewarding moment in her academic career was being twice honored with a Shatter the Ceiling Award by the Harvard Women’s Law Association for mentoring women in law school and beyond.
She described these awards as “high and deeply humbling honors” and was surprised when she was also asked by Harvard Law School students to give the Last Lecture not long after joining the faculty.
“One thing I said in that lecture was the importance of turning off the mute button.” Okediji recalled a “profound lesson” about the assumptions we make about others. During the lecture, she shared an experience from a Zoom meeting in which she repeatedly tried to speak, only to be talked over each time she began to say a few words. Confused about why no one would let her speak, she grew increasingly upset, only to realize that she had been on mute the entire time.

Expanding on this experience and applying it to everyday life, she advised students “to take a moment to learn about your own blindness to what you may be doing that makes it hard or impossible for others to hear you.” She emphasized that “taking a moment to look at where you stand, what you are saying, and whether the things you have to offer are constructive enough to make it worthwhile for people to give heed to what you say” is an enduring consideration for life and lawyering.

Turning to the discussion topic of artificial intelligence, Professor Okediji highlighted the categories of predictive and generative AI. She noted that concerns surrounding generative AI, like ChatGPT, include the data developers use to train Large Language Models (LLMs). Generative AI has important implications for many fields of law, including employment law, privacy, and IP law. When Dean Greene asked whether she believed that AI outputs are a form of creative work eligible for copyright protection, Okediji responded by emphasizing that humans are created in God’s image and endowed with minds, souls, and spirits. Citing the IP Clause in the Constitution, she stressed the moral and ethical considerations that attend the human condition, and the special duties that law guarantees to safeguard the dignity of a person.

Humans, she argued, are creators with moral impulses, a conscience, and a deep longing for purpose and meaning. Humans and firms need incentives that help shape markets for creative outputs and overcome pernicious free-riding; AI systems do not. Machines may be able to do things, but in her view, there will always be a limit to what machines can be. “Machines imitate humans – not the other way around, at least not now. It is ironic to say that machines create like humans. Rather, it is that humans want machines to create like them. Machines ultimately are expressions of what we want them to be. Their seduction – and their danger – lies precisely within our human limits.”

Professor Okediji noted that technological advancements are generally viewed as an unalloyed good for society. History, however, tells a much more nuanced story: digital technologies have increasingly proven harmful to mental health, physical wellbeing, democracy, and much more. She acknowledged that although AI has enormous economic potential, there are significant “unknowns” that merit careful policymaking for its effective regulation. Artificial intelligence can hardwire bias into decision-making and obscure questions of agency for discriminatory and tortious behavior.

She pointed out, for example, that AI is trained to discriminate among objects – e.g., is this a car or a bicycle? This may seem like mere categorization, but, she argued, categories are imbued with moral choices. Should a person with repeated criminal convictions be categorized as a “criminal”? As a likely recidivist? She argued that even the most benign decisions about training datasets have important ramifications for AI, thus requiring engineers and firms to maintain strong guardrails anchored in ethical commitments. She stressed the importance of human oversight, adding, “In my view, the [training] models produce anemic decisions. These are not wise decisions, these are not discerning decisions, these are not merciful decisions, these are not relational decisions. Facts are not truth.” In sum, Professor Okediji reminded the audience that AI is only as trustworthy as the data and processes that lead to its development.

Regarding debates among copyright scholars over the usefulness of fair use as a defense for using copyrighted works as AI training inputs, she is skeptical that fair use can adequately resolve the many questions that arise given the complex way models are designed and trained. “There are design choices that will clearly affect copyright liability, such as whether to use unlicensed data or whether the model memorizes specific copyrighted works. There are also questions about users’ role in controlling prompts that direct a system to produce substantially similar copies of a work. In my view, fair use cannot do all the work, though I agree it will be a robust tool.” She argued that other copyright doctrines could also be applicable, such as the idea/expression distinction and the text-and-data-mining exceptions that exist in Europe.

In conclusion, Professor Okediji said she’s “deeply skeptical” that the U.S. will get anywhere close to an effective regulatory framework for AI soon. “Courts will have to take the lead and, given the lawsuits already underway, judicial decisions may eventually provoke legislative outcomes.” Professor Okediji closed by pointing to the opportunities for lawyers in the AI space. “I believe there will be an important role for lawyers in shaping the kind of regulatory framework that emerges both nationally and globally. We need lawyers who can advocate for employees, for artists, and for musicians; we need transaction lawyers who will negotiate and draft agreements for firms that create or deploy AI systems. Among other things, these agreements must address liability for harm, indemnification, and choice of law. There is no shortage of issues and problems regarding AI – some of the hardest questions are likely still ahead of us. We need a cadre of thoughtful, dedicated, and skilled lawyers helping courts to tackle these questions. I hope you will be part of this rapidly evolving field.”

A video of the Speaker Series may be found on the College of Law website.