By Annie Hagstrom
Michigan Law
Isobel Blakeway-Phillips, ’25, and Michael “Mike” Tiu Jr., LLM ’25, are the 2025 recipients of Michigan Law’s Jon Henry Kouba Prize, which recognizes the best paper or papers on European Union law or on international peace and security among nations.
Jon Henry Kouba, ’65, established the prize in 2003. The winners receive a $1,000 stipend.
Meet the Kouba Prize Winners
Isobel Blakeway-Phillips, ’25

“The AI Act and the GDPR: Implications for Personal Data and Methods of Redress”
In her paper, Blakeway-Phillips looks at the intersection of the EU’s General Data Protection Regulation (GDPR) and the more recently enacted Artificial Intelligence (AI) Act to understand how the relationship between the two might impact the privacy of EU citizens.
The GDPR, which came into effect in 2018, is the EU’s data privacy and security law that sets strict rules for how organizations collect, process, and store citizens’ personal data. In 2024, the AI Act was passed to ensure AI systems are safe, transparent, and respect fundamental rights within the EU.
“I was interested in how information is being input into or extracted from AI algorithms, how that information falls under the GDPR’s definition of ‘personal data,’ and if or how the GDPR—as it stands—protects the information and rights of EU citizens when AI is involved,” she said.
Blakeway-Phillips worked at the Law Society of England and Wales during the passage of the GDPR, which sparked her initial interest in privacy law. She went on to earn a master’s degree in legal and political theory, then matriculated at Michigan Law.
She credits both Professor Daniel Halberstam’s European Union Law class and Professor Sylvia Lu’s class, Artificial Intelligence Regulations: US, EU, and Asian Perspectives, with shaping what eventually became her prize-winning paper.
In her findings, Blakeway-Phillips outlines two possible ways to prevent data breaches arising from the use of AI: Either keep the concept of “personal data” narrow and limited, safeguarding the need for human interaction and anonymization, or fundamentally redefine what “personal data” means to broaden the GDPR’s scope of protection over the rights of EU citizens and their information.
“What ends up happening with technology that evolves quicker than people can regulate it is we essentially jerry-rig pre-existing systems around it, but that’s not always sound,” said Blakeway-Phillips.
“I would like to believe the European courts will choose to be overprotective of personal data in some way because I think taking a looser approach is too big a risk.”
Michael “Mike” Tiu Jr., LLM ’25

“Treating Social Media Corporations as Quasi-State Actors to Address the Use of Artificial Intelligence in Content Moderation”
Tiu’s work examines the use of AI systems by social media companies to moderate content shared on their channels, arguing that it raises concerns about transparency and social media users’ freedom of expression.
“Content moderation on social media channels cannot be done by humans alone by virtue of the volume of what’s put out there every day,” said Tiu.
“It’s understandable, then, that social media companies turn to emerging AI technology to deal with this problem. My paper is not suggesting stopping AI moderation of social media channels altogether, but understanding how both human moderation and AI moderation might work together.”
He goes on to suggest treating social media companies as quasi-state actors rather than private entities in order to hold them accountable for protecting human rights, especially as the need for regulations across social media channels evolves alongside the emergence of new technologies.
“Private entities, in large part, have already been functioning like states,” said Tiu. “For example, when a state passes laws that regulate expression, social media companies in said states adopt those laws and incorporate them in their own terms of service. These companies also have their own set of community guidelines beyond state law that reflect their values.”
He continued, “These regulatory models give social media companies more power than we would typically expect of other private entities in relation to speech. Treating them as quasi-state actors would also relieve states of accountability for the things happening on social media.”
Tiu is a law professor at the University of the Philippines and frequently engages in discussions with students about navigating freedom of expression online.
He has returned to Michigan Law as an SJD student, working on a dissertation that examines the responsibilities of corporations under international law.