COMMENTARY: Will AI technology undermine the faith and integrity of our judicial system?

By A. Vince Colella
Audio and video have long been among the most reliable forms of evidence available to lawyers and prosecutors. For example, surveillance footage of a robbery in progress, clearly showing an armed person walking into a store, robbing the clerk, and escaping, would ordinarily make for an open-and-shut case. Advancements in AI technology, however, have made it possible to alter sounds and images in ways that are nearly undetectable. Manufacturing or altering evidence is nothing new, but the ability to distort reality has taken an exponential leap forward with “deepfake” technology. Rapid advances in artificial intelligence have created the ability not only to alter images but to create videos of real people doing and saying things that never occurred. Machine learning has made these fabricated images far more realistic and nearly impossible to detect, an evidentiary nightmare for the court system.
Fake video depictions of real people began to emerge on the internet in late 2017. Surprisingly, the technology did not require elaborate Hollywood cameras or editing equipment. It allows anyone with a smartphone to map movements and words onto someone else’s face and voice, making that person appear to say or do anything. And the more video that is fed into the deep-learning algorithms, the more convincing the result. The danger of deepfake technology is twofold. First, it may be used, as in the example above, to depict an act or statement attributable to a person that never took place. Second, it opens the door to a “deepfake bias” that can be used to delegitimize genuine audio and video evidence. Recently, Tesla was sued by the family of a man who died when his car crashed while its self-driving feature was engaged. During the trial, the family’s lawyers cited a 2016 statement by Tesla founder Elon Musk claiming that the Model S and Model X could drive themselves more safely than a person. Although Musk did in fact make the statement at a conference, lawyers for the car company suggested that he had been the subject of several fake videos showing him saying and doing things he had not, casting doubt on whether his statements about the safety of his vehicles were genuine.
The capacity to create undetectable videos of everyday people has cast a shroud of doubt over what we have come to accept as reliable. The ability to manufacture images and sound may well cause jurors to question otherwise trustworthy evidence: a double-edged sword of evidentiary deceit.
This has opened a debate among legal scholars over how to remedy deepfakes and the bias they create. The challenges include (1) proving whether audiovisual evidence is genuine or fake; (2) confronting claims that genuine evidence is a deepfake; and (3) addressing jurors’ growing distrust of audiovisual evidence. From an authenticity standpoint, we may see a sharp rise in the use of AI experts to present and challenge evidence. Analysis of metadata and source information will likely be used to establish the provenance of an image, and experts will be asked to weigh in on unusual or unnatural elements within it. For example, if an image exhibits the perfect symmetry or flawless patterns typical of AI-generated imagery, experts can testify to those telltale features. Deep forensic dives into otherwise reliable video evidence, however, will prove costly to litigants and disruptive to the efficiency of our judicial system.
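To illustrate what the first pass of such a review might look like, the short Python sketch below reads the embedded EXIF metadata from an image file. It assumes the Pillow library is installed, and the file name is purely hypothetical; this is only a simplified illustration of the kind of metadata check an expert might begin with, not a substitute for full forensic analysis.

```python
# Minimal sketch: list the EXIF metadata embedded in an image exhibit.
# Assumes Pillow is installed (pip install Pillow); the file path is hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> dict:
    """Return EXIF tags as human-readable names mapped to values."""
    image = Image.open(path)
    exif = image.getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = summarize_exif("exhibit_photo.jpg")
    if not tags:
        # Stripped or absent metadata is not proof of fakery,
        # but it is an anomaly an examiner would flag for follow-up.
        print("No EXIF metadata found.")
    for tag, value in tags.items():
        print(f"{tag}: {value}")
```

Camera make, capture time, and editing-software fields recovered this way can corroborate, or undercut, a proponent’s account of how a photograph came to exist, which is why metadata review is typically the cheapest starting point before deeper pixel-level analysis.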
Complex legal issues created by the evolution of science and technology are often solved with the basic tenets of jurisprudence. Historically, questions surrounding the presentation of evidence in legal proceedings have been governed by the existing, non-exhaustive means of authentication in the state and federal rules of evidence. While authenticity can be proved in several ways, lawyers primarily rely on witnesses to confirm that what we see and hear in an audiovisual reproduction reflects real life. Because AI-generated fabrications are so difficult to detect, however, forensic and scientific examination will likely be the best way to separate reliable evidence from the deepfakes.
The importance of maintaining judicial integrity cannot be overstated. Keeping forensic pace with artificial intelligence is therefore paramount to the fair administration of justice and to preserving the public’s faith in the system. Lawyers and judges must stay mindful of the potential for fraudulent audiovisual evidence and ensure that jurors are not duped into finding genuine evidence suspect.
————————
Vince Colella is a founding partner of Southfield-based personal injury and civil rights law firm Moss & Colella.
Posted October 13, 2023