What’s Real Evidence?
The New AI Worry of Fabricated Videos, Images, and Communications—and How It’s Changing U.S. Courtrooms
By Santo V. Artusa, Esq.
Abstract
In an era when artificial intelligence (AI) can generate hyper-realistic fake videos, images, and communications, the very foundation of evidentiary reliability is under attack. Courts, judges, lawyers, and jurors across the United States are now confronting a deeply disruptive question: What counts as “real” evidence? This article explores how AI-generated or manipulated content (often called “deepfakes”) is affecting litigation in criminal and civil courts nationwide, how outdated evidentiary rules are being exploited, and how under-informed judges and lawyers are struggling to adapt. We also examine major case studies and propose future-proofing solutions for the American legal system. Imagine, for instance, if deepfakes had played a role in a matter like the one that inspired this article: an Illinois Bar Association account of a man jailed for three years in civil contempt for allegedly hiding $10 million.
I. Introduction: Welcome to the Age of AI Deception
Artificial intelligence has given us tools of extraordinary power. Among the most concerning are deepfakes—AI-generated or manipulated images, audio, and videos that mimic real people with startling accuracy. These tools allow anyone to forge communications, fabricate video evidence, or clone someone’s voice, all with minimal cost or expertise.
While these capabilities once seemed science fiction, they are now reality. The legal implications are immense: fabricated media is already appearing in litigation, and in the absence of updated rules or technical literacy, courts are vulnerable to admitting fake evidence or wrongly excluding real evidence based on unfounded AI skepticism.
II. What Are Deepfakes and Why Are They So Dangerous?
Deepfakes are typically created using Generative Adversarial Networks (GANs), in which two neural networks—a generator and a discriminator—compete to produce increasingly realistic outputs. The result is synthetic media so lifelike that even experts sometimes struggle to tell the difference. AI can now replicate facial movements, vocal intonations, and speech patterns, allowing for the creation of entire video statements that the subject never made.
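To make the adversarial dynamic concrete, the following is a minimal, illustrative sketch of one GAN training step in Python using PyTorch. It is a toy example, not any production deepfake pipeline: the network sizes, layer choices, and data dimensions are arbitrary assumptions, and real face-swapping systems use far more specialized architectures.

```python
# Minimal sketch of the adversarial setup behind deepfakes (PyTorch).
# Illustrative only; sizes and layers are arbitrary assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # hypothetical dimensions for illustration

generator = nn.Sequential(       # forges samples from random noise
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(   # scores samples as real vs. fake
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    fake_batch = generator(torch.randn(n, latent_dim))

    # The discriminator learns to separate real media from forgeries.
    d_opt.zero_grad()
    d_loss = (loss(discriminator(real_batch), torch.ones(n, 1)) +
              loss(discriminator(fake_batch.detach()), torch.zeros(n, 1)))
    d_loss.backward()
    d_opt.step()

    # The generator learns to fool the discriminator; as both improve,
    # the forgeries become progressively harder to distinguish from
    # real data, which is exactly the evidentiary problem.
    g_opt.zero_grad()
    g_loss = loss(discriminator(fake_batch), torch.ones(n, 1))
    g_loss.backward()
    g_opt.step()
```

The key point for lawyers is in that last comment: the training objective itself is “defeat the detector,” so detection tools are always chasing a moving target.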
The dangers include:
- Fabrication of evidence: Altered or created media may be submitted as real.
- Discrediting real evidence: The “liar’s dividend”—bad actors can deny real evidence by claiming it’s AI-generated.
- Witness intimidation: Deepfakes can be used to impersonate or blackmail witnesses.
- Loss of public trust: Jurors and the public may become skeptical of all digital media.
III. The Current Legal Standard: Authentication and Admissibility
Under the Federal Rules of Evidence, particularly Rule 901, evidence must be authenticated—i.e., shown to be what the proponent claims it is. Traditionally, this standard has been low, often satisfied by a witness testifying that the photo or video accurately reflects what they saw. In the deepfake era, this is no longer sufficient.
Evidence can now be forged with alarming precision, meaning:
- Chain of custody is not enough.
- Eyewitness authentication is unreliable.
- Cross-examination may fail to expose AI-generated deception.
Courts still rely heavily on outdated notions of digital trust, treating media files as more reliable than they actually are.
IV. Case Studies Illustrating Legal Vulnerabilities
1. Commonwealth v. Cheer Mom (PA)
A mother allegedly used deepfake technology to create fake videos of her daughter’s cheerleading rivals in compromising positions. The deepfake-related charges were dropped when investigators could not conclusively prove that the videos had been manipulated.
2. Wisconsin v. Rittenhouse
The defense argued that using pinch-to-zoom to enlarge video footage on an iPad might introduce “AI artifacts” that alter the image. The judge, by his own admission unfamiliar with the technology, put the burden on the prosecution to show that zooming did not manipulate the footage and limited how the video could be used.
3. United States v. Khalilian
The defense argued that an incriminating audio recording was fake, produced by AI voice cloning. The judge admitted the recording based on a witness’s familiarity with the speaker’s voice, a foundation that grows less reliable as voices become trivially easy to clone.
4. Valenti v. Dfinity
A lawyer caught on video claimed the footage had been fabricated with AI. The court ruled against him but noted that such deepfake defenses are becoming more common.
V. Technological Illiteracy Among Judges and Lawyers
Judges are often generalists, not technologists. Many on the bench have limited understanding of AI, machine learning, or digital forensics. Without foundational knowledge, they can be:
- Overly skeptical of legitimate evidence.
- Too credulous of manipulated content.
- Vulnerable to tech jargon from litigators.
CLE courses and judicial conferences are beginning to address this, but change is slow. In many cases, younger, more tech-literate attorneys may outmaneuver opposing counsel or the judge on AI issues.
VI. Procedural and Ethical Challenges
A. Lack of Formal Rules for AI Evidence
There is no uniform rule for addressing AI-altered media. Proposed amendments to Rule 901 are pending but years from enactment.
B. Expert Dependence
Judges now rely heavily on expert testimony to verify authenticity. Yet even experts disagree, and AI detection tools are imperfect.
C. Weaponized Discovery
Litigants may flood discovery with AI-altered materials to confuse or intimidate opposing parties. Others may stall or obfuscate with baseless AI authenticity claims.
D. Ethical Pitfalls
Lawyers who knowingly submit manipulated evidence risk sanctions, but proving intent is difficult. The flip side—wrongly discrediting genuine evidence as fake—can also be a form of litigation misconduct.
VII. Institutional Responses
1. The Judiciary
Courts are issuing guidance documents and bench cards to aid in AI evidence evaluation. Still, many rulings depend on individual judges’ discretion and technological comfort.
2. The Bar
State bars are beginning to require CLE training on technology, including AI and cybersecurity. The New York State Bar and ABA have both hosted panels on deepfakes.
3. Law Enforcement
Police departments are struggling to verify digital evidence. Many lack the tools or training to identify forgeries. The FBI and DHS have issued alerts about AI impersonation scams and synthetic media.
4. Federal Government
President Biden’s 2023 AI Executive Order directs federal agencies to create safeguards for trustworthy AI use. Treasury and FinCEN have also warned about AI risks in financial crime.
VIII. Future-Proofing the Courts
A. Proposed Rules
Legal scholars have proposed amending Rule 901 to include:
- A burden-shifting mechanism when deepfake claims arise.
- Pre-trial hearings to determine media authenticity.
- Forensic examination requirements for disputed evidence.
B. Judicial Education
Mandatory technology training for judges should be implemented at both state and federal levels. Familiarity with AI capabilities and limits is essential.
C. Technological Tools
Investment in forensic tools, such as blockchain authentication, metadata analysis, and machine-learning-based deepfake detectors, is key. Courts should consider funding digital evidence labs or appointing special masters for complex cases.
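As a simple illustration of what metadata analysis and integrity anchoring can look like in practice, the sketch below computes a cryptographic fingerprint of an evidence file and dumps its embedded EXIF tags. It is a minimal example assuming Python with the Pillow library installed; the exhibit filename is hypothetical, and a real digital-evidence lab’s workflow involves far more than this.

```python
# Two rudimentary forensic checks: a SHA-256 hash to anchor chain of
# custody, and a dump of embedded EXIF metadata. Illustrative sketch only.
import hashlib
from PIL import Image
from PIL.ExifTags import TAGS

def sha256_of(path: str) -> str:
    """Fingerprint the exact bytes of an evidence file; any later
    alteration, however small, changes this value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def exif_metadata(path: str) -> dict:
    """Read embedded EXIF tags (camera model, timestamps, software).
    Missing or inconsistent tags are a flag for closer review, not
    proof of forgery: metadata is itself easy to strip or fabricate."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    path = "exhibit_14_photo.jpg"  # hypothetical exhibit filename
    print("SHA-256:", sha256_of(path))
    for key, value in exif_metadata(path).items():
        print(f"{key}: {value}")
```

Even these basic checks show the limits of the tools: a hash proves only that a file has not changed since it was fingerprinted, not that it was authentic when captured, and metadata can be forged as easily as the media itself. That is why courts will still need trained examiners and, in hard cases, special masters.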
D. Ethical Guidelines
Bar associations must issue clear guidance on the ethical use of AI in litigation. Penalties for abusing deepfake arguments or submitting false evidence should be standardized.
IX. Conclusion: The Search for Truth in the AI Age
The legal system is only beginning to grapple with the consequences of AI-altered reality. As deepfakes and other synthetic media become easier to produce and harder to detect, courts must adapt quickly. Otherwise, they risk admitting false evidence, rejecting real evidence, and allowing justice to be distorted by illusion.
Judges and practitioners who do not keep pace with technology will find themselves increasingly manipulated—not by facts, but by forgeries. Preserving the rule of law in the AI era will depend on rigorous standards, better tools, and above all, education. The courtroom must remain a place where the truth prevails, even when technology tries to bury it.
Keywords: deepfakes, AI evidence, Federal Rule of Evidence 901, authentication, manipulated video, fabricated audio, legal ethics, courtroom technology, litigation strategy, judicial education
About the Author:
Santo V. Artusa, Esq. is a NY and NJ litigation lawyer based in New York City. He writes on evolving legal standards, courtroom advocacy, and the intersection of law and technology.