What Currency History Teaches Us About Deepfakes

Karl T. Muth discusses what the history of currency might teach us about how to regulate so-called “deepfakes,” AI-created images and videos that misleadingly portray things that did not occur. He argues that criminalizing mere possession of such images and videos is overbroad for precisely the same reason that criminalizing mere possession of counterfeit currency is bad policy.
Until forty years ago, when people heard “counterfeiting” or “piracy,” they thought of two separate things: the production of imposter currency and the swashbuckling corsairs of the age of sail. In the 1980s, the mainstream media began applying these terms to software; in the 1990s, to musical recordings; and onward, to nearly everything vulnerable to duplication.
The story of paper money in China is told incorrectly in economics and history departments around the world. Many believe paper currency was introduced uneventfully in the Yuan Dynasty (perhaps because the basic unit of the renminbi is, confusingly, called the yuan). In fact, counterfeiting was a serious problem in the Song, Jin, and Yuan Dynasties and remained one well into the 13th and 14th centuries.
The problem was so severe that the public execution of counterfeiters is one of the few elements of Chinese criminal procedure described in Marco Polo’s diaries from the late 1200s (to find these passages, search “chao,” “yuan,” and “Khanbaliq court”). In China, unlike in the majority of contemporary jurisprudence, mere possession of counterfeit currency was a crime.
This distinction is the fulcrum of this blog post.
Most will think, colloquially, that the word “uttering” means to voice something inelegantly or to make one’s opinion known after a few too many pints. But “uttering” has a different meaning in U.S. law, which makes it a federal crime to be involved in the “uttering” of counterfeit bills of value (a U.K. or Commonwealth law and policy audience can think of this as analogous to the Forgery and Counterfeiting Act 1981).
In the U.S. context, “uttering” refers to trying to represent that an instrument known to be counterfeit is negotiable as legal tender. In the U.K. context, which is more explicit and less arcane in its verbiage (though the Act does several times use the word “uttering” in this confusing way), “passing counterfeit notes or coins as genuine” is clearly criminalized. This concept of uttering or passing notes as genuine is crucial due to its “critical act” and mens rea implications.
Put succinctly, there are three elements, each of which must be proven beyond a reasonable doubt:
Knowledge: In both the U.S. and U.K., for a person to be liable for criminal wrongdoing, he or she must have known the instruments or currency involved were in fact counterfeit.
Intent to Deceive: In both jurisdictions, for criminal liability to attach, a person must mislead, or create conditions that would mislead, another into mistaking the fake for the genuine.
Decisive Act: The act of “uttering” or “passing” the coin, note, or other instrument allegedly of sovereign value must occur; without this act, possession alone is not a crime.
The classic film The Running Man, a 1987 Paul Michael Glaser film loosely based on Stephen King’s 1982 novel, is perhaps the first mainstream film to depict a “deepfake” scenario. In it, Ben Richards is a man falsely accused of a crime, and deepfake video evidence is used to frame and convict him. In this instance, a megacorporation broadcasts the deepfake video as genuine to perpetuate and bolster the false accusations.
It is the broadcasting of the deepfake video that damages Mr. Richards, not its existence.
This is a crucial distinction.
Now, if we want to create new criminal offenses, we must think about what defenses may be asserted.
To understand what defenses should be available, we must engage in a brief taxonomical exercise. Some jurisdictions embrace the three-element criminal framework of the U.S. and U.K. (requiring an uttering, passing, or distribution as genuine), while others have what I would suggest are overbroad criminalizations of mere possession (with the extremely narrow exception of images that do not enjoy First Amendment protection).
While many U.S. states (and the U.K., if the Defamation Act 2013 is read as modified or augmented by the Online Safety Act 2023) criminalize the distribution of deepfakes by persons knowingly misleading the audience and presenting them as true depictions of actual events (conduct I agree is problematic), other jurisdictions, such as South Korea (where legislation revising and expanding the relevant offenses is currently in debate) and Singapore (via POFMA and its Elections Act of 2021), criminalize the mere possession of deepfake images or videos even if they are never distributed or shown to others.
Having reviewed the literature in this area and several recent cases, I strongly believe that, in the jurisprudential epoch we inhabit, at least four defenses should be available against the slew of new criminal offenses being created concerning deepfakes: 1) ignorant utterance, 2) lack of utterance, 3) genuineness, and 4) consent.
Taking these in order, I suggest this is the correct legal framework for criminal liability:
- If a person possesses and transmits a political meme or online video he did not manufacture, genuinely believing it to be a real video of a presidential candidate saying he believes in fairies and dragons, this person should not be criminally liable (though under the Singapore and South Korean laws cited above, liability may attach).
- If a person merely manufactures or possesses for his own amusement such a video of a candidate expressing unusual beliefs about fairies and dragons, but does not show it to anyone and does not represent it as evidence of any genuine event or statement by the candidate, this person should not be criminally liable (but in some jurisdictions liability may attach).
- If a person is accused of possessing or transmitting a misleading deepfake video of a candidate saying he believes he arrived at the presidential debate riding a dragon, and that person can later, through corroborating evidence, prove the genuineness of the video (that it is not a deepfake), this person should not be criminally liable (analogous to the doctrine of truth as a defense to defamation).
- If a person manufactures a deepfake that is misleading but was created with the subject’s consent, for instance a video of a political candidate playing the piano when the candidate cannot play the piano (but wished to be portrayed playing the piano), the person should not be criminally liable (Singapore’s law may criminalize this, as it could mislead the public about “a candidate’s qualifications or abilities”).
Finally, I suggest and again emphasize that the “utterance” must be a pivotal, necessary element of any crime regarding deepfakes. Imagine a person of extraordinary talent, like the antihero in William Friedkin’s To Live and Die in L.A., an artist and printmaker who manufactures an essentially perfect copy of a $1 note. He frames it on the wall of his apartment but does not represent or suggest to anyone that the note is legal tender.
This should not be, and in the U.S. context is not, criminal behavior. But spending that $1 note at the local bodega is, and should be, a serious federal crime.
Similarly, it is difficult to understand how a deepfake image or video is damaging, or a matter of concern for the courts, unless it is distributed in a way that endorses its credibility and legitimacy. If an “utterance” does occur, such as distributing the video imagined above via a broadcast news outlet so that a candidate’s credibility suffers because everyone now thinks he believes in dragons and fairies, then real damage has been done, and it should be dealt with in civil dispute or, as is now contemplated in several Asian jurisdictions, with criminal sanctions in particularly egregious cases.
Karl T. Muth studied law at the undergraduate level in the Netherlands before earning J.D. and M.B.A. degrees in the United States, the latter with a concentration in Economics from the University of Chicago. Muth then earned his M.Phil. and Ph.D. at the London School of Economics, his dissertation focusing on models of entrepreneurial decisionmaking, including a study of over 3,000 small firms, the most comprehensive study of its type at the time.
Thanks to Danielle Citron, Carrie Goldberg, Alice Locatelli, and others for many conversations over the years that have informed my views in this area; all opinions expressed here are my own and do not represent the views of any institutions, clients, employers, or other organizations with which I work or have worked.
Further Reading
For laypeople or students who want to understand more about this fast-moving area of regulatory concern and its public policy implications, I suggest the following Lawfare article:
K. Parreno et al., New and Old Tools to Tackle Deepfakes and Election Lies in 2024, Lawfare (2024).
For a prescient overview of the key issues from several years ago, I highly recommend Robert Chesney and Danielle Citron’s 2019 piece in Foreign Affairs:
R. Chesney & D. Citron, Deepfakes and the New Disinformation War, 98 Foreign Aff. 147 (2019).
For a deeper dive in this area, I suggest these recent scholarly works:
M. Labuz & C. Nehring, On the Way to Deep Fake Democracy?, 23 Euro. Poli. Sci. 454 (2024).
R.W. Painter, Deepfake 2024, 14 J. Nat'l Sec. L. & Pol'y 121 (2023-2024).
J. Thayer, Defamation or Impersonation?, 66 Wm. & Mary L. Rev. 251 (2024).