This story opens in medias res, in the middle of heart-pounding action as the US Congress passes a modification to the Communications Decency Act of 1996.
Congress did not pass a new law. Congress can no longer get it together enough to pen, debate, and pass legislation that reflects the times. But they passed a modification to an old law. They crowed their victory to any media outlet that would have them. They passed it, as they are wont to do, under cover of darkness thirty seconds before everyone fled town for an extended holiday break. A victory for bipartisanship!
The bill is on the President’s desk for signing into law where, no doubt, he’ll be surprised such a thing has appeared at all.
Technology changes quickly; the law changes hardly at all. Over time, technology and the law drift apart. The real world and the legal landscape no longer resemble one another. Without organizing skills, the will to act, and a mass, motivating social movement, government biases toward inaction and the status quo.
Congress modified the law only because they, too, were part of this social movement.
Section 230 of the Communications Decency Act of 1996 states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” It preempts all state and local laws: “[n]o cause of action may be brought, and no liability may be imposed under any State or local law that is inconsistent with this section.” Designed originally to protect telecommunication providers who ran fiber from place to place, Section 230 protects any platform – an intermediary that carries third-party content.
When deepfakes first appeared, they were at best a novelty and, at worst, poorly executed and easily debunked. They were a weird Internet joke. The AI researchers and ethicists warned that deepfake quality would improve. But no one listened to the ethicists because they’re bummers and their ideas don’t lead to expanding profits, explosive stock offerings, and world domination.
As the platforms pushed for the masses to create content on their platforms, the content -> advertising dollars -> stock price growth flywheel spun on. Content creation became cheaper. Regular people gained access to what was once state-of-the-art artificial intelligence-powered audio and video manipulation algorithms. Tools went online. Anyone could blend their content creation tools with artificial intelligence and make “a new thing.” What once cost millions to produce now required a simple yearly subscription of $99.
The future was widgetized.
When the first authentic deepfake appeared on the internet – one rapper replaced another rapper in a video and distributed it – it was a novelty. Sure, no one could tell the real thing from the fake, but it wasn’t no big thing. It was a music video!
Media companies rushed to monetize this new technology. They stole talent from each other in a pure digital form. Influencers rushed out videos mixing and matching video archives from the past with personalities of the future. People held imaginary conversations with historical figures in full-motion video.
It was fun, new, fresh, artistic, and engaging.
Then: the crime.
What was once a tool for media creation rapidly became a tool for crime because, at the end of everything, all tools are tools for crime. If a criminal can break the law to make a fast buck, a criminal will break that law over and over. It’s like a fraud spigot or a magic machine that turns scams into cash. Once active, the crimes are optimized, packaged, and marketed to keep stomping on that cheap money machine.
Some examples of deepfake fun unleashed on the Internet:
- Companies fought over influencers to push their products. They spun up scam influencer channels and feeds stealing names, faces, even voices. Fashion influencers felt it the worst. No one knew who was real and who was a fake account funneling customers to a competitor. These flourished on platforms.
- The sleazier paparazzi houses scammed up fake celebrity controversy with faker videos and spread it all over social media with clickbaity headlines.
- Multi-nationals were swindled out of millions as corporate pirates scammed up rival CEO likenesses on financial calls and made terrible deals, stole corporate secrets, and fired vital personnel.
- Disinformation artists rampantly stole politician likenesses. They put words into politicians’ mouths and then reposted the videos on platforms where they went viral, hoping to influence mass social opinion for their dark purposes.
- Voice phishing (vishing) ran rampant across the Internet. If people thought regular phishing was terrible, this was worse. Scammers stole full video and voice identity from personal content on public social media accounts. They called their targets and socially engineered them out of their data. Security moved to face biometrics, but scammers stole those, too.
Fraud ran rampant as reality collided with the Internet. People no longer knew what or who to believe. What was real? Who was real? Even that call from a trusted friend or parent might not be real. That call from your boss might not be real.
What is the nature of reality when information before our eyes cannot be trusted?
Scammers (being scammers) found further ways to monetize confusion and mayhem. They built fraud-automation packages on their platforms, sold access, and scaled on up. They leveraged the global cloud. They launched deep web marketing campaigns to attract other fraudster customers. Smart scammers sold software to governments and terrorist networks.
At first, the platforms shrugged faintly, waved their hands at the problem, muttered something about it being too hard, and continued to monetize real and fakes. They pointed at section 230, which protected them from liability on their platforms. When pressed by the government, the platforms hired legions of underpaid workers in sweatshops globally to stare at fakes and try to discern what was fake and what was real. This operation, of course, failed.
Industries appeared and rolled out products to validate and protect people from deepfake fraud. Corporations and individuals alike paid handsomely for Artificial Intelligence-powered products to protect themselves from rampant “face theft.” RealLife ™, an AI-based platform company, provided a popular online service that distinguished real videos from deepfakes and integrated with voice, video, AR, and fully-immersive chats to flag vishing attacks.
Nothing would have happened had the sitting Senators and Congresspeople themselves not been affected by AI-based fraud and deepfakes. Eventually, foreign actors managed to evade RealLife ™. They impacted the results of the 2048 election up and down the ballot with fraudulent video “proof” of murder and treason. Then, and only then, lawmakers thought, hey, we should pass something? Like, a law? Do we still do that?
They chose not to touch Section 230 – that would have made the platforms, who paid handsomely into reelection campaign PACs, liable for policing their content. Instead, Congress did something exciting and novel. Here we are at the top of the story, where Congress passed something and changed society.
The modifications to the Communications Decency Act of 1996 are as follows:
- Every individual citizen owns their physical likeness under their personal copyright. Your voice, your likeness, your physical appearance in the real (not digital) world is copyrighted to you.
- No one can legally be you without a direct licensing agreement.
- Nominally, an ordinary citizen holds the copyright on their likeness for 70 years after their death. If the citizen was a celebrity, a politician, or an influencer, the law becomes complicated, messy, and confusing. The law encoded follower counts into copyright judgments. If you had fewer than 10K followers, 70 years. More? 125 years.
- This copyright only applies to a person’s physical being.
- The copyright does not apply to mathematical representations of a likeness to protect facial recognition technology. A cohort of the Justice Department and authoritarian corporate interests ensured they could still collect faces, look people up automatically, spy on them, and abuse privacy at will.
- While Section 230 still protected the platforms, it did not protect the people on the platforms, allowing citizens to sue each other for rampant copyright violations.
Congress referred to this new section as the Digital Personal Copyright Act of 2050. Done with their work, and now signed into law, they handed the deepfake problem to the Federal Communications Commission, the Commerce Department, and the Courts to sort out. Then they congratulated themselves on “doing something.”
The flurry of lawsuits heralded the meteoric rise of “RoboLaw”: AIs loaded with the entire corpus of law and authorized by the state to provide legal services. No AI could fail the bar – they could search all case law in the history of humanity – so states granted waivers. AIs could work as lawyers as long as companies registered the AIs with the state.
RealLife ™ rolled out a system for people to access quick and easy Legal AI to sue their neighbors. The RoboLaw AIs helped citizens issue takedown notices and copyright infringement letters left and right. It helped celebrities and influencers better license and monetize their likenesses without the need of expensive corporate lawyers, and it helped corporations go after whomever they wanted.
It was a revolution in contract and copyright law, but not enough to stop the deepfakes.
RealLife ™ launched the ultimate hammer: Personal Digital Rights Management (DRM). Now, with the force of law behind it, people could register themselves with the RealLife ™ PDRM platform complete with AI-based RoboLaw services to enforce the integrity of their in-reality human copyrights. The platform scoured the Internet and automatically issued takedown notices, filed lawsuits, filed complaints with the FCC, or notified law enforcement. Actions were all based on guidance from the RoboLaw AIs, and their interpretation of a Personal Copyright Violation under the Digital Personal Copyright Act of 2050.
PDRM was itself protected by the Communications Decency Act of 1996 from research, cracking, spoofing, or hacking. Researchers were dubious about the effectiveness of the program. Some researchers claimed it was as scammy as the deepfakes. Yet, Corporate RoboLaw AIs blocked any reasonable measure to make the RealLife ™ PDRM system better.
As a result, hacked PDRM floated all over the dark web. Criminals could buy up cracked personal copyright and lease it on dark web platforms for short slices of time, like a timeshare, to loot, steal, or pretend to be famous people with abandon. For enough money and enough connections, criminals sold full real identities wholesale.
Meanwhile, against the onslaught of robot lawyers, law, and legal remedies, deepfake fraud became a low-rent, widgetized, commodity business. More and more AIs popped up to protect people from vishing. AIs scrubbed social media platforms. The big business was in selling cracked elite personal copyrights, not cranking out low-rent stolen videos. Crime went where the profit was.
Time moved on. Crime moved on. So did Congress. So did the corporations, celebrities, influencers, and regular people. So did the big platforms, who continued to be immune from prosecution for whatever happened on their systems – where whatever happened, happened.
This one convergence of tech and law was a single moment in time. This weird artifact of personal copyright and DRM existed in the world as a new thing to work around. Tech and law continued to diverge wildly, causing disruptions and madness. This whole sordid episode drifted away in time like everything in tech does — replaced by something new, neat, and criminal weeks or days later.
Until the rise of personalized robots, many decades later.
At about the time human copyrights began entering the public domain, the first personalized robot companions appeared on the market. Originally, robots were bland, generic, and repetitive. They wore the same dozen faces. They worked, but no one bonded emotionally with these new robotic companions. No one made them a member of their family.
Calvin Robotics went about changing that fact. They wanted their customers to love their robots and think of them almost as people. Calvin Robotics built lifelike robots with unique faces expressing near-real human emotion.
While uniqueness was the key to overcoming the robot uncanny valley, the roboticists at Calvin Robotics realized their design AIs could not craft the perfect human face. The AIs came close, close enough for market penetration, but something was off. The designs from the design AIs still lacked human authenticity. And since global copyright law protected human identity with enforced PDRM, the law barred Calvin Robotics from using current human faces on their robots without complicated and expensive legal processes.
The Calvin Roboticists approached the Internet Archivists. Zettabytes of data capturing the entire public Internet of decades past sat on cold disks rotting in salt mines. All that data, forgotten after the bankruptcies of RealLife ™ and the old, giant mega-platforms, lay moldering. The archivists were able to recover the faces of millions of people – a minuscule percentage of the people who existed online at that time – with full-motion audio and video. All DRM-free, and all free for use.
Not long after, robots walked off the line wearing the faces of long-dead humans. Humans were reborn into the public domain. And, from the public domain, back under Calvin Robotics’ licensing and copyright.