Attorney William Meyer joins producer/host Coralie Chun Matayoshi to discuss how Hawaii laws protect the commercial use of your voice, likeness and other attributes even post-mortem, how AI issues were resolved in the actors’ and screenwriters’ strikes, how AI contributes to misinformation and deepfakes for political smearing and sexual exploitation, and why the biggest danger of AI use in the media is lack of trust.

Q.  There is so much content on the internet for AI to copy or manipulate, and AI technology has become so sophisticated that it seems anyone can do anything with your face, body, voice, and even your mannerisms and movement.  So, who owns your face?

In the absence of a federal law governing the right of publicity, states have enacted a patchwork of laws regulating the use of your likeness, including your face, body, and voice.  Hawaii passed a right of publicity law in 2009 that grants all persons, living or deceased, a right of publicity: a property right in the commercial use of one’s name, voice, signature, likeness, and other commercially valuable attributes.

Clearview AI settled a lawsuit filed by the ACLU over repeated violations of the Illinois Biometric Information Privacy Act.  The company agreed not to sell its database of over 20 billion facial photos, scraped from the web and sites like Facebook, LinkedIn, and Instagram, to most private individuals and businesses in the U.S.  While Clearview can still sell the database to federal and state agencies in the U.S., its technology is banned in Canada, Australia, and parts of Europe for violating privacy laws.

George Carlin’s estate settled a lawsuit alleging violations of California’s right of publicity law and federal copyright law over a fake hourlong comedy audio special called “I’m Glad I’m Dead” that purportedly recreates Carlin commenting on current events.  Carlin died in 2008.

Q.  Wasn’t the use of AI one of the issues in the Actors’ strike because they wanted to protect their face, body and work?

The actors feared that generative AI and other digital technologies would increasingly be used to replicate their faces and voices to create entertainment ordinarily performed by paid actors.  The resulting agreement contains detailed provisions regarding the creation and use of both “Digital Replicas” of individual performers and entirely AI-generated “Synthetic Performers.”

  • Digital Replicas of an actor’s voice or likeness:

“Employment-based” replicas are created when a studio captures the likeness of a performer while shooting a movie or TV show “with the performer’s physical participation,” with the intent to use the replica to “portray the performer” in a scene or soundtrack “in which the performer did not actually perform.”  In this case, the studio must get the actor’s explicit consent for each additional movie or TV show in which the replica is used and must pay the actor at least the “day performer rate.”

“Independently created” replicas are made when a studio uses “existing materials” to portray the actor in scenes they did not actually shoot.  Similarly, the studio must get the actor’s explicit consent for each use of the digital replica, but monetary compensation is negotiable because studios usually already own the copyright to the “existing materials” from their movies and TV shows.

  • Synthetic Performers

Similar to deepfakes, AI is used to generate an actor’s likeness and insert it into a scene where the actor never actually appeared.  If a studio wants to replace an actor entirely with a human-like synthetic performer, it must give the union an opportunity to bargain in good faith for appropriate compensation.

Q.  The Screenwriters went on strike over the potential use of generative AI by studios to drive down wages and deprive them of creative assignments and writing credits.  What concessions were made on AI issues to end the strike?

Generative AI like ChatGPT uses algorithms that learn patterns from data to produce written material.  The agreement provides that AI-generated material will not be considered “source material,” which matters because writers receive less compensation and fewer legal rights when credited only for a screenplay rather than for both the story and the screenplay.  The writer retains the bulk of the authority to decide whether to use AI.  The question of whether anyone can use writers’ existing written material to train generative AI systems was reserved for another day.

Q.  Newspapers like The New York Times and book authors like John Grisham sued OpenAI for using copyrighted material from the internet to train its AI-powered chatbot.  What is this all about?

The New York Times, book authors like John Grisham, and comedian Sarah Silverman sued OpenAI (owner of ChatGPT) over the company’s practice of scraping copyrighted material off the internet to train its AI-powered chatbot.  The New York Times also alleges unfair competition because ChatGPT and Microsoft’s Copilot use its material and then divert web traffic away from the newspaper and other copyright holders, who depend on the advertising revenue generated from their sites to keep producing their journalism.  Sometimes these chatbots copy Times articles word-for-word with attribution; other times they “hallucinate” and falsely attribute misinformation to the New York Times, damaging its reputation.  Twitter was sued by music publishers for copyright infringement for allowing users to post music to the platform without permission.  Photographers are discovering digital replicas of their copyrighted photos being used to train generative AI systems.

Q.  Beyond fights over commercial use of likenesses and copyrighted material, AI heightens the danger for misinformation and deepfakes used for political smearing and sexual exploitation.

A fake photo of an explosion near the Pentagon went viral on Twitter last year, causing stocks to dip.  A fake photo of Pope Francis in a puffer jacket also went viral last year.  And as the saying goes, a lie can travel around the world and back again while the truth is still tying its shoelaces.  In one deepfake, a vocal critic of Bangladesh’s ruling party was falsely depicted wearing a bikini in an AI-created video.  The U.S. has been slow to respond to this type of election manipulation, but the European Union is mandating special labeling of AI deepfakes starting in 2025.  There were bills in the Hawaii State Legislature this year to keep AI deepfake messaging out of Hawaii elections, but the bills died.

YouTube is rolling out new rules for AI content that require creators to disclose whether they used AI to create realistic-looking videos.  Violators can face having their content removed or being suspended from YouTube’s revenue sharing program.  YouTube’s privacy complaint process will allow requests to remove AI-generated video that simulates an identifiable person, including their face or voice.  Record labels or distributors can also ask YouTube to take down AI-generated music that “mimics an artist’s unique singing or rapping voice.”  And political ads will now be required to carry a prominent warning label.

Q.  What is the biggest danger of AI use in the media?

AI-generated content cannot be trusted, and in a world where everything is suspect, people end up deciding what to believe based on misinformation rather than facts.

To learn more about this subject, tune into this video podcast.

Disclaimer:  this material is intended for informational purposes only and does not constitute legal advice.  The law varies by jurisdiction and is constantly changing.  For legal advice, you should consult a lawyer who can apply the appropriate law to the facts of your case.