
The Bad Blood of Deepfakes

Commercialisation & Licensing

The exponential and unstoppable rise of technology, seen most clearly in the rapid advance of all things ‘artificial intelligence’ (AI), is at once marvellous and frightening.

Recently, the world of Taylor Swift, a performer who is currently on her own exponential and unstoppable rise, suffered a Glitch when explicit deepfakes of the singer/songwriter began circulating on social media platform X.

In our previous article, OK, Computer: AI, Music and IP Law in Australia, we discussed some of the challenges posed by AI-generated music. In this article, we explore how AI has been used, is being used and may be used in the future in relation to the creation of deepfakes (in both visual and audio formats) and ask how intellectual property and related laws currently address the misuse of deepfakes.

What is a Deepfake? (aka I Knew You Were Trouble)

‘Deepfake’ typically refers to content, whether video, images or audio, that has been altered or created using AI to resemble a specific real person or persons. The term arises from the use of “deep learning”, a method in AI for constructing and optimising mathematical models (neural networks), some of which are capable of generating deepfakes.

Apologies if we get technical for a moment, but here’s a brief explanation of how the Hoax occurs. Some deepfake-capable models are trained by means of a Generative Adversarial Network (GAN) procedure, which involves two networks: a Generator and a Discriminator. The Generator is trained to output synthetic samples, be they images, video or audio. The Discriminator, meanwhile, is trained to evaluate authenticity and distinguish synthetic samples from real ones. Training the two networks simultaneously, each against the other, yields a Generator capable of producing hyper-realistic samples, or deepfakes (a minimal sketch of this training loop appears below).
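For readers who want to see the Generator/Discriminator interplay in code, here is a minimal sketch of a GAN training loop in Python using PyTorch. Everything in it, from the toy one-dimensional ‘real’ data to the network sizes and hyperparameters, is illustrative only and bears no relation to any actual deepfake system.

```python
# A minimal GAN training loop: the Generator learns to produce samples the
# Discriminator cannot distinguish from "real" data. Toy example only.
import torch
import torch.nn as nn

LATENT_DIM, SAMPLE_DIM, BATCH = 16, 32, 128

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, SAMPLE_DIM),
)

# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(SAMPLE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(BATCH, SAMPLE_DIM) * 0.5 + 2.0  # stand-in for real media
    fake = generator(torch.randn(BATCH, LATENT_DIM))

    # 1. Train the Discriminator to tell real samples from synthetic ones.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_loss.backward()
    d_opt.step()

    # 2. Train the Generator to fool the (just-updated) Discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    g_opt.step()
```

Trained at scale on images, video or audio of a real person, the same adversarial loop is what yields convincing deepfakes.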

Among the most recent and high-profile examples are the explicit deepfakes of Taylor Swift mentioned above. X has since blocked searches for Taylor Swift on its platform in an attempt to crack down on the sharing of the deepfakes.

Musicians, celebrities, politicians and other people in the public eye are particularly at risk of deepfakes because of the amount of audio and visual material available online that can be used as input/training data in deep learning. Ultimately, the more complete the data on which an AI model is trained, the more ‘believable’ the output.

Beyond Tolerating It – What Australian law can and can’t do to protect against deepfakes

Taylor Swift, recently named Time Magazine’s 2023 Person of the Year and currently in the midst of a world concert tour, is arguably one of the most prominent victims of deepfakes to date, and this recent example has highlighted the potential for AI to cause harm. But what can be done to address the misuse of deepfakes?

From an Australian law perspective, no legislation specifically addresses deepfakes. However, some existing laws could apply to the misuse of deepfakes, with varying degrees of success in protecting victims.

  1. Copyright

If a deepfake reproduces a substantial part of a copyright work, such as a sound recording or image, then the deepfake is likely to constitute copyright infringement under Australian law. However, it is the copyright owner who must bring an action for infringement. Because many of the materials used as input/training data to generate a deepfake of a celebrity are images taken by paparazzi, the copyright in those images is unlikely to be owned by the celebrity who is the subject of them, and copyright infringement is therefore unlikely to be a cause of action available to that celebrity.

There is a further evidentiary hurdle: in most cases it would be difficult to prove that a particular copyright work was used to train an AI model, unless a specific identifier, such as a watermark, is visible in the AI-generated output (as occurred in the recent US proceedings brought by Getty Images against Stability AI Inc).[i]

Additionally, even if the celebrity can establish that they do own copyright in any of the images used to train the AI model, there can be difficulties in identifying the individual or entity responsible for the creation of the deepfake, in order to bring a copyright action, and in establishing that a “substantial part” of a specific copyright work has been reproduced.

Finally, fair dealing defences to copyright infringement may also be available in some limited circumstances, such as where the deepfake is a parody or satire (and is likely to be recognised by internet users as such). A popular example of parody deepfakes is the TikTok account @deeptomcruise, which posts deepfakes of Tom Cruise doing un-Tom-Cruise-like stuff.[ii]

  2. Consumer protection laws

The Australian Consumer Law (ACL), set out in Schedule 2 of the Competition and Consumer Act 2010 (Cth), may provide a cause of action for those depicted in a deepfake used in a commercial context with a connection to Australia (for example, one targeted at Australian consumers). The ACL prohibits conduct in trade or commerce that is misleading or deceptive, or likely to mislead or deceive, as well as other unfair practices such as false or misleading representations about goods and services. However, as with copyright law, the victim of a deepfake can only take action under the ACL if they can first identify an individual or company responsible for the use of that deepfake.

It’s been a particularly Cruel Summer here in the southern hemisphere for Taylor Swift. She was recently depicted in another deepfake: a promotional video for a fake Le Creuset cookware giveaway designed to harvest money and personal data.[iii] If directed at Australian consumers, this deepfake could lead consumers to think that the luxury cookware brand Le Creuset had the sponsorship or approval of Taylor Swift (when that was not the case), and could therefore amount to a breach of the ACL.

  3. Online Safety Act 2021 (Cth)

In 2021, Australia introduced the Online Safety Act 2021 (Cth) (OSA) to expand and strengthen existing laws relating to online safety, particularly around the non-consensual sharing of intimate images. The OSA also established specific and targeted powers for the eSafety Commissioner.

Section 75 of the OSA prohibits the posting, or threatening to post, of an intimate image (a term defined in section 15) of a person without their consent on any social media service, relevant electronic service or designated internet service. While the provision does not refer to deepfakes expressly, a recent decision of the Federal Court of Australia confirms that it does apply to deepfakes.

In October 2023, the eSafety Commissioner commenced Federal Court proceedings against Mr Anthony Rotondo for allegedly creating and posting intimate images of Australian public figures, who did not consent to the creation of the deepfakes, on the website mrdeepfakes.com, in breach of the OSA. Despite receiving a removal notice from the eSafety Commissioner and being ordered by the Federal Court to remove the offending material, Mr Rotondo failed to do so.[iv] He was found to be in contempt of court and a warrant was issued for his arrest and detention.[v] He was also later fined $25,000.[vi]

  4. Social media policies and terms

Deepfakes can be widely distributed on various social media platforms, each of which has its own terms of service and, in some cases, its own policies on deepfakes.

Given the difficulties in identifying the individuals responsible for creating deepfakes, platform reporting mechanisms and policies are sometimes the best method for having nefarious deepfake content taken down and for preventing its ongoing dissemination.

For example, Facebook has a policy on ‘manipulated media’ which treats such content as prohibited (and therefore removable by Facebook) unless it is parody or satire.[vii] X also has a policy on synthetic and manipulated media, but with wider exceptions, including for memes and satire; animations, illustrations and cartoons; commentary, reviews, opinions and/or reactions; and counter-speech.[viii]

  5. Other causes of action

Other potential causes of action to address the misuse of deepfakes include privacy and defamation. Notably, the Australian Government is currently reviewing Australian privacy laws, which are expected to undergo significant reform.

The Australian Government also announced, on 17 January 2024, that it will establish an expert advisory group to support the development of mandatory guardrails for the development and deployment of AI in high-risk settings.[ix] One of the possible guardrails relates to the watermarking of AI-generated content. While mandatory watermarking would make deepfakes easier to detect, those producing nefarious deepfakes are unlikely to comply, as the sketch below illustrates.
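To show why compliance is the weak point, here is a minimal sketch in Python (using the Pillow imaging library) of the simplest form of watermarking: a provenance label stored in an image’s metadata. The “AI-Generated” key is a hypothetical label of our own invention, not any real standard, and real proposals are considerably more robust; but the core weakness is the same, in that whoever controls the file can simply decline to add the label, or strip it by re-saving the image.

```python
# A minimal sketch of metadata-based provenance labelling for AI-generated
# images, using Pillow. The "AI-Generated" key is hypothetical, not a real
# standard; production schemes embed signals that are far harder to remove.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Write a provenance label into a PNG's text metadata."""
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("AI-Generated", "true")
    image.save(dst_path, pnginfo=meta)  # dst_path must end in .png

def is_tagged(path: str) -> bool:
    """Check for the label. Re-saving the file without the metadata
    strips it, which is why voluntary labelling alone cannot stop
    a bad actor from distributing an unlabelled deepfake."""
    return Image.open(path).text.get("AI-Generated") == "true"
```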

This Is Why We Can’t Have Nice Things

It is not all Bad Blood when it comes to deepfakes; there have been positive uses of the technology. For instance, Google and DeepMind were able to create a synthesised voice for former NFL linebacker Tim Shaw, who was no longer able to speak following his ALS diagnosis.[x]

Deepfake technology has the potential to bring extraordinary benefits, but also provides a potent weapon for those seeking to do harm with very real political, geopolitical and commercial ramifications. Until the legal framework for preventing and protecting individuals from the damage caused by deepfakes is strengthened, celebrities such as Taylor Swift will continue to be Haunted by the non-consensual use of their own image online.


This article forms part of DCC’s Music and IP initiative.


[i] James Vincent, ‘Getty Images sues AI art generator Stable Diffusion in the US for copyright infringement’, The Verge (Article, 7 February 2023) <https://www.theverge.com/2023/2/6/23587393/ai-art-copyright-lawsuit-getty-images-stable-diffusion>.
[ii] Miles Fisher, ‘How I Became the Fake Tom Cruise’, The Hollywood Reporter (Article, 21 July 2022) <https://www.hollywoodreporter.com/feature/deepfake-tom-cruise-miles-fisher-1235182932/>.
[iii] Megan Schaltegger, ‘Taylor Swift Fans Were Scammed By AI-Generated Le Creuset Ads’, Delish (Article, 10 January 2024) <https://www.delish.com/food-news/a46339456/taylor-swift-le-creuset-scam/>.
[iv] eSafety Commissioner v Rotondo [2023] FCA 1296.
[v] eSafety Commissioner v Rotondo (No 2) [2023] FCA 1351.
[vi] eSafety Commissioner v Rotondo (No 3) [2023] FCA 1590.
[vii] Meta, ‘Manipulated Media’, Facebook Community Standards <https://transparency.fb.com/en-gb/policies/community-standards/manipulated-media/>.
[viii] X, ‘Our synthetic and manipulated media policy’, X Help Centre (twitter.com).
[ix] Minister for Industry and Science, ‘Action to help ensure AI is safe and responsible’ (Media Release, 17 January 2024) <https://www.minister.industry.gov.au/ministers/husic/media-releases/action-help-ensure-ai-safe-and-responsible>.
[x] Kyle Wiggers, ‘DeepMind and Google recreate former NFL linebacker Tim Shaw’s voice using AI’, VentureBeat (Article, 18 December 2019) <https://venturebeat.com/ai/deepmind-and-google-recreate-former-nfl-linebacker-tim-shaws-voice-using-ai/>.