Artificial intelligence (AI) is an exciting prospect in advertising. It’s accessible to everyone with internet access. It can create advertising, personalised content and experiences, images, and written content (including social media posts). It can also power a chat bot on your website and do your data analysis for you. And behind the scenes on your social media and search engine platforms, AI is in full swing, determining who sees which advertisements.
But there are a few things to be mindful of if you are using AI to generate or power advertising for therapeutic goods (such as medicines or medical devices).
The advertising of therapeutic goods to Australian consumers is subject to special rules set out in the Therapeutic Goods Act 1989 (Act) and the Therapeutic Goods Advertising Code (Code). These rules do not prohibit the use of AI in advertising. However, they do affect how you can use AI in your advertising. Here are Seeside Advisory’s top tips.
1. The advertiser is still responsible for the content
If you use AI to generate content that is (or forms part of) your advertising, you are still responsible for its compliance: you will need to review the content and satisfy yourself that it complies before releasing it.
In particular:
- You will need to be able to substantiate all the claims made in the advertisement with suitable evidence (see the TGA guidance on this subject).
- Images conveying or portraying advertising claims are also subject to the Act and the Code. For example, an image of a person altered with AI to convey the weight loss achievable from the advertised good must comply with the Act and the Code, including the specific requirements for weight management products. Under the Code, advertisers can only use visuals that are consistent with what can be achieved on average from the use of the advertised good (clause 23(4)(c)). Further, it would need to be clear to viewers that they were seeing an AI-generated image; otherwise the advertisement would be misleading (breaching clause 8(1)(a) of the Code).
- AI may draw on advertising tools and tactics that are acceptable for other types of health products but explicitly prohibited in medicines and medical devices advertising (e.g. endorsement by doctors, or suggestions that other products are ineffective).
- Consider whether you need to clearly identify the advertisement (or parts of it) as AI generated in order to avoid misleading consumers.
- AI is also unlikely to automatically include the mandatory statements the Code requires in medicines and medical devices advertising (e.g. ‘Always read the label’), so you will need to add these yourself.
Remember that personalised content promoting the use or supply of therapeutic goods, even if it is only being seen by one person, is still advertising and will need to comply with the Act and the Code (unless it is being provided by a health practitioner directly to a patient as part of a course of treatment[1]).
Our June 2024 blog post contains more information about the definition of advertising and the types of content that could be captured.
2. Chat bot content can be advertising
Using AI to streamline engagements with customers online is helpful for businesses. However, it’s important to remember that the responses from a chat bot may themselves be considered advertising, especially when read in the context of a promotional website. And if they are advertising, they need to comply with the Act and the Code (including the mandatory statements).
Remember that the definition of ‘advertise’ in the Act includes the promotion of the use, as well as the supply, of the goods, giving it a very broad scope. So you will need to review your chat bot scripts and test them carefully for compliance before deploying your chat bot.
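If your chat bot’s responses are scripted, an automated pre-check can help surface obvious problems before they reach a human reviewer. The sketch below is illustrative only: the function name and the phrase lists are our own hypothetical examples, not a statement of the Act or Code requirements, and a keyword scan cannot replace a proper compliance review of the full scripts.

```python
# Illustrative pre-release scan of scripted chat bot responses.
# The phrase lists below are hypothetical examples only and are not a
# complete or authoritative statement of the Act or Code requirements.

PROHIBITED_PHRASES = [
    "doctor recommended",         # endorsement by a health practitioner
    "clinically proven to cure",  # claim requiring substantiation it is unlikely to have
    "better than other brands",   # suggestion that other products are ineffective
]

MANDATORY_STATEMENTS = [
    "always read the label",      # example of a mandatory statement for medicines
]

def review_response(response: str) -> list[str]:
    """Return a list of issues flagged for a single scripted response."""
    text = response.lower()
    issues = [f"contains prohibited phrase: '{phrase}'"
              for phrase in PROHIBITED_PHRASES if phrase in text]
    if not any(statement in text for statement in MANDATORY_STATEMENTS):
        issues.append("no mandatory statement found - check whether one is required")
    return issues

if __name__ == "__main__":
    drafts = [
        "Our tablets are doctor recommended and better than other brands.",
        "This product may help relieve mild joint pain. Always read the label "
        "and follow the directions for use.",
    ]
    for draft in drafts:
        print(draft)
        for issue in review_response(draft) or ["nothing flagged by this basic scan"]:
            print(f"  - {issue}")
```

In practice you would run a check like this across every scripted response (and samples of any freely generated output), then have a person review anything flagged, and the script as a whole, against the Act and the Code before the chat bot goes live.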
3. Don’t be tempted to use AI to create testimonials or celebrity endorsements
Falsifying testimonials and endorsements is unlawful, whether or not you use AI to generate them.
The Australian Competition and Consumer Commission (ACCC) has previously fined and taken court action against advertisers that have manipulated customer reviews or created their own reviews. Such actions will also breach the Code because the resulting ‘testimonials’ are misleading. The same applies to AI-generated celebrity or politician endorsements.
There is another reason to steer clear. Fake celebrity endorsements are common on the internet and social media channels, and they often seem to be a front for nefarious purposes. If you are building a legitimate business, you don’t want to damage your reputation by getting on this bandwagon. Savvy consumers will also recognise them as fake and not bother to click through.
4. Influencers—human and AI
Then there is the curious case of the AI influencers—personas generated by AI to promote products on social media. They are touted as giving advertisers more control over promotional content than they would have with a human influencer. While some AI influencers do have humans writing the social media posts behind the scenes, others use AI to generate content. Ultimately, if you decide to use an AI influencer to promote therapeutic goods, you will likely be held responsible for any compliance issues with the advertising it generates[2].
As with human influencers, care will be needed to ensure the content generated by AI influencers to promote therapeutic goods complies with the Act and the Code. Seeside recommends that you:
- choose your influencer (AI or human) carefully to ensure they are compatible with the medicine or medical device you want to advertise;
- ensure that the process for generating the advertising supports compliance checks on content before it is released; and
- craft the content for the AI influencer carefully to ensure it does not claim or suggest that the AI influencer has used (or will use) the product being advertised, as this would be misleading.
5. Other legislation could impact on the way you use AI in your business
Australia, like other countries, is still considering whether its regulatory frameworks are fit for purpose in relation to the challenges AI poses now and those it is likely to pose as the technology advances.
Privacy is a specific area of concern. The Office of the Victorian Information Commissioner’s statement on Artificial Intelligence and Privacy articulates the risks that the use of AI poses to privacy and the challenges it presents to central principles of privacy law (such as not collecting more personal information than is needed).
To protect your clients and your business, Seeside recommends taking the following steps:
- Don’t enter sensitive information, like health information, into AI tools. This could breach the Commonwealth’s Privacy Act 1988.
- Before using AI with other client data (e.g. to gain insights into client purchasing or to prepare personalised marketing), assess whether your use of AI might undermine your relationship with clients, including by conflicting with your privacy obligations (e.g. under the Privacy Act 1988), your clients’ consent and your business’s privacy policy.
- Monitor for relevant changes to the regulatory environment. For example, changes to the Commonwealth’s Privacy Act 1988, expected in the second half of 2024, are intended to strengthen protections, including those around consumer consent.
- Professionals should keep abreast of best practices with respect to AI in their sector and ensure that any use of AI complies with the relevant policies and regulation of their profession.
- If you are uncertain about applying relevant legislation, you may need to seek legal advice.
6. The consequences of non-compliant advertising
The use of AI offers businesses a range of benefits in relation to advertising, including helping you generate advertising faster and grab your audience’s attention with slick AI-generated graphics or AI influencers. But what use are all these benefits if the resulting advertising is non-compliant and gets you into trouble with regulators?
Your competitors or members of the public are likely to report advertising that doesn’t comply with the Act or Code to the TGA—whether the advertising is AI generated or not. You also can’t hide behind an AI influencer. The first place the TGA will look is the business responsible for the product the AI influencer is promoting. The TGA also has a range of legislative powers to obtain information, including information to establish the entity or entities responsible for the advertising.
Failing to comply with the Act and the Code can result in bad publicity for your business, fines and even court action. The level of action the TGA will take against non-compliance depends on a range of factors, including the safety of consumers, whether you have previously received education from the TGA and whether the subject matter is captured under the TGA’s compliance priorities. But the TGA website is full of examples of advertisers that gambled on the risk and lost.
It’s just not worth using non-compliant advertising (whether AI assisted or not), even if you think it will give you a competitive edge.
Did you know that regulators are using AI to identify non-compliance? While not currently widespread, Australian regulators’ use of AI will likely grow. Read Seeside’s post on Future trends in advertising compliance: AI.
7. Further reading
- Other practical advice for advertisers
  - Lexology (UK) – AI and advertising – pitfalls for advertisers
  - Federal Trade Commission (USA) – Keep your AI claims in check
  - Institute of Practitioners in Advertising (UK) – Principles for use of generative AI in advertising
- Regulatory and policy considerations
  - Hall & Wilcox – Legal and ethical issues with the use of AI – including ChatGPT – in healthcare
  - Ashurst – Privacy risks for AI and ADM in an evolving regulatory ecosystem
  - Office of the Australian Information Commissioner – submission to the Department of Industry, Science and Resources – Safe and responsible AI in Australia discussion paper
  - Marketing Mag – Privacy, data and GenAI: Are Aussie marketers ready?
  - The National Law Review (USA) – Exploring best practices for the use of artificial intelligence in advertising
[1] Information provided by a health practitioner directly to a patient as part of a course of treatment may be exempt from the majority of therapeutic goods advertising requirements, provided the conditions in subsection 42AA(4) of the Act are met.
[2] The main advertising offences in the Act are framed in the context of what a person advertising, or causing the advertising of, a therapeutic good must and must not do (see sections 42DL and 42DM). An advertiser that engages an AI or human influencer to promote a therapeutic good on the influencer’s own social media is likely to be considered to have caused the advertising, if not to be the advertiser themselves. The influencer can also be considered the advertiser.