Meta is already using the public Facebook and Instagram posts of users in Australia to train its artificial intelligence systems, contrary to local media reports which suggested the company was only set to begin using the data in late June.
The social media giant had planned to begin using the public posts of users in Europe and the UK to train its AI systems from 26 June — a move which has since been postponed, but which also caused people in other countries to believe they would be impacted.
However, Meta has already used public Facebook and Instagram posts to train its generative AI models in markets such as Australia and the United States, which have less stringent data privacy regulations than Europe.
Privacy watchdog the Office of the Australian Information Commissioner (OAIC) told Information Age it had requested a meeting with Meta “to discuss their AI products and the privacy safeguards they have in place”.
Responding to questions from this publication, a Meta spokesperson did not deny claims the company had used public posts and captions from Australian Facebook and Instagram users to train its AI systems.
“With the release of our AI experiences, we’ve shared details about the kinds of information we use to build and improve AI experiences – which includes public posts from Instagram and Facebook – consistent with our privacy policy and terms of service,” they said.
“We will continue to build AI at Meta responsibly and bring this innovation to people and businesses around the world.
“Using publicly available information to train AI models is an industry-wide practice, and is not unique to our services.”
Meta said it did not train AI models using posts from private Facebook and Instagram accounts, nor posts from account holders under the age of 18.
The company’s director of product management for generative AI data and safety, Mike Clark, wrote in September 2023 that Meta had begun to use “publicly shared posts from Instagram and Facebook – including photos and text” to help train its generative AI models.
Messages which users sent to Meta’s AI chatbots were also used, he said.
Clark added that Meta did not use the content of users’ private Messenger or WhatsApp messages, but did use “information that’s publicly available online, licensed data and information from Meta’s products and services”.
A spokesperson for the OAIC told Information Age that the watchdog encouraged Australians to “regularly review and update their privacy settings to ensure they are not publicly sharing information they’d rather keep private, including to prevent their posts and content being used to train AI models”.
The spokesperson added that under the Australian Privacy Act, privacy obligations would apply when personal information was used to train AI systems.
They added that organisations could only use this information for purposes other than its original purpose “if the individual has consented, or if the individual would reasonably expect the organisation to use it for those purposes”.
Why Australians can’t opt out of AI training
Unlike Facebook and Instagram users in Europe and the UK, Australian users don’t currently have the option to opt out of their public posts and captions being used to train Meta’s AI systems.
This is because Europe’s General Data Protection Regulation (GDPR) — one of the strongest privacy laws in the world — makes it harder for Meta and other companies to collect user data in that region.
Australian users can, however, submit requests regarding how their personal information from third parties is used to train Meta’s AI systems.
Australians can also delete personal information they’ve provided to the Meta AI chatbot, though this option mainly serves to correct inaccuracies in its responses.
The Australian government has so far only committed to a handful of recommendations from the Attorney-General’s review of the nation’s Privacy Act, which was released in February 2023.
Australia’s eSafety Commissioner recently signed an administrative agreement with the European Commission to support each other’s regulation of online platforms, in which “algorithms and artificial intelligence” was listed as a common interest.
Why Meta paused European AI training
In May, Meta began notifying users in Europe and the UK about upcoming changes to its privacy policy (then scheduled for 26 June) which would give it the right to use public posts from those users to train its AI systems.
The proposed privacy policy changes can only be viewed from within Europe, or by using a VPN with a European connection.
Meta said it was committed to using the content “in a responsible way that complies with privacy rules”, and justified the move by saying its systems “need to be trained on information that reflects the diverse cultures and languages of the European communities who will use them”.
“Under the UK’s Data Protection Act and the EU’s General Data Protection Regulation, we’ll be relying on the legal basis of ‘Legitimate Interests’ for processing certain first and third-party data in the European Region and the United Kingdom to improve AI at Meta,” the company said.
The move sparked backlash from some European users and privacy groups, including digital rights advocacy organisation Noyb (which stands for “none of your business”).
Noyb filed complaints about Meta’s plans with 11 data privacy authorities across Europe, urging them to act.
The organisation criticised Meta for not making AI training an opt-in system, and argued Europeans would not have an option to have their content removed from the training data (or the so-called ‘right to be forgotten’) once the training had taken place.
On 14 June, Meta announced it was pausing its plan to train AI using public posts from European and UK users, following a request by the Irish Data Protection Commission on behalf of European data privacy authorities.
Meta’s global engagement director of privacy policy, Stefano Fratta, claimed the pause was “a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe”.
“We remain highly confident that our approach complies with European laws and regulations,” he said.
“AI training is not unique to our services, and we’re more transparent than many of our industry counterparts.”
Reacting to the pause and Meta’s decision not to launch some AI products in Europe yet, Noyb chair Max Schrems said it felt like “collective punishment”.
“If one European insists on his or her rights, the whole continent will not get our shiny new products,” he said.
“But Meta has every opportunity to deploy AI based on valid consent — it just chooses not to do so.”
ACS