Australia unveils new measures to combat AI-generated harmful content
Tech companies will be required to stamp out harmful content created by AI
The Australian federal government is introducing new online safety rules that will compel social media platforms and tech companies to combat harmful content generated using artificial intelligence (AI), including deepfake intimate images and hate speech. The government aims to update the Basic Online Safety Expectations (Bose) to ensure that tech companies' algorithms do not amplify harmful material, such as racist content.
The proposed changes will also require companies to take proactive steps to minimize the creation and dissemination of unlawful or harmful material through generative AI tools. The government is concerned about the rise of offensive and dangerous content, as well as antisemitic and Islamophobic rhetoric on online platforms.
Communications minister Michelle Rowland announced two key initiatives: a statutory review of the Online Safety Act and an intention to amend the Bose. The eSafety commissioner will oversee these changes, and companies will need to report on their progress in addressing online safety concerns. The government will open consultation on the changes, aiming to strike a balance between letting users benefit from generative AI and preventing the rapid spread of harmful content.
The new expectations will also require companies to consider the impact of their recommender systems, particularly on social media platforms, and to minimize the amplification of harmful or extreme content. The consultation will run until February 2024, and the Online Safety Act review will begin in early 2024.