By Suvradip Maitra, Lyndal Sleep, Paul Henman and Suzanna Fay, Phys.org, February 10, 2025
Late last year, ChatGPT was used by a Victorian child protection worker to draft documents. In a glaring error, ChatGPT referred to a "doll" used for sexual purposes as an "age-appropriate toy." Following this, the Victorian information commissioner banned the use of generative artificial intelligence (AI) in child protection.
Unfortunately, many harmful AI systems will not attract such public visibility. It's crucial that people who use social services—such as employment, homelessness or domestic violence services—are aware they may be subject to AI-assisted decision-making. Service providers, in turn, should be well informed about how to use AI safely.
Fortunately, emerging regulations and tools, such as our trauma-informed AI toolkit, can help to reduce AI harm.