Victoria's child protection agency bans AI use after report debacle
Victoria's child protection agency has banned its staff from using generative artificial intelligence (AI) services. The decision follows the discovery that an employee had entered substantial personal information, including the name of a vulnerable child, into ChatGPT. The Department of Families, Fairness and Housing reported the incident to the Office of the Victorian Information Commissioner (Ovic) in December last year.
AI misuse linked to child protection case
The report containing the personal information was submitted to a children's court in a case involving a young child. The child's parents were facing charges for sexual offenses, although those charges did not directly involve the child. Ovic's investigation found several indications that ChatGPT had been used to draft the report, including non-standard language and sentence structures that were inappropriate for a child protection document.
Usage resulted in inaccurate personal information
Ovic's report highlighted that parts of the child protection report contained inaccurate personal information. One concerning example involved a child's doll that the case worker had reported was used by the father for sexual purposes; the generated text instead described the doll as a positive example of the parents' efforts to provide 'age-appropriate toys.' Ovic found that this misuse of ChatGPT downplayed the seriousness of the potential harm to the child and could have influenced decisions about the child's care.
Unauthorized disclosure of information
Ovic stated that entering the information into ChatGPT constituted an unauthorized disclosure of information under the department's control. A subsequent investigation found that the employee may have used ChatGPT in around 100 cases while preparing child protection-related documents. Between July and December 2023, nearly 900 employees had accessed the ChatGPT website, almost 13% of the workforce, although none of those instances were found to have had as significant an impact as the initial report.
Employee admits to using AI for report generation
The employee admitted to using ChatGPT to generate the report "to save time and present work more professionally," but denied entering personal information. Colleagues, however, said the individual had shown others how to use ChatGPT by entering client names into the tool. Based on these accounts, Ovic concluded that the worker had likely entered personal information into the system.
Department responds to Ovic's orders
In response to the incident, Ovic issued orders to the department, including blocking the IP addresses and domains of several generative AI services such as ChatGPT, Gemini, Meta AI, and Copilot. The block must remain in place for two years from November 5. The department has accepted the findings and said it will implement the orders. It also confirmed that the employee involved no longer works for the department.