ACIC unlocking new opportunities with artificial intelligence (AI) while building trust

Our agency’s core operations are conducted in close collaboration with law enforcement partners and the National Intelligence Community. Leveraging cutting-edge technologies, including artificial intelligence (AI), we deliver a diverse suite of intelligence products, services, analysis and advice to confront the threats Australia faces from serious and organised crime.

At the Australian Criminal Intelligence Commission (ACIC), AI isn’t just a buzzword – it’s an opportunity. As organised crime groups increasingly use AI to scale and conceal their activities, causing real harm to the Australian community, our agency is responsibly applying AI to detect and disrupt those activities.

“AI’s ability to improve speed, accuracy and efficiency is undeniable. We’re working towards AI-enabled monitoring of millions of data points across our vast and diverse data holdings to help us understand, predict and disrupt criminal threats, and harden the environment against them.

“We don’t adopt AI simply because it’s powerful. Our adoption and use of AI is governed by stringent internal frameworks to ensure it is explainable, lawful, ethical, secure, and aligned with our strategic objectives,” the ACIC’s Chief Data and Analytics Officer said.

At the heart of the ACIC’s internal framework for AI is our AI Policy. The policy provides the foundation, defining 4 pathways for how our agency can adopt AI capabilities, while setting clear ethical governing principles for a human-centred approach. Every AI initiative is subject to rigorous risk assessments, reviews and ongoing monitoring throughout its lifecycle. This ensures it complies not only with legislative and regulatory obligations, but also with the ACIC’s own standards for fairness, accountability and trust. It’s a structure designed not to slow innovation, but to ensure our adoption of AI is built on solid ground.

Our Chief Data and Analytics Officer said community trust underpins all of this. The Australian public expects us to use AI responsibly, fairly and ethically. That’s why our approach is guided by strong ethical principles, with safeguards in place to enhance accountability and ensure human oversight.

“There’s a real risk in treating AI as a ‘super-intelligent’ helper. Yes, it can help us detect threats faster than a human team – but without responsible guardrails, and a deep understanding of its capability in context, it can just as easily amplify risk. That’s why we embed safeguards into every stage of AI development and use, supported by rigorous assessment of risks and benefits using specialist expertise. Human oversight remains central, with accountable leaders within the ACIC making decisions – not algorithms,” they said.

AI continues to evolve, but our commitment to using it responsibly, securely and ethically will not waver. By pairing big opportunities with strong foundations, we aim to ensure that AI is a force for good – delivering intelligence and scalable operations to help protect the Australian community from the harms of serious and organised crime. And with the landscape rapidly evolving, having both intelligence and integrity in our toolkit is the smartest strategy of all.