AI Needs to Be Kept Under an IA Eye

As firms increasingly adopt AI, securing these systems against threats and vulnerabilities is paramount.

A recent Gartner report on security and risk management (SRM) forecasts:

› Worldwide end-user spending on SRM is projected to be around $215 bn in 2024, an increase of 14.3% from 2023.

› Spending on security services — consulting, IT outsourcing, implementation and hardware support — is forecast to total $90 bn in 2024, an increase of 11%.

› Security services are expected to represent the largest area of total SRM end-user spending in 2024 at 42%.

Securing AI systems against these threats is where the internal audit (IA) function becomes essential. IA plays a crucial role in ensuring that:

› Controls for AI security are established and effective in mitigating risks.

› Organisations maintain the integrity and reliability of their AI systems, thereby safeguarding sensitive data and preserving trust in the digital ecosystem.

Principal strategies and considerations for IA in meeting AI security control requirements include:

› Understand AI-specific risks
Auditors must deepen their understanding of AI technologies, their applications and the specific risks they pose. AI systems are complex and carry vulnerabilities of their own, which traditional security controls often fail to address.
In data poisoning, for instance, an attacker corrupts the training data so that the model's decision-making can be adversarially manipulated; such attacks tend to target weakly validated models. Awareness of these risks is essential for effective audits of AI security.
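
To make this risk concrete, below is a minimal sketch of a label-flipping poisoning attack using scikit-learn. The dataset, model and poisoning rate are illustrative assumptions, not a reconstruction of any real incident.

```python
# Minimal sketch of a label-flipping data poisoning attack (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: a model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned: an attacker flips 40% of the positive-class training labels,
# biasing the model towards missing positive cases.
rng = np.random.default_rng(0)
positives = np.where(y_train == 1)[0]
flipped = rng.choice(positives, size=int(0.4 * len(positives)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", round(clean_model.score(X_test, y_test), 3))
print("poisoned accuracy:", round(poisoned_model.score(X_test, y_test), 3))
```

Both models train without errors, and the poisoned one can still look plausible in casual testing, which is exactly why audits need controls that validate the provenance and integrity of training data.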

› Update audit skills & knowledge
Auditors need to acquire the skills and knowledge required to assess AI systems. That includes a proper understanding of AI algorithms, data processing and the regulatory environment around AI use. Auditors should pursue continuous learning and professional development to keep pace with ever-evolving threats and technologies.

› Collaborate with experts
Given this complexity, IA functions should consider partnering with AI experts such as data scientists and AI security specialists. This collaboration helps auditors understand the AI systems in place, why particular model decisions were made and which security controls have been implemented. It also enables a more focused and effective audit of AI systems.

› Adopt a risk-based audit approach
The best way to audit AI systems is to identify and prioritise AI applications based on their criticality to the organisation and the risks they represent. High-risk areas may include AI systems that process sensitive data, influence critical decisions or interface with external systems. Audits should then focus on the applicability and effectiveness of security controls in these high-risk areas.
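
As a sketch of what that prioritisation might look like, the hypothetical scoring scheme below ranks an AI inventory by a weighted sum of risk factors. The factor names, weights and example systems are all assumptions for illustration, not a standard.

```python
# Hypothetical risk-scoring sketch for prioritising AI systems in an audit plan.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    handles_sensitive_data: bool     # e.g. PII or financial records
    drives_critical_decisions: bool  # e.g. credit approval, hiring
    external_interfaces: int         # number of third-party touchpoints

def risk_score(s: AISystem) -> int:
    # Simple weighted sum; a real IA function would calibrate these
    # weights against its own risk framework.
    return (3 * s.handles_sensitive_data
            + 3 * s.drives_critical_decisions
            + min(s.external_interfaces, 3))

inventory = [
    AISystem("support-chatbot", False, False, 1),
    AISystem("credit-scoring", True, True, 2),
    AISystem("demand-forecast", False, True, 0),
]

# Audit the highest-scoring systems first.
for s in sorted(inventory, key=risk_score, reverse=True):
    print(f"{s.name}: risk score {risk_score(s)}")
```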

› Assess security controls
Auditors should independently determine whether the organisation's AI security controls meet best practices and regulatory requirements. These controls should include data protection mechanisms, model validation processes and AI-specific incident response plans. Due consideration should also be given to the organisation's ethical deployment of AI, keeping in mind the biases and discrimination these systems may exhibit.
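
One way to structure such an assessment is a checklist of required controls mapped against audit findings. The sketch below is hypothetical; the control names and findings are illustrative and would in practice be aligned to a recognised framework such as the NIST AI Risk Management Framework.

```python
# Hypothetical AI security control checklist (control names are illustrative).
REQUIRED_CONTROLS = {
    "data_protection": "Training and inference data encrypted and access-logged",
    "model_validation": "Documented validation, including adversarial testing",
    "incident_response": "AI-specific incident response plan in place",
    "bias_review": "Periodic fairness and bias assessment of model outputs",
}

# Example audit findings: which controls were evidenced during fieldwork.
implemented = {"data_protection", "model_validation"}

for control, description in REQUIRED_CONTROLS.items():
    status = "OK " if control in implemented else "GAP"
    print(f"[{status}] {control}: {description}")
```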

› Provide continuous monitoring & assurance
IA can recommend implementing tools and techniques for AI security monitoring, such as anomaly detection and automated vulnerability assessment. It is critical to provide continuous assurance that AI security controls are functioning as designed.
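
As one example of what such monitoring could look like, the sketch below flags days on which a model's mean prediction score drifts more than three standard deviations from a baseline window. The data, window size and threshold are illustrative assumptions.

```python
# Minimal anomaly-detection sketch over a model's daily mean prediction score.
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.normal(0.5, 0.05, size=30)           # 30 days of normal behaviour
live = np.concatenate([rng.normal(0.5, 0.05, 10),   # stable period
                       rng.normal(0.8, 0.05, 5)])   # sudden drift, e.g. poisoning

mu, sigma = baseline.mean(), baseline.std()
for day, value in enumerate(live, start=1):
    z = abs(value - mu) / sigma
    if z > 3:  # illustrative alert threshold
        print(f"day {day}: mean score {value:.2f} (z={z:.1f}) flagged for review")
```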

Following these strategies can reduce the organisation's exposure to AI threats and encourage the ethical and responsible use of AI technologies.
