
Navigating the Cybersecurity Landscape in 2025: The AI Challenge
The intersection of artificial intelligence (AI) and cybersecurity has become increasingly complex, presenting both opportunities for innovation and new threat vectors. While a lot of content has been published on security for AI, particularly in the last year, there’s no definitive “101”. With 2025 underway, we want to provide a starting point for those new to this domain.
In this blog, we review the need for protection from three perspectives:
- When attackers are using AI
- When you are using third-party AI applications
- When you are developing in-house AI applications
For each of these perspectives, we’ll cover what protection looks like in an AI-driven world, the recommended focus areas, and some predictions for what’s next.
Look out for the key reference list at the end of this blog to get you started on your AI controls journey.
The rising cyber threats of AI
The potential threat from AI can be divided into three closely related yet distinct categories, each of which requires a different approach to manage.
When attackers use AI
Organizations must recognize that as AI technology becomes more accessible, so too does the arsenal available to cybercriminals.
The rise of Generative AI (GenAI) has made traditional attack methods like phishing, smishing, and vishing more sophisticated than ever, as attackers can now craft more convincing messages.
While GenAI can also be used to craft malware, given the efficacy of modern endpoint detection and response (EDR) tooling, social engineering is likely to remain the primary risk.
When using third-party AI applications
As businesses increasingly adopt third-party AI tools (ranging from coding assistants to writing tools), the need to safeguard sensitive company data becomes paramount.
Organizations must develop robust policies and training to navigate the myriad of new tools emerging. Ensuring data confidentiality while leveraging productivity-enhancing AI applications is essential to mitigate risks.
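To make the data-confidentiality point concrete, here is a minimal sketch of a pre-submission check that strips obviously sensitive patterns from a prompt before it reaches a third-party AI service. The patterns and the `redact_prompt` helper are hypothetical examples for illustration only; they are not a substitute for a proper data loss prevention (DLP) capability or a reference to any specific product.

```python
import re

# Hypothetical, illustrative patterns only; a real deployment would rely on a
# dedicated DLP or data-classification service rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace obviously sensitive substrings before the prompt leaves the organization."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this note from jane.doe@example.com about card 4111 1111 1111 1111."
    print(redact_prompt(raw))
    # -> Summarize this note from [REDACTED-EMAIL] about card [REDACTED-CARD].
```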
When developing in-house AI applications
For those creating AI solutions internally, integrating security and governance into the development process is critical. Regulatory breaches or security incidents can carry significant reputational and financial consequences.
Additionally, the adoption of AI can expand an organization's attack surface, making its infrastructure more vulnerable. Developers must be aware of various adversarial tactics that could compromise their AI applications (a toy illustration of the first two follows the list), including:
- poisoning: injecting malicious samples into training data to skew model learning
- evasion: manipulating input data to trigger incorrect classifications or decisions
- model extraction: rebuilding a version of the AI model for malicious purposes, including intellectual property theft
- model inversion: querying the model to “steal” sensitive or proprietary data
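The toy sketch below (assuming Python with scikit-learn and an entirely synthetic dataset; the numbers are illustrative, not benchmarks) shows how flipping a fraction of training labels (poisoning) degrades a simple classifier, and how a small, targeted perturbation of a single input (evasion) can flip its prediction. Real-world attacks are far more subtle, but the underlying mechanics are the same.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Purely synthetic binary classification task standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# --- Poisoning: flip the labels of 30% of the training samples. ---
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned.score(X_test, y_test))

# --- Evasion: nudge one input along the model's weight vector until the ---
# --- clean model's prediction flips, leaving the model itself untouched. ---
x_adv = X_test[0].copy()
original = clean.predict([x_adv])[0]
direction = np.sign(clean.coef_[0]) * (1 if original == 0 else -1)
while clean.predict([x_adv])[0] == original:
    x_adv += 0.1 * direction  # small, targeted perturbation per step
print("original prediction:", original, "-> after evasion:", clean.predict([x_adv])[0])
```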
How to tackle AI risk
To navigate these challenges, we recommend focusing on three key areas:
1. AI Governance and Risk Management
Establish oversight and frameworks that emphasize transparency and data privacy. Consider resources like the NIST AI Risk Management Framework (AI RMF) and AI Trust, Risk, and Security Management (AI TRiSM).
2. Core Controls for AI Infrastructure
Implement rigorous cybersecurity measures tailored to AI applications.
When using third-party AI apps, protection should center on the “human factor” targeted by social engineering, including user awareness training, identity and access management (IAM), and privileged access management (PAM).
Organizations developing in-house AI capabilities should double down on access controls and vulnerability management.
3. Emerging AI-Specific Controls
As the landscape evolves, new control tools designed specifically for GenAI are becoming available, but the wider market's appetite for adopting them is not yet clear. Staying up to date on these developments is crucial.
The future of the AI landscape
Looking ahead, we anticipate several key trends in AI cybersecurity. A consensus around standards and frameworks that define industry best practices seems likely. We anticipate an increased focus on AI within established Continuous Controls Monitoring programs. At the same time, increasing regulation will shape the controls landscape and drive new compliance requirements.
It's not yet clear whether a proactive response to best practices or a reactive response to legislation will lead the way. We are also starting to see the emergence of AI-specific tool vendors. This will create a distinct category for AI controls within cybersecurity, elevating the role and importance of AI in cybersecurity programs and frameworks.
Final thought
As AI reshapes the cybersecurity landscape, organizations must stay both informed and proactive. Whether adopting third-party applications or developing in-house solutions, it’s vital to understand the risks and put robust controls in place to navigate this new era safely.
By prioritizing governance, strengthening core controls, and exploring new AI-specific tooling, businesses will be better equipped to protect themselves against the evolving threats that AI brings.
Stay tuned for future publications, and make sure to consult the provided reference list as you embark on your AI controls journey.
Notable Publications and Resources
Regulations
EU AI Act
Entered into force: 1 August 2024
The EU AI Act categorizes AI systems based on risk levels and imposes regulatory requirements accordingly. Its differentiator is the regulatory focus on compliance, safety, and the protection of fundamental rights, establishing a comprehensive legal framework for AI in the EU.
Standards
ISO/IEC 42001
Published by: International Organization for Standardization (ISO)
Last Updated: 2023
ISO/IEC 42001 establishes a comprehensive framework for AI governance, emphasizing alignment with organizational goals and ethical considerations. Its differentiator is the structured approach it offers for integrating AI governance into existing management systems, fostering accountability and transparency.
ISO/IEC 23894
Published by: International Organization for Standardization (ISO)
Last Updated: 2023
This standard focuses on managing risks specific to AI systems. Its key differentiator lies in providing detailed guidance on integrating risk management into AI-related activities, making it practical for organizations at various stages of AI deployment.
Frameworks
AI Risk Management Framework (AI RMF)
Published by: National Institute of Standards and Technology (NIST)
Last Updated: 2023
The NIST AI RMF provides a structured approach for identifying, assessing, and mitigating AI risks. Its differentiator is its emphasis on integrating AI risk management into broader organizational practices, promoting a holistic view of risk that encompasses technology, people, and processes.
AI TRiSM (AI Trust, Risk, and Security Management)
Published by: Gartner
AI TRiSM focuses on four pillars: explainability, continuous model refinement, application security, and privacy compliance. Its key differentiator is its holistic view of trust and security in AI, addressing not only technical aspects but also organizational and ethical considerations.
AI Maturity Model
Published by: MITRE Corporation
Last Updated: 2023
This model evaluates organizations' AI capabilities and maturity. Its differentiator is its focus on self-assessment, allowing organizations to identify strengths and weaknesses in their AI practices, thus facilitating targeted improvements in governance and risk management.
ATLAS
Published by: MITRE Corporation
Last Updated: Ongoing updates, last in March 2025
MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a knowledge base of adversary tactics, techniques, and real-world case studies targeting AI-enabled systems, modeled on MITRE ATT&CK. Its key differentiator is that it lets organizations map threats against their AI systems to documented adversarial behaviors and plan defenses accordingly.
OWASP Top 10 for GenAI
Published by: OWASP
Last Updated: Ongoing updates, last in 2025
The OWASP Top 10 for Generative AI highlights the most critical security and safety risks specific to generative AI systems. Its purpose is to raise awareness among developers, organizations, and stakeholders, helping them identify, understand, and mitigate vulnerabilities unique to the design, deployment, and use of generative AI technologies.
The Model for Responsible Innovation
Last Updated: 2022
This framework addresses ethical risks in AI and data-driven technologies. Its differentiator is its emphasis on practical guidance for responsible innovation, helping organizations navigate the complexities of AI ethics in project development.
Other useful resources
Cloud Security Alliance (CSA)
The CSA focuses on best practices for securing cloud environments. Its key differentiator is its emphasis on collaboration across industry stakeholders to develop comprehensive guidelines and frameworks, ensuring security measures are practical and widely adopted.
AI Standards Hub (UK)
This initiative serves as a centralized resource for AI standards. Its differentiator is its collaborative approach, promoting stakeholder engagement and best practices to enhance the quality and safety of AI technologies in the UK.