
Why AI governance is key to secure development and growth

The Artificial Intelligence (AI) world is evolving rapidly, shifting in public perception from futuristic concept to business enabler. In this article, I will discuss why governance is important for responsible AI innovation. I'll also share insights from the recent AI Action Summit.

Thordis Thorsteins, Senior AI Data Scientist

Microsoft reports that three out of four people are already using AI in the workplace, with that number only set to grow over the next 12 months as the technology continues to revolutionise industries, workplaces, and everyday life.

The 3rd AI Action Summit

Enter the 3rd AI Action Summit, which brought together international organisations, government leaders, and representatives from civil society and academia from over 100 countries.

It had five core themes:

  • Public interest in AI
  • The future of work
  • Innovation and culture
  • Trust in AI
  • Global AI governance

Held last month, the event may have faded from the headlines, but the lack of consensus it revealed remains just as relevant.

This year’s pivotal event placed special emphasis on innovation and economic impact, marking a notable shift from previous years, when discussions focused heavily on the risks AI poses.

A key backdrop to these discussions was the EU AI Act, which came into effect last year as the world’s first legal framework on AI. Taking a risk-based approach, it mandates safeguards and human oversight for high-risk AI systems while leaving minimal-risk AI systems, where companies typically focus their experimentation, unregulated.

Despite global AI governance being a core theme of this year’s summit, some voices raised concerns that frameworks like this hinder development and stifle innovation.

The main headline-grabbing moment came when the United States and the United Kingdom refused to sign the Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet. The declaration outlined key principles for AI development, including an open, inclusive and ethical approach to the technology. But UK representatives cited concerns about national security and “global governance”, whilst US Vice President JD Vance stated that “pro-growth AI policies” should be prioritised over a continued focus on AI safety.

The moment highlighted the ongoing lack of global consensus on how AI should be governed, and experts described the summit as a missed opportunity.

Governance: the foundation for responsible AI innovation

Regardless of whether global leaders sign sustainable AI agreements like this one, embedding strong governance into AI development remains paramount. Like working brakes on a car, governance lets us drive faster: we can take the hairpin ahead safely without stopping entirely or veering off course.

My career in cybersecurity has made me particularly aware of the importance of integrating security into every stage of software development — from design to deployment to iteration.

Treating security as an afterthought can lead to dangerous vulnerabilities that are costly and complex to fix. If breaches have taught us one thing, it’s that failing to address security proactively can result in significant damage, both financial and reputational. Proactive safeguards reduce these risks from the start, and AI governance builds on this principle to address broader AI-related risks in advance.

In the absence of global alignment on AI governance, organisations should still proactively adopt common-sense governance steps as a default standard for responsible development. Lack of consensus does not mean lack of accountability.

Speaking to Panaseer about the growing use of AI as part of our Cybersecurity Leadership webinar series, Simon Goldsmith, Enterprise Security and Platform Lead at OVO Energy, explained: “There is an upside to the technology, but take some time to think about and mitigate the downside. It’s the grown-up approach to take [to development].”

Moving forward: the AI balancing act

In a future increasingly shaped by AI, appropriate governance is the only viable path to sustainable AI growth and development. As with any technology, totally eliminating risk is not feasible, but responsible development can reduce that risk to a manageable level.

For people and society to truly benefit from the potential that AI offers, they need to be able to trust these systems. People need assurance that their data is being handled responsibly and that they can trust the outputs of AI systems. The technology offers immense opportunities, but these can only be fully realised if we take the time to develop and regulate it responsibly.

For those of us in the cybersecurity industry, this focus on governance and security is especially crucial. The goal is not only to build AI systems that function well, but to build systems that are secure, resilient, and trustworthy.

The need for thoughtful, responsible regulation and security measures has never been clearer.

About the author

Thordis Thorsteins, Senior AI Data Scientist