Securing AI: Lessons from the Cloud Era

Generative artificial intelligence (AI) is emerging as this decade’s revolutionary technology, akin to the transformative impact of cloud computing in the previous decade. While the benefits of AI are vast, from automating tasks to generating insights from large datasets, it also brings significant security challenges. The RSA Conference 2024 in San Francisco highlighted the parallels between the early days of cloud adoption and the current trajectory of AI. This article explores the importance of securing AI projects, drawing lessons from the mistakes made during the cloud transition, and emphasizes the need for a proactive security approach.

The Cloud Computing Parallels: A Cautionary Tale

The advent of cloud computing was initially met with skepticism, but its advantages soon became undeniable. Businesses quickly adopted a “cloud first” or “cloud forward” approach, prioritizing cloud solutions for data management and storage. However, this rapid transition often overlooked critical security implications, leading to numerous vulnerabilities. As Akiba Saeedi, Vice President of Product Management at IBM Security, noted, “We have to make sure that what happened with cloud doesn’t happen with AI.”

Early cloud adopters frequently failed to understand the differences between securing on-premises data and protecting data in the cloud. This led to configuration errors and other vulnerabilities.

The traditional castle-and-moat approach to cybersecurity proved inadequate in a cloud-oriented world, prompting a gradual shift towards zero trust principles.

A Gartner report predicted that 95% of cloud security failures would be the customer’s fault, primarily due to misconfigurations and inadequate security measures.

According to Forrester, 60% of enterprises were expected to adopt zero trust architecture by 2023, driven by the need to secure cloud environments.

At the RSA Conference, cybersecurity professionals stressed the importance of applying these lessons to AI projects. As generative AI becomes integral to business operations, ensuring robust security measures are in place from the outset is crucial.

Parallels Between Cloud and AI Security

John Yeoh, Global Vice President of Research for the Cloud Security Alliance, emphasized that AI, like cloud computing, is rapidly becoming a staple in business operations. “Our customers are using it. Our staff is using it. And your CEO is presenting it to you now, telling you, ‘We have to do it,’” Yeoh said. However, the integration of AI introduces new security challenges that need to be addressed proactively.

In a cloud environment, securing network access is paramount. AI adds a further challenge: managing machine identities alongside human ones. For every human in an organization, there can be 10 to 20 times as many machine identities.

AI projects often involve customizing and training large language models with specific organizational data, making data control crucial. Organizations need to carefully manage what data is fed into and extracted from these models.
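To make the idea of data control concrete, here is a minimal, hypothetical sketch of redacting obvious sensitive fields before a prompt ever reaches a hosted model. The patterns and placeholder labels are illustrative assumptions; a production deployment would rely on a vetted PII-detection tool and policies tuned to its own data.

```python
import re

# Hypothetical patterns for two common sensitive fields. A real system
# would cover many more categories and use tested detection libraries.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the ticket from jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))  # → Summarize the ticket from [EMAIL], SSN [SSN].
```

The same gate can be applied symmetrically to model output before it is returned to users, which addresses the “extracted from” side of the data-control problem.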

Research by Gartner predicts that by 2025, machine identities will outnumber human identities by a factor of 4 to 1 in many organizations.

A report by the Ponemon Institute found that 67% of organizations experienced a data breach due to misconfigured AI and machine learning models.

By addressing these considerations, businesses can mitigate the risks associated with AI adoption and ensure that their AI projects are secure and trustworthy.

Current State of AI Security: A Wake-Up Call

Despite the critical importance of AI security, many organizations are not prioritizing it adequately. A report by IBM and Amazon Web Services found that only 24% of current generative AI projects are being secured, even though 82% of organizations acknowledge that “secure and trustworthy AI is essential to the success of the business.”

Nearly 70% of executives prioritize innovation over security, a trend reminiscent of the early cloud era.

The disconnect between data scientists and cybersecurity experts contributes to the security challenges in AI projects. Data scientists focus on building and tuning models, while many cybersecurity teams are still developing expertise in AI-specific threats.

According to a survey by PwC, only 36% of executives feel confident in their organization’s AI security measures.

A study by Capgemini found that 58% of organizations struggle to find cybersecurity professionals with the skills to secure AI systems.

To address these issues, leadership must bridge the gap between data science and cybersecurity. Ensuring that security is built into AI projects from the ground up is essential for creating sustainable and trustworthy AI systems.

Building Trustworthy AI: A Leadership Imperative

Securing AI projects requires a comprehensive approach that integrates security into every stage of the AI lifecycle. Business leaders play a crucial role in fostering a security-first mindset and ensuring that AI initiatives are implemented with robust security measures.

Embed security protocols in the development and deployment phases of AI projects. This includes secure coding practices, regular security assessments, and continuous monitoring.
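To illustrate what “continuous monitoring” might look like in practice, here is a minimal, hypothetical sketch of a wrapper that logs every model call and withholds output that trips a simple policy check. The `BLOCKED_TERMS` list and the stand-in model function are assumptions for demonstration; a production system would apply far richer checks.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitor")

# Hypothetical denylist; real policy checks would be far more sophisticated.
BLOCKED_TERMS = ("password", "api_key")

def monitored_call(model_fn, prompt: str) -> str:
    """Invoke a model function, log the call, and flag risky output."""
    started = datetime.now(timezone.utc)
    output = model_fn(prompt)
    flagged = any(term in output.lower() for term in BLOCKED_TERMS)
    log.info("call at %s flagged=%s", started.isoformat(), flagged)
    if flagged:
        return "[WITHHELD: output failed security review]"
    return output

# Stand-in for a real model client, used only for demonstration.
fake_model = lambda p: f"echo: {p}"
print(monitored_call(fake_model, "hello"))  # → echo: hello
```

Wrapping every model invocation this way gives security teams an audit trail and a single choke point for enforcing policy, without requiring data scientists to change how they build models.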

Facilitate collaboration between data scientists and cybersecurity experts to create a unified approach to AI security.

Invest in training programs to enhance the security knowledge of both data scientists and IT staff.

A report by McKinsey highlights that organizations integrating security into their AI projects from the start can reduce vulnerabilities by up to 40%.

According to Deloitte, organizations that foster collaboration between data science and cybersecurity teams report a 30% improvement in their overall security posture.

IBM has implemented a comprehensive AI security framework that includes secure model development, rigorous testing, and continuous monitoring. This approach ensures that AI systems are resilient against threats and operate within defined security parameters.


Conclusion

As generative AI continues to transform business operations, securing AI projects is paramount. By learning from the mistakes made during the cloud transition and applying proactive security measures, organizations can build trustworthy AI systems that drive innovation and growth. Business leaders must champion a security-first approach, fostering collaboration and ensuring that AI initiatives are secure from the ground up. Only then can AI truly reach its potential as a transformative technology that enhances business operations while maintaining robust security standards.

