By Ismail Amla, Senior Vice President of Kyndryl Consult at Kyndryl

When Kyndryl surveyed hundreds of C-suite leaders about their organizational readiness for AI, the biggest concern we heard was security. That’s not completely surprising, given that any new technology paradigm generates cybersecurity attack possibilities and, therefore, new risks. 

Projections around how AI will recast the cybersecurity landscape can be dizzying to navigate. So, it’s both interesting and counterintuitive that recent research on AI security issues underscores the importance of basic security hygiene and the augmentation of existing best practices rather than implementing new ones. More specifically, AI security is best framed within existing cybersecurity approaches and is best applied based on straightforward calculations of the value of what is being protected. Further, AI security will work best if integrated with all other cybersecurity efforts.  

Here are three guidelines for developing a healthy, cost-effective and realistic AI security game plan for CEOs and their technology teams. 

 

 

First focus on cybersecurity hygiene

Even as AI becomes more deeply embedded in work processes, many basic vulnerabilities attackers seek to exploit remain the same, including manual processes and human actions. 

While investing in some new AI tools and security controls makes sense, continuing to improve on security basics is likely the higher ROI for AI security. In fact, over-investing in sophisticated AI security defenses may backfire by misaligning resources with actual threats. 

So, while organizations may be tempted to implement cutting-edge defenses against gradient-based cyberattacks or complex mathematical exploits, most real-world breaches leverage simple techniques like basic prompt injection and straightforward jailbreaks. Complex defense systems consume substantial resources and can introduce new vulnerabilities through their added complexity. Organizations will achieve better security outcomes by focusing on fundamentals: comprehensive cybersecurity training, robust input validation, effective monitoring systems that can detect unusual patterns in model interactions and an emphasis on cyber resiliency.

 

 

Don’t make a centralized AI security team

While creating a new security bucket for AI is tempting, it would likely wind up working against an organization’s interests. Because AI will be widely distributed and affect all aspects of technology — including networks, applications, infrastructure and APIs — creating security for AI systems will be a distributed responsibility. More specifically, distributed AI security approaches will likely outperform centralized approaches because they better address context-specific vulnerabilities. The same AI model can present radically different risk profiles depending on its application. A healthcare deployment, for instance, will require a different security consideration than a creative writing tool. 

By putting AI security in the hands of IT experts with the best context about the risk and application, organizations can more readily identify and mitigate contextually relevant risks and threats at any stage — from development to production. A distributed approach enables organizations to implement security measures aligned with real-world usage patterns rather than theoretical vulnerabilities.

Projections around how AI will recast the cybersecurity landscape can be dizzying to navigate. So, it’s both interesting and counterintuitive that recent research on AI security issues underscores the importance of basic security hygiene and the augmentation of existing best practices rather than implementing new ones.

Ismail Amla

Senior Vice President, Kyndryl Consult

 

Data and context value trumps model value 

Consider the following scenario: A financial services company has built a large, expensive AI model to analyze user behaviors and recommend product changes and new features. The company also has a small, relatively simple AI model used to analyze the treasury activities of large customers and make liquidity recommendations. Which model should receive security priority? 

Given this context, the obvious answer is the smaller AI model because the financial services company wants to protect clients against the rising tide of dangerous business e-mail compromise and spear phishing attacks. To this end, security investment should scale with data value rather than model capability because attackers conduct cost-benefit analyses before attempting breaches. A relatively simple AI model handling sensitive financial data requires more robust cybersecurity and protection than a state-of-the-art AI model generating creative content. 

Cyber attackers consistently choose the path of least resistance to valuable outcomes rather than targeting the most sophisticated AI systems. This value-based approach to security investment aligns resources with actual business risks rather than technical complexity. Organizations can optimize their cybersecurity and resiliency spending by protecting their most valuable assets, regardless of the underlying AI system’s sophistication.

 

 

While AI security challenges may seem daunting, they are manageable with the right approach. The key is to resist the urge to treat AI security as an entirely new domain that requires revolutionary solutions. Instead, organizations should strengthen existing cybersecurity practices, distribute security responsibilities across teams with relevant context and align protection levels with data value rather than model sophistication. This balanced strategy for AI business systems, coupled with ongoing vigilance and adaptation as AI capabilities evolve, will help organizations harness AI’s benefits while maintaining a robust cybersecurity posture.

Ismail Amla

Senior Vice President, Kyndryl Consult