
05 February, 2026

AI cybersecurity in 2026: the attack and defence playbook

Fuelled by third-party services, cybercriminals will be attacking businesses on multiple fronts in 2026. 

John Wojcik, Senior Threat Researcher, Infoblox, foresees that cybercrime-as-a-service (CaaS) will supercharge financially motivated threat actors in Southeast Asia in 2026.

“The region is experiencing a growing number of industrial-scale scam centres and organised hacker groups which offer malicious software and leaked credentials in exchange for money on the dark web," he said.  

“The rise of CaaS means cybercriminals are no longer limited by their own in-house skills, as they can shop around for plug-and-play tools that make hacking look easy.” 

Source: Kyndryl. Andrew Lim.

Cybercriminals already have the upper hand, said Andrew Lim, MD, Kyndryl ASEAN & Korea. "According to Kyndryl’s 2025 Readiness Report, only 29% of executives feel prepared to manage future AI risks. In Singapore, just 24% of organisations feel ready for future risk, and 58% say they struggle to keep pace with technological change," Lim noted.

"These gaps—fragmented data, legacy infrastructure, limited observability and insufficient organisational skills—will increasingly be exploited by attackers, especially as AI introduces new risks such as data leakage, unintended behaviours, and model poisoning."

Autonomous malware

"Traditional self-propagating malware like WannaCry, NotPetya, and Mirai caused billions in damage within days through automated propagation. What we are observing now are adversaries integrating AI capabilities to create malware that adapts, selects targets, and evades detection autonomously," Dmitry Volkov, CEO, Group-IB warned. 

"These autonomous AI agents are increasingly capable of managing the entire kill chain: vulnerability discovery, exploitation, lateral movement, and orchestration at scale." 

A kill chain is the sequence of steps an attacker works through to complete an attack.
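
As a rough illustration of how defenders think about those stages, the sketch below pairs each stage Volkov names with a hypothetical telemetry source and flags any stage without coverage. The stage labels and signal names are illustrative assumptions, not a formal framework.

```python
# Illustrative only: map the kill-chain stages mentioned above to the kind of
# telemetry a defender might watch at each step. Stage and signal names are
# examples, not a formal framework.
KILL_CHAIN = [
    ("vulnerability discovery", "external scan and exploit-probe alerts"),
    ("exploitation", "crash dumps and EDR exploit-prevention events"),
    ("lateral movement", "unusual east-west authentication attempts"),
    ("orchestration at scale", "bursts of automated actions across many hosts"),
]

def stages_without_coverage(monitored_signals: set[str]) -> list[str]:
    """Return kill-chain stages for which no monitored signal is mapped."""
    return [stage for stage, signal in KILL_CHAIN if signal not in monitored_signals]

if __name__ == "__main__":
    covered = {"external scan and exploit-probe alerts",
               "unusual east-west authentication attempts"}
    print("Stages lacking detection coverage:", stages_without_coverage(covered))
```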

Data and model poisoning

Invisibly corrupting AI training data at its source will be a new attack method in 2026, Palo Alto Networks has predicted. "This attack exploits a critical organisational silo between data scientists and security teams to create hidden backdoors and untrustworthy models, igniting a fundamental 'crisis of data trust'," the company noted in a list of 2026 predictions.

"As traditional perimeters become irrelevant, the solution must be a unified platform that closes this blind spot, using data security posture management (DSPM) and AI security posture management (AI-SPM) for observability and runtime agents for firewall as code to secure the entire AI data pipeline." 

Source: JFrog. Yuval Fernbach.

Another way to introduce poisoned data is via unapproved AI model use, JFrog said. "With open-source models, SaaS AI tools, and API-based agents now one click away, organisations will see a surge in 'shadow AI' — teams adopting unvetted models outside formal processes," said Yuval Fernbach, VP & CTO of MLOps, JFrog.

"In 2026, this will eclipse shadow IT as the top operational risk CIOs face. Security teams won’t just worry about rogue infrastructure; they’ll worry about unapproved models with hidden vulnerabilities, poisoned datasets, and undocumented behaviours. To counter this, enterprises will adopt centralised AI catalogues and enforce model allow-lists as standard practice, similar to how software artifact governance became mandatory during the DevOps era." 

Cybercriminals will target wherever AI resides, said Lim from Kyndryl. "As enterprises embed AI deeper into payments, logistics, customer services and critical infrastructure, attackers are shifting their focus from traditional networks to the AI systems themselves—targeting models, data pipelines and inference environments," he said. 

"The shift is already visible—LLM-enhanced reconnaissance, adaptive phishing, polymorphic malware and early forms of autonomous agent-based attacks are becoming common."

Identity

Volkov said that AI-in-the-middle attacks expose a flawed assumption behind traditional identity verification methods such as passwords, two-factor authentication and even biometrics: that authenticating a user once provides ongoing assurance.

"However, adversary-in-the-middle (AiTM) frameworks are becoming increasingly popular among cybercriminals, exploiting the continued verified access users maintain across devices and platforms. The challenge for businesses would be accepting that identity verification is no longer a gateway safeguard but a continuous process," he said. 

"Organisations must move beyond static authentication to continuous behavioural monitoring and anomaly detection that can match the adaptive nature of AI-managed attacks. The industry cannot afford to cling to authentication models that were designed before adversaries could embed AI into these frameworks to automate session compromise at machine speed." 

Source: Yubico. Geoff Schomburgk.

"2026 will be the year identity becomes infrastructure. AI-driven phishing and deepfake impersonation are accelerating in Singapore and across APAC, with an overwhelming 85% of Singaporeans recognising that phishing attempts are becoming more sophisticated.

"To combat these rising threats, organisations will begin treating identity security the way they approach networks or data centres - as critical systems for business operations which require hardened, resilient components," agreed Geoff Schomburgk, VP, Asia Pacific and Japan at Yubico. 

"Organisations across finance, critical infrastructure, and the public sector will increasingly lean on phishing-resistant tools like passkeys and hardware-backed credentials for strong multifactor authentication (MFA), Zero Trust and privileged access. 

"For 2026, the priority is clear for organisations across APAC: shrink credential‑theft risk by focusing on building phishing-resistant users throughout the company, and build trust in a region where digital transformation continues to out‑pace legacy‑era security models. The companies that adapt fastest will be those treating identity not as an IT feature, but as core infrastructure," Schomburgk concluded.

Source: Rubrik. Arvind Nithrakashyap.

"The scale of non-human identities in the AI era will become a critical vulnerability. Attackers continue exploiting the labyrinth of non-human credentials; however, in 2026, they’ll achieve full-system compromise," said Arvind Nithrakashyap, Co-Founder and CTO, Rubrik. 

"A recent survey revealed that 89% of organisations plan to hire professionals in the next 12 months specifically to manage identity security. Identity infrastructure will become more critical than the data infrastructure it protects."

"Identity is replacing infrastructure as the perimeter of security," said Martin Creighan, VP, Asia Pacific at Commvault.

"IDC anticipates that by 2026, cyber-resilient organisations will merge identity, data, and recovery policies into one continuous security fabric. Continuity is incomplete if identities remain corrupted. The ability to restore verified user integrity – not just restore systems – will become a cornerstone of operational assurance.

"This matters even more as AI starts talking to AI – autonomous agents initiating actions, sharing data, and making decisions on their own. In this AI-centric world, a trusted identity becomes the first checkpoint of safety, and recovery plans must prove that compromised identities have been reset, re-verified, and re-linked to clean data."

Infrastructure

Amitabh Sarkar, VP & Head of Asia Pacific and Japan - Enterprise at Tata Communications, said that in 2026, cybersecurity will become inseparable from the infrastructure that powers AI and cloud workloads.

Sarkar said: "Zero-Trust architectures and continuous verification are now critical foundations for enterprise security, particularly as AI workloads expand across cloud, edge, and hybrid environments.

"Networks themselves are evolving to support both performance and security at scale. For example, large-scale AI-ready networks demonstrate how high-capacity, low-latency infrastructure can enable compute-intensive AI applications while embedding robust access controls, data integrity measures, and compliance standards."

Open source models

Kaspersky predicted that open-weight models - models whose weights are openly shared - will approach the best proprietary models in many cybersecurity-related tasks, creating more opportunities for misuse.

"Closed models still offer stricter control mechanisms and safeguards, limiting abuse. However, open-source systems are rapidly catching up in functionality and circulate without comparable restrictions. This blurs the difference between proprietary models and open-source models, both of which can be used efficiently for undesired or malicious purposes," Kaspersky said.

Ransomware 

Ransomware supercharged by AI is a daunting prospect for cyberdefenders, said Fabio Assolini, Head of Research Center, Americas and Europe, Global Research & Analysis Team, Kaspersky. "Agentic AI systems, which can reason autonomously and adapt in real time, will likely automate attack chains, from initial reconnaissance to the final extortion demands, executing them at speeds many times faster than human operators," Assolini said.

"AI-fuelled ransomware-as-a-service platforms may empower even novice hackers to unleash polymorphic malware that mutates on the fly or deploys deepfake videos to blackmail executives. The victim count of these attacks could explode, as attackers scale high-volume operations against third-party vendors. Extortion tactics may evolve toward insidious data tampering and reputational sabotage, eroding trust in brands overnight." 

Assolini added: "To stay ahead, organisations should invest in threat intelligence and proactive detection, and implement immutable, air-gapped backups. They should also carry out thorough supply chain audits, enforce advanced multifactor authentication, and roll out targeted training to counter AI-enhanced phishing schemes."
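
As one concrete example of an immutable backup, the sketch below writes a backup object with a retention lock on an S3-compatible store using the boto3 SDK. The bucket and key names are placeholders, the bucket is assumed to have Object Lock enabled at creation, and true air-gapping still requires an offline or logically isolated copy.

```python
# Minimal sketch: write a backup object with a retention lock so ransomware
# that later gains the same credentials cannot delete or overwrite it early.
# Assumes an S3-compatible store with Object Lock enabled on the bucket at
# creation time; bucket and key names are placeholders.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
retain_until = datetime.now(timezone.utc) + timedelta(days=30)

with open("full.dump", "rb") as backup:
    s3.put_object(
        Bucket="backups-immutable",              # bucket created with Object Lock enabled
        Key="db/2026-02-05/full.dump",
        Body=backup,
        ObjectLockMode="COMPLIANCE",             # retention cannot be shortened or removed
        ObjectLockRetainUntilDate=retain_until,
    )

print(f"Backup locked until {retain_until.isoformat()}")
```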

APAC refers to the Asia Pacific region, and LLM is an acronym for large language model. SaaS stands for software-as-a-service.

Explore AI cybersecurity in 2026

The state of play 

Ensuring the show goes on  

Deepfakes to dominate 

A broader attack surface with agentic AI 

Hashtag: #2026Predictions
