2026 will expose the limits of traditional cybersecurity, says Dmitry Volkov, CEO, Group-IB.
"The cybersecurity landscape in 2026 will be defined not by new vulnerabilities, but by adversaries' accelerating ability to weaponise AI," he said.
"From Group-IB's vantage point tracking threat actor operations globally, we are observing a fundamental shift: attackers are embedding AI into every stage of their operations, compressing timelines, scaling capabilities, and adapting faster than traditional defences can respond.
"The imperative is clear: defences must evolve at the same pace as AI-enabled adversaries, or risk facing automated attacked that operate faster than human-speed detection and response can counter."
"2026 will test every assumption about how we defend, recover, and adapt to today’s evolving threat landscape. AI is now both the weapon and the shield, and the line between IT and business risk has disappeared," warned Carl Windsor, CISO, Fortinet in the same vein.
Windsor pointed out that generative AI (gen AI) technology is democratising technological change. "Every department is leveraging AI to enhance efficiency, facilitate better decision-making, and deliver more personalised experiences for customers. However, this brings with it some new risks," he said, highlighting:
Lack of transparency: "Many AI models are opaque, making it difficult to interpret how the system arrived at its decision, which can create accountability and compliance challenges," he said.
Privacy and data misuse: "AI requires large, often sensitive datasets to be uploaded to cloud-based systems. If teams are not adequately trained on the risks, this could result in the leaking of sensitive personal information or intellectual property, leading to privacy violations or regulatory breaches," Windsor noted.
Security vulnerabilities:
Adversarial attacks: The subtle manipulation of input data to trick models into making incorrect predictions.
Model inversion and extraction: Model queries enable attackers to reconstruct sensitive training data or to clone the model itself, such as extracting personal faces from a facial recognition AI.
Data poisoning: The manipulation of data to force it to generate incorrect predictions.
Large language model (LLM) prompt injection: The circumvention of guardrails by embedding hidden instructions in text or websites that cause AI systems to ignore
safety rules or leak data.
![]() |
| Source: Cohesity. Murthy. |
Agentic attacks
Agentic AI could well be used for cyberattacks. Vasu Murthy, Chief Product Officer, Cohesity, forecast that agentic AI will autonomously operate ransomware-as-a-service (RaaS) platforms in 2026.
"AI will accelerate the scale and efficiency of attacks, building on the surge of AI-enabled cybercrime observed in 2025.
"As ransomware becomes easier to launch and more complicated to contain, new safeguards and legislative frameworks will be essential to protect businesses and consumers from AI-driven extortion," he said.
"Ransomware has evolved beyond mass exfiltration to targeted extortion schemes, with RaaS operating as a structured business model complete with developers, affiliates, and specialised tooling," agreed Volkov.
"Organisations cannot afford to ignore what is changing: the integration of AI agents into these operations: rapid encryption, automatic backup destruction, lateral movement, and disabling endpoint detection and response (EDR) solutions.
"AI-driven agents will compress attack timelines from days to hours, giving even low-skilled RaaS affiliates access to advanced automated capabilities."
"As AI agents interact more, there’s a risk of coordination or collusion, swarm attacks, and emergent vulnerabilities. These threats are sometimes not covered by traditional cybersecurity frameworks," Windsor added.
He also pointed out that agentic AI can enable multiple agents to query one another and take actions without human intervention. "As the use of this technology increases, the security of the agents’ non-human identity (NHI) becomes crucial, as a weakness in the identity of one agent could lead to a cascading vulnerability," he said.
"There have already been multiple breaches of AI LLMs. 2026 will see this increase in both volume and also severity as AI accesses more and more sensitive data, and agent-to-agent communication is allowed without considering the identity and security implications," he cautioned.
Chaim Mazal, Chief AI and Security Officer at Gigamon, said: “As adversaries weaponise AI to evade detection, security leaders must respond with equal force. The priority now is twofold: to gain real-time visibility into the growing volume of AI-driven network traffic and to establish clear governance over how AI is adopted within the enterprise.
![]() |
| Source: Kyndryl. Dr Nanduri. |
"As AI workloads expand, CISOs are grappling with rising data volumes,
hybrid cloud complexity, and visibility gaps that leave organisations
exposed. In Singapore, these concerns are particularly evident: more than half (54%)
of security and IT leaders remain hesitant to deploy AI in the public
cloud, citing risks around intellectual property protection."
"Many
are now turning to packet-level data paired with metadata as the
foundation for restoring visibility, strengthening defences, and
ensuring AI tools operate on trusted information," Mazal added.
Securing the AI ecosystem is the next frontier for enterprises in the Asia-Pacific region (APAC), Kyndryl agreed. "Recent cyberattacks across the region have shown that adversaries are no longer just exploiting software vulnerabilities," said Dr Vishnu Nanduri, Head of AI Innovation, Kyndryl ASEAN & Korea.
"They are probing AI systems, manipulating training data and targeting identity layers. This requires organisations to move beyond perimeter defense and start securing the intelligence that now drives their business. Our role is to help customers modernise their infrastructure and implement security-by-design so they can comply while still innovating at speed."
![]() |
| Source: Veeam. Sia. |
"In a crisis, AI-driven insights can dramatically shorten the time from detection to response, enabling business leaders to act decisively."
Palo Alto Networks called 2026 the 'year of the defender' and said AI-driven defences will tip the scale in the favour of defence, driving down response times, reducing complexity and increasing visibility when it comes to responding to cyberattacks.
Wendi Whitmore, Chief Security Intelligence Officer at Palo Alto Networks said: "AI adoption is redefining cybersecurity risk, yet the ultimate opportunity is for defenders. While attackers utilise AI to scale and accelerate threats across a hybrid workforce, where autonomous agents outnumber humans by 82:1, defenders must counter that speed with intelligent defence. This necessitates a fundamental shift from a reactive blocker to a proactive enabler that actively manages AI-driven risk while fuelling enterprise innovation."
Explore AI cybersecurity in 2026
The attack and defence playbook
Ensuring the show goes on
Deepfakes to dominate
A broader attack surface with agentic AI
Hashtag: #2026Predictions



No comments:
Post a Comment