01 February, 2026

AI cybersecurity in 2026: deepfakes to become more menacing

Deepfakes will become even more sophisticated in 2026.

The line between legitimate and fraudulent AI-generated content will become increasingly blurred, Kaspersky has warned. "AI can already produce well-crafted scam emails, convincing visual identities, and high-quality phishing pages. At the same time, major brands are adopting synthetic materials in advertising, making AI-generated content look familiar and visually 'normal'," the company shared in a list of 2026 predictions. 

"As a result, distinguishing real from fake will become even more challenging, both for users and for automated detection systems."

"Companies are increasingly discussing the risks of synthetic content and training employees to reduce the likelihood of falling victim to it. As the volume of deepfakes grows, so does the range of formats in which they appear. At the same time, awareness is rising not only within organisations but also among regular users: end consumers encounter fake content more often and better understand the nature of such threats," Kaspersky elaborated, calling deepfakes "a stable element of the security agenda, requiring a systematic approach to training and internal policies."

More convincing deepfakes

Kaspersky noted that real-time face and voice swapping technologies are improving, but their setup requires more advanced technical skills. "Wide adoption is unlikely, yet the risks in targeted scenarios will grow: increasing realism and the ability to manipulate video through virtual cameras make such attacks more convincing," the company said. 

In the mid-range, however, deepfakes are becoming democratised. "At the same time, content generation tools are becoming easier to use: even non-experts can now create a mid-quality deepfake in just a few clicks," Kaspersky noted. 

Faking at scale

Gregor Steward. Source: SentinelOne.

"In 2026, the smartest enterprises will move beyond single-layer defences against deepfakes. The technology to replicate someone's identity in video, doing practically anything, should concern every CISO – especially when you consider voice and video communications are already highly compressed signals that make sophisticated fakes increasingly difficult to distinguish," said Gregor Steward, Chief AI Officer at SentinelOne. 

"What many security teams don't yet grasp is that sophisticated attackers can iterate indefinitely at minimal cost, refining their approach until they succeed. When detection systems reject fakes, they inadvertently provide valuable signals that help attackers refine their methods." 

Disinformation

"There have been many cases of disinformation being used to unduly influence people," noted Carl Windsor, CISO, Fortinet.

"The power of AI takes this to a new level with services such as OpenAI DALL-E and Sora 2, which make the creation of almost indistinguishable audio, images, and videos trivial."

Windsor predicted that deepfake services will take business email compromise (BEC) and social engineering to a whole new level. "The use of AI-generated audio has already been observed in extortion attempts, but in 2026, we expect organisations to face an onslaught of audio- and video-generated content used for BEC, phishing, and other targeted attacks," he said. 

Deception breaks new ground in 2026, said Palo Alto Networks. In a list of 2026 predictions, the company said that "identity will become the primary battleground as flawless, real-time AI deepfakes — or CEO doppelgängers — make forgery indistinguishable from reality". 

"This threat is magnified by autonomous agents and a staggering 82:1 machine-to-human identity ratio, creating a crisis of authenticity where a single forged command triggers a cascade of automated actions. As trust breaks down, identity security must transform from a reactive safeguard into a proactive enabler for the enterprise, securing every human, machine and AI agent," the cybersecurity provider said. 

Steward said the path forward will require combining detection tradecraft (applied carefully, to avoid enabling attackers) with out-of-band verification methods: additional factors that exist outside the communication channel itself. "We're seeing early signals of this in consumer technology like iOS Contact Key Verification, and enterprises will need to follow suit," he said. Contact Key Verification uses cryptographic key comparison to confirm that the people you message are who they claim to be.
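To make the out-of-band idea concrete, here is a minimal, illustrative sketch of the general principle: derive a short fingerprint from a contact's public key, then compare it against a fingerprint exchanged over a separate trusted channel (read aloud on a phone call, or confirmed in person). This is not Apple's Contact Key Verification protocol; the function names and fingerprint format are invented for illustration.

```python
import hashlib
import hmac

def fingerprint(public_key_bytes: bytes) -> str:
    """Derive a short, human-comparable fingerprint from public key material."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Take the first 16 hex characters and group them into 4-character
    # blocks so the fingerprint is easy to read aloud over a trusted channel.
    return "-".join(digest[i:i + 4] for i in range(0, 16, 4))

def verify_out_of_band(local_key: bytes, trusted_fingerprint: str) -> bool:
    """Compare the locally computed fingerprint against one obtained
    out-of-band. A constant-time comparison avoids leaking how many
    leading characters matched."""
    return hmac.compare_digest(fingerprint(local_key), trusted_fingerprint)

if __name__ == "__main__":
    key = b"example-public-key-material"   # placeholder, not a real key
    fp = fingerprint(key)
    print(fp)
    print(verify_out_of_band(key, fp))                       # matches
    print(verify_out_of_band(key, "0000-0000-0000-0000"))    # does not match
```

The security property comes from the second channel, not the code: a deepfaked video call cannot forge a fingerprint that was confirmed through an independent, pre-established path.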

"Detection remains a critical element of defence, but the organisations that will stay ahead in 2026 are those recognising that deepfakes require a fundamentally different approach to identity verification across the enterprise."

Explore AI cybersecurity in 2026

The state of play 

The attack and defence playbook 

Ensuring the show goes on  

A broader attack surface with agentic AI 
