Check Point Software Technologies, a cybersecurity solutions provider, has launched its inaugural AI Security Report at RSA Conference 2025. The report explores how cybercriminals are weaponising AI, alongside providing insights for defenders to stay ahead.
According to Check Point, AI has erased the line between truth and deception in the digital world. Cybercriminals now wield generative AI and large language models (LLMs) to obliterate trust in digital identity. In today’s landscape, the company said, what you see, hear, or read online can no longer be taken at face value. AI-powered impersonation bypasses even the most sophisticated identity verification systems, making anyone a potential victim of deception at scale.
"The swift adoption of AI by cybercriminals is already reshaping the threat landscape,” said Lotem Finkelstein, Director of Check Point Research.
“While some underground services have become more advanced, all signs point toward an imminent shift: the rise of digital twins. These aren’t just lookalikes or soundalikes, but AI-driven replicas capable of mimicking human thought and behaviour. It’s not a distant future; it’s just around the corner.”
Key report highlights:
At the heart of these developments is AI’s ability to convincingly impersonate and manipulate digital identities, dissolving the boundary between authentic and fake. The report uncovers four core areas where this erosion of trust is most visible:
Figure: Deepfake automation maturity levels, with a red checkmark marking capabilities already available on the market and exploited in the wild. Source: Check Point 2025 AI Security Report.
- AI-enhanced impersonation and social engineering: Threat actors use AI to generate realistic, real-time phishing emails, audio impersonations, and deepfake videos. Notably, attackers recently mimicked Italy’s defence minister using AI-generated audio, demonstrating that no voice, face, or written word online is safe from fabrication.
- LLM data poisoning and disinformation: Malicious actors manipulate AI training data to skew outputs. A case involving Russia’s disinformation network Pravda showed AI chatbots repeating false narratives 33% of the time, underscoring the need for robust data integrity in AI systems.
- AI-created malware and data mining: Cybercriminals harness AI to craft and optimise malware, automate distributed denial of service (DDoS) campaigns, and refine stolen credentials. Services like Gabbers Shop use AI to validate and clean stolen data, enhancing its resale value and targeting efficiency.
- Weaponisation and hijacking of AI models: From stolen LLM accounts to custom-built dark LLMs like FraudGPT and WormGPT, attackers are bypassing safety mechanisms and commercialising AI as a tool for hacking and fraud on the dark web.
The report emphasises that defenders must now assume AI is embedded within adversarial campaigns. To counter this, organisations should adopt AI-aware cybersecurity frameworks, including:
- AI-assisted detection and threat hunting: Leverage AI to detect AI-generated threats and artifacts, such as synthetic phishing content and deepfakes (a simplified sketch follows this list).
- Enhanced identity verification: Move beyond traditional methods and implement multilayered identity checks that account for AI-powered impersonation across text, voice, and video, recognising that trust in digital identity is no longer guaranteed.
- Threat intelligence with AI context: Equip security teams with the tools to recognise and respond to AI-driven tactics.
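To make the first recommendation concrete, here is a minimal, hypothetical sketch of AI-assisted phishing detection: a supervised text classifier that scores incoming messages and routes likely phishing to analysts. The report does not prescribe any particular tooling; the scikit-learn pipeline, the toy training samples, and the 0.8 review threshold below are illustrative assumptions, not Check Point's method.

```python
# Minimal sketch: flagging suspected AI-generated phishing text with a
# simple supervised classifier. The training samples are invented
# placeholders; a real deployment would use a large labelled corpus and
# far richer signals (headers, URLs, sender reputation, attachments).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = phishing, 0 = benign.
texts = [
    "Your account has been suspended. Verify your identity immediately.",
    "Urgent: confirm your payment details to avoid service interruption.",
    "Hi team, attaching the slides from this morning's standup.",
    "Lunch on Thursday? The usual place works for me.",
]
labels = [1, 1, 0, 0]

# TF-IDF features (unigrams and bigrams) feeding logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score an incoming message; route high-probability hits to analysts.
incoming = "Final notice: verify your credentials now to keep access."
prob = model.predict_proba([incoming])[0][1]
print(f"phishing probability: {prob:.2f}")
if prob > 0.8:  # assumed review threshold, tuned per organisation
    print("flag for analyst review")
```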
"In this AI-driven era, cybersecurity teams need to match the pace of attackers by integrating AI into their defences," said Finkelstein.
"This report not only highlights the risks but provides the roadmap for securing AI environments safely and responsibly."
The full AI Security Report 2025 is available for download at https://engage.checkpoint.com/2025-ai-security-report
Hashtag: #RSA2025