Generative AI (gen AI) has changed the playing field not just for businesses, but also for cyberdefence. Industry observers agree that cyberthreats are becoming more sophisticated, especially with the help of AI.
"Attackers are leveraging advanced AI algorithms to automate their attack processes, making them more efficient, scalable, and difficult to detect. These AI-driven attacks can adapt in real time, learning from the defenses they encounter and finding innovative ways to bypass them.
"Ransomware attacks are evolving into more targeted campaigns as cybercriminals focus on critical infrastructure and high-value targets, aiming to inflict maximum damage and, in turn, demand exorbitant ransoms," said Merium Khalid, Director, SOC Offensive Security, Barracuda.
Jumio's Frederic Ho, VP of Asia Pacific, also said businesses have to be on their guard. "Easy access to AI has empowered fraudsters. To stay ahead, we will see more organisations tapping onto AI solutions in the fight against AI-driven cyberthreats," he said.
Source: Jumio. Ho.
"Businesses must look to implement multimodal, biometric-based identity verification systems that can detect deepfakes and thwart the misuse of stolen personal credentials. This enables them to fortify their defences against sophisticated scams, ensuring highest levels of security while cultivating digital trust in this evolving age of disinformation."
"AI is marking a turning point in the evolution of cyberattacks, as this type of technology allows threats to be more frequent, faster, and more effective. Techniques such as deepfake are managing to reliably impersonate relevant identities and companies to steal information, phishing attacks are becoming more convincing, and new variants of ransomware and malware are developing rapidly and more cost-effectively. As cybercriminals' techniques progress rapidly, cybersecurity is also using AI to refine its defensive methods to keep pace," noted Teong Eng Guan, Regional Director, Southeast Asia & Korea, Check Point Software Technologies.
Weak AI is weak
In its 2024 predictions, BeyondTrust shared in a blog post that weak AI, also called narrow AI because it focuses on specific, narrow tasks, could give cybercriminals an edge in niche areas such as discovering vulnerabilities and evading detection.
Strong AI, sometimes called artificial general intelligence (AGI) or artificial super intelligence (ASI), offers a broader and more human-like intelligence, BeyondTrust said, and could be used by cybercriminals to conduct entire cyberattacks autonomously, the company warned.
In a blog post, Morey J. Haber, Chief Security Officer; Christopher Hills, Chief Security Strategist; and James Maude, Director of Research at BeyondTrust, said: "Strong AI will also allow a single threat actor to act as a large group. This will supplant the technical skills once provided by other humans, while, at the same time, giving the attacker a competitive advantage in speed and scale to capitalise on the black market against legacy, human-only threat actors."
Faking it
One intrusion method, social engineering attacks, will become more sophisticated with gen AI, Lorri Janssen-Anessi, Director, External Cybersecurity Assessments, BlueVoyant, warned. "Generative AI tools will enable attackers to create more personalised and craftier approaches, more frequently and with greater success," she said.
Trend Micro has also identified this vulnerability. "The widespread availability and improved quality of gen AI, coupled with the use of generative adversarial networks (GANs), are expected to disrupt the phishing market in 2024," the company said in its 2024 predictions list.
"This transformation will enable cost-effective creation of hyper-realistic audio and video content—driving a new wave of business email compromise (BEC), virtual kidnapping, and other scams."
Source: Veritas Technologies. Dr Purser.
Dr Joye Purser, Field Chief Information Security Officer at Veritas Technologies, listed specific examples of the dangers. "One risk we can expect to see more of is threat actors feeding disinformation into AI and machine learning technologies causing such tools to misbehave, mislead, and become disruptive through misinformation," she said.
"This is an area that needs great thought into how to provide protection. We also need to adapt our own human behaviours to the vagaries of AI. For now, every output needs human verification – we must ask ourselves, ‘does this look right?’, until we get to the point when we know the output is accurate and can be trusted."
Fortinet highlighted that cybercriminals will pair gen AI with major news events to fuel attacks. "Looking ahead, we expect to see attackers take advantage of more geopolitical happenings and event-driven opportunities, such as the 2024 US elections and the Paris 2024 games. While adversaries have always targeted major events, cybercriminals now have new tools at their disposal—generative AI in particular—to support their activities," said Derek Manky, Global VP Threat Intelligence at Fortinet.
As for cybercriminals exploiting AI to create more sophisticated forms of attack, Dr Purser said: "Given AI’s dual nature as a force for both good and bad, the question going forward will be whether organisations’ AI protection can outpace hackers’ AI attacks."
Source: Ensign InfoSecurity. Teo.
"Known attack codes can also be rewritten now to mislead detection systems. The threat from gen AI will only increase as threat groups start standardising their tactics and procedures, including self-evolving malware and attack variants. The result? A surge in phishing attacks."
Deepfakes were also flagged as a concern in the era of gen AI by Check Point's Teong, who noted that deepfake techniques "are managing to reliably impersonate relevant identities and companies to steal information".
Source: Trend Micro. Jain.
Nilesh Jain, VP, Southeast Asia & India, Trend Micro, suggested that virtual kidnappings would see a surge due to deepfakes. He also observed that AI-enabled deepfakes "cannot be both cheap and convincing".
"Hence, it’s more likely that voice cloning will be used in near-future scams and in a targeted way, rather than in volume-based attacks," he predicted.
DataStax highlighted the dangers of large language models (LLMs), including their role in deepfakes. "Just like the dark web, there will be dark LLMs. There is every opportunity for automated agents, powered by open-source and uncensored LLMs, to be used for harmful attacks. Uncensored LLMs, potentially even coming from bad-actor states, can potentially be used for everything from financial fraud and organised crime to bioweapons and terrorism," the company observed in a list of 2024 predictions.
"LLMs
also pose a new risk of being exploited by attackers through things
like highly realistic and personalised phishing emails and deepfake
videos, agents that automate illegal financial activity, or even the
surfacing of detailed plans and procedures for illegal or terrorist
activity.
"But it’s not all doom and gloom. LLMs can also be
harnessed for cybersecurity protection and for thwarting bad actors.
We’ll likely see innovations in this area moving into 2024 as well."
Trend Micro also pointed out that AI models can be vulnerable. "While gen AI and LLM datasets are difficult for threat actors to influence, specialised cloud-based machine learning models are a far more attractive target," the company said in its 2024 predictions.
"The more focused datasets they are trained on will be singled out for data poisoning attacks with various outcomes in mind—from exfiltrating sensitive data to disrupting fraud filters and even connected vehicles. Such attacks already cost less than US$100 to carry out."
Insecure code
AI assistants will introduce security vulnerabilities into code, BeyondTrust predicted. "As developers continue to adopt tools designed to make their lives easier and increase their productivity, we will see source code being sent to cloud services that may be unsecure, and this will result in source code risks. Increased use of these tools will also start to introduce unintentional, AI-generated vulnerabilities and misconfigurations into software products," Haber, Hills, and Maude explained.
"Generative AI models being trained on online code examples that contain mistakes will cause machine error rather than human error to be the cause of software vulnerabilities."
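To make that risk concrete, here is the classic pattern such tools can reproduce from flawed public examples: user input interpolated straight into a SQL query. A hedged sketch (the schema and queries are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # The pattern assistants often echo from public code:
    # user input interpolated into the query string -> SQL injection.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterised query: the driver handles escaping.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

# An attacker-controlled value that widens the unsafe query to every row.
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns all users
print(find_user_safe(payload))    # returns nothing
```

A generated suggestion containing the first function compiles and passes casual review, which is exactly why BeyondTrust expects machine error to become a source of vulnerabilities.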
Swarms
Sally Vincent, Senior Threat Research Engineer, LogRhythm, said that businesses should brace themselves for a surge in AI-enhanced botnets. "In 2024, the symbiosis between AI and botnets will witness a significant surge. The convergence of AI capabilities will empower the proliferation and sophistication of botnets, amplifying their potency to orchestrate complex cyberthreats," she said.
"AI-powered botnets will exploit advanced algorithms to expand their reach and impact, intensifying the challenges faced by cybersecurity. This alarming trend will necessitate innovative defense strategies and heightened vigilance to counter the escalating threat posed by botnets, reshaping the landscape of digital security measures."
Speaking Asian languages
Oakley Cox, Analyst Technical Director – Generative AI for Darktrace, said that gen AI will let attackers phish across language barriers. "For decades, the majority of cyber-enabled social engineering, like phishing, has been carried out in English. The language is used by millions across North America and Europe and dominates business operations in large swathes of the rest of the world. As a result, leveraging local languages is not worth the effort for cybercriminals when English can do the job just fine," he explained.
"This has made APAC a relative safe haven. The diversity of local languages has restricted the extent to which hackers can target the region. Employees know to look out for phishing emails written in English, but are complacent when receiving emails written in their local language. With the introduction of generative AI, the barrier to entry for composing text in foreign languages has dropped dramatically.
"At Darktrace, we have already observed the increased complexity of English language use in phishing attacks. Now we can expect attackers to add new language capabilities which were previously viewed as too complex to be worth the effort, including Mandarin, Japanese, Korean and Hindi."
Cox said foreign language phishing emails are likely to reward cybercriminals well. "Email security solutions trained using English-language emails are unlikely to detect local language attacks, and the emails will land in the inboxes of those who are not used to receiving social engineering attempts in their native language," he warned.
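Cox's warning is easy to see in miniature: a filter keyed only to English phishing vocabulary scores nothing on the same lure in another language. A deliberately toy sketch, not representative of any real email security product (the keyword list and sample lures are invented):

```python
# Toy filter keyed to English phishing vocabulary only.
ENGLISH_PHISH_TERMS = {"urgent", "verify", "account", "suspended", "password"}

def phish_score(text: str) -> int:
    """Count English phishing keywords; real products are far richer."""
    words = text.lower().split()
    return sum(1 for w in words if w.strip(".,!:;") in ENGLISH_PHISH_TERMS)

english_lure = "URGENT: verify your account password or it will be suspended!"
# The same lure rendered in Japanese (illustrative translation).
japanese_lure = "緊急:アカウントのパスワードを確認してください。さもないと停止されます。"

print(phish_score(english_lure))   # 5: flagged by the toy filter
print(phish_score(japanese_lure))  # 0: sails past an English-only filter
```

Modern detectors rely on far more than keywords, but models and analysts trained overwhelmingly on English traffic face a version of the same blind spot.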
Eric Skinner, VP of Market Strategy at Trend Micro, went into more detail: "Advanced LLMs, proficient in any language, pose a significant threat as they eliminate the traditional indicators of phishing such as odd formatting or grammatical errors, making them exceedingly difficult to detect," he said.
"Businesses must transition beyond conventional phishing training and prioritise the adoption of modern security controls. These advanced defences not only exceed human capabilities in detection but also ensure resilience against these tactics."Shadow AI
The introduction of unsanctioned AI tools into the corporate environment by employees poses tracking challenges for security teams, said CrowdStrike in a list of 2024 predictions. "These blind spots and new technologies open the door to threat actors eager to infiltrate corporate networks or access sensitive data," the company said.
"Critically, as employees use
AI tools without oversight from their security team, companies will be
forced to grapple with new data protection risks. Corporate data that is
input into AI tools isn’t just at risk of threat actors targeting
vulnerabilities in these tools to extract data, the data is also at risk
of being leaked or shared with unauthorised parties as part of the
system’s training protocol."
"2024 will be the year when
organisations will need to look internally to understand where AI has
already been introduced into their organisations (through official and
unofficial channels), assess their risk posture, and be strategic in
creating guidelines to ensure secure and auditable usage that minimises
company risk and spend but maximises value," CrowdStrike predicted.
"At the same time, more organisations will look into investing in a comprehensive cloud-native application protection platform (CNAPP) to fend off adversaries looking to exfiltrate cloud-based data and ensure AI models running in the cloud are not exploited for malicious purposes."
AI will also affect cyberdefence. NTT predicted that AI-enabled dark network operations centres (NOCs), highly automated NOCs, will appear.
"With the speed at which AIOps has advanced, the idea of a completely automated, lights-out network operations centre is quickly becoming an ideal. Over the next 12 months, networking companies will further embed AIOps into their broader operations to improve network quality, support engineers, and modernise infrastructures," noted NTT in a list of 2024 predictions.
The company also suggested that networking specialists must understand "where automation helps and where human talent is still an essential part of the networking function". "While automation lies at the heart of a ‘dark NOC’, human talent will be key to making it a success. Network providers will need to focus on upskilling, as well as ensuring they have made the necessary preparations from a technological standpoint – from standardising APIs to optimising data processes," NTT said.
Mohan Veloo, VP Solutions Consulting, Zscaler, said AI will play various roles in cybersecurity, from providing a better view of risk and delivering visualisations of that risk to determining what to work on first.
"In much the same way that AI is helping discover and classify data, enterprises will increasingly use AI to visualise and quantify risk across their entire footprint. This includes gaining comprehensive
insights and risk scoring across their attack surface and across their business entities—including their workforce, applications, assets, and third parties," he said.
"Similarly, enterprises will leverage AI to gain top-down and board-level visualisations of their risk to uncover and drill down into their top contributing factors to risk, including the ability to quantify the
financial impact of exposures. This helps enterprises make informed decisions on which threats and vulnerabilities to address first."
"Finally, enterprises will seek AI tools that allow them to automatically gain prioritised security actions and policy recommendations, which are tied to their key risk drivers and which quantifiably improve the security of their organisation," Veloo concluded.
Amir Sohrabi, Regional VP & Head of Digital Transformation, Emerging EMEA and Asia, SAS, said that employees will use tools like ChatGPT and Bard whether there are restrictions or not, as they offer enormous productivity benefits.
"CIOs in the region will need to grapple with this and study mitigation measures that are in line with their organisation’s risk tolerance. Ultimately, well-intentioned employees will turn to these tools to be more efficient, and CIOs can ensure this doesn't jeopardise the organisation by proactively taking charge of the situation," he said.
AI literacy
Dr Robert Blumofe, Executive VP and CTO, Akamai, said that building AI literacy will be critical for the future of the Internet. "Asia Pacific has often led the globe when it comes to embracing new technologies, and generative AI has been no exception," he said.
"But as generative AI tools like DALL-E and ChatGPT become more pervasive and common, organisations will also need to invest in AI literacy to maintain customer trust. AI will make it harder to verify what is real and what is fake. Asia Pacific has the opportunity to lead the way here, too, and ensure deepfakes don’t undermine legitimate businesses and help everyone separate fact from fiction."
SailPoint's Boey said there is a need to "transition away from human-driven identity management, as the sheer volume of identities to manage has surpassed human capacity, further emphasising the need for more automated and advanced security measures".
"Businesses equipped with AI-driven identity solutions will be able to analyse vast amounts of data to detect patterns indicative of potential threats. This intelligent automation of access permissions ensures that all digital identities including contract workers, third-parties and non-humans will only have access to necessary resources – promptly revoking access privileges when no longer required.
"This agile response capability enables businesses to address emerging risks swiftly, reducing the likelihood of data breaches and other security incidents. Moreover, businesses can attain the trifecta of speed, automation, and flexibility," he said.
An industry response
Things are getting so bad that the industry will take matters into its own hands, Trend Micro predicted. "In [2024], the cyber industry will begin to outpace the government when it comes to developing cybersecurity-specific AI policy or regulations," said Greg Young, VP of Cybersecurity at Trend Micro.
"The industry is moving quickly to self-regulate on an opt-in basis."
Dr Blumofe also called for a reset on the way businesses view AI-related security, calling 2024 "the year of AI security snake oil". "Cybercriminals have been quick to adopt generative AI tools to advance their goals. Organisations are, rightfully, racing to ensure assets are protected. The solution to this increase in cyberthreats won’t be generative AI-enhanced security," he said.
"But that fact won’t stop startups from claiming that they have used gen AI to create a security silver bullet. While AI, particularly deep learning, will always have a place in solving security challenges, organisations will be better served by avoiding the AI panic and ensuring any security solutions help them optimise the security basics - identity, visibility, Zero Trust access, and microsegmentation. Security basics done really well will continue to be the best way to protect assets from the threats we
know and the ones we aren’t yet aware of."