Wednesday, 2 January 2019

How AI changes the security landscape in 2019

Source: RSA. Nigel Ng.
Deploying artificial intelligence (AI) and machine learning for cybersecurity is an attractive proposition. Such advanced technologies can help make more sense of the security situation and can identify suspicious behaviour without requiring malware signatures. But that's just one side of the story.

AI in cyberdefence

“From a security technology perspective, the focus continues to be on enhancing detection and response by gaining more visibility. User and entity behaviour analytics (UEBA), machine learning and AI-powered technology will witness more adoption, empowering organisations to detect faster and respond more efficiently,” said Nigel Ng, VP, International, RSA.

Source: Darktrace. Andrew Tsonchev.
According to Ng, machine learning can analyse large volumes of transactional data from online users to detect abnormal deviations and limit online fraud. UEBA, meanwhile, learns patterns of user behaviour in the enterprise access world and understands what good looks like, so that when behaviour steps outside the norm, a step-up action can be automated to protect that user's access.
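The UEBA idea Ng describes, learning a user's norm and flagging departures from it, can be sketched in a few lines of Python. This is a minimal illustration with made-up login hours and a simple standard-deviation rule, not any vendor's implementation:

```python
from statistics import mean, stdev

def build_baseline(login_hours):
    """Learn what 'normal' looks like from a user's historical login times."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag a login whose hour deviates more than `threshold` standard
    deviations from the user's learned norm."""
    mu, sigma = baseline
    return abs(hour - mu) / sigma > threshold

# A user who normally logs in around 9am:
history = [8, 9, 9, 10, 9, 8, 10, 9]
baseline = build_baseline(history)

print(is_anomalous(9, baseline))   # a typical morning login
print(is_anomalous(3, baseline))   # a 3am login would trigger a step-up action
```

A real UEBA product models many signals at once (location, device, resources accessed), but the principle is the same: build a per-user baseline, then automate a response when behaviour falls outside it.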

“Whereas legacy systems depend on preconceptions of known threats, the latest AI can identify threats that have never been seen before, without any level of human pre-programming. By establishing the digital norms of an organisation, the technology can intelligently pinpoint even the most novel and subtle of threats in real time,” Andrew Tsonchev, Director of Darktrace Industrial, pointed out.

“As the cyberskills shortage worsens across Asia and businesses struggle to plug their security deficit with more manpower, AI steps in as the machine defender, able to not only detect threats that are often difficult to spot, but autonomously respond on behalf of humans to isolate threats before they spread... We are already seeing the early signs of AI-driven cyberattacks and organisations in Asia Pacific need to be readying themselves for what is fast becoming a cyber arms race.”

Source: Symantec. Steve Trilling.
Ng stressed that AI and machine learning do not replace human security analysts. “It’s not Skynet*,” he said. “These technologies are there to take away routine tasks, and free up our teams to innovate as time-consuming tasks are reduced. For example, new team members in the security operations centre (SOC) can be recommended a course of action via AI- or machine learning-driven orchestration for similar incidents witnessed previously.”

“Attackers are not the only ones that can use AI systems to probe for open vulnerabilities; defenders can use AI to better harden their environments against attacks. For example, AI-powered systems could launch a series of simulated attacks on an enterprise network over time in the hope that an attack iteration will stumble across a vulnerability that can be closed before it is discovered by attackers,” said Hugh Thompson, Symantec CTO, and Steve Trilling, Senior VP and GM of Security Analytics and Research at Symantec.

Source: Symantec. Hugh Thompson.
“Closer to home, AI and other technologies are also likely to start helping individuals better protect their own digital security and privacy. AI could be embedded into mobile phones to help warn users if certain actions are risky. For example, when you set up a new email account, your phone might automatically warn you to set up two-factor authentication. Over time, such security-based AI could also help people better understand the tradeoffs involved when they give up personal information in exchange for the use of an application or other ancillary benefits.”

AI in cyberattacks

Where AI can defend, it can also attack.

“Automated systems powered by AI could probe networks and systems searching for undiscovered vulnerabilities that could be exploited. AI could also be used to make phishing and other social engineering attacks even more sophisticated by creating extremely realistic video and audio or well-crafted emails designed to fool targeted individuals. In addition, AI can be used to launch realistic disinformation campaigns.

Source: ESET. Lysa Myers.
“For example, imagine a fake AI-created, realistic video of a company CEO announcing a large financial loss, a major security breach, or other major news. Widespread release of such a fake video could have a significant impact on the company before the true facts are understood,” Thompson and Trilling suggested.

Lysa Myers, ESET Senior Security Researcher, is of the same mind. In ESET’s Cybersecurity Trends 2019: The Cost of our Connected World she writes, “While some phishing and other fraud attacks have certainly improved their ability to mimic legitimate sources, many are still painfully obvious fakes. Machine learning could help increase effectiveness in this area.”

Source: Trend Micro. Nilesh Jain.
AI could better anticipate the movements of executives, says Nilesh Jain, VP, SEA and India, Trend Micro. In Trend Micro's Mapping the Future: Dealing with Pervasive and Persistent Threats report, it is predicted that targeted attacks by well-funded threat actors will start to use techniques powered by AI for reconnaissance.

“This will lead to more convincing targeted phishing messages, which can be critical to business email compromise (BEC) attacks. Additionally, it is likely that BEC attacks will target more employees who report to C-level executives, resulting in continued global losses,” he said.

In a BEC attack, a cybercriminal impersonates someone at a company in corporate email in order to dupe another employee, typically into making a fraudulent payment or disclosing sensitive information.

Darktrace is betting on malware that can hide intelligently, thanks to AI. “Traditionally, if you wanted to break into a business it was a manual and labour-intensive process. But AI enables the bad guys to perpetrate advanced cyberattacks, en masse, at the click of a button. We have seen the first stages of this over the last year - advanced malware that adapts its behaviour to remain undetected,” agreed Tsonchev.

Source: CrowdStrike. Michael Sentonas.
When it comes to specific types of AI, Michael Sentonas, VP Technology Strategy at CrowdStrike, said that adversarial machine learning will be the main method used to bypass security products that rely exclusively on AI.

“We foresee that attackers will turn the tables and start leveraging adversarial machine learning in their attacks to bypass security products reliant exclusively on machine learning for detection of malware,” he said.

In adversarial machine learning, an attacker feeds a model inputs, or tampers with its training data, in ways specifically designed to cause it to make mistakes, such as classifying malware as benign.
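To illustrate what such an evasion looks like, here is a deliberately tiny Python sketch: a toy linear "detector" with hypothetical features and weights, and an attacker who nudges a sample's features until the verdict flips. Real adversarial machine learning targets far more complex models, but the principle is the same:

```python
# Toy linear "malware detector": score = w . x; a positive score means malicious.
# The feature vector is entirely hypothetical: [entropy, packed, suspicious_imports]
weights = [0.8, 1.2, 1.0]

def score(features):
    return sum(w * x for w, x in zip(weights, features))

def classify(features):
    return "malicious" if score(features) > 0 else "benign"

sample = [0.9, 1.0, 1.0]          # a sample the detector correctly flags
assert classify(sample) == "malicious"

# Adversarial evasion: repeatedly adjust the feature with the largest
# positive weight, the direction that lowers the score fastest, until
# the detector flips its verdict. The file's actual behaviour need not
# change at all; only what the model sees changes.
evasive = list(sample)
while classify(evasive) == "malicious":
    i = max(range(len(weights)), key=lambda j: weights[j])
    evasive[i] -= 0.5

print(classify(evasive))
```

This is why Sentonas warns against relying *exclusively* on machine learning: a model whose decision boundary can be probed can usually be walked around.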

The duo from Symantec also predicted that attack toolkits for AI-powered attacks would eventually be found on sale. “With such tools automating the creation of highly personalised attacks – attacks that have been labour-intensive and costly in the past – such AI-powered toolkits could make the marginal cost of crafting each additional targeted attack essentially zero,” they said.

Source: Gemalto. Michael Au.
With all these trends in play, it is no wonder that Michael Au, President, South Asia & Japan at Gemalto, is predicting that 2019 will be the year the first AI-orchestrated attack takes down a FTSE 100 company.

“Creating a new breed of AI-powered malware, hackers will infect an organisation's systems using the malware and sit undetected, gathering information about users' behaviours and organisations' systems.

“Adapting to its surroundings, the malware will unleash a series of bespoke attacks targeted to take down a company from the inside out. The sophistication of this attack will be like none seen before, and organisations must prepare themselves by embracing the technology itself as a method of hitting back and fighting fire with fire,” he said.

AI systems as targets

Thompson and Trilling noted that the rising use of AI-powered systems has meant that they have become “promising attack targets, as many AI systems are home to massive amounts of data.”

Source: Hillstone Networks. Tim Liu.
“The fragility of some AI technologies will become a growing concern in 2019. In some ways, the emergence of critical AI systems as attack targets will start to mirror the sequence seen 20 years ago with the Internet, which rapidly drew the attention of cybercriminals and hackers, especially following the explosion of Internet-based commerce,” they said.

Tim Liu, CTO, Hillstone Networks, elaborated on how AI has increased the attack surface. “AI consumes large amounts of heterogeneous data that may come from different sources and have different security and compliance requirements. There is also the question about who has access to the different AI engines and secures the input to these engines,” he said.

“AI can sometimes be very sensitive to input data, and hackers can poison input data so that the engine makes the wrong decisions. Last but not least, the data that comes out is usually critically important and needs to be protected.”
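Liu's point about poisoned input data can be made concrete with a small, hypothetical Python sketch: a naive traffic-volume detector learns its threshold from training data, and an attacker who can write to that training set inflates it so a later exfiltration goes unflagged. The numbers are invented for illustration:

```python
from statistics import mean, stdev

def train_threshold(samples):
    """Fit a simple anomaly threshold: mean plus three standard deviations."""
    return mean(samples) + 3 * stdev(samples)

# Clean training data: typical outbound traffic per host (MB/day, hypothetical)
clean = [10, 12, 11, 9, 10, 11, 12, 10]
clean_threshold = train_threshold(clean)

# An attacker who can poison the training set injects a few inflated
# records, stretching the model's learned notion of "normal"...
poisoned = clean + [80, 85, 90]
poisoned_threshold = train_threshold(poisoned)

# ...so that a later 60 MB exfiltration slips under the poisoned threshold.
attack_volume = 60
print(attack_volume > clean_threshold)     # caught by the cleanly trained model
print(attack_volume > poisoned_threshold)  # missed by the poisoned model
```

The sketch also illustrates Liu's governance point: whoever controls the data feeding an AI engine effectively controls its decisions, so access to training inputs needs to be secured like any other critical asset.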

Explore:

Browse the full list of 2018 round-ups and 2019 predictions in TechTrade Asia

*Skynet is an AI system that is the villain in the Terminator series of movies.
