
Wednesday, 21 August 2024

Not enough being done about deepfakes: iProov

The risk of deepfakes is rising, with almost half of organisations (47%) having encountered a deepfake and seven in 10 (70%) believing deepfake attacks created using generative AI tools will have a high impact on their organisations. This is according to a new global survey* of technology decision-makers from iProov, a provider of science-based biometric identity solutions.

Yet perceptions of AI remain positive. Even though two thirds of organisations (68%) believe that it’s impactful at creating cybersecurity threats, 84% also say it’s instrumental in protecting against them. The iProov research also found three quarters of solutions being implemented to address the deepfake threat are biometric solutions. 

The Good, The Bad, and The Ugly gathered the opinions of technology decision makers from the UK, US, Brazil, Australia, New Zealand and Singapore on the threat of generative AI and deepfakes, which can be used for defamation, reputational damage and financial fraud. 

Almost three quarters (73%) of organisations are implementing solutions to address the deepfake threat but confidence is low, with the study identifying an overriding concern that not enough is being done by organisations to combat deepfakes. 

The survey shows organisations recognise that deepfakes are a real and present threat. Over six in 10 (62%) worry their organisation isn't taking the threat of deepfakes seriously enough. This is an issue when deepfakes can be used to impersonate individuals remotely in order to gain unauthorised access to systems or data, initiate financial transactions, or deceive others into sending money on the scale of the recent Hong Kong deepfake scam**. 

“We’ve been observing deepfakes for years but what’s changed in the past six to 12 months is the quality and ease with which they can be created and cause large scale destruction to organisations and individuals alike,” said Andrew Bud, founder and CEO, iProov. 

“Perhaps the most overlooked use of deepfakes is the creation of synthetic identities which because they’re not real and have no owner to report their theft go largely undetected while wreaking havoc and defrauding organisations and governments of millions of dollars.” 

“And despite what some might believe, it’s now impossible for the naked eye to detect quality deepfakes. Even though our research reports that half of organisations surveyed have encountered a deepfake, the likelihood is that this figure is a lot higher because most organisations are not properly equipped to identify deepfakes. 

"With the rapid pace at which the threat landscape is innovating, organisations can’t afford to ignore the resulting attack methodologies and how facial biometrics have distinguished themselves as the most resilient solution for remote identity verification,” Bud added.

Regionally, more Asia-Pacific (APAC; 51%) than North American (34%) organisations said they had encountered a deepfake. APAC organisations are also the most likely (81%) to believe deepfake attacks will have an impact on their organisation. 

Password breaches (64%) still top the list of threat-related concerns, followed by ransomware (63%), with phishing/social engineering attacks and deepfakes tied for third place (61%).

Biometrics have emerged as the solution of choice by organisations to address the threat of deepfakes. Organisations stated that they are most likely to use facial and fingerprint biometrics though the type of biometrics used can vary based on tasks. For example, the study found organisations consider facial recognition to be the most appropriate additional mode of authentication to protect against deepfakes for account access/logins, changing personal details on accounts, and typical transactions.

Source: iProov survey. Negative outcomes from being duped by deepfakes for identity fraud. Chart showing the loss of sensitive data is the most feared consequence of a deepfake-driven identity fraud attack.

The study revealed that organisations view biometrics as a specialist area of expertise with nearly all (94%) agreeing a biometric security partner should be more than just the seller of a software product. Organisations surveyed stated that they are looking for a solution provider that evolves and keeps pace with the threat landscape with continuous monitoring (80%), multi-modal biometrics (79%), and liveness detection (77%) all featuring highly on their requirements to adequately protect biometric solutions against deepfakes. Liveness refers to the assurance that the biometric element presented comes from a live person and not a recreation.

Other findings include that 17% of organisations have not increased their budgets for programmes that address the risks of AI, while most have introduced policies on the use of new AI tools.

*The Good, The Bad, and The Ugly survey was developed in collaboration with Hanover Research. Five hundred global respondents were recruited across industries including banking, e-commerce, finance and accounting, healthcare/medical, hospitality, insurance, retail, telecommunications, and travel. This was done via a third-party panel provider, and the survey was administered online in spring 2024. Respondents were professionals in IT, operations, network security, cybersecurity, digital experience, risk management, or product management departments with primary decision-making responsibility for the selection and purchase of cybersecurity solutions for their organisation.

**In February 2024, a multinational firm was duped into paying out US$25 million after scammers used deepfakes of senior executives in communications requesting the money.
