Sunday, 28 January 2024

Data privacy: inextricably linked to AI governance

Concept art for data privacy generated by Dream.ai.

Data Privacy Day, 28 January, marks a time of reflection on our rights to data privacy and protection. In the age of generative AI (gen AI), AI governance becomes a pressing priority.

In December 2023, the International Association of Privacy Professionals (IAPP), the largest global information privacy community and resource, released the IAPP-EY Survey results on professionalising organisational AI governance. The research found that 57% of privacy functions have acquired additional responsibility for AI governance, and that AI governance was ranked the 2nd-most important strategic priority for privacy functions in 2023.

Additionally, the IAPP-EY Survey found that 60% of organisations have either already established a dedicated AI governance function, or will likely establish one in the next 12 months, but 56% of survey respondents work in organisations lacking an understanding of both the benefits and risks related to AI deployment.

Nigel Ng, Senior VP, Asia Pacific and Japan, Tenable, said: "Every year, we engage in discussions about data privacy, yet annually, the count of individuals falling victim to data breaches continues to rise. While advocating for the crucial right to data privacy is essential, the stark truth is that if hackers can effortlessly pilfer, peruse, and disseminate personal data, the concept of privacy becomes elusive. In essence, privacy cannot be achieved without effective safeguards."

Ng pointed out that cybercriminals can simply compile leaked data from multiple breaches into a comprehensive database on individuals, essentially cross-referencing information for subsequent attacks. 

"For instance, a compromised identity and password set from one service can be employed in 'credential stuffing' including passwords, PINs, social security numbers, place of birth, and more, threat actors can construct a detailed profile of an individual. This aggregated data could provide enough information to bypass security questions designed to safeguard or access bank accounts. The abundance of available data exponentially increases the risk posed by hackers," he said. 

"To prevent threat actors from stealing data, organisations need to get to the root of insufficient security measures. The attack methods employed by threat actors are not sophisticated or unique; rather, they are opportunistic in nature. They exploit various entry points and pathways within environments to cause harm and capitalise on their malicious endeavours. The extensive adoption of cloud computing also introduces heightened levels of vulnerability and management intricacy, providing targets for malicious actors."

The problem with AI

Industry observers agreed that gen AI should be on the agenda for data privacy. Raja Mukerji, Co-founder and Chief Scientist, ExtraHop said: "As a new approach gaining attention across enterprises, concerns about data security and privacy have run rampant. Most enterprises are eager to take advantage of generative AI; however, circumstances like employees uploading sensitive corporate data and IP, the opacity of criteria used to train the model, and lack of governance and regulations introduce new challenges."

Raja Mukerji. Source: ExtraHop.
"During this time of development, enterprises should focus on ways to make generative AI work for their specific needs and protocols. Visibility into AI tools is critical, and enterprises should have solutions in place that monitor how they're being both trained and used while educating employees on best practices for safe and ethical use.

"Investing in systems and processes that grant you this visibility and training will help position generative AI as an aid for productivity in the workplace, and help mitigate data privacy concerns. Eventually, enterprises will be able to take advantage of the opportunity to build their own unique AI tools to better serve their employees, customers, and processes, in a provably secure and repeatable manner."

"We are squarely in the middle of an AI boom, with gen AI promising to take us into a new era of productivity and prosperity. However, despite its vast potential, there remains a lot of trepidation around the technology – particularly around how to use it responsibly. For example, there are risks around violation of data privacy and individual consent when it comes to the data that AI algorithms are trained on," agreed James Fisher, Chief Strategy Officer, Qlik.

"Trust in gen AI – and the data powering it – is key for the technology to be embraced by enterprises. With the risk of misinformation, the use of deepfakes and more, it will take hard work to build this trust. One way to do this is through improving the data that AI is fed – because AI is only as good as its data."

Ajay Bhatia, Global VP & GM of Data Compliance and Governance, Veritas Technologies, said that Data Privacy Day is a reminder that "data privacy isn’t something a business can achieve in a single day at all". "Far from that, it’s a continual process that requires vigilance 24 x 7 x 365. Top of mind this year is the impact AI is having on data privacy," he said.

"AI-powered data management can help improve data privacy and associated regulatory compliance, yet bad actors are using gen AI to create more sophisticated attacks. Gen AI is also making employees more efficient, but it needs guardrails to help prevent accidentally leaking sensitive information. Considering these and other developments, data privacy in 2024 is more important than ever.”

Remus Lim, VP, Asia Pacific and Japan, Cloudera, said that as companies look to deploy more AI and machine learning (ML) technologies across the business, demand for access to their data increases across all environments.

"Advancements in AI/ML have even let organisations extract value from unstructured data, which makes the management, governance, and control of all data critical. With businesses looking to democratise more of their data, it is key to focus on data privacy and security. They must build their strategies and plans with data security and governance at the forefront as tackling third-party security solutions is often a difficult and expensive process," he said.

"Investing in modern data platforms and tools with built-in security and governance capabilities allows companies to democratise their data in a secure and governed manner, while successfully training enterprise AI/ML models. In fact, DataOps, which is the approach to improving the communication, integration and automation of data flows between employees that work with data and the rest of the organisation, is expected to hit US$10.9 billion by 2028 as businesses strive to make more data-driven decisions by increasing employees’ access to data." 

The AI boom will accelerate data privacy regulation, predicted Splunk, resulting in many established companies being unable to, or choosing not to, provide their services in certain regions. “Governments around the world are becoming more active in ensuring that industry is meeting their obligations around data privacy,” said Simon Davies, SVP and GM in APAC, Splunk.

Progress with compliance

Bhatia shared that data privacy compliance continues to grow in complexity. He said: "New laws putting guardrails on using personal data in the large language models (LLMs) behind gen AI tools are gaining steam. For example, the California Privacy Protection Agency is already working to update the California Consumer Privacy Act to address GenAI and privacy, including opt-out implications. More will follow. This type of legislation, like most other privacy regulations, will differ across continental, country and state borders, making the already complex regulatory environment even harder to navigate without help.”

"We are seeing steps in the right direction here through a push for better governance, origin, and lineage of data to power AI. At an enterprise level, businesses must look to test the validity of their data and get robust data governance in place. Then, it will be possible to use AI to generate more trustworthy and actionable insights down the line," he concluded.

Allon Mureinik, Senior Manager, Software Engineering, Synopsys Software Integrity Group, said that there is another side to 'sharing is caring'. "Whether intentionally on their social media accounts and company websites or unintentionally by the actions of their employees, companies might share more than they ought to," he said.

"In a world where information is the hottest commodity and any small sliver of data could be used by a competitor or even an unlawful attacker, companies would be well advised to prioritise the protection of their and their employees’ data."

Next steps

The first step to data protection and privacy is defining a set of policies about what can be shared, how it can be shared, and by whom, Mureinik said. Policies should cover both the actions of the company’s employees (e.g., defining what work-related aspects can be shared on social media) and the technical measures taken to support these policies (e.g., blocking social media sites on work-issued laptops).

"While it may be compelling to create a 'share nothing, hide everything' policy, this often isn’t advisable, or even possible. Any such policy should assess the risk any data exposure would create and weigh it against the potential benefit," he said.

Second, such a policy must be shared with employees, with training offered so they understand their role in protecting the company’s (and their own) private data.

"The important part of this training isn’t just memorising rules and regulations, but having the employees truly understand the intent behind them, and what they are supposed to achieve," he said.

John Tapp, Associate Principal Consultant, Synopsys Software Integrity Group also advised businesses to use a password manager, with strong and unique passwords. "Use browser extensions like uBlock Origin and Privacy Badger to make it more difficult to meaningfully track you," he said.
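As a small illustration of the "strong and unique" point (the alphabet and length below are arbitrary choices, not Tapp's recommendation), Python's standard secrets module can generate the kind of value a password manager would store for each site:

```python
# Minimal sketch: generate a strong random password with the standard library.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"


def generate_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))


print(generate_password())   # different on every run
```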
