Tuesday, 28 January 2025

Data privacy in 2025: AI adoption generates new issues

Industry observers have highlighted the dominant role of AI in rocking the privacy boat:

“With data at the heart of everything, it would be remiss not to mention the potential disruption AI has introduced, adding another layer of risk. For organisations, controlling AI deployment usage while also identifying vulnerabilities within AI tools and AI development packages is yet another headache for the security team to worry about,” said Bernard Montel, Technical Director and Security Strategist at Tenable.

“On the flip side, harnessing the potential of AI to supercharge the way we utilise data can be monumental. For example, using AI to transform our approach to security by enabling faster analysis, decision-making and guidance, cutting through complexity to stay ahead of attackers.”

Montel added: “We’ve seen malicious actors get increasingly aggressive with their threats. Ten years ago a ransomware attack was really obvious. Today these attacks are less obvious and can go undetected for a few weeks as threat actors look to obfuscate their presence. Once they’ve extracted the information, it’s out of your control. However, while they’ve been creeping around, they could also have been laying incendiaries in case ransoms aren’t forthcoming, threatening to destroy the data which could leave your organisation unable to function. In addition, threat actors are starting to harness AI to write malware for ransomware attacks.

“Every organisation must take action to protect the data it relies upon to function and that it's trusted to protect. Know what is important, the attack paths that could be travelled should a threat actor gain access, then prioritise efforts to shut off these paths. It’s not rocket science but foundational security practices that will protect what matters most.”

Wei Chen, Executive VP and Legal Counsel, Infoblox, said the rise in online scams and phishing attacks, exacerbated by advancements in generative AI, presented a growing threat to Singapore’s digital economy and public trust.

"Generative AI can be used to create hyper-realistic deepfakes and personalised phishing messages, making them harder to detect. Large language models (LLMs) trained on personal or identifiable information on the Internet, or stolen data, could further undermine individuals’ privacy," she said. 

Sylvain Cazard, President, Asia Pacific and Japan, Broadcom, said that the AI landscape is shifting as enterprises recognise that "innovation doesn't have to come at the cost of data privacy and control". 

"While the initial generative AI (gen AI) wave indicated significant public cloud investment, we are now seeing a surge in demand for private AI as businesses seek to maintain better control over their data while enabling the local deployment of targeted gen AI models. This shift is a strategic response to several factors, including data privacy, security and regulatory compliance," Cazard said. 

Thales noted that disruptive business models and new technologies like generative AI "permeate all aspects of business and personal life", leading to a growing amount of data being collected, used, analysed, and retained. "This also means that the damage a data breach can potentially inflict on a business and its customers rises in tandem. Regulatory oversight and financial penalties are also trending northwards," the company stated in an advisory for Data Privacy Day.

“Generative AI is a threat but also an unprecedented opportunity to build loyalty with customers,” said Andy Zollo, Senior VP, Application and Data Security for Thales in Asia Pacific & Japan. 

“Maintaining proper data control is arguably the most important focus of all strategic security initiatives and must take priority in organisations looking to build trust-led competitive advantage.”

Richard Cassidy, CISO EMEA at Rubrik, also spoke about managing data in the age of AI. “As we mark Data Privacy Day 2025, AI should be at the top of the priority list for all security and technology leaders — especially as we navigate the competing mandates and regulations worldwide. For global businesses, it will be critical to have a firm grasp on all legislation to ensure they leverage AI in a regulated way. Noncompliance will result in costly financial and reputational damage," he said.

"To keep pace with AI’s rapid evolution and proliferation, organisations must have a comprehensive, continuous understanding of their data inventory — knowing where sensitive data lives and ensuring it has the correct security posture. Organisations must give customers the confidence that their data is secure, no matter where it lives, while they tap into the full potential of AI." 

Cassidy advocated embracing data privacy by design. "One key way for security leaders to achieve continued compliance and assurances for their customers is to embed data privacy by design into every process, system, and operation they build. They must also closely collaborate with key stakeholders, including legal and compliance teams. Privacy is not just a security responsibility — it falls to every department across the entire business,” he said. 

Cloud

Source: HPE. Loh Khai Peng.

Loh Khai Peng, VP and MD, Singapore and Southeast Asia, HPE said: "As organisations across Asia Pacific accelerate AI adoption – with the region’s investments projected to reach US$110 billion by 2025 – data privacy has emerged as a critical requirement of responsible AI innovation. Yet, many organisations are still struggling to balance AI advancement and data protection, especially with hybrid cloud now the preferred operating model for the vast majority of organisations in APAC."

One of the main challenges of ensuring data privacy in a hybrid cloud environment is managing disparate security and privacy protocols across different cloud providers and on-premises infrastructure, Loh said, as this increases the number of potential vulnerabilities that cybercriminals can exploit.

"Second, the dynamic nature of data movement between clouds poses significant tracking and protection challenges. With data constantly flowing across on-premises systems, private clouds, and public cloud services, maintaining consistent encryption and access controls has become increasingly complex," Loh continued.

"Third, organisations face the challenge of regulatory compliance across different jurisdictions. Organisations are grappling with how to navigate the complexities of having data residing in multiple locations and countries, all with varied regional data protection laws. This is particularly challenging in APAC, where data sovereignty requirements vary significantly between countries like China, Singapore, and Australia."

Loh recommended that APAC organisations consider a private cloud model as they continue to experiment and innovate with AI, since a private cloud environment "is key to optimising AI control and security and mitigating data privacy risks". "In today’s changing hybrid cloud environments, a private cloud approach is the key to ensuring data privacy, especially with the rise of AI and machine learning. Training AI models requires vast amounts of data, and hosting AI workloads on public cloud can expose data and models to increasingly advanced data privacy threats. Data security and compliance concerns are driving organisations to turn to the private cloud for better control over sensitive data," he said. 

"Private clouds allow organisations to keep their training data, model parameters, and inference results within their own infrastructure, preventing potential data leaks or unauthorised access. This approach is especially appealing to organisations dealing with proprietary algorithms, customer information, or regulated data. It also enables organisations to customise their security protocols, implement strict access controls, and ensure compliance with data protection regulations while still maintaining the scalability needed for AI workloads."

Cazard highlighted a shift to 'private AI', or AI that operates only within an organisation, similar to the difference between public and private clouds. "The emergence of generative AI has heightened awareness about data privacy and security. Meanwhile, regulatory frameworks across the Asia Pacific region, such as Singapore’s Personal Data Protection Act (PDPA), or Australia's upcoming Privacy Act reforms, demand stricter control over personal data protection and management. Organisations are also treading cautiously in anticipation of future AI legislation, and investing in hybrid or private platforms," he explained. 

"Private AI refers to artificial intelligence systems that are developed, deployed, and managed within an organisation’s own infrastructure or a secure environment. Private AI offers a unique combination of benefits: management of regulatory compliance risk, data security, cost efficiency, and scalable innovation. Our experience shows that organisations can achieve better cost efficiency with private deployments. In recent IDC data, 60% of respondents said on-premises AI models are either more economical than or cost-equivalent to public cloud alternatives. At the same time, with private AI, enterprises can avoid vendor lock-in and having to surrender control over their data."

Cazard added that the real-world applications of private AI are already proving transformative. "Looking ahead, private AI will shift how organisations deploy intelligent solutions. It will propel companies to look at greater investment in their private cloud which will support technological innovation in the longer run, while enabling them to remain compliant with local security and privacy policy frameworks," he concluded.

Data quality

"Data privacy is more critical than ever as organisations invest in AI. A big part of the solution is making sure organisations only gather data with clear and explicit permission, encrypt personal information, and provide a simple opt-out so customers can decide not to share their sensitive personally identifiable information (PII) with AI," said David Irecki, CTO for APJ, Boomi.

"AI transparency also goes a long way. Organisations should clearly explain, in terms that make sense to humans, how their AI models arrive at decisions and conclusions. It is key that they maintain transparent model inputs, with AI test policies that keep algorithm results within approved bounds."

Irecki said that the data should not be biased. "Organisations also need to ensure that their models are not trained on data that perpetuates long-standing patterns of discrimination or bias. Human oversight remains essential, ensuring these systems comply with regulations and ethical standards," he added.

 "Alongside all this, a solid data governance framework is crucial to preserve trust and maintain consistent, high-quality information and protect sensitive data. With responsible practices and clear accountability, AI can thrive without sacrificing privacy."
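As a rough illustration of the consent and opt-out handling Irecki describes, the sketch below gates records on explicit permission and pseudonymises PII before anything reaches an AI pipeline. The consent register, record fields, and hashing scheme are illustrative assumptions, not any vendor's implementation.

```python
# Hedged sketch of consent-gated PII handling ahead of an AI pipeline.
# The consent register, record fields, and pseudonymisation scheme are
# illustrative assumptions, not any vendor's API.
import hashlib
from typing import Optional

# Explicit opt-in flags: only True means the customer granted permission.
CONSENT = {"alice@example.com": True, "bob@example.com": False}

def pseudonymise(value: str) -> str:
    """Replace a PII value with a stable one-way hash (a pseudonym)."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:16]

def prepare_record(record: dict) -> Optional[dict]:
    """Drop records without explicit consent; pseudonymise PII on the rest."""
    email = record["email"]
    if not CONSENT.get(email, False):   # opted out or unknown: exclude entirely
        return None
    return {**record, "email": pseudonymise(email)}

records = [
    {"email": "alice@example.com", "query": "reset password"},
    {"email": "bob@example.com", "query": "billing issue"},
]
# Only consented records survive, with the PII field pseudonymised.
training_set = [p for r in records if (p := prepare_record(r)) is not None]
```

Defaulting unknown addresses to "no consent" mirrors the opt-in posture the quote argues for: absence of a clear, explicit permission is treated as a refusal, not as agreement.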

Domain name system (DNS) security

"This evolving threat landscape requires a proactive and multilayered approach by leveraging innovative and emerging technologies, such as DNS security. Websites, emails, short URLs, text messages, and many other digital services rely on domain names and the DNS infrastructure to function and route traffic accurately," noted Chen. 

"The DNS infrastructure is also utilised by threat actors to support activities like phishing campaigns. By monitoring and analysing this infrastructure, protective DNS can block malicious activity, such as preventing users from accessing phishing links. Encrypted DNS is a key technology that enhances security by encrypting DNS traffic, preventing it from being intercepted or tampered with by malicious actors. 

"These tools, combined with robust access controls, data minimisation practices, and proactive monitoring, provide comprehensive defence mechanisms."

According to Infoblox, protective DNS is "any security service that analyses DNS queries and takes action to mitigate threats, leveraging the existing DNS protocol and architecture". 

Chen shared that many governments, including those in Australia and Japan, have recognised that protective DNS is effective. "The future of data privacy in our increasingly digital society hinges on our ability to anticipate and counter these AI-enhanced scams effectively, maintaining confidence in the online environment," she concluded.
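The protective DNS behaviour Infoblox describes, analysing each query and blocking malicious domains before they resolve, can be sketched as follows. The blocklist, the simulated upstream records, and the sinkhole address are stand-in assumptions, not a real threat feed or DNS client.

```python
# Hedged sketch of protective DNS: screen each query against a blocklist
# and answer blocked domains with a sinkhole address instead of resolving
# them. All data below is illustrative, not a real feed or zone.

BLOCKLIST = {"phish.example.net", "malware.example.org"}  # hypothetical threat feed
UPSTREAM = {                                              # simulated zone records
    "techtradeasia.com": "203.0.113.10",
    "phish.example.net": "198.51.100.7",
}
SINKHOLE = "0.0.0.0"  # protective resolvers commonly answer with a sinkhole IP

def resolve(domain: str) -> str:
    """Block listed domains; otherwise return the simulated upstream record."""
    if domain in BLOCKLIST:
        return SINKHOLE  # the phishing link never resolves for the user
    return UPSTREAM.get(domain, "NXDOMAIN")
```

In practice this screening happens at the resolver layer against continuously updated threat intelligence, and the encrypted DNS that Chen mentions (such as DNS over HTTPS or TLS) protects the queries themselves from interception or tampering in transit.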

Explore

Cybersecurity aspects of the 2025 data privacy focus are discussed separately at https://www.techtradeasia.com/2025/01/data-privacy-in-2025-more-complex-with.html

Read about data privacy research on consumers from Acronis at https://www.techtradeasia.com/2025/01/acronis-privacy-survey-data-breaches.html

Hashtag: #DataPrivacy2025
