Concept artwork on Internet safety generated by Bing.
"We've entered a new era of online safety, one that prioritises protecting our youngest users and holding our platforms to higher standards," said Australia's eSafety Commissioner Julie Inman Grant in a video message. The country banned Australians under 16 from holding social media accounts in late 2025.
"But legislation is only part of the solution. Real change starts with all of us. Having conversations about online safety, being kind and respectful online, and using tools to prevent harm - Safer Internet Day is the perfect place to start."
Xbox announced that Minecraft will launch CyberSafe: Bad Connection?, the fifth installment in the CyberSafe series for Minecraft Bedrock and Minecraft Education. "This new world will help young players build online safety and digital citizenship skills through scenario-driven gameplay. Players will explore ways to recognise risks and red flags, learn how to report suspicious activity, and build confidence to keep themselves and their community safe," said Xbox in a blog post ahead of Safer Internet Day.
Bad Connection? builds on the legacy of the CyberSafe series, which has reached more than 80 million downloads since 2022, equipping players with the skills, practice, and confidence to navigate online spaces, Xbox said.
Bad Connection? is available for free in the Minecraft Marketplace and Minecraft Education on February 10, along with free, downloadable materials for parents and educators to support critical, ongoing conversations about online safety.
Xbox also launched its 6th Xbox Transparency Report, which showed that proactive moderation efforts in 2025, aimed at preventing unsolicited content from ever reaching players, resulted in a 90% drop in spam message complaints compared with 2024. There was a 23% drop in complaints about messages from non-friends over the same period.
Meta noted that safety is already built in for teens using Instagram, Facebook, or Messenger, as they will automatically be placed into Teen Accounts, which come with safeguards and essential safety features by default.
"This means strangers can't message them, only friends can tag them and age-inappropriate content is automatically filtered. Even suspicious images in DMs are blurred automatically. While these features apply to all teens under 18, those under 16 will need a supervising parent’s permission to change any of these settings to be less strict," the company said. DM stands for direct message.
Meta also allows daily limits to be set, or access blocked during specific periods of the day.
The 2026 theme, "Smart tech, safe choices – Exploring the safe and responsible use of AI", is particularly apt this year, said Alina Bîzgă from Bitdefender in a blog post. "When used well, AI offers clear benefits. It can support learning, boost creativity, and improve online safety by detecting scams, fake websites, and malicious behaviour faster than humans could ever do. At Bitdefender, AI is already used to identify suspicious patterns and emerging threats, helping people pause before they fall victim to fraud or manipulation," she said.
"But AI also opens the gate to abuse. We’ve seen it used to generate convincing fake videos, voices, and images, sometimes for so-called 'pranks' that cross ethical lines, and other times for outright scams. The same technology that can create helpful tools can just as easily be used to deceive, embarrass, or exploit."
Bîzgă shared that 37% of consumers polled in Bitdefender’s 2025 Consumer Cybersecurity Survey cited AI's use in sophisticated scams, such as deepfake videos and audio, as their biggest concern about the technology. "More than seven in 10 consumers encountered scams in the past year, and one in seven fell victim," she said.
The same survey showed that younger consumers are twice as likely to be scammed as older generations, because they share more personal content online and spend more time on social platforms. "AI tools feel helpful and safe, which makes it easy to forget that:
- AI systems can store or learn from inputs
- Not everything they generate is accurate
- Images, voices, and videos can be entirely fabricated," she shared.
Check Point's comments were in a similar vein. The company stated that Safer Internet Day has always been about protecting people online. "In 2026, that mission takes on new urgency as AI becomes deeply embedded in how we work, learn, communicate, and transact. AI is no longer a future technology—it is already shaping what we see online, how decisions are made, and how cybercriminals operate. As AI becomes a default layer across the Internet, online safety is no longer just about user behaviour—it is about how intelligently and responsibly AI itself is designed, governed, and secured," Check Point said.
"The challenge this year is not whether to use AI, but how to use it safely, responsibly, and with awareness of the new risks it introduces. As AI accelerates productivity and creativity, it is also expanding the attack surface of the Internet in ways that affect individuals, families, schools, and organisations alike. This is why Check Point increasingly frames cybersecurity as 'AI-first and prevention-first'—because reacting after harm occurs is no longer sufficient in a machine-speed threat environment."
Check Point observed that "AI is now present in almost every digital interaction". "Enterprises are adopting generative AI at speed, and individuals are using it daily—often without fully understanding how data is processed or stored. AI has effectively become a co-pilot for daily digital life, influencing decisions, content, and trust signals in the background," the company said.
According to Check Point Research, in December 2025, 1 in every 27 generative AI prompts submitted from enterprise networks posed a high risk of sensitive data leakage, and 91% of organisations using generative AI tools were affected by high-risk prompts. An additional 25% of prompts contained potentially sensitive information, highlighting how easily users can overshare data when interacting with AI tools.
"These findings reinforce a core Check Point message: AI must be secured just like any other critical system—because it now directly handles sensitive data and decision-making," the company said.
"These risks are not limited to enterprises. When students, families, or individuals use AI tools for homework, advice, or content creation, the same behaviours—copy-pasting personal information, uploading images, or trusting outputs without verification—can expose them to privacy, misinformation, or manipulation risks. Safe AI use therefore starts with digital literacy, not restriction."
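The oversharing behaviour Check Point describes, copy-pasting personal details straight into a prompt, is something a simple pre-submission check can surface. The sketch below is purely illustrative and is not Check Point's product or method; the pattern names and regexes are hypothetical stand-ins for the far richer detection a real data-loss-prevention tool performs.

```python
import re

# Illustrative patterns for data that probably should not be pasted into an
# AI prompt. Real DLP tooling uses much richer detection; this is a sketch.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

# Example: warn the user before the prompt leaves the device.
findings = flag_sensitive("Summarise this: contact jane.doe@example.com")
# findings == ["email address"]
```

A check like this could run client-side, prompting the user to pause before sensitive text reaches an external AI service, which is the "pause before they fall victim" behaviour the vendors above advocate.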
Check Point’s Cyber Security Report 2026 notes that attackers are now combining AI, identity abuse, ransomware, and social engineering into coordinated, multistage campaigns that move faster than traditional defences can. These attacks increasingly adapt in real time, learning from failed attempts and automatically refining their techniques—mirroring how defensive AI operates.
The report found that organisations globally faced an average of 1,968 cyberattack attempts per week in 2025, representing an 18% year-over-year increase and a 70% increase since 2023. "These attacks are no longer isolated incidents—they are persistent, automated, and increasingly personalised. This scale is precisely why human-only security models can no longer keep pace," Check Point said.
The company added that Singapore remains a primary target, surpassing the global average with 2,272 weekly cyberattacks in 2025, a 17% increase over 2024. Within Singapore, the consumer goods and services sector was hit hardest, averaging 3,353 attacks per week, followed by financial services (1,632) and business services (1,588).
Jayant Dave, CISO at Check Point Software Technologies, remarked: “As AI reshapes how we learn, work, and connect, we must remember that trust, like virtue, is built through consistent action, not blind assumption. This Safer Internet Day serves as a call to embrace prevention-first, AI-powered security and cultivate digital wisdom, ensuring the Internet remains a safe and resilient space in an AI-driven era.”
Alex Laurie, Go-To-Market (GTM) CTO at Ping Identity, focused on agentic AI. "The conversation this Safer Internet Day must evolve to reflect a new reality: AI agents are no longer just tools, they’re autonomous actors operating at machine speed across the digital ecosystem. While AI agents unlock powerful efficiencies, they also introduce new security risks by acting like users, making decisions, and accessing systems in ways that are difficult to distinguish from human behaviour," Laurie said.
"When left ungoverned, these agents can be exploited or behave unpredictably, expanding the attack surface and undermining what can be trusted."
"Combatting this risk requires a shift toward verified trust, where every digital interaction is continuously validated, not assumed. Organisations must move beyond static credentials and adopt identity-first security models that verify who, and what, is accessing systems, under what context, and with what intent. By combining strong identity verification, real-time risk assessment, and adaptive access controls, businesses can enable AI innovation while protecting users, data, and trust in an increasingly agentic Internet."
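The identity-first model Laurie describes can be caricatured in a few lines: every request, whether from a human or an AI agent, is evaluated on verified identity, context, and a live risk score rather than a static credential check. The field names and thresholds below are entirely hypothetical, chosen only to show the shape of such an adaptive policy, not Ping Identity's implementation.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    # Hypothetical fields; a real identity platform tracks far more context.
    subject: str        # human user or AI agent identifier
    subject_type: str   # "human" or "agent"
    verified: bool      # passed identity verification (e.g. MFA, attestation)
    risk_score: float   # 0.0 (benign) .. 1.0 (hostile), from real-time signals

def decide(req: AccessRequest) -> str:
    """Adaptive access: trust is re-evaluated on every request, never assumed."""
    if not req.verified:
        return "deny"
    # Autonomous agents get a stricter risk threshold than verified humans.
    threshold = 0.3 if req.subject_type == "agent" else 0.6
    if req.risk_score >= threshold:
        return "step-up"  # demand re-verification before proceeding
    return "allow"
```

The key design choice is that the decision is a function of the whole request, so the same risk signal that an allowed human absorbs can push an autonomous agent into step-up verification.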
Arun Kumar, Regional VP APAC at ManageEngine, took a bigger-picture perspective. He said that a safer Internet experience starts with vigilance and layered defence. "As digital activity accelerates across work and daily life, no single control or tool can protect users from the full range of evolving online threats. Real resilience comes from pairing informed, security-aware users with intelligent, automated controls that work continuously in the background to detect anomalies, enforce policies, and reduce risk," Kumar said.
"Today, every device, user, and connection represents a potential entry point for attackers. Enterprises must assume that risk is constant and adopt a proactive approach to monitoring their digital environments, rather than relying solely on reactive responses. This also means encouraging safer behaviour, from limiting sensitive activity on untrusted networks to recognising suspicious links or payment requests."
"Safer Internet Day is a timely reminder that privacy and security must be built into systems by design, not added as an afterthought only when incidents occur. Basic security fundamentals such as asset visibility, timely patching, access control, and continuous monitoring remain the most effective line of defence for both organisations and individuals, even as threats become more sophisticated," Kumar added.
"By combining strong digital hygiene with automated oversight, organisations can better protect corporate systems and personal data, while maintaining trust in an increasingly connected digital world."
Hashtag: #SaferInternetDay
