
Saturday, 4 January 2025

The 2-Z of 2025 predictions: A is for AI, part 3

2025 predictions about AI continue below.

Authenticity

Qlik predicted that authenticity, applied value and agents are three interconnected themes that will impact AI in 2025. "If something is available online, it has likely been used to train AI models. However, there have been two distinct shifts in the three years since OpenAI introduced ChatGPT," the company said in its 2025 predictions.

"First, AI-generated content is proliferating quickly across the Internet. One study estimated that 57% of online content is AI-generated. Amazon is flooded with AI-authored books, YouTube is overwhelmed with videos, and content farms have swapped dollar-an-article writers for gen AI rubbish. That means that as large language models (LLMs) develop, the data they’re being trained on — if just taken from freely available sources — could well be AI-generated in the first place," the company said. Gen AI stands for generative AI.

"And the likelihood of that happening will only increase as businesses remove information from the public domain — another unintended consequence of the AI boom. Unauthenticated access is becoming restricted, with less high-quality content available without at least a registration. Publishers and authors are suing OpenAI, and closed platforms such as Medium and Substack are welcoming an influx of creators. YouTube has stopped regular users from being able to transcribe videos. In fact, an MIT-led research group estimated that 25% of data from the highest-quality sources that appear in three commonly used training sets have been removed.

"The result is an authenticity crisis that undermines the quality of LLMs and, ultimately, trust in the models. Some have indicated that it’s difficult to evolve the models further without copyrighted materials. In the hunt for authentic data to train the AI, your corporate data IP is next. As a result, data quality and authenticity will become highly valued — and the demand to prove provenance will soar."

New tools and techniques: RAG

Jess O'Reilly, Area VP, Asia for UiPath, said that organisations’ concerns over data security and the accuracy of public gen AI tools are driving interest in new techniques and tools such as knowledge graphs, retrieval augmented generation (RAG), and internal LLMs.

"According to the UiPath Knowledge Worker survey, Singapore workers using gen AI are most concerned about security risks (38%) and inaccurate output (34%), as in the case of most markets in Asia Pacific," she said.

"Knowledge graphs, which represent real-world entities like events and concepts, connect scattered information across different data sources to drive significant improvements. On the other hand, RAG improves gen AI models’ performance by giving them access to real-world data while they generate responses. Many companies are also refining foundational LLMs with proprietary data to turn enterprise data into a significant advantage within a company’s firewalls," she noted.

Source: UiPath. Jess O'Reilly's AI predictions for 2025.

"Ultimately, the most successful enterprises in 2025 will focus not only on scaling agentic AI but also on ethical automation, embedding governance and transparency into their AI orchestration strategies as they navigate escalating regulations."

“2025 will mark a turning point as organisations refine their AI strategies to achieve tangible results and navigate the complexities of a maturing AI landscape. We can expect takeup for RAG and data unification technologies to soar, as organisations place a premium on data integrity, ethics, and sustainability,” said Matthew Oostveen, VP and CTO, Asia Pacific and Japan, Pure Storage.

In a list of 2025 predictions, Pure Storage said: "Generic, off-the-shelf AI solutions like ChatGPT are set to decline in enterprise use as trust concerns over output reliability increase. In 2025, organisations will increasingly pivot to grounded approaches leveraging techniques like RAG.

"This shift will reflect a deeper commitment to AI transparency and ethics, with a preference for context-aware systems that mitigate data biases and inaccuracies. The demand for RAG will surge, particularly in fields like healthcare and financial services, where real-time data integration and contextually accurate responses are critical for nuanced understanding and decision-making."

Ying Shaowei, Chief Scientist, NCS, explained that interest in RAG is high because it reduces the risks of AI model hallucinations and helps to ensure that the AI outputs are not only accurate but also contextually relevant. "As businesses increasingly rely on AI, RAG enhances the quality and reliability of AI applications, underscoring the need for sophisticated data management and well-structured strategies to unlock AI’s full potential," he said.

"In the near future, we can anticipate a significant increase in tools and services leveraging RAG to enable enterprises to effectively mine their proprietary and third-party data using generative AI. RAG is likely to be viewed as a more accessible and cost-effective alternative to finetuning LLMs."

New tools and techniques: xLAM

Source: Salesforce. Gavin Barfield.

In September 2024, Salesforce publicly released its xLAM (large action model) family, models designed for AI agent tasks, on Hugging Face. "In 2025, we’ll see new, highly specialised AI models that go beyond text generation to drive complex, autonomous actions. Salesforce’s xLAM is at the forefront of this evolution.

"Unlike traditional LLMs, which excel at generating responses, xLAM models are designed for action and decision-making, allowing AI to autonomously execute tasks and manage workflows without requiring explicit instructions," said Gavin Barfield, VP and CTO, Solutions, Salesforce ASEAN.

"By managing entire workflows proactively, like an autonomous sous chef that prepares each step, xLAM models can streamline operations and enhance decision accuracy across various environments."

Barfield predicted that as xLAMs become popular, they will be able to operate across multi-agent systems to tackle increasingly complex, customer-focused processes. "This innovation will make AI a powerful partner in business, delivering efficiency, context-aware responses, and automated actions that drive customer success with accuracy and reliability," he said.
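
As a rough sketch of what "designed for action" can look like in code, the loop below has a model return a structured tool call which the surrounding program executes, rather than returning prose for a human to act on. The tool names, the JSON convention and the call_action_model stub are hypothetical assumptions; they are not Salesforce's xLAM interface.

```python
# Illustrative agent loop around an action model: the model returns a
# structured tool call, and the harness executes it. Prompt format and
# helper names are assumptions, not the xLAM interface.
import json

def create_support_ticket(summary: str) -> str:
    return f"Ticket created: {summary}"

def check_order_status(order_id: str) -> str:
    return f"Order {order_id} is in transit."

TOOLS = {
    "create_support_ticket": create_support_ticket,
    "check_order_status": check_order_status,
}

def call_action_model(request: str) -> str:
    """Placeholder for an action-model call that returns a JSON tool call."""
    # A real deployment would query the model here; this canned response
    # simply shows the expected shape of the output.
    return json.dumps({"tool": "check_order_status",
                       "arguments": {"order_id": "A-1042"}})

def handle(request: str) -> str:
    decision = json.loads(call_action_model(request))
    tool = TOOLS[decision["tool"]]
    return tool(**decision["arguments"])

print(handle("Where is my order A-1042?"))
```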

New tools and techniques: SLM

Barfield added that there will also be many small language models (SLMs) designed for a particular industry or purpose. "These models are trained on smaller but more reliable datasets and are effective at performing certain tasks. They are cheaper to run and train, and often more accurate than their large language model equivalents," he said.

Ying also observed that a move towards smaller, task-specific language models is gaining traction. "These models are designed to perform well on specific tasks with greater efficiency, consuming far less energy and reducing environmental impact. Despite their smaller size, these models often rival larger ones in their respective domains, demonstrating that in AI, bigger isn’t always better. The shift towards more sustainable, use-case-driven solutions highlights a growing recognition of the need for AI to be both powerful and environmentally responsible," he said. 

"SLMs are poised to gain significant traction among enterprises by 2025. Their ability to deliver tailored insights while reducing dependence on high-end GPUs makes them an appealing option for businesses looking to efficiently leverage large language models to enhance their products and services," said Jay Jenkins, CTO, Akamai Technologies APJ.

"In addition, the increasing focus on data privacy will drive enterprises to adopt SLMs that are more suitable for on-premises deployment, ensuring easier protection of sensitive information. The modular design and scalability of SLMs will further enable organisations to customise these models to meet their specific requirements, allowing for seamless adaptation to changing business needs. 

"As a result, SLMs are set to transform how companies harness AI, making them not only more accessible but also more aligned with contemporary challenges in data management and privacy."

Source: NVIDIA. Deepu Talla.
Deepu Talla agreed, at least for the realm of robotics. "To improve the functionality of robots operating at the edge, expect to see the rise of small language models that are energy-efficient and avoid latency issues associated with sending data to data centres.

"The shift to small language models in edge computing will improve inference in a range of industries, including automotive, retail and advanced robotics," said the VP of Edge and Robotics from NVIDIA.

New tools and techniques: vertical LLMs

Verticalisation, the process of tailoring LLMs to specific industries or domains, will begin to transform various sectors in the Asia-Pacific region in 2025, Lenovo predicted. 

"By focusing on specific domains, verticalised LLMs will potentially delve deeper into industry-specific nuances, regulations, and best practices, enabling industries and companies to generate more accurate and relevant output, tailored to the unique needs of each domain. These customised LLMs' ability to analyse vast amounts of industry
data to identify patterns and trends, will enable industry-specialised data-driven decision-making," the company said.

"By leveraging the power of AI, these specialised models will drive innovation, improve efficiency, and ultimately reshape the future of industries worldwide."

"With enterprise AI innovation taking centrestage in the year ahead, businesses will eschew public LLMs in favour of enterprise-grade or private LLMs that can deliver accurate insights informed by the organisational context," said Remus Lim, Senior VP, Asia Pacific and Japan, Cloudera. 

"According to a McKinsey study, less than half (47%) of companies are significantly customising and developing their own models currently and we believe that this is set to change in 2025 as businesses develop AI-driven chatbots, virtual assistants, and agentic applications tailored to the individual business and industry. 

"As more businesses deploy enterprise-grade LLMs, they will require the support of GPUs for faster performance over traditional CPUs, and robust data governance systems with improved security and privacy. In the same vein, businesses will also ramp up their use of RAG in a bid to transform generic LLMs into industry-specific or organisation-specific data repositories that are more accurate and reliable for end users working in field support, HR, or supply chain."

New tools and techniques: hybrid models

Mohan Varthakavi, VP, AI and Edge, Couchbase, thinks that businesses will adopt hybrid AI models, combining LLMs and domain-specific models, to safeguard data while maximising results. "Enterprises will embrace a hybrid approach to AI deployment that combines large language models with smaller, more specialised, domain-specific models to meet customers’ demands for AI solutions that are private, secure and specific to them," he elaborated.

"While large language models provide powerful general capabilities, they are not equipped to answer every question that pertains to a company’s specific business domain. The proliferation of specialised models, trained on domain-specific data, will help ensure that companies can maintain data privacy and security while accessing the broad knowledge and capabilities of LLMs."

Additionally, data architectures will evolve into language model architectures, Varthakavi said. "Enterprises will need to simplify their data architectures and finish their application modernisation projects," he said.

"There have been huge advances in improving outputs thanks to extensive RAG and finetuning work, and 2025 will bring even more innovation: knowledge graphs, ontologies, and bigger context windows, surpassing a million tokens. AI understanding of your specific use cases will improve," noted Qlik. 

"But one size doesn’t fit all — with accuracy critical to unlocking value, the right approach must be matched with the right data, be it graph, vector or relational."

Open source

Source: Red Hat. Guna Chellappan.
Guna Chellappan, GM, Singapore at Red Hat Asia Pacific, said: "Since last year, the number of open source gen AI projects has surged by 98%, with many of these contributions coming from India, Japan, and Singapore. This reflects the importance of collaboration and accessibility when it comes to new technologies like AI, and we are likely to see gen AI activity increase globally.

"Open source AI platforms and tools, as well as open source-licensed models, are already democratising innovation by ensuring that its benefits—such as versatile frameworks and tools—are no longer confined to a select few. By making these benefits accessible to organisations of all sizes, the playing field is levelled, allowing even smaller enterprises to discover open source and innovate on a global scale."

Chellappan added: "Open source solutions also offer businesses flexibility in navigating constraints like cost, data sovereignty, and skill gaps. With a collaborative open source community, enterprises can tailor these solutions to their specific needs while retaining control over sensitive data. Moreover, many eyes make all bugs shallow. With vulnerabilities swiftly identified and addressed, businesses will be able to foster greater trust in AI-driven outcomes."

Ying from NCS also highlighted that open AI models are increasingly rivalling proprietary, closed models. "Historically, major tech companies developed and dominated the AI landscape with their proprietary models, limiting access to the most advanced capabilities. However, the emergence of powerful open-source models is democratising access to cutting-edge AI, enabling a broader range of organisations to leverage AI technologies," he said.

"These open models offer cost advantages and foster innovation through community-driven development. As more businesses and researchers contribute to these models, they rapidly improve in performance and usability. The growing adoption of open-source models represents a significant shift in how AI technologies are developed and deployed. We can expect more open models challenging the dominance of closed, proprietary systems and offering new opportunities for businesses to innovate."

Ecosystem

Source: SAP. Utkarsh Maheshwari.

"Limited resources, inadequate data governance frameworks, and a lack of in-house expertise, are common barriers that companies face in implementing AI for their business. For example, 37% of midmarket businesses report lack of quality data, as well as data silos and disparate systems, as challenges hindering their AI adoption and ability to deliver actionable insights," said Utkarsh Maheshwari, Chief Partner Officer and Head of Midmarket, SAP Asia Pacific Japan (APJ).

"Channel partners have been pivotal in bridging this gap, providing prebuilt frameworks, industry expertise, solution add-ons, and end-to-end support to augment and integrate AI capabilities into essential business processes. SAP is already witnessing the growth of partner-led territories in markets like Australia and New Zealand, India, Indonesia, as well as many parts of Southeast Asia, where partners combine their local business expertise with unique intellectual property (IP), on top of the complete suite of SAP solutions, to offer customers tailored AI solutions that drive meaningful business outcomes.

"In 2025, we will see increased AI collaboration and technology alliances in the partner ecosystem, and a significant expansion and evolution in the role that channel partners play in helping businesses of all sizes make the promise of business AI a reality." 

Looking ahead

Source: Qlik. Kelly Forbes.
"We have passed the initial excitement that came with the breakthrough of generative AI, and we are now in a space of figuring out its practical applications. I think we can all agree that we are not yet using AI to its full potential, but through awareness, education, and careful stewardship, we will work toward that in the year ahead," said Kelly Forbes, Co-Founder and Executive Director of the AI Asia Pacific Institute and Qlik AI Council member.

"The first step for businesses requires balancing market trends with organisational needs. They must make internal assessments about their own needs and requirements so they can deploy AI in the areas where it can make a real difference—because there are opportunities out there."

"ChatGPT has brought gen AI to the forefront of mainstream consciousness, reshaping how businesses approach workflows and drive efficiencies in uncertain times. We might start to see some enterprises that are overly fixated on immediate returns reign in their efforts on AI-driven transformations prematurely. However, to truly unlock AI’s full potential, enterprises need to take a long-term view," Chellappan advised.

"In the AI Readiness Barometer: AI landscape study, conducted by Ecosystm on behalf of IBM, AI maturity was assessed based on four main critical criteria: culture and leadership, skills and people, data foundation, and governance framework. Although AI is a business priority for these ASEAN enterprises surveyed, most lack readiness, including the advanced AI and machine learning expertise needed to harness its full potential. In fact, only 17% said their organisations have extensive expertise and dedicated data science teams," he noted. 

"Most organisations are still lagging in AI relevant skills; and are also not prioritising data governance and compliance enough, potentially exposing them to regulation risk. To achieve AI maturity, enterprises must adopt a more strategic and patient approach, particularly in more complex areas where AI can drive significant value. Beyond investing in enterprise data and technology to enhance data readiness, organisations need to be prepared at every level. This involves fostering a culture of innovation, upskilling employees to embrace new technologies, and aligning long-term processes with strategic business goals."

Source: Adobe. Shashank Sharma.

Shashank Sharma, Senior Director, Digital Experience, Korea and SEA, Adobe, shared that 64% of senior managers in the APJ region believe that generative AI will lead to major transformations in content workflows and customer journey management. "However, deploying generative AI effectively requires organisations to focus on foundational enablers: robust governance frameworks, clear ethical guidelines, and extensive workforce training," he said.

"By aligning AI strategies with broader business goals and customer expectations and enabling enterprise-wide adoption, forward-thinking organisations can fully realise AI's transformative potential. Companies that master this integration will achieve operational efficiencies while crafting customer experiences that feel more personal and relevant than ever."

Rita Kozlov, VP, Product Management, Cloudflare, said that 2025 will be the year that expectations are reined in. "2025 will be the year of AI pragmatism. After a period of experimentation, organisations will now be more value-conscious with their AI spend. Organisations will be more scientific and methodical in how they approach putting AI in front of customers, evaluating different approaches and options for different use cases," she said. 

"We’re seeing teams pivot to predictable pricing models, transparent gateway metrics, and smaller models that do the job, rather than the largest, most expensive LLMs."

Explore

This is the 3rd of a three-part series on AI predictions for 2025. Read part 1 and part 2.

More AI-related predictions can be found in posts on agentic AI and AI cybersecurity, as well as throughout the 2-Z of 2025 predictions series.

Hashtag: #2025Predictions

*The study was specifically focused on machine-translated content, ie content for which there was a version in another language. The 57% statistic refers to the percentage of sentences that had at least 2 other translated versions, implying that AI translation had occurred. It would exclude completely unique sentences, and sentences with 1 translated version. The '57% is AI-generated' claim is seen quite often in media, referencing this study.
