This article first appeared in Digital Edge, The Edge Malaysia Weekly on August 26, 2024 - September 1, 2024
The digital transformation initiatives of companies in Malaysia and Southeast Asia are moving at a snail’s pace, as legacy applications and infrastructure remain widespread across the region. Although Covid-19 accelerated the digitisation process, there is still a long way to go.
And when it comes to technology connectivity, the conversation centres on connecting applications and data sets so that the right databases can be brought together.
Boomi, an integration and automation company, conducted research among its customer base of 20,000 a couple of years ago and found that 60% of data in an organisation is dark, meaning the organisation does not know where the data is located or what it is being used for.
“Imagine, if you take that data and feed it into a large language model, the output or result will be completely inaccurate,” says David Irecki, chief technology officer for Asia-Pacific and Japan at Boomi.
“But once you have that data foundation in place in your organisation, you’ll be in a much better place to adopt artificial intelligence (AI). That’s what Boomi is trying to do — setting that data foundation.”
A lack of data culture and a lack of data literacy are probably the two biggest obstacles to the adoption of AI because of that dark data scenario, especially if people do not understand what data exists within an organisation. Data culture, Irecki explains, is when data is kept clean and its source is known.
“Some of the research notes that up to 70% of one’s time could be spent just in this area of data culture and literacy before even looking at adopting AI in the organisation,” he says.
But as we move towards a more AI-based future, the conversations are in the realm of AI agents, which are believed to be the next big thing in AI. Simply put, an AI agent is a piece of AI-powered software that carries out a series of jobs, crossing multiple systems and involving different AI models and software within an organisation.
For example, Irecki explains that currently, with ChatGPT, if one were to ask it to write an essay on a topic, it would do so based on how it is trained. But an AI agent has more control over its functions, meaning that it will recognise that if it churns out an essay, it’s just the first draft and the agent will then feed it through the system for a second time for a more refined output.
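The draft-and-refine behaviour Irecki describes can be sketched as a simple loop. This is a minimal illustration, not a real LLM integration: `call_model` is a hypothetical stand-in for whatever model API an organisation actually uses.

```python
def call_model(prompt):
    # Stand-in stub for a real large language model call.
    return f"draft of: {prompt}"

def agent_write_essay(topic, refine_passes=1):
    """A minimal agent loop: generate a first draft, then feed it back
    through the model once per refinement pass for a more polished output."""
    output = call_model(f"Write an essay on {topic}")
    for _ in range(refine_passes):
        # The agent recognises the output is only a draft and asks for a revision.
        output = call_model(f"Refine this draft: {output}")
    return output

print(agent_write_essay("dark data", refine_passes=2))
```

The point of the sketch is the control flow: unlike a single chatbot prompt, the agent decides on its own to route its first answer back through the model.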
This also means that the AI agent will become more autonomous. Irecki believes there will be a proliferation of these agents in the next one to two years.
“Hugging Face, a company that develops computation tools for building applications using machine learning, hosts more than 700,000 AI models and agents. Businesses may start with one or two models but in the future, they could have tens of thousands [working together],” he explains.
This means that different AI programs would work with other AI programs to create a semi-autonomous loop. But to get to that, Irecki says a lot of trust is required, especially around the data. “If you don’t understand what data you have, how can you trust the output of a large language model (LLM)? There is also a lot of mistrust around the APIs (application programming interfaces) as well.
“You still need that human in the loop or someone to understand the data and ensure it’s of good enough quality to be able to properly feed it into these models to get the right insight,” he says.
In the modern world of applications, data is stored in the cloud and is connected to applications through an API when transferring data in and out. Right now with LLMs, AI agents are communicating via APIs as well.
“If you think about how we used to use integration automation to connect to a software-as-a-service (SaaS) application or database, it’s going to be the same for an AI agent,” says Irecki.
“There is also this new terminology in the market called composite AI, or AI orchestration, and it’s exactly that: in the future, businesses would use multiple AI agents to facilitate an insight or outcome, and they would all need to be connected through APIs.”
As enterprises modernise legacy systems and eventually move into AI adoption, governance frameworks would probably be the next challenge for organisations. In March, Minister of Science, Technology and Innovation Chang Lih Kang said the governance guidelines and code of ethics for AI were set to be presented to the cabinet for approval, with a launch scheduled for April 2024.
However, these have yet to be enacted. In the meantime, Irecki says organisations should put internal AI frameworks in place to ensure ethical use and minimise bias.
“I was talking to somebody in human resources a while ago. They trained their AI model on their historical hiring practices, and the results were actually bad. What they found was that they had been hiring for a certain type of person and they weren’t as diverse as they thought.
“But equally, you can go the other way where the model is trained too diversely and you’re picking the wrong people as well. So organisations really have to be careful,” Irecki advises.