Report: Clarity on AI’s impact crucial for governance framework
24 Feb 2025, 12:00 am

This article first appeared in Digital Edge, The Edge Malaysia Weekly on February 24, 2025 - March 2, 2025

Malaysia faces three key risks in artificial intelligence (AI) governance: falling behind in technology adoption, unsafe AI implementations with unintended consequences and malicious use of AI technology. These findings from Khazanah Research Institute’s (KRI) latest report, AI Governance in Malaysia: Risks, Challenges and Pathways Forward, highlight the urgent need for regulatory action.

Drawing on in-depth interviews and a stakeholder roundtable, the report underscores the need for robust policies to mitigate these risks while driving AI adoption. An excerpt reads:

“Our typology contributes by differentiating between risk and readiness. From the conversations on AI risks with stakeholders, it became apparent that some concerns are associated with the lack of readiness to cope with the transition to being an AI-integrated society. These are not AI risks per se but end up compounding problems as they occur. In other words, AI risks can be exacerbated without corresponding readiness to support safe adoption.”

Clear differentiation between risk and readiness issues allows more targeted measures

Malaysia is in a good position to strengthen its own AI governance and, in doing so, to influence the global conversation, the report states. KRI puts forward six policy recommendations:

1. Focus on national coordination of existing initiatives and actors. The importance of coordination cannot be overstated, as it forms the backbone that connects and maximises the utility of the recommendations that follow. Investment in coordination must also recognise that this is a long-term endeavour requiring sustained effort that will bear fruit over time.

2. Participate in international collaboration and global governance processes. This involves establishing Malaysia’s position on global governance debates and engaging strategically in international rules-setting and other global discussions.

3. Establish an agile and fit-for-purpose regulatory framework for AI by considering a whole spectrum of regulatory mechanisms alongside legislation.

4. Strengthen data governance frameworks that build trust and safeguards. Malaysia’s existing data governance frameworks need to be fortified for AI-related risk scenarios. Strong and trustworthy common principles for data sharing are needed to build better technology for the local context.

5. Cultivate understanding of AI impacts and how to manage them among experts and laypeople. There is a need to connect experts across different disciplines and localities to strengthen expertise on AI governance. Consumer and civic education is the next step after AI literacy campaigns. Including non-experts in AI governance discussions can promote critical thinking and collect diverse perspectives on technology adoption.

6. Support research and oversight on AI impacts by having independent oversight on AI use, especially adverse effects. Collecting data on AI-related harms and high-risk use of AI by the government and large corporations can help hold AI deployers accountable. Systematic tracking of AI adoption by small and medium enterprises can also help inform industrial development.

“The recommendations presented in this report point towards the necessity of funding to implement effective policies and initiatives. Without adequate financial support, the ambitious goals set forth for AI governance may remain unfulfilled,” the institute notes. “Securing resources to achieve our strategic vision for AI will pave our way not only to managing risks but also to capitalising on the opportunities presented by the technology.”

AI readiness as important as policy implementation

The report highlights four key areas for AI readiness: governance, capabilities, education and resources. Governance ensures safe AI development through rules and processes that guide individuals and organisations.

“In Malaysia, despite the existence of the National Data Sharing Policy, current national governance structures are inconducive to information sharing among the public and private sectors,” the report reads. “A vacuum in both government and corporate governance leads to risks of unethical, unsafe AI development and use. Insufficiently regulated environments also risk perverse competition practices driven by AI.”

Enhancing workforce skills and strengthening governance are essential for AI integration, as automation reshapes jobs and necessitates adaptable talent.

The report continues: “It is imperative that the national skills training system manages the impact of AI deployment in the workplace and ensures that human and AI capabilities complement each other. The employment outcome of a skilled workforce depends on policies that help grow important industries locally. When labour and industrial policies are not aligned, it can negatively affect both areas.”

Touching on education, KRI notes: “Currently, there is low public awareness of the influence and risks of AI, as well as the individual rights related to AI such as privacy rights. Most small businesses are also likely to be unaware of the compliance needs of their products and services involving AI. Low literacy of individual rights and digital risks exposes the public to potential exploitation by malicious actors.

“Our stakeholders also noted that holistic education is needed to train digitally ready talent, including upskilling and reskilling efforts, which will allow the workforce to adjust to the potential labour effects of AI, such as job displacement.”

Lastly, bolstering readiness in governance, capabilities and education will require resources. Inadequate resource allocation can result in poor risk management, which reduces the ability to solve AI-related problems.

Understanding risks for better AI governance

AI has the potential to boost productivity, advance science, and improve health and living conditions. However, it can also deepen divides between adopters and non-adopters, making the cost of inaction significant.

“Developing countries risk missing out on the benefits of AI if their public and private sectors dither in AI adoption, stalling the benefits for their population that could otherwise be realised. The risk of ‘being left behind’ can be critical for countries moving up the development ladder,” the report states. 

KRI adds: “Apart from opportunity costs in AI-related benefits, economic losses are also a source of concern. In developing economies, traditional firms risk being outstripped by newer, digitally competent businesses competing in the same market. In the context of global trade, countries which lag in ensuring AI innovation and adoption in their industries could be outcompeted in the international market.”

Meanwhile, the risks of unsafe AI and unintended consequences encompass two types of danger. The first is accidental risk: “unintended and harmful behaviours that may emerge from poor design of real-world AI systems”.

According to the report: “Accidental risks are an unexpected diversion from the system’s intended goals and can manifest in two primary ways. First, technical failure that causes misbehaviour in the AI system. As AI systems are now deployed in areas such as driverless vehicles, AI-enabled drones, healthcare and security systems, technical failures to meet system objectives can be a critical risk to human safety.

“Second, accidental risks can be the result of poor design, resulting in algorithmic bias, hallucinations and AI security risks. Poor design can include training a model with insufficient or poor-quality data. Even if the data is sufficient and representative of society, the data may carry with it existing biases of society that are transferred to the AI system.”

The second type of danger is structural risk, arising from the environment or context in which AI systems operate and interact. These risks have “… more to do with how AI reshapes and perpetuates risks that already exist. As a technology, AI changes political and economic relationships, how people appropriate natural resources, and how members of society interact with each other. AI deployment without systemic considerations can lead to indirect effects on human labour, environmental sustainability and inequality, affecting human rights and causing security risks,” the report reads.

It continues: “The risks posed by AI misuse will be challenging to control. Not only can the types of harm emerging from AI misuse be varied, but the scale, sophistication and speed at which harm is exacted with the use of AI far surpasses traditional detection and control mechanisms.”

Touching on the malicious use of AI, the report states that it: “… includes the malicious design, development and use of AI, involving direct human decisions that constitute malicious intent. Some of the risks are criminal in nature, such as the use of AI for cyberattacks, carrying out fraudulent activities and the deployment of AI in lethal autonomous weapons and drones. Other risks are not technically criminal but may be of questionable ethics, for example, the use of AI to produce and spread disinformation and deepfakes, tamper with democratic processes and so on.

“The crucial risk of malicious AI use lies in its capacity to scale and democratise security harms that used to be costly and inaccessible, making these harms much more challenging to contain. The enhanced capabilities of AI to carry out cybersecurity attacks at scale, for instance, can amplify security risks on critical information infrastructure with more efficiency and much less cost.”
