This article first appeared in Forum, The Edge Malaysia Weekly on December 30, 2024 - January 12, 2025
The rise of artificial intelligence (AI) has resulted in more frequent cyberattacks, forcing cybersecurity solution providers to respond by implementing AI solutions of their own.
This is especially true for generative AI (Gen AI), which produces new audio, images, text and code from text prompts, increasing the frequency and severity of attacks ranging from social engineering and malware to deepfakes and cyber espionage.
One-day vulnerabilities are a major focal point among cybersecurity industry players. The term refers to recently disclosed security flaws that have not yet been patched on affected systems. This information is published through the Common Vulnerabilities and Exposures (CVE) programme and several similar programmes so that cybersecurity teams can prioritise fixes and manage vulnerabilities.
However, this same information is open to bad actors as well. By feeding CVE information to large language models (LLMs), users can successfully exploit these known vulnerabilities without any offensive security-specific training or fine-tuning.
A 2024 report by Cloud Security Alliance highlights that AI-powered tools can achieve an 87% success rate in autonomously exploiting one-day vulnerabilities.
“Publicly available tools, like ChatGPT or Gemini, have mechanisms in place to ensure the ethical use of AI,” says Hon Fun Ping, managing director at NetAssist Group.
“If you ask it to help you generate malicious code to exploit this CVE, it would not give you an answer. However, there is the practice of prompt injection. By providing a certain prompt, you can bypass these security mechanisms.”
One does not even have to dig far to find these modern-day master keys. Hon says many of these prompts are hosted on GitHub, with tutorials freely available on YouTube.
AI has democratised malicious code, lowering the barrier to entry and making attacks more scalable and accessible to bad actors. With AI, a single user can generate numerous exploits in a short time, and it takes only one successful attack for the effort to be financially viable, Hon adds.
The misuse of LLMs has also led to the emergence of “Malla”, or malicious LLMs, explains Ken Yon, senior director of risk and compliance at Payments Network Malaysia Sdn Bhd (PayNet).
Mallas often exploit uncensored LLMs or use “jailbreak” prompts to circumvent safety filters on commercial LLMs. They are typically offered as web services on underground marketplaces, where users can purchase access to these malicious tools, adopting a cybercrime-as-a-service model.
“The barrier to entry to cybercrime is much lower due to the pervasiveness of Gen AI tools and the rise of Malla,” Yon tells Digital Edge.
Recent data supports this concern. Accenture reports a 223% surge in the trading of deepfake-related tools on dark web forums in the first quarter of 2024 compared with the same period in 2023. Yon concurs, stressing that many companies are not adequately prepared for this new threat landscape.
“AI-powered attacks are becoming more sophisticated, automated and fast-evolving. Traditional security measures are often insufficient to counter these rapid and advanced threats,” he explains. This is compounded by a general lack of understanding regarding the nature and impact of AI-driven attacks. Employees may not be able to detect increasingly realistic phishing attempts or deepfake content.
In this evolving battlefield, Yon believes attackers currently have the upper hand. “Historically, they ‘innovate’ faster and adapt to new technology more effectively, have easy access to tools and quickly adapt their tactics to bypass security controls.”
The economics of cybercrime also favour the attackers. “Attackers operate with lower costs due to easy access to AI tools, and are unencumbered by ethical or legal boundaries. More fundamentally, attackers have a much simpler task of finding a single vulnerability to compromise a system, while defenders have to secure every single point,” he says.
In this cat-and-mouse game between security experts and bad actors, cybersecurity firms have also deployed AI tools of their own. This technological arms race has evolved over the past decade, with both attackers and defenders continuously adapting their strategies as AI capabilities advance.
One way to track the industry’s AI adoption is to review academic research papers, as compiled by researchers from Slovenia’s Jožef Stefan Institute in their “Artificial intelligence for cybersecurity: Literature review and future research directions” study.
The researchers found that before 2016, AI in cybersecurity was described as a “relatively under-researched topic”, with minimal research activity. The years 2016 and 2017 marked the beginning of increased research interest, with studies exploring basic applications such as intrusion detection and threat identification.
In simple terms, intrusion detection involves feeding AI algorithms with activity logs and network traffic, training them to differentiate between normal behaviours and attacks. Threat identification, on the other hand, scans numerous sources, such as dark web forums, CVE databases and security news, to analyse common patterns, identify potential threats and alert security teams.
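In the simplest terms, the sketch below illustrates the anomaly detection idea behind AI-based intrusion detection: a model is trained on features drawn from normal network traffic and flags connections that fall outside that baseline. The features, figures and thresholds are illustrative assumptions, not taken from any vendor's product.

```python
# Illustrative sketch of anomaly-based intrusion detection.
# Features and values are made up for demonstration purposes.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-connection features: bytes out, bytes in,
# duration (seconds) and failed logins, clustered around normal values.
normal_traffic = np.column_stack([
    rng.normal(1_200, 200, 500),   # bytes out
    rng.normal(3_000, 500, 500),   # bytes in
    rng.normal(0.8, 0.2, 500),     # duration
    rng.poisson(0.1, 500),         # failed logins
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A connection exfiltrating a large volume of data after repeated failed
# logins sits far outside the learned profile (-1 means anomaly).
suspicious = np.array([[250_000, 300, 45.0, 12]])
print(detector.predict(suspicious))  # expected output: [-1]
```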
These tasks are traditionally done manually, and AI offers the advantage of automation and speed. Between 2018 and 2021, the study notes, there was significant growth in this area, both in the number of research publications and in the scope and complexity of the research.
More datasets are now available, and the industry has moved away from basic machine learning models to adopting deep learning methods, complex feature extraction and more sophisticated data analytics methodologies. Research focus has also shifted to scaling up existing operations and more practical use cases, such as automated threat hunting, predictive intelligence and advanced behavioural analysis.
After 2021, there was an emphasis on making cybersecurity systems more applicable in the real world. One such effort is making AI more trustworthy in meeting current and future regulatory requirements through “Explainable AI”, a collection of techniques and processes aimed at helping users understand how AI reaches its conclusions. More importantly, post-2021 research has focused on zero-day attack detection.
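As a rough illustration of what “Explainable AI” means in practice, the sketch below trains a small classifier on invented alert data and then reads back which input features carried the most weight in its decisions. The feature names and figures are assumptions for illustration only; production systems rely on richer techniques such as SHAP values.

```python
# Toy illustration of explainability: inspecting which features drive
# a security model's decisions. Data and feature names are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

feature_names = ["failed_logins", "bytes_out", "off_hours_access"]

# Rows are historical alerts; label 1 marks a confirmed attack.
X = np.array([
    [0, 1_200, 0],
    [1, 900, 0],
    [0, 2_500, 1],
    [14, 250_000, 1],
    [12, 180_000, 1],
    [9, 300_000, 1],
])
y = np.array([0, 0, 0, 1, 1, 1])

clf = RandomForestClassifier(random_state=0).fit(X, y)

# Global feature importances give analysts a coarse answer to the question
# regulators increasingly ask: why did the model raise this alert?
for name, weight in zip(feature_names, clf.feature_importances_):
    print(f"{name}: {weight:.2f}")
```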
Conventionally, cybersecurity is a reactive activity: a vulnerability has to be exploited and detected before experts can implement a security patch. Zero-day attack detection aims to uncover vulnerabilities before bad actors can exploit them.
Flexxon is one company that exemplifies this proactive approach to cybersecurity, and it does so in an unconventional manner. The company has manufactured NAND flash memory for 17 years; NAND flash is a data storage technology that retains data without needing continuous power and is commonly found in smartphones, solid-state drives (SSDs) and other electronic devices. Flexxon started venturing into hardware security solutions in 2018.
It has developed a product called “X-PHY Cybersecure SSD”, which it claims to be the world’s first dynamic, autonomous defence system focused on protecting data and systems at the physical memory layer.
The X-PHY SSD’s effectiveness stems from its unique architecture and AI implementation.
Unlike conventional SSDs that may have multiple data access points, X-PHY employs a “One Door Access PCIe/NVMe” approach, forcing all data requests through a single, highly monitored pathway.
This is similar to having a single, well-guarded entrance instead of multiple potential entry points that can be compromised.
At this controlled entry point, the embedded AI monitors read-write behaviours in real time. By analysing these operations, the AI can detect anomalies that might indicate a cyberattack or system compromise. When suspicious activities are detected, the SSD can isolate itself to prevent further damage or compromises from spreading to other parts of the system.
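Flexxon's firmware is proprietary, so the following is only a conceptual sketch of the idea described above: a single, monitored access point that tracks read-write behaviour and isolates the drive when activity looks abnormal. The class name, thresholds and isolation response are assumptions, not the actual X-PHY implementation.

```python
# Conceptual sketch of a monitored storage access point. Thresholds and
# behaviour are hypothetical and simplified for illustration.
import time
from collections import deque

class MonitoredDrive:
    """Single access point that watches write behaviour in a sliding window."""

    def __init__(self, window_s=1.0, max_writes_per_window=500):
        self.events = deque()              # timestamps of recent writes
        self.window_s = window_s
        self.max_writes = max_writes_per_window
        self.isolated = False

    def write(self, block_id, data):
        if self.isolated:
            raise PermissionError("drive isolated after suspicious activity")
        now = time.monotonic()
        self.events.append(now)
        # Keep only the writes that fall inside the sliding window.
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        # A sudden burst of overwrites, the pattern left by ransomware
        # encrypting files en masse, trips the isolation response.
        if len(self.events) > self.max_writes:
            self.isolated = True
            raise PermissionError("drive isolated after suspicious activity")
        # ...the actual write to flash memory would happen here...
```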
“Both attackers and defenders are using AI, but the advantage always goes to the attacker. While hackers only need to succeed once, defenders must be perfect every time,” says Camellia Chan, CEO of Flexxon.
“The current approach [to cybersecurity] is reactive. You have to take at least one loss, to be compromised, before you can identify the threats because [the industry relies on] a database-driven approach.”
Traditional edge detection methods struggle because they must account for countless variables in an open environment, Chan explains.
This results in the industry’s reliance on libraries that document pre-existing threats, but take time to update and patch. By operating at the hardware level with a single access point, X-PHY significantly reduces these variables.
Software-based cybersecurity solutions operate at the outer layers of a system, such as the operating system or application level. X-PHY’s approach of embedding protection directly into storage hardware puts defences at the closest proximity to the data itself — the primary target of most cyberattacks.
This is particularly crucial because regardless of how sophisticated an attack might be, it ultimately needs to interact with the physical storage to access, modify or encrypt data. Hence, the company says, X-PHY can prevent even zero-day attacks, a highly coveted feat in the industry.
Flexxon has been granted 40 patents for its technology and was one of 13 global companies showcased at the US White House as part of the Counter Ransomware Initiative (CRI) 2022.
X-PHY was initially built to serve niche customers in the industrial, medical, military, automotive and aerospace sectors, but Chan hopes that such technologies will become more mainstream.
While Flexxon approaches cybersecurity from a hardware perspective, other threats require specialised solutions, particularly in the realm of identity verification where Gen AI and deepfake technology have created new challenges for traditional security measures.
For instance, in 2023, former director-general of health Tan Sri Dr Noor Hisham Abdullah discovered that his likeness was being used without his consent to promote diabetes medication. Multiple high-profile figures, including singers, athletes and corporate leaders, have been victims of deepfake videos promoting fraudulent investment schemes.
The democratisation of AI tools has not just lowered barriers for code-based attacks, it has also revolutionised identity fraud through deepfake technology.
“These attacks can happen in four different ways,” says Law Tien Soon, deputy CEO of Innov8tif Solutions Sdn Bhd.
“Attackers can swap faces using AI, generate synthetic faces from scratch, manipulate head models via 3D software or manually tamper with digital images.”
According to Pillar Security’s “The State of Attacks on GenAI” report, 90% of successful attacks using Gen AI resulted in sensitive data leakage, with adversaries requiring only 42 seconds on average to complete an attack. More concerning is that attackers need just five interactions with Gen AI applications to execute a successful attack.
Innov8tif has introduced two solutions to address these attacks, says Law. The first is anti-deepfake technology: AI models trained to notice anomalies commonly associated with deepfakes, such as discoloration or morphing, and to assess the likelihood that a piece of footage has been manipulated.

The second is anti-injection technology. Because deepfakes are software-generated, most attackers bypass physical cameras and present the modified footage via virtual cameras or direct injection. The technology recognises device signatures and raises red flags if something seems out of place.
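As a toy illustration of the device-signature idea, the snippet below compares camera names reported by the operating system against a list of known virtual-camera software. The device names and the list itself are assumptions; production anti-injection systems inspect far richer hardware and driver signals than a simple name match.

```python
# Simplified illustration of a device-signature check for injection attacks.
# The virtual-camera list and device names are illustrative only.
KNOWN_VIRTUAL_CAMERAS = {"obs virtual camera", "manycam", "droidcam", "snap camera"}

def looks_injected(device_names):
    """Return True if any reported camera matches known virtual-camera software."""
    return any(
        virtual in name.lower()
        for name in device_names
        for virtual in KNOWN_VIRTUAL_CAMERAS
    )

# Example: camera list as it might be reported during an eKYC selfie check.
devices = ["Integrated Webcam", "OBS Virtual Camera"]
if looks_injected(devices):
    print("Red flag: footage may be injected via a virtual camera")
```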
While the technology is ready, Law says many enterprise clients are still unaware that such solutions exist at a production-ready scale. The company is currently in the process of deploying these systems and working with enterprises to upgrade their existing security infrastructure.
The integration of AI into cybersecurity is not just changing how we defend against threats, it is fundamentally reshaping the industry’s workforce requirements. Traditional security certifications, while still valuable, are no longer sufficient on their own.
“Previously, we only looked at whether candidates had certifications like Certified Ethical Hacker or Certified Incident Handler. Now, we also ask: ‘Do you know how to use AI to improve your productivity?’” says NetAssist’s Hon.
This shift highlights a growing gap between educational curricula and industry needs. Hon urges educational institutions to ensure that cybersecurity graduates are well versed in AI technologies alongside traditional security skills.
The impact of AI on workforce dynamics is particularly evident in day-to-day operations. Tasks that once required significant manual effort have been streamlined through AI automation. “What used to take 15 to 20 minutes, like analysing alerts and sending customer notifications, can now be completed in about five minutes [with AI],” says Hon.
This efficiency gain has led to a restructuring of team compositions. For instance, where security operations centres (SOCs) previously needed multiple senior analysts to write cybersecurity reports, they now require fewer personnel, with AI handling the bulk of the analysis and a single expert verifying the outputs.
However, this does not necessarily mean a reduction in workforce — rather, a shift towards more specialised roles. Flexxon, which develops AI-enabled hardware security solutions, faces unique challenges in this transition.
“Because we specialise in hardware, the talent pool is significantly smaller compared with software,” says Chan. The company requires professionals who understand both hardware architecture and AI implementation — a rare combination in today’s market.
The innovative nature of these new solutions adds another layer of complexity to talent acquisition. “It’s something very new. Regardless of their background, new hires need to engage in extensive research and learning. We’re not just implementing existing solutions, we’re creating new ones,” she adds.
As AI becomes more integral to cybersecurity solutions, regulatory frameworks worldwide are struggling to keep pace with technological advancement. According to Adrian Hia, managing director for Asia Pacific at Kaspersky, there is still no clear consensus on the best approach.
Hia has a keen interest in how the regulatory space affects both innovation and security, as Kaspersky itself has integrated AI across its cybersecurity infrastructure. For instance, its products use machine learning to detect suspicious activities, and its detection and response platform employs an AI-based auto-analyst.
“Many countries have attempted to develop their own AI regulations and governance frameworks in the past year, but there does not seem to be any single approach that stands out yet,” he says.
In Malaysia, the Ministry of Science, Technology and Innovation (Mosti) has published the National Guidelines on AI Governance and Ethics — a set of broad principles for AI end-users, policymakers and technology providers.
“These are non-binding, which provides a space for innovation involving AI in Malaysia. We believe this is an enlightened approach. Any regulation involving AI should not create artificial obstacles for AI developers,” states Hia.
He asserts the importance of maintaining flexibility in a rapidly evolving technological landscape, where finding the right balance between regulation and innovation is crucial. Instead of implementing blanket restrictions, Hia advocates for regulations that provide additional incentives for companies engaged in AI development and implementation.
The focus should be on high-risk areas rather than attempting to regulate all AI applications equally. “Any specific regulation should target those using a large amount of data, and processes involving significant risk,” says Hia. “This could mean industry-specific requirements, rather than having the same rules apply to all.”
Perhaps the biggest challenge facing the industry is the fragmented nature of current regulations. As companies operate across multiple jurisdictions, they face an increasingly complex web of compliance requirements.
“It would not be in the business interest to have to adapt to a different set of rules for every country that a company operates in,” Hia points out. He hopes to see some international harmonisation of AI regulations in the future, although he acknowledges this may take time as the industry continues to mature and evolve.