This article first appeared in Forum, The Edge Malaysia Weekly on October 16, 2023 - October 22, 2023
In 2019, the World Health Organization (WHO) revealed that nearly 20% of the world’s youth aged 15 to 24 grapple with mental health conditions such as anxiety and depression, a staggering 244 million people. Worse, in 2016, WHO projected a global shortfall of 18 million health professionals by 2030. A mental health burden of this scale cannot be met with so large a shortfall of health professionals.
Health systems must therefore find ways to use emerging technologies to bridge the gap between the human capital shortfall and the increasing mental health burden. This month’s column discusses how artificial intelligence (AI) and machine learning (ML) can be a lifeline for youth mental health care, and the public policies needed to support this lifeline.
AI for mental health care can be divided into two broad categories: perceptual AI and intervention AI. Perceptual AI is used to understand and interpret data from the environment. For example, a perceptual AI app can analyse a user’s behavioural patterns, or even suggest a diagnosis, from their text messages, social media posts and mood tracking. This is also known as “digital phenotyping”: the analysis of an individual’s psychological states from their interactions with smartphones and wearables.
Intervention AI takes the insights generated by perceptual AI and translates them into actions or recommendations. For psychologists or psychiatrists, this means receiving data-driven suggestions for treatment plans or medication adjustments based on their patient’s digital phenotype. Together, these two categories of AI create a synergistic approach to mental health care, where perceptual AI supports diagnosis and screening, and intervention AI supports treatment and management.
AI-enabled mental health screening and diagnostic tools can increase efficiency and scalability. In practical terms, affordable and quick AI-enabled screening tools can support universal screening of youths and early identification of problems. Universal screening and early intervention are important: currently, almost 60% of individuals with mental health conditions remain untreated, a situation that can lead to severe outcomes, including suicide.
We share four examples of promising AI-enabled tools that may assist in screening and diagnosis.
(i) Digital phenotyping uses data from smartphones to infer behavioural information and predict psychological states. For instance, depression often manifests subtly through slower mental and physical activity, such as slower mobile device usage. A 2015 study suggests that analysing the speed and accuracy of typing on smartphones can detect depressive tendencies with an accuracy of up to 86.5%, especially in young adults (a simple illustrative sketch of this idea follows this list).
(ii) Sensors embedded in daily life can collect and analyse data to predict the risk of disease. For example, voice analysis can provide valuable insights into an individual’s emotional well-being, such as changes in tone of voice and silent gaps in speech. Studies by Gravenhorst and colleagues in Switzerland found that the rate of speech and the length of pauses can be effective measures for identifying depression or mania. Sensors are further strengthened by AI’s capability to process large amounts of data.
(iii) AI can also efficiently review conventional health data, such as hospital documents, patient records and lab results, to help screen for and diagnose mental health conditions using ML. Advanced ML techniques are already identifying behavioural patterns, brain structural variations and potential genetic markers associated with diverse mental health conditions such as ADHD and schizophrenia. A 2020 study used ML to identify individuals who would go on to develop bipolar disorder as early as four years before onset.
(iv) In the realm of clinical efficacy, AI-driven chatbots have delivered cognitive behavioural therapy (CBT), a common evidence-based intervention. Studies by Fitzpatrick and colleagues in 2017 showed that chatbots can reduce symptoms of depression and anxiety, with 100% of chatbot users reporting that they had learnt something new, compared with only 75% of those using other methods. Chatbots offer convenient and engaging support, especially for younger individuals who may prefer digital interactions.
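To make the digital phenotyping idea in (i) more concrete, here is a minimal, hypothetical sketch in Python of how typing speed and accuracy features might be extracted from keystroke logs and scored with a simple classifier. The feature definitions, thresholds and toy data are our own illustrative assumptions, not the method or model used in the 2015 study, and nothing here constitutes a validated clinical tool.

```python
# Hypothetical sketch of digital phenotyping from keystroke logs.
# All field names, thresholds and data below are illustrative assumptions.
from dataclasses import dataclass
from typing import List

import numpy as np
from sklearn.linear_model import LogisticRegression


@dataclass
class TypingSession:
    keystroke_times: List[float]  # seconds since session start, one per key press
    backspace_count: int          # crude proxy for typing accuracy
    total_keys: int


def session_features(s: TypingSession) -> np.ndarray:
    """Turn one typing session into [speed, error_rate, pause_ratio]."""
    gaps = np.diff(s.keystroke_times)
    speed = 1.0 / gaps.mean() if gaps.size else 0.0                  # keys per second
    error_rate = s.backspace_count / max(s.total_keys, 1)            # share of corrections
    pause_ratio = float((gaps > 2.0).mean()) if gaps.size else 0.0   # long hesitations
    return np.array([speed, error_rate, pause_ratio])


# Toy training data: one feature row per session; label 1 = flagged for
# clinician follow-up, 0 = not flagged. Entirely made up for illustration.
X = np.array([
    [3.1, 0.05, 0.02],
    [1.2, 0.20, 0.15],
    [2.8, 0.07, 0.03],
    [0.9, 0.25, 0.22],
])
y = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

new_session = TypingSession(
    keystroke_times=[0.0, 0.9, 2.1, 3.4, 6.0],
    backspace_count=2,
    total_keys=5,
)
risk = model.predict_proba(session_features(new_session).reshape(1, -1))[0, 1]
print(f"Screening score (probability of being flagged): {risk:.2f}")
```

In any real deployment, such a score would only ever prompt a clinician-led assessment; it would never be a diagnosis on its own.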
All the above solutions are enabled by the 4.5 billion smartphones in the world today. We live with smartphones, which collect a wide range of sensor data such as steps taken, voice and text messages and, through external wearables, breathing or heart rate measurements. This enables doctors to make real-time adjustments to prescriptions based on patients’ conditions, ensuring more precise treatment.
However, these innovative approaches also raise ethical challenges that demand strong safeguards. We propose three lines of defence for AI in mental health. The first line of ethical defence belongs to private companies delivering digital mental health care services or AI/ML services, especially focused on their tech, software development and product teams. These teams must have strong legal and security frameworks, simple consent forms in layperson language, and transparent data policies to uphold individual autonomy.
The second line of ethical defence also belongs to companies, through their internal audit, monitoring and quality control mechanisms, focused on their medical, clinical, science, legal and ethics teams. These teams must have financial incentives separate from those of the product and commercial teams, with separate and independent decision-making. We believe that self-enforcement is a reasonable requirement of the private sector, and it will go a long way towards building trust among patients, providers and policymakers. Trust-building will take years or even decades and will benefit all stakeholders, including private companies.
The third line of ethical defence will belong to national regulators. In Malaysia, there are already two strong regulators for AI/ML in mental health care: the Ministry of Health and the Malaysian Communications and Multimedia Commission. Therefore, we may not need a third regulatory body for AI/ML in mental health care. However, the European Union’s planned AI Act envisions a stronger “AI ombudsman”, which may be more fit-for-purpose for future and emerging technologies. Malaysia can consider a separate stand-alone “AI Commission” for health and non-health sectors (such as AI in education, financial services and media).
We believe that any regulatory body must also consider the inclusion of “kill switches” in AI technologies, providing a means to immediately deactivate an AI technology in an emergency. We also believe that the borderless nature of software coding, AI, ML and cloud-based processing means the Malaysian government must build alliances with other governments. Such cross-border collaboration aims to cultivate a globally standardised regulatory environment that protects safety and efficiency at an international level.
Finally, even with the three lines of defence and global cooperation in place, AI/ML in mental health care must fundamentally prioritise nurturing digital literacy and health literacy, guiding individuals to navigate these platforms judiciously and safely. We envision an education model that is robust, inclusive and adaptive.
The future of AI in mental health care holds the promise of a safe and effective blend of human-human therapy and process automation. Automation can streamline administrative tasks, such as appointment scheduling and data entry, allowing mental health professionals to allocate more time and energy towards building therapeutic relationships and providing emotional support to their patients. This shift will help alleviate the burden on healthcare providers, ensuring that their expertise is focused on the most critical aspects of patient care.
Integrating AI/ML with population-level public health is as important as integrating AI/ML with individual-level clinical protocols. In other words, combining individual-level data to create population-level insights will enable better public health policies for mental health care. For example, schools or companies can understand the mental health of their students or employees, and districts or states can understand the mental health of people going through a dengue outbreak or natural disaster. However, it is important that the data is aggregated and anonymised to protect individual privacy.
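As a rough illustration of this aggregate-and-anonymise principle, the sketch below groups hypothetical individual screening scores by district and suppresses any group too small to be reported safely. The column names, the minimum group size and the data are illustrative assumptions, not a prescribed standard.

```python
# Illustrative sketch: turning individual-level screening scores into
# population-level insights while suppressing small, re-identifiable groups.
import pandas as pd

# Hypothetical individual-level records (already de-identified).
records = pd.DataFrame({
    "district": ["A", "A", "A", "B", "B", "C"],
    "screening_score": [0.12, 0.45, 0.30, 0.80, 0.55, 0.20],
})

MIN_GROUP_SIZE = 3  # report only groups large enough to remain anonymous

summary = (
    records.groupby("district")["screening_score"]
    .agg(n="count", mean_score="mean")
    .reset_index()
)

# Suppress small groups, then drop the raw counts before sharing the summary.
summary = summary[summary["n"] >= MIN_GROUP_SIZE].drop(columns="n")
print(summary)
```

The design choice here is simply that population-level reporting starts only after individual identities can no longer be inferred from the published figures.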
However, governance of AI/ML in healthcare must be coordinated with governance in other non-health sectors, including education, consumer protection and media. As one sector progresses faster than another, there may be unintentional pressure on lagging sectors to “just adopt standards and norms from another sector” without adapting them. Therefore, while innovation in healthcare should not be intentionally slowed down, all parties must strengthen the governance of AI/ML using a whole-of-government and whole-of-society approach.
The future of AI in mental health care lies in striking a delicate balance between the benefits of automation and the irreplaceable human touch. Strong ethical and legal guardrails and strong integration with clinical protocols and public health are necessary for AI to fully support mental health care.
Nadirah Zakhir is reading physics at Imperial College London and will specialise in medical robotics. Farihin Ufiya is trained in neuroscience and is product director at Angsana Health and the founding director of Nyawa (Mental Health Aid Association). Khor Swee Kheng specialises in health systems, and is CEO of Angsana Health.