No, ChatGPT didn’t write this

By Wycliffe Waweru, Deputy Director, Monitoring, PSI

As the title states: ChatGPT did NOT write this article.

I did.  

ChatGPT, the Artificial Intelligence (AI) chatbot that recently took the internet by storm, brings the power of AI into the hands of everyday users. It can assist students with homework, review computer code, and write poetry, jokes, and, yes, even articles like this one.

It could have written this article.

And perhaps it could help us answer the question: could ChatGPT, along with other advances in AI, improve healthcare delivery?

The World Health Organization [1] (WHO) has stated that the use of AI for health holds great promise: it can improve patient care, provide accurate diagnoses, support pandemic preparedness and response, inform the decisions of health policymakers, and help allocate resources within health systems. While the promise of AI is still a long way from being fully realized at scale, there are real-world examples that demonstrate its utility in pandemic preparedness and in supporting accurate diagnosis.

AI for Pandemic Preparedness and Response

In the final days of 2019, several AI-driven disease surveillance platforms sent alerts about a flu-like outbreak in Wuhan, China.

That was a week before major health agencies notified the public about the outbreak. 

These disease surveillance platforms use natural language processing and machine learning to sift through news articles from around the world and analyse them alongside public health data in their threat assessments. The ability of AI algorithms to rapidly identify outbreaks of infectious diseases can improve the speed at which public health professionals respond to health security challenges. In addition, AI can be used for predictive modelling to simulate the spread of infectious diseases, providing valuable information that can help public health officials make informed decisions about containment and control measures. AI can support modelling of infectious disease transmission by using real-time data to identify the factors that influence the spread of a disease, or the effect of interventions while they are still underway, rather than evaluating them only after they have run their course.
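To make the predictive-modelling idea concrete, here is a minimal sketch of the kind of compartmental (SIR) simulation that many epidemic models build on. The function name, parameters, and numbers are purely illustrative and are not taken from any of the platforms mentioned above; real systems fit parameters like the transmission rate to incoming surveillance data and use far richer models.

```python
def simulate_sir(s, i, r, beta, gamma, days):
    """Discrete-time SIR epidemic model.

    s, i, r  -- initial counts of susceptible, infected, recovered people
    beta     -- transmission rate (contacts per person per day that spread infection)
    gamma    -- recovery rate (1 / average infectious period in days)
    Returns a list of (s, i, r) tuples, one per day.
    """
    history = [(s, i, r)]
    n = s + i + r  # total population, constant in this simple model
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Example: 10 initial cases in a population of 10,000, R0 = beta/gamma = 3.
history = simulate_sir(9990.0, 10.0, 0.0, beta=0.3, gamma=0.1, days=160)
peak_infected = max(i for _, i, _ in history)
```

Lowering `beta` in such a model (e.g., to represent masking or distancing) flattens the infection peak, which is the kind of "what if" question officials ask of predictive models before choosing containment measures.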

Improving Accuracy of Diagnoses

AI improves the accuracy of diagnoses by assisting in the interpretation of diagnostic tests and medical imaging. In Kenya, PSI partnered with Audere on a research project to assess HealthPulse AI, a suite of AI-powered tools for clinicians, Community Health Workers (CHWs), and consumers. We studied whether the tools could improve the accuracy with which community health workers and health workers in private clinics administer and interpret malaria Rapid Diagnostic Test kits (RDTs). HealthPulse AI uses machine learning and computer vision to improve the accuracy of RDT results: it requires only an image of the RDT captured by the user's smartphone to interpret the result, and it can read even the faint test result lines that expert test readers may miss.

This pilot project demonstrated that AI-powered tools in the hands of health facilities and community health workers can improve the accuracy and interpretation of rapid diagnostic tests and, in turn, the quality of care that consumers receive. It also holds promise as a mobile tool that can be scaled up in low-resource settings, with potential benefits as a supportive supervision, diagnostic, and surveillance tool. Just as importantly, the project confirmed that such a tool would be accepted and welcomed by health facilities and CHWs.

AI and Consumer Chatbots

What does the buzz generated by ChatGPT, and reports of Google's anticipated release of its Bard AI chatbot, mean for consumer-facing health chatbots? While there is demand for the conversational dialogue that AI chatbots offer in response to consumer questions, specific challenges need to be addressed before their widespread adoption by the global health community. AI chatbots will need to provide safe, accurate, effective, and quality-assured responses to consumer health-related queries, as even small inaccuracies can have serious mental or physical consequences. Because they lack the depth of medical knowledge, clinical experience, and judgement that human healthcare providers possess, AI chatbots should provide information and suggestions while recommending that users discuss their specific health concerns with their healthcare provider.

Adopting AI in Low- and Middle-Income Countries (LMICs)

The performance of AI depends on the availability of vast amounts of structured, machine-readable data. Comprehensive and diverse datasets, particularly from LMICs, are needed to train AI models so that these technologies are broadly applicable. AI algorithms trained in specific contexts may not generalize beyond the contexts for which they were developed.

The adoption of AI is also limited by end-users' access to information and communication technology. AI solutions that work on affordable smartphones can improve access to these technologies across the health system in LMICs.

Finally, to address potential ethical concerns arising from the use of AI, health sector stewards should take the lead in developing guidance, laws, and regulations to ensure that AI technologies are used ethically and do not infringe on the rights of consumers.

ChatGPT, do you agree?

Learn more about PSI’s work scaling technology for good. Interested in partnering? Email Martin Dale ([email protected]).

[1] World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. 2021.
