CAROL EL-HAYEK – EPIDEMIOLOGIST
Carol El-Hayek is an infectious diseases epidemiologist with a focus on disease dynamics in marginalised populations. Her expertise lies in unravelling health data to guide public health decisions, particularly in areas like HIV, hepatitis C and sexual health. As an accredited health informatician, El-Hayek places the ethical handling of health data at the centre of her work, bridging the gap between privacy protection and sharing data-driven insights to improve health outcomes.
El-Hayek is currently working at the nexus of infectious diseases and artificial intelligence. She is completing her PhD at the University of Melbourne, where she is applying AI techniques to enhance the use of health data for tailoring health services to those who need them most. John Saint Michel caught up with El-Hayek for Fluoro to discuss health, surveillance and AI.
Fluoro (F). Can you detail your various roles throughout the years and the types of projects you’ve been involved in?
Carol El-Hayek (CEH). All my roles over the last 15 years have involved monitoring infectious diseases in the population and identifying what makes some people more susceptible than others. The aim of all my work is to transform health data into information that communities, clinicians, funders and policy makers can use to make decisions to improve health.
I’ve worked on a variety of projects related to HIV, hepatitis C, sexually transmissible infections, sexual health and public health, where I’ve reported trends and risk factors, evaluated health services and health promotion campaigns, and advocated for change in health policies.
The infections I monitor are still highly stigmatised and people at risk of these infections are often marginalised, so I’m focused on helping to make prevention and care accessible to the people who need it most. Right now, for example, I’m applying methods used in AI to clinical data to better monitor hepatitis C among people who inject drugs.
F. How would you describe the current role of AI in society and its application in your research for beneficial outcomes?
CEH. AI is critical, I would say.
AI powers so many things that are part of our everyday lives – social media, advertising, online customer service, home automation, entertainment, fitness, transport and traffic systems, you name it. AI technology has been around for a long time, although it is only just emerging in the medical and public health sectors. We can see huge potential for its application now in solving health problems: robotic assistance for the elderly and disabled, early detection of disease, personalised treatment plans, enhanced diagnostics and so on.
In terms of my research, there is more data than is humanly possible to handle, so we can really use AI technology to help us make sense of the overwhelming amount of information we now have available. What would take us decades to discover through conventional methods and analyses could be uncovered in a fraction of the time, particularly with the support of high-performance computing and, pretty soon, quantum computing. Complex calculations and simulations will not be anywhere near as challenging or time-consuming to execute.
F. Based on your experience, what challenges have arisen due to the implementation of AI?
CEH. Validation is the first thing – understanding how the AI application works, whether it’s fit for purpose and how well it will perform in real-world scenarios is so important in healthcare. If AI gets it wrong when predicting what movie you might like to watch next, it’s no big deal, but if it gets a diagnosis wrong it could create a lot of distress.
At this stage I see AI as assisting medical and public health practitioners to get better health outcomes faster, but integrating AI into services, human workflows and decision-making processes is not straightforward.
Also, an AI application can’t be transferred from one area to another without extensive effort and expertise. For example, an AI-powered chatbot designed to answer patient queries in general practice might not be suitable for addressing patient queries in a pharmacy; even though the techniques used to develop the chatbot are similar, the information used to train it is very different.
F. What are your views on the handling of the COVID-19 pandemic, and what key issues influenced the public’s response to this significant health event?
CEH. A key issue I was concerned with in relation to the COVID-19 pandemic was the infodemic – the spread of misinformation and disinformation – that developed in tandem, enabled by AI. For example, social bots were deployed across social media platforms, mimicking human language and planting false information that was then circulated among people’s online networks. Add to that the AI algorithms that predict what content people are interested in so that only related content appears, reinforcing the same misinformation or conspiracy theories. I spent a lot of my time talking to people who were genuinely confused and scared about the government’s agenda and vaccinations. This infodemic caused harm. It put people at risk of infection, divided families and in many ways undermined the public health response to COVID-19.
F. The discourse on privacy and data concerns is frequent. Can you explain how your work contributes to addressing these critical matters?
CEH. Health data is highly protected by the Privacy Act in Australia because of its sensitive nature, and there are very stringent conditions under which the data can be shared for a secondary purpose – that is, a purpose other than the one it was collected for. Having said that, Australians are mistrustful when it comes to allowing the use of their health data for other purposes, and that mistrust is justified when we’re still hearing about health data breaches. Earlier this year there was the Medibank data breach, and even though they were hacked by cybercriminals, it appears Medibank was not following data protection laws. They’re being sued, but meanwhile people’s very private and identifiable health information was leaked. As an informatician I can say that the infrastructure and technology exist to keep data secure, but they’re never completely foolproof because they rely on human involvement and oversight, and humans are fallible.
As public health researchers we are often secondary users of people’s health data, which means we are requesting the data from health services or registries that already have it. In these cases, the data must be de-identified before we receive it, so names, dates of birth, addresses and so on have been removed. To earn public trust we have to be accountable to ethics committees and other levels of governance as a prerequisite for accessing the data. We need to report on how the data is stored, handled and destroyed to avoid it becoming re-identified while in our custody.
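To make that concrete, here is a minimal sketch in Python of the kind of de-identification step described above – stripping direct identifiers such as names, dates of birth and addresses before records are released for secondary use. The field names and records are hypothetical, and real de-identification goes much further (generalising dates, suppressing rare combinations, formal governance approval); this is an illustration only, not the process used by any particular registry or service.

# Hypothetical sketch: remove direct identifiers from records before secondary use.
# Field names and data are invented for illustration.

DIRECT_IDENTIFIERS = {"name", "date_of_birth", "address", "medicare_number"}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {key: value for key, value in record.items()
            if key not in DIRECT_IDENTIFIERS}

records = [
    {"name": "Jane Citizen", "date_of_birth": "1980-05-14",
     "address": "1 Example St", "postcode": "3000",
     "test": "HCV RNA", "result": "not detected"},
]

print([de_identify(r) for r in records])
# [{'postcode': '3000', 'test': 'HCV RNA', 'result': 'not detected'}]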
F. How do you perceive the potential benefits of AI in the public health sector?
CEH. I see it as a very positive tool that could transform public health in Australia. Most of our work involves the analysis of health data to inform decisions and respond to a public health problem. AI technology means that we could access very large volumes of real-world data in real time and have the capability to use it much more effectively.
AI can detect patterns in data that humans can’t. I think about the potential to predict disease outbreaks and forecast their spread so we can get ahead of them, as just one example. Imagine if we had been that prepared for COVID-19. Another example is the analysis of genomic data, which is very complex. AI could quickly identify genetic variations and predict people’s susceptibility to certain conditions for early intervention. There are so many opportunities to improve public health action with timely information that can save lives or improve people’s quality of life.
F. From your experience, what primary concerns does the public hold regarding AI technology?
CEH. There is an underlying fear that artificial intelligence will one day surpass human intelligence and we will lose control of all decision making and be at the mercy of machines. There’s a fear that AI can be used to make intelligent weapons, for example, and increase global conflict, or that it can be programmed to do criminal things or violate human rights, such as the right to privacy.
Other more immediate concerns are that people’s jobs will become redundant with increasing automation and that interpersonal relationships will suffer with increased interaction with machines.
Governments around the world are now debating laws governing the use of AI. So far only the EU and China have specific laws in place, while other countries like Australia are in the process of developing them and, in the meantime, are relying on applying general laws – privacy, data protection and criminal laws – to AI. Technology moves quickly and historically regulation lags behind, and this can create anxiety.
F. Are there any parallels you can draw between the realms of science and art?
CEH. Yes, there are many as far as I can see.
I think both artists and scientists are attempting to understand the world they live in, whether that be at a macro or micro level. They’re driven by their curiosities and interests and really live their work because they can’t help but see their subject in everyday things around them. They’re cultivating ideas, assessing their relevance, thinking about innovative ways to make them a reality and wondering how they’re going to fund them!
I see art and science as a reflection and documentation of the times. They often make a significant impact on society and culture. They influence the way we function and think, what we talk about and what we know. Art and science also share a process. Ideas are brought to life in a careful and methodical way, through repeated experimentation – trial and error – so techniques are refined and then can be professionally and publicly scrutinised.
F. What limitations do you see in the capabilities of AI?
CEH. I’m not sure AI can be as creative in coming up with new ideas or as methodical in executing them, because it doesn’t “think” per se. It calculates answers based on our input and uses patterns and logic to do so, but it doesn’t really understand things in context based on human experience and intuition. AI can learn how to make decisions, predictions and so on, but it relies on programmers for the data and guidelines it learns from, and these can be biased or very specific to certain countries or professions, so the output from AI won’t always align with what we think is ethical.
Also, AI can give plausible-sounding answers that are completely wrong. I read an article recently that measured the rate at which this happens for frequently used language models; the latest version of ChatGPT, for example, was found to “make stuff up” 15-20% of the time.
F. I’ve often discussed with others whether AI is capable of introspection. What is your perspective on this matter?
CEH. I think AI has a level of self-analysis and can continually learn and improve, but its purpose is programmed, so it will not question and reflect in the way we do about our own thought processes and feelings. We have a conscience and imagination that can keep us up at night, and we learn mostly from our experiences, while AI’s thoughts and feelings are simulated and it learns mostly from external input or feedback.
F. In your view, what core characteristics distinguish humanity from intelligent machines?
CEH. Emotion and empathy. Even though we like to anthropomorphise machines and believe they are feeling emotions or empathising with us, I don’t believe they can. I’m reminded of the art installation Can’t Help Myself by Sun Yuan & Peng Yu, where a robotic arm was programmed to keep mopping up a red fluid or it would “die”, and everyone who saw it, including myself, was so sad for this machine. Other characteristics I can think of are creativity, ethics, sensory experiences, social and cultural influences, and humour.
–
Carol El-Hayek joined the Burnet Institute in 2008 as an epidemiologist, where she has had a central role in developing, evaluating and managing state and national infectious disease surveillance systems. She holds a Health Informatician Australasia certification with the Australian Institute of Digital Health.
–
Thank you, Carol El-Hayek and John Saint Michel.
–
About the art: We’ve collaborated with artists to visually interpret the insights presented. These digital creations explore themes like AI’s transformative potential and the interplay between technology and humanity.
–
Fluoro serves as a platform for curators, artists, designers and other creatives, both from the past and present. It allows them to present their thoughts, ideas, unique narratives, and experiences, enriching our cultural landscape. Join our global community of creatives to connect with us and be informed of all changes and feature stories as they come.