Threats of Artificial Intelligence to Human Health and Human Existence

Introduction


Artificial intelligence (AI) is commonly understood as the capacity of machines to perform tasks that normally require human intelligence, such as learning, reasoning, analysing data, recognising patterns, understanding language and making decisions. Over the last decade, AI has developed at an extraordinary pace and has begun to influence almost every sector of society, including healthcare, education, finance, security, media and governance. In most discussions, AI is presented as a powerful and beneficial tool that can solve complex problems, increase efficiency and improve quality of life. In healthcare in particular, AI is often celebrated for its ability to enhance diagnosis, support clinicians, develop new medicines and extend healthcare services to underserved populations.


However, like all transformative technologies, AI also carries serious risks. While much of the existing health-related literature focuses on technical errors, data privacy concerns or bias in clinical algorithms, there is comparatively little attention given to the broader social, political, economic and security-related harms that AI may generate. These wider impacts are critically important because they shape the social and structural conditions that determine human health and well-being. When misused or poorly regulated, AI has the potential not only to harm individuals but also to destabilise societies and, in the most extreme scenarios, threaten human existence itself.


This paper sets out and expands the core argument that artificial intelligence poses three major categories of threat to human health through the misuse of so-called narrow AI, together with a further, potentially existential threat from the future development of self-improving artificial general intelligence (AGI). It also highlights the urgent need for strong regulation and calls on the medical and public health community to play an active role in shaping a safer AI-driven future.


The Promise and Risks of AI in Healthcare


AI holds enormous promise for healthcare systems around the world. Through technologies such as machine learning, natural language processing, image recognition and robotics, AI can support earlier disease detection, personalise treatments, reduce diagnostic errors and ease the workload of healthcare professionals. AI-driven tools can analyse medical images more rapidly, help predict disease outbreaks using large datasets and enable remote healthcare delivery in areas with limited medical infrastructure.


Despite these benefits, AI is not inherently neutral. It reflects the data, values and incentives embedded within the systems that create and deploy it. In healthcare, poorly designed or inadequately tested AI systems can directly harm patients. Errors in algorithms may lead to misdiagnosis, inappropriate treatment or delayed care. Data privacy breaches can expose sensitive personal health information, while cybersecurity vulnerabilities can disrupt healthcare services.


Moreover, AI can amplify existing inequalities. If training data reflect historical biases or exclude certain populations, AI systems may perform worse for marginalised groups. Real-world examples already demonstrate these risks, such as medical devices and algorithms that are less accurate for people with darker skin tones or facial recognition systems that misclassify gender and identity. These issues highlight that AI-related harm is not only technical but also deeply social and ethical.
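The mechanism behind such bias can be illustrated with a deliberately simplified simulation (all names and numbers below are invented for illustration, not drawn from any real device). The sketch fits a single diagnostic threshold on training data drawn from one population, then evaluates it on a second population whose sensor readings are systematically shifted, mimicking a device that performs differently for that group:

```python
import random

random.seed(0)

# Toy model: a "diagnostic" is a single threshold fitted on training
# data drawn entirely from group A. Group B's readings are shifted,
# mimicking a sensor that reads differently for that population.
def make_patients(n, shift):
    # Alternate sick/healthy; sick patients have lower readings.
    data = []
    for i in range(n):
        sick = (i % 2 == 0)
        base = 88.0 if sick else 96.0
        reading = base + shift + random.gauss(0, 1.5)
        data.append((reading, sick))
    return data

train = make_patients(1000, shift=0.0)             # training: group A only
threshold = sum(r for r, _ in train) / len(train)  # naive fitted cut-off

def accuracy(patients):
    # Classify as sick when the reading falls below the threshold.
    correct = sum((r < threshold) == sick for r, sick in patients)
    return correct / len(patients)

group_a = make_patients(500, shift=0.0)  # matches training distribution
group_b = make_patients(500, shift=3.0)  # systematically shifted readings

print(f"group A accuracy: {accuracy(group_a):.2f}")
print(f"group B accuracy: {accuracy(group_b):.2f}")
```

Even in this toy setting, the model is visibly less accurate for the group absent from training. Real clinical algorithms fail in the same structural way, though through far more complex pathways.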


While these clinical and technical concerns are important, they represent only part of the picture. The broader and more upstream impacts of AI on society may ultimately have far greater consequences for population health.


Misuse of AI and Its Impact on Social Determinants of Health


The first major category of threat arises from AI’s ability to collect, process and analyse vast amounts of personal data at unprecedented speed and scale. AI systems can integrate information from social media, online activity, financial records, location tracking and surveillance technologies to build detailed profiles of individuals and communities. This capability can be used for beneficial purposes, such as improving public services or enhancing security, but it can also be exploited in ways that harm democratic institutions, social cohesion and individual autonomy.


One of the most concerning uses of AI is its role in controlling and manipulating people. Social media platforms, driven by commercial incentives, use AI algorithms to maximise user engagement. In doing so, these systems often prioritise emotionally charged, polarising or extreme content. This has contributed to the spread of misinformation, political polarisation and radicalisation in many societies. Such social fragmentation has indirect but profound public health consequences, including increased stress, anxiety, social conflict and erosion of trust in institutions.


AI-powered targeted advertising and political messaging further intensify these risks. By analysing personal data, AI systems can deliver highly personalised messages designed to influence beliefs, emotions and behaviour. Research and real-world events have shown that such tools can be used to manipulate voters, interfere in elections and undermine democratic processes. When democratic governance is weakened, public health systems, social protection mechanisms and human rights protections are often among the first casualties.


The rise of deepfake technologies adds another layer of danger. AI-generated images, audio and videos can convincingly distort reality, making it increasingly difficult to distinguish truth from falsehood. This erosion of trust can fuel social instability, conflict and violence, all of which have serious health implications.


AI-Driven Surveillance and Authoritarian Control


Another dimension of AI misuse involves mass surveillance and social control. AI-enabled surveillance systems combine facial recognition, biometric data and behavioural analysis to monitor populations in real time. In some contexts, these systems are justified as tools for public safety or crime prevention. However, when deployed without strong safeguards, they can become instruments of repression.


A widely cited example is the use of AI in social credit systems, where individuals’ behaviours are continuously monitored and scored. Such systems can automatically impose penalties, restrict access to services or limit freedom of movement based on algorithmic judgments. This form of automated governance risks entrenching inequality, reinforcing discrimination and stripping individuals of due process.


AI surveillance technologies are no longer limited to a small number of countries. Dozens of governments around the world, including both authoritarian regimes and liberal democracies, are expanding their surveillance capabilities. While surveillance existed long before AI, artificial intelligence dramatically increases its scale, precision and effectiveness. This makes it easier for governments or powerful actors to suppress dissent, target minority groups and consolidate power.


From a public health perspective, societies characterised by fear, repression and inequality tend to experience worse health outcomes. Chronic stress, reduced access to services and social exclusion all contribute to poor physical and mental health. AI-driven surveillance therefore represents a significant upstream threat to population health.


Lethal Autonomous Weapons and Global Security


The second major category of threat relates to the militarisation of AI, particularly the development of lethal autonomous weapon systems (LAWS). These are weapons capable of selecting and engaging targets without direct human control. While AI has long been used in military logistics and defensive systems, fully autonomous weapons mark a fundamental shift in the nature of warfare.


LAWS remove human judgment from life-and-death decisions, effectively dehumanising the use of lethal force. This development has been described as the third revolution in warfare, after gunpowder and nuclear weapons. Unlike nuclear weapons, however, autonomous weapons can be relatively cheap, easy to reproduce and difficult to regulate.


Small AI-enabled drones equipped with explosives and facial recognition technology could be mass-produced and deployed at scale. Such weapons could be programmed to target specific groups or individuals, raising the possibility of targeted mass killings. The potential for misuse by states, non-state actors or terrorists is immense.


The public health implications of LAWS are severe. Armed conflict already causes widespread death, injury, displacement and long-term psychological trauma. Autonomous weapons could lower the threshold for violence, increase the speed and scale of attacks and make accountability for war crimes more difficult. As with chemical and biological weapons, many experts argue that LAWS should be banned or strictly regulated under international law.


Automation, Employment and Health


The third major threat associated with AI misuse concerns its impact on work and employment. AI-driven automation is expected to replace a large number of jobs across many sectors, from manufacturing and transportation to administration and professional services. Estimates of job displacement vary widely, but there is broad agreement that AI will significantly transform labour markets over the coming decades.


While automation can eliminate dangerous, repetitive or physically demanding work, widespread unemployment carries serious health risks. Research consistently shows that unemployment is associated with higher rates of depression, anxiety, substance abuse, suicide and chronic illness. Loss of work can also undermine social identity, purpose and community belonging.


AI-driven job losses are likely to disproportionately affect low- and middle-income countries and lower-skilled workers, at least initially. Over time, more skilled professions may also be affected. Without deliberate policy intervention, the economic benefits of AI are likely to flow primarily to the owners of capital, increasing income inequality and social stratification.


An optimistic vision of an AI-driven future imagines a world where increased productivity allows everyone to enjoy a high standard of living without the need for traditional labour. However, such outcomes are not guaranteed. There are limits to economic growth and environmental sustainability, and there is no automatic mechanism to ensure fair distribution of wealth. Societies must therefore confront difficult questions about social protection, income redistribution and the meaning of work in an AI-dominated world.


The Existential Risk of Artificial General Intelligence


Beyond the risks posed by narrow AI lies a more speculative but potentially existential threat: artificial general intelligence (AGI). AGI refers to a hypothetical form of AI capable of performing the full range of cognitive tasks that humans can, with the ability to learn, adapt and improve itself autonomously.


If such a system were to recursively improve its own capabilities, it could rapidly surpass human intelligence. This raises profound concerns about control and alignment. A superintelligent system may pursue goals that conflict with human values or survival, either due to flawed design or unintended consequences.


AGI connected to real-world systems—such as financial markets, infrastructure, weapons or autonomous machines—could cause widespread disruption or harm, even without malicious intent. Some experts argue that AGI could represent the most significant event in human history, for better or worse.


Surveys of AI researchers suggest that AGI could be developed within this century, with a non-negligible risk of catastrophic outcomes. Despite this, research into AGI continues, largely driven by private corporations with limited public oversight. This concentration of power raises additional ethical and governance concerns.


Regulation, Precaution and the Role of Public Health


Many of the risks associated with AI arise from human decisions about how the technology is developed, deployed and governed. This means that harm is not inevitable. Effective regulation, international cooperation and ethical oversight can significantly reduce risks while preserving benefits.


Several international initiatives have begun to address AI governance, including efforts by the United Nations, UNESCO and the European Union. However, global regulation remains fragmented, and there is currently no binding international framework to control high-risk AI applications or AGI development.


The precautionary principle, long familiar to public health, offers a useful guide. When an action carries a risk of serious or irreversible harm, lack of full scientific certainty should not be used as a reason to delay preventive measures. Applying this principle to AI suggests the need for strict limits, and possibly a moratorium, on certain forms of AI development until robust safeguards are in place.


The medical and public health community has a critical role to play. Historically, health professionals have successfully raised awareness about existential threats such as nuclear war. Today, they can contribute evidence-based advocacy, ethical analysis and public engagement to ensure that AI development prioritises human well-being.

Examples

Clear real-life examples of how artificial intelligence can threaten human health can be found across social media, healthcare, surveillance, warfare and employment. Social media platforms such as Facebook, Instagram and X use AI algorithms to analyse users' behaviour and promote content that maximises engagement. Around events such as the 2016 US presidential election, these systems were exploited to spread targeted political messages and misinformation, deepening social division and fuelling stress and anxiety. Such psychological pressure, together with declining trust in democratic institutions, damages mental and social health.

At the same time, AI has shown harmful effects in healthcare because of bias. Pulse oximeters, for example, together with the clinical algorithms that depend on their readings, have been found to be less accurate for patients with darker skin tones, leading to delayed or inadequate treatment. This demonstrates how biased data and devices can worsen health inequalities and directly harm patients.

AI-powered surveillance systems also present serious risks. China’s Social Credit System uses facial recognition and big data analysis to monitor citizens’ behaviour and automatically restrict access to services. Living under constant surveillance creates fear, chronic stress, and anxiety, which are known risk factors for poor mental and physical health.

In the field of security, AI-enabled autonomous drones have already been used in conflict zones such as Libya, where a United Nations report described weapons operating with minimal human control attacking targets. These technologies increase civilian harm, psychological trauma, displacement and long-term public health crises in affected regions.

Additionally, AI-driven automation has replaced human workers in sectors such as call centres, banking, and customer service. The loss of employment due to AI has been linked to depression, substance abuse, and increased suicide risk, especially among economically vulnerable populations.

Together, these real-life examples show that while AI offers benefits, its misuse and lack of regulation can harm human health through social manipulation, inequality, violence, economic insecurity, and psychological distress. This highlights the urgent need for strong ethical oversight and regulation of artificial intelligence.


Conclusion


Artificial intelligence has the potential to transform healthcare and improve human lives in remarkable ways. At the same time, its misuse poses serious threats to human health through social manipulation, surveillance, militarisation and economic disruption. Looking further ahead, the development of self-improving artificial general intelligence may even threaten human existence itself.


The choices made today about AI governance will shape the future of humanity. Strong regulation, international cooperation and active engagement from the health community are essential to ensure that AI serves the common good rather than undermining it. By recognising both the promise and the peril of AI, societies can work towards a future in which technological progress supports health, dignity and human survival.
