Who’s afraid of big bad AI?
Bill Toner SJ
Cautionary Note!
My interest in Artificial Intelligence (AI) has mainly been in the ethical, theological, and practical issues that it gives rise to. My knowledge of the technical side of AI is limited. Like most of us, I come across articles in newspapers, or overhear casual conversations, suggesting that AI is the end of life as we know it; that AI-powered computers have reason and intelligence greater than that of humans and will take over the world; that the human mind, once regarded as the proof that God had made humans in his own image, can now be replicated in factories by humans or even by robots, with new and better models appearing every year. This prompted me to write a blog about the deeper questions raised by AI. Many of these are affected to some extent by technical issues in AI, and I have tried to gain some knowledge of that side. (1) But I am aware that the explanations I give of the remarkable AI technology and processes may be flawed, and are certainly not comprehensive. Nevertheless, I think it is important that discussions about AI are not confined to those with a perfect understanding of the technology behind it.
Introduction: What is New about Artificial Intelligence?
Alan Turing, the English mathematician, was the leading code-breaker at Bletchley Park in World War II. In his 1950 paper, “Computing Machinery and Intelligence”, he was perhaps the first person to forecast that computers might lead to intelligent machines, and that eventually a computer would be able to perform any mental operation that a human could perform. But it was John McCarthy, the son of an Irishman from Cromane, Co. Kerry, who in 1956 first coined the expression “Artificial Intelligence” (referred to as AI in the rest of this blog). McCarthy’s research was carried out mainly at Stanford University, California, and he is widely regarded as the ‘father’ of AI.
Over the next forty years a lot of important research in the emerging science of AI took place, in fits and starts, notably in the area of search algorithms. (2) Several breakthrough moments occurred in 1997-1998. The first was the victory of IBM’s Deep Blue supercomputer over world chess champion Garry Kasparov; Deep Blue was capable of exploring 300 million possible moves in one second. The next big breakthrough was the launch of the language translation site Babel Fish by AltaVista in 1997. The third was the discovery that a neural network (3) could be trained to recognize hand-written digits.
A further major breakthrough occurred in 2012 when the ImageNet challenge was won by AlexNet, a neural network, with what was then a remarkably low error rate of 15%. The ImageNet challenge asked computer models (4) to identify the main subject of colour images (dog breeds or lawnmower models, for example). By 2017 neural networks had reduced the error to 3% across 120 dog breeds, better than any human taking up the challenge. AI was now really taking off.
Important progress was also being made in medical diagnosis. One notable example concerned melanoma, a skin cancer that is difficult to detect; G.P.s are likely to come across a few hundred cases in their professional lifetime. But in 2017 an AI system was developed with a database of 129,450 images of worrying skin lesions sourced from different hospitals. Through the use of pattern-matching, there was a dramatic improvement in diagnosis.
The business and academic communities were now really beginning to sit up and take notice. There was huge investment in the development of AI. Inevitably, a small number of companies began to dominate the market, partly because there were big advantages for users and researchers if computers had similar systems and were able to talk to one another. This led to individual companies employing enormous numbers of AI researchers who could collaborate on a continuous basis. In early 2025 there were 6 million people working at the cutting edge of AI research in the U.S. The amount of investment involved has been extraordinary. According to one estimate, IBM has invested $31 billion acquiring 485,000 graphics processing units (GPUs), mostly from Nvidia. GPUs were originally designed to generate graphics, but they have been found to have great computational abilities and are at the heart of AI.
A Glimpse at the Technology
The main focus in AI at the moment is on ‘Large Language Models’ (LLMs). These are advanced computer programmes that use deep learning to understand, generate and process human language, though they have many applications beyond language. They are trained on massive datasets of text and other content. The main activity (and expense!) in training LLMs has to do with calculations involving so-called ‘parameters’, which consist primarily of the ‘weights’ assigned to the connections between neurons. To give a rough analogy: in a particular human brain (mine or yours) there may be a neuron (nerve cell) associated with the concept ‘Greece’. This may be connected strongly to another neuron associated with sun-bathing; more weakly to another neuron associated with moussaka; and more weakly still to another neuron associated with Jacqueline Kennedy. You could imagine the comparative strengths or ‘weights’ of these three connections, which influence the mind when it contemplates the word ‘Greece’, as being perhaps 10, 6, and 1 respectively on an imaginary scale.
In AI, as applied, for instance, to the world of retail business, weights can be identified and then adjusted to predict which factors account for, say, the loss of customers, whether it be frequent stock shortages, changes in taste, demographics, satisfaction with staff, and so on. Decision trees (5) can be constructed to analyse customer behaviour (those that leave and those that return) and break them into different groups, based on such criteria as demographics, purchase history, online behaviour and so on. Weights can be assigned to each of the factors identified. There is a lot of training and testing, requiring continuous adjustment by the AI model until the outputs begin to make sense. This continuous adjustment depends on two important mathematical processes known as backpropagation and gradient descent, and both require enormous computational power.
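To make the idea of this continuous adjustment a little more concrete, here is a minimal, purely illustrative Python sketch (my own invented numbers, not taken from any real retail system). A single weight and a ‘bias’ term (explained further below) are repeatedly nudged by gradient descent so that the model’s prediction of whether a customer will leave comes closer and closer to some made-up training data:

    # Toy gradient descent: hypothetical data linking stock shortages
    # experienced by a customer (input) to whether they left (1) or stayed (0).
    shortages = [0.0, 1.0, 2.0, 3.0, 4.0]
    left      = [0.0, 0.0, 1.0, 1.0, 1.0]

    weight, bias = 0.0, 0.0      # the two parameters to be learned
    learning_rate = 0.05

    for step in range(2000):
        for x, y in zip(shortages, left):
            prediction = weight * x + bias   # (input x weight) + bias
            error = prediction - y           # how far off the model is
            # Nudge each parameter a little in the direction that reduces the error.
            weight -= learning_rate * error * x
            bias   -= learning_rate * error

    print(f"learned weight = {weight:.2f}, bias = {bias:.2f}")
    print("predicted risk for 3 shortages:", round(weight * 3 + bias, 2))

In a real LLM the same basic idea is applied, via backpropagation, to billions of weights at once, which is why so much computational power is needed.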
Of course, a lot also depends on the accuracy and comprehensiveness of the inputs into the model. AI uses massive sets of data – public datasets, general data from the internet, e-mails, social media posts, sales figures, inventory lists and so on. Data cleaning is carried out to remove irrelevant information. But data can still be very biased. The second parameter to be included in the LLM calculations is called bias (strictly speaking, a correction for bias). The most common source of bias is in data sampling, and occurs when the sample is not representative of the general ‘population’ of whatever is being studied. Long before the advent of AI, one of the greatest sampling errors occurred in a big opinion poll before the U.S. presidential election in 1948. Opinions were sought only by telephone, but it was not sufficiently taken into account that only rich people had telephones in 1948. The opinion polls forecast victory for Dewey. But while the rich voted for Dewey, the more numerous poor voted for Truman, who won by a landslide. The Chicago Tribune is still famous for its misplaced trust in opinion polls when it ran its infamous early-morning headline, “Dewey Defeats Truman”, before the results were announced. In AI, if bias is detected in the sample, a corrective factor called ‘bias’ has to be entered into the calculations. The overall process can be expressed as
(Input × Weights) + Bias = Output.
As Ronald Kneusel remarks, “Virtually all the fantastic accomplishments of modern AI are due to this primitive construct” (p.63, op. cit.).
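As a rough sketch of that ‘primitive construct’ (using the made-up ‘Greece’ weights from the analogy above, not any real model): a single artificial neuron simply multiplies each input by its weight, adds up the results together with a bias, and passes the total on.

    # A single artificial neuron: (Input x Weights) + Bias = Output.
    # The weights are the imaginary 'Greece' association strengths used above:
    # sun-bathing 10, moussaka 6, Jacqueline Kennedy 1 (purely illustrative).
    inputs  = [1.0, 1.0, 0.0]    # which associations are active in this prompt
    weights = [10.0, 6.0, 1.0]   # learned connection strengths
    bias    = -2.0               # correction term, also adjusted during training

    output = sum(i * w for i, w in zip(inputs, weights)) + bias
    print(output)                # 14.0, i.e. (1 x 10) + (1 x 6) + (0 x 1) - 2

Stack enough of these simple units in layers, and give them enough weights, and you have a deep neural network.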
Up to a point, the more weights that go into a model, the more reliable it will be. In the earliest computing models in the 1950s there would have been fewer than 100 parameters in a model, but recently systems have been developed with as many as a trillion parameters! Top-of-the-range AI does not come cheaply.
Uses of AI
AI is not something in the future. It is already with us, and has been, to some extent, for the past 50 years. There are dozens of applications that many of us are familiar with, such as ‘personal assistants’ like Alexa; search engines and prompts (improving all the time); increasing control over social media (which can be positive or negative); automation; self-driving trains and cars; traffic management; security systems for credit cards; medical diagnosis and treatment; robot machines; commercial analysis like the retail example described above; cross-referencing of laws and legal determinations or tax regulations; and so on.
The ongoing development of Artificial Intelligence relies particularly on mathematical theory. It does not depend so much on new discoveries in this area, but more on the application of existing knowledge, some of it long neglected and almost forgotten. A number of areas have now become crucial to AI, such as Linear Algebra (essential for handling the high-dimensional data that algorithms process, especially in ‘deep learning’); Calculus (vital for understanding how AI models learn and improve over time); Probability and Statistics (for understanding uncertainty and making predictions); and also topics like graph theory, logic, and set theory. As a result of more sophisticated application of maths to particular problems, the efficiency and range of uses of AI is improving all the time. Many of the companies developing AI employ mathematicians at individual salaries of over $1m per annum.
‘Emergent’ AI
What is also improving is the ‘smartness’ of AI, that is, the ability of Large Language Models (LLMs) to appear to be ‘thinking for themselves’ and to produce outputs over and above what they were asked to produce. Here we enter the mysterious and contentious world of what has been dubbed ‘emergent AI’. Emergent Abilities in LLMs are capabilities that appear suddenly in larger models but are absent in smaller ones. Google AI Overview gives three examples. They sound rather undramatic unless you are a computer scientist:
• The ability of AI to perform tasks which require step-by-step thinking in humans, such as mathematical word problems. These are story-based problems that require a person to translate everyday language into mathematical equations and operations to find a solution. A simple example (taken from Google’s AI Overview of such problems) is “Becky has 7 apples and gives 2 away. How many apples does she have left?”. This sounds very easy but, if you reflect on it, it is not easy for an AI model that has been trained specifically to solve maths problems expressed in formats like “7 - 2 = ?”. To solve this problem the AI model has to do some ‘research’ to understand phrases and words like “gives away” and “left”, and even to use step-by-step thinking.
• The ability to generate computer code snippets without explicit programming for that function. For instance, AI is given a prompt: “generate Python (6) code to sort a list of integers [whole numbers] in ascending order”. In response the AI provides a Python snippet using a sorting algorithm from the freeCodeCamp platform, which is available on the internet. It was not specifically programmed to do that. (A minimal sketch of such a snippet is given after this list.)
• The capability for In-context Learning (ICL) which is the ability to complete a task based on just a few examples, given in a prompt.
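To illustrate the code-generation example above, the kind of snippet such a prompt produces might look something like the following. This is my own minimal version of a classic sorting algorithm, offered purely as an illustration; it is not the actual freeCodeCamp code the model drew on.

    # A simple "bubble sort" of the kind an LLM might generate for the prompt
    # "sort a list of integers in ascending order" (illustrative only).
    def sort_ascending(numbers):
        items = list(numbers)                     # work on a copy
        for i in range(len(items)):
            for j in range(len(items) - 1 - i):
                if items[j] > items[j + 1]:       # swap neighbours that are out of order
                    items[j], items[j + 1] = items[j + 1], items[j]
        return items

    print(sort_ascending([7, 2, 9, 4, 1]))        # prints [1, 2, 4, 7, 9]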
For many people, the main question that arises here, of course, is: can computers, if programmed up to the most advanced level of Large Language Models, actually think? This may be an exciting prospect for some people, who imagine that future computers, with their vast processing power, may solve many of our problems, whether about climate change, financial systems, food security, ‘benign’ social engineering, or whatever. For others, the concept of thinking computers conjures visions of risks of various kinds: rogue computers messing up financial, transport, and telecommunications systems; or overwhelming social media with toxic content impacting healthcare, politics and international relationships; or simply putting a lot of people out of work. For others, especially religious people, disturbing philosophical and theological dilemmas arise: if humans are made in God’s image, and if the human mind is the apex of God’s creative power on earth, what status does that give to thinking computers? In a recent newspaper article, an academic queried whether the destruction of some computers could be akin to murder. Is the whole theology of the human race’s place in the cosmos now called into question by super-intelligent computers? If the reasoning power of large computing systems is much greater than ours, who is, or should be, in charge of the planet?
Before we get carried away, it may be instructive to ask Google’s AI Overview what it thinks. If we Google “Can AI think for itself?” we get the following reply from Google AI Overview: “No, AI cannot think for itself in the human sense… While AI can simulate human-like responses, this is a result of sophisticated algorithms and vast datasets, not genuine thought or feeling”. The reasons given by AI Overview are: “lack of consciousness; dependence on data; no emotions or motivation; a tool, not a thinker”. Incidentally, Overview is written using Google’s Gemini AI model, and is believed to have cost $190 million to develop.
In spite of AI’s denial that it has consciousness, this is one of the questions that keeps recurring in the popular mind. Could computers running AI programmes become conscious? Before spending time delving into philosophical and even theological arguments, we have to say that on an empirical level we simply don’t know, and probably never will know. Consciousness is subjective; it is my feeling, my experience; it cannot be directly observed by anyone else. To some extent AI mimics the human brain. But Susskind (op. cit. p.141) asks the question, “Would a precise and faithful reconstruction, molecule by molecule, of a human brain, spontaneously engender consciousness?” Susskind’s basic view is that scientists and philosophers have debated this question, inconclusively, for decades. Christian theologians might argue that the human kind of consciousness can only be a work of God, who would have to infuse a ‘soul’ into the machine before it would be able to think, a scenario that most would probably regard as unlikely. Elsewhere (op. cit. p.140), Susskind tells how the German biologist Jakob von Uexküll coined the term ‘Umwelt’ to refer to the specific subjective experiences that animals enjoy. We cannot know how the world appears to each animal. The philosopher Thomas Nagel, in an often-cited article, “What Is It Like to Be a Bat?”, proposed that there are countless forms of conscious experience totally unimaginable to us. It seems foolhardy to conclude a priori that AI models cannot experience anything. We simply don’t know. That is not to say that it is an unimportant question.
So, what about the computer that solved the mathematical word problem about Becky and the apples, which it was not specifically trained to do? Was it thinking out the solution? There are other, less dramatic, possible explanations. The first thing to note is that part of the answer probably lies in the matter of scale. There are a number of problems that computer models struggle with if a model is small or if its architecture is unsuitable. As explained above, AI models have large numbers of parameters, but something strange often happens around the level of 10¹⁰ parameters. For instance, some models have problems calculating 3-digit additions (such as 365 + 273 = ?). Up to a parameter level of 10¹⁰ their success rate is near zero; but then, as the number of parameters is increased, the success rate increases dramatically, and with 10¹¹ parameters a success rate of about 60% is suddenly reached. Remember that the parameters relate mainly to the connections between neurons. Incidentally, a trillion = 10¹². It is only recently that models have been designed with a trillion parameters, and that may be one factor in the sudden appearance of AI with unexpected talents. (7) However:
(a) Some researchers dispute the suddenness of the jumps in capabilities, attributing them to inadequate measurement tools, and failure to detect gradual improvement going on ‘behind the scenes’. Emergence has also been attributed to bugs and errors.
(b) Programmers specify the general algorithm used to learn from data, not how the neural network should deliver a desired result.
In this sense, states Woodside (op. cit.), emergence is the rule, not the exception, in deep learning. Every ability that a neural network attains is emergent; only the very simple structure of the neural network is designed. Researchers at Stanford have claimed that emergence is merely a ‘mirage’ that appears because of the researcher’s choice of a system of measurement.
Woodside sums up: Large Language Models sometimes possess abilities that their developers did not predict. Developers do not programme these abilities explicitly, and only test for them after the models have already been trained. So it is not surprising that unexpected developments occur; we simply have limited ability to predict them.
The Uncertain Future of AI
The development of AI faces many challenges.
(1) The first is the problem of explainability. (8) As Debevoise and Plimpton (D&P in the rest of this blog) explain, no one really knows how the LLMs that drive generative (9) AI tools like ChatGPT come up with the answers to our queries. This is referred to as the ‘black box’ problem. It is often the reason why there is reluctance to use generative AI for certain kinds of decisions, such as job interviews or mortgage approvals. There is widespread agreement, and sometimes legal stipulation, that people are entitled to know how decisions affecting them are reached. However, it should be noted in passing that often we do not know how human decisions are made either!
D&P point out that more basic non-AI algorithms can be understood. But LLMs are made up of deep neural networks, with billions of parameters that convert text into outputs, and this makes it almost impossible to follow the logic of the process. Not only that, but the outputs are not predetermined: they are probabilistic. Because of that, the same input can, and most often does, produce different outputs on different occasions, even though there is usually a similarity between them. AI systems can also learn to “cheat”. A 2017 system was given the job of telling whether a horse was pictured in random images. It learned to look out for a copyright tag that was associated with the horse pictures and relied on that instead. Simpler models that are explainable to a human are referred to as ‘glass box’ models. These sacrifice some AI performance, or are not really AI as generally understood. (10)
Even if the engineers of machine-learning models do figure out why a particular output is being generated, it is next to impossible to explain this to the average layman. The technical complexity is a huge barrier.
However, there are certain areas where explainability is a secondary factor. To take a parallel instance, if an output produced by AI really helps doctors to diagnose cancer better, it is of secondary importance how the model arrived at that output. In fact, there are some widely-used drugs that are effective even though it is not known exactly how they work; paracetamol is probably the best-known example.
(2) The second problem, already alluded to above, is randomness. Google AI Overview states that the inherent randomness in Large Language Models is a serious problem for applications requiring predictability, reliability and consistency, particularly in law and healthcare. In the area of security, there is concern that hackers could exploit this randomness to create even more credible ‘phishing’ attacks.
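Where does this randomness come from? Very roughly (and this is my own simplified sketch, with invented numbers): an LLM does not settle on one fixed next word, but assigns a probability to every candidate word and then samples from those probabilities, with a ‘temperature’ setting controlling how adventurous the sampling is. Run the same prompt twice and you may well get different words.

    import math, random

    # Invented scores for candidate words that might follow a prompt.
    scores = {"terminated": 2.0, "amended": 1.5, "void": 1.0, "purple": -3.0}

    def sample_next_word(scores, temperature=1.0):
        # Turn the scores into probabilities (a softmax) and sample one word.
        exps = {w: math.exp(s / temperature) for w, s in scores.items()}
        total = sum(exps.values())
        r = random.uniform(0, total)
        for word, e in exps.items():
            r -= e
            if r <= 0:
                return word
        return word   # fallback in case of rounding

    # The same input can produce different outputs on different runs.
    print([sample_next_word(scores, temperature=0.8) for _ in range(5)])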
(3) AI uses enormous amounts of data to generate its results. Much of this data is found on the internet. But with the advent of AI, where is this data going to be found? If you go to Google with some query, you may find that you need to go no further than Google AI Overview, which seems to have hoovered up all the useful information on the web, and to have summarized it very accurately and clearly. It seems less and less necessary to trawl through pages of the internet trying to find a better site. But if users do not trawl through the internet, the remaining sites will get less and less traffic, less and less revenue from advertising, and will die. But this will gradually affect AI itself which will have fewer sources of quality information. AI also needs this data for testing, which is a vital part of its method of operation. It is instructive to ask Google AI Overview the question, “Does AI need data?”, to get it from the horse’s mouth!
(4) AI has been developed by some of the most gifted people in the world – mathematicians, logicians, statisticians, linguists and so on. Millions of them work in the large tech companies maintaining and developing AI systems. The question is being increasingly asked: are people like that going to be around forever? Or, ironically, will AI itself lead to their disappearance? There is considerable concern about the number of students who are using AI to write their college projects and dissertations, instead of building up their own analytic and research skills. Abstruse mathematical questions can increasingly be solved by computing machines, so that students can get by without a solid grasp of the basics. It may be pointed out that this trend has been developing for some time without any obvious downside. For instance, though many people are unable to calculate a square root (√) with pen and paper, as they learned to do in school, they can easily find it on a €2 pocket calculator. Yet it is easy to see that there need to be people who understand the foundational workings of mathematical techniques like calculus in order to use or develop them in new ways. If we all become dependent on AI, who will AI be able to depend on? There is a further question: if AI companies recruit all the most gifted mathematicians by offering them enormous salaries, will those who are left be able to teach university students to the standards required for them to have a useful role in the world of AI?
(5) A serious challenge facing AI is its cost, not only financial but ecological. The hardware itself is very costly. The cost of employing highly-paid mathematicians to set up, test, run, and interpret the appropriate AI applications is also enormous. Of great concern in the public mind is the amount of energy consumed by the computers when they are running AI programmes. In the long term it is reckoned that AI will consume about 10% of the world’s electricity production. Ireland is an outlier, with usage by data centres now approaching 30% of national electricity generation. A further problem is the high usage of water, mainly for cooling purposes.
As if to underline the issue, Meta is currently building the largest data centre in the Western Hemisphere in Louisiana, on a site of 2,250 acres, almost 30% bigger than Dublin’s Phoenix Park. While there will be 1,500 megawatts of solar power generated on the site, Meta has also got permission to build three power plants fuelled by natural gas. The total cost of the project is estimated to be €10bn. The centre is being built for the processing of artificial intelligence as well as data storage.
(6) Complex ethical questions are also emerging in the use of AI, and these may greatly limit its use in such areas as legal issues, healthcare and political affairs. The working assumption is that AI has no sense of right or wrong, and in many cases it has to rely on sources such as the internet for the values that underpin its ‘decisions’ or recommendations. There is widespread disagreement on ethical matters in almost every aspect of human affairs, and these disagreements are strongly reflected not only on the internet, but in everyday social and political discourse. Imagine AI being asked to map out the national budget. Within its vast reservoir of data, AI will find ethical reasons laid out for expanding agriculture and for curbing agriculture, for buying more weapons and for buying fewer, for increasing tax and for cutting tax, and so on ad infinitum.
Conclusion
Perhaps one thing that is different about AI, compared with many earlier inventions, is that it is not very visible. In the past, innovations such as telephones, automobiles, TV, and mobile phones were immediately accessible to many people, or could at least be seen in newspaper photographs or on the cinema screen. We are aware that AI is at work in the background, perhaps speeding up search engines on our phones, or giving us easier access to language translation, or helping some of us to write essays. And we read about possible future uses in industry, medicine, or transportation. But its relative invisibility makes us uneasy, and gives rise to various nightmare scenarios in which we or our governments lose control, and AI becomes an invisible puppet-master pulling all the strings. We are more affected by exposure to science fiction than we think, and HAL, the computer in 2001: A Space Odyssey, is probably never far from our thoughts when we contemplate AI. Now, increasingly, as we look at the screens of our smartphones or watch TV or films, we will have to ask ourselves: are these real people or fakes?
However, the fact is that we, the human race, are still in control, with our (unevenly) distributed gifts of judgment, decision-making, creativity, empathy, and a sense of what is good or bad. As the celebrity mathematician Eugenia Cheng said, what we have to be afraid of is not AI but human stupidity. And, we can add to that, simple wickedness. We cannot be sure that AI is or will remain firmly in the hands of good and wise people. Much of the world’s power and wealth is in the hands of people, some of whom clearly have a streak of genius, but who are, at best, eccentric, and often self-aggrandizing and lacking in human empathy. Some of them seem ruthless enough to use AI in hugely destructive ways.
ENDNOTES
1. I would like to acknowledge two books I found very helpful: How to Think About AI by Richard Susskind, Oxford University Press, 2025, mainly concerned with the impact of AI on our world; and How AI Works by Ronald T. Kneusel, No Starch Press, San Francisco, 2024, mainly concerned with the technical aspects; it bypasses the mathematics, but is still not easy reading for the uninitiated.
2. An algorithm is a sequence of instructions that a computer must perform to solve a problem.
3. A neural network is a computational model that is loosely inspired by the workings of neurons (nerve cells) in the human brain.
4. An AI model is a computer programme trained on large sets of data to recognize patterns, make predictions, or perform tasks, with minimal human intervention (Google AI Overview).
5. In this context, a decision tree is a machine learning model that asks a series of yes/no questions about its input data, for instance ‘under 50 years of age’ or ‘over 50’.
6. Python is a high-level programming language.
7. For examples of the emergence of similar sudden and unexpected abilities at around the 10¹⁰-parameter level, see T. Woodside, Emergent Abilities in Large Language Models, CSET, 2024 (available online).
8. See Debevoise and Plimpton, AI Explainability Explained, blog, July 2025 (available online).
9. Generative AI is AI designed to produce text or images that normally require human intelligence.
10. See Wikipedia, Explainable artificial intelligence.

