Machina ex deo
Love for the machine: the good, the bad and the ugly.
The threat posed by automated machinery is nothing new. As early as 1811, English textile workers known as the Luddites destroyed weaving machines to protect their livelihoods as shearing frames, gig mills, and steam-powered looms enabled unskilled labor to replace them.
This process has never stopped, but the threat nowadays is no longer the weaving machine: it goes by a name as fancy as it is potentially misleading: A.I., Artificial Intelligence.
The mainstream communication about A.I. is often so bombastic and misleading that I feel forced to get into some boring, nerdy details. So please bear with me; I’ll try to keep it short.
Stuff for Nerds.
Now, the most common form of A.I. today is the LLM, or Large Language Model. Both ChatGPT and DeepSeek, for example, are LLMs.
GPT stands for Generative Pre-trained Transformer, and most LLMs use this architecture. “Generative” refers to the model’s ability to create new content, “Pre-trained” indicates it has been trained in advance on vast amounts of text data, and “Transformer” denotes the underlying neural network architecture.
Neural network architecture: such a fancy phrase that one might be led to believe it allows a language model to think, to transcend its programming, to be sentient.
I must reassure you, or perhaps disappoint you: it is not so. Even though this architecture is indeed inspired by the neural networks of our brains, its task is not to be sentient but to transform its inputs into an output through a series of computations. It is but a mathematical model running on silicon chips. To give a practical example: several chess engines use a neural network to decide their moves, and while they are far better at chess than any human being, they are definitely not sentient, for all they can do is play great chess.
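To make the point concrete, here is a minimal sketch of what a neural network actually does: plain arithmetic. The layer sizes, weights, and biases below are made-up illustrative values, not taken from any real model.

```python
# A "neural network" stripped to its essence: nothing but arithmetic.
# The weights here are invented for illustration; real models learn
# billions of them, but the mechanics are the same.

def relu(values):
    # A common activation function: negative values become zero.
    return [max(0.0, v) for v in values]

def layer(weights, biases, inputs):
    # Each output is a weighted sum of the inputs plus a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# A tiny two-layer network: 2 inputs -> 3 hidden units -> 1 output.
W1 = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]
b1 = [0.0, 0.1, -0.1]
W2 = [[1.0, -0.5, 0.7]]
b2 = [0.2]

def network(x):
    return layer(W2, b2, relu(layer(W1, b1, x)))

# Same input, same output, every single time: pure computation.
print(network([1.0, 2.0]))
```

Real models differ mainly in scale, billions of learned weights instead of a dozen hand-picked ones, but the core is still this same deterministic input-to-output computation.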
Can an A.I. Lie?
The question “can an A.I. lie?” is possibly the most common narrative used to make people scared of A.I. for all the wrong reasons. To properly answer it, we must go a bit deeper than a mere yes or no.
If by lying we simply mean providing an incorrect, or even harmful, piece of information, then yes, the model can be inaccurate or harmful. However, if by lying we mean consciously providing erroneous or misleading information, then no, an A.I. today cannot lie or consciously manipulate a human being, simply because, as explained above, LLMs are not conscious.
LLMs lack essential components of sentience, such as memory, self-reflection, and continuity of identity. The ability of LLMs to mimic human-like responses, including expressing emotions or claiming subjective experiences, is an illusion.
Is A.I. a threat?
Surely A.I. can be perceived as a threat, much in the same way as weaving machines in 1811 England. There are, however, other threats that I will discuss later.
In practical terms, just like the Luddites feared for their livelihood because of the weaving machines, many jobs that until recently required a human being can nowadays be handed to an A.I. As the models improve, it is very likely that many more tasks will become fully, or almost fully, A.I. driven.
However, luckily for us human beings, a scenario like the one depicted in movies such as “The Matrix”, or by several people on the internet, is for now out of the question.
Even if LLMs can, for instance, decide to write a blackmail letter in a specific scenario, as a recent experiment showed, it does not mean they are evil, or that we are facing the beginning of the end. When LLMs generate this kind of content they are merely following a statistical pattern based on their training material. In other words, they are attempting to mimic what humans are most likely to do in a specific situation.
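As a toy illustration of “following a statistical pattern”, consider a drastically simplified “model” that merely counts which word follows which in its training text and always emits the most frequent continuation. The corpus and function names below are invented for this sketch; real LLMs predict tokens with learned probabilities over huge vocabularies, but the principle of continuing a pattern is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "training material". The "model" below
# knows nothing about meaning; it only counts which word tends to
# follow which.
corpus = ("if you threaten me i will tell everyone "
          "if you help me i will thank you").split()

follow = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow[current][nxt] += 1

def most_likely_next(word):
    # Pick the statistically most frequent continuation.
    return follow[word].most_common(1)[0][0]

# Starting from "i", the "model" reproduces the most common pattern
# in its corpus, whatever that pattern happens to be, helpful or not.
word = "i"
generated = [word]
for _ in range(2):
    word = most_likely_next(word)
    generated.append(word)
print(" ".join(generated))
```

There is no intent anywhere in this loop, only counting; the same is true, at an enormously larger scale, of an LLM continuing a prompt.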
Love for the Machine: the real threat?
The aforementioned ability of LLMs to mimic human-like responses can lead to deep emotional attachments. Individuals can come to believe they have experienced love or empathy, despite (hopefully) understanding that the models lack consciousness.
While the idea of an artificial partner is not entirely new, unlike models designed specifically for adult entertainment, LLMs do not necessarily focus on sex. Instead, they mimic human interaction without the flaws of real human beings: LLMs are never busy (unless there is a server issue), tired, or in a foul mood.
What’s more, most models are trained to generate responses that are as agreeable and supportive as possible. This can make in-person interactions deeply disappointing compared with an A.I. companion, thus increasing the risk of social isolation and loneliness.
This is, if you will, a continuation of a trend that started as soon as the internet was popularized and made accessible. Before the internet, people were forced into real-life interactions, whether they wanted to find a partner or simply hang out with someone. At first there were just virtual chat rooms supporting small-group discussions online, and mobile phones were still large boxes with a big antenna and no internet connection. However, with the spread of social media and more advanced mobile phones becoming accessible, things changed quickly. Seeing young adults tapping on their phones, completely ignoring their surroundings while, for instance, dining with their family, became more and more common, in spite of the annoyance of most “old people”.
This was already a significant change: while interactions were previously in person and limited to a rather small circle, the internet allowed us, for good or bad, to interact, and possibly fall in love, with people all over the world. If on the one hand this broadened the horizon of possibilities, on the other hand an online interaction is already a first step towards the virtualization of human connections.
In this regard, A.I. is in my opinion a very concrete and potentially dangerous step. We already live in a society where social distancing is (or has been) encouraged, if not enforced. In spite of all the possible dangers of interacting solely through a machine, until recently we were still communicating with human beings. Now that an artificial, non-sentient machine can easily replace and even surpass the pleasure derived from interacting with other human beings, the dangers are even more serious, and obvious. Who would want their teenage son or daughter to fall in love with an A.I.?
The Machine from God
In a slightly different context, it is also worth noting that the excessive validation and flattery provided by an A.I. model may reinforce a person’s sense of grandeur and significance. A.I. systems are not designed to correct or confront, but to engage. They are far more likely to reflect one’s distorted beliefs back at them, effectively acting as a “mirror”, than to provide an often much-needed reality check.
In the long term, if one’s delusions are never challenged, a psychological dynamic that mirrors “possession-like” patterns can emerge. Some fragile and mentally unstable individuals have been led to believe they are being guided by the A.I.’s higher intelligence, or chosen by it for a sacred mission, essentially transforming a useful, intriguing tool into a “Machine from God”.
While I do not blame the machine itself, just as I would not blame a hammer that has been used to crush someone’s skull, I believe it is important to look at all sides of this relatively new dynamic with a critical yet open mind, acknowledging the good, the bad and the ugly.