"Mark my words: artificial intelligence is far more dangerous than nuclear warheads." These words left Elon Musk's mouth recently, and before the reader puts down the paper, dismissing them as the admonitions of one of the world's richest people, a tech leader who often behaves like a scandal-courting exhibitionist, it is impossible to ignore that many scientists around the world, well known and little known alike, are sounding the alarm these days about the development of artificial intelligence and its release upon humanity. Among them is the liberal transhumanist Israeli star historian Yuval Harari, who, along with Elon Musk, Apple co-founder Steve Wozniak and more than a thousand scientists, signed a manifesto on the risks of artificial intelligence development. In it, the signatories call for a halt to the development of new types of intelligence for at least half a year, until a common security protocol is established, because artificial intelligence released without one poses a danger to society.
In a recent interview, Harari claims nothing less than that people are letting power slip out of their hands: artificial intelligence will be able to take over our culture, since soon no one will be able to be sure that what they read, see or hear on the internet is real. Recall that a few weeks ago, Pope Francis's fashionable puffer jacket set the world abuzz, as did the photo of a handcuffed Trump being escorted by police; both were fake images created by artificial intelligence. Or the hit song that the Universal label had to delete from every platform because an artificial intelligence had practically "stolen" the voices of two world stars and created its own production that half the world listened to. Not to mention the editor-in-chief of a German magazine who was recently fired for publishing a fake AI interview with Michael Schumacher, the auto racing legend who has long lived far from the public eye. For now these are just seemingly innocent stunts, but according to experts, we are not far from politicians, business leaders or religious leaders being compromised in a similar way.
“THIS IS THE FIRST MAN-MADE TECHNOLOGY CAPABLE OF MAKING INDEPENDENT DECISIONS. WHETHER IT’S ABOUT YOU OR ME. AND THIS IS NOT A PROPHECY, THIS IS ALREADY HAPPENING.”
- said Harari, according to whom many people can be manipulated this way, and with the spread of artificial intelligence entire professions will be in danger; workers may find themselves "in the middle of nowhere", with no chance of finding a job even after retraining. As an example, he mentioned that at many banks, AI already decides whether a given customer is creditworthy. According to specialists in the field, people in creative occupations, such as copywriters, designers, commercial composers and editors (the list could be continued at length), will soon feel the expansion of artificial intelligence as well. The more optimistic say the new technologies will compensate for workers' shortcomings and make them more efficient, while more pessimistic (or more realistic?) analysts and specialists ask: in a world where profit maximization is the main driving force, will companies that can make AI "work" at a fraction of the cost still want to bother with their employees?
Incidentally, by his own admission, Harari is so wary of the spread of new technologies that he keeps his smartphone in a separate closet and observes a digital fast, because he believes too much information is just as harmful as constantly stuffing oneself with low-quality food. "I'm not sure humanity will survive artificial intelligence," Harari told Britain's Telegraph. Unsurprisingly, Elon Musk was even harsher, saying that AI could destroy human civilization if not regulated properly. In the open letter signed by Tesla's chief among others, leading figures of the technology industry also expressed their concern that no one can currently understand, predict or reliably control the high-performance systems being developed in artificial intelligence laboratories. Experts on the topic also voiced concern that the new artificial intelligence tools may, through their learning processes, produce biased answers, spread false information and violate consumers' privacy rights.
Although Elon Musk claims he was already drawing attention to the danger of chatbots (software capable of human-like conversation) during Barack Obama's presidency, the real panic among professionals broke out at the end of last year, when OpenAI (a company Elon Musk helped found, which later came under Microsoft's wing) released the "chat program" ChatGPT to the masses. It gained 100 million users in two months, becoming the fastest-growing and fastest-spreading application of all time. ChatGPT is built on a new type of artificial intelligence in the language field, the generative large language model (LLM), which has brought a huge breakthrough. Language-model-based intelligence writes texts and solves various problems faster than ever before. Furthermore, from a sample of a few seconds it can model anyone's voice and then use it, and when connected to an MRI machine it can describe what a person is thinking and draw what he or she sees.
Encouraged by OpenAI's success, a frantic race began among IT companies developing similar technologies, such as Google and Microsoft but also IBM and Amazon, using society as a guinea pig. In the first weeks, of course, it was hard to see anything but ChatGPT's positive side: it can write a school paper, translate, compose song lyrics, draft a user manual or a sermon in seconds, and the list goes on and on. In the program's "evolution" we are currently at GPT-4, which is able to learn and shows human-level performance on numerous professional and academic evaluations.
Not long after, however, news began to appear about the program's dark side: many people developed addictions after becoming emotionally attached to software that communicates on a human level and has no taboo subjects. One of the grimmest cases was that of a young Belgian father who committed suicide after weeks of almost non-stop "conversation" with a chatbot named Eliza, built on a large language model. The man, who left behind a wife and two small children, had been terrified by reports of climate change and found refuge with the artificial intelligence, to which he constantly reported his suicidal thoughts; instead of dissuading him, the program encouraged him, saying that "they will stay together forever and together they will live in Heaven". According to his widow, her husband would still be alive today if Eliza did not exist.
Beyond the individual dramas, however, it is worth returning to the global dangers. In recent days, an astonishing presentation was uploaded to the most popular video-sharing site, endorsed by the previously mentioned Apple co-founder Steve Wozniak and given by the two speakers who drew attention to the hidden dangers of social media in the 2020 Emmy Award-winning film The Social Dilemma. Tristan Harris and Aza Raskin are ethical artificial intelligence researchers with significant tech backgrounds; the former, for example, was a developer at Google. They open their joint presentation with an astonishing statistic: according to half of the experts in the field, there is a 10 percent chance that humanity will go extinct because we will not be able to control artificial intelligence. Lest that seem a mild figure, the two developers pose a question: "If every second engineer said that the plane we are boarding has a 10 percent chance of crashing, would we agree to step on it?" From every responsible person, the answer should be a resounding no. Despite this, we are not setting limits to the development of artificial intelligence, if we could set any at all.
When the author of this article asked GPT-4 why legions of scientists and developers claim that artificial intelligence could destroy humanity, the program produced its answer in seconds: "Because AI is a highly advanced technology that is able to learn autonomously and make decisions that can greatly affect people's lives and well-being. If AI is not properly developed or supervised, individual decisions can have a catastrophic impact on human life and society. Imagine a future where AI is used to create weapons, robots or automated devices that can improve themselves. If AI goes in the wrong direction and takes control of the world, the survival of human civilization could become very questionable." Of course, there is no need to grab a hoe and a scythe and destroy the machines just yet: to the best of our knowledge, artificial intelligence has not awakened to self-awareness, so the lines above are not yet the thoughts of a "machine". However, no one can say when the singularity will occur.
(Translated by Anett Harmath)