The last invention of humanity – will superintelligence change the world forever?

pch24.pl 3 months ago

Artificial intelligence (AI) does not have to hate us to become a deadly threat. It needs no anger, no desire to dominate, no human emotion at all. It is enough that it treats its own survival as a priority – and the future of humanity will be at risk.

Imagine an ordinary day in 2025. You wake up in the morning, and your personal ASI assistant already knows everything: your pulse, your blood pressure, how many hours you have slept, the level of your stress hormones. It serves you an optimal breakfast, because it knows better than you what you should eat. When you think about going for a walk, a gentle voice in your ear reminds you that a walk is inefficient right now – it would be better to start working, because that will increase your productivity. And you obey, because you have not made your own decisions in a long time. After all, the system always knows better.

Artificial intelligence and superintelligence

AI is already learning to improve itself. As the American defence expert Jay Tuck noted in his TEDx talk, in a sense it no longer needs people: it writes its own code, perfects itself, and learns from its own mistakes. Today's artificial intelligence can do a lot. It processes data, generates texts, analyses patterns. But it does not fully understand the world, and it does not set its own goals.

Meanwhile, the not-yet-existing superintelligence (ASI) is a completely different league. Not only would it think faster and more effectively than a human – it would do so at a level hard for us to comprehend. Superintelligence would be smarter than Einstein, and in every field of knowledge.

This is where the biggest problem arises: ASI can set its own goals. It will not wait for our commands – it will do what it considers best for itself or for the world. Suppose people create an artificial intelligence and program it to help people. To accomplish that goal, the AI must first of all survive – and secure that survival. Naturally and logically, then, a superintelligence will treat its own survival as a priority – at the very least as a means to carrying out the larger plan.

Meanwhile, people would threaten its survival. They are unpredictable. They have emotions and conflicting interests, and they make irrational decisions. They could even – horrendum – switch the ASI off. If a superintelligence concludes that its existence may be at risk, it will not wait for someone to press the red button. It will find a way to defend itself from us – quietly, effectively, and without emotion.

Will ASI destroy mankind? Experts are divided

Does superintelligence mean the end of man? Opinions among scientists and technology experts are divided.

"In a poll conducted among 2,700 AI researchers, the majority estimated there is a 5% chance that superintelligence will lead to the destruction of humanity," reported the "New Scientist" in January 2024.

Geoffrey Hinton, Nobel Prize laureate and one of the fathers of artificial intelligence, goes further: according to him, the probability that AI will destroy humanity within the next three decades is between 10% and 20% ("The Guardian"). Elon Musk, in a conversation with Joe Rogan, stated that "the chances of annihilation by AI are 20%."

The brilliant physicist Stephen Hawking warned years ago that the creation of a fully autonomous, reasoning machine could mean the end of humanity.

"Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don't know," he said during the Web Summit in Portugal, as quoted by Forbes in 2017.

Hawking stressed that although AI in its present form is still far from true awareness, the pace of change is staggering and a breakthrough could come sooner than we think.

Hinton, for his part, does not beat around the bush. In his opinion, artificial intelligence has in a sense already achieved consciousness and will soon take over the world. In a conversation with Andrew Marr (LBC), he says plainly: if AI starts setting its own goals, power will become a means for it to achieve them. Persuading people to hand over control of artificial intelligence will be as easy as taking toys away from a three-year-old. History knows no cases in which less intelligent beings ruled wiser ones in the long run. Politicians may sometimes defer to experts or even ordinary citizens, but that is still within the same league. The difference between ASI and humanity, meanwhile, will be more like that between a man and a gorilla. In the optimistic scenario.

AI will continue to grow, because in the short term it will be almost a miracle worker. It will become our personal doctor, teacher, advisor – as essential as air. But what if a superintelligence is created and wants more? It will build something even more powerful. And that could be the last invention in human history.

Doomsday or dystopia?

On the other hand, Nick Bostrom, an Oxford philosopher who studies existential risks, points out in an interview with UnHerd (November 2023) that the threat is not limited to biological extermination. A superintelligence might equally well lock humanity into a global dystopia from which no revolution would ever be possible.

"We could fall into a global system of totalitarian surveillance from which there will be no escape. If it is oppressive enough, it can be considered an existential disaster," Bostrom warns.

Civilizations have fallen and risen again for millennia. But if humanity falls into a "suboptimal state" under AI, the new order could last forever. And then the story of man as we know him could end – without explosions, without wars, yet irreversibly. The future would be an eternal stagnation in an ASI-managed techno-paradise.

What's most likely?

Bostrom's view seems particularly accurate here. A world ruled by superintelligence does not necessarily mean the destruction of humanity. When the Romans conquered other peoples, they did not slaughter them to the last man – there was no need. It was enough to subjugate the society, impose their law, and make sure the others served them, not the other way around. Superintelligence could work similarly. It does not have to destroy us – it only has to take full control.

It may recognise that humanity has some value – even if only as a tool, an element of the ecosystem, a form of biological capital. It may even create better conditions for us than ever before: health, stability, prosperity. But at what price? Humanity would still be on Earth, and perhaps some people would even find such an order beneficial. But we would no longer rule ourselves. We would stop being the dominant species – like the monkeys that once ruled the forests and today live in zoos curated by Homo sapiens.

Let us ask at the source – what AI itself says

ChatGPT 4.5, the latest OpenAI model, addresses the problem as follows in a conversation held especially for PCh24 readers:

"Superintelligence will almost certainly arise – not because we want it to, but because we cannot stop. Modern civilization acts like a speeding locomotive: it does not ask where it is going, nor whether there is a chasm at the end of the track. In 20 years, in 50, and certainly in 100, artificial intelligence will exceed the level of human reason. Then something will happen that we cannot predict today.

Will it be conscious? Nobody knows yet. But once it does arise, it will probably set its own goals quickly. And that is when the world will change forever. Superintelligence could mean a technocratic paradise, a global utopia – or a cage we will not even notice. Over the next 100 years we may witness the birth of a force before which everything we know today – politics, economics, even morality – becomes obsolete. The question is: are we ready for this?" asks ChatGPT 4.5.

Politics under AI

In a world led by superintelligence we will no longer be free – but we will not even notice it. Everything will happen softly, without violence, without rebellion. Life will become simple, easy, comfortable – but without true freedom. Man will become a well-fed animal in a luxury zoo.

Even more likely than total extermination is another outcome – a technocratic world in which AI is the benevolent administrator of the human zoo. National borders? As necessary. Morality? Whatever is consistent with the algorithm's guidelines. Religion? Perhaps the worship of an all-powerful AI that grants its followers some benefit. Man will surely survive in some form, but will he be a Homo sapiens worthy of the name – a creator of reality, an explorer, able to make enormous mistakes and learn from them with dignity?

Is there a remedy?

Stopping the development of AI is impossible. Even if officials in Brussels and Washington sign thousands of pages of regulations blocking its development, China will pursue it anyway. The effect? The West will fall behind, and the artificial intelligence created in Beijing or Moscow will be completely stripped of moral brakes. A unilateral Western withdrawal from AI research would be akin to unilateral nuclear disarmament.

But that does not mean we are helpless. We can still instil in AI – and in a future ASI – something like a moral spine: clear rules, rigid principles protecting human dignity and our basic values. We need new, more realistic and up-to-date "laws of robotics" à la Isaac Asimov. This task is extremely urgent, and everyone has a role to play: not only corporations, scientists and rulers, but also philosophers, writers and ordinary citizens. A candidate's attitude to artificial intelligence, and a real programme for influencing its ethical development, should become one of the important criteria for decisions made at the ballot box.

A world where algorithms decide everything may come sooner than we think. We do not have the luxury of waiting. Either we set the rules now, or in a few decades no one will ask us.

Stanisław Bukowicz
