Which is superior: knowledge or intelligence?

Artificial intelligence
The answerer

On irreconcilable differences between humans and machines - an essay.

By Peter Glaser

In 1997 the British cyberneticist Kevin Warwick opened his book “March of the Machines” with a gloomy vision of the future. Warwick believes that by the middle of the 21st century the world's population will be dominated by networked artificial intelligence (AI) and superior robots, which humans will at best be able to serve as agents who bring a little chaos into the system.

Will machines one day find it embarrassing to have been created by humans, just as humans were ashamed when they found out that they were descended from apes? In the 1980s the American AI pioneer Edward Feigenbaum imagined how the books in the libraries of tomorrow would communicate with one another and thereby independently increase their knowledge. His colleague Marvin Minsky commented: “Maybe they'll keep us as pets.” Minsky was a co-organizer of the 1956 conference at Dartmouth College in New Hampshire at which the term “artificial intelligence” appeared for the first time.

We still keep machines as pets. Will it be the other way around at some point? | Photo (detail): © picture alliance / dpa Themendienst / Andrea Warnecke

The promises of a computerized expansion of intelligence were spectacular. Electronic brains, it was said, would soon solve problems of every kind. Most of these expectations were disappointed, or fulfilled only after decades and only in narrow domains such as chess or pattern recognition. Technical advances in recent years, however, have given the field new momentum. New storage technologies, ever more powerful supercomputers, new database concepts for processing huge amounts of data, investments of millions by the big Internet companies and, more recently, a race between states for world domination through “algorithmic advantage” are also reviving the old fears of artificial intelligence.

In May 2014, four prominent scientists - the Nobel laureate in physics Frank Wilczek, the cosmologist Max Tegmark, the computer scientist Stuart Russell and arguably the world's most famous physicist, Stephen Hawking - made an appeal to the readers of the British newspaper The Independent. They warned against dismissing intelligent machines as mere science fiction: “Successfully creating an artificial intelligence would be the greatest event in human history. Unfortunately, it could also be the last, unless we learn how to avoid the associated risks.”

The annihilation of humanity?

It is noticeable that AI research is dominated by men, whose grandiose urge to create may also be fed by a reverse form of penis envy; call it birth envy. It is the indomitable desire to confront the living organism, which evolution has been driving through the terrain in ever more refined forms for some 400 million years, not merely with a computerized development of equal rank, but with one that surpasses humans and degrades them to a transitional being between the ape and the latest technological crown of creation.

This vision is called “strong AI”. It rests on the assumption that every function of human existence can be computerized, above all on the assumption that the human brain works like a computer. All the warnings about machines running amok converge on one focal point: the singularity. It is the moment at which a machine becomes able to improve itself autonomously and its performance increases explosively. Those who issue the warnings fear that this hyper-machine, once set in motion, will develop a nature of its own. An intelligent self.

The fear that unruly objects could destroy humanity has deep roots. It is bound up with fear, but also with hope, that inanimate things could come to life, for example with the help of magic. The ancient Egyptians placed small figures in the graves of their dead - shabtis, the answerers - who were to perform whatever work awaited the deceased in the hereafter. Here, for the first time in history, the idea of the computer appears: the answering proxy who carries out every command. The instructions with which the small figures are inscribed are strikingly similar to the algorithmic sequence of a modern computer program:

Magic doll, listen to me!
When I am called
to do the job ...
know that you are appointed in my place
by the guardians of the afterlife
to sow the fields
to fill the channels with water
to get the sand across ...


At the end it says:

Here I am and I listen to your orders.

The answerers of antiquity: Egyptian shabtis, grave goods that were meant to perform work in the afterlife. | Photo (detail): © picture alliance / akg / Bildarchiv Steffens

Today we would call this “dialog-oriented user guidance” - and the belief that a magic spell can bring a clay figurine to life, superstition. That superstition has found its way into the present. The advocates of strong AI are convinced that at some point, somehow, a living consciousness will form in a computer. They follow the hypothesis that thinking can be reduced to information processing that is independent of any specific carrier material. A brain, then, is not strictly necessary, and the human mind could just as well be loaded into a computer. For Marvin Minsky, who died in January 2016, AI was an attempt to outsmart death.

The illusion of the machine self

In 1965, the computer scientist Joseph Weizenbaum, working at the Massachusetts Institute of Technology, wrote a program called ELIZA with which one could converse in writing. He had ELIZA play the role of a psychotherapist in conversation with a client. “My mother is strange,” the person types. “How long has your mother been strange?” the computer asks back. Are the machines now awake? What is it that speaks to us and feels as if computers could develop an inner core confusingly similar to that of a human being?

Until then, devices had expressed themselves only in the form of impersonal signals - “oil pressure dropping”, “malfunction”. Weizenbaum was dismayed at how quickly people talking to ELIZA formed an emotional relationship with the algorithmically costumed machine. When his secretary tried the program, she asked him after a short time to leave the room, because she was revealing intimate details about herself. But a machine that a programmer has instructed to say “I” is still far from having an actual I.
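ELIZA's apparent empathy rested on nothing more than keyword matching, canned response templates and a simple swapping of pronouns. The following Python snippet is a minimal sketch of that kind of mechanism, with a handful of invented rules chosen purely for illustration - it is not Weizenbaum's original program, which used a far larger script of rules.

```python
import re

# A minimal ELIZA-style responder (illustrative sketch only): keyword patterns
# plus pronoun "reflection" turn the user's statement into a question,
# without any understanding of its content.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (r"my (.+) is (.+)", "How long has your {0} been {1}?"),
    (r"i feel (.+)",     "Why do you feel {0}?"),
    (r"i am (.+)",       "How long have you been {0}?"),
]

def reflect(phrase: str) -> str:
    """Swap first- and second-person words so the reply reads naturally."""
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.split())

def respond(statement: str) -> str:
    text = statement.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."   # generic fallback when no rule matches

print(respond("My mother is strange"))  # -> How long has your mother been strange?
print(respond("I feel lonely"))         # -> Why do you feel lonely?
```

The trick is purely syntactic: the program mirrors the user's own words back as a question, which is exactly why it was so easy to read understanding into it.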

The brain-as-computer has nothing to do with actual knowledge of the brain, of human intelligence or of a personal self. It is a modern metaphor. At first it was assumed that man was made of clay and that a god had breathed his spirit into him. Later a hydraulic model became popular: the idea that the flow of “juices” in the body is responsible for physical and mental functioning. When automata were built from springs and gears in the 16th century, leading thinkers such as the French philosopher René Descartes arrived at the idea that humans are complex machines. In the middle of the 19th century the German physicist Hermann von Helmholtz compared the brain to a telegraph. The mathematician John von Neumann declared that the functioning of the human nervous system was digital and drew ever new parallels between the components of the calculating machines of his day and the components of the human brain. But no one has yet found a memory bank in the brain that works even remotely like a computer's data storage.

Few researchers in the field of artificial intelligence are worried about a power-hungry superintelligence. “The whole community is far from developing anything that could worry the public,” says Dileep George, co-founder of the AI company Vicarious. “As scientists, we have an obligation to educate the public about the difference between Hollywood and reality.”

A machine with civil rights: the humanoid Sophia holds conversations and shows emotions - and is the first robot with citizenship. It was recognized as a legal entity by Saudi Arabia at the end of 2017. | Photo (detail): © picture alliance / Niu Bo / Imaginechina / dpa

Vicarious, which has raised $50 million from investors including Mark Zuckerberg and Jeff Bezos, is working on an algorithm that is meant to work like the perception system of the human brain - an extremely ambitious goal. The largest artificial neural networks running on computers today have around a billion cross-connections, a thousand times what was possible a few years ago. Compared with the brain, however, that is still negligibly small: it corresponds to about one cubic millimeter of brain tissue. On a tomographic scan, that would be less than a voxel, the three-dimensional equivalent of a pixel.

The central problem of AI is the complexity of the world. To cope with it, a newborn human already comes equipped with evolutionarily acquired capacities: with senses, with a handful of reflexes important for survival, and, perhaps most importantly, with powerful learning mechanisms that allow it to adapt quickly to change, so that it can interact better and better with its world, even if that world is very different from the one its distant ancestors knew.

The computer, by contrast, cannot even count to two; it knows only zero and one and gets by on a mixture of stupidity and speed, supplemented perhaps by rules of thumb, so-called heuristics, and a great deal of sophisticated mathematics (keyword: neural networks). To understand even the basics of how the brain gives rise to the human intellect, we might need to know not only the current state of all 86 billion neurons and their 100 trillion connections, not only the varying strengths with which they are connected, and not only the states of the more than 1,000 proteins present at each connection point, but also how the moment-to-moment activity of the brain contributes to the integrity of the system as a whole.
Added to this is the uniqueness of each brain, owed to the uniqueness of each person's life story.
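Setting the figures from the two paragraphs above side by side gives a rough sense of the gap - a back-of-the-envelope comparison of raw connection counts only, which says nothing about what those connections actually do:

```python
# Rough scale comparison using only the figures quoted above.
largest_ann_connections = 1e9     # ~1 billion cross-connections in today's largest networks
brain_connections = 100e12        # ~100 trillion connections among ~86 billion neurons

ratio = largest_ann_connections / brain_connections
print(f"Largest networks reach about {ratio:.0e} of the brain's connection count")
# prints roughly 1e-05, i.e. about one hundred-thousandth
```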

 

Artificial intelligence, supposedly all-powerful algorithms, the social consequences of a computerized world - on Goethe.de you can find further texts that explain and discuss these topics and questions.

Do our fundamental rights still offer adequate protection in times of big data, social networks and decision-making algorithms? To make sure they do, digital experts from Germany are campaigning for a European Union charter of fundamental digital rights.

When intelligent machines make decisions according to their own logic, the consequences are not only legal and practical but also ethical. An ethics commission is therefore examining decision-making responsibility for autonomous vehicles, for example.

The power of opinion bots is also viewed critically: to what extent can they influence political processes? Recent election campaigns have given plenty of cause for discussion.

Overall, the debate about the power of algorithms is shaped by demands for transparency on the one hand and the ever-growing use of the services of Google, Facebook and other providers on the other.

The expansion of the digital can be seen in many areas of everyday life: robots are being used more and more frequently in companies of all kinds - not only on the factory floor but also in the care sector. Alongside certain risks to job security, this also holds opportunities for greater freedom and efficiency.

The model of the smart city is no longer a utopia: a city that, thanks to intelligent technologies, is both ecological and resource-saving. But such a smart community cannot be had without a price: public space, and the people moving through it, are surveyed almost completely.

And what about art? It, too, makes use of artificial intelligence for its own purposes. More and more creative people are dedicating their works to the relationship between code, art and life in digital worlds.