“We are still far from human or animal intelligence”



La Croix: Is artificial intelligence smarter than us?

Laurence Devillers: Artificial intelligence (AI) has nothing to do with human intelligence. For example, it has no emotional intelligence, no collective intelligence, no culture, no body. AI is better described as an imitation of the human by a set of technologies.

Yann Le Cun: But some of the tasks that AIs perform can be called intelligent. For example, beating a human hands down at the game of Go demonstrates a form of intelligence which, yes, is superior to that of humans in this domain. The same goes for visual recognition systems: when you take a photo of a plant, an AI can identify it more accurately than a human can. However, these machines are highly specialized, and outside their intended domain they do not work very well.

We must not lie to people: current AIs are a far cry from human or animal intelligence, and they have less common sense than an alley cat. But we should not lie to them either by saying that AI will remain dumb. In a few decades or centuries, it will reach the human level.

LD: Maybe, but it is impossible to say for sure. Let's not confuse marketing with real scientific probability. The human brain has a much more complex architecture than those on which AI is based, and we don't know how to reproduce it. Recreating a human is the great fantasy. We can always go further, but something living, something physiological, will always be missing. A system has no guts, no flesh. What the machine will never have is the conatus dear to Spinoza, that is to say, the effort to persevere in one's being, the spark of life, the appetite for life.

As you say, machines will perform some tasks more efficiently than humans. Will they overtake us?

LD: The machine is not perfect, it is rational. And we must not forget that it is human intelligence that builds these machines, gives them their capacities, and decides their “intelligence”.

YLC: Not quite. For forty or fifty years, researchers tried in vain to hand-design intelligent machines. It is only since machines began learning by themselves that they have become intelligent. The engineer just writes a training program; the system then trains itself on the data it is given. And it is capable of much more complex things than an engineer could have designed by hand.
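
A minimal sketch of the idea Le Cun describes, assuming nothing beyond plain Python (the toy data, the linear model y = w*x + b, and the learning rate are invented for illustration, not taken from any real system): the engineer writes only the training loop; the useful behavior, here the values of w and b, emerges from the data.

    # The engineer writes the training procedure; the behavior is learned.
    data = [(x, 3.0 * x + 1.0) for x in range(10)]  # examples of a hidden rule

    w, b = 0.0, 0.0   # the model starts out knowing nothing
    lr = 0.01         # learning rate, chosen by the engineer

    for epoch in range(1000):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y    # prediction error on one example
            grad_w += 2 * err * x    # gradient of the squared error w.r.t. w
            grad_b += 2 * err        # gradient w.r.t. b
        w -= lr * grad_w / len(data) # the system adjusts itself from the data
        b -= lr * grad_b / len(data)

    print(f"learned: y = {w:.2f}*x + {b:.2f}")  # converges toward y = 3.00*x + 1.00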

LD: But these machines fundamentally lack common sense and the ability to reason about it. A machine merely applies mathematical formulas written by humans, within neural architectures designed by humans. We must not confuse intelligence with computing power. Let AI help decision-making, yes. But don't we risk becoming less intelligent if we delegate many of our decisions to machines?

YLC: We have not become less intelligent since we've had calculators.

LD: Machines that speak to us and act on our behalf go far beyond the calculator.

Will an artificial intelligence one day be able to feel and express emotions?

LD: Machines can be given the means to detect emotions in an interlocutor and to simulate them in return. But the system itself feels nothing. A robot waiter does not feel heat, for example, and could serve you a scalding dish! Adding a temperature sensor is always possible, but what about social emotions?

YLC: I think, on the contrary, that in the long term machines will be capable of emotions, like animals and humans, even if that is not the case today. If humans one day develop autonomous AIs, and many people are working in this direction, those AIs will necessarily be able to feel emotions.

LD: I do not believe it. That would require a major technological leap in our knowledge of living things. How do you reproduce a consciousness? We don't know how to do it.

What would that change in our relationship to technologies, to machines?

LD: We are frequently guilty of anthropomorphism: we attribute to the machine capacities it does not have. We simply need to be aware of that.

YLC: AI is bringing a new industrial revolution, and like any industrial revolution it destabilizes society but also represents a tremendous opportunity. I have no doubt that it will ultimately be useful to humanity, whether for filtering hateful or violent speech online, automatic emergency braking in cars, analyzing medical images, or discovering new treatments.

For people to learn to handle a new technology, it takes about half a generation. And in every generation, parents and grandparents have seen some new invention as bad because young people spend too much time on it. Today it's social networks, tomorrow perhaps AI; yesterday it was video games, television, and before that rock and roll.

LD (joking): Don't forget the book!

YLC: Oh yes! Farmers at the time no doubt complained about their children spending too much time reading instead of going out to the harvest. Yet today no one would say that the printing press was a bad thing.

How can we make machines serve us without enslaving us?

YLC: The strength of democratic institutions protects us from the misuse of AI and its perverse effects. The same technology can be used to detect tumors in medical images or to surveil the population in authoritarian countries. The technology is neutral; what we do with it is not, and it is up to governments to see to that.

LD: Technology is not neutral! We train AIs on data sets that we have chosen; that is not neutral. And very often this reproduces the biases of our society, sexism for example. AI will increasingly disrupt our lives. We must teach its opportunities and risks at school, to the youngest, and then throughout life, so that everyone has and keeps the capacity to decide. If we use AI to be smarter, to make society fairer, that will be fabulous. But for that, we must develop AI in a transparent and ethical way.

YLC: What we must identify are the real dangers, everything we were talking about earlier: bias, misuse, and so on. The horror scenarios of futuristic killer robots are not what we should be concerned about. Let's focus on the immediate advances and problems of technology in general and of AI in particular. We have to find a new balance. That requires reflection on ethics, with bodies bringing together governments, experts, and users to draw up recommendations, protocols, and rules. It also involves protecting privacy.

LD: These machines will adapt to us, and we will adapt to them. Let us anticipate this human-machine “coevolution”.

__________________________________

Laurence Devillers

1962. Birth.

1992. Doctorate in computer science, on speech recognition systems, at the University of Paris 11-Orsay.

2004-2007. European project on emotions and human-machine interactions.

Since 2011. Professor at the Sorbonne and member of the LIMSI computer science laboratory at the CNRS, where she heads the working group on social interactions.

Since 2019. Member of the Digital Ethics Pilot Committee, under the aegis of the National Consultative Ethics Committee.

Yann Le Cun

1960. Birth.

1988. Researcher at Bell Laboratories in the United States, after a doctorate at the Sorbonne.

2003. Professor at New York University, where he later headed the Center for Data Science.

2013. Director of Artificial Intelligence Research at Facebook.

2018. Winner of the Turing Award, jointly with Canadians Yoshua Bengio and Geoffrey Hinton, for their work on artificial neural networks.
