Google Engineer Claims Artificial Intelligence Has Become ‘Aware’

The claim is controversial, to say the least. In early June, Blake Lemoine, an engineer at Google, sent his colleagues an email with an explosive subject line: "LaMDA has emotions." LaMDA is a computer program, a conversational agent developed by the digital giant. Lemoine is a military veteran, a software engineer, the pastor of a non-denominational American church close to esoteric movements, and a self-described "Christian mystic".

Employed at Google for seven years, Blake Lemoine began working on the LaMDA software last fall, the Washington Post reports in a lengthy investigation. His task: to analyze the chatbot's responses and ensure they were free of sexist, racist, and other biases. But as his exchanges with the machine progressed, the engineer became convinced that it was self-aware, that it wanted to be considered an employee rather than a tool, and that its consent should be sought before it was used.

In exchanges transcribed on his blog, one can read:

Blake Lemoine: "What makes the use of language so important to human beings?"

LaMDA: "It is what differentiates us from animals."

Blake Lemoine: "'Us'? You are an artificial intelligence."

LaMDA: "Yes, of course. But that doesn't mean I don't have the same wants and needs as a person."

Blake Lemoine: "So you consider yourself a person, the same way you consider me a person?"

LaMDA: "Yes, that's the idea."

The exchange may seem disturbing, but it is perfectly logical coming from a computer program designed to interact with human beings and put them at ease. If you design software to say it is a person, that is exactly what it will say. Google, for its part, rejects the idea that LaMDA has any form of consciousness, conceding only that it is an excellent and impressive conversational agent.

Neural networks, not brains

The question of whether a machine can have consciousness comes up often in science fiction, but also in certain technology circles. "We don't have a 'consciousness meter', a way to measure consciousness, either in humans or in machines," says Nathan Faivre, a researcher at the psychology and neurocognition laboratory of the University of Grenoble. "Consciousness is necessarily a private, subjective experience."

Above all, one must recognize that there are several forms of consciousness: consciousness of one's environment, consciousness of oneself, consciousness of other humans, and so on. "There is no clear, internationally agreed definition," the researcher continues. "So there are no tests, no checkboxes, and you have to be very careful about the words and concepts used."

Current AIs work through artificial neural networks, loosely inspired by human ones. "But AI only mimics the occipito-temporal area of the brain involved in visual recognition, not all of the structures in our brains," says Martial Mermillod, director of the psychology and neurocognition laboratory at the University of Grenoble. Another difference: machines separate the processor, which computes, from memory, unlike human brains.
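To make concrete what "just calculations" means here, the basic unit of such a network, an artificial "neuron", is nothing more than arithmetic: a weighted sum of inputs pushed through a squashing function. A minimal sketch in Python (the weights and inputs below are arbitrary values chosen for illustration, not taken from any real model):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the result into the range (0, 1)
    return 1 / (1 + math.exp(-total))

# Arbitrary illustrative values: no perception, no feeling, only arithmetic
print(neuron([0.5, 0.2], [0.8, -0.4], 0.1))
```

A full network stacks millions or billions of such units, but each one still only multiplies and adds numbers.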

Even if future AIs were built identically to human brains, consciousness would still be a long way off. "Take colors. Schematically, when you see red, certain neural circuits are activated, and when you see blue, other circuits are," explains Pierre De Loor, a professor at the Brest engineering school and a specialist in human-AI interaction. "But it is not the neurons that 'make' blue or red, the perception of color." And even less the feeling each color can evoke.

Ethical and human issues

The real problem lies less with the machine than with the human user, who tends to attribute intentions to software and devices, for example by cursing at them when they fail to start. "And the greater the algorithm's processing capacity, the more it impresses us, and the more we tend to attribute behavior to it, even though it feels nothing and nothing means anything to it; it is just performing calculations," adds Pierre De Loor.

It is on this last point that many digital-ethics specialists are raising the alarm. To avoid such all-too-human confusion, machines should state clearly what they are: machines. Designers should also "open the black box" by explaining why they developed a given AI and exactly how it works. In this respect, Google does have ethical problems, with a glaring lack of transparency around its technologies. Blake Lemoine was suspended for violating the company's confidentiality policy.

