Human autonomy rests on the notion of “agency”: the ability to develop independent thought and to freely choose to act according to one’s own ideas and emotions. Daniel Kahneman, a psychologist awarded the Nobel Prize in Economics in 2002, studied the way people make decisions. To explain why human beings are so often biased in their choices, he schematizes the functioning of the brain as two systems of thought: one intuitive, which he calls System 1; the other logical, called System 2.
While System 2 is deliberate, logical and energy-consuming, System 1 works in automatic mode and operates by association of ideas. For example, by recognizing familiar situations, it infers the behavior to adopt. These associations are not derived from reasoning. A product of evolution, this instinctive mode of functioning is very effective, but it is also the source of our biases. Human behavior is therefore difficult to reduce to computer programs devoid of intuition because, as Kahneman’s work shows, our emotions and our “instinct” play an essential role in our decision-making.
Our brain deceives us in many everyday circumstances, making decisions we are unaware of, whether we feel confident or need to react quickly. We fall victim to illusions, optimism effects and anchoring. By exposing these cognitive biases, Kahneman helps us understand how directly digital technology can manipulate us. Keeping one’s free will in a digital world that constantly serves up both what we are supposed to like and what we hate most in order to steer our choices is no easy task.
Machines have no understanding of the basis of their decisions and actions, but they can learn certain mechanisms of System 1 by interacting with humans. Machine intelligence can be seen as a pragmatic interaction with the real world, and the abilities developed have varying degrees of complexity, allowing for different levels of autonomy. The autonomy of a machine can thus be defined as the ability to operate independently of a human operator or of another machine, by exhibiting adaptive behaviors in changing environments.
Providing machines with a self-motivation mechanism, with dimensions of satisfaction and non-satisfaction and a balancing function, a sort of homeostasis, gives them a certain reactive autonomy. In biology, homeostasis is the phenomenon by which a factor is kept in equilibrium around a value beneficial to the system, thanks to a regulatory process. Such architectures and algorithms could one day produce autonomous machines capable of self-motivation and planning.
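The regulatory process described above can be illustrated with a minimal sketch. The names here (`Drive`, `setpoint`, `satisfaction`, `regulate`) are illustrative assumptions, not taken from any particular system: a single internal variable is pulled back toward a beneficial value by negative feedback, and “satisfaction” is highest when the state sits on that value.

```python
# Minimal sketch of a homeostasis-inspired motivation loop.
# All names and parameters are hypothetical, chosen for illustration.

class Drive:
    """A single internal variable regulated around a beneficial setpoint."""

    def __init__(self, setpoint: float, value: float, gain: float = 0.5):
        self.setpoint = setpoint   # the "beneficial value" to maintain
        self.value = value         # current internal state
        self.gain = gain           # strength of the regulatory process

    def satisfaction(self) -> float:
        # Satisfaction peaks (at zero) when the state equals the setpoint;
        # any deviation produces "non-satisfaction" (a negative score).
        return -abs(self.value - self.setpoint)

    def regulate(self) -> None:
        # Negative feedback: nudge the state back toward the setpoint.
        self.value += self.gain * (self.setpoint - self.value)


# A machine whose "energy" drive starts far from its beneficial value
# gradually restores equilibrium through repeated regulation steps.
energy = Drive(setpoint=1.0, value=0.2)
for _ in range(10):
    energy.regulate()

print(round(energy.value, 3), round(energy.satisfaction(), 3))
```

In a fuller architecture, the gap between state and setpoint would not just be corrected internally but would motivate outward actions (seeking a charging station, say), which is the sense in which such a mechanism yields a degree of behavioral autonomy.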
Robots will increasingly be endowed with a feigned humanity and a certain adaptive autonomy, tied to the self-motivation programmed into their decision-making mechanisms. These autonomous machines, however, will not be able to determine moral values, nor to learn what human dignity is. It will be necessary to build safeguards, and to educate people, so that we are not duped into believing that these future autonomous machines have feelings.