What are ethics?
Before talking about Ethics and AI, let’s take a look at ethics itself. Ethics is one of those subjects that we understand intuitively, but which escape us as soon as we try to define them, like time, life or emptiness, for example.
Or, for the more mathematically inclined, infinity, divergent series or the incompleteness theorem. We lack the concepts and vocabulary to talk about them properly, and definitions and explanations quickly become circular. If we lack words, it’s because these notions are at the frontiers of our perception.
Intimately linked to the notion of morality, ethics has given rise to numerous philosophical reflections since ancient times (Confucius, Plato…), in the 19th century (Kant, Schopenhauer…) and right up to the present day (Levinas, Ricoeur…) without interruption.
For simplicity’s sake, let’s say that moral philosophy (“ethics”) is the philosophical discipline whose object is the study of morality; and morality, in turn, has as its object the ends of human action, with the aim of living in accordance with the good. So there you have it: our recursive definition…
We can immediately see the potentially sulphurous links with the religious, political and social aspects that unite (or divide) our societies according to their history, culture, beliefs and contradictions.
Life sciences
From its very beginnings, the medical field has been concerned with ethics. The Hippocratic Oath of the 4th century B.C. bears witness to this.
Advances in biology and medicine have amplified (and complicated) the issues, particularly in relation to the beginning and end of life, but also psychiatry and organ transplants, for example.
To help patients, practitioners and legislators navigate this delicate subject, committees of the “wise and learned” have been set up (in France, the Comité Consultatif National d’Ethique pour la Vie et la Santé).
Today’s medical ethics therefore revolve around four principles:
- Autonomy: respect for the individual and his or her ability to decide on his or her own health.
- Beneficence: facilitating and doing good, contributing to the patient’s well-being.
- Non-maleficence (primum non nocere): the obligation to do no harm.
- Justice: providing the same treatment in a fair and/or equitable manner to all patients.
Digital ethics
“Why software is eating the world”, wrote Marc Andreessen.
In the space of just a few years, digital technology has penetrated our administrations, public spaces, businesses and our (barely) private lives. The power of software is growing exponentially, as witnessed by the spectacular advances in Artificial Intelligence.
Sometimes for good (medical imaging), sometimes for bad (fake news, conspiracy theories), and most of the time for… who knows what (social networks, chatbots).
The power of these tools has become such that their potentially harmful effects (addiction, manipulation, mass surveillance) have ended up worrying a society fascinated by technology but (for the more enlightened) worried about its consequences.
In France, we like committees: the decree of May 23, 2024 announced the establishment of the Comité Consultatif National d’Éthique du Numérique. This consultative institution marks a turning point in the consideration of ethical issues linked to technological and digital advances.
Let’s face it, it’s a difficult subject, and one that needs to be tackled with panache.
Intention is everything(?)
A helping hand for better or worse
“Nudge” is a way of organizing a person’s environment or communication with them, based on their psychology (reflexes, biases, gregariousness, etc.). It is designed to help people make difficult decisions, based on the principle of “encouraging without constraining”.
For example, to change eating habits, nudges are used in catering establishments. Highlighting vegetables, using smaller plates to give the impression of a larger portion, pre-cutting fruit to make it easier to eat: these are just some of the nudges used to encourage a reasonable diet, both in terms of quantity and quality.
The use of nudges raises its share of ethical issues. In fact, they can be seen as an infantilizing device that condones a certain form of manipulation. They can also give rise to feelings of guilt among those who deliberately circumvent them. What’s more, their real impact is still open to question.
- Nudge for good: fight addiction, promote education, drive more carefully, reduce carbon footprint…
- Nudge for bad: promoting addictions, mass manipulation and surveillance, conspiracy theories, capturing available brain time…
For good as for ill, it’s all about deception to counteract an inclination one is trying to correct.
The Marlboro cowboy versus photos of tar-blackened lungs? Intention is everything, isn’t it?
Irrationality becomes the standard
Emmanuel Kemel, professor-researcher at CNRS – Economie et Sciences de la Décision and HEC Paris, warns of the power of the [AI + nudge] couple in the service of manipulative marketing, which he believes goes so far as to undermine the foundations of the market economy for the benefit of predatory companies.
While the behavioral biases on which conventional Nudges are based were derived from experimental research, Big Data makes it possible to automatically detect the slightest weak point in decision-making, which can be exploited to influence the consumer. Once a new behavioral lever has been identified, algorithms can implement it on a massive scale. What are the consequences? Consumers may find themselves wronged, either because they make purchases that don't correspond to their real needs, or because attempts to resist influences create fatigue that degrades the shopping experience. Thus, the use of Nudges in marketing degrades consumer well-being.
Generally speaking, the widespread use of Nudges tends to make occasional errors systematic: irrationality becomes the norm. In this respect, the growing use of Nudges undermines one of the foundations of the market economy. In theory, it is because consumers make informed choices that producers are encouraged to offer the most suitable products at the best prices. Nudge reverses this process, enabling producers to impose consumer preferences. A consumer under influence no longer exerts counter-power over producers. The possibility of consumers acting under influence thus calls into question certain virtues of the market economy, just as voting by citizens under influence undermines the foundations of the democratic model.
Emmanuel Kemel
You can read the full article (in French): Nudges et intelligence artificielle : unis pour le meilleur ou pour le pire ?
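To make the mechanism Kemel describes concrete, here is a minimal sketch of automated "lever detection": an epsilon-greedy bandit that measures which nudge variant converts best, then deploys the winner at scale. The variant names, conversion rates and the epsilon-greedy strategy are all illustrative assumptions, not taken from the article.

```python
import random

# Hypothetical nudge variants -- illustrative assumptions only.
VARIANTS = ["smaller_plate", "highlight_vegetables", "pre_cut_fruit"]

def choose_variant(stats, epsilon=0.1, rng=random):
    """Epsilon-greedy: explore a random variant with probability
    epsilon, otherwise exploit the best observed conversion rate."""
    if rng.random() < epsilon:
        return rng.choice(VARIANTS)
    return max(VARIANTS,
               key=lambda v: stats[v]["wins"] / max(stats[v]["trials"], 1))

def run_campaign(n_users=10_000, seed=42):
    """Simulate serving nudges to users and learning the best lever."""
    rng = random.Random(seed)
    # Hidden "true" conversion rates the algorithm must discover.
    true_rates = {"smaller_plate": 0.05,
                  "highlight_vegetables": 0.12,
                  "pre_cut_fruit": 0.08}
    stats = {v: {"wins": 0, "trials": 0} for v in VARIANTS}
    for _ in range(n_users):
        v = choose_variant(stats, rng=rng)
        stats[v]["trials"] += 1
        if rng.random() < true_rates[v]:
            stats[v]["wins"] += 1
    return stats

stats = run_campaign()
```

Swap "conversion" for any metric a platform optimizes and this is the massification described above: once a behavioral lever works, the algorithm exploits it relentlessly, on every user.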
Affective computing
It’s impossible not to mention Theodore here. But, who is Theodore again?
Theodore Twombly, a sensitive man with a complex personality, is inconsolable following a romantic breakup. He acquires an application capable of adapting to the personality of each user. He meets ‘Samantha’, an intuitive and funny female voice. Theodore and Samantha’s relationship evolves, and little by little, they fall in love… Except…
As in Spike Jonze’s film Her, affective computing refers to applications capable of perceiving the emotions of the users they interact with, analyzing the circumstances of those emotions, and adapting their behavior according to the objective assigned by their programmer.
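A minimal sketch of that perceive–analyze–act loop follows; the keyword lexicon and the response policy are toy assumptions, far cruder than real emotion-recognition models.

```python
# Toy emotion lexicon -- a hypothetical stand-in for a real
# emotion-recognition model.
NEGATIVE = {"sad", "lonely", "tired", "angry"}
POSITIVE = {"happy", "great", "excited", "glad"}

def perceive(utterance):
    """Step 1: crude keyword-based emotion detection."""
    words = set(utterance.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

def respond(emotion, objective="comfort"):
    """Step 2: deduce behavior from the detected emotion and the
    objective assigned by the programmer -- which is precisely where
    the ethical question lies (comfort the user, or steer them?)."""
    policy = {
        ("negative", "comfort"): "I'm sorry to hear that. Want to talk about it?",
        ("negative", "sell"): "A treat might cheer you up -- here's a discount.",
        ("positive", "comfort"): "Glad to hear it!",
    }
    return policy.get((emotion, objective), "Tell me more.")
```

Note that the same detected emotion produces very different behavior depending on the objective parameter: the technique is neutral, the intention is not.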
Louis de Diesbach, a technology ethicist, notes that nobody greets their front door or lock when they get home, and very few thank their dishwasher once it has done its job. Except Ali Baba (“Open sesame!”), one might retort: conversational interactions are of a different nature, and partly compensate for the asymmetry of the user/machine relationship. That’s what makes them so charming, in every sense of the word.
Deep Fake
Let’s start by recalling what a deepfake is: a portmanteau of “deep learning” and “fake”.
First appearing in 2014, deepfakes were digital manipulations used to implicate a person – usually a public figure – in pornographic scenes.
Since then, hyper-faked videos have flourished on the web, such as a fake Barack Obama warning about the risks of deepfakes, or a fake Donald Trump announcing the eradication of AIDS.
Nicolas Obin, lecturer at Sorbonne University and researcher at Ircam:
As semantic manipulation of audio-visual content, deepfakes can be used for malicious purposes such as identity theft. Two properties of modern AI make deepfakes particularly dangerous: on the one hand, the realism of generations made possible by the combination of high-performance learning algorithms and the masses of data available to carry out this learning; on the other, the democratization of these tools with shared resources (models of singers' voices, for example, are freely available on communication channels).
Today, everyone is constantly publishing personal data on networks, much of it publicly accessible. As a result, anyone can fall victim to a deepfake. However, public figures are at far greater risk, due to the sheer volume of data that is freely accessible. Such malicious attacks can be critical in the case of sensitive personalities, as was recently the case with the false statements made by Volodymyr Zelensky and Joe Biden.
But there are other, more pernicious manipulations, such as emotion manipulation, which targets our affects. For example, a voice assistant could have emotional or expressive interactions and influence our behavior by inflecting our emotions, or urging us to buy something, and so on. In politics, the same speech could be addressed to each citizen, with variations in tone adapted to obtain an optimal persuasive effect.
Nicolas Obin
Falsified content, artificial videos…: cyberspace and, in particular, the information sphere are areas where false information abounds. Some of this false content aims to destabilize military operations and can have repercussions in theaters of operation. Detecting and countering them is essential to protect operations and guarantee the credibility of defense forces.
Everything is poison, nothing is poison, it's the dose that makes the poison
According to Laurence Devillers, professor of computer science applied to the social sciences at the Université Paris-Sorbonne:
- To be natural, interactions with humans must be emotionally sensitive;
- Humans naturally project themselves (anthropomorphize) onto everyday objects;
- Robots can interpret signs, but let’s not forget that they don’t have emotions themselves;
- AI is not a neutral object, and ethical rules are needed, because publishers have a responsibility;
- It’s up to society to define the rules; the economic engine won’t be enough. For a drug, we define a dosage and side effects that are monitored over time: for AI it’s the same thing, we need to define the risks and monitor them over time.
You can find her full talk on the excellent Trench Tech talk show here.
Ethics in the professional world
Like all digital and AI players, publishers are sensitive to the ethical aspect of their solutions, with varying degrees of sincerity of course. As a result, they are trying to understand what role they have to play, how far they are involved, and what legal risks they run.
It’s not an easy subject, as innovation, its good and bad effects and the consequences for companies have to be balanced in an unstable and highly competitive environment.
We welcome a number of initiatives in this area, which we would like to share with you.
La Villa Numeris, for AI with a positive impact (thus considering ethics)
An independent French think tank, La Villa Numeris promotes a European digital model that affirms the primacy of the human being.
La Villa Numeris calls for the necessary structuring of the European AI ecosystem, as well as the creation of a European giant in the field, in its advocacy For AI with positive impact.
In partnership with ESSEC’s Metalab, La Villa Numeris notes that, “while necessary to protect fundamental rights, regulation alone will not be sufficient. From their development to their use, digital technologies need to be thought through in terms of their human, sociological, social, economic and environmental consequences”.
Numeum: in-depth reflection on the ethics of AI
Numeum is the French trade association for IT services companies (ESNs) and software publishers. AI features prominently among the topics it addresses.
How can we accelerate, in complete safety, the development of AI uses in the various spheres of our daily lives? This is the mission of the AI Commission, which supports the creation and deployment of trustworthy AI and promotes the benefits of using these technologies for the good of all.
Numeum
Numeum’s Ethical AI initiative is designed to support “AI makers” who wish to integrate ethics into the heart of their activities, to help them translate general ethical principles, derived from existing work, into practical methods and tools.
Ethics at the major generative AI publishers
The subject is well identified among the major generative AI publishers. The profusion of committees, guides and red teams testifies to this. As ever, promises only bind those who listen, but it’s encouraging all the same.
Publishers’ main fear is the appearance of government regulations, which would restrict their ability to act and increase the production costs of their AI solutions. To ward off this risk as far as possible, they apply Jean Cocteau’s maxim: “since these mysteries are beyond me, let’s pretend to be their organizer”.
With some justification, given the immaturity and complexity of the subject. Apple has already announced that Apple Intelligence will not launch in Europe in fall 2024, as it will in other countries around the world, and Meta is following suit, citing an overly uncertain regulatory framework in the European Union (GDPR, AI Act, DMA…).
OpenAI: AI and Ethics, really?
In late 2015, Elon Musk co-founded OpenAI with Sam Altman and several other associates. As its name suggests, OpenAI is inspired by open source software. The aim of this open non-profit organization is to offer safe artificial intelligence for the benefit of humanity. The technology could not be monetized, but would be used to improve society.
However, good intentions did not stand up to economic reality. OpenAI became exactly the opposite: a closed, for-profit solution controlled by one of the world’s largest multinationals: Microsoft.
In 2024, after the departure of its two leaders, the OpenAI team responsible for the safety of a potential artificial superintelligence was disbanded.
To be honest, it’s quite something for one of the world’s richest men (E. Musk) to end up suing his own creation.
Microsoft, ethics at the core of AI
As always, Microsoft is a solid partner: its report on responsible AI is precise and consistent.
The responsible AI policy is broken down into four points: governance, mapping, measurement and management. See also “The Microsoft Responsible AI Standard”.
You can also read the white paper How Microsoft Secures Generative AI.
Meta, a seriously ethics-oriented AI
As for Meta, Mark Zuckerberg’s teams seem to have taken the ethical and security aspects seriously, multiplying safeguards and control points (notably red teams).
Only time will tell how effective these measures have been.
You’ll find two dedicated resources from the publisher here:
Particular attention is paid to cybersecurity, chemical and biological weapons, child safety and confidentiality.
What's in it for me?
What we don’t know doesn’t scare us: we don’t know what we don’t know. It’s what we think we know that is the source of many fantasies.
Ethics are about morality, and it’s not easy to fit morality into rules and laws. Let’s not forget the controversies surrounding vaccination, whether compulsory or voluntary, in the recent Covid years.
Couldn’t the four principles of medical ethics (autonomy, beneficence, non-maleficence and justice) serve as a guide on the journey we’re about to embark upon, without being naive or overly anxious? The European Union is trying to take the subject on, notably through the AI Act, which is not without debate.
But it’s the dose that makes the poison.
_____________
Are you a software publisher? We hope this article has given you some food for thought about ethical AI.
Agora Software is the leading provider of conversational AI solutions for software publishers. We rapidly deploy conversational application interfaces that enhance the user experience of applications and platforms. By integrating Agora Software into your application, your users benefit from rich, multilingual and omnichannel interactions.
Would you like to integrate conversational AI into your applications?
Let’s talk: contact@agora.software
If you enjoyed this article, you too might like to read The AI Act: dura lex, sed lex.
And join us on our Linkedin page to follow our news!
Want to understand how our conversational AI platform optimizes your users’ productivity and engagement by effectively complementing your business applications?