Ethics: A lever of trust in the development of inclusive and responsible AI

Mon 21 Jun 2021

I like to say that data is AI's fuel. Even if they have different objectives, these two fields complement each other.

Amal El Fallah Seghrouchni

The advent of 5G and its future developments will accelerate the emergence and adoption of powerful and often revolutionary AI tools by many players. Capable of benefiting humanity as a whole in many areas of activity, including the fight against Covid-19, smart technologies can also drift towards manipulation, worsening inequalities and discrimination. Ethical rules are essential to guide the inclusive and responsible development of AI. Scientific insights...

You've spoken at many debates about AI in 2020. Why all this hype about a field that has been tackled for over 30 years?

The history of AI goes back to the end of the 19th century! The hype around the topic is very real. I would say that its magnitude is not sudden: it has been accelerated exponentially by three factors:

  • Hardware evolution: supercomputers integrating graphics processing units (GPUs) provide unprecedented computing power. They accelerate the performance of HPC (High Performance Computing) and AI applications.
  • The GAFAM craze: they offer powerful, almost turnkey tools that allow non-experts to use very sophisticated AI algorithms. These massive deployments have raised ethical questions related to the "black box" effect of certain learning algorithms: conclusions are reached through paths so complex that humans cannot explain them. This has given rise to a new AI research field: "explainability".
  • The ability to collect massive amounts of data thanks to the deployment of connected objects and the universalization of the Internet. Near-permanent hyper-connection, encouraged by new country evaluation indexes (digital maturity, digital well-being), the massive tracing of daily interactions (social networks), the generalization of online services, the routing of data... are all factors that have enabled the massive, sometimes uncontrolled, collection of data (hence new regulations such as the GDPR) as well as training on these databases.

Data and AI are frequently associated. What is the link between these two very distinct fields?

I like to say that data is AI's fuel. Even if they have different objectives, these two fields complement each other. For example, data is necessary for AI algorithms to make predictions. AI is focused on modeling, understanding cognitive processes and often exploration. In turn, AI can produce data.

How will 5G enable AI to reach its full potential? How do 5G and AI depend on each other to thrive?

Coupled with IoT, 5G is enabling smart applications that require:

  • Increased download speeds, reduced latency and more accurate user location;
  • The coexistence of numerous objects connected to the Internet, which must make autonomous decisions while immersed in constantly evolving environments.

In smart cities, for example, 5G will foster the development of smart building and smart home solutions, as well as ubiquitous services to help citizens: geolocation, mobile games, e-commerce, autonomous cars (V2V, V2P, V2I). In medicine, it will enable remote surgery and medical monitoring. In Industry 4.0, it will accelerate the development of digital twins: virtual replicas of physical devices that allow either remote control of industrial processes or calculations that are impossible to perform onboard. In the post-COVID-19 world, 5G combined with IoT also has huge potential in online education and remote working.

Regarding the contribution of AI to 5G, we are moving towards 5G datafication. Already, so-called smart networks use AI techniques in routing and resource allocation. The specifications for 5G evolutions, currently being defined, foresee that it will soon host data centers. This will be the real revolution: 5G will use AI at its heart[1].


AI is as fascinating as it is frightening. How can you explain this paradox? In what way can AI be dangerous for society and why does it provoke a certain distrust?

Unlike other developing sciences such as quantum science, AI challenges humans in their most intimate aspects. It can therefore be intrusive.

Indeed, AI generates text and speech (chatbots) and performs vision and pattern-recognition tasks such as facial recognition, which is controversial. It has cognitive capabilities, to the point of arguing with users. It can evoke emotions in humans. It can be used to manipulate individuals through nudges[2], a well-known concept in marketing and behavioural theory: techniques that steer a person towards a decision they mistakenly believe they have freely chosen. AI systems have access to private and personal data, and they may never forget anything. In the workplace, many believe that robots and automation will lead to mass unemployment. AI is also frightening in movements like transhumanism...

However, at the same time, it is fascinating to see that robots provide very powerful solutions in many fields: to treat autism, to accompany cognitively impaired people, to make recommendations in record time, to help make relevant decisions, to carry out very deep underwater drilling, to pilot drones, to beat the world champions in chess and other non-trivial games...

What role do you play in UNESCO's prestigious COMEST?

COMEST is an advisory body and forum of reflection set up by UNESCO in 1998. Together with 17 other experts, we reflect on the ethical issues raised by advances in science and technology. Currently, our Commission is working on the ethics of connected things (IoT) and other critical topics such as the ethics of land use. Last August, UNESCO released a report of its work on AI ethics, which is currently being discussed with all its member states.

What are the safeguards to be foreseen in the field of AI?

In my opinion, several principles should be respected:

  • Respecting human rights and ensuring human dignity. Objective: not to discriminate against people who do not have access to digital tools.
  • Respecting personal data: ensuring that data collection is proportionate and defining legal time limits for its use.
  • A democratic logic of deployment: in my opinion, technologies must be adapted to the "technological IQ" of users so that their consent is free and informed; a public debate must raise awareness of the technical and societal stakes.
  • Regular, transparent, fair and understandable information must be communicated to all.
  • The possibility for the user to revoke his/her consent.
  • The possibility for the citizen to have his/her data erased or modified.
  • A very important point linked to Big Data: the eradication of any kind of bias (cognitive, economic, selection...), now well documented in research.
  • Respecting privacy: the right to keep one's personal affairs and relationships secret, a subject on which Alan Westin conducted seminal work[3].

How can we reconcile technological innovation and ethical rules in order to put progress at the service of many? How can we move forward while integrating the issues of trust?

Tension often arises between efficiency and ethics. But we must not open Pandora's box under the pretext of accelerating innovation. Trust must be key! For me, the real question is: which society do we want? Several initiatives by countries and institutions are evidence of a global awareness.

In October 2020, the Chilean Congress presented a draft amendment to the constitution that defines mental identity as a right to be protected from manipulation (Cognitive rights). A world first!

Many institutions are working on AI ethics. UNESCO launched a global consultation in 2020. The OECD formed an expert group to develop a draft in June 2020. The European Parliament adopted three new resolutions (October 20, 2020) on: the liability regime for AI; AI and intellectual property; and the ethical framework for AI development (the right to information).

On January 20, the European Parliament published its latest resolution report on AI[3]. From Montreal to Melbourne, many AI research centers are addressing this topic.

The advances of AI appear to be disruptive. The acceleration of its development and its popularization raise questions and concerns. Ethics, respect for human rights and the privacy of citizen-users will be major drivers for creating confidence in an inclusive and responsible AI, one likely to contribute to technical progress, the resolution of health crises and the development of emerging countries. The work of multidisciplinary ethics commissions aims to enlighten governments, the scientific community, decision-makers and the general public.

A brief historical overview of AI

1890: the American psychologist William James introduces the concept of associative memory, laying the foundations of what would become, in 1949, Hebbian theory (a learning rule for artificial neural networks) and giving rise to research on the subject.
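Hebb's 1949 learning rule, mentioned above, is often summarized as "neurons that fire together wire together": the connection between two units is strengthened in proportion to their joint activity. A minimal illustrative sketch (the function names and values are mine, not from the article):

```python
def hebb_update(w, pre, post, lr=0.1):
    """One Hebbian step: the weight grows by lr * pre * post,
    i.e. only when the pre- and post-synaptic units are co-active."""
    return w + lr * pre * post

w = 0.0
# Repeated co-activation of the two units strengthens their link...
for _ in range(5):
    w = hebb_update(w, pre=1.0, post=1.0)
print(round(w, 6))

# ...whereas if either unit is silent, the weight does not change.
print(hebb_update(0.5, pre=0.0, post=1.0))
```

In this basic form the weight can only grow; later variants (e.g. Oja's rule) add normalization to keep it bounded.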

1943: Warren McCulloch and Walter Pitts are the first to demonstrate theoretically that simple formal neural networks can perform complex logical, arithmetic and symbolic functions.
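The McCulloch-Pitts result can be made concrete with a tiny sketch (my own illustration, not from the article): a formal neuron is just a binary threshold unit, and with suitable weights and thresholds it computes the elementary logical functions from which more complex circuits are built.

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts formal neuron: fires (1) iff the
    weighted sum of binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    # Both inputs must be active to reach a threshold of 2.
    return mcp_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    # A single active input suffices to reach a threshold of 1.
    return mcp_neuron([a, b], [1, 1], threshold=1)

def NOT(a):
    # An inhibitory (negative) weight: fires only when the input is 0.
    return mcp_neuron([a], [-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```

Since AND, OR and NOT are universal for Boolean logic, networks of such units can in principle realize any logical function, which is the heart of the 1943 result.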

1946: David P.C. Lloyd publishes a well-known founding article: "Integrative pattern of excitation and inhibition in two-neuron reflex arcs".

Extract from our white paper: Challenges and advancements in the era of data and artificial intelligence


1 Cf. Guy Pujolle, « Faut-il avoir peur de la 5G ? » (2020).
2 Richard Thaler and Cass Sunstein, « Nudge : Améliorer les décisions concernant la santé, la richesse et le bonheur » (2008).
3 Cf. Alan Westin, « Privacy and Freedom » (1967).


Amal El Fallah Seghrouchni

Full Professor at Sorbonne Université - Faculté des Sciences et d'Ingénierie