La Croix: Are we heading towards a world where algorithms have taken the place of chance and human subjectivity?
Aurélie Jean: Not exactly. Algorithms intervene to achieve greater precision, to perform tasks that humans cannot do or do far less well, or to automate certain tedious tasks or tasks with very low human added value. That said, just because you can technologically create an algorithm to perform a task doesn't mean you absolutely have to. There are social and economic considerations to take into account. You must make sure you always have a choice.
The example of Parcoursup shows that algorithms are often treated as scapegoats. Why do they inspire so many fantasies?
AJ: Algorithms have become essential in many fields, whether on a production line, in medical applications, in retail banking (fraud detection), in transport… But when we don't understand something, three reactions are open to us: we try to understand it, we fear it, or we fantasize about it. Because algorithms are largely intangible, woven into our daily lives, and often very poorly presented in the media, we develop around them a kind of demonization that I try to fight. Algorithmic science is not Manichean: it is neither good nor bad; it is what we make of it.
Is an algorithm that "decides alone" possible, and to what extent?
AJ: An algorithm does deliver a decision on its own at runtime, but it needs men and women to build it. And that decision can be modified or even rejected by the person who uses it. The most important thing is to know in which situations an algorithm is used, for which type of response (routes, content to read or buy, profiles to discover) and on which type of data it runs (your reading habits, your purchases, or even your geolocation).
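To make this "human in the loop" idea concrete, here is a deliberately simple sketch (hypothetical, not any specific system mentioned in the interview): the algorithm proposes a decision at runtime, but the person using it can always override or reject that proposal.

```python
# Toy "human in the loop" sketch (hypothetical): the algorithm proposes
# a decision at runtime, but the person using it can override it.

def algorithmic_decision(data):
    """A stand-in for any automated decision rule."""
    return "approve" if sum(data) > 10 else "reject"

def final_decision(data, human_override=None):
    """The human's choice, when given, always takes precedence."""
    proposed = algorithmic_decision(data)
    return human_override if human_override is not None else proposed

print(final_decision([4, 4, 4]))                           # algorithm decides
print(final_decision([4, 4, 4], human_override="reject"))  # human overrides
```

The point of the sketch is only the structure: the automated rule and the human veto are separate, and the veto sits last in the chain.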
Can we regulate them?
AJ: No. You cannot regulate an algorithm itself, because you cannot fully evaluate it. On the other hand, we can and must regulate good practices in the design, development, testing and use of these algorithms. In the event of a scandal (an algorithmic error or bias), the owner and/or user of the algorithm would then have its practices audited to understand the origin of the error. Most scandals stem from poor algorithmic governance. It is this governance that must be audited, and sanctioned in the event of failure.
Why is it important for every citizen to take an interest in how algorithms are created?
AJ: Some algorithms run on personal data, and therefore on our data. We then need to understand our indirect role in biasing the orientation of these algorithms. Everyone in the production chain of an algorithm is responsible: from those who conceive the tool to those who use it, including those who develop it, test it, and sell it. Even our data, as users, can steer, bias, or even falsify the algorithm in question.
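As an illustration of how users' data alone can steer an algorithm (a toy sketch, not any production recommender): a system that simply ranks items by click counts reproduces whatever skew exists in the click data it was fed.

```python
# Toy illustration (hypothetical): a popularity-based recommender
# reproduces whatever skew exists in its users' click data.
from collections import Counter

def recommend(clicks, k=2):
    """Recommend the k most-clicked items -- no notion of quality or fairness."""
    return [item for item, _ in Counter(clicks).most_common(k)]

# If the logged users mostly clicked item "A", "A" dominates the
# recommendations, regardless of whether other items are better.
clicks = ["A", "A", "A", "B", "C", "A", "B"]
print(recommend(clicks))  # ['A', 'B']
```

Here the "bias" comes entirely from the input data, not from the code, which is exactly the indirect user responsibility the answer describes.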
You are working on a project to detect weak signals in order to catch breast cancer further upstream. How did this idea of a very positive application of algorithms come about?
AJ: The original idea comes from Doctor Philippe Benillouche, co-founder of the company we created, who spoke to me more than two years ago about his vision of an algorithmic system capable of detecting a breast tumor signature. Breast cancer takes a few years to develop, and today everyone in the field is trying to identify even extremely small tumors on mammograms using image-recognition algorithms. I translated his vision into a mathematical problem to solve: find the weak signal of breast cancer that precedes the strong signal, which corresponds to the detection of the tumor on the image. Without algorithms, this project would have been impossible, because we were trying to detect the invisible.
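To make the weak- versus strong-signal distinction concrete, here is a toy sketch (purely illustrative; the company's actual method is not described in the interview): a strong signal is a value that crosses an obvious threshold, while a weak signal is a small but persistent drift that precedes it, which a moving-average detector can flag earlier.

```python
# Toy illustration of weak- vs strong-signal detection (hypothetical,
# not the medical algorithm described in the interview).

def detect_strong(series, threshold=10.0):
    """First index where the value crosses an obvious threshold (strong signal)."""
    for i, v in enumerate(series):
        if v >= threshold:
            return i
    return None

def detect_weak(series, window=3, drift=0.5):
    """First index where the moving average rises by more than `drift`
    over one step -- a small but persistent trend (weak signal)."""
    for i in range(window, len(series)):
        prev = sum(series[i - window:i]) / window
        curr = sum(series[i - window + 1:i + 1]) / window
        if curr - prev > drift:
            return i
    return None

# Flat baseline, then a slow drift, then an obvious spike.
series = [1.0, 1.0, 1.1, 1.0, 2.0, 3.2, 4.5, 6.0, 12.0]
print(detect_weak(series))    # fires during the drift, index 5
print(detect_strong(series))  # fires only at the spike, index 8
```

The weak detector fires several steps before the strong one, which is the whole value of looking "upstream" of the obvious event.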
What power do we have as citizens in the face of all these advances?
AJ: Paradoxically, we have much more power than one might imagine, because we are the Achilles' heel of the technology giants: we can decide not to use certain tools and to favor others instead. We could become their greatest enemies. Thanks to us, they would build much better technologies!