Will AI systems become more autonomous?


Use cases for artificial intelligence in everyday life are multiplying as society undergoes digital transformation. This phenomenon creates new risks, which can extend as far as social control measures. For this reason, the European Union, among others, plans to strengthen the regulation of this technology.

In its early days, artificial intelligence (AI) was just one segment of computer science. But it has since left the research labs with the emergence of AI systems. We are now starting to realize just how fast AI has developed, to the point that it will undoubtedly change the world.

Faced with this possibility, computer scientists should ensure that this technology remains under human control. To work in AI, these experts can join a company in the sector as employees or become freelancers, a form of employment whose adoption requires a few steps: market research, choice of legal status, prospecting…

Developers may lose control over some decisions

Until recently, algorithms and the decisions derived from their equations and mathematical formulas depended heavily on:

    • Programmers;
    • Their vision of the world and their culture.

More recently, these rules have come instead from properties hidden in the data from which the algorithms learn on their own. This depends on the quality, but also the quantity, of the data analyzed. These algorithms are notoriously hard to understand, both for the average user and for the expert programmers who build them. The reason: their optimization processes are:

    • Difficult to reproduce identically;
    • Deeply at odds with the ways humans reason.

Initially, these algorithms still remain relatively under the developer's control. However, a considerable share of the decisions that result from them may escape that control, leaving unresolved every problem related:

    • To the lack of transparency of the solutions the algorithm puts forward;
    • To their complexity;
    • Sometimes to their sheer incomprehensibility.

AI has enabled incredible advances in pattern recognition (of locations, voices, images…). However, progress in this field produces prediction systems that create growing risks for:

    • Fundamental rights: discrimination, manipulation…;
    • Security: autonomous weapons, social control;
    • Health.

The EU prepares a regulation to control AI

Last March, China adopted a law on the transparency of recommendation algorithms.

In parallel, the European Union will enter a new phase of AI regulation next year. The supranational organization plans to roll out the Artificial Intelligence Act (AIA) by then.

The text is organized around the risk level of AI systems, using a pyramid structure similar to that used for nuclear threats: minimal risk, limited risk, high risk, unacceptable risk. Each level is matched with:

    • Requirements or obligations, which the Commission and the European Parliament are still negotiating and which are detailed in the annexes;
    • Prohibitions.

The European AI Board and the competent national authorities verify compliance and impose sanctions.
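To make the pyramid concrete, the tiered logic described above can be sketched as a simple lookup. This is purely illustrative: the tier names follow the text, but the example systems and obligations are simplified placeholders, not the Act's actual annexes.

```python
# Illustrative sketch of the AI Act's risk pyramid (not the legal text).
# Tier names follow the proposal; obligations here are simplified placeholders.

RISK_TIERS = {
    "minimal": [],                        # e.g. spam filters: no special duties
    "limited": ["transparency notice"],   # e.g. chatbots: disclose the AI
    "high": ["conformity assessment",     # e.g. credit scoring, recruitment
             "human oversight",
             "logging"],
    "unacceptable": None,                 # prohibited outright
}

def obligations_for(tier: str):
    """Return the obligations for a tier, or None if the use is prohibited."""
    return RISK_TIERS[tier]

print(obligations_for("limited"))
print(obligations_for("unacceptable") is None)
```

The point of the structure is that duties scale with risk: lower tiers carry light transparency duties, the high tier carries heavy compliance duties, and the top tier is simply banned.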

The use of automated decision-making systems is becoming increasingly widespread. In this context, it is essential to be able to confirm, interpret, and explain the suggestions the algorithms produce. These systems are inexorably becoming part of everyday human life, for example in:

    • The granting of bank loans;
    • Facial recognition;
    • Student selection in higher education.
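What "interpreting and explaining" an automated decision can mean is easiest to see in a fully transparent rule. The sketch below is a hypothetical loan-decision function: every weight and threshold is made up for illustration (it is not any bank's policy), but because each factor's contribution is reported alongside the outcome, the decision can be explained to the applicant.

```python
# A hypothetical, fully transparent loan-decision rule: each factor's
# contribution is returned with the outcome, so the decision is explainable.
# All weights and thresholds are made-up illustrations, not real bank policy.

def explain_loan_decision(income: float, debt: float, years_employed: int):
    contributions = {
        "income_score": min(income / 10_000, 5.0),        # capped income factor
        "debt_penalty": -debt / 5_000,                    # debt lowers the score
        "stability_bonus": min(years_employed, 10) * 0.3, # job tenure helps
    }
    score = sum(contributions.values())
    decision = "approved" if score >= 3.0 else "refused"
    return decision, score, contributions

decision, score, why = explain_loan_decision(income=40_000, debt=5_000,
                                             years_employed=4)
print(decision, round(score, 2))
print(why)  # every factor behind the decision is visible
```

Opaque learned models offer no such breakdown by default, which is exactly the transparency gap the text describes.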

The algorithm used by Parcoursup has exposed the opacity of those that universities rely on to admit students.
