Artificial intelligence constitutes a major turning point for many European industries (automotive, finance, defence, etc.) and for civil society as a whole. It is advancing through the use of image recognition and neural networks (deep learning).

Such networks are, however, ‘black boxes’, for which there is currently no standardised process to verify that they are free of bias and function correctly. Such standardisation is required to enable the use of artificial intelligence in critical systems (drones, cars, etc.).

At present, countries such as the US and China are moving fast to put their own standards in place, which could be detrimental to European citizens and companies, who may consequently enjoy a lower level of protection.

Is the Commission aware of this problem?

How does the Commission intend to defend the interests both of companies developing artificial intelligence systems and of their users (public and private)?

What is the Commission’s view on the balance to be struck between the freedom to experiment and the regulation of the use of artificial intelligence?