Privanova

The intensification of global efforts to regulate AI

The rapid advancement of artificial intelligence now reaches every sector, and the call for regulation has become a necessity. The unchecked scaling of AI can bring unintended consequences. Oversight of AI systems is therefore needed to mitigate bias, protect citizens’ privacy, and address security vulnerabilities.

Regulation ensures that AI development and deployment adhere to ethical standards, fostering fairness, accountability, and transparency. By imposing such rules and standards, regulators can prevent AI from becoming a force that harms rather than helps.

Today, a wave of AI regulation is emerging all over the world. This regulation not only mitigates risks but also fosters a fair and competitive AI ecosystem, keeping both the labour market and the research landscape dynamic.

Multilateral efforts towards accountable AI 

The international context is fertile ground for an overarching regulation of artificial intelligence. As the question of AI is now pressing across all sectors and geographical areas, it is paramount to develop an international framework that sets the ground for eventual legislation. In this context, the OECD Principles on Artificial Intelligence were adopted in May 2019 through the unanimous approval of the OECD Council Recommendation on Artificial Intelligence by member countries. The principles advocate for the development of innovative and trustworthy AI systems that uphold human rights and democratic values. The aim is to foster a human-centric development of AI while focusing on accountability, transparency, and security, as well as keeping the AI ecosystem dynamic.

The G7 is also continuously developing its own guidelines on AI through the Hiroshima Process, led by Japan. These guidelines, adopted in late October, are conceived as a living resource that adapts to the latest advancements in AI systems, thus building on and complementing the existing OECD AI Principles.

These efforts are also reflected in regional and national legislation, such as the EU’s ongoing trilogue negotiations on the AI Act, as well as the US Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, signed on October 30th.

AI regulation in Privanova’s work

As our projects aim to continuously find innovative ways to serve the general interest, many of our R&I efforts draw on this emerging AI momentum to propose cutting-edge solutions. Our role at Privanova often revolves around the legal and ethical accompaniment of our consortia; therefore, whenever AI is used, we assess its deployment against a set of ethical and legal rules stemming mainly from the EC’s Ethics Guidelines for Trustworthy AI. This was done, for instance, in TRACE, where illicit money trails are actively investigated through AI, or in AI4HEALTHSEC, where swarm intelligence is used to strengthen medical IT infrastructure.