Artificial Intelligence and Ethics in EU-funded Projects

Artificial Intelligence is the aim or the enabler of many ongoing EU-funded projects and upcoming Calls for proposals. At the EU level, Artificial Intelligence (AI) is perceived as a key element of economic growth and competitiveness, and as one of the most important applications of a digital economy built on the processing of data. At the same time, the human and ethical implications of AI, and its impact on various research areas, are increasingly being highlighted.

For this reason, the issue of Artificial Intelligence and Ethics is addressed at different levels within the EU – from policy, through legal and regulatory frameworks, to research and development.

Artificial Intelligence and Ethics: a policy framework

The cornerstone document outlining the EU’s approach to Artificial Intelligence and Ethics is the European Commission’s White Paper On Artificial Intelligence – A European approach to excellence and trust. This document presents “policy options to enable a trustworthy and secure development of AI in Europe, in full respect of the values and rights of EU citizens”. 

The White Paper builds on two main elements:

  • the policy framework
  • the key elements for a future regulatory framework

The first element focuses on the “ecosystem of excellence”, aiming to capitalize on the potential benefits of AI across all sectors and industries. This part addresses the research and industrial infrastructure in the EU and engages the potential of its computing infrastructure (e.g. high-performance computers). The document clearly states: “the centres and the networks should concentrate in sectors where Europe has the potential to become a global champion such as industry, health, transport, finance, agrifood value chains, energy/environment, forestry, earth observation and space.”

Artificial Intelligence and Ethics: a regulatory approach

The second element, the future regulatory framework, addresses the potential risks AI may present and aims to propose adequate safeguards. To ensure a proper balance between Artificial Intelligence and Ethics, the Commission created a discussion platform organized around the High-Level Expert Group on Artificial Intelligence.

The Group has already identified seven key requirements for trustworthy AI:

  • Human agency and oversight,
  • Technical robustness and safety,
  • Privacy and data governance,
  • Transparency,
  • Diversity, non-discrimination and fairness,
  • Societal and environmental wellbeing, and
  • Accountability

While non-binding, these requirements will influence the ongoing discussion on the potential regulation in this area – Proposal for a Regulation laying down harmonised rules on artificial intelligence.

Seven requirements for a trustworthy AI

When it comes to the regulatory framework, the EC’s White Paper outlines the potential harms AI may bring: “This harm might be both material (safety and health of individuals, including loss of life, damage to property) and immaterial (loss of privacy, limitations to the right of freedom of expression, human dignity, discrimination for instance in access to employment), and can relate to a wide variety of risks. A regulatory framework should concentrate on how to minimise the various risks of potential harm, in particular the most significant ones.”

Bearing in mind the speed at which AI is being developed and its potential implications, it will be necessary to integrate flexibility into the regulatory framework to ensure it is future-proof. While the Proposal for a Regulation laying down harmonised rules on artificial intelligence must be clear enough to be applicable in real life, it must also remain adaptable to future solutions.

Addressing Artificial Intelligence and Ethics issues from an ethics perspective

The EU’s policies and priorities are translated into Calls for proposals for what later become EU-funded projects, and the EC’s approach to artificial intelligence and ethics is reflected in them.

In this context, targeting AI in its research framework programmes serves the purpose of coordinating investments and maximising research outputs of programmes such as Digital Europe and Horizon Europe. This, however, includes the necessity of ensuring ethics compliance concerning the use of AI in research projects.

To provide guidance on Artificial Intelligence and Ethics in EU projects, the EC published the “Ethics Guidelines for Trustworthy AI” – a document that is now systematically referenced during ethics evaluations and almost always recommended to applicants and consortia.

Since AI can be used, in breach of EU privacy and data protection rules, to de-anonymise data about individuals, or to enable mass surveillance by analysing vast quantities of personal data and identifying links among them, it is also relevant to mention the EC’s Guidance Note on “Ethics and Data Protection”.
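To make this re-identification risk concrete, the toy sketch below shows a linkage attack in miniature: joining a pseudonymised dataset with a public register on shared quasi-identifiers (here, ZIP code and birth year) is enough to put a name back on a sensitive record. Every record, field and name is invented purely for illustration.

```python
# Hypothetical illustration of de-anonymisation by record linkage.
# All data below is invented; no real dataset is referenced.

# A "pseudonymised" health dataset: names removed, but quasi-identifiers kept.
anonymised_health = [
    {"zip": "75001", "birth_year": 1980, "diagnosis": "condition X"},
    {"zip": "69002", "birth_year": 1975, "diagnosis": "condition Y"},
]

# A separate, public dataset containing names and the same quasi-identifiers.
public_register = [
    {"name": "Alice", "zip": "75001", "birth_year": 1980},
    {"name": "Bob", "zip": "13003", "birth_year": 1990},
]

# Linking records that share the same quasi-identifiers re-identifies people.
reidentified = [
    {"name": p["name"], "diagnosis": h["diagnosis"]}
    for h in anonymised_health
    for p in public_register
    if (h["zip"], h["birth_year"]) == (p["zip"], p["birth_year"])
]
print(reidentified)  # Alice is linked back to "condition X"
```

Real attacks follow the same pattern at much larger scale, which is why combinations of seemingly harmless attributes can still identify individuals and must be treated with care under data protection rules.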

Finally, in the AI context, in rare cases, ethics evaluators and the consortia or applicants fulfilling ethics requirements must also consider AI from the military/civil application viewpoint. Two main documents are of interest here: the Guidance note “Research with an exclusive focus on civil applications” and the Guidance note “Research involving dual-use items”.

Artificial Intelligence and Ethics: achieving compliance

AI4HealthSec: enhancing cybersecurity in Healthcare ICT infrastructures

AI4HealthSec project

Over the past decade, the medical field has experienced massive digitization. The value of personal medical data has increased on the black market and, as a result, adversaries of Health Care Information Infrastructures (HCIIs) are now more numerous and better skilled. The AI4HealthSec project proposes a state-of-the-art solution that improves the detection and analysis of cyber-attacks and threats on HCIIs, and increases knowledge of current cybersecurity and privacy risks. Additionally, AI4HealthSec builds risk awareness within the digital Healthcare ecosystem and among the Health operators involved, to enhance their insight into their Healthcare ICT infrastructures and provide them with the capability to react to security and privacy breaches. Last but not least, AI4HealthSec fosters the exchange of reliable and trusted incident-related information.

Following the successful scientific evaluation of the proposal, AI4HealthSec was evaluated by a panel of ethics experts. The ethics requirements of the project addressed several categories: involvement of human participants, protection of personal data and inclusion of third countries (non-EU).

As the project relies on a cutting-edge AI application in the highly regulated medical field, Privanova’s approach to ethics compliance relied on three main components that should be met during the entire life cycle of the AI:

  • Lawful AI – respecting all applicable laws and regulations
  • Ethical AI – respecting ethical principles and values
  • Robust AI – from a technical perspective, while taking into account its social environment

Inspired by the Ethics Guidelines for Trustworthy AI, we helped the Coordinator and our project partners go beyond legal obligations and strike an adequate balance while respecting the ethical requirements applicable to AI.

MARVEL: addressing Artificial Intelligence biases within Multimodal, Extreme-Scale Data Analytics for Smart Cities Environments

MARVEL delivers a disruptive Edge-to-Fog-to-Cloud ubiquitous computing framework that enables multi-modal perception and intelligence for audio-visual scene recognition and event detection in a smart city environment.

The project had to address the issue of Artificial Intelligence and Ethics in a very interesting context. In particular, under the expert guidance of the Project Coordinator, the whole project team worked together to demonstrate the elimination or mitigation of potential algorithmic biases perceived as a threat to the project by the ethics reviewers.

AI (or algorithmic) bias describes systematic and repeatable errors in a computer system that create unfair outcomes, such as favoring one arbitrary group of users over others. AI bias is found across platforms, including but not limited to search engine results and social media platforms, and can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity.
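As a concrete illustration of how such bias can be detected, the toy sketch below computes one common fairness metric, the demographic parity difference – the gap in favourable-outcome rates between groups. The groups, decisions and choice of metric are assumptions for illustration only, not a description of MARVEL’s actual methods.

```python
# Illustrative sketch: measuring demographic parity difference on
# hypothetical model outcomes. All data below is invented.

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-outcome rates per group."""
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    return max(rates.values()) - min(rates.values())

# Hypothetical binary decisions (1 = favourable outcome) for two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap 0.50
```

A gap near zero suggests the system treats the groups similarly on this metric; a large gap is a signal to investigate. In practice, several complementary metrics (e.g. equalised odds) are usually examined, since no single number captures fairness.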

MARVEL – project homepage

Building on the high-quality feedback we received from the project coordinator, the technical manager and all project partners, Privanova analysed the relevant applicable framework and implemented it, fulfilling the ethics requirements. As a result, MARVEL achieved better overall ethics compliance, including safeguards for diversity and non-discrimination and ensuring fairness across all project outcomes. By following EU guidance, the project successfully implemented measures preserving the principle of transparency and effectively safeguarding its results from emerging AI biases.

CYRENE: Certifying the security and resilience of supply chain services

The CYRENE project aims to promote trust and confidence of the European consumers, providers and suppliers by enhancing the security, privacy, resilience, accountability and trustworthiness of Supply Chains. One of the main impacts the project will achieve is to pave the way for a competitive and trustworthy Digital Single Market.

CYRENE aims to support the security and resilience of Supply Chains (SCs) through the following certification schemes:

  • Security Certification Scheme for Supply Chain (e.g. risk assessment tool and process);
  • ICT Security Certification Scheme for ICT-based or ICT-interconnected Supply Chain;
  • ICT Security Certification Scheme for SCs’ (e.g. Maritime, Transport or Manufacturing) IoT devices and ICT systems, which should differ from traditional IoT devices and systems in that more stress is put on data protection and privacy issues.

Engaging AI, Information Mining and Deep Learning in security and privacy risk evaluation facilitates the detection and analysis of new, sophisticated and advanced persistent threats, the handling of complex cybersecurity incidents and data breaches, and the sharing of security-related information.

In this context, CYRENE introduces AI and machine learning techniques to allow the systems and services to monitor a wider number of factors towards identifying patterns of abnormal activity. It relies on two main components: the artificial intelligence (AI) engine, and the threat intelligence. The AI engine produces optimal learning models for threat detection and enables efficient model update. Threat intelligence, on the other hand, offers real-time cyber threat monitoring and intrusion detection.
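As a minimal sketch of the kind of statistical baseline such an engine might build on, the example below flags observations that deviate strongly from the norm. The metric, data and threshold are assumptions for illustration only, not CYRENE’s actual implementation.

```python
# Toy anomaly flagging: mark values far from the mean in standard deviations.
# A real AI engine would learn far richer models, but the principle of
# "identify patterns of abnormal activity" starts from baselines like this.
import statistics

def flag_anomalies(observations, threshold=2.5):
    """Return values more than `threshold` sample standard deviations
    away from the mean of the observations."""
    mean = statistics.mean(observations)
    stdev = statistics.stdev(observations)
    return [x for x in observations if abs(x - mean) > threshold * stdev]

# Hypothetical counts of connection attempts per minute on a monitored host.
traffic = [12, 14, 11, 13, 12, 15, 13, 240, 12, 14]
print(flag_anomalies(traffic))  # the burst of 240 stands out
```

In a deployed system, flagged events would then be enriched with threat-intelligence context (the project’s second component) before an alert is raised, keeping false positives manageable.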

From an ethics perspective, the use of AI within CYRENE does not require close monitoring. Its purpose is not necessarily the processing of personal data, so the risks it presents for the rights of individuals are quite limited. The project focuses on more technical aspects of cyber threat monitoring and privacy assurance (the how) rather than on the information including personal data (the what) that may be put at risk due to a potential security incident.

Nevertheless, because the requirements that AI systems must meet in order to be deemed trustworthy remain applicable, Privanova, as the ethics and legal lead of the project, remains vigilant in this regard. Together with the Project Coordinator and relevant partners, we consider accountability the key principle in the context of CYRENE. This in particular concerns the auditability of AI systems – an aspect which “enables the assessment of algorithms, data and design processes” and “plays a key role therein, especially in critical applications”.

IoT-NGIN: AI within the context of Next Generation IoT 

IoT-NGIN introduces novel research and innovation concepts, to establish itself as the “IoT Engine” that will fuel the Next Generation of IoT as a part of the European Next Generation Internet.

The IoT-NGIN project will develop software components based on federated Machine Learning (ML) to offer decentralized AI at the IoT node level. Moreover, research on ML frameworks and federated privacy-preserving ML training will accelerate ML deployment in various verticals, including energy, manufacturing and network optimisation.

The project aims to develop a hybrid platform, operating on the IoT devices, the edge and the cloud, that makes use of artificial intelligence techniques to generate ML models based on the data coming from IoT end-devices. This platform will facilitate the creation, monitoring, deployment, performance measurement and update of ML models customized for use case requirements (e.g. IoT device architecture or Internet connection availability).
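The core idea behind federated ML can be sketched in a few lines: each node trains locally and shares only its model parameters, which a coordinator combines into a global model – here via a weighted average in the spirit of federated averaging (FedAvg). The node data, weights and model shape below are invented for illustration; this is not IoT-NGIN’s actual implementation.

```python
# Toy federated averaging: combine per-node model weights without any node
# ever sharing its raw (potentially personal) training data.

def federated_average(local_weights, sample_counts):
    """Average per-node weight vectors, weighted by local sample count."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, sample_counts)) / total
        for i in range(dim)
    ]

# Hypothetical 2-parameter models from three IoT nodes after local training.
node_weights = [[0.2, 0.8], [0.4, 0.6], [0.3, 0.7]]
node_samples = [100, 300, 100]  # how much data each node trained on

global_model = federated_average(node_weights, node_samples)
print(global_model)  # roughly [0.34, 0.66]
```

Production frameworks apply the same scheme to full parameter tensors over many training rounds, often adding protections such as secure aggregation so that even individual weight updates are not exposed – which is what makes the approach attractive for privacy-preserving ML on personal IoT data.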

Privanova’s role in the IoT-NGIN project is to ensure compliance with the privacy, data protection and ethics principles of H2020. Focusing on the use of AI within the project, the Ethics Panel required a risk assessment to be performed, accompanied by details on the measures to prevent misuse of research findings. In its action plan, validating the Coordinator’s compliance efforts, Privanova narrowed the scope of the requirement by focusing on what really matters – in this case, respect for all applicable laws and regulations, and for the ethical principles and values concerning the deployment of AI within the project. This way, not only was ethics compliance ensured during the first phase of the project, but long-term monitoring was also implemented.