Privanova

MARVEL: a case study for Ethical AI

Ethical AI – Algorithmic biases and the potential for misuse

By using cutting-edge technologies, the MARVEL project aims to provide a framework that public authorities can use to detect audio-visual events and achieve contextual awareness in smart cities. The MARVEL technology could help improve safety by monitoring certain areas and optimising traffic when and where it is needed.

Notwithstanding the considerable benefits that smart cities bring to the quality of life, they may also raise privacy, security, and ethical concerns. MARVEL aspires to develop a lawful, ethical, and robust AI system that performs in a safe, secure, and trustworthy manner and does not cause unintentional harm. Therefore, MARVEL must avoid algorithmic bias. Moreover, it should ensure the principle of fairness, together with diversity and non-discrimination. In addition, MARVEL should take precautions to prevent potential algorithmic biases and demonstrate how the model can justify the results it provides in specific situations.

The MARVEL framework and its building blocks are composed of many systems and assets. The framework must be analysed in order to assess the likelihood that these assets and systems could lead to misuse, and the respective mitigation measures should be identified and proposed. These ethics requirements are laid down by the relevant Horizon 2020 rules that regulate the transversal nature of ethics in research and scientific projects. By respecting these rules, MARVEL will be able to build not only lawful but also ethical AI.

Ethical AI + Lawful AI = Trustworthy AI

The baseline for all rights and freedoms granted by EU law is respect for human dignity. EU law is therefore described as taking a ‘human-centric approach’, in which the ‘human being enjoys a unique and inalienable moral status of primacy in the civil, political, economic and social fields.’ Fundamental rights form the first component of the European Commission’s ‘Ethics Guidelines for Trustworthy AI’.

Trustworthy AI (often equated with lawful AI) not only serves to satisfy legal requirements but is also the foundation for ethics compliance. It would therefore not be wrong to say that law and ethics (and consequently lawful AI and ethical AI) are inseparable concepts. In fact, lawful AI means that the development, deployment, and use of AI are in accordance with the various legally binding rules and laws.

Considering that AI should improve individual and collective wellbeing, the ethical principles applied to Trustworthy AI are rooted in fundamental rights. Fundamental rights are ethical imperatives, and hence all AI practitioners should adhere to them. The Guidelines refer to the principles set out in the EU Charter as a mirror of fundamental rights. Thus, the promoted principles of Trustworthy AI are:

  • Respect for human autonomy
  • Prevention of harm
  • Fairness
  • Explicability

Trustworthy AI

Trustworthy AI is grounded in seven key requirements that AI systems should meet. These requirements apply to various stakeholders, such as developers, deployers, end-users, and broader society. The requirements are:

  • Human agency and oversight (including fundamental rights, human agency and human oversight).
  • Technical robustness and safety (including resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility).
  • Privacy and data governance (including respect for privacy, quality and integrity of data, and access to data).
  • Transparency (including traceability, explainability and communication).
  • Diversity, non-discrimination and fairness (including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation).
  • Societal and environmental wellbeing (including sustainability and environmental friendliness, social impact, society and democracy).
  • Accountability (including auditability, minimisation and reporting of negative impact, trade-offs and redress).

MARVEL - Trustworthy AI Guidance

Implementation of the above principles should prevent the occurrence of AI bias. AI (or algorithmic) bias describes systematic and repeatable errors in a computer system that create unfair outcomes, such as favouring one arbitrary group of users over others. In addition, implementing these principles serves to ensure fairness (equal treatment in accordance with accepted social standards), inclusion, and respect for diversity.
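To make the notion of algorithmic bias concrete, below is a minimal sketch of one widely used bias check, the demographic parity gap: if a system produces positive outcomes at very different rates for different groups, the gap is a warning sign. The data and group labels are hypothetical illustrations, not part of the MARVEL framework.

```python
def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-outcome rates between groups.

    predictions: 0/1 model decisions; groups: a group label per decision.
    """
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + pred)
    shares = {g: positives / total for g, (total, positives) in rates.items()}
    return max(shares.values()) - min(shares.values())

# Hypothetical decisions: group "A" is flagged far more often than group "B".
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5 -> worth investigating
```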

Trustworthy AI in MARVEL

One of the measures taken within MARVEL toward ethical AI is to consult stakeholders who may be directly or indirectly affected by the AI system. Their feedback is expected to be significant for building trustworthy AI. Therefore, MARVEL is building a mechanism that will enable the inclusion of the widest possible range of stakeholders in the AI system’s design and development.

Development of the AI system demands continuous monitoring of processes to detect possible failures, address them, and improve the overall development process. It is advisable to make algorithms publicly accessible, allowing external parties to feed in their own datasets and examine the results; a sketch of such an external audit is given below. This supports the accountability mechanism and the reporting of cases where the model may not behave fairly. Making datasets publicly available reduces the scarcity of representative data that contributes to the under- or misrepresentation of groups of people when models are built, and it enables other organisations to use those datasets to build less biased models.
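The sketch below illustrates the kind of external audit the paragraph describes: a third party runs a published model on its own labelled dataset and compares per-group performance. The `predict` interface, the group names, and the stand-in model are assumptions for illustration, not part of the actual MARVEL assets.

```python
from collections import defaultdict

def audit_per_group(model, samples):
    """samples: iterable of (features, true_label, group) triples."""
    stats = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for features, label, group in samples:
        stats[group][0] += int(model.predict(features) == label)
        stats[group][1] += 1
    return {g: correct / total for g, (correct, total) in stats.items()}

class ThresholdModel:
    """Stand-in for a released model; the real interface is an assumption."""
    def predict(self, x):
        return int(x > 0.5)

samples = [
    (0.9, 1, "district_A"), (0.2, 0, "district_A"),
    (0.9, 0, "district_B"), (0.2, 1, "district_B"),
]
print(audit_per_group(ThresholdModel(), samples))
# {'district_A': 1.0, 'district_B': 0.0} -> a spread this large would be
# reported back to the developers under the accountability mechanism.
```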

The MARVEL Project Consortium has conducted a comprehensive analysis of the MARVEL framework, including all its systems and assets. The analysis demonstrated that the likelihood of AI bias within the MARVEL AI system is low: the datasets used to develop the AI components come from IoT devices placed in freely accessible public areas. Nevertheless, MARVEL treats AI bias as a potential risk and has identified a set of precautions, summarised below:

  • careful selection/augmentation of training datasets – no biased training data will be used;
  • ensure that data will come from a diverse and representative set of data subjects (a sketch of such a representativeness check follows this list);
  • the data acquisition will cover a fair time span;
  • no selection or rejection of certain types of input data will be performed;
  • continuous monitoring of the results to identify potential issues related to bias, discrimination, or poor performance of the AI system;
  • ensure that its components will work reliably and efficiently across different cities in different countries;
  • equal access to services developed for MARVEL’s architecture;
  • the developed AI models/algorithms and the respective datasets will become publicly available;
  • a wide range of stakeholders will be involved in the design and development of the MARVEL AI system;
  • the impact of the AI system on the potential end-users and/or subjects will be assessed.
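As referenced above, the following is a minimal sketch of how the representativeness of a dataset could be screened before training. The metadata field ("city"), the record layout, and the 5% threshold are hypothetical assumptions, not MARVEL specifications.

```python
from collections import Counter

def coverage_report(records, field, min_share=0.05):
    """Flag values of `field` that contribute less than `min_share`
    of all records, i.e. potentially under-represented slices."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {v: n / total for v, n in counts.items() if n / total < min_share}

# Hypothetical example: one city contributes only 1% of the recordings,
# so models trained on this data may perform poorly there.
records = [{"city": "city_A"}] * 99 + [{"city": "city_B"}]
print(coverage_report(records, "city"))  # {'city_B': 0.01}
```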