Privanova

Building Ethical AI Through Dialogue: Inside the ELOQUENCE Community of Experts

As artificial intelligence (AI) continues to expand its presence in high-impact areas such as healthcare, emergency response, and public services, one question becomes increasingly urgent: how do we ensure these systems are fair, trustworthy, and aligned with human values, especially across languages, cultures, and social contexts?

This is where the ELOQUENCE Community of Experts (CoE) steps in. More than just an advisory body, the CoE is a unique initiative within the EU-funded ELOQUENCE project, designed to anchor cutting-edge AI development in ethical, inclusive, and multidisciplinary thinking.

Led by Privanova, the project partner responsible for ethics, legal, and societal aspects, the CoE is playing a vital role in ensuring that the foundations of ELOQUENCE’s technology are aligned with EU values from the ground up.

A Community With a Purpose

ELOQUENCE, an ambitious Horizon Europe project focused on developing multilingual, bias-aware conversational AI for safety-critical applications, recognises that AI systems are not built in a vacuum. They reflect the data they are trained on, the objectives they are given, and the values of the societies in which they operate. When deployed in domains such as emergency services or public administration, these systems must do more than function: they must be trusted, understood, and fair.

To ensure this, ELOQUENCE created the Community of Experts, a group of leading academics, policy specialists, technical experts, and civil society actors brought together to reflect on the project’s ethical, societal, and practical challenges. The CoE is designed as an interface between research and society, offering both strategic guidance and hands-on feedback.

Privanova’s Role

As the partner responsible for ethics, legal, and societal aspects within ELOQUENCE, Privanova plays a dual role in relation to the CoE. First, it curates and coordinates the community itself, facilitating engagement, shaping discussions, and ensuring the group remains relevant and representative. Second, it acts as a bridge between the CoE and the project’s technical teams, translating concerns and recommendations into actionable project guidance.

The Community of Experts is a core part of the project’s governance. Privanova’s role is to ensure its expertise feeds into deliverables, decisions, and ultimately, the values embedded in the technology. The aim is not to burden experts with time-consuming obligations, but to create light-touch, high-value opportunities for dialogue through thematic meetings, written feedback on deliverables, and periodic briefings.

A Meeting of Minds: Bias, Fairness, and Trust

The most recent CoE meeting, held on 13 June 2025 as part of the ELOQUENCE General Assembly in Barcelona, provided a perfect example of this model in action.

Held in a hybrid format, the 75-minute session brought together experts from multiple fields to engage in a focused discussion on a key challenge for the project: how we frame and evaluate bias mitigation, particularly in multilingual and multicultural contexts.

This topic, titled “From Bias to Fairness: Are We Asking the Right Questions?”, was not chosen arbitrarily. It sits at the centre of ELOQUENCE’s mission to develop conversational AI that works effectively and fairly across all 24 official EU languages, including those that are under-resourced or under-represented in training data. The discussion explored what fairness means in practical terms, how bias shows up in linguistic systems, and how developers can avoid reducing fairness to merely a technical checkbox.

Participants shared perspectives on a range of topics, from cultural nuances in dialogue systems to the risks of reinforcing stereotypes in public-facing AI tools. The discussion also directly contributed to one of the project’s key deliverables, D6.2: Emerging ELOQUENCE Technology – Approved by the Community. The report serves a critical purpose: to assess the project’s early-stage technological outputs at Technology Readiness Level 3 through an ethical lens. The deliverable evaluates whether ELOQUENCE’s emerging conversational AI technologies respect EU values, with a particular focus on risks related to gender, cultural, and racial bias. The endorsement by the Community of Experts adds legitimacy to D6.2, framing it as a community-validated benchmark for fairness and trust at this formative stage in development.

Sustainable, Inclusive Engagement

One of the challenges in coordinating expert engagement across a multi-year project is avoiding expert fatigue. To that end, Privanova is developing a tiered engagement model that allows members to contribute at their preferred pace through periodic meetings, short surveys, brief commentaries, or light reviews of deliverables. Future activities include:

  • Themed online consultations on topics like explainability and human-in-the-loop design
  • A biannual ethics and fairness update
  • Opportunities for CoE members to contribute thought pieces or spotlight challenges from their domains

This lightweight model ensures continuity while remaining responsive to the project’s evolving needs.

Looking Ahead

With D6.2 now approved and endorsed by the CoE, ELOQUENCE is entering a new phase of development. The insights gathered will shape not just future deliverables, but also the project’s real-world pilots, where trust, fairness, and multilingual performance will be tested in action. The Community of Experts will continue to play a key role, supporting Privanova in reviewing, guiding, and strengthening the ethical backbone of the project. In doing so, the ELOQUENCE project offers a valuable model for others: a way of ensuring that ethics isn’t bolted on after the fact, but baked in from the beginning, with the community as a co-creator, not a bystander.