AI and Confusion: Why Intersectional Fairness Matters in AI, a conversation with Ilina Georgieva from DIVERSIFAIR

Artificial intelligence (AI) is not neutral. As Ilina Georgieva, a researcher at the Netherlands Organisation for Applied Scientific Research (TNO) and a lead scientist on the DIVERSIFAIR project, explains, AI often reproduces and amplifies existing inequalities, especially for people who live at the intersection of multiple marginalized identities. “Thinking about these biases from the perspective of intersectionality allows us to examine the combined, compounded effects that basically result in the under-representation of this specific group,” she says. That observation sets the tone for a conversation that moves quickly from concepts to concrete recommendations.

What is intersectional bias in AI?

Intersectional bias appears when systems designed around single demographic categories miss how attributes like race, gender, class, and disability combine in real people’s lives. Ilina gives a clear example: facial recognition systems that “perform very poorly on, for instance, women of color.” Those are not isolated failures: they are the product of models trained on biased data and evaluated with narrow metrics. As Ilina points out, an intersectional lens helps us see under-representation, erasure, mislabeling, and misgendering that single-axis analyses tend to overlook.
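To make the single-axis blind spot concrete, here is a toy sketch with hypothetical numbers (not from the interview): when an intersecting subgroup is small and under-represented, accuracy aggregated by gender alone or race alone can look acceptable while the intersection fares much worse.

```python
from collections import defaultdict

# Hypothetical evaluation counts: (gender, race) -> (correct, total).
# The numbers are illustrative only.
results = {
    ("man",   "white"): (9, 10),
    ("man",   "black"): (9, 10),
    ("woman", "white"): (9, 10),
    ("woman", "black"): (1, 2),   # small, under-represented subgroup
}

def accuracy_by(axis_index):
    """Aggregate accuracy along a single demographic axis (0=gender, 1=race)."""
    agg = defaultdict(lambda: [0, 0])
    for group, (correct, total) in results.items():
        agg[group[axis_index]][0] += correct
        agg[group[axis_index]][1] += total
    return {key: c / t for key, (c, t) in agg.items()}

by_gender = accuracy_by(0)        # men: 0.90, women: ~0.83 — looks tolerable
by_race = accuracy_by(1)          # white: 0.90, black: ~0.83 — looks tolerable
by_intersection = {g: c / t for g, (c, t) in results.items()}
print(by_intersection[("woman", "black")])  # 0.5 — the gap single axes hide
```

Because the subgroup contributes so few samples, its poor performance barely moves either single-axis average; only disaggregating by the intersection exposes it.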

Concrete harms in everyday systems

The harms are practical and painful. Ilina warns that algorithms inherit structural disadvantages from the data and business models that produce them. These are not abstract problems; they affect who gets a job, a loan, a ride, or fair treatment.

Why data practices must change

Data alone won’t fix the problem. Ilina stresses that the whole data value chain needs attention, from the initial business case to collection, storage, and governance. “The data collection itself should give them agency… control of their data and ownership of their data, especially when it comes to data on those intersecting identities,” she argues. In practice, that means designing data practices that capture lived experience rather than forcing people into crude boxes, and creating mechanisms so communities can shape how their data is used:

  • Agency over data: Let communities decide what is collected and how it’s shared.
  • Context-rich data: Include social determinants, such as neighborhood conditions and access to services.
  • Not just anonymization: Ownership, consent, and participatory practices matter as much as technical safeguards.

Practical steps organizations can take

When I asked Ilina for actionable advice, she emphasized three connected strategies: 

  • Co-creation and meaningful participation: Bring affected people into the design process, not as a checkbox, but as key partners. “Securing the belonging of whatever community, marginalized community, individuals is key… then you get to create actually something holistic and create with the deployment of the AI system, meaning beyond just metrics,” Ilina explains. True participation requires time, resources, and an openness to change project timelines.
  • Include social determinants in data and evaluation: Systems should measure structural factors, like economic barriers, neighborhood services, or caregiving responsibilities, that shape outcomes. Without that context, fairness metrics fail to explain why a model behaves unjustly.
  • Adaptive governance: AI governance must be flexible and ongoing. Ilina notes that organizations need governance systems that can adapt as social contexts and technology co-evolve: “An organization that is trying to govern the data must also be ready to have flexibility in the maintenance of the nuance and of the context.”
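The second strategy above can be illustrated with a hedged sketch (all data hypothetical, invented for illustration): a raw error-rate gap between two groups can be largely explained by a structural factor, such as access to services, once the evaluation is stratified by it. Without that covariate in the data, the metric shows the disparity but cannot explain it.

```python
from collections import defaultdict

# Hypothetical records: (group, service_access, model_error). Illustrative only.
records = (
    [("A", "good", False)] * 80 + [("A", "good", True)] * 10 +
    [("A", "poor", False)] * 5  + [("A", "poor", True)] * 5 +
    [("B", "good", False)] * 16 + [("B", "good", True)] * 2 +
    [("B", "poor", False)] * 40 + [("B", "poor", True)] * 40
)

def error_rate(rows):
    return sum(err for *_, err in rows) / len(rows)

# Overall rates show a large gap: A ~0.15 vs B ~0.43.
overall = {g: error_rate([r for r in records if r[0] == g]) for g in ("A", "B")}

# Stratified by service access, the within-stratum gap vanishes in this toy:
# the disparity tracks who has poor access, a structural factor.
stratified = defaultdict(dict)
for g in ("A", "B"):
    for access in ("good", "poor"):
        rows = [r for r in records if r[0] == g and r[1] == access]
        stratified[g][access] = error_rate(rows)
```

In this toy the model errs equally often for both groups at the same access level; group B looks worse overall only because more of its members face poor access, which is exactly the kind of context Ilina argues evaluations must capture.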

Final takeaway

Intersectional fairness is not an optional add-on; it changes how systems are conceived, built, and governed. As Ilina puts it, the approach creates visibility and empowers marginalized communities. It demands more time, interdisciplinary work, and institutional flexibility; however, without these investments, AI will continue to reproduce harm instead of reducing it.

Listen to the full interview

For the full conversation and more examples from Ilina, listen to the complete interview on our Spotify channel. Tune in to hear the extended discussion and additional recommendations straight from Ilina and the DIVERSIFAIR team.