Addactis Data Community

Towards more inclusive artificial intelligence: gender issues and perspectives

11/07/2025

An unbalanced representation of women in the field of AI

Artificial intelligence (AI) is omnipresent in our daily lives and is increasingly influencing social, economic and professional dynamics. Yet a gendered analysis of its ecosystem reveals profound inequalities and systemic biases that weigh heavily on the place and role of women in the field of AI.

This article offers an in-depth exploration of the many facets of this reality, based on around thirty reports, scientific publications and field studies: how are women represented in AI? Are there gender biases in algorithms? What are the impacts on the healthcare sector, and on insurance?

Persistent under-representation of women in STEM and AI

As has been noted for many years, women are still largely in the minority in scientific and technical fields, particularly in AI. Today, they represent around 22% of AI professionals worldwide, with an even smaller presence in management positions, universities and technical teams.
This situation is the result of a combination of factors: gender stereotypes about skills, biased educational guidance, a lack of female role models, self-censorship (i.e. self-exclusion from career paths perceived as masculine), family responsibilities that weigh more heavily on women in balancing work and private life, and sexism at work.

This trend can also be seen in the academic field. Women publish less in AI research journals, and their articles are cited less than men’s overall. Research such as “Voices of Her” or “Gender-Specific Patterns in the AI Scientific Ecosystem” shows that women’s contributions are undervalued in scientific publication networks.

Distrust fuelled by concerns about ethics, transparency and governance

Beyond the structural obstacles to entering the AI professions, several studies have highlighted a form of critical distance between women and this technology. This reticence does not stem from a lack of interest, but from a lack of confidence fuelled by legitimate concerns, often reported in the areas of health, data governance and ethical regulation.

Some women express fears about the use of their personal data, particularly in sensitive sectors such as health and finance. In these areas, algorithmic decisions can have concrete consequences (access to healthcare, credit, insurance). The lack of transparency about the decision-making criteria used by algorithmic systems, which are often opaque or difficult to understand, exacerbates this feeling of vulnerability.

Several publications (Fondation Jean-Jaurès, Magellan Partners, Harvard Business Review) also suggest that a lack of governance perceived as inclusive – committees dominated by male profiles, little public consultation, little consideration of gender issues – fuels this scepticism. These factors contribute to an ambivalent relationship between women and AI, combining hopes and expectations of progress with vigilance against the risk of reproducing inequalities.

Gender bias in algorithms: a mirror amplifying inequalities

AI, particularly in its generative forms (ChatGPT, Midjourney, etc.), learns from massive corpora of data, often drawn from the internet, social networks or digital archives. This data is permeated by stereotypes and sexist representations, which the models then reproduce.

Several studies (UNESCO, Conseil du statut de la femme du Québec, Public Sénat) show that generative AIs associate women more with domestic activities, physical appearance or caring professions, while men are associated with competence, power or technology.
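Such associations can be quantified directly. As a minimal sketch, assuming access to any word-embedding function, a WEAT-style test compares how strongly target words lean towards female versus male attribute words; the toy vectors below are invented purely for illustration:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_gap(embed, targets, female_attrs, male_attrs):
    """Mean cosine similarity of each target word to female vs. male
    attribute words; positive values indicate a female-leaning association."""
    gaps = []
    for w in targets:
        f = np.mean([cosine(embed(w), embed(a)) for a in female_attrs])
        m = np.mean([cosine(embed(w), embed(a)) for a in male_attrs])
        gaps.append(f - m)
    return float(np.mean(gaps))

# Toy 2-d vectors standing in for real embeddings (purely illustrative):
toy = {"she": [1.0, 0.1], "he": [0.1, 1.0],
       "nurse": [0.9, 0.2], "engineer": [0.2, 0.9]}
embed = lambda w: np.asarray(toy[w])

print(association_gap(embed, ["nurse"], ["she"], ["he"]))     # > 0: female-leaning
print(association_gap(embed, ["engineer"], ["she"], ["he"]))  # < 0: male-leaning
```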

These biases can have a real impact on women’s lives: discriminatory automated recruitment, less accurate facial recognition systems for racialised women, unequal access to services or care, incorrect or inappropriate medical diagnoses. Non-inclusive AI can thus reinforce existing discrimination.
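To make the recruitment example concrete: auditors often apply the "four-fifths rule", under which the selection rate of the protected group should be at least 80% of the reference group's. The sketch below, run on invented decisions, is a generic illustration rather than a method drawn from the studies cited:

```python
def disparate_impact_ratio(decisions, groups, protected="female", reference="male"):
    """Selection rate of the protected group divided by that of the
    reference group; values below 0.8 (the 'four-fifths rule') are a
    classic red flag for adverse impact."""
    def rate(g):
        subset = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(subset) / len(subset)
    return rate(protected) / rate(reference)

# Invented example: shortlisting decisions (1 = shortlisted) by gender.
decisions = [1, 0, 1, 0, 1, 1, 1, 1]
groups = ["female"] * 4 + ["male"] * 4
print(disparate_impact_ratio(decisions, groups))  # 0.5 -> below 0.8, adverse impact
```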

Health, AI and gender: towards truly inclusive medicine

As mentioned above, healthcare offers a prime vantage point for observing the gender biases conveyed by AI technologies. Historically, medicine has long been built around a default male model, relegating women's specific biological and symptomatic characteristics to the background. Today, this legacy is reflected in automated care systems and in the databases on which artificial intelligences are trained.
Studies have shown that symptoms of cardiovascular disease in women are often underestimated or misinterpreted. Pathologies specific to women (endometriosis, PCOS, chronic menstrual pain, etc.) are also poorly integrated into diagnostic tools.

Yet AI could play a major role in reducing these inequalities, provided that the tools are designed to be inclusive. Here are a few examples:

  • A deep-learning algorithm analysing mammograms has enabled earlier detection of breast cancer risk.
  • AI improves the prediction of post-partum complications and supports care pathways adapted to reproductive health.
  • Digital platforms are facilitating better access to medical information and personalised monitoring for women.

Gender inequalities in health also affect mental health, an area that is still insufficiently taken into account in current systems. Women are more frequently affected by certain psychological pathologies, such as anxiety disorders, depression and burn-out. These conditions, which are often under-diagnosed and stigmatised, would benefit from being better integrated into AI-assisted prevention and detection tools. Analysis of behavioural or physiological data (sleep, pace of life, language) could contribute to early identification and appropriate support. To do this, the models need to be trained in a way that is representative and sensitive to gender variations.
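
One concrete way to check that such tools are "representative and sensitive to gender variations" is to report detection performance per group rather than only in aggregate. The following sketch, on invented labels for a hypothetical screening model, surfaces a recall gap between groups:

```python
from collections import defaultdict

def recall_by_group(y_true, y_pred, groups):
    """Per-group recall: of the truly positive cases in each group,
    the share the model actually flags. A gap signals under-detection."""
    tp, pos = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:
            pos[g] += 1
            tp[g] += int(p == 1)
    return {g: tp[g] / pos[g] for g in pos}

# Invented example: a screening model that misses more true cases in women.
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 0, 0, 1, 1, 1, 1, 0]
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]
print(recall_by_group(y_true, y_pred, groups))  # {'F': 0.5, 'M': 0.75}
```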

Finally, gender-balanced governance of digital health remains essential to guarantee inclusive medicine.

AI and insurance: between discriminatory risks and equity levers

The insurance sector, particularly health and provident insurance, is using AI to refine risk assessment. But this practice is not neutral: if models are built on biased data or over-represent certain profiles, they risk reinforcing existing inequalities. Women, for example, may be penalised for gender-specific health histories (pregnancies, hormonal disorders) when these factors are not put into context.

Faced with this challenge, several avenues for action have emerged:

  • The adoption of global risk assessment approaches, incorporating behavioural factors rather than focusing solely on female pathologies.
  • Transparency on the criteria used to set insurance premiums, with audits of algorithms to detect indirect discrimination. Some insurers, particularly in loan insurance, are testing more interpretable scoring solutions in which the acceptance criteria are explained more clearly (e.g. job stability, absence of serious medical history, healthy lifestyle), thereby limiting the opacity of automated decisions and promoting fairness.
  • Measuring and mitigating biases via so-called “fairness-aware” AI techniques, which incorporate fairness constraints, such as the fairness-aware PCA applied to life insurance pricing and mortality prediction (arXiv study), making it possible to correct certain proxy-discrimination effects; a simplified sketch follows this list.
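
As a loose illustration of the intuition behind such proxy mitigation (not the fairness-aware PCA method of the cited study itself), one simple step is to project rating features onto the subspace orthogonal to the sensitive attribute, so that no linear combination of the transformed features correlates with it:

```python
import numpy as np

def remove_gender_direction(X, s):
    """Project the centred feature matrix X onto the subspace orthogonal
    to the centred sensitive attribute s, so that no linear combination
    of the transformed features correlates with it. A crude proxy-
    mitigation step, illustrating the intuition only."""
    Xc = X - X.mean(axis=0)
    sc = (s - s.mean()).reshape(-1, 1)
    beta = (sc.T @ Xc) / (sc.T @ sc)   # regression of each feature on s
    return Xc - sc @ beta

# Toy example: 5 policyholders, 3 rating features, s = gender coded 0/1.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
s = np.array([0, 1, 0, 1, 1], dtype=float)
X_fair = remove_gender_direction(X, s)
print(np.round(X_fair.T @ (s - s.mean()), 10))  # ~[0. 0. 0.]: decorrelated
```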

AI as a prevention tool

AI can also be a positive lever for prevention. With a view to active prevention and personalised support, some health insurers are beginning to develop medical monitoring programmes for their policyholders who have been identified as presenting an increased risk – whether due to a medical history or behavioural factors detected using predictive tools. Although these initiatives are still in their infancy, they are evidence of a move towards a more individualised approach to risk management.

In this context, insurers have an important role to play in prevention. They can, subject to compliance with personal data protection regulations:

  • Customise prevention messages based on health or lifestyle data (age, diet, physical activity, sleep, family history), as sketched after this list.
  • Offer digital support or health coaching programmes (mobile applications, advice platforms, online monitoring).
  • Encourage early detection of certain diseases that are under-diagnosed in women (cardiovascular disease, hormonal disorders, certain cancers).
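
As a minimal sketch of the first item, prevention-message targeting can start from simple, auditable rules. The rules and messages below are invented for illustration, and any real deployment would require policyholder consent under data-protection law:

```python
def prevention_messages(profile):
    """Return prevention messages triggered by simple, auditable rules.
    The rules and thresholds are illustrative assumptions only."""
    messages = []
    if profile.get("age", 0) >= 45 and profile.get("family_history_cvd"):
        messages.append("Consider a cardiovascular check-up this year.")
    if profile.get("sleep_hours", 8) < 6:
        messages.append("Chronic short sleep is a health risk; a sleep review may help.")
    if profile.get("physical_activity_min_per_week", 0) < 150:
        # The WHO recommends at least 150 minutes of moderate activity per week.
        messages.append("Aim for at least 150 minutes of physical activity per week.")
    return messages

print(prevention_messages({"age": 52, "family_history_cvd": True, "sleep_hours": 5}))
```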


In addition, actuarial theses have shown that algorithmic regulation is possible, provided that it incorporates fairness metrics, acceptability thresholds and usable transparency indicators (a minimal illustration follows the list below). In France, the Autorité de Contrôle Prudentiel et de Résolution (ACPR) has published several recommendations to regulate the use of AI in insurance. Its 2022 report insists on:

  • Rigorous supervision of models (documentation, audits, traceability)
  • Bias controls (to avoid proxy-discrimination)
  • Vigilance over uses in healthcare
  • The appointment of an AI manager
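
As a minimal illustration of what fairness metrics with acceptability thresholds can look like in such an audit pipeline (the metric choice and the 5-point threshold below are assumptions for the example, not ACPR requirements):

```python
def demographic_parity_gap(scores, groups, cut_off=0.5):
    """Absolute difference in acceptance rates between groups when
    decisions are taken by thresholding a model score."""
    rates = {}
    for g in set(groups):
        g_scores = [s for s, grp in zip(scores, groups) if grp == g]
        rates[g] = sum(s >= cut_off for s in g_scores) / len(g_scores)
    return max(rates.values()) - min(rates.values())

# Illustrative audit gate: block deployment if the gap exceeds 5 points.
ACCEPTABILITY_THRESHOLD = 0.05  # assumed value, to be set by governance
gap = demographic_parity_gap(
    scores=[0.62, 0.48, 0.55, 0.71, 0.44, 0.58],
    groups=["F", "F", "F", "M", "M", "M"])
print(f"parity gap = {gap:.2f}, deploy = {gap <= ACCEPTABILITY_THRESHOLD}")
```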


These recommendations are consistent with the European AI Act, which governs high-risk systems (insurance, health, credit), and with the forthcoming FIDA regulation (the EU framework for Financial Data Access), which aims to govern access to and sharing of financial data in the EU. They reiterate the importance of human supervision of algorithmic decisions, to prevent technical biases from translating into concrete injustices for policyholders.

Work and AI: gender-differentiated effects

AI is profoundly transforming professions, automating certain tasks while at the same time creating new needs for technical, ethical and analytical skills. However, this transformation of the labour market is not gender-neutral.

Studies from the OECD, UNESCO and UN Women point out that women are over-represented in the sectors most exposed to automation (administration, care, education) and under-represented in expanding technology sectors. The risk is twofold: loss of employment and reduced access to the opportunities created. The absence of women from the design of AI tools perpetuates this invisibility.

Yet AI can also be a lever for women's professional emancipation if:

  • Training policies are inclusive and targeted.
  • Women are supported in accessing jobs in data, development or algorithmic ethics.
  • Digital tools make it easier to balance work and personal life and to choose flexible arrangements.

Towards a more egalitarian AI: initiatives, governance and regulation

To promote a more inclusive AI, institutions such as UNESCO, the Council of Europe, the Laboratoire de l’Égalité and Mila recommend:

  • Better representation of women in scientific careers, from secondary education onwards (female role models, recognition of women's historical contributions, mentoring, etc.), and parity in strategic positions within technology companies.
  • Gender analysis throughout the lifecycle of AI technologies: data collection, development, testing, deployment.
  • Tools for detecting and correcting bias in AI models.
  • Ethical charters incorporating equality principles.
  • Inclusive governance of AI ecosystems, with equal representation on decision-making committees.


Initiatives such as Women4Ethical AI and the Pact for Equal AI bear witness to this growing mobilisation.

The AI Act regulation (2024) and the FIDA regulation still under negotiation now provide a structuring legal framework. Taken together, these texts are an important lever for building AI that respects fundamental rights and is sensitive to gender issues.

Conclusion

Artificial intelligence is a key technology for the decades to come. If it is not deliberately geared towards greater equity, it could reproduce or even exacerbate gender inequalities. On the other hand, if women are fully included in its design, regulation and deployment, AI can become a powerful lever for social transformation and emancipation. This is the challenge of inclusive governance, rebalanced education and a strong and fair political will to create artificial intelligence that truly serves everyone.
