An unbalanced representation of women in the field of AI
This article offers an in-depth exploration of the many facets of this reality, based on around thirty reports, scientific publications and field studies: How are women represented in AI? Are there gender biases in algorithms? What are the impacts on the healthcare sector? And on insurance?
Persistent under-representation of women in STEM and AI
This situation is the result of a combination of factors: gender stereotypes about skills, biased educational guidance, lack of female role models, self-censorship (i.e. the self-exclusion of certain career paths perceived as masculine), work-life balance with family responsibilities weighing more heavily on women, and sexism at work.
This trend can also be seen in the academic field. Women publish less in AI research journals, and their articles are cited less than men’s overall. Research such as “Voices of Her” or “Gender-Specific Patterns in the AI Scientific Ecosystem” shows that women’s contributions are undervalued in scientific publication networks.
Distrust fuelled by concerns about ethics, transparency and governance
Some women express fears about the use of their personal data, particularly in sensitive sectors such as health and finance. In these areas, algorithmic decisions can have concrete consequences (access to healthcare, credit, insurance). The lack of transparency about the decision-making criteria of these often opaque systems exacerbates this feeling of vulnerability.
Several publications (Fondation Jean-Jaurès, Magellan Partners, Harvard Business Review) also suggest that the perceived lack of inclusive governance (committees dominated by male profiles, little public consultation, little consideration of gender issues) fuels this scepticism. These factors contribute to an ambivalent relationship between women and AI, which combines hopes and expectations of progress with vigilance against the risks of reproducing inequalities.
Gender bias in algorithms: a mirror amplifying inequalities
Several studies (UNESCO, Conseil du statut de la femme du Québec, Public Sénat) show that generative AIs associate women more with domestic activities, physical appearance or caring professions, while men are associated with competence, power or technology.
These biases can have a real impact on women’s lives: discriminatory automated recruitment, less accurate facial recognition systems for racialised women, unequal access to services or care, incorrect or inappropriate medical diagnoses. Non-inclusive AI can thus reinforce existing discrimination.
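One way to make such disparities visible is to evaluate a model's error rates separately for each demographic group rather than only in aggregate. The sketch below is a minimal, hypothetical illustration in Python: the data, the group labels and the simulated classifier are assumptions made for this article, not results from the studies cited. It computes per-group false negative rates, the kind of disaggregated metric that reveals, for example, higher miss rates for some groups in facial recognition or diagnostic systems.

```python
import numpy as np

def error_rates_by_group(y_true, y_pred, group):
    """Per-group false negative rate: the share of true positives
    that the model misses within each demographic group."""
    rates = {}
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)
        rates[int(g)] = float((y_pred[positives] == 0).mean())
    return rates

# Hypothetical evaluation data for an illustrative classifier
rng = np.random.default_rng(42)
group = rng.integers(0, 2, size=500)            # two demographic groups, 0 and 1
y_true = rng.integers(0, 2, size=500)
# Simulate a model that misses positives three times more often for group 1
miss_prob = np.where(group == 1, 0.30, 0.10)
y_pred = np.where((y_true == 1) & (rng.random(500) < miss_prob), 0, y_true)

print(error_rates_by_group(y_true, y_pred, group))   # e.g. {0: ~0.10, 1: ~0.30}
```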
Health, AI and gender: towards truly inclusive medicine
Yet AI could play a major role in reducing these inequalities, provided that the tools are designed to be inclusive. Here are a few examples:
- A deep learning algorithm analysing mammograms has enabled earlier detection of breast cancer risk.
- AI improves the prediction of post-partum complications and supports care pathways adapted to reproductive health.
- Digital platforms are facilitating better access to medical information and personalised monitoring for women.
Finally, joint governance of digital health remains essential to guarantee inclusive medicine.
AI and insurance: between discriminatory risks and equity levers
Faced with these discriminatory risks, several avenues for action have emerged:
- The adoption of global risk assessment approaches, incorporating behavioural factors rather than focusing solely on female pathologies.
- Transparency on the criteria used to set insurance premiums, with audits of algorithms to detect any indirect discrimination. Some insurers, particularly in loan insurance, are testing scoring solutions that are easier to read, where the acceptance criteria are explained more clearly (e.g. job stability, absence of serious medical history, healthy lifestyle), thereby limiting the opacity of automated decisions and promoting fairness.
- Measuring and mitigating biases via so-called “fairness-aware” AI techniques, which incorporate fairness constraints, such as the fairness-aware PCA applied to life insurance pricing and mortality prediction (arXiv study), making it possible to correct certain proxy-discrimination effects; a simplified illustration is sketched just after this list.
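To give a rough idea of what a “fairness-aware” representation can look like, the Python sketch below removes the linear component of each rating feature that is explained by a sensitive attribute before applying a standard PCA. This is a simplified decorrelation step written for this article, on assumed, simulated data; it is not the exact Fairness-aware PCA method of the cited arXiv study.

```python
import numpy as np

def decorrelate_from_sensitive(X, s):
    """Remove the part of each feature that is linearly explained by the
    sensitive attribute s (e.g. gender) before building a pricing model.
    Simplified illustration only, not the cited Fairness-aware PCA."""
    X = X - X.mean(axis=0)
    s = (s - s.mean()) / s.std()
    coefs = X.T @ s / (s @ s)          # least-squares slope of each feature on s
    return X - np.outer(s, coefs)      # residual features, uncorrelated with s

def pca(X, n_components=2):
    """Plain PCA on the decorrelated features."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

# Hypothetical data: 5 applicant features that partly leak a binary attribute
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=200).astype(float)
X = rng.normal(size=(200, 5)) + 0.8 * s[:, None]
Z = pca(decorrelate_from_sensitive(X, s))
print(np.corrcoef(Z[:, 0], s)[0, 1])   # close to zero: the component no longer proxies s
```

Priced on such components, a model can no longer pick up the sensitive attribute through simple linear proxies, which is the intuition behind the correction of proxy-discrimination effects mentioned above.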
AI as a prevention tool
In this context, insurers have an important role to play in prevention. They can, subject to compliance with personal data protection regulations:
- Customise prevention messages based on health or lifestyle data (age, diet, physical activity, sleep, family history).
- Offer digital support or health coaching programmes (mobile applications, advice platforms, online monitoring).
- Encourage early detection of certain diseases that are under-diagnosed in women (cardiovascular disease, hormonal disorders, certain cancers).
In addition, actuarial theses have shown that algorithmic regulation is possible, provided that it incorporates fairness metrics, acceptability thresholds and usable transparency indicators. In France, the Autorité de Contrôle Prudentiel et de Résolution (ACPR) has published several recommendations to regulate the use of AI in insurance. Its 2022 report emphasises:
- Rigorous supervision of models (documentation, audits, traceability)
- Bias controls (to avoid proxy-discrimination)
- Vigilance over uses in healthcare
- The appointment of an AI manager
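A bias control of the kind the ACPR describes can start from very simple audit metrics. The Python sketch below is a hypothetical example on simulated data, with illustrative variable names: it measures a demographic parity gap between two groups of applicants and flags a rating variable that is strongly correlated with the sensitive attribute, i.e. a potential proxy.

```python
import numpy as np

def demographic_parity_gap(decisions, sensitive):
    """Difference in favourable-outcome rates between two groups
    (decisions: 1 = accepted / standard premium, 0 = refused or surcharged)."""
    return decisions[sensitive == 1].mean() - decisions[sensitive == 0].mean()

def proxy_strength(feature, sensitive):
    """Absolute correlation between a rating variable and the sensitive
    attribute: a rough flag for potential proxy discrimination."""
    return abs(np.corrcoef(feature, sensitive)[0, 1])

# Hypothetical audit data: the occupation code partly encodes the sensitive attribute
rng = np.random.default_rng(1)
sensitive = rng.integers(0, 2, size=1000)
occupation_code = 0.6 * sensitive + rng.normal(size=1000)
decisions = (occupation_code + rng.normal(size=1000) > 0.3).astype(int)

print(f"parity gap: {demographic_parity_gap(decisions, sensitive):+.2f}")
print(f"proxy strength of occupation_code: {proxy_strength(occupation_code, sensitive):.2f}")
```

In a real audit, such indicators would feed the documentation and traceability requirements listed above, alongside richer fairness metrics.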
These recommendations are consistent with the European AI Act, which governs high-risk systems (insurance, health, credit), and with the future FIDA regulation (Financial Data Access), which aims to govern access to and sharing of financial data in the EU. They reiterate the importance of human supervision of algorithmic decisions, to prevent technical biases from translating into concrete injustices for policyholders.
Work and AI: gender-differentiated effects
Studies from OECD, UNESCO and UN Women point out that women are over-represented in the sectors most exposed to automation (administration, care, education) and under-represented in the expanding technological sectors. The risk is twofold: loss of employment and reduced access to the opportunities created. The absence of women in the design of AI tools perpetuates this invisibility.
Yet AI can also be a lever for professional emancipation for women if:
- Training policies are inclusive and targeted.
- Women are supported in accessing jobs in data, development or algorithmic ethics.
- Digital tools make it easier to reconcile work and personal life or to opt for flexible working arrangements.
Towards a more egalitarian AI: initiatives, governance and regulation
- Better representation of women in scientific careers, from secondary education onwards (female role models, historical valorisation, mentoring, etc.) and parity in strategic positions within technology companies.
- Gender analysis throughout the lifecycle of AI technologies: data collection, development, testing, deployment.
- Tools for detecting and correcting bias in AI models.
- Ethical charters incorporating equality principles.
- Inclusive governance of AI ecosystems, with equal representation on decision-making committees.
Initiatives such as Women4Ethical AI and the Pact for Equal AI bear witness to this growing mobilisation.
The AI Act (2024) and the FIDA regulation still under negotiation now provide a structuring legal framework. Taken together, these texts are an important lever for building AI that respects fundamental rights and is sensitive to gender issues.
Conclusion