Introduction
In recent years, health-data-driven artificial intelligence and machine learning (AI/ML) applications have been introduced to many areas of medicine. Broadly defined, AI here refers to machines mimicking human decisions and tasks, and ML is a subset of AI that uses statistical methods to enable machines to learn from experience.
In the field of assisted reproduction, AI/ML applications (from here on simply, AI) and related technologies have been hailed as (potentially) significant and ground-breaking, not least because they promise standardisation and automation in IVF.
In this essay, we aim to critically discuss AI as a technological clinical practice, which is currently being organised into its own interdisciplinary field in the fertility sector and moving from bench to bedside internationally. We first discuss the entrance of AI into the fertility sector before considering the context of rising health and social disparities and the potential for AI to increase the stratification of reproduction.
Colen’s (1995) concept of reproductive stratification, and subsequent work that has built on it (e.g. Ginsburg and Rapp, 1995), draws attention to the local and global ‘power relations by which some categories of people are empowered to nurture and reproduce, while others are disempowered’ (Ginsburg and Rapp, 1995: 3). In the context of assisted reproduction, this takes the form of ‘selective pronatalism’ (Thompson, 2005), whereby regulations, uneven distribution of wealth and resources, and/or medical practice exclude groups of people from care, including most women of colour, queer families, low-income groups, and entire populations in the Global South.
We propose that introducing AI into this already stratified context threatens to black-box health disparities and to generate what we refer to as ‘hyper-stratifications’. As feminist, social science and bioethics scholars, we are all too aware of how reproductive technologies reinforce normativities rather than unravel them. We cannot presume that AI is an ethical technological agent or user of health data; instead, we need to keep a critical eye on the moral ambivalence of emerging and evolving practices of AI-assisted reproductive technologies (ARTs) and their gendered consequences. Given the current hype around AI, and with concerns around the fast development and deployment of AI generally, and in ARTs particularly, in mind, there is an urgent need for critical feminist discussion of such developments.
Our framework for the analysis of AI in fertility is that of reproductive justice (RJ).
The concept of reproductive justice positions reproductive rights discussions within broader social justice discussions. Reproductive justice shifts from an individual rights and choices discourse, to a wider group of concerns over the conditions under which rights can be exercised (Shotwell, 2013; Smietana et al., 2018). For people’s individual decisions to be realised, options for making choices must be safe, affordable, and accessible; in other words, genuinely executable. This also means the social-political-economic conditions necessary to have and raise children in one’s community – enabling networks of opportunities, support, and services – must be present (Ross and Solinger, 2017).
Globally, ART is a highly commercialised sector, including within Europe. While some countries offer publicly provided treatments or reimburse expenses, in most cases, people need to pay out of pocket (at least to some extent) or through private insurance – even in large redistributive welfare states such as those found in the Nordic countries (Blell and Homanen, 2023). A large number of people who could otherwise be eligible are excluded not just because of high prices but also because of regulatory and normative barriers to parenthood: who can be allowed to parent and how parenting should be done (Mamo, 2018; Smietana et al., 2018). We suggest that these understandings are foundational for contextualising and considering the ethical dimensions of reproductive technological and political transformations such as those presented by AI.
We have structured our speculative intersectional critical RJ analysis of AI and ARTs according to the main justice claims of RJ (to have and raise as many or as few children as one desires, while maintaining personal autonomy in safe and healthy environments) as we understand them to apply, and to be translatable, in the setting of AI-ARTs. These relate specifically to questions of ‘access’, ‘autonomy’, and ‘safe and healthy reproductive environments’. Before moving on to that analysis, we describe how AI is currently used, designed, and imagined in assisted reproductive socio-technological practice.
State of the art of AI technology in assisted reproduction
In IVF, AI offers a wide range of techniques aimed at improving the accuracy of preparation, selection of reproductive cells (gamete and embryo), implantation of embryos and pregnancy care (Dalal et al., 2020). AI is perceived as uniquely suited to solving complex and multifactorial problems related to infertility and offers the possibility of increased efficiency and efficacy, and reduced error. The push for accuracy and prediction may increase the likelihood of a successful pregnancy outcome, increasing success rates from historic lows of around 30% in Europe (ESHRE, 2019). The potential for improvement could increase the number of people who benefit from IVF as well as reduce the psychological and physical effects of failed treatment cycles (Rolfes et al., 2023).
AI-driven automation is promised to take over time-consuming, mundane, and monotonous responsibilities that require attention to detail, such as quality control, tissue vitrification and biopsy sample loading, freeing IVF laboratory staff to focus on more demanding and exciting tasks and reducing administrative work, thus saving labour, time and money (Fairtility, 2023; Trolice et al., 2021).
There is a lot of excitement and hope with regard to AI among advocates (e.g. Trolice et al., 2021; Zaninovic et al., 2019). However, current rhetoric and ambition exceed current clinical practice. AI software is being developed by major biotech companies (such as Swedish Vitrolife) and start-up companies as both separate programmes and suites of functions, designed to integrate and draw on the increasing volumes of data within clinics. Start-up companies in Israel, such as Fairtility and Embryonics, and Alife in California, are developing AI systems aimed at clinics, which can integrate into the IVF workflow and offer end-to-end optimisation and cost savings, balanced with the promise of personalisation of treatment. As part of these developments, patients are offered personalised apps to enable them to track health, cycles, and pregnancies. For example, Alife’s offering helps patients set reminders and log results, includes cost tracking, and is cloud-based, linking into clinics’ medical records.
AI-powered tools can be used to predict an optimal follicle-stimulating hormone (FSH) dose and identify when to trigger ovulation (Alife, 2023). Using time-lapse imaging systems (TLS), images of embryos can be graded. Pictures of the embryo are taken at 5- to 20-minute intervals, generating large amounts of visual data that can be analysed by trained neural nets. For instance, Vitrolife’s iDAScore evaluates embryo viability with a neural network analysing time-lapse videos, and Fairtility’s CHLOE (Cultivating Human Life through Optimum Embryos) is an embryo quality assistant, which uses both embryo and patient data to predict the probability of embryo implantation. While data driven, the decision remains with the embryologist, who might recommend implantation of embryo(s) where the predicted probability is low. Such analysis can be backed up by pre-implantation genetic testing (PGT) to check, for example, for ploidy abnormalities, that is, missing or extra chromosomes in an embryo. Embryonics’ (2023) UBar Embryo selection software offers ‘an objective ranking on the likelihood of implantation of each of a patient’s embryos. All at once, at the touch of a button’.
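The grading pipeline described above can be sketched in miniature: per-frame features extracted from time-lapse images are pooled across the sequence and passed through a trained model that outputs a probability-like score. The sketch below is purely illustrative, assuming hypothetical hand-picked features (cell count, fragmentation) and invented logistic-regression weights; commercial systems such as iDAScore or CHLOE use deep neural networks over raw video, and none of the names or numbers here come from them.

```python
import math

def score_embryo(frame_features, weights, bias):
    """Toy embryo-viability scorer: average per-frame features over the
    time-lapse sequence, then apply a logistic model to produce a
    pseudo 'implantation probability'. All parameters are hypothetical."""
    keys = list(weights)
    # Average each feature across all frames in the sequence.
    avg = {k: sum(f[k] for f in frame_features) / len(frame_features)
           for k in keys}
    # Linear combination followed by a sigmoid, as in logistic regression.
    z = bias + sum(weights[k] * avg[k] for k in keys)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features a real system might derive from each frame:
# an estimated cell count and a fragmentation score in [0, 1].
frames = [
    {"cell_count": 4, "fragmentation": 0.10},
    {"cell_count": 6, "fragmentation": 0.12},
    {"cell_count": 8, "fragmentation": 0.15},
]
weights = {"cell_count": 0.4, "fragmentation": -3.0}  # illustrative only
prob = score_embryo(frames, weights, bias=-1.5)
print(f"pseudo implantation probability: {prob:.2f}")
```

The point of the sketch is the shape of the computation, not its accuracy: whatever the model, the output is a single opaque score, which is precisely why the interpretive questions discussed later in this essay arise.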
Despite these developments, actual usage in clinics is only just beginning. For example, the IVF clinic group IVFAustralia (2023) claimed in 2023 to be the first group in Australia to use AI imaging to analyse embryo growth. It is also noteworthy that, beyond the laboratory practice of embryo selection, AI is seen as an opportunity to increase the perceived stagnant success rates of IVF. However, while there are as yet few studies on the impact of AI intervention, improvements of 15%–17% have been reported (ImVitro, 2023).
There are some acknowledged obstacles to AI and its claims regarding improving treatment access, safety, efficacy, and efficiency. The significance of data, its origins, and its quality in AI relates to its potential uses and successes. Most of the obstacles to good data concern two aspects. The first is availability: there is too little data, or it cannot be shared because of trade secrets and national and supranational restrictions on sharing even anonymised patient data. The second is data heterogeneity, which refers to the production contexts of data, as clinics use different craft practices and technologies to produce data on tissue and patients.
Research on AI in ART is still scarce. There is some evidence to suggest that AI enables robust assessment and selection of human gametes and embryos after IVF, but not without problems (e.g. Abbasi et al., 2021; Khosravi et al., 2019; Riegler et al., 2021). There is a particular dearth of bioethical literature, though some work considers AI in ART through the lenses of medical ethics (e.g. Ho, 2023; Krupiy, 2020; Polonski, 2018; Tamir, 2023), AI ethics (e.g. Rolfes et al., 2023) and feminist bioethics (e.g. Ho, 2023). In our analysis in the rest of this essay, we draw on what work exists to provide a needed commentary on what we identify as three important reproductive justice issues regarding the adoption of AI into the fertility sector: access, autonomy and safety.
Access to AI technology in assisted reproduction
Access to assisted reproduction technology is a central concern for reproductive justice in general in terms of who can benefit from the enablement of parenthood offered by IVF and related technologies (Smietana et al., 2018). Legal, cost, and availability barriers to ARTs, however, exist internationally, resulting in social inequalities (Blell and Homanen, 2023; Culley et al., 2009; Ross and Solinger, 2017; Smietana et al., 2018). Given this context, we may legitimately expect that the introduction of AI automation will not magically fix existing inequalities, but may, instead, reinforce dominant norms and hyper-stratify reproduction along the usual lines even further.
Arguably, AI automation, if proven efficient, could provide greater access to IVF through reduced costs. First, prices per treatment cycle should eventually decrease when analysis of bodies and cells can be carried out in a quantitative way. Second, efficient selection should reduce the number of treatment cycles needed for a successful pregnancy, lowering the overall costs. Third, costs would be lower with some routine treatment cycle monitoring done at home through self-tracking, for example, using follicle ultrasound devices with AI algorithmic software, making it unnecessary to travel to healthcare providers for every step in the process (Brayboy and Quaas, 2022; Ho, 2023; Rolfes et al., 2023; Tamir, 2023), and thereby generating geographical equality in access (Ho, 2023).
However, as pointed out by Rolfes et al. (2023: 112), AI-based technology may involve disproportionately high costs generated by factors such as purchase, operation, maintenance, and updating of AI models, data processing and storage, rectification of mistakes, need for skilled operators and liability costs, and so on. In such cases, there may be an incentive to increase treatment prices rather than lower them.
Furthermore, it is likely that AI-based reproductive medicine will, at least initially, be centred around a small number of big tech-savvy players, such as chains of clinics, gamete banks, and biotechnological and pharmaceutical companies. This would mean AI applications would not be widely available (Rolfes et al., 2023), let alone among public welfare services, which tend to be the last to integrate adjunct treatments and techniques in IVF (so-called ‘add-ons’), if ever. The perception is that the greater the access to training data, the more accurate the AI system will be. Large clinics benefit more because they have critical mass and can access larger datasets for AI training. The application of AI requires substantial investment, which usually comes either from venture capital companies investing in AI fertility start-ups, or from clinics and biotech companies, which either contract AI start-ups for software and services or invest in their own in-house AI implementation. Thus, the advent of AI is likely to benefit large clinics and health groups over smaller clinics. However, we would suggest that the trajectory of AI will move it from an optional to an expected part of IVF treatment. This may increase the amount of investment needed in IVF clinics, both in capital and expertise, and increase the domination of larger clinic groups. In Europe, these clinic groups are mainly based in the continent’s fertility hubs, notably Spain and Eastern European countries such as Czechia. However, the big clinic chains are expanding transnationally.
It is likely that people from rural and lower resource settings will be excluded from AI diagnostic and clinical treatments unless they are among the few who are able to cross national borders, understand, and make use of regulatory frameworks, and overcome other availability barriers to get treatments. The temporal dimension here is also noteworthy: intended parents with fertility disruptions do not have the time to wait around for the new ‘gold standard’ to arrive in their local clinic, let alone for it to be included in the public provision of ART (if public provision of ART even exists in the country where they are located). In this kind of socio-technical vision, AI will not be part of facilitating equitable global gender (reproductive) health; quite the contrary.
Personal (relational) autonomy in making care decisions
In the context of reproduction, autonomy refers to a person’s ability to make free and informed decisions about their reproductive health and practices (Johnston and Zacharias, 2017). Reproductive justice thinking holds that autonomy is relational, meaning that the social context, including unequal structures in society and community, may especially influence people’s ability to make decisions that accord with their value priorities (Ross and Solinger, 2017). The impact on marginalised groups is, therefore, greater. AI has been hailed as offering better control over individuals’ reproductive lives: if the outcome of specific reproductive decisions can be better predicted, individuals can better plan their life trajectories (Tamir, 2023). That is, however, only the case if we can trust the ‘certainties’ AI offers. We propose a range of challenges to this claim.
A general medical–ethical problem identified with the adoption of AI technology in healthcare is ensuring informed consent (Ho, 2023; Rolfes et al., 2023; Tamir, 2023). First, because decisions to adopt these technologies and fund them are made at the healthcare system or administrative level, there is only a small chance that patients are given a choice about alternative care options (Ho, 2023). A possible outcome is that medical institutions might interfere with or shape patient decisions in their efforts to stay competitive through the adoption of AI technologies.
It has been argued in earlier research that ARTs have become normalised and naturalised as a way to make babies, parents, and kin (Lie and Lykke, 2017; Thompson, 2005; Throsby, 2004). Involuntary childlessness is regarded as a condition that follows IVF rather than precedes it (Franklin, 2013, 1997; Ravn, 2017; Thompson, 2005). Many intended parents use every possible technological add-on, even if that was not their intention at treatment outset (e.g. Franklin, 1997; Ravn, 2017). In future, many of these treatment add-ons (such as time-lapse embryo monitoring systems and ICSI microscopes) might involve an AI component.
Given both the institutional and cultural context of IVF, in practice, adopting AI will bring about gendered implications for autonomy. Fertility and ARTs are considered culturally to be ‘women’s issues’ – not least because fertility treatments mostly intervene on women’s bodies and women have been historically and institutionally blamed for fertility disruptions (Culley et al., 2013; Throsby, 2004). We might question whether women’s care decisions in this normative climate are free from pressure, and whether adopting AI might, in fact, reinforce the power of medical knowledge and the medical profession over fertility and women’s bodies in particular.
Uncritical enthusiasm over AI solutions resolving (fertility) health problems (Polonski, 2018) may also further reinforce power dynamics where authority to know (best) is shifted from both intended parents and clinicians to machines (Ho, 2023). Reproductive decision making in clinics often involves intended parents, in some cases even in the laboratory (Helosvuori and Homanen, 2022). Information about embryo development and selection is shared routinely with intended parents, who are also given a role in making decisions about embryo selection (Helosvuori, 2019; Helosvuori and Homanen, 2022). Through information-sharing, patients become involved in the process of knowledge production and, sometimes, even in making decisions about interventions that render previous conceptualisations of embryo viability inaccurate. This change in conceptualisation about embryo viability happens with ‘pity transfers’, where embryos that are (perceived to be) inviable are transferred because the patient wishes it. These transfers sometimes result in the birth of healthy babies, which in turn leads embryologists to revise their views on embryo viability (Helosvuori, 2019).
With AI technology-driven decisions, detecting algorithmic bias and errors is hard even for computer engineers and data scientists, let alone end users such as clinicians or intended parents. Their involvement and epistemic authority could easily be denied. Algorithmic bias occurs when the datasets used to train AI models inaccurately represent the AI model’s users, resulting in systemic prejudice and low accuracy. AI models that use deep learning neural networks may also hide the logic of their analysis – logic involving something perhaps previously undetected by human-involved analysis, for example, the significance of particular biomarkers in embryonic development.
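The mechanism of algorithmic bias described here – a model fitted to data dominated by one group performing poorly for another – can be shown with a deliberately tiny example. Everything below is invented for illustration: a single hypothetical ‘biomarker’, two synthetic patient groups, and a one-parameter threshold classifier standing in for a real model.

```python
def accuracy(t, data):
    """Fraction of (biomarker_value, is_viable) pairs that the
    threshold rule 'viable if value >= t' classifies correctly."""
    return sum((v >= t) == label for v, label in data) / len(data)

def best_threshold(samples):
    """Pick the cut-off that maximises accuracy on the training set."""
    candidates = sorted({v for v, _ in samples})
    return max(candidates, key=lambda t: accuracy(t, samples))

# Group A: viable embryos cluster at biomarker values >= 0.65.
group_a = [(0.8, True), (0.7, True), (0.65, True),
           (0.5, False), (0.45, False)]
# Group B: viable embryos cluster lower, around 0.4-0.5.
group_b = [(0.5, True), (0.45, True), (0.4, True),
           (0.2, False), (0.1, False)]

# Training data over-represents group A ten to one, mirroring the
# unrepresentative datasets the essay warns about.
train = group_a * 10 + group_b
t = best_threshold(train)

print(f"learned threshold: {t}")
print(f"accuracy on group A: {accuracy(t, group_a):.2f}")
print(f"accuracy on group B: {accuracy(t, group_b):.2f}")
```

Because group A dominates the training data, the learned threshold fits group A perfectly while misclassifying every viable case in group B, even though a better group-aware threshold exists; the bias is invisible in the aggregate training accuracy, which remains high.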
AI selection logics are ‘black-boxed’, meaning that clinicians do not know exactly why AI selects what it selects. It could, therefore, potentially be difficult – if not impossible – for a clinician to explain AI selection logics to their patients. It may prove difficult for intended parents to make decisions on the basis of the ever-more complex kinds of medical and technical information made available to them. There might be a danger that intended parents with less access to forms of social and cultural capital or the health literacy required to understand and negotiate care decisions, are further disadvantaged in the clinical space. Generally, AI technological practice may be disempowering to groups who are marginalised along class, ableist, and racial lines, exacerbating inequalities.
Selective technologies, which include AI components, are not restricted to reproductive cell selection. Facial recognition technology has been used in the matching of donors and recipient intended parents for some time. The technology involves AI models trained on large datasets of facial images and has been used in a variety of settings to identify ‘suitable’ individuals. The intended result is a child that physically resembles its intended parents, long considered desirable in assisted reproductive technologies because it allows a family to ‘pass’ as one that is genetically related (see, for example, Becker et al., 2005; Hudson and Culley, 2015; Nordqvist, 2010). The use of this process removes the clinical ‘labour’ of manually searching donor databases in order to secure a good match. There has been rapid uptake of this technology in some contexts. Spain is an example where the law surrounding donor conception treatment requires that clinicians strive to physically match donors and recipients as closely as possible and where there also exists a highly commercialised sector (Coveney et al., 2022).
Recently, AI models have been integrated into facial matching technology (Cryos, 2023, in Denmark, Cyprus, and the United States; IVI Fertility, 2023, in Spain, Italy, Portugal, the United Kingdom, and countries in Middle and South America), all currently within the private commercial sector. These technologies may also combine an analysis of facial measurements from donor databases with genetic carrier screening tests and physical characteristics, including ethnic/racial characteristics, hair and eye colour, height and physical build, and Rh compatibility and blood type, to provide a ‘perfect match’ (e.g. IVI Fertility’s Perfect Match 360°).
One of the most salient aspects of donor selection and matching is the aim to match donors with recipients according to racial and ethnic resemblance.
In some European contexts, the use of facial matching/AI has the potential to exacerbate an existing lack of patient autonomy in decision-making. In many countries (especially where donors and intended parents remain anonymous to one another), the ‘matching’ process is already managed by fertility professionals rather than by the intended parents (as is the case in more commercial settings such as the United States). Patients already have little autonomy within selection processes, but the use of AI may reduce this even further. The embedding, through AI, of such resemblance norms risks entrenching them still more deeply in clinical practice.
Safe and healthy reproductive environments
In the ART setting, a reproductive justice lens directs us to look at medical and social risks, and the care systems in which they are situated. This includes the local bioethical context: policy, economic and service system conditions that either support or do not support (some) peoples’ reproduction and reproductive health. For example, predominantly commercial fertility clinics provide treatments, including third-party services, such as gamete and embryo donation and gestational surrogacy, without systematically providing long-term healthcare to donors, surrogates or recipient intended parents (Mamo, 2018; Smietana et al., 2018). It is surely safer, then, to be an egg donor at a commercial clinic in a country with universal free-of-charge public healthcare and social benefits in case of complications than in a country that does not offer such public services.
AI technologies are touted as personalised, non-invasive, and low-risk. Many such technologies are designed to be integrated into existing technologies so no additional intervention is involved for intended parents, IVF-offspring or donors. Furthermore, advocates of AI are suggesting possibilities whereby invasive procedures like pre-implantation genetic testing (PGT), which involves embryo biopsy, could be avoided altogether when AI learns to analyse biomarkers that are indicative of genetic abnormalities in time-lapse videos or other visual data (Zaninovic et al., 2019). While this might be true, and some safety might be provided with the adoption of AI, there are additional concerns to consider.
One of the concerns around AI as discussed above relates to data bias and inaccurate analysis. AI has been said to be as good as the data used for training it (while of course AI should also learn from cases it analyses and human corrective guidance; Riegler et al., 2021). Some advocates think that, with supervision of embryology and medical professionals, errors can be controlled but bioethicists in the field are not so certain. Ho (2022) warns that AI might hide bias from not just lay patients but also the professionals – not least because relatively minimal human supervision is involved in AI training processes. And to the extent that human engineering is involved, their bias about the world might become intrinsic to the AI model, as Tamir (2023) and Rolfes et al. (2023) point out.
Existing literature has shown that members of racialised groups and people who live in low-income countries do not have as much access to assisted reproductive technologies as non-racialised groups or people in high-income countries, even though rates of infertility are similar globally (e.g. Bell, 2009; Culley et al., 2009; Roberts, 1998; Thompson, 2005). Inevitably, this underrepresentation creates a bias in the patient health and molecular data used in reproductive science and embryology research in general and potentially, therefore, also in AI modelling, possibly creating a risk of harm to economically and racially marginalised intended parents and their offspring.
The problem of small and isolated clinics with limited databases of patient data, and of borderline patient cases, has been raised in bioethics discussions (Tamir, 2023). The accuracy and reliability of AI predictions might be too low in these cases. Tamir (2023) suggests that, in such cases, medical professional reasoning might be a better solution. The same applies to members of racialised groups and people who live in low-income countries for whom IVF is mostly out of reach. Without thoroughly rearranging global healthcare provision and the data economy, it seems unlikely that these minoritised groups will receive equally safe care. Will some intended parents – the ones who can afford to and are able to – then need to travel away from their communities of care and support to an unknown and potentially unsafe reprohub to access AI-enhanced reproductive care?
Conclusion
Feminist theory has taken an ambivalent stance towards assisted reproductive technologies. On one hand, ARTs have been seen as extending the reach of patriarchal control of the womb, thus reinforcing social hierarchies and locations of power in medical knowledge. On the other hand, these technologies have been seen as liberating (Thompson, 2005). ARTs have a role in enabling (especially) women to extend their fertility and synchronise conflicting normative timescales (Hampshire et al., 2012). Few can deny that achieving more successful pregnancies and live births for women, men, non-binary and LGBTQI+ people who, often desperately, want to have children is a positive endeavour.
As feminist scholars, we must do more than denounce reproductive biomedical science, technology and industry as surveillance and exploitation of gendered bodies (Inhorn and Birenbaum-Carmeli, 2008; Thompson, 2005). The biovalue produced in the (transnational) fertility market enfolds economic and ethical value, at its best: money may be made but lives may also be improved, and historical wrongs addressed (Dussauge et al., 2015). In this essay, we have begun to explore whether introducing AI within assisted reproductive technologies has the potential to be part of a shift towards more or less stratification of reproduction from a reproductive justice perspective. The promises and potential of AI in ART include first, increased access to ARTs through automation, making ARTs more affordable and accessible for people living in peripheral regions. Second, AI is touted as offering people better control over their personal reproductive lives and decisions with the help of more precise medicine and pregnancy predictions. Third, AI models as data-driven technology have been perceived as providing personalised, non-invasive, and safer care in terms of physical but also mental health due to less mentally and physically strenuous treatment cycles and medical intervention in achieving pregnancies.
Taking into consideration the larger social, political, economic and cultural context, required for a reproductive justice approach, we suggest that while AI integrated assisted reproductive technologies may offer more access, autonomy and safer care for some groups, it may in fact be the reverse for many others; following, in many ways, the historical lines of stratification, including in the European context and in welfare states.
Global ART is a highly commercialised sector. Furthermore, the transnational fertility industry is also expanding rapidly, and is being reshaped by forces such as globalisation, the significance of patenting, and increasing consolidation. Consolidation manifests itself in several ways, including the merging of fertility clinics into larger international chains, the rise of private equity investments, and the expansion of company portfolios to incorporate every step of the fertility treatment journey and wider ranges of fertility products (Helosvuori and Homanen, 2022; Van de Wiel, 2019). It is likely that AI-based reproductive medicine will be centred around a few of these big fertility companies and health groups, further consolidating the market. People living at the periphery of these developments and on low incomes – who are disproportionately non-white, disabled and/or non-binary persons – will be able neither to access AI ART in their local clinics nor to cross transnational borders for treatments. Neither are there any guarantees that integrating AI into ART will not increase costs if more resources and expertise are needed for data purchases and management.
Autonomy in reproductive decision-making presents a challenge in ART generally. Explaining the science of embryo selection and stimulation protocols in the face of the chronic uncertainty that characterises outcomes in IVF treatment is difficult even without the unexplainable black-boxes of AI in ART. Groups who are traditionally marginalised from technology use or have lower levels of health literacy may be further excluded from ART treatments due to challenges in navigating access in the context of machine-led decisions. Decisions to not use AI in ART might not even be possible as decisions to adopt these technologies are administrative decisions rather than clinician or patient decisions. Overall, if clinics are eager to adopt AI to stay competitive, they might interfere more than before in individual decisions about reproductive bodies – especially those made by women and non-binary people.
Decisions on donor selection might also easily shift from patients and clinicians to AI machines. Facial recognition technologies with AI pattern matching will likely be trusted to do a better job in making the normative and often desired resemblance match between donors and recipients. Racial and ethnic matching is an even stronger imperative in clinical practice and even in some legislation on ARTs in Europe. Given this background, AI facial recognition use in matching will potentially reinforce normativities regarding family-making in ART. This is especially the case with regard to cis-gendered heterosexual couples where the norm of biogenetic-like relatedness and racial and ethnic resemblance determines family belonging and the framing of how families should ‘naturally’ be built. AI may then be easily appropriated in economic and government policy as technologies to strategically assist nation-building through reproduction and family.
Finally, AI ART care, safety, and appropriateness depend centrally on whether the databases used to train the AI are comprehensive and unbiased. There is a real risk that the data are not equitably drawn from diverse populations and, in particular, that individuals from minoritised or low-income contexts are not properly represented in the data because of the historic stratification of reproduction. It is not far-fetched to suspect that some groups and their reproductive cells could be harmed if treated with solutions based on data from advantaged populations with different kinds of reproductive health histories.
Drawing on insights from reproductive justice scholarship, we have raised concerns about the potential risks of a wider integration of AI into ART. This is not by any means intended to be a comprehensive account of the issues. Critical research and further discussion are urgently needed. To truly achieve reproductive justice in the field of assisted reproduction, AI or any other technology may not provide the solution if it is built onto the foundations of existing provision. Instead, we likely need a total reorganisation of the provision of assisted reproductive medicine, which is largely commercial in Europe and elsewhere, accompanied by a reorganisation of global and local data economies and policies.
We cannot even stop there. As Rubin and Phillips (2012: 191) point out, the cost of, and problems with access to, ARTs are a distraction from addressing the basic healthcare needs (and social support and education needs) of women (and other marginalised people), and the health management strategies that would substantially increase the probability of conception and reduce the need for medical intervention in the first place. Europe is not exempt from this. While many European welfare systems still provide health services, social support and equality, worrying transformations in social and healthcare systems have been observed over recent decades, including increased marketisation, decentralisation, and deregulation in the name of cost-cutting and efficiency (e.g. McGregor, 2001). To change the normativities restricting people from forming families based on their lived and fluid ideas of relatedness and identity, we need political and cultural will to change legislation, policy, medical practice, and data bias.
