Introduction
The forces backing datafication's economic power are formidable. An international coalition of actors including the G20, WTO, World Economic Forum and others with agenda-setting and norm-making power have for more than a decade explicitly made the case that data is an economic good that must flow freely, and that the task of law, regulation, and governance is to ensure ‘a stable normative environment for data flows’ (Weber, 2023).
In this article, we will make the case that it is possible to turn this logic of data flows on its head and to identify an entirely coherent, but distributed and plural, international norm-making process that is taking place through civil society organisations (CSOs). We focus in particular on organisations engaged in work on digital rights, digital policy, and data governance. This knowledge production, we will argue, is norm-making of a distributed nature.
An example: beginning in 2016, human rights defenders began to flag the use of a spyware application named Pegasus to spy on journalists, lawyers, human rights defenders, heads of state, businesspeople, and others via their phones (Marczak and Scott-Railton, 2016; R3D, 2017). The ensuing international response from Amnesty International and from Michelle Bachelet, the UN High Commissioner for Human Rights, however, did not focus on existing data governance measures such as data protection. Instead, both argued that this was a red line that should not be crossed, with Bachelet stating that ‘Companies developing and distributing surveillance technologies are responsible for avoiding human rights abuses’ (UN High Commissioner for Human Rights, 2021).
The Pegasus case raised the issue of how to respond to ‘regulatory disasters’ (Black, 2014) flowing from the ways in which poorly designed regulation can itself materially affect people's everyday lives. With the flurry of frameworks, principles, and legislation to govern data globally, new challenges, and new potential for such disasters, are emerging. Yet the governing of data does not currently provide red lines to prevent certain uses entirely. We will explore how providing such limits could ensure not just that the economic functions of data are regulated, but that regulation is designed to address injustice and harm at its core.
Repeated waves of globalisation (in the form of trade, capital, goods, mobility of people, and colonial empires) over millennia have generated the notion of norms as a tool for governing the agency and behaviour of state actors at the international level. Norm-making and enforcement power tends, as in international relations (IR) generally, to be weighted toward the needs of the most powerful actors, such as the European Union and the United States. However, it is not only (powerful) states but also corporations and their representatives who can engage in formal norm-making processes, and who benefit from the power shaped by international norms. It is for this reason that we interrogate the potential of norms as counter-power. We will show that the corporate capture that characterises most international norm-making around the digital is being challenged by bottom-up counter-norm-making worldwide, by CSOs explicitly setting out a different, grassroots agenda.
A guide to what comes next: first, we will provide a brief outline of international norm-making, paying particular attention to the concept of norm entrepreneurship. We will then review existing norms in relation to data, before turning to civil society's international norm-making and the substance of its claims.
A note on methods: this article presents conceptual conclusions drawing on five years of work by the Global Data Justice project, based at the Tilburg Institute for Law, Technology and Society, Tilburg University. It comprises ethnographies of the institutions governing data; the experiences of datafication among people in different regions; and the evolving foci of advocacy and civil society organisations at the intersection of digital rights and social justice. The research had a geographic focus on the majority world. This article is therefore co-produced as an outcome of several of these interactions, but the framing of the norm has emerged from our own understanding of these conversations, and of the civil society work we have engaged with and learnt from. The norm-related discussions we have had over the course of the project have focused on digitalisation as an object of governance, both nationally and internationally, and on the extent to which this governance can consider the subjective needs and experiences of those represented in the production and use of digital data.
Over the course of our five-year project the language of ‘data’ evolved into that of artificial intelligence (‘AI’), so that the civil society actors we quote may be dealing with one or the other depending on their mission, their history, the community they represent, and the socio-political environment in which they are working. Without wishing to conflate these two areas of the technological landscape, there are important overlaps. The groups whose arguments we cite here are focused on data as well as AI, and are all working on different dimensions of the ways in which data, as well as AI, can be used to appropriate resources, violate rights, and enclose what was previously public. We therefore address both as boundary objects (Star and Griesemer, 1989), but also as rhetorical objects for organising, which form pragmatic meeting-places for different needs, practices, and understandings.
Norms as our central concept: Fundamental, constraining, and constitutive
The notion of norm entrepreneurship has been used in relation to a huge variety of actors. It has become a subject of research in relation to corporate actors attempting to create conducive conditions for business on the international level (Flohr et al., 2010; Hurel and Lobato, 2018) or where shareholders attempt to influence business practices (Sjöström, 2010). There has also been limited discussion of ‘moral norm entrepreneurship’ in the religious sphere (Stoeckl, 2016). Most of the literature on norm entrepreneurship, however, relates to efforts by public actors to influence state behaviour, and by states to influence the international community.
In this article, we claim that civil society organisations (CSOs) are critical actors in norm-setting, and that their agency does not have to be directed (solely) toward influencing state behaviour in order to be considered norm entrepreneurship. Taking the existing normative power of CSOs seriously is also a way to actively pluralise public knowledge of data's effects and to undermine the international normative consensus. This logic is not only applicable to technology: it is increasingly in play in the sphere of climate justice and in the ecocide movement, which expands the definition of a norm entrepreneur beyond state actors to include civil society groups. These processes of norm-making emerge through counter-processes of deliberation within and among communities worldwide, and identify bottom-up norm formation both as a negative obligation that limits behaviour and as a positive obligation that requires proactive participation on the part of authorities. In doing so we follow both Fraser (2020) in rejecting the Westphalian assumptions of most of the literature on deliberation and norm-building, and Fanon (1963) in identifying resistance, refusal, and regrouping that is not recognisable as power to those involved in current hegemonic processes.
We will make the case that a different high-level agreement is emerging from discussions within CSO groups, and that it is important to engage with these perspectives, because embedded in these norm-making ideas are the ways in which people have resisted. As Rajagopal (2005) has argued, it is critical not just to have theories of governance, but also theories of resistance, to ensure that the voices of those who are marginal to knowledge-making processes are heard in meaningful and careful ways.
We are interested in exploring a standard for what is fundamentally (un)acceptable as a benchmark for data governance at a global, regional, and local level. With CSOs as the locations for knowledge about these benchmarks, the questions we begin with are first, what are CSOs worldwide doing to establish norms, and what are the processes and the substance of the normative consensus? And second, what would it mean to enforce such a bottom-up norm, given that one characteristic of classic international norms is that states have the monopoly on enforcing them?
Our starting point is to understand that the aspects of norms which concern fundamental unacceptability are peremptory, or jus cogens: norms accepted by the international community as ones from which no derogation is permitted.
These norms are therefore articulated to protect values seen as fundamental to the international community, as understood by a majority of states (International Law Commission, 2022). A further distinguishing factor is that peremptory norms have an erga omnes character: the obligations they create are owed to the international community as a whole, and all states have an interest in their observance.
Peremptory norms have so far originated from customary laws or treaties. These norms can be used to determine state behaviour and conduct, as a mechanism for building common understanding and cooperation between states, and to reduce the potential for conflict. Peremptory norms include the prohibitions of genocide, apartheid, crimes against humanity, slavery, torture, and maritime piracy (Frowein, 2013), but the list is increasingly seen not as exclusive, but as one that can grow through practice. For instance, international environmental legal scholars have discussed the inclusion of environmental crimes, including ecocide, as a peremptory norm of international law (Kotzé, 2016). They argue that recognising such crimes as a peremptory norm will enhance the ability of legal systems to hold states, individuals, and corporations to account for the large-scale destruction of the environment. However, there remain practical obstacles, including a lack of consensus on the definition of such a norm, a lack of political will to enforce it, and limited regulatory capacity to do so (Saul, 2015).
These definitional difficulties also translate into conversations around data. There have always been norms attached to the development of the data economy – notably the Fair Information Practice Principles (Gellman, 2014), deriving in turn from the Nuremberg Code, which set out the notion of informed consent and the right to refuse participation. However, these have lost traction as technology has evolved and scaled. Today, the principles inherited from the Nuremberg Code come explicitly into conflict with ‘unlocking the value of data’ (European Council, 2021; WEF, 2013; World Bank, 2022) and the homogenisation of contextual meaning and claims that goes with it.
As a result, we are increasingly seeing the emergence of ‘data law’ globally. Data privacy and data protection, as well as cybersecurity, governance, regulating digital markets, and digital rights also substantially undergird the regulation of AI, which is heavily dependent on access to data in order to train models. This body of law is also emerging in dialogue with worldwide advocacy on fundamental rights and equity, which are often at risk from datafication (Taylor et al., 2025).
This means that using traditional international law instruments to articulate norms for data, such as state practice, court decisions, treaties, or the International Law Commission, can be insufficient – especially as norm-making is increasingly taking place either in transnational private organisations such as standards bodies or international fora, or amongst state-sponsored organisations such as the Global Partnership on AI. As such, the formal space for norm entrepreneurs on data is still centred on Internet governance (Radu et al., 2021), where fora such as the Internet Governance Forum have become spaces from which values and shared understandings are proposed and discussed (Kettemann, 2020; Milan & ten Oever, 2017).
The case for the importance of formal norm-making has been made predominantly by the field of International Relations. IR has studied how norm-making became a tool for governing the post-Westphalian world, and how it increasingly became formalised to the point of being considered the exclusive preserve of states. In this literature, norms can be understood as a set of shared understandings that create standards for acceptable behaviour. Finnemore and Sikkink (1998) identified how norms provide a framework around which people can come together to make claims, organise, and articulate grievances, and how norms can be used to determine standards of appropriate behaviour. At one level, therefore, norms have a constitutive function: they determine criteria for acceptable behaviour and create categories and functions. They also, however, take on a constraining function, where they limit what can be understood as permissible and impermissible behaviour (Finnemore and Sikkink, 1998).
Finnemore and Sikkink described norms as coming into being in three stages. The first is that of norm emergence, in which norm entrepreneurs attempt to persuade a critical mass of actors to embrace a new norm; the second is a norm cascade, in which, once a tipping point is reached, the norm diffuses through socialisation and imitation; the third is internalisation, in which the norm acquires a taken-for-granted quality and is no longer a matter of broad debate (Finnemore and Sikkink, 1998).
Existing norms in relation to data
Norm-building processes specifically in the field of data governance are already occurring on multiple levels, on the part of international organisations, national governments, and civil society actors such as think tanks and rights organisations. The most documented effort at establishing international norms on data, however, has so far been on the supra-national level, where the core assumption on the part of almost all actors involved is that data must and will flow as freely as possible and can be addressed as a commodity during all parts of its lifecycle. This norm is being embedded into international data governance and has gained such discursive power in the spheres of technology governance and IR that it is hard to find any discussions on the state level that do not begin from this assumption. Examples include the UN General Assembly resolution on the need to respect, protect, and promote human rights in the development, deployment, and use of AI (UNGA, 2024); UNCTAD's work on governing data trade (UNCTAD, 2021); UN STATS' contributions to normative data governance (UN STATS, 2023); the comments on red lines and AI from Michelle Bachelet, quoted above; the movement for a digital Bretton Woods (Zysman, 2023); the WTO's ongoing efforts to establish frameworks for the trade in data (World Trade Organisation, 2021); the G20's discussions on data (Global Solutions Initiative, 2023) and those of the Global Partnership on AI (GPAI, 2024); the European Union's digital policy (European Union, 2023) and the African Union's (African Union, 2022); international organisations arguing for a global data convention; and the WHO's effort to establish a treaty to deal with COVID data (World Health Organisation, 2021). A recent discussion in the UN Data Forum has also called for a Global Data Convention focusing on privacy, data exchange, data interoperability, and transparency, with the goal of harmonising data governance (Me et al., 2021).
What all these processes have in common is that they incorporate the assumption that data must and should flow to facilitate transnational digital trade (and in connection with this, the development of the AI economy) unless there is a specific and clearly demarcated reason to restrict it. For example, the EU's AI Act, like the rest of the EU's data regulation framework, centres governance models that rely on the free flow of data, while also naming the importance of ‘reasons of public interest and of rights of persons’. 1 Similarly, the WTO's draft rules for international trade in data propose only to allow exceptions to the free flow of data based on national data protection regulations, with other restrictions considered illegitimate (WTO and WEF, 2022). Data that has been de-identified or otherwise does not lead to identifiability of individuals (‘non-personal data’) is proposed to be free for all to use.
The World Bank's World Development Report 2021 (World Bank, 2021) clearly demonstrates how the notion of civil and political rights becomes an operational safeguard for the economic value of data. The report focuses closely on how both human and digital rights should be incorporated into data governance, but does so in order to achieve a free market for data. The terminology of economic worth appears in different forms 477 times, and although ‘rights’ are cited 206 times, only half of these citations relate to human, civil, and political rights. The rest relate to property rights, including intellectual property, data ownership approaches, and data portability.
The agenda-setting power of this trade-focused norm-building process has exerted a gravitational force on data governance. Most international organisations and governments' independent processes of establishing legislative frameworks and codes of practice have adopted the core norm – that data is a commodity that must flow – as a non-negotiable assumption. This has led to a certain amount of conceptual acrobatics, where normative statements about rights and non-economic forms of value have had to be layered on top of this principle. As the Indian organisation IT for Change puts it, ‘Most mainstream approaches to digital technologies for development are either techno-utopic or promote a market-fundamentalist approach, often both together’ (IT for Change, 2023).
These acrobatics have become so normal that they are rarely questioned in policy discussions.
For example, in 2019 the OECD introduced five value-based principles for the ‘responsible stewardship of trustworthy AI’ (OECD, 2019). These include ‘inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security, and safety; and accountability’. While the normative content of each recommendation differs, there is an emphasis on the economic benefits of building stewardship for the growth and sustainability of the digital economy, led by market-based objectives as much as by concerns about the risks of the use of AI.
There are exceptions to this economic focus. For instance, UNESCO's recommendations on the ethical development and deployment of Artificial Intelligence, already adopted by UNESCO member states (UNESCO, 2021), argue for a universal framework of values, principles, and actions consistent with international law and designed to embed ethics into all stages of the AI lifecycle. The recommendations speak to a need to promote human rights and fundamental freedoms, focus on intergenerational equity, place importance on the environment, and encourage cultural diversity through pluralistic dialogue and the sharing of benefits in terms of access to development and knowledge in the field of AI. If operationalised, these recommendations will be explicitly in tension with the economically oriented consensus driving regulation at the national level.
Civil society's international norm-making
Around the world, we have seen a set of alternative normative assumptions emerging from CSOs, both nationally and internationally, over the last decade. In this section, we document perspectives from a variety of CSOs engaged in digital rights, digital policy, and data governance. We identified these groups through an examination of their motivations and backgrounds, as well as their positioning both in national debates and in international contributions. Our interactions with them arose in some cases directly in the course of the project, and in others through recommendations from other CSOs we encountered. We subsequently studied the knowledge these groups produced, in the form of projects, policy documentation, and manifestos.
From our study, we have found an explicit pushback against the commodification of data and the economic and governmental configurations that arise from it. These norms contest the market framing of digital resources and practices. They claim different sources of legitimacy than market-oriented norms, but have an international resonance and appeal because, like recognised norms, they define common problems, generate collective agency, and create transnational alignment around common goals.
These normative countermoves can be grouped into three categories, each of which engages with different streams of debate and resistance.
The first of these is the claim that data constitutes personal identity – digital bodies – and should therefore be awarded dignity as an extension of the people from whom it derives. In the US, the Our Data Bodies project writes: ‘data is our stories. When our data is manipulated, distorted, stolen, or misused, our communities are stifled, and our ability to prosper decreases’ (Our Data Bodies, 2023).
This claim that data and bodies are commensurate provides leverage for advocacy on behalf of marginalised communities in particular, as in Nepal, where the organisation Body and Data (Body and Data, 2023) discusses how to ‘enhance understanding and access to information on digital rights among women, queer people, and marginalised groups where they are able to exercise their rights in a safe and just digital space’, acknowledging that this realisation of rights and protections emerges differently for different groups. In Los Angeles, the Stop LAPD Spying Coalition offers an algorithmic ecology model, which demonstrates how bodies are criminalised through aspects of land, race, and poverty, and how these have implications for how people are controlled, surveilled, and displaced (Stoplapdspying, 2020). A related but differently oriented normative claim can be found in the work of the Rethink Aadhaar movement in India, which seeks to reform the country's biometric ID system (Rethink Aadhaar, n.d.). This group has argued that Indians’ civil registration data in the Aadhaar database must be treated by the state with the same respect with which the state treats its citizens, and not traded to private parties to stimulate business.
These claims about data as people share a common focus on dignity, but differences in their positionality lead to distinct proposals as to what constitutes an appropriate governance response to the commodification of data. Thus the warning of Roberto Kozak, defender of human rights under Pinochet in 1980s Chile, ‘…that the files [of the disappeared] must be carefully taken care of, that they were not files, they were human lives, worthy of respect’ (MacAskill and Franklin, 2016). From a very different, economic, perspective, Gurumurthy and Chami (2022) argue that when data is used to create economic value, that value is tied to citizens and cannot be alienated from them.
There also exists a more radical claim that the harms caused through data are real harms in the world, and that data can and should therefore be equated with bodies. This more critical perspective exists in both academic and civil society streams of thinking: for example, in Haggerty and Ericson's notion of ‘data doubles’ (Haggerty and Ericson, 2000), which are extracted and commodified, activities which are illegitimate because they commodify people themselves in concrete ways, causing not digital risk but actual harm (Sandvik, 2023). According to this view, datafied bodies are bodies and datafied violence is violence (D’Ignazio et al., 2022; Hoffmann, 2018; Ricaurte, 2019), and there are corresponding implications for governing data in the economic and civil spheres (Gillwald et al., 2022).
In a second theme, there is the assertion that data constitutes community, and by extension, land. This claim has been made specifically by the Māori Data Sovereignty Network, Te Mana Raraunga, who argue that data is living, that it has relationships to the environments in which it is produced, and that it must be useful to the collective (Te Mana Raraunga, 2022). In their principles, the network argues that any regulation around data must recognise that Māori communities have authority over the use and extraction of their data, as it relates to their culture, identity, and practice. This regulation must ensure that authority over the data resides with the community, and that its use is to their benefit. This assertion is also being made by indigenous groups in Chile (Lehuedé, 2022), with the argument that data is both relational and forms part of the groups' unceded territory. Cheesman's study of the beta-testing of digital projects by technology vendors on refugees in camps (Cheesman, 2022) argues that these projects technologically sequestrate refugees' material practices of exchange and mutual responsibility, using them as a factor of production in higher-income locations.
This idea of data-as-relations is also useful to explicitly decolonial thinkers such as Mhlambi (2020), who uses the notion of Ubuntu – relational personhood – to argue that digital extraction and exploitation in relation to AI systems should not be seen as impacting only individuals, as they are in human rights frameworks and more generally across legal regimes based on liberal Western framings of the individual as rights-holder (Cohen, 2012). Instead, starting from an Ubuntu understanding, ‘relationality is the nature of reality and the measure of ethical living’ (Mhlambi, 2020). Orienting governance accordingly means that communities, along with individuals, should be considered the unit within which technology's benefits and harms occur, because ‘the distribution of power within and between these relationships enables social progress, social harmony, and human dignity’ (Mhlambi, 2020: 15). Also in line with this view is the claim that Indigenous data should be governed according to the norms of sovereignty, independence, and integrity (Carroll et al., 2020), but also potentially exclusively, so that only those with a right to data can access and use it (Kukutai and Taylor, 2016).
Community organising around problems of technological extractivism frequently takes the form of what we term norm entrepreneurship here: one high-profile example is the anti-extractivist norm-building effort undertaken by Tierra Común in Latin America. The collective invites its members to ‘imagine a future where the terrain of human life does not involve extraction of data that discriminates between us and separates us from our own lives’ (Tierra Común, n.d.). Tierra Común's normative statement most closely aligns with the vision of data as social relations outlined above: data is a common good because it grows out of lived practice. It embodies sociality because its meaning derives from communications and from relational actions. Closely related to Tierra Común, the Non-Aligned Technologies Movement (NATM), a worldwide network of civil society organisations, argues for the ‘purposeful implementation of digital technologies in a way that affirms each community's power of self-determination and governance’ (NATM, n.d.).
This ‘data as community’ approach is also visible in the work of community organisers in Toronto, who rejected Google's Sidewalk Labs’ 2017 project to monetise data about their neighbourhood, and its claim to ‘urban data’ as a new formal category for data governance purposes (Tusikov, 2019), which would have allowed it to extract economic value without reference to communities' rights over the data stemming from their activities and public spaces (Wylie, 2020).
This stream of thinking and contestation resembles the critical elements of the ‘data as bodies’ stream in that it names digital dispossession as equivalent to the appropriation of land or lives (de Souza et al., 2024). It adds the notion that this kind of digital dispossession is unique to each affected community, because it prevents unique and culturally specific forms of value from being realised. In this view, data is not only people but also history, culture, and the fabric of community. This comes explicitly into tension with ‘data stewardship’ approaches in which these relational aspects are purposely deconstructed.
A third stream of thinking addresses data as labour. An early articulation came from Fuchs, who wrote that ‘The notion of digital labour signifies that the time spent on Facebook and other corporate platforms is not simple consumption or leisure time, but productive time that generates economic value’ (Fuchs, 2014: 98). This notion has become more current with the rise of AI as an object of economic and labour policy, ‘dependent on human labour, personal data, and social behaviours that accrue over long periods’ (Pasquinelli and Joler, 2021), and with the armies of gig workers undertaking data labelling and content moderation in underpaid and exploitative conditions (Siapera, 2022). This was apparent in the recent publicity around the conditions of Kenyan workers who were paid less than $2 per hour to curate data, identifying toxic content such as hate speech or sexual abuse through data labelling on an OpenAI project (Perrigo, 2023). This kind of labour is also a mental health risk (Dzieza, 2023), constituting a new category of ‘dirty, difficult and dangerous’ work.
Many of the myriad processes of digital extractivism can be categorised as labour (Iyer et al., 2021; Winterhalter, 2022). As Crawford and Joler demonstrate in the case of the Amazon Echo, the user performs the function of ‘a consumer, a resource, a worker, and a product’ (Crawford and Joler, 2018). As a consumer they receive convenience; as a resource they provide their voice to a large dataset; and as a worker they provide feedback, thereby contributing to the development of the product. This view of data as inalienable aligns with the previous two themes and adds another source of leverage by connecting struggles over digital exploitation to labour rights, and thus to a history of organising and the development of legal rights.
These three positions differ in their conceptual starting points, their advocates' positionality, and the redress and governance responses they demand. However, we argue that they are emerging around a normative consensus. This is not to diminish the contextuality that gives them their power, but to demonstrate overarching commonalities which, while conceptually plural, together form the shape of global, normative counter-power.
Specifically, claims regarding embodiment and the embedding of data in historical reality and cultural practice are effective starting points for more explicit norm definitions. This in turn gives more general purchase on sectorally oriented technologies in particular, whose application tends to be justified according to economic necessity. For instance, a critique of edtech that links the exploitation of data-as-labour to the problem of data-as-bodies can show how this class of technology, as well as creating more efficient testing and tracking in schools, extracts children's engagement (Anderson, 2024), as well as their educational trajectories as marketable commodities. It also leads teachers and schools into trajectories of engagement with proprietary cloud and software services which then hollow out local provision and make other education levels and data flows more amenable to capture by for-profit service providers (Renz and Hilbig, 2020). Similarly, the critique of large language models posed by Research ICT Africa (Research ICT Africa, 2022) links AI to climate justice, inequitable resource distribution, and social movements' transnational claims to recognition and standing.
This is where CSO norm making becomes uniquely powerful, because it demonstrates how norm-formation and enforcement are stochastic and complex rather than linear in their progression, and that they take place over widely varying time frames depending on the degree of cultural reversal they aim for. Graeber and Wengrow have argued that there is not a linear progression from autocracy to democracy over the course of human history, but instead a series of ‘bold social experiments’ with different ways of organising community life, with societies continually reinventing politics in a predominantly bottom-up way (Graeber and Wengrow, 2022). We see the development of norms regarding just technology as obeying this logic: a continual process of responding, reshaping, and claim-making.
So, what is the common narrative in the different proposals being put forward? In our view, visible in these, and other such coalitions, is a normative consensus around data's inalienability to the effect that ‘digital’ rights are intersectional and cannot be confined to the realm of internet governance, freedom of speech, or access to connectivity. The potential for these efforts to be seen as explicit norm-building increases with their intersectionality; where organisations start to address education, gender, labour policy, welfare entitlements, law enforcement, agricultural policy, democratic representation, housing, public health, refugee rights, and a host of other issues through the lens of digital governance, and to do so predominantly through networked strategies rather than adopting single targets per organisation – as most of the organisations cited above are doing.
The substance of civil society's claims: ‘Nothing about us without us’
A critical overlap in the different articulations of data as bodies, as social relations and as labour, is the inalienability of data from people, and in turn how data structures and systematises relations between people, and between people and the structures used to control them. These examples highlight the ways in which data is situated knowledge, how it has embodied meanings, and the ways in which it can have material implications. Underpinning this inalienability is the notion of solidarity: collective control and ownership of data which can sustain and create more public value (Prainsack et al., 2022). A solidaristic approach to data governance would place emphasis on process – ensuring that ‘claims for recognition and respect have been heard and considered’ (Braun and Hummel, 2022). This kind of solidarity-making brings together digital rights struggles with existing organising efforts to create structures that can challenge injustice (Sharma and de Souza, 2023). Here we draw on EDRi, who connect solidarity in digital rights with themes from the South African disability movement and with the slogan ‘nothing about us, without us’ (Meyer, 2021).
In its earliest form, the slogan became an organising motivation to challenge the structural inequalities and systemic barriers that denied persons with disabilities the right to take decisions in their own interest (Charlton, 1998). In the context of data, ‘nothing about us without us’ offers a core normative statement that, like the work of the community organisations described above, acknowledges that data has material implications for people. It speaks to the inseparability of data and people (as in the work of Our Data Bodies or Body and Data), to the living nature of data and its relationship with the environment (as in the work of Māori data sovereignty groups), and to a purposeful engagement with people to ensure that their interests are represented (as in the work of Rethink Aadhaar or NATM). In these illustrations, it is abundantly apparent that the norm is not something being proposed but one that is already being enacted.
The substance of this norm points to the importance of recognising a plurality of voices in articulating distinct experiences, offering ideas and vocabularies, contributing to forms of knowledge, and participating in decision-making processes (Baxi, 1998). In doing so, it articulates the need to reclaim power for those who are directly affected by data-related decisions, highlighting the primacy of agency and opening possibilities for contestation from diverse points of view. Also intrinsic to the norm is representation, since where this is lacking there are likely to be not only procedural injustices, but also substantive and epistemic injustices on account of a lack of engagement with those who are affected the most (Arora, 2016; Lopez Solano et al., 2022).
As a framework for action based on common understandings, this norm of ‘nothing about us without us’ epitomises, in our view, the work of the CSOs which are collectively aligned with the idea that data has become an important tool for representation and autonomy at the community level, and that data governance can either support or undermine claims to social justice. This movement is plural: it reflects different communities' needs and experiences. We can also see resonance with data justice research and practice, which has a dual nature: it foregrounds the incommensurability of communities' needs from data and the inevitable misalignment of current governance approaches with their lived experience, but it also surfaces common contestations and aspirations around self-determination, autonomy through independent digital infrastructures, collective contestation, and collective refusal of extractive practices.
A norm that includes the possibility of abolitionism-as-governance would therefore make it possible for communities to veto or re-channel the use of data, explicitly contesting the idea that data is a non-excludable good (Veale, 2023) which can be reproduced and serve different needs without negative effects. In fact, we would propose the capacity of a given governance model to encompass the possibility of leaving data in the ground as a test for whether it can be representative of people's and communities' interests. Conversely, if a model answers every question in a way that results in data flowing, it is probably not responsive enough to contestation and refusal.
The value of a norm framed in terms of ‘nothing about us without us’ is that at its core, it is flexible, plural, and applicable in different contexts. It does not carry the weight of legal cultures or traditions, nor does it require a particular institutional arrangement for it to be recognised. Instead, it makes a fundamental statement about governing data that is recognisable to civil society worldwide but is malleable enough to serve as a benchmark for different enforcement architectures.
Enforcement beyond the state
Critical to the efficacy and viability of a norm is its potential to create shared understandings, whether by creating constraints or by constituting frames of appropriate behaviour. To ensure the cascading of a norm, one important consideration is whether it is enforceable. The established theorisation of international norms cited above involves states as the only actor able to enforce them. Kayaoglu (2010) terms this ‘the Westphalian narrative’, where ‘Western states produce norms, principles, and institutions of international society and non-Western states lack these until they are socialised into the norms, principles, and institutions of international society’. In short, the Westphalian order is also a hierarchical one.
It is therefore important to note that the process of norm-making we have posited here is non-Westphalian. It challenges the realist assumption that data governance, like international relations overall, is a product of both hegemonic power and modernity (Capan, 2017) and must ultimately be determined by economic negotiations among states.
It is entirely possible to look beyond the state for ways in which norms can be enforced. In his work on the pluralism of norm enforcement regimes, Schuppert offers a helpful typology that acknowledges legal pluralism (Schuppert et al., 2015), that is, the fact that in the majority world, law emerges not just from the state but from a variety of different legal universes based on community, custom, and religion, among other sources (see also Merry, 1988). In Schuppert's typology, enforcement can happen formally through courts or other state-backed entities, but also through community-based enforcement regimes which enforce through reputation, belonging, and cultural practice, and through organisation-based enforcement, such as standardisation undertaken by private organisations.
The state/community divide in norm enforcement embodies the distinction made by Amartya Sen between niti, the formal institutional arrangements through which justice is organised, and nyaya, justice as it is actually realised in people's lives.
Strategic litigation can also enact norms beyond the state, using courts to formalise claims that originate in community mobilisation.
Conclusion
Norms are a way of governing what cannot be regulated or legislated but is necessary to our common good – and as such has value for civil society groups wanting to deconstruct and remake the data economy. The main assertion we have made in this article is that the formation of counter-norms regarding data and its uses (including AI) by civil society movements should be considered norm-making. The value of this, we have argued, is that it answers an urgent need to foreground civil society actors' agency in relation to the international governance of data, given that the process of formalising international data and AI governance centres almost entirely on the economic interests of states and corporations.
We have documented a variety of ways in which civil society actors behave as norm entrepreneurs, articulating shared understandings which in turn generate the power to shape the governance of data in both constitutive and constraining ways. States can name policy priorities and can stimulate the creation of particular digital infrastructures. Yet civil society has the power to articulate different directions, to disobey and refuse, and to create local practices of sovereignty and resistance that disrupt top-down data governance and render it inefficient or even impossible. Although this in no way resembles a classic top-down state-building process, we should not assume that it will not be successful. Instead, as Graeber suggests (Graeber, 2004) in order to understand what is already happening, we may need to redefine ‘governing’ data to mean something more distributed and plural than top-down forms of enforcement by state or international authorities.
If successfully applied, the norm ‘nothing about us without us’ would hinder the international data market, which is predicated on the ability to (legally) separate out data from the people to whom it relates. It would form an obstacle to defining and trading data as bits, bytes, and insights (profiles and data derivatives), as well as to deidentifying it and using it to train and feed AI models. It stands for competing claims to sovereignty which will, as communities find ways to enforce them, disrupt the creation and running of systems such as large language models. It also places new demands on those creating data repositories in the name of the public good, for instance in the field of health research, or in creating digital twins of cities. The main function of this norm in such cases would be to place consent, or refusal, squarely in the gift of those groups represented (rather than identified) in the data, and to channel negotiation and permission through structures of democratic representation – which would have to be created, since they do not currently exist – rather than expert-led processes of pseudo-consent.
Data governance architectures that reflect the norm we have identified would be complex, often unwieldy, and would almost inevitably result in a failure to ‘unlock the value of data’. Such architectures would, however, be more representative of the ways in which we currently govern – or aspire to govern – other resources that affect us all both individually and at a societal scale, such as our environment, the air we breathe, or the water we drink. Just as the environmental justice movement has recognised the need to operate without waiting for permission from states, we can expect to see a corresponding movement around data technologies. It may prove similarly disruptive and transgressive but is currently the only model for governance which centres an inclusive notion of what is just and sustainable, and as such will become increasingly visible in the international sphere.
