Abstract
Introduction
Consent forms with privacy policies serve as tools to make the use and acquisition of private information permissible. This practice has arguably been somewhat successful in some situations, such as in medical research. However, when applied to online websites, applications, and programs, the efficacy of consent-gathering practices encounters a familiar challenge—namely, that privacy policies or terms and conditions are rarely, if ever, thoroughly read and understood. 1 This phenomenon has many causes, chief among them that such documents are typically long, incomprehensible, and ubiquitous. As Nissenbaum (2011) argues, if proper notice is given, consent becomes impossible because of the length and complexity of the agreements (cf. Andreotta et al., 2022). Furthermore, as Solove (2013) points out, so many entities collect and use our data that it becomes difficult to self-manage our privacy. Some forms are simpler, but the sheer multitude of requests leads to cognitive overload (Lundgren, 2020a).
The practice of uninformed consent online is an ethical problem in its own right: if consent is not informed, it is doubtful that it can render the collection and use of personal data permissible.
In this article, we attempt to develop an idea of how to solve this problem—the basics of which have been explored by Lundgren (2020a) and developed in greater detail by Jones et al. (2018), Le Métayer and Monteleone (2009), and Gomer et al. (2014), among others. One solution involves software developers, and particularly their role in browser design. Lundgren, for example, suggests the incorporation of a software-based “automated functionality” (e.g. as part of a web browser), based on user preferences about what they consent to sharing in which contexts. A more advanced version involves AI and has the capacity to offer consent on our behalf based on patterns of our consenting behavior. Jones et al. suggest that “AI may be able to technically grant consent as an agent of the data subject by predicting what a particular user finds unobjectionable” (2018: 69). The core idea is to empower these software solutions to handle the task of comprehending these voluminous consent agreements on our behalf, effectively bypassing the need for both meticulous personal reading and consideration of alternate options. Our paper seeks to make an original contribution to the literature by categorizing and conceptualizing the different approaches, as each has its own strengths and flaws.
While impactful, the proposal demands careful discussion of its practical implementation. To begin, we must explore the mechanics of setting up such a system. What functionality should and could be involved? Are certain types of automated consent preferable to others, for example? 4 Additionally, before embracing this idea wholesale, it is imperative to address several noteworthy objections—specifically, first, whether the bypassing approach would actually undermine informed consent; and second, whether the information collection required to set up this solution raises ethical concerns in its own right (and if so, whether those concerns are, on balance, acceptable). We also suggest that a system of this kind can help to further improve data protection by informing users about web services with previously problematic behavior. 5
Consider the case of Bounty (a company that offers support on conception, pregnancy, and parenthood). In 2019, it was fined £400,000 by the UK Information Commissioner's Office (ICO) for sharing personal information about its 14 million users without proper informed consent.
According to the ICO, the number of records and people affected was unprecedented in its investigations into the data broking industry.
While automated consent features might sound like a fanciful idea, there are both practical and normative issues to consider, which are the topic of this article. The remainder of the article is structured as follows. In the “Practical feasibility” section, we consider the practical feasibility of our approach. Specifically, we discuss whether it can be done and whether anyone would want to do it. Our aim is not to get into the nitty-gritty of a specific technical solution (i.e. what the programming or machine-learning components of this approach might look like); rather, we approach technical feasibility from a high-level perspective, discussing overarching options. In the “Should it be done?” section, we consider some of the normative issues. We consider whether automated consent undermines informed consent, whether it erodes users’ capacity for critical thinking, whether the data it requires creates new re-purposing risks, and whether interference with existing market models is permissible.
Some of the relevant questions here are legal—that is, they concern whether an automated informed consent would be legally binding. Although highly relevant, we will set those questions aside to address more fundamental ones: whether it is feasible—in the sense that it can be technically done and that someone would do it—and whether we should do it, and if so, how. Only once these questions are settled would it be worthwhile for someone with the appropriate expertise to analyze the legal constraints and requirements related to our proposal. 9 We will nonetheless aim to make clear when what we say has legal implications that require further scrutiny. What is important here, however, is the ethical analysis, which must underpin any legal work on the issue. Hence, we see this article as an invitation to legal scholars to take up the regulatory implications of our discussion.
Moreover, our aim here is conceptual: to categorize the available approaches and their normative implications rather than to defend one specific implementation.
Finally, while we should all agree that there is individual and collective value in preserving privacy and anonymity online, there are also individual and collective costs. For example, anonymous browsing can make it harder to detect child abuse, the sale of illicit drugs, and malware distribution (Jardine et al., 2020). This is a complex trade-off that we do not aim to engage with here. However, if the solution we provide (building on previous work) is feasible, then fewer users may feel the need to browse anonymously—though that, of course, also depends on companies living up to the constraints of the law and their agreements.
Practical feasibility
There are two questions that act as a starting point for our discussion. One has to do with whether automated consent technically can be done; the other with whether anyone would be motivated to do it. We address these in turn.
Can it be done?
The key question we address in this section is whether software solutions could be designed to deal with the input of user preferences—that is, the kinds of information people wish to share or not share—and automatically give informed consent on behalf of the user. In the three subsections below, we advance three high-level proposals of how this may be realized, which illustrate that the solution can be accomplished, at least from a theoretical perspective. Whether or not such approaches should be taken up is a question we address in the “Should it be done?” section. Since there are many different applications of this software, we will focus on some simplified examples. Moreover, to further simplify the discussion, we will presume that the software is integrated into a web browser.
The minimalist option (reject all)
There are several ways in which automated consent may be realized; we start by discussing the simplest, the Minimalist Option: automatically rejecting every optional form of data sharing. Such a browser could also incorporate existing anonymity protections, such as TOR.
However, these types of protections can be essential to ensuring that the rejection of consent agreements is fully respected. By using TOR, the browser effectively provides anonymous browsing, protecting against tracking, tracing, and so forth. This is beneficial for users who do not trust that their rejections will be respected.
Given the simplicity of the Minimalist Option—and the fact that tools are already available—one might wonder what our proposal adds to what already exists. Setting aside situations in which we do log into services, where anonymous browsing offers limited protection, the main addition is automation: users need not seek out and configure these protections case by case.
It is worthwhile pointing out that Facebook has been accused of using data in violation of agreements. For example, after asking for users’ phone numbers, ostensibly for security purposes, the company was accused of re-purposing them for targeted advertising and for unifying data sets across its platforms (Hern, 2019b; cf. Véliz, 2021 for further examples). Hence, one may worry that our proposal fails to protect users against abuse. However, although it cannot prevent abuse, it can ensure that such abuse is not accidentally consented to and is hence impermissible, which may deter some abuse because of the ethical and legal risks involved.
Therefore, we believe that an automated minimalist consent could help both in cases when TOR browsing is insufficient and in cases when there is a risk of abuse. The Minimalist Option would prevent any data sharing that is within the user’s control (data collection necessary for the site to function would, of course, still present a problem). Moreover, the software could easily be designed to configure all your privacy settings to the most minimalist options (such automatic configuration cannot be expected to work for every possible service, but for major services such as Facebook, it would not be very difficult).
This would, furthermore, be easy for users to set up, since the software, once initiated, would not require further user interventions—that is, once initiated, the software would essentially execute a rule like the following: “if there is a choice to deny data from being shared, deny sharing.” Clearly, such functionality is not difficult to achieve even if consent forms vary in format. For example, the GDPR requires that the minimalist options be easy to choose (although this may not always be the case in practice, we imagine that these software solutions could also serve as a testing ground for implementation of the law, automatically reporting forms in which the minimalist option is less accessible than a maximalist option, as well as warning users of any services known to have acted in a questionable manner). This is an issue that we return to in the “Should it be done?” section.
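To illustrate how simple this rule is, here is a minimal sketch in Python (the data model and names are our own illustrative assumptions, not a real browser API):

```python
from dataclasses import dataclass

@dataclass
class ConsentOption:
    name: str
    required: bool  # strictly necessary for the site to function
    default: bool   # the pre-ticked value the form presents

def minimalist_choices(options):
    """The Minimalist Option's rule: if a choice can deny data from
    being shared, deny sharing. Only strictly required options stay on."""
    return {opt.name: opt.required for opt in options}

# The form's defaults are ignored: every optional category is rejected.
form = [
    ConsentOption("strictly_necessary", required=True, default=True),
    ConsentOption("analytics", required=False, default=True),
    ConsentOption("targeted_ads", required=False, default=True),
]
```

Because the rule never consults the form's defaults, a pre-ticked maximalist form and a pre-ticked minimalist form yield the same outcome.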
We anticipate that some may object to our characterization of the Minimalist Option as a form of automated consent. Our response here is that while it is certainly true that it does not offer much granularity, it does offer a form of control and does so automatically. That is significant in the context of why informed consent matters, as it gives data subjects some capacity to self-manage their personal data (cf. Richards, 2021).
The semi-automated option
The second approach we will consider gives users greater variability in the options for automated consent, especially in cases where they are logged into websites. Ideally, the browser would be able to help with sites that have the capability to sell or otherwise monetize data that users give up or generate. One way in which this could be realized is through the establishment of rules. Suppose a person sets up their web browser for the first time. They could be asked, in very general terms, about the kinds of things they would like to allow. If a user only wishes to accept necessary cookies when browsing, then this option could be selected. Rules could then get more complicated. Lundgren suggests as much in his paper: users should be able to adapt these rules as they see fit, for example by setting different defaults for different types of websites (2020a).
There are other examples of this kind of approach in the literature. For example, Gomer et al. (2014) have put forward the idea of “semi-autonomous consent,” which revolves around training a semi-autonomous software agent in three stages: a preference-setting phase, a consent phase, and a review phase.
An issue with this rule-based approach arises in contexts where website consent agreements do not allow for the granularity in decision-making desired by the user. In such cases, we imagine that this option could still serve a purpose, since a functionality could warn the user that the website does not satisfy their preferences (and there could be automatic fall-back options, such as sharing less rather than more). The same holds in cases where the website may legally use more data than the individual user prefers. Of course, it is a problem that current big tech practices leave users without meaningful alternatives or the ability to negotiate (Nissenbaum, 2011; Andreotta et al., 2022). But at least the presence of a red flag will require that the user explicitly consent to what will occur (even if they are not happy about it).
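A sketch of this rule evaluation, including the red-flag fallback just described, might look as follows (the data model is an illustrative assumption):

```python
def evaluate_consent_form(offered, user_allows):
    """offered maps each requested data category to whether the site
    lets us decline it; user_allows is the set of categories the user's
    rules permit. Returns per-category decisions plus red flags:
    categories the site will not let us decline but the user does not
    allow, which must be escalated for explicit manual consent."""
    decisions, red_flags = {}, []
    for category, declinable in offered.items():
        if category in user_allows:
            decisions[category] = True       # matches the user's rules
        elif declinable:
            decisions[category] = False      # fall back: share less
        else:
            red_flags.append(category)       # granularity mismatch
    return decisions, red_flags
```

The design choice here is that mismatches are never silently accepted: anything the rules cannot resolve is handed back to the user.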
There is, however, a larger problem that besets the semi-automated option: the problem of setting up the rules. While some users may be able, and have the time, to set up rules that correspond to their preferences, it is not clear how this would solve the basic issue at hand—that is, the (cognitive) overload of performing these tasks. This problem is exacerbated by the fact that data use seldom remains static. As Nissenbaum has pointed out, “the realm is in constant flux, with new firms entering the picture, new analytics, and new back-end contracts forged” (2011: 36). As changes occur, existing rules may need to be revised, since existing data could be used in novel ways for novel purposes. While the software may help when websites change their options within the constraints of what has already been defined, other changes, and the initial setup itself, quickly add up, making rule creation overly demanding even for the expert user and unfeasible for the average internet user.
One way to get around the task of setting up the rules would be to get assistance from a third party. In a recent paper, Mario Pascalev, for example, has outlined the idea of “privacy exchange authorities” (PEAs) (2017: 43). The basic idea involves three distinct steps. First, the PEAs would standardize and codify certain privacy options (e.g. never share a certain type of personal data). Second, a user would select whichever of the PEA's privacy options accords with their values. Third, the PEAs would generate a record of the user's choices, which could be used for websites and services that the user encounters in the future. The upshot, Pascalev claims, is that “instead of reading and accepting the company's privacy policy, the individual user will present their own terms” (2017: 43). While promising, Pascalev's proposal raises several concerns. One has to do with how easily users could select among the choices, the extent to which the “privacy options” would cohere with what users really want, and how easy customization would be. While changes might be easy to make in theory, in practice they may require a lot of work on the part of the data subject, as mentioned above, since users ought also to understand the implications of selecting one privacy option over another. Pascalev suggests that a “user could take sufficient time to study all the options” and that users could “receive assistance by volunteer or for-fee privacy advisors, similar to getting tax advice” (2017: 45). How this could be achieved on a large scale, so that everyday users could benefit from it, remains an open question.
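Pascalev's three steps can be sketched as follows (the profile names and data categories are hypothetical, not taken from his paper):

```python
# Step 1: the PEA standardizes and codifies a handful of privacy options.
PEA_PROFILES = {
    "strict": set(),
    "moderate": {"analytics"},
    "open": {"analytics", "personalization"},
}

def issue_record(user_id, profile_name):
    """Steps 2 and 3: the user selects whichever profile accords with
    their values, and the PEA issues a record of those terms, which the
    user can present to services in place of reading each policy."""
    return {"user": user_id, "terms": sorted(PEA_PROFILES[profile_name])}
```

The record, rather than the website's policy, becomes the artifact that travels between the user and future services.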
However, in defense of the semi-automated option, it should be said that well-designed software would allow for simpler navigation of choices than reading the consent agreement, even if it would still require both time and a substantial understanding of the choices involved. We say simpler here because users would not be required to read privacy policies in their entirety, saving time and avoiding comprehension problems. While promising, however, we believe there is a more suitable option.
The “Fully” automated option
One advantage of the semi-automated approach is that it provides users with the option and ability to create rules that cohere with their preferences and thus gives them greater control over their data privacy. It does, however, reintroduce a problem found in existing consent processes—namely, it requires users to spend time configuring their preferences and responding to alerts, and it requires a competency in understanding the rules. As the rules get more complex, this places a further burden on some users. Moreover, users may not have much room to negotiate with companies.
The third option we have in mind, the “Fully” automated option, uses machine learning to infer the user's preferences and to consent, or withhold consent, on their behalf.
The “Fully” automated option bears some similarity to the Personalized Privacy Assistant Project, which develops intelligent assistants intended to help users manage their privacy settings.
One problem with these approaches is how they handle change. Not only might our privacy choices diverge from choices made in the past, but the risks of sharing certain data might change too, which might make us rethink certain choices. This problem is discussed by Jones et al. (2018: 71): “It must also be noted that these examples of improving or automating consent do little to offer continual consent.” Call this the problem of continual consent: an automated system must track our preferences as they evolve, not merely as they were when it was trained.
There are several issues with this approach too. First, we might worry about the method by which the software grounds the automated choices. We know from the so-called privacy paradox that there is a discrepancy between revealed and stated preferences (see e.g. Solove, 2021). Indeed, part of the problem we want to solve here is that we sometimes just click through agreements out of cognitive overload, by mistake, or to save time. However, this only tells us that the solution may need to focus more on querying the user rather than relying on their actions (we think priority ought to be given to stated preferences over actions, as consent agreements—when they are fully normatively transformative—are stated agreements). Nevertheless, machine learning can still save a lot of time, since well-designed questions can cover a large set of situations and the need for queries will arguably diminish over time. Given the current problems of users not reading consent agreements, we believe it is plausible that an automated informed-consent application could make decisions on behalf of the user that more reliably track what the user would have wanted, had they the time and energy to decide for themselves.
A second concern is about the training data and its storage. If the software can use this data to make such complex choices, there is a risk that the data can be re-purposed. This data could be very valuable for data brokers since the specific settings that individuals make would reveal key insights into their behavior. Learning the preferences of the most privacy-savvy users, who they are, and what they do, for example, could help companies maximize the effectiveness of their advertising campaigns and targeted ads.
Third, another central concern is whether the approach satisfies the conditions (legal and moral) of actually providing consent (since it is, after all, a probabilistic model aimed at predicting users’ data-protection preferences). One might worry that informed consent requires that a person is informed about what they are doing; getting an algorithm to choose on your behalf seems to undermine consent.
However, these two latter concerns are normative rather than technical, and we return to them in the “Should it be done?” section.
Will anyone do it?
One initial challenge to our proposal is motivation. What would drive a company to invest in developing the software we have in mind? One response is that it could be profitable to do so. Notably, some companies have incorporated data protection into their advertising and marketing strategies. Apple, for instance, made headlines with its iOS 14.5 update, which mandated user consent for information collection by apps. Google's new initiative, its Privacy Sandbox, purports to help protect people's privacy online and give developers tools to build their businesses. 14 And WhatsApp has released an ad campaign built around its capacity to secure private messaging (HT Tech, 2022). There are also services, such as Signal, that aim to be privacy-preserving messaging services. 15 Whether or not these companies’ privacy efforts are effective and trustworthy, these changes in advertising and marketing reflect an awareness of the public's ever-increasing concerns about privacy.
In the realm of browsers, privacy benefits are also counted as a selling point. Take Brave, for instance, which proudly declares on its website that it shields users from third-party cookie tracking, invasive ads, and malware, and provides a comparative table against other browsers lacking this level of protection—an appeal to individuals who prioritize privacy, and evidence that many of these protections are already achievable. Spending on privacy is increasing too. According to a recent Gartner security report, “privacy-driven spending on compliance tooling will rise to $8 billion worldwide” (Gartner, 2020).
Indeed, there is a broad set of software and applications aimed at the privacy-minded user, ranging from search engines to communication tools. Thus, it seems clear that there is a market here and that there are companies and other organizations that are plausible candidates for developing software of the kind we have in mind. One example has already been mentioned—namely, the Personalized Privacy Assistant Project.
In summation, we presume that there is demand for products of this form and hence that there should be a market for them. Moreover, beyond market motivations, there are also people motivated by idealistic, non-profit beliefs in privacy and anonymity.
Should it be done?
In the previous section, we argued that automated informed consents are practically feasible. Here we turn to the normative side of the argument, addressing whether it should be done and, if so, how. Part of the relevant questions here also pertain to feasibility—specifically its normative side—that is, whether these software solutions are ethically desirable, all things considered.
Informed consent
As noted in the “Introduction” section, informed consent functions (if ethically satisfactory) to make an otherwise impermissible acquisition, distribution, or processing of personal data permissible. In order for consent to serve that normatively transformative role, the consenter must be decisionally capacitated and must consent freely with access to the relevant information (Eyal, 2019). An implicit consequence of these criteria is that consent agreements must not be too broad. The reason is simple: if the consent is too broad, it is questionable what the relevant information is and whether we can make a well-grounded decision even if all relevant information is provided. For example, in the medical context, a doctor may inform you about the risks of a bypass operation before you consent to it. If consent is appropriate at all in the medical context, such a consent would be appropriate. However, imagine that your doctor instead informs you broadly about medicine and treatments and then asks you to consent to any procedure they deem suitable. Such a consent would be problematic because it is too broad and badly specified (i.e. it is not clear what information is relevant for the decision).
This poses a challenge for the “bypass” or automated approach, since one might think that what the user consents to is by definition too broad. However, what we are suggesting is that the software functions similarly to a situation in which someone consents on our behalf. There seems to be nothing ethically problematic with having your lawyer respond to all your consent agreements; it is just not practically feasible. The question, however, is whether the software system can serve this function. 17
To see how this might create challenges, it is worthwhile to consider an extreme case. Imagine that you set up your account in 2023 and never read a privacy policy on the web again. If you never actually read and accept any privacy policy for the next 10 years, to what extent is the consent you gave to the automated system valid for those 10 years? After all, you may not be in a position to know which agreements the system has entered into on your behalf. We need not be too concerned with such extreme cases. The more general worry is, simply put, that the system fails to formalize our preferences. Nevertheless, the risks here should not be underestimated. If the system fails to correctly infer our preferences, then our personal data could be used in ways that cause us great harm. Depending on how badly the system performed, personal data could be used in ways that are more ethically problematic than is currently the case.
In response to this worry, we think this just means that the software needs to be configured in such a way as to minimize such mistakes. For instance, the system should be designed to recognize that there is uncertainty about the user's preferences (cf. Russell, 2019) and, hence, keep making inquiries into those preferences. The system should also be designed to recognize certain types of data collection as risky and require that users actively intervene from time to time to enable such collection.
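One way to realize this design, sketched below, is to attach a confidence score to each learned prediction and to defer to the user whenever that score is low or the requested data category is deemed risky (the category list and threshold are illustrative assumptions):

```python
RISKY_CATEGORIES = {"location", "health", "biometrics"}
CONFIDENCE_THRESHOLD = 0.9  # below this, the agent queries the user

def decide(category, predicted_allow, confidence):
    """Return 'allow', 'deny', or 'ask' for a single consent request."""
    if category in RISKY_CATEGORIES:
        return "ask"      # risky data always requires active intervention
    if confidence < CONFIDENCE_THRESHOLD:
        return "ask"      # uncertain about the user's preference: query them
    return "allow" if predicted_allow else "deny"
```

Each answered query can then feed back into the model, so the proportion of requests resolved automatically should grow over time.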
One may worry that this solution faces a dilemma: either the system causes precisely what we wanted to avoid (i.e. the inefficacy and cognitive overload of having to react to an endless amount of information and options), or it will err in representing our wishes. We reject this dilemma: just because the software queries the user occasionally, it does not follow that it cannot unburden the user. A 2008 study suggested that it would take the average user 244 hours to read all the privacy policies they receive in a year (McDonald and Cranor, 2008). Obviously, this is unsustainable for the average user. The occasional query, on the other hand, could realistically be completed by users. Indeed, we see this as a design challenge. Occasional notifications could pass information to users in cases where the software (powered by AI and machine learning algorithms) deemed it important. For example, in certain contexts, the software could inform users that a website had requested a certain type of data and that the request was automatically rejected—or, perhaps more importantly, notify them when information is shared. It would be important that these “updates” not be given all the time, as this would reintroduce the cognitive overload issues we described above.
When it comes to system accuracy, we think there is no reason, in principle, to deny the transformative role of the consent process on the basis that there may be machine errors. As noted earlier, there is no reason to think that current praxis is error-free. Consider for yourself how often you accidentally click “accept” on a cookie request just to quickly get rid of the dialogue. Thus, our argument is simple. First, for automated consent to be normatively transformative, the system's error rate must be lower than or equal to that of the human user. As we argued before, we think this could be the case, though we grant that an empirical investigation is required to corroborate this claim. Second, the errors the system makes should carry no greater moral weight than the errors the human user makes. Moreover, to further reduce the risks involved, these types of software applications should only be used in limited contexts and in situations where there are basic legal and normative safeguards. As should be clear, this is satisfied by the focus we have here—and what Lundgren originally intended—when talking of websites’ cookie requests.
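The two conditions of this argument can be stated compactly. The following is a schematic rendering of the argument, not a measurable implementation; the variable names are ours:

```python
def normatively_transformative(system_error_rate, human_error_rate,
                               system_error_weight, human_error_weight):
    """Condition 1: the system errs no more often than the human user.
    Condition 2: the system's errors carry no greater moral weight."""
    return (system_error_rate <= human_error_rate
            and system_error_weight <= human_error_weight)
```

Both conditions are necessary: a system that errs rarely but catastrophically fails the test just as surely as one that errs constantly.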
A different worry concerns the legal relationship between data subjects and AI agents of the kind discussed in the “‘Fully’ automated option” section. In drawing an analogy with the law, we said above that there is nothing ethically problematic with having your lawyer respond to all your consent agreements, and so there might be nothing ethically wrong with getting an AI agent to do it. In the legal context, as Deborah DeMott (1998: 301) notes, lawyers can act on behalf of their clients, “with consequences that bind the client.” A lawyer can act as a client's agent in both transactional settings and in litigation (DeMott, 1998: 301). This can occur in cases where the client would not be able to comprehend what is occurring in the legal setting. As a result, “Many lawyers, especially in litigation settings, make decisions with significant consequences for the client without the client's knowledge or assent” (DeMott, 1998: 303). Some clients, DeMott also suggests, will lack the expertise to “supervise” the lawyers’ actions, and they will also be unable to detect errors. One issue with applying this model to automated consent has to do with the nature of AI. As DeMott suggests, lawyers are themselves accountable agents, subject to professional duties, discipline, and liability; an AI system is not accountable in this way.
Character (critical thinking)
One important consideration of bypassing online consent procedures is the effect it might have on the development of our online competencies. There is a worry that by automating consent practices, we will withdraw ourselves from the online decision-making process to such an extent that it will undermine our ability to think critically about how best to operate online.
An interesting parallel here can be found in John Stuart Mill's Considerations on Representative Government: “if we now pass to the influence of the form of government [democracy] upon character, we shall find the superiority of popular government over every other to be, if possible, still more decided and indisputable” (1977 [1861]: 406).
One immediate difference is that much of our online life is repetitive, whereas the kinds of things that occur in a democracy are constantly changing and evolving. Removing oneself from the democratic process also removes oneself from one's civic duties. For example, someone who removes themselves from the democratic process cannot let political leaders know how certain policies are affecting their lives, in either positive or negative ways. Citizens who believe a government is not doing what is best for their country—failing to tackle climate change, infringing upon indigenous rights, or failing to tell the truth—can vote a party or politician out of office. They can make their own voice heard, and in the process cultivate a form of control over their own lives. Getting someone to vote on your behalf is accordingly fraught with danger, because voting is such a personal choice: it takes away an individual's opportunity to think about how certain policies affect their existence and prevents them from voicing their own opinion. Removing oneself from the process of reading hundreds of privacy policies, which often ask for the same kinds of data, does not seem comparable. Of course, an individual's capacity to read and understand privacy policies may fail to be cultivated as a consequence of automated consent, but we do not see this as a large problem, since we do not see online consent processes as having intrinsic worth. We see them as important, of course, but only for instrumental reasons pertaining to how they help individuals control how their data are shared. If the same results could be achieved with automated consent—or, as we argue, users could be given even better control—then we do not see this as a problem.
A relevant parallel can be drawn from John Danaher's (2016) discussion of the threat of algocracy. The worry Danaher raises concerns situations in which, on the one hand, AI systems are better at decision-making than humans, so we need them for important large-scale public decision-making (e.g. bureaucratic and legal). On the other hand, in liberal democracies, we think it important to maintain some sort of human control over fundamental societal decisions. Given varying degrees of algorithmic opacity, it may prove difficult for humans to enact their civic duties and maintain democratic control over their societies. Yet if AI systems are highly beneficial, giving up those benefits would also be dangerous, since democracies would risk being outcompeted or otherwise threatened in the long term. Such a risk, however, is clearly different from allowing a system to facilitate your consent agreements for you—something most of us are not doing very well to begin with. Although such decisions may sometimes be both individually and collectively important, we maintain that individual users can retain control through the system by shaping—manually or through the machine's queries and learning—the way it makes decisions (indeed, that is the whole idea). Moreover, it should be feasible for the system to produce the set of rules it applies based on the user's inputs, which would allow for the transparency Danaher worries about in democratic societal decision-making.
Still, might some complacency set in? We recognize that there may be a small risk depending on how the system is designed; however, it is a risk that is outweighed by the benefits of time-saving and the level of control that automated consent brings. Moreover, it is also possible to design the software in a way that counteracts complacency, for example by periodically prompting users to review and confirm their preferences.
Re-purposed data problem
One further problem concerns the re-purposing of data. 19 Suppose you consent to a browser saving your settings—your privacy preferences—so that bypassing can occur. If such settings are elaborate or based on machine learning, then that data would potentially be privacy-sensitive. One might therefore ask whether the solution simply pushes the problem into a new arena. Could you trust that such data would not be shared or used in some other way? For example, data about data subjects' privacy preferences could be very valuable for targeted advertising. 20
We think this can be resolved in various ways. The worry has clear implications for data security and for restrictions on processing and re-purposing: simply put, the data should be used only to provide functionality for the individual user, and the security features of the software should ensure that this can be guaranteed. Moreover, the risks involved are arguably smaller than those that arise when we mistakenly agree to overshare our data under the current regime. Giving express, though not informed, consent to a website to track and follow you, or unwittingly consenting to a company collecting your phone number or location, can cause great harm. The kinds of data that would be created and stored to power and run automated consent would still pose risks, but not risks as high as the former, especially if the collection of those data is done under an agreement that they cannot be accessed, processed, or shared for other purposes.
Interference with market models
So far, we have focused on the user side of the question. But it is also worthwhile to ask whether there is a potential harm for website owners or other parties (e.g. the business model of companies such as Meta or Alphabet partly depends on the worldwide collection of data). It is reasonable to presume that the type of software solution we discuss will result in less data being shared with external parties, which would then affect their market models. One might wonder to what degree it is ethically permissible to intervene in this business model (see Lundgren (under review) for a relevant but preliminary discussion). However, we think the answer in this case is simple. If the business model is affected by automated informed consents, and these are otherwise ethically permissible, then it seems that the business has no claim, for the simple reason that its business model relies on users' oversharing of data. That is, if automated informed consents are permissible, then they are so partly in virtue of their ability to provide, on behalf of a user, a valid consent that reliably matches the user's preferences (e.g. the choices they would have made had they had time to inform themselves), or at least to achieve this to a degree equal to or better than the current regime. If that is the case, then it seems that any complaint about interference with the business model depends on the current abuse of users.
Technology and responsibility
One significant technical issue, with normative and potentially legal implications, is that of "defective" automated consent. Le Métayer and Monteleone (2009) suggested two ways in which this might occur. First, a bug might arise in the software.
Le Métayer and Monteleone suggest that in such cases a data subject can respond in several different ways: "first, he can explicitly object to any further processing of his data on the basis of 'legitimate ground'; he can also turn against the supplier of his Privacy Agent to get appropriate indemnifications or compensations (provided however that he has executed with this supplier a contract which is sufficiently protective)" (2009: 139).
Another problem with Le Métayer and Monteleone's claim that a user could "turn against the supplier of his Privacy Agent to get appropriate indemnifications or compensations" (Le Métayer and Monteleone, 2009: 139) is that it may be difficult to determine who exactly is to blame for the decision that resulted in the mistake. There is substantial debate on whether machine learning, in particular, results in what is known as a "responsibility gap" (Matthias, 2004), which may be understood either as a situation in which it is impossible to assign responsibility or as one in which we are unable to assign responsibility (i.e. the gap may be either metaphysical or epistemic). This issue has been discussed broadly in both ethical and legal debates and has been particularly common in the discussion of the ethics of crashes involving autonomous vehicles (see e.g. Nyholm, 2018 and Hansson et al., 2021 for two relevant overviews).
Concluding comments
In this article, we started by arguing that automated consents are feasible. We presented three versions, one of which is arguably already available: the minimalist option (reject all), the semi-automated option, and the fully automated option. Next, we turned to discuss the ethical implications of the proposal. We suggested that while there are some risks associated with automated consent, those risks are far less than the ones associated with current consent procedures. Of course, the proposal we have sketched here is abstract in nature and devoid of technical detail. Our view is that before those technical details can be developed, a series of foundational, conceptual, and normative issues needs to be grappled with. We do not take ourselves to have settled any of these issues in this article. Our hope is that we have done enough here to encourage others to contribute to this important and pressing topic. 21
The main goal of this article was to introduce in more detail the idea of automated informed consent. Our hope is that this article will stir a debate on some highly relevant topics. We envision a fruitful debate on the topic proceeding in at least three directions. First, a deeper normative discussion about the underlying principles of consent, whether those match up with automated consent, and how that compares, on balance, with the current privacy management regime. Second, an in-depth applied technical discussion that goes more deeply into how it could be done. Third, a discussion addressing the legal aspects of automated informed consent.
