Abstract
From comparative psychology to positive psychology, the initial notions of hopelessness and helplessness generated an immense body of research into the causes of this particular form of inaction and its remedies. What began as work on rats and dogs would culminate in the theory of learned helplessness (LH) and, in part, the formal beginnings of positive psychology and the broader science of well-being toward the end of the 20th century. Alongside psychological inquiries into hopelessness, helplessness, laziness, industriousness, and later optimism and character strengths (like persistence)—inquiries that began with laboratory animals but with eventual human models in mind—American welfare policies were also evolving. Welfare changed from a focus on the psychological causes and consequences of poverty during the Kennedy and Lyndon B. Johnson administrations toward an increasing suspicion of the welfare system and a reframing of its recipients as needing to take personal responsibility. As welfare criticisms and reform increasingly adopted the language of responsibility and self-governance—reform built around longstanding assumptions about the race, gender, and motives of the welfare recipient—LH research was a central thread in the rise of early positive psychology and its shared vision of responsibility, optimism, and character.
As neoliberalism became a political-economic requisite for the Western world, the importance of individual marketplace responsibility resonated with neoconservatives, libertarians, and democratic liberals alike. Numerous scholars have written about the relationship between psychologies and self-understanding, such as psychotherapy’s complicity with consumerist versions of the self (Cushman, 1996), or considerations of the circuitry of psychology and its publics (e.g. Pettit and Young, 2017). While a significant volume of histories focuses on neoliberalism and other variants in the ideological and material networks of political-economic change, this article will be more focused on the impacts of these values on both psychological research and its publics. As Nikolas Rose wrote during the 1990s, liberal and democratic spaces aligned with individuals governing ‘themselves as subjects simultaneously of liberty and responsibility’ (Rose, 1996b: 12), noting elsewhere an emphasis on non-social strategies of governance, such as valorizing personal responsibility (Rose, 1996a). Some psychologists argue that the consequences of rising neoliberal policies have also contributed to an emerging subjectivity that globalization has helped distribute to labour-rich parts of the world (Bhatia and Priya, 2018; Teo, 2018). A person is seen as an autonomous and individualized working subject ‘whose personal and professional goals are blurred and work life is seen as part of a larger creative project of seeking fulfillment and fun’ (Bhatia and Priya, 2018: 656). Several of psychology’s approaches and topics, including positive psychology, have been positioned as sustaining and normalizing several tenets of the neoliberal system, including the ‘selfways’ or paths of imperative growth toward well-being, and that of self-regulation toward personal success (Adams et al., 2019: 191).
Others have begun to chart the intellectual and political history of psychology’s transition from self-esteem to forms of self-reliance and self-governance (Pettit, 2020, 2024). Dubious psychological research on IQ would also bolster the strongly conservative libertarian and neoliberal critiques of welfare that viewed the inequities of race and class as natural (Winston, 2018). While questions of intelligence seemed to give way to questions of both self-control (such as delay of gratification) and emotionality (such as emotional intelligence), the latter apparently shedding its racialized past and connections to class (Staub, 2018), the debate over welfare was arguably a site where the identity of recipients was frequently in mind.
In this paper, I attend to the developments of psychological research on the processes behind hopelessness, helplessness, and eventually responsibility and perseverance, alongside an increasing distrust toward ‘the welfare state’ and its eventual dismantling. While the origins, forms, scope, and legacy of Western liberalisms are still an active area of historical inquiry (e.g. Stewart, 2020), the story traced in this article follows the postwar liberal seeds of psychological concepts that bloomed in a predominantly neoliberal garden. To do so, I will be framing moments in psychological research within histories of American welfare policy. My hope is that this will provide the reader with some understanding of the broader changes in the discourse around welfare within which these psychological researchers were operating, a discourse they were reacting to and possibly influencing.
First, after providing some mid-century details on the roots and standing of American welfare, I tell the background story of psychobiologist Curt P. Richter and his 1950s research on hopeless rats. While Richter was one of the points of influence in studying helplessness in laboratory animals, he also directly linked his findings to his worries about the ramifications of life in a ‘welfare state.’ In the 1960s, despite initiatives like LBJ’s War on Poverty, the tide was beginning to turn against welfare, fuelled by stereotypes of black, unwed mothers undeservingly receiving financial aid and by social scientific work on the cultural deficits and psychological damage of the racialized poor. Then, I begin to unpack the 1970s rise of Martin Seligman and colleagues’ research on LH, which would generate spinoff concepts either connected directly to welfare (in the case of learned laziness) or speaking to more general concerns of productivity and achievement (in the case of learned industriousness). Into the 1980s, as discourse around welfare and the loss of personal responsibility began to coalesce, there was a parallel redevelopment of LH—now framed as a theory of depression among humans—from a problem of environment to, additionally, a problem of an individual’s thought pattern and, later, character. Finally, during the late 1990s era of celebratory liberal democratic capitalism, LH unfolded into a science and practice of optimism and character strength: begetting the massively successful subfield of positive psychology and coinciding with the increasing political turn toward personal responsibility in the adjacent arenas of well-being and welfare.
The roots and rats of mid-century welfare
While varied Western forms of government aid and relief to citizens, often placed under the umbrella term of
While welfare now seemed nationalized, the already existing, state-level moral guidelines that ran along the normative channels of gender, race, sexuality, and class also continued, such as treating motherhood out of wedlock as substantially different from marital motherhood. Making matters worse for potential and active welfare recipients, the 1939 amendment to the act created a separate survivor’s insurance system for widows and children of certain male workers. Effectively, this turned ‘welfare into a safety net for morally disdained, racially despised women,’ with some states further tightening the moral requirements for becoming a welfare recipient (Mink, 1998: 46; also Lawinski, 2010: 26).
In the postwar era, while some expansions were made in eligibility for assistance, these soon led to further restrictions, such as state-level investigations into alleged fraud, including investigations of suitable family arrangements at recipients’ homes. Such backlash was still bound up with moral concerns, such as welfare rewarding promiscuity and unwed motherhood—a traditional, reproductive morality that would carry into welfare debate and reform for decades to come (Mink, 1998: 35; see also Briggs, 2019). Although the increased costs of welfare programs during the 1950s merely kept pace with population growth, the image of the welfare recipient as a black, urban, and single mother began to erode the public’s support for such initiatives (Withorn, Axinn, and Levin, 2018: 220).
The postwar era was also a time of widespread, but unequally spread, affluence, prompting many to begin to wonder about the societal effects of a materialist consumer society alongside what would become understood as a culture of poverty (O’Connor, 2001: 146). This included popular economist and political advisor J. K. Galbraith, who would contend that economic growth was not a cure-all for everyone, including the poor. Concerns about poverty, changing and sometimes expanding welfare policies, as well as emerging criticisms of a liberal economy and the ‘welfare state’ that the New Deal birthed, were leading to pushes to reform welfare. All of these were elements colouring the perspectives of politicians, policymakers, and social scientists moving into the 1960s. Such colouring included the view from Curt Richter’s rat research laboratory.
Hopeless laboratory rats
Behavioural research on helplessness that used animals has one of its roots in research on rats, especially the work done by noted psychobiologist Curt P. Richter. Born in Denver, Colorado, to German parents, Richter’s initial scholarly interests lay in engineering. Yet, while at Harvard as an undergraduate, he became drawn to biology and, especially, psychology as taught to him in the classrooms of E. B. Holt and Robert Yerkes (Blass, 1991). Richter began doctoral work at Baltimore’s Johns Hopkins University with two of behaviourism’s most iconic players: John B. Watson and the laboratory rat. As Richter recalled, he had no clear notion of what he should research, and started studying rats only when a cage of 12 of them was left in his room: ‘I was not sure whether Watson had sent them to give me some ideas about what I could do for a thesis, or simply to give me a company while I make up my mind’ (Richter, 1985: 372).
When Watson left the academy in 1920, Adolf Meyer (who had granted Watson his research space within the Henry Phipps Psychiatric Clinic) eventually placed Watson’s former student Richter in charge (Leys, 1984: 145). As Leys suggests (ibid.), with the departure of Watson, the clinic’s work would retain Meyer’s relative psychobiological and functionalist bent. Richter’s work on rats would largely follow in that direction. Inspired by the physiological research of Claude Bernard and Walter Cannon, Richter pursued a prolific career investigating several topics, such as the study of ‘behavioral and biochemical interrelationships governing such diverse matters as sleep, jet lag, stress reactions, and the onset of cancer and other diseases’ (‘Curt Richter’, 1988).
During Richter’s long career in Baltimore, a rat population control program was established at Johns Hopkins Medical School in 1942, with the support of the city as well as the Office of Scientific Research and Development (Ramsden, 2012: 127). Having by then established himself as a recognized expert on the behaviour of rats, including what they would be interested in eating and what would most efficiently kill them, Richter spent the first two years of that program as its director. His encounter with an urban rat population, a wild group when compared to his familiar lab-bound critters, would redefine his study of rats. The various lines of research associated with Baltimore’s rat control program would ultimately feature the lab environment as a stand-in for the city (ibid.). 1 The region’s inhabitants, whether rats or humans, would likewise be rendered comparable.
In the case of Richter’s work, the stark differences between the urban/wild rats and the lab/domesticated rats made him reconsider the scope of his research’s focus. While others would begin criticizing the use of domesticated rats as a form of artifice, Richter saw scientific promise in comparing free rats to those shaped by lab practices and spaces. He would come to understand domesticated rats not as useful models for human nature as usually understood, but as models for ‘a modern human moulded by an urban environment of indulgence and banality’ (Ramsden, 2012: 126). From that work and the research of others, Richter was able to delineate the key physical and behavioural differences between wild and domesticated Norway rats. Richter thought the wild rat to be ‘a really wonderful animal, very well equipped to take care of itself under all kinds of conditions’ (Richter, 1985: 385).
One of Richter’s late-career projects that continued these comparisons focused on an idea sourced from physiologist Walter Cannon’s work on so-called ‘voodoo deaths’ (Cannon, 1942). These were human deaths allegedly caused by black magic that anthropologists had reported. Cannon suggested that such deaths ‘may be real [and] may be explained as due to shocking emotional stress—to obvious or repressed terror’ (ibid.: 180). Richter, based on his research using both domesticated and wild rats, observed instances of sudden death that challenged Cannon’s theory of sympathetic nervous system action (Richter, 1957). Working under contracts with the Office of the Surgeon-General and the Office of Naval Research, Richter sacrificed thousands of rats in forced swimming experiments. Once a rat was inside a glass jar filled with water, jets of water of ‘any desired temperature’ streamed from above, which ‘precluded the animals’ floating’ (ibid.: 193). Rats had no choice: swim or drown. Swimming until unavoidable death by drowning could last for up to 60 hours.
A minority of rats, regardless of temperature, ‘died within 5–10 minutes after immersion’ (Richter, 1957: 193). Realizing he was witnessing a kind of sudden death in rats similar to the psychogenic ‘voodoo deaths’ that Cannon recounted, Richter explored several possible factors contributing to these cases. Among his findings, he noted how wild rats would sometimes die while being held or when transferred from cage to water without being held: ‘What killed these rats? Why do all of the fierce, aggressive, wild rats die promptly on immersion after clipping [of their whiskers], and only a small number of the similarly treated tame domesticated rats?’ (ibid.: 195). Remarking on how these rats’ hearts would slow down rather than quicken in their extended stressful situation, Richter speculated that the process of sudden death did not have to do with a fight or flight response, but ‘it is rather one of hopelessness’ (ibid.: 196). These findings led Richter to suggest that the more ‘rigorous and healthy’ wild rats, relative to their lab-bound brethren, were more at risk of sudden death (ibid.: 197). While under the impression of hopelessness, or ‘voodoo,’ both wild rats and ‘primitive man’ could simply give up living—or return to life once ‘freed from voodoo’ (ibid.). In this specific instance, civilizing or domesticating an organism could protect it against superstitious death. However, Richter found much to be desired in the domesticated rats typically found in laboratories.
Richter on the welfare state
Richter shared his increasing concerns about domestication with a broad psychological audience in a 1959
Likening this nefarious process of domestication in the Norway rat to the process of civilization in humans, Richter noted how economist J. K. Galbraith argued in his then-recent postwar classic
Richter’s extensions from lab rat to modern human were met with skepticism from funders such as the Rockefeller Foundation (Ramsden, 2012: 130). Within the pages of
Welfare grows? 1960s into 1970s
A wavering postwar optimism, concerns over equality and poverty, as well as concerns over the proper eligibility and structure of welfare, were all in motion throughout the 1960s. Kennedy’s administration worked toward adopting a commercial form of the Keynesianism that guided the New Deal—one that, in keeping with a booming postwar consumer society, looked toward market growth and compensatory social welfare policies (O’Connor, 2001: 140). The Welfare Amendments of 1962 led to the ADC program being renamed the Aid and Services to Needy Families with Children, as well as expanded public assistance in the form of employment services, with parents required to accept retraining in order to become recipients (Withorn, Axinn, and Levin, 2018: 224). Kennedy’s administration also sought to understand causes of poverty just as a new area of liberal science on the culture of poverty was emerging, with books such as Michael Harrington’s
Growing out of the Kennedy-era ‘new economics’ approaches, LBJ’s Great Society initiatives worked toward addressing problems such as poverty. In 1964, shortly after the Civil Rights Act passed, the Office of Economic Opportunity was formed to oversee new programs (Withorn, Axinn, and Levin, 2018: 227–30). What held together programs such as the Head Start education program and the War on Poverty welfare program was an understanding rooted in the pathological effects of sensory deprivation and psychoanalytic attachment research: that the poor were culturally and maternally deprived (Raz, 2013). Consonant with a sociological focus on pathology and psychological understandings of damage, Moynihan’s 1965 report controversially brought such views to the public (O’Connor, 2001: 202–3). In his unusual report, which largely built on the work of others, Moynihan pointed to his infamous ‘tangle of pathology’ that included the psychological and cultural damage of slavery, as well as the prevalence of female-headed households among black families as an indicator of matriarchal family structures causing issues such as welfare dependency (Lawinski, 2010: 28; O’Connor, 2001: 204–5). Into the 1970s, within public policy writings, some reports would also connect black poverty with psychological deficiencies in achievement motivation and the increasingly prized ability to delay gratification (Staub, 2018: 124).
As the 1960s ended, the heteronormative and racialized pathological-psychological understanding of poverty, as emphasized by Moynihan, would continue to influence attitudes and approaches to welfare, despite various criticisms against such a model (Greenbaum, 2015). Moynihan would continue to write about poverty while serving in Nixon's administration, arguing, for example, that poverty is a cultural form of malnutrition (Raz, 2013: 47). Nixon and his speechwriters (such as Moynihan) were using the latest social scientific ideas to explain the consequences of a deprived environment, including using the very new concept of ‘learned helplessness’ as early as 1969 (Staub, 2018: 17–22). Guided by a vision of welfare recipients as damaged, duplicitous, or both, Nixon-era programs, such as the Work Incentive Program and the proposed Family Assistance Program, were setting the course toward a dismantling of welfare and the rise of a workfare model in which ‘state-designed programs [with] broad flexibility’ subjected recipients to an ‘inculcation of work values, behavior modification techniques, mandatory program and work requirements … and strict penalties for noncompliance’ (Lawinski, 2010: 33). While the 1960s saw some beneficial expansions in welfare, it also continued to mark a rising distrust of the logics and ethics of the welfare state as another flawed institution rendering citizens hopeless and helpless.
Helpless laboratory dogs
The conceptual roots of learned helplessness are in various experiments on animals—primarily dogs. In 1967, two articles were published on the effects of submitting dogs to what was termed ‘inescapable shock’ within laboratory settings (Overmier and Seligman, 1967; Seligman and Maier, 1967). Martin Seligman, now most associated with the movement and subfield of positive psychology, was an authorial constant in those and later works reporting the research and theory of LH. As he put it in a preface to an updated 1998 edition of his popular 1990 book
Seeing the potential to study helplessness, a topic of long interest even to a young Seligman, he worked with Bruce Overmier and then Steve Maier to further explore the dogs’ misbehaviour. In the first article on the effects of inescapable shock in canines, Overmier and Seligman (1967) noted previous research but explained that a different hypothesis, one having to do with the motivation and adaptation of animals exposed to inescapable shock, also interested them. In the original experiment, dogs were first placed into a unit for administering inescapable shock. Referred to as a ‘Pavlovian harness’ in later articles (e.g. Seligman, Maier, and Geer, 1968), it was described as a ‘rubberized, cloth hammock located inside a shielded, white, sound-reducing cubicle’ (Overmier and Seligman, 1967: 28). While in this hammock, a dog’s legs would be placed through four holes so they could hang beneath its body. Once the position of its legs was secured, the dog was fastened into the hammock. A dog’s head would also be forced into immobility, with ‘panels placed on either side with a yoke between them across
Some time after a dog’s lesson in inescapable shock, its ability to escape an escapable shock was put to the test. For this part of the experiment, a dog would be placed into a unit called a shuttle box. This was a two-way box with two compartments separated by an adjustable barrier. In the original LH article, the barrier was set to a dog’s shoulder height. Inside the shuttle box, shocks flowed through the grid floor. If a dog jumped past the barrier into the other compartment, then ‘photocell beams were interrupted, a response was automatically recorded, and an ongoing trial was terminated’ (Overmier and Seligman, 1967: 29). For the shuttle box test, dogs would be given training to breach the barrier before a shock occurred. For any dog that failed to do so within the allotted time, a shock was administered until either the dog finally jumped the barrier or a full minute passed without a response. The general finding, across a variety of research, was that most dogs who had previously learned that shock was independent of their actions failed to escape the shock in the new task.
Seligman would later take the time to mention that he was an ‘animal lover’ and that participating in the early learned helplessness studies initially left him ‘dejected’ (Seligman, 2006: 20). Before committing to his research programme, he discussed his discomfort with his philosophy professor. When his professor asked if there was a chance that causing pain in dogs now could eliminate even more pain in humans later, and whether there was a case for generalizing these topics from animals to humans, Seligman’s ‘answer to both these queries was yes’ (ibid.: 21). Briefly revisiting the problem of animal welfare, Seligman stayed with this argument, adding that he ceased work on dogs as soon as they figured out the ‘basic facts [of] how to cure and prevent helplessness’ (Seligman, 1995: 3). Seligman came to view his initial research on dogs as necessary to potentially understand the underpinnings of human helplessness—and perhaps even mental illness, in particular the lethargic and self-defeating throes of depression (e.g. Miller and Seligman, 1975; Seligman, 1974, 1975).
In explaining their early findings in a follow-up article, Seligman and Maier wrote that the process of ‘learning that shock termination is independent of responding’ seemed akin to the ‘helplessness’ or ‘hopelessness’ advanced by Richter, among others (Seligman and Maier, 1967: 4). Within a year, Seligman and his colleagues were conjecturing that the ‘maladaptive failure of dogs to escape shock resembles some human behavior disorders in which individuals passively accept aversive events without attempting to resist or escape’ (Seligman, Maier, and Geer, 1968: 258). In support of this, they pointed to a description in Bruno Bettelheim’s book
Lazy pigeons of the welfare state
Alongside Seligman and colleagues’ numerous outputs on learned helplessness throughout the 1970s, other research on LH and related concepts also grew. In 1972, Larry Engberg, Gary Hansen, Robert Welker, and David R. Thomas, all of the University of Colorado, Boulder, shared their findings on an allegedly analogous concept: learned laziness. In their interpretation of their findings, published in
For their autoshaping experiment, Engberg and team drew on the emerging literature on learned helplessness (Engberg et al., 1972: 1002). Engberg was a departmental colleague of Steven Maier (one of Seligman’s early LH collaborators)—and Maier also served on the dissertation committee of Robert Welker, who was a graduate student under the advisement of David R. Thomas when their initial article on learned laziness appeared. 3 Engberg and team used 27 domestic pigeons who were ‘trained in a standard single key operant conditioning chamber’ (ibid.: 1003). Pigeons were assigned to one of three groups that varied in when food was delivered—with one of the groups receiving food ‘independent of the subjects’ behavior’ (ibid.). Through the eyes of this team of Coloradan experimental psychologists, the pigeons had become lazy because they did not need to expend any of their own energy in a state of guaranteed food security—in other words, an environment in which energy was freely and lavishly provided for them.
Engberg and team added that if laziness was conditionable, it would suggest that its opposite—industriousness—could also be learned. After addressing and dispensing with the possibility of superstitious behaviour in their pigeons (cf. Skinner, 1948), they argued that their pigeons represented the ‘development of a set to respond (industriousness) or of a set to not respond (laziness)’ (Engberg et al., 1972: 1004). Laziness was seen as an adaptive non-response set stemming from the pigeons’ past experiences and present environment. Laziness was something learned; the security of their environment was their teacher. The lazy pigeon research was met with criticisms of methodology as well as the researchers’ overreaching interpretation. Gamzu and team remarked on the grand shift from an ‘alteration of a behavioral repertoire’ to ‘characteristics induced in the organism’ (Gamzu, Williams, and Schwartz, 1973: 367). For these researchers, there was a yawning chasm between observed behaviour and a high-level psychological concept. Further complicating the matter was the expanded ‘experimental burden’ when trying to answer what, exactly, was learned—such a question could only ‘be addressed at a level of analysis far removed from the specifics of behavior’ (ibid.).
Continuing their deconstruction of the laziness interpretation, Gamzu and team remarked on the potential subtext of using a word like
The emerging welfare of individuals: 1970s into 1980s
While Nixon and Moynihan’s Family Assistance Program never made it past debate, being left behind in 1972, smaller measures such as Supplemental Security Income in 1973, and then Carter’s Better Jobs and Income Program in 1978, saw welfare continuing to be reformed under the increasingly important principles of work and individuality (Withorn, Axinn, and Levin, 2018: 261–2). As Carter’s 1979 ‘crisis of confidence’ speech or ‘malaise address’ indicated, American culture had taken on a pessimistic tone in the wake of Vietnam, Watergate, and stagflation; despite this, a groundwork was being laid for both happiness studies and positive psychology (Horowitz, 2017: 61). Trends toward neoliberal policies would cement in the 1980s, from Reagan’s 1981 Omnibus Budget Reconciliation Act making work rules stricter, to the Family Support Act of 1988 and its Job Opportunities and Basic Skills Training (JOBS) program, which required women, even those with very young children, to engage in work activities (such as job seeking or training; Lawinski, 2010: 29–30). As a supposed new ‘consensus’ emerged in the mid-1980s, a broadly appealing neoliberalism focused on ‘dependent’ welfare mothers and the importance of individual responsibility (O’Connor, 2001: 256–7).
At the extreme end of welfare criticism was Charles Murray’s
Extending helplessness to humans
Toward the end of the 1970s, more research building on LH argued that there must be a counterpart, such as learned industriousness—a concept that could complete the learned industriousness–helplessness continuum (Eisenberger, Park, and Frank, 1976). For Robert Eisenberger and team, the accumulated body of research on learned helplessness was lopsided: ‘The emphasis on learning to be helpless implies a peculiar asymmetry in the laws of learning’ (ibid.: 227). Eisenberger’s first major research study pursued the concept by studying 144 New York state elementary schoolchildren. They found that the children who received approval performed significantly better than the control groups of children. Eisenberger would continue to pursue a research programme on learned industriousness over the ensuing decade, and later deliver a detailed review paper in
As Horowitz (2017) points out, Seligman’s later positive psychology would rarely consider the social setting and determinants of well-being. Nevertheless, during the 1970s era when trust in governmental programs and decision-making was declining, Seligman did briefly speak to the apparent problems of the welfare state. In a book meant for a broad audience,
Yet Seligman held out hope for the impoverished who were further resigned into helplessness with the indignity of welfare. Based on the emerging research on unlearning helplessness, or regaining hope, Seligman saw the theory of LH as the framework for an important social tool. Developments of LH into a theory of depression were connected to developments of reversing or unlearning helplessness. Essentially, if procedures could be designed to bring dogs back from their induced state of helplessness, perhaps these would suggest therapies for humans. Seligman extended these mechanisms to the impoverished of the welfare state:

When oppressed and impoverished people see all around them the possibility of power and affluence, their belief in uncontrollability shatters, and revolution becomes a possibility… The resentment in the black community against liberals and social workers who try to alleviate black problems is understandable, for poverty is not only a financial problem, but, more significantly, a problem of individual mastery, dignity, and self-esteem. (Seligman, 1975: 165)
Precedent for reversing helplessness—or restoring hope—is found in Richter’s work on sudden death in wild and domesticated rats. In pursuing further support for the emotional cause of sudden deaths in rats, Richter demonstrated that if hopelessness was eliminated, the rats did not die. By grabbing and freeing the rats repeatedly, and by exposing them to water immersion, wild rats would apparently learn that these situations were not hopelessly fatal. Wild rats taught to hope would ‘again become aggressive, try to escape, and show no signs of giving up’ (Richter, 1957: 196). Seligman and his colleagues, aware of Richter’s findings, would explore the possibility of eliminating helplessness early on in their research pursuits.
Donald S. Hiroto’s article (based on his dissertation work under Seligman) is a key example of extending learned helplessness research to humans (Hiroto, 1974; see also Hiroto and Seligman, 1975). Unlike dogs and other animals, these 96 University of Portland introductory psychology students were not given shocks as an aversive stimulus, but instead were subjected to a 3000 Hz tone. Similar to the dogs, some humans were assigned to groups wherein following instructions and completing a task, the tone would cease. Others were assigned to groups where aural evasion was impossible. Results were quite similar to research on animals, leading Hiroto to conclude that learned helplessness ‘can be experimentally produced in man’ (Hiroto, 1974: 192). Importantly, Hiroto’s work connected environmentally induced helplessness to an external locus of control. Into the 1980s, Seligman and others would soon also begin incorporating such cognitive components into theories of learned helplessness.
LH research was expanding from environmental explanations to personal explanations of behaviour. In 1978, working with well-known British cognitive researcher John Teasdale, Seligman and another colleague wrote that central to the learned helplessness hypothesis was how ‘learning that outcomes are uncontrollable results in three deficits: motivational, cognitive, and emotional’ (Abramson, Seligman, and Teasdale, 1978: 50). In clarifying what they meant by the cognitive aspect of the learned helplessness hypothesis, they argued that simply exposing a subject to uncontrollable outcomes was not enough to elicit helplessness. Instead, ‘the organism must come to expect that outcomes are uncontrollable’ (ibid.: 51). Connecting expectation with an attributional theory framework—such as a person attributing the causes of helplessness as internal or external to them—Seligman would later look back on this work as the seeds of explanatory styles (Peterson and Seligman, 2004).
Seligman’s focus on helplessness and depression would open a new collaborative avenue of inquiry on explanatory styles. Now working alongside Christopher Peterson, Seligman expanded his cognitive framing of depression (via the attributional reframing of learned helplessness theory). The proposed intersecting dimensions of causal explanation included internal/external, stable/unstable, and global/specific. For example, in explaining an overdrawn bank account, a person with a stable, global, and internal style of explanation may find the cause in their inability to do anything correctly; meanwhile a person with a stable, global, but external style of explanation may find blame in the widespread incompetence of all financial institutions (Peterson and Seligman, 1984: 349).
Although never denying the importance of environment or social reality in shaping a person’s explanation, the discourse of individual differences was beginning to shape Peterson and Seligman’s work. They considered explanatory style to be a trait, like ‘liberalism’ or ‘vanity’—though they also allowed room for explanatory style to be ‘traitlike’ and potentially variant (Peterson and Seligman, 1984: 370). Explanatory style shaped, while being shaped by, environmental events. A problematic explanatory style was becoming a measurable precursor of learned helplessness: ‘just as smoking is a risk factor for lung cancer’ (ibid.: 350). One’s individual cognitive style, if lacking in adequate strength and perspective, was becoming understood as a risk factor for what were once environmental and social problems.
New responsibilities of the coming new millennium: The individual pursuit of well-being
While public views on poverty had been changing since the 1960s—from economic forces beyond an individual’s control to blaming the impoverished and the national welfare programs themselves—the notion of responsibility began to calcify (Trattner, 1999: 396). From the mid-1980s onward, varied political persuasions—liberal, conservative, and libertarian—gravitated toward a new consensus, one that prioritized work, family, personal responsibility, and situational malleability (Graebner, 2002: 181). As Trattner (1999: 396) pointed out, John Goodman, a fellow of the Barry Goldwater Institute, promulgated sentiments such as ‘poverty is a choice’ in 1995. With the apparent threats to both family and work that welfare posed, the ‘low-wage, full-employment economy’ of much of the 1980s and 1990s was a site where ‘work became a natural, even irresistible solution to social problems, even for liberals’ (Graebner, 2002: 187).
In 1996 under the Clinton administration, with the passing of the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA), several federal policies providing social safety and financial assistance were either ended or reversed (Trattner, 1999: 397). While it was a slightly less extreme version of the initially proposed Personal Responsibility Act, the passed act ended AFDC, putting Temporary Assistance for Needy Families (TANF) in its place (Withorn, Axinn, and Levin, 2018: 300–1). PRWORA focused on promoting responsibility via work, including encouraging states to develop ‘individual responsibility plans’ to help welfare recipients move into the workforce as soon as possible (The Personal Responsibility … Act 1996). In a culmination of morally guided welfare reform, the new standard for welfare reaffirmed the heteronormative values of the household structure, the suspicious gaze through which racialized recipients were monitored, and the new norm of workfare (Briggs, 2019: 55; Lawinski, 2010: 31–4; Mink, 1998: 43–66). Despite the socially conservative morality, the ethics of work—such as all citizens and actions needing to be oriented toward work—appealed broadly in an increasingly neoliberal era. As LH research grew from its cognitive revisioning toward a psychology of optimism, character, and general well-being, the soon-to-be-christened positive psychology coincided remarkably with this neoliberal revisioning: working on the self, cultivating a type of moral character shining in personal control, responsibility, and persistence, helped to legitimate the working subject as normative and desirable.
The responsibilities of self-work
During the 1990s, Seligman was continuing to shift toward what would become positive psychology. He also continued to expand into popular press markets, such as
The growing research on explanatory styles culminated in an updated book on learned helplessness that Seligman co-authored with Peterson as well as his older collaborator Steven Maier (Peterson, Maier, and Seligman, 1993). Seligman and colleagues were not naïve to the influence of cultural and political trajectories on the rise of their own work—in fact, they positioned research on LH and similar concepts as reflecting an ‘Age of Personal Control’ (ibid.). The authors were skeptical of the presumed mobility of an American culture that fills ‘us with boundless expectations—anyone can be president, or a tennis champion, or a movie star, or a CEO.… No wonder the age of personal control takes such a toll’ (ibid.: 308). The trio thought that ‘personal control and its flip side, what we call helplessness’ was one of the reasons for their observed mid-century shift from classic, stimulus–response formulations of learning theory toward theories of ‘the individual as a source of action’ (ibid.: 4). Likewise, they pointed toward changes in American society, such as the ‘erosion of belief in the efficacy of [American] institutions’ as well as the preponderance of both wealth and individual, consumer choice as likely causes for this theoretical shift from environment to the individual: ‘Notions like learned helplessness, self-efficacy, and the locus of control were spawned by Rhinestone Refrigerators, political assassinations, and general affluence’ (ibid.: 16).
Despite their caution, a neoliberal discourse of marketable, self-regulating, entrepreneurial selves was arguably inherent to their work, such as when they opined that the lessons of LH research could be ‘profitably’ applied to ‘Black Americans’ and the problems of joblessness (Peterson, Maier, and Seligman, 1993: 257). Though personal control was important, they saw obvious shortcomings in emphasizing a belief in personal control. For one, given the now-known cognitive pathways to LH, such a belief could bring increased depression. Seligman later noted that the ‘costs’ of someone with depression went beyond emotional suffering: ‘It markedly hurts her productivity at work or school’ (Seligman, 1995: 16).
In 2000, between the eras of unbridled economic optimism and shocking political terror, Seligman led the charge in promoting positive psychology—at once a movement and new subdiscipline (e.g. Seligman and Csikszentmihalyi, 2000). During this early initiation phase of positive psychology, Seligman continued collaborating with Peterson, together pursuing work that led to a major book,
Trading social welfare for individual well-being
While LH research that yielded positive psychology has its roots in the study of animals, the application to problems of work and welfare implied a specific type of human: an unemployed individual likely racialized as black, likely gendered as mother, and understood to need individual-level intervention in their responsibility. While hopelessness and helplessness were first assumed to be problems of the environment, amidst rising suspicions of the welfare state and the propriety of its recipients, explanations shifted internally to a cognitive problem of thinking style, a productivity problem of responsibility, and a repackaged moral problem of character. Just as the primacy and authority of the environment of behavioural laboratory research on animals slipped into the past, Richter’s presumed stand-in of lab-as-civilization also gave way to internalized explanations. By the 1990s, Seligman understood that human behaviour was not comparable to the behaviourist lab animal, and it was a person’s mentality and inner strength that determined their success—in this era, self-discipline was the new key to seemingly everything (Staub, 2018: 130–1).6 As helplessness gave way to responsibility in the burgeoning field of what became known as positive psychology, and the long-running vein of welfare reform finally reached its terminus in dismantling what was once a basic social security, those suffering from the pangs of ‘welfare dependency’ and unemployment were put to work on themselves. Workfare programs likewise focused on the personal responsibility of achieving a neoliberal form of well-being.
As Horowitz (2017: 119) argues, linking the developments within an academic discipline to ‘a nation’s political mood’ can render a simplified version of history that neglects the variegated workings within a discipline’s formations. Nevertheless, when we consider the beginnings of positive psychology and the concentration of neoliberalism under Reagan’s 1984 Morning in America campaign, ‘the parallels between these two mornings in America are striking’ (ibid.). Tied to the rise of happiness studies, positive psychology, the more general science of well-being, and the wellness industry is the story of American welfare. While in the era of the Moynihan Report, the direct influence of social scientific research—on the culture of poverty, the consequences of deprivation, and the importance of self-esteem—on policy development is evident, the influence becomes more diffuse when entering the neoliberal era of personal responsibility and welfare reform. One of the most notorious places that learned helplessness research appears to have had influence is in CIA torture-interrogation techniques, culminating in the American Psychological Association’s own crisis of conscience and confidence (Aalbers, 2022; Hoffman et al., 2015; Seligman, 2018). That intellectual and political lineage of LH has not only extended the findings of LH research, but also generalized the suffering of rats and dogs to dehumanized enemies of the state.
In addition to the diffusion of influence—in both actors and direction—positive psychology was consistently criticized on various grounds. While some of these critics would dissect the alleged scientific status of positive psychology (e.g. Brown, Sokal, and Friedman, 2013), several would examine its lineage in positive thinking and New Thought, and its disregard for fields already studying facets of human resilience, such as social work (e.g. Becker and Marecek, 2008; Ehrenreich, 2010). With its emphasis on individual strengths and inward growth, others would comfortably identify positive psychology with a neoliberal form of self-governance and self-control (e.g. Binkley, 2011; Christopher and Hickinbottom, 2008). Within the wide area of the science of well-being, these types of criticisms are noted, and the initial Seligman era is viewed as a choppy ‘first wave’ of positive psychology that researchers have since tried to expand and improve upon (e.g. Lomas et al., 2021; Ryff, 2022; Steger, 2025; Wissing, 2022). Whatever internal refashioning the field has been going through, the early era of positive psychology is a central part of a cultural movement of achieving well-being, or at least purchasing knowledge toward happiness and wellness (Davies, 2015; Horowitz, 2017: 7).
If taken as such, then this cultural movement is part of the more global turn toward neoliberal values of self-determined, self-monitored, self-controlled, and self-governed pursuits of well-being in which the barriers between life and work are part of a forgotten map. What Rose (1996b: 164) observed within the ‘new political culture’ was a ‘mutual translatability’ across life spaces such as work, family, or play. In a society that came to understand its citizens as either consumers/workers or workers-in-waiting, the fault of unmarketability was one’s own. As more recently put, under neoliberalism ‘individuals act on themselves so that power relations are interiorized—and then interpreted as freedom’ (Han, 2017: 28). In the language of character and responsibility, you are responsible for your psychological/economic well-being, in which self-fulfillment and self-work are undifferentiated. Coinciding with the welfare-reform virtue of individual responsibility, our pursuit of individual well-being seems to have come at the cost of our shared responsibility to the welfare of others.
