Abstract
Economist Richard Nelson observed back in 1959 that basic research generated many spillovers and that the firms that funded this research had only a limited ability to appropriate value from them. Nobel Laureate Kenneth Arrow recognized that these spillovers meant that the social return to R&D investment exceeded the private return to the firm undertaking the investment. Hence, he reasoned, private firms will underinvest in R&D from a social perspective, and the public sector therefore ought to subsidize R&D investment to move it closer to the socially ideal level. Economists Wes Cohen and Dan Levinthal, in turn, wrote about the importance of investing in internal research in order to be able to use external technology, an ability they termed “absorptive capacity.” Nathan Rosenberg asked a related question, “Why do firms conduct basic research with their own money?,” and answered that this research enhanced the firm’s ability to use external knowledge.
It is important to note, however, that these scholars did not identify the specific mechanisms that enable companies to absorb external knowledge. Nor did they consider that companies might opt to move unused internal knowledge out to the wider environment, which could generate additional revenues or lower the costs of sustaining the technology over time. And spillovers themselves were treated as a cost of doing business in R&D for the focal firm and were judged to be essentially unmanageable.
Antecedents of OI
Prior to the publication of the book
In parallel with Chandler’s work on business history, Michael Porter revolutionized the approach to business strategy by building upon the Structure-Conduct-Performance model of industrial organization put forward by Joe Bain. Porter modified the industrial organization literature of that time, which modeled producer and consumer surplus, to focus on ways for firms to increase producer surplus, even if it came at the expense of consumer surplus. Entry barriers, switching costs, and specialization were the primary interests of this work.
The action and the locus of innovation in both of these treatments were largely inside the firm. R&D was to be done internally. For Chandler, internal R&D was the source of key differentiation in products. For Porter, internal R&D raised entry barriers against competitors who might wish to imitate the company’s products. University research was of some help in the initial understanding of the underlying science but lacked the specific knowledge needed to apply that science to a company’s innovation needs. Startup companies were not capable of doing very much, since they lacked deep science and technology capabilities, as well as much of an organization to produce and deliver new products to market. By construction, anything a startup could do would present little entry barrier to any other firm, making startups, by Porter’s logic, an unlikely source of anything particularly innovative.
These concepts were reflected in the innovation data of much of the 20th century. As recently as 1981, 70% of U.S. R&D spending was performed by organizations with more than 25,000 employees, according to data from the National Science Foundation (as shown in Table 1). That same year, organizations with fewer than 1,000 employees accounted for just 4.1% of R&D spending. This is quite consistent with what Chandler and Porter would have predicted.
Table 1. U.S. Industrial R&D in Private Organizations by Size of Enterprise.
However, over the next 40 years, this pattern gave way to a different one, one that is hard to explain with Chandlerian or Porterian logic. By 2021, about 38% of U.S. R&D spending was performed by organizations with more than 25,000 employees, while organizations with fewer than 1,000 employees accounted for more than 18% of R&D spending. While large companies remain very much a part of the U.S. innovation system, their role in it has declined significantly, while the role of smaller, younger companies has risen dramatically.
Primary Findings and Results from OI
OI can plausibly claim to be one of the first coherent explanations for this shift from a relatively centralized innovation system driven by the largest organizations in the United States to a more distributed system that involved organizations of all sizes—and increased the importance of external knowledge from diverse sources such as universities, research institutes, and individuals. Twenty years ago, searches on Google for “open innovation” yielded about 200 results, none of them reflecting anything more than the word “open” appearing near the word “innovation,” such as an organization opening an innovation office. Today, Google searches on “open innovation” yield more than 22 million links, reflecting the emergence of a new concept. Relatedly, searches on LinkedIn reveal more than 6,000 people with OI in their profiles. And Google Scholar reports hundreds of thousands of citations to the term.
So, the concept has spread significantly over the past 20 years. Evidence of its benefit likely helped diffuse the concept. Many individual companies, such as Procter & Gamble, have proudly proclaimed their success with their version of OI, called Connect and Develop. 1 Another consumer products firm, General Mills, analyzed sixty new product introductions over a twelve-month period. It found that those with a substantial contribution from OI outsold those without one by more than 100 percent. 2 In the industrial sector, a study of 489 projects inside a large European manufacturer found that projects involving significant OI collaboration achieved a better financial return for the company than projects that did not. 3
OI has also been used to manage complex ecosystems of firms. One example comes from semiconductors. Taiwan Semiconductor Manufacturing Company (TSMC) is the world’s leading semiconductor foundry. Designing and building chips is a complex process that requires customers to use a variety of design tools and resources, such as reference designs and process recipes. Many of the third-party companies that make these tools wanted to assure their customers that their offerings would run on TSMC’s processes. This expansion in third-party tool offerings creates more design options for TSMC’s customers—a clear benefit. However, these new offerings, and the complex interactions between the many tools and the many steps in fabricating new chips, also increase the complexity that TSMC’s customers must manage, and this complexity might cause new chips to require re-designs or other expensive modifications to be manufactured correctly—a clear risk.
TSMC has addressed this risk with its Open Innovation Platform (their term, not mine!). 4 The OI Platform starts by combining the many design and manufacturing services of TSMC with those provided by many third-party companies, and then testing all these combinations together. So TSMC uses OI to manage a complex ecosystem of internal and external design sources, and offers a guarantee to its customers, provided they stick to these validated resources when designing their chips.
Large sample studies of companies also support the value of OI. Laursen and Salter employed the Community Innovation Survey to study the effects of a range of knowledge sources on innovation outcomes and reported significant and positive results. 5 However, these results came with a twist: after a certain point, the initially positive results turn negative. So, openness improves performance, but only up to a point. These results have since been replicated using Community Innovation Survey data from other countries. Recent surveys of large firms in Europe and North America also found that firms that employed OI were getting better innovation results. 6 Other studies, however, have reported mixed results from using OI. 7 Laursen and Salter found a “paradox of openness,” where the benefits of openness must be balanced against the risks of unwanted expropriation. 8 Grimpe and Sofka contrasted collaborative versus transactional OI activities in searching for useful knowledge and found that each had a stronger effect on performance in the presence of the other, making them complementary. 9 Brunswicker and Vanhaverbeke, in studying OI in SMEs, argued that not all OI practices were beneficial in enhancing firm innovation performance. 10
Learning from OI Failures
This mixed evidence about OI’s results brings us to the study of failures in OI. One fair criticism of OI research is that there have been many more studies of its successes than of its failures. This is an oversight, since OI does not always work, and there are lessons to be learned from the cases where it was tried and failed.
One such illustrative failure is Quirky. The company was founded in 2009 by serial entrepreneur Ben Kaufman and raised over $150 million in venture funding. Quirky found its ideas by inviting individual inventors to submit product ideas via the company’s website. If the company selected an inventor’s product for commercial development, the inventor received a portion of the resulting revenues as a royalty. In turn, Quirky handled the further development, merchandising, distribution, and advertising to promote the product. 11
The company attracted a lot of attention and publicity with this model. It also had some big hits, with products like its Pivot Power (a flexible multi-outlet plug extension cord) generating millions of dollars in revenues. In addition to attracting a lot of venture capital financing, the company signed a partnership with GE to manufacture and market some of its products (GE also invested in the company). So Quirky was well-backed and well-connected.
But this promising approach failed. In 2015, the company filed for bankruptcy, and founder Kaufman was forced out. In this case, OI clearly did not work, despite ample funding and top management support. One likely reason is that crowdsourcing has a number of hidden costs. For every idea that became a Pivot Power, there were hundreds or thousands of submissions that were poor or marginal ideas. And every one of these needed to be reviewed by someone within Quirky, a hidden cost that likely built up over time. This has been found in many companies that have tried crowdsourcing: most crowdsourced ideas that are submitted are terrible. 12 This limits the use of crowdsourcing in many companies. 13
OI at P&G
A different kind of OI failure occurred at Procter & Gamble (P&G), the giant consumer products company. Here, OI worked well for a time, as noted above. In 2006, P&G proudly touted its OI success with a well-received, broadly disseminated article in the Harvard Business Review.
But P&G’s success did not last. P&G’s revenues slumped badly after Lafley stepped down as CEO in 2009, and the Great Recession hit both the United States and Europe. P&G struggled to resume its growth even after the economy later rebounded from the recession. P&G’s Board of Directors decided to bring Lafley back to the CEO role in 2013, presumably to revive the growth magic that Lafley had created during his first period as CEO. But this growth did not materialize. Lafley stepped down again as CEO in July 2015 and retired as Chairman of the Board in June 2016. OI worked well at P&G from 2001 to 2009, and yet did not work well from 2009 to 2016.
Part of the explanation for this turn of events may lie in the role of people and leadership in OI. Many of the key people in the early years of the P&G program had left the company by the time the Great Recession occurred. Importantly, their skills and beliefs in OI do not seem to have transferred to those who replaced them. So OI must be more than a series of corporate practices, since those same practices stopped yielding good results after 2009.
Competition may have also played a role. Other consumer product companies noticed the success of Connect & Develop and began to imitate that process themselves. A number of these companies during this period, including Nestle, Unilever, Kraft, Del Monte, SC Johnson, Clorox, General Mills, and Kellogg’s, sought to learn from P&G’s success. By the time Lafley returned to the CEO role in 2013, many consumer products companies had incorporated many elements of OI into their own innovation processes. So, OI’s benefits may depend in part on whether one can remain ahead of the competition in the innovation race. It is certainly the case that we need more research on when, how, and why OI has failed, and then we must search for lessons to learn from those failures to improve its use in the future.
Where Is OI Going in the Future?
So, OI has had an impact over the past 20 years. What about OI today? And what are its prospects for the future? This special issue takes up these questions.
Part of the vitality of OI derives from its ongoing interaction between theory and practice in innovation. While it is studied at some length by many academic scholars, OI is not simply an academic notion, as the following examples from practice illustrate.
IBM Watson in Health Care
IBM was an early and ardent researcher of AI. By 2011, the company had achieved a remarkable breakthrough: it had trained its algorithms to analyze unstructured data such as textual language. This culminated in IBM entering its AI technology, now called Watson, in the game of Jeopardy against two human champions. Watson carried the day, winning handily. IBM followed this achievement by commercializing its now-demonstrated ability to make inferences from unstructured data in the health care field. It selected diagnostic radiology, where Watson could read tens of thousands of imaging studies in which breast cancer had been detected, training it to predict whether breast cancer would be present in new studies. IBM invested billions of dollars and signed agreements with hospitals around the country to commercialize this technology.
However, IBM was not at all open during its pursuit of the diagnostic radiology market. The Watson technology was kept entirely proprietary. There were no APIs (Application Programming Interfaces), software development kits, or other ways to connect to and build upon its technology. There were no third-party support organizations to help install, configure, and deploy the technology. There were no system integrators to carry the technology into new domains of application. It was a black box.
This lack of openness meant that IBM bore 100% of all of the costs of developing, deploying, and distributing Watson. This meant that IBM had to choose where and how to apply Watson without knowing much about the many ways it might be used. And customers had to depend solely upon IBM to employ Watson in their radiology practices. Any issues or problems had to be taken to IBM directly, and only IBM could resolve them.
Despite IBM’s technical lead, and despite its significant investments in commercializing Watson in health care, the business results were deeply disappointing. In 2016, the MD Anderson Cancer Center in Texas publicly discontinued its use of Watson. An internal audit determined that the hospital had invested more than $60 million, not including staff time, and gotten little benefit from the investment. 17 In 2022, IBM sold off its Watson health care business assets to a private equity firm, Francisco Partners. 18
IBM would have done better to have been more open in its commercialization efforts. IBM could have created APIs to allow others to build upon and extend its algorithms to new domains. This approach would allow IBM to explore multiple use cases and applications in parallel, instead of having to bet big on one single application. IBM would also have done well to include others in the installation, distribution, and deployment of the technology so that customers of all sizes could have the chance to try it and see whether it might fit with their own needs. By itself, IBM could only hope to serve the largest of its customers, and only for a narrow set of applications. To reach the rest of the market, IBM needed partners and others to build upon its technologies.
This experience leads to an interesting insight that qualifies David Teece’s “Profiting from Innovation” model. 19 The AI contained in Watson is an example of a powerful general-purpose technology (GPT). GPTs can be used in many possible ways, and this very rich set of possibilities, which seems quite positive, can actually create a serious challenge for an innovative firm. In Teece’s framework, the unstated assumption is that the innovator knows which market to address with her new technology. What if the technology is more general in nature, so that the innovating firm does not know where to focus? Which use of the technology should be the focus of the commercialization effort? Which aspects of that use should be supported with additional technological development? Implicitly, such additional development may render the GPT less useful in some other application. And the way that the technology is developed, manufactured, distributed, and supported requires specific investments that further constrain the range of applications of the technology. Profiting from GPT innovations is more complex than Teece originally considered.
OI and OpenAI
Just over a decade after Watson’s introduction to the market, organizations like OpenAI, with a new kind of AI, present a new set of circumstances that again raise the question of the value of openness in commercializing a new technology. OpenAI has achieved tremendous penetration into organizations in a remarkably short period of time since its introduction of ChatGPT in November 2022. Its success in commercializing large language models (LLMs) has been impressive. It is instructive to compare OpenAI’s market introduction to Watson’s, as sketched above.
The first thing to observe is that, by itself, OpenAI cannot be considered open. Its code is entirely proprietary, as was IBM Watson’s. Its method of distribution is similarly restricted to its own websites and those of its partners (more on this below). So, this part of OpenAI fails to qualify as OI.
And yet, OpenAI did take some inspiration from OI. First, it did not try to commercialize its technology entirely on its own. It reached out to a strategic partner, Microsoft, for both investment capital and cloud computing services, and licensed its technology to Microsoft. Second, it did not limit its product to the capabilities in its own code. Instead, OpenAI included a set of APIs that enabled many other organizations to build upon and extend OpenAI’s technology in many new directions. When the company first introduced ChatGPT, it did so initially for free. Explaining that it was doing research, the company invited users to log in and try it. Over the next 6 months, more than 100 million users did so. Left unsaid was that the company was researching what its users would do with this new capability. Every one of those more than 100 million user interactions was hosted by OpenAI, so the company saw everything that all those users tried. So, instead of trying to pick an application for ChatGPT, the company simply watched how users tried to apply it. This is a much more open way to discover the eventual applications to develop from a general-purpose technology.
It is also instructive to examine the motivations of OpenAI’s partner, Microsoft. Here, we find several aspects of OI that are critical to what Microsoft was doing. First, Microsoft has funded and sustained a significant corporate research organization for more than 30 years, and Microsoft Research has undoubtedly been working hard on various aspects of AI in general, and LLMs in particular, for many years. When OpenAI asked Microsoft for a $1 billion investment, we can safely assume that Microsoft Research would have preferred to receive that $1 billion itself. Yet Microsoft’s leadership, while continuing to fund its internal research, wisely chose to be more open, making the initial $1 billion investment in OpenAI, later followed by an additional $10 billion investment commitment (most of it in the form of Azure cloud computing services).
Microsoft is also building out an impressive suite of products and services that incorporate its licensed OpenAI technology into the many tools it offers to developers and business customers. These Copilot offerings leverage and extend Microsoft’s existing products and services, and help customers embrace LLM technology in a very approachable way. This also enables Microsoft’s powerful ecosystem of third-party service and support organizations to get involved in helping attract, close, and then support customers of all sizes with these new, AI-enabled products.
While Microsoft is strongly supporting OpenAI, it is also casting a wider net for supporting LLMs with its cloud computing services. Companies such as Meta have created open-source LLMs like Llama 3, and Microsoft is working with Meta to distribute Llama 3 through its Azure cloud computing service. This likely reflects Microsoft’s understanding that there will be multiple winners in the eventual LLM market, and that it is wise to support a number of them rather than restrict its support to OpenAI alone.
Indeed, while OpenAI has gotten the lion’s share of attention in the introduction of LLMs to the world, a much larger movement of companies is actively developing open-source versions of LLMs as well. This debate between open and closed approaches to commercializing LLMs has been running since the inception of OpenAI. Elon Musk publicly disparaged OpenAI’s decision to remove its code from the open-source domain. 20 And the larger question of when to open a technology to the public, and when to close it instead, is itself a new area of research in OI. 21
This has several implications for the LLM market. One is that there probably will not be a single winning technology. Rather, there are likely to be multiple versions of LLMs. IBM has even gotten back in the game by co-launching the AI Alliance, with more than 70 companies supporting open-source versions of LLMs. 22 This will limit the ability of OpenAI (or any eventual winner) to extract most of the profits from LLMs. It also means that there will likely be many types of LLMs deployed, some of which will be much more customized to specific sets of data for specific uses. So, there will be multiple segments in the LLM market that will feature many customer applications and needs.
And we must also acknowledge that IBM has become a more open company itself in this process. After its difficulties with Watson in health care, the company pivoted back to its IT roots. It acquired Red Hat, the leading distributor of the Linux operating system, for $34 billion and is building upon that more open organization to develop new opportunities (including the AI Alliance above).
To summarize this illustration, OI is a strategy for how to improve the outcomes from innovation for an organization by utilizing more external knowledge in one’s own innovations and allowing one’s own knowledge to be used by others in their innovations. Organizations such as Microsoft and IBM remain very active in their own internal research activities in areas like AI but also actively engage with startups and other external organizations in the AI domain as well. This shows how OI as a concept draws insight and inspiration from observing innovation in practice as well as in theory.
Other New Practices in OI
Innovation does not stand still. New technologies, new business models, and new industries all challenge the existing theories of innovation in general and OI in particular. Just as P&G could not rest upon its laurels with its early success of Connect and Develop, so too must we push OI concepts and evidence forward into new terrain.
One set of industry practices that continues to evolve concerns the ways that companies engage with universities in the exploration of new science and then commercialize the resulting knowledge. Universities have invested more money in their technology transfer offices and now do a better job of capturing the research discoveries that emanate from their laboratories. There are more facilities where company and university staff work together in shared laboratory spaces, usually involving researchers from multiple companies, so that no one organization bears the brunt of the research costs.
More recently, this multiorganizational approach has been taken further. A university or research center can invest in expensive, specialized technical equipment that most participating companies could not afford to buy for themselves. It then builds relationships and programs to engage a variety of companies to fund and use this equipment. The result is a vibrant technical ecosystem that makes fuller use of the equipment and spreads knowledge of how to innovate with these technologies more broadly. KU Leuven in Belgium 23 and the Molecular Foundry at Lawrence Berkeley National Laboratory in California 24 are two examples of how deep technology’s costs and benefits can be distributed across a broad landscape of companies that help pay for and use the technology.
Indeed, until very recently, it was fairly common for universities to be suspicious of companies and to keep a clear line between the academic pursuit of knowledge and industrial profiting from that knowledge. Derek Bok’s book on the commercialization of higher education exemplified this concern. That attitude has since softened considerably, as many of today’s university leaders have extensive industry experience.
Stanford University was led by a former engineering professor, John Hennessy, for 16 years. Hennessy himself took three leaves of absence during his academic career, starting three different companies during those periods. Berkeley’s Jennifer Doudna is not only a Nobel laureate in Chemistry for her work on CRISPR but is also involved in several startup companies commercializing aspects of her discoveries. Carnegie Mellon University is now led by a computer scientist, MIT by a cell biologist, and the University of Michigan by an immunologist. These leaders have had extensive interactions with industry partners during their research careers. This exposure gives them a broader view of how research and technology support an industry, and of what industry needs to be more successful with new research and technology from the university.
There have also been important developments on the industry side in OI. While corporate venture capital has been practiced for quite a long time, it has grown in importance in recent years. Today, CVC amounts to 40% of all VC investment in the United States and more than 50% of all VC investment in China. 26 Yet companies remain unclear about how best to organize, manage, and reward CVC activities. Recent research by Ilya Strebulaev shows that companies differ significantly in where CVC activities report inside the company, and they also differ on whether to focus on strategic returns or financial returns. 27
A more recent development has been the growth of business incubators and accelerators inside large organizations. While seeking out new ideas from employees goes back to the employee suggestion box, companies have determined that they can and should do more to grow and nurture promising ideas that fall outside the current business scope of the company. Crisan et al. report that the use of accelerators is growing but that the purpose and definition of this structure again varies across organizations. 28
Perhaps even more recently, some companies are seeking to streamline their support of new business ideas from their employees by using a Venture Client model. Gutman et al. describe how this model can align the perspectives of internal departments of a large company so that they work more effectively and more efficiently with young startup companies. 29 It is also worth noting that this model involves less overhead for the organization: fewer facilities, fewer staff, and lower administrative costs.
A similar progression toward lower-cost models of engagement can be observed in how companies work with startups. Initially, the dominant mode of engagement was through corporate venture capital. However, later models of collaboration focused on more specific objectives and milestones, like an initial proof of concept, instead of an entire new business. These later models also required less funding and less administrative support. Such lightweight models of engagement allow organizations to scale up the number of startups that they engage with concurrently, enabling a more rapid and extensive exploration of a new business opportunity space. 30
New Theories in OI
A lot of academic research has been conducted on OI in the past 20 years. For that reason, a group of academics chose to organize and orchestrate the first systematic examination of the theoretical foundations of OI.
These theoretical probes are a healthy manifestation of how much OI has grown over the past 20 years. If no one cared about OI, or if OI were seldom encountered in the world of organizations and technologies, there would be little interest in a deeper examination of OI as a concept. Instead, scholars are now probing the concept’s foundations from many directions.
This enables a more coherent comparison across the different theoretical lenses that are employed with regard to OI. Knowledge flows, for example, are generated both by individual actions as well as organizational decisions. Some of those flows take place inside a single organization, while others move across multiple organizations. The linear value chain of suppliers, organizations, and customers in innovation is augmented by a surrounding ecosystem of knowledge sources and partners. Connecting strategy with OI can involve both a process view that focuses primarily on participation and transparency and a content view that primarily considers resource and knowledge advantages of one organization relative to another. Inside-out knowledge flows can enable certain kinds of business model innovation, while outside-in knowledge flows enable other kinds of business models.
Scholars have begun to employ AI-based research methods in their study of OI, and these new methods are yielding new results. Lu and Chesbrough studied the Russell 3000 Index firms, which encompass 95% of the value of the U.S. stock market, and report evidence that OI does improve business performance. 32 However, this performance improvement is contingent upon internal R&D spending and varies across different sectors of the economy. Schaper and colleagues report evidence across 65,000 observations that OI’s benefits depend upon both environmental dynamism and appropriability. 33 The conclusion from these new studies, built on these new research methods, is that there cannot be a single set of best practices for OI across industries. Instead, organizations must develop processes that are most relevant to their industry, their geography, and their institutional context.
New Challenges for OI to Address
The advent of large language models in AI promises to influence the ability of OI itself to address some of the grand challenges facing the world. As Ferras, Nylund, and Brem observe, LLMs allow OI not only to connect the dots between organizations but also to connect invisible dots that would previously have gone undetected. 34
Hand in hand with the development of LLMs and generative AI goes the increased availability of the training data needed to fuel these models. It is debatable whether fair use provisions will allow LLMs to train on all data available on the internet. If fair use is not upheld, there will be a prolonged struggle to identify, access, and harvest training data that are authorized for such training. Some providers, such as Adobe, have already committed to a model that promises customers that training data will come only from authorized sources and that customers’ use of Adobe tools will not in turn be harvested for further training. 35
With the further advance of AI, it is likely that OI itself can be extended, as Ferras, Nylund, and Brem would predict. One recent example of such an extension comes in the use of AI to scan through existing drugs that are no longer on patent, to see if they might have efficacy in treating rare diseases. 36 This would relieve significant pain and suffering for patients who otherwise would contend with little or no effective therapy for their conditions, and it would provide a second market application for older drugs that are no longer providing major income to their developers.
One area outside of AI where these dots are beginning to get connected lies in the energy sector. Zobel, Comello, and Falcke explore the ways that multiple energy organizations are sharing knowledge, data, and technology to identify more sustainable ways to generate, transmit, and distribute energy. 37 The NetZero strategy that these firms strive to achieve is accelerated by sharing practices and collectively engaging to test ways that move toward the NetZero goal. However, the needs of the larger society must be considered as these models scale. Some recent research has shown that the penetration of residential solar panels, along with the ability to sell energy back to the local energy grid, has shifted much of the fixed cost burden for operating the utilities toward lower-income households. 38
Another area where OI will make a contribution lies in the development and scaling of circular business models more generally. Nancy Bocken and her colleagues’ work details the importance of circular business models, 39 and OI can enable the improvement of circular models by attracting suppliers, specialists, startups, nongovernmental organizations (NGOs), and others who can build out, validate, and scale these models.
More generally, the desire to implement the Sustainable Development Goals (SDGs) of the United Nations will almost certainly require new collaborations between many organizations. 40 Addressing each of the 17 ambitious SDGs will involve partnerships between private sector organizations, and also between public sector actors and private actors. We know from work in the technology sector that platforms can orchestrate the activities of hundreds or even thousands of organizations, as is the case with TSMC above. As we turn to address the SDGs, this collaboration must go even further, reaching out to governments, NGOs, and ordinary citizens if we are to make significant progress toward the realization of these SDGs. Questions of governance are of central importance to develop and sustain these initiatives. New research into the rights and responsibilities of different stakeholders in the organization will also matter. 41
These new developments, along with many others, will force OI to take on these concerns, stretching both the theories of OI and the practices of OI in new ways. There will be lots and lots of new dots to explore and connect.
Conclusion
A lot has happened with OI over the past 20 years. OI began as a series of observations about the anomalous patterns of innovation activity that were observed in a small number of industries yet were hard to understand with prevailing theories of innovation and organization. Now, with many more empirical studies and with a broader set of theoretical lenses, the anomalies are better understood, and OI rests upon a far more robust theoretical and empirical base than it initially did. Today, it is supported by a broad, diverse community of scholars in many parts of the world and is practiced by companies throughout the world.
It is also sustained by events that bring industry participants into direct contact with academic scholars, and this interaction helps to nurture the vitality of the community. These events include the annual World OI Conference, the newly published
