Governing Emerging Technologies


Daniel Byman


The most consequential challenge facing the world today is the capacity of governments and societies to understand, govern, and regulate powerful, fast-changing technologies whose effects are reshaping economies, social orders, and national security. Artificial intelligence (AI), biotechnology, and nanotechnology (and possibly quantum technologies) stand out among these technologies for their transformative potential. Each is already delivering extraordinary benefits, yet each also poses risks that are global in scope, difficult to manage, and poorly matched to existing regulatory and political frameworks. The failure to develop effective governance for these technologies would not merely exacerbate existing problems; it would create new and potentially irreversible ones, with devastating consequences for people around the world.

To be clear, artificial intelligence, biotechnology, and nanotechnology offer immense power for good. AI has already demonstrated its ability to improve medical diagnosis, accelerate scientific discovery, optimize logistics, and expand access to knowledge—for example, AI systems that detect cancer from imaging scans more accurately than human clinicians, or that help design new drugs in a fraction of the traditional time. (AI can also help research and write essays for policy journals—it helped with this article.) Advances in biotechnology hold the promise of curing genetic diseases, extending healthy lifespans, strengthening food security, and responding rapidly to pandemics, as seen in the unprecedented speed with which mRNA vaccines were developed and deployed during COVID-19. Nanotechnology enables new materials with extraordinary properties, from more efficient energy storage to targeted drug delivery systems and advanced sensors, such as nanoparticles that deliver chemotherapy drugs directly to tumors, reducing damage to healthy tissue. Together, these technologies could help address some of humanity’s most persistent challenges, including disease, hunger, environmental degradation, and economic inefficiency. Their positive potential is not speculative; it is already visible in laboratories, hospitals, and markets around the world.

Yet it is precisely because these technologies are so powerful that they pose such a profound governance challenge. They are evolving at remarkable speed, often faster than policymakers, regulators, and even domain experts can fully comprehend their implications. AI systems that seemed experimental a decade ago are now embedded in financial markets, military planning, and everyday consumer applications—from algorithmic trading systems capable of triggering market volatility to AI-enabled targeting tools used in modern warfare. Biotechnology techniques such as CRISPR gene editing have moved from obscure academic tools to widely accessible methods with profound ethical and security implications, including experiments that alter embryos or enhance pathogens in ways that blur the line between medical research and weapons development. Nanotechnology continues to advance incrementally but steadily, often invisibly integrated into other systems—advanced coatings, batteries, and sensors embedded deep within supply chains—making its cumulative effects harder to track.

Major technological shifts can transform societies in unpredictable ways. The printing press, for example, did far more than make books cheaper. It reshaped religious authority, enabled mass political mobilization, accelerated scientific exchange, and ultimately contributed to revolutions and wars such as the Protestant Reformation and the political upheavals that followed the spread of mass literacy. The industrial revolution reorganized labor, urbanization, and state power, producing both unprecedented wealth and profound social dislocation, including child factory labor, urban poverty, and new forms of class conflict. Nuclear technology transformed international politics by introducing the possibility of instant, catastrophic destruction and forcing states to develop new doctrines and institutions to manage existential risk, such as deterrence theory, arms control treaties, monitoring bodies like the International Atomic Energy Agency, and crisis hotlines. In each case, technology did not simply add new tools; it altered the structure of society and the nature of power itself.

AI, biotechnology, and nanotechnology are likely to have similarly far-reaching effects, but with two crucial differences. First, they are developing simultaneously, interacting with one another in ways that may amplify their impact. AI accelerates biological research; biotechnology generates vast datasets that feed AI systems; nanotechnology enables new hardware for both, as with AI-driven protein folding models that rely on massive biological datasets and specialized computing hardware. Second, they are diffusing globally at unprecedented speed. Unlike nuclear weapons, which required enormous industrial and financial resources and were directly controlled by a small number of governments, many of today’s most powerful technologies are accessible to small teams, private firms, and even individuals, as demonstrated by open-source AI models, inexpensive gene-editing kits, and cloud-based computing resources. This diffusion complicates efforts to control misuse and increases the risk of surprise.

Regulating these technologies is exceptionally difficult for several reasons. The first is their intrinsic complexity. AI systems, particularly those based on deep learning, are often opaque even to their creators, with decision-making processes that cannot be easily explained or audited. Biotechnology involves intricate biological processes that can behave unpredictably outside controlled environments, as illustrated by gene drives that spread beyond intended ecosystems. Nanotechnology operates at scales that challenge human intuition and traditional testing methods, where small changes in structure can produce radically different effects. Policymakers, who already struggle to keep pace with more familiar domains, are often asked to regulate systems they do not fully understand. Much of the expertise is in the private sector, where companies have an incentive to limit regulation or use it to undercut competition. This knowledge gap creates a structural disadvantage for regulators and encourages either overbroad rules that stifle innovation or under-regulation that leaves societies exposed to harm.

Second, these technologies are developing across multiple countries, often in highly competitive environments. Research talent, venture capital, and industrial capacity are globally distributed. A regulatory approach adopted by one country may be undermined if others choose not to follow suit, as seen when firms relocate data centers, laboratories, or manufacturing facilities to jurisdictions with looser oversight. Indeed, countries that regulate may find themselves at a competitive disadvantage for some aspects of these technologies as firms can relocate research activities, and knowledge can cross borders with relative ease. This creates classic collective action problems: States may recognize the long-term benefits of regulation but fear that unilateral restraint will leave them economically or strategically disadvantaged. The result is regulatory fragmentation or paralysis.

Third, the rapid pace of technological change makes it difficult for regulation to catch up. Traditional regulatory processes are slow by design, emphasizing deliberation, consultation, and legal robustness. These virtues become liabilities when technologies evolve on monthly or even weekly cycles, as with rapid improvements in generative AI models that leapfrog prior benchmarks within months. By the time a regulatory framework is implemented, the underlying technology may have changed substantially. This lag encourages reactive governance, in which rules are written in response to crises rather than in anticipation of them, often after damage has already been done.

This dilemma is difficult to overcome. Imagine lawmakers who want to put guardrails on large language models (LLMs). Between initial conceptualization, hearings, drafting, and eventual signing, many months might pass, and the capabilities being regulated would have changed dramatically. For instance, models may gain multimodal abilities, autonomous task execution, or vastly expanded training data during the legislative process.

Fourth, U.S. lawmakers are unlikely to enter these choppy waters, as the United States is not consistently taking the lead in shaping global norms and institutions for these technologies. Historically, the United States played a central role in establishing international regimes for trade, finance, and arms control. Today, its approach to emerging technologies is fragmented and often domestically focused. While U.S. firms remain leaders in many aspects of AI and biotechnology, the federal government has been hesitant to articulate a clear vision for global governance in these areas beyond ad hoc executive orders and nonbinding frameworks. This vacuum creates opportunities for other actors, notably China, to shape norms in ways that may not align with liberal democratic values or long-term global stability, such as state-centric approaches to data governance and surveillance.

There is limited appetite for regulation within the United States. Political polarization, distrust of government, and concern about hindering innovation have all constrained regulatory ambition. Technology policy debates are often framed as zero-sum contests between economic competitiveness and public safety, rather than as efforts to align innovation with societal values, as seen in debates over AI safety, content moderation, and data privacy. This environment makes it difficult to sustain the long-term investments in expertise and institutions that effective regulation requires. It also discourages politicians from taking positions that might be portrayed as anti-growth or anti-innovation.

Fifth, many of these technologies have significant military applications, which complicates transparency and regulation. AI is increasingly central to intelligence analysis, autonomous systems, and command and control. Numerous weapons use AI with only a limited role for human intervention, and autonomous systems are increasingly likely to interact with other autonomous ones, creating emergent situations that are difficult to anticipate and control, such as autonomous drones responding to other automated defenses at machine speed. Biotechnology has implications for biodefense as well as the potential development of novel pathogens, including engineered viruses with enhanced transmissibility or resistance to existing countermeasures. Nanotechnology contributes to advanced materials, sensors, and weapons systems. Military interest often brings secrecy, classification, and urgency, reducing opportunities for public debate and international confidence-building. In some cases, systems may be rushed into deployment under the pressure of strategic competition, increasing the risk of accidents or unintended escalation.

Sixth, existing arms control and nonproliferation structures are weak or poorly suited to these technologies. Many Cold War-era agreements were designed for relatively discrete, observable weapons systems like missiles and stockpiles of fissile material. They relied on verification mechanisms that are difficult to apply to software, biological research, or dual-use manufacturing processes where the same laboratory or codebase can serve both civilian and military purposes. Efforts to adapt or expand these regimes have struggled, in part because trust among major powers has eroded and in part because the technologies themselves defy traditional categorization. Indeed, states are struggling to regulate even well-known technologies like nuclear weapons and missiles—Russia, for example, refused to extend New START. The absence of robust international institutions leaves states without forums to manage risk collectively.

Seventh, private companies cannot be expected to regulate themselves effectively. While many firms emphasize ethical principles and responsible innovation, they operate in competitive markets that reward speed, scale, and first-mover advantage. Voluntary guidelines and corporate social responsibility initiatives are insufficient. They lack enforcement mechanisms and are usually abandoned when they conflict with commercial incentives. Moreover, firms are not accountable to the public in the same way that governments are, particularly when the consequences of technological deployment extend beyond national borders such as the global spread of AI-generated misinformation or bioengineering tools.

Addressing this challenge requires a rethinking of how societies approach technological governance. Regulation must be adaptive, informed by technical expertise, and coordinated internationally—an easy sentence to write, but a hard one to implement in practice. Governments need sustained investment in scientific literacy within regulatory agencies and legislatures. Unfortunately, in the United States, much of this expertise and capacity is being dismantled while trust in experts is declining, making this difficult task even harder. International cooperation should focus not only on prohibition but also on transparency, confidence-building, and shared standards such as common reporting requirements, safety benchmarks, and incident-notification mechanisms. Importantly, governance frameworks should aim to shape the direction of technological development, not merely to constrain its excesses. The goal is not to slow progress, but to ensure that progress serves broadly shared human interests.

Ultimately, the ability to regulate emerging technologies will be a test of political imagination and institutional capacity. These technologies will shape the future regardless of whether societies are prepared for them. The central question is whether governance will be proactive or reactive, inclusive or fragmented, guided by foresight or driven by crisis. If the world fails this test, the consequences will not be confined to any single domain. They will reverberate across economies, security systems, and social orders, defining the trajectory of the 21st century in ways that may be difficult to reverse and could be catastrophic.

Orbis