Overview
In a Letter to the Editor recently published by the Columbia Daily Spectator, contributors argued that "AI is not inevitable." That phrase—simple but provocative—touches on an enduring debate among technologists, policymakers and the public: is the trajectory of artificial intelligence predetermined by science and market forces, or can democratic decisions meaningfully shape its development and distribution?
This article examines the evidence behind the claim that AI is not inevitable. It traces historical precedents, outlines the technical and economic drivers of current AI trends, summarizes measurable impacts and describes governance levers and policy alternatives. The review draws on published reports, academic research and public statements by experts to show that while technical momentum is strong, the future shape of AI remains a matter of choice.
Background and context
What people mean by "inevitable"
When commentators call AI "inevitable," they generally mean one of two related things: (1) continued advances in machine learning and computing will produce increasingly capable systems regardless of regulation, and (2) social and economic structures will necessarily lead to those systems becoming embedded and dominant in political, commercial and military spheres.
Both claims rest on observable trends: rapid growth in compute capacity used for model training, concentration of data and talent, and intense commercial investment. But observable trends are not immutable laws. History offers multiple cases—public health, nuclear energy, gene editing—where technological trajectories were meaningfully altered through policy, litigation, corporate practice and social movements.
Key data points
- Compute growth: Research by OpenAI and others has documented extraordinary growth in the compute used to train leading machine-learning models. An influential OpenAI analysis found that compute used in the largest AI training runs doubled approximately every 3.4 months between 2012 and 2018, driving rapid capability improvements (OpenAI, "AI and Compute").
- Policy adoption: By the early 2020s, dozens of countries had adopted national AI strategies. International organizations such as the OECD and UNESCO have produced recommendations and frameworks for AI governance; these documents underscore that governments are actively shaping national approaches (OECD AI Policy Observatory, UNESCO Recommendation on the Ethics of AI).
- Labor projections: The World Economic Forum's Future of Jobs Report (2020) estimated that by 2025 automation could displace 85 million jobs in certain sectors while creating 97 million new roles, a projected net gain of 12 million. This illustrates that technology changes labor markets, but outcomes depend on policy, retraining and industrial shifts (World Economic Forum, 2020).
- Concentration of resources: Analysis by the Stanford Institute for Human-Centered Artificial Intelligence and the AI Index highlights concentration of compute, data and talent among a small set of large firms and a few research universities—an institutional reality that shapes what types of AI are pursued and deployed (Stanford AI Index).
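The compute-growth figure above is easier to grasp as simple arithmetic. The sketch below extrapolates from the 3.4-month doubling time reported in OpenAI's "AI and Compute" analysis; the per-year and multi-year factors are back-of-envelope projections, not figures from the original report.

```python
# Back-of-envelope arithmetic for the compute-growth figure cited above.
# A doubling time of ~3.4 months (OpenAI, "AI and Compute") implies
# roughly an 11x increase per year; the multi-year factor is extrapolated.

DOUBLING_TIME_MONTHS = 3.4

def growth_factor(months: float) -> float:
    """Total multiplicative growth over `months` at the given doubling time."""
    return 2 ** (months / DOUBLING_TIME_MONTHS)

print(f"Growth over 1 year:  {growth_factor(12):,.1f}x")   # ~11.5x
print(f"Growth over 5 years: {growth_factor(60):,.0f}x")   # ~200,000x
```

For comparison, Moore's Law corresponds to a roughly two-year doubling time, which makes clear why this period of scaling felt qualitatively different.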
Why AI can seem "inevitable"
Several interlocking forces contribute to the sense that AI's advance is unstoppable.
Technical momentum and path dependence
Machine learning, particularly deep learning, benefits from positive feedback loops: more compute and data improve model performance; improved models unlock new products and revenues that justify further investment. This creates a path-dependent trajectory where early technical choices influence subsequent investment and research directions.
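The feedback loop described above can be sketched as a toy iterative model. This is illustrative only: the coefficients (`reinvest_rate`, `revenue_per_capability`, the square-root returns) are invented for the sketch, not calibrated to any real market.

```python
# Toy model of the investment/capability feedback loop described above.
# Each round: capability grows with invested compute, realized revenue
# scales with capability, and part of that revenue funds the next round.
# All coefficients are illustrative assumptions, not empirical estimates.

def simulate(rounds: int, investment: float = 1.0,
             reinvest_rate: float = 0.5, revenue_per_capability: float = 2.0):
    capability = 1.0
    history = []
    for _ in range(rounds):
        capability += investment ** 0.5        # diminishing returns on compute
        revenue = revenue_per_capability * capability
        investment = reinvest_rate * revenue   # revenue funds the next round
        history.append((round(capability, 2), round(investment, 2)))
    return history

for cap, inv in simulate(5):
    print(f"capability={cap:8.2f}  next investment={inv:8.2f}")
```

Even with diminishing returns per round, capability and investment reinforce each other and both grow round over round, which is the path-dependence intuition: once the loop is running, it takes an external change in incentives to redirect it.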
Economic incentives
Private-sector incentives are powerful. Large technology companies view AI as a core competitive capability for search, cloud services, advertising, and enterprise software. Venture capital flows into startups that can deliver immediate productization of AI capabilities. These market incentives favor rapid deployment and scaling, especially when first-mover advantage or data advantage can be converted into long-term market share.
National strategic competition
States view AI as strategically important across defense, intelligence, and economic policy. This adds urgency to development and can blunt calls for restraint. Policymakers in some countries emphasize industrial policy to capture capabilities and jobs, making the technology politically salient.
Why AI is not inevitable
Despite momentum, several substantive levers can—and historically have—altered technological trajectories. The following elements make the "not inevitable" argument concrete and actionable.
Governance and regulation can change incentives
Regulatory choices affect the economic calculus of firms. Requirements for transparency, safety testing, licensing, data protection, and liability can slow or redirect deployments. Examples include:
- Data protection laws such as the EU's General Data Protection Regulation (GDPR), which altered data flows and operational practices across industries.
- Export controls and trade measures that limit the transfer of certain technologies or hardware.
- Sector-specific rules—finance, healthcare, and transportation—where regulators can impose higher standards or pre-market review.
Where such rules have been applied, they have changed business models and research priorities. That is evidence that policy can shape outcomes.
Public spending and public-interest alternatives
Public funding can channel research toward public-interest applications rather than purely commercial ones. Governments and philanthropic organizations can also underwrite shared infrastructure—public datasets, open compute grants, and transparent benchmarks—that reduce the advantage of secrecy and monopoly control.
For example, publicly funded scientific infrastructure in fields such as genomics and climate modeling has historically enabled broad scientific advances rather than locking progress behind proprietary systems.
Social norms, litigation and civic pressure
Civil-society advocacy, journalistic scrutiny and legal challenges have repeatedly reshaped technological programs. Concerns over facial recognition, predictive policing and algorithmic bias have led some municipalities to ban certain AI uses and have forced companies to alter product roadmaps.
These interventions demonstrate that public pressure and legal remedies can constrain specific applications and create delay, oversight or redesign requirements.
Expert perspectives
Experts in AI and technology studies offer a range of views that illuminate different facets of the inevitability debate.
"AI is neither artificial nor intelligent." — Kate Crawford, The Atlas of AI
Crawford's pithy formulation is intended to shift attention from techno-optimism to the political economy of AI: its material infrastructure, labor implications and governance. Her work emphasizes that what we call "AI" is embedded in social and economic systems and therefore susceptible to social choices.
"Machine intelligence is the last invention that humanity will ever need to make." — Nick Bostrom, Superintelligence
Bostrom's sentence is often invoked to argue that advanced AI, if achieved, would be a decisively transformative event. Such framing underlines why some advocates press for urgent safety and alignment work. But the gravity of that claim does not make the timing or the governance responses inevitable.
Researchers working on policy and accountability also stress contingency. The UNESCO Recommendation and OECD documents emphasize that ethical frameworks and policy choices shape adoption and social outcomes (UNESCO Recommendation, OECD AI Policy Observatory).
From the technical side, leading practitioners have highlighted both power and responsibility. In public testimony and commentary, some AI researchers have urged stronger oversight, while others warn of the economic and strategic costs of over-restrictive policy. The diversity of these views means the direction taken will reflect political negotiation, not technical fate alone.
Where policy has already altered AI's course
Illustrative cases show how regulation and civic action changed trajectories:
- Reining in uses of facial recognition: Several U.S. cities and European regulators have limited or banned use of facial-recognition systems for law enforcement and public surveillance, prompting vendors to restrict certain sales and driving industry pauses on deployment.
- GDPR's impact on personalization: The EU's privacy rules forced companies to change data-handling practices, altering business models for online personalization and targeted advertising.
- Open science in health: Publicly funded datasets and pre-competitive collaborations in genomics have steered research toward shared benefits rather than exclusive proprietary gains.
These cases demonstrate that it is feasible to shape deployment and that governance matters in practice.
Policy options for shaping AI's trajectory
If AI is not inevitable, the question becomes: which policies would be effective, proportionate and politically viable? Key levers include:
- Regulatory frameworks for high-risk systems: Pre-market review and post-market surveillance in critical domains (healthcare, transportation, criminal justice) to ensure safety and fairness.
- Licensing and access controls for compute and data: Targeted measures to limit unconstrained scaling of systems that pose systemic risk, while enabling legitimate research.
- Standards and certification: International standards bodies and independent auditors can provide interoperable safety and transparency benchmarks.
- Data governance and worker protections: Rules to protect people whose data fuels models and to support workers displaced by automation through retraining and social safety nets.
- Public investment in open infrastructure: Grants for public datasets, reproducible research, and community-led model development can reduce monopoly pressures.
- International coordination: Agreements on export controls, military uses, and safety norms can reduce adversarial races and create shared norms.
All these options involve trade-offs and require democratic deliberation. They also require enforcement capacity and technical expertise in regulatory agencies—both of which are often in short supply.
Obstacles to changing course
Even when policy options exist, practical and political barriers limit their adoption:
- Regulatory capture and lobbying: Powerful firms have incentives to resist constraints.
- Global competition: States worried about relative disadvantage may underinvest in restraint.
- Technical opacity: Models and datasets are complex and often proprietary, making oversight difficult.
- Speed of innovation: Policy processes can struggle to keep pace with rapidly evolving technology.
These obstacles do not make policy impossible; they underline the need for strategic design, capacity-building and international cooperation.
Practical steps for universities, companies and governments
Different actors can take concrete steps that cumulatively influence the direction of AI:
- Universities: Prioritize transparent, reproducible research; tie funding to open practices; and teach AI ethics and governance alongside technical courses.
- Companies: Adopt publication and accountability standards; implement internal review boards for risky systems; and participate in sectoral pre-competitive collaborations.
- Governments: Invest in regulatory capacity; fund public-interest AI research; adopt risk-based regulation; and engage in international coordination.
These steps can change incentives and norms, weakening the argument that commercial momentum alone determines outcomes.
Risks of complacency
Treating AI as inevitable risks a fatalistic posture: if adoption is seen as unavoidable, fewer stakeholders will work to shape it. That could magnify harms—concentration of economic power, erosion of privacy, biased automated decisions, environmental costs from high compute, and strategic misuse in military contexts.
By contrast, framing AI as contingent creates political space for intervention and debate about alternative futures and governance priorities.
Case study: facial recognition
Facial recognition technology provides a compact example of how policy and public pressure can alter deployment. After high-profile reports of misidentification and discriminatory impacts, several U.S. cities enacted bans on public-sector use; companies paused or withdrew facial recognition products for law enforcement; and regulators signaled increased scrutiny. The outcome was neither total prohibition nor unregulated diffusion: policy moves and civic pressure produced a middle path with restrictions, research, and renewed oversight.
That pattern is illustrative: a combination of public evidence, civic activism and policymaking resulted in tangible limits on an AI application—demonstrating that the direction of adoption is negotiable.
Balance of uncertainty
Predicting which specific models will be deployed, when, and by whom is uncertain. Technical roadmaps change. Market shocks and regulatory interventions can shift investment flows. The evidence reviewed here indicates a strong technical momentum but also multiple levers capable of redirecting or constraining that momentum.
In other words: AI's broad arc is shaped by both technological capability and social choices. To treat it as wholly inevitable is to ignore the many historical examples where policy and social engagement changed technological outcomes.
Conclusion
The assertion that "AI is not inevitable" is not an empty rhetorical stance. It is a proposition supported by historical precedent, current policy activity and by the practical reality that incentives—legal, economic and political—shape technological development. Technical momentum is real and powerful, but it does not equate to determinism. Democratic societies can influence the path of AI through regulation, public investment, civic engagement and international coordination.
That influence requires work: building regulatory capacity, investing in public-interest research, and engaging in sustained debate about trade-offs. If democratic control and public oversight are to have meaning in the age of advanced computation, stakeholders must treat AI as a set of choices to be governed rather than as an inevitable destiny to be passively endured.
Disclaimer: This article is based on publicly available information and does not represent investment or legal advice.