They Sounded Crazy - Until the Internet Proved Them Right. What That Reveals About AI Today

In February 1995, as the World Wide Web was just beginning its transformation of human society, Newsweek published what would become one of the most spectacularly wrong predictions in technology history. "The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works," wrote Clifford Stoll in a piece that has been preserved online for posterity. He continued with particular skepticism about electronic publishing: "Try reading a book on disc. Yet Nicholas Negroponte, director of the MIT Media Lab, predicts that we'll soon buy books and newspapers straight over the Internet. Uh, sure."
Seventeen years later, Newsweek itself ceased print publication and became exclusively available online.
Today, as artificial intelligence stands poised to reshape society in ways that may dwarf even the internet's impact, we find ourselves in a remarkably similar moment. Once again, we have visionary technologists making bold predictions about transformative change. Once again, we have skeptics dismissing these forecasts as overblown. And once again, we face the challenge of distinguishing between genuine insight and mere speculation about technologies whose full implications remain unknowable.
But this time, there's a crucial difference: we have the benefit of hindsight. We can examine what the early internet pioneers saw that others missed, understand why their warnings were initially dismissed, and apply those lessons to today's discourse around artificial intelligence. The parallels are striking, the stakes are higher, and the window for proactive response may be narrower than we think.
The Prophets and the Skeptics: How the Internet's Early Believers Were Dismissed
The story of the internet's early predictions reveals a consistent pattern: those closest to the technology often had the most accurate sense of its transformative potential, while those viewing it from the outside focused on its limitations and dismissed its possibilities. This dynamic played out repeatedly throughout the 1990s, creating a rich archive of both prescient insights and spectacular miscalculations.
The skepticism wasn't limited to Clifford Stoll's famous Newsweek piece. Robert Metcalfe, the inventor of Ethernet and a figure who should have understood network effects better than most, predicted in InfoWorld in 1995 that "the Internet will soon go spectacularly supernova and in 1996 catastrophically collapse". He gave the entire network a twelve-month life expectancy. Waring Partridge, writing in Wired that same year, dismissed the internet's potential for mass adoption with the observation that "most things that succeed don't require retraining 250 million people". Brian Carpenter, speaking to the Associated Press, worried that Tim Berners-Lee had forgotten to build in expiration dates for web content, meaning "any information can just be left and forgotten. It could stay on the network until it is five years out of date".
These weren't random commentators or technophobic journalists. These were serious technologists and industry observers who understood computers and networks. Yet they consistently underestimated the internet's potential for several key reasons that would prove instructive for today's AI discourse.
First, they focused on technical limitations rather than social possibilities. The bandwidth was too narrow, the interfaces too clunky, the content too sparse. They saw the internet as it was in 1995, not as it could become with Moore's Law improvements and network effects. Second, they underestimated the speed of adoption and the willingness of people to change their behaviors. The idea that hundreds of millions of people would learn new ways of shopping, communicating, and consuming media seemed implausible. Third, they missed the emergent properties that would arise from connecting so many people and systems. They couldn't anticipate social media, viral content, or the platform economy because these weren't just scaled-up versions of existing phenomena - they were entirely new categories of human activity.
Perhaps most tellingly, even Tim Berners-Lee himself initially described his creation in modest terms. Posting on a forum of early internet users in 1991, he summarized the World Wide Web as "aiming to allow information sharing within internationally dispersed teams, and the dissemination of information by support groups". This summary, as the New Statesman noted, "does not describe the many exciting possibilities opened up by the WWW project," and Berners-Lee was "blissfully unaware of the forthcoming arrival of Nyan Cat."
The pattern extended beyond individual predictions to broader cultural assumptions about human behavior online. John Allen, speaking to CBC in 1993, mused about civility and restraint on the internet, sharing his belief that "groups have their own sense of community and what we can do" would prevent people from saying and doing terrible things to one another over the web. This optimistic view of human nature online proved tragically naive: a 2016 Australian study found that 76 percent of women under 30 had experienced abuse or harassment online.
The Deeper Critics: Those Who Saw the Real Implications
While many early internet skeptics focused on technical limitations or adoption challenges, a smaller group of critics offered more sophisticated analyses that proved remarkably prescient. These voices, most notably collected in the 1995 anthology "Resisting the Virtual Life" published by City Lights Books, weren't simply arguing that the internet wouldn't work - they were warning about how it would work and what that would mean for society.
As Alexis Madrigal noted in The Atlantic, these weren't the "humbuggery" of Clifford Stoll's technical dismissals. "These were deeper criticisms about the kind of society that was building the internet, and how the dominant values of that culture, once encoded into the network, would generate new forms of oppression and suffering, at home and abroad".
This distinction is crucial for understanding both internet history and today's AI discourse. The deeper critics weren't wrong about the technology's potential - they were concerned about its implications. They understood that the values, priorities, and power structures of the people building these systems would inevitably be embedded in the technology itself. They worried about surveillance, about the concentration of power in the hands of a few large corporations, about the potential for manipulation and control.
These concerns proved remarkably accurate. The internet did indeed become a tool for unprecedented surveillance, with both governments and corporations tracking users' every click and movement. It did concentrate enormous wealth and power in the hands of a few technology companies. It did enable new forms of manipulation, from targeted advertising to political disinformation campaigns. The critics of "Resisting the Virtual Life" saw these possibilities not because they were pessimistic about technology, but because they understood how power works and how it shapes technological development.
The contrast between surface-level skepticism and deeper structural analysis offers important lessons for evaluating today's AI discourse. When Geoffrey Hinton warns about AI extinction risks, or when Yoshua Bengio calls for dramatic changes in how we develop AI systems, they're not making the same kind of technical predictions that Robert Metcalfe made about internet collapse. They're offering structural analyses about intelligence, control, and power that warrant the same serious consideration the "Resisting the Virtual Life" critics deserved but never received.
Today's AI Prophets: What the Current Believers Are Saying
The discourse around artificial intelligence today bears striking similarities to the early internet debates, but with several crucial differences. Most notably, many of the most dire warnings about AI are coming not from outside critics but from the technology's own creators and leading researchers. This represents a significant departure from the internet era, when pioneers like Berners-Lee and early web developers were generally optimistic about their creation's potential.
Geoffrey Hinton, often called the "godfather of AI" for his foundational work in neural networks, has become increasingly vocal about existential risks as AI systems have grown more powerful. In December 2024, he updated his assessment of the probability that AI could lead to human extinction within the next thirty years from 10 percent to "10% to 20%". His reasoning reveals the depth of his concern: "You see, we've never had to deal with things more intelligent than ourselves before. How many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There's a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that's about the only example I know of."
Hinton's analogy is particularly striking: "I like to think of it as: imagine yourself and a three-year-old. We'll be the three-year-olds" when compared to future AI systems. This isn't a technical prediction about processing power or algorithmic improvements - it's a fundamental observation about intelligence hierarchies and control relationships.
The concern extends beyond individual researchers to industry leaders who are actively building these systems. In May 2023, a statement published by the Centre for AI Safety and signed by dozens of AI researchers and industry leaders declared that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". The signatories included Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic - the very people leading the development of the most advanced AI systems.
The Centre for AI Safety has outlined several specific disaster scenarios that go well beyond science fiction speculation. They warn that AI systems could be weaponized, with drug-discovery tools repurposed to create chemical weapons. They predict that AI-generated misinformation could destabilize society and "undermine collective decision-making." They worry about the concentration of AI power in fewer hands, enabling "regimes to enforce narrow values through pervasive surveillance and oppressive censorship." They even raise concerns about human "enfeeblement," where people become dependent on AI systems "similar to the scenario portrayed in the film WALL-E."
Yoshua Bengio, another of the three "godfathers of AI" who won the 2018 Turing Award, has been equally vocal about the need for caution. In his blog post "Reasoning through arguments against taking AI safety seriously," he writes about his concerns regarding "the speed at which the intelligence of AI systems could grow" and warns that some people with "a lot of power" may even want to see humanity replaced by machines.
The timeline these researchers envision is remarkably compressed compared to earlier technological transformations. As Hinton noted in his BBC interview, "most of the experts in the field think that sometime, within probably the next 20 years, we're going to develop AIs that are smarter than people. And that's a very scary thought". The pace of development, he says, is "very, very fast, much faster than I expected."
The Counter-Voices: Why Some Experts Remain Skeptical
Not all AI researchers share these apocalyptic concerns, and the nature of their skepticism offers important insights into the current debate. Yann LeCun, the third member of the AI "godfathers" trio and chief AI scientist at Meta, has been particularly vocal in pushing back against extinction warnings. He has tweeted that "the most common reaction by AI researchers to these prophecies of doom is face palming".
LeCun's skepticism represents a different kind of pushback than the early internet critics offered. Rather than dismissing AI's transformative potential, he argues that current AI systems are nowhere near capable enough to pose existential risks and that focusing on such scenarios distracts from more immediate concerns.
This perspective is shared by other prominent researchers. Arvind Narayanan, a computer scientist at Princeton University, has argued that "current AI is nowhere near capable enough for these risks to materialize. As a result, it distracts attention away from the near-term harms of AI". Elizabeth Renieris from Oxford's Institute for Ethics in AI has expressed similar concerns, worrying that "advancements in AI will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable".
Renieris's critique goes beyond technical capabilities to economic and social structures, echoing the deeper internet critics of the 1990s. She argues that AI systems "free ride" on "the whole of human experience to date," trained on human-created content such as text, art, and music, while their creators "have effectively transferred tremendous wealth and power from the public sphere to a small handful of private entities".
This tension between existential and near-term risks reflects a broader debate about priorities and resource allocation. Dan Hendrycks, director of the Centre for AI Safety, has argued that these concerns "shouldn't be viewed antagonistically," noting that "addressing some of the issues today can be useful for addressing many of the later risks tomorrow".
The skeptical voices serve an important function in the current discourse, much as the early internet critics did. They force proponents of dramatic change to defend their assumptions and provide evidence for their claims. However, the history of internet predictions suggests we should be particularly attentive to the difference between technical skepticism and structural analysis, and between dismissing possibilities and questioning their implications.
What Hindsight Teaches Us: Lessons from the Internet's Transformation
Looking back at the internet's development with three decades of perspective reveals patterns that should inform how we approach artificial intelligence today. The most striking lesson is how consistently experts underestimated the speed and scope of transformation, even when they correctly identified the underlying technological potential.
The early internet skeptics made several systematic errors that offer crucial insights for today's AI discourse. First, they focused on current technical limitations rather than the trajectory of improvement. When Clifford Stoll dismissed online shopping and digital publishing, he was looking at the internet of 1995 - slow dial-up connections, primitive interfaces, and limited content. He couldn't envision the broadband networks, sophisticated e-commerce platforms, and vast digital libraries that would emerge within a decade.
This pattern of extrapolating from current limitations rather than anticipating exponential improvement appears throughout technology history. The early internet critics saw bandwidth constraints and assumed they would persist. They observed clunky interfaces and concluded that ordinary people would never adapt. They noted the scarcity of online content and failed to anticipate the explosion of user-generated material that would follow.
Second, the skeptics underestimated network effects and emergent behaviors. The internet's most transformative applications - social media, viral content, platform marketplaces, collaborative knowledge creation - weren't simply digital versions of existing activities. They were entirely new forms of human organization and interaction that emerged from connecting millions of people in unprecedented ways. These emergent properties couldn't be predicted by analyzing the technology in isolation; they required understanding how human behavior would evolve in response to new possibilities.
Third, the critics missed the economic incentives that would drive rapid adoption and improvement. They saw the internet as a curiosity for academics and technologists, not as a platform for commerce, entertainment, and social connection that would attract massive investment and innovation. The profit motive, combined with network effects, created a self-reinforcing cycle of improvement that accelerated development far beyond what early observers anticipated.
Perhaps most importantly, the early skeptics failed to appreciate how quickly human behavior could change when presented with sufficiently compelling benefits. Waring Partridge's observation that "most things that succeed don't require retraining 250 million people" proved spectacularly wrong - not because people didn't need to learn new skills, but because they were willing to do so when the rewards were clear. The internet didn't just require behavioral change; it incentivized it through convenience, connection, and economic opportunity.
The deeper critics of the 1990s, by contrast, proved remarkably prescient in their structural analyses. Their warnings about surveillance, corporate power concentration, and social manipulation weren't based on technical predictions but on understanding how power operates in technological systems. They recognized that the internet wouldn't just be a neutral tool for information sharing - it would reflect and amplify the values and interests of those who controlled its development.
These insights proved accurate not because the critics could predict specific technologies like targeted advertising or social media algorithms, but because they understood the underlying dynamics of power, profit, and control that would shape the internet's evolution. They saw that a network built by and for commercial interests would inevitably become a tool for commercial exploitation. They recognized that systems designed for efficiency and scale would prioritize those values over privacy and autonomy.
The accuracy of these structural predictions offers crucial guidance for evaluating today's AI discourse. When researchers like Geoffrey Hinton warn about control problems with super-intelligent systems, they're not making technical predictions about specific AI architectures. They're offering structural analyses about intelligence hierarchies and power relationships that deserve serious consideration regardless of the specific timeline or implementation details.
Similarly, when critics like Elizabeth Renieris warn about AI systems concentrating wealth and power in the hands of a few corporations, they're building on the demonstrated pattern of how transformative technologies develop under current economic and political structures. These aren't speculative concerns - they're extrapolations from observable trends in AI development and deployment.
The internet's history also reveals the importance of timing in technological governance. Most attempts to address the internet's negative consequences - from privacy regulations to antitrust enforcement - came years or decades after the problems became apparent. By then, the basic architecture of the internet economy was already established, making fundamental changes extremely difficult and expensive.
This pattern suggests that waiting for AI problems to emerge before addressing them may be too late. The current moment, when AI systems are powerful enough to demonstrate transformative potential but not yet ubiquitous enough to be unchangeable, may represent a narrow window for proactive governance that won't remain open indefinitely.
Signals We Might Be Missing: What Today's Discourse Reveals
Examining today's AI discourse through the lens of internet history reveals several signals that deserve more attention than they're currently receiving. These aren't necessarily predictions about specific outcomes, but rather indicators of the kind of transformation that may be underway and the speed at which it might occur.
The first signal is the remarkable consensus among AI researchers about the timeline for artificial general intelligence. When Geoffrey Hinton states that "most of the experts in the field think that sometime, within probably the next 20 years, we're going to develop AIs that are smarter than people," he's describing not just his personal opinion but a broad professional consensus. This represents a significant shift from even five years ago, when such timelines were considered highly speculative.
The speed of this consensus formation itself deserves attention. The early internet took decades to move from academic curiosity to mainstream recognition of its transformative potential. AI discourse has compressed this timeline dramatically, with widespread acknowledgment of transformative potential emerging within just a few years of systems like GPT-3 demonstrating unexpected capabilities.
This acceleration reflects not just faster technological development but also the AI community's awareness of exponential improvement curves. Unlike the early internet, where each improvement was incremental and visible, AI capabilities can improve dramatically with relatively small changes in model size, training data, or algorithmic approaches. The jump from GPT-3 to GPT-4, for example, represented a qualitative leap in capabilities that surprised even the researchers who built these systems.
The second signal is the nature of the warnings coming from AI developers themselves. The internet era was characterized by optimistic pioneers and skeptical outsiders. Today's AI discourse features the unusual spectacle of technology creators warning about their own creations. When the CEOs of OpenAI, Google DeepMind, and Anthropic sign statements comparing AI risks to pandemics and nuclear war, they're not engaging in marketing hyperbole - they're expressing genuine concerns about technologies they understand better than anyone else.
This pattern of creator concern is historically unusual and suggests that AI development may be proceeding faster than even its architects are comfortable with. The fact that Geoffrey Hinton left Google specifically to speak more freely about AI risks indicates that normal corporate incentives may be insufficient to ensure responsible development.
The third signal is the economic disruption already visible in labor markets. Research has shown that the number of new UK entry-level jobs has declined significantly since ChatGPT's launch, suggesting that AI's impact on employment may be happening faster and more broadly than anticipated. This isn't just automation of routine tasks - it's the displacement of cognitive work that was previously considered safe from technological substitution.
The speed of this disruption is particularly noteworthy. Previous waves of automation typically took decades to reshape labor markets, allowing time for workers to retrain and economies to adjust. AI's impact on cognitive work appears to be happening much more rapidly, potentially outpacing society's ability to adapt.
The fourth signal is the concentration of AI development in a small number of organizations with unprecedented computational resources. Training state-of-the-art AI systems now requires investments measured in hundreds of millions or billions of dollars, effectively limiting serious AI research to a handful of technology companies and well-funded research institutions. This concentration of capability represents a significant departure from the internet's early development, which was characterized by distributed innovation and relatively low barriers to entry.
This concentration has implications beyond just market competition. It means that decisions about AI development - including safety measures, deployment timelines, and capability targets - are being made by a very small number of people with limited democratic accountability. The internet's development, while certainly influenced by commercial interests, involved thousands of researchers, developers, and organizations. AI's development is increasingly centralized in ways that may limit both innovation and oversight.
The fifth signal is the emergence of AI capabilities that weren't explicitly programmed or anticipated by their creators. Large language models have demonstrated abilities in reasoning, creativity, and problem-solving that emerged from training on text prediction tasks. These emergent capabilities suggest that AI development may be less predictable and controllable than traditional software development, with implications for both safety and governance.
The pattern of emergent capabilities also raises questions about the adequacy of current evaluation and safety measures. If AI systems can develop unexpected abilities through training, then testing them only on anticipated use cases may be insufficient to ensure safe deployment. This unpredictability echoes the internet's development, where the most transformative applications - social media, e-commerce platforms, search engines - weren't anticipated by the network's original designers.
The Governance Challenge: Learning from Internet Regulation
The internet's regulatory history offers both cautionary tales and potential models for AI governance. The most striking lesson is how difficult it becomes to impose meaningful constraints on a technology after it has achieved widespread adoption and economic entrenchment. Most significant internet regulations - from GDPR to antitrust investigations - came decades after the problems they address became apparent, by which point the basic architecture of the internet economy was already established.
This pattern suggests that the current moment may represent a crucial window for AI governance. Unlike the internet, which developed largely without regulatory oversight and only faced serious governance efforts after its transformative effects were already apparent, AI is attracting regulatory attention while still in its early stages of development and deployment.
The challenge lies in designing governance frameworks that can adapt to rapidly evolving capabilities while avoiding both premature restrictions that stifle beneficial innovation and delayed responses that allow harmful applications to become entrenched. The internet's history suggests that this balance is extremely difficult to achieve, particularly given the global nature of technology development and the competitive pressures that drive rapid deployment.
Several models for AI governance have emerged from the current discourse, each drawing different lessons from internet history. The first is the nuclear analogy, explicitly invoked by OpenAI's suggestion that "we are likely to eventually need something like an IAEA [International Atomic Energy Agency] for superintelligence efforts". This model emphasizes international coordination, technical expertise, and the recognition that some technologies require special oversight due to their potential for catastrophic harm.
The nuclear analogy has both strengths and limitations. Nuclear technology development was successfully constrained through international agreements and oversight mechanisms, preventing the widespread proliferation that many experts feared in the 1950s and 1960s. However, nuclear technology development was also much more centralized and resource-intensive than AI development, making it easier to monitor and control. AI development is more distributed, requires fewer specialized resources, and has more immediate commercial applications, making nuclear-style governance more challenging to implement.
The second model draws on pharmaceutical regulation, emphasizing safety testing and approval processes before deployment. This approach would require AI developers to demonstrate safety and efficacy before releasing systems with certain capabilities or applications. The pharmaceutical model has been successful in preventing many harmful drugs from reaching the market, but it also significantly slows innovation and may be poorly suited to the rapid iteration cycles that characterize AI development.
The third model focuses on transparency and accountability rather than pre-approval, requiring AI developers to disclose information about their systems' capabilities, training data, and safety measures. This approach draws on financial regulation and environmental disclosure requirements, emphasizing market-based solutions and informed decision-making rather than direct government control.
Each of these models reflects different assumptions about the nature of AI risks and the appropriate role of government in technology development. The choice between them - or the development of hybrid approaches - will likely depend on how AI capabilities evolve and what kinds of problems emerge in the coming years.
The internet's regulatory history also highlights the importance of international coordination in technology governance. The global nature of both the internet and AI development means that purely national approaches are likely to be ineffective, either driving innovation to less regulated jurisdictions or creating fragmented systems that undermine the technologies' benefits.
The European Union's approach to AI regulation, embodied in the AI Act, represents one attempt to create comprehensive governance frameworks before problems become entrenched. However, the effectiveness of this approach will depend on whether other major jurisdictions adopt similar measures and whether the regulations can adapt to rapidly evolving capabilities.
Inclusive Participation in Technological Futures
One of the most significant lessons from internet history is the importance of inclusive participation in shaping technological development. The internet's early development was largely driven by technical experts and commercial interests, with limited input from the broader public about the kind of society these technologies would create. By the time ordinary citizens began to experience the internet's negative consequences - from privacy violations to misinformation campaigns - the basic architecture was already established and extremely difficult to change.
This pattern suggests that waiting for AI's societal impacts to become apparent before engaging in democratic deliberation may be too late. The current moment, when AI capabilities are advancing rapidly but haven't yet become ubiquitous, may represent a crucial opportunity for public participation in shaping how these technologies develop and deploy.
The challenge lies in creating meaningful opportunities for democratic input on highly technical issues that are evolving rapidly. Traditional democratic institutions - legislatures, regulatory agencies, public comment processes - are often too slow and too removed from technical details to provide effective oversight of emerging technologies. New mechanisms for public participation may be needed that can operate at the speed of technological development while still ensuring broad representation and accountability.
Several experiments in democratic technology governance offer potential models. Citizens' assemblies, which bring together randomly selected groups of citizens to deliberate on complex policy issues, have been used successfully to address contentious topics like climate change and genetic engineering. These assemblies combine expert input with public deliberation, allowing ordinary citizens to develop informed opinions on technical issues while maintaining democratic legitimacy.
Participatory technology assessment, developed in several European countries, involves public engagement in evaluating emerging technologies before they become widely deployed. These processes typically combine expert analysis with public consultation, creating opportunities for citizens to influence technology development based on their values and priorities rather than just technical considerations.
The AI community itself has begun experimenting with public engagement mechanisms. OpenAI's red team exercises, which involve external experts in testing AI systems for potential harms, represent one approach to broadening participation in AI safety evaluation. However, these efforts remain limited in scope and primarily involve technical experts rather than the broader public.
More ambitious approaches might involve public participation in setting research priorities, deployment standards, and safety requirements for AI systems. This could include citizen oversight of AI research funding, public input on acceptable risk levels for different AI applications, and democratic deliberation about the kinds of AI futures society wants to pursue.
The internet's history also highlights the importance of preserving space for alternative approaches and dissenting voices. The early internet's diversity - multiple competing protocols, platforms, and business models - gradually gave way to consolidation around a few dominant companies and approaches. This consolidation made the internet more efficient and user-friendly in many ways, but it also reduced the space for experimentation and alternative visions.
AI development shows similar tendencies toward consolidation, with a few large companies dominating research and development. Preserving space for alternative approaches - whether through public research funding, open-source development, or regulatory requirements for interoperability - may be crucial for maintaining democratic control over AI's development.
The goal isn't to slow AI development or prevent beneficial applications, but to ensure that the trajectory of AI development reflects democratic values and priorities rather than just technical possibilities and commercial incentives. This requires creating institutions and processes that can engage with rapidly evolving technologies while maintaining democratic accountability and representation.
Conclusion: The Urgency of Proactive Engagement
The parallels between early internet discourse and today's AI debates are striking, but they point toward a crucial difference: we now have the benefit of hindsight. We know how transformative technologies can reshape society in ways that their creators never anticipated. We understand how quickly human behavior can change when presented with compelling new capabilities. We've seen how difficult it becomes to impose meaningful constraints on technologies after they achieve widespread adoption.
This knowledge creates both an opportunity and an obligation. The opportunity lies in applying lessons from internet history to shape AI development more deliberately and democratically. The obligation lies in recognizing that the current moment may represent a narrow window for proactive engagement that won't remain open indefinitely.
The early internet critics who warned about surveillance, corporate power concentration, and social manipulation weren't wrong - they were simply ignored until their predictions became reality. Today's AI researchers who warn about control problems, existential risks, and rapid societal transformation deserve the same serious consideration that the internet's deeper critics should have received but didn't.
This doesn't mean accepting every dire prediction or halting AI development. It means engaging seriously with the structural analyses and systemic concerns that researchers like Geoffrey Hinton, Yoshua Bengio, and Elizabeth Renieris are raising. It means recognizing that the speed of AI development may not allow for the gradual adaptation and course correction that characterized the internet's evolution.
Most importantly, it means expanding participation in decisions about AI development beyond the small community of researchers and entrepreneurs who currently control the technology's trajectory. The internet's development was shaped primarily by technical experts and commercial interests, with limited democratic input about the kind of society these technologies would create. We have an opportunity to do better with AI, but only if we act while the technology's basic architecture is still malleable.
The signals are clear: AI development is proceeding faster than most experts anticipated, with capabilities emerging that weren't explicitly programmed or predicted. The economic and social disruptions are already beginning, and the concentration of AI development in a few organizations is creating an unprecedented accumulation of power. The window for proactive governance and democratic engagement may be narrower than we think.
The early internet believers saw transformative potential that skeptics missed. Today's AI researchers are seeing similar potential, but they're also seeing risks that the internet's pioneers didn't anticipate or couldn't imagine. We have the opportunity to learn from both their insights and their oversights, but only if we take seriously the urgency of the moment and the magnitude of what may be at stake.
AI will transform society - that transformation is already underway. Whether we shape it deliberately and inclusively, or find ourselves, like the internet's early critics, looking back with regret at opportunities missed and warnings ignored, remains to be seen. The choice is ours, but the window for making it may be closing faster than we think.