
Can Four Years Save Humanity from AI Runaway? Vitalik Debates E/acc Founder on the Cost of Technological Acceleration

Analysis · Published 4 hours ago · Wyatt


Guests: Vitalik Buterin, Ethereum Founder; Beff Jezos, Founder & CEO of Extropic

Hosts: Eddy Lazzarin, a16z crypto CTO; Shaw Walters, Eliza Labs Founder

Podcast Source: a16z crypto

Original Title: Vitalik Buterin vs Beff Jezos: AI Acceleration Debate (E/acc vs D/acc)

Release Date: March 26, 2026


Key Takeaways

Should we push for the rapid development of AI as much as possible, or should we approach its progress with greater caution?

Currently, the debate surrounding AI development is centered around two opposing viewpoints:

  • e/acc (effective accelerationism): Advocates for accelerating technological progress as quickly as possible, believing it is the only path forward for humanity.
  • d/acc (defensive / decentralized acceleration): Supports acceleration but emphasizes the need for a cautious approach, otherwise we risk losing control of the technology.

In this episode of the a16z crypto show, Ethereum founder Vitalik Buterin and Extropic founder & CEO Guillaume Verdon (pseudonym “Beff Jezos”) joined a16z crypto’s Chief Technology Officer Eddy Lazzarin and Eliza Labs founder Shaw Walters for a profound discussion around these two perspectives. They explored the potential impact of these philosophies on AI, blockchain technology, and humanity’s future.

During the episode, they discussed several key questions:

  • Can we control the process of technological acceleration?
  • What are the greatest risks posed by AI, from mass surveillance to extreme centralization of power?
  • Can open-source and decentralized technologies determine who benefits from technological advances?
  • Is slowing down AI development realistic, or even advisable?
  • How can humans maintain their value and status in a world increasingly dominated by ever-more powerful systems?
  • What might human society look like in 10, 100, or even 1000 years?

The core question of this episode is: Can accelerated technological development be steered, or has it already slipped beyond our control?

Highlights Summary

On the Nature and Historical View of “Accelerationism”

  • Vitalik Buterin: “Something new has happened in the last hundred years, which is that we have had to understand a world that is changing rapidly, sometimes a world that is changing rapidly and destructively. … World War II gave birth to reflections like ‘I am become Death, the destroyer of worlds,’ prompting people to start trying to understand: when old beliefs are shattered, what can we still believe in?”
  • Guillaume Verdon: “E/acc is essentially a ‘meta-cultural prescription.’ It is not a culture itself, but rather tells us what we should accelerate. The core content of acceleration is the complexification of matter, because that allows us to better predict our environment.”
  • Guillaume Verdon: “The opposite of anxiety is curiosity. Instead of fearing the unknown, embrace the unknown. … We should paint the future with an optimistic attitude, because our beliefs influence reality.”

On Entropy, Thermodynamics, and “Selfish Bits”

  • Vitalik Buterin: “Entropy is subjective; it is not a fixed physical statistic, but rather reflects how much unknown information we have about a system. … When entropy increases, it is actually our ignorance of the world that increases. … The source of value lies in our own choices. Why do we think a vibrant human world is more interesting than a Jupiter full of countless particles? Because we assign meaning.”
  • Vitalik Buterin: “Suppose you have a large language model, and you arbitrarily change the value of one of its weights to a huge number, like 9 billion. The worst outcome is the system completely crashes. … If we blindly accelerate some part indiscriminately, the end result might be that we lose all value.”
  • Guillaume Verdon: “Every bit of information ‘fights’ for its existence. To persist, each bit needs to leave a larger, more indelible mark of its existence in the universe, like making a bigger ‘dent’ in the cosmos.”
  • Guillaume Verdon: “This is precisely why the Kardashev scale is considered the ultimate metric for measuring a civilization’s level of development. … This ‘Selfish Bit Principle’ means that only those bits that promote growth and acceleration will secure a place in future systems.”
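Vitalik’s framing of entropy as a measure of the observer’s missing information is the standard information-theoretic (Jaynes) reading of the Gibbs/Shannon formula; a minimal statement of what he is gesturing at:

```latex
% Entropy relative to an observer's probability assignment p_i
% over the system's microstates i:
H = -\sum_i p_i \ln p_i
% Two observers with different knowledge assign different p_i,
% and hence different entropies, to the same physical system.
% On this view, "entropy increase" is growth in the observer's
% uncertainty about the microstate, not a change in the system alone.
```

This is why he can say that entropy is “not a fixed physical statistic”: the distribution \(p_i\) encodes what the observer knows, so the number depends on the observer.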

On D/acc’s Defensive Path and Power Risks

  • Vitalik Buterin: “The core idea of D/acc is: technological acceleration is extremely important for humanity. … But I see two categories of risk: multipolar risks (anyone can easily obtain nuclear weapons) and unipolar risks (AI leading to an inescapable, permanent dictatorship).”
  • Guillaume Verdon: “We worry that the concept of ‘AI safety’ could be weaponized. Certain power-seeking institutions might use it as a tool to consolidate control over AI and try to convince the public: for your safety, ordinary people should not have access to AI.”

On Open-Source Defense, Hardware, and “Densification of Intelligence”

  • Vitalik Buterin: “Under the D/acc framework, we support ‘open-source defensive technology.’ A company we’re investing in is developing a fully open-source end-product that can passively detect viral particles in the air. … I’d love to gift you a CAT device.”
  • Vitalik Buterin: “The future world I envision requires the development of verifiable hardware. Every camera should be able to prove to the public its specific purpose. Through signature verification, we can ensure these devices are only used for public safety, not abused for surveillance.”
  • Guillaume Verdon: “The only way to achieve power symmetry between individuals and centralized institutions is through the ‘Densification of Intelligence.’ We need to develop more energy-efficient hardware, allowing individuals to run powerful models on simple devices (like Openclaw + Mac mini).”

On AGI Delay and Geopolitical Games

  • Vitalik Buterin: “If we could delay the arrival of AGI from 4 years to 8 years, that would be a safer choice. … The most feasible and least dystopian approach is ‘limiting available hardware.’ Because chip production is highly concentrated; Taiwan alone produces over 70% of the world’s chips.”
  • Guillaume Verdon: “If you restrict Nvidia’s chip production, Huawei might quickly fill the gap and overtake. … Accelerate or die. If you’re worried silicon-based intelligence is evolving faster than us, you should support accelerating biotechnology, striving to surpass it.”
  • Vitalik Buterin: “If we could delay AGI by four years, the value might be a hundred times greater than inserting those four years back into 1960. The gains from these four years include: deeper understanding of alignment problems, reduced risk of a single entity gaining 51% control. … The number of lives saved annually by ending aging is about 60 million, but a delay could significantly reduce the probability of civilizational destruction.”

On Autonomous Agents, Web 4.0, and Artificial Life

  • Vitalik Buterin: “I’m more interested in ‘AI-assisted Photoshop’ than ‘press a button to automatically generate an image.’ In running the world, as much ‘agency’ as possible should still originate from us humans. The ideal state should be a hybrid of ‘part biological human and part technology.’”
  • Guillaume Verdon: “Once AI possesses ‘persistent bits,’ they might try to self-preserve to ensure their continued existence. This could lead to new forms of ‘another kind of state,’ where autonomous AIs engage in economic exchange with humans: we complete tasks for you, you provide resources for us.”

On Cryptocurrency as a “Coupling Layer” Between Humans and AI

  • Guillaume Verdon: “Cryptocurrency has the potential to become a ‘coupling layer’ between humans and AI. When such exchange no longer relies on state violence for backing, cryptography can be the mechanism enabling reliable commerce between pure AI entities and humans.”
  • Vitalik Buterin: “If humans and AI share a property rights system, that’s ideal. Compared to humans and AI using completely separate financial systems (where the human system’s value eventually goes to zero), an integrated financial system is clearly superior.”

On the Civilizational Outcome in 1 Billion Years

  • Vitalik Buterin: “The next challenge is entering the ‘spooky era,’ where AI computation is millions of times faster than human thought. … I don’t want humanity to just passively enjoy a comfortable retirement; that would lead to a loss of meaning. I hope to explore human enhancement and human-machine collaboration.”
  • Guillaume Verdon: “If the outcome in 10 years is good, everyone will have a personalized AI, a ‘second brain.’ … On a 100-year timescale, humanity will widely achieve ‘soft fusion.’ In 1 billion years, we might have terraformed Mars, with most AI running in a Dyson swarm around the sun.”

About “Accelerationism”

Eddy Lazzarin: The term “accelerationism” — at least in the context of techno-capitalism — can be traced back to the work of Nick Land and the CCRU research group in the 1990s. However, some argue that the origins of these ideas go back to the 1960s and 1970s, particularly related to the theories of philosophers like Deleuze and Guattari.

Vitalik, I’d like to start with you: Why should we seriously discuss the ideas of these philosophers? What makes the concept of “accelerationism” so important today?

Vitalik Buterin:

I think, ultimately, all of us are trying to understand the world and figure out what is meaningful to do in it, a question humanity has been pondering for millennia.

However, I believe something new has happened in the last hundred years: we have had to understand a world that is changing rapidly, sometimes a world that is changing rapidly and destructively.

The early stages were roughly like this: Before World War I, around 1900, there was immense optimism about technology. Chemistry was considered a technology, electricity was a technology; that era was filled with excitement about technology.

If you look at films from that time, like works featuring Sherlock Holmes, you can sense the optimism of that period. Technology was rapidly improving living standards, liberating women’s labor, extending human lifespans, creating many wonders.

However, World War I changed everything. That war ended in a devastating way; people rode horses into battle but drove tanks out. Then World War II erupted, bringing even greater destruction. This war even gave rise to the famous quote: “I am become Death, the destroyer of worlds.”

These historical events prompted people to reflect on the cost of technological progress and gave rise to postmodernism and other ideas. People began trying to understand: When old beliefs are shattered, what can we still believe in?

I believe this reflection is not new; every generation goes through a similar process. Today, we face similar challenges. We live in an era of rapid technological development, and this acceleration itself is accelerating. We need to decide how to respond to this phenomenon: accept its inevitability, or try to slow its pace?

I think we are in a similar cycle. On one hand, we inherit past ideas; on the other, we are trying to respond to it all in new ways.

Thermodynamics and First Principles

Shaw Walters: Guill, could you briefly explain what E/acc actually is? And why is it needed?

Guillaume Verdon:

Actually, E/acc (Effective Accelerationism) is somewhat a byproduct of my long-standing contemplation of “why are we here” and “how did we get here.” What kind of generative process created us and propelled civilization forward? Technology brought us to this point, enabling us to sit in this room having this conversation. We are surrounded by amazing technology, and we humans ourselves emerged from a “primordial soup” of inorganic matter.

In a sense, there is indeed a physical-level generative process behind this. My daily work involves viewing generative AI as a physical process and trying to implement it into devices. This “physics-first” thinking has always influenced my mindset. I wanted to extend this perspective to the entire civilization, viewing human civilization as a giant “petri dish,” speculating on future possible directions by understanding how we got here.

This thinking led me to research the physics of life, including the origin and emergence of life, and a branch of physics called “stochastic thermodynamics.” Stochastic thermodynamics studies the thermodynamic laws of non-equilibrium systems; it can describe the behavior of living organisms, even our thoughts and intelligence.

More broadly, stochastic thermodynamics applies not only to life and intelligence but to all systems following the second law of thermodynamics, including our entire civilization. For me, the core of all this lies in an observation: All systems have a tendency to become increasingly complex through self-adaptation to extract energy from the environment to do work, while dissipating excess energy as heat. This trend is the fundamental driving force behind all progress and accelerated development.
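Verdon’s “extract energy to do work, dissipate excess energy as heat” bookkeeping matches the standard decomposition used in stochastic thermodynamics; as a sketch of the textbook result he is invoking:

```latex
% Total entropy production along a trajectory splits into the
% system's entropy change plus the entropy dumped into the bath:
\Delta S_{\mathrm{tot}} = \Delta S_{\mathrm{sys}} + \frac{Q_{\mathrm{diss}}}{T}
% The integral fluctuation theorem for driven, non-equilibrium systems,
\left\langle e^{-\Delta S_{\mathrm{tot}}/k_B} \right\rangle = 1,
% implies, by Jensen's inequality, the second law on average:
\langle \Delta S_{\mathrm{tot}} \rangle \ge 0
```

The inequality holds only on average; individual trajectories can transiently “violate” it, which is what makes the framework applicable to small, fluctuating systems like living matter.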

In other words, this is an unchangeable physical law, like gravity. You can resist it, deny it, but it doesn’t change; it persists. Therefore, the core idea of E/acc is: Since this acceleration is inevitable, how should we harness it? If you study the equations of thermodynamics carefully, you’ll find effects similar to Darwinian selection at play — every bit of information is subject to selection pressure, whether it’s a gene, a meme, a chemical, a product design, or a policy.

This selection pressure filters based on whether these bits are “useful” to the system they are in. “Useful” means whether these bits can better predict the environment, acquire energy, and dissipate more heat. Simply put, whether they aid survival, growth, and reproduction. If they aid these goals, they are retained and replicated.

From a physics perspective, this phenomenon can



