
E/Acc is Losing

Would you believe me if I told you that Eliezer Yudkowsky, the OG AI safety guru, spent years trying to build superintelligent AI as fast as he could? It seems crazy considering his current doomsday warnings about AI and the fervent following he has gathered: smart people who seem to agree that AI, as it exists right now, should be locked up tighter than a nuclear launch code. Serious researchers have followed suit, warning about the dangers of AI. So what’s the deal? I needed to understand how this small, seemingly paranoid community holds such sway over the future of AI.

I needed answers, and answers I found by going to LessOnline, a gathering of self-proclaimed “rationalists” and those who dabble in their unique brand of thought. I’ll admit, as a regular reader of Astral Codex Ten and a committed LessWrong lurker, I feel a strange kinship with these folks. They’re brilliant, forward-thinking, and more willing than most to entertain my ramblings about AI timelines. Some of the best conversations I have had in this foreign land took place under the warm trees of a Berkeley summer at this event, where I met all sorts of people, from Extropians to doomsday prophets. But at this year’s closing ceremony, a disturbing realization hit me: despite incredible breakthroughs in AI, the “slow down, we’re all gonna die” camp seems to be winning. Why?

My theory is simple, perhaps even uncomfortable: AI deceleration is not gaining steam; acceleration is simply being left in the dust. Accelerationists (at least the loudest ones) just aren’t as agentic or knowledgeable, and they often seem to underestimate the sheer complexity of the challenge. Their main arguments rely on vibe-based assessments that stand no chance against a community that has been writing essays and books, making YouTube videos, and even producing whole movies for the better part of the last 25 years. Even more troubling, there’s no charismatic leader for the pro-AI movement. Sure, you have brilliant minds like Garry Tan and Marc Andreessen dipping their toes in the water, but no one really wants to listen to a philosopher who stands to make billions off their own hype. I had high hopes for Based Beff, but I personally do not believe he has made any strategic decisions (other than raising money for his startup) that strengthen his cause. Other thought leaders, like Yann LeCun or François Chollet, do not believe that LLMs should be entertained as a viable path towards AGI, which weakens the case for acceleration, since most AI optimists (and pessimists) are counting on Transformer-based LLMs to reach AGI.

E/Acc

Ever heard of E/Acc? No? Don’t worry, most haven’t outside of the tech and AI Twitter bubbles. Short for Effective Accelerationism, E/Acc is like the rebellious, lesser-known cousin of Effective Altruism (EA). EA grapples with existential risks like rogue AI, although its methodology and “effectiveness” have been called into question many times, and scandals around the movement have damaged its reputation over the last few years. E/Acc embraces the unknown, pushing for the rapid, almost reckless development of transformative AI. They see AI not as a threat to be contained, but as a vast, untapped potential waiting to be unleashed.

This difference in philosophy creates a fascinating, albeit one-sided, rivalry. Many of the AI safety proponents I talked to genuinely do not see E/Acc as a serious threat; most even admit that they never think about this group of unruly X/Twitter addicts. On the other side, E/Acc sees EA’s focus on AI safety as a self-fulfilling prophecy, a way of hindering progress and potentially locking humanity out of a brighter future. The stakes couldn’t be higher: it’s a philosophical battle for the very fate of intelligent life.

But here’s the catch: E/Acc, despite its fiery rhetoric, often comes across as…well, unfriendly. Maybe it’s the “move fast and break things” attitude, or their penchant for online debates that border on the abrasive. Whatever the reason, they haven’t exactly won hearts and minds. The irony is palpable: EA might be reeling from internal scandals and shifting priorities, but their message of AI caution continues to resonate with a population that barely understands the implications of a post-AGI world. Meanwhile, E/Acc, the supposed champions of a bold, optimistic future, remain in the shadows, their voices lost in the cacophony of AI doomsday predictions.

E/Acc has an agency problem

The California SB-1047 bill was my wake-up call. How the heck was the California state legislature, infamous for being swarmed with legislative proposals, able to move so fast? I was also baffled by how much consensus there was over the passing of the bill (I am not the only one). AI safetyists, exploiting their first-mover advantage, successfully convinced California legislators that what they were doing was for the good of humanity.

Plus, how is it possible that no one was lobbying even harder on the other side, given how much money many corporations stand to gain from a possible post-AGI world? How is it possible that AI safetyists, who, admittedly, are led by a very small group of individuals, could have such a huge head start in the most important US state for AI?

The problem here is that e/acc’s lack of clear organization makes it hard for them to coordinate on anything, which is why I think Beff’s failure to become an intellectual leader will be the downfall of the e/acc movement if no action is taken soon. Safety lobbyists, on the other hand, have been organizing and advocating for their cause, even going as far as encouraging their members to join government offices for the sole purpose of promoting “AI governance,” a philosophy that puts the government at the center of AI policy.

E/Acc doesn’t seem interested in the kind of slow, bureaucratic maneuvering required to influence policy like the SB-1047 bill. They aren’t organizing think tanks (we will come back to this later, because it is not entirely accurate), courting politicians, or drafting legislation. In fact, most E/Acc proponents would likely scoff at such an approach, viewing it as a distraction from their true objective: building AGI, not controlling it. This is truly ironic, as one of the mottos of e/acc is that “you can just do things,” yet few seem inclined to do the things that matter for the future of the movement. I suspect the problem lies in the fact that E/Acc primarily exists online, a nebulous collection of individuals scattered across forums and Discord channels, unified by a shared but vague ideology rather than any formal structure. This lack of cohesion is further exacerbated by the constant forking of the movement, with splinter groups advocating for variations on the core tenets of E/Acc, often blurring the lines between transhumanism and their own unique brand of technological acceleration. This makes it incredibly difficult to pin down exactly what E/Acc stands for and how it plans to achieve its goals beyond simply advocating for faster AI development.

E/Acc has a philosophy problem

Anyone who has interacted with e/accs online might have realized that it is incredibly hard to pinpoint exactly what most e/accs stand for. Most have a vague idea along the lines of “technology go brr…”, and it is truly sad that there is no real conception of what an e/acc should try to know, or a basic reading list, beyond this Substack article. This pales in comparison to the Sequences, a series of essays written by Yudkowsky totaling somewhere between 1M and 1.2M words, the equivalent of seven decently sized books. Many AI safety proponents have read and re-read the whole Sequences and much, much more. The Sequences introduce many of the ideas advanced by modern-day AI safetyists, and I have to commend the level of effort it takes to write, or even read, such a huge series.

As a supporter of techno-optimism, I would like to see more people doing the grunt work of writing articles, debating online (I think this interview is an example of what not to do), and forming coherent ideologies.

I am sometimes not sure whether e/acc is misunderstood or whether its leaders are truly anti-human, and I am confident I am not the only one who has that feeling. Here, for instance, is a quote from one of the commenters on the NYT article on E/Acc:

“And there you have it. A bunch of young, rich, single, childless techbros decide they’re going to accelerate the end of humanity… for the lulz. If this movement isn’t a sign that frontier AI needs adult supervision in the form of intelligent regulation, I’m not sure what is.”

This comment pretty much encapsulates what most regular folks think about E/Acc. With its abrasiveness and adversarial attitude towards EA, the movement positions itself as anti-human (which I am sure most of its members are not), because it does not seem to take seriously a perceived existential risk from superintelligence. I think that if they engaged in more civil discourse about why that risk is not a problem, people would not be writing 170 mostly negative comments on that popular NYT thread.

E/Acc has a memetic problem

The single greatest tool that AI doomerism has at its disposal is Harry Potter and the Methods of Rationality (HPMOR), written by Yudkowsky himself. For the uninitiated, this book acts as a gateway drug to the world of LessWrong (LW) and its particular brand of rationality. Readers, often bright and intellectually curious, are lured in with the promise of honing their critical thinking skills, only to find themselves immersed in a community obsessed with existential risk and the looming threat of AI. This pipeline, from HPMOR to LW to AI doomerism, has proven remarkably effective.

The issue isn’t just the ideas themselves, but how they’re packaged and disseminated. LW, with its unique lexicon and in-group jargon, creates an environment that feels exclusive, intellectual, and important. This lends the community an aura of seriousness that regular people respond to, since LW seems genuinely concerned about whether or not humanity survives this century.
On the other end of the spectrum, E/Acc has instead become appealing to those who can afford to be taken less seriously: the perpetually online, the terminally anonymous. While EA/LW boasts real names, real profiles, and real-world influence, most E/Acc proponents remain shrouded in anonymity, their arguments confined to the echo chambers of Twitter and Discord. This lack of a public face, of individuals willing to stake their reputations on the future of AI, makes E/Acc appear unserious, their arguments easily dismissed as the ramblings of internet fringe groups.

I am not advocating for E/Acc to become a mirror image of EA/LW. However, to gain real traction, the movement needs figureheads. It needs individuals with the credibility and courage to engage in the public sphere, to translate complex ideas for a wider audience, and to advocate for a future where AI is not feared but embraced. Beff Jesos, unfortunately doxxed, might have been better served revealing his identity from the start. His accomplishments and intellect could have lent much-needed weight to the movement.

How not to Lose

There are a few things that one can do to promote acceleration and help e/acc win:

The first piece of advice I can give is to join Big Tech and promote acceleration from the inside. Yacine, one of the most vocal OG e/accs, did it, and in one of his X Spaces he recommended that would-be accelerationists start there. I am also planning to write an article on this in the future, mostly on how Big Tech will swing whichever way the profit margins point, so it is anyone’s game whether those companies end up supporting stricter regulations or more lax policies around AI. If e/acc can successfully infiltrate Big Tech, it stands a chance of winning the battle for the future.

Another path is creating a startup. Beff, Nous Research, and others have done it, and they are now some of the loudest activists online for techno/AGI optimism.

Most importantly, I recommend that e/accs read as much as possible. This is definitely an uphill battle, as most LW/EAs have been reading, writing, debating, and arguing for the last 25 years; there are quite honestly very few arguments you can present them that they haven’t heard at least a dozen times. Someone curated an Awesome E/Acc GitHub repo, and I recommend you check out its books section. There are also plenty of informed and very intelligent people who have been advocating for AI optimism. The first is Brian Chau, through his Alliance for the Future think tank. I also like the AI Panic newsletter, and I follow Marginal Revolution, as I feel Tyler Cowen has grounded takes on AI.

Advocate! This is the most underrated part. I like that a lot of e/acc parties and hackathons are starting to spring up, but those mostly attract the most tech-savvy among us, which creates a divide between the tech bubble and the average person. One of the reasons EA has been so successful so far is that they were able to attract ordinary college students, convert them to their cause, then funnel them into AI safety. Most members of EA are not technical; heck, I doubt most of the loudest advocates (bar some) are. These non-technical individuals learn the necessary jargon to convert their other non-technical friends, creating a network effect that spreads the values of EA.

Lobby! Donate to Brian Chau’s Alliance for the Future and other non-profits advocating for AI proliferation and against regulatory capture. Help amplify the voices of the for-profit organizations working on open source that hold optimistic views on AI and the future.

Price in AGI! I am working from the assumption that if you are e/acc, you believe AGI is inevitable, so I highly encourage you to put your money where your mouth is. If you believe in AGI by 2027, be smart about it and make sure you benefit from it. The converse is also true: a number of AI whistleblowers have been making the rounds after giving up extremely lucrative jobs because they believed the companies they worked for were not investing enough effort in alignment (or superalignment, if you like). Be the opposite, and try to make sure your company [or you] can make the most out of a post-AGI world, depending on your timelines. I am not telling you to be unethical: if you feel the company you work for has questionable ethics, please quit and distance yourself from it.

Being involved in serious research also helps. Very few papers were as e/acc as the Chinchilla scaling laws paper. If you can help produce work like that, please do.
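For readers unfamiliar with why that paper matters here: it fit a simple parametric loss law and argued that, at a fixed compute budget, most frontier models of the time were undertrained on data, meaning more capability was available from the same compute. Here is a minimal sketch of that law in code; the constants are my recollection of the paper’s approximate fitted values and should be treated as illustrative rather than authoritative:

```python
# Rough sketch of the Chinchilla parametric loss law (Hoffmann et al., 2022).
# Constants are the paper's approximate fitted values; illustrative only.

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model with n_params parameters
    trained on n_tokens tokens."""
    E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fitted scale terms
    alpha, beta = 0.34, 0.28       # fitted exponents for parameters and data
    return E + A / n_params**alpha + B / n_tokens**beta

# Rule of thumb from the paper: compute-optimal training uses roughly
# 20 tokens per parameter, i.e. far more data than earlier practice assumed.
params = 70e9            # a Chinchilla-sized model
tokens = 20 * params     # ~1.4T tokens
print(f"predicted loss: {chinchilla_loss(params, tokens):.3f}")
```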

I believe the arrival of AGI will be one of the most important events of this century, and I honestly want to ensure that AI optimism gains the upper hand.

This post is licensed under CC BY 4.0 by the author.