
Why Marc Andreessen Won’t Save the World

I enjoyed reading Marc Andreessen’s “Why AI Will Save the World” last week. I don’t pretend to be an AI expert, and my thinking has oscillated frequently as I learn more, though I will admit to being on the “concerned” side of the skepticism-optimism spectrum. While Andreessen’s piece is an eloquent, thoughtful cri de coeur for the benefits of AI, and he likely knows more about the specific work being done in the space than I ever will, I believe he commits a few logical fallacies:

  1. Disregarding the potential harm/risk of destruction because it is unclear how it arises. We’re dealing with an unprecedented and opaque form of intelligence. Can it create killer robots out of thin air? No. But might it ignore great harms? Sure. The “flash crash” of 2010 had real consequences from (essentially) algorithms triggering each other to act, and the risks of multiple AIs influencing each other at extremely high speed would be exponentially greater, with further-reaching consequences (a toy simulation of this kind of cascade follows this list). Even viewed purely through the lens of innovation, Andreessen’s apparent professional raison d’être, suppose that innovators become unwilling or unable to monetize new business ideas because AI scrapes those ideas even as they germinate, without traceability back to the source(s). AI might, in fact, quash more innovation than it supports.
  2. Downplaying the harms of misinformation and perceived certainty. Short of global destruction, Andreessen slips from people’s concerns about AI-supported misinformation and disinformation to a critique of social pressures and limits on speech. It’s not hard to imagine people turning first to AI in areas outside their expertise and using its answers to question the conclusions of genuine experts, leading to a world of even more atomized “fact” bases. After all, the AI is smarter than any one of us, so why should I listen to an individual who’s read a paltry thousand books on a subject when I can consult an AI advisor that has scraped 100x as much data? We have a hard enough time maintaining the epistemological humility that allows for good-faith debate and interaction as it is: if your AI says one thing, mine says another, and neither of us can identify the source assumptions, we’re simply at an impasse. That’s harmful for democracy and the social fabric.
  3. Minimizing externalities/unanticipated consequences. Andreessen presents the enticing vision of a future in which “Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.” But it’s not a hard leap from that potential to a scenario in which parents (through self-interest or the demands of their own work and ambitions) abdicate increasing amounts of child care to the AI, since “it’s more patient and knowledgeable than I can ever be anyway.” We know that children need touch and human interaction to thrive, and part of a child’s social education is the recognition that others are NOT infinitely patient and infinitely helpful. Will AI-raised children be self-centered and socially inept? No one can say. But that they might be is a fair concern. Perhaps closer to home, many people would question whether social media’s net impact on society has been positive or negative. If something as simple as “helping people connect and share pictures” could turn out to have destructive consequences for children’s mental health and even democracy, we should recognize that there is a vast array of “unknown unknowns” we’re approaching with AI.
  4. Assuming low transaction costs. If bad actors could easily and instantaneously spoof logins, so that we all needed retinal scanners to log into our computers, those scanners would push up the cost of the computer without adding any actual value to the work the computer does. There is a risk that the cost of securing and validating everything diverts resources from genuinely productive activities into an arms race.
  5. Disregarding the harms because of the messenger(s). Andreessen accuses others of having an interest in regulating AI because of their own professional positions. As a heavy investor in AI companies, Andreessen is subject to the same critique.
  6. Denying the reality of firm rent-seeking behavior. This one surprised me most; Tesla may have chosen to move down-market in order to profit-maximize, but AI owners could equally pursue the Bloomberg terminal model, providing premium services to deep-pocketed purchasers that operate at a speed and scale ordinary consumers or small players cannot match. For markets with a winner-take-all dynamic, this is not an unlikely business model. Additionally, there are real questions of compensation: if everyone who’s ever written something on the internet is inadvertently feeding an LLM, they are being underpaid for the societal value they create.
  7. Rejecting new rules simply because they’re hard or because other rules are already on the books. Yes, most criminal activities have already been regulated. At the same time, AI and platforms introduce new questions that have not yet been litigated or clarified. Questions of negligence, recklessness, liability, and responsibility for the actions of AI need to be carefully considered as AI gains prevalence. We are all at risk of harm from careless actors, as much as or more than from bad actors, and currently may have no recourse for the damages caused by AI hallucinations. Andreessen’s proposed solution is for AI to police/identify AI. That is a valid, and likely essential, step. But it doesn’t negate the opportunity for well-designed laws and policies, specifically crafted around AI, to improve overall surplus, promote smarter risk-taking, and reduce the tragedy-of-the-commons harms that AI might cause. I’m sure there are some people who just want to “ban AI,” but setting a total ban as the only alternative to “full speed ahead” seems a bit of a straw man, and less interesting than thoughtful engagement with the “tap the brakes” or “pause” arguments that knowledgeable people in the industry have advanced.
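On the feedback-loop risk in point 1: here is a minimal sketch of how a flash-crash-style cascade can emerge from interaction alone. It is a toy simulation I’ve invented for illustration, not a model of any real market or of Andreessen’s argument; every parameter (five agents, a 1% trigger, a fixed price-impact factor) is an arbitrary assumption.

```python
import random

def simulate(n_agents=5, steps=200, threshold=0.01, impact=0.005, seed=7):
    """Toy market: momentum-following agents whose only input is the
    price move the other agents just caused. All numbers are invented."""
    random.seed(seed)
    history = [100.0]
    for _ in range(steps):
        # Return produced by the previous round of trading (0 at the start).
        last_return = (history[-1] / history[-2] - 1) if len(history) > 1 else 0.0
        net_orders = 0
        for _ in range(n_agents):
            if last_return < -threshold:    # sell into a falling market
                net_orders -= 1
            elif last_return > threshold:   # buy into a rising market
                net_orders += 1
            else:                           # otherwise, background noise
                net_orders += random.choice([-1, 0, 1])
        # Price impact proportional to net order flow.
        history.append(history[-1] * (1 + impact * net_orders))
    return history

prices = simulate()
print(f"start: {prices[0]:.2f}  end: {prices[-1]:.2f}  min: {min(prices):.2f}")
```

Once noise happens to push the price past the agents’ trigger, every agent reacts to every other agent’s reaction and the move locks in, up or down, with no malicious actor anywhere. The numbers are arbitrary; the structural point is that interacting automated agents can cascade far faster than humans can intervene, and AIs operating at greater speed and scope would widen that gap.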

You may notice that I don’t disagree with Andreessen’s jobs optimism. At the societal level, I believe we can and should do a better job of supporting those whose jobs may be disrupted by AI, but I do think there will always be work to be done. Then again, I could be wrong. The future is uncertain, and we all have our own biases and agendas. I don’t doubt Andreessen’s intelligence or his sincere belief in the benefits of AI, but when we decide that those who disagree with us are all simpletons or malefactors (Baptists or Bootleggers, in Andreessen’s phrasing), we should approach certainty with caution and the utmost humility. There are in-between options, and navigating the post-AI world is a high-risk, high-reward optimization exercise that merits careful thought; to paraphrase Ben Horowitz, it’s not clear to me that AI is, in fact, a statistics problem and not a calculus problem. Regulation is a more likely (and probably more beneficial) outcome than prohibition, and Andreessen could add important insight and nuance to the discussion of what regulations are appropriate in the space and how to apply them effectively.


