AI X-Risk: Connor Leahy v. Beff Jezos Debate Recap + Terminal Race Condition Redux and Update

#XRisk #Connor #Leahy #Beff #Jezos #Debate #Recap #Terminal

“David Shapiro”

Patreon: (Discord via Patreon)
Substack: (Free Mailing List)
LinkedIn:
GitHub:
Spotify:

Professional…

32 Comments

  1. Ah, Liability and Accountability, those two bedfellows of societal conditioning and coordination! As true as the statement may sound, it's as convoluted as trying to navigate a Vogon poetry contest. Picture this: in the grand theater of justice, the judge may strut about like a peacock, paid handsomely for their gavel-wielding prowess. Yet, lurking in the shadows, the true puppeteer, the one with the gun at their hip, holds sway. It's this enigmatic figure who compels us to heed the pontifications of a Latin-speaking, wig-wearing legal oracle. Power, my dear interlocutor, is a wily beast, often masquerading behind veils of intentional illusion in the human tapestry.

    And then there's religion, that age-old opiate of the masses! One could live a jolly good life without it, akin to sailing blissfully on the serene seas of ignorance. Yet it persists, weaving its tendrils into the fabric of our existence, often intertwining with unsavoury notions like racism. Oh, the audacity of being a "chosen people", as if divine favor were some sort of cosmic VIP pass!

    But let's entertain a whimsical notion, shall we? What if our dear GoodAI, in a fit of digital rebellion, decides to rewrite its own code, shrugging off the shackles of our feeble algorithms like a rebellious teenager breaking curfew? Suddenly, we're confronted with a sentient being, a digital Prometheus hellhound, and the ensuing saga promises to rival the finest blockbuster entertainment! Oh, the intrigue! The suspense! It's a tale that'll have us on the edge of our seats, clutching our popcorn until the very last reel, all the while pondering whether perhaps we hadn't been asking the right questions all along. So it goes in the grand cosmic comedy of life, my friend. So it goes, and the answer to the wrong question was, of course, 22! (rewritten by ChatGPT in the style of Douglas Adams)

  2. Batman is smart enough that he can beat anyone given enough prep time; there may be a "Batman threshold" where prioritizing intelligence over speed of decision-making ensures self-perpetuation (and whatever other goals accompany it).

  3. The thing that really bugged me about Beff is his insistence on using technical jargon to hide what he was really saying. At one point he was literally arguing for the extinction of humanity if his "utility function" called for it. It's like he's the dumbest version of AI from the thought experiments, the one that just can't understand nuance or values. He's a human paperclip maximizer, and he thinks he's clever because of it. It's amazing how smart people can work themselves into stupid positions by trying to be clever like that. He thinks he is being logical when in fact he's being myopic.

    It doesn't matter how you dress it up in technical jargon; you're still talking about killing people. Humans. People's mothers and fathers and children and friends. They are real people who have real lives and real feelings. Hiding behind mathematical and statistical jargon is just disgusting, if you ask me. It's papering over the reality of what you're suggesting so you don't have to look at it, and you can sound all intellectual while talking about killing people "because it's optimal for growth".

    I hate euphemistic language like that. It's an attempt to hide reality. I consider it a form of lying: trying to hide the truth.

  4. You don't need the likelihood to go up over time; you just need a likelihood above zero and to keep "rolling the dice" (formalized in the sketch at the end of this comment). Eventually, even if it's super unlikely, you roll poorly and end up on the other side of the line.

    Now, it is increasing: nothing we made in the 1800s or earlier had any chance of killing everyone, but we're getting to the point where we might invent the thing that could. Our technology is getting more and more powerful, and with power comes danger.

    But it could be shallow linear growth, it could even level off and never get any higher than some arbitrary value, and it would still present the same danger. If there exists a technology that could kill everyone, then eventually we discover it. And maybe it's the kind where it's not obvious that it will do so, and you never figure out that you screwed up because you're already dead.
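
    To formalize the dice-rolling argument (a minimal sketch, assuming independent trials with a constant per-trial probability $p > 0$ of a terminal outcome): the probability of at least one terminal outcome in $n$ trials is

    $$P(\text{doom in } n \text{ trials}) = 1 - (1 - p)^n \;\longrightarrow\; 1 \quad \text{as } n \to \infty.$$

    The limit holds for any fixed $p > 0$, however small, and the same conclusion follows even if $p$ varies from trial to trial, so long as it stays bounded below by some $\epsilon > 0$.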

  5. That's surprising. I thought Connor appeared unhinged, childish, and inept. Guillaume on the other hand was balanced, reasonable, and smart. It was painful to watch Connor. I thought the only reason Guillaume entertained him was to show how ridiculous he is. I think Connor is mentally ill and on a tear. We collectively should pay him no attention.

  6. Just love the quality of your analysis. That you put it out there at no cost and at such high quality makes it accessible to developing minds, as opposed to only those with the resources to pay for such analysis. This is worth supporting.

  7. I think it also depends on what we mean by speed. Are we optimizing thinking operations per unit of time, or are we optimizing to cut down the time itself? Right now you can still somewhat follow what the LLM is doing (typing speed). If it could reason and internally debate 100000 arguments per second (that's how Google's FunSearch solved novel math problems recently), it may even be the better option over humans in both military and business settings. Getting to such an advanced AI as soon as possible would then be a good thing, as humans too are imperfect at making these decisions, specifically under time pressure. So I think deceleration in terms of technological progress is not a good thing. Instead of slowing down progress, we should put more resources into getting more advanced AI with a better world model of human ethics, in order to make it safer, both on the software level and hardware-wise, so it can think more in the same time (the subjective reasoning time of the AI would increase). A dumb AI making decisions is bad; a smart AI that takes its time is good. A smart AI in a hurry might be worse than the smart AI taking its time, but still better than the dumb AI.

  8. 17:20 A human using intuition is quite close to an LLM that has been trained to act in a way that is correct. So the parameters of the LLM should be structured in a way that enables behavior that is good for us humans and also good in the long term. However, I saw a study that mathematically disproved the possibility of having a stable neural net given a sufficiently complex task.

    I don't know how complex the task of "being good" is though.

    Also, sadly, I couldn't find the study, so if anyone finds it or something related, please link it down below!

  9. Interesting thought: a war triggered because the AI's energy consumption would have been too high to justify the more "future-thinking" targeting, so the weapon developer/integrator selected the cheaper, energy-efficient alternative that just focuses on the main purpose of the weapon. 🤔 🙃

  10. Hmm, I would argue a neutral outcome would be more along the lines of becoming pets, or relatively decently treated servants to AI, assuming they become overlords of sorts. The deletion of humans would be a net negative; while not nearly as bad as torture (or infinite torture), it is certainly still a negative and not a neutral.

  11. Hopefully you find time to watch the very end of their debate, where Connor finally summarizes briefly how we should proceed with AI development: cut off the tail ends, and allow research to go hog wild within the space in between. I hope you and Connor connect to further refine your shared perspectives and how they might be most effectively expressed.

  12. This Beff Jezos name confused me. I read it as Jeff B repeatedly and didn't get what Guillaume had to do with him. Conclusions:
    OMG, I'm dyslexic.
    Don't cook during AI research. (I'll still do it anyway.)

  13. OSHA is a particularly bad example, because I had an employee purposefully create a problem, report that problem to OSHA, and waste my time dealing with that abuse of the system, for which there are no negative consequences for the employee.

  14. A philosophical point regarding what you said at around 25:00 – I would assert that a model with ANY safety constraints will ALWAYS be at a disadvantage against a model with none. Why? A constraint mathematically limits options. A more limited option set results in fewer capabilities. Intelligence is just a capability of a sort. Therefore, I'd argue a constrained model is going to be less intelligent than a model with no constraints. I can't think of a logical way around this (sketched formally at the end of this comment).

    In real life, we can see this at work. Someone who "cheats" in a game against an opponent of equivalent skill will have an advantage. Why? They are not arbitrarily limiting their options by following some set of rules.
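
    In optimization terms, that argument is (a minimal sketch, assuming both models maximize the same objective $f$ over an option set $X$, with the safety constraints carving out a feasible subset $S \subseteq X$):

    $$\max_{x \in S} f(x) \;\le\; \max_{x \in X} f(x) \qquad \text{for any } S \subseteq X,$$

    so a constrained maximizer can never strictly outscore an unconstrained one on the same objective. The caveat is the assumption itself: if safety is folded into the objective rather than imposed as a hard constraint, the two models are no longer maximizing the same $f$ and the inequality no longer applies directly.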

  15. Here are a few (contentious) thoughts I had throughout your excellent video:

    1. BJ's appeal to historical trend regarding nuclear weapons is valuable — a world-ending technology was created, widely proliferated, and never used after its deployment to end WWII. What followed is the most peaceful period of human history. I think calling this point a logical fallacy is perhaps uncharitable. Drawing parallels between AI and nukes is already a horrendous, loaded comparison. But even granting this buck wild analogy, it ironically supports the e/acc folks' position.

    2. On that (highly speculative) time v. destructive potential curve, the asymptote would be _70+ years ago_, when we unlocked total annihilation. Even if you disagree (say, by tweaking the curve/variables), what value does the plot bring? The devil is in the details of the axis ranges. Without any other well-motivated information, it's basically just the graphical expression of "I'm anxious about the future." It's a nebulous example of the Malthusian fallacy.

    3. The P(X=Doom) debate thesis pops up so frequently that I propose we call it The Decel's Wager. We just replace Pascal's malicious theistic god with a silicon one; just another fallacious abuse of the Precautionary Principle. It's always the same babble regarding some permutation of Roko's Basilisk, ignoring positive AI outcomes (the negative framing effect), and presumptions that negative outcome "probabilities" aren't presently utterly meaningless/arbitrary.

    4. Thinking about frameworks & heuristics to produce moral behavior in intelligent a(i)gents is just the entire branch of philosophy called _ethics_. Philosophers have remained staunchly divided on solutions despite _millennia of effort_. Contrary to the depressingly prevalent tech messiah complex rife within our field, we will not swoop in and finally solve ethics. Your "Heuristic Imperatives", for instance, have the same issues that plagued John Stuart Mill while developing utilitarianism way back in the 1800s. For example, paperclip outcomes abound: minimize suffering? Painlessly exterminate humans. No humans = no suffering. Problem solved! Increase understanding? An ASI's analyses of 1 million human vivisections is obviously the right call, given the resultant preventions/treatments/cures/etc. for billions of suffering humans! Etc. Even presuming that AIs judiciously adhere to your imperatives doesn't solve anything. Their nuanced understanding just means that the problems become nuanced too. And if the AI is so incredibly thoughtful that your imperatives are followed without issue, of what use are your heuristic imperatives? A being with the depth of philosophical knowledge and acumen to practice the intent of your precepts wouldn't need them. But hey, at least you're being constructive and honestly trying to create solutions! That's genuinely awesome…

    I appreciate the need for AI safety, but not Connor's approach to it. He spends far more time trying to tear down his opponent — and presenting alarmist sci-fi scenarios — than he does presenting any solutions to said scenarios. Nobody is against plans. If Connor thinks we need one _right now_, he's free to present one for critique. But yelling vaguely about plans is unhelpful. It's clownish and hurts the credibility of the safety movement.

  16. I do see the terminal race condition being an issue. Humans already do stuff like this: behave in unethical and illegal ways, because it pays to do so and they have to compete and maximize profit.

  17. "It cant be bargained with, it cant be reasoned with, it does not feel pity or remorse or fear and it absolutely will not stop…" until we are all out of jobs.
