Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Also, happy 4th July in advance…I guess.)

  • @[email protected]
    link
    fedilink
    English
    9
    edit-2
    1 day ago

    So, you know Ross Scott, the Stop Killing Games guy?
    About 2 years ago he actually interviewed Yudkowsky. The context being that Ross discussed his article on one of his monthly streams, and expressed skepticism that there was any threat at all from AI. Yudkowsky got wind of his skepticism, and reached out to Ross to do a discussion with him about the topic. He also requested that Ross not do any research on him.
    And here it is…
    https://www.youtube.com/watch?v=hxsAuxswOvM

    I can’t say I actually recommend watching it, because Yudkowsky spends the first 40 minutes of the discussion refusing to answer the question “So what is GPT-4, anyway?” (It’s not exactly that question, but it’s pretty close).
    I don’t know what they discussed afterwards because I stopped watching it after that, but, well, it’s a thing that exists.

    • @[email protected]
      link
      fedilink
      English
      711 hours ago

      Yudkowsky got wind of his skepticism, and reached out to Ross to do a discussion with him about the topic. He also requested that Ross not do any research on him.

      I pinky promise I’m an expert! no you’re not allowed to check my credentials, the fuck?

    • @[email protected]
      link
      fedilink
      English
      7
      edit-2
      16 hours ago

      I think we mocked this one back when it came out on /r/sneerclub, but I can’t find the thread. In general, I recall Yudkowsky went on a mini-podcast tour a few years back, and the general trend was that he didn’t interview that well, even by lesswrong’s own standards. He tended to assume so much background familiarity with his writing that anyone not already steeped in it would be lost, while simultaneously failing to add anything actually new for anyone who was already familiar with it. There were also lots of circular arguments and repetitious back-and-forth with the hosts. I guess that’s the downside of hanging around your own echo chamber blog for decades instead of engaging with wider academia.

    • @[email protected]
      link
      fedilink
      English
      918 hours ago

      The comments are fun. Here’s the pinned comment, from the video’s author:

      I’m not the best at thinking on the fly, so here are two key points I tried to make that got a little lost in the discussion:
      1. I think our entire disagreement rests on Eliezer seeing increasingly refined AI conclusively making the jump to actual intelligence, whereas I do not see that. I only see software that mimics many observable characteristics of intelligence and gets better at it the more it’s refined.
      2. My main point of the stuff about real v. fake + biological v. machine evolution was only to say that just because a process shares some characteristics with another one, other emergent properties aren’t necessarily shared also. In many cases, they aren’t. This strikes me as the case for human intelligence v. machine learning.

      MY CONCLUSION
      By the end, I honestly couldn’t tell if he was making a faith-based argument that increasingly refined AI will lead to true intelligence, despite being unsubstantiated OR if he did substantiate it and I was just too dumb to connect the dots. Maybe some of you can figure it out!

      Here’s my favourite:

      “Ooh Ross making an interview!”
      5 minutes in
      “Ooh Ross is making an interview Neil Breen of AI”.