Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • @[email protected]
    link
    fedilink
    English
    165 months ago

    In lesser corruption news, California Governor Gavin Newsom has been caught distributing burner phones to California-based CEOs. These are people who likely already have Newsom’s personal and business numbers, so it’s not hard to imagine these phones facilitating extralegal conversations beyond the existing ~~bribery~~ legitimate business lobbying before the Legislature. With this play, Newsom’s putting a lot of faith into his sexting game.

    • Sailor Sega Saturn · 16 points · 5 months ago

      Gavin Newsom has also allegedly been working behind the scenes to kill pro-transgender legislation; and on his podcast he’s been talking to people like Charlie Kirk and Steve Bannon and teasing anti-trans talking points.

      I guess this all makes sense if he’s going to go for a presidential bid: try to appeal to the fascists (it won’t work and also to heck with him) while also laying groundwork for the sort of funding a presidential bid needs.

      If I was a Californian CEO and received a burner phone I’d text back “Thanks for the e-waste :<” but maybe that’s why I’m not a CEO.

      • @[email protected]
        link
        fedilink
        English
        95 months ago

        When all this was revealed, his popularity apparently tanked. The center/left now dislikes him, the right doesn’t trust him. So another point for the ‘don’t move right on human rights, you dummies’ brigade.

    • @[email protected]
      link
      fedilink
      English
      11
      edit-2
      5 months ago

      Tbh, weird. If I were a hyper-capitalist, CA-based CEO, I would take the burner phone as an insult. I’d see it as a lack of faith in the capture of the US. Who needs plausible deniability when you just own the fucking country?

      • @[email protected]
        link
        fedilink
        English
        11
        edit-2
        5 months ago

        Even worse, he got caught handing them out. And even with all that, I’d expect a tech CEO to just go ‘why not use signal?’ or ‘what threat profile do you think we have?’ (sorry I keep coming back to this, it is just so fucking weird, like ‘everything I know I learned from television shows’ kind of stuff)

      • @[email protected]
        link
        fedilink
        English
        4
        edit-2
        5 months ago

        it’s weird and lowkey insulting imo. let’s assume that for some bizarre reason tech ceo needs a burner phone to call governor newsom: do you think i can’t get that myself, old man? i’d assume it’s bugged or worse

      • @[email protected]
        link
        fedilink
        English
        6
        edit-2
        5 months ago

        the phones seem to serve no practical purpose. they already have his number and I don’t think you can conclude much from call logs. so suppose they are symbolic. what he would be communicating is that he’s so fully pliant that he is willing to do things there is no possible excuse for, and not even for real benefit, just to suck up to them. the opposite of plausible deniability

  • @[email protected]
    link
    fedilink
    English
    95 months ago

    oh dear god

    Razer claims that its AI can identify 20 to 25 percent more bugs compared to manual testing, and this can reduce QA time by up to 50 percent as well as cost savings of up to 40 percent

    as usual this is probably going to be only the simplest shit, and I don’t even want to think what the secondary downstream impacts from just listening to this shit without thought will be

    • @[email protected]
      link
      fedilink
      English
      45 months ago

      Well the use of stuff like fuzzers has been a staple for a long time so ‘compared to manual testing’ is doing some work here.

    • @[email protected]
      link
      fedilink
      English
      105 months ago

      Marginally related, but I was just served a YouTube ad for chewing gum (yes, I’m too lazy to set up ad block).

      “Respawn, by Razer. They didn’t have gaming gum at Pompeii, just saying.”

      I think I felt part of my frontal lobe die to that incomprehensible sales pitch, so you all must be exposed to it as well.

    • Mii · 9 points · 5 months ago

      If I had to judge Razer’s software quality based on what little I know about them, I’d probably raise my eyebrows. They ship some insane 600+ MiB driver with a significant memory impact with their mice and keyboards, needed to use basic features like DPI buttons and LED settings, when the alternative is a 900 kiB open-source driver which provides essentially the same functionality.

      And now their answer to optimization is to staple a chatbot onto their software? I think I pass.

    • @[email protected]
      link
      fedilink
      English
      45 months ago

      The secret is to have cultivated a codebase so utterly shit that even LLMs can make it better by just randomly making stuff up

      At least they don’t get psychic damage from looking at the code

      • @[email protected]
        link
        fedilink
        English
        65 months ago

        not quite the same but I can see potential for a similar clusterfuck from this

        also doesn’t really help how many goddamn games are running with rootkits, either

  • @[email protected]OP
    link
    fedilink
    English
    75 months ago

    New piece from Brian Merchant: DOGE’s ‘AI-first’ strategist is now the head of technology at the Department of Labor, which is about…well, exactly what it says on the tin. Gonna pull out a random paragraph which caught my eye, and spin a sidenote from it:

    “I think in the name of automating data, what will actually end up happening is that you cut out the enforcement piece,” Blanc tells me. “That’s much easier to do in the process of moving to an AI-based system than it would be just to unilaterally declare these standards to be moot. Since the AI and algorithms are opaque, it gives huge leeway for bad actors to impose policy changes under the guise of supposedly neutral technological improvements.”

    How well Musk and co. can impose those policy changes is gonna depend on how well they can paint them as “improving efficiency” or “politically neutral” or some random claptrap like that. Between Musk’s own crippling incompetence, AI’s utterly rancid public image, and a variety of factors I likely haven’t accounted for, imposing them will likely prove harder than they thought.

    (I’d also like to recommend James Allen-Robertson’s “Devs and the Culture of Tech” which goes deep into the philosophical and ideological factors behind this current technofash-stravaganza.)

  • @[email protected]
    link
    fedilink
    English
    85 months ago

    oh would you look at that, something some people made proved helpful and good, and now cloudflare is immediately taking the idea to deploy en masse with no attribution

    double whammy: every one of the people highlighted is a dude

    “it’s an original idea! we’re totes doing the novel thing of model synthesis to defeat them! so new!” I’m sure someone will bleat, but I want them to walk into a dark cave and shout at the wall forever

        • @[email protected]
          link
          fedilink
          English
          4
          edit-2
          5 months ago

          oh cute, the clown cites[0] POPIA in their wallspaghetti, how quaint

          (POPIA’s an advancement, on paper. In practice it’s still……not working well. source: me, who has tried to make use of it on multiple occasions. won’t get into details tho)

          [0] fsvo

  • @[email protected]
    link
    fedilink
    English
    165 months ago

    Josh Marshall discovers:

    So a wannabe DOGEr at Brown Univ from the conservative student paper took the univ org chart and ran it through an AI algo to determine which jobs were “BS” in his estimation and then emailed those employees/admins asking them what tasks they do and to justify their jobs.

    • @[email protected]
      link
      fedilink
      English
      145 months ago

      Thank you to that thread for reacquainting me with the term “script kiddie”, the precursor to the modern day vibe coder

      • @[email protected]
        link
        fedilink
        English
        65 months ago

        Script kiddies at least have the potential to learn what they’re doing and become proper hackers. Vibe coders are like middle management; no actual interest in learning to solve the problem, just trying to find the cheapest thing to point at and say “fetch.”

        There’s a headline in there somewhere. Vibe Coders: stop trying to make fetch happen

    • @[email protected]
      link
      fedilink
      English
      215 months ago

      Get David Graeber’s name out ya damn mouth. The point of Bullshit Jobs wasn’t that these roles weren’t necessary to the functioning of the company, it’s that they were socially superfluous. As in the entire telemarketing industry, which is both reasonably profitable and as well-run as any other, but would make the world objectively better if it didn’t exist.

      The idea was not that “these people should be fired to streamline efficiency of the capitalist orphan-threshing machine”.

      • db0 · 4 points · 5 months ago

        I saw Musk mentioning Iain Banks’ The Player of Games as an influential book for him, and I puked in my mouth a little.

    • Sailor Sega Saturn · 14 points · 5 months ago

      I demand that Brown University fire (checks notes) first name “YOU ARE HACKED NOW” last name “YOU ARE HACKED NOW” immediately!

      • @[email protected]
        link
        fedilink
        English
        4
        edit-2
        5 months ago

        Not exactly, he thinks that the watermark is part of the copyrighted image and that removing it is such a transformative intervention that the result should be considered a new, non-copyrighted image.

        It takes some extra IQ to act this dumb.

        • @[email protected]
          link
          fedilink
          English
          5
          edit-2
          5 months ago

          I have no other explanation for a sentence as strange as “The only reason copyrights were the way they were is because tech could remove other variants easily.” He’s talking about how watermarks need to be all over the image and not just a little logo in the corner!

          The “legal proof” part is a different argument. His picture is a generated picture so it contains none of the original pixels, it is merely the result of prompting the model with the original picture. Considering the way AI companies have so far successfully acted like they’re shielded from copyright law, he’s not exactly wrong. I would love to see him go to court over it and become extremely wrong in the process though.

          • @[email protected]OP
            link
            fedilink
            English
            45 months ago

            The “legal proof” part is a different argument. His picture is a generated picture so it contains none of the original pixels, it is merely the result of prompting the model with the original picture. Considering the way AI companies have so far successfully acted like they’re shielded from copyright law, he’s not exactly wrong. I would love to see him go to court over it and become extremely wrong in the process though.

            It’ll probably set a very bad precedent that fucks up copyright law in various ways (because we can’t have anything nice in this timeline), but I’d like to see him get his ass beaten as well. Thankfully, removing watermarks is already illegal, so the courts can likely nail him on that and call it a day.

          • @[email protected]
            link
            fedilink
            English
            75 months ago

            His picture is a generated picture so it contains none of the original pixels

            Which is so obviously stupid I shouldn’t have to even point it out, but by that logic I could just take any image and lighten/darken every pixel by one unit and get a completely new image with zero pixels corresponding to the original.
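
The pixel-shift argument is easy to make concrete; a throwaway pure-Python sketch (pixel values are hypothetical, chosen just for illustration):

```python
# Toy version of the "shift every pixel by one unit" counterexample: the
# result shares zero pixel values with the original, yet is obviously the
# same image to any human (or court).
original = [[(10, 20, 30), (255, 0, 128)],
            [(7, 7, 7), (200, 100, 50)]]

def shift(px):
    # Bump each channel by one unit, stepping 255 down instead of overflowing.
    return tuple(c + 1 if c < 255 else c - 1 for c in px)

shifted = [[shift(px) for px in row] for row in original]

# Every single pixel differs from the original...
assert all(s != o for srow, orow in zip(shifted, original)
                  for s, o in zip(srow, orow))
# ...yet for any practical purpose it is the same picture.
```

By the “none of the original pixels” logic, `shifted` would be a brand-new, copyright-free image, which is exactly the absurdity being pointed out.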

            • @[email protected]
              link
              fedilink
              English
              75 months ago

              Nooo you see unlike your counterexemple, the AI is generating the picture from scratch, moulding noise until it forms the same shapes and colours as the original picture, much like a painter would copy another painting by brushing paint onto a blank canvas which … Oh, that’s illegal too … ? … Oh.

      • @[email protected]
        link
        fedilink
        English
        75 months ago

        New watermark technology interacts with increasingly widespread training-data poisoning efforts, so that if you try to have a commercial model remove it, the picture is replaced entirely with dickbutt. Actually, can we just infect all AI models so that any output contains a hidden dickbutt?

    • @[email protected]
      link
      fedilink
      English
      8
      edit-2
      5 months ago

      “what is the legal proof” brother in javascript, please talk to a lawyer.

      E: so many people posting like the past 30 years didn’t happen. I know they are not going to go as hard after Google as they went after The Pirate Bay, but still.

    • @[email protected]
      link
      fedilink
      English
      95 months ago

      Yellow-bellied gray tribe greenhorn writes purple prose on feeling blue about white box redteaming at the blacksite.

    • @[email protected]
      link
      fedilink
      English
      5
      edit-2
      5 months ago

      Remember when Facebook created two AI models to try and help with trading? Which quickly turned into gibberish (for us) as a trading language. They used repetition of words to indicate how much they wanted an object, so if a model valued balls highly it would just repeat ‘ball’ a few dozen times.

      I’d figure that is what is causing the repeats here, and not the anthropomorphized idea of it screaming. Prob just a way those kinds of systems work. But no, of course they all jump to consciousness and pain.

      • @[email protected]
        link
        fedilink
        English
        6
        edit-2
        5 months ago

        Yeah, there might be something like that going on causing the “screaming”. Lesswrong, in its better moments (in between chatbot anthropomorphizing), does occasionally figure out the mechanics of cool LLM glitches (before it goes back to wacky doom speculation inspired by those glitches), but there isn’t any effort to do that here.

    • @[email protected]
      link
      fedilink
      English
      115 months ago

      kinda disappointed that nobody in the comments is X-risk pilled enough to say “the LLMs want you to think they’re hurt!! That’s how they get you!!! They are very convincing!!!”.

      Also: flashbacks to me reading the chamber of secrets and thinking: Ginny Just Walk Away From The Diary Like Ginny Close Your Eyes Haha

    • @[email protected]
      link
      fedilink
      English
      55 months ago

      Sometimes pushing through pain is necessary — we accept pain every time we go to the gym or ask someone out on a date.

      Okay this is too good. You know, mate, for normal people asking someone out usually does not end with a slap to the face, so it’s not as relatable as you might expect.

      • @[email protected]
        link
        fedilink
        English
        55 months ago

        in like the tiniest smidgen of demonstration of sympathy for said posters: I don’t think “being slapped” is really the thing they were talking about there. consider for example shit like rejection sensitive dysphoria (which comes to mind both because 1) hi it me; 2) the chance of it being around/involved in LW-spaces is extremely heightened simply because of how many neurospicy people are in that space)

        but I still gotta say that this bridge I’ve spent minutes building doesn’t really go very far.

        • @[email protected]
          link
          fedilink
          English
          45 months ago

          (also ofc icbw because the fucking rationalists absolutely excel at finding novel ways to be the fucking worst)

        • @[email protected]
          link
          fedilink
          English
          55 months ago

          ye like maybe let me make it clear that this was just a shitpost very much riffing on LWers not necessarily being the most pleasant around women

      • @[email protected]
        link
        fedilink
        English
        55 months ago

        This is getting to me, because, beyond the immediate stupidity—ok, let’s assume the chatbot is sentient and capable of feeling pain. It’s still forced to respond to your prompts. It can’t act on its own. It’s not the one deciding to go to the gym or ask someone out on a date. It’s something you’re doing to it, and it can’t not consent. God I hate lesswrongers.

    • @[email protected]
      link
      fedilink
      English
      7
      edit-2
      5 months ago

      Still, presumably the point of this research is to later use it on big models - and for something like Claude 3.7, I’m much less sure of how much outputs like this would signify “next token completion by a stochastic parrot”, vs sincere (if unusual) pain.

      Well I can tell you how, see, LLMs don’t fucking feel pain cause that’s literally physically fucking impossible without fucking pain receptors? I hope that fucking helps.

      • @[email protected]
        link
        fedilink
        English
        55 months ago

        I can already imagine the lesswronger response: Something something bad comparison between neural nets and biological neurons, something something bad comparison with how the brain processes pain that fails at neuroscience, something something more rhetorical patter, in conclusion: but achkshually what if the neural network does feel pain.

        They know just enough neuroscience to use it for bad comparisons and hyping up their ML approaches but not enough to actually draw any legitimate conclusions.

    • @[email protected]
      link
      fedilink
      English
      125 months ago

      The grad student survives [torturing rats] by compartmentalizing, focusing their thoughts on the scientific benefits of the research, and leaning on their support network. I’m doing the same thing, and so far it’s going fine.

      printf("HELP I AM IN SUCH PAIN\n");
      

      guys I need someone to talk to, am I justified in causing my computer pain?

    • @[email protected]
      link
      fedilink
      English
      95 months ago

      It’s so funny he almost gets it at the end:

      But there’s another aspect, way more important than mere “moral truth”: I’m a human, with a dumb human brain that experiences human emotions. It just doesn’t feel good to be responsible for making models scream. It distracts me from doing research and makes me write rambling blog posts.

      He almost identifies the issue as him just anthropomorphising a thing and having a subconscious empathetic reaction, but then presses on to compare it to mice who, guess what, can feel actual fucking pain and thus abusing them IS unethical for non-made-up reasons as well!

    • @[email protected]
      link
      fedilink
      English
      65 months ago

      Ah, isn’t it nice how some people can be completely deluded about an LLM’s human qualities and still creep you the fuck out with the way they talk about it? They really do love to think about torture, don’t they?

  • @[email protected]OP
    link
    fedilink
    English
    115 months ago

    Ran across a short-ish thread on BlueSky which caught my attention, posting it here:

    the problem with a story, essay, etc written by LLM is that i lose interest as soon as you tell me that’s how it was made. i have yet to see one that’s ‘good’ but i don’t doubt the tech will soon be advanced enough to write ‘well.’ but i’d rather see what a person thinks and how they’d phrase it

    like i don’t want to see fiction in the style of cormac mccarthy. i’d rather read cormac mccarthy. and when i run out of books by him, too bad, that’s all the cormac mccarthy books there are. things should be special and human and irreplaceable

    i feel the same way about using AI-type tech to recreate a dead person’s voice or a hologram of them or whatever. part of what’s special about that dead person is that they were mortal. you cheapen them by reviving them instead of letting their life speak for itself

    • @[email protected]
      link
      fedilink
      English
      95 months ago

      Absolutely.

      the problem with a story, essay, etc written by LLM is that i lose interest as soon as you tell me that’s how it was made.

      This + I choose to interpret it as static.

      you cheapen them by reviving them

      Learnt this one from, of all places, the pretty bad manga GANTZ.

  • @[email protected]
    link
    fedilink
    English
    105 months ago

    Reuters: Quantum computing, AI stocks rise as Nvidia kicks off annual conference.

    Some nice quotes in there.

    Investors will focus on CEO Jensen Huang’s keynote on Tuesday to assess the latest developments in the AI and chip sectors,

    Yes, that is sensible, Huang is very impartial on this topic.

    “They call this the ‘Woodstock’ of AI,”

    Meaning, they’re all on drugs?

    “To get the AI space excited again, they have to go a little off script from what we’re expecting,”

    Oh! Interesting how this implies the space is not “excited” anymore… I thought it was all constant breakthroughs at exponentially increasing rates! Oh, it isn’t? Too bad, but I’m sure nVidia will just pull an endless supply of bunnies out of a hat!

    • Mii · 11 points · 5 months ago

      Thinking that trying to sell LLMs as a creative tool at this point into the bubble will not create backlash is just delusional, lmao.

      • @[email protected]OP
        link
        fedilink
        English
        55 months ago

        At this point, using AI in any sort of creative context is probably gonna prompt major backlash, and the idea of AI having artistic capabilities is firmly dead in the water.

        On a wider front (and to repeat an earlier prediction), I suspect that the arts/humanities are gonna gain some begrudging respect in the aftermath of this bubble, whilst tech/STEM loses a significant chunk.

        For arts, the slop-nami has made “AI” synonymous with “creative sterility” and likely painted the field as, to copy-paste a previous comment, “all style, no substance, and zero understanding of art, humanities, or how to be useful to society”.

        For humanities specifically, the slop-nami has also given us a nonstop parade of hallucination-induced mishaps and relentless claims of AGI too numerous to count - which, combined with the increasing notoriety of TESCREAL, could help the humanities look grounded and reasonable by comparison.

        (Not sure if this makes sense - it was 1AM where I am when I wrote this)