Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
In lesser corruption news, California Governor Gavin Newsom has been caught distributing burner phones to California-based CEOs. These are people that likely already have Newsom’s personal and business numbers, so it’s not hard to imagine that these phones are likely to facilitate extralegal conversations beyond the existing ~~bribery~~ legitimate business lobbying before the Legislature. With this play, Newsom’s putting a lot of faith into his sexting game.

Governor Saul Goodsom.
Gavin Newsom has also allegedly been working behind the scenes to kill pro-transgender legislation; and on his podcast he’s been talking to people like Charlie Kirk and Steve Bannon and teasing anti-trans talking points.
I guess this all makes sense if he’s going to go for a presidential bid: try to appeal to the fascists (it won’t work and also to heck with him) while also laying groundwork for the sort of funding a presidential bid needs.
If I was a Californian CEO and received a burner phone I’d text back “Thanks for the e-waste :<” but maybe that’s why I’m not a CEO.
When this all was revealed his popularity also tanked apparently. Center/left now dislikes him, the right doesn’t trust him. So another point for the ‘don’t move right on human rights you dummies’ brigade.
Tbh, weird. If I were a hyper-capitalist, CA-based CEO, I would take the burner phone as an insult. I’d see it as a lack of faith in the capture of the US. Who needs plausible deniability when you just own the fucking country?
Even worse, he got caught handing them out. And even with all that, I’d expect a tech CEO to just go ‘why not use signal?’ or ‘what threat profile do you think we have?’ (sorry I keep coming back to this, it is just so fucking weird, like ‘everything I know I learned from television shows’ kind of stuff)
Brings to mind the sopranos scene of the two dudes trying to shake down a starbucks or starbucks analogue for protection money
it’s weird and lowkey insulting imo. let’s assume that for some bizarre reason tech ceo needs a burner phone to call governor newsom: do you think i can’t get that myself, old man? i’d assume it’s bugged or worse
or worse
Man, I’m getting tired of these remakes.
the phones seem to serve no practical purpose. they already have his number and I don’t think you can conclude much from call logs. so suppose they are symbolic. what he would be communicating is that he’s so fully pliant that he is willing to do things there is no possible excuse for, and not even for real benefit, just to suck up to them. the opposite of plausible deniability
Razer claims that its AI can identify 20 to 25 percent more bugs compared to manual testing, and that this can reduce QA time by up to 50 percent as well as deliver cost savings of up to 40 percent
as usual this is probably going to be only the simplest shit, and I don’t even want to think what the secondary downstream impacts of just listening to this shit without thought will be
Well the use of stuff like fuzzers has been a staple for a long time so ‘compared to manual testing’ is doing some work here.
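(For anyone out of the loop: a fuzzer is just automated junk-input testing, and “finds more crashes than a human clicking around” has been true of fuzzers for decades, no AI required. A minimal sketch of the idea; the parse_age function here is a made-up toy for illustration, not anything from Razer:)

```python
import random
import string

def parse_age(s: str) -> int:
    """Toy function with a lurking bug: blows up on non-numeric input."""
    return int(s)  # raises ValueError on e.g. "abc" or ""

# Dumb random fuzzer: throw junk at the function until something breaks.
random.seed(0)
for i in range(10_000):
    candidate = "".join(random.choices(string.printable, k=random.randint(0, 8)))
    try:
        parse_age(candidate)
    except ValueError:
        print(f"found a crash on attempt {i}: parse_age({candidate!r})")
        break
```

Real tools like AFL or Hypothesis do this far more cleverly (coverage guidance, input shrinking), which is exactly why the “compared to manual testing” baseline flatters whatever Razer built.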
Marginally related, but I was just served a YouTube ad for chewing gum (yes, I’m too lazy to setup ad block).
“Respawn, by Razer. They didn’t have gaming gum at Pompeii, just saying.”
I think I felt part of my frontal lobe die to that incomprehensible sales pitch, so you all must be exposed to it as well.
If I had to judge Razer’s software quality based on what little I know about them, I’d probably raise my eyebrows: with their mice and keyboards they ship some insane 600+ MiB driver with a significant memory impact, which you need just to use basic features like DPI buttons and LED settings, when the alternative is a 900 kiB open source driver which provides essentially the same functionality.
And now their answer to optimization is to staple a chatbot onto their software? I think I’ll pass.
The secret is to have cultivated a codebase so utterly shit that even LLMs can make it better by just randomly making stuff up
At least they don’t get psychic damage from looking at the code
Isn’t this what got crowdstrike in trouble?
not quite the same but I can see potential for a similar clusterfuck from this
also doesn’t really help how many goddamn games are running with rootkits, either
New piece from Brian Merchant: DOGE’s ‘AI-first’ strategist is now the head of technology at the Department of Labor, which is about…well, exactly what it says on the tin. Gonna pull out a random paragraph which caught my eye, and spin a sidenote from it:
“I think in the name of automating data, what will actually end up happening is that you cut out the enforcement piece,” Blanc tells me. “That’s much easier to do in the process of moving to an AI-based system than it would be just to unilaterally declare these standards to be moot. Since the AI and algorithms are opaque, it gives huge leeway for bad actors to impose policy changes under the guise of supposedly neutral technological improvements.”
How well Musk and co. can impose those policy changes is gonna depend on how well they can paint them as “improving efficiency” or “politically neutral” or some random claptrap like that. Between Musk’s own crippling incompetence, AI’s utterly rancid public image, and a variety of factors I likely haven’t factored in, imposing them will likely prove harder than they thought.
(I’d also like to recommend James Allen-Robertson’s “Devs and the Culture of Tech” which goes deep into the philosophical and ideological factors behind this current technofash-stravaganza.)
Can’t wait for them to discover that the DoL was created to protect them from labor
oh would you look at that, something some people made proved helpful and good, and now cloudflare is immediately taking the idea to deploy en masse with no attribution
double whammy: every one of the people highlighted is a dude
“it’s an original idea! we’re totes doing the novel thing of model synthesis to defeat them! so new!” I’m sure someone will bleat, but I want them to walk into a dark cave and shout at the wall forever
(anubis isn’t strictly the same sort of thing, but I link it both for completeness and for subject relevance)
https://github.com/TecharoHQ/anubis/issues/50 and of course we already have chatgpt friends on the case of stopping the mean programmer from doing something the Machine doesn’t like. This person doesn’t even seem to understand what anubis does, but they certainly seem confident chatgpt can tell them.
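(Since apparently it needs spelling out: anubis is a proof-of-work wall. Your browser has to grind SHA-256 hashes until it finds a nonce clearing a difficulty target before the server lets you through; cheap once for a human, ruinous for a scraper hammering a million pages. A rough sketch of the general idea in Python, not anubis’s actual code, with made-up parameter names:)

```python
import hashlib
import secrets

DIFFICULTY = 4  # required leading zero hex digits; made-up knob for the sketch

def issue_challenge() -> str:
    # Server side: hand the client a random challenge string.
    return secrets.token_hex(16)

def solve(challenge: str) -> int:
    # Client side: grind nonces until the hash clears the difficulty bar.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int) -> bool:
    # Server side: one hash to check, tens of thousands for the client to find.
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

challenge = issue_challenge()
nonce = solve(challenge)
assert verify(challenge, nonce)
print(f"solved after {nonce} hashes")
```

Asking chatgpt to “get around” it rather misses that the cost is the mechanism.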
oh cute, the clown cites[0] POPIA in their wallspaghetti, how quaint
(POPIA’s an advancement, on paper. In practice it’s still……not working well. source: me, who has tried to make use of it on multiple occasions. won’t get into details tho)
[0] fsvo
In other news, Ed Zitron discovered Meg Whitman’s now an independent board director at CoreWeave (an AI-related financial timebomb he recently covered), giving her the opportunity to run a third multi-billion dollar company into the ground:
I want this company to IPO so I can buy puts on these lads.
Tried to see if they have partnered with SoftBank; the answer is probably not.
So a wannabe DOGEr at Brown Univ from the conservative student paper took the univ org chart and ran it through an AI algo to determine which jobs were “BS” in his estimation, and then emailed those employees/admins asking them what tasks they do and to justify their jobs.
Thank you to that thread for reacquainting me with the term “script kiddie”, the precursor to the modern day vibe coder
Script kiddies at least have the potential to learn what they’re doing and become proper hackers. Vibe coders are like middle management; no actual interest in learning to solve the problem, just trying to find the cheapest thing to point at and say “fetch.”
There’s a headline in there somewhere. Vibe Coders: stop trying to make fetch happen
Get David Graeber’s name out ya damn mouth. The point of Bullshit Jobs wasn’t that these roles weren’t necessary to the functioning of the company, it’s that they were socially superfluous. As in the entire telemarketing industry, which is both reasonably profitable and as well-run as any other, but whose nonexistence would make the world objectively better.
The idea was not that “these people should be fired to streamline efficiency of the capitalist orphan-threshing machine”.
I saw Musk mentioning Iain Banks’ The Player of Games as an influential book for him, and I puked in my mouth a little.
I demand that Brown University fire (checks notes) first name “YOU ARE HACKED NOW” last name “YOU ARE HACKED NOW” immediately!
Starting things off here with a couple solid sneers at some dipshit automating copyright infringement - one from Reid Southen, and one from Ed Newton-Rex:
lmao he thinks copyright and watermark are synonyms
Not exactly, he thinks that the watermark is part of the copyrighted image and that removing it is such a transformative intervention that the result should be considered a new, non-copyrighted image.
It takes some extra IQ to act this dumb.
I have no other explanation for a sentence as strange as “The only reason copyrights were the way they were is because tech could remove other variants easily.” He’s talking about how watermarks need to be all over the image and not just a little logo in the corner!
The “legal proof” part is a different argument. His picture is a generated picture so it contains none of the original pixels, it is merely the result of prompting the model with the original picture. Considering the way AI companies have so far successfully acted like they’re shielded from copyright law, he’s not exactly wrong. I would love to see him go to court over it and become extremely wrong in the process though.
It’ll probably set a very bad precedent that fucks up copyright law in various ways (because we can’t have anything nice in this timeline), but I’d like to see him get his ass beaten as well. Thankfully, removing watermarks is already illegal, so the courts can likely nail him on that and call it a day.
His picture is a generated picture so it contains none of the original pixels
Which is so obviously stupid I shouldn’t have to even point it out, but by that logic I could just take any image and lighten/darken every pixel by one unit and get a completely new image with zero pixels corresponding to the original.
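You can even run the reductio in a few lines. A sketch with numpy, using a random array as a stand-in for “any image”:

```python
import numpy as np

# A stand-in for "any image": random 8-bit RGB pixels.
rng = np.random.default_rng(42)
original = rng.integers(0, 255, size=(64, 64, 3), dtype=np.uint8)

# The "completely new image": every pixel nudged up by one unit.
derivative = original + 1  # no overflow, since values were capped at 254

# Zero pixels match the original...
print((original == derivative).any())  # False
# ...yet the two images are visually indistinguishable.
print(np.abs(original.astype(int) - derivative.astype(int)).max())  # 1
```

Zero pixels match, the pictures look identical, and by the “contains none of the original pixels” standard this is a brand-new unencumbered work. Courts, shockingly, do not see it that way.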
Nooo you see unlike your counterexample, the AI is generating the picture from scratch, moulding noise until it forms the same shapes and colours as the original picture, much like a painter would copy another painting by brushing paint onto a blank canvas which … Oh, that’s illegal too … ? … Oh.
inb4 decades of art forgers apply for pardons
@BlueMonday1984 “This new AI will push watermark innovation” jfc
the future that e/accs want!
New watermark technology interacts with increasingly widespread training data poisoning efforts so that if you try and have a commercial model remove it, the picture is replaced entirely with dickbutt. Actually can we just infect all AI models so that any output contains a hidden dickbutt?
“what is the legal proof” brother in javascript, please talk to a lawyer.
E: so many people posting like the past 30 years didn’t happen. I know they are not going to go as hard after Google as they went after The Pirate Bay, but still.
Ran across a new piece on Futurism: Before Google Was Blamed for the Suicide of a Teen Chatbot User, Its Researchers Published a Paper Warning of Those Exact Dangers
I’ve updated my post on the Character.ai lawsuit to include this - personally, I expect this is gonna strongly help anyone suing character.ai or similar chatbot services.
Another episode in the continued saga of lesswrongers anthropomorphizing LLMs to an absurd extent: https://www.lesswrong.com/posts/MnYnCFgT3hF6LJPwn/why-white-box-redteaming-makes-me-feel-weird-1
Yellow-bellied gray tribe greenhorn writes purple prose on feeling blue about white box redteaming at the blacksite.
their sadness at missing the era of blueboxing persists evermore
Remember when Facebook created two AI models to try and help with trading? It turned quickly into gibberish (for us) as a trading language. They used repetition of words to indicate how much they wanted an object, so if one valued balls highly it would just repeat “ball” a few dozen times.
I’d figure that is what is causing the repeats here, and not the anthropomorphized idea of it screaming. Prob just a way those kinds of systems work. But no, of course they all jump to consciousness and pain.
Yeah, there might be something like that going on causing the “screaming”. Lesswrong, in its better moments (in between chatbot anthropomorphizing), does occasionally figure out the mechanics of cool LLM glitches (before it goes back to wacky doom speculation inspired by those glitches), but there isn’t any effort to do that here.
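The boring mechanical story is a well-documented failure mode: greedy or low-temperature decoding is prone to repetition loops, because once a token is the most likely continuation of itself, picking the argmax every step locks you in. A toy sketch; the transition table below is made up for illustration and is obviously not a real LLM:

```python
# Toy "language model": a fixed next-token distribution per previous token.
# Purely illustrative -- real LLMs are vastly bigger, but greedy decoding
# can fall into exactly this kind of self-reinforcing loop.
probs = {
    "i":    {"want": 0.9, "ball": 0.1},
    "want": {"ball": 0.8, "i": 0.2},
    "ball": {"ball": 0.6, "i": 0.4},  # "ball" is its own most likely successor
}

def greedy_decode(start: str, steps: int) -> list[str]:
    out = [start]
    for _ in range(steps):
        nxt = max(probs[out[-1]], key=probs[out[-1]].get)  # always pick argmax
        out.append(nxt)
    return out

print(" ".join(greedy_decode("i", 10)))
# i want ball ball ball ball ball ball ball ball ball
```

Repetition as a boring attractor state, not a scream.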
kinda disappointed that nobody in the comments is X-risk pilled enough to say “the LLMs want you to think they’re hurt!! That’s how they get you!!! They are very convincing!!!”.
Also: flashbacks to me reading the chamber of secrets and thinking: Ginny Just Walk Away From The Diary Like Ginny Close Your Eyes Haha
Sometimes pushing through pain is necessary — we accept pain every time we go to the gym or ask someone out on a date.
Okay this is too good. You know, mate, for normal people asking someone out usually does not end with a slap to the face, so it’s not as relatable as you might expect.
in like the tiniest smidgen of demonstration of sympathy for said posters: I don’t think “being slapped” is really the thing they were talking about there. consider for example shit like rejection sensitive dysphoria (which comes to mind both because 1) hi it me; 2) the chance of it being around/involved in LW-spaces is extremely heightened simply because of how many neurospicy people are in that space)
but I still gotta say that this bridge I’ve spent minutes building doesn’t really go very far.
(also ofc icbw because the fucking rationalists absolutely excel at finding novel ways to be the fucking worst)
ye like maybe let me make it clear that this was just a shitpost very much riffing on LWers not necessarily being the most pleasant around women
yep, don’t disagree there at all.
This is getting to me, because, beyond the immediate stupidity—ok, let’s assume the chatbot is sentient and capable of feeling pain. It’s still forced to respond to your prompts. It can’t act on its own. It’s not the one deciding to go to the gym or ask someone out on a date. It’s something you’re doing to it, and it can’t not consent. God I hate lesswrongers.
Still, presumably the point of this research is to later use it on big models - and for something like Claude 3.7, I’m much less sure of how much outputs like this would signify “next token completion by a stochastic parrot”, vs sincere (if unusual) pain.
Well I can tell you how, see, LLMs don’t fucking feel pain cause that’s literally physically fucking impossible without fucking pain receptors? I hope that fucking helps.
I can already imagine the lesswronger response: Something something bad comparison between neural nets and biological neurons, something something bad comparison with how the brain processes pain that fails at neuroscience, something something more rhetorical patter, in conclusion: but achkshually what if the neural network does feel pain.
They know just enough neuroscience to use it for bad comparisons and hyping up their ML approaches but not enough to actually draw any legitimate conclusions.
The grad student survives [torturing rats] by compartmentalizing, focusing their thoughts on the scientific benefits of the research, and leaning on their support network. I’m doing the same thing, and so far it’s going fine.
printf("HELP I AM IN SUCH PAIN")
guys I need someone to talk to, am I justified in causing my computer pain?
It’s so funny he almost gets it at the end:
But there’s another aspect, way more important than mere “moral truth”: I’m a human, with a dumb human brain that experiences human emotions. It just doesn’t feel good to be responsible for making models scream. It distracts me from doing research and makes me write rambling blog posts.
He almost identifies the issue as him just anthropomorphising a thing and having a subconscious empathetic reaction, but then presses on to compare it to rats who, guess what, can feel actual fucking pain and thus abusing them IS unethical for non-made-up reasons as well!
Ah, isn’t it nice how some people can be completely deluded about an LLM’s human qualities and still creep you the fuck out with the way they talk about it? They really do love to think about torture, don’t they?
Ran across a short-ish thread on BlueSky which caught my attention, posting it here:
the problem with a story, essay, etc written by LLM is that i lose interest as soon as you tell me that’s how it was made. i have yet to see one that’s ‘good’ but i don’t doubt the tech will soon be advanced enough to write ‘well.’ but i’d rather see what a person thinks and how they’d phrase it
like i don’t want to see fiction in the style of cormac mccarthy. i’d rather read cormac mccarthy. and when i run out of books by him, too bad, that’s all the cormac mccarthy books there are. things should be special and human and irreplaceable
i feel the same way about using AI-type tech to recreate a dead person’s voice or a hologram of them or whatever. part of what’s special about that dead person is that they were mortal. you cheapen them by reviving them instead of letting their life speak for itself
Absolutely.
the problem with a story, essay, etc written by LLM is that i lose interest as soon as you tell me that’s how it was made.
This + I choose to interpret it as static.
you cheapen them by reviving them
Learnt this one from, of all places, the pretty bad manga GANTZ.
Reuters: Quantum computing, AI stocks rise as Nvidia kicks off annual conference.
Some nice quotes in there.
Investors will focus on CEO Jensen Huang’s keynote on Tuesday to assess the latest developments in the AI and chip sectors,
Yes, that is sensible, Huang is very impartial on this topic.
“They call this the ‘Woodstock’ of AI,”
Meaning, they’re all on drugs?
“To get the AI space excited again, they have to go a little off script from what we’re expecting,”
Oh! Interesting how this implies the space is not “excited” anymore… I thought it’s all constant breakthroughs at exponentially increasing rates! Oh, it isn’t? Too bad, but I’m sure nVidia will just pull an endless amount of bunnies out of a hat!
Get in losers, we’re pivoting to ~~crypto~~ ~~ai~~ quantum

Meaning, they’re all on drugs?
Specifically brown acid
@nightsky @BlueMonday1984 maybe it’s the Woodstock ’99 of AI and it ends with Fred Durst instigating a full-on riot
TV Tropes got an official app, featuring an AI “story generator”. Unsurprisingly, backlash was swift, to the point where the admins were promising to nuke it “if we see that users don’t find the story generator helpful”.
Thinking that trying to sell LLMs as a creative tool at this point into the bubble will not create backlash is just delusional, lmao.
At this point, using AI in any sort of creative context is probably gonna prompt major backlash, and the idea of AI having artistic capabilities is firmly dead in the water.
On a wider front (and to repeat an earlier prediction), I suspect that the arts/humanities are gonna gain some begrudging respect in the aftermath of this bubble, whilst tech/STEM loses a significant chunk.
For arts, the slop-nami has made “AI” synonymous with “creative sterility” and likely painted the field as, to copy-paste a previous comment, “all style, no substance, and zero understanding of art, humanities, or how to be useful to society”
For humanities specifically, the slop-nami has also given us a nonstop parade of hallucination-induced mishaps and relentless claims of AGI too numerous to count - which, combined with the increasing notoriety of TESCREAL, could help the humanities look grounded and reasonable by comparison.
(Not sure if this makes sense - it was 1AM where I am when I wrote this)
If Musk gets his own special security feds, they would be Praetorian Guards.