Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this…)
She Licking my County till I back away is this anything?
How much of this is the AI bubble collapsing vs. Ohiophobia
Would you invest in commercial real estate, knowing there was a non-zero chance your tenants might come in one day to discover a thoroughly intoxicated JD Vance in a compromising position with the break-room furniture?
So many of the responses pointing out how bad this is for the local communities in Licking County (lol), but I feel like this has to be a case where the bezzle is collapsing more than a decision causing new harm, right? The bubble wasn’t sustainable and those jobs were unlikely to manifest past the initial construction, especially since data centers aren’t exactly labor-intensive to run.
That doesn’t mean it doesn’t hurt for those communities, especially in the midst of the economic ruin left in the wake of Hurricane Tariff, but I feel like there’s an important lesson being lost here.
it’s also such a weird position for critique to take, imo; DCs don’t really do much in the way of Local Job Generation, and I’d fucking bet that each of these locations also got selected because of favourable gladhanding credits (tax incentives, power incentives, etc)
I suppose “all the ${whatever business} that got bought out and flattened (for buildout space) is still gone” is maybe one harm, but again… these things don’t get built on high streets
some more wild sneers, this time about MS ripping off Q2 for a time-limited shitty experience
Forced mass-adoption of this stuff by consumers is here, now, demanding our approval, attention, and precious time. A public tech demo exists to impress, and the Copilot Gaming Experience does not. Doom on a calculator, but we had to boil a lake or two to get it and are being told it’s the future of games. I reject this future. Not only do I find it philosophically and ethically repugnant, it also made my tummy hurt.
💯 no notes
Considering PC Gamer bought the AI hype hook, line and sinker a few months ago, I’d say this is a particularly notable sneer.
This is pure gut instinct, but it seems AI bros’ ability to bedazzle the press is fading fast.
This demo is powered by a “World and Human Action Model” (WHAM)
George Michael must be spinning in his grave
Fun fact: the rise of autoplag is now threatening the supply chain as well, as bad actors take advantage of LLM hallucinations to plant malware into people’s programs.
this has been happening for a while, just getting coverage again now. first coverage was months ago. morphed/evolved pretty quickly out of the typosquatting shit
((a lot of people in the) security space absolutely fucking loves “giving names” to things that have been (known to be) happening before, and acting like suddenly they’re the ones who first saw the thing. see this nonsense for another good example of that happening)
New piece from 404 Media: Facebook Pushes Its Llama 4 AI Model to the Right, Wants to Present “Both Sides”
On a related note, Baldur Bjarnason has chimed in noting how he called this exact shit happening:
Remember when I told you that using these LLMs was like giving US tech a bigotry dial for all your writing?
Some dark urge found me skim-reading a recent AI doomer blog post. I was startled awake by this most unsettling passage:
My wife wrote a letter to our infant daughter recently. It concluded:
I don’t know that we can offer you a good world, or even one that will be around for all that much longer. But I hope we can offer you a good childhood. […]
Though the theoretical possibility had always been percolating somewhere in the back of my mind, it wasn’t until now that I viscerally realized that P(doomers reproducing) was greater than zero. And with other doomers no less.
Left brooding on this development, I drudged along until-
BAhahaha what the fuck
I can’t. This is beyond parody. Completely lost it here. Nothing could have prepared me for the poorly handwritten wrist tattoo.
Creating space for miracles
Doom feels really likely to me. […] But who knows, perhaps one of my assumptions is wrong. Perhaps there’s some luck better than humanity deserves. If this happens to be the case, I want to be in a position to make use of it.

Oh how rational! Willing to entertain the idea that maybe, theoretically, the doomsday prediction could be off by a few days?
I’m not sure that I ever strongly felt that I would die at eighty or so. I had a religious youth and believed in an immortal soul. Even when I came out of that, I quickly believed in the potential of radical transhuman life extension.
This guy thought he was getting clean but he was actually replacing weed with heroin
I really convinced myself that “doomsday cult” was hyperbole but uhh, nope, it’s 107% real.

I don’t know that we can offer you a good world, or even one that will be around for all that much longer. But I hope we can offer you a good childhood. […]
When “The world is gonna end soon so let’s just rawdog from now on” gets real
Teach your children to envy the dead
I had a religious youth and believed in an immortal soul. Even when I came out of that, I quickly believed in the potential of radical transhuman life extension.
My dude you’re so, so, sooo close to realising it, you should spontaneously quantum-tunnel into self-awareness any second now
The storm rages within
Yet I hold F to pay respects
At the start they state:
The disappointment of imminent death is all the more crushing because just a few years ago researchers announced breakthrough discoveries that suggested [existing, adult] humans could have healthspans of thousands of years. To drop the analogy, here I’m talking about my transhumanist beliefs. The laws of physics don’t demand that humans slowly decay and die at eighty. It is within our engineering prowess to defeat death, and until recently I thought we might just do that, and I and my loved ones would live for millennia, becoming post-human superbeings.
This is, frankly, bonkers. I’d rate the following in descending order of probability
- worldwide societal collapse due to climate change
- we develop an AI that will kill us all for unspecified reasons
- we establish viable self-sustaining societies outside the limits of Earth
- we develop techniques that allow everyone to live effectively forever
If the first happens, it removes the material requirements for the latter things to happen. This is an extreme form of “denial of the flesh”, the inability to realize that without food or water no-one will be working on AI or life extension tech.
“Im 99% sure I will die in the next year because of super duper intelligence, but in a world where that doesnt happen i plan to live 1000 years” surely is a forecast. Surprised they don’t break their own necks on the whiplash from this take.
yet I hold
space for it
Rupi Kaur should sue
:'( sad one. feel bad for the bebe, being raised by insane people.
Oh that tattoo is regrettable
Doom feels really likely to me. […] But who knows, perhaps one of my assumptions is wrong. Perhaps there’s some luck better than humanity deserves. If this happens to be the case, I want to be in a position to make use of it.
This line actually really annoys me, because they are already set up for moving the end date on their doomsday prediction as needed while still maintaining their overall doomerism.
Also, man why do I click on these links and read the LWers comments. It’s always insufferable people being like, “woe is us, to be cursed with the forbidden knowledge of AI doom, we are all such deep thinkers, the lay person simply could not understand the danger of ai” like bruv it aint that deep, i think i can summarize it as follows:
hits blunt “bruv, imagine if you were a porkrind, you wouldn’t be able to tell why a person is eating a hotdog, ai will be like we are to a porkchop, and to get more hotdogs humans will find a way to turn the sun into a meat casing, this is the principle of intestinal convergence”
Literally saw another comment where one of them accused the other of being a “super intelligence denier” (i.e., heretic) for suggesting maybe we should wait till the robot swarms come over the hills before we declare it’s game over.
tesla: “your car is not your car and we have deep, varied firmware and systems access to it on a permanent basis. we can see you and control you at all times. piss us off and we’ll turn off the car that we own.”
also tesla: “sorry no you can’t return it”
I wonder how often Musk fires employees who explain to him that, no, using tesla cars for distributed computing is a bad idea and we should stop working on this.
LW: “being a younger brother makes you gay, the Catholic hierarchy is full of younger brothers, ergo 80% of the Vatican is gay”
https://www.lesswrong.com/posts/ybwqL9HiXE8XeauPK/how-gay-is-the-vatican
Reminds me of an SMBC comic that had a setup along the same lines, that if male birth order correlates with homosexuality and family size trends being what they are, the past must have been considerably gayer on average.
Obligatory Colm Tóibín: Among the Flutterers
The modern father of this literature is Ray Blanchard
🚨🚨🚨 Do not take Ray Blanchard’s work seriously!
Ray “Asian cartoons cause trans people” Blanchard.
Oh look, it’s a penis! I should put some sort of ring around it and see what gets it slightly erect! Repeatedly! For science!
You know, what I find most hilarious about the “fraternal birth order effect” is that they’re so obsessed with eugenics and biological essentialism that they’re ignoring that the very very obvious social fact of growing up with older brothers might have a lil bit more of an effect than “maternal antibodies to the neuroligin NLGN4Y protein”.
edit: Oh right, they pretend they’re accounting for that! Yeah no, I’ve heard all about your “twin studies” and things, I’m not joining your cult.
Considering how rightwingers have tried to link gayness to pedophilia this is a subject I would avoid if I was them. E: and gwern just goes there.
The comments are … “a hoot”
I would bet pretty hard on option #3. The older the parents are at the time of conception, the lower the quality of their gametes, which can translate into various negative health and cognitive effects on the child.
combines ageism, ableism, and homophobia into one neat package
Also that weird ‘breed early’ fetish common in a lot of rw spaces. And last I looked at it, this whole ‘older people do worse’ thing (it is complicated and a lot of things can happen etc) seems to mostly affect the pregnancy, less so the child, and even then the effects didn’t seem to be big. Not big enough to be relevant here. (But iirc 99.99% of the research on this is only about pregnancies, so wouldn’t put much stock in ‘older parents’ effect on IQ’ style research, due to the type of people interested in that.)
But im not a researcher, just a person who looked at the stats a couple of years back and apart from pregnancy risk i wasnt that worried.
E: and look at that the op there agrees with me.
Because it is nice to have something entertaining for a change:
https://bsky.app/profile/willsmith.fun/post/3lmi2bjrao22t
Wow, that latest chat with Adam Patrick Murray about the Nintendo Switch 2 was quite the ride! The bit on the console’s dock secrets and the MicroSD Express storage had me glued. It’s amazing to see how these tech advancements are sculpting new landscapes.
Speaking of tech wizardry, have you thought about having Christian Perry on the show? As the CEO of Undetectable AI, he’s taken the whole generative AI world by storm, much like the Switch 2 is taking over gaming news! With over 15 million users and standing as a top AI writing tool, Christian’s insights into AI’s hidden workings promise to intrigue your audience, especially when it comes to how his tools seamlessly pass for human writing without tripping any detectors like GPTzero
Undetectable AI, everyone. Astounding.
pedal to the metal on the content and information theft, folks:
seems it’s this lot. despite their name, there appears to be almost nothing artful or artistic about them - it’s all b2b shit for Selling Better
Incorporating into your workflow a company that is a shell around other companies that are selling their products at a loss with no path to profitability seems like quite an unacceptable business risk to me. But I dont get paid the big bux
as long as you can mark it up and as long as the charade lasts, and as long as there’s someone willing to pay, this will make money. when spicy autocomplete provider collapses just pack your bags and leave
@fullsquare @Soyweiser “I’ve sold monorails to Brockway, Ogdenville and North Haverbrook, and by gum, it put them on the map!”
I think when you have integrated all this into your workflows, doing that and going back might be hard, esp on the enterprise level.
oh but that’s not my problem, and those who got in that very stupid position deserve every last bit of it
wait, what do you mean “integrating it into workflows”. this juicero of outsourcing won’t work as advertised and it’s probably cheaper and less prone to fucking up to hire a couple of southeastern asians or eastern europeans. as long as the business is selling these juiceros, they’ll be fine as long as they can find suckers. these suckers, tho, might be in trouble even before openai goes under for unrelated reasons
It has some usage if you don’t care about quality or consistency. See the vibe coders. Firing most of your team because a lot of unimportant stuff can be vibe coded (and your customers are locked in and nobody in management knows what a Trust Thermocline is), and then suddenly openAi drives up prices, causing the secondary company to go poof or also raise prices. And suddenly you are left with a gargantuan mess that you can no longer properly afford. A technical debt accelerant. I mean people are using this shit, even if we know it is shit (and they might also know it but it is forced from above).
Of course if you believe in markets these badly run companies will go under invisible hand etc etc.
For the enterprise using it, yes. For the enterprise selling it, probably not so much.
Bingo.
somebody had to do the design + layout for that banner. i wonder what was going through their head then.
“God I wish I was an AI so i didn’t have to do this”
Wonder if the sinister look was demanded by the clients or the person doing the design.
“I should start the Butlerian Jihad”
:( looked in my old CS dept’s discord, recruitment posts for the “Existential Risk Laboratory” running an intro fellowship for AI Safety.
Looks inside at materials, fkn Bostrom and Kelsey Piper and a whole slew of BS about alignment faking. Ofc the founder is an effective altruist getting a graduate degree in public policy.
that’s CFAR cult jargon right?
Not sure! What is CFAR?
Center For Applied Rationality. They hosted “workshops” where people could learn to be more rational. Except their methods weren’t really tested. And pretty culty. And reaching the correct conclusions (on topics such as AI doom) was treated as proof of rationality.
Edit: still host, present tense. I had misremembered some news of some other rationality adjacent institution as them shutting down, nope, they are still going strong, offering regular 4 day ~~brainwashing sessions~~ workshops.
Mesa-optimization? I’m not sure who in the lesswrong sphere coined it… but yeah, it’s one of their “technical” terms that don’t actually have academic publishing behind it, so jargon.
Instrumental convergence… I think Bostrom coined that one?
The AI alignment forum has a claimed origin here; is anyone on the article here from CFAR?
Mesa-optimization… that must be when you rail some crushed-up Adderall XRs, boof some modafinil for good measure, and spend the night making sure your kitchen table surface is perfectly flat with no defects, abrasions, deviations, contusions…
and you wrap it off with some linux 3d graphics lib hacking
Mesa-optimization
Why use the perfectly fine ‘inner optimizer’ mentioned in the references when you can just ask google translate to give you the clunkiest, most pedestrian and also wrong part of speech Greek term to use in place of ‘in’ instead?
Also natural selection is totally like gradient descent brah, even though evolutionary algorithms actually modeled after natural selection used to be their own subcategory of AI before the term just came to mean lying chatbot.
I’m thinking they hired Jar-Jar Binks to the team.
In the late 2000s, rationalists were squarely in the middle of transhumanism. They were into the Singularity, but also the cryonics and a whole pile of stuff they got from the Extropians. It was very much the thing.
These days they’re most interested in Effective Altruism (loudly - the label at least) and race science (used to be quiet, now a bit louder). I hardly ever hear them even mention transhumanism as it was back then.
Did rationalists abandon transhumanism?
Is it just me? What happened?
Another thread worth pulling is that biotechnology and synthetic biology have turned out to be substantially harder to master than anticipated, and it didn’t seem like it was ever the primary area of expertise for a lot of these people anyway. I don’t have a copy of any of Kurzweil’s books at hand to look at his predicted timelines for that stuff, but they’re surely way off.
Faulty assumptions about the biological equivalence of digital neural network algorithms have done a lot of unexamined heavy lifting in driving the current AI bubble, and keeping the harder stuff on the fringes of the conversation. That said, I don’t doubt that a few refugees from the bubble-burst will attempt to inflate the next bubble on the back of speculative biotech, and I’ve seen a couple of signs of that already.
Yes, there was big hype about the upcoming biotech revolution in popular transhumanist media a ~decade ago. A lot of it seems to have fizzled out or turned into nootropics-like stuff. (And even that is meh.)
As to cryonics… for both LLM doomers and accelerationists, they have no need for a frozen purgatory when the techno-rapture is just a few years around the corner.
As for the rest of the shiny futuristic dreams, they have given way to ugly practical realities:
- no magic nootropics, just Scott telling people to take adderall and other rationalists telling people to microdose on LSD
- no low hanging fruit in terms of gene editing (as epistaxis pointed out over on reddit) so they’re left with eugenics and GeneSmith’s insanity
- no drexler nanotech so they are left hoping (or fearing) the god-AI can figure it out (which is also a problem for ever reviving cryonically frozen people)
- no exocortex, just overpriced google glasses and a hallucinating LLM “assistant”
- no neural jacks (or neural lace or whatever the cyberpunk term for them is), just Elon murdering a bunch of lab animals and trying out (temporary) hope on paralyzed people
The future is here, and it’s subpar compared to the early 2000s fantasies. But hey, you can rip off Ghibli’s style for your shitty fanfic projects, so there are a few upsides.
It’s possible that the most popular fora for discussions of the other topics were drowned out by AI doomerism and the people who are interested in them simply left.
Yeah this kind of stuff is why im not the biggest fan of the tescreal label.
still holds - it’s still a bunch that needs a label and that’s the label
even as TREACLES was right there
(i asked emile, they said it was because TESCREAL is very searchable. i mean fine)
Tired: Sand Hill Road
Wired: Treacle Mine Road
One of the most popular and controversial ways in recent times to use technological means to improve human condition and overcome its natural limitations is gender affirming care, such as hormone replacement therapy. Transhumanism is woke now — hell, “trans” is right there in the name!
apparently a complete archive of scott siskind’s old livejournal. found on the EA forum no less. https://archive.fo/fCFQx
couldn’t help myself, there are seldom more perfect opportunities to use this one
I clicked on one from 2012 and it implies that if you lived in nazi germany you should have had polite debates with nazis. Thanks Scott, never change.
I’m not going to click more but it goes back to 2006 so if anyone finds any sad incel whining please bring it up!
Some idiot on LW still thinks the same way:
I selected and worded these suspicions to sound more like things a Democrat might say, because I’m trying to persuade Democrats that conservative misinformation draws from reasonable attitudes.
Wow, that sure is a longwinded way to announce to the world that you are a dupe
Thanks Scott, never change.
Not to worry, he never will!
I did a quick scan of the titles just for old time’s sake and ran into a very aggressive “oh my God shut up” directed at my younger self as much as at Scooter himself.
That is odd in a way, you would expect them to honour his wishes of that data no longer being available, but nope.
“Imagine a technology so useless you cannot run doom on it” https://bsky.app/profile/sosowski.bsky.social/post/3lm63a2srgc24
Shopify going all in on AI, apparently, and the CEO is having a proper born-again moment. Don’t have a source more concrete than this yet:
https://cyberplace.social/@GossiTheDog/114298302252798365
(and transcript: https://infosec.exchange/@barubary/114298367285112648)
It’s a lot like this:
Using AI effectively is now a fundamental expectation of everyone at Shopify. It’s a tool of all trades today, and will only grow in importance. Frankly, I don’t think it’s feasible to opt out of learning the skill of applying AI in your craft; you are welcome to try, but I want to be honest I cannot see this working out today, and definitely not tomorrow. Stagnation is almost certain, and stagnation is slow-motion failure. If you’re not climbing, you’re sliding.
That text is painful to read (I wonder how much of it is slop)… ugh, what is chatgpt doing to the brains of people? (And I’ve had the bad luck of reading some pretty unhinged pro-AI stuff from management at my employer too, although not as bad as this mail from shopify).
Is there a precedent for this hype? For the extent of damage that it will cause? Most tech industry hype is a waste of resources, but otherwise mostly harmless. Like that time when everyone believed that XML is the holy grail, that was silly, and although we still have to deal with some unfortunate data formats from those days, it passed. There were worse ones, most notably blockchain was almost catastrophic, but most companies hesitated to go all-in and pursued it more on the side, so when that hype faded, they simply buried their involvement and that was that.
But “AI”… it has such potential to create significant and long term damage to the companies adopting it. The slop code alone might haunt them forever, in ways that even the worst excesses of 90s enterprise java couldn’t. There’s nothing to learn from resulting failure, except “don’t use AI”.
In this case, given shopify’s general behaviour, I won’t be sad at all though if they crash and fail.
I also thought ‘guess LLMs dont work as an editor’.
And blockchains did massive damage; all the ransomware crime would be impossible if the tech world had not jumped into blockchain as much as they did and created and kept maintaining the ecosystem. (It also caused the rise of the techbro people who are now pivoting to AI, so it is connected.) Note that the damage done by BEC is still greater than ransomware, so not cybersecurity advice.
But I get your point, I think a real example would be facebook’s pivot to video. Which destroyed companies.
Yes, that’s true. Indirectly it costs them all dearly with ransomware. Likewise, I think the overall damage that AI will do to society as a whole will be much, much greater than just rotting some tech companies from the inside (most of which I wouldn’t be sad anyway if they went away…).
What I meant is that with blockchain the big tech companies at least didn’t willingly destroy their products, their processes, their decision making etc. I.e. they didn’t put blockchain into absolutely everything, all the way to MS Notepad. What I find staggering about this hype is the depth of the delusion, the willingness to not just experiment with it but really go all-in.
blockchain targeted libertarian post-goldbug pro-cyberpunk-dystopia fuckheads, llms target management types (you will replace workers with machines!), maybe that’s why
yeah, no I agree that blockchain is a bad example, just think we shouldn’t understate the massive damage it has done. Not just in actually damaged systems but also in the additional cost of everybody now having to worry about this. Same as how AI is not just causing climate change problems by running it; the scraping as well has increased the cost of running a webserver by 50% in load alone. (which on a global scale is just horrid). And then there is the forcing of it in everything, the burning of the boats.
Extreme sent at 4am energy.