Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
So many projects and small websites I’m aware of are being overtaxed by shitty LLM scrapers these days, it feels like an intentional attack. I guess the idea of ai can’t fail, it can only be failed; and so its profiteers must sabotage anything that indicates it’s not beneficial/necessary.
another cameo appearance in the TechTakes universe from George Hotz with this rich vein of sneerable material: The Demoralization is just Beginning
wowee where to even start here? this is basically just another fucking neoreactionary screed. as usual, some of the issues identified in the piece are legitimate concerns:
Wanna each start a business, pass dollars back and forth over and over again, and drive both our revenues super high? Sure, we don’t produce anything, but we have companies with high revenues and we can raise money based on those revenues…
… nothing I saw in Silicon Valley made any sense. I’m not going to go into the personal stories, but I just had an underlying assumption that the goal was growth and value production. It isn’t. It’s self licking ice cream cone scams, and any growth or value is incidental to that.
yet, when it comes to engaging with these issues, the analysis presented is completely detached from reality and devoid of any evidence of more than a dozen seconds of thought. his vision for the future of America is not one that
kicks the can further down the road of poverty, basically embraces socialism, is stagnant, is stale, is a museum
but one that instead
attempt[s] to maintain an empire.
how you may ask?
An empire has to compete on its merits. There’s two simple steps to restore american greatness:
- Brain drain the world. Work visas for every person who can produce more than they consume. I’m talking doubling the US population, bringing in all the factory workers, farmers, miners, engineers, literally anyone who produces value. Can we raise the average IQ of America to be higher than China?
- Back the dollar by gold (not socially constructed crypto), and bring major crackdowns to finance to tie it to real world value. Trading is not a job. Passive income is not a thing. Instead, go produce something real and exchange it for gold.
sadly, Hotz isn’t exactly optimistic that the great american empire will be restored, for one simple reason:
[the] people haven’t been demoralized enough yet
an empire has to compete on its merits
Back the dollar by gold (not socially constructed crypto)
Gold, the best substance in existence outside a societal context. Extremely nutritious and tasty. Great for making tools. Easy to form into clothes, which are warm and breathable too. Ideal building material. Obviously the main reason gold is valuable is its usefulness as non-corroding coating for electronic connectors, not that it’s a socially constructed status symbol.
Brain drain the world. Work visas for every person who can produce more than they consume. I’m talking doubling the US population, bringing in all the factory workers, farmers, miners, engineers, literally anyone who produces value.
Okay, I mean, that’s coherent policy, I really don’t like the caveats of “produces more than they consume” cause how do you quantify that, but yes, immigration is actually good…
Can we raise the average IQ of America to be higher than China?
aaaand it’s eugenics, fuck, how does this keep happening
@V0ldek @techtakes Also, WHY would anyone sane want to move TO the excited snakes of amurrrica this century?
The Maoist version of Misesian goldbuggery, absolutely fascinating.
Isn’t this guy still mainly relevant for jailbreaking the PS3? Pretty sure he flamed out during the Muskification of Twitter
A classic example of the “AI can’t be dumb because humans are dumb too” trope, Pokemon Red edition:
LW subjected me to a CAPTCHA which I find pretty funny for reasons I CBA to articulate right now.
Claude couldn’t exit the house at the beginning of Pokémon Red, an incredibly popular and successful game for children, therefore it’s dumber than an average child? Sounds dubious. I couldn’t figure out how to do that either and look at how intelligent I am!
The CAPTCHA failed to load properly for me at first, and then was mega slow. Quality custom implementation of a (wrapper around a) CAPTCHA, millions of EA money well spent.
Without a captcha the AIs would be trained on cutting edge AI alignment research like “videogames are unintuitive sometimes”, and this would greatly increase the chance of P(Doom).
I mean, it’s obviously true that games have their own internal structures and languages that aren’t always obvious without knowledge or context, and the FireRed comparison is a neat case where you can see that language improving as designers have both more tools (here meaning colors and pixels) and also more experience in using them.

But also, even in the LW thread they mention that when humans run into that kind of problem they don’t just act randomly for 6 hours. Either they come up with some systematic approach for solving the problem, they walk away from the game to ask for help, or something else. Also, you have the metacognition to be able to understand easily that “the rug at the bottom marks the exit” once it’s explained, which I’m pretty sure the LLM doesn’t have the ability to process.

It’s not even like a particularly dumb 6-year-old. Even if it’s prone to similar levels of over-matching and pattern recognition errors, the 6-year-old has an actual conscious brain to help solve those problems. The whole thing shows once again that pattern recognition and reproduction can get you impressively far in terms of imitating thought, but there’s a world of difference between that imitation and the real deal.
this is so embarrassing. “you say Claude is less capable than a typical six year old? yeah well what if the six year old is notably stupid? did you think of that?”
Instead of increasing the capabilities of llms, a lot of work is done in the field of downplaying human capabilities to make llms look better in comparison. You would assume that the ‘be aware of biases, and learn to think rationally’ place would notice this trap. But nope, nobody reads the sequences anymore. (E: for the people not in the know, the sequences is the Rationalist bible written by Yud (extremely verbose, the new bits are not good and the good bits are not new), used here as a joke; reading it (and saying you should) used to be part of the cultic milieu of LW).
Wait even LWers aren’t reading the sequences anymore? Or rather aren’t pretending to have done so?
Why put in the work when you can ask Claude to summarize them for you and reap those sweet sweet internet points?
It wasn’t really done that much during the era when Scott A was called the new leader of lesswrong so not sure if it has increased again. I assume a lot still do, as I assume a lot also pretend to have read it. Never looked into any stats, or if those stats are public. I know they put them all on a specific site in 2015. (https://www.readthesequences.com/) The bibliography is a treat (esp as it starts with pop sci books, and a SSC blog post, but also: “Banks, Iain. The Player of Games. Orbit, 1989.”, and not one but 3 of the Doc EE Smith lensmen books).
I didn’t know a16z was so devoted to developing new hires! Don’t you just love to see it.
During the trial, Penny’s defense brought in a forensic pathologist who claimed that Neely hadn’t died from being choked but from a “combination of his schizophrenia, synthetic marijuana, sickle cell trait and the struggle from being in Penny’s restraint.”
I get that defense attorneys have to work with what they have but goodness am I tired of this argument.
“Your honor, the deceased did not die from being shot through the temple, but due to having chronic migraines and exposure to second-hand smoke 3 years ago, and also walking towards the bullet, thus increasing its relative velocity slightly”
It’s just racist as hell. A revival of excited delirium pseudoscience.
In other news, a piece from Paris Marx came to my attention, titled “We need an international alliance against the US and its tech industry”. Personally gonna point to a specific paragraph which caught my eye:
The only country to effectively challenge [US] dominance is China, in large part because it rejected US assertions about the internet. The Great Firewall, often solely pegged as an act of censorship, was an important economic policy to protect local competitors until they could reach the scale and develop the technical foundations to properly compete with their American peers. In other industries, it’s long been recognized that trade barriers were an important tool — such that a declining United States is now bringing in its own with the view they’re essential to protect its tech companies and other industries.
I will say, it does strike me as telling that Paris was able to present the unofficial mascot of Chinese censorship this way without getting any backlash.
If Paris Marx is the little domino that causes total collapse of US hegemony, I’ll join the patreon at the highest tier forever
since the name popped up elsewhere: what’s the feel on venkatesh rao?
(I often see the name in 🚩 places, but dunno if that’s because the areas or because the person)
We talked about that on r/sneerclub in the past, can’t recall the specific consensus. Seems post-rational, has innovation on rationalism from binary ‘object vs meta’ to 2x2 grids.
thanks, I’ll go check in the archives :)
I did a quick search on Ribbonfarm (I couldn’t quickly recall what his blog was called) myself. And saw how much I had forgotten: it should have been called meta-rationality, and yes, insight porn, that was the term. (linking to two posts where ribbonfarm/this stuff was discussed).
E: Sad feels when you click on a name in the sub from years ago and see them now being a full blast AI bro.
New ultimate grift dropped, Ilya Sutskever gets $2B in VC funding, promises his company won’t release anything until ASI is achieved internally.
I’m convinced that these people have no choice but to do their next startup, especially if their names are already prominent in the press like Sutskever and Murati. Once you’re off the grift train, there is no easy way back on. I guess you can maybe sneak back in as a VC staffer or an independent board member, but that doesn’t seem quite as remunerative.
It’s the Saul Goodman effect, if you’ve grifted before and know you can make such easy money the only way for you to stop is to go through some major internal growth and internalise that it’s deeply unethical, but that’s so hard, man, why would you do that when you can just raise a billion dollars with a smile
New piece from Techdirt: Why Techdirt Is Now A Democracy Blog (Whether We Like It Or Not)
Strongly recommended reading overall, and strongly recommended you check out Techdirt - they’ve been doing some pretty damn good reporting on the current shitshow we’re living through.
I’ve read Masnick for over 20 years and he’s never learnt to write coherently. At least this one isn’t blaming Europe.
we live in hell https://www.newsguardrealitycheck.com/p/a-well-funded-moscow-based-global
(this is compounded by how some segment of heavy ai boosters/users - former cryptobros, but not only them - were already immersed in this particular bubble)
Ow look, the thing I worried about on r/scc (yes I know, my own fault for touching it), which could not happen ‘because you dont understand how llms work’, happened.
J. Oliver Conroy’s Ziz piece is out. Not odious at a glance.
Ziz helpfully suggested I use a gun with a potato as a makeshift suppressor, and that I might destroy the body with lye
I looked up a video of someone trying to use a potato as a suppressor and was not disappointed.
He made a fancy coatrack.
you undersold this
that guy’s face, amazing
if this is peak rationalist gunsmithing, i wonder what their peak chemical engineering looks like
the body is placed in a pressure vessel which is then filled with a mixture of water and potassium hydroxide, and heated to a temperature of around 160 °C (320 °F) at an elevated pressure which precludes boiling.
Also, lower temperatures (98 °C (208 °F)) and pressures may be used such that the process takes a leisurely 14 to 16 hours.
I’m fairly sure that a 50 gallon drum of lye at room temperature will take care of a body in a week or two. Not really suited to volume “production”, which is what water cremation businesses need.
as a rule of thumb, everything else equal, for every 10 C increase in temperature reaction rates go up 2x or 3x, so it would be anywhere between 250x and 6500x longer (4 months to 10 years??). but everything else really doesn’t stay equal here, because there are things like lower solubility of something that now coats something else and prevents reaction, fat melting, proteins denaturing thermally, lack of stirring from convection and boiling, and so on.
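For the curious, the rule-of-thumb arithmetic above does check out: dropping from the ~98 C process to ~18 C room temperature is eight 10 C steps, and 2^8 = 256 while 3^8 = 6561. A minimal sketch (the temperatures and Q10 factors here are illustrative assumptions, not chemistry for any particular process):

```python
def slowdown_factor(t_hot_c: float, t_cold_c: float, q: float) -> float:
    """Q10 rule of thumb: reaction rate changes by a factor of q (~2-3)
    per 10 degree C change. Returns how many times slower the reaction
    runs when cooled from t_hot_c to t_cold_c."""
    return q ** ((t_hot_c - t_cold_c) / 10.0)

# Cooling the ~98 C process to ~18 C room temperature, eight 10 C steps:
low = slowdown_factor(98, 18, 2)   # 2**8 = 256
high = slowdown_factor(98, 18, 3)  # 3**8 = 6561

# So a 14-16 hour process stretches by a factor of roughly 250x-6500x,
# i.e. somewhere between a few months and a decade-plus.
print(low, high)
```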
it will also reek of ammonia the entire time
Well that sounds like a great way to either make a very messy explosion or have your house smell like you’re disposing of a corpse from a mile away.
considering practicality of their actions, groundedness of their beliefs, state of their old boat, cleanliness of their rolling frat house trailer park “stealth” rvs, and from what i can tell zero engineering or trade background whatsoever, i see no reason to doubt that they could make a 400 L stainless steel container that has to hold 200 L+ of corrosive liquid at 160 C at perhaps 10 atm (of which only 7 atm is steam), plus a scrubber to take care of the ammonia. they are so definitely not paranoid that if they went out to source reagents, there’s no way they could possibly be confused for methheads on a shopping spree. maybe they could even run it on solar panels. That’s what we call a win-win scenario
fyi one of better methods that american cops use to detect meth labs is to just wait for them to catch fire. whether it is a statement on how hard they drop the ball or on safety mindset of cartel chemists i’ll leave that up to you
Good god
From the opening, this guy has actually been more consistent about respecting her name and pronouns than most coverage I’ve read. Not what I would have expected, but I’m also only through the first section.
Yudkowsky was trying to teach people how to think better – by guarding against their cognitive biases, being rigorous in their assumptions and being willing to change their thinking.
No he wasn’t.
In 2010 he started publishing Harry Potter and the Methods of Rationality, a 662,000-word fan fiction that turned the original books on their head. In it, instead of a childhood as a miserable orphan, Harry was raised by an Oxford professor of biochemistry and knows science as well as magic
No, Hariezer Yudotter does not know science. He regurgitates the partial understanding and the outright misconceptions of his creator, who has read books but never had to pass an exam.
Her personal philosophy also draws heavily on a branch of thought called “decision theory”, which forms the intellectual spine of Miri’s research on AI risk.
This presumes that MIRI’s “research on AI risk” actually exists, i.e., that their pitiful output can be called “research” in a meaningful sense.
“Ziz didn’t do the things she did because of decision theory,” a prominent rationalist told me. She used it “as a prop and a pretext, to justify a bunch of extreme conclusions she was reaching for regardless”.
“Excuse me, Pot? Kettle is on line two.”
It goes without saying that the AI-risk and rationalist communities are not morally responsible for the Zizians any more than any movement is accountable for a deranged fringe.
When the mainstream of the movement is ve zhould chust bomb all datacenters, maaaaaybe they are?
I feel like it still starts off too credulous towards the rationalists, but it’s still an informative read.
Around this time, Ziz and Danielson dreamed up a project they called “the rationalist fleet”. It would be a radical expansion of their experimental life on the water, with a floating hostel as a mothership.
Between them, Scientology and the libertarians, what the fuck is it with these people and boats?
a really big boat is the ultimate compound. escape even the surly bonds of earth!
What we really need to do is lash together a bunch of icebergs…
…what the fuck is it with these people and boats?
I blame the British for setting a bad example
Hey, we’re an island nation which ruled over a globe-spanning empire, we had a damn good reason to be obsessed with boats.
Couldn’t exactly commit atrocities on a worldwide scale without 'em, after all.
I assume its to get them to cooperate.
Ah, yes. The implication.
Fellas, 2023 called. Dan (and Eric Schmidt wtf, Sinophobia this man down bad) has gifted us with a new paper and let me assure you, bombing the data centers is very much back on the table.
Superintelligence is destabilizing. If China were on the cusp of building it first, Russia or the US would not sit idly by—they’d potentially threaten cyberattacks to deter its creation.

@ericschmidt @alexandr_wang and I propose a new strategy for superintelligence. 🧵

Some have called for a U.S. AI Manhattan Project to build superintelligence, but this would cause severe escalation. States like China would notice—and strongly deter—any destabilizing AI project that threatens their survival, just as how a nuclear program can provoke sabotage. This deterrence regime has similarities to nuclear mutual assured destruction (MAD). We call a regime where states are deterred from destabilizing AI projects Mutual Assured AI Malfunction (MAIM), which could provide strategic stability.

Cold War policy involved deterrence, containment, nonproliferation of fissile material to rogue actors. Similarly, to address AI’s problems (below), we propose a strategy of deterrence (MAIM), competitiveness, and nonproliferation of weaponizable AI capabilities to rogue actors.

Competitiveness: China may invade Taiwan this decade. Taiwan produces the West’s cutting-edge AI chips, making an invasion catastrophic for AI competitiveness. Securing AI chip supply chains and domestic manufacturing is critical.

Nonproliferation: Superpowers have a shared interest to deny catastrophic AI capabilities to non-state actors—a rogue actor unleashing an engineered pandemic with AI is in no one’s interest. States can limit rogue actor capabilities by tracking AI chips and preventing smuggling.

“Doomers” think catastrophe is a foregone conclusion. “Ostriches” bury their heads in the sand and hope AI will sort itself out. In the nuclear age, neither fatalism nor denial made sense. Instead, “risk-conscious” actions affect whether we will have bad or good outcomes.
Dan literally believed 2 years ago that we should have strict thresholds on model training over a certain size, lest big LLM spawn superintelligence (thresholds we have since well passed; somehow we are not paper clip soup yet). If all it takes to make super-duper AI is a big data center, then how the hell can you have mutually-assured-destruction-like scenarios? You literally cannot tell what they are doing in a data center from the outside (maybe a building is using a lot of energy, but it’s not like you can say, “oh, they are about to run superintelligence.exe, sabotage the training run”). MAD “works” because it’s obvious from satellites that the nukes are flying. If the deepseek team is building skynet in their attic for 200 bucks, this shit makes no sense.

Ofc, this also assumes one side will have a technology advantage, which is the opposite of what we’ve seen. The code to make these models is a few hundred lines! There is no moat!

Very dumb, do not show this to the orangutan and muskrat. Oh wait! Dan is Musky’s personal AI safety employee, so I assume this will soon be the official policy of the US.
link to bs: https://xcancel.com/DanHendrycks/status/1897308828284412226#m
I guess now that USAID is being defunded and the government has turned off their anti-russia/china propaganda machine, private industry is taking over the US hegemony psyop game. Efficient!!!
/s /s /s I hate it all
If they’re gonna fearmonger can they at least be creative about it?!?! Everyone’s just dusting off the mothballed plans to Quote-Unquote “confront” Chy-na after a quarter-century detour of fucking up the Middle East (moreso than the US has done in the past)
Credit to Dan, who clearly sees the winds are changing. The doomer grift don’t pay as much no mo’ so instead he turns to being a china hawk and advocate for chip controls and cyberwarfare as the way to stay in the spotlight. As someone who works in the semiconductor biz and had to work 60 hours last week because our supply chains are now completely fucked due to the tariffs, these chucklefucks can go pound sand and then try to use that pounded sand to make a silicon ingot.
two giant upsets to the semi market in the space of half a decade is probably perfectly fine and won’t have multi year global impacts, right? right?
(oof at that week, and g’luck with whatever still comes your way with that)
Ah appreciate it. Don’t worry too much about me, I enjoy the work in a fucked-up way because it makes me feel like a big business boy and my mommy is real proud of me.
But it is stressful cuz there are a bunch of people in China and the US whose jobs depend on us being able to solve this problem and that keeps me up at night. I got the handle tho.
Mutual Assured AI Malfunction (MAIM)
The proper acronym should be M’AAM. And instead of a ‘Roman salute’ they can tip their fedora as a distinctive sign 🤷‍♂️
the only part of this I really approve of is how likely these fuckers are to want to Speak To The Manager
Also I think he doesn’t understand MAD like, at all. The point isn’t that you can strike your enemy’s nuclear infrastructure and prevent them from fighting back. In fact that’s the opposite of the point. MAD as a doctrine is literally designed around the fact that you can’t do this, which is why the Soviets freaked out when it looked like we were seriously pursuing SDI.
Instead the point was that nuclear weapons were so destructive and hard to defend against that any move against the sovereignty of a nuclear power would result in a counter-value strike, and whatever strategic aims were served by the initial aggression would have to be weighed against something in between the death of millions of civilians in the nuclear annihilation of major cities and straight-up ending human civilization or indeed all life on earth.
Also if you wanted to reinstate MAD I think that the US, Russia, and probably China have more than enough nukes to make it happen.
You mean MAD doesn’t stand for Unilaterally Assured Destruction?
Musk assured
occasionally mentioned here Jan Marsalek implicated in surveillance (sometimes comically bad) and planned murder of journalists who crossed him https://theins.press/en/inv/279034 https://www.euronews.com/2025/03/07/uk-court-convicts-three-bulgarians-of-spying-for-russia
AHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA
LMAOU
truly the podcasting bros are the most oppressed minority in america
(also it looks like a bit more than the usual audience numbers for carlson channel, but it’s only one day so it’s kinda a bit more than nobody watched it. but it’s much less than when he openly pandered to schizos, cryptobros or vatniks)
ok so on the one hand fuck solitary confinement on the other hand
On one hand, torture. On the other hand if you wanted to throw a billionaire crypto grifter into the Omelas hole I’m pretty sure I’d take a few weeks before I started caring.
Robert Evans on Ziz and Rationalism:
https://bsky.app/profile/iwriteok.bsky.social/post/3ljmhpfdoic2h
https://bsky.app/profile/iwriteok.bsky.social/post/3ljmkrpraxk2h
If I had Bluesky access on my phone, I’d be dropping so much lore in that thread. As a public service. And because I am stuck on a slow train.
The second best time to sneer is today!
New piece from Brian Merchant: So the LA Times replaced me with an AI that defends the KKK
This “insight” tool gives a very incisive critique of the opinion-page journalism we’ve been seeing lately. Oddly enough, not the kind of critique you’d see printed on an opinion page.