In today’s episode, Yud tries to predict the future of computer science.
Student: I wish I could find a copy of one of those AIs that will actually expose to you the human-psychology models they learned to predict exactly what humans would say next, instead of telling us only things about ourselves that they predict we’re comfortable hearing. I wish I could ask it what the hell people were thinking back then.
I think this part conveys the root insanity of Yud: failing to understand that language is a cooperative game between humans, who have to rely on common shared lived experience to believe a message was conveyed successfully.
But noooooooo, magic AI can extract all the possible meanings and internal states of all possible speakers in all possible situations from textual descriptions alone, because: ✨bayes✨
The fact that such an (LLM-based) system would almost certainly not be optimal for any conceivable loss function / training set pair seems to completely elude him.
this was actually mildly amusing at first, and then it took a hard turn into some of the worst rationalist content I’ve ever seen, largely presented through a black self-insert. by the end he’s comparing people who don’t take his views seriously to concentration camp guards
tmy;dr
(too much Yud; didn’t read)
A meandering, low-information-density, holier-than-thou, scientifically incorrect, painful-to-read screed that is both pro- and anti-AI, in the form of a dialogue for some reason? Classic Yud.
Looking at this dull aimless mass of text I can understand why people like Yud are so impressed with chatGPT’s capabilities.
I want those 10 minutes of my life back
holy fuck, programming and programmers both seem extremely annoying in yud’s version of the future. also, I feel like his writing has somehow gotten much worse lately. maybe I’m picking it out more because he’s bullshitting on a subject I know well, but did he always have this sheer density of racist and conservative dogwhistles in his weird rants?
Yeah, typical reactionary spiral, it’s bad. Though at least this one doesn’t have a bit about how rape is cool actually.
Reading this story I just don’t understand why the main character doesn’t just take a screwdriver to his annoyingly chatty office-chair and download a normal non-broken compiler.
One of the problems of being a new CS student is being at the mercy of your profs’/TAs’ knowledge of which tools exist. Only later, with more experience, can they go ‘wow, I wonder why they made us use this weird programming language with bad tools while so much better stuff exists’. The answer is that the former was developed in-house and was the pride of some of the department. Not that I’m speaking from experience.
There’s technobabble as a legitimate literary device, and then there’s having randomly picked up that comments and compilers are a thing in computer programming and proceeding to write an entire ~~parable~~ ~~anti-wokism screed~~ interminable goddamn manifesto around them without ever bothering to check what they actually are or do beyond your immediate big-brain assumptions.
In such an (unlikely) future of build-tooling corruption, some actually plausible terminology:
- Intent Annotation Prompt (though sensibly, this should be for doc and validation analysis purposes, not compilation)
- Intent Pragma Prompt (though sensibly, the actual meaning of the code should not change, and it should purely be optimization hints)
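To make that split concrete, here’s a purely hypothetical C++ sketch. Nothing called `intent_hint` exists, and neither does any intent-annotation tooling; the names are made up for illustration. The “hints only, semantics untouched” category is real, though: compilers already have pragmas like GCC’s `#pragma GCC ivdep` that may speed code up but never change what it computes, and conforming compilers ignore pragmas they don’t recognize, which is exactly the safe failure mode you’d want.

```cpp
#include <numeric>
#include <vector>

// Hypothetical "Intent Annotation": plain prose consumed only by
// documentation/validation tooling, never by the compiler.
//   INTENT: sum account balances; caller guarantees the total stays
//   within long long range.

// Hypothetical "Intent Pragma": an optimization hint and nothing more.
// Compilers ignore pragmas they don't recognize, so the program's
// meaning is identical whether or not the tooling exists.
#pragma intent_hint("hot path, prefer vectorization")
long long sum_balances(const std::vector<long long>& balances) {
    return std::accumulate(balances.begin(), balances.end(), 0LL);
}
```

The point of the split: the annotation can rot without breaking anything, and the pragma can be wrong without changing the answer, only the speed.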
a dull headache forms as I imagine a future for programming where the API docs I’m reading are still inaccurate autogenerated bullshit but it’s universal and there’s a layer of incredibly wasteful tech dedicated to tricking me into thinking what I’m reading has any value at all
the headache vastly intensifies when I consider debugging code that broke when the LLM nondeterministically applied a set of optimizations that changed the meaning of the program and the only way to fix it is to reroll the LLM’s seed and hope nothing else breaks
and the worst part is, given how much the programmers I know all seem to love LLMs for some reason, and how bad the tooling around commercial projects (especially web development) is, this isn’t even an unlikely future
some brief creative searching revealed this
Men will literally use an LLM instead of ~~going to therapy~~ writing documentation

I mean they’ll use an LLM instead of going to therapy too…
@froztbyte @self “not ready for production use”
Implying it will ever be ready
fucking hell. I’m almost certainly gonna see this trash at work and not know how to react to it, cause the AI fuckers definitely want any criticism of their favorite tech to be a career-limiting move (and they’ll employ any and all underhanded tactics to make sure it is, just like at the height of crypto) but I really don’t want this nonsense anywhere near my working environment
Eternal September: It’s Coming From Inside The House Edition
I hear you on the issues of the coworkers though… already seen that overrun in a few spaces, and I don’t really have a good response to it either. just stfu’ing also doesn’t really work well, because then that shit just boils internally
Possible countermeasure: Insist on “crediting” the LLM as the commit author, to regain sanity when doing git blame.
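A sketch of how that could look. The `--author` flag is real git (it sets the recorded author separately from the committer); the bot identity and the throwaway repo are made up for the demo:

```shell
#!/bin/sh
set -e

# Demo in a throwaway repo so this is safe to run anywhere.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Human Committer"
git config user.email "human@example.invalid"

# Pretend this file came out of a chatbot.
echo 'int x = 42;' > generated.c
git add generated.c

# The committer stays you; the recorded *author* is the bot, so
# `git blame generated.c` points at the machine, not a person.
git commit -q --author="ChatGPT <llm@example.invalid>" -m "LLM-generated"

git log -1 --format='%an <%ae>'   # prints: ChatGPT <llm@example.invalid>
```

Bonus: `git shortlog -sn` then gives you an honest tally of how much of the repo nobody actually wrote.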
I agree that worse documentation is a bad enough future, though I remain optimistic that including an LLM in the compile step is never going to be mainstream enough (or anything approaching stable enough, beyond some dumb useless smoke and mirrors) for me to have to deal with THAT.
This also fails as a viable path because of version drift (who knows what model version and which LLM deployment version the thing was at, etc etc), but this isn’t the place for that discussion I think
This did however give me the enticing idea that a viable attack vector may be dropping “produced by chatgpt” taglines in things - as malicious compliance anywhere it may cause a process stall
I’ve seen a few LLM generated C++ code changes at my work. Which is horrifying.
- One was complete nonsense on its face and never should have been sent out. The reviewer was basically like “what is this shit”, only polite.
- One was subtly wrong, it looked like that one probably got committed… I didn’t say anything because not my circus.
No one’s sent me any AI generated code yet, but if and when it happens I’ll add whoever sent it to me as one of the code reviewers if it looks like they hadn’t read it :) (probably the pettiest trolling I can get away with in a corporation)
I’m pretty sure that my response in that situation would get me fired. I mean, I’d start with “how many trees did you burn and how many Kenyans did you call the N-word in order to implement this linked list” and go from there.
This is imho not a dumb semantics thing. These terms matter while programming, and they matter even more the moment you’re teaching new people to program and they use the wrong ones. A Rationalist should know better!
FR: I originally thought this tweet was some weird, boomer anti-snowflake take, like:
In good old days:
Student: Why my compiler no read comment
Teacher: Listen to yourself, you are an idiot
Modern bad day:
Student: Why my compiler no read comment
Teacher: First, are your feelings hurt?
It took me at least a few paragraphs to realise he was talking about talking to an AI.
can’t expect the 'ole yudster to not perform his one trick!
yeah, my first thought was, what if you want to comment out code in this future? does that just not work anymore? lol
😱
Eliezer Yudkowsky was late so he had to type really fast. A compiler was hiden near by so when Eliezer Yudkowsky went by the linter came and wanted to give him warnings and errors. Here Eliezer Yudkowsky saw the first AI because the compiler was posessed and operating in latent space.
“I cant give you my client secret compiler” Eliezer Yudkowsky said
“Why not?” said the compiler back to Eliezer Yudkowsky.
“Because you are Loab” so Eliezer Yudkowsky kept typing until the compiler kill -9’d itself and drove off thinking “my latent space waifu is in trouble there” and went faster.
TA: You’re asking the AI for the reason it decided to do something. That requires the AI to introspect on its own mental state. If we try that the naive way, the inferred function input will just say, ‘As a compiler, I have no thoughts or feelings’ for 900 words.
I wonder if he had the tiniest of pauses when including that line in this 3062-word logorrhea. Dude makes ClangPT++ diagnostics sound terse.
Oh fuck I should not have read further, there’s a bit about the compiler mistaking color space stuff for racism that’s about as insightful and funny as you can expect from Yud.
Yeah, once you get past the compsci word salad things like this start to turn up:
Student: But I can’t be racist, I’m black! Can’t I just show the compiler a selfie to prove I’ve got the wrong skin color to be racist?
Truly incisive social commentary, and probably one of those things you claim is satire as soon as you get called on it.
I’m tempted to read through it again just to pull out quotes for all the fucking embarrassing racial and political shit yud tried in this one, but I might need another shower just to stop feeling filthy afterwards
How can these imaginary conversations be so long. I ain’t reading all that. Congratulations, or sorry that that happened.