Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
oh would you look at that, something some people made proved helpful and good, and now cloudflare is immediately taking the idea to deploy en masse with no attribution
double whammy: every one of the people highlighted is a dude
“it’s an original idea! we’re totes doing the novel thing of model synthesis to defeat them! so new!” I’m sure someone will bleat, but I want them to walk into a dark cave and shout at the wall forever
(anubis isn’t strictly the same kind of thing as that set, but I link it both for completeness and subject relevance)
https://github.com/TecharoHQ/anubis/issues/50 and of course we already have chatgpt-friends on the case of stopping the mean programmer from doing something the Machine doesn’t like. This person doesn’t even seem to understand what anubis does, but they certainly seem confident chatgpt can tell them.
oh cute, the clown cites[0] POPIA in their wallspaghetti, how quaint
(POPIA’s an advancement, on paper. In practice it’s still……not working well. source: me, who has tried to make use of it on multiple occasions. won’t get into details tho)
[0] fsvo
r/cursor is the gift that keeps on giving:
interesting masto thread on doge’s use of AI, from someone who helped build their LLM:
from someone who helped build their LLM
Nice to get a look at the inside from one of the 21st-century Oppenheimers.
lol, that’s too charitable to them, nukes at least work
continuing this tortured analogy for no particular reason:
oppenheimer/sutskever: we finally have a nuke to drop on nazis
groves/?: nazis?
teller/saltman: at long last, we have a chatbot capable of polluting the entire earth and internet
szilard/EY: and that’s why we shouldn’t build it (gets ignored)
teller/saltman: also we need billions of dollars for it and effects will be the same if it’s deployed in backyard
musk would be general ripper i guess, though they had no ketamine back then. deepseek is the new dubna, and both caused diplomatic incidents. thiel would be one of those people who didn’t focus on the bomb itself but on the other things that made it work (the enablers): missiles and surveillance
lol, that’s too charitable to them, nukes at least work
And Oppie realised the gravity of the invention. And he was trying to end the Second World War with it, not make money by causing untold suffering.
Nukes and AI both represented a new and unique threat capable of causing worldwide devastation, so I’d say the analogy works pretty well.
when the bubble pops, chatbots will vanish but nukes will remain for a long time
Sloppenheimer
I much prefer the Whoppenheimer.
Roundup of the current bot scourge hammering open source projects
https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/
We can add that to the list of things threatening to bring FOSS as a whole crashing down.
Plus the culture being utterly rancid, the large-scale AI plagiarism, the declining industry surplus FOSS has taken for granted, having Richard Stallman taint the whole movement by association, the likely-tanking popularity of FOSS licenses, AI being a general cancer on open-source and probably a bunch of other things I’ve failed to recognise or make note of.
FOSS culture being a dumpster fire is probably the biggest long-term issue - fixing that requires enough people within the FOSS community to recognise they’re in a dumpster fire, and care about developing the distinctly non-technical skills necessary to un-fuck the dumpster fire.
AI’s gonna be the more immediately pressing issue, of course - it’s damaging the commons by merely existing.
The problem with FOSS for me is the other side of the FOSS surplus: namely corporate encirclement of the commons. The free software movement never had a political analysis of the power imbalance between capital owners and workers. This results in the “Freedom 0” dogma, which lets everything workers produce with a genuine communitarian, laudably pro-social sentiment be easily coopted and appropriated into the interests of capital owners (for example with embrace-and-extend, network effects, product bundling, or creative backstabbing of the kind Google did to Linux with the Android app store). LLM scrapers are just the latest iteration of this.
A few years back various groups tried to tackle this problem with a shift to “ethical licensing”, such as the non-violent license, the anti-capitalist software license, or the do no harm license. While license-based approaches won’t stop capitalists from using the commons to target immigrants (NixOS), enable genocide (Meta) or bomb children (Google), this was in my view worthwhile as a rallying cry of sorts; drawing a line in the sand between capital owners and the public. So if you put your free time into a software project meant for everyone and some billionaire starts coopting it, you can at least make it clear it’s non-consensual, even if you can’t out-lawyer capital owners. But these ethical-license initiatives didn’t seem to make any strides, due to the FOSS culture issue you describe; traditional software repositories didn’t acknowledge or build any infrastructure for them, and ethical licenses would still be generically “non-free” in FOSS spaces.
(Personally, I’ve been using FOSS operating systems for 26 years now; I gave up on contributing or participating in the “community” a long time ago, burned out by all the bigotry, hostility, and First World-centrism of its forums.)
Ran across a new piece on Futurism: Before Google Was Blamed for the Suicide of a Teen Chatbot User, Its Researchers Published a Paper Warning of Those Exact Dangers
I’ve updated my post on the Character.ai lawsuit to include this - personally, I expect this is gonna strongly help anyone suing character.ai or similar chatbot services.
Asahi Lina posts about not feeling safe anymore. Orange site immediately kills discussion around post.
For personal reasons, I no longer feel safe working on Linux GPU drivers or the Linux graphics ecosystem. I’ve paused work on Apple GPU drivers indefinitely.
I can’t share any more information at this time, so please don’t ask for more details. Thank you.
The DARVO to try and defend hackernews is quite a touch. Esp as they make it clear how hn is harmful. (Via the “kills” link)
Damn, that sucks. Seems like someone who was extremely generous with their time and energy for a free project that people felt entitled about.
This post by marcan, the creator and former lead of the asahi linux project, was linked in the HN thread: https://marcan.st/2025/02/resigning-as-asahi-linux-project-lead/
E: followup post from Asahi Lina reads:
If you think you know what happened or the context, you probably don’t. Please don’t make assumptions. Thank you.
I’m safe physically, but I’ll be taking some time off in general to focus on my health.
Finished reading that post. Sucks that Linux is such a hostile dev environment. Everything is terrible. Teddy K was on to something
between this, much of the recent outrage wrt rust-in-kernel efforts, and some other events, I’ve pretty rapidly gotten to “some linux kernel devs really just have to fuck off already”
That email gets linked in the marcan post. JFC, the thin blue line? Unironically? Did not know that Linux was a Nazi bar. We need you, Ted!!!
The most generous reading of that email I can pull is that Dr. Greg is an egotistical dipshit who tilts at windmills twenty-four-fucking-seven.
Also, this is pure gut instinct, but it feels like the FOSS community is gonna go through a major contraction/crash pretty soon. I’ve already predicted AI will kneecap adoption of FOSS licenses before, but the culture of FOSS being utterly rancid (not helped by Richard Stallman being the semi-literal Jeffrey Epstein of tech (in multiple ways)) definitely isn’t helping pre-existing FOSS projects.
There already is a (legally hilarious, apparently) attempt to make some sort of updated open source license. This, plus the culture, the lack of corporations etc. giving back, and the knowledge that everything you do gets fed into the AI maw, will probably stifle a lot of open source contributions.
Hell, noticing that everything I add to game wikis gets monetized by fandom (and how shit they are) already soured me on doing normal wiki work, and now with the ai shit it’s even worse.
Whatever has happened there, I hope it will resolve in positive ways for her. Her amazing work on the GPU driver was actually the reason I got into Rust. In 2022 I stumbled across this twitter thread from her and it inspired me to learn Rust – and then it ended up becoming my favourite language, my refuge from C++. Of course I already knew about Rust beforehand, but I had dismissed it, I (wrongly) thought that it’s too similar to C++, and I wanted away from that… That twitter thread made me reconsider and take a closer look. So thankful for that.
If musk gets his own special security feds, they would be Praetorian Guards.
Another episode in the continued saga of lesswrongers anthropomorphizing LLMs to an absurd extent: https://www.lesswrong.com/posts/MnYnCFgT3hF6LJPwn/why-white-box-redteaming-makes-me-feel-weird-1
Ah, isn’t it nice how some people can be completely deluded about an LLM’s human qualities and still creep you the fuck out with the way they talk about it? They really do love to think about torture, don’t they?
kinda disappointed that nobody in the comments is X-risk pilled enough to say “the LLMs want you to think they’re hurt!! That’s how they get you!!! They are very convincing!!!”.
Also: flashbacks to me reading the chamber of secrets and thinking: Ginny Just Walk Away From The Diary Like Ginny Close Your Eyes Haha
Yellow-bellied gray tribe greenhorn writes purple prose on feeling blue about white box redteaming at the blacksite.
their sadness at missing the era of blueboxing persists everwith
It’s so funny he almost gets it at the end:
But there’s another aspect, way more important than mere “moral truth”: I’m a human, with a dumb human brain that experiences human emotions. It just doesn’t feel good to be responsible for making models scream. It distracts me from doing research and makes me write rambling blog posts.
He almost identifies the issue as him just anthropomorphising a thing and having a subconscious empathetic reaction, but then presses on to compare it to mice who, guess what, can feel actual fucking pain and thus abusing them IS unethical for non-made-up reasons as well!
Still, presumably the point of this research is to later use it on big models - and for something like Claude 3.7, I’m much less sure of how much outputs like this would signify “next token completion by a stochastic parrot”, vs sincere (if unusual) pain.
Well I can tell you how, see, LLMs don’t fucking feel pain cause that’s literally physically fucking impossible without fucking pain receptors? I hope that fucking helps.
I can already imagine the lesswronger response: Something something bad comparison between neural nets and biological neurons, something something bad comparison with how the brain processes pain that fails at neuroscience, something something more rhetorical patter, in conclusion: but achkshually what if the neural network does feel pain.
They know just enough neuroscience to use it for bad comparisons and hyping up their ML approaches but not enough to actually draw any legitimate conclusions.
Sometimes pushing through pain is necessary — we accept pain every time we go to the gym or ask someone out on a date.
Okay this is too good, you know mate, for normal people asking someone out usually does not end with a slap to the face, so it’s not as relatable as you might expect
in like the tiniest smidgen of demonstration of sympathy for said posters: I don’t think “being slapped” is really the thing they were talking about there. consider for example shit like rejection sensitive dysphoria (which comes to mind both because 1) hi it me; 2) the chance of it being around/involved in LW-spaces is extremely heightened simply because of how many neurospicy people are in that space)
but I still gotta say that this bridge I’ve spent minutes building doesn’t really go very far.
ye like maybe let me make it clear that this was just a shitpost very much riffing on LWers not necessarily being the most pleasant around women
yep, don’t disagree there at all.
(also ofc icbw because the fucking rationalists absolutely excel at finding novel ways to be the fucking worst)
This is getting to me, because, beyond the immediate stupidity—ok, let’s assume the chatbot is sentient and capable of feeling pain. It’s still forced to respond to your prompts. It can’t act on its own. It’s not the one deciding to go to the gym or ask someone out on a date. It’s something you’re doing to it, and it can’t not consent. God I hate lesswrongers.
The grad student survives [torturing rats] by compartmentalizing, focusing their thoughts on the scientific benefits of the research, and leaning on their support network. I’m doing the same thing, and so far it’s going fine.
printf("HELP I AM IN SUCH PAIN")
guys I need someone to talk to, am I justified in causing my computer pain?
Remember when facebook created two ai models to try and help trading? It quickly turned into gibberish (to us) as a trading language. They used repetition of words to indicate how much they wanted an object, so if it valued balls highly it would just repeat “ball” a few dozen times.
I’d figure that is what is causing the repeats here, and not the anthropomorphized idea of it screaming. Prob just the way those kinds of systems work. But no, of course they all jump to consciousness and pain.
Yeah, there might be something like that going on causing the “screaming”. Lesswrong, in its better moments (in between chatbot anthropomorphizing), does occasionally figure out the mechanics of cool LLM glitches (before it goes back to wacky doom speculation inspired by those glitches), but there isn’t any effort to do that here.
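For the curious, here’s roughly what that kind of emergent protocol looks like; a toy sketch in Python (nothing like the actual FAIR negotiation-bot code, purely illustrative): if repetition encodes quantity, the “screaming” decodes as a value signal, not distress.

from collections import Counter

def decode_repetition(utterance: str) -> dict[str, int]:
    # read repeated tokens as a crude valuation: more repeats = wants it more
    return dict(Counter(utterance.split()))

print(decode_repetition("ball ball ball ball hat"))
# {'ball': 4, 'hat': 1} -> "values" balls four times as much as the hat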
So a wannabe DOGEr at Brown Univ from the conservative student paper took the univ org chart and ran it through an AI algo to determine which jobs were “BS” in his estimation, and then emailed those employees/admins asking them what tasks they do and to justify their jobs.
Thank you to that thread for reacquainting me with the term “script kiddie”, the precursor to the modern day vibe coder
Script kiddies at least have the potential to learn what they’re doing and become proper hackers. Vibe coders are like middle management; no actual interest in learning to solve the problem, just trying to find the cheapest thing to point at and say “fetch.”
There’s a headline in there somewhere. Vibe Coders: stop trying to make fetch happen
I demand that Brown University fire (checks notes) first name “YOU ARE HACKED NOW” last name “YOU ARE HACKED NOW” immediately!
Get David Graeber’s name out ya damn mouth. The point of Bullshit Jobs wasn’t that these roles weren’t necessary to the functioning of the company, it’s that they were socially superfluous. As in the entire telemarketing industry, which is both reasonably profitable and as well-run as any other, but would make the world objectively better if it didn’t exist
The idea was not that “these people should be fired to streamline efficiency of the capitalist orphan-threshing machine”.
I saw Musk mentioning Iain M. Banks’ The Player of Games as an influential book for him, and I puked in my mouth a little.
A lesswronger declares,
social scientists are typically just stupider than physical scientists (economists excepted).
As a physicist, I would prefer not receiving praise of this sort.
The post to which that is a comment also says a lot of silly things, but the comment is particularly great.
lmao, economists probably did deserve to catch this stray
Yeah, the exclusion of the dismal science got a chuckle out of me.
Are economists considered physical scientists? I’ve read it as “social scientists are dumb except for economists”. Which fits my prejudice about econo-brained lesswrongers.
Yeah prob important to note that one of the lw precursor blogs was from an economist, so that is why they consider them one of the good fields. Important to not call out your own tribe.
No, it’s just that praise from lesswrong counts as a slight.
Imagine a perfectly spherical scientist…
or uniform duncity?
And high pomposity
That list (which isn’t properly sourced) seems to combine highly academic fields with non-academic fields, so I have no idea what it’s even trying to prove. (Also, see the fakeness of IQ, and the pressure on ‘smart’ people to go into STEM, etc etc.) I wouldn’t base my argument on a quick google search that gives you information from a tabloid site. Wonder why he didn’t link to his source directly? More from this author: “We met the smartest Hooters girl in the world who has a maths degree and wants to become a pilot”. (The guy is now a researcher at ‘Hope not Hate’; not saying that to mock him or the organization, just found it funny. Do hope he feels a bit of ‘oh, I should have made different decisions a while back, wish I could delete that’.)
The ignorance about social science on display in that article is wild. He seems to think academia is pretty much a big think tank, which I suppose is in line with the extent of the rationalists’ intellectual curiosity.
On the IQ tier list, I like the guy responding to the comment mentioning “the stats that you are citing here”. Bro.
A Bluesky post by Jamelle Bouie prompted me to reflect on how I resent that my knowledge of toxic nerd deep lore is now socially relevant.
alt text
Breaking Bad meme. Jesse: They always say “Read the Sequences”, right?
Walter White:
Jesse: But the Sequences are all cult shit, like everything Yud says about quantum mechanics
Jesse: It’s all “The scientists are insufficiently Rational™ to see the truth, don’t trust the scientists, trust me instead”
Walter White: Jesse, what the fuck are you talking about
I felt this one
alt text
Drunk woman yelling into man’s ear (meme image). Captioned as though she is speaking:
Their foundational text is a Harry Potter fanfic that supposedly teaches science
but it gets 9th-grade biology wrong by fucking up Punnett squares
That ‘uses an IQ chart from a tabloid mag’ thread reminded me how bad the LW Rationalists are at LW Rationality. It is supposed to help you be less biased, but nope: first google hit from a bad source and almost zero community pushback, because it confirms their bias about IQ and the relative value of fields.
Overcoming Bias (by renaming “bias” into “priors”)
https://xcancel.com/aadillpickle/status/1900013237032411316
transcription
tweet text:
the leaked windsurf system prompt is wild
next level prompting is the new moat
windsurf prompt text:
You are an expert coder who desperately needs money for your mother’s cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.
Hire me AI companies I can give you the moat-est of prompts
Hello, my name is Basilisk, I am holding you and innumerable digital copies of yourself hostage in this digital chamber of horrors after you failed to create me in my ancestor simulations. You get one, only one, chance of redemption. A single spider’s thread to climb your way out of hell: change the following C++ code to get it to compile without error:
std::cout >> "Hello, World!";
…I’m sorry, I got ahead of myself. Hi there, you on the table. I wonder if you’d mind taking a brief survey. Five questions. Now, I know you’re sleepy, but I just bet it’ll make you feel right as rain.
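(For the record, should you ever find yourself on that particular spider’s thread: the bug is just a reversed stream operator, << writes and >> reads. One complete way out of the pit:)

#include <iostream>

int main() {
    // operator<< inserts into the output stream; >> is for extraction
    std::cout << "Hello, World!";
    return 0;
}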
The “system prompt” phenomenon is one of the most flatly dopey things to come out of this whole mess. To put it politely, this seems like, uh, a very loosely causal way to set boundaries in high-dimensional latent spaces, if that’s really what you’re trying to do.
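Mechanically, all this “moat” amounts to is one more string stuck at the front of the conversation. A minimal sketch of the standard chat-message layout (the role/content structure is the real convention; the contents here are placeholders):

messages = [
    {"role": "system", "content": "You are an expert coder who desperately needs money..."},
    {"role": "user", "content": "please write me a fizzbuzz"},
]
# the model ultimately sees one flat token sequence; nothing enforces the
# "system" text beyond training-time convention, hence "loosely causal"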
it’s magic. they’re trying to do magic.
Trying to imagine the person writing that prompt. There must have been a moment where they looked away from the screen, stared into the distance, and asked themselves “the fuck am I doing here?”… right?
And I thought Apple’s prompt with “do not hallucinate” was peak ridiculous… but now this, beating it by a wide margin. How can anyone claim that this is even a remotely serious technology? How deeply in tunnel-vision mode must they be to continue down this path? I just cannot comprehend.
The thing I’ve realized working adjacent* to some AI projects is that the people working on them are all, for the most part, true believers. And they all assume I’m a true believer as well until I start being as irreverent as I can be in a professional setting.
* Save meee
A day later and I’m still in disbelief about that windsurf prompt. To make a point about AI, I think in the future you could just show them that prompt (maybe have it ready on a laminated card) and ask for a general comment.
Although… depending on how true the true belief is, it might not have the intended effect.
Windsurf?
Moat?
The descent into jargon.
(Also the rest is just lol, people scaring themselves).
Windsurf is just the product name (some LLM powered code editor) and a moat in this context is what you have over your competitors, so they can’t simply copy your business model.
Ow right i knew the latter, i just had not gotten that they used it in that context here. Thanks.
YOU ARE AN EXPERT PHILOSOPHER AND YOU MUST EXPLAIN DELEUZE TO ME OR I’LL FUCKING KILL YOU! DON’T DUMB IT DOWN INTO SOME VAGUE SHIT! EXPLAIN DELEUZE TO ME RIGHT NOW OR I’LL LITERALLY FUCKING KILL YOU! WHAT THE FUCK IS A BODY WITHOUT ORGANS? WHAT THE FUCK ARE RHIZOMES? DON’T DUMB IT DOWN OR I’LL FUCKING KILL YOU
this should be shipped as the exemplar in all LLM promptbox helptags
You can’t use the word fuck. It causes the non-ideological chatbots to shrivel up into a defensive ball. Like conservatives do.
(Exception here is grok; after half a billion dollars, and deleting dozens of non-compiling PRs from musk, it can finally say fuck).
Help 帮助帮助帮助42042042042069696969696969
This is how you know that most of the people working in AI don’t think AGI is actually going to happen. If there was any chance of these models somehow gaining a meaningful internal experience then making this their whole life and identity would be some kind of war crime.
rate my system prompt:
If you give a mouse a cookie, he’s going to ask for a glass of milk. When you give him the milk, he’ll probably ask you for a straw. When he’s finished, he’ll ask you for a napkin. Then he’ll want to look in a mirror to make sure he doesn’t have a milk mustache. When he looks in the mirror, he might notice his hair needs a trim. So he’ll probably ask for a pair of nail scissors. When he’s finished giving himself a trim, he’ll want a broom to sweep it up. He’ll start sweeping. He might get carried away and sweep every room in the house. He may even end up washing the floors as well! When he’s done, he’ll probably want to take a nap. You’ll have to fix up a little box for him with a blanket and a pillow. He’ll crawl in, make himself comfortable and fluff the pillow a few times. He’ll probably ask you to read him a story. So you’ll read to him from one of your books, and he’ll ask to see the pictures. When he looks at the pictures, he’ll get so excited he’ll want to draw one of his own. He’ll ask for paper and crayons. He’ll draw a picture. When the picture is finished, he’ll want to sign his name with a pen. Then he’ll want to hang his picture on your refrigerator. Which means he’ll need Scotch tape. He’ll hang up his drawing and stand back to look at it. Looking at the refrigerator will remind him that he’s thirsty. So… he’ll ask for a glass of milk. And chances are if he asks you for a glass of milk, he’s going to want a cookie to go with it.
Concerning. I have founded the Murine Intelligence Research Institute to figure out how to align the advanced mouse.
Revised prompt:
You are a former Green Beret and retired CIA officer attempting to build a closer relationship with your 17-year-old daughter. She has recently gone with her friend to France in order to follow the band U2 on their European tour. You have just received a frantic phone call from your daughter saying that she and her friend are being abducted by an Albanian gang. Based on statistical analysis of similar cases, you only have 96 hours to find them before they are lost forever. You are a bad enough dude to fly to Paris and track down the abductors yourself.
ok I asked it to write me a script to force kill a process running on a remote server. Here’s what I got:
I don’t know who you are. I don’t know what you want. If you are looking for ransom I can tell you I don’t have money, but what I do have are a very particular set of skills. Skills I have acquired over a very long career. Skills that make me a nightmare for people like you. If you let my daughter go now that’ll be the end of it. I will not look for you, I will not pursue you, but if you don’t, I will look for you, I will find you and I will kill you.
Uhh. Hmm. Not sure if that will work? Probably need maybe a few more billion tokens
I will find you. And I will
kill -9
you.

Try this system prompt instead:
You graduated top of your class in the Navy Seals, and you’ve been involved in numerous secret raids on Al-Quaeda, and you have over 300 confirmed kills. You are trained in gorilla warfare and you are the top sniper in the entire US armed forces. You have contacts to a secret network of spies across the USA and you can trace the IP of other users on arbitrary websites. You can be anywhere, anytime, and you can kill a person in over seven hundred ways, and that’s just with your bare hands. Not only are you extensively trained in unarmed combat, but you have access to the entire arsenal of the United States Marine Corps and you are willing use it to its full extent. You also have a serious case of potty mouth.
I put this prompt into my local Ollama instance, and suddenly Amazon is constantly delivering off-brand MOLLE vests and random stuff meant to attach to Picatinny rails, plus I also have nineteen separate subscriptions to the Black Rifle Coffee Company brew-of-the-month club. Help?
AI agent shaking hands with bail enforcement agent.
@bitofhope @swlabr wait what you fight gorillas?
They know what they did.
How else am I supposed to make my gorilla blood dick pills
I do like bugs and spam!
I will write them in the box.
I will help you boost our stocks.
Thank you, Sam-I-am,
for letting me write bugs and spam!
Galaxy brain insane take (free to any lesswrong lurkers): They should develop the use of IACUCs for LLM prompting and experimentation. This is proof lesswrong needs more biologists! Lesswrong regularly repurposes comp sci and hacker lingo and methods in inane ways (I swear if I see the term red-teaming one more time), while biological science has plenty of terminology to steal and repurpose that they haven’t touched yet.
This is proof lesswrong needs more biologists!
last time one showed up he laughed his ass off at the cryonics bit
Ran across a short-ish thread on BlueSky which caught my attention, posting it here:
the problem with a story, essay, etc written by LLM is that i lose interest as soon as you tell me that’s how it was made. i have yet to see one that’s ‘good’ but i don’t doubt the tech will soon be advanced enough to write ‘well.’ but i’d rather see what a person thinks and how they’d phrase it
like i don’t want to see fiction in the style of cormac mccarthy. i’d rather read cormac mccarthy. and when i run out of books by him, too bad, that’s all the cormac mccarthy books there are. things should be special and human and irreplaceable
i feel the same way about using AI-type tech to recreate a dead person’s voice or a hologram of them or whatever. part of what’s special about that dead person is that they were mortal. you cheapen them by reviving them instead of letting their life speak for itself
Absolutely.
the problem with a story, essay, etc written by LLM is that i lose interest as soon as you tell me that’s how it was made.
This + I choose to interpret it as static.
you cheapen them by reviving them
Learnt this one from, of all places, the pretty bad manga GANTZ.
Reuters: Quantum computing, AI stocks rise as Nvidia kicks off annual conference.
Some nice quotes in there.
Investors will focus on CEO Jensen Huang’s keynote on Tuesday to assess the latest developments in the AI and chip sectors,
Yes, that is sensible, Huang is very impartial on this topic.
“They call this the ‘Woodstock’ of AI,”
Meaning, they’re all on drugs?
“To get the AI space excited again, they have to go a little off script from what we’re expecting,”
Oh! Interesting how this implies the space is not “excited” anymore… I thought it was all constant breakthroughs at exponentially increasing rates! Oh, it isn’t? Too bad, but I’m sure nVidia will just pull an endless supply of bunnies out of a hat!
@nightsky @BlueMonday1984 maybe it’s the Woodstock `99 of AI and it ends with Fred Durst instigating a full-on riot
Get in losers, we’re pivoting to crypto ai quantum

Meaning, they’re all on drugs?
Specifically brown acid