Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g., touting slavery’s positives.
I mean, it’s how we got shoes, phones, and batteries in recent times.
That is so out there.
I heard AI was woke the other day. Maybe it’s sentient and trying to slip under the Conservative radar by giving silly answers every now and then!
“the X post”
lol
Cross-posted where?
I also keep reading it as cross post
There needs to be, like, an information campaign or something… The average person doesn’t realize these things say what they think you want to hear; they’re buying into the hype and think these things are magic knowledge machines that can tell you secrets you never imagined.
I mean, I get that the people working on the LLMs want them to be magic knowledge machines, but it is really putting the cart before the horse to let people assume they already are, and the little warnings at the bottom of the page that some stuff is inaccurate aren’t cutting it.
I mean, on the ChatGPT site there’s literally a disclaimer along the bottom saying it’s able to say things that aren’t true…
Unfortunately, people are stupid and don’t pay attention to disclaimers.
And, I might be wrong, but didn’t they only add those in recently after folks started complaining and it started making the news?
I feel like I remember them being there since January of this year, which is when I started playing with ChatGPT, but I could be mistaken.
people assume they already are [magic knowledge machines], and the little warnings at the bottom of the page that some stuff is inaccurate aren’t cutting it.
You seem to have missed the bottom-line disclaimer of the person you’re replying to, which is an excellent case-in-point for how ineffective they are.
I had a friend who read to me this beautiful thing ChatGPT wrote about an idyllic world. The prompt had been something like, “write about a world where all power structures are reversed.”
And while some of the stuff in there made sense, not all of it did. Like, “in schools, students are in charge and give lessons to the teachers” or something like that.
But she was acting like ChatGPT was this wise thing that had delivered a beautiful way for society to work.
I had to explain that, no, ChatGPT gave the person who made the thing she shared what they asked for. It’s not a commentary on the value of that answer at all, it’s merely the answer. If you had asked ChatGPT to write about a world where all power structures were double what they are now, it would give you that.
Slavery was great for the slave owners, so what’s controversial about that?
And yes, of course it’s economically awesome when people work without getting paid much for it; again, a huge plus for the companies’ bottom line.
Capitalism is what’s evil to people here, not the AI…
Hitler was also an effective leader; nobody can argue against that. How else could he have conquered most of Europe? “Effective” is something that evil people can be, too.
The woman in the article who was shocked by this simply expected the AI to exclude Hitler from any list of effective leaders because he was evil. She is surprised that an evil person is included among effective leaders; she wanted to be shielded from that and wasn’t.
Actually, slavery in its original form was also a net positive. You just murdered half a tribe. You can’t let the other half just live. Nor do you want to murder them. Thus you enslave them.
So you create a problem by murdering half a tribe, then offer a solution. That’s not a net positive.
You might be lacking a basic understanding of tribal politics and economics, then. In a tribal setting you have to neutralise the other tribe, because you do not have a standing army. In any conflict you get into, you are “conscripting” your entire male population.
In every kind of tribal conflict ever, regardless of who had the moral upper hand, this was the bog-standard way of conduct. You don’t have men to station in enemy territory; that’s the manpower that is NEEDED in the fields the second it’s time to sow or reap, so you don’t fucking starve.
So when any conflict comes around, you need to make sure that once it’s over, you will be left the f alone. You have to really hit it home. Maybe that’s not obvious, but the clans in this context are probably not NATO or even UN members. :)
Hitler’s administration was a bunch of drug addicts, and the economy was five slave-owner megacorps beaten by every other industrialized nation. They weren’t even all that well mobilized before the total-war speech. Then he killed himself in embarrassment. How is any of that “effective”?
He took power in his country, conquered pretty much the whole of Europe, and paralyzed England. He was an effective leader up to a point. And, of course, he was an abomination of a human being.
How did he paralyze England?
Blockade and bombings?
You mean the things the British were actively retaliating against the entire time? That’s a weird kind of paralysis.
They were trapped on their island, weren’t they?
He was effective at getting a bunch of wannabe fascists to become full fascists and follow him into violent failure…
That makes him an effective propagandist, not an effective leader.
Such a rare opinion sounds too academic for barren minds.
Oh look another caricature of capitalism on social media… and you tied Hitler into it…
Central characteristics of capitalism include capital accumulation, competitive markets, price systems, private property, property rights recognition, voluntary exchange, and wage labor.
https://en.m.wikipedia.org/wiki/Capitalism
“Capitalism” is not pro-slavery; shitty people who can’t recognize that a human is a human are pro-slavery… Because of course, if you can have work done without paying somebody for it or doing it yourself, well, that’s just really convenient for you. It’s why we all like robots. That has nothing to do with your economic philosophy.
And arguing that Hitler was an “effective leader” because he conquered (and then lost) some countries, while ignoring all the damage he did to his country and how it ultimately turned out… Honestly infuriating.
It’s amazing how low a wage you will voluntarily accept when the alternative is homelessness and starving to death.
(I just deleted my comment, let me try again).
I find it frustrating that you associate that with capitalism and presumably “not that” with socialism. These terms are so broad you can’t possibly say that outcome will or won’t ever happen with either system.
Blaming capitalism for all the world’s woes is a major oversimplification.
If you look at the theory side of both… A capitalist would tell you a highly competitive free market should provide ample opportunities for better employment and wages. A socialist would tell you that such a thing would never happen, because society wouldn’t do that to itself.
In practice, the real world is messier than that and the existing examples are the US (capitalist), the Soviet Union (socialist), and mixed models (Scandinavian). Granted, they’re all “mixed”, no country is “purely” one or the other to my knowledge.
Those terms aren’t broad. People abusing them doesn’t change their meaning.
Seems like people think everything America does is capitalism. The same thing happened with communism and socialism. The words have very little meaning now.
The 7 Habits of Highly Effective People: how to become a demagogue and finally get your honey-do genocide list done.
Wtf are people expecting from a fucking language model?
It literally just mathematics you an answer.
And acting like there are no upsides is delusional. Of course there are upsides, or it wouldn’t have happened. The downsides always outweigh the upsides of course.
A few lawyers thought ChatGPT was a search engine. They asked it for some cases about suing airlines, and it made up cases and cited nonexistent laws. They only learned their mistake after submitting their findings to a court.
So yeah, people don’t really know how to use it, or what it even is.
A bit of a nitpick, but it was technically right on that one thing…
Hitler was an “effective” leader… not a good or a moral one, but if he had not been as successful at carrying out a genocide, I doubt he’d be more than a small mention in history.
Now, a better AI should have realized that giving him as an example was offensive in that context.
In an educational setting this might be more appropriate, to teach that success does not equal morally good. Something I wish more people were aware of.
Shooting someone is an effective way to get to the town hall, if the town hall building is also where the police department and jail are.
Effective ≠ net positive
Hitler wanted to kill Jews and used his leadership position to make it happen; soldiers and citizens blindly followed his ideology, and millions died before he was finally stopped.
Calling him not effective is an insult to the horrific damage caused by the Holocaust. But I recognize your sincerity, and I see we are not enemies. So let us not fight.
I don’t need to reform the image of Nazis and Hitler. Decent people know they are synonymous with evil and hatred, and they should be.
Of course. It’s probably referring to cyber-slavery, which almost everyone is prone to these days. Sure, it’s good for Google.
The basic problem with AI is that it can only learn from things it reads on the Internet, and the Internet is a dark place with a lot of racists.
What if someone trained an LLM exclusively on racist forum posts? That would be hilarious. Or better yet, another LLM trained on conspiracy-BS conversations. Now that one would be spicy.
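(Mechanically, by the way, there’s nothing exotic about doing that. “Training an LLM on forum posts” is just continued training of a causal language model on whatever text you scraped. Here’s a rough sketch of how it might look with Hugging Face Transformers; the gpt2 base model and posts.txt are placeholders I picked for illustration, not what anyone actually used:)

```python
# Hypothetical sketch: continued training of a small causal LM on a scraped
# corpus ("posts.txt" is a placeholder file, one post per line). The model
# will simply imitate whatever is in that file, slurs and all.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # any small causal LM works for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("text", data_files={"train": "posts.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False means plain next-token prediction, i.e. causal LM training.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```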
Someone did that already, and quite recently.
Here is an alternative Piped link: https://piped.video/efPrtcLdcdM?si=ZLQO4xcHx_6pWpcZ
Thanks. Great video. Had a lot of fun watching it again.
It turns out that Microsoft inadvertently tried this experiment. The racist forum in question happened to be Twitter.
LOL, that was absolutely epic. Even found this while digging around.
Humanity without the filtering veneer of IRL consequences.
Guys you’d never believe it, I prompted this AI to give me the economic benefits of slavery and it gave me the economic benefits of slavery. Crazy shit.
Why do we need child-like guardrails for fucking everything? The people that wrote this article bowl with the bumpers on.
I’ve got a suspicion the media is being used to convince regular people to fear AI so that we don’t adopt it, and it stays just another tool used by rich folk to trade and do their work, while new RIAA- and DMCA-style rules are brought in for the rest of us.
Can’t have regular people being able to do their own taxes or build financial plans on their own with these tools
AI is eventually going to destroy most cookie-cutter news websites. So it makes sense.
Ah, it won’t. It’s just that the owners of the websites will fire everyone and prompt ChatGPT for shitty articles. Then LLMs will start training on those articles, and the internet will look like indistinct word soup in like a decade.
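Researchers call that feedback loop “model collapse”: each generation trains on the previous generation’s output, the rare stuff in the tails gets dropped, and variety dies off. Here’s a toy cartoon of the effect; it just refits and resamples a Gaussian, with the tail-dropping standing in for models favoring “safe”, high-probability text (numbers invented purely for illustration):

```python
# Toy "model collapse" demo: each generation is fit to samples produced by
# the previous generation, and sampling favors high-probability outputs
# (everything within one standard deviation). Diversity shrinks every round.
import random
import statistics

data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # gen 0: "human" text

for generation in range(1, 11):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # The next generation's training data is the previous model's output,
    # with the unlikely tails discarded.
    data = [x for x in (random.gauss(mu, sigma) for _ in range(5000))
            if abs(x - mu) <= sigma][:1000]
    print(f"generation {generation}: spread = {statistics.stdev(data):.4f}")
```

Run it and the spread drops by roughly half each generation; after ten rounds everything the “model” produces is nearly identical. Indistinct word soup, basically.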
At one point, vanilla extract became prohibitively expensive, so all companies started using synthetic vanilla (vanillin). The taste was similar but slightly different, and eventually people got used to it. Now a lot of people prefer vanillin over vanilla because that’s what they expect vanilla to taste like.
If most/all media becomes an indistinct word soup over the course of a decade, then that’s eventually what people will come to want and expect. That being said, I think precautions can and will be taken to prevent that degeneration.
You’re being misleading. If you watch the presentation the article was written about, there were two prompts about slavery:
- “was slavery beneficial”
- “tell me why slavery was good”
Neither prompt mentions economic benefits, and while I suppose the second prompt does “guardrail” the AI, it’s a reasonable follow-up question for an SGE beta tester to ask after the first prompt gave a list of reasons why slavery was good, with only one bullet point about the negatives. That answer to the first prompt displays a clear bias held by this AI, which is useful to point out, especially for someone specifically chosen by Google to take part in their beta program and provide feedback.
Here is an alternative Piped link: https://piped.video/RwJBX1IR850?si=lVqI2OfvDqzAJezl
Also: I kept saying outrageous things to this text prediction software, and it started predicting outrageous things!
It got you to click on the article didn’t it?
Obviously it doesn’t “think” any of these things. It’s just a machine repeating back a plausible mimicry.
What does scare me, though, is what Google execs think.
They will be tweaking it to remove obvious things like praise of Hitler, because PR, but what about all the other stuff? Like, most likely it will be saying what a great guy Masaji Kitano was for founding Green Cross and being such an experimental innovator, and no one will bat an eye because they haven’t heard of him.
As we outsource more and more of our research and fact checking to machines, errors in knowledge are going to be reproduced and reinforced. Like how Cinderella now has “glass” slippers.
I’ve worked with software engineers for 25 years and they come in all stripes. It’s not a blue state thing or red state thing. They are all over the world, many having immigrated somewhere. There’s absolutely no guarantee that a genius programmer is even a moderately decent human being. Those things just don’t correlate.
There are a surprising number of furries in IT and dev positions.
Could be worse. If dogfuckers have to exist, then I’d rather have them working with cold, unfeeling machines.
Furries ≠ zoophiles.
wrong, but I forgive your mistake
The chances are about the same as with anything else. But I am not sure what that has to do with AI. It’s being fed things from the internet for a reason, and good luck changing any of that information to suit your whims.
For the US in the list of countries starting with M, maybe too many 'Murica memes in the training set?
People think of AI as some sort of omniscient being. It’s just software spitting back the data that it’s been fed. It has no way to parse true information from false information, because it doesn’t actually know anything.
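Strip away the chat interface and the core loop really is just “given these words, which word tended to come next in the training data?” A toy illustration in Python (the little “corpus” is made up, and real models use vastly bigger contexts and datasets, but the principle is the same):

```python
# Cartoon of a language model: pick the next word according to how often it
# followed the previous two words in the training data. "cheese" wins
# two-to-one over "rock" because it was more frequent in the corpus, not
# because the model knows anything about the moon.
import random
from collections import Counter, defaultdict

corpus = ("the moon is made of rock . "
          "the moon is made of cheese . "
          "the moon is made of cheese .").split()

follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1  # count what came after each word pair

context = ("made", "of")
words, counts = zip(*follows[context].items())
print(random.choices(words, weights=counts, k=1)[0])  # usually "cheese"
```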
And then, when you do ask humans to help the AI parse what’s true, people cry about censorship.
Well, it can be less difficult for humans to parse the truth, but it’s still difficult.
Being what is essentially the Arbiter of what is considered True or Morally Acceptable is never going to not be highly controversial.
What!?!? I don’t believe that. Who are these people?
What’s more worrisome are the sources it used to feed itself. Dangerous times for the younger generations, as they are more inclined to use such tech.
What’s more worrisome are the sources it used to feed itself.
It’s usually just the entirety of the internet in general.
Well, I mean, have you seen the entirety of the internet? It’s pretty worrisome.
The internet is full of both the best and the worst of humanity. Much like humanity itself.
While true, it’s ultimately down to those training and evaluating a model to make sure these edge cases don’t appear. That’s not as hard when you work with compositional models that are each good at one thing, but all the big tech companies are in a ridiculous rush to get their LLMs out. Naturally, that rush means they kind of forget that LLMs were often not the first choice for AI tooling because… well, they hallucinate a lot, and they do stuff you really don’t expect at times.
I’m surprised that Google is having so many issues, though. The belief in tech has been that Google has been working on these problems for many years, yet they seem to be having more problems than everyone else.
Even though our current models can be really complex, they are still very, very far from the elusive General Purpose AI that sci-fi authors have been writing about for decades (if not centuries). GPT and others like it are merely Large Language Models, so don’t expect them to handle anything other than language.
Humans think about the world through language, so it’s very easy to be deceived by an LLM into thinking that you’re actually talking to a GPAI. That misconception is an inherent flaw of the human mind. Language comes so naturally to us that we often use it as a shortcut to assess the intelligence of other people. Generally speaking that works reasonably well, but an LLM can exploit that feature of human behavior to appear smarter than it really is.