- cross-posted to:
- [email protected]
- [email protected]
- [email protected]
I mean, they could just do what reddit does and restore from backup automatically lol
If we can’t delete our questions and answers, can we poison the well by uploading masses of shitty questions and answers? If they like AI we could have it help us generate them.
You literally have the same mentality as the coal rollers.
This is tech that could improve life for everyone, and instead of using it to make open source software or to code solutions to problems, you attack it like a crab in a bucket simply because you fear change.
The poison was there all along the way. The poison is us
Inserts spider man meme
Poison the well by using AI-generated comments and answers. There isn’t currently a way to reliably determine if content is human or AI-generated, and training AI on AI is the equivalent of inbreeding.
Stackalabama Exchange
Sounds good then.
Can we change our answers? Change your answers to garbage, don’t delete them. Do it slowly.
If you have low karma, then edits are reviewed by multiple people before the edit is saved. That’s primarily in place to prevent spammers, who could otherwise post a valid question then edit it a few months later, transforming the message into a link to some shitty website.
Even with high karma, that just means your edit is temporarily trusted. It gets reviewed and will be reverted if it’s a bad edit.
And any time an edit is reverted, that’s a knock against your karma. There’s a community enforced requirement for all edits to be a measurable improvement.
Even moderation decisions are reviewed by multiple people - so if someone rejects a post because it’s spam, when they should have rejected it because it’s off topic (or approved it) then that is also going to be caught and undone. And any harmful contribution (edit or moderation decision) will result in your action being undone and your karma going down. If your karma goes down too fast, your access to the site is revoked. If you do something really bad, then they’ll ban your IP address.
Moderators can also lock a controversial post, so only people with high karma can touch it at all.
… keep in mind Stack Overflow doesn’t just allow editing your own posts, you can edit any content on the website, similar to wikipedia.
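To make that flow concrete, here’s a minimal Python sketch of a karma-gated edit queue; the class names, thresholds, and penalties are hypothetical illustrations, not Stack Overflow’s actual values or API.

```python
# Minimal sketch of a karma-gated edit review flow.
# All names, thresholds, and penalties are hypothetical illustrations,
# not Stack Overflow's real rules or API.
from dataclasses import dataclass, field

TRUSTED_KARMA = 2000   # assumed threshold for edits that apply immediately
REVERT_PENALTY = 50    # assumed karma cost when an edit is reverted
SUSPEND_FLOOR = -100   # assumed point at which site access is revoked

@dataclass
class User:
    name: str
    karma: int = 0
    suspended: bool = False

@dataclass
class Post:
    body: str
    pending_edits: list = field(default_factory=list)

def submit_edit(user: User, post: Post, new_body: str) -> None:
    """Low-karma edits wait in a review queue; high-karma edits apply provisionally."""
    if user.karma < TRUSTED_KARMA:
        post.pending_edits.append((user, new_body))
    else:
        post.body = new_body  # still revertible later by reviewers

def review_edit(post: Post, editor: User, new_body: str, approved: bool) -> None:
    """Reviewers either accept the edit or revert it and dock the editor's karma."""
    if approved:
        post.body = new_body
    else:
        editor.karma -= REVERT_PENALTY
        if editor.karma <= SUSPEND_FLOOR:
            editor.suspended = True  # karma dropped too far: access revoked
```

The point of a design like this is that any single bad actor’s damage stays bounded and reversible: bad edits get rolled back, and repeat offenders lose the ability to edit at all.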
It’s honestly a good overall approach, but around when Jeff Atwood left in 2012 it started drifting off course towards the shit show that is Stack Overflow today.
It’s a shame; only corporations are going to benefit from the hard work and labour of so many talented people.
If the Stack Overflow site remains available then it still serves the same purpose it did before. I personally use ad blockers and don’t pay to use the site, which must not be cheap to operate. The bigger problem is if talented people refuse to share their expertise with people like me because they aren’t being compensated for their efforts.
In the article the dude was banned for 7 days for changing his answer.
So wait a few days, then do it slowly.
I’m almost sure the site has already been scraped of its current content for the LLM.
Yup, but that’s not the point IMO, it’s to remove quality content from the site so visitors see how crappy it is and stop using it.
Great idea. Then I’ll turn to ChatGPT for higher quality answers.
Maybe we need a technical questions-and-answers site on the fediverse!
I fully understand why they are doing this, but we are just losing a mass of really useful knowledge. What a shame…
Good to know that Stack Overflow will not be a trustworthy place to find solutions anymore.
While at the same time they forbid AI generated answers on their website, oh the turntables.
I am not deleting anything. They can have all of my poorly written misleading answers.
I’ll just keep asking copilot about the damn exceptions until the effin code works. Na-na-nah!
Begun, the AI wars have.
Faces on T-shirts, you must print. Fake facts into old forum comments, you must edit. Poison the data well, you must.
Anyone care to explain why people would care that they posted to a public forum that they don’t own, with content that is now further being shared for public benefit?
The argument that it’s your content becomes false as soon as you shared it with the world.
Lol it ain’t for public benefit unless it’s a FOSS model with which I’d have no issue
Well no, when you post something it is public and out of your control
No, you can’t post something in public, have it appropriated by a mega corp for money, and then be prevented from deleting or modifying the very things you posted.
I’m pro-AI btw. But AI for all.
You agreed to it
It is your content. But SE specifically only accepts CC licensed content, which makes you right.
It’s not shared for public benefit, though. OpenAI, despite the Open in their name, charges for access to their models. You either pay with money or (meta)data, depending on the model.
Legally, sure. You signed away your rights to your answers when you joined the forum. Morally, though?
People are pissed that SO, which was actively encouraging mods to use AI-detection software to prevent any LLM usage in the posted questions and answers, is now selling the publicly accessible data, made by their users for free, to a closed-source for-profit entity that refuses to open itself up.
Basically the same story as with reddit.
Agreed. As you said it’s a similar situation as with reddit, where I decided to delete my comments.
My reasoning is that those contributions were given under the premise that everybody was sharing to help each other.
Now that premise has changed: the large tech companies are only taking, and the platform providers are changing the rules as well to profit from it.
So as a result I packed my things and left, in case of reddit to here.
That said I think both views are valid and I wouldn’t fault those that think differently.
I can only really speak to reddit, but I think this applies to all of the user generated content websites. The original premise, that everyone agreed to, was the site provides a space and some tools and users provide content to fill it. As information gets added, it becomes a valuable resource for everyone. Ads and other revenue streams become a necessary evil in all this, but overall directly support the core use case.
Now that content is being packaged into large language models to be either put behind a paywall or packed into other non-freely available services. Since they no longer seem interested in supporting the model we all agreed on, I see no reason to continue adding value and since they provided tools to remove content I may as well use them.
But from the very beginning years ago, it was understood that when you post on these types of sites, the data is not yours, or at least you give them license to use it how they see fit. So for years people accepted that, but are now whining because they aren’t getting paid for something they gave away.
This is legal vs. rude. It certainly is legal, and was in the terms of service, for them to use the data in any way they see fit. But it’s also rude to bait and switch from being a message board to being an AI data-source company. Users were led to believe they were entering into an agreement with one type of company and are now in an agreement with a totally different one.
You can smugly tell people they shouldn’t have made that decision 15 years ago when they started, but a little empathy is also cool.
Additionally: When you owe your entire existence and value to user goodwill it might not be a great idea to be rude to them.
I don’t understand what anyone wins from this
Corporations are foundationally evil
And how do they not win more if we poison the entire Internet?
It’s like being in a toxic relationship with kids involved
Set boundaries
Follow rules
Don’t destroy the fucking fruit of your bodies just because you are angry at each other
Fuck those guys, like a lot, for taking the data you gave and selling it
And fuck OpenAI for trying to make money from scientific discoveries meant for all of humanity
But what the fuck with ruining the entire Internet?
Who gets anything then?
If language models will ruin the Internet, why be afraid that normal human responses are available? Wut?
You really don’t need anything near as complex as AI…a simple script could be configured to automatically close the issue as solved with a link to a randomly-selected unrelated issue.
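For the joke’s sake, here’s a self-contained sketch of that script; the in-memory issue list and field names are made-up placeholders, since any real issue-tracker API would look different.

```python
# Tongue-in-cheek sketch: close every open issue as "solved" and point it
# at a randomly chosen, unrelated issue. The data below is invented; a real
# tracker would be driven through its own API instead of a plain list.
import random

issues = [
    {"id": 101, "title": "App crashes on startup", "status": "open"},
    {"id": 102, "title": "Typo in README", "status": "open"},
    {"id": 103, "title": "Feature request: dark mode", "status": "open"},
]

def close_all_as_duplicates(tracker):
    for issue in tracker:
        if issue["status"] != "open":
            continue
        # Pick any other issue, related or not, and "resolve" with it.
        other = random.choice([i for i in tracker if i["id"] != issue["id"]])
        issue["status"] = "closed"
        issue["resolution"] = f"Solved, see issue #{other['id']}"

close_all_as_duplicates(issues)
for issue in issues:
    print(issue["id"], issue["status"], issue.get("resolution"))
```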
So vanilla stack overflow?
That’s the joke
I’m slow.
Based and same-here-often…pilled
Welp.
The primary use for AI is self-destructing your website.
I dunno. AlphaFold 3 is pretty big.
Pleasing tech-illiterate shareholders
Remember when adding the word blockchain to an Iced Tea company’s name caused share prices to jump?
is this real? I can’t tell anymore.
I googled it and I wish it wasn’t
a little-known micro-cap stock called Long Island Iced Tea Corp. (LTEA) said Thursday that it’s now “Long Blockchain Corp.,” and its stock leaped more than 200 percent at the open of trading. Shares closed up 183 percent.
🤦♂️🤦♂️🤦♂️🤦♂️🤦♂️
This is like my friend who “invested” in Doggy (not Doge) coin “because it was going to explode and become highly valuable” even though it was only worth like .1% of what Doge was worth like two years back… He’s a teacher.
Or my other friend that invested thousands in Ethereum like 2 years back, while knowing basically nothing about “The Ethereum Network”, or anything crypto related. He just knew that he could potentially make money off of it like he could with stocks. I asked him like a year later if he ever made anything off of it and he said “not really”, and said he had reinvested the money into other things (I forget which, it wasn’t crypto related) 🤣
For years, the site had a standing policy that prevented the use of generative AI in writing or rewording any questions or answers posted. Moderators were allowed and encouraged to use AI-detection software when reviewing posts. Beginning last week, however, the company began a rapid about-face in its public policy towards AI.
I listened to an episode of The Daily on AI, and the stuff they fed into the engines included the entire Internet. They literally ran out of things to feed it. That’s why YouTube created their auto-generated subtitles - literally, so that they would have more material to feed into their LLMs. I fully expect reddit to be bought out/merged within the next six months or so. They are desperate for more material to feed the machine. Everything is going to end up going to an LLM somewhere.
I think auto-generated subtitles were to fulfil an FCC requirement, some years ago, for content subtitling. It has however turned out super useful for LLM feeding.
Did it really fulfil the requirement?
Like Homer Simpson eating all the food at the buffet
Or when he went to Hell
There really isn’t much in the way of detection. It’s a big problem in schools and universities and the plagiarism detectors can’t sense AI.