How Cloudflare uses generative AI to slow down, confuse, and waste the resources of AI Crawlers and other bots that don’t respect “no crawl” directives.
People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.
Maybe it will learn discretion and what sarcasm is, instead of being a front-loaded Google search of 90% ads and 10% forums. It has no way of knowing if what it’s copy-pasting is full of shit.
I find this amusing. I had a conversation with an older relative who asked about AI because I am “the computer guy” he knows. I explained, basically, how I understand LLMs to operate: that they are pattern matching to guess what the next token should be based on statistical probability (toy sketch of that next-token idea below). I explained that they sometimes hallucinate, or go off on wild tangents because of this, and that they can be really good at aping and regurgitating things, but there is no understanding, just respinning fragments to try to generate a response that pleases the asker.
He observed, “oh we are creating computer religions, just without the practical aspects of having to operate in the mundane world that have to exist before a real religion can get started. That’s good, religions that have become untethered from day to day practical life have never caused problems for anyone.”
Which I found scarily insightful.
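For the “guess the next token from statistics” part of that explanation, here is a toy bigram sketch in Python. The corpus and the generation loop are made up for illustration; a real LLM uses a transformer over subword tokens, but the basic idea of sampling a statistically likely continuation is the same.

```python
import random
from collections import Counter, defaultdict

# Toy corpus, purely illustrative; real models train on vastly more text.
corpus = "the cat sat on the mat and the dog slept on the mat".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_token(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()), k=1)[0]

# Generate a short continuation: no understanding involved, just statistics.
word = "the"
output = [word]
for _ in range(6):
    word = next_token(word)
    output.append(word)
print(" ".join(output))
```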
Oh good.
Now I can add digital jihad by hallucinating AI to the list of my existential terrors.
Thank your relative for me.
Not if we go Butlerian Jihad on them first.
lol, I was gonna say a reverse Butlerian Jihad, but I didn’t think many people would get the reference :p
I mean, this is just designed to thwart AI bots that refuse to follow the robots.txt rules of people who specifically blocked them.
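For context, a well-behaved crawler checks robots.txt before fetching a page. A minimal sketch using Python’s standard urllib.robotparser; the site URL and crawler name here are made up:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical site and crawler name, just to show the check a polite bot performs.
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

page = "https://example.com/some-article"
if rp.can_fetch("ExampleAIBot", page):
    print("robots.txt allows fetching", page)
else:
    print("robots.txt disallows it; a polite crawler stops here")
```

The Cloudflare feature in the article targets bots that skip that check entirely.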
This will only make the models of bad actors who don’t follow the rules worse. You want to sell a good-quality AI model trained on real content instead of misleading AI output? Just follow the rules ;)
Doesn’t sound too bad to me.
deleted by creator
I’m a person.
I don’t want AI, period.
We can’t even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet Judgement Day.
Got enough on my plate dealing with a semi-sentient Olestra stain trying to recreate the Third Reich, as is.
deleted by creator
You must be fun at parties.
deleted by creator
Good job.
What’s next? Calling me a fool for thinking Olestra stains are capable of sentience and telling me that’s not how Olestra works?