• @[email protected]
    link
    fedilink
    English
    1213 months ago

    Imagine how much power is wasted on this unfortunate necessity.

    Now imagine how much power will be wasted circumventing it.

    Fucking clown world we live in

    • @[email protected]
      link
      fedilink
      English
      173 months ago

      On one hand, yes. On the other… imagine the frustration of the management at companies making and selling AI services. This is such a sweet thing to imagine.

      • @[email protected]
        link
        fedilink
        English
        23 months ago

        I just want to keep using uncensored AI that answers my questions. Why is this a good thing?

        • @[email protected]
          link
          fedilink
          English
          73 months ago

          Because it only harms bots that ignore the “no crawl” directive; your AI remains uncensored.
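
          A minimal sketch of what honoring that directive looks like, using Python’s standard urllib.robotparser; the “ExampleBot” user agent and the rules here are made up for illustration:

          ```python
          # A well-behaved crawler checks robots.txt before every fetch.
          # The rules and the "ExampleBot" user agent below are hypothetical.
          from urllib.robotparser import RobotFileParser

          rules = [
              "User-agent: ExampleBot",   # a site owner singling out one bot
              "Disallow: /",              # ...and telling it to stay out entirely
              "",
              "User-agent: *",
              "Disallow: /private/",
          ]

          rp = RobotFileParser()
          rp.parse(rules)

          # Skipping this one check is exactly the behavior that gets a bot
          # fed decoy content instead of the real site.
          print(rp.can_fetch("ExampleBot", "https://example.com/article"))    # False
          print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
          ```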

          • @[email protected]
            link
            fedilink
            English
            2
            edit-2
            3 months ago

            Good, I ignore that too. I want a world where information is shared. I can get behind the

            • @[email protected]
              link
              fedilink
              English
              153 months ago

              Get behind the what?

              Perhaps an AI crawler crashed Melvin’s machine halfway through the reply, denying that information to everyone else!

              • @[email protected]
                link
                fedilink
                English
                13 months ago

                Capitalist pigs are paying the media to generate AI hatred to help them convince you people to get behind laws that limit info sharing under the guise of IP and copyright.

            • Echo Dot · 7 points · 3 months ago

              That’s not what the nofollow directive means.

        • @[email protected]
          link
          fedilink
          English
          23 months ago

          Because it’s not AI, it’s LLMs, and all LLMs do is guess which word most likely comes next in a sentence. That’s why they are terrible at answering questions and do things like suggest adding glue to the cheese on your pizza, because somewhere in the training data some idiot said that.

          The training data for LLMs comes from the internet, and the internet is full of idiots.
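
          To make the “guess the next word” point concrete, here’s a toy sketch; real LLMs use neural networks over tokens rather than word counts, but the objective, predicting a likely continuation, has the same flavor:

          ```python
          # Toy "next word" predictor: count which word follows which in some
          # sample text, then always emit the most common follower. There is
          # no understanding here, only statistics over the training data.
          from collections import Counter, defaultdict

          corpus = "the cheese on the pizza needs glue said some idiot on the internet"
          words = corpus.split()

          following = defaultdict(Counter)
          for prev, nxt in zip(words, words[1:]):
              following[prev][nxt] += 1

          def next_word(word):
              # Whatever most often followed `word` wins, idiocy included.
              counts = following.get(word)
              return counts.most_common(1)[0][0] if counts else None

          print(next_word("the"))  # "cheese", purely an artifact of the corpus
          ```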

          • @[email protected]
            link
            fedilink
            English
            23 months ago

            That’s what I do too, just with less accuracy and knowledge. I don’t get why I have to hate this. It feels like a bunch of cavemen telling me to hate fire because it might burn the food.

            • @[email protected]
              link
              fedilink
              English
              33 months ago

              Because we have better methods that are easier, cheaper, and less damaging to the environment. They are solving nothing and wasting a fuckton of resources to do so.

              It’s like telling cavemen they don’t need fire because you can mount an expedition to the nearest volcano to cook food without the need for fuel, then bring it back to them.

              The best-case scenario is that the LLM tells you information that’s already available on the internet, but 50% of the time it just makes shit up.

              • @[email protected]
                link
                fedilink
                English
                23 months ago

                Wasteful?

                  Energy production is an issue. Using that energy isn’t. LLMs are a better use of energy than most of the useless shit we produce every day.

                • @[email protected]
                  link
                  fedilink
                  English
                  13 months ago

                  Did the LLMs tell you that? It’s not hard to look up on your own:

                  Data centers, in particular, are responsible for an estimated 2% of electricity use in the U.S., consuming up to 50 times more energy than an average commercial building, and that number is only trending up as increasingly popular large language models (LLMs) become connected to data centers and eat up huge amounts of data. Based on current datacenter investment trends, LLMs could emit the equivalent of five billion U.S. cross-country flights in one year.

                  https://cse.engin.umich.edu/stories/power-hungry-ai-researchers-evaluate-energy-consumption-across-models

                  Far more than straightforward search engines that have the exact same information and don’t make shit up half the time.

    • @[email protected]
      link
      fedilink
      English
      13 months ago

      From the article, it seems like they don’t generate a new labyrinth every single time: “Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval.”
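
      A rough sketch of what such a pipeline might look like; the generator, sanitizer, and storage stub below are hypothetical stand-ins, not Cloudflare’s actual code (R2 is their object-storage product):

      ```python
      # Hypothetical pre-generation pipeline: decoy pages are produced ahead
      # of time, escaped so generated text can never execute as markup (no
      # XSS), and stored for cheap key-lookup serving later.
      import html

      BUCKET = {}  # stand-in for an object store such as R2

      def generate_decoy_page(topic: str) -> str:
          # In the real system this would be LLM output; a constant string
          # keeps the sketch self-contained.
          return f"Plausible but irrelevant facts about {topic}..."

      def sanitize(text: str) -> str:
          # Escaping everything up front is what "sanitizes the content to
          # prevent any XSS vulnerabilities" amounts to in miniature.
          return f"<article>{html.escape(text)}</article>"

      def pregenerate(topics):
          # Done offline, so serving a decoy later is just a lookup: the
          # crawler burns its resources, the site serves from cache.
          for topic in topics:
              BUCKET[f"decoys/{topic}.html"] = sanitize(generate_decoy_page(topic))

      pregenerate(["botany", "numismatics"])
      print(BUCKET["decoys/botany.html"])
      ```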

  • @[email protected]
    link
    fedilink
    English
    15
    edit-2
    3 months ago

    Generating content with AI to throw off crawlers. I dread to think of the resources we’re wasting on this utter insanity now, but hey, who the fuck cares as long as the line keeps going up for these leeches.

  • baltakatei · 5 points · 3 months ago

    Relevant excerpt from part 11 of Anathem (2008) by Neal Stephenson:

    Artificial Inanity

    Note: Reticulum = Internet, syndev = computer, crap ≈ spam

    “Early in the Reticulum—thousands of years ago—it became almost useless because it was cluttered with faulty, obsolete, or downright misleading information,” Sammann said.

    “Crap, you once called it,” I reminded him.

    “Yes—a technical term. So crap filtering became important. Businesses were built around it. Some of those businesses came up with a clever plan to make more money: they poisoned the well. They began to put crap on the Reticulum deliberately, forcing people to use their products to filter that crap back out. They created syndevs whose sole purpose was to spew crap into the Reticulum. But it had to be good crap.”

    “What is good crap?” Arsibalt asked in a politely incredulous tone.

    “Well, bad crap would be an unformatted document consisting of random letters. Good crap would be a beautifully typeset, well-written document that contained a hundred correct, verifiable sentences and one that was subtly false. It’s a lot harder to generate good crap. At first they had to hire humans to churn it out. They mostly did it by taking legitimate documents and inserting errors—swapping one name for another, say. But it didn’t really take off until the military got interested.”

    “As a tactic for planting misinformation in the enemy’s reticules, you mean,” Osa said. “This I know about. You are referring to the Artificial Inanity programs of the mid–First Millennium A.R.”

    “Exactly!” Sammann said. “Artificial Inanity systems of enormous sophistication and power were built for exactly the purpose Fraa Osa has mentioned. In no time at all, the praxis leaked to the commercial sector and spread to the Rampant Orphan Botnet Ecologies. Never mind. The point is that there was a sort of Dark Age on the Reticulum that lasted until my Ita forerunners were able to bring matters in hand.”

    “So, are Artificial Inanity systems still active in the Rampant Orphan Botnet Ecologies?” asked Arsibalt, utterly fascinated.

    “The ROBE evolved into something totally different early in the Second Millennium,” Sammann said dismissively.

    “What did it evolve into?” Jesry asked.

    “No one is sure,” Sammann said. “We only get hints when it finds ways to physically instantiate itself, which, fortunately, does not happen that often. But we digress. The functionality of Artificial Inanity still exists. You might say that those Ita who brought the Ret out of the Dark Age could only defeat it by co-opting it. So, to make a long story short, for every legitimate document floating around on the Reticulum, there are hundreds or thousands of bogus versions—bogons, as we call them.”

    “The only way to preserve the integrity of the defenses is to subject them to unceasing assault,” Osa said, and any idiot could guess he was quoting some old Vale aphorism.

    “Yes,” Sammann said, “and it works so well that, most of the time, the users of the Reticulum don’t know it’s there. Just as you are not aware of the millions of germs trying and failing to attack your body every moment of every day. However, the recent events, and the stresses posed by the Antiswarm, appear to have introduced the low-level bug that I spoke of.”

    “So the practical consequence for us,” Lio said, “is that—?”

    “Our cells on the ground may be having difficulty distinguishing between legitimate messages and bogons. And some of the messages that flash up on our screens may be bogons as well.”

  • @[email protected]
    link
    fedilink
    English
    23 months ago

    Will it actually allow ordinary users to browse normally, though? Their other stuff breaks in minority browsers. Have they tested this well enough to make sure it won’t? (I’d bet not.)

  • @[email protected]
    link
    fedilink
    English
    43 months ago

    People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.

    • @[email protected]
      link
      fedilink
      English
      5
      edit-2
      3 months ago

      Maybe it will learn discretion and what sarcasm is, instead of being a front-loaded Google search of 90% ads and 10% forums. It has no way of knowing whether what it’s copy-pasting is full of shit.

      • @[email protected]
        link
        fedilink
        English
        3
        edit-2
        3 months ago

        I’m a person.

        I don’t want AI, period.

        We can’t even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet judgment day.

        Got enough on my plate dealing with a semi-sentient Olestra stain trying to recreate the Third Reich, as is.

              • @[email protected]
                link
                fedilink
                English
                0
                edit-2
                3 months ago
                1. See someone make a comment about an AI going rogue after being forced to produce too much goblin tentacle porn.
                2. Get way too serious over the factual capabilities of a goblin-tentacle-porn-generating AI.
                3. Act holier-than-thou about it while being completely oblivious to comedic hyperbole.

                Good job.

                What’s next? Call me a fool for thinking Olestra stains are capable of sentience, and that’s not how Olestra works?

    • @[email protected]
      link
      fedilink
      English
      53 months ago

      This will only degrade the quality of models from bad actors who don’t follow the rules. You want to sell a good-quality AI model trained on real content instead of on other misleading AI output? Just follow the rules ;)

      Doesn’t sound too bad to me.

    • @[email protected]
      link
      fedilink
      English
      173 months ago

      I find this amusing. I had a conversation with an older relative who asked about AI because I am “the computer guy” he knows. I explained, basically, how I understand LLMs to operate: that they are pattern matching to guess what the next token should be based on statistical probability. I explained that they sometimes hallucinate or go off on wild tangents because of this, and that they can be really good at aping and regurgitating things, but there is no understanding, simply respinning fragments to try to generate a response that pleases the asker.

      He observed, “Oh, we are creating computer religions, just without the practical aspects of having to operate in the mundane world that have to exist before a real religion can get started. That’s good; religions that have become untethered from day-to-day practical life have never caused problems for anyone.”

      Which I found scarily insightful.

      • @[email protected]
        link
        fedilink
        English
        43 months ago

        Oh good.

        Now I can add digital jihad by hallucinating AI to the list of my existential terrors.

        Thank your relative for me.

          • @[email protected]
            link
            fedilink
            English
            23 months ago

            lol, I was gonna say a reverse Butlerian Jihad, but I didn’t think many people would get the reference :p

    • katy ✨ · 4 points · 3 months ago

      I mean, this is just designed to thwart AI bots that refuse to follow the robots.txt rules of people who specifically blocked them.

  • @[email protected]
    link
    fedilink
    English
    43 months ago

    Joke’s on them. I’m going to use AI to estimate the value of content, and now I’ll get the kind of content I want, fake though it is, that they will have to generate.

  • Rose · 61 points · 3 months ago

    I have no idea why the makers of LLM crawlers think it’s a good idea to ignore bot rules. The rules are there for a reason, and the reasons are often more complex than “well, we just don’t want you to do that”. They’re usually more like “why would you even do that?”

    Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not a bazillion random old page revisions from ages ago is that Wikipedia said “please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)”. Again: why would anyone index those?

    • @[email protected]
      link
      fedilink
      English
      4
      edit-2
      3 months ago

      Because it takes work to obey the rules, and you get less data for it. A theoretical competitor could get more by ignoring them and gain some vague advantage for it.

      I’d not be surprised if the crawlers they used were bare-basic utilities set up to just grab everything, without worrying about rules and the like.

    • Phoenixz · 31 points · 3 months ago

      Because you are coming at this from the perspective of a reasonable person.

      These people are billionaires who expect to get everything for free. Rules are for the plebs; just take it already.

      • @[email protected]
        link
        fedilink
        English
        13 months ago

        That’s what they are saying, though. These shouldn’t be thought of as “rules”; they are suggestions, near-universally designed to point you to the most relevant content. Ignoring them isn’t “stealing something not meant to be captured”, it’s wasting your own infrastructure’s time and resources on something very likely to be useless to you.

    • @[email protected]
      link
      fedilink
      English
      33 months ago

      They want everything. Does it exist, but it’s not in their dataset? Then they want it.

      They want their AI to answer any question you could possibly ask it. Filtering out what is and isn’t useful doesn’t achieve that.

  • @[email protected]
    link
    fedilink
    English
    183 months ago

    So we’re burning fossil fuels and destroying the planet so bots can try to deceive one another on the Internet in pursuit of our personal data. I feel like dystopian cyberpunk predictions didn’t fully understand how fucking stupid we are…

    • @[email protected]
      link
      fedilink
      English
      13 months ago

      They probably knew, but the truth is just boring and it’s funner to dramatize things, haha.

  • fmstrat · 1 point · 3 months ago

    And this, ladies and gentlemen, is how you actually make profits on AI.

  • @[email protected]
    link
    fedilink
    English
    23 months ago

    Why do I have the feeling that I’ll end up in that nightmare with my privacy-focused, ad-free browser setup? I already end up in CAPTCHA hell too often because of it.

  • @[email protected]
    link
    fedilink
    English
    443 months ago

    So the world is now wasting energy and resources to generate AI content in order to combat AI crawlers, by making them waste more energy and resources. Great! 👍

    • @[email protected]
      link
      fedilink
      English
      10
      edit-2
      3 months ago

      The energy cost of inference is overstated. Small models, or “sparse” models like DeepSeek, are not expensive to run. Training is a one-time cost that still pales in comparison to, like, making aluminum.

      Doubly so once inference goes more on-device.

      Basically, only Altman and his tech bro acolytes want AI to be cost prohibitive so he can have a monopoly. Also, he’s full of shit, and everyone in the industry knows it.

      AI as it’s implemented has plenty of enshittification, but the energy cost is kinda a red herring.