• Lv_InSaNe_vL · 43 points · 2 days ago

    I honestly don’t really see the problem here. This seems to mostly be targeting scrapers.

    For unauthenticated users, you’re limited to public data only and 60 requests per hour; for authenticated users it’s 60k/hr.

    What could you possibly be doing besides scraping that would hit those limits?
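
    For reference, you can ask the API where you stand against those limits; a minimal sketch, assuming the REST `/rate_limit` endpoint (which, as far as I know, doesn’t itself count against the quota):

    ```python
    # Minimal sketch: query GitHub's REST /rate_limit endpoint to see the
    # current budget — per-IP when unauthenticated, per-token when authenticated.
    import json
    import urllib.request

    req = urllib.request.Request(
        "https://api.github.com/rate_limit",
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        core = json.load(resp)["resources"]["core"]

    print(f"limit={core['limit']} remaining={core['remaining']} reset_epoch={core['reset']}")
    ```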

    • @[email protected] · 12 points · 2 days ago

      60 requests per hour per IP could easily be hit from, say, uBlock Origin updating its filter lists in a household with 5-10 devices.
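
      Rough arithmetic, with assumed numbers (how many lists actually live on raw.githubusercontent.com varies by setup):

      ```python
      # Assumed numbers, purely illustrative.
      lists_on_github = 10   # filter lists assumed fetched from raw.githubusercontent.com
      devices = 6            # household devices sharing one public IP

      used = lists_on_github * devices
      print(f"{used} of 60 hourly requests spent on filter updates alone")
      # -> 60 of 60: the shared budget is gone before anyone touches a repo
      ```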

    • @[email protected] (OP) · 28 points · edited · 2 days ago

      You might be behind a shared IP with NAT or CG-NAT that shares the limit with others. You might be fetching files from raw.githubusercontent.com as part of an update system that doesn’t have access to browser credentials, or Git cloning over https:// to avoid having to unlock your SSH key every time, or cloning a repo whose submodules each issue their own requests. An hour is a long time. Imagine you let uBlock Origin update its filter lists, then you git clone something with a few submodules, and so does your coworker; now you’re all blocked for an entire hour.
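
      Putting rough numbers on that scenario (all counts assumed; an HTTPS clone is at least a ref-discovery request plus a pack fetch, and each submodule is its own clone):

      ```python
      # Sketch with assumed counts: an HTTPS clone is at least two requests
      # (GET info/refs + POST git-upload-pack), and each submodule is a
      # separate clone drawn from the same per-IP budget.
      REQUESTS_PER_CLONE = 2    # assumption; real clones can issue more

      submodules = 4                                    # hypothetical repo
      my_clone = (1 + submodules) * REQUESTS_PER_CLONE  # repo + its submodules
      coworker_clone = my_clone                         # same NAT'd IP
      filter_updates = 10                               # assumed uBlock fetches

      total = my_clone + coworker_clone + filter_updates
      print(f"{total} of 60 hourly requests")           # -> 30, in a few minutes
      ```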

    • @[email protected] · 8 points · 2 days ago

      I hit those limits many times when signed out, just scrolling through the code. The front end must be sending off tonnes of background requests.