• darcy
    29
    2 years ago

    someone’s never used a good API, like Mastodon’s

  • @[email protected]
    20
    2 years ago

    It’s all fun and games until you have to support all this shit and it breaks weekly!

    That being said, I do miss the simplicity of maintaining selenium projects for work

  • Chemical Wonka
    12
    2 years ago

    Let’s see what WEI (if implemented) will do to the scrapers. The future doesn’t look promising.

  • @[email protected]
    163
    2 years ago

    Everyone loves the idea of scraping, no one likes maintaining scrapers that break once a week because the CSS or HTML changed.
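    A minimal sketch of why that hurts, in Python with only the stdlib (the HTML snippets and the “price” class name are invented for illustration): the scraper is keyed to a class name, so a cosmetic redesign silently breaks it.

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Collects the text inside any tag whose class attribute is 'price'."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("class") == "price":
            self.in_price = True

    def handle_endtag(self, tag):
        self.in_price = False

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())

def scrape_prices(html):
    scraper = PriceScraper()
    scraper.feed(html)
    return scraper.prices

# Works against today's markup...
old = '<ul><li class="price">$9.99</li><li class="price">$4.50</li></ul>'
# ...and returns nothing after a harmless-looking redesign renames the class.
new = '<ul><li class="item-price">$9.99</li></ul>'
```

    The same breakage happens with XPath or CSS selectors: whatever you key on is an implementation detail the site never promised to keep.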

  • @[email protected]
    13
    2 years ago

    Fuck, I think I’ve been doing it wrong and this meme gave me more things to learn than any YouTube video has

  • @[email protected]
    2
    2 years ago

    So uh…as someone who’s currently trying to scrape the web for email addresses to add to my potential client list … where do I start researching this?

    • @[email protected]
      2
      edit-2
      2 years ago

      Step one is learning to code, in any language. Step two is using a library to help with the parsing. HtmlAgilityPack has always been there for me. Don’t use regex.
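      The “don’t use regex” advice can be shown with a small sketch, here in Python’s stdlib rather than C#’s HtmlAgilityPack (the sample HTML is invented): a real parser copes with attribute order, quoting, and casing that a naive regex trips over.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href values from <a> tags, whatever the attribute order or quoting."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # The parser normalizes tag/attribute names to lowercase for us.
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def extract_links(html):
    collector = LinkCollector()
    collector.feed(html)
    return collector.links

# A naive pattern like r'<a href="(.*?)"' misses the second link entirely:
# uppercase tag, extra attribute first, single quotes.
sample = '<a href="/one">1</a> <A TITLE="x" HREF=\'/two\'>2</A>'
```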

    • @[email protected]
      4
      2 years ago

      Start looking into Selenium, probably in Python. It’s one of the easier-to-understand forms of scraping. It’s mainly used for web testing, though you can definitely use it for less… nice purposes.
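      A minimal Selenium sketch aimed at the email-gathering question above. Hedged: the mailtos() and scrape_emails() names are mine (not Selenium API), the URL is whatever you pass in, and the browser part assumes Firefox with geckodriver installed.

```python
def mailtos(hrefs):
    """Pure helper: keep only mailto: links and strip the scheme."""
    return [h[len("mailto:"):] for h in hrefs
            if h and h.startswith("mailto:")]

def scrape_emails(url):
    # Import locally so the helper above works even without Selenium installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()  # needs geckodriver on PATH
    try:
        driver.get(url)
        # Grab every anchor's href, then filter down to email addresses.
        hrefs = [a.get_attribute("href")
                 for a in driver.find_elements(By.TAG_NAME, "a")]
        return mailtos(hrefs)
    finally:
        driver.quit()
```

      Selenium earns its keep on JavaScript-heavy pages; for static HTML, a plain HTTP request plus an HTML parser is much lighter.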

  • @[email protected]
    6
    2 years ago

    I really hope Libreddit switches to scraping, the “Error: Too many request” thing is so annoying, I have to click the redirect button in Libredirect like 20 times until I can actually see a post.

    Still a better experience than Reddit’s official site tho.

  • @[email protected]
    75
    2 years ago

    Just a heads up for companies that think scraping is wrong: if you don’t want info to be scraped, don’t put it on the internet.

  • @[email protected]
    31
    2 years ago

    Hold on, I thought it was supposed to be realism on the virgin’s behalf and ridiculous nonsense on the chad’s behalf:

    All I see is realism on both sides lol

  • @[email protected]
    9
    2 years ago

    Sorry, I’m ignorant in this matter. Why exactly would you want to scrape websites aside from collecting data for ML? What kind of irreplaceable API are you using? Someone please educate me here.

    • @[email protected]
      30
      2 years ago

      API might cost a lot of money for the amount of requests you want to send. API may not include some fields in the data you want. API is rate limited, scraping might not be. API requires agreement to usage terms, scraping does not (though the recent LinkedIn scraping case might weaken that argument.)

      • @[email protected]
        1
        2 years ago

        My understanding is that the result of the LinkedIn case is that you can scrape data you have permission to view, but not access data you weren’t intended to see. The end result is that clickwrap agreements are unenforceable.

      • @[email protected]
        9
        2 years ago

        This kinda reminds me of pirating vs paying. Using an API = you know it will always have the same structure and you’ll get the data you asked for; if it changes, you’ll be notified, or they’ll version their API. There’s usually good documentation, and you can always ask for help.

        Scraping = you have to scout the whole website yourself, keep up with changes to its structure, and make sure they haven’t added ways to block bots (scraping). Error handling is a lot more intense on your end: missing content, hidden content, querying for data. The website may not follow the same standards/structure throughout, so you need checks for when to use x to get y. The data may need multiple requests because, for example, they don’t show all the user settings on one page the way an API call would, or it’s an AJAX page and you have to run JavaScript and click buttons whose id, class, or text info may change, and which may load data on interaction, so you end up emulating the whole webpage.

        So my guess is that scraping is used most often when you only need to fetch simple data structures and are fine with cleaning up the data afterwards: all the text/images on a page, checking whether a page has been updated, or saving the whole page like the Wayback Machine.

        • TheHarpyEagle
          5
          2 years ago

          As someone who used to scrape government websites for a living (with permission, since they’d rather we break their single 100-year-old server than give us a CSV), I can confirm that maintaining scraping scripts is a huge pain in the ass.

          • @[email protected]
            1
            2 years ago

            Ooof, I’m glad you don’t have to do it anymore. I have a customer in the same situation. The company with the site was also OK with it (it was a running joke that “this [bot] is our fastest user”), but it was very sketchy because you had to log in as someone to run the bot. Thankfully they always told us when they made changes, so we were never surprised.

  • @[email protected]
    9
    edit-2
    2 years ago

    So, where can I find the chad scraper for Reddit? They’ve definitely made it harder to track admin shadow bans and removal shenanigans, especially because sites like Reveddit have decided to play ball as if Reddit was acting in good faith in the first place.

    • @[email protected]
      5
      2 years ago

      If you want a chad scraper, look at Pushshift. Reveddit relied on it before Reddit got it taken down.