I wanted to build a Discord bot that would check NIST for new CVEs every 24 hours. But their API leaves quiiiiiiite a bit to be desired.
Their pages, however…
Just use this https://github.com/CVEProject/cvelistV5/tree/main/cves
Oh yeah, that’s much more robust
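Those repo entries are just JSON files, so "scraping" them is really just parsing. A minimal sketch of turning one record into a bot-friendly one-liner — field names follow the CVE JSON 5 schema (`cveMetadata`, `containers.cna.descriptions`), and the sample record here is made up:

```python
import json

# A tiny sample shaped like a cvelistV5 record (all values are invented).
sample = json.dumps({
    "cveMetadata": {"cveId": "CVE-2024-0001", "datePublished": "2024-01-02T00:00:00"},
    "containers": {"cna": {"descriptions": [
        {"lang": "en", "value": "Example buffer overflow in example-daemon."}
    ]}},
})

def summarize(record_json: str) -> str:
    """Turn one CVE v5 record into a one-line summary for a bot message."""
    rec = json.loads(record_json)
    meta = rec["cveMetadata"]
    descs = rec["containers"]["cna"].get("descriptions", [])
    en = next((d["value"] for d in descs if d.get("lang", "").startswith("en")),
              "(no description)")
    return f"{meta['cveId']} ({meta.get('datePublished', '?')[:10]}): {en}"

print(summarize(sample))
# CVE-2024-0001 (2024-01-02): Example buffer overflow in example-daemon.
```

In practice you'd fetch the files from the repo (or `git pull` it on a schedule) instead of hardcoding a sample.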
I use scrapy. It has a steeper learning curve than other libraries, but it’s totally worth it.
Splash ftw
I scrape with bash lord help me.
you scrape WITH BASH?
Awk all the things!
pipe sed pipe grep pipe tr pipe grep… I would say I am a bit of a plumber
as a windows user i say kindly on our behalf thank you for pushing the envelope ✉
there’s literally dozens of us!
or maybe just 2 idk
My undergrad project was a scraper - there just wasn’t a name for it yet.
Scrapers have been a thing for as long as the web has existed.
One of the first search engines is even called WebCrawler
So, where can I find the Chad scraper for reddit? They definitely have made it harder to track admin shadow ban and removal shenanigans, especially because sites like reveddit have decided to play ball as if reddit was acting in good faith in the first place.
If you wanted a chad scraper, look at Pushshift. Reveddit relied on it before Reddit got it taken down.
I’m down with scraping, but “parses HTML with regex” has got me fucked up.
What’s wrong with parsing HTML with regex?
Go and look it up on stack overflow
In short, it’s the wrong tool for the job.
In practice, if your target is very limited and consistent, it’s probably fine. But as a general statement about someone’s behavior, it really sounds like someone is wasting a lot of time and regularly getting sub-par results.
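To make the "wrong tool" point concrete, here's a contrived sketch: a naive regex that works on tidy markup silently returns nothing once the tag spans lines or grows attributes, while the stdlib `html.parser` doesn't care about formatting. The markup is made up for illustration:

```python
import re
from html.parser import HTMLParser

# Same link, just reformatted the way real-world HTML often is.
html = '<a\n  href="/cve/CVE-2024-0001"\n  class="link">details</a>'

# A naive regex that assumes one tidy attribute on one line...
naive = re.findall(r'<a href="([^"]+)">', html)
print(naive)  # [] -- the newline and extra attribute break it

# A real parser tokenizes tags and attributes regardless of layout.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

p = LinkCollector()
p.feed(html)
print(p.links)  # ['/cve/CVE-2024-0001']
```

The regex fails silently rather than loudly, which is exactly why it produces sub-par results at scale.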
13 years ago my god. I wonder what Jon Skeet is doing these days.
I remember when he passed me in the reputation ranking back in the early days and thinking that I needed to be a little bit more active on the site to catch him lol.
That was a great read. Thanks!
This is the way
Ok then make a spotify scraper
How exactly do you make money scraping?
Imagine an investment firm looking at a property market. They need data like price trends in the surrounding area.
Real estate APIs are expensive; scraping is free. By hiring an employee to scrape instead, they can save money.
There’s a ton of money to be made from scraping, consolidating, and organizing publicly accessible data. A company I worked for did it with health insurance policy data because every insurance company has a different website with a different data format and data that updates every day. People will pay da big bux for someone to wrap all that messiness into a neat, consistent package. Many sites even gave us explicit permission to scrape because they didn’t want to set up an api or find some way to send us files.
Right now, gathering machine learning data is hot, cause you need a lot of it to train a model. Companies may specialize in getting, say, social media posts from all kinds of sites and putting them together in a consistent format.
By getting someone to hire you to do it.
No I mean more what is the use case where it would be worth scraping on a massive scale?
When the data is on multiple sites or sources.
API licenses can be expensive, and some sources might not even have an API.
I get the concept, but give me a concrete example. What company could possibly want to pay for scraping a site?
Some dude as a hobby I get it, but what, like Amazon will pay some guy to scrape competition prices or something?
I can’t imagine data scraping is something companies will quickly admit to, considering the legal issues involved. It was also the norm for a long time – APIs for accessing user-generated data are a relatively new thing.
As for a concrete example: companies using chatGPT. A lot of useful data comes from scraping sites that don’t offer an API.
Maybe you’ve got a small company involved in toy buying and reselling, and they want to scrape toy postings from ebay etc. so that they can scroll through a database of different postings and sort it by price or estimated profit or whatever.
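The reselling workflow above is basically "scrape, store, sort." A rough sketch of the storage half using stdlib `sqlite3` — the listings here are invented stand-ins for whatever a scraper would have parsed off the pages:

```python
import sqlite3

# Hypothetical scraped listings: (title, asking price, estimated resale value).
listings = [
    ("Vintage robot", 12.50, 30.00),
    ("Lego bulk lot", 45.00, 80.00),
    ("Board game", 9.99, 12.00),
]

con = sqlite3.connect(":memory:")  # a real tool would use a file on disk
con.execute("CREATE TABLE listings (title TEXT, price REAL, resale_estimate REAL)")
con.executemany("INSERT INTO listings VALUES (?, ?, ?)", listings)

# "Sort by estimated profit" is then just a query.
rows = con.execute(
    "SELECT title, resale_estimate - price AS profit "
    "FROM listings ORDER BY profit DESC"
).fetchall()
for title, profit in rows:
    print(f"{title}: ~${profit:.2f} profit")
```

Once the data is in a table, every "sort by price or whatever" feature is one `ORDER BY` away, which is most of the value of scraping into a database in the first place.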
Mind blowing stuff
someone’s never used a good api. like mastodon
So uh…as someone who’s currently trying to scrape the web for email addresses to add to my potential client list … where do I start researching this?
Step one will be learning to code in any language. Step two is using a library to help with it. HtmlAgilityPack has always been there for me. Don’t use regex.
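In the spirit of "use a library, don't use regex": for email harvesting specifically, one regex-free starting point is pulling addresses out of `mailto:` links with the stdlib `html.parser`. The sample page is made up; in practice you'd fetch real pages first (urllib, requests, etc.):

```python
from html.parser import HTMLParser

class MailtoCollector(HTMLParser):
    """Collect addresses from mailto: links instead of regexing raw HTML."""
    def __init__(self):
        super().__init__()
        self.emails = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value and value.startswith("mailto:"):
                self.emails.append(value[len("mailto:"):])

page = '<p>Contact: <a href="mailto:sales@example.com">sales</a></p>'
c = MailtoCollector()
c.feed(page)
print(c.emails)  # ['sales@example.com']
```

Same idea as HtmlAgilityPack on the .NET side: walk the parsed tag structure, not the raw text.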
Virgin library user vs. Chad regex dev
Start looking into Selenium, probably in Python. It’s one of the easier-to-understand forms of scraping. It’s mainly used for web testing, though you can definitely use it for less… nice purposes.
I used Twitter Scraper to get twitter data for my thesis. Shortly after, it became obsolete.
https://github.com/taspinar/twitterscraper/issues/368 rip twitter scraper
Hold on, I thought it was supposed to be realism on the virgin’s side and ridiculous nonsense on the chad’s side:
All I see is realism on both sides lol
Let me introduce you to WooB (formerly WEBooB).
Why on earth would they have changed that. WEBooB is a way better name.
But it’s got boob in it.
I really hope Libreddit switches to scraping, the “Error: Too many request” thing is so annoying, I have to click the redirect button in Libredirect like 20 times until I can actually see a post.
Still a better experience than Reddit’s official site tho.
Sorry, I’m ignorant in this matter. Why exactly would you want to scrape websites aside from collecting data for ML? What kind of irreplaceable API are you using? Someone please educate me here.
An API might cost a lot of money for the number of requests you want to send. An API may not include some fields in the data you want. An API is rate limited; scraping might not be. An API requires agreement to usage terms; scraping does not (though the recent LinkedIn scraping case might weaken that argument).
This kinda reminds me of pirating vs paying. Using an API = you know it will always have the same structure and you will get the data you asked for. If it changes, you’ll be notified, or they’ll version their API. There is usually good documentation, and you can always ask for help.
Scraping = you need to scout the whole website yourself, keep up to date with the website’s structure, and make sure they haven’t added ways to block bots. Error handling is a lot more intense on your end: missing content, hidden content, failed queries. The website may not follow the same standards/structure throughout, so you need checks for when to use x to get y. The data may need multiple requests, because they don’t show, for example, all the user settings on one page the way an API call would. Or it’s an AJAX page, and you need to run JavaScript and click buttons whose id, class, or text may change, and which only load data when you do x — so you end up emulating the whole webpage.
So my guess is that scraping is used most often when you only need to fetch simple data structures and you are fine with cleaning up the data afterwards. Like all the text/images on a page, checking if a page has been updated or just save the whole page like wayback machine.
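The "checks for when to use x to get y" part tends to look like a fallback chain: try the layout you saw last month, then the one before that, then give up loudly instead of returning garbage. A rough sketch (regex on small known snippets is used here purely to keep it short — per the discussion above, a real scraper would lean on a parser; all three "layouts" are invented):

```python
import re
from typing import Optional

def extract_price(html: str) -> Optional[float]:
    """Try several page layouts in order; sites rarely stay consistent."""
    patterns = [
        r'itemprop="price" content="([\d.]+)"',   # layout A: microdata
        r'class="price">\$([\d.]+)<',             # layout B: plain markup
        r'"price"\s*:\s*([\d.]+)',                # layout C: embedded JSON
    ]
    for pat in patterns:
        m = re.search(pat, html)
        if m:
            return float(m.group(1))
    return None  # caller must handle missing content explicitly

print(extract_price('<span class="price">$19.99</span>'))  # 19.99
print(extract_price('<div>sold out</div>'))                # None
```

Every site redesign means adding another entry to that list — which is exactly why maintaining scrapers is the expensive part.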
As someone who used to scrape government websites for a living (with permission from them cause they’d rather us break their single 100yr old server than give us a csv), I can confirm that maintaining scraping scripts is a huge pain in the ass.
Ooof, I am glad you don’t have to do it anymore. I have a customer who is in the same situation. The company running the site was also ok with it (it was a running joke that “this [bot] is our fastest user”), but it was very sketchy because you had to log in as someone to run the bot. Thankfully they always told us when they made changes, so we were never surprised.
My understanding is that the result of the LinkedIn case is that you can scrape data you have permission to view, but not access data you weren’t intended to see. The end result is that clickwrap agreements are unenforceable.