How do you get a continuous stream of data from these websites without getting stopped? Scraping logic depends on the HTML the web server sends back for each page request; if anything changes in that output, it will most likely break your scraper setup.
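To see how tightly extraction logic is coupled to the markup, here is a minimal sketch using only Python's standard library. The page structure, the `price` class name, and the "redesign" are all hypothetical, invented for illustration:

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Toy scraper keyed to one exact class attribute (an assumption)."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # Extraction logic tied to <span class="price"> and nothing else
        if tag == "span" and dict(attrs).get("class") == "price":
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())
            self.in_price = False

def extract(html):
    parser = PriceScraper()
    parser.feed(html)
    return parser.prices

old_html = '<span class="price">$9.99</span>'
new_html = '<span class="product-price">$9.99</span>'  # hypothetical redesign

print(extract(old_html))  # ['$9.99']
print(extract(new_html))  # [] -- same data on the page, scraper finds nothing
```

The data is still on the page after the "redesign"; only the class name changed, yet the scraper silently returns nothing.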
If you are running a website that depends on continuously updated data from other websites, it can be risky to rely on software alone.
Some of the challenges you should think about:
1. Webmasters keep changing their websites to make them more user friendly and better looking, and each change can break the delicate data-extraction logic of a scraper.
2. IP address blocking: if you keep scraping a website continuously from your office, your IP is going to get blocked by the site's "security guards" one day.
3. Websites are increasingly using more sophisticated ways to serve data, such as Ajax and client-side web service calls, making it ever harder to scrape data from them. Unless you are an expert programmer, you will not be able to get the data out.
4. Think of a situation where your newly launched website has started flourishing, and suddenly the dream data feed you relied on stops. In today's world of abundant alternatives, your users will switch to a service that is still serving them fresh data.
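One practical defense against the last two problems is to validate what a scrape returns before publishing it, so a broken feed fails loudly instead of silently going stale. This is a minimal sketch of that idea; the record shape (`title`, `url` keys) and thresholds are assumptions, not anything prescribed by a particular tool:

```python
def validate_feed(records, min_records=1, required_keys=("title", "url")):
    """Return True only if the scrape produced plausible-looking data.

    A broken selector or a redesigned page typically yields an empty
    result set or records with missing fields, so we check for both.
    """
    if len(records) < min_records:
        return False
    return all(
        all(key in rec and rec[key] for key in required_keys)
        for rec in records
    )

good = [{"title": "Item A", "url": "http://example.com/a"}]
bad = []  # what a silently broken scraper usually returns: nothing

print(validate_feed(good))  # True  -> safe to publish
print(validate_feed(bad))   # False -> alert an operator instead
```

Wiring a check like this into the pipeline turns "users quietly see stale data" into "someone gets paged the day the feed breaks."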
Overcoming these challenges
Let the experts help you: people who have been in this business for a long time and serve clients day in and day out. They run their own servers dedicated to a single job, extracting data. IP blocking is no issue for them, because they can switch servers in minutes and get the scraping exercise back on track. Try such a service and you will see what I mean.
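The core idea behind "switching servers in minutes" is to spread requests across a pool of exit IPs so no single address carries all the traffic. A minimal round-robin sketch, with placeholder proxy addresses that are not real endpoints:

```python
import itertools

class ProxyRotator:
    """Hand out proxies from a pool in round-robin order."""
    def __init__(self, proxies):
        # itertools.cycle repeats the pool indefinitely
        self._pool = itertools.cycle(proxies)

    def next_proxy(self):
        return next(self._pool)

# Placeholder addresses for illustration only
rotator = ProxyRotator(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
assigned = [rotator.next_proxy() for _ in range(4)]
print(assigned)  # the 4th request wraps around to the first proxy
```

A production setup would also retire proxies that start returning blocks or CAPTCHAs, but the rotation itself is this simple.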
Source: http://ezinearticles.com/?Why-Web-Scraping-Software-Wont-Help&id=4550594