Frequently Asked Questions
The frequency depends on your specific requirements. We can extract data at any frequency from once every few minutes to once a month.
It varies from site to site. However, we can generally provide the preceding URL from which we discovered the final page URL.
While setting up crawlers, we set up automated checkpoints to monitor structural changes. If a site changes its structure, we are notified and fix the crawlers accordingly.
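As a rough illustration of what such a checkpoint can look like, here is a minimal sketch that validates extracted records against a set of expected fields. The field names and the dict-based record shape are assumptions for illustration, not our actual schema.

```python
# Hypothetical structural checkpoint: if an extractor suddenly stops
# filling in fields it used to fill, the site's markup has likely changed.
REQUIRED_FIELDS = {"title", "price", "url"}  # assumed example schema

def check_record(record: dict) -> list[str]:
    """Return the names of expected fields that are missing or empty.

    An empty result means the page structure still matches the extractor;
    any missing field suggests the site's markup may have changed and the
    crawler needs attention.
    """
    return sorted(f for f in REQUIRED_FIELDS if not record.get(f))

# A complete record passes; a record missing 'price' would raise an alert.
ok = check_record({"title": "Widget", "price": "19.99", "url": "https://example.com/w"})
drifted = check_record({"title": "Widget", "url": "https://example.com/w"})
```

In practice such checks run automatically on a sample of each crawl's output, so drift is caught before a delivery goes out.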
The data can be delivered in XML, JSON or CSV format. The default delivery mechanism is our RESTful API. We can also push the data to one of your file-sharing servers (FTP, SFTP, Amazon S3, Dropbox, Gdrive, Box or MS Azure). If you're not technically inclined, you can simply use the one-click data download option on CrawlBoard.
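To give a sense of how a delivered file is consumed, here is a minimal sketch that parses a CSV delivery into records using Python's standard library. The column names and sample rows are hypothetical; the actual schema is defined per project.

```python
import csv
import io

# Hypothetical two-row CSV delivery; real deliveries follow the schema
# agreed for your project.
sample_delivery = """product_name,price,crawled_at
Widget,19.99,2024-01-15
Gadget,4.50,2024-01-15
"""

def load_csv_delivery(text: str) -> list[dict]:
    """Parse one CSV delivery into a list of records (one dict per row)."""
    return list(csv.DictReader(io.StringIO(text)))

records = load_csv_delivery(sample_delivery)
```

The same records could equally arrive as JSON or XML; the delivery channel (API, file server, or CrawlBoard download) does not change their content.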
Yes, our platform handles IP rotation by default and includes mechanisms to handle other common blocking issues.
As a client, you'd have access to our portal, CrawlBoard. This is your centralized portal for technical support, billing and keeping tabs on crawler activity and stats. You can also schedule ad-hoc crawls for the future. Error handling happens via our ticketing system.