Frequently Asked Questions
How frequently can you extract the data?
Frequency depends on your specific requirements. We can extract data at intervals ranging from every few minutes to once a month.
Can you tell us where each record was found?
It varies from site to site. However, we can generally provide the preceding URL from which we discovered the final page URL.
What happens if a website changes its structure?
While setting up crawlers, we set up automated checkpoints to monitor structural changes. If a site changes its structure, we are notified and fix the crawlers accordingly.
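For the technically curious, here is a minimal sketch of the idea behind such a checkpoint. This is illustrative only, not our actual monitoring system; the URL and CSS selectors are hypothetical placeholders.

```python
# Minimal sketch of a structural-change checkpoint (illustrative only;
# the URL and CSS selectors are hypothetical placeholders).
import requests
from bs4 import BeautifulSoup

EXPECTED_SELECTORS = {
    "title": "h1.product-title",  # hypothetical selector
    "price": "span.price",        # hypothetical selector
}

def broken_fields(url):
    """Return the fields whose expected selectors no longer match anything."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [field for field, sel in EXPECTED_SELECTORS.items() if not soup.select(sel)]

missing = broken_fields("https://example.com/product/123")
if missing:
    print("Possible structural change; fields affected:", missing)
```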
In which formats and through which channels can the data be delivered?
The data can be delivered in XML, JSON, or CSV format. The default delivery mechanism is our RESTful API. We can also push the data to one of your file-sharing servers (FTP, SFTP, Amazon S3, Dropbox, Google Drive, Box, or MS Azure). If you are not very technically inclined, you can simply use the one-click data download option on CrawlBoard.
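As a rough sketch, fetching delivered records over a REST API looks like the following. The endpoint, authentication scheme, and response shape below are hypothetical placeholders, not the documented API; please refer to CrawlBoard for the actual details.

```python
# Hedged sketch: fetching delivered records over a REST API and parsing
# the JSON payload. The base URL, auth header, and response shape are
# hypothetical placeholders.
import requests

API_BASE = "https://api.example.com/v1"  # hypothetical base URL
API_KEY = "your-api-key"                 # hypothetical credential

resp = requests.get(
    f"{API_BASE}/data",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"format": "json"},
    timeout=60,
)
resp.raise_for_status()
for record in resp.json().get("records", []):
    print(record)
```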
Do you handle IP rotation and other blocking issues?
Yes. By default, our platform handles IP rotation and includes mechanisms to deal with other common blocking issues.
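The general idea behind IP rotation can be sketched as a round-robin proxy pool, as below. The proxy addresses are placeholders; our production components work differently and at a much larger scale.

```python
# Rough sketch of IP rotation via a round-robin proxy pool. The proxy
# addresses are placeholders; this is not a production implementation.
import itertools
import requests

PROXY_POOL = itertools.cycle([
    "http://proxy1.example.com:8080",  # hypothetical proxies
    "http://proxy2.example.com:8080",
    "http://proxy3.example.com:8080",
])

def fetch(url):
    """Route each request through the next proxy in the pool."""
    proxy = next(PROXY_POOL)
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=30)
```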
How will I interact with PromptCloud once my project is live?
As a client, you'd have access to our portal, CrawlBoard. This is your centralized portal for technical support, billing, and keeping tabs on crawler activity and stats. You'd also be able to schedule ad-hoc crawls for the future. Error handling happens via our ticketing system.
Do you offer discounts for high data volumes?
Yes! In fact, our offerings are classified by volume. We will also be happy to work out attractive discounts if your monthly data volumes are expected to run into the millions.
Are all delivery mechanisms free?
No. For customized delivery mechanisms (FTP, Amazon S3, Dropbox, or Box), there is an additional fee of $30 per month. Our default delivery mechanism, the PromptCloud Data API, is free.
How much notice do you need to terminate a project?
In most cases, we expect at least a month's notice so that we can release your project-specific resources. Each contract has a specific term with termination and renewal clauses.
Do you crawl sites where crawling is disallowed?
As a crawling company, we respect robots.txt and crawl a site only if bots are allowed in its robots.txt file. If crawling is disallowed in robots.txt, then even though it might be technically feasible, it poses legal issues for us as well as our clients. Where bots are allowed and we deliver data, it is up to the client to conform to the site's Terms of Service regarding usage of that data.
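The compliance check itself is straightforward. Here is a minimal sketch using Python's standard library; the site URL and user-agent string are placeholders.

```python
# Minimal robots.txt compliance check using Python's standard library.
# The site URL and user-agent string are placeholders.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://example.com/robots.txt")
rp.read()

if rp.can_fetch("ExampleBot/1.0", "https://example.com/products/"):
    print("Crawling this path is allowed by robots.txt")
else:
    print("Crawling this path is disallowed; the site is skipped")
```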
Do you have a mechanism to overcome IP blocking?
Yes, we have system components in place to overcome IP blocking.
Will you crawl a site that disallows bots?
No. We respect robots.txt and crawl a site only if bots are allowed in its robots.txt file.
Is web scraping legal?
Yes. For a more detailed discussion, please read our blog post ‘Is Web Scraping Legal?’
What is your billing frequency?
Monthly.
Do you have a referral program?
We have a generous referral program for all of our existing customers. Get up to $100 in credit for every friend you successfully refer, and use it to pay for our data solutions.
How is the monthly bill calculated?
The monthly bill is based on crawling frequency and data volume. For example, if we crawl a site weekly and deliver 50,000 records in a month, the cost for this site on the monthly bill would be $25 (volume fee) + $79 (site maintenance & monitoring fee) = $104 for the month. Note: this assumes a volume fee of $5 per 10k records (prorated) and a monthly site maintenance & monitoring fee of $79 per site.
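The same calculation, expressed as a short Python snippet using the assumed rates from the example above:

```python
# Worked version of the billing example above, using the assumed rates:
# $5 per 10k records (prorated) plus a $79/site monthly maintenance fee.
VOLUME_FEE_PER_10K = 5.00   # assumed volume rate
MAINTENANCE_FEE = 79.00     # assumed monthly maintenance & monitoring fee

def monthly_bill(records_delivered):
    """Prorated volume fee plus the flat per-site maintenance fee."""
    volume_fee = (records_delivered / 10_000) * VOLUME_FEE_PER_10K
    return volume_fee + MAINTENANCE_FEE

print(monthly_bill(50_000))  # 25.0 + 79.0 = 104.0
```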
What does the monthly maintenance and monitoring fee cover?
It covers technical support, the overhead of maintaining the data pipeline and related infrastructure, and fixing crawlers when a target site undergoes structural changes.
Does pricing depend on crawling frequency?
Yes, our pricing plan is based on crawling frequency. Volume charges may also increase with the number of records we deliver, which is often directly related to crawl frequency.
Can I add new sites to an ongoing project?
Yes, new sites can be added at any time. Note that newly added sites incur their own setup fee and crawler setup time.
What payment methods do you accept?
We accept all major credit cards.
Do you offer special pricing for crawling many similar sites?
We may be able to group similar websites and set up crawlers for them together; hence, there is a possibility of offering you a different pricing model.
Can you give a demo of your product?
Ours is a custom solution, so there is no specific piece of software we can demonstrate. The final deliverable is a set of data files in a format you specify. The best we can do is share sample data from past projects that are similar in nature.
Do you offer a free proof of concept?
To provide a proof of concept, we would have to set up the crawlers in their entirety, which is a key step in the whole process. Hence, this is a paid engagement: we provide a 30-day paid PoC for a maximum of two sites.
How is PromptCloud different from other providers?
We deal with large-scale data and operate on a Data as a Service (DaaS) model, so you do not have to be involved in any of the setup or monitoring; we take care of end-to-end data delivery. Our solution has been especially useful for clients who wanted to scale with data but had trouble both crawling at that scale and converting unstructured data into structured data. Beyond that, we have a very low turnaround time, and our setup is capable of delivering data every few minutes from a highly active site.