The client wanted to crawl clothing data from around 100 fashion sites, such as GAP, Macy's, and Nordstrom. The required data points covered product details along with every available variant of each product, such as different colors and sizes.
The client provided us with the list of sources to be crawled and the data points required. The extraction was to be done on a daily basis, which meant a fresh data set had to be delivered every day. Our team set up crawlers to fetch the required data fields from the source sites provided by the client. Since the websites on the list differed in structure and design, this use case came under our site-specific crawl offering. The client needed the extracted data in CSV format, uploaded to their S3 buckets. The initial setup was completed in a few days and the crawlers started delivering data immediately; about 200,000 records were delivered to the client during the first crawl.
The extracted fields included product name, description, specifications, price, and discounts for each color and size variation. A specifically programmed crawler was maintained for each site, since every site in the list had a different structure, and the data was delivered at the client's desired frequency and file format directly to their S3 locations. The data was large in quantity, with 1 million records scraped and delivered in a clean, structured format daily.
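The core of a delivery like this is flattening each crawled product, with its nested color/size variants, into one CSV row per variant. The sketch below illustrates that step under stated assumptions: the product record, its field names, and the `flatten`/`to_csv` helpers are all hypothetical, not the client's actual schema or our production code.

```python
import csv
import io

# Hypothetical product record as a crawler might emit it: one product
# with its color/size variants nested inside. Field names are illustrative.
product = {
    "name": "Crewneck T-Shirt",
    "description": "Classic cotton tee",
    "variants": [
        {"color": "Navy", "size": "M", "price": 19.95, "discount": 0.0},
        {"color": "Navy", "size": "L", "price": 19.95, "discount": 0.1},
        {"color": "White", "size": "M", "price": 17.95, "discount": 0.0},
    ],
}

def flatten(product):
    """Yield one row dict per color/size variant of a product."""
    for v in product["variants"]:
        yield {
            "name": product["name"],
            "description": product["description"],
            "color": v["color"],
            "size": v["size"],
            "price": v["price"],
            "discount": v["discount"],
        }

def to_csv(products):
    """Serialize flattened product records to a CSV string."""
    buf = io.StringIO()
    fields = ["name", "description", "color", "size", "price", "discount"]
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for p in products:
        for row in flatten(p):
            writer.writerow(row)
    return buf.getvalue()

csv_text = to_csv([product])
print(csv_text.count("\n") - 1)  # number of data rows: 3
```

From there, the generated file could be pushed to the client's bucket with a call along the lines of `boto3.client("s3").put_object(Bucket=..., Key=..., Body=csv_text)`, scheduled once per day; bucket name and key layout would follow whatever the client specifies.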