The client needed content extracted continuously from Brazilian news sites to power their news portal. The source list included popular blogs, news sites, forums, and a few content-bookmarking sites. The required data points were the publication date, author name, title, main text content, and tags.
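The five required data points map naturally onto a simple record type. A minimal sketch, assuming a Python pipeline; the `Article` class name and field names are illustrative, not the client's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    """One extracted news record (hypothetical schema)."""
    published: str                       # publication date, e.g. ISO 8601
    author: str                          # author name
    title: str                           # article title
    content: str                         # main text content
    tags: list[str] = field(default_factory=list)

# Example record from a hypothetical Brazilian news source.
record = Article(
    published="2016-05-10",
    author="Ana Souza",
    title="Exemplo de notícia",
    content="Texto principal da matéria...",
    tags=["economia", "brasil"],
)
print(record.title)
```

Keeping every source's output in one shared record type makes downstream cleaning and formatting uniform, regardless of how each site was parsed.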
Once the client provided the list of source websites and data points, our team started on the project. Because the use case involved news data, crawls had to run at a high frequency, with fresh data sets delivered every day. Since each site on the list had a different structure and design, we built site-specific crawlers and extractors for this case.
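One common way to organize site-specific extraction is a registry that maps each source domain to its own parsing function. This is a sketch of that pattern, not the team's actual code; the domain name and the placeholder parser are assumptions:

```python
from urllib.parse import urlparse

# Registry mapping source domain -> site-specific parse function.
EXTRACTORS = {}

def extractor(domain):
    """Decorator that registers a parse function for one source domain."""
    def register(fn):
        EXTRACTORS[domain] = fn
        return fn
    return register

@extractor("exemplo.com.br")  # hypothetical source site
def parse_exemplo(html):
    # Real extraction logic (CSS selectors, XPath, regexes) would live here,
    # tuned to this one site's structure; this placeholder just trims text.
    return {"title": html.strip()}

def extract(url, html):
    """Dispatch a fetched page to the extractor for its domain."""
    domain = urlparse(url).netloc
    fn = EXTRACTORS.get(domain)
    if fn is None:
        raise KeyError(f"no extractor registered for {domain}")
    return fn(html)

result = extract("https://exemplo.com.br/noticia", "  Título da notícia  ")
print(result)
```

The benefit of this layout is isolation: when one site redesigns its pages, only its registered function needs updating, and the daily crawl schedule is unaffected.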
Once our team finished setting up the web crawlers, the data started flowing in. It was then cleaned, formatted as XML, and uploaded to the client's Dropbox. Delivery volume exceeded 300,000 records per day.
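The cleaned records can be serialized to XML with the standard library alone. A minimal sketch; the element names (`articles`, `article`, `tags`, `tag`) are assumptions, since the source does not specify the delivery schema:

```python
import xml.etree.ElementTree as ET

def to_xml(records):
    """Serialize a list of record dicts into one XML document string."""
    root = ET.Element("articles")
    for rec in records:
        item = ET.SubElement(root, "article")
        for key in ("published", "author", "title", "content"):
            ET.SubElement(item, key).text = rec[key]
        tags = ET.SubElement(item, "tags")
        for tag in rec["tags"]:
            ET.SubElement(tags, "tag").text = tag
    return ET.tostring(root, encoding="unicode")

sample = [{
    "published": "2016-05-10",
    "author": "Ana Souza",
    "title": "Exemplo",
    "content": "Texto principal",
    "tags": ["economia"],
}]
xml_out = to_xml(sample)
print(xml_out)
```

At 300,000+ records per day, a real pipeline would likely batch records into multiple files and stream the serialization rather than build one giant document in memory.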