This is the continuation of our tutorial series on using the Web Scraper Chrome extension to extract data from the web. In the first part, we explained the basics of data scraping with Web Scraper. Once you have scraped enough data, you can close the popup window; this stops the scraping process, and the data scraped so far is cached. You can browse the collected data by clicking the ‘Browse’ option under the ‘Sitemap’ tab. Let’s see what else can be done with the scraped data.
To export the extracted data to a CSV file, you can click on the ‘Sitemap’ tab and then select ‘Export data as CSV’. Click on the ‘Download now’ button and select your preferred save location. Now you should have your scraped data from the website in a CSV file.
The CSV file will have a single column named after our selector id (gif) and one row for each URL scraped.
If you plan to use the collected data on a website, it is convenient to import it into a MySQL table. Now that we have the CSV file containing the scraped data, this can be done with a few lines of code.
Create a new MySQL table with the same structure as our CSV file and name it ‘awesomegifs’. Only two columns are required in this case: an auto-incrementing id column and a column for the URLs. Here is the code for that.
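A minimal sketch of that table, based on the structure described above (the column length is an assumption; adjust it to fit your URLs):

```sql
-- Table for the scraped GIF URLs.
-- 'awesomegifs' is the table name chosen above; the 'gif' column
-- mirrors the selector id from the CSV header.
CREATE TABLE awesomegifs (
  id  INT NOT NULL AUTO_INCREMENT,  -- auto-incrementing primary key
  gif VARCHAR(255) NOT NULL,        -- scraped image URL
  PRIMARY KEY (id)
);
```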
Now all you have to do is execute the SQL command below, after replacing the CSV file path with your own.
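One way to do the import is MySQL’s LOAD DATA INFILE. A hedged sketch, assuming the exported CSV has a header row and a single gif column:

```sql
-- Load the exported CSV into the table; replace the path with yours.
LOAD DATA INFILE '/path/to/awesomegifs.csv'
INTO TABLE awesomegifs
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES    -- skip the header row ('gif')
(gif);            -- map the single CSV column to the gif column
```

Note that depending on your MySQL configuration (the secure_file_priv setting and the FILE privilege), you may need to place the CSV in an allowed directory or use LOAD DATA LOCAL INFILE instead.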
If everything went smoothly, you should have all of the scraped URLs from the CSV file inserted into your MySQL database and ready to be used. That’s it, you just learned to scrape a website with the web scraper chrome extension and even made a MySQL table out of it.
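If you would rather script the import than run LOAD DATA INFILE, the same steps can be sketched in a few lines of Python. This is an illustration, not the tutorial’s own method: it uses the standard library’s csv module, the sample URLs are hypothetical, and SQLite stands in for MySQL so the sketch is self-contained. With a real MySQL database you would swap sqlite3 for a driver such as mysql-connector-python and keep the same insert loop.

```python
import csv
import sqlite3
from io import StringIO

# Same two-column layout as the 'awesomegifs' MySQL table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE awesomegifs ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, "
    "gif TEXT NOT NULL)"
)

# In practice you would read the exported file:
#   with open('awesomegifs.csv', newline='') as f: ...
# Hypothetical sample data stands in for it here.
sample_csv = StringIO(
    "gif\n"
    "http://example.com/a.gif\n"
    "http://example.com/b.gif\n"
)

reader = csv.DictReader(sample_csv)            # uses the 'gif' header row
rows = [(row["gif"],) for row in reader]
conn.executemany("INSERT INTO awesomegifs (gif) VALUES (?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM awesomegifs").fetchone()[0]
print(count)  # number of URLs imported
```

Parameterized inserts (the ? placeholders) also protect you if a scraped URL happens to contain a comma or quote character.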
Now that you know how to set up the Web Scraper extension to crawl and extract image URLs from awesomegifs.com, you can try scraping other sites too. You will first have to spend some time figuring out how to scrape a particular site, since every site is different. Although the selector tool lets you point and choose any element on the web page with a mouse click, it might not always give you the expected results. To scrape more complicated websites, you will also need some programming knowledge. By looking at the page source (CTRL+U), you should be able to find the attributes of your required data in most cases.
After all, no scraping tool can handle every website out of the box. This is the main reason why businesses prefer custom web scraping services over DIY tools like the Web Scraper extension for Chrome.
Web scraping tools aren’t for everyone. They can be a good option if you are a student or hobbyist looking to collect some data without spending much money or learning the complicated technology behind serious, large-scale web scraping. But if you are a business that needs data for competitive intelligence, tools won’t be a reliable option. You are much better off with a dedicated web scraping service that delivers just the data you need, without the associated headaches.