
How to Use the Web Scraper Chrome Extension to Extract Web Data: Part 1

This post is about DIY web scraping tools. If you are looking for a fully customizable web scraping solution, you can add your project on CrawlBoard.

Web scraping is becoming a vital ingredient in business and marketing planning, regardless of the industry. There are several ways to scrape the web for useful data depending on your requirements and budget. Did you know that your favorite web browser could also act as a great web scraping tool? You can install the Web Scraper extension from the Chrome Web Store to turn your browser into an easy-to-use data scraping tool. The best part is that you can stay in the comfort zone of your browser while the scraping happens. This doesn’t demand much technical skill, which makes it a good option when you need to do some quick data scraping. Let’s get started with the tutorial.


About the Web Scraper extension

Web Scraper is an extension for the Chrome browser made exclusively for web data scraping. You can set up a plan (called a sitemap) for how to navigate a website and specify the data to be extracted. The scraper will traverse the website according to this setup and extract the relevant data. It lets you export the extracted data to CSV. Multiple pages can be scraped with the tool, making it all the more powerful. It can even extract data from dynamic pages that use JavaScript and AJAX.

What you need:

  • Google Chrome browser
  • A working internet connection

Installation and setup

Install the Web Scraper extension from the Chrome Web Store. After installation, open Google Chrome Developer Tools by pressing F12 (alternatively, right-click anywhere on the page and select ‘Inspect’). In Developer Tools, you will find a new tab named ‘Web Scraper’.

Now let’s see how to use this on a live web page. We will use a site called www.awesomegifs.com for this tutorial. This site contains GIF images, and we will scrape their URLs using the Web Scraper extension.

Step 1: Creating a sitemap

  • Go to http://www.awesomegifs.com/
  • Open developer tools by right clicking anywhere on the screen and then selecting inspect
  • Click on the web scraper tab in developer tools
  • Click on ‘Create new sitemap’ and then select ‘Create sitemap’
  • Give the sitemap a name and enter the URL of the site in the ‘Start URL’ field
  • Click on ‘Create Sitemap’

To scrape multiple pages from a website, we need to understand the pagination structure of that site. You can easily do that by clicking the ‘Next’ button a few times from the homepage. Doing this on Awesomegifs.com revealed that the pages are structured as http://awesomegifs.com/page/1/ , http://awesomegifs.com/page/2/ and so on. To switch to a different page, you only have to change the number at the end of this URL. Now, we need the scraper to do this automatically.

To do this, create a new sitemap with the start URL as http://awesomegifs.com/page/[001-125]. The scraper will now open the URL repeatedly while incrementing the final value each time. This means the scraper will open pages starting from 1 to 125 and scrape the elements that we require from each page.
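If you prefer to see the logic spelled out, here is a minimal Python sketch of what the extension’s range notation effectively does: expand a numeric range into a list of concrete page URLs and visit each one in turn. The function name and URL pattern are illustrative, not part of the extension itself:

```python
def expand_page_range(pattern, start, stop):
    """Expand a numeric range into concrete page URLs, roughly what the
    extension does with a start URL like http://awesomegifs.com/page/[001-125]."""
    return [pattern.format(n) for n in range(start, stop + 1)]

urls = expand_page_range("http://awesomegifs.com/page/{}/", 1, 125)
# urls[0]  -> 'http://awesomegifs.com/page/1/'
# urls[-1] -> 'http://awesomegifs.com/page/125/'
```

This is also why understanding the pagination structure matters: the trick only works when the page number is the sole part of the URL that changes.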

Step 2: Scraping elements

Every time the scraper opens a page from the site, we need to extract some element; in this case, the GIF image URLs. First, you have to find the CSS selector matching the images. You can find the CSS selector by looking at the source of the web page (Ctrl+U). An easier way is to use the selector tool to click and select any element on the screen:

  • Click on the sitemap that you just created, then click on ‘Add new selector’
  • In the selector id field, give the selector a name
  • In the type field, select the type of data that you want extracted
  • Click on the ‘Select’ button and select any element on the web page that you want extracted
  • When you are done selecting, click on ‘Done selecting’

It’s as easy as clicking on an element with the mouse. You can check the ‘Multiple’ checkbox to indicate that the element can be present multiple times on the page and that you want each instance of it scraped.
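To make the idea concrete, here is a small standard-library Python sketch of what an image selector with ‘Multiple’ checked effectively does: walk the page’s HTML and collect the src of every matching <img> element. The sample HTML snippet is made up for illustration:

```python
from html.parser import HTMLParser

class ImageSrcCollector(HTMLParser):
    """Collect the src attribute of every <img> tag on a page,
    similar to an image selector with 'Multiple' checked."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)

# Hypothetical page snippet, a stand-in for a real awesomegifs.com page
html = (
    '<div class="post"><img src="http://awesomegifs.com/a.gif"></div>'
    '<div class="post"><img src="http://awesomegifs.com/b.gif"></div>'
)
parser = ImageSrcCollector()
parser.feed(html)
# parser.srcs -> ['http://awesomegifs.com/a.gif', 'http://awesomegifs.com/b.gif']
```

The extension does the equivalent work inside the browser, so you never have to write this code yourself; the point is only to show what ‘one selector, many matches’ means.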

Now you can save the selector if everything looks good. To start the scraping process, just click on the sitemap tab and select ‘Scrape’. A new window will pop up, visit each page in the loop, and scrape the required data. If you want to stop the scraping process midway, just close this window; you will keep the data extracted until then.

Once you stop scraping, go to the sitemap tab to browse the extracted data or export it to a CSV file. The only downside is that you have to perform the scraping manually each time, since the tool doesn’t have many automation features built in. If you want to scrape data on a large scale, it is better to go with a data scraping service instead of tools like these. In the second part of this series, we will show you how to build a MySQL database from the extracted data. Stay tuned for that!




17 Comments
  • David
    Posted at 19:44h, 14 September Reply

    You do realize that your screen shots are impossible to read right?

    • Jacob Koshy
      Posted at 14:29h, 19 September Reply

      Hi David, thank you for bringing this to our attention! We have fixed the screenshots and they’re legible now.

  • Nick
    Posted at 06:03h, 30 October Reply

Awesome, thank you! Cleared some things up for me. Now I can scrape a wholesaler account!

    • Jacob Koshy
      Posted at 09:51h, 20 February Reply

      Glad you found it useful, Nick.

  • John
    Posted at 12:50h, 16 February Reply

Hi, how do I scrape data from Google Maps using the Web Scraper Chrome extension? Like address, phone number, website URL, etc.
    I’m really struggling with this. I need your help.

  • Artem
    Posted at 04:47h, 19 February Reply

Great feature for pagination! Is it possible to set a pagination step, though? Like if the pagination URL changes by 10 items rather than by 1? Something like “url[001-200;10]”?

  • Mukul Raman
    Posted at 16:59h, 09 May Reply

    Hi,

Do you have any video tutorial on this?

    • Raj Bhatt
      Posted at 09:22h, 16 May Reply

      Hi Mukul, we don’t have a video tutorial yet 🙂

  • John
    Posted at 02:03h, 24 May Reply

Please help. I’m trying to scrape Yellow Pages data. I found a list of 64 pages of stores. I added a selector for business name, address, and phone number. I right-clicked each field and used inspect/copy/copy selector for the name, address, and phone number. I set the scrape URL changing only the end to read pages/[001-064]. I clicked scrape, and to my surprise the only data scraped was for page 001. I checked the multiple checkbox in each selector field (for name, address, and phone). Why did I only get data for the first page? Shouldn’t the scrape tool know that I wanted the same data for each company (30 per page) for all 64 pages? Thanks in advance.

    • Jacob Koshy
      Posted at 10:51h, 24 May Reply

      Hi John, DIY web scraping tools such as this are usually meant to handle simple websites that use traditional navigation systems and coding practices. It appears that the site you are trying to scrape is a bit too complex for this DIY tool. Unfortunately, since these tools are not customizable, you won’t be able to do anything about this. It’s recommended to go with a dedicated web scraping service like ours if you want to overcome the limitations of scraper tools and get uninterrupted data.

    • Darren
      Posted at 11:15h, 19 June Reply

      Hi John, try this fix (it worked for me)
      Go to “edit Metadata” and add the url of each page of the search results to the starting url list. It’s a bit messy, but it did work.

  • akanksha rashmi
    Posted at 14:48h, 28 June Reply

Hi Jacob, I need to scrape a site which requires logging in. I then need to navigate to another link on the same page and scrape that page. Can you please help?

    • Jacob Koshy
      Posted at 15:01h, 28 June Reply

      Hi Akanksha, that’s surely possible, but not with a DIY tool like the one we’ve discussed above. You can reach out via sales@promptcloud.com and our team will assist you with the requirement.

  • Adriza Deo
    Posted at 20:26h, 08 July Reply

    Hi Jacob,

I want to scrape the data of LinkedIn profile members. For example, multiple fields like name, title, location, company name, profile URL, etc. I also want the pagination to happen automatically, i.e., I want the data from pages 1 to 24. But the challenge is that instead of selecting multiple fields, the tool is considering only one field, the one selected last, and it’s not moving to the other pages either. Under the “Start URL” option, I am pasting the link and after that I am also providing [001-024] so that while scraping the data it will move to the other pages. But the pagination is not taking place. Can you help me with it? Thank you.

  • Akhil AR
    Posted at 01:38h, 09 July Reply

    Hi!

    I need to scrape job descriptions from linkedin using web scraper plugin. I can scrape data from profiles but I am not able to scrape the job postings. Can you suggest something?

    • Jacob Koshy
      Posted at 15:56h, 17 July Reply

Hi Akhil, you can try our newly launched job feeds solution for getting the job postings directly from company websites. You can check it out here: https://www.jobspikr.com
