Web Scraping Amazon Python



Web scraping automates data extraction from websites. In this tutorial, we will build Amazon scrapers for extracting product reviews, listings, and pricing using Python.

Wouldn’t it be great if you could build your own FREE API to get product reviews from Amazon? That’s exactly what you will be able to do once you follow this tutorial using Python Flask, Selectorlib and Requests.

What can you do with the Amazon Product Review API?

An API lets you automatically gather data and process it. Some of the uses of this API could be:

  1. Getting the Amazon Product Review Summary in real-time
  2. Creating a Web app or Mobile Application to embed reviews from your Amazon products
  3. Integrating Amazon reviews into your Shopify store, Woocommerce or any other eCommerce store
  4. Monitoring reviews for competitor products in real-time

The possibilities for automation using an API are endless, so let’s get started.

Why build your own API?

You might be wondering whether Amazon provides an API to get product reviews, and why you would need to build your own.

APIs provided by companies are usually limited, and Amazon is no exception. Amazon no longer lets you get a full list of customer reviews for a product through its Product Advertising API. Instead, it provides an iframe that renders the reviews from its web servers – which isn’t really useful if you need the full reviews.

How to Get Started

In this tutorial, we will build a basic API that scrapes Amazon product reviews using Python and returns the data in real time, including the fields that the Amazon Product Advertising API does not provide.

We will use the API we build as part of this exercise to extract the following attributes from a product review page. (https://www.amazon.com/Nike-Womens-Reax-Running-Shoes/product-reviews/B07ZPL752N/ref=cm_cr_dp_d_show_all_btm?ie=UTF8&reviewerType=all_reviews)

  1. Product Name
  2. Number of Reviews
  3. Average Rating
  4. Rating Histogram
  5. Reviews
    • Author
    • Rating
    • Title
    • Content
    • Posted Date
    • Variant
    • Verified Purchase
    • Number of People Found Helpful

Installing the required packages for running this Web Scraper API

We will use Python 3 to build this API. You just need to install Python 3 from Python’s Website.

We need a few Python packages to set up this real-time API:

  1. Python Flask, a lightweight web framework, will be our API server. We will send our API requests to Flask, which will scrape the web and respond with the scraped data as JSON
  2. Python Requests, to download the Amazon product review pages’ HTML
  3. Selectorlib, a free web scraping tool, to mark up the data that we want to extract

Install all these packages using pip3 in one command:
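```bash
pip3 install flask requests selectorlib
```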

The Code

You can get all the code used in this tutorial from Github – https://github.com/scrapehero-code/amazon-review-api

In a folder called amazon-review-api, let’s create a file called app.py with the code below.

Here is what the code below does:

  1. Creates a web server to accept requests
  2. Downloads a URL and extracts the data using the Selectorlib template
  3. Formats the data
  4. Sends the data back to the requester as JSON
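The full code is in the GitHub repository linked above. As a rough sketch – assuming a selectors.yml template in the same folder, a ?url= query parameter, and an ordinary desktop browser User-Agent header – app.py can look like this:

```python
from selectorlib import Extractor
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

# Load the Selectorlib YAML template that defines the fields to extract
extractor = Extractor.from_yaml_file('selectors.yml')


def scrape(url):
    headers = {
        # A desktop browser User-Agent makes the request look like a normal visit
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                      '(KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36',
        'Accept-Language': 'en-US,en;q=0.9',
    }
    # Download the review page and extract the fields defined in selectors.yml
    response = requests.get(url, headers=headers)
    return extractor.extract(response.text)


@app.route('/')
def api():
    url = request.args.get('url')
    if not url:
        return jsonify({'error': 'Please pass an Amazon review page URL as ?url='}), 400
    return jsonify(scrape(url))


if __name__ == '__main__':
    app.run(debug=True)
```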

Free web scraper tool – Selectorlib

You will notice in the code above that we used a file called selectors.yml. This file is what makes it so easy to scrape Amazon reviews in this tutorial. The magic behind it is a tool called Selectorlib.

Selectorlib is a powerful and easy-to-use tool that makes selecting, marking up, and extracting data from web pages visual and simple. The Selectorlib Chrome Extension lets you mark the data that you need to extract, creates the CSS selectors or XPaths needed to extract that data, and then previews what the extracted data will look like. You can learn more about Selectorlib and how to use it here.

If you just need the data we have shown above, you don’t need to use Selectorlib because we have done that for you already and generated a simple “template” that you can just use. However, if you want to add a new field, you can use Selectorlib to add that field to the template.

Here is how we marked up the fields for all the data we need from the Amazon Product Reviews page using the Selectorlib Chrome Extension.

Once you have created the template, click on ‘Highlight’ to highlight and preview all of your selectors. Finally, click on ‘Export’ and download the YAML file – that file is your selectors.yml.

Here is what our selectors.yml looks like:
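The snippet below is an abbreviated, illustrative version of the template; the complete file is in the GitHub repository linked above, and the exact CSS selectors will need updating whenever Amazon changes its page markup:

```yaml
product_title:
    css: 'h1 a[data-hook="product-link"]'
    type: Text
number_of_reviews:
    css: 'div[data-hook="cr-filter-info-review-rating-count"]'
    type: Text
reviews:
    css: 'div[data-hook="review"]'
    multiple: true
    type: Text
    children:
        author:
            css: 'span.a-profile-name'
            type: Text
        rating:
            css: 'i[data-hook="review-star-rating"] span.a-icon-alt'
            type: Text
        title:
            css: 'a[data-hook="review-title"]'
            type: Text
        content:
            css: 'span[data-hook="review-body"]'
            type: Text
        posted_date:
            css: 'span[data-hook="review-date"]'
            type: Text
```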

Place this selectors.yml in the same folder as your app.py.

Running the Web Scraping API

To run the Flask API, type the following commands into a terminal:
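For example, on macOS or Linux:

```bash
cd amazon-review-api
export FLASK_APP=app.py
flask run
```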

Then you can test the API by opening the following link in a browser or using any programming language.
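With Flask’s default port and the ?url= parameter used in the app.py sketch above, the link looks like this (strictly, the Amazon URL should be URL-encoded when passed as a query parameter):

```
http://localhost:5000/?url=https://www.amazon.com/Nike-Womens-Reax-Running-Shoes/product-reviews/B07ZPL752N/ref=cm_cr_dp_d_show_all_btm?ie=UTF8&reviewerType=all_reviews
```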

Your response should be similar to this:
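The exact keys depend on the names used in selectors.yml; with the sketch above, the response has roughly this shape:

```json
{
  "product_title": "...",
  "number_of_reviews": "...",
  "reviews": [
    {
      "author": "...",
      "rating": "...",
      "title": "...",
      "content": "...",
      "posted_date": "..."
    }
  ]
}
```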

This API should work for scraping Amazon reviews for your personal projects. You can also deploy it to a server if you prefer.

However, if you want to scrape websites at the scale of thousands of pages, learn about the challenges here: Scalable Large Scale Web Scraping – How to build, maintain and run scrapers. If you need help with your web scraping projects or need a custom API, you can contact us.

We can help with your data or automation needs

Turn the Internet into meaningful, structured and usable data



The online retail and eCommerce industry is highly data-driven. Keeping the right data in your stockpile has become a necessity, not just to beat the competition but also to stay in business.

Amazon is one of the most popular and largest online stores. A survey shows there are over 353 million products listed across Amazon’s various marketplaces. Imagine trying to collect details on a particular product from among those. Manual copy-pasting would be a tedious and arduous task. That’s where an automated scraper comes in handy.

So what do we mean by an automated scraper, or web scraping?

Web scraping, or web harvesting, is the process of scouring the web for the required details and delivering the collated information in your preferred format, such as CSV, Excel, or an API. Typically, web scraping relies on a software program – a bot or scraper – that takes the URL provided, makes HTTP requests, parses the HTML page, and accumulates the content.

Benefits of scraping eCommerce websites

Competitive Price Monitoring

In the retail industry, price is the key player. From socks to large appliances like TVs and refrigerators, everything is available online these days. Consumers often compare products online even before deciding to buy, so a comparative study of your competitors always helps you price your products accordingly.

Product Ranking

Customers tend to buy the products that appear at the top of the search results, and Amazon re-ranks its top-selling products on an hourly basis. By collating product listing details, sellers can understand how and why other products rank higher than theirs and work on getting their own products displayed first on the page.

Product Categorisation

“Sapiens: A Brief History of Humankind” can appear under the categories Books, Books > History > World, and Books > Yuval Noah Harari. If a single book can be categorised in three ways, imagine the number of ways your product could be classified. Product categorisation can be improved by understanding the various contexts in which the same product can be sold.

Customer Information Management

Sellers need to know who their buyers are. Accumulating customer information such as name, location, age, and which products are being bought is essential to forming effective market insights. This results in increased sales and builds customer relationships.

Sentiment Analysis

Amazon lets customers voice their feedback on the quality of the product, the delivery, and the seller. A seller can enhance the customer experience by aggregating the reviews that customers leave on the Amazon product page.

To form effective insights like these, you first need the relevant information at hand. Let’s develop a simple crawler to scrape product information from Amazon using Python.

How to scrape Amazon listings using Python

The following code shows how to scrape Amazon product listings using Python.
The original walkthrough uses Python 2.7, but the same approach works in Python 3 with only minor changes.

Prerequisites:

Before going into the actual coding, make sure the following requirements are met.

  • Have Python 2.7 (or Python 3) installed and running on your system.
  • Install the lxml and Requests modules (see the command below).
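Both modules can be installed with pip:

```bash
pip install lxml requests
```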

After installing Python and these modules on your system, follow the steps below.

Let’s keep this a simple crawler bot that scrapes the product listings that appear on a customer search and fetches their links. A consolidated code sketch follows Step 9 below.

Step 1: Import the necessary modules and libraries required for scraping.

Step 2: Create an object to store the session for a particular HTTP request.

Step 3: Create a user-agent object. This identifies the device the request appears to come from (desktop, tablet, or mobile) and makes the request look like an ordinary browser hit.

Step 4: Store the website URL to be scraped in the url object.

Step 5: Pass this url to sess.get() to download the page to be scraped for that particular session and store the result in a variable named “res”.

Step 6: The result is the raw HTML of the page in machine-readable form. All the content fetched is stored in a variable named “data”.

Step 7: The downloaded content is parsed into an element tree using html.fromstring() and stored in a variable named “tree”.

Step 8: The downloaded page is saved to a file – cont.html – using write().

Step 9: On inspecting the HTML page, you will find that the required information sits inside a particular DOM structure. Identify that structure and query the parsed tree for it so that only those contents – the links to the listings – are picked up. These links are then stored in a text file named Links.txt.
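Putting Steps 1 through 9 together, a minimal sketch of the crawler looks like the code below. The search URL, the XPath expression, and the User-Agent string are illustrative assumptions – Amazon’s markup changes frequently, so they will likely need adjusting.

```python
import requests
from lxml import html  # Step 1: modules required for downloading and parsing pages

# Step 2: a session object to reuse the connection across HTTP requests
sess = requests.Session()

# Step 3: a desktop browser User-Agent header so the request looks like a normal browser hit
user_agent = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36'
}

# Step 4: the search results URL to be scraped (an illustrative customer search)
url = 'https://www.amazon.com/s?k=running+shoes'

# Step 5: download the page for this session
res = sess.get(url, headers=user_agent)

# Step 6: the raw HTML content of the page
data = res.text

# Step 7: parse the HTML into an element tree
tree = html.fromstring(data)

# Step 8: save the downloaded page for inspection
with open('cont.html', 'w', encoding='utf-8') as f:
    f.write(data)

# Step 9: pick out the listing links from the DOM structure found by inspecting the page;
# this XPath is illustrative and may need updating
links = tree.xpath('//a[contains(@class, "a-link-normal") and contains(@href, "/dp/")]/@href')
with open('Links.txt', 'w', encoding='utf-8') as f:
    for link in links:
        f.write('https://www.amazon.com' + link + '\n')
```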

The scraped data – the listing links – is stored in a structured text format, one link per line.

Major road-blocks while scraping eCommerce websites

Even though scraping has become simpler with Python, individual retail scraper bots face many hurdles. Scraping eCommerce websites has proved to be more challenging than scraping sites in most other industries.

The following are the key challenges encountered while trying to scrape any retail webpage.

  • Massive dataset
  • Bot Modernization
  • Legal issues
  • Bot bypassing
  • CAPTCHA and IP blocks

Every day, hundreds of thousands of products are added to Amazon’s already enormous catalog. Scraping a specific brand or seller proves to be a prolonged and tiresome process. Moreover, these listings are ranked and updated every hour, so the program that you have written also needs constant enhancements to keep up with the changes.

The number of HTTP requests made to the server is monitored. If many requests come from the same IP address, the site may detect the scraping bot and block that IP from accessing the site. Bots are also usually stopped at CAPTCHA pages.

That’s where scraping services can brighten your business. At Scrapeworks, we take care of all the technical tasks so that you can focus on improving the quality of your operations. Use our retail scraping services to increase your sales.