Python for Marketers: Basic web scraper to CSV file

  • What this does: Scrapes pages to get alt tags and page titles, and saves as CSV
  • Requires: Python Anaconda distribution, basic knowledge of Pandas and HTML structure
  • Concepts covered: Basic scraper with BeautifulSoup, Scrape multiple pages, Loops, Export to CSV
  • Download the entire Python file

Python has a lot of great uses for marketers, and one of the coolest and most practical tools is a web scraper.

There are many situations where you may need to collect data quickly from a website and save it in a usable format. One example is gathering image alt or title attributes, which have value for SEO purposes.

In this post, we’ll create a simple web scraper in Python that will collect the alt attributes of images and the title of the page on which they appear.

The scraper uses a library called BeautifulSoup. For a full walkthrough of BeautifulSoup, I'd recommend this tutorial, which provides a really good explanation of how it works.
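If you haven't worked with BeautifulSoup before, here's a minimal sketch of the idea: you hand it some HTML and it gives you objects you can search with find() and find_all(). The markup in the string below is made-up sample HTML, just for illustration.

from bs4 import BeautifulSoup

#Made-up sample HTML to show how BeautifulSoup works
html = """
<h1>Blog Home</h1>
<img class="content-header" src="screen.png" alt="Computer screen">
<img class="logo" src="logo.png" alt="Site logo">
"""

soup = BeautifulSoup(html, 'html.parser')

#find() returns the first matching tag
print(soup.find('h1').get_text())

#find_all() returns every tag that matches the filter
for img in soup.find_all('img', class_='content-header'):
    print(img['alt'])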

Getting started

First, we’ll import our libraries.

from bs4 import BeautifulSoup
import requests
import csv
import pandas as pd

Next, we’ll generate the CSV file.

#Create csv
outfile = open("scrape.csv","w",newline='')

Next, we’ll define the URLs we want to scrape in a list. Note that requests needs full URLs, including the https:// scheme.

#define URLs
urls = ['https://example.com/home', 
        'https://example.com/blog']

Then, we’ll create a blank dataframe.

#define dataframe
df = pd.DataFrame(columns=['pagename','alt'])

Conceptualizing data scraping

Our end goal for the data is to have two columns. The first column will have the page name and the second column will have the alt attribute. So, it should look a little something like this:

pagename     alt
Blog Home    Computer screen
Blog Home    Pie chart
Portfolio    Mountains
Portfolio    Lake
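As a quick sketch, here's what that finished dataframe would look like if you built it by hand in pandas (the page names and alt text are just the example values above, not real scraped data):

import pandas as pd

#The same example rows, built manually to show the target shape
df = pd.DataFrame(
    [['Blog Home', 'Computer screen'],
     ['Blog Home', 'Pie chart'],
     ['Portfolio', 'Mountains'],
     ['Portfolio', 'Lake']],
    columns=['pagename', 'alt'])

print(df)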

So, we can conceptualize the scraping process like this: for each URL in the list, grab the page title from the H1 tag, find every image with the content-header class, and append one row of page title and alt text to the dataframe for each image.

Scraping with BeautifulSoup

Because we’re going to be scraping multiple URLs, we’ll need to create a loop to repeat the steps for each page. Be sure to pay attention to the indents in the code (or download the .py file).

#Loop to get data
for url in urls:
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')

For the page title, we’ll scrape the H1 tag using the find() function. We’ll store its text in a variable for a later step and print it so we can watch the scraper work.

    #print titles
    h1 = soup.find('h1')
    page_title = h1.get_text()
    print(page_title)

Next, we’ll scrape the images and collect the alt attributes. Some images, like the logo, are repeated on every page, so we don’t want to scrape those. Instead, we’ll use .find_all() and only return images with the class “content-header”. Once it finds the images, we’ll print their alt attributes.

Because there may be multiple images on the page, we’ll have to create another loop within the larger loop.

    #print alt attributes
    images = soup.find_all('img', class_='content-header')
    for image in images:
        print(image['alt'])

Here comes the cool part. We’ll store the alt attribute in a variable, pair it with the H1 variable we created earlier, and append both as a new row to the dataframe using pd.concat() (DataFrame.append() was removed in newer versions of pandas). This step repeats each time the inner loop runs, so once for every image on the page with the content-header class.

        alt_attr = image['alt']
        df2 = pd.DataFrame([[page_title, alt_attr]], columns=['pagename', 'alt'])
        df = pd.concat([df, df2], ignore_index=True)

Finally, we’ll save our dataframe to the CSV file we opened at the start and close the file.

#save to CSV
df.to_csv(outfile)
outfile.close()
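If you want to sanity-check the export, you can read the file back into pandas. (to_csv writes the dataframe’s index by default, so we tell read_csv to treat the first column as the index.)

#check the output
check = pd.read_csv('scrape.csv', index_col=0)
print(check)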
