How to Scrape Images from a Website

Scraping images from websites is a useful skill for gathering data, building datasets, or just collecting images you find interesting online. With a few lines of Python code, you can easily download all the images from a webpage or entire website.

In this comprehensive guide, I'll explain several methods for scraping images, ranging from simple to more advanced. By the end, you'll have the knowledge to build Python scrapers that can download images from almost any site. Let's get started!

Finding and Downloading Images with Requests and BeautifulSoup

The simplest way to scrape images is to use the Requests library to download a page and then use BeautifulSoup to parse and extract the image elements. Here's a quick example:

import requests
from bs4 import BeautifulSoup

url = ""
response = requests.get(url)

soup = BeautifulSoup(response.text, 'html.parser')
images = soup.find_all('img')

for image in images:
    src = image['src']
    # download image here

BeautifulSoup lets you select elements using CSS selectors like soup.select('img') or traverse the parse tree with methods like soup.find_all('img'). Once you have an image element, you can extract the src attribute to get the full URL. Then use Requests again to download the image and save it locally:

import requests
from pathlib import Path

url = '' 

response = requests.get(url)
img_data = response.content # get image bytes

img_name = url.split('/')[-1] # get filename from url 

filepath = Path(f'images/{img_name}')
filepath.parent.mkdir(exist_ok=True) # make sure images/ exists
filepath.write_bytes(img_data) # write to file

And that's the core of a basic image scraper! Run it on all pages of a site, recursively follow links to scrape new pages, and you can download all images.
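As a sketch of that idea, here is one way a breadth-first crawler could be built on the same Requests and BeautifulSoup setup (the helper names and the max_pages cap are my own, added for illustration):

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

def extract_image_urls(html, base_url):
    """Return absolute URLs for every <img> src on a page."""
    soup = BeautifulSoup(html, 'html.parser')
    return [urljoin(base_url, img['src'])
            for img in soup.find_all('img') if img.get('src')]

def extract_links(html, base_url):
    """Return absolute same-domain links to crawl next."""
    soup = BeautifulSoup(html, 'html.parser')
    domain = urlparse(base_url).netloc
    links = (urljoin(base_url, a['href']) for a in soup.find_all('a', href=True))
    return [link for link in links if urlparse(link).netloc == domain]

def crawl_images(start_url, max_pages=10):
    """Breadth-first crawl that collects image URLs from linked pages."""
    seen, queue, images = {start_url}, [start_url], []
    while queue and len(seen) <= max_pages:
        page = queue.pop(0)
        html = requests.get(page).text
        images.extend(extract_image_urls(html, page))
        for link in extract_links(html, page):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return images
```

Keeping a `seen` set is the important part: without it, two pages that link to each other would send the crawler into an infinite loop.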


Pros:

  • Simple and beginner friendly
  • Works on most basic static sites
  • A good introduction to web scraping


Cons:

  • Slow, downloads images one at a time
  • Easily blocked on large scrapes
  • Won't work on complex JavaScript sites

This approach is great for getting started, but we'll need some more advanced techniques to scrape efficiently at scale.

Scraping Faster with Asyncio

One problem with basic Requests scraping is that it downloads images sequentially. To speed things up, we can scrape asynchronously with asyncio. This example uses the async-capable HTTP client httpx along with asyncio:

import asyncio

import httpx
from bs4 import BeautifulSoup

async def download_image(url):
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        filename = url.split('/')[-1] # get filename from url
        with open(f'images/{filename}', 'wb') as f:
            f.write(response.content) # write image to file

async def main():
    resp = httpx.get('')
    soup = BeautifulSoup(resp.text, 'html.parser')

    image_urls = [img['src'] for img in soup.find_all('img')]

    await asyncio.gather(*[download_image(url) for url in image_urls])

Rather than waiting for each download to complete, we await multiple download_image calls concurrently using asyncio.gather(). This lets Python interleave the downloads instead of running them one at a time. On a site with 100 images, it starts all 100 downloads immediately rather than waiting for each one to finish before starting the next.

Some benchmarks show asyncio can provide 5-10x speedups compared to sequential scraping. This parallel approach is essential when scraping large sites or datasets.


Pros:

  • Drastically faster scraping thanks to concurrency
  • Makes efficient use of network bandwidth
  • Easy to integrate with existing code


Cons:

  • Can still get blocked by sites
  • More complex code and debugging

Asyncio lets you scrape at scale, but many sites will block excessive requests. To handle that, we need proxies…

Using Proxies to Bypass Blocks

Many sites will block or blacklist scrapers that make too many requests too quickly. To bypass these protections, we can route requests through residential proxies using services like Smartproxy.

Smartproxy provides access to millions of rotating proxies with real residential IP addresses around the world. By routing each request through a different proxy, you mask your scraper's origin and bypass IP blocks.

To integrate Smartproxy, you first sign up for an account and get a username and password. Then embed these credentials in an httpx.Proxy() configuration:

from httpx import Proxy

# credentials for Smartproxy username/password access
proxy_auth = Proxy(
    url="http://username:[email protected]:10000"
)

Now you can send requests through the proxy by specifying it as a parameter:

async with httpx.AsyncClient(proxies=proxy_auth) as client:
    response = await client.get(url)

Smartproxy's proxies use residential IP addresses, so each request appears to come from a different real end-user location. This allows you to scrape at scale without getting blocked. Some key benefits of Smartproxy for image scraping:

  • Millions of proxies with high bandwidth for scraping many images
  • Automatic IP rotation prevents detection and blocking
  • Global residential IPs emulate real users for better results
  • Backconnect proxies support JavaScript rendering for dynamic sites

Scraper blocking can be an arms race, so proxies provide a reliable way to circumvent protections. Next, let's look at scraping JavaScript-heavy sites.

Scraping JavaScript Sites with Selenium

Some sites load content dynamically using JavaScript. The initial HTML may contain minimal markup, with images, text, and other content loaded in after the page renders. Since Requests only fetches the initial HTML, it will miss anything that loads later via JavaScript. To scrape these pages, we need a real browser driven by a tool like Selenium that can execute JavaScript.

Here's how to integrate Selenium into an image scraper:

import time

from selenium import webdriver
from bs4 import BeautifulSoup

url = ''

driver = webdriver.Chrome()
driver.get(url)

# Wait for JavaScript to load
time.sleep(5)

html = driver.page_source
soup = BeautifulSoup(html, 'html.parser')

images = soup.find_all('img')
# extract images as normal...

driver.quit() # close the browser when done

The key difference is that Selenium actually loads the full interactive page, waits for JavaScript to run, then grabs the HTML for parsing. This allows it to scrape content that Requests would miss. Some tips:

  • Use headless browser mode to hide Chrome GUI
  • Wait sufficient time for JavaScript to load
  • Close browser to avoid system resource leaks

If a site relies heavily on JavaScript, Selenium with Chrome provides a robust scraping solution.


Pros:

  • Can scrape complex JavaScript-dependent sites
  • Enables scraping of dynamic content
  • Handles site interactions for you


Cons:

  • Slower page load times
  • More complex setup and code
  • Can still get blocked if overused

Selenium provides great capabilities but requires more overhead than Requests. Next, let's compare some higher level tools.

Scrapy vs BeautifulSoup for Scraping Images

There are many libraries that build on top of Requests and BeautifulSoup to make scraping easier. Two popular options are:

  • Scrapy – Full framework for scraping with built-in queues, caching, pipelines, etc.
  • BeautifulSoup – Simple HTML parsing library to extract data

For image scraping, both can work well. Here's a quick comparison:


Scrapy:

  • More robust framework for large scrapers
  • Built-in queues, caches, pipelines for images
  • Easier to scale across sites
  • Steeper learning curve


BeautifulSoup:

  • Simple and lightweight parsing
  • Usually enough for one-off scrapers
  • Integrates easily into existing code
  • Lower overhead to get started


If building a large production scraper, Scrapy is likely the better choice. It comes with batteries included: caching, concurrency, and robust error handling. For small scrapers on a few pages, BeautifulSoup provides an easy way to integrate scraping into your workflow.

Either can work, depending on your use case! The requests + BeautifulSoup approach from the first section is also a great starting point before investing in Scrapy.

Scraping Ethically and Legally

When scraping images, it's important to follow ethical guidelines and legal considerations. Here are a few best practices:

  • Always respect the robots.txt file – don't scrape sites that prohibit it
  • Read a site's Terms & Conditions for usage rights and scraping policies
  • Limit request rate and use delays to reduce burden on sites
  • Avoid scraping private, copyrighted, or offensive content
  • Do not use scraped images commercially without permission
  • Consider using public image aggregators like Flickr, Pixabay, Unsplash instead of scraping sites directly
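The robots.txt rule in particular is easy to automate with the standard library. A small helper (the function name is my own) that checks fetched robots.txt content before scraping a URL:

```python
from urllib.robotparser import RobotFileParser

def robots_allows(robots_txt, url, user_agent='image-scraper'):
    """Parse robots.txt content and report whether url may be fetched."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)
```

In a real scraper you would fetch the site's /robots.txt once, cache the parsed rules, and combine this check with a delay between requests (e.g. time.sleep(1)) to limit load.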

According to Moz, general guidelines are:

  • Images marked public domain or CC0 are safe to use
  • Most sites allow scraping a reasonable volume for personal use
  • Avoid reposting others' images without permission or attribution

For commercial use, always get direct permission from the copyright holder. When in doubt, refrain from using scraped imagery.

While images may seem free to scrape, respecting owners' wishes and copyrights ultimately creates a better web for everyone. For personal projects, stick to public domain images and limit scraping volume to be courteous.


Conclusion

This covers several robust techniques for scraping images from websites using Python:

  • Basic scraping with Requests and BeautifulSoup
  • Asyncio for fast parallel downloads
  • Smartproxy proxies to bypass blocks
  • Selenium browser automation for JavaScript sites

With the right architecture combining these approaches, you can build scrapers to download images from almost any public site. Remember to follow ethical guidelines, limit server load, and respect copyright. Image scraping can be a useful tool for research, personal projects, and datasets, but be sure to use it responsibly.

Let me know if you have any other questions! I'm always happy to chat more about creative ways to leverage web scraping.

John Rooney

John Watson Rooney is a self-taught Python developer and content creator with a focus on web scraping, APIs, and automation. I love sharing my knowledge and expertise through my YouTube channel, which caters to all levels of developers, from beginners looking to get started in web scraping to experienced programmers seeking to advance their skills with modern techniques. I have worked in the e-commerce sector for many years, gaining extensive real-world experience in data handling, API integrations, and project management. I am passionate about teaching others and simplifying complex concepts to make them more accessible to a wider audience. In addition to my YouTube channel, I also maintain a personal website where I share my coding projects and other related content.
