How to Capture Background Requests and Responses in Playwright?

Web scraping using Playwright and Python allows us to not only load and parse the DOM of web pages but also capture and inspect network requests made in the background. Understanding these background requests is crucial for robust web scraping.

In this comprehensive guide, we'll cover:

  • What are background requests?
  • Why capture background requests and responses?
  • How to intercept requests and responses in Playwright
  • Extracting data from background requests
  • Modifying requests on the fly
  • Blocking requests to reduce bandwidth

What are Background Requests?

When a webpage loads in the browser, we normally only see the HTML, CSS, and JavaScript that constructs the DOM and renders the UI we interact with.

But behind the scenes, the browser makes additional network requests to fetch resources like images, scripts, stylesheets, fonts, and data from APIs. These requests happen asynchronously in the background without blocking the main UI.

Common examples include:

  • XHR/Fetch requests to REST or GraphQL APIs to load data.
  • Requests to ad networks and analytics services.
  • Polling for real-time updates from the server.
  • Downloading assets like images and videos.

These types of background requests are essential for modern dynamic web apps. But they often go unnoticed when simply inspecting the loaded DOM.

Why Capture Background Requests and Responses?

There are several key reasons we may want to intercept and analyze background requests when web scraping:

  • Extracting API Data – Modern sites and apps use APIs to load data asynchronously after the initial page load. Capturing and parsing these API responses allows scraping dynamic content.
  • Debugging and Reverse Engineering – Inspecting background requests helps us understand how the site works and identify client-side dependencies.
  • Bandwidth Reduction – Blocking unnecessary requests improves efficiency and reduces bandwidth usage.
  • Anti-Scraping Circumvention – Some sites try to detect bots by fingerprinting their background requests. Modifying headers or blocking requests can help bypass these protections.
  • Session Management – Background requests often carry session cookies and authorization tokens required to scrape content across multiple pages.

Overall, having visibility into background requests unlocks possibilities that are not available by only analyzing the initial HTML DOM. Next, let's see how to achieve this in Playwright.

How to Intercept Requests and Responses in Playwright

Playwright provides a flexible API for intercepting requests and responses: event handlers registered with page.on() observe traffic, while page.route() can modify it on the fly. Here is an example Python script to intercept requests and responses in Playwright:

from playwright.sync_api import sync_playwright

def intercept_request(request):
  print("Intercepted request:", request.url)

def intercept_response(response):
  print("Intercepted response:", response.url, response.status)

with sync_playwright() as p:
  browser = p.chromium.launch()
  page = browser.new_page()

  # Log every outgoing request
  page.on("request", intercept_request)

  # Log every incoming response
  page.on("response", intercept_response)

  page.goto("https://example.com")
  browser.close()

The page.on() method registers callback handlers that get invoked for every request and response, letting us inspect traffic as it flows. Some key points:

  • Works for all HTTP requests, including XHR, Fetch, documents, images, and other assets (WebSockets have a separate page.on("websocket") event).
  • Can introspect headers, post data, status codes, and more.
  • Handlers are observers only: to change headers or a body, or to abort a request, register a route with page.route() instead.
  • Simple synchronous API (an async variant is also available).
  • Can track redirect chains via request.redirected_from.

This basic request/response interception unlocks many possibilities!
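A common first step is to ignore static assets and watch only XHR/Fetch traffic. Here is a minimal sketch of such a filter; `FakeRequest` is a hypothetical stand-in for Playwright's Request object so the example runs without launching a browser:

```python
# Only requests tagged "xhr" or "fetch" are API-style traffic
API_TYPES = ("xhr", "fetch")

def is_api_request(request):
    # Playwright tags each request with a resource_type such as
    # "document", "image", "xhr", or "fetch"
    return request.resource_type in API_TYPES

def log_api_calls(request):
    if is_api_request(request):
        print(request.method, request.url)

class FakeRequest:
    # Hypothetical stand-in for playwright's Request, for demonstration
    def __init__(self, resource_type, method, url):
        self.resource_type = resource_type
        self.method = method
        self.url = url

log_api_calls(FakeRequest("xhr", "GET", "https://example.com/api/items"))
log_api_calls(FakeRequest("image", "GET", "https://example.com/logo.png"))
```

In a real script you would register the handler with `page.on("request", log_api_calls)` and drop the fake class.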

Extracting Data from Background Requests

Once we can intercept network requests, we can parse and extract data from responses to API calls that fetch dynamic content. For example, say an e-commerce product page makes an XHR request to /api/reviews to load reviews. We can capture that in the response handler:

def handle_reviews_api(response):
  if "/api/reviews" in response.url:
    reviews = response.json() # extract reviews from the JSON body

Some common cases where background requests contain scrapeable data:

  • APIs that return JSON/GraphQL responses.
  • Endpoints for sorting, filtering, and pagination.
  • User profile and activity feeds.
  • Search suggestions and autocompletion.
  • Shopping cart and checkout updates.

Analyzing these requests allows scraping data that is not even present in the HTML DOM!
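Putting the above together, a handler can accumulate results from the review endpoint across pages. This is a sketch under the same hypothetical /api/reviews endpoint; `FakeResponse` stands in for Playwright's Response object so it runs without a live page:

```python
import json

reviews = []

def collect_reviews(response):
    # Match only the review API endpoint from the example above
    if "/api/reviews" in response.url:
        data = response.json()  # Playwright responses expose .json()
        reviews.extend(data.get("reviews", []))

class FakeResponse:
    # Hypothetical stand-in for playwright's Response, for demonstration
    def __init__(self, url, body):
        self.url = url
        self._body = body

    def json(self):
        return json.loads(self._body)

collect_reviews(FakeResponse(
    "https://shop.example.com/api/reviews?page=1",
    '{"reviews": [{"rating": 5, "text": "Great product"}]}',
))
print(reviews)
```

With a real page, registering `page.on("response", collect_reviews)` before navigating fills the list as the site paginates.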

Modifying Requests on the Fly

Intercepting requests also allows modifying them before they are sent. In Playwright this is done with page.route(), whose handler can continue, modify, or abort each matching request. This is useful for:

  • Adding/removing headers: We can add new headers or strip identifiable ones like the user-agent.
  • Changing POST data: Modify API request bodies and parameters.
  • Blocking requests: Call route.abort() to block the request entirely.

Some examples:

# Route handler that can modify or block requests
def handle_route(route, request):
  # Block request
  if request.resource_type == "image":
    return route.abort()

  # Add a custom header
  headers = {**request.headers, "X-Custom": "foo"}

  # Modify POST data
  if request.method == "POST":
    return route.continue_(headers=headers, post_data='{"updated":"data"}')

  route.continue_(headers=headers)

page.route("**/*", handle_route)

This makes it easy to experiment and reverse engineer what headers and data a website expects.
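Header munging itself is plain Python, so it can be sketched and tested without a browser. The helper below builds the dict you would pass to route.continue_(headers=...); the function name and defaults are illustrative:

```python
def build_headers(original, extra=None, strip=()):
    """Merge extra headers in and drop identifying ones (case-insensitive)."""
    strip = {name.lower() for name in strip}
    headers = {k: v for k, v in original.items() if k.lower() not in strip}
    headers.update(extra or {})
    return headers

result = build_headers(
    {"User-Agent": "HeadlessChrome", "Accept": "*/*"},
    extra={"X-Custom": "foo"},
    strip=("user-agent",),
)
print(result)
```

Keeping this logic in a separate, testable function makes it easier to iterate while reverse engineering what a site expects.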

Blocking Requests to Reduce Bandwidth

An additional benefit of request interception is that we can block requests entirely to resources that are not necessary for the scrape. This reduces bandwidth usage and speeds up page processing.

Some common examples of requests that can be blocked without affecting the core page content:

  • Advertisements, analytics scripts, social widgets
  • Large images, videos, fonts
  • Browser favicon requests

Blocking requests to these unneeded assets optimizes scraping and reduces the load on the target site. Here is sample code to block image requests with page.route():

def block_images(route, request):
  if request.resource_type == "image":
    route.abort() # block request
  else:
    route.continue_() # allow

page.route("**/*", block_images)

Just be careful not to block any requests essential for rendering the page content you want to scrape!
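The blocking decision is often worth pulling out into a predicate that combines resource types with a domain blocklist. A minimal sketch, where the domain list is a hypothetical example:

```python
BLOCKED_TYPES = {"image", "media", "font"}
BLOCKED_DOMAINS = ("doubleclick.net", "google-analytics.com")  # example blocklist

def should_block(resource_type, url):
    # Block heavy asset types outright, plus known tracker domains
    if resource_type in BLOCKED_TYPES:
        return True
    return any(domain in url for domain in BLOCKED_DOMAINS)

print(should_block("image", "https://example.com/banner.png"))       # True
print(should_block("script", "https://google-analytics.com/ga.js"))  # True
print(should_block("xhr", "https://example.com/api/data"))           # False
```

Inside a route handler this becomes `route.abort() if should_block(request.resource_type, request.url) else route.continue_()`, and the blocklist can grow as you audit the site's traffic.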


Capturing and analyzing background network requests unlocks new possibilities for robust web scraping using Playwright and Python. Learning how a site functions behind the scenes by capturing its network traffic is an invaluable skill for web scrapers. Playwright provides great low-level control over requests and responses to support this.

I hope this guide provides a solid overview of the possibilities!

John Rooney

John Watson Rooney is a self-taught Python developer and content creator with a focus on web scraping, APIs, and automation. I love sharing my knowledge and expertise through my YouTube channel, which caters to all levels of developers, from beginners looking to get started in web scraping to experienced programmers seeking to advance their skills with modern techniques. I have worked in the e-commerce sector for many years, gaining extensive real-world experience in data handling, API integrations, and project management. I am passionate about teaching others and simplifying complex concepts to make them more accessible to a wider audience. In addition to my YouTube channel, I also maintain a personal website where I share my coding projects and other related content.
