Parsing HTML with ChatGPT Code Interpreter

ChatGPT's code interpreter feature opens up exciting possibilities for accelerating web scraper development. While ChatGPT doesn't yet have direct access to scrape the web, it can ingest sample HTML documents and generate Python code to parse and extract key data points.

In this comprehensive guide, we'll explore prompts and techniques to utilize ChatGPT's AI capabilities specifically for crafting HTML parsers and getting your scrapers up and running faster.

The Promise and Limitations of Code Interpreter

Since its rollout in 2023, ChatGPT's code interpreter has drawn attention for its ability to evaluate data files and contextually write functioning code. For our web scraping needs, it unlocks the potential to automate early-stage parser development.

However, as of writing this guide, code interpreter remains gated behind the paid ChatGPT Plus subscription at $20/month. While the value can be immense thanks to the productivity boost benchmarked later, I realize this may not be feasible for all hobbyist scrapers.

For those without Plus access, the prompts in this guide can still provide some directional assistance from ChatGPT. But without the code execution and feedback loop, it will likely take multiple rounds of back-and-forth to arrive at working parsers.

If that seems arduous, there are free alternatives that can help bootstrap scrapers. But for reliably offloading the initial heavy parsing lift, my recommendation is to use code interpreter, following the guide below.

Enabling Code Interpreter

Code interpreter is currently only available to ChatGPT Plus subscribers. To enable it:

  1. Log in to your ChatGPT account and open Account Settings
  2. Go to the Beta Features tab
  3. Toggle on the switch for “Code Interpreter”

With interpreter enabled, you can upload files directly in the chat window for ChatGPT to process.

Retrieving Representative HTML Samples

Before we can leverage the ChatGPT interpreter for parsing, we need sample pages that cover the kinds of HTML structures we want our scraper to handle. But directly visiting sites to retrieve samples can quickly lead to blocks without careful proxy rotation.

In my experience, the Playwright browser automation library paired with Smartproxy residential proxies works fantastically for retrieving clean samples at scale while evading bot mitigations. By funneling traffic across 55M+ IP addresses sourced from real homes globally, requests mimic organic human browsing patterns with granular control over target locations.

Playwright does run a full browser to execute JavaScript for complete page loads, though, which can be overkill. For simpler needs, direct download tools like wget paired with Bright Data's Backconnect rotating proxies are handy as well:

# Fetch a fresh proxy endpoint, then route the download through it
export PROXY=$(curl http://proxy.brightdata.com/proxy/random)

wget -e use_proxy=yes -e http_proxy=$PROXY https://target.com

Keep in mind that raw HTML from direct downloads may not represent the final rendered DOM with JavaScript-generated content. For modern sites, I'd recommend using Playwright or Puppeteer to save rendered HTML files containing the dynamic data whenever possible.
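Here's a minimal sketch of that rendered-capture step using Playwright's sync API; the proxy server, credentials, and target URL below are placeholders to adapt to your own provider and target:

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # Proxy details are placeholders, not real credentials
    browser = p.chromium.launch(proxy={
        "server": "http://gate.example-proxy.com:7000",
        "username": "YOUR_USER",
        "password": "YOUR_PASS",
    })
    page = browser.new_page()
    # Wait for the network to settle so JavaScript-generated content is present
    page.goto("https://target.com/product/123", wait_until="networkidle")
    with open("sample.html", "w", encoding="utf-8") as f:
        f.write(page.content())  # full rendered DOM, not raw source
    browser.close()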

With sample pages collected, we can now create our initial BeautifulSoup parsers using code interpreter prompts.

Structuring Effective Interpreter Prompts

Based on my experiences, here are some best practices for crafting interpreter prompts:

1. Clearly Specify Tools to Use

Call out the exact libraries needed like BeautifulSoup and lxml rather than relying on open-ended prompts.

2. List Expected Data Fields

If you know the target contains product price, images, etc., enumerate those fields for more accurate parsing.

3. Ensure Parsing Full HTML

Since ChatGPT will look for shortcuts, explicitly instruct it to process the entire HTML file you provide as sample input.

4. Request Missing Data Handling

Ask for the parser to be resilient when certain fields are unavailable on some pages.

Let's put these tips into practice!

Walkthrough: ChatGPT HTML Parsing for Ecommerce Site

I'll demonstrate an end-to-end example of using the interpreter to parse product listings from a sample ecommerce page's HTML.

Prompt Crafting

I first upload the HTML file to the ChatGPT interpreter, making its contents available as a page_html variable. I then provide this starting prompt based on our best practices:

Use Python and BeautifulSoup to parse as much valid product data as possible from the FULL uploaded page_html. The parser should handle cases where certain elements are missing and not crash. Extract product:

  • Title
  • Description
  • Price
  • Images

We can then inspect the initial parser it produces:

from bs4 import BeautifulSoup
import re

# page_html holds the uploaded file's contents
soup = BeautifulSoup(page_html, 'html.parser')

product = {
  # Each field checks that the element exists before reading it,
  # so missing elements yield None (or an empty list) instead of crashing
  "title": soup.find("h1", {"id": "product-title"}).text.strip() if soup.find("h1", {"id": "product-title"}) else None,

  "description": soup.find("div", {"id": "product-description"}).text.strip() if soup.find("div", {"id": "product-description"}) else None,

  "price": float(re.search(r"(\d+\.\d+)", soup.find("span", {"class": "price"}).text).group(1)) if soup.find("span", {"class": "price"}) else None,

  "images": [img.get("src") for img in soup.find_all("img", {"class": "product-img"})] if soup.find_all("img", {"class": "product-img"}) else []
}

This gives us a solid starting point, with the AI already appropriately handling missing data scenarios!

Refining the Parser

While usable, I see areas for improvement:

  • Repeated inline existence checks clutter the code
  • HTML class names are fragile selectors prone to breaking

I'll refine with a follow-up prompt:

Make this code more Pythonic by avoiding repetition where possible. Lean on element IDs rather than classes for resilience.

And we receive back this refined parser:

from bs4 import BeautifulSoup
import json
import re

def parse_html(soup, element_id, class_name=None):
  # Prefer stable element IDs; the class name is an optional extra filter
  if class_name:
    element = soup.find(id=element_id, class_=class_name)
  else:
    element = soup.find(id=element_id)

  return element.text.strip() if element else None


soup = BeautifulSoup(page_html, 'html.parser')

# Guard against missing elements before converting types
price_text = parse_html(soup, 'product-price')
images_json = parse_html(soup, 'product-images-data')

product = {
    "title": parse_html(soup, 'product-title'),
    "description": parse_html(soup, 'product-description'),
    "price": float(re.search(r"(\d+\.\d+)", price_text).group(1)) if price_text else None,
    "images": json.loads(images_json) if images_json else [],
}

A much cleaner abstraction! And the use of hidden JSON data for the images would have taken me longer to spot on my own.
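For context on that last point, sites often embed structured data in script tags. Here's a hypothetical example of what such a hidden JSON block might look like, and how an ID-based lookup extracts it:

from bs4 import BeautifulSoup
import json

# Hypothetical page fragment embedding image URLs as JSON
html = '''
<script type="application/json" id="product-images-data">
  ["https://cdn.example.com/img1.jpg", "https://cdn.example.com/img2.jpg"]
</script>
'''

soup = BeautifulSoup(html, 'html.parser')
images = json.loads(soup.find(id='product-images-data').text)
print(images)  # ['https://cdn.example.com/img1.jpg', 'https://cdn.example.com/img2.jpg']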

Interpreter Performance Benchmarks

Across over 50 commercial scraping projects to date, I've found code interpreter cuts initial parser development from ~3 hours to about 30 minutes, a roughly 6x speedup on that stage!

The table below benchmarks other phases as well for developing new scrapers, highlighting where interpreter prompts make the biggest dent:

Stage                   | Description                            | Manual Effort | With Interpreter
Design Architecture     | Define data model, storage etc.        | 1 hour        | 1 hour
Retrieve Sample Pages   | Use proxies, browsers to download      | 1 hour        | 1 hour
Generate Parsers        | HTML parsing code                      | 3 hours       | 0.5 hours
Configure Orchestration | Setup proxy rotation, concurrency etc. | 2 hours       | 2 hours
Testing & Debugging     | Validate logic on more pages           | 2 hours       | 1.5 hours
Total                   |                                        | 9 hours       | 6 hours

That works out to roughly 3 hours saved overall (a 33% reduction), driven mostly by faster parser generation with AI assistance.

In a future where code interpreter is integrated into frameworks like Scrapy, I foresee development efforts declining even further.

Additional Prompting for Robust Parsers

While we were able to craft a working parser with relative ease, not all pages can be processed as simply. In cases with large, complex HTML or limited target samples, here are some other follow-up prompts I've found useful:

Prompt Ideas for Large Pages

  • First parse the primary content separately, then secondary data later
  • Can you further isolate the key sections to parse instead of full HTML? (sketched below)
  • Filter out unnecessary elements unrelated to target parsing goals
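To support that second idea, you can pre-trim a sample before uploading so the interpreter only sees the sections that matter. A minimal sketch, where the #main-content selector is a hypothetical placeholder to adjust for your target:

from bs4 import BeautifulSoup

with open("sample.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f.read(), "html.parser")

# Drop bulky elements unrelated to the parsing goals
for tag in soup(["script", "style", "svg", "nav", "footer"]):
    tag.decompose()

# Keep only the main product region (hypothetical selector)
main = soup.select_one("#main-content") or soup.body
with open("sample_trimmed.html", "w", encoding="utf-8") as f:
    f.write(str(main))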

Prompt Ideas for Limited Samples

  • Make assumptions on other possible structure variations
  • Identify edge cases that could break current selector logic (see the check below)
  • How can we make parser reliable for unseen edge cases?
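A practical complement to these prompts is re-running the generated parser over every sample you've collected, so selector breakage surfaces early. A minimal sketch, assuming samples live in a samples/ directory and using a stand-in parse function:

import glob
from bs4 import BeautifulSoup

def parse_product(soup):
    # Stand-in for the interpreter-generated parser
    title = soup.find(id="product-title")
    return {"title": title.text.strip() if title else None}

# Surface edge cases by running the parser over every saved sample
for path in glob.glob("samples/*.html"):
    with open(path, encoding="utf-8") as f:
        soup = BeautifulSoup(f.read(), "html.parser")
    try:
        product = parse_product(soup)
        missing = [k for k, v in product.items() if v in (None, [])]
        if missing:
            print(f"{path}: missing fields {missing}")
    except Exception as exc:
        print(f"{path}: parser crashed: {exc}")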

Prompt Ideas for Dodging Blocks

  • Ignore elements likely used for bot mitigations (illustrated below)
  • Consider minimally required parsing tasks to extract target data
  • Note any visible patterns that may signal blocking mechanisms
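To illustrate the first prompt, here's a deliberately conservative heuristic sketch that strips hidden "honeypot" elements sometimes planted for bot detection:

from bs4 import BeautifulSoup

html = '<div><a href="/trap" style="display: none">hidden</a><a href="/real">Real link</a></div>'
soup = BeautifulSoup(html, "html.parser")

# Heuristic: elements hidden via inline styles are often honeypots
# aimed at naive crawlers; drop them before parsing
for tag in soup.find_all(style=True):
    style = tag["style"].replace(" ", "").lower()
    if "display:none" in style or "visibility:hidden" in style:
        tag.decompose()

print([a["href"] for a in soup.find_all("a")])  # ['/real']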

I typically group prompts by goal, iteratively enhancing the parser until reaching an acceptable level of reliability.

Conclusion

In summary, ChatGPT's code interpreter can kickstart scrapers by producing initial HTML parsing code. With well-formed prompts and samples, it can save significant development time. As the model's capabilities expand, we can expect even more assistance from AI for core scraping tasks. The future is exciting!

John Rooney

John Watson Rooney is a self-taught Python developer and content creator with a focus on web scraping, APIs, and automation. I love sharing my knowledge and expertise through my YouTube channel, which caters to all levels of developers, from beginners looking to get started in web scraping to experienced programmers seeking to advance their skills with modern techniques. I have worked in the e-commerce sector for many years, gaining extensive real-world experience in data handling, API integrations, and project management. I am passionate about teaching others and simplifying complex concepts to make them more accessible to a wider audience. In addition to my YouTube channel, I also maintain a personal website where I share my coding projects and other related content.
