How to Select Elements by Text in XPath?

XPath text matching allows scrapers to pinpoint elements with surgical precision. By leveraging text contents as selection criteria, new possibilities open up for extracting data from modern web pages.

This in-depth guide will fully equip you to harness the power of text-based XPath selectors.

The Growing Importance of Text Selection in Web Scraping

Web scraping activity has exploded in recent years. By 2025, the global web scraping market size is predicted to surpass $13 billion as businesses rely increasingly on data harvesting from websites.

With this rapid growth comes rising website complexity. The IDs and classes once relied upon for scraping no longer provide the needed specificity. Dynamic JavaScript and unpredictable HTML structures necessitate more advanced selection techniques.

Text matching with XPath directly tackles these challenges. Scrapers can now isolate elements not merely by tag name or attributes, but by the visible text on the page. This is a game changer for handling the dynamism of modern websites.

According to 2022 data from ParseHub, over 65% of advanced web scrapers utilize text-based XPath selection compared to just 22% among beginners. Text matching is becoming essential knowledge.

XPath Text Selection Functions

XPath offers three primary functions for matching elements by text content:

text() – Returns full element text content for exact value matching

contains() – Partial substring matching inside element text

matches() – Regex-based matching with an optional case-insensitive flag (XPath 2.0+)

Let's explore each in detail, including syntax, use cases, and examples.

Exact Matching with text()

text() selects elements where the FULL text value matches exactly. For example:

<p>This is a paragraph</p>

The XPath //p[text()="This is a paragraph"] would match the paragraph above.

Key properties of text():

  • Matches full text value exactly
  • Case sensitive
  • Includes whitespace in text comparison

This makes text() ideal when the precise text contents are known and consistent.
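To see these properties concretely, here is a short sketch using Python with the third-party lxml library (an assumption; the sample markup is hypothetical):

```python
# Exact text() matching: case and whitespace both matter.
from lxml import html

doc = html.fromstring(
    '<div>'
    '<p>This is a paragraph</p>'
    '<p>this is a paragraph</p>'   # differs only in case
    '<p>This is a paragraph </p>'  # trailing whitespace
    '</div>'
)

# Only the first <p> matches the exact comparison.
exact = doc.xpath('//p[text()="This is a paragraph"]')
print(len(exact))  # 1
```

The lowercase and whitespace-padded paragraphs fail the comparison, which is exactly the fragility to keep in mind.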

Some example use cases:

  • Grabbing the page title from the <title> tag
  • Matching exact label text for form elements
  • Identifying sections by consistent headings

Consider this page fragment:

  <ul>
    <li>Apple</li>
    <li>Banana</li>
    <li>Cherry</li>
  </ul>

The XPath //li[text()="Banana"] would match only the middle <li> item. While powerful, text() risks fragility if text contents change. Whenever possible, combine it with other criteria like attributes or sibling position for stability:

  <label class="name">Name:</label>
  <input type="text" name="name">
  <label class="email">Email:</label> 
  <input type="email" name="email">

Here we can safely match labels by both class attribute AND precise text value:

  //label[@class="email" and text()="Email:"]
This takes advantage of text() specificity while avoiding total reliance on text alone.
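A runnable sketch of this combined predicate, again assuming Python with lxml installed:

```python
# Combine a stable attribute with an exact text() comparison.
from lxml import html

doc = html.fromstring(
    '<form>'
    '<label class="name">Name:</label><input type="text" name="name">'
    '<label class="email">Email:</label><input type="email" name="email">'
    '</form>'
)

# Both conditions must hold: the class AND the precise label text.
label = doc.xpath('//label[@class="email" and text()="Email:"]')
print(label[0].text)  # Email:
```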

Partial Matching with contains()

For more flexible text matching, contains(text(), string) allows substring searches:

  <p>The quick brown fox</p>

The XPath //p[contains(text(),"brown")] would successfully locate the paragraph despite “brown” being only part of the text.

Key attributes of contains():

  • Substring matching – finds text anywhere within element
  • Case sensitivity still applies
  • Whitespace included in comparison

This enables matching elements as long as they contain the given text substring. Some useful examples:


  <li>apples</li>
  <li>oranges</li>
  <li>bananas</li>

//li[contains(text(),"an")] – Match <li>oranges</li> and <li>bananas</li>

  <span>Hello World</span>
  <span>Hello Universe</span>
  <span>Goodbye World</span>

//span[contains(text(),"World")] – Locate 1st and 3rd <span>

As with text(), aim for more specific substrings when possible. Generic substrings like “click” or “here” risk unintended matches:

  <p>Click <a href="/login">here</a> to login</p> 

//*[contains(.,"here")] would wrongly select the <p> (and its ancestors) as well, because an element's string value includes all descendant text. Using //a[contains(text(),"here")] is safer.
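The substring behavior of contains() can be verified with a small Python/lxml sketch (lxml assumed installed; markup hypothetical):

```python
# contains() matches anywhere within the element's text node.
from lxml import html

doc = html.fromstring(
    '<div>'
    '<span>Hello World</span>'
    '<span>Hello Universe</span>'
    '<span>Goodbye World</span>'
    '</div>'
)

# The 1st and 3rd spans contain the substring "World".
hits = [s.text for s in doc.xpath('//span[contains(text(),"World")]')]
print(hits)  # ['Hello World', 'Goodbye World']
```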

Case-Insensitive Matching with matches()

One limitation of text() and contains() is case sensitivity. matches() addresses this by allowing regex comparisons, which the "i" flag makes case-insensitive. Note that matches() is an XPath 2.0 function, so it is unavailable in XPath 1.0 engines such as browsers and lxml.

For example:

  <p>Hello World</p>

The XPath //p[matches(text(),"hello world","i")] would match the paragraph regardless of capitalization. Together, the three functions provide a spectrum of exact, partial, and case-insensitive text matching options.

XPath Function   Description
text()           Full-value, case-sensitive matching
contains()       Substring, case-sensitive matching
matches()        Regex matching, optionally case-insensitive (XPath 2.0+)

Choose the one aligned with your specific use case.
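Because matches() is missing from XPath 1.0 engines, case-insensitive matching is commonly emulated with translate(). A sketch, assuming Python with lxml (an XPath 1.0 engine):

```python
# Emulate case-insensitive matching in XPath 1.0 via translate().
from lxml import html

doc = html.fromstring(
    '<div><p>HELLO world</p><p>Hello World</p><p>Goodbye</p></div>'
)

UPPER = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
LOWER = 'abcdefghijklmnopqrstuvwxyz'

# translate() lowercases the text node before comparing, which is the
# classic stand-in for matches(..., "i") from XPath 2.0.
expr = f'//p[contains(translate(text(),"{UPPER}","{LOWER}"),"hello")]'
hits = [p.text for p in doc.xpath(expr)]
print(hits)  # ['HELLO world', 'Hello World']
```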

Comparing Text Selection to Other Techniques

In addition to text, XPath allows element selection by:

  • Tag name
  • Attributes like id or class
  • Position and hierarchy

So when is text matching useful compared to these other XPath selector types?

Advantages of Text Selection

  • No dependency on predictable attributes – Works even without id/class attrs
  • Handles dynamic content – Still functions if attributes are changed by JavaScript
  • Match text mid-element – Not limited to attribute values
  • Readability – Searches for the actual visible text

Disadvantages of Text Selection

  • Fragility – Fails if text content changes on the page
  • Language dependence – Matching relies heavily on spelling, grammar, etc.
  • Accessibility issues – Screen readers may not expose text matching visual content
  • Performance overhead – Text analysis is slower than attribute lookup

For these reasons, text selection works best hand-in-hand with other stable criteria like attributes when available. This balances robustness with the precision of text matching. Some example use cases where text shines:

  • Scraping search result listings by title text
  • Isolating dynamic elements where id/class provide no identifiers
  • Extracting widget values regardless of container id

Text selection grants scrapers capabilities unmatched by any other singular XPath technique. But judicious usage is still key.

Challenges and Risks of Text-Based Selection

The flexibility of XPath's text functions comes with certain challenges to be aware of.

Text Content Fragility

The top risk is simply that website text content changes frequently. What matches today may not tomorrow. Some common causes:

  • Text updates – Product descriptions, labels, messages, etc
  • Localization – Alternate languages deployed
  • SEO optimization – Keyword placement changes
  • Accessibility – Alt text, title tags added

Always prefer leveraging stable attributes over text when possible. Reserve text selection for targeting truly dynamic or unlabeled elements. Combine text matching with other criteria like siblings and hierarchy to improve reliability:

<div class="search-results">
  <p>Item 1</p>
  <p>Item 2</p>
</div>

Rather than //p[contains(text(),"Item")] alone, this is safer:

  //div[@class="search-results"]/p[contains(text(),"Item")]

This ties the <p> selection tightly to a stable <div> container as well.
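The difference between scoped and unscoped matching is easy to demonstrate in Python with lxml (library and sample markup are assumptions):

```python
# Scoping a text match to a stable container avoids stray hits.
from lxml import html

doc = html.fromstring(
    '<div>'
    '<div class="search-results"><p>Item 1</p><p>Item 2</p></div>'
    '<div class="footer"><p>Item archive notice</p></div>'
    '</div>'
)

# Unscoped: also picks up the footer paragraph.
unscoped = doc.xpath('//p[contains(text(),"Item")]')

# Scoped to the container: only the two result items.
scoped = doc.xpath('//div[@class="search-results"]/p[contains(text(),"Item")]')

print(len(unscoped), len(scoped))  # 3 2
```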

Language Dependence

Text matching relies heavily on the verbosity and consistency of human language. Even small text changes break XPath selection:

  • Spelling variations – “Labeled” vs “Labelled”
  • Grammar shifts – “Click here to view products” vs “See products”
  • Differences in whitespace, punctuation, capitalization
  • Word order – “First Name” vs “Name First”

When international sites are involved, matches() case-insensitivity provides some protection. But fundamental language changes still necessitate locator updates.

Dynamic Content

Heavily dynamic sites where text changes hourly or daily require special handling. Simple text selection will fail to keep pace. Possible mitigations:

  • Unique contextual text combined with stable anchors
  • Deduplication checks based on URL or other key after grabbing all text matches
  • Pattern matching rather than hard-coded strings
  • Smart waiting and retry logic to handle text volatility

Tools like Scrapy, Playwright, and Apify have robust features for taming highly dynamic text.
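One of the mitigations above, pattern matching rather than hard-coded strings, can be sketched in Python with lxml plus the stdlib re module (sample data hypothetical):

```python
# Select broadly by structure, then filter text by pattern instead of
# matching a volatile literal string like "Order #1042".
import re
from lxml import html

doc = html.fromstring(
    '<ul>'
    '<li>Order #1042 shipped</li>'
    '<li>Order #1043 pending</li>'
    '<li>Contact support</li>'
    '</ul>'
)

orders = [li.text for li in doc.xpath('//li')
          if re.match(r'Order #\d+', li.text or '')]
print(orders)  # ['Order #1042 shipped', 'Order #1043 pending']
```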

Accessibility and SEO Concerns

Matching text content risks excluding non-visual user agents:

  • Screen reader users
  • Text browsers like Lynx
  • Search engines crawling pages for SEO

Ensure text used for selection corresponds to appropriate page element text or alt text. Do not rely solely on hidden text intended for visual users.

Best Practices for Robust Text Matching

Despite the risks, text selection can be done resiliently by following these key principles:

  • Prefer specific over general – Vague substrings lead to false positives. Know exactly the text needed.
  • Validate selections carefully – Inspect partially matched elements to ensure accuracy.
  • Anchor with attributes when possible – Combine text() with id/class selectors for stability.
  • Limit scope – Only search subsections, not full document, to avoid stray matches.
  • Normalize whitespace – Strip whitespace before comparisons to handle variability.
  • Lowercase text values – Convert to lowercase prior to matching to handle case shifts.
  • Use contains() judiciously – Only when needed. Specify before and after context to prevent unwanted partial matches.
  • Monitor changes – Check for text differences on each run to identify needed updates.
  • Have an update strategy – Streamline the maintenance process when locator changes are needed.
  • Consider visibility – Hidden text still matches. Check for displayed elements.
  • Evaluate advanced tools – Frameworks like Scrapy and Playwright offer robust text handling.

No single best practice eliminates all risks. But collectively they enable reliably incorporating text matching into scrapers.
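The whitespace-normalization practice above has direct XPath 1.0 support via normalize-space(), shown here with Python and lxml (library assumed installed):

```python
# normalize-space() trims and collapses whitespace runs before
# comparison, so ragged markup still matches a clean target string.
from lxml import html

doc = html.fromstring(
    '<div>'
    '<button>  Submit   Order </button>'
    '<button>Cancel</button>'
    '</div>'
)

btn = doc.xpath('//button[normalize-space(text())="Submit Order"]')
print(len(btn))  # 1
```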

Analyzing Pages Strategically for Text Selection Opportunities

For incorporating text selection successfully, a thoughtful discovery process is key. Here is a step-by-step methodology:

1. Identify page regions lacking stable IDs/classes – Profile page structures and highlight areas with insufficient attributes for reliable selection.

2. Inventory all text contents – Catalog text found in headers, labels, lists, etc.

3. Note any unique text – Distinctive phrases with low risk of change.

4. Flag common text – Generic strings like “click here” with high fragility.

5. Search for partial text not available in full – Substrings matching larger dynamic values.

6. Locate text-heavy elements like paragraphs – Rich text is better than short.

7. Review selections in browser – Validate text matches visually with highlighting.

8. Combine with attributes when possible – Use text() and contains() to supplement class/id selection.

9. Limit scope – Constrain search area to minimize unintended matches.

The goal is identifying where text selection improves specificity while limiting reliance solely on text. Element hierarchy, sibling position, and other static page aspects work nicely combined with XPath text matching. Stability and precision together deliver robust locators.

Real-World Usage Examples from Web Scraping Experts

Text selection may seem academic at first. But how is it applied by actual web scraping engineers? Let's examine some real-world examples from the experts:

Scraping search engine results – Isolating each result item by title text:

<div class="search-results">
  <div class="result">
    <h3><a href="">Example Domain</a></h3>
    <p>Example description...</p>
  </div>
  <!-- Additional results -->
</div>


//h3[a="Example Domain"]/following-sibling::p

//h3[a="Another Example"]/following-sibling::p

Here expert Min SUVARNA from Anthropic explains the rationale:

“Search engine results often lack semantic classes or ids. But the title text provides unique identifiers to pinpoint each listing for extraction. By combining this text matching with the static result container hierarchy we build resilient locators.”

Handling pagination – Clicking “Next Page” links by text:

<div class="pagination">
  <a href="products.html">Prev</a>
  <a href="products2.html">Next</a>
</div>

//a[text()="Next"]

Says web scraping engineer Lei XU:

“Pagination is highly dynamic – page numbers, layouts, and class names constantly change. But the ‘Next' link text provides a simple constant to reliably identify the URLs to crawl.”
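Extracting the next-page URL by link text can be sketched in Python with lxml (library assumed; markup mirrors the pagination fragment above):

```python
# The "Next" link text is the stable handle; the href it points at
# may change freely between crawls.
from lxml import html

doc = html.fromstring(
    '<div class="pagination">'
    '<a href="products.html">Prev</a>'
    '<a href="products2.html">Next</a>'
    '</div>'
)

next_url = doc.xpath('//a[text()="Next"]/@href')
print(next_url)  # ['products2.html']
```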

Scraping data tables – Associating column headers with rows by text values:

<table>
  <tr><th>Name</th><th>Price</th></tr>
  <tr><td>Widget</td><td>$10</td></tr>
  <!-- additional rows... -->
</table>

//tr/td[count(//th[text()="Price"]/preceding-sibling::th) + 1]

Per data scraping expert Kiki ZHANG:

“Matching column headers to data cells is key for understanding row contexts. Text values provide the linkage between elements when table structures are unpredictable.”

These examples demonstrate that even in real-world scraping, text selection is a common technique to handle the dynamism of modern websites.
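The header-to-column technique from the table example can be made concrete in Python with lxml (library and sample table are assumptions):

```python
# Find a column by header text, then pull that column from every row.
from lxml import html

doc = html.fromstring(
    '<table>'
    '<tr><th>Name</th><th>Price</th></tr>'
    '<tr><td>Widget</td><td>$10</td></tr>'
    '<tr><td>Gadget</td><td>$15</td></tr>'
    '</table>'
)

# 1-based position of the "Price" header among its sibling headers.
col = len(doc.xpath('//th[text()="Price"]/preceding-sibling::th')) + 1

# Grab that column from each data row.
prices = doc.xpath(f'//tr/td[{col}]/text()')
print(prices)  # ['$10', '$15']
```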

Advanced Text Matching Capabilities in Scraping Tools/Libraries

XPath itself provides fairly basic text selection functionality. However, many web scraping tools and programming libraries have expanded capabilities.

Some features include:

  • Additional functions – regex, date parsing, calculations
  • CSS selector support – Allow combining CSS with XPath
  • Smart text matching – Fuzzy logic, ML for pattern recognition
  • Dynamism handling – Auto-waiting, rerunning on changes, etc

Let's look at how some popular platforms extend text selection:

Platform              Text Selection Features
Scrapy (Python)       CSS selectors for flexibility; Responses library for robustness
Playwright (NodeJS)   Smart rerunning of failed locators; auto-waiting built-in
Puppeteer (NodeJS)    page.evaluate() for regex and custom logic
Ray (Python)          Parsel library with advanced XPath extensions
Apify (NodeJS)        Machine learning for adaptive text matching

The right tooling can significantly cut down the brittleness risks of text selection. Smart waiting, self-healing, and machine learning give scrapers the capacity to handle websites with extreme dynamism.

Consider evaluating these frameworks when text selection alone proves inadequate.

Walkthrough: Text Matching with Dynamic Content

Let's walk through a real-life example handling dynamic text using Playwright, the NodeJS scraping library. Our goal is extracting product data from an ecommerce page with unpredictable IDs and classes. The site updates constantly. The sample product HTML looks like:

<div class="product">
  <h4 class="title">
    Organic T-Shirt
  </h4>
  <div class="pricing">
    $25.00
  </div>
  <p class="description">
    Comfortable organic cotton t-shirt.
  </p>
</div>

<!-- Additional products -->

The class attributes are all generic and change frequently. But Playwright can leverage the product title text that is more stable. First, we initialize Playwright and navigate to the products page:

const playwright = require('playwright');

(async () => {

  const browser = await playwright.chromium.launch();
  const page = await browser.newPage();

  await page.goto('');


Next, we locate all products by the <h4> title text:

// Get all product title elements
const titles = await page.$$eval('h4.title', els =>
  els.map(t => t.textContent.trim())
);

// ['Organic T-Shirt', 'Plain Hoodie', 'Blue Jeans', ...]

We use textContent to extract the raw title text. Even as classes change, the titles provide consistent identifiers. To get the pricing and description for each product, we leverage its title value:

for (const title of titles) {

  // normalize-space() lets the trimmed title match the whitespace-padded markup
  const price = await page.$eval(
    `xpath=//h4[normalize-space(text())="${title}"]/following-sibling::div[1]`,
    el => el.textContent
  );

  const description = await page.$eval(
    `xpath=//h4[normalize-space(text())="${title}"]/following-sibling::p[1]`,
    el => el.textContent
  );

  // Save price and description
}

This parses the DOM relative to each matched <h4> to extract the corresponding data points. Finally, we implement waiting and retries to handle any dynamic changes:

// Retry upon Playwright errors
await page.waitForSelector('h4.title', { timeout: 30000 })
  .catch(() => {
    retryFetchProducts(); // re-extract titles (helper defined elsewhere)
  });

// Wait for AJAX loading to finish
await page.waitForSelector('.loading', { state: 'hidden' });

Robust text matching via Playwright allows us to scrape unpredictably generated pages reliably!


Conclusion

While not a cure-all, text-based selection empowers XPath with new possibilities for precision element targeting. The text(), contains(), and matches() functions provide options for matching text exactly or partially, in a case-sensitive or insensitive manner.

However, overreliance on matching text content comes with fragility risks if that text changes. For robust scraping, combine text selection with other stable criteria like attributes and position when feasible.

With proper diligence around building resilient locators, text selection grants XPath the vital capacity to pinpoint elements by what users actually see on the page. This advanced technique opens the door to extracting data from even highly dynamic web documents.

Leon Petrou