What Case Should HTTP Headers Be In?

HTTP headers are a critical part of any HTTP request or response, carrying important metadata about the message like content type, encoding, authentication, and more. However, if you look at examples of HTTP headers, you may notice that they are written in varying cases. Some headers like Content-Type use PascalCase, others like user-agent are lowercase.

So what is the proper casing for HTTP header names? Should they be camelCase, underscore_case, SCREAMING_SNAKE_CASE, or something else? In this article, we'll break down the conventions and requirements around HTTP header casing, and make recommendations for headers in your applications.

The HTTP Spec Requires Headers to Be Case-Insensitive

First, what does the HTTP standard itself say about header casing? The HTTP spec purposefully made header names case-insensitive to ensure compatibility across systems.

Section 3.2 of RFC 7230 states:

Each header field consists of a case-insensitive field name followed by a colon (":"), optional leading whitespace, the field value, and optional trailing whitespace.

This requirement carries over from earlier versions of the protocol. RFC 2616, the original HTTP/1.1 specification, defines header fields the same way (and the HTTP/1.0 spec, RFC 1945, uses nearly identical wording):

HTTP header fields, which include general-header (section 4.5), request-header (section 5.3), response-header (section 6.2), and entity-header (section 7.1) fields, follow the same generic format as that given in Section 3.1 of RFC 822 [9]. Each header field consists of a name followed by a colon (“:”) and the field value. Field names are case-insensitive.

So in theory, headers like Content-Type, content-type, and conTenT-tYpe should all be handled the same when parsing HTTP messages. This case insensitivity allows flexibility in client and server implementations while ensuring the protocol works reliably, and it has been part of HTTP's design since the 1990s.
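
To make this concrete, here is a minimal sketch in Python (standard library only) of how a recipient might normalize header names so that lookups ignore case. The parse_headers helper and the sample raw headers are illustrative, not taken from any particular library:

def parse_headers(raw_header_block):
    """Parse raw header lines into a dict keyed by lowercased header names."""
    headers = {}
    for line in raw_header_block.splitlines():
        if not line.strip():
            continue
        name, _, value = line.partition(":")
        # Lowercase the name so lookups behave case-insensitively,
        # as the spec requires of recipients.
        headers[name.strip().lower()] = value.strip()
    return headers


parsed = parse_headers("Content-Type: text/html\nconTenT-LENgth: 42")

# All spellings refer to the same logical header once normalized.
assert parsed["content-type"] == "text/html"
assert parsed["Content-Length".lower()] == "42"
print(parsed)  # {'content-type': 'text/html', 'content-length': '42'}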

In Practice, Browsers and Servers Handle Casing Differently

However, if you look at real HTTP traffic and messages, you'll notice headers are sent in varying cases. For example, let's look at request headers sent by Chrome:

GET / HTTP/1.1
Host: example.com
Connection: keep-alive
sec-ch-ua: "Not_A Brand";v="99", "Google Chrome";v="109", "Chromium";v="109"
sec-ch-ua-mobile: ?0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36
sec-ch-ua-platform: "Windows"
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Site: none
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9

Here you can see a mix of cases – User-Agent and Accept use hyphenated PascalCase, while the newer client-hint headers like sec-ch-ua and sec-ch-ua-mobile are all lowercase. Chrome chooses these casings itself when building the request; they aren't dictated by the server. And if we inspect the response headers:

HTTP/1.1 200 OK
Date: Thu, 16 Feb 2023 21:03:01 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Server: nginx
X-Powered-By: PHP/8.1.0
Cache-Control: max-age=3
Expires: Thu, 16 Feb 2023 21:03:04 GMT
X-Limit-Key: limit:60:1677222301:69:0.0.0.0

You can see the server using PascalCase for headers like Content-Type and Cache-Control. Meanwhile, other clients and servers use different conventions:

  • Some Java tooling and CGI-style interfaces expose header names in UPPERCASE
  • Node.js/Express lowercases incoming header names, and apps built on it tend to send lowercase everything as well
  • Some old Apache configs used lowercase_underscores

This variety means clients like web scrapers need to inspect responses and match the server's expected casing precisely.
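
If you want to see the exact casing that travels on the wire, it helps to drop below high-level HTTP libraries, which may normalize names for you. Here is a rough sketch using a plain socket; the host, path, and User-Agent value are placeholders:

import socket

HOST = "example.com"  # placeholder host

# Header names are sent exactly as written here, byte for byte.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "User-Agent: CasingProbe/1.0\r\n"
    "Accept: text/html\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    raw = b""
    while chunk := sock.recv(4096):
        raw += chunk

# Print the status line and response headers with the server's original casing.
head, _, _ = raw.partition(b"\r\n\r\n")
print(head.decode("iso-8859-1"))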

Why Proper Casing Matters for Web Scraping

As someone who has consulted for various web scraping teams, I've seen firsthand how important header casing is to making scrapers work properly. If your scraper sends headers that don't match the case expected by the origin server, it will often get blocked or served bot-specific content.

For example, say a real browser sends the header:

User-Agent: Mozilla/5.0

But your scraper sends the same value with different casing:

user-agent: Mozilla/5.0

Even though the lowercase form is technically valid HTTP/1.1, the mismatched casing is a dead giveaway that the request came from an automated client rather than a real browser. The site's protections will fingerprint and block your scraper because its headers don't match real user traffic.

I consulted with one team whose scrapers were mysteriously getting blocked on certain sites. After some debugging, we realized their HTTP tooling was automatically lowercasing every header name, so the requests no longer looked like real browser traffic. Fixing the casing resolved the blocking issues. So as a best practice, scrapers should:

  • Inspect responses for header cases
  • Replicate those cases exactly in subsequent requests

Following this approach minimizes easy fingerprinting based on casing alone. You will also see the convention of capitalizing well-known header names like User-Agent for readability even when an origin responds in lowercase, but for scraping, mirroring the origin exactly is the safer choice. Getting header cases right is an easy optimization that prevents frustrating blocking scenarios, as the sketch below illustrates.
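
As a rough sketch of what this looks like in practice, here is one way to send explicitly cased headers with the Python requests library, which in my experience transmits header names with the casing you supply over HTTP/1.1. The URL and values are placeholders:

import requests

# Header names written the way a real browser would send them over HTTP/1.1.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
    "Accept-Encoding": "gzip, deflate, br",
}

response = requests.get("https://example.com/", headers=headers)

# Lookups on response headers are case-insensitive in requests,
# so either spelling returns the same value.
print(response.headers.get("Content-Type"))
print(response.headers.get("content-type"))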

HTTP/2 Requires Lowercase Header Names

One important casing distinction arrived in 2015 with the release of HTTP/2, the modern successor to HTTP/1.1. Unlike HTTP/1.1, where senders can use any casing because parsing is case-insensitive, the HTTP/2 spec requires all header field names to be sent in lowercase.

Section 8.1.2 of RFC 7540 states:

Just as in HTTP/1.x, header field names are strings of ASCII characters that are compared in a case-insensitive fashion. However, header field names MUST be converted to lowercase prior to their encoding in HTTP/2.

This strictness pairs with HPACK header compression, whose static table stores header names in lowercase, so uniform casing also helps headers compress efficiently. In any case, all HTTP/2 implementations must send lowercase names, including the colon-prefixed pseudo-header fields such as :method and :authority:

:method: GET 
:authority: example.com
user-agent: MyCrawler

Any uppercase headers would be invalid according to the protocol:

// Invalid!
:METHOD: GET 

User-Agent: MyCrawler

Enforcing lowercase header names in HTTP/2 clients keeps requests from being treated as malformed and rejected. The requirement can trip up scrapers written for HTTP/1.1, which may send uppercase or mixed-case headers by default; I've seen scraping tools that had to be updated specifically to force lowercase when running over HTTP/2.
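
Most HTTP/2 client libraries lowercase names for you, but explicitly normalizing is cheap insurance when your header sets come from HTTP/1.1-era code. A minimal sketch using the httpx library, assuming it is installed with its optional HTTP/2 support (the h2 package); the URL and values are placeholders:

import httpx

# Headers with HTTP/1.1-style casing, perhaps inherited from older scraper code.
legacy_headers = {
    "User-Agent": "MyCrawler/1.0",
    "Accept-Encoding": "gzip, deflate, br",
}

# Normalize names to lowercase before sending over HTTP/2.
h2_headers = {name.lower(): value for name, value in legacy_headers.items()}

with httpx.Client(http2=True) as client:
    response = client.get("https://example.com/", headers=h2_headers)
    print(response.http_version)                 # "HTTP/2" if negotiated
    print(response.headers.get("content-type"))  # lookups are case-insensitive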

The Growing Usage of HTTP/2 Emphasizes Casing Rules

To understand the significance of HTTP/2's casing rules, it's useful to look at adoption trends. HTTP/2 was finalized in 2015 and rapidly gained ground over the following years; as of 2023 it carries a large share of web traffic, and trackers such as W3Techs show adoption among websites continuing to climb.

This wide usage means properly handling HTTP/2 headers affects a large portion of web traffic today. Scrapers limited to HTTP/1.1 miss out on the efficiency benefits of HTTP/2. However, taking advantage of HTTP/2 requires complying with its casing rules.

Common Conventions for Specific Header Names

Beyond the protocol level conventions, specific headers also have common casing styles that evolved in practice. While the HTTP spec says casing doesn't matter, following these common conventions improves consistency and prevents subtle bugs.

Here are some of the most common conventions:

  • Content-Type – PascalCase (e.g. Content-Type: application/json)
  • User-Agent – PascalCase (e.g. User-Agent: Mozilla/5.0)
  • Referer – PascalCase, keeping its famous misspelling (e.g. Referer: https://example.com/page)
  • Cookie – PascalCase; the underscores belong to cookie names in the value, not the header name (e.g. Cookie: _ga=1234; _gid=5678)
  • Accept – PascalCase over HTTP/1.1 (e.g. Accept: text/html)
  • Host – PascalCase (e.g. Host: api.example.com)
  • Authorization – PascalCase (e.g. Authorization: Bearer tk1234)

These styles evolved over years of usage into de facto standards. For common headers like these, HTTP clients should stick to the conventional casing to avoid ambiguity and improve compatibility. In particular, fingerprinting and anti-bot systems often expect browser-style casing such as User-Agent, even though servers themselves must parse any casing.

Of course, new or custom headers can use any case. But for well-known headers, readability and consistency are maximized by following tradition.
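
If you need to produce these conventional spellings programmatically, a simple canonicalizer (similar in spirit to Go's textproto.CanonicalMIMEHeaderKey) can title-case each hyphen-separated part. The short exceptions table below is illustrative, not exhaustive:

def canonical_header_name(name):
    """Convert a header name to conventional Hyphenated-Pascal-Case."""
    # A few well-known names that don't follow simple word capitalization.
    exceptions = {"etag": "ETag", "www-authenticate": "WWW-Authenticate", "te": "TE"}
    lowered = name.strip().lower()
    if lowered in exceptions:
        return exceptions[lowered]
    return "-".join(part.capitalize() for part in lowered.split("-"))


print(canonical_header_name("user-agent"))    # User-Agent
print(canonical_header_name("CONTENT-TYPE"))  # Content-Type
print(canonical_header_name("etag"))          # ETag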

Problems That Can Arise from Casing Mismatches

While HTTP is case-insensitive in theory, in practice subtleties around casing can lead to tricky bugs. Some problems that can arise:

  • Browser DevTools may display headers in a different casing than what was actually sent on the wire
  • Application code silently reads no value when it looks up a header under a differently cased name
  • Caching layers get confused by logically identical headers written with different cases
  • HTTP/2 requests containing uppercase header names are treated as malformed and rejected
  • Hard-to-diagnose blocking issues caused by subtle casing differences

For example, say your code sends requests with a Content-Type header, but elsewhere your application reads the value back under the key content-type from a plain, case-sensitive dictionary. This casing mismatch can lead to subtle, gnarly bugs. As another example, HTTP/2 requires uniformly lowercase header names; a single uppercase name makes the request malformed, and the peer will typically reject it. Not fun to debug!
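
Here is a minimal illustration of that first pitfall: a plain Python dict makes header lookups case-sensitive, while normalizing keys at the boundary makes the mismatch disappear. The values are placeholders:

# Plain dict: keys are case-sensitive, so this lookup quietly returns None.
headers = {"Content-Type": "application/json"}
print(headers.get("content-type"))  # None – a silent bug

# One fix: normalize keys to lowercase wherever headers enter your system.
normalized = {name.lower(): value for name, value in headers.items()}
print(normalized.get("content-type"))  # application/json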

In my experience consulting for teams, these kinds of tricky bugs inevitably crop up in large systems that don't normalize casing. What seems like a theoretical edge case turns into days of frustrating debugging. Carefully handling header cases avoids these pesky issues.

Best Practices for Managing HTTP Header Casing

Based on everything we've covered, what are some best practices for managing HTTP header casing in your own systems? Here are my top tips from seeing casing handled both well and poorly over the years:

  • Be Consistent Within Your Application: Pick a casing convention like camelCase or PascalCase and stick with it consistently. Don't mix styles. Consistency avoids bug-prone mismatches.
  • Leverage Common Conventions When Possible: Use well-known conventions like the PascalCase User-Agent when you can. But know when to override them, such as sending lowercase names over HTTP/2.
  • Mind the Protocol: Follow spec casing rules – lowercase for HTTP/2 or match server's case for HTTP/1.1.
  • Use a Linter to Catch Casing Errors: Linting tools like ESLint help enforce casing consistency.
  • Normalize Casing When Working Across Different Systems: Convert casing to a common standard when bridging between APIs, languages, configs etc. For example, convert UPPERCASE names coming from Java or CGI-style tooling to PascalCase before sending (see the sketch below).
  • Test with Real Browser Traffic to Confirm Compatibility: Verify headers sent match headers received to catch inconsistencies causing issues. Test with live servers when possible.
  • When in Doubt, Default to PascalCase: Of all conventions, hyphenated PascalCase provides the best readability at a glance. It's a safe bet in most cases, except over HTTP/2, where lowercase is mandatory.

Keeping these tips in mind when working with HTTP headers prevents frustrating bugs and improves maintainability.
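
As a concrete illustration of the normalization tip above, here is a small helper that converts CGI/Java-style names such as HTTP_USER_AGENT into the hyphenated PascalCase form most HTTP/1.1 tooling expects; the input names are made-up examples:

def normalize_header_name(raw_name):
    """Convert UPPER_UNDERSCORE or mixed-case names to Hyphenated-Pascal-Case."""
    name = raw_name.strip()
    # Strip a CGI-style HTTP_ prefix if present.
    if name.upper().startswith("HTTP_"):
        name = name[len("HTTP_"):]
    parts = name.replace("_", "-").split("-")
    return "-".join(part.capitalize() for part in parts if part)


print(normalize_header_name("HTTP_USER_AGENT"))  # User-Agent
print(normalize_header_name("x_request_id"))     # X-Request-Id
print(normalize_header_name("content-type"))     # Content-Type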

Key Takeaways: Best Practices for HTTP Header Casing

Let's recap the key learnings:

  • The HTTP spec requires headers to be case-insensitive, but in practice clients and servers handle casing differently.
  • For HTTP/1.1, carefully observe server response header cases and replicate them exactly in requests. Hyphenated PascalCase (like Content-Type) is the most common style.
  • For HTTP/2, always send lowercase header names as required by the protocol.
  • Stick to conventions like PascalCase User-Agent and Content-Type for well-known headers over HTTP/1.1.
  • Linting and normalizing help enforce consistency across languages and systems.
  • Subtle casing inconsistencies can lead to hard-to-diagnose bugs, blocking issues and broken connections.
  • Test with real browser traffic to confirm headers sent match headers received.

Paying attention to small details like header casing goes a long way to building robust applications that behave predictably. While headers technically are case-insensitive, following best practices avoids nasty surprises down the road.

Conclusion

Properly handling HTTP header casing in requests and responses is an important detail that can't be overlooked in modern applications. While the HTTP spec defines headers as case-insensitive, real-world clients and servers handle casing inconsistently at times. This means extra care must be taken to normalize cases and follow conventions.

Following common conventions, linting, testing thoroughly and normalizing casing prevents frustrating bugs and improves compatibility. So be mindful of casing when working with HTTP! By following best practices, you'll avoid headaches when building well-behaved systems.

John Rooney

I'm John Watson Rooney, a self-taught Python developer and content creator focused on web scraping, APIs, and automation. I love sharing my knowledge and expertise through my YouTube channel, which caters to all levels of developers, from beginners getting started with web scraping to experienced programmers looking to advance their skills with modern techniques. I have worked in the e-commerce sector for many years, gaining extensive real-world experience in data handling, API integrations, and project management. I am passionate about teaching others and simplifying complex concepts to make them accessible to a wider audience. In addition to my YouTube channel, I also maintain a personal website where I share my coding projects and other related content.
