HTTP/1.1 vs HTTP/2 vs HTTP/3
HTTP/1.1 came out in 1997. HTTP/2 in 2015. HTTP/3 in 2022. Each generation solved problems the previous one couldn’t fix. Today, all three coexist on the internet — your browser tries the newest first and falls back as needed.
HTTP/1.1 (1997) — the foundation
The version that ran the web for two decades. Key features:
- Persistent connections — one TCP connection serves many requests (vs HTTP/1.0 which closed after each)
- Pipelining — send multiple requests without waiting for each response (rarely used in practice; broken proxies led browsers to disable it)
- Host header — required, lets one IP serve many domains
- Chunked transfer encoding — stream responses without knowing total size upfront
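Chunked transfer encoding has a simple wire format: each chunk is a hex length, CRLF, the data, CRLF, terminated by a zero-length chunk. A minimal Python sketch (the function name `encode_chunked` is illustrative, not a stdlib API):

```python
def encode_chunked(parts):
    """Encode byte strings as an HTTP/1.1 chunked transfer body.

    Each chunk is: <hex size>\r\n<data>\r\n, ended by a 0-size chunk.
    """
    out = bytearray()
    for part in parts:
        out += f"{len(part):x}".encode("ascii") + b"\r\n"
        out += part + b"\r\n"
    out += b"0\r\n\r\n"  # terminating zero-length chunk
    return bytes(out)

body = encode_chunked([b"Hello, ", b"world!"])
# b'7\r\nHello, \r\n6\r\nworld!\r\n0\r\n\r\n'
```

Because the total size never appears, the server can start streaming the response while it is still being generated.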
The problems
- Head-of-line blocking — one slow response blocks all the others on the same connection
- One request per connection at a time — browsers opened 6+ connections per origin to compensate, wasting handshakes
- Header bloat — same headers (cookies, user-agent) sent on every request, no compression
- Plain text protocol — easy to debug, slow to parse
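The header-bloat cost is easy to estimate. Assuming roughly 700 bytes of request headers (cookies, user-agent, accept-* — the exact figure varies widely per site), a 50-asset page resends nearly identical bytes 50 times:

```python
# Back-of-envelope estimate; real header sizes vary per site.
header_bytes = 700   # assumed cookies + user-agent + accept-* per request
requests = 50        # assets on the page

total = header_bytes * requests
print(total)  # 35000 bytes of mostly repeated, uncompressed headers
```

HTTP/2's HPACK attacks exactly this waste by sending repeated headers as small table references.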
HTTP/2 (2015) — multiplexing
Built on Google’s SPDY. Same semantics as HTTP/1.1 (same methods, headers, status codes) but a totally different wire format.
What changed
- Binary framing — faster to parse, less ambiguous
- Multiplexing — many requests on ONE connection, interleaved
- Header compression (HPACK) — repeated headers shrink to a few bytes
- Server push — server can send resources before the client asks (mostly disabled in practice)
- Stream priority — client tells server which resource matters most
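The binary framing is visible at the byte level: every HTTP/2 frame starts with a fixed 9-byte header (24-bit length, 8-bit type, 8-bit flags, 1 reserved bit plus a 31-bit stream ID — RFC 9113, section 4.1). A sketch using only the standard library:

```python
import struct

def pack_frame_header(length, ftype, flags, stream_id):
    """Build the 9-byte HTTP/2 frame header (RFC 9113, section 4.1)."""
    # 24-bit length, then type, flags, and reserved-bit + 31-bit stream ID.
    return (struct.pack("!I", length)[1:]
            + struct.pack("!BBI", ftype, flags, stream_id & 0x7FFFFFFF))

def unpack_frame_header(data):
    length = int.from_bytes(data[0:3], "big")
    ftype, flags = data[3], data[4]
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF
    return length, ftype, flags, stream_id

# A HEADERS frame (type 0x1) with the END_HEADERS flag (0x4) on stream 1:
hdr = pack_frame_header(42, 0x1, 0x4, 1)
assert len(hdr) == 9
assert unpack_frame_header(hdr) == (42, 0x1, 0x4, 1)
```

The stream ID in every frame is what makes multiplexing possible: frames from different streams can interleave freely on one connection and still be reassembled unambiguously.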
The remaining problem
HTTP/2 still runs over TCP. TCP has its own head-of-line blocking — if ONE packet is lost, ALL streams on that connection pause until retransmit. Multiplexing many streams on one TCP connection MAGNIFIES this problem under packet loss.
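TCP's head-of-line blocking follows directly from its delivery contract: the receiver may only hand bytes to the application in order. A toy reorder buffer (sequence numbers and segment sizes are illustrative) shows how one lost segment stalls everything behind it:

```python
class InOrderBuffer:
    """Toy model of a TCP receive buffer: only contiguous bytes are delivered."""

    def __init__(self):
        self.next_seq = 0   # next byte offset the app may read
        self.segments = {}  # out-of-order segments, keyed by byte offset

    def receive(self, seq, data):
        self.segments[seq] = data
        delivered = b""
        # Deliver only while the next expected segment has arrived.
        while self.next_seq in self.segments:
            chunk = self.segments.pop(self.next_seq)
            delivered += chunk
            self.next_seq += len(chunk)
        return delivered

buf = InOrderBuffer()
assert buf.receive(0, b"AAAA") == b"AAAA"   # in order: delivered immediately
assert buf.receive(8, b"CCCC") == b""       # stuck: bytes 4-7 were lost
assert buf.receive(4, b"BBBB") == b"BBBBCCCC"  # retransmit arrives, all unblocks
```

In HTTP/2, `CCCC` might belong to a completely different stream than the lost `BBBB` — but TCP has no idea streams exist, so it stalls them all. QUIC moves the ordering guarantee down to the per-stream level.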
HTTP/3 (2022) — abandoning TCP
HTTP/3 runs on QUIC, a UDP-based transport originally developed at Google and later standardized by the IETF (RFC 9000) to fix TCP's limitations.
What QUIC fixes
- 1-RTT handshake (0-RTT on resumption) — combines the transport and TLS 1.3 handshakes into a single round trip
- Per-stream loss recovery — packet loss in one stream doesn’t pause others
- Connection migration — switch from Wi-Fi to cellular without breaking the connection
- Always encrypted — no plaintext fallback, prevents middleboxes from breaking things
- Faster congestion control evolution — QUIC is in user space, kernel doesn’t gate updates
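Connection migration works because QUIC names a connection by a connection ID carried in every packet, not by the TCP-style (source IP, source port, destination IP, destination port) 4-tuple. A sketch of the lookup difference (data structures are illustrative, not a real QUIC implementation):

```python
# TCP-style demultiplexing: the connection IS the 4-tuple.
tcp_conns = {("203.0.113.5", 51000, "198.51.100.7", 443): "session-A"}

# Client moves from Wi-Fi to cellular: new source IP and port, lookup fails.
new_tuple = ("192.0.2.20", 49000, "198.51.100.7", 443)
assert new_tuple not in tcp_conns  # TCP: the connection is effectively dead

# QUIC-style demultiplexing: the connection is named by an ID in the packet,
# independent of the address it arrived from.
quic_conns = {b"\x8f\x21\xaa\x04": "session-A"}

def route_quic_packet(conn_id, src_addr):
    # src_addr can change freely between packets; only the ID matters.
    return quic_conns.get(conn_id)

assert route_quic_packet(b"\x8f\x21\xaa\x04", ("192.0.2.20", 49000)) == "session-A"
```

This is why a download can survive a Wi-Fi-to-cellular switch under HTTP/3 but restarts under HTTP/1.1 or HTTP/2.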
The downsides
- Higher CPU cost than TCP+TLS — per-packet encryption in user space, with fewer NIC offloads available
- Some firewalls and networks still block UDP / QUIC
- Debugging requires QUIC-aware tools (Wireshark added QUIC support)
Performance comparison
Loading a typical webpage with 50 assets over a 100 ms RTT connection (illustrative figures, not benchmarks):
| Version | Connections needed | First-byte time | Behavior on packet loss |
|---|---|---|---|
| HTTP/1.1 | 6 (browser default) | ~300 ms | Per-connection block |
| HTTP/2 | 1 | ~250 ms | All streams blocked |
| HTTP/3 | 1 | ~150 ms | Only affected stream |
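The first-byte column is back-of-envelope round-trip counting rather than a measurement. On a fresh connection, TCP needs one RTT to open and TLS 1.3 one more before the request (itself one RTT) goes out; QUIC folds transport and TLS into one RTT, and 0-RTT resumption removes even that. A figure like ~150 ms lands between the fresh and resumed QUIC cases:

```python
RTT_MS = 100  # round-trip time from the scenario above

# Round trips before the first response byte (fresh connection, TLS 1.3;
# ignores server processing time and congestion-control effects).
http1_or_2 = (1 + 1 + 1) * RTT_MS  # TCP handshake + TLS handshake + request
quic_fresh = (1 + 1) * RTT_MS      # combined QUIC/TLS handshake + request
quic_0rtt  = 1 * RTT_MS            # resumed: request rides the first flight

print(http1_or_2, quic_fresh, quic_0rtt)  # 300 200 100
```

Anything this counting ignores (connection reuse, parallel handshakes, loss) pushes real numbers off these round values, which is why the table's figures are approximate.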
How browsers choose
The browser's first request goes over TCP (HTTP/1.1 or HTTP/2). If the response includes an Alt-Svc header — effectively "you can also reach me over HTTP/3 on UDP port 443" — the browser remembers the advertisement and uses HTTP/3 on subsequent connections.
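An Alt-Svc value such as `h3=":443"; ma=86400` names the protocol (`h3`), an alternative authority (here just a port on the same host), and a lifetime in seconds. A minimal parser sketch — it handles only this simple single-entry shape, not the full RFC 7838 grammar:

```python
def parse_alt_svc(value):
    """Parse a single-entry Alt-Svc header value like 'h3=":443"; ma=86400'."""
    entry, *params = [p.strip() for p in value.split(";")]
    protocol, authority = entry.split("=", 1)
    result = {"protocol": protocol, "authority": authority.strip('"')}
    for param in params:
        key, _, val = param.partition("=")
        result[key] = val
    return result

svc = parse_alt_svc('h3=":443"; ma=86400')
assert svc == {"protocol": "h3", "authority": ":443", "ma": "86400"}
```

The `ma` (max-age) parameter is why the speedup only kicks in on repeat visits: the browser caches the advertisement for that many seconds.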
Check what version your site is using
```shell
# Test a specific version
curl --http3 https://sudoflare.com -I    # HTTP/3
curl --http2 https://sudoflare.com -I    # HTTP/2
curl --http1.1 https://sudoflare.com -I  # HTTP/1.1

# Online check
# https://http3check.net/?host=sudoflare.com
```
Server support
- nginx — HTTP/2 since 2015 (1.9.5), HTTP/3 since 1.25 (2023, initially experimental)
- Apache — HTTP/2 via mod_http2; HTTP/3 support has not yet landed in mainline httpd
- Caddy — HTTP/3 by default, automatic certs included
- Cloudflare / Fastly / CloudFront — full HTTP/3 support, just enable in the dashboard
What to learn next
REST API design — how HTTP gets used to build APIs. Up next.