The other day, my colleague and I were chatting in the office (hey, Kacper!), and we stumbled onto one of those tech frustrations that feels like it should have been solved a decade ago. We're living in 2026, with incredible connection speeds, yet browser downloads still feel remarkably fragile.
I haven't actually dealt with this personally in quite a while. I rarely download massive files directly in the browser these days, so I usually escape the "99%... Failed" situations. But the conversation made me curious. Why is a basic file transfer still such a "fragile negotiation" between our machines and the server?
I decided to dig into the networking stack to see what’s actually happening under the hood.
The Networking Stack: Why Downloads Should Resume
When you hit download, you are entering a complex agreement. For a download to resume after your Wi-Fi blips, your browser and the server have to speak a very specific language defined by RFC 7233.
Nitpick: Strictly speaking, RFC 7233 was obsoleted in June 2022 by RFC 9110 (HTTP Semantics). However, the underlying mechanics for Range Requests remain unchanged.
Range Requests: The Invisible Handshake
Resumption centers on the HTTP Range request. If your connection drops at 50%, the browser's download manager has to effectively say, "Hey, send me everything starting from byte X".
But the server has to explicitly opt in beforehand. It does this using the Accept-Ranges: bytes header. If that header is missing (or set to none), the browser has to assume that any interruption means starting over from byte zero.
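As a rough sketch of what a download manager does here (the function name and the ".part" file convention are my own invention, not any browser's actual code), resuming boils down to checking how many bytes you already have on disk and asking for the rest:

```python
import os

def resume_headers(part_path: str) -> dict:
    """Build the headers for a resume attempt, given the partial file on disk."""
    headers = {}
    if os.path.exists(part_path):
        downloaded = os.path.getsize(part_path)
        if downloaded > 0:
            # "bytes=X-" means: send everything from byte X to the end.
            headers["Range"] = f"bytes={downloaded}-"
    return headers

# Usage (hypothetical URL): attach the headers to an ordinary GET, e.g.
# urllib.request.Request("https://example.com/big.iso",
#                        headers=resume_headers("big.iso.part"))
```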
Data Integrity: Avoiding the "Franken-file"
A huge risk in resuming downloads is data stitching corruption. If a developer updates a file on the server while you’re halfway through downloading it, and you resume, you might end up with half of version A and half of version B. To prevent this "Franken-file," the web uses Validators:
- ETags (Entity Tags): Unique fingerprints for a file version. If one byte changes, the ETag changes.
- If-Range: When resuming, your browser sends the ETag it already has. If the server sees the file has changed, it refuses to resume and sends the entire new file instead to keep your data safe.
This is a critical part of the HTTP standard. Without an ETag (or a Last-Modified date), a browser cannot safely resume a download because it has no way of knowing if the server's file is the same one it started downloading an hour ago.
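To make that concrete, here's a sketch of the decision a compliant server makes when a resume request arrives (the function name is invented; the status codes and headers are the standard ones):

```python
def resume_decision(request_headers: dict, current_etag: str) -> int:
    """Return the status code the server should answer with:
    206 Partial Content to resume, or 200 OK to send the whole file."""
    if "Range" not in request_headers:
        return 200  # not a resume attempt at all
    validator = request_headers.get("If-Range")
    if validator is not None and validator != current_etag:
        # The file changed since the client started downloading.
        # Resuming would stitch two versions together, so start over.
        return 200
    return 206
```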
Why Downloads Still Fail: Browser vs. Dedicated Manager
So if the protocol exists, why do downloads still fail? It usually comes down to architectural choices:
1. The Browser's Philosophy
Browsers are built to be document viewers, not heavy-duty file transfer clients. They often prioritize low resource usage over complex recovery. While a dedicated download manager might use 30+ threads to grab a file in parallel, your browser usually sticks to one. If it crashes or restarts, it often loses the volatile session memory (like ETags or cookies) needed to even ask for a resume.
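For a feel of the difference, here's how a parallel downloader might split a file into ranges before firing off one Range request per worker. This is a simplified sketch of the general technique, not any real manager's code:

```python
def plan_chunks(total_size: int, workers: int) -> list[tuple[int, int]]:
    """Split total_size bytes into inclusive (start, end) ranges, one per
    worker, each fetched with its own "Range: bytes=start-end" request."""
    chunk = total_size // workers
    ranges = []
    for i in range(workers):
        start = i * chunk
        # The last worker also picks up the remainder.
        end = total_size - 1 if i == workers - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges
```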
2. Expiring Security
Many sites use signed URLs with tokens that expire. If your download is paused past that expiration time, the server returns a 403 Forbidden. While a manager could re-authenticate, your browser usually just marks it as "Failed".
3. Middlebox Interference
Your request rarely goes straight to the server. It hits firewalls and proxies. Some "middleboxes" are configured to strip out Range headers for security or memory reasons. If a firewall strips that header, the server never even knows you're trying to resume.
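A robust client therefore can't assume a resume attempt worked just because bytes are arriving; it has to inspect the response. A sketch of that check (the function is hypothetical; the status codes and Content-Range header are standard):

```python
import re

def classify_resume_response(status: int, headers: dict, offset: int) -> str:
    """Decide what to do with the response to a "Range: bytes=offset-" request."""
    if status == 206:
        # Partial Content: confirm the server resumed at exactly our offset.
        m = re.match(r"bytes (\d+)-", headers.get("Content-Range", ""))
        if m and int(m.group(1)) == offset:
            return "append"  # safe to keep writing to the partial file
        return "restart"
    if status == 200:
        # Our Range header was ignored, or stripped by a middlebox:
        # the full file is coming, so discard the partial one.
        return "restart"
    return "fail"  # e.g. 403 Forbidden from an expired signed URL
```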
The Future of Downloads: HTTP/3 (QUIC)
Is there a light at the end of the tunnel?
Actually, yes. We’re seeing a shift with HTTP/3 (QUIC).
The old way (TCP) is bound to your IP address. If you switch from office Wi-Fi to a cellular network, your IP changes, the connection dies, and you have to hope for a successful HTTP resume.
HTTP/3 uses "Connection Migration". It identifies you by a unique Connection ID instead of an IP address. You can switch networks entirely, and the download just... continues. It doesn't eliminate every failure mode, but it removes a major one by making the connection itself survive network changes.
Summary
Ultimately, for a download to resume in 2026, four things have to align perfectly:
- The server and all proxies must support Range headers.
- The file must have a Strong ETag that hasn't changed.
- The browser must keep its metadata and temporary files intact.
- The network outage can't exceed system timeouts (which range from 20 to 75 seconds depending on your OS).
Seems that until HTTP/3 becomes the universal standard, we're likely going to keep having these office discussions :D
