I've got a fairly simple website: a few static pages, one form, one POST endpoint that generates some data, and some JS that handles the response and presents it as an interactive plot. It runs on Dancer2 reverse-proxied by nginx; additionally, nginx is set up to serve the public/ directory itself and to ask browsers to cache it:

```nginx
root /srv/whatever/public;
location / {
    try_files $uri @proxy;
    expires 1d;
}
```

Some time ago, I added a new `<input>` to the form and made the corresponding changes to the JavaScript and the server-side parts. Some time later (both much more than one day ago), I added a catch-all error handler to let me know when the client-side JavaScript crashes:

```javascript
window.addEventListener('error', function(e) {
    try {
        // snip: tell user we fucked up
        var req = new XMLHttpRequest();
        req.open('POST', '/log_error');
        req.send(e.filename + ':' + e.lineno + ':' + e.colno + ': ' + e.message);
    } catch (e) {}
}, { passive: true });
```

While looking through the HTTP logs, I saw a user access the website, followed by a bunch of POST /log_error requests. The user never sent the "generate and plot the data" request, which is the sole purpose of the website. The error is the same in all cases: "Cannot set properties of null" in an input update handler. The relevant piece of code does the equivalent of:

```javascript
document.getElementById('a_certain_input').disabled = a_complicated_conditional;
```

But the `<input>` is there, and no one had complained about it before.

I took a look at the logs again. There's a GET for every CSS and JS file and for every image on the page, but not for the HTML page itself. The user must be running an old version of the page, which doesn't have the `<input>` yet.

Why would the browser (Opera 89.0.4447.71 on 64-bit Windows 10, according to the User-Agent header) download all the dependencies of a page, but not the page itself, when they have the same cache expiration date? How could I have prevented this? On a related note, what are decent deployment strategies for such websites?
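For context, the direction I've been considering to prevent this is to stop long-caching the HTML itself and instead fingerprint the assets. Roughly like this (an untested sketch; the location regexes and the 8-hex-digit hash scheme are my own guesses, not what's currently deployed):

```nginx
root /srv/whatever/public;

# HTML: make browsers revalidate on every visit. nginx answers the
# conditional request with a 304 via ETag/Last-Modified, so this stays cheap.
# (The bare "/" URI would need the same treatment via its own location.)
location ~ \.html$ {
    add_header Cache-Control "no-cache";
    try_files $uri @proxy;
}

# Content-hashed assets (e.g. app.deadbeef.js): a new HTML version points
# at new URLs, so these copies can be cached for a long time.
location ~ \.[0-9a-f]{8}\.(js|css|png)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
    try_files $uri @proxy;
}

location / {
    try_files $uri @proxy;
}
```

That way the HTML and its dependencies can never get out of sync in the cache, which is exactly what bit me here.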
For now, it's served straight from a VCS checkout, with a cron job periodically pulling changes, recompiling the backend executable, and restarting the Dancer2 app if needed. I set it up this way to encourage my colleagues to contribute (they did commit documentation page updates once or twice), and since I'm careful not to serve the VCS files, it's been working mostly fine. The problem is that it's hard to integrate cache busting into such a scheme. Additionally, the rare cases when I need to update a dependency of the backend executable and the website itself at the same time are an extreme pain in the ass, since they live in separate repos and have been built separately until now.
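The best alternative I've sketched out so far is a small deploy script: copy the checkout's public/ into a timestamped release directory, rename JS/CSS files to include a content hash, rewrite the references in the HTML, and atomically flip a symlink that nginx's root points at. This is untested and the paths are placeholders; it also assumes GNU coreutils/sed (`mv -T`, `sed -i`):

```shell
# deploy CHECKOUT RELEASES LIVE
#   CHECKOUT - VCS working copy containing public/ (not served directly)
#   RELEASES - directory that accumulates timestamped releases
#   LIVE     - the symlink nginx's root points at
deploy() {
    checkout=$1; releases=$2; live=$3

    # Fresh release directory named by timestamp.
    release="$releases/$(date +%Y%m%d%H%M%S)"
    cp -r "$checkout/public" "$release"

    # Fingerprint static assets: app.js -> app.<8 hex chars>.js, then
    # rewrite references in the HTML so a new page always pulls new URLs.
    for f in "$release"/*.js "$release"/*.css; do
        [ -e "$f" ] || continue
        hash=$(sha256sum "$f" | cut -c1-8)
        base=$(basename "$f")
        hashed="${base%.*}.$hash.${base##*.}"
        mv "$f" "$release/$hashed"
        sed -i "s|$base|$hashed|g" "$release"/*.html
    done

    # Atomic switch: create the symlink beside the target, then rename
    # over it, so nginx never sees a half-deployed tree.
    ln -sfn "$release" "$live.tmp" && mv -T "$live.tmp" "$live"
}
```

Old releases stay around, so rolling back is just pointing the symlink at the previous directory; and because old asset URLs remain valid until the release is pruned, users with a cached HTML page keep working through a deploy.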