HTTP and HTML5 media = broken
Well, as far as I'm concerned, the specification is broken.
My use case is PHP related (yes yes, I know), but in this case let me explain why I think it's messed up.
I have a solution in PHP that can serve arbitrary chunks of files on demand. Great, HTTP/1.1 supports this with the Range header. You can request arbitrary chunks quite happily without any problem.
Except there is a problem: you have to know how much to request.
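The mechanics described above - a server parsing `Range: bytes=M-N` and answering with just that slice - are language-agnostic, so here's a minimal sketch in Python rather than the thread's PHP; `parse_range` and `partial_headers` are hypothetical helper names, not anything from the original code.

```python
import re

def parse_range(header, size):
    """Parse a 'bytes=M-N' Range value against a resource of `size` bytes.
    Returns an inclusive (start, end) tuple, or None if unsatisfiable."""
    m = re.fullmatch(r"bytes=(\d*)-(\d*)", header.strip())
    if not m or (not m.group(1) and not m.group(2)):
        return None
    if m.group(1):
        start = int(m.group(1))
        # An open-ended range like "bytes=0-" means "from here to the end".
        end = int(m.group(2)) if m.group(2) else size - 1
    else:
        # A suffix range like "bytes=-500" means "the last 500 bytes".
        start = max(0, size - int(m.group(2)))
        end = size - 1
    if start >= size or start > end:
        return None
    return start, min(end, size - 1)

def partial_headers(start, end, size):
    """Headers for a 206 Partial Content response covering start..end."""
    return {
        "Content-Range": f"bytes {start}-{end}/{size}",
        "Content-Length": str(end - start + 1),
        "Accept-Ranges": "bytes",
    }
```

Note that an open-ended `bytes=0-` parses to the whole file, which is exactly the 'just get everything' case complained about below.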
There are two ways you can do this.
You can issue a HEAD request, which functions like a GET except that it doesn't return the content. This is great for testing existence as well as getting the content length.
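To see concretely what a HEAD request buys you, here's a self-contained sketch (the thread gives no code for this) - a throwaway local server in Python, and a HEAD that comes back with the Content-Length and Accept-Ranges headers but no body:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    CONTENT = b"x" * 12345  # stand-in for a media file

    def _send_headers(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(self.CONTENT)))
        self.send_header("Accept-Ranges", "bytes")
        self.end_headers()

    def do_HEAD(self):
        self._send_headers()          # headers only, no body

    def do_GET(self):
        self._send_headers()
        self.wfile.write(self.CONTENT)

    def log_message(self, *args):     # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("HEAD", "/media.mp3")
resp = conn.getresponse()
length = int(resp.getheader("Content-Length"))
body = resp.read()  # empty: HEAD responses carry no body
```

One round-trip tells the client both the size and that the server will honour ranges - which is the whole point being made here.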
You can be retarded and just issue a GET with a Range: 0- header, which means 'just get everything'.
The problem here is with the latter - all the browsers seem to just do that. They have no idea how big the content is and will just try to request everything without checking how big it is - even though they're quite capable of asking first.
This wouldn't be so bad if you could return partial content without the browsers choking on it. The spec is a bit vague on this point.
The theory is that you return an HTTP 206 Partial Content response and indicate what content you've sent, but by definition, if you send any part of a file that isn't everything in response to 0- as a request, you're really supposed to send a 416 Range Not Satisfiable response instead, which is also stupid.
I wouldn't mind if the browsers silently accepted 206s with partial content even when 'everything' was requested, if they could do that without choking on it, but apparently not. As far as I read the spec, it would appear to be legal to respond to 0- with 'here you go, here's bytes 1-10000' and expect the browser to figure it out, but neither Chrome nor Firefox seems to manage this.
If only the browsers sent HEAD requests - or even the HTML5 specification just allowed for 'here's the size of the file, hope it's useful', none of this would be a problem. The specification yields basically the ideal solution to this but no browser implements it sanely.
Unless, of course, I'm TRWTF for missing something truly obvious.
You can be retarded and just issue a GET with Range: 0- header which means 'just get everything'.
Are you sure that's happening? I don't recall ever seeing such a header being issued by a browser; they normally just don't send Range: at all. Which is OK by the spec. You might just be misinterpreting the missing header as some (fairly sensible!) defaults and thinking that you're in a partial-transfer scenario when you're really in a full-transfer one.
But then I don't normally deliver media (other than images and CSS) in my code so I've not tested the specific case…
VinDuv last edited by
Are you sure that's happening? I don't recall ever seeing such a header being issued by a browser; they normally just don't send Range: at all. Which is OK by the spec. You might just be misinterpreting the missing header as some (fairly sensible!) defaults and thinking that you're in a partial-transfer scenario when you're really in a full-transfer one.
The thing is, media files tend to be very big, so sending them in one part may cause problems, especially with PHP which has an execution time limit. So it makes sense to try to send them in chunks on browsers which support this (and the Range header is a good indication of that).
@Arantor, why are you trying to send a media file with PHP? Why not serve it directly from the Web server (or even better, another web server specifically made to serve static files)?
If the PHP code is used for authentication purposes, you can put the media files in a folder with a random name which is difficult to guess, and tell PHP to redirect the browser there. It’s not perfect, but I’m under the impression that most websites which stream HTML5 media do this.
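If hiding files behind a hard-to-guess folder name feels too fragile, a common hardening of the same redirect idea is to sign the URL with an expiry, so possession of a valid link is itself the authentication. The thread doesn't prescribe this; a sketch with hypothetical names (`signed_media_url`, `verify`):

```python
import hashlib
import hmac

SECRET = b"server-side secret"  # hypothetical key, kept out of the client

def signed_media_url(path, user_id, expires):
    """Build a link that proves the server authorised this path/user/expiry."""
    msg = f"{path}:{user_id}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?user={user_id}&expires={expires}&sig={sig}"

def verify(path, user_id, expires, sig, now):
    """Recompute the signature and check it hasn't expired."""
    msg = f"{path}:{user_id}:{expires}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, sig) and now < expires
```

The signature is cheap to check on every request, so even the range requests for each chunk can be validated without a session lookup.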
I guess it’s probably why some adaptive streaming protocols (like HTTP Live Streaming) split the stream up into many 10-second chunks; it avoids tying up a Web process for a long time.
So it makes sense to try to send them in chunks on browsers which support this (and the Range header is a good indication of that).
That should be done with the header:
Bulb last edited by
My use case is PHP related (yes yes, I know)
Both Nginx and Apache httpd can do Ranges natively and/or use chunked encoding to send huge files, and can use sendfile(2) for doing all that with essentially no CPU use. Sending media files with PHP is TRWTF.
Yes, it would be nice if browsers handled a 206 by just coming back with a request for the rest, but because most people serve media with the web server directly, and those setups never need to send partial responses to a full request, the browsers were never tested for it.
As for requesting the size first, that would be an extremely idiotic thing to do. The browser does not need to know the size in advance, and round-trips are expensive.
I wouldn't mind if the browsers silently accepted 206's with partial content even when 'everything' was requested
Hm, somebody apparently even configures nginx to do just that. So either they have very special purpose or (which I would believe more) it actually does work and the PHP kludge you have does not send the right headers.
First of all, yes, I do know that the Range: 0- header is happening. I already checked this because I'm not a complete tool. Yes, I am supplying Accept-Ranges on the response as per the specification.
Why am I sending it via PHP? Because I'd sort of like to be doing authentication before serving files on the basis of having private files hidden away from all users.
I'd love it if I could somehow hand a cookie off to the server, do authentication on it and then serve it without having to dig into PHP at all but sucks to be me, I guess. I don't necessarily want to upload possibly sensitive files to a server where they can be accessed without any kind of authentication.
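For what it's worth, the exact wish expressed here - hand authentication to PHP but let the web server stream the file with native range support - is what nginx's X-Accel-Redirect (and Apache's mod_xsendfile X-Sendfile) were designed for. The thread doesn't mention it, so treat this as a hedged aside; paths and file names below are made up for illustration.

```nginx
# nginx: files under /var/media/ are only reachable via an internal
# redirect issued by the application, never by a direct client URL.
location /protected/ {
    internal;
    alias /var/media/;   # hypothetical location of the private files
}
```

The PHP side checks the cookie and then emits header('X-Accel-Redirect: /protected/song.mp3'); with no body; nginx intercepts that header and streams the file itself, with native Range handling and sendfile(2), so PHP never touches the bytes.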
As for the browser 'not needing to know the size first': what if it's a 1GB file? Thing is, the audio and video tags already expose preload options: no preloading, preloading metadata only, or preloading the entire file. In the latter two cases, Chrome and Firefox both start out with Range: 0- headers for audio, regardless of the file type as far as I can tell (since you can indicate the file type, to select mp3 vs ogg for example).
This is kind of my point: I see no reason why the file size couldn't have been made optional to supply in the spec, or, failing that, why browsers couldn't request a sane part of the file rather than everything.
I did manage to get Chrome to actually make the requests, in chunks of 512KB, but it would receive the first chunk (0-524287), then make a random request from approximately 600KB onwards, all the way up to the file length - and then complain about the content length being mismatched. That's in no small part because it's requesting the wrong parts of the file (it receives exactly what it asks for, but it's asking for things that make no sense).
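One detail worth pinning down when debugging mismatched lengths: HTTP byte ranges are inclusive, so a range start-end covers end - start + 1 bytes, and the first 512KB chunk is bytes 0-524287 with a Content-Length of 524288. A trivial check (the helper name is mine, not from the thread):

```python
def content_length_for_range(start, end):
    # An inclusive range "bytes start-end" covers end - start + 1 bytes;
    # an off-by-one here is exactly the kind of mismatch Chrome complains about.
    return end - start + 1
```

So the first 512KB chunk, bytes 0-524287, must be sent with Content-Length: 524288.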
Firefox just complains if you serve it a partial file like that.
It's irritatingly hard to search for things relating to this problem, but this bug report seems fairly informative; in particular, it says what Chromium is looking for from the audio server and notes that front-end caches may also be causing problems.
Going off and checking RFC 7233, I see that if you're handling a range (as opposed to declaring that you don't) then you have to respond with a 206 if things are successful, and a 416 if things fail. Anything else (when you'd do a 200 without the range processing) indicates that the client is dealing with a server that isn't really range-aware.
I suppose you could use the 416 to give the length of the file by supplying this header (assuming the file length is 12345):
Content-Range: bytes */12345
If you sent that back, the browser ought to say “aha! I can get ranges but not the one I asked for”. Yet my suspicion is that there's just a plain problem with browsers' implementations where they just decided that requesting everything was a better option.
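The decision being described can be written down directly - a sketch with a hypothetical `range_status` helper, following RFC 7233's rule that an unsatisfiable range gets a 416 carrying `Content-Range: bytes */size`, while a satisfiable one gets a 206:

```python
def range_status(parsed, size):
    """Choose status and Content-Range per RFC 7233.
    `parsed` is an inclusive (start, end) tuple, or None if the
    requested range could not be satisfied against `size` bytes."""
    if parsed is None:
        # 416: tell the client the full size so it can retry sensibly.
        return 416, {"Content-Range": f"bytes */{size}"}
    start, end = parsed
    return 206, {"Content-Range": f"bytes {start}-{end}/{size}"}
```

The `bytes */12345` form is the one venue the spec gives a server to say 'here's the length' on a failed range, which is exactly the trick being floated here.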
That RFC is both highly informative and confusing. As usual.
I've been deliberately avoiding caching on both sides since I'm trying to get a rugged implementation going before I start worrying about caching.
RFC 7233 is woefully vague; it doesn't actually define the behaviour whereby a browser requests 0- without specifying an end (i.e. everything), or whether a server is permitted to return only part of it. 416 primarily says 'the range you asked for is not satisfiable', but returning only the first 512KB is not satisfying the range requested either, and the spec doesn't define this case.
Yup, the idea is that if a server doesn't offer ranges, it's bound to return a 200 with full content. I did try with the 416 but Chrome just whined about it.
See, here's the fundamental problem: the server doesn't get to indicate Accept-Ranges: bytes until it actually returns a response - by which point it's already having to handle the very request that header describes!
This is the ideal case, as far as I'm concerned, for a pre-emptive HEAD request which would indicate the content size as well as whether the server will be offering ranges.
It's not even like the API exposes the option to request that. I can't even feed a value into it from JS or anything beyond the three very limited profiles for fetching.
I'm pretty stumped as to what should happen in that case too. I suspect some of it is due to flaky coding in browsers; failing to handle a server that has some limits because “it works with native Apache and nginx, and who would use anything else?” or something like that.
About the only thing that's clear is that you can send the file length encoded in the Content-Range header. Even if the dumb browsers ignore you…
Sending the length in Content-Range is fine - even if you only send partial content, that's still the proper venue for indicating what you're sending.
It doesn't help that some servers send 200 OK responses even when ranges are requested - and even when they indicate they would otherwise support ranges (related to that bug report).
I think the spec is not as well thought out as it could be, and it's like real-world use was never truly considered.
chubertdev last edited by
This type of thing makes my Heartbleed™.
No, there's no SSL involved but if there was, it wouldn't make any difference in practice.
chubertdev last edited by
Yeah, but IIRC, it was caused by requesting a content-length beyond the size of the actual content and getting non-authorized content instead of an error.
Yes, yes it was. I already handle Content-Range correctly.
VaelynPhi last edited by
Well, as far as I'm concerned, the specification is broken.
The specification yields basically the ideal solution to this but no browser implements it sanely.
Not to pick nits or anything, but you seem to be saying the spec is good but browser implementations are broken or at the very least really dumb, which contradicts your first statement.
IMHO, implementations of a decent spec being broken is the status quo since time immemorial, though there are plenty of examples of specs that are crap or need some TLC. (SVG, anyone?)
There are two specifications at work here.
On the one hand, there's the HTML5 specification, such as it is, which could afford the solution (indicate file size as part of the audio or video tags) - this specification is broken.
On the other hand, there's the HTTP/1.1 specification, which lays out how partial requests should work - except the browsers implement this with varying levels of badness. After hammering away at this I got Chrome to actually behave and not shit itself completely after requesting 'the entire damn file' and getting 512KB chunks back, but Firefox still refuses to do the same. There are methods in HTTP/1.1 for requesting a file's size and other metadata, but no browser implements them. No browser requests the size of something potentially large up front (which would make sense); they just request everything, in a form that theoretically should allow partial and incomplete returns to be handled safely, but they don't do either consistently or properly. The spec is good here, though it's confusing as sin to wade through.
So no, my statements aren't contradictory except when taken out of context.
HardwareGeek last edited by
spec is good here though it's confusing as sin to wade through.
I understand that you are using "good" here to mean "adequately specifies behavior." However, I would argue that "good" in a general sense and "confusing as sin to wade through" are mutually exclusive, since that confusion can lead to differing interpretations of the spec. At the very least, "confusing as sin to wade through" seriously undermines its "good"ness.
At the very least, "confusing as sin to wade through" seriously undermines its "good"ness.
Specifications have to be exact before they can be readable. If they're readable but inexact, you might as well be reading Spot The Dog Goes Shopping.
Yes, I'm referring to the fact that HTTP/1.1 adequately covers the behaviours required even though the extremely wordy form of all of the internet related specifications (and in general everything the IETF writes) is so convoluted that it's a nightmare to follow, even to the point where it could possibly be ambiguous because of the level of detail provided.
One can be exact without drowning in details.
If they're readable but inexact, you might as well be reading Spot The Dog Goes Shopping Markdown.