Proofreading Service needs Proofreading Service



  • I don't know much about these sorts of services, but a colleague forwarded this to me. I guess file this one under Spel Checkor. The original is at http://www.scribendi.com/quote.php but they've already fixed it (my colleague notified them of their mistake). In a minor WTF, my colleague apparently does not know how to do screen captures, because he appears to have printed the web page and scanned the printout back in before mailing it to me.

    It actually took me a while to find the goof because my brain was just correcting the mistake as I read. I just went right past it.

    Proofreading mistake 



  • Instead of a scanner and a wooden table, you used a fax machine!



  • Must've been written by an English-as-a-second-language speaker; they often make that mistake, in my experience!



  • It took me a while to find it; I had to read through it twice to see it. :P



  • They spelled résumé wrong.



  • Besides résumé, is another mistake "anyone... their"?



  • @bobday said:

    Besides résumé, is another mistake "anyone... their"?

    Yeah, so putting the cute accents into resume is nice, but hardly a major error. "Anyone... their" is something that most editors would catch: "anyone" is singular, "their" is plural. Still, that's not the most glaring error. There is a word missing in the last real sentence on the page (after "What you will receive:").



  • @bobday said:

    Besides résumé, is another mistake "anyone... their"?

    There is a singular "their" in the English language; we recently had a discussion about it.

    http://forums.worsethanfailure.com/forums/permalink/134568/134566/ShowThread.aspx#134566


     http://en.wikipedia.org/wiki/Singular_they



  • They also use a hyphen where they should be using a dash.



  • "... plus suggestions on how improve..."

    Anyway... This is supposed to be funny?

     

     



  • The real WTF is how that image manages to look terrible in both FF and IE, yet in Photoshop (or any app on my Mac), it looks much better...

     Edit: OK, it's a scaling issue, and it only looks OK in Photoshop at 50% or less zoom (WTF? even 51% looks terrible). It seems most PC apps (including browsers) use bad scaling algorithms (at least for monochrome images).
     



  • @mallard said:

    The real WTF is how that image manages to look terrible in both FF and IE, yet in Photoshop (or any app on my Mac), it looks much better...

     Edit: OK, it's a scaling issue, and it only looks OK in Photoshop at 50% or less zoom (WTF? even 51% looks terrible). It seems most PC apps (including browsers) use bad scaling algorithms (at least for monochrome images).
     

    Photoshop resamples only at 50%, 25%, etc., using the resampling algorithm set in the Prefs.

    IE6 does not resample.
    FFX 2 does not resample, but maybe there's an add-on.

    IE7 only resamples when you use the zoom, not when it uses its auto-fit in the window. (WTF?)

    Opera does not have auto-fit, and resamples when you zoom.

    FFX 3 will resample.

    Safari resamples.



  • @mallard said:

    The real WTF is how that image manages to look terrible in both FF and IE, yet in Photoshop (or any app on my Mac), it looks much better...

     Edit: OK, it's a scaling issue, and it only looks OK in Photoshop at 50% or less zoom (WTF? even 51% looks terrible). It seems most PC apps (including browsers) use bad scaling algorithms (at least for monochrome images).
     

    The real WTF is that the image was not resized before posting, or that the OP didn't just link to the full-size image from a thumbnail.

     

    IMO, browsers that resize images when the image size doesn't match the <img> tag size encourage laziness, and waste resources on resampling that should be used for other page rendering tasks.  An image-heavy page where the image sizes don't match the <img> tags will bloat the browser.  Now, resampling for page zoom is another issue, and an appropriate use.
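
    For what it's worth, pre-resizing before posting is a one-liner in most image libraries. A minimal sketch in Python with Pillow (the file names and target width are placeholders, not anything from the actual post):

    ```python
    # Minimal sketch: produce a correctly sized copy before posting, instead
    # of letting the browser scale the full-size original at render time.
    # Requires Pillow; "scan-fullsize.png" and MAX_WIDTH are placeholders.
    from PIL import Image

    MAX_WIDTH = 640  # whatever width the page layout actually calls for

    img = Image.open("scan-fullsize.png")
    if img.width > MAX_WIDTH:
        new_height = round(img.height * MAX_WIDTH / img.width)
        img = img.resize((MAX_WIDTH, new_height), Image.LANCZOS)
    img.save("scan-web.png")
    ```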
     



  • @purge said:

    IMO, browsers that resize images when the image size doesn't match the <img> tag size encourage laziness, and waste resources on resampling that should be used for other page rendering tasks.  An image-heavy page where the image sizes don't match the <img> tags will bloat the browser.  Now, resampling for page zoom is another issue, and an appropriate use.

    A simple resample/scale is a very quick image operation, even for sizeable images. The fact that FFX becomes a little slow on the scroll when it displays a redimensioned image is FFX's problem.

    In addition, a big image made smaller by HTML or CSS can reveal more detail as the user zooms in.



  • That's beside the point. IMNSHO, it's a good thing that current browsers are mostly crap at resizing. It encourages developers to supply images of the correct dimensions. Bandwidth still ain't free, you know.



  • The real WTF is that your friend obviously:

    1. Took a picture of the web page with a Polaroid,

    2. Then ran the picture through a fax machine (probably destroying the machine), so that he could fax the image to himself,

    3. And then he sent you the fax via snail mail.



  • @Monomelodies said:

    That's beside the point. IMNSHO, it's a good thing that current browsers are mostly crap at resizing. It encourages developers to supply images of the correct dimensions. Bandwidth still ain't free, you know.

    Bandwidth is cheap as hell, at least where I live. And like dhromed said, if browsers implemented proper scaling algorithms, a lot more fun/cool stuff would be possible with CSS/JS/HTML. For instance, you could make a really cool fractal explorer with just one image and a bit of JS/CSS (and a fat tube to download 10G worth of fractal image :P (I take my fractals seriously; I always expect them to screw up somewhere along the line)).

    Also, on a less cool note, web developers/designers don't like resizing pictures; it's something that should just be handled. Hmm, thinking about it, it would be pretty cool if Apache simply did it. I think it's quite possible to write a post-processor plugin thingy for Apache that checks this and resizes images appropriately.
     



  • @stratos said:

    Bandwidth is cheap as hell, at least where I live.

    Where I live, it's $40/month.  Cheap as hell would be under $5/month. 



  • Er, that depends on the kind of traffic you're dealing with. Standard accounts are cheap enough, but as soon as you exceed your bandwidth limit most providers start billing extra. It won't bankrupt anyone, but I'd hate to have to explain to my clients why *my* laziness is costing them money.

    Anyway, I used "free" more in the programmers' sense. Speaking of which, let Apache do it? How should Apache know what size an image is supposed to be? Besides that, processor time and memory ain't free either. You *really* don't want that on a high-traffic site, believe me, even if you cache generated images. Image modules tend to be rather bulky (and for good reason of course, there's a lot of stuff possibly going on in a variety of formats) and it would have to be loaded on every instance. For every image...



  • @Monomelodies said:

    Er, that depends on the kind of traffic you're dealing with. Standard accounts are cheap enough, but as soon as you exceed your bandwidth limit most providers start billing extra. It won't bankrupt anyone, but I'd hate to have to explain to my clients why my laziness is costing them money.

    Anyway, I used "free" more in the programmers' sense. Speaking of which, let Apache do it? How should Apache know what size an image is supposed to be? Besides that, processor time and memory ain't free either. You really don't want that on a high-traffic site, believe me, even if you cache generated images. Image modules tend to be rather bulky (and for good reason of course, there's a lot of stuff possibly going on in a variety of formats) and it would have to be loaded on every instance. For every image...

    I was just pitching an idea that popped into my head while writing, but I don't think processor time and memory consumption would be that bad. It could basically be a regex that looks for image tags, then checks whether each image has been processed already: if so, it can rewrite the img tag to refer to the cached image; if not, it can look at the height & width attributes of the tag and create a resized image (a rough sketch follows at the end of this post).

    But thinking about it, while such a thing would have been great to have ~5 years ago, its use today would probably be limited, because lots of stuff gets handled by CSS or JS now, and it would probably be too much of a hassle to also start parsing linked/inline CSS. Executing the JS to find out whether it messes with the images would probably fall into the nigh-impossible category.
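
    A very naive sketch of what I mean, in plain Python rather than an actual Apache module (the regex, cache directory and function name are all made up for illustration, and as said, anything sized by CSS or JS would sail right past it):

    ```python
    # Naive sketch of the "resize to match the <img> tag" idea: find <img>
    # tags that declare width/height, create a cached copy of the image at
    # that size, and point the tag at the copy. Purely illustrative; it
    # ignores CSS, JavaScript, relative-URL quirks and cache invalidation.
    import os
    import re
    from PIL import Image

    IMG_TAG = re.compile(
        r'<img[^>]*src="(?P<src>[^"]+)"[^>]*width="(?P<w>\d+)"[^>]*height="(?P<h>\d+)"[^>]*>',
        re.IGNORECASE,
    )

    def resize_declared_images(html, docroot, cache_dir="_resized"):
        os.makedirs(os.path.join(docroot, cache_dir), exist_ok=True)

        def rewrite(match):
            src, w, h = match.group("src"), int(match.group("w")), int(match.group("h"))
            source_path = os.path.join(docroot, src.lstrip("/"))
            cached_name = f"{w}x{h}_{os.path.basename(src)}"
            cached_path = os.path.join(docroot, cache_dir, cached_name)
            if not os.path.exists(cached_path):
                Image.open(source_path).resize((w, h), Image.LANCZOS).save(cached_path)
            # Point the tag at the resized copy; leave the rest of the tag alone.
            return match.group(0).replace(src, f"/{cache_dir}/{cached_name}")

        return IMG_TAG.sub(rewrite, html)
    ```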
     



  • @Monomelodies said:

    That's beside the point. IMNSHO, it's a good thing that current browsers are mostly crap at resizing. It encourages developers to supply images of the correct dimensions. Bandwidth still ain't free, you know.


    Ah, but these are different symptoms. We don't need "ugly image" to communicate the crapness of using a big, slow, lazy image. The image will be slow to load, and that's annoying and clear enough. Developers will be encouraged to supply correct images because they know that an oversized image is going to load less zippy.

    And the art of putting a design from Photoshop into HTML/CSS isn't entirely what I might call "development". It's not programming, really.

    @stratos said:
    Also, on a less cool note, web developers/designers don't like resizing pictures; it's something that should just be handled. Hmm, thinking about it, it would be pretty cool if Apache simply did it. I think it's quite possible to write a post-processor plugin thingy for Apache that checks this and resizes images appropriately.


    Mwoah. Are you talking about thumbnails? I don't think I'd like having the full-size image downloaded into a thumbnail size. You'll want a separate thumbnail image, or on-the-fly server resampling, so that the final page has dimensions that match the HTML/layout.



  • @dhromed said:

    Ah, but these are different symptoms. We don't need "ugly image" to communicate the crapness of using a big, slow, lazy image. The image will be slow to load, and that's annoying and clear enough. Developers will be encouraged to supply correct images because they know that an oversized image is going to load less zippy.

    Obviously, but with the current spread of broadband it's easy for the less optimisation-minded developers to forget. Besides, a 30k JPEG will load fast enough, but why serve that if you could serve a 2k thumbnail instead? Not really a problem for your average personal blog, but as soon as your site gets tens of thousands of hits a day these differences start to add up.

    @dhromed said:

    And the art of putting a design from Photoshop into HTML/CSS isn't entirely what I might call "development". It's not programming, really.

    That's why I called it "development", not "programming" ;-)



  • @dhromed said:

    Ah, but these are different symptoms. We don't need "ugly image" to communicate the crapness of using a big, slow, lazy image.

    Yes, we do. Most web developers I've known have been glorified Photoshop monkeys who simply can't grasp why they should care about "nerd" issues like bandwidth, rendering speed, etc. Ugly rendering is the only leverage we have against these people.



  • @stratos said:

    For instance, you could make a really cool fractal explorer with just one image and a bit of JS/CSS (and a fat tube to download 10G worth of fractal image :P (I take my fractals seriously; I always expect them to screw up somewhere along the line)).

    A really cool fractal explorer exists. It's called XaoS. And it doesn't take up masses of disc space. On my (32-bit) machine, it zooms in up to 10^17 times. So you're talking on the order of 10^34 bytes for your static image, just to match XaoS on ONE fractal. If you turned the entire moon into Blu-ray discs you'd probably have enough storage.
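
    The back-of-the-envelope arithmetic, for anyone who wants to check it (the figures below are rough, rounded assumptions, not measurements):

    ```python
    # Rough order-of-magnitude check of the claim above.
    from math import pi

    zoom = 1e17                  # XaoS's claimed maximum zoom on a 32-bit build
    pixels = zoom ** 2           # one stored pixel per on-screen pixel at full zoom
    bytes_needed = pixels * 1    # a stingy 1 byte per pixel -> ~1e34 bytes

    # Very rough "moon of Blu-ray discs" comparison: lunar radius ~1740 km,
    # a 12 cm disc ~1.2 mm thick, 50 GB per dual-layer disc.
    moon_volume = (4 / 3) * pi * (1.74e6) ** 3
    disc_volume = pi * 0.06 ** 2 * 1.2e-3
    moon_of_discs_bytes = (moon_volume / disc_volume) * 50e9

    print(f"image: {bytes_needed:.0e} B, moon of discs: {moon_of_discs_bytes:.0e} B")
    # image: 1e+34 B, moon of discs: 8e+34 B -- so yes, just about enough.
    ```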
     



  • @Zylon said:

    @dhromed said:

    Ah, but these are different symptoms. We don't need "ugly image" to communicate the crapness of using a big, slow, lazy image.

    Yes, we do. Most web developers I've known have been glorified Photoshop monkeys who simply can't grasp why they should care about "nerd" issues like bandwidth, rendering speed, etc. Ugly rendering is the only leverage we have against these people.

    Personally, I would have simply made the browser check to see if the actual image dimensions matched those quoted in the tag, and if not, displayed a baboon instead.



  • @asuffield said:

    @Zylon said:

    @dhromed said:

    Ah, but these are different symptoms. We don't need "ugly image" to communicate the crapness of using a big, slow, lazy image.

    Yes, we do. Most web developers I've known have been glorified Photoshop monkeys who simply can't grasp why they should care about "nerd" issues like bandwidth, rendering speed, etc. Ugly rendering is the only leverage we have against these people.

    Personally, I would have simply made the browser check to see if the actual image dimensions matched those quoted in the tag, and if not, displayed a baboon instead.

    The problem is that you then lose out on actual beneficial uses of image scaling, like the layout at getelementsby.com.



  • Personally, I think having your layout depend on supplied dimensions is bad design (thus no need to supply them). But that's just me, prolly.



  • @Monomelodies said:

    Personally, I think having your layout depend on supplied dimensions is bad design (thus no need to supply them). But that's just me, prolly.

    Interesting point. However, because I also want my page to load pretty, I prefer to define the dimensions when needed. If you don't, most browsers will first draw the page, then load the images and adjust the page to their proper dimensions as they arrive. This creates a very annoying effect where the layout changes constantly in the first few moments of loading the page. By defining height and width you can prevent this: the browser uses those attributes to lay out the page correctly from the start, making the load much less "jumpy".



  • @stratos said:

    @Monomelodies said:

    Personally, I think having your layout depend on supplied dimensions is bad design (thus no need to supply them). But that's just me, prolly.

    Interesting point. However, because I also want my page to load pretty, I prefer to define the dimensions when needed. If you don't, most browsers will first draw the page, then load the images and adjust the page to their proper dimensions as they arrive. This creates a very annoying effect where the layout changes constantly in the first few moments of loading the page. By defining height and width you can prevent this: the browser uses those attributes to lay out the page correctly from the start, making the load much less "jumpy".

    I tend to agree. I hate it when I have to chase a link down the page because the bloody browser is loading images up the top - for example, loading images in a table which increase the height of each row as they go, and I'm trying to click on the link in row 20.


    So yeah, I think it's just Monomelodies. :)

     



  • @stratos said:

    @Monomelodies said:

    Personally, I think having your layout depend on supplied dimensions is bad design (thus no need to supply them). But that's just me, prolly.

    Interesting point. However, because I also want my page to load pretty, I prefer to define the dimensions when needed. If you don't, most browsers will first draw the page, then load the images and adjust the page to their proper dimensions as they arrive. This creates a very annoying effect where the layout changes constantly in the first few moments of loading the page. By defining height and width you can prevent this: the browser uses those attributes to lay out the page correctly from the start, making the load much less "jumpy".

    Except that proper HTML will not, or very rarely, use images for layout/design bits.

    @Quinnum said:

    I tend to agree. I hate it when I have to chase a link down the page because the bloody browser is loading images up the top - for example, loading images in a table which increase the height of each row as they go, and I'm trying to click on the link in row 20.

    Oh yes, totally.

     



  • @dhromed said:

    Except that proper HTML will not, or very rarely, use images for layout/design bits.

    QED. :-) 

    @Quinnum said:

    I tend to agree. I hate it when I have to chase a link down the page because the bloody browser is loading images up the top - for example, loading images in a table which increase the height of each row as they go, and I'm trying to click on the link in row 20.

    Yup, that's why table-based layouts are also a bad idea ;-) And if I did have such a thing (I don't know, maybe on a MySpace-type site with lots of user thumbs) I'd use min-height (or height in IE6) on the surrounding element, since that's the one that should have the height; my point was that it's bad design to let it depend on the image, since in your example it's really a property of the containing element. If nothing else, image sizes might change in the future.

    But this is becoming quite a theoretical argument, of course ;-)



  • @dhromed said:

    @stratos said:

    @Monomelodies said:

    Personally, I think having your layout depend on supplied dimensions is bad design (thus no need to supply them). But that's just me, prolly.

    Interesting point. However, because I also want my page to load pretty, I prefer to define the dimensions when needed. If you don't, most browsers will first draw the page, then load the images and adjust the page to their proper dimensions as they arrive. This creates a very annoying effect where the layout changes constantly in the first few moments of loading the page. By defining height and width you can prevent this: the browser uses those attributes to lay out the page correctly from the start, making the load much less "jumpy".

    Except that proper HTML will not, or very rarely, use images for layout/design bits.



    I disagree; lots of new designs use big & wide visuals as the header of the page. Also, lots of small icons get used to accentuate certain parts of a website or special-purpose stuff. This is where you want to explicitly define the height and width.



  • @stratos said:

    @Monomelodies said:

    Personally, I think having your layout depend on supplied dimensions is bad design (thus no need to supply them). But that's just me, prolly.

    Interesting point. However, because I also want my page to load pretty, I prefer to define the dimensions when needed. If you don't, most browsers will first draw the page, then load the images and adjust the page to their proper dimensions as they arrive. This creates a very annoying effect where the layout changes constantly in the first few moments of loading the page. By defining height and width you can prevent this: the browser uses those attributes to lay out the page correctly from the start, making the load much less "jumpy".

    Of course, this is itself nothing more than a design flaw in the web. There's no particular reason why this behaviour should exist, and there are several obvious ways it could have been designed to eliminate it. Someday it may even be corrected, although probably not while Microsoft still has a stranglehold on the market.



  • @asuffield said:

    @stratos said:

    @Monomelodies said:

    Personally, I think having your layout depend on supplied dimensions is bad design (thus no need to supply them). But that's just me, prolly.

    Interesting point. However, because I also want my page to load pretty, I prefer to define the dimensions when needed. If you don't, most browsers will first draw the page, then load the images and adjust the page to their proper dimensions as they arrive. This creates a very annoying effect where the layout changes constantly in the first few moments of loading the page. By defining height and width you can prevent this: the browser uses those attributes to lay out the page correctly from the start, making the load much less "jumpy".

    Of course, this is itself nothing more than a design flaw in the web. There's no particular reason why this behaviour should exist, and there are several obvious ways it could have been designed to eliminate it. Someday it may even be corrected, although probably not while Microsoft still has a stranglehold on the market.

    What in the name of god does Microsoft have to do with how web browsers render HTML? The only way to completely avoid this problem is for browsers to not display anything until the page has finished loading. And that would suck. So that's why it's good practice to specify your image sizes whenever possible.



  • @stratos said:

    @dhromed said:
    Except that proper HTML will not, or very rarely, use images for layout/design bits.

    I disagree; lots of new designs use big & wide visuals as the header of the page. Also, lots of small icons get used to accentuate certain parts of a website or special-purpose stuff. This is where you want to explicitly define the height and width.

    Logos and icons aside, there's still a whole class of graphical elements that should not be present in HTML. 

    And I personally put icons, if placed next to a text label, in the background as well. Easier positioning + cleaner HTML. Single icons without accompanying text are usually images, though.

    The point is basically to have a page that is an understandable document without CSS. I re-structured my site recently to conform to this.
     



  • @Zylon said:

    @asuffield said:
    @stratos said:

    @Monomelodies said:

    Personally, I think having your layout depend on supplied dimensions is bad design (thus no need to supply them). But that's just me, prolly.

    Interesting point. However, because I also want my page to load pretty, I prefer to define the dimensions when needed. If you don't, most browsers will first draw the page, then load the images and adjust the page to their proper dimensions as they arrive. This creates a very annoying effect where the layout changes constantly in the first few moments of loading the page. By defining height and width you can prevent this: the browser uses those attributes to lay out the page correctly from the start, making the load much less "jumpy".

    Of course, this is itself nothing more than a design flaw in the web. There's no particular reason why this behaviour should exist, and there are several obvious ways it could have been designed to eliminate it. Someday it may even be corrected, although probably not while Microsoft still has a stranglehold on the market.

    What in the name of god does Microsoft have to do with how web browsers render HTML? The only way to completely avoid this problem is for browsers to not display anything until the page has finished loading. And that would suck. So that's why it's good practice to specify your image sizes whenever possible.

    You don't appear to have thought about how to solve this problem properly, instead of the current workaround (which consists of duplicating the image metadata by hand). The root cause is that web pages are assembled from multiple discrete files with data scattered all over the place, and some data which is essential to progressive display is just not delivered early enough. The obvious solution is to change the way in which these files are combined and delivered to the client (there's loads of variations on the theme, but a few minor changes to HTTP and a slightly smarter server would probably be the simplest way to get the job done). Obviously this requires the cooperation of the client, and in case you hadn't heard, Microsoft has control over the most widely deployed web client and is notorious for failing to implement new features.



  • Concrete examples, or you're just babbling.



  • If you can't be arsed to do your share of the thinking, I can't be arsed to explain things to you.



  • @asuffield said:

    @Zylon said:
    @asuffield said:

    Of course, this is itself nothing more than a design flaw in the web. There's no particular reason why this behaviour should exist, and there are several obvious ways it could have been designed to eliminate it. Someday it may even be corrected, although probably not while Microsoft still has a stranglehold on the market.

    What in the name of god does Microsoft have to do with how web browsers render HTML? The only way to completely avoid this problem is for browsers to not display anything until the page has finished loading. And that would suck. So that's why it's good practice to specify your image sizes whenever possible.

    You don't appear to have thought about how to solve this problem properly, instead of the current workaround (which consists of duplicating the image metadata by hand). The root cause is that web pages are assembled from multiple discrete files with data scattered all over the place, and some data which is essential to progressive display is just not delivered early enough. The obvious solution is to change the way in which these files are combined and delivered to the client (there's loads of variations on the theme, but a few minor changes to HTTP and a slightly smarter server would probably be the simplest way to get the job done). Obviously this requires the cooperation of the client, and in case you hadn't heard, Microsoft has control over the most widely deployed web client and is notorious for failing to implement new features.

    Oddly enough, IE already has the feature you're asking for, MHTML: the webpage and all images are in a single file, MIME-encoded with the images as attachments. I don't know that there are any web servers in the wild that deliver pages in this format, but IE does support it. That this has not been widely implemented perhaps indicates that there are shortcomings in this approach other than Microsoft having a "stranglehold on the market".

    Proof that it works: open the URL directly in IE; downloading it in Firefox and loading the file in IE works too, but proves nothing. Note this is probably a static .mht file, but the client can't tell the difference.



  • @Zylon said:

    What in the name of god does Microsoft have to do with how web browsers render HTML? The only way to completely avoid this problem is for browsers to not display anything until the page has finished loading. And that would suck. So that's why it's good practice to specify your image sizes whenever possible.

    You're missing the point, with all due respect. The main framework of your page should render correctly in all of the following cases:

    - fully
    - without images
    - without css
    - without javascript
    - without any combination of the above 3

    Whether or not it would still look good is beside the point (except for javascript, arguably, and depending on the type of site); it should still make sense. Actually, come to think of it, the <img> tag ought to be used as sparingly as possible. The few images you use it for (e.g., an illustration at the top of your blog post) won't mess up your page layout too much whilst loading.

    By all means, supply dimensions for those images, but they are derived from the source file, not the other way round. Your HTML (or CSS) in that case is doing the browser a favour; it's not behaviour the server should or could rely on.

    (What if some user agent has a default zoom for the visually impaired? Say, 200%? How should the server know?)
     



  • @Monomelodies said:

    @Zylon said:

    What in the name of god does Microsoft have to do with how web browsers render HTML? The only way to completely avoid this problem is for browsers to not display anything until the page has finished loading. And that would suck. So that's why it's good practice to specify your image sizes whenever possible.

    You're missing the point, with all due respect. The main framework of your page should render correctly in all of the following cases:

    - fully
    - without images
    - without css
    - without javascript
    - without any combination of the above 3

    Whether or not it would still look good is beside the point (except for javascript, arguably, and depending on the type of site); it should still make sense. Actually, come to think of it, the <img> tag ought to be used as sparingly as possible. The few images you use it for (e.g., an illustration at the top of your blog post) won't mess up your page layout too much whilst loading.

    By all means, supply dimensions for those images, but they are derived from the source file, not the other way round. Your HTML (or CSS) in that case is doing the browser a favour; it's not behaviour the server should or could rely on.

    (What if some user agent has a default zoom for the visually impaired? Say, 200%? How should the server know?)

    What if, what if... whatever.

    In the current "real world" (tm), usability and accessibility are add-ons, not core functionality.

    The basics of a good website are solid content and a visually pleasing design. I don't care that your blog can be read as easily in Lynx as it can be viewed with full Ajaxy hotness in a desktop browser.
    Real business websites and webshops and the like don't have the incentive to pay for all that stuff. Yes, there are clients that will, but mostly they're government or have some kind of moral background.

    I agree with what dhromed said above: the HTML should be clean, and optimal use should be made of CSS to ensure that a website is easy to adapt and extend. An incidental effect of this is that your website will be pretty usable even without CSS. However, the stuff you list is fluff and special-request only. 99.95%* of the internet-viewing public has JavaScript support; JavaScript regression is IMHO pointless fluff.

    And believe it or not, images ARE content. They are part of what makes a website nice to visit.

    * statistics courtesy of my thumb



  • @stratos said:

    99.95%* of the internet-viewing public has JavaScript support; JavaScript regression is IMHO pointless fluff.

    * statistics courtesy of my thumb  

    Even if it were a statistic from a website that relies on JavaScript being available and enabled, the "self-fulfilling prophecy" effect would still render the statistic worthless.

    If a website only works well with browser X, most visitors will use X, because Y users will go away immediately and not come back.



  • @Random832 said:

    Oddly enough, IE already has the feature you're asking for, MHTML: the webpage and all images are in a single file, MIME-encoded with the images as attachments. I don't know that there are any web servers in the wild that deliver pages in this format, but IE does support it. That this has not been widely implemented perhaps indicates that there are shortcomings in this approach other than Microsoft having a "stranglehold on the market".

    As I understand it, it doesn't quite do what I described: it just concatenates the images onto the end. In order for this to work properly in the progressive style, it needs to send the image headers first and the image bodies later. That way the browser can do the layout while waiting for the (large, slow) image bodies to load.

    Right general idea, but not quite taken far enough to be useful. 



  • @asuffield said:

    @Random832 said:

    Oddly enough, IE already has the feature you're asking for, MHTML: the webpage and all images are in a single file, MIME-encoded with the images as attachments. I don't know that there are any web servers in the wild that deliver pages in this format, but IE does support it. That this has not been widely implemented perhaps indicates that there are shortcomings in this approach other than Microsoft having a "stranglehold on the market".

    As I understand it, it doesn't quite do what I described: it just concatenates the images onto the end. In order for this to work properly in the progressive style, it needs to send the image headers first and the image bodies later. That way the browser can do the layout while waiting for the (large, slow) image bodies to load.

    Right general idea, but not quite taken far enough to be useful. 

    I think it's a solution for something that isn't a problem. The img tag has a height and a width attribute for a reason; they simply aren't the same information as the image's width and height (despite what some people would wish).

    Although it would be a nice fallback detection mechanism for when no height and width are defined. And since most image formats give you the dimensions in the header, you could probably implement it without changing a thing about how web pages are served: first request the images, and after reading the header, apply the height and width to the layout while waiting for the rest of the data.
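
    The dimensions really do sit in the first few bytes of most formats. A sketch for PNG, whose IHDR chunk always follows the 8-byte signature (PNG only; JPEG and GIF would each need their own little parser, and the file name is just a placeholder):

    ```python
    # Sketch: read width/height from the first 24 bytes of a PNG, roughly
    # what a browser could do after receiving only the start of the response.
    # PNG layout: 8-byte signature, then the IHDR chunk (4-byte length,
    # 4-byte type "IHDR", 4-byte big-endian width, 4-byte big-endian height).
    import struct

    def png_dimensions(first_bytes: bytes):
        if len(first_bytes) < 24 or first_bytes[:8] != b"\x89PNG\r\n\x1a\n":
            return None  # not a PNG, or not enough data received yet
        if first_bytes[12:16] != b"IHDR":
            return None
        width, height = struct.unpack(">II", first_bytes[16:24])
        return width, height

    with open("some-image.png", "rb") as f:      # placeholder file name
        print(png_dimensions(f.read(24)))        # e.g. (640, 480)
    ```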



  • @pacohope said:

    he appears to have printed the web page and scanned the printout back in before mailing it to me.

    The image looks far too straight and orderly to be a scan-and-print. Everything's pixel-perfect and the dithering is not at all aliased. I would guess he used some form of "print to file" tool, maybe PDF Creator set to output images instead of PDFs.

    (anybody who hasn't already: right-click the image and View Image to see the full-size, non-crappified one)



  • @stratos said:

    First request the images, and after reading the header, apply the height and width to the layout while waiting for the rest of the data.

    The problem with that idea is that many web servers are configured to limit multiple connections from the same source. So if the browser tries to fetch all the images near-simultaneously, the server may simply refuse to serve more than one at a time, negating the idea. The browser could abort downloading the image after receiving the header, but even so, it would still be making more requests than it should.

    Besides, in general, the client shouldn't have to kludge around the server's mistakes. The website should have height and width in its img tags. An HTML editor that could add the attributes automatically would be a good idea IMHO. (Does one exist?)
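
    For what it's worth, the tooling side is simple enough that a small script could do it over a whole site. A rough sketch of the idea (the regexes and paths are deliberately simplistic; a real tool would want a proper HTML parser):

    ```python
    # Sketch: add width/height attributes to <img> tags that lack them, using
    # the dimensions of the referenced files. Assumes local, relative src
    # paths; remote images and anything the regex can't see are left alone.
    import os
    import re
    from PIL import Image

    IMG_TAG = re.compile(r"<img\b[^>]*>", re.IGNORECASE)
    SRC_ATTR = re.compile(r'src="([^"]+)"', re.IGNORECASE)

    def add_dimensions(html: str, docroot: str) -> str:
        def fix(match):
            tag = match.group(0)
            if "width=" in tag.lower() or "height=" in tag.lower():
                return tag  # already specified; leave it alone
            src = SRC_ATTR.search(tag)
            if not src:
                return tag
            path = os.path.join(docroot, src.group(1).lstrip("/"))
            try:
                width, height = Image.open(path).size
            except OSError:
                return tag  # missing, remote, or unreadable image; skip it
            attrs = f' width="{width}" height="{height}"'
            if tag.endswith("/>"):
                return tag[:-2].rstrip() + attrs + " />"
            return tag[:-1] + attrs + ">"
        return IMG_TAG.sub(fix, html)
    ```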



  • @m0ffx said:

    @stratos said:

    First request the images, and after reading the header, apply the height and width to the layout while waiting for the rest of the data.

    The problem with that idea is that many web servers are configured to limit multiple connections from the same source. So if the browser tries to fetch all the images near-simultaneously, the server may simply refuse to serve more than one at a time, negating the idea. The browser could abort downloading the image after receiving the header, but even so, it would still be making more requests than it should.

    Besides, in general, the client shouldn't have to kludge around the server's mistakes. The website should have height and width in its img tags. An HTML editor that could add the attributes automatically would be a good idea IMHO. (Does one exist?)

    It's not really a server mistake, because the server 'serves' files. It doesn't even really understand what it's serving, nor should it really care.

    You could do it at the server level, though; it wouldn't be so hard for an Apache module to search for image tags and their sources and put the height & width in the HTML. Although doing that would give you some problems to solve around performance and such.

    But I think this is something that should be solved at the client level, because it's an HTML-processing thing. The other option is to change or modify one of the protocols, perhaps with an HTTP header, like the meta information in email headers: "X-IMAGEWIDTH" and "X-IMAGEHEIGHT" (a toy sketch follows at the end of this post). I don't know offhand whether HTTP allows custom headers, so the spec might have to be adjusted for that.

    Browsers already try to download images in parallel, so limiting connections to only one would already be a problem. The limit should just be something sensible, like perhaps 10 or 20 connections at a time: at that level a DoS attack is less likely to be effective, and a DDoS won't be stopped by limiting connections anyway, but the user experience is still that of a page that seems to load faster (taking into account that the average website shouldn't really have more than that many remote entities anyway, special-purpose websites like stock image sites or Flickr aside).

    Having the editor do it is a matter of taste: I would find it annoying; perhaps someone else wouldn't.
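
    (For what it's worth, HTTP does allow extra headers; recipients simply ignore ones they don't recognise, and the old convention was to prefix experimental ones with X-. A toy sketch of the idea as a WSGI middleware, using the made-up X-IMAGEWIDTH / X-IMAGEHEIGHT names from above; no browser actually reads these, of course.)

    ```python
    # Toy sketch: WSGI middleware that adds the hypothetical X-IMAGEWIDTH /
    # X-IMAGEHEIGHT headers (names invented in the post above; nothing reads
    # them) to responses that serve local image files.
    import os
    from PIL import Image

    class ImageSizeHeaders:
        def __init__(self, app, docroot):
            self.app = app
            self.docroot = docroot

        def __call__(self, environ, start_response):
            path = os.path.join(self.docroot,
                                environ.get("PATH_INFO", "").lstrip("/"))

            def patched_start_response(status, headers, exc_info=None):
                if path.lower().endswith((".png", ".jpg", ".jpeg", ".gif")) \
                        and os.path.isfile(path):
                    try:
                        width, height = Image.open(path).size
                        headers = headers + [("X-IMAGEWIDTH", str(width)),
                                             ("X-IMAGEHEIGHT", str(height))]
                    except OSError:
                        pass  # unreadable image; serve it without the headers
                return start_response(status, headers, exc_info)

            return self.app(environ, patched_start_response)
    ```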



  • @asuffield said:

    @Random832 said:

    Oddly enough, IE already has the feature you're asking for, MHTML: the webpage and all images are in a single file, MIME-encoded with the images as attachments. I don't know that there are any web servers in the wild that deliver pages in this format, but IE does support it. That this has not been widely implemented perhaps indicates that there are shortcomings in this approach other than Microsoft having a "stranglehold on the market".

    As I understand it, it doesn't quite do what I described: it just concatenates the images onto the end. In order for this to work properly in the progressive style, it needs to send the image headers first and the image bodies later. That way the browser can do the layout while waiting for the (large, slow) image bodies to load.

    Right general idea, but not quite taken far enough to be useful. 

    Yes, when you save a webpage as a file, the HTML is first. But there's no reason a server couldn't put the images first and have it still work in IE. My point was: IE has support for this feature. It's not IE's fault that web servers don't use it. It would require no further action from the developers of IE to allow web servers to do what you're suggesting.



  • Just realized what you're actually saying - well, there's no inherent reason the servers couldn't just include the dimensions of each image along with the HTML file, rather than having to serve up the whole header at that time. Maybe the dimensions could be included in the img tag, as some sort of attribute.



  • @Random832 said:

    Just realized what you're actually saying - well, there's no inherent reason the servers couldn't just include the _dimensions_ of each image along with the HTML file, rather than having to serve up the whole header at that time. Maybe the dimensions could be included in the img tag, as some sort of attribute.

    Precisely. There's no reason why it couldn't be done, and there are dozens of variations on how you could do it (changes to html, to file packaging, to http, whatever). It just hasn't been done.

