Credit card number autocompletion



  • Today, I was buying a textbook, and I had to go back and change the delivery option. This made me re-enter my credit card information. Fortunately, I didn't have to pull my credit card out again to read the numbers: the browser had helpfully remembered them and offered to fill them in.

    Isn't there some nonstandard hackamajig to tell browsers to not autocomplete sensitive fields?


  • Considered Harmful

    It will be standardized in HTML5.

    I had a similar problem with Firefox while developing an Edit Profile page.

    It was laid out as such:

    -[ Personal Information ] ---
    First Name: [_______]
    Last Name: [_______]
    ... snip ...
    Phone: [_______]
    

    -[ [x] Change Password ] ---
    New Password: [_______]
    New Password (confirm): [_______]

    The damned browser kept filling in the username in the phone field (even overwriting the pre-populated value there) and the password in the next box down, despite the fact that it was in a different fieldset and even disabled until you ticked the checkbox. That in turn would kick off the validator saying the two passwords didn't match.

    And don't get me started on the Google Toolbar users complaining about yellow textboxes...



  • It's also hell in several administrative interfaces, as you sometimes have a password box (e.g. for protecting a subforum) and browsers fill your main password in there.

    Good thing HTML5 will contain it.


  • Considered Harmful

    Personally, I've come to terms with adding in browser extension attributes post-load via DOM as an acceptable alternative to non-conformant inline attributes or custom DTDs.

    $( 'input[name="acct"],select[name="exp-mo"],select[name="exp-yr"],input[name="cvv"]' )
    .attr( { autocomplete: 'off' } );

    Problem solved, validators are happy.

    …and if you use NoScript, you've made your bed: expect it to break.



  • @joe.edwards said:

    Personally, I've come to terms with adding in browser extension attributes post-load via DOM as an acceptable alternative to non-conformant inline attributes or custom DTDs.

    $( 'input[name="acct"],select[name="exp-mo"],select[name="exp-yr"],input[name="cvv"]' )
    .attr( { autocomplete: 'off' } );

    Problem solved, validators are happy.

    …and if you use NoScript, you've made your bed: expect it to break.

    You could just not give a crap about validation.

    The problem with adding stuff like this in post-load is that, on a ton of sites, it's easy to fill in the form before the load event even fires. This really annoys me (for example) on sites that have those search fields pre-filled with something like "type in a movie or actor name", and so you click in it and start typing but the text doesn't clear because you're typing before the load event fires, and so the handler that clears the example text hasn't been installed yet. Then you hit "enter" and you get a search like "type in a mokeanu reevesvie or actor name" which of course returns the wrong results. Leading to a fucking terrible user experience. (Naming names: Netflix.)

    Twitter has an awful one, also, where if you hit the Login button before the page is fully loaded, instead of logging you in it'll take you to a *second* static-HTML login page. If your internet connection is buggy (say, I'm on a commuter train with spotty wi-fi), this second login page won't load properly, and if you hit refresh to load it you'll get a 500 error. Which is freakin' annoying, and an even *worse* user experience, especially since on a slow internet connection it's almost impossible to tell whether Twitter's load event has already fired or not. (It fires long after all the content looks rendered.)

    I blame the rise of "not really buggy, but kind of does a lot of stuff wrong" jQuery for all of this. Also web developers who only test their site on super-fast office internet connections and don't eat their own fucking dogfood.

    In short, don't do form DOM stuff at load unless:
    1) You're adding the field itself during the load event (i.e. the user can't interact with it before because it doesn't exist on the page)
    2) The thing you're adding doesn't affect the functionality of the form

    Don't rely on "eh, the page is small, it'll load fast enough", because that won't help people in the "spotty wifi on commuter train" scenario. And unless you take option 1, which is stupid because it completely breaks your site for people with JS off, you can't solve this problem and still validate. Tough crap.
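
    One way around the Netflix-style failure mode, for what it's worth, is to not wait for load (or even DOM ready) at all: a handler delegated from document, registered in an inline script in the <head>, applies to fields that haven't even been parsed yet. A rough sketch only - it assumes the server puts the hint text in a made-up data-hint attribute as well as in the value, and a browser that fires focusin (jQuery's delegated focus handling papers over the ones that don't):

    // Inline script placed in <head>, before any of the form markup.
    // Delegating from `document` means this applies to inputs that don't
    // exist yet, so even a very fast typist can't beat it to the field.
    document.addEventListener('focusin', function (e) {
        var el = e.target;
        if (el.tagName === 'INPUT' && el.value === el.getAttribute('data-hint')) {
            el.value = ''; // clear the example text the instant the field is focused
        }
    }, false);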


  • Considered Harmful

    True, it's rare that I'm not on a connection of at least 10 Mbit. Note that jQuery used properly should hook the DOM ready event, not onload, which means it runs as soon as the complete document is parsed rather than when all resources are loaded. I hardly even see a flicker when the ready events run; they seem to fire in unison with the CSS loading. I use one script file and one CSS file for an entire site, so they get cached on the first page load and don't need to be fetched again for the rest of the session.

    Any watermarked textbox that behaves as you describe is simply doan it rong. The watermark text needs to be set after DOM ready, and then only if the textbox is already blank and is not focused. Blur resets to the watermark if it is empty, focus and form submit set the textbox to blank iff it contains the original watermarked text. Done this way, I have never seen it misbehave.
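
    A minimal jQuery sketch of that scheme - the .search-box selector, the hint string, and the watermarked class are all invented for illustration:

    var HINT = 'type in a movie or actor name';

    $(function () {
        var $box = $('input.search-box');
        var $form = $box.closest('form');

        // Plant the watermark only if the field is blank and the user hasn't
        // already clicked into it before DOM ready fired.
        if ($box.val() === '' && $box[0] !== document.activeElement) {
            $box.val(HINT).addClass('watermarked');
        }

        // Focus: clear the field iff it still contains the original hint.
        $box.focus(function () {
            if (this.value === HINT) {
                $(this).val('').removeClass('watermarked');
            }
        });

        // Blur: put the hint back if the user left the field empty.
        $box.blur(function () {
            if (this.value === '') {
                $(this).val(HINT).addClass('watermarked');
            }
        });

        // Submit: never send the hint text to the server as if it were a query.
        $form.submit(function () {
            if ($box.val() === HINT) {
                $box.val('');
            }
        });
    });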



  • @joe.edwards said:

    Note that jQuery used properly should hook on the DOM ready event, not onload, which means it runs as soon as the complete document is parsed rather than when all resources are loaded.
     

    .ready() fires on DOM complete, which is quite quick.
    .load() fires on page + resources complete, which is often a good deal later than DOMContentLoaded.

    I discovered the clear distinction between these when writing a script that fits images to a box onload, and Chrome kept telling me that the images were 0×0, thus failing to scale the images.
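
    For the record, a sketch of the difference (the #gallery selector and the fitImageToBox helper are made up):

    // ready: the HTML is parsed, but the images may not be downloaded yet,
    // so their reported dimensions can still be 0×0 here.
    $(document).ready(function () {
        $('#gallery img').each(function () {
            console.log('ready:', this.width, this.height); // often 0×0
        });
    });

    // load: fires only after all resources (images, CSS, iframes) are in,
    // so the real dimensions are available for scaling. (Old-style jQuery
    // spelling; in current versions you'd write $(window).on('load', ...).)
    $(window).load(function () {
        $('#gallery img').each(function () {
            console.log('load:', this.width, this.height); // actual size
            // fitImageToBox(this); // hypothetical scaling helper would go here
        });
    });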

     

    It's said that much slowness can also come from the number of connections. I even see some sites use a php/asp merger script to combine the multitude of CSS and JS into a single file before writing the LINK or SCRIPT tag. I have no definitive benchmarks on whether this actually makes a difference. After all, bittorrent creates and destroys thousands upon thousands of connections to yank your favourite porn movie, so as far as my network expertise goes (which is rather little) that casts doubt on the "many connections = slow" idea.



  •  @dhromed said:

    It's said that much slowness can also come from the number of connections. I even see some sites use a php/asp merger script to combine the multitude of CSS and JS into a single file before writing the LINK or SCRIPT tag. I have no definitive benchmarks on whether this actually makes a difference. After all, bittorrent creates and destroys thousands upon thousands of connections to yank your favourite porn movie, so as far as my network expertise goes (which is rather little) that casts doubt on the "many connections = slow" idea.

    As far as my experience goes (which, I admit, is rather little as well), it's not so much the number of connections but the number of resources. Downloading a resource can be quite fast if you have a good connection, but in HTTP you still need a full packet round trip just to request a resource. If you have many small resources (e.g. CSS or JS snippets) then that overhead can seriously slow you down. Add to that the fact that most browsers still have a limit on how many connections they will hold open simultaneously (I think something between 4 and 8 per server). This means that you can request at most 8 resources (per server) in parallel and have to put all other resources into a queue. And then there are still some servers from the middle ages that require that you tear down and re-establish the connection after each resource. And that doesn't even take redirects into account...

    Compare that with bittorrent, where the resource you want is usually huge, where you load from hundreds of connections in parallel as opposed to 8, and where I think there are also a lot of complicated optimisations to squeeze the most out of a single connection.
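
    Back-of-the-envelope, the queueing effect looks something like this (all the numbers are made-up assumptions, and it ignores bandwidth, keep-alive quirks and pipelining entirely):

    // Rough lower bound on load time when requests are serialized onto a
    // limited pool of connections: every "wave" of requests costs one round trip.
    function roughLoadTime(resourceCount, parallelConnections, rttMs) {
        var waves = Math.ceil(resourceCount / parallelConnections);
        return waves * rttMs;
    }

    roughLoadTime(40, 8, 100); // 40 small CSS/JS snippets -> 500 ms in round trips alone
    roughLoadTime(2, 8, 100);  // the same content merged into 2 bundles -> 100 ms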



  • @joeyadams said:

    Isn't there some nonstandard hackamajig to tell browsers to not autocomplete sensitive fields?

    I want to get the discussion back to this. A couple of years ago, I entered my CC in a friend's browser to help him with some subscription; we were both surprised when the number showed up there, remembered from the previous time I'd helped him.

    This is one of my best friends, so in this case I didn't mind too much (but friends fall out), but I REALLY don't want this to happen on a more public PC (say, in a hotel lobby).



  • Why the hell would you even [b]think[/b] of putting private information like a CC into a public PC in the first place? Form caching in the browser is the last thing you have to worry about; those PCs are usually so rife with spyware that you're guaranteed to be sharing everything you're doing with who knows who.



  • @dhromed said:

    @joe.edwards said:

    Note that jQuery used properly should hook on the DOM ready event, not onload, which means it runs as soon as the complete document is parsed rather than when all resources are loaded.
     

    .ready() fires on DOM complete, which is quite quick.
    .load() fires on page + resources complete, which is often a good deal later than DOMContentLoaded.

    I discovered the clear distinction between these when writing a script that fits images to a box onload, and Chrome kept telling me that the images were 0×0, thus failing to scale the images.

    The sheer number of huge-name sites that get this wrong tells me that there's something terribly wrong with:

    1) How they learned to use jQuery (sites that get this wrong are nearly universally jQuery sites)

    2) Their QA process*

    I mean, we're not talking about Mom and Pop Flowers, Inc. We're talking about fucking Twitter and Netflix, sites you'd expect to be on the leading edge of web usability. And they're wrong wrong wrong.

    ASTERISK: No modems in the office? Of course Fiddler can simulate a modem but, needless to say, all these trendy douche sites are made by Linux fans on Linux, which doesn't have good tools for web development like Fiddler. Not that that stops Linux fans from saying Linux is good for web development-- hah!

    @dhromed said:

    It's said that much slowness can also come from the number of connections. I even see some sites use a php/asp merger script to combine the multitude of CSS and JS into a single file before writing the LINK or SCRIPT tag.

    It can matter if your site is on SSL and you don't have a hardware SSL appliance. But... mostly bunk. The slow part is (or can be) the handshake, not the number of connections.

    I mean, in theory it's helpful, because most browsers limit themselves to two connections per domain, but a modem user can't download more than 2 files at once anyway, and a broadband user is probably loading the site fast enough that two connections is plenty. If you were in some weird no-man's-land, like a 128k connection, this might be more significant.

    @dhromed said:

    After all, bittorrent creates and destroys thousands upon thousands of connections to yank your favourite porn movie, so as far as my network expertise goes (which is rather little) that casts doubt on the "many connections = slow" idea.

    Yeah, but Bittorrent doesn't do SSL handshakes. And it leaves modem users in the dust entirely, whereas the web should not.



  • @dhromed said:

    It's said that much slowness can also come from the number of connections. I even see some sites use a php/asp merger script to combine the multitude of CSS and JS into a single file before writing the LINK or SCRIPT tag. I have no definitive benchmarks on whether this actually makes a difference. After all, bittorrent creates and destroys thousands upon thousands of connections to yank your favourite porn movie, so as far as my network expertise goes (which is rather little) that casts doubt on the "many connections = slow" idea.

    CMSs sometimes do this. I know that Drupal does, and it speeds up the experience quite a bit. It even caches different combinations of JS/CSS files, so that you can have some files loaded on certain pages but not others, and you still only have one download per page for JS and one for CSS.

    And IE6 was a right bastard (no surprise!); it wouldn't even load our Drupal pages correctly, because it has a limit of 16 or 18 stylesheets, and due to 3rd-party module styles we went over that. Merging CSS solved that.
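
    Not how Drupal actually implements it, but the general shape of such an aggregator is simple enough - hash the list of files, concatenate them once, and hand back a single cached bundle. (Paths and names here are made up, and it's sketched in Node rather than PHP.)

    var fs = require('fs');
    var crypto = require('crypto');

    // Returns the path of one bundled stylesheet for this combination of
    // files, building and caching the bundle on first use.
    function aggregateCss(files, cacheDir) {
        var key = crypto.createHash('md5').update(files.join('|')).digest('hex');
        var bundle = cacheDir + '/' + key + '.css';
        if (!fs.existsSync(bundle)) {
            var css = files.map(function (f) {
                return fs.readFileSync(f, 'utf8');
            }).join('\n');
            fs.writeFileSync(bundle, css);
        }
        return bundle; // emit one <link> tag pointing at this file
    }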



  • @blakeyrat said:

    ASTERISK: No modems in the office? Of course Fiddler can simulate a modem but, needless-to-say, all these trendy douche sites are made by Linux fans in Linux, which doesn't have good tools for web development like Fiddler. Not that that stops Linux fans from saying Linux is good for web development-- hah!
    Weh? Google says that [url=http://sourceforge.net/projects/nettool/]you're[/url] [url=http://www.parosproxy.org/]wrong[/url], but that sort of manipulation is done better in the browser anyway, imo, via Firefox extensions or whatever. I don't blame you for not understanding the Linux user's mindset; after all, I can hardly understand yours. But dropping inflammatory remarks like that makes me wonder if I'm just falling for troll bait...



  • @JamesKilton said:

    Why the hell would you even think of putting private information like a CC into a public PC in the first place?

    How about when you're stranded somewhere (say, because of volcanic ash over Iceland?) without a laptop or a working mobile phone or similar, and you need to make some kind of payment that can only be made online, by CC?



  • @PSWorx said:

    This means that you can request at most 8 resources (per server) in parallel and have to put all other resources into a queue. And then there are still some servers from the middle ages that require that you tear down and re-establish the connection after each resource. And that doesn't even take redirects into account...
     

    Actually, HTTP/1.1 allows the browser to request multiple resources over a single connection without waiting for the first reply to arrive at the client. It's called HTTP pipelining, and it saves some round-trip time even when using only one HTTP connection.

     

    (However, not all servers support pipelining, so the browser can only use it after it has received a response indicating that the server supports it - usually the response for the HTML page itself - after which all images/Flash files/CSS/JS/... can be requested at the same time.)
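
    As a toy illustration of what pipelining means on the wire: two requests written back-to-back on one TCP connection before any response has come back. The host and paths are placeholders, and plenty of servers will choke on this, which is rather the point of the caveat above.

    var net = require('net');

    var socket = net.connect(80, 'example.com', function () {
        // Both requests go out immediately, without waiting for the first response.
        socket.write('GET /style.css HTTP/1.1\r\nHost: example.com\r\n\r\n' +
                     'GET /script.js HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n');
    });

    // A server that honours pipelining sends the responses back in request order.
    socket.on('data', function (chunk) {
        process.stdout.write(chunk);
    });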



  • @joeyadams said:

    Isn't there some nonstandard hackamajig to tell browsers to not autocomplete sensitive fields?

    I remember my experience with that attribute, about a year ago, having quite a different tone.

    I used to use Blogger for my StackOverflow account OpenID. Since I automatically cleared my cookies after every session back then, every time I wanted to authenticate via Blogger, I'd have to log in. The problem is that Blogger, in what I believe to be "security"*, disables autocomplete on the password field - on a simple login form, of all things - so I had to type my (exceedingly long) Google password in every time.

    Eventually, I found out that if you go to the main Blogger site in another tab, log in there, and then refresh the OpenID login page, it works just as well. The difference with this method? The main Blogger site allows passwords to be filled in.

    In conclusion, we've got a login field, the kind of field that password filling works on, with autocomplete blocked, but if you go to another page, you get a login to the same service without this stupid limitation.

    * Not to be confused with security. Handy reference: Using SSL is security. ROT13ing the input using JavaScript before submitting it** is "security".
    ** Blogger doesn't do this, but my bank used to (I don't remember if they still do or not).
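
    For the record, the kind of "security" being mocked is about this sophisticated (a generic ROT13, not whatever the bank actually shipped):

    // ROT13 "encryption": trivially reversible by anyone who can read the page
    // source, so it adds exactly nothing over sending the plaintext.
    function rot13(s) {
        return s.replace(/[a-zA-Z]/g, function (c) {
            var base = c <= 'Z' ? 65 : 97;
            return String.fromCharCode((c.charCodeAt(0) - base + 13) % 26 + base);
        });
    }

    rot13('hunter2'); // "uhagre2" - and an attacker just runs rot13() on it again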



  • @Juifeng said:

    Actually, HTTP/1.1 allows the browser to request multiple resources over a single connection without waiting for the first reply to arrive at the client. It's called HTTP pipelining, and it saves some round-trip time even when using only one HTTP connection.

    (However, not all servers support pipelining, so the browser can only use it after it has received a response indicating that the server supports it - usually the response for the HTML page itself - after which all images/Flash files/CSS/JS/... can be requested at the same time.)

     

    Yes, the problem is that there are too many servers out there that don't support it, and that it's extremely difficult to reliably determine whether a server supports pipelining or not. That's why, I think, every browser uses its own heuristics to find that out, and most only use pipelining in a fraction of all cases.



  •  IMHO, Javascript should only be used to enhance a website after everything else is finished and working 100%, and absolutely should NEVER be used for anything critical to functionality unless it is absolutely impossible to do without it.


  • Considered Harmful

    Greasemonkey can attach the autocomplete attribute to any field on any site you wish, with very little code required. I customize all the sites I frequent in small, convenience-enhancing ways, like adding keyboard shortcuts for pagination, or AJAX load-the-next-page-on-this-one, or automatically hiding posts from BlakeyRat<add>SpectateSwamp</add>.
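
    A minimal userscript along those lines might look like this (the @include pattern and field names are placeholders, and autocomplete="off" is only a hint that the browser is free to ignore):

    // ==UserScript==
    // @name     No autocomplete on payment fields
    // @include  https://store.example.com/checkout*
    // ==/UserScript==

    // Greasemonkey runs this once the DOM is available, so the fields exist.
    var fields = document.querySelectorAll(
        'input[name="acct"], input[name="cvv"], input[type="password"]');

    for (var i = 0; i < fields.length; i++) {
        fields[i].setAttribute('autocomplete', 'off');
    }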



  • @Master Chief said:

     IMHO, Javascript should only be used to enhance a website after everything else is finished and working 100%, and absolutely should NEVER be used for anything critical to functionality unless it is absolutely impossible to do without it.

     

    I'm sorry but please update your version of the internet. You still seem to be running version 1.0. There are some important new technologies you should get to know about.



  • @blakeyrat said:

    The pure quantity of huge-name sites that get this wrong tells me that there's something terribly wrong with:

    1) How they learned to use jQuery (sites that get this wrong are nearly universally jQuery sites)

    2) Their QA process*

    I mean, we're not talking about Mom and Pop Flowers, Inc. We're talking about fucking Twitter and Netflix, sites you'd expect to be on the leading edge of web usability. And they're wrong wrong wrong.

    You could say that jQuery is 'the new PHP'. Tons of bad examples and bad practices around.

    The fact that most traditional (read: seasoned, senior) developers still treat javascript as some third-class citizen doesn't help either. They'll happily acknowledge that they're ok with copying & pasting what they know is gunk, "because it's just some silly script anyway" and "it's not like it's real code we have to maintain", so "as long as it works it's okay".

    Typically these are the same people perpetuating the cargo-cult doctrine of Yahoo's web performance guidelines, some of which are outdated by years and most of which were drafted far too generally to apply to the fast-changing face of the modern web.

    Setting these things straight will require such a massive change in the mentality of enterprise web development that you're better off quitting while you're ahead. Really depressing...



  •  @PSWorx said:

    @Master Chief said:

     IMHO, Javascript should only be used to enhance a website after everything else is finished and working 100%, and absolutely should NEVER be used for anything critical to functionality unless it is absolutely impossible to do without it.

     

    I'm sorry but please update your version of the internet. You still seem to be running version 1.0. There are some important new technologies you should get to know about.

    If you're doing it right, you still have a fallback that is just as functional (if slightly slower to use) without javascript. I haven't come across anything that absolutely requires something AJAX-y that can't be reproduced with simple posts/gets of the entire page. Is there such a thing? (honestly asking for personal knowledge, not attempting to troll)



  • @EJ_ said:

    If you're doing it right, you still have a fallback that is just as functional (if slightly slower to use) without javascript. I haven't come across anything that absolutely requires something AJAX-y that can't be reproduced with simple posts/gets of the entire page. Is there such a thing? (honestly asking for personal knowledge, not attempting to troll)

     

    This is true. However, for most non-trivial use cases of AJAX, you don't build your site around the HTML-only version and then add JS like the icing on the cake. Instead you use a decent framework like GWT that generates both the ajax and the fallback version for you from the same source.

    I agree that there are probably very few pages that strictly require JS. (Though even that may change with new HTML5 tags like <canvas> that don't work without JS support.) However, many sites would only be usable in principle but would be an absolutely horrific experience in practice (like chat pages). Also, to make the most of ajax, you have to design your processing model very differently than for a traditional app - this includes the server side. I think in many cases, it's actually easier to tack an HTML fallback version onto a completed ajax app than to tack ajax onto a traditional HTML-based app.

    (Disclaimer: I'm talking about sites like GMail, Facebook or DeviantArt that have the majority of their UI running via ajax, not about the occasional "like" button on an otherwise static page.)



  • @PSWorx said:

    I agree that there are probably very few pages that strictly require JS.
     

    And twitter isn't one of them.

    I mention this as commentary on their appalling all-ajax-all-the-time policy.



  • @dhromed said:

    @PSWorx said:

    I agree that there are probably very few pages that strictly require JS.
     

    And twitter isn't one of them.

     

    Which is why you use a specialized client if you want to twitter and not the web page.

    Edit: Misunderstood you. But yeah, what they do use and how they do it might be another reason...



  • @Master Chief said:

     IMHO, Javascript should only be used to enhance a website after everything else is finished and working 100%, and absolutely should NEVER be used for anything critical to functionality unless it is absolutely impossible to do without it.

    Corollary: If Javascript must be used on a site, then place a proper <noscript> block to inform JS-less users of this.

    Yes, there are plenty of websites out there that don't do this.



  • @blakeyrant said:

    This really annoys me (for example) on sites that have those search fields pre-filled with something like "type in a movie or actor name", and so you click in it and start typing but the text doesn't clear because you're typing before the load event fires, and so the handler that clears the example text hasn't been installed yet. Then you hit "enter" and you get a search like "type in a mokeanu reevesvie or actor name" which of course returns the wrong results.

    That also happens if you try dragging text into one of those fields. The hint text is only cleared on focus, and not when you're hovering a dragged item over it. I don't know whether this is resolved in Windows, where hint text is now a standard feature, because Windows text boxes rarely supported drag and drop - or ^A to select all - anyway.

