An _old_ PHP documentation representative line



  • Yes, the particular feature described has long since changed, but I think it explains a lot.

    From the PHP/FI 2.0 documentation:

    PHP will detect both GET and POST method data coming from HTML forms. One important point to understand is that POST method data is always treated first if both are present. If a PHP variable is defined by the POST method data, or if the variable is defined by the HTTP daemon in the Unix environment, then GET method data cannot overwrite it. This is to prevent somebody from adding ?REMOTE_HOST=some.bogus.host to their URL's and thus tricking the PHP logging mechanism into recording this alternate data. POST method data is however allowed to overwrite these variables.

    Yes, you read that right. POST data was treated as secure. I was spoofing POST data before I knew _anything_ about programming (specifically, for a CGI chatroom which stored your username in a hidden field, and allowed you to change it to someone else's username.)



  • You could always set the "variables_order" parameter in php.ini to "EGCSP" to replicate that behaviour in PHP 4/5 :)



  • Yes, register globals is a huge WTF.



  • Are there any browser plugins/extensions that allow you to spoof POST data or do people just hack up an HTML form and submit it to the remote page?  I know I used to do the latter about 10 years ago so I could get a custom avatar in this old CGI-based chatroom.  I've never treated POST data as secure for that reason.



  • @bighusker said:

    Are there any browser plugins/extensions that allow you to spoof POST data or do people just hack up an HTML form and submit it to the remote page?

    Dude. wget. 



  • @bighusker said:

    Are there any browser plugins/extensions that allow you to spoof POST data or do people just hack up an HTML form and submit it to the remote page?  I know I used to do the latter about 10 years ago so I could get a custom avatar in this old CGI-based chatroom.  I've never treated POST data as secure for that reason.

    Indeed, there are entire e-commerce platforms written to provide a "checkout" page that rely on post variables to determine the item added, the description, and the... price. WTF. As if a 10-year-old with Firebug/Web Developer Toolbar/etc couldn't hack that in about 30 seconds.



  • Woah, that's really bad... From the looks of that, you could have REMOTE_HOST as a POST variable, and it would be treated as secure :o

    @bighusker said:

    Are there any browser plugins/extensions that allow you to spoof POST data or do people just hack up an HTML form and submit it to the remote page?  I know I used to do the latter about 10 years ago so I could get a custom avatar in this old CGI-based chatroom.  I've never treated POST data as secure for that reason.

    I usually use Opera's source editing features. Right click → Source to view the source. Edit it however you want, and then use the "Apply Changes" button :)
    Firebug for Firefox has a similar feature (editing the source with instant updates).

    @sootzoo said:

    Indeed, there are entire e-commerce platforms written to provide a "checkout" page that rely on post variables to determine the item added, the description, and the... price. WTF.

    I believe that PayPal's shopping cart works like this.



  • @asuffield said:

    @bighusker said:

    Are there any browser plugins/extensions that allow you to spoof POST data or do people just hack up an HTML form and submit it to the remote page?

    Dude. wget. 

     I've used wget before (admittedly, it was usually just to download some C++ source code directly onto my Linux box).  However, I never knew it had this feature.  Cool.
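
    (For anyone else wondering, the switch is --post-data, e.g. wget --post-data='username=SomeoneElse' http://chat.example.com/chat.cgi, with the field name obviously being whatever the form actually uses.)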
     



  • @sootzoo said:

    @bighusker said:

    Are there any browser plugins/extensions that allow you to spoof POST data or do people just hack up an HTML form and submit it to the remote page?  I know I used to do the latter about 10 years ago so I could get a custom avatar in this old CGI-based chatroom.  I've never treated POST data as secure for that reason.

    Indeed, there are entire e-commerce platforms written to provide a "checkout" page that rely on post variables to determine the item added, the description, and the... price. WTF. As if a 10-year-old with Firebug/Web Developer Toolbar/etc couldn't hack that in about 30 seconds.

     

    Wow!  It's amazing how trusting some developers are. 



  • I can't understand why so many people who supposedly design web applications for a living are so clueless about security. Everyone (ESPECIALLY programmers) should know by now that clients can NEVER be trusted with sensitive information. PHP, in my opinion, shouldn't even be used anymore: over the years its designers have introduced so many incredible security holes, like register_globals and magic quotes (not exactly a security hole, but part of an attempt to work around another unforgivable omission from the database libraries), and god knows what else I might know about had I actually spent any time working with PHP once I learned how useless it was. And those misfeatures are still enabled just so that some people can keep running legacy code written 6 years ago by people who can probably barely understand BASIC, people who should be banned from any software-related company or organization for the rest of their lives.

    Also, why are we still sending so many passwords in cleartext? A couple years ago I hacked together a little salted-hash-based password form with maybe a hundred lines of Perl and Javascript... please tell me I'm not the only person who's ever tried this?? Of course hacks like that wouldn't even be necessary if SSL were easier to implement.

    But we have this "certificate authority oligopoly" where everyone is forced to pay through the nose for security that's just as fraud-susceptible as it would be if it were free. And maybe if there were more SSL sites floating around by now, web server/browser writers would have come up with a TLS implementation that works with virtual hosts and doesn't need to use a different port to explicitly enable encryption.

    Then there's the "encrypt some stuff so you look secure" approach that Gmail and others take: they go to the trouble of getting a certificate and configuring an SSL server, but only use it for the login, so once you're logged in, unless you typed httpS://mail.google.com in the first place, it drops you back to a cleartext channel so all that precious information that you're using a password to secure in the first place is right there for anyone sitting near the same wireless hotspot as you to easily and *undetectably* grab out of the ether.

    Does running an entire site through SSL really make it that much slower, especially when you're processing everything through non-byte-compiled Ruby scripts? I swear I've seen hardware crypto accelerators for sale for a few years....

    And are self-signed certs really that bad? Unless someone happens to be dns-spoofing the site you're connecting to the first time you accept the certificate, and that same person does it with the same fake certificate every other time, what's the difference?

    (That whole password/SSL part wasn't connected to the thread at all; I just wanted to do an off-topic security rant. I'm sure I could think of more things to complain about, but it's approaching 4 am :] )
     



  • @ailivac said:

    Everyone (ESPECIALLY programmers) should know by now that clients can NEVER be trusted with sensitive information.


    It's a matter of hearing it, or simply of sitting down and applying a bit of logic; yet empirical evidence shows that people do neither.

    The problem, as I see it, is making sure that everyone who enters the programming world, through whatever book or (possibly incompetent) teacher, learns this maxim. Reaching all those who are already in it will be impossible, short of something that would reach every person in the whole world, most of whom it would be irrelevant to.

    But what I really don't get is why anyone would proclaim that everyone should know anything in particular.

    A couple years ago I hacked together a little salted-hash-based password form with maybe a hundred lines of Perl and Javascript... please tell me I'm not the only person who's ever tried this??


    In decreasing order of probability, your system had one of these properties:
    -Storing plaintext passwords on the server
    -Constant (per-user) salt, meaning no protection against replay attacks.
    -HUGE DATABASE of password hashes valid for a particular user.

    The first seems to be a bigger no-no than transmitting them in the clear, particularly, I guess, as the size of the site (user count) increases.
    And I suppose a server break-in is (considered) more likely than someone sniffing the wire near the server.

    Then there's the "encrypt some stuff so you look secure" approach that Gmail and others take: they go to the trouble of getting a certificate and configuring an SSL server, but only use it for the login, so once you're logged in, unless you typed httpS://mail.google.com in the first place, it drops you back to a cleartext channel so all that precious information that you're using a password to secure in the first place is right there for anyone sitting near the same wireless hotspot as you to easily and undetectably grab out of the ether.


    First of all, most browsers, by default, warn you of this. If you turn it off, that should be because you're not interested in knowing, right? (Well, most don't care and only see it as a bother/hindrance to what they're trying to do, but that's not the browser's fault.)

    On the other hand, look at what you do protect.
    The ability to
    -access the data in the future (when the user has moved on to another place)
    -alter or remove the data
    -send mail (verifiably) from that user
    -probably more that I missed.


    Does running an entire site through SSL really make it that much slower, especially when you're processing everything through non-byte-compiled Ruby scripts?
     

    Even AES, which was selected partly on performance grounds, is a drain on processing power that will probably exceed even the cost of the interpreted scripts producing the output in question. We are then talking about more than doubling the processing power required.

    I swear I've seen hardware crypto accelerators for sale for a few years....


    You sound like you (again) assume that everybody (everybody maintaining a site it would be useful for, at least) knows about something just because you do.

    These products have not become very famous then. I can think of a few possible reasons:
    -Problems integrating such hardware with whatever webserver setup you are using.
    -Stiff price, since they may often be needed in bulk. (I'm only guessing)
    -Low need, because people are happy to have only the "access control" part secured. (In ignorance, sure, but it is a reason)
    -Low visibility where it is indeed installed.
    -If the cards aren't capable enough they may become the bottlenecks themselves. This is one more thing to troubleshoot...
    -Marketing  (This being a factor as important as any technical merits is one of my pet peeves, but it's there.)

    And are self-signed certs really that bad? Unless someone happens to be dns-spoofing the site you're connecting to the first time you accept the certificate, and that same person does it with the same fake certificate every other time, what's the difference?


    A user on YOUR next hop can create certificates on the fly and self-sign them, mounting a man-in-the-middle attack on you for every site you visit.

    I take it you were tired... it shows.



  • @MaHuJa said:

    First of all, most browsers, by default, warn you of this. If you turn it off, that should be because you're not interested in knowing, right? (Well, most don't care and only see it as a bother/hindrance to what they're trying to do, but that's not the browser's fault.)

    Three of the four major browsers (the fourth, Safari, I don't know about) indicate https: in a fairly obvious way. IE and Firefox colour the entire address bar, while Opera only uses a small portion of it. This is a great step up from the inconspicuous 16x16 lock icon tucked in the corner.



  • @sootzoo said:

    @bighusker said:

    Are there any browser plugins/extensions that allow you to spoof POST data or do people just hack up an HTML form and submit it to the remote page?  I know I used to do the latter about 10 years ago so I could get a custom avatar in this old CGI-based chatroom.  I've never treated POST data as secure for that reason.

    Indeed, there are entire e-commerce platforms written to provide a "checkout" page that rely on post variables to determine the item added, the description, and the... price. WTF. As if a 10-year-old with Firebug/Web Developer Toolbar/etc couldn't hack that in about 30 seconds.

    I have worked with a few online pay systems, and they all either do it like this or support this way of communicating. The trick, though, is that you also have to supply a token, usually a SHA-1 hash computed over the data and a shared secret.
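
    Roughly like this, for the curious (field names and the concatenation order are invented here; each provider documents its own, but the idea is sha1() over the form data plus a secret that never appears in the form):

        <?php
        // Shared secret known only to the shop and the payment provider
        // (never placed in the form itself).
        $secret = 'not-in-the-form-anywhere';

        // The hidden form fields, in whatever order the provider expects.
        $orderId  = '10042';
        $amount   = '1999';     // cents
        $currency = 'EUR';

        // The token travels with the form; the provider recomputes it on its
        // side, so changing the amount in the browser breaks the check.
        $token = sha1($orderId . $amount . $currency . $secret);
        ?>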

    Last year we even found a cross-site scripting exploit that a merchant could use to steal payment data from within the payment provider's screen, because they allowed you to "style" the payment screen.
    When we notified the payment provider, we got a thank-you for submitting it, but nothing changed.

    I just keep close tabs on my credit card receipts; the card is insured against theft, so if I ever see payments I didn't make I can get it blocked and the money refunded. Although, since that has not yet happened, I don't know what kind of hassle that will be.



  • @stratos said:

    I have worked with a few online pay systems, and they all either do it like this or support this way of communicating. The trick, though, is that you also have to supply a token, usually a SHA-1 hash computed over the data and a shared secret.

    Websites (or rather, browsers) don't need to re-tell the application/server what the price of an item is. There's no point in communicating the price back at any time, because the application knows everything. All the browser has to do is ask for an order of 3 × product #481 or some such.

    Wouldn't you be stunned if you were buying some product at a store where the clerk asks you:

    "Okay... diamond ring.. umm... So what was the price again?"

    "Er..  $2".

    "Okay! Here you go! Thanks for coming!"



  • @bighusker said:

    Are there any browser plugins/extensions that allow you to spoof POST data or do people just hack up an HTML form and submit it to the remote page?  I know I used to do the latter about 10 years ago so I could get a custom avatar in this old CGI-based chatroom.  I've never treated POST data as secure for that reason.

     For Firefox, there would be Tamper Data for example.
     



  • @dhromed said:

    @stratos said:

    I have worked with a few online pay systems, and they all either do it like this or support this way of communicating. The trick, though, is that you also have to supply a token, usually a SHA-1 hash computed over the data and a shared secret.

    Websites (or rather, browsers) don't need to re-tell the application/server what the price of an item is. There's no point in communicating the price back at any time, because the application knows everything. All the browser has to do is ask for an order of 3 × product #481 or some such.

    Wouldn't you be stunned if you were buying some product at a store where the clerk asks you:

    "Okay... diamond ring.. umm... So what was the price again?"

    "Er..  $2".

    "Okay! Here you go! Thanks for coming!"

    Most webshops don't have the certificates to handle payment data, so they use a payment provider, like Ogone, Bibit or PayPal or whatever.
    Those don't know about your product, so you have to give them the details.



  • @stratos said:

    @dhromed said:

    @stratos said:

    I have worked with a few online pay systems, and they all either do it like this or support this way of communicating. The trick, though, is that you also have to supply a token, usually a SHA-1 hash computed over the data and a shared secret.

    Websites (or rather, browsers) don't need to re-tell the application/server what the price of an item is. There's no point in communicating the price back at any time, because the application knows everything. All the browser has to do is ask for an order of 3 × product #481 or some such.

    Wouldn't you be stunned if you were buying some product at a store where the clerk asks you:

    "Okay... diamond ring.. umm... So what was the price again?"

    "Er..  $2".

    "Okay! Here you go! Thanks for coming!"

    Most webshops don't have the certificates to handle payment data, so they use a payment provider, like Ogone, Bibit or PayPal or whatever.
    Those don't know about your product, so you have to give them the details.

     The payment providers could still make a "register vendor account"/"register product" function. Then again, there are already many business models that don't depend on fixed prices anymore, so maybe it wouldn't matter much anyway. (Take donation systems and the "play money" of MMORPGs, for example.)

    By the way, there seems to be another way to make client requests "secure" (at least I've seen it done): encoding the GET query string via base64 before submitting. Surely no one will ever see through this secret...
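
    For anyone who hasn't met this "encryption" before, here is the entire attack (the query string is made up):

        <?php
        // What the page sends...
        $qs  = 'item=481&price=0.01';
        $enc = base64_encode($qs);       // aXRlbT00ODEmcHJpY2U9MC4wMQ==

        // ...and what it takes to read (or forge) it.
        echo base64_decode($enc);        // item=481&price=0.01
        ?>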



  • @MaHuJa said:


    In decreasing order of probability, your system had one of these properties:
    -Storing plaintext passwords on the server
    -Constant (per-user) salt, meaning no protection against replay attacks.
    -HUGE DATABASE of password hashes valid for a particular user.

    If I remember right (I think I have a backup somewhere from before the first hard drive on my old laptop failed, but I don't feel like digging it up now), it stored a single password hash on the server end. What I called a salt was actually more of a nonce. Each time you requested the login page, it generated a random string, stored it in a server-side session container (keyed by a standard session ID cookie), and sent it along with a small script and a link to a JavaScript hash library. The script would pull the plaintext password out of the form, hash it, append the nonce to the hash, hash that, and submit the result in lieu of the original password. The CGI script already had the hashed password and the nonce (in session storage; the client doesn't send it back), so it ran the same process, minus the initial password-hashing step, and compared the submitted hash-of-hash-plus-nonce. Then it removed that session information, so replaying with the same nonce wouldn't do anything.
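
    Re-sketched from memory in PHP (the original was Perl; SHA-1 here just stands in for whatever hash the JavaScript library used, and $storedPasswordHash is assumed to have been loaded for the user already), the server half boils down to:

        <?php
        session_start();

        if (!isset($_POST['response'])) {
            // First request: hand out a fresh nonce with the login form. The
            // client-side script computes  sha1( sha1(password) . nonce )  and
            // posts it back as "response".
            $_SESSION['nonce'] = md5(uniqid(mt_rand(), true));
            // ...render the login form with the nonce embedded...
        } else {
            // Second request: verify using only the stored hash and the nonce
            // we remembered; the plaintext password never crosses the wire.
            $expected = sha1($storedPasswordHash . $_SESSION['nonce']);
            $loggedIn = ($expected === $_POST['response']);
            unset($_SESSION['nonce']);   // single use, so replaying it gets you nowhere
        }
        ?>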


    First of all, most browsers, by default, warn you of this. If you turn it off, that should be because you're not interested in knowing, right? (Well, most don't care and only see it as a bother/hindrance to what they're trying to do, but that's not the browser's fault.)

    I think you're talking about the "You are about to leave a secure page" warning when the form action is an insecure URL, which isn't what's going on here. The login is sent from a secure page to another secure page, which then issues a redirect (after the form submission is done) to an insecure page.

    On the other hand, look at what you do protect.
    The ability to
    -access the data in the future (when the user has moved on to another place)
    -alter or remove the data
    -send mail (verifiably) from that user
    -probably more that I missed.

    Valid point; I know that having someone be able to read a few of your messages one time isn't a huge loss in the long run, but do you at least admit that it's fairly stupid of Google, who has the capability to encrypt everything, and will happily do it if someone specifically asks them to, to not encrypt more than the login by default?



  • @PSWorx said:

    The payment providers could still make a "register vendor account"/"register product" function

    Many established ones simply don't - though strangely, they'll often happily store details like coupon promotions, gift certificate balances, etc - but as mentioned earlier, hopefully the merchant is vigilant enough to recognize that they're being ripped off if somebody tries gaming the payment system / shopping cart / etc.



  • I didn't think of that solution... but then again, it's equivalent to my first option: you may as well store the plaintext password.
    I mean, as far as the system is concerned, the stored hash is treated exactly the way a plaintext password is treated in systems that store passwords in the clear, with the same weaknesses.
    (With a list of user/hash pairs, an attacker can replace the first, client-side hashing step with a passthrough, enter the stolen hash where a normal user would enter the password, and he's in.)
    There's one point where it's better than plaintext passwords, and that's that they can't be reused on other sites (same password on lots of sites, the usual way people do it), at least not on sites that don't use the same scheme.
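
    To spell it out in the terms of the sketch above (same SHA-1 stand-in; $stolenHash and $nonce are purely illustrative):

        <?php
        // With a hash lifted straight from the server's database, the attacker
        // never needs the password; this is byte-for-byte what a legitimate
        // client would send in response to the challenge.
        $response = sha1($stolenHash . $nonce);
        ?>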

    -

    I seem to remember seeing "you are about to leave..." when it switches to insecure, regardless of whether it's a form submission or not. I also know few people (err... none) who don't immediately tick the checkbox to never show it again. Either way, I guess that makes it rather irrelevant. The worse problem is the cases where the lock icons etc are still on, but the contents going back and forth are not encrypted. I believe that when this happens, it's a matter of a frameset page that was encrypted, while the contents were not. The only way to detect this is to examine the url/connection properties of the specific page.

    -

    It would certainly be in their customers' interest to have the encryption on full-time, but I don't know if I agree with the word "stupid", as it may just as well be "clever" depending on how much spare capacity they have.
    If turning it on by default would overload their servers, I would apply the word "stupid" to what would effectively be DoSing their own servers.
    If they would be at their limits, or it would bring forward the need for a hardware upgrade, the word "questionable" might be more appropriate.



  • @MaHuJa said:

    The worse problem is the cases where the lock icons etc are still on, but the contents going back and forth are not encrypted. I believe that when this happens, it's a matter of a frameset page that was encrypted, while the contents were not. The only way to detect this is to examine the url/connection properties of the specific page.


    It's late for me, so I'm not going to look it up, but shouldn't this trigger some kind of warning about insecure entities/items on the page? I seem to remember that a secure page can't contain items from non-secure places. Although this might very well be browser dependent.
     



  • @sootzoo said:

    Indeed, there are entire e-commerce platforms written to provide a "checkout" page that rely on post variables to determine the item added, the description, and the... price. WTF. As if a 10-year-old with Firebug/Web Developer Toolbar/etc couldn't hack that in about 30 seconds.

    Not only do payment suppliers usually have a hash with a shared secret to make sure the numbers are right, but every supplier I've used actually includes, in the callback to the shop, the value of how much was actually paid. Any shop built by anyone with more than two braincells would check whether the paid amount < the order total. If that check fails, most shops just fail you, and now you have to go through the trouble of figuring out how you'll ask the shop to give you your money back without mentioning that you were tampering with their forms.
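
    The callback handler really is that short. A rough sketch (field names vary per provider and are invented here; $secret and $orderTotal come from the shop's own records):

        <?php
        // Callback from the payment provider once the customer has "paid".
        $orderId    = $_POST['order_id'];
        $amountPaid = (int) $_POST['amount'];   // what was actually paid, in cents
        $signature  = $_POST['signature'];

        // Recompute the signature with the shared secret...
        $expected = sha1($orderId . $amountPaid . $secret);

        // ...and refuse the order if it doesn't match, or if the customer paid
        // less than the total we have on file for this order.
        if ($signature !== $expected || $amountPaid < $orderTotal) {
            die('Payment verification failed');
        }
        ?>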



  • @Sunstorm said:

    Any shop built by anyone with more than two braincells...

    Are you on the same internet as the rest of us?



  • @PSWorx said:

    By the way, there seems to be another way to make client requests "secure" (at least I've seen it done): encoding the GET query string via base64 before submitting. Surely no one will ever see through this secret...

    Don't patent it just yet. I just found prior art ;)

    HTTP Form Encoding

    [...] 

          Instead of sending a PunchOutOrderMessage directly to the procurement application,
          the supplier’s website encodes it as a hidden HTML Form field and the user’s browser
          submits it to the URL specified in the BrowserFormPost element of the
          PunchOutSetupRequest. The hidden HTML Form field must be named either
          cxml-urlencoded or cxml-base64, both case insensitive. Taken from the above example,
          the following code fragment inserts a hidden form field named cxml-urlencoded
          containing the PunchOutOrderMessage document to be posted:


                    <FORM METHOD=POST ACTION=<%= url%>>
                       <INPUT TYPE=HIDDEN NAME="cxml-urlencoded" VALUE="<% CreateCXML toUser,
                    fromUser, buyerCookie, unitPrice, supPartId, supPartAuxId, desc%>">
                       <INPUT TYPE=SUBMIT value=BUY>
                    </FORM>


          This encoding permits the supplier to design a checkout Web page that contains the
          cXML document. When users click the supplier’s “Check Out” button, the supplier’s
          website presents the data, invisible to users, to the procurement application as an
          HTML Form Submit.


    source: http://www.cxml.org

    Although there it isn't meant to make the string secure; it's just to avoid breaking the HTML.

