A secure key



  • Just found this in a class called Encryption, on an ASP.NET application I'm working on...

    protected const string KEY = "DONKEY";

    Which in turn is used by two methods EncryptTripleDES and DecryptTripleDES.

    Well it appears they're taking security seriously, they're using Triple DES :-)



  • Well it is protected....

    Hardcoded passwords aren't that uncommon from what I've seen.  I mean, that's C#, not JavaScript, right?  It's not like a user can view source and get it.
     



  • Yes, it's not like it's posted on a public website...

     

    But seriously, I've had the exact same problem. Someone with access to the executable can use .Net Reflector to have a look at the password.

    Of course, we didn't use such a stupid password (we used a mix of our initials and birth dates), so it wasn't so easy to spot.
     



  • @Zecc said:

    Yes, it's not like it's posted on a public website...

    But seriously, I've had the exact same problem. Someone with access to the executable can use .Net Reflector to have a look at the password.

    Of course, we didn't use such a stupid password (we used a mix of our initials and birth dates), so it wasn't so easy to spot.

     

    Umm, correct me if I'm wrong, but if someone has access to the executable, aren't you completely and utterly borked already?  If you didn't obfuscate your code, they can just open up the executable in a text editor and look for strings that look like passwords.  If you did, they can use a decompiler and do the same.  If that fails, they can try brute-forcing the password or a dictionary attack, or alternatively they could just find the if statement controlling the password check and reverse its logic with a hex editor.

    This is what Bill Gates and his merry band of "programmers" failed to realize back when they came up with the idea of "proprietary software."   It can't be done.  The whole point of digital data is that it's easily copyable and manipulatable.  Any attempt to make it neither is like trying to make water not wet.
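    The "look for strings in the executable" step above is trivially automatable. Here's a minimal Python sketch of the classic Unix strings(1) trick, run against a made-up byte blob standing in for a compiled binary (the blob and the embedded key are hypothetical, of course):

```python
import re

def extract_strings(data, min_len=4):
    """Return runs of printable ASCII at least min_len long, like Unix strings(1)."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)]

# Hypothetical compiled binary: mostly opcodes, with a hardcoded key inside.
binary = b"\x00MZ\x90\x00\x03KEY=DONKEY\x00\xff\x10\x90"
print(extract_strings(binary))  # ['KEY=DONKEY']
```

    No decompiler needed; any constant string in an unobfuscated binary falls out of a scan like this.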



  • And now that you've given out the secret encryption key to your company, all we need to do is figure out which one and we're set!



  • @Atrophy said:

    This is what Bill Gates and his merry band of "programmers" failed to realize back when they came up with the idea of "proprietary software."

    Bill Gates invented proprietary software? 



  • @bobday said:

    @Atrophy said:

    This is what Bill Gates and his merry band of "programmers" failed to realize back when they came up with the idea of "proprietary software."

    Bill Gates invented proprietary software? 

     

    [url=http://www.blinkenlights.com/classiccmp/gateswhine.html]As the majority of hobbyists must be aware, most of you steal your software.[/url]



  • @Atrophy said:

    Umm, correct me if I'm wrong, but if someone has access to the executable, aren't you completely and utterly borked already?
    I won't correct you.

    I agree it is ultimately only a matter of bigger or smaller obfuscation. 



  • @bobday said:

    @Atrophy said:

    This is what Bill Gates and his merry band of "programmers" failed to realize back when they came up with the idea of "proprietary software."
    Bill Gates invented proprietary software? 

     If Gore gets the internet, I say let's go ahead and give Bill proprietary software.... Jobs can have the word "pretty"



  • @matthewr81 said:

    @bobday said:

    @Atrophy said:

    This is what Bill Gates and his merry band of "programmers" failed to realize back when they came up with the idea of "proprietary software."
    Bill Gates invented proprietary software? 

     If Gore gets the internet, I say let's go ahead and give Bill proprietary software.... Jobs can have the word "pretty"

     

    LOL I love Jobs... I'm relatively sure that the iPod commercials are some sort of window into his acid flashbacks.


  • Considered Harmful

    @Atrophy said:

    @Zecc said:
    Yes, it's not like it's posted on a public website...

    But seriously, I've had the exact same problem. Someone with access to the executable can use .Net Reflector to have a look at the password.

    Of course, we didn't use such a stupid password (we used a mix of our initials and birth dates), so it wasn't so easy to spot.

     

    Umm, correct me if I'm wrong, but if someone has access to the executable, aren't you completely and utterly borked already?  If you didn't obfuscate your code, they can just open up the executable in a text editor and look for strings that look like passwords.  If you did, they can use a decompiler and do the same.  If that fails, they can try brute-forcing the password or a dictionary attack, or alternatively they could just find the if statement controlling the password check and reverse its logic with a hex editor.

    This is what Bill Gates and his merry band of "programmers" failed to realize back when they came up with the idea of "proprietary software."   It can't be done.  The whole point of digital data is that it's easily copyable and manipulatable.  Any attempt to make it neither is like trying to make water not wet.

    Has no one heard the phrase "one-way cryptographic hash"?  Though, of course, if a user has access to an executable, that user can likely modify the executable and thereby circumvent any password protection.  If the executable is signed, then tampering can be detected, of course.

    One idea that I like is encrypting important pieces of the program and not including the passphrase.  Then the password you enter is actually required to decrypt the rest of the program, and not just some minor logical barrier.
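    For what it's worth, the "one-way cryptographic hash" approach looks roughly like this (a Python sketch using PBKDF2 from the standard library; the function names are mine, not from any app in this thread). You store only a salt and a digest, so there's no plaintext password to scrape out of the binary - though, as noted, an attacker who can patch the executable can still bypass the check entirely:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted one-way digest; only (salt, digest) ever get stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, digest):
    """Re-derive and compare in constant time; the password itself is never stored."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("DONKEY", salt, digest))                        # False
```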



  • Yes, it is a web app, thus it's protected. The WTF is that it's relatively easy for a hacker to guess (I would use a long string of random characters, a hexadecimal string, or something like that). A dictionary attack could crack it in no time, depending on how it was being used. Also, it just sounds funny.

     

    Also, the key isn't the real key being used; I obviously changed it (and won't mention the web app), but trust me, the real one sounds just as funny and is just as WTFy.



  • @joe.edwards@imaginuity.com said:

    Has no one heard the phrase "one-way cryptographic hash"?  Though, of course, if a user has access to an executable, that user can likely modify the executable and thereby circumvent any password protection.  If the executable is signed, then tampering can be detected, of course.

    One idea that I like is encrypting important pieces of the program and not including the passphrase.  Then the password you enter is actually required to decrypt the rest of the program, and not just some minor logical barrier.

    Which also doesn't work. The signature check can simply be disabled. Encrypting with a passphrase is possible, but I guess it would fall to a brute-force attempt.

    Give a good hacker the exe, and it's hacked. Best to save yourself the time and not build in protection at all, or make it so minimal that it's actually stupid from a hacker's view, but good enough to stop basic copying.



  • @vt_mruhlin said:

    Well it is protected....

    Hardcoded passwords aren't that uncommon from what I've seen.  I mean, that's C#, not JavaScript, right?  It's not like a user can view source and get it.
     

    Still means that even without the source, the encrypted data would be easier to brute force.  In fact, a brute force cracker should be able to deduce this key based on a set of encrypted data quite rapidly (probably in less than a day, possibly less than an hour), since it's just a dictionary word.  There's really no excuse not to use a better key.
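    The dictionary-style search described above is only a few lines of code. Here's a hedged sketch: since Python's standard library has no DES, a toy XOR cipher stands in for TripleDES, but the search loop is the same either way - try each candidate key until decryption produces recognizable plaintext (the wordlist and plaintext here are made up):

```python
import hashlib
from itertools import cycle

def toy_encrypt(key, plaintext):
    """Stand-in for TripleDES (no DES in the stdlib): XOR with a key-derived stream.

    XOR is its own inverse, so the same function also decrypts."""
    stream = hashlib.sha256(key.encode()).digest()
    return bytes(b ^ k for b, k in zip(plaintext, cycle(stream)))

def dictionary_attack(ciphertext, wordlist, known_prefix):
    """Try each candidate key; a hit is when decryption yields expected plaintext."""
    for word in wordlist:
        if toy_encrypt(word, ciphertext).startswith(known_prefix):
            return word
    return None

ct = toy_encrypt("DONKEY", b"<connectionString>...")
print(dictionary_attack(ct, ["PASSWORD", "MONKEY", "DONKEY"], b"<conn"))
```

    A real attack would swap in an actual TripleDES implementation and a bigger wordlist, but a single dictionary word still falls almost instantly.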
     



  • @Daid said:

    Encrypting with a passphrase is possible, but I guess it would fall to a brute-force attempt.

    Not if it's done properly. A nice 256-bit key Rijndael cipher would prevent any brute force attack.

    However, this is still an impractical scheme. If every executable is unlocked by the same key, then that key (or the decrypted exe) will be made public the first time a copy of the program is unlocked.

    If every executable has its own key, that will probably require asymmetric encryption and shipping a public key with the exe. The exe can then be modified to change the public key to a value with a known private key. This method can only work with software that runs online and must communicate with a central server. The server can maintain a list of approved public keys (sold copies of the program) and only issue the private key once to the program owner.



  • @bobday said:

    @Daid said:
    Encrypting with a passphrase is possible, but I guess it would fall to a brute-force attempt.

    Not if it's done properly. A nice 256-bit key Rijndael cipher would prevent any brute force attack.

    However, this is still an impractical scheme. If every executable is unlocked by the same key, then that key (or the decrypted exe) will be made public the first time a copy of the program is unlocked.

    If every executable has its own key, that will probably require asymmetric encryption and shipping a public key with the exe. The exe can then be modified to change the public key to a value with a known private key. This method can only work with software that runs online and must communicate with a central server. The server can maintain a list of approved public keys (sold copies of the program) and only issue the private key once to the program owner.

    ...at a cost of millions to the creator of the software in question, with very little return on investment.  They'll still do it, though... gotta keep those thieving bastards from stealing our software!

     *sigh*

    You'd think people would know a little more about the product they sell than that.

    But I guess it's hard to make someone understand something when his livelihood depends on him not understanding it.
     



  • @bobday said:

    Not if it's done properly. A nice 256-bit key Rijndael cipher would prevent any brute force attack.

    Not if your passphrase is "DONKEY". Nothing "prevents" a brute force attack - the whole idea is to try every possible key. Making each attempt take longer, or giving the attacker a larger keyspace to search, isn't worth anything if the key is one of the first few hundred they try anyway.


  • @Random832 said:

    Not if it's done properly. A nice 256-bit key Rijndael cipher would prevent any brute force attack.
    Not if your passphrase is "DONKEY". Nothing "prevents" a brute force attack - the whole idea is to try every possible key. Making each attempt take longer, or giving the attacker a larger keyspace to search, isn't worth anything if the key is one of the first few hundred they try anyway.

    You're right, nothing will protect you if you pick such a password. However, if you use an RNG with 256 bits of real entropy to generate a key (not a passphrase) and protect your program with that, it cannot feasibly be brute forced.
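    The difference is easy to put numbers on. A sketch with Python's secrets module (the 100,000-word dictionary size is an illustrative assumption):

```python
import math
import secrets

# A key drawn uniformly at random from 2**256 possibilities.
key = secrets.token_bytes(32)
print(len(key) * 8)                  # 256 bits of entropy

# A single word picked from a ~100,000-word dictionary, like "DONKEY".
print(round(math.log2(100_000), 1))  # 16.6 bits of entropy
```

    At a million guesses per second, ~17 bits falls in under a second; 256 bits outlasts the universe.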



  • @Atrophy said:

    ...at a cost of millions to the creator of the software in question, with very little return on investment.  They'll still do it, though... gotta keep those thieving bastards from stealing our software!

     sigh

    You'd think people would know a little more about the product they sell than that.

    But I guess it's hard to make someone understand something when his livelihood depends on him not understanding it.

    Indeed, security is always a trade-off. I was speaking purely theoretically, but it's definitely easy to spend more protecting an asset than you could reasonably stand to lose if it's compromised.



  • @Atrophy said:

      It can't be done.  The whole point of digital data is that it's easily copyable and manipulatable.  Any attempt to make it neither is like trying to make water not wet.

    Well, with technologies like the TPM, combined with one-way hashes, there would indeed be a way to "make water dry": encrypt your code and delegate the whole integrity-check/decryption logic into a chip, where it is hard-wired into nanoscopic silicon wires and cannot be extracted without billion-dollar equipment, to say nothing of tampered with. Combine that with an operating system that can deny memory access even to kernel drivers and voilà - you've got it. The infrastructure is there now; whether it can be used successfully is another thing.



  • @bobday said:

    @Random832 said:

    Not if it's done properly. A nice 256-bit key Rijndael cipher would prevent any brute force attack.
    Not if your passphrase is "DONKEY". Nothing "prevents" a brute force attack - the whole idea is to try every possible key. Making each attempt take longer, or giving the attacker a larger keyspace to search, isn't worth anything if the key is one of the first few hundred they try anyway.

    You're right, nothing will protect you if you pick such a password. However, if you use an RNG with 256 bits of real entropy to generate a key (not a passphrase) and protect your program with that, it cannot feasibly be brute forced.

    However, since nobody can remember that to type it in, you then have to store the key along with the program, which puts you right back at the DRM-level of idiocy as the user merely has to extract the key from the program and use it.

    Somebody will doubtless suggest putting the key on a dongle or some kind of removable media. Here's a better idea: put the damn program on the removable media and have done with it. 



  • @PSWorx said:

    @Atrophy said:

      It can't be done.  The whole point of digital data is that it's easily copyable and manipulatable.  Any attempt to make it neither is like trying to make water not wet.

    Well, with technologies like the TPM, combined with one-way hashes, there would indeed be a way to "make water dry": encrypt your code and delegate the whole integrity-check/decryption logic into a chip, where it is hard-wired into nanoscopic silicon wires and cannot be extracted without billion-dollar equipment, to say nothing of tampered with. Combine that with an operating system that can deny memory access even to kernel drivers and voilà - you've got it. The infrastructure is there now; whether it can be used successfully is another thing.

    Actually, that's partially mythical. Specifically, the costs of reverse-engineering a chip are usually greatly overstated. The necessary equipment costs a few thousand, and can be found in most universities, where many graduate students have after-hours access and could spend a few hours extracting the relevant information without anybody knowing what they were doing. An electron microscope is very useful if you have one, but you don't actually need one.

    Tamper-proof chips are every bit as non-existent as unbreakable security. The only difference is that most software people don't know much about chip construction, so they don't realise that the same problems still exist for physical objects.

    We cannot build locks that cannot be picked, ships that cannot be sunk, or chips that cannot be reverse-engineered. And we probably never will. Anybody who claims otherwise is trying to sell you something.



  • @Sunday Ironfoot said:

    Also the key isn't the real key being used, I obviously changed it (and won't mention the web app) but trust me the real one sounds just as funny and is just as WTFy.

    What's the real key? "MONKEY"?



  • @PSWorx said:

    Well, with technologies like the TPM, combined with one-way hashes, there would indeed be a way to "make water dry": encrypt your code and delegate the whole integrity-check/decryption logic into a chip, where it is hard-wired into nanoscopic silicon wires and cannot be extracted without billion-dollar equipment, to say nothing of tampered with. Combine that with an operating system that can deny memory access even to kernel drivers and voilà - you've got it. The infrastructure is there now; whether it can be used successfully is another thing.

    I have no idea what TPM is (I'm feeling too lazy to google; just got home from work), but I'm pretty sure the system you describe is pretty much identical to how the Xbox 360's security system works. The executables are signed and encrypted, and can only be decrypted using a key embedded inside the console's core. The chip also runs in a special secure mode that blocks all write access to the memory where the kernel and the executable are stored.

    (Layman's terms alert, (because I am, not you ;) )) At one point someone managed to discover a bug in the control program that is in charge of controlling whether the CPU locked access to kernel/application RAM. Apparently one of the instructions in the program used a 32-bit value when it should have used a 64-bit one, which meant that 32 bits of what it passed to the CPU weren't properly initialized. They managed to write their own value to those 32 bits, changing what the instruction did. This in turn stopped the locking mode from turning on and gave the hackers the ability to inject their own code into the system wholesale. Microsoft closed the hole a little while later by updating the kernel via their secure network, though apparently people sell Xboxes that still have the vulnerable kernel version on auction sites.

    I vaguely remember MS announcing a similar plan for PCs, where the hardware itself would analyze programs that tried to run on it, and only allow through those which it confirmed were approved by a "Central Governing Body" (presumably Microsoft). There was a big outcry about it, since it would essentially give that body carte blanche to approve/forbid any program that a user tried to run.

    [url]http://www.extremetech.com/article2/0,3973,274309,00.asp[/url]

     



  • @Devi said:

    @PSWorx said:

    Well, with technologies like the TPM, combined with one-way hashes, there would indeed be a way to "make water dry": encrypt your code and delegate the whole integrity-check/decryption logic into a chip, where it is hard-wired into nanoscopic silicon wires and cannot be extracted without billion-dollar equipment, to say nothing of tampered with. Combine that with an operating system that can deny memory access even to kernel drivers and voilà - you've got it. The infrastructure is there now; whether it can be used successfully is another thing.

    I have no idea what TPM is (I'm feeling too lazy to google; just got home from work), but I'm pretty sure the system you describe is pretty much identical to how the Xbox 360's security system works. The executables are signed and encrypted, and can only be decrypted using a key embedded inside the console's core. The chip also runs in a special secure mode that blocks all write access to the memory where the kernel and the executable are stored.

    (Layman's terms alert, (because I am, not you ;) )) At one point someone managed to discover a bug in the control program that is in charge of controlling whether the CPU locked access to kernel/application RAM. Apparently one of the instructions in the program used a 32-bit value when it should have used a 64-bit one, which meant that 32 bits of what it passed to the CPU weren't properly initialized. They managed to write their own value to those 32 bits, changing what the instruction did. This in turn stopped the locking mode from turning on and gave the hackers the ability to inject their own code into the system wholesale. Microsoft closed the hole a little while later by updating the kernel via their secure network, though apparently people sell Xboxes that still have the vulnerable kernel version on auction sites.

    I vaguely remember MS announcing a similar plan for PCs, where the hardware itself would analyze programs that tried to run on it, and only allow through those which it confirmed were approved by a "Central Governing Body" (presumably Microsoft). There was a big outcry about it, since it would essentially give that body carte blanche to approve/forbid any program that a user tried to run.

    [url]http://www.extremetech.com/article2/0,3973,274309,00.asp[/url]

    This is precisely what I meant. The "similar plan" had the euphemistic name "Trusted Computing" (because now the vendors can trust your computer not to hinder them from making a profit). To be fair though, the whole thing isn't governed by Microsoft but by the "Trusted Computing Group" TCG (formerly the "Trusted Computing Platform Alliance", TCPA), an organization of almost all the "big players" in the industry.

    Luckily, after said outcry (and a, granted, pretty emotional and probably not very factual debate) the original plans are now off the table and replaced by a much weaker mechanism that has, however, found its way into Vista from what I know.

    As for the Xbox, I've read about this too. I was a bit worried, though, that in the end the hackers could only break the scheme by exploiting some rather dumb mistakes the developers made. And since the "next generations" of those systems all have mandatory updating built in to quickly fix such mistakes, I don't believe this strategy will work in the future. (See the 09-F9 incident.)

    I can only hope asuffield is right. 



  • @Sunday Ironfoot said:

    Just found this in a class called Encryption, on a ASP.NET application I'm working on...

    protected const string KEY = "DONKEY";

    Which in turn is used by two methods EncryptTripleDES and DecryptTripleDES.

    Well it appears they're taking security seriously, they're using Triple DES :-)



    what's the problem?
    as you see, it is protected string KEY...

    -]



  • @axarydax said:

    @Sunday Ironfoot said:

    Just found this in a class called Encryption, on a ASP.NET application I'm working on...

    protected const string KEY = "DONKEY";

    Which in turn is used by two methods EncryptTripleDES and DecryptTripleDES.

    Well it appears they're taking security seriously, they're using Triple DES :-)



    what's the problem?
    as you see, it is protected string KEY...

    -]

    Yes, but a hacker could write his own class that inherits Encryption and gain access to the key :-) 
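    The joke works because access modifiers are a compile-time courtesy, not a secrecy mechanism. In Python terms (where "protected" is just an underscore convention), the attack is one subclass away - the class and key here are, of course, made up to mirror the thread:

```python
class Encryption:
    _KEY = "DONKEY"  # "protected" by convention only -- not a secret

class Attacker(Encryption):
    """Any subclass (or frankly any caller) can read the 'protected' member."""
    def steal(self):
        return self._KEY

print(Attacker().steal())  # DONKEY
```

    In C# the subclass trick works at compile time, and Reflector works regardless; either way, "protected" never meant "confidential".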



  • @PSWorx said:

    As for the Xbox, I've read about this too. I was a bit worried, though, that in the end the hackers could only break the scheme by exploiting some rather dumb mistakes the developers made. And since the "next generations" of those systems all have mandatory updating built in to quickly fix such mistakes, I don't believe this strategy will work in the future. (See the 09-F9 incident.)

    The original Xbox was finally cracked fully by an MIT student (Andrew "bunnie" Huang) with a high-speed digital oscilloscope and some custom FPGA hardware, who snooped the memory bus and captured the secret key. There was nothing Microsoft could do about this attack. The practical modchips use different techniques because Microsoft might have tried using copyright law against them if they simply patched the decrypted data and included it, not for any technical reasons. If the cheaper holes hadn't been available, then the more legally awkward ones would have been used, and the whole scene would just have moved a little further underground.

    There is not a whole heck of a lot you can do about somebody who is willing and able to perform snooping or man-in-the-middle attacks against your system buses. The proposed memory-verifying TPM would be entirely defeated by a widget that falsified the information received by the TPM, and remember that you only need one person to defeat it and upload the output. This probably has something to do with why nobody's bothered to build a memory-verifying TPM yet.

    Also note that as soon as the 09-F9 key was revoked, the 45-5F key was immediately published, before any of the new discs even made it to market. They haven't bothered revoking that one yet. Mandatory updating doesn't really work, because new keys can be extracted using the same methods, much faster than new processing keys can be shipped out. Sony have been playing the "mandatory update" game with the PSP for years now, and despite them releasing dozens of downgrades to block homebrew games from running, not one of them has accomplished anything.



  • I think the main issue that you have with all of these things, from DRM to TPM to CD/DVD copy protection, is that from the very outset you've already given those who would want to get at the data the means to decrypt it. If they didn't have the means to decrypt it, they couldn't listen to/watch/use whatever it is you've protected. All the people who design these systems can do is make it harder for the end user to break open the data so they can have their nefarious, profit-reducing way with it. Even hiding the data in a chip isn't safe; it must somehow be possible to break a chip open and examine its circuitry, and even if it isn't now, it will be in the future.

    At the end of the day all the protection system designers can do is make it so that it's so inconvenient to break the system, that enough people don't bother.



  • @mdk said:

    @Sunday Ironfoot said:

    Also the key isn't the real key being used, I obviously changed it (and won't mention the web app) but trust me the real one sounds just as funny and is just as WTFy.

    What's the real key? "MONKEY"?

     

    // Comment



  • @Devi said:

    At the end of the day all the protection system designers can do is make it so that it's so inconvenient to break the system, that enough people don't bother.

    Which is a mind-bogglingly stupid thing to try and do, because making it inconvenient to break the system is merely creating an interesting challenge, and that's the primary reason why most of these people break them. It is extremely rare for anybody to attack these systems for any other reason.



  • @asuffield said:

    @Devi said:

    At the end of the day all the protection system designers can do is make it so that it's so inconvenient to break the system, that enough people don't bother.

    Which is a mind-bogglingly stupid thing to try and do, because making it inconvenient to break the system is merely creating an interesting challenge, and that's the primary reason why most of these people break them. It is extremely rare for anybody to attack these systems for any other reason.

    Well, I guess that comes down to the consequences of the system being broken. If it's like that DVD encryption scheme, where they'd used one key to encode countless DVDs and, as soon as some guy released it online, the whole system fell to bits, then yes. But if it's more a case of "you can break it, but only if you solder these home-made electronics straight onto your motherboard and then hope you never need the warranty", then only a few people will actually do it, so you should still make enough of a profit from your sales.

    Music's the one that always makes me laugh, since every DRM system ever invented can be circumvented by the high tech method of plugging a recorder into your sound card. Sony seemed to have the best attitude, one half of the company would sell you CDs and whinge about people copying to tape, while the other half sold high tech CD players, with special functions to make it easier to copy CDs to tape.

     



  • @Devi said:

    Sony seemed to have the best attitude, one half of the company would sell you CDs and whinge about people copying to tape, while the other half sold high tech CD players, with special functions to make it easier to copy CDs to tape.

    This has given us the hilarious result that on several occasions, the first half of the company has sued the other half. That company has issues. 

