Security Axioms



  • Contrary to current dogma, most computing devices are inherently secure. They rely on an inherent level of "security by obscurity," thanks to which true systems-level cracking (as opposed to "social engineering" and the like) has usually required a fairly high level of skill to pull off; the "script kiddie" was a fast-growing phenomenon, but remains the exception.

    This is why the term "hacker" originally just meant a skilled programmer of any sort, not necessarily a rogue actor. Security was written mostly to stave off the (l)users... of course someone who took the time to really learn about programming could eventually defeat most security measures. UNIX definitely exhibits this design "flaw," which is actually the only approach that ever ends up being economically sound. Any attempt to keep out real "hackers" puts one foot into metaphorical quicksand.


    This fact was forgotten by security theorists at some point, but it remains true; the attempt to make systems impermeable to attack, even while they are connected most of the time to computers operated by malevolent, skilled programmers, is a relatively modern phenomenon. Of course, the fact that this new perspective on security has led to a lucrative digital arms race (both for the hackers and for purveyors of security products and services) has played a role in its popularity. But there is not enough money in the world to prove, for example, that you cannot rob me via the Internet, if I choose to connect to it and to use its standards and patterns in my computing efforts.



    Truly strong security must therefore rest on physical measures. These include tough decisions about what to connect a network to, as well as a certain skepticism toward modelling a network after other, well-known networks. The Internet - a freely accessible worldwide bus to which connected computers expose themselves to each other via dozens of protocols - fails miserably in this respect. Really secure networks are proprietary, localized, and low-profile: an unlisted phone number; a modem running at a non-standard baud rate; a dedicated run of copper; or even a "Russian Woodpecker" style digital radio transmission. Nothing is making you use ASCII, or 8-bit bytes, or Intel assembly language, or Ethernet except your own fear. And security - digital or otherwise - is not built on fear.



    For software, on the other hand, security must remain at best one potential feature among many. This is a sensible categorization; as discussed above, software-level security is fallible, and is in fact unnecessary in many regimes. Should my fancy digital watch, for example, force me to "log in" somehow? Or ought it to work for whoever happens to be wearing it (just like many a timepiece costing thousands of dollars)? And does a digital watch (at least, a really full-featured one) not run a program of some kind, whose development falls under the purview of computer science just like any other? The net result of this line of thinking is that general statements about computer security are inherently meaningless, and must be treated with great skepticism.



    Finally, to the extent that software-level security must exist, it ought to be abstracted away from the applications programmer. Environments that require the developer to "remember" to implement arbitrary-sounding coding patterns in the name of security are themselves security problems, not the programmers who step unwittingly into these traps, with every good intention of applying expensive hardware to real problems. Unfortunately, most of the environments used by posters on sites like this one seem to exhibit this basic "human factors" problem in all its glory.



    Superficially, all of this seems to hint that certain programmers ought to be out of work. And it is true that writing viruses, and anti-virus programs, and everything in between, has definitely been an outlet for the efforts of many surplus systems programmers. As the operating system and its tool suite have become homogenized under "Wintel," the people who would otherwise have been writing "proprietary" or academic operating systems were not needed. Instead, these people ended up at Norton, McAfee, Kaspersky, etc., or - especially in the case of denizens of the second and third worlds - on the other side of the battle writing viruses. There were no longer any jobs at DEC, Apollo, Sage, Data General, etc.



    At a deeper level, though, I am advocating a return to a more fragmented world with a greater variety of platforms, OSes, and networks. Is it not more noble to do real systems design than to play cat-and-mouse with the "script kiddies?" What sort of person do we want designing systems: a creator, or Barney Fife? Deputy Fife is not a good model for a computer scientist; the world of computing has until very recently been completely opposite in its world view: open, accepting, collaborative, and even somewhat private.



    I am tired of living in a virtual world created by Barney Fifes, and I think many other thoughtful people are as well. I take comfort in the fact that the fundamental rules of computing have not changed, and even a long and annoying fad is still just a fad.



  • I will take some time to read all that. Can you give me a top-level management summary?



  • @Nagesh said:

    I will take some time to read all that. Can you give me a top-level management summary?
     

    "There's no such thing as absolute security."

    That may be a bit too simplistic, but it's actually the truth: any time you have information available to any person anywhere, there is some level of insecurity.

    I would probably argue, though, that from a philosophical standpoint "true" security would be having a system where, even if everyone knew every detail of the system, it wouldn't matter. That would be True Security, but I don't think it's actually achievable.



  • @too_many_usernames said:

    @Nagesh said:

    I will take some time to read all that. Can you give me a top-level management summary?
     

    "There's no such thing as absolute security."

    That may be a bit too simplistic, but it's actually the truth: any time you have information available to any person anywhere, there is some level of insecurity.

    I would probably argue, though, that from a philosophical standpoint "true" security would be having a system where, even if everyone knew every detail of the system, it wouldn't matter. That would be True Security, but I don't think it's actually achievable.

    I believe this is true in the case of "personal protection" also. No matter what the security, nothing stops a determined killer from getting to the target. There are several examples in human history that I have seen on the History Channel.



  • @bridget99 said:

    Finally, to the extent that software-level security must exist, it ought to be abstracted away from the applications programmer. Environments that require the developer to "remember" to implement arbitrary-sounding coding patterns in the name of security are themselves security problems, not the programmers who step unwittingly into these traps, with every good intention of applying expensive hardware to real problems. Unfortunately, most of the environments used by posters on sites like this one seem to exhibit this basic "human factors" problem in all its glory.
    I hope you're trolling. Abstracting computer security away from developers is like abstracting building security away from architects. Non-security-trained developers are always creating new security holes (they call them features), and each one has to be designed properly at every level. In the 90s, security people thought they could simply install a firewall, block some ports, and walk away. Because of this, almost all modern protocols run over HTTP, because HTTP was the only thing the firewall guys in the 90s left open. Now, port-based firewalling is only the first step in securing the perimeter.



    At the most basic level, every developer at least needs to know what they aren't qualified to do. All of them need to know the value of single sign-on. All of them need to know that they aren't qualified to create their own encryption algorithm, or even their own implementation of a known good one. All of them need to know that even the shittiest security scheme is perfectly secure against the moron who designed it, and that his opinion of his own pile of garbage will be identical to Bruce Schneier's opinion of his own work, and therefore meaningless (actually, Schneier will probably be more self-critical than the moron). (A small illustration of the "known good implementation" point follows at the end of this post.)
    @bridget99 said:
    At a deeper level, though, I am advocating a return to a more fragmented world with a greater variety of platforms, OSes, and networks. Is it not more noble to do real systems design than to play cat-and-mouse with the "script kiddies?" What sort of person do we want designing systems: a creator, or Barney Fife? Deputy Fife is not a good model for a computer scientist; the world of computing has until very recently been completely opposite in its world view: open, accepting, collaborative, and even somewhat private.
    I hope you realize that the crap that Microsoft puts out has much better security than all of the stuff from the 80s precisely because the whole world uses it. The increasing number of security flaws found is not related to an increasing number of coding errors, but rather to the increasing value of breaking in. If the computing world suddenly reset to 1975, but with the Internet, every bank account would be cleared out in the first ten minutes.
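
    As a small, concrete illustration of the "known good implementation" point above, here is a minimal sketch in Python using only the standard library's hashlib, hmac, and secrets modules: storing passwords with a vetted KDF instead of a home-grown hash. The iteration count and parameter choices are illustrative assumptions, not a recommendation for any particular system.

    ```python
    import hashlib
    import hmac
    import secrets

    # Password storage the boring way: a standard, well-studied KDF
    # (PBKDF2-HMAC-SHA256) with a random per-user salt, instead of a
    # home-grown hashing scheme.
    ITERATIONS = 600_000  # illustrative work factor; tune for your own hardware

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        # Constant-time comparison; a plain '==' would leak timing information.
        return hmac.compare_digest(candidate, expected)

    salt, stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, stored)
    assert not verify_password("Tr0ub4dor&3", salt, stored)
    ```

    Nothing in there is clever, and that is the point: the only decisions left to the developer are the ones a library cannot make for him (the work factor, and where the salt and digest live).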



  • @Jaime said:

    I hope you're trolling. Abstracting computer security away from developers is like abstracting building security away from architects.

    What if I, as a web developer, outsource all my security to Facebook Connect? Or Windows LiveID? What do you think about that? If I did that, do you think it would matter how much I knew about security?

    Security should be (and already pretty much is) 100% something you can drop in a component for. There are so many advantages to it that you'd be hard-pressed to talk me *out* of doing it... that's a huge competitive advantage, and a huge (free) insurance policy in case something does go wrong. ("Oh, Facebook barfed? Well, that's unfortunate, but I don't have your data, Facebook does. So not my problem!") And 9 out of 10 of my customers love it.

    Yes, from the ivory tower "all software must be perfect, all programmers must know everything, Donald Knuth is my God and I bow before his image every morning" viewpoint, you're entirely right. However, I like to live here in the Real World. In the Real World, the odds that I could hire a programmer who could write code a third as secure as Facebook's are about 1 in 1,000... probably only by sniping one of Facebook's (or LiveID's, or maybe Twitter's) employees. Here in the Real World, I want my programmers working on tasks that solve my customers' problems and earn us money, not wasting their time on something that's already been perfected elsewhere.

    @Jaime said:

    I hope you realize that the crap that Microsoft puts out has much better security than all of the stuff from the 80s precisely because the whole world uses it. The increasing number of security flaws found is not related to an increasing number of coding errors, but rather to the increasing value of breaking in. If the computing world suddenly reset to 1975, but with the Internet, every bank account would be cleared out in the first ten minutes.

    The real problem with his idea is that the (sane) world has realized that the OS is an implementation detail.



  • You can outsource some of your security to Facebook Connect, but not all of it. I referred to this in my mention of the value of single sign-on. However, if you outsource security to Facebook Connect and build a web app riddled with SQL injection vulnerabilities, you are screwed. We've all seen the WTFs with an admin page that has an ?admin=true query string parameter. This simply can't be dropped in. Every time a feature is created that adds new attack surface, it has to be defended. There are tons of AJAX apps out there written by poor developers that accidentally expose a hidden API that is completely insecure. Some developers never consider the fact that a curious (or malevolent) user will look at the page source and start poking at the back end directly. (A short sketch of the SQL injection point follows at the end of this post.)



    Only 3 of the OWASP Top Ten can be bought. The other 7 have to be done by the developers and implementers of the system.
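
    As promised above, here is a minimal sketch of the SQL injection point, in Python with the standard library's sqlite3 module; the table, column, and input value are made up for illustration:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    user_input = "alice' OR '1'='1"  # attacker-controlled value

    # Vulnerable: the input is pasted straight into the SQL text, so the
    # injected OR clause matches every row in the table.
    vulnerable = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'"
    ).fetchall()

    # Safer: a parameterized query treats the input as data, not as SQL,
    # so the injection attempt simply matches no rows.
    parameterized = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()

    print(vulnerable)     # [('alice', 0)] -- the injection succeeded
    print(parameterized)  # [] -- the injection failed
    ```

    No firewall, and no drop-in authentication provider, sits between those two queries; only the developer does.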



  • Here is a story from 2001



    Columbia House accidentally left directory indexing on and accidentally stored non-public information within the directory tree of the public web site. This still happens ten years later. There is no product that can be purchased that will prevent people who are not thinking defensively from accidentally leaving the corporate jewels out in the open. The fact that people think that this problem could ever be solved by outsourcing is the reason it continues.



  • @Jaime said:

    Here is a story from 2001



    Columbia House accidentally left directory indexing on and accidentally stored non-public information within the directory tree of the public web site. This still happens ten years later. There is no product that can be purchased that will prevent people who are not thinking defensively from accidentally leaving the corporate jewels out in the open. The fact that people think that this problem could ever be solved by outsourcing is the reason it continues.




    Columbia House?? Weren't they the people who wanted customers to tape a penny to a "Business Reply Mail" card so they could send them music on cassettes? I can't imagine that they have (or likely "had" at this point) any corporate jewels to speak of. Maybe the source of controversy here is not so much "how should we achieve security?" as "what do we care about securing?" I doubt it really is worth much IT trouble to secure data like "Bridget99 spent $.01 to obtain 8 cassettes by Led Zeppelin." I can assure you this is not worth 5 minutes of programmer time, and I say this speaking as 1) the programmer and 2) the putative victim.



    More to the point, I think that we've all fallen victim to the mentality that leads us to speak of things like "corporate jewels." My name, address, phone number, and SSN are really not half as valuable as you think they are, and I'm getting pretty disgusted at living in a world where "privacy" has become a sort of sine qua non of personal freedom.



    Besides that, I don't think this example speaks to my basic point that security ought to be abstracted away from programmers. Take a step back and ask yourself how this happened. There are several explanations that make sense to me. First, development was taking place in a production environment. This is basically the explanation Columbia House gave. In my experience, when this happens, it is virtually always a result of the failure of IT support (i.e. technicians and administrators) to give the programmers a proper test environment. And by "proper," I mean one that can be efficiently re-imaged with production data without having to find someone and make them do something.



    Do the developers that you work with have this? If not, do you really expect them to interrupt their workflow to protect some quasi-important data from some hypothetical attack? This just doesn't make economic sense, especially when management is clamoring for one of those "features" you mentioned.



    Second, I can envision a "problem" like the one you cited happening because the development team stored data in a folder they thought was adequately secured, but this turned out not to be the case. Again, this is not really the programmers' fault. In fact, it's an instance of the same basic problem I keep citing: security is not abstracted away from the developers by the IT staff. And the IT staff can hem and haw about "developers" with their godforsaken "features" (the whole reason the IT staff even draws a paycheck, incidentally), but all of this whining is really just a distraction from the fact that IT failed the programmers.



  • @bridget99 said:

    More to the point, I think that we've all fallen victim to the mentality that leads us to speak of things like "corporate jewels." My name, address, phone number, and SSN are really not half as valuable as you think they are, and I'm getting pretty disgusted at living in a world where "privacy" has become a sort of sine qua non of personal freedom.
    The case I really wanted to link to was an example where somebody failed exactly the same way but lost 130,000 credit card numbers instead of simply "personal information".  The failure is the same.  Due to the embarrassment created by these situations, very few of them get reported, so it is hard to find a good case study.

    @bridget99 said:

    Second, I can envision a "problem" like the one you cited happening because the development team stored data in a folder they thought was adequately secured, but this turned out not to be the case.
    So, how do you abstract this decision away from the developers?  It isn't as simple as "IT didn't secure the folder properly".  It's more like "IT gave the developers several areas, all secured properly, and the developers put private content in the public folder".  The only way to abstract this away from them is to double-check *everything* they do.  I don't mean every security decision, I mean everything, because there is no way a developer who is (intentionally) unaware of security issues can decide when to bring in the security guys.  The far easier solution is to make security part of everybody's job.

    Even in the cases where abstraction seems easy, it turns out that an untrained developer can screw things up badly.  Encryption is a good example.  There are tons of security libraries, and you'd be an idiot not to use one.  However, using a library is necessary, but not sufficient, for good security.  For example: even if you use a good AES encryption library, you still need to do a lot of things properly.  You need a secure place to store the encryption keys.  Depending on your needs, any one of a dozen places may or may not be appropriate; choose wrong and you are screwed.  If you encrypt large blocks of repetitive data in ECB mode, you are screwed.  If you re-use initialization vectors, you are screwed.  The list goes on forever; I've never seen a non-security-trained developer who could build an application that a security department could then slap security on top of.  The model of code -> penetration test -> fix has failed us miserably in the past and will continue to do so in the future.  If it worked, Internet Explorer would be security-defect-free by now.
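
    To make the AES point concrete, here is a minimal sketch (assuming the third-party Python cryptography package; key storage, which is the genuinely hard part, is deliberately left out) of the kind of discipline a library cannot enforce for you. It uses AES-GCM rather than ECB, with a fresh random nonce per message:

    ```python
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # The library hands you a sound primitive...
    key = AESGCM.generate_key(bit_length=256)  # ...but where this key lives is your problem.
    aead = AESGCM(key)

    def encrypt(plaintext: bytes) -> bytes:
        # A fresh random nonce for every message; reusing one with the same
        # key breaks AES-GCM completely, and no library call will stop you.
        nonce = os.urandom(12)
        return nonce + aead.encrypt(nonce, plaintext, None)

    def decrypt(blob: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return aead.decrypt(nonce, ciphertext, None)

    token = encrypt(b"card=4111111111111111")
    assert decrypt(token) == b"card=4111111111111111"
    ```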

     



  • @Jaime said:

    There are tons of security libraries, and you'd be an idiot not to use one.  However, using a library is necessary, but not sufficient, for good security.  For example: even if you use a good AES encryption library, you still need to do a lot of things properly...




    I agree, and this is exactly the problem I am lamenting. Just because this problem exists does not mean that it is inevitable. What I am saying is that it is an uneconomical and unsustainable model to expect developers to remember to do all of these things. They ought to be infrastructural.



    @Jaime said:
    The model of code -> penetration test -> fix has failed us miserably




    Again, I agree, but don't assume too much. I am not advocating that model; what I am really advocating is "code, deploy to an inherently secure location, move on to another project." And for the record, I'm against any design that forces developers to remember something arbitrary. If one examines the design of WPF, for example, there are several cases where the developer has to remember to name properties using certain patterns, or certain aspects of designer support will break (without a compile-time error). I think that's crap... when I personally find that I have coded something like that (or am thinking about doing something like that), I typically go back and check the assumptions underlying my design and find that one of them has proven, during implementation, to be invalid. I think we've reached that point with "security," but there's too much vested interest in the status quo (in the form of people with security-related experience and certifications, people employed by antivirus companies, overstaffed IT departments doing 'threat management,' etc.) for us to expect a sudden change.



  • @bridget99 said:

    I think we've reached that point with "security," but there's too much vested interest in the status quo (in the form of people with security-related experience and certifications, people employed by antivirus companies, overstaffed IT departments doing 'threat management,' etc.) for us to expect a sudden change.
    I hope you realize that these things exist to move security out of the hands of users and developers.  They are doing exactly what you advocate and have proven it to be a bad idea.

    Physical security example:  There is an old lady who grew up in a nice town in a nicer time; she never used to lock her front door.  She moves to the city.  Solution A: install a higher-quality lock in the front door.  Solution B: leave the current, adequate lock in the door and convince the lady to be wary of her neighbors and lock her door.  Solution A is security through infrastructure, and is doomed to fail.  Solution B is security through training, and has some chance of success.  You cannot isolate someone from the repercussions of their security choices and hope to make it better.



  • Here is a story from today that shows my point very well: http://www.dailymail.co.uk/news/article-2003393/How-Citigroup-hackers-broke-door-using-banks-website.html



    200,000 credit card numbers compromised due to a simple URL injection flaw. No security product will find this flaw. Very few security audits done by non-programmers would have found this flaw. This can only be effectively fixed by using secure programming techniques and doing security code reviews within the development team.
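
    For what it's worth, the fix is not exotic. Here is a minimal sketch of the missing server-side ownership check, in Python with sqlite3; all of the names and the schema are invented for illustration and have nothing to do with Citigroup's actual system:

    ```python
    import sqlite3

    class AccessDenied(Exception):
        pass

    def get_account(db, current_user_id, account_id_from_url):
        """Return an account only if it belongs to the logged-in user."""
        row = db.execute(
            "SELECT id, owner_id, balance FROM accounts WHERE id = ?",
            (account_id_from_url,),
        ).fetchone()
        if row is None or row["owner_id"] != current_user_id:
            # The account number in the URL is just a claim made by the browser;
            # ownership has to be re-checked on the server for every request.
            raise AccessDenied("account does not belong to the logged-in user")
        return {"id": row["id"], "balance": row["balance"]}

    # Tiny in-memory demo
    db = sqlite3.connect(":memory:")
    db.row_factory = sqlite3.Row
    db.execute("CREATE TABLE accounts (id INTEGER, owner_id INTEGER, balance REAL)")
    db.execute("INSERT INTO accounts VALUES (1, 42, 100.0), (2, 99, 5000.0)")

    print(get_account(db, current_user_id=42, account_id_from_url=1))  # allowed
    try:
        get_account(db, current_user_id=42, account_id_from_url=2)     # someone else's account
    except AccessDenied as e:
        print("blocked:", e)
    ```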



  • @bridget99 said:

    Contrary to current dogma, most computing devices are inherently secure. [...]

    no



  • @Nagesh said:

    I will take some time to read all that. Can you give me a top-level management summary?
     

    Courtesy of MSWord's "Auto Summarize" feature:

    @Auto Summarize said:

    This fact was forgotten by security theorists at some point, but it remains true; the attempt to make systems impermeable to attack, even while they are connected most of the time to computers operated by malevolent, skilled programmers, is a relatively modern phenomenon.

    Truly strong security must therefore rest on physical measures.

    Finally, to the extent that software-level security must exist, it ought to be abstracted away from the applications programmer.

     

