Security Scan



  • I'm nearing the end of a web application project. One of our company policies is that the security department does a scan of our app before it goes live. They did it yesterday.

    Today I got the results; we have over a hundred defects we have to address. The defects fall mainly into two categories: "Query Parameter in SSL Request" and "Cacheable SSL Page Found". However, the app is behind a corporate reverse proxy, and corporate policy is to allow only HTTPS on the outside. This policy was written by the same security department that runs the scan. So now I have to justify to them why I made these pages SSL when it was their decision to make everything SSL, or stop using query string parameters and Cache-Control directives forever. They also dinged jQuery for "Client-Side (JavaScript) Cookie References"; I'm supposed to address that too.
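
    For what it's worth, the cookie finding usually just means the scanner spotted cookies that client-side script can read; the standard advice is to flag anything sensitive as HttpOnly and Secure. A minimal sketch of that, assuming a Java servlet back end purely for illustration (the helper class name is made up):

        import javax.servlet.http.Cookie;
        import javax.servlet.http.HttpServletResponse;

        // Illustrative helper only: hypothetical class name, Java servlet stack assumed.
        // HttpOnly cookies are invisible to document.cookie (so jQuery can't read them),
        // and Secure cookies are only sent back over HTTPS, i.e. to the SSL appliance
        // in front of the app.
        public final class SecureCookies {

            private SecureCookies() {
            }

            public static void addSessionCookie(HttpServletResponse response,
                                                String name, String value) {
                Cookie cookie = new Cookie(name, value);
                cookie.setHttpOnly(true); // requires Servlet 3.0 or later
                cookie.setSecure(true);
                cookie.setPath("/");
                response.addCookie(cookie);
            }
        }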



  • Disable HTTPS and inform them that the problem has been solved.



  • That's the thing.... my web server doesn't even have an SSL certificate installed. The appliance in front of my server (owned and operated by the security department) accepts incoming HTTPS and forwards HTTP to my server.



  • Well, if no one can view your web application, then it's completely secure, right?



  • Sounds like an automated audit tool.

    I had similar problems with our IT support department, who found several critical security problems on one of the internal servers. Most of them were outdated software, or simply the telnet daemon running. I simply blocked their subnet in the firewall from accessing those services. Problem solved! (It's an internal server, which nobody would want to pwn. Most people don't even know the server exists, and the people who do have full access to it.)



  • @Daid said:

    I had similar problems with our IT support department, who found several critical security problems on one of the internal servers. Most of them were outdated software, or simply the telnet daemon running. I simply blocked their subnet in the firewall from accessing those services. Problem solved! (It's an internal server, which nobody would want to pwn. Most people don't even know the server exists, and the people who do have full access to it.)

    I'm inclined to agree with them on telnet. Unless you have a really good reason to use it, it should be disabled. If you want a remote shell, use ssh. "Which nobody would want to pwn" is never true - attackers might want to use it as a staging post, or a place to store some code which does nothing for months and then launches attacks on other internal servers. And transmitting passwords en clair is always a bad idea because we all know that far too many people reuse passwords.



  • @pjt33 said:

    "Which nobody would want to pwn" is never true - attackers might want to use it as a staging post, or a place to store some code which does nothing for months and then launches attacks on other internal servers.
     

    Three more categories of attackers: those who want to use it as a testing range for attacks they're planning against another target; those who have nothing better to do with their time than stuff around with random boxes; and (a cohort not to be underestimated) those who don't realise that the server doesn't contain anything of value.


  • :belt_onion:

    @Daid said:

    Sounds like an automated audit tool.

    I had similar problems with our IT support department, who found several critical security problems on one of the internal servers. Most of them were outdated software, or simply the telnet daemon running. I simply blocked their subnet in the firewall from accessing those services. Problem solved! (It's an internal server, which nobody would want to pwn. Most people don't even know the server exists, and the people who do have full access to it.)

    I replied earlier, but I think my reply could have been more succinct:

    1. Outdated software and use of the Telnet protocol are critical security problems.
    2. Your IT support department pointing that out is not a problem.
    3. There's no such thing as a server no one knows about; if it's on the network, you have to assume everyone knows about it and its weaknesses.
    4. You may think your solution was very clever, but you didn't do yourself any favors.

    If you're able to block your own IT support department from accessing the server, why not block everyone except those who are supposed to know about it and use it?

    (That still can't be considered "secure", since (a) there's no such thing as secure and (b) those users' machines and their connections to the server may be compromised, but it's a damn good start.)



  • I don't know nearly enough about jQuery to comment usefully on that front, but I looked up the other two things and they aren't "SSL is bad", they're "SSL is not necessarily enough". See the write-ups at this link, for instance: http://confluence.arizona.edu/confluence/display/KITT/KFS+Pen+Test+Analyis+-+March+2010

     



  • @emurphy said:

    ... but I looked up the other two things and they aren't "SSL is bad", they're "SSL is not necessarily enough". ...
    It means neither. What the scan tool was trying to tell us is that somebody chose to use SSL on the page, so presumably the data on that page is sensitive, but they also chose to pass data in the URL, which gets littered in a bunch of logs and cache entries. The other one is similar: it doesn't make sense to set a cache directive of public for sensitive data.

    The problem is, given the corporate decision to always use SSL for everything, SSL no longer implies that the data is sensitive. So, the entire rule is rendered useless. If they had their heads screwed on properly, they would realize that this comes up in every review of every project and it is a consequence of their own decision. The only sensible choice is to disable the rule in the scanning tool. The end result is that instead of helping to improve security, they are needlessly wasting everyone's time.

    To give some specific examples: the items that have public cache directives set are product images, the query string parameters are search criteria, and this is a run-of-the-mill eCommerce site where we don't sell anything terribly special. Anyone with half a brain should be able to see that the choices we made were appropriate; there's a rough sketch of that split at the end of this post.

    BTW, the scan report they sent us was 1008 pages long. Just perusing it took several hours.
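
    To make the split concrete, roughly what "appropriate" looks like here is a filter that keeps the product images publicly cacheable and marks everything else no-store, which is the distinction the "Cacheable SSL Page Found" rule is actually trying to enforce. This is only a sketch: it assumes a Java servlet stack and an invented /images/products/ path, neither of which is anything I've actually described.

        import java.io.IOException;

        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Illustrative filter only: hypothetical class name and image path,
        // Java servlet stack assumed.
        public class CacheControlFilter implements Filter {

            @Override
            public void init(FilterConfig filterConfig) {
            }

            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;

                if (request.getRequestURI().startsWith("/images/products/")) {
                    // Static catalogue images: fine for any browser or proxy to cache.
                    response.setHeader("Cache-Control", "public, max-age=86400");
                } else {
                    // Anything that could carry per-user data stays out of shared caches.
                    response.setHeader("Cache-Control", "no-store");
                    response.setHeader("Pragma", "no-cache");
                }
                chain.doFilter(req, res);
            }

            @Override
            public void destroy() {
            }
        }

    (It would get wired up via web.xml or a @WebFilter annotation; the point is just that "public" is a deliberate choice for the images rather than an accident.)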



  • @Jaime said:

    BTW, the scan report they sent us was 1008 pages long. Just perusing it took several hours.

    That is good. Now all that needs to be in place is the ability to have these specific items "signed off" and marked in the scanning tool so that they do not show up in further scans (using this profile)... Your scan tool can do that, right??? <grin>



  • We're like the 100th application they've scanned. They are never going to run the tool with any settings other than the defaults. It's Rational AppScan, so the tool is not to blame.

    I left out one defect they found. The report said our SSL certificate was about to expire. We don't have an SSL certificate; that's managed by the security team on the SSL appliance. Not only that, but it's a known bug in AppScan (http://www-01.ibm.com/support/docview.wss?uid=swg1PM46325). So their tool reported a false positive against one of their own certificates because they aren't running the most current version of the tool, and I'm the one who has to write up a response for it.



  • @TheCPUWizard said:

    Now all that needs to be in place is the ability to have these specific items "signed off" and marked in the scanning tool so that they do not show up in further scans (using this profile).

    The lack of this process (or any acknowledgement that this process needs to be put in place) is precisely the WTF that caused me to post this in the first place.
