Since when do CompSci people code like THAT?



  • This thread only exists because I am bored out of my skull from supporting the piece of junk I'm about to describe. I was given the task of hosting and supporting it about 2 months ago by one of my friends, who happens to have coded it and says "it's a piece of cake to support". The collection of scripts manages orders for a futon manufacturing/retail company. All it does is keep the orders in a table and print order sheets.

    Should be pretty easy, right? I thought so at first, until I saw the multiple WTFs. For a start, the thing interfaces with a Delphi program. The thing itself is built in PHP and outputs XML files. As to how that output is read by the Delphi program, I have absolutely no clue: I wasn't given any documentation, or even a nice flowchart of how the thing works. Sounds fun already, right? It gets a lot more fun.

    Week #1:
    I got a call from the users saying "the system did not work", which I did not get at first, because the thing had run smoothly on the old server and all I did was export everything needed and install the correct libs. The system also generated quite "decent" (if rather empty) XML pages. So I queried the original dev, and he said it was a "db problem" (?). Shortly after I purged the DB, the system worked.

    Week #2 (first big WTF):
    "The server is down!!!!! Please do something!"
    Went to check the DB, and this time I almost fell over laughing. I had never really checked the format of the DB entries, and now I wish I hadn't. For the love of PHP, why would someone be mad enough to store 13 fields of data...as TEXT...representing the final XML structure...bzcompressed and urlencoded in the DB?
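    Concretely, a blob like that would be produced and read back by a round-trip along these lines (a reconstruction on my part, assuming bzcompress plus urlencode; I was never shown the actual insert code):

```php
<?php
// Hypothetical reconstruction of the storage scheme described above;
// only the decode side was ever visible to me.
$xml = '<order><id>1</id><item>futon</item></order>'; // final XML for one order

$blob = urlencode(bzcompress($xml)); // what presumably lands in the TEXT column

$roundtrip = bzdecompress(urldecode($blob)); // the decode the frontend performs
// $roundtrip is byte-for-byte identical to $xml
```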

    That was only the tip of the iceberg, though. See for yourself:

    $result = mysql_query("SELECT * FROM orders WHERE ID='{$_POST['OrderID']}'");
    // show all orders currently present:
    $row = mysql_fetch_array($result);
    $data = bzdecompress(urldecode($row['XML']));
    // output the XML:
    echo $data;

     

    And no, I am not kidding:
    - SQL injection flaw on line 1
    - No check for items before fetching data on line 2
    - XSS injection possibility on line 3 (not just that, a single misplaced char in the DB causes the script to fail quite miserably)
    All this in 3 of the 5 lines of the script. 
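    For comparison, a minimally hardened version of that retrieval might look like the following. This is only a sketch: the function names, and the assumption that OrderID is numeric, are mine, since nothing about the app is documented, and it assumes the mysqli extension with prepared statements is available.

```php
<?php
// Sketch of a hardened retrieval. Assumptions (not from the original app):
// OrderID is a plain numeric ID, and mysqli prepared statements are available.

// Reject anything that is not a plain numeric ID before it ever reaches SQL.
function validate_order_id($raw) {
    return is_string($raw) && ctype_digit($raw) ? (int) $raw : null;
}

// Fetch via a prepared statement and fail cleanly when the order does not exist.
function fetch_order_xml(mysqli $db, int $id) {
    $stmt = $db->prepare('SELECT XML FROM orders WHERE ID = ?');
    $stmt->bind_param('i', $id);
    $stmt->execute();
    $row = $stmt->get_result()->fetch_assoc();
    return $row === null ? null : bzdecompress(urldecode($row['XML']));
}
```

    That fixes the injection and the missing existence check; it does nothing about the underlying blob-in-a-TEXT-column design.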

     Now, knowing that I have no access whatsoever to the Delphi program, how am I supposed to do anything to support this monstrous, unsanitized piece of code? And that's just one file of the thing.  And if you really want to see how thorough the sanitization is, note that these greps all come back empty:

    www # grep -i 'addslashes' *
    www # grep -i 'htmlentities' *
    www # grep -i 'mysql_real_escape' *

    This is what I call sanitized and secure scripting.

     

    (I'm sorry this took a rather disorganized form, but I just can't work out where to start with that script anymore. Plus, I seem to get all the flak for the script not working, when every problem is due either to missing sanitization or to the script not checking that items exist.)



  • Fixing this for you: @anima said:

    My friend wrote an ordering system for a futon manufacturer and asked me to host and maintain it. The system has a MySQL backend, Delphi middle tier, and a PHP frontend, and outputs XML. There is no documentation, and how the various tiers interact is unclear.

    From time to time, the system becomes inaccessible to users, which is fixed by completely purging the database.

    As for how new orders are stored, check this retrieval code out:

    $result = mysql_query("SELECT * FROM orders WHERE ID='{$_POST['OrderID']}'");
    // show all orders currently present:
    $row = mysql_fetch_array($result);
    $data = bzdecompress(urldecode($row['XML']));
    // output the XML:
    echo $data;
    • Orders are stored as urlencoded bzcompressed XML blobs in a TEXT field, rather than using VARCHAR fields for each datum.
    • The function is vulnerable to SQL injection in the OrderID field.
    • The function fails hard if there are no orders with that ID. 

    In addition, the PHP code uses no escaping or other safety checks, like 'addslashes', 'htmlentities', or 'mysql_real_escape_string'.

    Moreover, I have no access to the Delphi code.

     

    How am I supposed to do anything to support this monstrous abomination of a system?



  •  I don't know, that sounds about like the quality of code I'd expect from most comp sci students I've met.



  • In a production environment, used by people who are not IT-literate, and rely on the person hosting it to find the bugs without even having access to the damned thing? Overall, I think I spent 50 hours last month just debugging the thing for one reason or another. There must be a point at which one should just give up.



  • @anima said:

    In a production environment, used by people who are not IT-literate, and rely on the person hosting it to find the bugs without even having access to the damned thing? Overall, I think I spent 50 hours last month just debugging the thing for one reason or another. There must be a point at which one should just give up.

     

    sudo del /* -r -s

    (sorry if that's not quite right.  I haven't taken my linux courses yet.)



  •  It's sudo rm -rf ./*, and the way my luck is going, if I get more flak from the people I support, that's what will be applied to cleanse the server of its insecurities. Think about it this way: why do I get all the flak for supporting a pile of junk when the said pile of junk breaks down because a single person put an ampersand somewhere down the line in a POST variable?



  • Why do you care if the application is secure? If it ain't broke, why fix it? I work on an older codebase (2000 era) that suffers from quite a few vulnerabilities (XSS, SQL injection, CSRF). Sure, I could fix them, but my company does not seem to mind that such vulnerabilities exist. I am not willing to put in the extra man-hours to fix the problem because there is no incentive except for 60-hour work weeks and a nice warm fuzzy feeling, which is more than what management will feel.

     

    If they pay you specifically to fix the security holes, awesome! Otherwise, relax and enjoy some extra time off.



  •  @OSvsOS said:

    Why do you care if the application is secure? If it ain't broke, why fix it? I work on an older codebase (2000 era) that suffers from quite a few vulnerabilities (XSS, SQL injection, CSRF). Sure, I could fix them, but my company does not seem to mind that such vulnerabilities exist. I am not willing to put in the extra man-hours to fix the problem because there is no incentive except for 60-hour work weeks and a nice warm fuzzy feeling, which is more than what management will feel.

     

    If they pay you specifically to fix the security holes, awesome! Otherwise, relax and enjoy some extra time off.

    In case you haven't bothered to read the whole post: it has already broken a few times. And every single time, I got the flak (even though all I'm doing is hosting it...).



  • So the system was working fine until you changed its hosting environment? Did you perform any tests after the switch? The real WTF is developers and sysadmins who do not test or have a backout plan.



  • @OSvsOS said:

    Why do you care if the application is secure? If it ain't broke, why fix it? I work on an older codebase (2000 era) that suffers from quite a few vulnerabilities (XSS, SQL injection, CSRF). Sure, I could fix them, but my company does not seem to mind that such vulnerabilities exist. I am not willing to put in the extra man-hours to fix the problem because there is no incentive except for 60-hour work weeks and a nice warm fuzzy feeling, which is more than what management will feel.

     

    If they pay you specifically to fix the security holes, awesome! Otherwise, relax and enjoy some extra time off.

    If you are hired as a software developer, fixing security problems is your job.  The company doesn't need to pay you extra to fix bugs.  And if you are salaried, some unpaid overtime here and there is also expected as part of the job.  Maybe not 60-hour weeks, but some.  Your job as a software developer is to communicate the severity of the issues to management and develop a plan to correct those issues.  Now, bad management might shut you down and tell you they'd prefer you spend your time some other way than fixing bugs.  In that case, sure, let the piece of crap have problems.  I've been in that situation before, and struggling against bureaucracy or ignorant managers just to be allowed to do the right thing is not worth the hassle.  But if you are not even making an effort to convey the problems and a proposed solution to management, you are majorly failing in your job.

     

    TRWTF is you, sir.



  • Thank you for the tip. You suck at programming! How about that for a comeback?



  • Simple.  Put your prices up.  If this company is paying you $1000/week for 0 hours most weeks and 60 hours in some weeks (hopefully less than 1 in 4) then it's a good money-spinner.

    Even if your contract was only your friend saying "it's a piece of cake," you can make a good case to the customer that the system is a lot more complex than was originally contracted and your prices have to rise.

    If you don't think you can make this kind of money out of it, then it's time to kill the golden goose by telling them that you won't support the old system at the old rate but you will create a new one for a one-off fee of $10,000. (Or whatever your time is worth to rewrite it and add the inevitable new features they will ask for during the development cycle.)  I think most customers will choose the weekly payment plan, even though it will cost them much more in the long term.



  • @Qwerty said:

    Simple.  Put your prices up.  If this company is paying you $1000/week for 0 hours most weeks and 60 hours in some weeks (hopefully less than 1 in 4) then it's a good money-spinner.

    Even if your contract was only your friend saying "it's a piece of cake," you can make a good case to the customer that the system is a lot more complex than was originally contracted and your prices have to rise.

    If you don't think you can make this kind of money out of it, then it's time to kill the golden goose by telling them that you won't support the old system at the old rate but you will create a new one for a one-off fee of $10,000. (Or whatever your time is worth to rewrite it and add the inevitable new features they will ask for during the development cycle.)  I think most customers will choose the weekly payment plan, even though it will cost them much more in the long term.

     

    That whole thing implies that the customer is paying, right? That specific customer now owes me £120 in total. And I'm thinking of pulling the plug on the hosting as we speak, because quite frankly, customers who not only don't pay but also throw flak about their own piece of code really, really should be thrown out the window a.s.a.p.
    I think I also made the grave mistake of going freelance because of this. I may be good at PHP/MySQL, but I'm just unable to deal with support flak from third parties about other third parties' code, aimed at me simply because I'm the person hosting their crap.



  • @anima said:

    That specific customer now owes me £120 in total. And I'm thinking of pulling the plug on the hosting as we speak, because quite frankly, customers who not only don't pay but also throw flak about their own piece of code really, really should be thrown out the window a.s.a.p.

    Amen to that. At the very least, why keep fixing their issues if they are not even paying you?

    @anima said:

    I think I also made the grave mistake of going freelance because of this. I may be good at PHP/MySQL, but I'm just unable to deal with support flak from third parties about other third parties' code, aimed at me simply because I'm the person hosting their crap.

    Take heart, anima - it doesn't have to be like this. Even if you don't take qwerty's suggestion on this one, next time you can insist on a contract beforehand that limits the scope of what you will be doing for the customer. It's reasonable to inspect the system before you finalize the contract, too.

    Also, now you know never to believe your "friend" again when he has a job for you with "such an easy system to support"!



  • @aliquot said:

    Amen to that. At the very least, why keep fixing their issues if they are not even paying you?

    Generosity? I know, it's a flaw in the business world.

    Take heart, anima - it doesn't have to be like this. Even if you don't take qwerty's suggestion on this one, next time you can insist on a contract beforehand that limits the scope of what you will be doing for the customer. It's reasonable to inspect the system before you finalize the contract, too.

    Also, now you know never to believe your "friend" again when he has a job for you with "such an easy system to support"!

    Sounds like a good suggestion; I'll bear that in mind next time, if there is a next time. I'm sick and tired of freelancing, so I think I'll just take a programming job in the area as part of a team.



  • @OSvsOS said:

    Thank you for the tip. You suck at programming! How about that for a comeback?

    About as sensible and thoughtful as your attitude towards software development.  See, this site is for people who care about good software to laugh at the mistakes of those who think fixing bugs isn't part of their job description.  It's a site to mock and learn from WTFs, not brag about creating them.

     

    I'm sorry you were confused on that point.



  • morbius is right; it's not even trolling, it's just the truth. As a programmer you are responsible for the code. No buts, no nothing: it is your responsibility.

    Lots of books and thousands of blog posts have been written about what happens when programmers no longer understand this, stuff like the broken windows effect, but in the end, if you can't take responsibility for code when you are made responsible for it, then you are a bad programmer.



  • @stratos said:

    morbius is right; it's not even trolling, it's just the truth. As a programmer you are responsible for the code. No buts, no nothing: it is your responsibility.

    Lots of books and thousands of blog posts have been written about what happens when programmers no longer understand this, stuff like the broken windows effect, but in the end, if you can't take responsibility for code when you are made responsible for it, then you are a bad programmer.

     

     Security is best applied in layers. If the operating system and network are insecure, what is the point of securing the application layer? If the data has no value, why waste the resources even securing the application? If the probability of attack is low, why secure the application?

    I try to fight the fires that matter and accomplish what will get the biggest return on improving company efficiency and effectiveness. I use secure programming techniques on newer software projects, but not on legacy projects. The legacy projects are left without improvement because "I feel" it is not valuable to the company for me to go back and write numerous software tests to ensure that my "security improvements" did not break the production code.

    Everybody likes to sit on a pillar of righteousness about computer security. Security requires that compromises be made around usability and cost to implement. It is important to have security in mind when designing new applications or supporting old ones. For me, I have 168 hours in my week, and my family means more to me than spending extra time at the office securing an application that is getting replaced in six months. I care more about my family than my work.

     I would rather be a good parent than a good programmer. :)

     



  • @OSvsOS said:

    ...Security requires that compromises be made around usability and cost to implement...

    I disagree with leaving security holes open in general, but in this specific case it may make sense. Given that the way to "fix" the application is to purge the database, I don't see it as particularly critical to keep that data safe.


  • @DemonWasp said:

    I disagree with leaving security holes open in general, but in this specific case it may make sense. Given that the way to "fix" the application is to purge the database, I don't see it as particularly critical to keep that data safe.

    Losing data is not the only thing that can happen as a result of injection vulnerabilities. Sophisticated attacks can take advantage of buffer overflow vulnerabilities in the database server, and possibly take control of the entire machine.

    So if you have more than one app using the same database server, it's only as strong as its weakest point, and neglecting to fix vulnerabilities in that one single app means you don't really care about anything else on the machine.



  • @morbiuswilters said:

    @OSvsOS said:

    Thank you for the tip. You suck at programming! How about that for a comeback?

    About as sensible and thoughtful as your attitude towards software development.  See, this site is for people who care about good software to laugh at the mistakes of those who think fixing bugs isn't part of their job description.  It's a site to mock and learn from WTFs, not brag about creating them.

     

    I'm sorry you were confused on that point.

     

    I'm only a hobbyist web programmer; can I still post?



  • @Master Chief said:

    @morbiuswilters said:

    @OSvsOS said:

    Thank you for the tip. You suck at programming! How about that for a comeback?

    About as sensible and thoughtful as your attitude towards software development.  See, this site is for people who care about good software to laugh at the mistakes of those who think fixing bugs isn't part of their job description.  It's a site to mock and learn from WTFs, not brag about creating them.

     

    I'm sorry you were confused on that point.

     

    I'm only a hobbyist web programmer; can I still post?

    'Course you can...
    Along the same lines as OSvsOS, I've seen a few pieces of software, and once even an entire network, with gaping vulns (in the network case, think "empty netadmin password"). This was in a school environment, where the only thing preventing the kids from logging in as admin and totally bypassing the ISA filtering was the fact that no one tried...until someone did, and the school was literally flooded with print jobs to various printers, because the smart alec found that the admin account had no print quota limit.
    Different vulns, same result. It also illustrates the weakest-link theory of security quite well.



  • Again, I am forced to ponder the thought processes of people who equate SQL and DB management with computer science.  These things have as much to do with computer science as Guitar Hero has to do with guitar playing.

    So, to answer the question: no, CS graduates would not do that; IS graduates certainly might.



  • @mrprogguy said:

    So, to answer the question: no, CS graduates would not do that; IS graduates certainly might.

    I'm sorry, but what planet do you live on?  I've known a good many programmers, and competence has no relation to whether or not they have a CS degree.  Some of the most putrid code I've encountered was written by CS grads, and some of the best developers I've known were self-educated.



  • @OSvsOS said:

    Security is best applied in layers. If the operating system and network are insecure, what is the point of securing the application layer?

    This is so illogical.  It's like saying "well, my house doesn't have a lock on the door, so I might as well leave the windows wide open 24 hours a day".  The whole point of layered security is that there is no single point of failure.  It sucks if there are flaws that make the software vulnerable at some layers, but that does not mean it is sensible to just ignore security altogether.  That's the attitude that leads to major problems down the line.  Fixing security holes (especially in software vulnerable on multiple levels) is a process, and good security requires constant awareness and vigilance.  You start by securing what you can and then push to secure other areas as well.  Simply giving up on security altogether is negligence.



  • @anima said:

    'Course you can...
    Along the same lines as OSvsOS, I've seen a few pieces of software, and once even an entire network, with gaping vulns (in the network case, think "empty netadmin password"). This was in a school environment, where the only thing preventing the kids from logging in as admin and totally bypassing the ISA filtering was the fact that no one tried...until someone did, and the school was literally flooded with print jobs to various printers, because the smart alec found that the admin account had no print quota limit.
    Different vulns, same result. It also illustrates the weakest-link theory of security quite well.

     

    He got admin access to a school network and he filled up the printer queues.

    Talk about lack of imagination.



  • @Master Chief said:

    @anima said:

    'Course you can...
    Along the same lines as OSvsOS, I've seen a few pieces of software, and once even an entire network, with gaping vulns (in the network case, think "empty netadmin password"). This was in a school environment, where the only thing preventing the kids from logging in as admin and totally bypassing the ISA filtering was the fact that no one tried...until someone did, and the school was literally flooded with print jobs to various printers, because the smart alec found that the admin account had no print quota limit.
    Different vulns, same result. It also illustrates the weakest-link theory of security quite well.

     

    He got admin access to a school network and he filled up the printer queues.

    Talk about lack of imagination.

     

    The server used to input marks is on another (physically separate) network. And he couldn't easily have got into other people's documents, judging by the very obscure permissions on all the files and folders for each user.



  •  @anima said:

    The server used to input marks is on another (physically separate) network. And he couldn't easily have got into other people's documents, judging by the very obscure permissions on all the files and folders for each user.

    I'm not talking about changing grades; I'm talking about changing the domain, the DHCP settings, the gateway, the DHCP server IP, the wireless broadcast (if any), etc.  Have FUN!



  • @OSvsOS said:

    @stratos said:

    morbius is right; it's not even trolling, it's just the truth. As a programmer you are responsible for the code. No buts, no nothing: it is your responsibility.

    Lots of books and thousands of blog posts have been written about what happens when programmers no longer understand this, stuff like the broken windows effect, but in the end, if you can't take responsibility for code when you are made responsible for it, then you are a bad programmer.

     

     Security is best applied in layers. If the operating system and network are insecure, what is the point of securing the application layer? If the data has no value, why waste the resources even securing the application? If the probability of attack is low, why secure the application?

    I try to fight the fires that matter and accomplish what will get the biggest return on improving company efficiency and effectiveness. I use secure programming techniques on newer software projects, but not on legacy projects. The legacy projects are left without improvement because "I feel" it is not valuable to the company for me to go back and write numerous software tests to ensure that my "security improvements" did not break the production code.

    Everybody likes to sit on a pillar of righteousness about computer security. Security requires that compromises be made around usability and cost to implement. It is important to have security in mind when designing new applications or supporting old ones. For me, I have 168 hours in my week, and my family means more to me than spending extra time at the office securing an application that is getting replaced in six months. I care more about my family than my work.

     I would rather be a good parent than a good programmer. :)

     

     

    I don't see how you equate "taking responsibility" with "I would rather be a good parent"; perhaps in your fantasy world that makes sense, but not in mine.

    Taking responsibility does not automatically mean making the decisions. If you see giant security flaws but don't have time allocated to fix them, you escalate. You did your job: you took responsibility, couldn't fix it because of time constraints, and informed the person responsible. Job fuckingly well done. Would have taken about 30 minutes.

    Among the many sentences that mark out a bad employee, not even specifically a programmer, is this one: shit hits the fan, shit breaks, and the person responsible says "Oh, yeah, I knew that."

    When someone exploits the system by way of that application, the correct response is not "Oh, yeah, I knew that, but I didn't fix it because the network was insecure anyway."

    The correct response would be "Oh, yeah, I told management and they told me they would allocate some hours to fix that in 2010; I told them about the risk, but they wouldn't listen."

     



  • @OSvsOS said:

    @stratos said:

    morbius is right; it's not even trolling, it's just the truth. As a programmer you are responsible for the code. No buts, no nothing: it is your responsibility.

    Lots of books and thousands of blog posts have been written about what happens when programmers no longer understand this, stuff like the broken windows effect, but in the end, if you can't take responsibility for code when you are made responsible for it, then you are a bad programmer.

     

     Security is best applied in layers. If the operating system and network are insecure, what is the point of securing the application layer? If the data has no value, why waste the resources even securing the application? If the probability of attack is low, why secure the application?

     

    Which layers are directly accessible, and which are not? In the case of this app, the DB, operating system and network are only accessible through that specific interface. That makes it all the more sensible to secure the interface as much as possible, in order to keep the underlying unsecured (and impossible to fully secure) layers from being exposed.
    Saying "it makes no sense to secure one layer and not the others" just strikes me as stupidity. It's like saying "oh, actually, we won't bother with rigorous testing of our ciphers because we know that the underlying mathematical framework is flawed": basically, an excuse to slack.

    @Master Chief: network settings are open anyway. In this case, all it does is create mild inconvenience, and the users know that attempts to fuck around are logged. It's more a question of trust in this case (and they can't override the web filtering by doing this, because only the filter has outside access). Changing the domain is more or less the same thing; the kids behaved well enough, and those who didn't got a nice letter in the post.
    However, there's one thing they could've done that would've pissed everyone off to no end: changing the net admin pwd. Now that'd have been wicked to witness.

    @stratos: Do many work environments actually "allocate" hours in a distant future for security fixes? In every place I've worked, whenever something like that cropped up, it was an immediate matter.



  • @anima said:

    @stratos: Do many work environments actually "allocate" hours in a distant future for security fixes? In every place I've worked, whenever something like that cropped up, it was an immediate matter.

     

    Probably not, but I just needed an example in which the issue was reported but not fixed. ;)

    Although sometimes a fix is written not in the release branch but in the development branch, and then there is some time between writing the fix and putting it into production, a window in which the bug can get exploited. One reason to write the fix in a development branch is that the fix entails refactoring a medium-sized chunk of code. A bug fix is still a change to the code, so unless it is a one-line change, testing is in order.

    In those cases, though, you will usually want to write a separate "bad case" check for the production version; it won't handle things as elegantly, but it will stop the bug from being exploited.



  • @anima said:

    @stratos: Do many work environments actually "allocate" hours in a distant future for security fixes? In every place I've worked, whenever something like that cropped up, it was an immediate matter.

    Unfortunately, it sometimes does happen.  I've had jobs where I've spent hours arguing with management that a particular security problem was extremely serious and should be addressed ASAP, only to be told that it wasn't important.  My favorite was when the manager made an analogy to automobiles in an attempt to "prove" that the problem wasn't so severe.  So I'm completely sympathetic to developers who try to fix problems and bring them to the attention of management only to be told no.  I wouldn't blame them for the security problems, because they did their best to fix them.  However, if no attempt is made to get the problem fixed at all, then I consider that a failure on the part of the programmer.

     

    And just for reference, at the same job this was a pattern of negligence that eventually led me to quit.  On several occasions when I could not get approval to fix security holes on company time, I did it on my own time, simply because I felt it was the right thing to do.  In hindsight, I kind of wish I hadn't, because my efforts were never appreciated, and I actually ended up getting flak because I was unable to fix every security hole in my spare time.  When a few ended up causing problems, the only response from the manager was a smug "Well, you said you fixed these on your own time, but it looks like you didn't do a very good job because they still exist."  That's right: I was basically mocked, and it was implied I was less than competent, because I could not fix all of the outstanding bugs during the few consecutive weeks in which I worked an extra 20+ hours per week on my own time after being forbidden from working on the problems on "company time".  I'm happy to report that that manager was eventually let go, but only after about half of the development staff and I quit.



  • @anima said:

    And no, I am not kidding:
    - SQL injection flaw on line 1
    - No check for items before fetching data on line 2
    - XSS injection possibility on line 3 (not just that, a single misplaced char in the DB causes the script to fail quite miserably)
    All this in 3 of the 5 lines of the script.

    To tackle your subject-line question directly and head-on: since 1988, at least.

    I studied CS a couple decades ago, at a school which was considered nearly top-notch.  In the course of getting my BS degree, out of over 500 hours in classrooms purportedly learning about how to program computers, we spent:

    • 1 hour on computer security.  Most of this hour was spent talking about the value of not griefing others.
    • 0 hours on avoiding SQL injection attacks (I took two classes which required SQL interactions, and a third that touched on SQL briefly.)
    • 15 minutes on validating subroutine return values aka error checking aka failure checking.

    HTML wasn't around until the end, and I didn't take the course they added in my final semester to address it.  Javascript and other client-side dynamic scripting weren't around until after.  As such, the closest thing to an XSS vulnerability the world knew at that point was a set of terminal escape sequences one could send in email, which would insert key sequences into one's keyboard buffer, provided one used the matching terminal and an email reader that didn't protect against the attack - like the default email reader included on every system.  However, they fixed this by patching sendmail to delete all instances of chr(27) in unencoded emails.  Note that the existence of that flaw was not something covered in class.

    Having talked with graduates of many other CS schools, I gather that one of the things that really set my alma mater ahead in the rankings was that 1-hour lecture on computer security, as most of them didn't have anything comparable.

    In short, it seemed to me that as of two decades ago, CS could only improve on computer security, as they were virtually at rock bottom.

    On the bright side, it's been my impression that schools have made great strides since then - I've even heard rumors of a few schools having *several* entire classes focused on computer security.  However, I haven't heard of any school requiring that a CS graduate actually take any of them.



  •  While I agree with the remainder of your comment, I do have to respond to your two questions:

    @OSvsOS said:

    If the operating system and network are insecure, what is the point of securing the application layer?

    C.Y.A. - plain and simple. If there is a security breach you need to be sure that it wasn't your code that caused it.

    @OSvsOS said:

    If the data has no value, why waste the resources even securing the application? If the probability of attack is low, why secure the application?

    If we are talking about a publicly exposed system, then even a relatively low-value application can be used by a hacker for nefarious purposes, which can lead to major liability issues for your company. Even for internal systems, small security bugs can lead to big problems when those systems can be accessed by other systems on the network that have become compromised by malware.

    Yes, don't be a slave to your company, but do champion the cause of securing your applications if only to be able to say "told you so" when (not if) there is a security breach.



  • @DeepThought said:

    Yes, don't be a slave to your company
     

     That is the exact point I wanted to hit on. If management is willing to allocate some of your 40-hour work week to fixing security holes, sweet.

    I am extremely happy with the amount of feedback people provided. 

