Just a regularly abnormal day



  • @Scarlet_Manuka said:

    I've twice worked with the "use this wall jack for network A, or this one for network B" setup. In my previous job, the blue network cables were for the unclassified network and the red cables were for the classified network. We also had two system drives with a caddy arrangement, and you had to boot from the classified system drive to be able to log into the classified network. The classified drive had to be locked in your safe when not in use and overnight. This was all pretty reasonable from a security standpoint.

    This sounds like a lot of work to create a complicated alternative to simply having two computers, one on each network.


  • I survived the hour long Uno hand

    @Scarlet_Manuka said:

    our test network was isolated from our main network

    So jelly. My tests keep failing if run during the day because heavy network traffic from developers getting shit done causes timeouts.


  • Trolleybus Mechanic

    @Yamikuronue said:

    So jelly. My tests keep failing if run during the day because heavy network traffic from developers getting shit done causes timeouts.

    Nononono-- when shit randomly fails due to developers not understanding network infrastructure-- you call that a jellypotato



  • @Jaime said:

    This sounds like a lot of work to create a complicated alternative to simply having two computers, one on each network.

    How would you secure the classified drive during lunch breaks and overnight in your scenario, without the disk caddy arrangement? And if you're using a disk caddy, why not use it to reduce the number of computers required? (I'll assume for the sake of argument that we're using a KVM switch in this scenario so that I don't have to have two 21" CRT monitors on my desk, though I'm not sure a KVM switch would have actually been allowed, for security reasons.)



  • What if the "secured" network only used thin clients?



  • @Scarlet_Manuka said:

    How would you secure the classified drive during lunch breaks and overnight

    Full disk encryption with a boot password. This is precisely the scenario it was made for (a sketch below).

    @Scarlet_Manuka said:

    why not use it to reduce the number of computers required

    The cost of a computer is negligible compared to other costs of maintaining that level of security. It's likely cheaper than the safe.
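
    A minimal sketch of that setup on a modern system, assuming Linux with LUKS (the device name is made up, and whatever FDE product existed in 1999 would differ in the details):

        # One-time setup (device name hypothetical): encrypt, open, and format the partition
        cryptsetup luksFormat /dev/sdb1          # sets the passphrase
        cryptsetup open /dev/sdb1 classified     # prompts for the passphrase
        mkfs.ext4 /dev/mapper/classified

        # Daily use: unlock and mount (passphrase required every time)
        cryptsetup open /dev/sdb1 classified
        mount /dev/mapper/classified /mnt/classified

        # "Locking it in the safe" at lunchtime becomes:
        umount /mnt/classified
        cryptsetup close classified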



  • @Scarlet_Manuka said:

    How would you secure the classified drive during lunch breaks and overnight in your scenario, without the disk caddy arrangement

    If you're using a small computer (a laptop, a Mac mini, or something else of similar size) then you can just put the whole computer in the locked file cabinet.



  • @Scarlet_Manuka said:

    How would you secure the classified drive during lunch breaks and overnight in your scenario

    Here's a better question: how is the data secured in your scenario?

    If the answer is that the data is on the hard drive in the safe, then the organization has a problem with data protection - they need to back up the data or it will be lost in short order. Hard drives that are physically removed from computers twice a day tend to fail a lot.

    Wherever the backup is kept is obviously considered secure enough, so put the actual data there and don't worry about physically securing the hard drive since it has no user data. Or even better, do classified work on a properly protected virtual machine. That also solves the second computer and second monitor problem.



  • @Jaime said:

    Or even better, do classified work on a properly protected virtual machine. That also solves the second computer and second monitor problem.

    You have never worked with classified information, have you? For some levels of classification you are required to have an airgap between protected and unprotected networks. Some stuff you aren't even allowed a phone in the same room as the material. How does a VM fit in with those requirements? Remember that this is all backed by law, and people tend to be overcautious because they don't fancy jail time...



  • @Nocha said:

    You have never worked with classified information, have you?

    Why do you have to go and make this personal? Stay tuned to this post to find out why it is you that has the problem in this case.

    @Nocha said:

    For some levels of classification you are required to have an airgap between protected and unprotected networks.

    No shit, but this isn't one of them. From the post that started this:

    @Scarlet_Manuka said:

    I've twice worked with the "use this wall jack for network A, or this one for network B" setup. In my previous job, the blue network cables were for the unclassified network and the red cables were for the classified network. We also had two system drives with a caddy arrangement, and you had to boot from the classified system drive to be able to log into the classified network. The classified drive had to be locked in your safe when not in use and overnight. This was all pretty reasonable from a security standpoint.

    @Scarlet_Manuka obviously wasn't in an airgap scenario because he had red cables for the classified network. If they required airgapping, the red cables wouldn't exist.

    @Nocha said:

    How does a VM fit in with those requirements?

    Put the VM on the other side of a firewall and use one of those RSA key fobs with the changing numbers. The NSA already trusts secure data centers with its biggest secrets, so this wouldn't be any different. Actually, in some respects it's better, since you can revoke access on a few moments' notice and the user is left with nothing.



    Given that these companies have gone to the effort of running multiple LANs and requiring what are essentially separate machines, I assumed there was a regulatory reason for it. Add in @Scarlet_Manuka's use of the terms "classified" and "unclassified", and I came to the obvious conclusion.

    I have never seen a setup where classified material can be kept in the same environment as the internet, with only a firewall dividing them. I'm not sure that all countries would allow it. Hell, for some stuff (e.g. advanced weapon designs) I'm not sure any country would allow it.



  • Dude... I originally suggested a second computer for exactly that reason.



  • Damn, so you did. Apologies, I completely missed that post.

    Nocha is a muppet



  • Also...

    I work at a cloud company that specializes in architectural/engineering collaboration. There is work going on in our data centers that would suffer as much from a data disclosure as a weapons design project would. It's possible to secure this stuff. Airgapping hasn't been a cure-all since electronics migrated into the workplace; the Soviets bugged typewriters in our embassy in the 1980s, and you can't get more airgapped than that. If you add the opportunity cost of the inability to collaborate, spending a boatload on security is often more economical than airgapping.



  • @Jaime said:

    You've solved half your problem - you can commit. But you didn't solve the other half - making sure your code actually exists somewhere other than your laptop, or anyone else's problem - making sure that everyone else can get to it. Also, the half of your problem you solved is the less important half.

    Again, not following you. I assume they have SVN. Use git-svn to check out the svn repo into a local git repo, do commits etc. locally, and when you have a chance, plug into the magic jacks to sync up to the svn repo (something like the sketch below). You get the ability to do local commits whenever you want, and your code still lives in the central repo when you have a chance to sync up.

    Everyone can do this to make sure everyone can get to the code. It won't be the absolute latest code if you've made local commits but have not sent them to the svn repo, but it's still better than not having the code committed at all.
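
    Concretely, something like this (the repo URL is made up, and this assumes git-svn is installed):

        # One-time, while plugged into the magic jack:
        git svn clone https://svn.example.com/repo/trunk myproject

        # Disconnected: work and commit locally as often as you like
        cd myproject
        git commit -am "Fix the frobnicator"

        # Next time you're at the jack:
        git svn rebase     # pull down everyone else's SVN revisions
        git svn dcommit    # replay your local commits into the SVN repo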



  • @LB_ said:

    You can just email a zip of the .git directory.

    :rofl:

    dir /s .git
    ...
    1,225,425,610 bytes

    After zip, 1,209,564,238



  • The point isn't to save space, it's to preserve the history of commits. I only said to zip it because that's how sending directories works ;p
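
    Something like this, say (paths made up; any transport that can move one file works the same way):

        zip -r myproject-git.zip myproject/.git   # the .git directory alone carries the full history

        # Recipient: unzip, then rebuild the working tree from that history
        unzip myproject-git.zip
        cd myproject
        git checkout -- .     # restore all tracked files from the repository
        git log --oneline     # the complete commit history survived the trip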



  • @David_C said:

    If you're using a small computer (a laptop, a Mac mini, or something else of similar size) then you can just put the whole computer in the locked file cabinet.

    These were desktops; it would have been a bit inconvenient to unplug everything so I could stick the computer in the safe every lunchtime.

    @Jaime said:

    Here's a better question: how is the data secured in your scenario? If the answer is that the data is on the hard drive in the safe

    Yes, as I described.

    @Jaime said:

    then the organization has a problem with data protection - they need to back up the data or it will be lost in short order.

    Don't recall having any trouble with it while I was there. In any case, do you typically leave all your work on your local hard drive? I prefer to work in saner environments where you use the local hard drive for temporary or personal stuff, and save all your work stuff on the servers.

    @Nocha said:

    Some stuff you aren't even allowed a phone in the same room as the material.

    This was also true; technically not if you were just working on your own, but if you were discussing anything out loud you weren't allowed to have a phone around. Didn't bother me, since I didn't have one back then anyway.

    @Jaime said:

    @Scarlet_Manuka obviously wasn't in an airgap scenario because he had red cables for the classified network. If they required airgapping, the red cables wouldn't exist.

    Not sure I quite follow you here. The red and blue cables weren't plugged into the same wall jack; we had one wall jack with a blue cable and one with a red cable. As far as I know there was no physical connection between the classified and unclassified networks, but I wasn't in the IT department there so I have no direct knowledge.

    @Jaime said:

    Or even better, do classified work on a properly protected virtual machine.

    I doubt relying on the VM software to isolate the classified information from the unclassified host would have been considered acceptable, but I don't know for certain. How good was the VM situation c. 1999-2001 anyway? It's not something I ever dealt with until much later, so I really have no idea.

    @David_C said:

    Allowing any unauthorized media (like flash drives, iPods or installers for unapproved software) to connect to a computer on that network violates the air gap and is a vulnerability.

    Too right. Back then we didn't have to worry about flash drives or iPods too much, but I do recall several rules around the use of floppies. A floppy was considered to have the security classification of either the highest-level information that had ever been stored on it, or the highest-level network it had ever been connected to; I forget which, though the second sounds like a better idea. IIRC, if it was rated classified it could only ever be used on the classified network, had to be locked away when not in use like any other classified material, and when no longer needed it had to go in the shredder.

    The shredder there was pretty awesome, unsurprisingly. None of this "no more than 8 pages at once or I'll jam!" rubbish that I have to put up with from my home one. Of course, I bet it cost a lot more than my home one too.



  • @Scarlet_Manuka said:

    Not sure I quite follow you here. The red and blue cables weren't plugged into the same wall jack; we had one wall jack with a blue cable and one with a red cable. As far as I know there was no physical connection between the classified and unclassified networks, but I wasn't in the IT department there so I have no direct knowledge.

    So the network is airgapped, but the workstation isn't. Since we are talking about airgapping the entire network, moving the data to a VM on the airgapped network doesn't violate the airgap.

    @Scarlet_Manuka said:

    I doubt relying on the VM software to isolate the classified information from the unclassified host would have been considered acceptable

    The VM software has nothing to do with it. If the whole red network is airgapped, then the VM software doesn't bridge it, so it's irrelevant. If we are talking about accessing a secure VM from an insecure network, then the VM software wouldn't be what is responsible for security. You'd use a proper firewall with the proper amount of paranoia applied to the configuration. The NSA has offices all over the place that access their data center in Utah, so there is an acceptable way to do this.

    @Scarlet_Manuka said:

    In any case, do you typically leave all your work on your local hard drive? I prefer to work in saner environments where you use the local hard drive for temporary or personal stuff, and save all your work stuff on the servers.

    You brought up the hard-drive-in-the-safe thing and someone else brought up airgapping. I mentioned backup to point out that any sane backup strategy would preclude workstation-level airgapping. Now that you have confirmed that it was the entire network that was airgapped, not the workstation, every solution I proposed would work, as long as all of the components reside on the red network.



  • @LB_ said:

    The point isn't to save space, it's to preserve the history of commits. I only said to zip it because that's how sending directories works ;p

    I was just pointing out that trying to email a 1G file is likely to cause some ... problems.



  • @russ0519 said:

    Again, not following you. I assume they have SVN. Use git-svn to check out the svn repo into a local git repo, do commits etc. locally, and when you have a chance, plug into the magic jacks to sync up to the svn repo. You get the ability to do local commits whenever you want, and your code still lives in the central repo when you have a chance to sync up.

    That is only trivially better than the original problem statement of a server that they can only sync with every once in a while by plugging into the jack. The only benefit git gives you in your example is the ability to commit locally. This has some utility to the committer - but only from the standpoint of not having to deal with all of the commit messages at the end. It has no value to anyone other than the committer. Here are the important problems this does not solve:

    • The team still only shares code via the magic jack. This was the original main problem and you didn't solve it.
    • Changed code is still not backed up until you get a chance to sync up with the server.

    You seem to believe that because you are allowed to perform an action that involves the word commit (a local commit), it is equivalent to a proper commit that makes the code available in the central repository. git uses the term "push" for this, but it is equivalent to what pretty much every other VCS calls "commit", and it is what we really care about.
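
    To put the distinction in two lines (remote and branch names are just the usual defaults):

        git commit -am "Refactor widget handling"   # local commit: exists only on this laptop
        git push origin master                      # push: now the team (and the backups) can see it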

    TL;DR version:

    @russ0519 said:

    ability to do local commits

    Who gives a shit.

    @russ0519 said:

    it's still better then not having the code committed at all

    Nope.


  • ♿ (Parody)

    @Jaime said:

    Who gives a shit

    Lots of people who aren't you!



  • @Jaime said:

    The team still only shares code via the magic jack. This was the original main problem and you didn't solve it.

    They can still share code in other ways: patches, a local Git server, a shared Git server such as GitLab, or just a git instance sitting somewhere (a couple of these are sketched below). And that will also function as a backup.
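
    For instance (addresses and paths are made up):

        # Patches: export your local commits as files, move them however you can
        git format-patch origin/master   # writes 0001-*.patch files
        git am 0001-*.patch              # a teammate applies them to their clone

        # Or a throwaway read-only server on whatever LAN segment you do share:
        git daemon --base-path=/srv/git --export-all
        git clone git://192.168.1.10/myproject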



  • @Jaime said:

    So the network is airgapped, but the workstation isn't. Since we are talking about airgapping the entire network, moving the data to a VM on the airgapped network doesn't violate the airgap.

    Sorry, I don't follow this at all. Am I understanding you correctly that you're proposing, instead of a separate classified HDD for the workstation, to use the same HDD when connected to either network, and when connected to the classified network, to do all work via RDP or similar to a VM? In that case, how do you stop classified information leaking out over the RDP interface? (Indeed, the fact that you can see the classified info on your screen means that it has to leak out over the RDP interface. How do you make sure none of that stays on the workstation, for instance in the swapfile?)

    @Jaime said:

    If we are talking about accessing a secure VM from an insecure network, then the VM software wouldn't be what is responsible for security.

    No, in that case whatever you use to access the VM is responsible for security. How is that any better?



  • @boomzilla said:

    Lots of people who aren't you!

    Please tell me how local commits are a valid replacement for pushing to a centralized server? I understand that local commits allow a dev to put different comments on units of changes without access to the centralized repo and allow a dev to do a bunch of partial work and decide to push each when finished. But, none of that addresses the problem of sharing, protecting, and preserving code, which is what the original problem is.


  • ♿ (Parody)

    @Jaime said:

    Please tell me how local commits are a valid replacement for pushing to a centralized server?

    Why should I defend your straw man?


  • Discourse touched me in a no-no place

    @Jaime said:

    Please tell me how local commits are a valid replacement for pushing to a centralized server? I understand that local commits allow a dev to put different comments on units of changes without access to the centralized repo and allow a dev to do a bunch of partial work and decide to push each when finished. But, none of that addresses the problem of sharing, protecting, and preserving code, which is what the original problem is.

    The principle of pushing things to a server remains; you just don't require it to be there for every change. Local repositories represent an intermediate between your working directory and the shared server. Sometimes (e.g., if you have developers that also have to travel a fair bit) that's really useful.



  • @Scarlet_Manuka said:

    Am I understanding you correctly that you're proposing, instead of a separate classified HDD for the workstation, to use the same HDD when connected to either network, and when connected to the classified network, to do all work via RDP or similar to a VM?

    Nope. I'm proposing nothing more than accessing a VM that resides on the classified network from a computer on the classified network. The problem it solves is keeping sensitive data off the local hard drive so you don't have to go through the 1990s-era procedure of locking up the hard drive.

    I'm also suggesting that there is a high likelihood that there is an already-established protocol for accessing high-security networks remotely. If there weren't, the NSA's Utah data center would be useless. However, this is a completely separate suggestion - the above paragraph still stands as a benefit even if you reject the idea that remote access can be secured.



  • @dkf said:

    The principle of pushing things to a server remains

    Yes. And in the given solution (switch to git and commit locally), pushing to the server is no easier than it was without the solution. Therefore, the proposed solution doesn't make code sharing any better. If a proposed solution doesn't address the problem, it isn't a solution. That's all I'm saying. I'm not saying that there aren't side benefits to local commits; I'm saying that the solution simply doesn't address the problem as described.



  • @boomzilla said:

    Why should I defend your straw man?

    It's not my straw man, it's the original problem. You are simply saying that local commits solve a problem. Blow jobs are awesome, but they don't solve this problem either. That doesn't mean I'm against blow jobs.


  • Discourse touched me in a no-no place

    @Jaime said:

    Blow jobs are awesome, but they don't solve this problem either.

    You're giving them wrong then. ;-)


  • ♿ (Parody)

    @Jaime said:

    It's not my straw man, it's the original problem. You are simply saying that local commits solve a problem.

    Bullshit. People have been saying that local commits make a bad situation a little bit better.



  • @CoyneTheDup said:

    Why does everyone keep saying things like this? I don't know for sure about this case, but there are many situations where an air-gapped network is appropriate, even desirable.

    Like one place I worked at the back end of the 90s. It was a British antivirus vendor that shall remain nameless. They had three networks: red, yellow, and blue.

    The blue network was the main office network, and there was nothing hugely :wtf: about it, well, aside from the Lotus Notes server. The blue network had no internet access except for sending and receiving email.

    The red network had Internet access, and no machine was allowed to be on both red and blue. When I first joined the company, the red network was very, very limited in who had access. When I left 18 months later, it was merely limited.

    All ports on the yellow network were in the same room, and that room had strict access controls. In particular, all removable media (which meant floppies and writeable CDs in that era) that went in were only allowed out in a big bag that went straight to the shredders, even if you had kept it in your pocket the whole time. (Yes, I realise that there's a question of how anyone would know. Don't ask.)

    The yellow network room was where the virus analysts did their analysing, so all machines in there were suspect. They did show me a motherboard whose BIOS had been nuked by CIH, which was cool.



  • @Jaime said:

    You seem to believe that because you are allowed to perform an action that involves the word commit (a local commit), it is equivalent to a proper commit ... git uses the term "push" for this, but it is equivalent to what pretty much every other VCS calls "commit", and it is what we really care about.

    It depends on what specific requirement you're dealing with. Obviously, until the changes are pushed, nobody else can see them and they are not going to be backed up with the rest of the server, so in that sense a local commit is meaningless.

    On the other hand, a local commit (vs. holding all of your diffs and committing them all at once when connected to the server) will preserve a more detailed change-log. If you have a dozen semantically-distinct changes, then doing 12 local commits followed by a push will result in all 12 commits appearing on the server as separate changes. If you didn't locally commit, but simply committed everything at once when connected, then they'd all show up on the server as a single change. This would not be as useful, especially if automated testing reveals a problem, since all the changes would be bundled together.
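
    In other words (commit messages invented for illustration):

        git commit -am "Fix null check in parser"      # local commit 1
        git commit -am "Add retry logic to uploader"   # local commit 2
        # ...ten more semantically-distinct commits...
        git push origin master                         # one sync; each commit arrives separately
        git log --oneline origin/master                # server history shows 12 distinct changes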



  • @Jaime said:

    Nope. I'm proposing nothing more than accessing a VM that resides on the classified network from a computer on the classified network. The problem it solves is keeping sensitive data off the local hard drive

    OK, that makes a little more sense. I don't see how it solves that problem, though. At the very least you're getting screen data from the VM; how can you ensure none of that sticks in the pagefile? Also, you then have to do extra work to make sure the computer doesn't access anything on the network other than the designated VM, which adds another potential point of failure.

    @Jaime said:

    the 1990s-era procedure of locking up the hard drive

    As I mentioned earlier, I started there in 1999, so 1990s-era procedures were appropriate 😄

