Creating inefficiency for everyone



  • @blakeyrat said:

    Going something like 20 years and nobody's fixed the problem? Makes me angrier than usual.

    FTFY



  • @blakeyrat said:

    That's YOUR problem, as email admins, not the end user's. Stop griping and fix it.

    I hate excuses. See a problem, fix the problem. Going something like 20 years and nobody's fixed the problem? Makes me angry.

    I'd really like to hear your theory of how to send emails over the Internet that only involves 2 servers.



  • @Master Chief said:

    @blakeyrat said:
    That's YOUR problem, as email admins, not the end user's. Stop griping and fix it.

    I hate excuses. See a problem, fix the problem. Going something like 20 years and nobody's fixed the problem? Makes me angry.

    I'd really like to hear your theory of how to send emails over the Internet that only involves 2 servers.

    ... what the holy shit does only 2 servers have to do with anything?



  • @Master Chief said:

     You should not need to tell people not to send a 200 MB file to 90 people in the company.
    @Master Chief said:
    And you definitely don't need to be a programmer to understand that sending a 200 MB file to ONE person is retarded.
    Sending *anything* to 90 people often means that you are doing something wrong.  File size, however, is an entirely different issue. There's nothing the least bit retarded about needing to send a big file.  It isn't 1985 anymore, when the biggest file you'd ever see fit on a floppy disk.

    And so it still comes down to one question:  I have a really big file that I need to send to you.  Assuming that at least one of us is not technically proficient, how do I get that file to you?



  • @blakeyrat said:

    @tgape said:
    You care because chances are pretty good, when you hit send on that email, you just put it on several third-party servers.  The only times that doesn't happen are when you're sending it internally, or when you're sending it from a company that does its own mail and doesn't outsource spam filtering to a company that does the same.  A lot of VBCs are like that, but most companies are not.

    That's YOUR problem, as email admins, not the end user's. Stop griping and fix it.

    I hate excuses. See a problem, fix the problem. Going something like 20 years and nobody's fixed the problem? Makes me angry.

    It's not MY problem.  I'm no longer an email admin, and I work at a VBC that does not outsource any part of its email processing.  While I no longer run the mail systems, I'm familiar enough with our configs to say that our outbound email to all of our significant business partners (including customers, suppliers, and a few competitors with whom we have some joint venture projects) is configured to require TLS with cert verification.  This basically means those business partners can't be outsourcing their inbound email filtering (or, if they do, they need to set up an exception for us; there are a few that do that).

    It's not a problem with the design of email, per se, either.  The protocols allow it, but do not require it.  It's a problem with how various companies have chosen to do business.

    This also has a direct analogue in physical reality: when you put something in the mail - whether USPS, Royal Mail, UPS, FedEx, DHL, or some other package or message delivery service - it passes through many hands before it gets to the intended recipient.  If you put your message on a postcard - the equivalent of unencrypted email - they all get to read the message.  Users need to understand that about physical mail.  They should understand that about email also.

    @blakeyrat said:

    @tgape said:
    Because several anti-virus programs have a 200MB file size limit, and many anti-virus programs really slow down on larger files.

    That's YOUR problem, not the end user's. Your job is to insulate the end user from your problems.

    That's what YOU think.  When I was an email admin, my management made my job very clear, and that wasn't it.  My job was to ensure that standard business email traffic flowed as quickly as feasible, while ensuring as few spam messages as feasible reached the end users.  Standard business email was strictly defined to exclude any email messages over 30M, after encoding.  While I was not a part of that decision, I did hear the rationale.  The size was set based on the fact that we had a handful of people working on a project that required them to collaborate with business partners (as defined above) on some 15M files; 30M was seen as sufficient margin for growth.  Apart from those messages, fewer than 1% of messages over 20M were apparently legitimate business (for example, one guy apparently tried to email a 1G home DVD of vacation pictures and movies).
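
    If you're wondering why the limit was phrased "after encoding": MIME base64 inflates an attachment by roughly a third, plus line breaks. A back-of-the-envelope sketch (Python purely for illustration; the sizes match the limits above):

    ```python
    import math

    def encoded_size(raw_bytes):
        """Approximate size of a base64 MIME attachment, including CRLF line breaks."""
        b64 = math.ceil(raw_bytes / 3) * 4      # base64 emits 4 bytes for every 3 in
        return b64 + math.ceil(b64 / 76) * 2    # plus CRLF after each 76-character line

    for mb in (15, 22, 30):
        print("%d MB raw -> %.1f MB encoded" % (mb, encoded_size(mb * 2**20) / 2**20))
    ```

    So a 15M working file lands around 20M on the wire, and anything much past 22M raw blows through a 30M post-encoding cap.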

    Note that I didn't mention the malicious 42k messages, because that's not something a non-administrator user would encounter.  On the other hand, my non-technical aunt knew about the 200M file size limit, because some of her home movies are larger than that.  Admittedly, that does make her, in at least one regard, more advanced than many users.  Other non-technical people I know have commented on the fact that anti-virus programs spend a *lot* more time on larger files.  Still, thinking about it a bit more, that knowledge is limited to people who actually bothered to look at their anti-virus status display as it ran through a scan, *and* decided to check on the files that the scanner took over a minute to scan.

    However, I'm not arguing that people should be able to stay ignorant of what their system's doing, so I feel fine requiring that level of sophistication.  Our legal system apparently feels the same way: when someone sends a sensitive unencrypted email over the open internet and it results in some form of criminal data leak (personal information, HIPAA data, government secrets, etc.), the sender is generally held responsible, not the email administrators.



  • Type more.



  • @El_Heffe said:

    how do I get that file to you?
     

    I still think yousendit or wetransfer are viable options. There are several others but they all suck.  Wetransfer has ads, but they're actually quite tasteful.

    It's reasonable to ask, given that there exist organisations like the above that are willing to host any size file for a strictly limited time, why every email provider / ISP in the world doesn't also do this. Too many parties/servers involved might be a good reason. With a fileshare service, it's just you, the service, and the recipient. The end product is a copied file on the hard drives of parties 1 and 3. Anything in between vanishes in a week.

    Not so with email, where you can't make such assumptions: providers at any point in the transmission chain may keep the file indefinitely, and then you have to hope the recipient has the option "delete from server after downloading" turned on and "keep a copy of sent messages" turned off.



  • @PJH said:

    Or barf on certain 42KB files....
    42kB? That's huge!
    @tgape said:
    While I'm no longer an email admin, I'm familiar enough with our configs that I can say our outbound email to all of our significant business partners (including customers, suppliers, and a few competitors with whom we have some joint venture projects) is configured to require TLS with cert verification.
    Wow, you must have very few correspondents then - while quite a lot of servers do support SSL, most of them use useless self-signed certificates, which normally don't add anything to the security.

    Speaking of SSL and e-mail, I recently had a problem where inbound mail from a certain domain was rejected by the server with a simple "Access denied". This one took me a while to puzzle out - increasing logging verbosity did show me that something was off, but I couldn't quite pinpoint what. Turned out that the sending server was trying to deliver messages through the SSL port (465), which I had configured to only allow authenticated connections. AFAIK, 465 isn't even an official port assignment, so I really have no idea why that server tried to deliver through it (I found out that the server was Lotus Domino).
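
    For anyone else who hits this: the distinction is implicit TLS versus STARTTLS. On 465 the session is TLS from the first byte, and it's commonly treated as a submission port that requires authentication, while normal server-to-server delivery arrives as plain SMTP on port 25 and upgrades in-band. A minimal Python sketch of the two modes (hostnames and credentials are placeholders):

    ```python
    import smtplib
    import ssl

    context = ssl.create_default_context()

    def submit_implicit_tls(msg):
        # Port 465: TLS from the first byte ("smtps"). Usually a submission
        # port, so the server expects authentication; an MTA attempting plain
        # relay delivery here gets something like "Access denied".
        with smtplib.SMTP_SSL("mail.example.com", 465, context=context) as s:
            s.login("user", "secret")            # placeholder credentials
            s.send_message(msg)

    def deliver_starttls(msg):
        # Port 25: plain SMTP, upgraded to TLS in-band via STARTTLS.
        # This is how one MTA normally delivers to another.
        with smtplib.SMTP("mx.example.com", 25) as s:
            s.starttls(context=context)
            s.send_message(msg)
    ```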



  • @ender said:

    @PJH said:
    Or barf on certain 42KB files....
    42kB? That's huge!
    @tgape said:
    While I'm no longer an email admin, I'm familiar enough with our configs that I can say our outbound email to all of our significant business partners (including customers, suppliers, and a few competitors with whom we have some joint venture projects) is configured to require TLS with cert verification.
    Wow, you must have very few correspondents then - while quite a lot of servers do support SSL, most of them use useless self-signed certificates, which normally don't add anything to the security.

    If you are requiring the certs to either verify with a trusted CA cert, or exactly match one of the public keys you've gotten hand-delivered from the email administrator of the remote site (security conferences may be a good place to meet up for this activity), and you've restricted the trusted CA cert list to just the CAs for your correspondents, the security isn't that bad.
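
    For the curious, here's roughly what that verification amounts to in code. This is a sketch, not anyone's actual config: it pins the whole certificate fingerprint rather than just the public key (the simpler variant), and the hostname and digest are made up:

    ```python
    import hashlib
    import socket
    import ssl

    # Hypothetical pins: SHA-256 cert fingerprints hand-delivered by partner admins.
    PINNED_CERT_SHA256 = {
        "mail.partner.example": "d4c2f1...",    # placeholder digest
    }

    def peer_matches_pin(host, port=465):
        """Accept the peer only if it presents exactly the certificate we pinned."""
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False      # the pin, not a CA chain, is the trust anchor
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest() == PINNED_CERT_SHA256[host]
    ```

    The restricted-CA alternative is just ssl.create_default_context(cafile="partner-cas.pem") with a bundle that contains only your correspondents' CAs.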

    That having been said, understand that when a VBC that really wants to have secure email connections to its partners is finding companies with which to work, selection bias automatically improves the average security of the selected companies, as compared to the industry average.  Also, even mom and pop shops tend to be more interested in playing ball when the VBC's interested enough to offer its prospective suppliers VBC-signed certs for email purposes at highly discounted rates.  (Of course, VBC-signed certs may not be very exciting if VBC isn't a recognized CA - but, then, such a VBC quickly becomes at least an industry-recognized CA.  Also, more interested does not necessarily mean "interested".  "Very reluctantly willing to comply so as to double product sales" (remember, VBC courting mom and pop supplier here) *is* more interested than "not at all willing to give time of day to the thought of getting a CA-signed cert".)

    I would imagine that, for a small business trying the same thing, they may find it a bit like pulling healthy gorilla teeth with no tools or anesthetic.  (Not that it would necessarily be dangerous - without tools, a normal human would probably not ever even get to the point of angering the gorilla.)



  • Yeah, we have network monkeys at our place who live in a cupboard and allow people to use computers to do 'magic' things without having to think about it, because they have better things to think about. Sometimes the network monkeys emerge from their cupboard to remonstrate with someone for breaking the network, and we then have to explain to said monkeys that their job is to let the magic happen and not to start whining because, for example, our prototype hardware is broadcasting IP addresses over the network. One came marching down from his cupboard and demanded that I disconnect the hardware; I told him to go and tell the MD that the hardware project wouldn't be finished because IT didn't like me abusing the network. He scurried away and I never heard back from him.



  • @Dayglo_Gerbil said:

    Yeah, we have network monkeys at our place who live in a cupboard and allow people to use computers to do 'magic' things without having to think about it, because they have better things to think about. Sometimes the network monkeys emerge from their cupboard to remonstrate with someone for breaking the network, and we then have to explain to said monkeys that their job is to let the magic happen and not to start whining because, for example, our prototype hardware is broadcasting IP addresses over the network. One came marching down from his cupboard and demanded that I disconnect the hardware; I told him to go and tell the MD that the hardware project wouldn't be finished because IT didn't like me abusing the network. He scurried away and I never heard back from him.

    If you're a developer of network applications, then maybe you should listen to those "network monkeys" every once in a while. The shit we see come through that we're expected to deploy comes with requirements developed in, oh, the '80s. Case in point: welcome to layer 3 multicast and multicast routing. There's no fucking reason your nodes must be on the same VLAN. Modern networks do not have VLANs and subnets spanned to hell and gone across the data center or the enterprise. Use a multicast address for heartbeats and let a modern load-balancing system direct the traffic to the available node. That allows for a system that is geographically dispersed, that can track which nodes are up and which are not, and that doesn't impose any archaic, stupid-assed requirement that everything be in the same layer 2 space and on the same subnet.
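
    For the developers reading along, the heartbeat side of this really is tiny. A minimal sketch (the group address and port are arbitrary; multicast routing still has to be enabled between the subnets):

    ```python
    import socket
    import struct
    import time

    GROUP, PORT = "239.1.2.3", 5007    # arbitrary administratively-scoped group

    def send_heartbeats(node_id):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # TTL > 1 lets the datagrams cross routers, so listeners do not
        # need to share the sender's VLAN or subnet.
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)
        while True:
            sock.sendto(node_id.encode(), (GROUP, PORT))
            time.sleep(1)

    def listen_for_heartbeats():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        # Join the group; the kernel speaks IGMP, the routers do the rest.
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, addr = sock.recvfrom(1024)
            print("heartbeat from %s at %s" % (data.decode(), addr[0]))
    ```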


    And for $DEITY's sake, read and use DHCP as intended. It's a modern protocol too.


    Your "network monkeys" need to grow some balls. And I work in IT as a network engineer in the medical field. You try to bring some piece of shit to us that doesn't conform and we will reject it. If it doesn't meet our requirements, it doesn't go on the network. And yes, we've told many an MD this and said "Go find a different product", and it has the power of management behind it. That's one thing I love about where I work.



  • @nonpartisan said:

    If you're a developer of network applications, then maybe you should listen to those "network monkeys" every once in a while. The shit we see come through that we're expected to deploy comes with requirements developed in, oh, the '80s. Case in point: welcome to layer 3 multicast and multicast routing. There's no fucking reason your nodes must be on the same VLAN. Modern networks do not have VLANs and subnets spanned to hell and gone across the data center or the enterprise. Use a multicast address for heartbeats and let a modern load-balancing system direct the traffic to the available node. That allows for a system that is geographically dispersed, that can track which nodes are up and which are not, and that doesn't impose any archaic, stupid-assed requirement that everything be in the same layer 2 space and on the same subnet.


    And for $DEITY's sake, read and use DHCP as intended. It's a modern protocol too.


    Your "network monkeys" need to grow some balls. And I work in IT as a network engineer in the medical field. You try to bring some piece of shit to us that doesn't conform and we will reject it. If it doesn't meet our requirements, it doesn't go on the network. And yes, we've told many an MD this and said "Go find a different product", and it has the power of management behind it. That's one thing I love about where I work.

    If only all (most, many, more than a tiny percentage) of environments were like that. Congratulations, you are in a great place.



  • Your "network monkeys" need to grow some balls. And I work in IT as a network engineer in the medical field. You try to bring some piece of shit to us that doesn't conform and we will reject it. If it doesn't meet our requirements, it doesn't go on the network. And yes, we've told many an MD this and said "Go find a different product", and it has the power of management behind it. That's one thing I love about where I work.


    This is precisely why we don't listen to the network monkeys: we don't allow them to hold the company to ransom. If, in the process of developing our hardware, we break the network, then it's tough shit; the monkeys just have to fix it again. The company makes its money through developing hardware, not through keeping IT happy. The MD of our company would be failing in his duties if he closed down any hardware development which did strange things to the network.



  • @Dayglo_Gerbil said:

    Your "network monkeys" need to grow some balls. And I work in IT as a network engineer in the medical field. You try to bring some piece of shit to us that doesn't conform and we will reject it. If it doesn't meet our requirements, it doesn't go on the network. And yes, we've told many an MD this and said "Go find a different product", and it has the power of management behind it. That's one thing I love about where I work. *********************************************************************************************************************

    This is precisely why we don't listen to the network monkeys: we don't allow them to hold the company to ransom. If, in the process of developing our hardware, we break the network, then it's tough shit; the monkeys just have to fix it again. The company makes its money through developing hardware, not through keeping IT happy. The MD of our company would be failing in his duties if he closed down any hardware development which did strange things to the network.

     

    That is such a fucked-up attitude I don't even know where to begin.  It has nothing to do with holding a company ransom and everything to do with maintaining an efficient, well-designed, well-structured network.  And your attitude is what causes us problems.

    We're not asking for more than a product that is well-designed that uses modern networking principles.  There is so little additional effort required to make an application use layer 3 multicast that there's no excuse not to.  That little change allows for cluster configurations to have different nodes in different locations, whether within the same data center or in a completely different data center miles away.  We're building a new data center and wrestling with the problems of "this node has to be on this same VLAN!!"  Bullshit; if the system was designed well, it would not have to be on the same VLAN.  And it would be easy to set up redundant clusters in remote locations.

    We're in the midst of a campus-wide IP change from public to private addressing.  In the last couple of months, we stumbled onto some devices that do classful routing.  The previous addressing was in a class B range; the new addressing in the 10-dot range is class A.  As a result, these devices can no longer communicate with the servers in the class B allocation.  We're making an allowance for now, because these devices are 15+ years old.  But when they started purchasing new hardware, we said that the new equipment must do DHCP and must do classless routing.  That's not asking for anything besides using modern network capabilities.
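
    For anyone who hasn't run into classful gear: such a device derives the netmask from the first octet of the address instead of honoring a configured mask, so once it holds a 10-dot address it assumes the whole of 10.0.0.0/8 behaves as one network, and subnetting and routing decisions go wrong from there. A sketch of the rule (Python's ipaddress module purely for illustration):

    ```python
    import ipaddress

    def classful_network(addr):
        """The network a pre-CIDR, classful device assumes for an address."""
        first_octet = int(addr.split(".")[0])
        if first_octet < 128:
            prefix = 8       # class A
        elif first_octet < 192:
            prefix = 16      # class B
        else:
            prefix = 24      # class C
        return ipaddress.ip_network("%s/%d" % (addr, prefix), strict=False)

    print(classful_network("10.20.30.40"))   # 10.0.0.0/8 - the entire 10-net
    print(classful_network("172.16.5.9"))    # 172.16.0.0/16 - old class B range
    ```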

    I'd be interested in knowing what your company is . . . so we can make sure to avoid it in the future. This is the twenty-first century, so start using the network like it is one.



  • @TheCPUWizard said:

    If only all (most, many, more than a tiny percentage) of environments were like that. Congratulations, you are in a great place.
     

    Yeah, I know it, and I'm extremely grateful.

    There are times when there are no alternative products, in which case we do have to do the best we can.  But when there are alternatives, we insist on them whenever practical.  Earlier this year one of our network engineers (who also specializes in security) rejected a camera that a project wanted to use (I think the manufacturer didn't support DHCP; they were all static IP assignments).  So we worked with them and came up with a different network camera that met the project's needs.

    We're not trying to hold anyone hostage (as was the accusation above).  We're trying to make sure that our network is hospital-grade.  "Hospital grade", in my mind, is stable, secure, reliable, and redundant.  Stable in that the overall design doesn't change.  Secure in that it is resistant to hacking (yes, I know, nothing is 100%, but we do as good a job as we can) and doesn't have malware dripping out of every network port, waiting to attack a new machine.  Reliable in that someone can connect to the network and expect to be able to get where they need to go.  Redundant in that we have backup power infrastructure (multiple circuits, UPSes, transfer switches) and, in the event an access switch stack completely goes down in the closet, there's a second switch stack that still has accessible devices on it.

    In my estimation, we do a good job.  And we have really good support from management for what we want to do.  They recognize that we're here to support the patients.   And in truth, coming from a background in emergency services, that's my main focus too -- I don't want any medical personnel to be inconvenienced, while handling a patient, by not having a working network connection wherever and whenever they need it.  Too many vendors need to grow up to the capabilities of the modern network and understand how much better their products could work on a modern network . . . if they would just design them to do so.



  • @Dayglo_Gerbil said:

    This is precisely why we don't listen to the network monkeys: we don't allow them to hold the company to ransom. If, in the process of developing our hardware, we break the network, then it's tough shit; the monkeys just have to fix it again. The company makes its money through developing hardware, not through keeping IT happy. The MD of our company would be failing in his duties if he closed down any hardware development which did strange things to the network.
     

    Development should have a separate network (or networks) from the rest of the company. That way "IT" doesn't have to do anything; if you break it, you fix it yourself. Then run some tests with "final" prototypes on the main network - if those break the network, then you have failed in your duties to develop hardware (since it would break your customers' networks and they won't buy anything from you in the future).

    It's not about "us" and "them". IT needs to make sure services are available for other areas of the company and one department can't hold them to ransom.

