Secure FTP access



  • My company has an FTP site for transferring large files to and from our customers (we deal in complex measurement data which gets big quite quickly, so email doesn't work).



    Write access to this FTP site is strictly controlled. No one is allowed access without being set up by IT. Access must be through one specific FTP application (which has a firewall exception set up for it) and through one specific account. We are not allowed to know the password.



    IT must attend our system (remotely or physically) and type in the account details for us, to be saved in the application for future use as and when we need it. Once that process is done we are "set up" to use the FTP site.



    Yes, you are correct, the password is stored in plain text and easily accessible.



  • @Algorythmics said:

    My company has an FTP site for transferring large files to and from our customers (we deal in complex measurement data which gets big quite quickly, so email doesn't work).



    Write access to this FTP site is strictly controlled. No one is allowed access without being set up by IT. Access must be through one specific FTP application (which has a firewall exception set up for it) and through one specific account. We are not allowed to know the password.



    IT must attend our system (remotely or physically) and type in the account details for us, to be saved in the application for future use as and when we need it. Once that process is done we are "set up" to use the FTP site.



    Yes, you are correct, the password is stored in plain text and easily accessible.

    And even if it wasn't, you could sniff it off the wire. Or set up a local netcat listener and redirect the ftp server's name to 127.0.0.1 in your hosts file.
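    (For the curious, the trick above needs nothing exotic. Here is a minimal sketch of such a credential-logging listener in Python; it assumes the hosts-file redirect to 127.0.0.1 is already in place, and the banner text is obviously made up. It only demonstrates why plaintext FTP logins offer no protection.)

    ```python
    # Minimal fake FTP server that logs the credentials a client sends.
    # Assumes the real server's hostname already resolves to 127.0.0.1
    # (e.g. via a hosts-file entry), so the client connects here instead.
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", 21))      # binding a port below 1024 may need admin rights
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"220 Totally the real FTP server\r\n")
            for raw in conn.makefile("rb"):
                cmd = raw.decode("ascii", "replace").strip()
                print("client sent:", cmd)              # USER and PASS arrive in clear text
                if cmd.upper().startswith("USER"):
                    conn.sendall(b"331 Password required\r\n")
                elif cmd.upper().startswith("PASS"):
                    conn.sendall(b"530 Login incorrect\r\n")   # got what we came for
                    break
    ```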



  • @DaveK said:

    And even if it wasn't, you could sniff it off the wire. Or set up a local netcat listener and redirect the ftp server's name to 127.0.0.1 in your hosts file.
    THAT'S HACKING! You must be fired immediately. For reasons! Reasons, yes, reasons. Known to IT management.

     



  • What a perversion!  That's so perverted that more information should be given.  Is this server accessible on the interwebs?  How much space does it have?  How fast of a connection? ....



  • @Algorythmics said:

    complex measurement data which gets big quite quickly, so email doesn't work
    Reminds me of a conversation I had with our IT person:

    Me:  What's the largest file I can send through our email system?

    IT:    I don't know.

    Me:  What do you mean . . . you don't know?

    IT:    Well . . . it depends .....

    Me:  It depends on what?

    IT:   Um . . . well . . . there are settings . . .

    Me:  OK.  And . . . .?

    IT:    Well . . . it depends .....




  • I'm assuming when they say "secure FTP" they don't actually mean "SFTP". Because that's actually secure.

    BTW, there are approximately 4434,324392 file-sharing websites now that let you securely share large files and are about a bazillion times easier to use than FTP. Maybe they should join the rest of us here in the 21st century.



  • Well, an SFTP client could still store credentials in plaintext. FileZilla does exactly this, by storing them in an XML file under %appdata%\FileZilla (sketched below). That's not the fault of the protocol, of course... The creators just don't seem to give much of a shit either, since their official response is "Just don't get a malware infection in the first place" -> https://forum.filezilla-project.org/viewtopic.php?f=2&t=30765

    SFTP (and even FTP) still has its place, especially as part of an automated process. It's pretty easy to script something out that downloads/uploads to an FTP server. For ad-hoc large file transfers, I agree there are better services.
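    To illustrate the first point: reading those saved credentials back out is roughly this much work. The element names and the base64 "encoding" attribute here are assumptions; the exact layout of sitemanager.xml varies between FileZilla versions.

    ```python
    # Rough sketch of reading saved FileZilla site credentials. Treat the
    # element names and the "encoding" attribute as assumptions, not gospel.
    import base64
    import os
    import xml.etree.ElementTree as ET

    path = os.path.expandvars(r"%APPDATA%\FileZilla\sitemanager.xml")
    for server in ET.parse(path).getroot().iter("Server"):
        host = server.findtext("Host")
        user = server.findtext("User")
        pass_el = server.find("Pass")
        if pass_el is None:
            continue
        pw = pass_el.text or ""
        if pass_el.get("encoding") == "base64":    # newer versions "obfuscate" like this
            pw = base64.b64decode(pw).decode("utf-8", "replace")
        print(host, user, pw)
    ```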



  • Ah, classic security by obscurity.



    I once worked on replacing an old application used to manage some control systems remotely. Remote access was password protected, of course, but the implementation was… interesting. Here is what happens when you connect to a remote system:

    • The application dials in (yes, the connection was made over dial-up modems) to the remote system (which runs Linux) and establishes a PPP connection
    • The application opens a telnet session to the remote computer, using a login and password common to every system, hard-coded in the application source code
    • It sends commands to the remote shell to retrieve the encrypted password file (using cat, if I remember correctly)
    • It compares the retrieved encrypted password with the one entered locally. If they match, the connection stays open and the application UI allows sending commands to the remote system.



      The password “encryption” was done by XORing each character of the password with a repeated 4-character pad. But it doesn’t really matter, I guess.
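    For anyone who hasn't seen it, that kind of "encryption" is its own inverse, which is roughly why it doesn't matter. A quick sketch (the pad value and password are made up):

    ```python
    # XOR each byte of the password with a repeating 4-character pad.
    # Encrypting and decrypting are the same operation.
    from itertools import cycle

    def xor_pad(data: bytes, pad: bytes) -> bytes:
        return bytes(b ^ p for b, p in zip(data, cycle(pad)))

    pad = b"XKCD"                        # hypothetical 4-character pad
    secret = xor_pad(b"hunter2", pad)
    print(secret)                        # looks like line noise
    print(xor_pad(secret, pad))          # b'hunter2' again, one more XOR undoes it
    ```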

  • Discourse touched me in a no-no place

    @blakeyrat said:

    I'm assuming when they say "secure FTP" they don't actually mean "SFTP". Because that's actually secure.
    They might mean FTPS instead. It's what you get when you apply SSL to FTP (I think), as opposed to SFTP, where you're implementing something vaguely FTP-like on top of SSH. SSL and SSH don't really have all that much in common (except that they both can support a secure channel with non-trivial authentication of the other party, and both use technically-interesting crypto in order to do it).

    I'd normally recommend SFTP over FTPS, provided the data being transferred isn't just simple public data. For simple public data, HTTP works quite well; it turns out to be faster than FTP since you don't have to screw around with creating multiple sockets (the overhead of in-band signalling is not a big problem in practice; thanks, HTTP/1.1). In fact, if someone says that genuine FTP is a requirement, please punch them in the mouth? Or at least tell them that they're wrong and should have stopped using such crappy shit at least 10 years ago.


  • Discourse touched me in a no-no place

    @VinDuv said:

    The password “encryption” was done by XORing each character of the password with a repeated 4-character pad. But it doesn’t really matter, I guess.
    I've seen a program (long ago) that did something similar, except it used a one-byte pad. It did have both low and high bits set, but was otherwise pretty damn simple, and there were some utter rookie mistakes involved, such as encoding the NUL terminator byte and putting another one after that. Too obvious! (The payload it was protecting was not nice; part of a suite of hack tools IIRC. We were pulling the thing apart by just disassembling it all and figuring it all out from first principles, and had the whole thing entirely dissected in around 30 minutes to the point where we could bypass all the protections it had and pretend to be the sort of user it provided “special services” for. I lost a lot of respect for script kiddies that day.)



  • @dkf said:

    @blakeyrat said:
    I'm assuming when they say "secure FTP" they don't actually mean "SFTP". Because that's actually secure.
    They might mean FTPS instead. It's what you get when you apply SSL to FTP (I think), as opposed to SFTP, where you're implementing something vaguely FTP-like on top of SSH. SSL and SSH don't really have all that much in common (except that they both can support a secure channel with non-trivial authentication of the other party, and both use technically-interesting crypto in order to do it).

    I'd normally recommend SFTP over FTPS, provided the data being transferred isn't just simple public data. For simple public data, HTTP works quite well; it turns out to be faster than FTP since you don't have to screw around with creating multiple sockets (the overhead of in-band signalling is not a big problem in practice; thanks, HTTP/1.1). In fact, if someone says that genuine FTP is a requirement, please punch them in the mouth? Or at least tell them that they're wrong and should have stopped using such crappy shit at least 10 years ago.


    Well FTP is better in at least one way. "FTP" fits into a char[4], but putting "HTTP" in a char[4] gives you heartbleed.



  • @dkf said:

    For simple public data, HTTP works quite well; it turns out to be faster than FTP since you don't have to screw around with creating multiple sockets (the overhead of in-band signalling is not a big problem in practice; thanks, HTTP/1.1).

    Or just use HTTPS.



  • @bighusker said:

    SFTP (and even FTP) still has its place, especially as part of an automated process. It's pretty easy to script something out that downloads/uploads to an FTP server.

    You lie. Scripting FTP is horrible. For example, there's no standard way to do file lists and it's all plain ASCII so you end up implementing different parsing algorithms for different server/OS combinations. HTTP would be about a billion times better.
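    To make the pain concrete, here is the sort of thing a client ends up doing just to read a directory listing. The sample lines are made up, and real servers offer even more variety than the two formats handled here.

    ```python
    # The same directory listed Unix-style and DOS/IIS-style. Nothing in the
    # protocol says which you'll get, so a "portable" client grows a parser
    # per format. Sample lines are illustrative only.
    import re

    unix_line = "-rw-r--r--   1 ftp  ftp   1048576 Apr 18 09:30 measurements.csv"
    dos_line  = "04-18-14  09:30AM           1048576 measurements.csv"

    def parse_listing_line(line):
        m = re.match(r"([\-dlrwx]{10})\s+\d+\s+\S+\s+\S+\s+(\d+)\s+\S+\s+\d+\s+\S+\s+(.+)", line)
        if m:   # Unix `ls -l` style
            return {"name": m.group(3), "size": int(m.group(2)), "dir": m.group(1).startswith("d")}
        m = re.match(r"(\d{2}-\d{2}-\d{2})\s+(\S+)\s+(<DIR>|\d+)\s+(.+)", line)
        if m:   # DOS/IIS style
            size = 0 if m.group(3) == "<DIR>" else int(m.group(3))
            return {"name": m.group(4), "size": size, "dir": m.group(3) == "<DIR>"}
        raise ValueError("yet another listing format: " + line)

    print(parse_listing_line(unix_line))
    print(parse_listing_line(dos_line))
    ```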



  • @Algorythmics said:

    Yes, you are correct, the password is stored in plain text and easily accessible.

    FileZilla?



  • @bighusker said:

    SFTP (and even FTP) still has its place, especially as part of an automated process.
    FTP has no place in the world anymore. It is literally unsecurable (anyone can hop onto the data connection and download your file, or upload theirs in place of yours). From an automation standpoint, it has no API. The FTP spec actually says that the client should display the results of dir to the user, and imposes no format on it. FTP clients have to know the format of every server in the wild in order to parse what the server sends.
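    There is a machine-readable alternative (MLSD, from the RFC 3659 extensions), but support is patchy, which is why clients still carry format-guessing parsers. A sketch using Python's ftplib against a placeholder server:

    ```python
    # LIST vs MLSD: the former is free-form text meant for humans, the latter
    # returns standardized per-entry facts, if the server supports it at all.
    from ftplib import FTP

    with FTP("ftp.example.com") as ftp:              # placeholder host
        ftp.login("anonymous", "guest@example.com")

        # LIST: whatever layout the server prefers; meant to be shown, not parsed.
        ftp.retrlines("LIST")

        # MLSD: standardized key=value facts per entry.
        for name, facts in ftp.mlsd():
            print(name, facts.get("type"), facts.get("size"))
    ```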


  • Discourse touched me in a no-no place

    @Jaime said:

    [FTP] is literally unsecurable (anyone can hop onto the data connection and download your file, or upload theirs in place of yours).
    Not true; FTPS supports encryption of the data channel and modern crypto is quite good enough to make sure that nobody else can understand the data stream. The other points you make are valid, but it's the issues with firewalls that make all variations of FTP suck.

    That and the fact that there always seems to be yet another luser beginner trying to automate it, rather than using one of the large number of alternatives that already do the job without any fuss. It seems to attract the terminally clueless like flypaper…



  • @blakeyrat said:

    I'm assuming when they say "secure FTP" they don't actually mean "SFTP". Because that's actually secure.

    BTW, there are approximately 4434,324392 file-sharing websites now that let you securely share large files and are about a bazillion times easier to use than FTP. Maybe they should join the rest of us here in the 21st century.

     

    It could also be FTPS (FTP with an AUTH command to encrypt the connection using SSL/TLS), but nothing in Algorythmics' post leads me to believe they use that either.  That, and FTPS, despite being built into most FTP servers, isn't all that widely used.

     



  •  Yes, SFTP and FTPS are two different animals. Important point to be noted by all.

     @blakeyrat said:

    I'm assuming when they say "secure FTP" they don't actually mean "SFTP". Because that's actually secure.

    BTW, there are approximately 4434,324392 file-sharing websites now that let you securely share large files and are about a bazillion times easier to use than FTP. Maybe they should join the rest of us here in the 21st century.

     

     



  • @dkf said:

    @Jaime said:
    [FTP] is literally unsecurable (anyone can hop onto the data connection and download your file, or upload theirs in place of yours).
    Not true; FTPS supports encryption of the data channel and modern crypto is quite good enough to make sure that nobody else can understand the data stream. The other points you make are valid, but it's the issues with firewalls that make all variations of FTP suck.

    Although FTPS can solve these problems, it doesn't work in practice because FTPS breaks stateful packet inspection on firewalls and is un-NAT-able. So you get a choice of reducing security by using a poorly secured client, or reducing security by turning off the most effective features of your edge protection devices. Either way, from a security standpoint, FTP is still a horrible idea. It's ten times worse when you consider that there are numerous alternatives.

    Also, most people that insist on FTP do so because "it's already there and they already know it". 95% of the time, the client that they're referring to doesn't support FTPS, and/or they don't know how to actually invoke it.



  • When I've scripted FTP processes out, I've always used a 3rd-party library that abstracted all that crap away from the developer.  ColdFusion had it built into the language.  SSIS has an "FTP Connection Manager" built in.  There are various .NET libraries out there that can handle FTP connections.  I never ran into any crazy issues, but I probably wasn't connecting to any obscure FTP servers either.  I certainly wasn't parsing raw ASCII command output. I'm sure it sucks if you're trying to write a new FTP client/API, but who the hell would want to re-invent the wheel like that?

    I will still argue that FTP has its place, as long as you don't care about securing the data you are transferring.  For one thing, there's a lot more scripting support for FTP servers than for SFTP (most of the .NET SFTP APIs I've dealt with are actually just wrappers around an executable).
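    For reference, the library-level scripting being described is about this involved in Python's standard ftplib; the host, credentials, and paths below are placeholders.

    ```python
    # Automated upload/download against an FTP server; the library hides the
    # sockets and protocol chatter from the script.
    from ftplib import FTP

    with FTP("ftp.example.com") as ftp:              # placeholder host and credentials
        ftp.login("reportbot", "s3cret")
        # push today's export up
        with open("measurements.csv", "rb") as f:
            ftp.storbinary("STOR incoming/measurements.csv", f)
        # pull the processed results back down
        with open("results.csv", "wb") as f:
            ftp.retrbinary("RETR outgoing/results.csv", f.write)
    ```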

     

     

     

     



  • But... there's much more support for scripting HTTP and it's a much more sane protocol. There are very few valid reasons to choose FTP over any alternative, and doing so should always be seen as a "I'm doing this because I have to, but I don't like it" choice.

    Just because "it worked for you" doesn't make it a good idea.



  • @Jaime said:

    But... there's much more support for scripting HTTP and it's a much more sane protocol. There are very few valid reasons to choose FTP over any alternative, and doing so should always be seen as a "I'm doing this because I have to, but I don't like it" choice.

    Just because "it worked for you" doesn't make it a good idea.

    The only reason I use FTP anymore is because some clients will literally only accept files that way. I tell them HTTP is literally faster, sleeker, better and safer and they don't care. "All of our partners have been giving us CSV reports compressed as .ar files, encrypted with single DES and delivered over non-secure FTP for twenty years, so we see no reason to change."


  • Discourse touched me in a no-no place

    @bighusker said:

    I will still argue that FTP has its place, as long as you don't care about securing the data you are transferring.
    As opposed to HTTP, which turns out to be faster and not a firewall-hating disaster?

    Yeah, HTTP is definitely faster. Measured it 10 years ago. What's more, parallel HTTP is faster than parallel FTP and doesn't require a custom HTTP server; the Range: header has been around and implemented for quite a while now…
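    A sketch of that parallel trick, assuming the server honours Range requests (most static-file servers do); the URL and chunk size are placeholders.

    ```python
    # Split the file into byte ranges and fetch them concurrently with
    # HTTP Range requests, then stitch the pieces back together.
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import Request, urlopen

    URL = "https://example.com/big-measurement-dump.zip"   # placeholder

    def fetch(span):
        start, end = span
        req = Request(URL, headers={"Range": f"bytes={start}-{end}"})
        with urlopen(req) as resp:                 # expect "206 Partial Content"
            return start, resp.read()

    size = int(urlopen(Request(URL, method="HEAD")).headers["Content-Length"])
    chunk = 4 * 1024 * 1024                        # 4 MiB per request
    spans = [(i, min(i + chunk, size) - 1) for i in range(0, size, chunk)]

    with ThreadPoolExecutor(max_workers=4) as pool, open("dump.zip", "wb") as out:
        for start, data in pool.map(fetch, spans):
            out.seek(start)
            out.write(data)
    ```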


  • Discourse touched me in a no-no place

    @morbiuswilters said:

    "All of our partners have been giving us CSV reports compressed as .ar files, encrypted with single DES and delivered over non-secure FTP for twenty years, so we see no reason to change."
    It's what they learned in medical school…



  • @dkf said:

    @morbiuswilters said:
    "All of our partners have been giving us CSV reports compressed as .ar files, encrypted with single DES and delivered over non-secure FTP for twenty years, so we see no reason to change."
    It's what they learned in medical school…

    Actually it's more likely that they need to talk to health insurers, as they tend to be even farther behind than the hospitals.



  • @Jaime said:

    But... there's much more support for scripting HTTP and it's a much more sane protocol. There are very few valid reasons to choose FTP over any alternative, and doing so should always be seen as a "I'm doing this because I have to, but I don't like it" choice.

    Just because "it worked for you" doesn't make it a good idea.

     

    Well, I work in an industry where FTP/SFTP is still the norm.   If I can get data transferred to/from a vendor securely and they are happy with it, then it's a good idea.  In the end, that's really what matters.  Our secure FTP server accomplishes that goal without me having to do extra work that I don't have time for.  I can dump data files out to a user's share and they can log in over SFTP to download it.  The only place I use a plain FTP server is to transfer access logs from a Linux web server down to our Windows SmarterStats server across a local network.  That's what SmarterStats supported out of the box, and security wasn't a concern for us.  It gets the job done, and that's all my boss cares about.

     

    Scripting out FTP really isn't that bad if you use the right tools.  Is it the best protocol?  No...but it is more universal than HTTP in my industry, and I haven't encountered major issues with it. 

    So you guys that use HTTP for file transfer...what are you using?  Custom web service methods to get/put data, or is it simpler than that?  How do you authenticate users?  I'm not asking to be sarcastic...I just don't have much experience doing it that way.


  • Discourse touched me in a no-no place

    @bighusker said:

    So you guys that use HTTP for file transfer...what are you using? Custom web service methods to get/put data, or is it simpler than that? How do you authenticate users? I'm not asking to be sarcastic...I just don't have much experience doing it that way.
    For download? Virtually any common HTTP server works, and browsers handle everything just fine; a gigabyte download isn't a big deal unless you've got a terrible network (and then it would be bad with FTP too). Pop it on HTTPS and stick HTTP basic auth on if it's confidential; they all support that, and there's oodles of help text on the web.

    For upload, it might be simplest to turn on WebDAV support (it's a common extension to HTTP) and you'll find that Windows supports it natively in Explorer so it isn't like you have to push new clients out or anything. (Again, HTTPS and HTTP basic auth provide entirely suitable security for this sort of thing.) I don't know if the mounts created that way work with programs other than Explorer; if they do, then you can get most of your code to substitute “write to remote host” with “write to special ‘local’ location” and that's something even the cruftiest code can do.

    If you want scriptable clients, curl and wget work nicely, but you might not need them as virtually every programming language has HTTP client code either supplied or as a common library. I know it for some languages, but I'd reckon on being able to find out definitively for any other in seconds with one web search.

    In short, when we say HTTP is better, we really mean it. It's not an idle elitist thing; we've measured it and found it better in every single way that matters.
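    To put some code behind that, here is roughly what it looks like from a script, using only Python's standard library. The URL and credentials are placeholders, and the upload half assumes the server has PUT/WebDAV enabled.

    ```python
    # HTTPS + basic auth for a download, and a WebDAV-style PUT for upload.
    import urllib.request

    BASE = "https://files.example.com/project"       # placeholder
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, BASE, "alice", "s3cret")  # placeholder credentials
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))

    # download
    with opener.open(f"{BASE}/results.zip") as resp, open("results.zip", "wb") as out:
        out.write(resp.read())

    # upload (WebDAV PUT) -- the server must allow PUT on this path
    with open("measurements.csv", "rb") as f:
        req = urllib.request.Request(f"{BASE}/measurements.csv", data=f.read(), method="PUT")
        with opener.open(req) as resp:
            print(resp.status)   # commonly 201 Created or 204 No Content
    ```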



  • @dkf said:

    @bighusker said:
    So you guys that use HTTP for file transfer...what are you using? Custom web service methods to get/put data, or is it simpler than that? How do you authenticate users? I'm not asking to be sarcastic...I just don't have much experience doing it that way.
    For download? Virtually any common HTTP server works, and browsers handle everything just fine; a gigabyte download isn't a big deal unless you've got a terrible network (and then it would be bad with FTP too). Pop it on HTTPS and stick HTTP basic auth on if it's confidential; they all support that, and there's oodles of help text on the web.

    For upload, it might be simplest to turn on WebDAV support (it's a common extension to HTTP) and you'll find that Windows supports it natively in Explorer so it isn't like you have to push new clients out or anything. (Again, HTTPS and HTTP basic auth provide entirely suitable security for this sort of thing.) I don't know if the mounts created that way work with programs other than Explorer; if they do, then you can get most of your code to substitute “write to remote host” with “write to special ‘local’ location” and that's something even the cruftiest code can do.

    If you want scriptable clients, curl and wget work nicely, but you might not need them as virtually every programming language has HTTP client code either supplied or as a common library. I know it for some languages, but I'd reckon on being able to find out definitively for any other in seconds with one web search.

    In short, when we say HTTP is better, we really mean it. It's not an idle elitist thing; we've measured it and found it better in every single way that matters.

    How about directory listings? HTTP is just as bad as FTP on that. It's not better, just not-worse.


  • Discourse touched me in a no-no place

    @Ben L. said:

    How about directory listings?
    If you're using WebDAV, it's much better. Otherwise… for some reason it tends to be not such a problem.



  • WebDAV can do directory listings with the PROPFIND method.

    However, once you have your content on a web server, it's trivial to write some server-side code to make directory listing unnecessary. Something as simple as providing a URL like http://www.company.com/project/thing/20140418 or http://www.company.com/project/download.php?date=20140418. Any college kid can write the five lines of code to implement that. A modern web paradigm would be to allow /project/thing to be a valid resource that returns a JSON object that lists what is available and /project/thing/resourcename to pull the content. An older style would be to have /project/thing?op=list get a listing as XML and /project/thing?op=get&name=resourcename fetch the content.
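    A PROPFIND listing is a single request; here is a sketch with a placeholder URL (Depth: 1 asks for the collection's immediate children).

    ```python
    # WebDAV directory listing via PROPFIND; the server replies with a
    # 207 Multi-Status document containing one <D:response> per resource.
    import urllib.request
    import xml.etree.ElementTree as ET

    req = urllib.request.Request(
        "https://files.example.com/project/thing/",   # placeholder collection URL
        method="PROPFIND",
        headers={"Depth": "1"},
    )
    with urllib.request.urlopen(req) as resp:
        tree = ET.fromstring(resp.read())

    # each href is the path of one resource in the collection
    for href in tree.iter("{DAV:}href"):
        print(href.text)
    ```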



  • @dkf said:

    @Ben L. said:
    How about directory listings?
    If you're using WebDAV, it's much better. Otherwise… for some reason it tends to be not such a problem.

    It's also not even much of an issue with plain-jane HTML index pages, which are the default for most web servers. Why? Because the page contains links. So if your platform has an HTML library with basic DOM functions (and which platform doesn't?) then you can easily grab the links to the files.
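    For example, scraping a default index page needs nothing but the standard library; the URL is a placeholder, and the sort-order query links most servers add are skipped.

    ```python
    # Collect the <a href> links from an HTML directory index page.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href and not href.startswith("?"):   # skip the sort-order links
                    self.links.append(href)

    base = "https://files.example.com/project/thing/"    # placeholder index URL
    parser = LinkCollector()
    with urlopen(base) as resp:
        parser.feed(resp.read().decode("utf-8", "replace"))
    for href in parser.links:
        print(urljoin(base, href))
    ```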



  • @Jaime said:

    A modern web paradigm would be to allow /project/thing to be a valid resource that returns a JSON object..

    Or if you're feeling saucy, you could return the content items in Link: HTTP headers.



  • @dkf said:

    HTTP, which turns out to be faster and not a firewall-hating disaster?
     

    Speaking of which, while from the user end of things maybe that's not true, form teh network or security adminstrator, all this shit flwoing over HTTP is a PITA. Not that I'm saying that's wrong or bad, but it's annoying to some.  



  • @mahlerrd said:

    teh
    @mahlerrd said:
    flwoing
    @mahlerrd said:
    mahlerrd

    Was your keyboard on fire while you wrote that post?



  • @mahlerrd said:

    @dkf said:

    HTTP, which turns out to be faster and not a firewall-hating disaster?
     

    Speaking of which, while from the user end of things maybe that's not true, form teh network or security adminstrator, all this shit flwoing over HTTP is a PITA. Not that I'm saying that's wrong or bad, but it's annoying to some.

    Well maybe if the net admins didn't suck so much, we wouldn't be shoehorning everything into HTTP just so we can get it to run. And maybe I'd also stop giving out router passwords and colo keycards to random people I meet on the street, did you think of that?

