FTP is just as good



  • I'm perusing this list of off-site images referenced by a website when I notice a rather strange URL:

    ftp://it:fei3Xai4@ftp.handvask.da/webcamrodby.jpg

    The FTP account works, I checked. There are some PDF invoices in there, some Windows binaries with accompanying serials.txt, and some SQL and Access DB dumps amongst other dross. And there is a webcam that regularly writes a snapshot into the root dir. This setup has apparently been running for years!

    I don't know whether the person was 😃 unaware of the implications of sharing this URL, or whether it was a case of 🕶 oh nobody will know.



  • Some years back, when Dropbox and even file sharing websites like Rapidshare were not around, I had given my brother an account on my FTP server.

    He then used those credentials at a friend's home once and had obviously clicked "Save the credentials" somewhere - because one week later I noticed access attempts from Russia and other assorted countries trying to upload warez to my server.

    Unhappily for them, I had given my brother only a 30 MB quota. :)

    That was also the time when I only had a 2 GB downstream limit per month, so I actually had to throttle my FTP server's uploads because the upstream traffic also ate a small slice of my downstream. ;)


  • Winner of the 2016 Presidential Election

    Um… I hope you made sure your account here cannot be easily linked to your real-life identity. Depending on where you live, you might have just committed a serious crime.



  • I checked the pages that reference this resource and they do indeed display a webcam image. So browsers just load the image, no questions asked. The only reason this has not been exploited by automated bots must be that the URL is not a plain src in an <img>. Rather, it's set as the background of a figure:

    <figure class="webcam" style="background-image: url('ftp://it:fei3Xai4@ftp.handvask.da/webcamrodby.jpg'); background-size: cover;"></figure>
    

    The pages that reference this see about forty visitors daily; they must all connect via FTP to download the image. If I ever need plausible deniability, I will use this technique to smear the credentials around.



  • Logging into someone's FTP account using info they shared on a webpage might be a serious crime. I dunno. Not that anybody is going to bother.

    If push came to shove, I'd claim that I was working on this website as part of my job and that verifying links is part of that job. Which is all true. Whether this justifies browsing the rest of the account contents is questionable. Apart from curiosity, I also wanted to make sure this account was not tied to any of our services.

    I don't even know whether I want to flag this. I'll write a mail to my boss, I guess; he will forward it to somebody who knows who did this. They will not see the issue and the world will still be unsafe. I could just as well wait till we move to SSL and watch them scramble to provide an SSL link for this file 😄



  • @Rhywden said:

    Unhappily for them, I had given my brother only a 30 MB quota. 😄

    Yes, that was a sane choice. I once got a free copy of "Of Mice and Men 1992" because some guys discovered our company FTP server allowed anonymous upload and download. The server sat behind a 64 kbit/s link along with the rest of our office. Nobody was happy about the situation. Not the leechers outside, not the programmers in the office.



  • @gleemonk said:

    I don't know whether the person was 😃 unaware of the implications of sharing this URL, or whether it was a case of 🕶 oh nobody will know.

    They are unaware. The mega-huge company I used to work at banned all use of plain FTP for security reasons in 2010. It cost them great pain and expense to find and implement alternatives, but they did it. The FTP protocol cannot be reasonably secured. There are secure alternatives to it (SFTP, FTPS), but no secure versions of it.

    If you are trying to sneak something under the radar, installing an FTP server and opening port 21 isn't exactly stealthy.



  • @Jaime said:

    They are unaware. The mega-huge company I used to work at banned all use of plain FTP for security reasons in 2010.

    Yeah FTP should have been phased out long ago. The only reason it's still there is sheer inertia.

    What surprised me is that it's possible to tell a browser to load resources over FTP. Makes sense technically because browsers have to support FTP for downloads anyway. Conceptually it's fucking stupid.



  • FTP is a piece of shit.



  • That's what happens when you use a protocol designed for NCP networks.



  • @anonymous234 said:

    FTP is a piece of shit.

    When I was a student, we had to use dedicated FTP software to download porn images listed in Gopherspace. Actually I don't think this reverie contradicts your original statement in any way.


  • Discourse touched me in a no-no place

    @Jaime said:

    The FTP protocol cannot be reasonably secured. There are secure alternatives to it (SFTP, FTPS), but no secure versions of it.

    FTPS is the secure version of FTP. The reason it isn't used widely is because some website authoring software platforms don't grok anything other than plain old FTP for uploading the results.

    Friends don't let friends enable upload via FTP.



  • ...unless you only open port 21 for trusted IP addresses?


  • Discourse touched me in a no-no place

    @Arantor said:

    unless you only open port 21 for trusted IP addresses?

    Still not good. FTP is also awkward from the way it does a complicated dance with the data socket, which complicates your firewall configuration. And it's not actually as fast as either HTTP or SFTP once you're moving large amounts of data… and the fact that HTTP is faster surprised me when I found this out back around 2005.



  • Well, yes, you also open a bunch of passive ports and only open those ports for that IP too ;)



  • At work we open 21, 990 and 5500-5525 (passive range) for our office IP. Can't say that's a complicated rule.

    As for performance, do you have any benchmarks showing HTTP is faster? FTP has always seemed faster to me, especially when you have a lot of concurrent connections open in FileZilla.


  • Discourse touched me in a no-no place

    @ancarda said:

    At work we open 21, 990 and 5500-5525 (passive range) for our office IP. Can't say that's a complicated rule.

    You open ports that don't have something already bound to them? :doing_it_wrong:

    @ancarda said:

    As for performance, do you have any benchmarks showing HTTP is faster?

    Well, we found that it didn't matter in this project (from quite a few years ago now). It helps if you use parallel HTTP transfers; HTTP/1.1 can support those by default, though most browser makers have never heard of the concept and you're dependent on the server supporting the Range header.
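
    For the curious, here's a minimal sketch of the parallel-range idea in Python, assuming a server that reports Content-Length and honours the Range header; the URL and part count are made up.

    # Sketch: download one file as several parallel HTTP range requests.
    # Assumes Range support on the server; URL and part count are hypothetical.
    import concurrent.futures
    import urllib.request

    URL = "http://example.com/big.bin"   # hypothetical
    PARTS = 4

    def fetch_range(start, end):
        req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
        with urllib.request.urlopen(req) as resp:
            return start, resp.read()

    # Find the total size first, then split it into PARTS byte ranges.
    head = urllib.request.Request(URL, method="HEAD")
    size = int(urllib.request.urlopen(head).headers["Content-Length"])
    step = size // PARTS
    ranges = [(i * step, size - 1 if i == PARTS - 1 else (i + 1) * step - 1)
              for i in range(PARTS)]

    # Fetch the ranges in parallel and stitch them back together in order.
    with concurrent.futures.ThreadPoolExecutor(max_workers=PARTS) as pool:
        chunks = dict(pool.map(lambda r: fetch_range(*r), ranges))
    with open("big.bin", "wb") as out:
        for start, _ in sorted(ranges):
            out.write(chunks[start])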



  • Remember, @ancarda works there with me.



  • @dkf said:

    And it's not actually as fast as either HTTP or SFTP once you're moving large amounts of data…

    [citation needed]

    Because FTP uses a separate socket for data transfers, it should be at least as efficient as any other protocol for large file transfers. The server side can open up the file to be transferred and then use sendfile() or the Windows equivalent to tell the kernel to transfer the data directly from the open file to the socket (see the sketch below). HTTP can do this too, but only after sending the HTTP response headers.

    For transferring many small files, I agree that FTP is much less efficient because it opens and closes the data socket for each file transfer, which neither SFTP nor HTTP with keep-alive needs to do.
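
    A toy illustration of that zero-copy path, assuming a Unix-ish system; the path and port are made up, and error handling is omitted. Real servers do the same thing with rather more ceremony.

    # Sketch: hand an open file to the kernel and let it feed the socket.
    import socket

    PATH = "/srv/files/big.bin"   # hypothetical file

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 2121))   # arbitrary port for the sketch
    srv.listen(1)
    conn, _ = srv.accept()

    with open(PATH, "rb") as f:
        # For HTTP you would write the response headers here first; for an FTP
        # data connection there is nothing to send but the file's bytes.
        conn.sendfile(f)          # delegates to os.sendfile() where the OS supports it
    conn.close()
    srv.close()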



  • @quijibo said:

    large file transfers

    That's the key. If you're transferring a single file, you may as well just be using netcat. If you're transferring thousands of files, you're making at least one connection for each one.


  • Discourse touched me in a no-no place

    @quijibo said:

    Because FTP uses a separate socket for data transfers, it should be at least as efficient as any other protocol for large file transfers.

    It really depends on what your bottlenecks are. The local OS at either side is very rarely the bottleneck. If you're going over the open internet, that will be the bottleneck unless you're connecting via a truly awful link.

    There are several key metrics involved: bandwidth, which is how much data you can push down the connection per second (the usual quoted figure), latency, which is the time to round-trip a packet (and which matters for connection establishment), and jitter, which is the amount of variability in the other metrics. Jitter is a bugger, as it can result in connections appearing to hang while that one packet that got delayed catches up or gets resent.

    If you send a file over FTP, you've first got the establishment of the control connection — quite a few round-trips, so plenty of latency — and all the business to do with logging in and putting the upload in binary mode and setting up the data connection and that's more round trips and more latency. Then you shovel the bytes over the data socket which is mostly about bandwidth though with some jitter involved too. Then you're done.

    Over HTTP (which has better support for both encryption and compression) you can instead just open a socket and immediately start throwing the header — it's only a packet or two of overhead — then the body down it. If you fuck up, which is not the normal case, you will be told asynchronously to the data transfer. This avoids a lot of the latency-related overhead (though obviously not all).

    Once we move to parallel transfers (shipping different bits of a file down different sockets, to get higher bandwidth utilisation) you can avoid having to repeat the overhead of the FTP control connection so much, bringing things closer to HTTP (when that's uncompressed ;)). It's not the normal configuration though, and can make network administrators and other users a bit grumpy as you're trying to stomp on the usual stochastic Fair Share network rules.

    Re encryption and compression, the usual rule is that encryption adds overhead due to the need to make additional round trips when negotiating the session key, and compression adds no appreciable overhead as that's done in parts that are not bottlenecked. Since compression might well reduce the amount of data being transferred (and encryption includes some compression to avoid some classes of crypto attacks) the effect can even look like negative overhead.

    HTTP usually beats FTP because it is an asynchronous single-socket protocol. It's also a heck of a lot easier to handle the administration of at the firewall level, since the server side will always be on a port that you know.
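
    To make the round-trip difference concrete, a small sketch; the host, credentials and file names are hypothetical. ftplib prints every control-channel exchange, while the HTTP fetch is a single request on a single socket.

    import ftplib
    import urllib.request

    # FTP: greeting, USER, PASS, TYPE, PASV and RETR each cost a round trip
    # before a single payload byte moves.
    ftp = ftplib.FTP("ftp.example.com")          # hypothetical host
    ftp.set_debuglevel(1)                        # show the chatty control dialogue
    ftp.login("anonymous", "guest@")
    with open("big_ftp.bin", "wb") as out:
        ftp.retrbinary("RETR big.bin", out.write)
    ftp.quit()

    # HTTP: one request, then the body streams back on the same socket.
    with urllib.request.urlopen("http://example.com/big.bin") as resp, \
            open("big_http.bin", "wb") as out:
        out.write(resp.read())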



  • @dkf said:

    It really depends on what your bottlenecks are. The local OS at either side is very rarely the bottleneck. If you're going over the open internet, that will be the bottleneck unless you're connecting via a truly awful link.

    There are several key metrics involved: bandwidth, which is how much data you can push down the connection per second (the usual quoted figure), latency, which is the time to round-trip a packet (and which matters for connection establishment), and jitter, which is the amount of variability in the other metrics. Jitter is a bugger, as it can result in connections appearing to hang while that one packet that got delayed catches up or gets resent.

    And all of those affect HTTP and SFTP just as they do FTP, so they don't make HTTP or SFTP more efficient for large file transfers.

    @dkf said:

    Over HTTP (which has better support for both encryption and compression) you can instead just open a socket and immediately start throwing the header — it's only a packet or two of overhead — then the body down it. If you fuck up, which is not the normal case, you will be told asynchronously to the data transfer. This avoids a lot of the latency-related overhead (though obviously not all).

    But the setup time for a large file transfer (hundreds of MB or GB) is negligible compared to all of the bits that you are shoveling down the connection. Yes, FTP is slower to log in and start the file transfer because of the round-trip time for commands, but that is such a small amount of time to wait compared to the data transfer itself.

    Furthermore, these days HTTP is not just "send a header and then send the body down", as you simplified it to. Chunked transfers have control information inline with the data, so now you have to carefully process all of that. SFTP has to multiplex its data channels inside the one TCP/IP socket, also adding overhead to every chunk of data transferred.

    @dkf said:

    It's not the normal configuration though, and can make network administrators and other users a bit grumpy as you're trying to stomp on the usual stochastic Fair Share network rules.

    I thought we were talking about protocol efficiency, not whether other users will complain about bandwidth utilization.

    @dkf said:

    Re encryption and compression, the usual rule is that encryption adds overhead due to the need to make additional round trips when negotiating the session key, and compression adds no appreciable overhead as that's done in parts that are not bottlenecked. Since compression might well reduce the amount of data being transferred (and encryption includes some compression to avoid some classes of crypto attacks) the effect can even look like negative overhead.

    Your point about compression in HTTP and SFTP is valid. However, one could argue that we often transfer already-compressed files (movies, images, .zip or gzipped archives for software, etc.). So is the protocol-level compression able to compress that data further? Probably yes, but not by much. Compression in HTTP is very good for HTML, CSS, JS, etc. Again, I was responding to transferring a large file, not many small HTML files. HTTP wins for small files.

    @dkf said:

    HTTP usually beats FTP because it is an asynchronous single-socket protocol.

    The asynchronicity of the protocol doesn't make it more efficient when sending one large single file as a stream. You still need all of the right bytes in the right order to be transferring something useful. TCP/IP handles the streaming in HTTP, SFTP, and FTP equally. I'm asking: what extra crap do these protocols bolt on to the basic data of the file to be transferred that is adding overhead?

    With FTP, the answer is (after the small overhead of setting up the transfer): nothing

    With "plain" HTTP (after sending the headers): nothing, but possibly compression adding a benefit

    With HTTP and chunked transfers: the sending of chunk headers and parsing of the chunks adds overhead

    With SFTP: the required encryption and channel multiplexing built into the SSH protocol add the most overhead compared to the above.

    @dkf said:

    It's also a heck of a lot easier to handle the administration of at the firewall level, since the server side will always be on a port that you know.

    Again, this is about efficiency, not how much the network admin likes the protocol.

    I was responding to your statement that FTP is "not actually as fast" as HTTP and SFTP with large amounts of data (which I interpreted to mean one large file). With SFTP that claim is clearly false. HTTP may be more efficient, depending mostly on the compression, but otherwise it is not so clear-cut as to justify a blanket statement that HTTP is more efficient for a large file transfer.



  • @quijibo said:

    TCP/IP handles the streaming in HTTP, SFTP, and FTP equally. I'm asking: what extra crap do these protocols bolt on to the basic data of the file to be transferred that is adding overhead?

    In stream mode, where FTP doesn't add a crap-ton of formatting to the data, FTP doesn't have a way to signal end-of-file and therefore requires a new data connection be used for each file. HTTP has content-length as a base part of the protocol, so it can reuse the connection reliably.

    In block mode, FTP has a lot of overhead to support restarts of failed transfers. It's more overhead than you would guess, since the protocol was invented in the 60's and standardized in the early 80's. All protocol-level messaging was designed to be splatted on the screen to the user, so all of the messages are very wordy.
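
    A minimal sketch of the framing point, with a hypothetical host: because HTTP tells the client exactly where each body ends, several files can come down one connection.

    import http.client

    conn = http.client.HTTPConnection("example.com")   # hypothetical host
    for path in ("/a.bin", "/b.bin"):
        conn.request("GET", path)
        resp = conn.getresponse()
        data = resp.read()   # reads exactly the framed body, leaving the socket
                             # clean for the next request on the same connection
        print(path, len(data), "bytes")
    conn.close()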



  • @Jaime said:

    In stream mode, where FTP doesn't add a crap-ton of formatting to the data, FTP doesn't have a way to signal end-of-file and therefore requires a new data connection be used for each file. HTTP has content-length as a base part of the protocol, so it can reuse the connection reliably.

    Okay, but for a large file the overhead of closing and opening a connection is rather small. HTTP is slightly better for many small files because it can reuse the connection. My response started on the premise of a large file transfer.

    @Jaime said:

    In block mode, FTP has a lot of overhead to support restarts of failed transfers.

    In my experience, block mode was not used very often. Stream mode supports resuming very easily: the client requests the entire file, and if the transfer fails early, it then requests the remainder of the file (a sketch of this follows below). This is analogous to HTTP chunked transfer encoding versus using content-length to transfer all at once. So both protocols have roughly the same overhead, since each offers both modes.

    @Jaime said:

    It's more overhead than you would guess, since the protocol was invented in the 60's and standardized in the early 80's.

    I don't need to guess. I have read the RFC and written an FTP client, likely while some readers here were in diapers. Also, according to wikipedia, it was invented in 1971.

    @Jaime said:

    All protocol-level messaging was designed to be splatted on the screen to the user,

    No, it was designed to be easy to implement on a variety of systems with very little overhead. Clients tended to display the protocol to the user, but that isn't necessary. A web browser could spit out the HTTP response headers too, since they are also ASCII.

    @Jaime said:

    so all of the messages are very wordy.

    Not compared to HTTP when including the many headers that are sent by most clients and servers. The commands are a ~4 letter code followed by an argument and the responses are 3-digit codes with a typically short English string afterwards. I would not call that wordy.

    Look, I wouldn't use FTP today for any serious public service because better alternatives exist, but just because a protocol is old does not mean it is slow or inefficient. Back when I was writing that FTP client (as part of a larger program), no one would use SFTP over a 14.4k modem (or heaven forbid, 2400 baud) to transfer large files. HTTP was okay for downloading files too, but large file transfers were done primarily with FTP. This was exactly because FTP is quite efficient at transferring files (as is HTTP), other than the initial setup latency of sending a few commands and waiting for responses.
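
    A sketch of the resume mechanism described above; the host, credentials and file name are hypothetical, and the two halves are alternatives rather than a pipeline. FTP's REST command and an HTTP ranged GET both mean "start sending from this offset".

    import os
    import ftplib
    import urllib.request

    LOCAL = "big.bin"
    offset = os.path.getsize(LOCAL) if os.path.exists(LOCAL) else 0

    # FTP: REST <offset> followed by RETR; ftplib exposes it as rest=.
    ftp = ftplib.FTP("ftp.example.com")                 # hypothetical host
    ftp.login("anonymous", "guest@")
    with open(LOCAL, "ab") as out:
        ftp.retrbinary("RETR big.bin", out.write, rest=offset)
    ftp.quit()

    # HTTP equivalent: a ranged GET for the remainder of the file.
    req = urllib.request.Request("http://example.com/big.bin",   # hypothetical
                                 headers={"Range": f"bytes={offset}-"})
    with urllib.request.urlopen(req) as resp, open(LOCAL, "ab") as out:
        out.write(resp.read())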


  • Discourse touched me in a no-no place

    @quijibo said:

    I'm asking: what extra crap do these protocols bolt on to the basic data of the file to be transferred that is adding overhead?

    All I'm going to say is that for moving large files, we measured that HTTP was faster than FTP. We measured it between relatively open systems across the internet, and also with tight firewall traversal (with things routed via a DMZ machine, where there was a single process doing the proxying stuff). We also compared parallel transfers, which HTTP also won. We were using video files, which weren't very compressible. The people in charge of this work were thorough. 😄

    What I'm not going to do is to tell you why these differences were measured. (OK, they were measured because the project wanted to know whether the cunning stuff we were doing with the DMZ had a lot of overhead; our finding there was “within acceptable parameters, and OMG! doing it with HTTP was simpler!” But I didn't mean that sort of why here.) Your armchair speculation will just have to live with the fact that sometimes reality is a bit uncooperative. So go and measure it and instrument it yourself. ;-)

    @quijibo said:

    The asynchronicity of the protocol doesn't make it more efficient when sending one large single file as a stream.

    Well, it always does, but the size of the effect ought to tend towards negligible as the file size increases, since the dominating factor ought to be the actual movement of the data (which should be bandwidth-governed). Yet for all that, it's actually pretty common now for latency to be really large; connection establishment is horribly costly. I suspect the real cause is the amount of buffering introduced at various routers along the way; they don't decrease bandwidth all that much, but they do tend to hold up individual packets (according to some complicated stochastic model) and that stuffs the latency something rotten.

    But I could be way wrong. These days, I mostly deal with campus-level networking, where the main constraint is whether the router is configured to enable gigabit on the particular port or not. No idea what the governing constraints there are, but where the network is fast enough, I can actually see other things being the bottleneck… 😄
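
    In the spirit of "go and measure it yourself", a rough timing sketch; the hosts and paths are hypothetical, and a single run like this is nowhere near a rigorous benchmark.

    import time
    import ftplib
    import urllib.request

    def time_ftp(host, path):
        t0 = time.monotonic()
        ftp = ftplib.FTP(host)
        ftp.login("anonymous", "guest@")
        ftp.retrbinary(f"RETR {path}", lambda chunk: None)   # download and discard
        ftp.quit()
        return time.monotonic() - t0

    def time_http(url):
        t0 = time.monotonic()
        with urllib.request.urlopen(url) as resp:
            while resp.read(1 << 16):                        # read and discard
                pass
        return time.monotonic() - t0

    print("FTP :", round(time_ftp("ftp.example.com", "big.bin"), 2), "s")    # hypothetical
    print("HTTP:", round(time_http("http://example.com/big.bin"), 2), "s")   # hypothetical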





  • Fair enough. An argument made on actual measured results is good enough in my books!



  • @gleemonk said:

    The only reason it's still there is sheer inertia.

    A few years back, I worked with AS/400 administrators who insisted the only way to transfer a file out of the AS/400 system they had was via regular FTP. I think it was because they didn't want to spend 5 minutes to google and then read IBM's documentation. And that they probably didn't understand a word of it.



  • @NTW said:

    few years back, I worked with AS/400 administrators who insisted the only way to transfer a file out of the AS/400 system they had was via regular FTP

    Same here, though mine was only two years ago. In my case, they had hacked the user subsystem up so much that they refused to install IFS, and they had so many individual scripts and programs that FTP'd that they quoted thousands of hours to test doing what you linked to.



  • We are flying on an Internet airplane in which we are constantly swapping the wings, the engines, and the fuselage, with most of the cockpit instruments removed but only a few new instruments reinstalled. It crashed before; will it crash again? "[Bufferbloat: Dark Buffers in the Internet](http://cacm.acm.org/magazines/2012/1/144810-bufferbloat/fulltext)", Communications of the ACM, Vol 55 Iss 1.


  • @ancarda said:

    At work we open 21, 990 and 5500-5525 (passive range) for our office IP. Can't say that's a complicated rule.

    Compare that to HTTP(S), which is port 80 or 443, or SFTP, which just needs port 22. So FTP necessarily has a larger profile.

    It's also a much larger headache. You have to consider how you're going to treat active transfers, which many FTP daemons don't allow you to disable. And NAT can severely mess with FTP, since most NAT/PAT setups can translate ports in TCP headers just fine but can't translate the ports embedded in the application layer (see the sketch at the end of this post). Additionally, the protocol is fairly loose, so you'll find that some clients will not communicate with some servers. If you're okay with your application occasionally being incompatible or inaccessible for entirely technical reasons, then sure, FTP is just fine.

    As I said above, FTP sucks because it was originally designed to be used with NCP. That's why it often looks like it's operating in simplex and has so many requirements that make it appear to operate in simplex: it used to.

    @ancarda said:

    As for performance, do you have any benchmarks showing HTTP is faster? FTP has always seemed faster to me, especially when you have a lot of concurrent connections open in FileZilla.

    Well, FTP is intended for file transfers, and HTTP is much more complex, so I'd believe that in some cases FTP performs better. However, HTTP can have any number of enhancements. HTTP is a heavily optimized protocol, especially if you're using HTTP/2. I'd bet that where FTP does win, it's because the httpd's I/O isn't as well optimized for large file transfers as the ftpd's. On the other hand, FTP is a legacy protocol. It's never going to be updated. It's never going to be modernized, because you should be using SFTP (which has its own setup issues from piggybacking on a shell protocol, but that's another thing entirely). It's always going to have one foot stuck in the ARPANet.
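
    To see why NAT struggles with it: the data-connection address travels inside the application payload, as in "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)", so a NAT box that only rewrites TCP/IP headers never touches it. A small parsing sketch with a made-up reply:

    import re

    reply = "227 Entering Passive Mode (192,168,1,10,19,136)."   # made-up server reply

    h1, h2, h3, h4, p1, p2 = map(
        int, re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply).groups())
    host = f"{h1}.{h2}.{h3}.{h4}"
    port = p1 * 256 + p2
    print(host, port)   # 192.168.1.10 5000 -- a private address the outside client can't reach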


  • Discourse touched me in a no-no place

    @BaconBits said:

    Additionally, the protocol is fairly loose, so you'll find that some clients will not communicate with some servers.

    For example, what is the format of the response to requesting a file list? I've seen that all over the place, and computers are nowhere near as good at parsing that sort of thing as people.



  • @dkf said:

    For example, what is the format of the response to requesting a file list? I've seen that all over the place, and computers are nowhere near as good at parsing that sort of thing as people.

    Here's the entire section of RFC 959 dealing with the LIST command:

    This command causes a list to be sent from the server to the passive DTP. If the pathname specifies a directory or other group of files, the server should transfer a list of files in the specified directory. If the pathname specifies a file then the server should send current information on the file. A null argument implies the user's current working or default directory. The data transfer is over the data connection in type ASCII or type EBCDIC. (The user must ensure that the TYPE is appropriately ASCII or EBCDIC). Since the information on a file may vary widely from system to system, this information may be hard to use automatically in a program, but may be quite useful to a human user.

    Note that they explicitly call out that the data will be hard to consume by a program.
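
    For contrast, a small sketch with a hypothetical host: LIST hands back whatever free-form text the server fancies, while MLSD (RFC 3659, where the server supports it) returns named facts a program can rely on.

    import ftplib

    ftp = ftplib.FTP("ftp.example.com")         # hypothetical host
    ftp.login("anonymous", "guest@")

    ftp.retrlines("LIST")                       # free-form, often ls -l-ish, printed as-is

    for name, facts in ftp.mlsd():              # e.g. {'type': 'file', 'size': '1024', ...}
        print(name, facts.get("type"), facts.get("size"))

    ftp.quit()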


  • @dkf said:

    @Jaime said:

    The FTP protocol cannot be reasonably secured. There are secure alternatives to it (SFTP, FTPS), but no secure versions of it.

    FTPS is the secure version of FTP. The reason it isn't used widely is because some website authoring software platforms don't grok anything other than plain old FTP for uploading the results.

    Friends don't let friends enable upload via FTP.

    The real WTF is that some hosting services only allow plain FTP unless you pay them more to enable more secure protocols.


  • FoxDev

    @Medinoc said:

    The real WTF is that some hosting services only allow plain FTP unless you pay them more to enable more secure protocols.

    SSL certificates aren't free



  • @RaceProUK said:

    @Medinoc said:

    The real WTF is that some hosting services only allow plain FTP unless you pay them more to enable more secure protocols.

    SSL certificates aren't free

    But you don't even FTP to your own website: you FTP to their "website administration" server, which means only one domain (and thus only one certificate) for the thousands of accounts they have. Oh, and the "web interface" part of their administration site uses HTTPS, which means they already have a certificate.



  • @Jaime said:

    @dkf said:

    For example, what is the format of the response to requesting a file list? I've seen that all over the place, and computers are nowhere near as good at parsing that sort of thing as people.

    Here's the entire section of RFC 959 dealing with the LIST command:

    This command causes a list to be sent from the server to the passive DTP. If the pathname specifies a directory or other group of files, the server should transfer a list of files in the specified directory. If the pathname specifies a file then the server should send current information on the file. A null argument implies the user's current working or default directory. The data transfer is over the data connection in type ASCII or type EBCDIC. (The user must ensure that the TYPE is appropriately ASCII or EBCDIC). Since the information on a file may vary widely from system to system, this information may be hard to use automatically in a program, but may be quite useful to a human user.

    Note that they explicitly call out that the data will be hard to consume by a program.

    From experience, enterprise proxies make this even more fun: while regular FTP LIST returns mostly ls -l-style output, the proxy will turn it into an HTML page.


  • Discourse touched me in a no-no place

    @Medinoc said:

    enterprise proxies

    😡


  • ♿ (Parody)

    @RaceProUK said:

    @Medinoc said:

    The real WTF is that some hosting services only allow plain FTP unless you pay them more to enable more secure protocols.

    SSL certificates aren't free

    But ssh is. Has Microsoft released their native ssh stuff yet?



  • @boomzilla said:

    @RaceProUK said:

    @Medinoc said:

    The real WTF is that some hosting services only allow plain FTP unless you pay them more to enable more secure protocols.

    SSL certificates aren't free

    But ssh is. Has Microsoft released their native ssh stuff yet?

    The source was made available months ago:

    It's still under active development for the initial release, but as far as I'm aware it works.

    I wouldn't put it into production, of course.

    That said, this one is $100:

    All we've ever done is run a domain-joined Red Hat or Debian box for SFTP.



  • @dkf said:

    @Arantor said:

    unless you only open port 21 for trusted IP addresses?

    Still not good. FTP is also awkward from the way it does a complicated dance with the data socket, which complicates your firewall configuration.

  • Given that FTP was one of the first protocols designed, at a time when TCP/IP hadn't even been designed (Arpanet was still using NCP in 1971, and didn't move to TCP/IP until 1983), a certain awkwardness and lack of security was inevitable. While the original RFC 114 spec has been repeatedly hacked about and updated, the basic design never really changed, and most of the subsequent RFCs muddied the water more than they clarified it. It's safe to say that using plaintext FTP in 2016 for anything other than testing a new TCP/IP stack implementation is a Bad Idea.


  • Discourse touched me in a no-no place

    @ScholRLEA said:

    Given that FTP was one of the first protocols designed, at a time when TCP/IP hadn't even been designed (Arpanet was still using NCP in 1971, and didn't move to TCP/IP until 1983), a certain awkwardness and lack of security was inevitable.

    That things have changed since then is pretty much inevitable. I'm not complaining about that. The problem is that people are still supporting this shit and not moving on.



  • @RaceProUK said in FTP is just as good:

    @Medinoc said:

    The real WTF is that some hosting services only allow plain FTP unless you pay them more to enable more secure protocols.

    SSL certificates aren't free

    letsencrypt.org ...?


  • Discourse touched me in a no-no place

    @Zemm Also startSSL.


  • Notification Spam Recipient

    @loopback0 said in FTP is just as good:

    startSSL

    Did you see they recently updated the UI on their website (a bit)? It almost looks like something-not-from-the-90s now!


  • Discourse touched me in a no-no place

    @Tsaukpaetra As long as the service they're supposed to provide is OK, who cares about how ugly their UI is? It's not like it's something that most users need; the people who want that sort of service tend to prefer a no-BS get-shit-done interface.

    Leave the anti-parallax scrolling BS for the hipster websites…

