TL;DR Dumbsh@$* drops server - it does not go well



  • Even though it is the holiday season and I will soon have all the snide, cutting remarks and passive-aggressive abuse that only family can provide, I thought I would relate the tale of my last few days and let strangers on the Internet tell me how unbelievably stupid I am and how I am truly the real :wtf:.

    It was time for our yearly power system maintenance (last performed circa 2009). We would have to disconnect the server room power from the generator circuit for a couple hours. Since the in-rack UPSes don’t have anywhere near that kind of runtime, all of our servers would need to be shut down. This bit went relatively smoothly other than an “identical” spare part from 1995 not fitting in place of a failing part from 1985.

    While everything was down, I thought I’d have a look inside our anaemic, 10 year old e-mail server in case it might be possible to add more RAM (it chugs so very hard with only 2GB). This is the point at which my employer became totally justified in firing me, should they choose to do so. It seems that the server was not, in fact, hooked into one of its rails but only balancing there. So when I went to pull it out, it fell over and went clonk. It didn’t actually fall out of the rack but it did stop very abruptly. I hooked it in correctly and shoved it back into the rack with some profanity.

    When it came time to power everything back up, surprisingly almost everything came back except that e-mail server. The RAID BIOS showed a failed disk in the mirror (never did figure out why it didn’t boot from the 2nd disk). Apparently 10 year old SCSI disks don’t like being banged around. This is where I made my second firing mistake. I removed the failed disk to have a look at it. When I put it back, the RAID BIOS decided that it was good and that it was the primary disk in the array and proceeded to rebuild the mirror over top of the other disk, resulting in a disk that was bad and another disk that was a perfect copy of a bad disk.

    Much wrangling within the RAID BIOS ensued but there was nothing for it. It was Belgiumed. It was now about midnight and tomorrow was a work day so I ventured to the tape safe and retrieved the image hard drive from the last time one of my predecessors broke the e-mail server. Written across it in bright red letters:

    Corrupted Drive – Do Not Use for New Images

    With that auspicious start, I find the server image, finally figure out that it is a “PartImage” image and set out to the Internet to download a recovery CD to let me restore it and get back to business. Let’s digress for a moment and talk about PartImage. W. T. F. is up with imaging software that, when you use compression to save the image, neglects to properly indicate that it is a compressed image, meaning that you need to rename the image files in obscure ways and/or perform command-line shenanigans to properly restore it? Some days I think that @blakeyrat’s stance on OSS is hyperbole and then I’m faced with this kind of shit and I think he’s totally right and we should form a cult based on his writings on the DailyWTF preaching the gospel of Windows good, death to open source.

    So, fresh new disks in the machine (‘cause why not?). Image installed using minor Linux command line wizardry. Fire it up and... no boot. Disk looks good when viewed from the Linux recovery CD but won’t boot. Can we make a Windows Server 2003 boot disk? We can (that this server is old enough to still have a floppy drive should tell you everything about it you need to know). Boot from the disk, server boots but won’t connect to the domain. It wants the local admin password before I can log in and see exactly what state it’s actually in. Usual admin passwords for these sorts of things don’t work. Where would we have recorded such a password? After briefly considering “solving” this problem by typing up a letter of resignation and leaving it on my boss’ desk, I set out on an epic quest through our procedures and setup specs to try to find the password. Never did find it but I did run across a “bare metal Exchange server disaster recovery” procedure.

    At this point, I’ve had enough trying to get the original server back. I’ve still got the mail store (conveniently on an external SCSI array that didn’t get dropped). Waited around (for about an hour) for my boss to come in so that I could make sure that I wasn’t completely out of my mind by starting fresh then pave the server, reinstall everything following disaster recovery procedure, spend way too much time working out the 3 steps omitted from disaster recovery procedure and get it all back up so that the company can resume “normal” operations. Final tally:

    Without e-mail: 9pm – 1pm next day.
    Without sleep: 26 hrs.
    Without BlackBerry connectivity: still (but I can probably write another one of these just on the myriad of things wrong with BES 10).
    Without good, working, tested image of critical servers: never again 😮 🔫


  • Garbage Person

    Chances are good that your alleged mistake and even the dropping of the server are irrelevant. Its fate was sealed as soon as it went to cold shutdown. Server reboot mortality is very, very real.

    Now, if backups are your responsibility, we can still talk about the firing thing.



  • @smallshellscript said:

    conveniently on an external SCSI array that didn’t get dropped

    The person who took that decision just saved your ass. Now go and install some SSDs.


  • BINNED

    @Eldelshell said:

    Now go and install some SSDs.

    Why would one, given ...

    @smallshellscript said:

    10 year old e-mail server

    @smallshellscript said:

    2GB

    Fucking replace that turd. If it ran for 10 years then it has had its time.

    @smallshellscript said:

    Windows Server 2003

    Given that this is end of life, the Exchange version it runs is most likely also near its end.


  • Discourse touched me in a no-no place

    It does seem like that server's beyond its useful life if it's running a critical service.



  • @smallshellscript said:

    When I put it back, the RAID BIOS decided that it was good and that it was the primary disk in the array and proceeded to rebuild the mirror over top of the other disk, resulting in a disk that was bad and another disk that was a perfect copy of a bad disk.

    It's a horrible sinking feeling, isn't it?

    As a result of watching something very similar happen to somebody else, the first thing I did when our hardware-RAID6 server went tits up was pull all eight disks individually and image them onto spares. That took four hours, using eight of our lab workstations running simultaneously. It was totally, totally worth it.

    We're out in the sticks and there was no way our vendor was going to be able to have us up and running as quickly as we needed to be, so I had to recommission the old server (never throw out your old hardware!) and move all the data from that dead one's RAID6 set onto the old server's RAID1. Were I not working with fully backed-up images, I would have totalled our data at least twice in the course of learning enough about mdraid to achieve that.



  • Ugh what a nightmare. Can't wait for my own disaster, so I can learn my backups/procedures lesson too.



  • Don't keep your backups on drives hanging off the same controller as your live data, is my best advice at this point - especially if you've been naive enough to buy hardware that refuses to expose individual drives to the mobo so the only way you'll get RAID is via a hardware RAID card.

    That old server now sits on the bench next to the new one. Every night the new server shuts down all its VMs, snapshots itself, brings the VMs up again, then WOLs the old server (which boots from a live USB stick), rsyncs all the snapshots over, then does a bit of foolery to make them bootable over there. All I need to do after a crash is move the Ethernet and UPS cables over to the old server, remove the USB stick and boot it. Good thing, too - the vendor's first attempt at repair wasn't so great, and about three weeks after the first crash I ended up running off the old server for another week as they replaced the second redundant PSU.

    When the time comes to upgrade our server hardware, all I'll need to do is boot that same USB stick on the new bare metal, use it to format the new disks the way I want them, then run the exact same backup script on the existing server to migrate it.

    I also do a nightly rsync-over-ssh of the nightly snapshots to a set of encrypted drives I keep at my house, just in case the server closet at work burns down. I sleep better these days.
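
    In outline, the whole nightly job is less clever than it sounds. Here's a trimmed-down Python sketch of the idea rather than the actual script - the host names, MAC address, libvirt domain names, image paths and the wait times are all placeholders for whatever your own setup uses:

        #!/usr/bin/env python3
        """Nightly backup sketch: quiesce VMs, snapshot, wake the standby box, sync."""
        import subprocess
        import time

        VMS = ["mail", "files", "intranet"]      # libvirt domain names (placeholders)
        STANDBY_MAC = "00:11:22:33:44:55"        # WOL target (placeholder)
        STANDBY = "backup-box"                   # reachable over ssh once its live USB stick has booted
        SNAP_DIR = "/srv/snapshots"              # where tonight's images end up

        def run(*cmd):
            subprocess.run(cmd, check=True)

        # 1. Shut the VMs down so the copies are consistent.
        for vm in VMS:
            run("virsh", "shutdown", vm)
        time.sleep(120)                          # crude: give the guests time to power off

        # 2. Copy the now-quiescent VM disk images into tonight's snapshot directory,
        #    then bring the VMs straight back up.
        run("rsync", "-a", "--delete", "/var/lib/libvirt/images/", SNAP_DIR + "/")
        for vm in VMS:
            run("virsh", "start", vm)

        # 3. Wake the old server (it boots from its live USB stick) and give it a while.
        run("wakeonlan", STANDBY_MAC)
        time.sleep(300)

        # 4. Push the snapshots across. The "make them bootable over there" foolery
        #    (bootloader install, fstab fix-ups) would follow here.
        run("rsync", "-aH", "--partial", SNAP_DIR + "/", STANDBY + ":" + SNAP_DIR + "/")

        # 5. Offsite leg: same idea, rsync-over-ssh to the encrypted drives at home.
        run("rsync", "-aH", "--partial", "-e", "ssh", SNAP_DIR + "/", "offsite-host:" + SNAP_DIR + "/")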



  • @flabdablet said:

    I also do a nightly rsync-over-ssh of the nightly snapshots to a set of encrypted drives I keep at my house, just in case the server closet at work burns down. I sleep better these days.

    And a trusty shotgun underneath the pillow, for the most persistent of data thieves.



  • Given that the backup drives are encrypted with a 256-bit randomly generated key that's stored only on the origin server and in my KeePass, I think I'm actually safer without explosives stored next to my head.
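
    If anyone wants to copy that scheme, generating and recording the key is the easy part. A sketch (the path is made up; the keyfile then gets handed to cryptsetup via --key-file when formatting and opening the backup drives):

        import secrets

        key = secrets.token_bytes(32)                    # 256 bits from the OS CSPRNG
        with open("/root/backup-disk.key", "wb") as f:   # lives only on the origin server
            f.write(key)
        print(key.hex())                                 # paste the hex into KeePass as the copy of last resort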


  • Discourse touched me in a no-no place

    @flabdablet said:

    rsync-over-ssh of the nightly snapshots

    How long does that take - or is there really that little daily difference in the sources that rsync makes it worth doing?



  • Usually about 12 hours.

    I'm using a Cubietruck as the server on my end; best read/write speed I can get to the dmcrypt disks is around 12MB/s. So rsync spends most of its time with the network link idle, just reading huge VM disk image files off the encrypted disks to build its running checksums. If I ran something more power-hungry I'd probably be able to do it in something closer to four hours.

    For comparison, the rsync job to the old server on the same bench (GigE, one switch in between) runs in full-file transfer mode rather than rsync "smart" diff mode, and typically takes about 20 minutes.
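
    Back of the envelope, twelve hours is about what you'd expect if rsync has to re-read everything at that speed. Assuming something like half a terabyte of VM images (made-up figure, but the right ballpark):

        read_speed_mb_s = 12       # best dmcrypt read speed on the Cubietruck
        image_size_gb = 500        # assumed total size of the snapshot set

        hours = image_size_gb * 1000 / read_speed_mb_s / 3600
        print(round(hours, 1))     # ~11.6 hours just to checksum the lot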



  • @Eldelshell said:

    Now go and install some SSDs.

    Are SSDs reliable yet? So far I've had a 100% failure rate within the first year for every SSD I've owned, with only one exception which is a no-name remanufactured 30GB drive I bought for peanuts to host a Minecraft server.



  • @mott555 said:

    Are SSDs reliable yet? So far I've had a 100% failure rate within the first year for every SSD I've owned, with only one exception which is a no-name remanufactured 30GB drive I bought for peanuts to host a Minecraft server.

    I haven't had any problem with SSDs in my work laptop, my home laptop and my home desktop, although the desktop is only about half a year old.

    It's kind of ironic that your only non-failing SSD was used for Minecraft, considering how often it writes small amounts of data to disk (unless this has changed recently).



  • @mott555 said:

    Are SSDs reliable yet? So far I've had a 100% failure rate within the first year for every SSD I've owned, with only one exception which is a no-name remanufactured 30GB drive I bought for peanuts to host a Minecraft server.

    Use enterprise SSDs? Typically way more reliable than consumer ones for obvious reasons.

    But on the consumer grade side, thetechreport had an SSD endurance test running over the past year which ended up writing 2 petabytes to a bunch of SSDs; the end result is that the Samsung and the Kingston HyperX still worked.

    Just avoid the shitty cheap SSDs if you want a real expectation of life. There are reasons why Samsung warranties their 840 Pro series for 5 years (the only limitation is that your warranty is up when you hit the write threshold counter, which they spec as 10 GB/day for 93 years).
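
    For a sense of scale, multiplying out the numbers in that warranty clause:

        gb_per_day = 10
        years = 93
        print(gb_per_day * 365 * years / 1000)   # ~339 TB written before the counter runs out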



  • @delfinom said:

    Use enterprise SSDs? Typically way more reliable than consumer ones for obvious reasons.

    Nope, way too expensive at the time. Even today I'm too cheap for consumer SSDs. My desktop is staying on spinning rust until it's affordable to have a 1TB SSD partition on a RAID-5 array.

    I think my current laptop SSD is a Samsung 840 which somehow hasn't failed yet (I'm overdue). Before that I went through a couple first-gen SandForce-based drives, both from reputable manufacturers. Given my history with SSDs, nothing important lives on that laptop.

    @delfinom said:

    Just avoid the shitty cheap SSDs if you want a real expectation of life.

    Funny enough, my longest-life SSD so far is the shitty cheap one I bought for my Minecraft server.



  • @mott555 said:

    Nope, way too expensive at the time. Even today I'm too cheap for consumer SSDs.

    I think my current laptop SSD is a Samsung 840 which somehow hasn't failed yet (I'm overdue). Before that I went through a couple first-gen SandForce-based drives, both from reputable manufacturers. Given my history with SSDs, nothing important lives on that laptop.

    @delfinom said:

    Just avoid the shitty cheap SSDs if you want a real expectation of life.

    Funny enough, my longest-life SSD so far is the shitty cheap one I bought for my Minecraft server.

    Expectation of life, not that it can't last long.

    I trust SSDs as much as hard drives, I don't. Backups in a RaidZ2 array ftw. (No shitty manufacturer drives either, fuck Seagate)


  • Grade A Premium Asshole

    @smallshellscript said:

    This is the point at which my employer became totally justified in firing me, should they choose to do so.

    @smallshellscript said:

    This is where I made my second firing mistake.

    Not from my point of view, and I have always lived by "hire slowly and fire quickly". Unless there are details I skipped or you omitted, I see nothing here to fire you for.

    Meh, maybe you should have had a better hold on it as you pulled it from the rack? Though the fact that it was not hooked was the fault of the last person to pull it out from the rack. Was it a Dell server? Their older rails had this annoying habit of not quite seating when you placed the server in. Sometimes you had to shuffle it around, etc.

    @smallshellscript said:

    reinstall everything following disaster recovery procedure, spend way too much time working out the 3 steps omitted from disaster recovery procedure and get it all back up so that the company can resume “normal” operations.

    Based upon that, you would be the type of person that I would want to hire. You say you spent too much time, but you were able to figure it out. Don't be too hard on yourself. There are a shitload of people out there that would have blindly followed procedure and then blamed procedure when they failed. You figured it the Belgium out.

    Shit happens, man. You persevered and recovered from the mistakes when the initial mistake was not even one of your own. You stuck with the job until it was done, even though it meant working overnight. You performed a bare-metal recovery of an Exchange server (not a terribly easy task...). Now, convince them to replace that wheezing server before it finishes crashing and burning.

    Now, a little anecdote of my own: I once needed to replace a fan on a PowerEdge 2900 rackmount server that was completely loaded with drives, etc. Heavy...bastard. As this machine had hot-swap fans and was equipped with the cable management arm in the back, I decided to do it hot instead of going to the datacenter at 3am. I pulled the server out and it jumped off the rail on the side away from me. I grabbed it very quickly, but with the awkward positioning, being off-balance and generally not expecting a ~100lb server to make a dive for the concrete, I was barely able to grab it. Just...barely. Bent the shit out of the rail that remained attached. It was tweaked beyond all recognition. I wiggled my hand up to where I could reach the power button and was just barely able to hit it to start shutdown procedures while keeping enough of a grip to not drop it.

    There is literally no one around to help. Just me. You cannot even imagine the contortions I had to go through to get the cables undone from the back, rip the cable management arm off, barely get the twisted as shit rail loose, etc. I got it down to the floor right before I lost all strength in my right arm (prior injury leaves me with no stamina in my right arm due to a herniated muscle). An unbelievably close call, and honestly if the one rail had not slowed the fall, a VERY expensive server (at the time) would have hit the concrete.

    After that, I got rid of the cable management arms on all of our servers and now we do all work inside the cases while the server is powered down, as a matter of procedure...



  • I've got my first laptop with a SSD and I love it. I can toss the thing around without worries about some disk bouncing around a metal casing.


  • Java Dev

    I've got a SSD in my desktop. Combined with some good airflow management (large slow fans) it's barely audible.

    Said SSD is rather large, 512GB, because it needs to host a dual boot installation and all my game installs are on it. Data storage is on a NAS.



  • @mott555 said:

    Are SSDs reliable yet?

    Have been for ages.

    @mott555 said:

    So far I've had a 100% failure rate within the first year for every SSD I've owned, with only one exception which is a no-name remanufactured 30GB drive I bought for peanuts to host a Minecraft server.

    So I have an HD curse, and HDs for me almost never last over 2 years. I've had 3 SSDs last 3+ years, even in the short period of time they've been affordable. The 256 GB SSD in my desktop impressed me so much with its reliability over 3 years that I bought a 500 GB one to replace it. Sure, it wasn't cheap, but for me it's cheaper than buying a new spinning HD every year.

    I do still have the spinning HD for video editing. So uh... I still have to buy a new one every 2 years or so. But whatever, I like making videos.


  • Grade A Premium Asshole

    @blakeyrat said:

    I do still have the spinning HD for video editing.

    Do you edit on the SSD and then move it to spinning rust? Or do you edit it while it is on the HDD?



  • @Intercourse said:

    Do you edit on the SSD and then move it to spinning rust? Or do you edit it while it is on the HDD?

    The latter. FRAPS captures are generally too large to even fit on the SSD-- 1080p is something like 10 GB/minute. I do an initial compression to get it in mp4 at a reasonable size I can edit (and backup without staring at an upload for 4 days). Compressing that, and rendering the final video are both CPU-bound, not I/O-bound, so an SSD wouldn't help there. Editing is generally human brain-bound, so an SSD wouldn't help there-- pretty much the only thing an SSD would improve is to allow Vegas to draw frame previews more quickly, since that takes a lot of crazy seeking every time I change the zoom. And reliability of course. I've lost at least 3-4 hours of Robots in the News material due to HD failures, including the originals of the first 5 Fanfic Fridays. And a bunch of Marlow Briggs, which pissed off Rantis and he doesn't want to re-do them.

    The exception here is when recording Xbox One games, I usually do those on the SSD because they're more like 1GB/30 minutes, since the capture hardware does an initial h.264 pass. But I put them on the HD before I do any editing, so... same workflow. I just don't have to do the initial compression pass because the capture hardware does it for me.
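
    For what it's worth, the 10 GB/minute rule of thumb is the right order of magnitude: completely uncompressed 1080p at 30 fps comes out just over that, and FRAPS' lossless codec only brings the real captures in somewhere below it. A rough check, assuming 24-bit colour and 30 fps:

        width, height = 1920, 1080
        bytes_per_pixel = 3        # 24-bit colour, no compression
        fps = 30                   # assumed capture rate

        mb_per_s = width * height * bytes_per_pixel * fps / 1e6
        print(round(mb_per_s), round(mb_per_s * 60 / 1000, 1))   # ~187 MB/s, ~11.2 GB per minute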


  • Grade A Premium Asshole

    Fair enough. Good explanation. Much appreciated.

    Also, I had no clue that FRAPS captures were that large. That is starting to get to the I/O limit of an HDD.



  • @Intercourse said:

    Also, I had no clue that FRAPS captures were that large. That is starting to get to the I/O limit of an HDD.

    It's a rule of thumb, I don't remember the exact number. It's an exaggeration, as it would be a shitty rule-of-thumb if it resulted in me losing videos due to running out of space.


  • Grade A Premium Asshole

    Err on the side of caution because when you are recording a shitty game you would not want to do that all over again. Gotcha.



  • Oh God yeah. I wanted to re-do Revelations 2012 because the audio came out so fucking terrible*, but 1) Playing that was a slog and 2) the dramatic ending was a bug that probably wouldn't have occurred a second time, and it was the best part of the video.

    * That was before I realized how fucking broken Audacity is-- its volume sliders in the toolbar adjust system volume, not application volume! So in my attempts to get Skype's mic gain to a reasonable level, I ended up clipping the shit out of my own recording.

    Since then I've switched to SoundForge, which is not awful (and coincidentally not open source, hmm!) and only use Audacity for noise cancellation. Which it's fucking slow at. Because it's single-threaded.

    BTW if any of you open source-loving mega-minds want to prove the superiority of your development methodology, how about making a sound editor that isn't fried ass? Or a video editor that comes within 2 orders of magnitude of half as good as Vegas or Premiere?


  • Grade A Premium Asshole

    @blakeyrat said:

    BTW if any of you open source-loving mega-minds want to prove the superiority of your development methodology, how about making a sound editor that isn't fried ass? Or a video editor that comes within 2 orders of magnitude of half as good as Vegas or Premiere?

    Why whatever do you mean?

    https://www.youtube.com/watch?v=MS7hXuO2UKE



  • @Weng said:

    Now, if backups are your responsibility, we can still talk about the firing thing.

    They weren't technically but they are now since I obviously can't trust the ones taken before I took on the IT role (which is most of our critical infrastructure, of course).



  • @Eldelshell said:

    The person who took that decision just saved your ass. Now go and install some SSDs.

    Not in a server. Particularly not in a server of this vintage. I love SSDs for workstations / laptops (you can get a late Pentium 4 / Pentium D era desktop to run Windows 7 really well with the addition of an SSD) but I prefer my server drives to have a gradual failure mode rather than the instant "now it's gone forever" failure you see with solid state.



  • Haha I'm 6:30 in and he hasn't even installed a working Linux distro yet.


  • Grade A Premium Asshole

    @blakeyrat said:

    Haha I'm 6:30 in and he hasn't even installed a working Linux distro yet.

    I thought you would like that.



  • @Luhmann said:

    Fucking replace that turd. If it ran for 10 years then it has had its time.

    We have a contractor starting in January who'll be doing exactly that. Pity there isn't a direct upgrade path from Exchange 2003 to Exchange 2013.

    @Luhmann said:

    Given that this is end of life, the Exchange version it runs is most likely also near its end.

    July 2015 - at least for Server 2003. Exchange 2003 may actually already be EOL. Yep. http://support2.microsoft.com/lifecycle/search/?alpha=Exchange+Server

    PS - get fucked Discourse. I'll converse how I want to.



  • @Intercourse said:

    I thought you would like that.

    The content's ok, but man this guy talks a lot and says little. I mean, I don't edit my shit down either, but I don't pretend I'm doing a TV show either. EDIT: and he does that annoying jump cut thing too, EVEN THOUGH HE HAS TWO CAMERAS! WTF!

    EDIT: So the nicest (or at least most-working) open source video editor can't split a stereo track into two mono tracks?! HOW BASIC A FUNCTIONALITY IS THAT! (That said, his recording two separate mics and left/right in a single sound channel is kind of a WTF, also, but I guess it'd reduce the equipment you need to carry around if you're doing interviews at an overpass(?).)


  • Grade A Premium Asshole

    Most of their videos are pretty concise. I think they just wanted to show how much trouble they went through, and the video still turned out pretty shit.

    Here is the video he is referencing in that:

    https://www.youtube.com/watch?v=jaJ7vUu1ixg

    The audio turned out pretty horrible, because of the OSS he was using to edit the video. Yeah, the first video I posted is pretty long, but I think that was to show desperation and just so no one would say, "All you had to do was (recompile the kernel, use different hardware or distro, etc)".



  • @delfinom said:

    Kingston HyperX

    These are the best SSDs I've found. Probably 50 installs so far, no failures yet (it'll happen, just hasn't yet). Compare that to the 3 or 4 other brands we've got in the building of which I've lost 6 drives total so far.



  • @Intercourse said:

    The audio turned out pretty horrible, because of the OSS he was using to edit the video. Yeah, the first video I posted is pretty long, but I think that was to show desperation and just so no one would say, "All you had to do was (recompile the kernel, use different hardware or distro, etc)".

    I'll bet you $50 if I visit the YouTube comments (assuming he has them turned on), I'll see that standard Linux bullshit in them anyway.



  • @Intercourse said:

    Unless there are details I skipped or you omitted, I see nothing here to fire you for.

    Nothing pertinent. I neglected to mention stuff like the night janitor poking his head in every two hours to make sure I hadn't somehow died whilst sitting on a step stool in the server room working at the terminal and getting inventive with my cursing, or the mad scramble through the office to find the Server 2003 CDs. Nothing like rooting through piles of paperwork and random disks like a raccoon in a dumpster because someone neglected to put something back. :sigh:

    @Intercourse said:

    Was it a Dell server?

    Yep. Stupid things. At least their newer rails seem to be a bit more substantial.

    @Intercourse said:

    There are a shitload of people out there that would have blindly followed procedure and then blamed procedure when they failed.

    Oh, I blame the procedure for missing steps. Doesn't escape the fact that I had to make it work. Which reminds me, I need to finish updating that before I forget what I did and where the screenshots are supposed to go.

    @Intercourse said:

    PowerEdge 2900 rackmount server

    Man, we used to have one of those. So bloody heavy.


  • Discourse touched me in a no-no place

    @smallshellscript said:

    late Pentium 4 / Pentium D era desktop to run Windows 7 really well with the addition of an SSD)

    Huh, the guy in the inner video, about 45 seconds in, filmed outside in front of an underpass, is wearing a DNA Lounge/DNA Pizza shirt, or at least a shirt sporting their logo.


  • Discourse touched me in a no-no place

    @blakeyrat said:

    this guy talks a lot and says little

    The single biggest problem with videos. "Here's a 'life hack' about how to cut cherry tomatoes. It takes 4 seconds to show you how to do it, but I'll pad it out to a minute."

    Obviously that particular example isn't a big deal, but turning a 1-minute set of instructions into a 10-minute video, ARGH.



  • Kind of, but he wouldn't be able to get away with it without the British accent. People would watch maybe 15 seconds of a talking head with my Seattle accent sitting in front of a messy desk and two monitors.



  • @blakeyrat said:

    15 seconds

    So you only need to pad your cherry tomato videos out by 11 seconds? Efficiency.


  • Grade A Premium Asshole

    @smallshellscript said:

    I prefer my server drives to have a gradual failure mode rather than the instant "now it's gone forever" failure you see with solid state.

    We have noticed that also. SSDs have a very definite lifespan with regards to wear. Running in RAID1, they will both fail at almost precisely the same time. One way we have found to mitigate this is to run the RAID array for 3-6 months and then replace one drive with a new one. That way it staggers the failures from wear.
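
    The arithmetic behind the staggering is straightforward; with made-up endurance and write-rate figures:

        endurance_tb = 300        # assumed rated write endurance per SSD
        tb_per_month = 5          # assumed array write rate
        stagger_months = 4        # swap one mirror member out after ~3-6 months

        first = endurance_tb / tb_per_month                    # month 60: original drive wears out
        second = stagger_months + endurance_tb / tb_per_month  # month 64: its replacement follows
        print(first, second)      # the two members no longer hit their wear-out point at the same time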



  • This is probably a good policy even with spinning disks. I'm always leery of putting two identical disks from the same batch bought at the same time and expecting them to fail at different times (baring manufacturing defects that pop up right away).



  • @smallshellscript said:

    This is probably a good policy even with spinning disks. I'm always leery of putting two identical disks from the same batch bought at the same time and expecting them to fail at different times (baring manufacturing defects that pop up right away).

    apt typo. 😄


  • Grade A Premium Asshole

    @smallshellscript said:

    I'm always leery of putting two identical disks from the same batch bought at the same time

    We always build standard HDD RAID arrays with drives of different manufacturers on opposing sides of the array. For RAID 1/10 anyway. For RAID5/6, there are not enough manufacturers. :)



  • @delfinom said:

    Expectation of life, not that it can't last long.

    I trust SSDs as much as hard drives, I don't. Backups in a RaidZ2 array ftw. (No shitty manufacturer drives either, fuck Seagate)

    Isn't a Seagate a unit of measurement?

    "My Western Digital SSD lasted 37 Seagates."


  • Discourse touched me in a no-no place

    I was encrypting the drive in my laptop just now with BitLocker, but at 64%, after a few hours of running, the whole laptop hung and had to be restarted. And, expectedly, it doesn't boot now.
    Fine, 98% of the data is replaceable, so I was about to wipe it, except some of the data I want to keep seems to have not backed up to Dropbox (user error, I suspect), so I ideally need to try and recover the drive. Except that requires another 1TB drive, I don't have one spare, and it's 9pm on Christmas Eve so I can't exactly get one today or tomorrow either.
    Fail.
    Will see how I feel tomorrow with regards to writing the data off vs waiting to try and recover it. It's not critical but I'd rather not lose it either.


  • Garbage Person

    So this is highly interesting. I would be VERY interested in a series of blog posts, videos, or whatever directly comparing tooling for different use cases on best-of-breed commercial/non-FSF free software, general Linuxy software (i.e. binary drivers, etc.), and full-up FSF-approved software. I might be able to be convinced to do a PoC.



  • @blakeyrat said:

    10 GB/minute

    @blakeyrat said:

    1GB/30 minutes

    My raw video was somewhere around 11 to 12 KiB/s and the final product was around 90 to 100 KiB/s. I guess the fact that it was 720p might have helped... That and the fact that it was less data than a 128×60 16 bit color image per frame.
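
    For anyone checking that comparison: a 128×60 16-bit frame is about 15 KiB, so the finished product is only shifting a handful of such frames' worth of data per second, and the raw stream less than one:

        frame_bytes = 128 * 60 * 2        # 16 bits per pixel
        print(frame_bytes / 1024)         # 15.0 KiB per reference frame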

