Oh dear god. You say this guy has a popular YouTube channel? God only knows why. My recommendation to the company that hired this guy: fire him. Now. No, wait, fire him LAST WEEK. He has no idea what the hell he is doing. I take that back, he has enough basic IT knowledge to be fooling around on his own, but by no means should he be in charge of a company's back end.
Let me count the ways:
His server was a pile of junk. God knows how many thousands of pounds (dollars, whatever) of SSDs in a hand-built server, with cheap-ass RAID cards, the wrong-sized backplane, and I'm willing to bet some cheap non-server motherboard lacking things like ECC memory and redundant PSUs. So you're a guy who likes building custom computers? Fine, build desktops, hell, even build non-critical or backup servers, but for core critical stuff you buy proper server hardware from people who specialise in it. Full stop. The end.
Mixing hardware and software RAID? Moron.
His backup strategy. A single backup routine that operates by deleting the old backup and then writing a new one. And that's it. No offsite copies, no media rotation. Dear god. Get him away from the server room right now and never let him back in.
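To be clear about why "delete the old, then write the new" is so dangerous: if the job dies mid-write (or the source is already corrupt), you have zero backups. The bare minimum is to write the new copy first and only then prune old ones. A minimal sketch of that idea, using made-up throwaway paths purely for illustration:

```shell
#!/bin/sh
# Hedged sketch: dated backups with a retention window, instead of
# overwriting the one and only copy. All paths here are temp dirs
# created just for this demonstration.
set -eu

SRC=$(mktemp -d)    # stand-in for the data being protected
DEST=$(mktemp -d)   # stand-in for the backup target
KEEP=7              # keep the last 7 dated backups

echo "payroll" > "$SRC/important.txt"

# 1. Write the NEW backup first, under a dated name. The old copies
#    are untouched while this runs, so a mid-write failure costs nothing.
STAMP=$(date +%Y-%m-%d-%H%M%S)
tar -czf "$DEST/backup-$STAMP.tar.gz" -C "$SRC" .

# 2. Only once the new copy exists, prune anything beyond the
#    retention window (newest first, drop everything after $KEEP).
ls -1t "$DEST"/backup-*.tar.gz | tail -n +"$((KEEP + 1))" | xargs -r rm -f

ls "$DEST"
```

That's still only one tier; a real setup adds offsite replication and full/incremental cycles on top, but even this ordering alone would have left the guy with yesterday's data instead of nothing.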
It took him THREE WEEKS to recover that data. He's lucky the company didn't fold. Notice in the shots when they were all "the data is back!" everyone was feigning excitement for the camera? That'll be because it was pretty much moot at that point. Anything on that server that was actually critical they'd already worked around, reshot, whatever.
A LONG time ago (>15y) I was called into a company on a Friday morning. Their IT manager had had a disk fail in a very similar RAID50 setup (but, being 15 years ago, we're talking SCSI) the previous afternoon, had popped in a spare disk, triggered the rebuild and gone home. He came in on Friday morning to discover a second disk in the same RAID5 had failed during the rebuild. He also discovered his backup system (which was decent in concept: proper full/incremental, grandfather rotation, offsite) had been failing silently, and his last good backup was nearly a month old.
I had their data back on Monday morning, and the company lost one day of work. After trying a variety of tricks all weekend with none of them working, I took one of the failed disks, which was showing signs of bearing failure in the motor, and connected it to a 3m SCSI cable. I plugged one end into the SCSI backplane where the drive had been and wrapped the other end, with the drive, in plastic. Then I dragged a fridge/freezer from the company kitchen into the server room, drilled a hole through the side of the freezer, and ran the SCSI cable through the hole so the wrapped drive sat inside the freezer. The cold shrank the bearings in the motor just enough that the drive ran long enough to rebuild the other bad disk, and from there I could rebuild the 'freezer' disk and recover the whole array.
I'm not suggesting this guy should have used a freezer, because SSDs don't fail that way. But in my 20 years of IT consultancy and disaster recovery, the only situation I've had similar to his took me 3 days (and thankfully over a weekend), compared to his 3 weeks. And he's celebrating like he's done a good job?