The Sad State Of (Atwood's) Web App Deployment
-
(who runs Gentoo on a server?!)
I do, on both of my servers. I run stable amd64. One's a Thinkpad and the other's a Dell laptop, but they're both the same hardware, so I build binary packages on one that also get installed on the other.
...you end up redownloading every single package if there's a libc bugfix
This is why /usr/portage/distfiles isn't on tmpfs, is on a drive with a couple of GBs of space, and only gets pruned every month or so.
-
This is why /usr/portage/distfiles isn't on tmpfs, is on a drive with a couple of GBs of space, and only gets pruned every month or so.
You've missed the point. My sentence started with "if you actually have some sanity" - implying that this is the case if you weren't running gentoo. So saying "you don't have to redownload source files" while true, is irrelevant - this was me saying if you had to redownload half your system on a binary-based distro.
I do, on both of my servers. I run stable amd64. One's a Thinkpad and the other's a Dell laptop, but they're both the same hardware, so I build binary packages on one that also get installed on the other.
"servers" "a Thinkpad" "a Dell laptop"
I was talking about real servers, not some horribly contrived home scenario.
-
You've missed the point.
Ah. I mis-parsed the original statement.
I was talking about real servers...
So, I guess ARM servers aren't real servers? Or are servers only IBM mainframe hardware?
Realistically, because nearly everything that isn't based on exotic hardware is all 32 or 64-bit Intel-compatible machines, you're going to have to try hard to pull this out of the No True Scotsman trap.
-
I think we all agree a Dell laptop is not a real server.
-
What is a real server?
-
I define it as "any computer that is not a Dell laptop".
-
So, my Thinkpad is a real server, but its Dell twin is not. One out of two ain't bad.
-
What is a real server?
Something with rack mount ears/rails and an IPMI/ILO, for starters.
-
[GPG is] actually designed to not be scriptable.
I can actually forgive that one specific program for not being scriptable, for reasons that should be obvious if you think about it. (I'm not saying the reasons will necessarily survive deep examination. But at first blush I can understand why someone wouldn't want it to be.)
-
Something with rack mount ears/rails and an IPMI/ILO, for starters.
What about a tower server? HP and Dell used to sell those--we've got a couple of 'em moldering away in a closet.
-
A miserable little pile of microchips.
But enough talk, have at you!
-
What about a tower server? HP and Dell used to sell those--we've got a couple of 'em moldering away in a closet.
Given previous experience, they are a pain in the ass that don't properly fit in a rack. ;)
-
So, I guess ARM servers aren't real servers? Or are servers only IBM mainframe hardware?
Realistically, because nearly everything that isn't based on exotic hardware is all 32 or 64-bit Intel-compatible machines, you're going to have to try hard to pull this out of the No True Scotsman trap.
Although you seem to have done a good job of falling into the Reductio ad absurdum trap.
-
@bugmenot said:
What is a real server?
Something with rack mount ears/rails and an IPMI/ILO, for starters.
So you are excluding tower form servers? Only rack servers count in your tiny perfect world?
@Minimaul said:
Something with rack mount ears/rails and an IPMI/ILO, for starters.
What about a tower server? HP and Dell used to sell those--we've got a couple of 'em moldering away in a closet.
Don't know about HP, but Dell still does.
-
So you are excluding tower form servers? Only rack servers count in your tiny perfect world?
http://www.weblogsinc.com/common/images/1016813135734446.JPG
-
So you are excluding tower form servers? Only rack servers count in your tiny perfect world?
Sure, why not.
Edit: while we're at it, I'd like to outlaw vi and emacs, and mandate that any developer who writes software without using source control has their hands surgically removed.
-
Something with rack mount ears/rails and an IPMI/ILO
IPMI & friends are really nice if you're operating at scale. If you only have a few on-premises machines, they're totally optional.
@FrostCat said:
What about a tower server? HP and Dell used to sell those--we've got a couple of 'em moldering away in a closet.
Given previous experience, they are a pain in the ass that don't properly fit in a rack.
You seem to be confusing form with function, while also forgetting that Google started out with an enormous pile of whitebox machines, ten percent of which were on fire at any given moment.
you seem to have done a good job of falling into the Reductio ad absurdum trap.
There's nothing absurd about talking about shipping products from major manufacturers: http://www8.hp.com/h20195/v2/GetPDF.aspx/c04384048.pdf http://www-03.ibm.com/systems/z/
-
IPMI & friends are really nice if you're operating at scale. If you only have a few on-premises machines, they're totally optional.
I'd argue they're essential in general - especially if you ever need to do emergency work out of hours.
You seem to be confusing form with function, while also forgetting that Google started out with an enormous pile of whitebox machines, ten percent of which were on fire at any given moment.
Yes, OK, I forgot about tower servers, jeez. Get over it.
There's nothing absurd about talking about shipping products from major manufacturers: http://www8.hp.com/h20195/v2/GetPDF.aspx/c04384048.pdf http://www-03.ibm.com/systems/z/
You've missed the point again. I was never implying these aren't real servers. I was saying that laptops are not real servers, and I doubt many people would disagree.
-
Yes, OK, I forgot about tower servers, jeez.
These weren't tower servers, they were desktop machines. Piles of desktop machines. Google later switched to rack-mountable hardware to reduce handling difficulty, but they started with desktop hardware in non-rack-mountable cases.
I was saying that laptops are not real servers...
...because they don't have IPMI and don't have inbuilt rackmount ears? Is there any other reason?
I'd argue [IPMI & friends are] essential in general...
Reasonable people disagree about this. shrug
-
These weren't tower servers, they were desktop machines. Piles of desktop machines. Google later switched to rack-mountable hardware to reduce handling difficulty, but they started with desktop hardware in non-rack-mountable cases.
Yes, and most reasonable sysadmins would agree that this is utterly insane. Google are not a great source of best practices for the world at large.
...because they don't have IPMI and don't have inbuilt rackmount ears? Is there any other reason?
A lack of IPMI and a lack of ECC are good enough for me. Plus they don't have a sensible cooling setup to be run at full tilt for a decent amount of time. Mounting is difficult, and they usually have shitty NICs.
Pick some of the above ;)
Reasonable people disagree about this. shrug
That's fair enough. I absolutely require some method of sensible out of band admin for any server that functions when the OS is screwed, and lets me get into the BIOS/EFI - IPMI/ILO is generally the best way to get it.
-
ECC
This, and multiple NICs (in addition to preferably non-shitty).
(Yes, there are desktops with multiple NICs; seldom laptops. No, USB dongles don't count.)
-
I'd personally draw the line at "laptop".
A laptop can't really be relied upon to be a server in most cases, IMO. YMMV, but laptops are generally more likely to overheat and can't handle the load. Most desktops can run as a server (I've got several Pentium 4 Dell Dimensions doing various low-power tasks on my home network).
-
Plus they don't have a sensible cooling setup to be run at full tilt for a decent amount of time. ... they usually have shitty NICs.
My little servers run at full tilt 24/7, and have been doing so for years. I expect that I will -one day- have to replace the cooling fans in them. That day has not yet come.
The only thing wrong with the NICs in them is their lack of support for jumbo frames.
...a lack of ECC...
Yeah, this bothers the shit out of me, too. When btrfs becomes even more stable, I'll switch to it for the root partitions on both machines.
I absolutely require some method of sensible out of band admin...
I prefer my systems to not come with a poorly-or-un-audited network accessible backdoor that more-often-than-not grants full access to the running machine in question. (I've seen a few talks by some rather clever people. The code in these things is often really bad.)
This, and multiple NICs...
Multiple NICs simplifies a few network configurations (and potentially gives you Nx total throughput), but what configurations can't you replicate with multiple VLANs and a single NIC?
-
When btrfs becomes even more stable
How would that help with a lack of ECC? Stuff that rots in memory will have a rotten checksum calculated on top of it, so even assuming btrfs wouldn't eat your data on general principle, any checksumming it may add wouldn't help much.
what configurations can't you replicate with multiple VLANs and a single NIC
Any that requires some kind of resiliency against failure.
-
I prefer my systems to not come with a poorly-or-un-audited network accessible backdoor that more-often-than-not grants full access to the running machine in question. (I've seen a few talks by some rather clever people. The code in these things is often really bad.)
You don't put IPMI on a publicly facing network, you put it on a secure OOB network, usually on RFC1918 addresses, accessed using a VPN. So this really doesn't matter.
Multiple NICs simplifies a few network configurations (and potentially gives you Nx total throughput), but what configurations can't you replicate with multiple VLANs and a single NIC?
High throughput (as you said), also bonded connectivity for fallback rather than for performance.
How would that help with a lack of ECC? Stuff that rots in memory will have a rotten checksum calculated on top of it, so even assuming btrfs wouldn't eat your data on general principle, any checksumming it may add wouldn't help much.
And this is why you should always run ZFS on machines with ECC. If you get a single bit memory error while calculating your checksum, you're stuffed.
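The bit-flip scenario is easy to demonstrate. A toy Python sketch (the data contents are arbitrary, purely illustrative): a single flipped bit in memory yields a completely different checksum, so a checksum computed over already-corrupted data just faithfully records the corruption.

```python
import hashlib

data = bytearray(b"block contents as they should have been written")
good = hashlib.sha256(bytes(data)).hexdigest()

# Flip one bit, as a memory error on a non-ECC machine might.
data[0] ^= 0x01
bad = hashlib.sha256(bytes(data)).hexdigest()

# The two digests differ completely; the filesystem would happily
# store the corrupted data along with its perfectly valid checksum.
print(good == bad)
```

Which is exactly why the checksumming has to happen before the bits can rot, i.e. in ECC RAM.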
-
How would that help with a lack of ECC?
shrug. I figure that use of a fs that is nominally concerned with data corruption detection is better than not. I intimately understand the implications of GIGO, but lack the intimate understanding of btrfs internals required to understand when it could detect RAM-introduced corruption and when it could not. Obviously, if your checksum gets flipped after you calculate it, you're super-screwed unless you've had the foresight to do something clever.
...even assuming btrfs wouldn't eat your data...
I've been using it for many, many years. The only time I've lost data with btrfs was when I was using an SSD with buggy firmware that decided to write any old thing to disc, rather than what you asked it to write. Of course, I've had btrfs trigger PAX size overflow detection when using transparent compression, run into the recent unlink/mv deadlock issue (that should be fixed in 4.2), and -waay back in the day- had spurious ENOSPC issues. But those are DoSs (which are obviously entirely unacceptable in situations where uptime is important), not data loss.
Any that requires some kind of resiliency against failure.
Eh. The only failure-resilience schemes I can think of that can't otherwise be achieved with vlans are failure of NIC, cabling, or the upstream switch. Am I missing something obvious? While these are real concerns, you could pop in a good GigE ExpressCard NIC and call it a day.
...you put [IPMI] on a secure OOB network... [s]o this really doesn't matter.
A) Network accessible, poorly coded, root-granting software is always a big deal.
II) If that network isn't physically separate from your other network, then it kinda does matter.
2a) I remember reading about a notable misfeature of a particular IPMI system from a major hardware vendor that would -if no cable was plugged into the maintenance port- make the IPMI system listen on any other active interface.
Note: I'm not one of those "OMG HACKERS WILL GET YOU NO MATTER WHAT YOU DO!11" people. Like most (all?) folks on this forum, I know that deadlines and budgetary constraints mean that No-one(TM) makes more than a token effort to design and build secure systems.
-
Obviously, if your checksum gets flipped after you calculate it, you're super-screwed unless you've had the foresight to do something clever.
Like use ECC? ;)
Eh. The only failure-resilience schemes I can think of that can't otherwise be achieved with vlans are failure of NIC, cabling, or the upstream switch. Am I missing something obvious? While these are real concerns, you could pop in a good GigE ExpressCard NIC and call it a day.
You're not missing anything obvious; these are what you're protecting against.
A) Network accessible, poorly coded, root-granting software is always a big deal.
II) If that network isn't physically separate from your other network, then it kinda does matter.
Like I said, a secure OOB network makes this not really matter.
2a) I remember reading about a notable misfeature of a particular IPMI system from a major hardware vendor that would -if no cable was plugged into the maintenance port- make the IPMI system listen on any other active interface.
For Supermicro at least (on modern-ish boards), this is configurable in the BIOS or IPMI settings. You can make it totally ignore the main LAN ports for IPMI if you so wish.
-
According to that article, they have a proper server and are just using the Thinkpads to drive displays that are inside the bunker itself?
-
Facts are anathema to humor, and rankling blakey.
-
@abarker said:
So you are excluding tower form servers? Only rack servers count in your tiny perfect world?
http://www.weblogsinc.com/common/images/1016813135734446.JPG
That form factor is so last decade.
-
Try porting one of your .NET apps to Mono, and also, uh, jump back in time a year or two...
Don't wanna, can't make meeeeeeee
-
Given previous experience, they are a pain in the ass that don't properly fit in a rack.
On the contrary. I have an HP tower server sitting in the bottom of my office's rack right now.
-
I'd personally draw the line at "laptop".
Hey, asshole, this is my home web/ftp server:
http://i.imgur.com/HAa3nAx.png
That's my old Lenovo laptop.
The screwdriver is to keep the screen open just wide enough to keep it from hibernating, but not so wide as to turn the screen on. Plus, ventilation. (that's also why it's being propped up by a 500MB hard drive)
Plus, built-in KVM (if the keyboard worked right, or the trackpad worked at all).
And built-in UPS (well, the battery is all but shot so maybe 5 minutes, though it is hooked into a legit UPS).
AND AND it's "rack mountable". By rack, I mean shelf. Except I'm out of shelf space, so it's on my old desk in the basement that I don't use. And I don't use the desk because my main computer is a laptop!
See how it all comes full circle?
Filed under: I'm not sure whose point I'm making anymore.
-
So, I guess ARM servers aren't real servers?
They're capable of being very nice servers, as they run a bit cooler than Intel(-compatible) parts at full tilt. Reducing the amount of cooling you need is a wonderful benefit when running at real scale.
How many gigaflops per kilogram can you achieve in practice?
-
That's what you get if you can't find a package maintainer for the particular distro you're using to prepare the package for you.
In the article, one of the things he mentioned breaking Lua was that web servers need to run as root to bind to port 80. IMO, all systems that support proper security (no matter whether it's Windows or *nix) need a "god" account to bind to ports under 1024, so you could argue it's a flaw associated with the web standard. Apache and its derivatives work around this by chrooting and running as an "httpd" account that can do less destructive things than you usually can with root.
At the end of the article it mentioned PHP; when I installed a website years ago on a box that already had another PHP website running, I also ran into dependency hell. It took some chroot magic to make it believe all the conflicting libraries did not exist before it finally worked.
-
all systems that supports proper security (no matter it's Windows or *nix) needs "god" account to bind to ports under 1024
Or, better:
setcap cap_net_bind_service=+ep /usr/sbin/httpd
-
Nice to learn something new here.
I'd prefer more detail on the iptables approach, though, because that needs no modification of the binary. I suspect the need to manually modify the binary after each update is setting a trap for the next person assigned to admin the server.
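For reference, the iptables approach is a single NAT rule. A sketch, with illustrative port numbers: incoming port 80 is redirected to an unprivileged port 8080 where the daemon actually listens, so it never needs root or setcap.

```shell
# Redirect incoming TCP port 80 to 8080 (rule application itself needs root,
# but the daemon behind it can run fully unprivileged).
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
```

One caveat: PREROUTING only sees traffic arriving from the network; connections made from the host itself to port 80 need a matching rule in the OUTPUT chain.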
-
Another approach is to run nginx or something like that on port 80 and have it forward the interesting connections to you. Like that, it can serve up the static files for you so that you can focus on the complicated stuff.
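A minimal sketch of that setup, assuming the app listens on 127.0.0.1:8080 and static files live under /var/www/static (both paths illustrative):

```nginx
server {
    listen 80;

    # Serve static files directly, bypassing the app.
    location /static/ {
        root /var/www;
    }

    # Forward everything else to the unprivileged app server.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```

nginx binds port 80 once at startup and drops privileges for its workers, so the app itself never touches a privileged port.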
-
It takes some chroot magic to make it believe all the confliciting libraries does not exist and finally it works.
And that's what Docker does: it creates something resembling a chroot environment by overlaying different filesystem images and a sufficient amount of kernel magic, then runs a process in that environment.
That's why running Discourse or a more complicated app in Docker "just works" - it grabs everything it needs and goes to work somewhere in a corner.
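As a sketch of that idea, a minimal Dockerfile (base image, package, and paths are all illustrative): each image carries its own copy of PHP and its libraries, so two apps with conflicting dependencies never see each other's files.

```dockerfile
# The container gets its own filesystem tree, so this PHP install
# cannot conflict with whatever is on the host or in other containers.
FROM debian:stable
RUN apt-get update && apt-get install -y php-cli
COPY app/ /srv/app/
WORKDIR /srv/app
CMD ["php", "-S", "0.0.0.0:8080"]
```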
-
So it seems that really 90% of the problem he experienced is indeed his own fault.
-
I'd like to outlaw vi
HERETIC!!
and mandate that any developer who writes software without using source control has their hands surgically removed.
Redemption.
-
outlaw vi
vi is horrible. everyone knows that vim is where it's at.
Filed under: out of context mobile quote
-
But, damn, is it secure!
If I wanted that sort of security, I'd tie my data to a few hundred thousand tons of concrete and tip it into an oceanic trench. Hack that, assholes!
-
The age-old security/convenience tradeoff there...
-
@Lorne_Kates said:
That's my old Lenovo laptop.
The screwdriver is to keep the screen open just wide enough to keep it from hibernating, but not so wide as to turn the screen on. Plus, ventilation. (that's also why it's being propped up by a 500MB hard drive)
You must be a fan of Louis Slotin https://en.wikipedia.org/wiki/Louis_Slotin#Criticality_accident .
-
vi is horrible. everyone knows that vim is where it's at.
Well, yes. But vi is vim's cousin, so outlawing vi is still heretical because vim would be next!