...and ISOs I guess they needed to do eventually since Microsoft Store uses them.
ISTR MSFT "shipping" installation ISOs for products long before Windows shipped with any sort of software to handle said files.
It doesn't "assign" the value to x.
Well, if we're getting pedantic, the local variable declaration sets aside sizeof($Class) space on the stack, then $Class's ctor is called to twiddle the bits in that space until they're "right". ;)
It does not assign - it default constructs.
$Class's default ctor twiddles the bits in the hole set aside by the variable's declaration (and maybe in heap holes, depending on how it was written) until the default ctor's code is done executing. Better?
This copy assigns.
Sure. And it does that by way of $Class::operator= (which might be written in terms of $Class's copy ctor.)
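If it helps to see it spelled out, here's a minimal sketch (Widget is a made-up stand-in for $Class) showing which special member runs where:

#include <iostream>

struct Widget {
    Widget()                         { std::cout << "default ctor\n"; }
    Widget(int)                      { std::cout << "int ctor\n"; }
    Widget(const Widget&)            { std::cout << "copy ctor\n"; }
    Widget& operator=(const Widget&) { std::cout << "copy assignment\n"; return *this; }
};

int main() {
    Widget x;    // default-constructed in place; no assignment happens
    Widget y(7); // constructed directly from 7
    x = y;       // only here does Widget::operator= run
    return 0;
}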
Edit: Why do the smilies look like they're laughing? (And why does the frowning face look horrified?)
else if(true)
I was unclear on the scoping rules for variables declared in an if clause, so I created a nested scope with as few keypresses as I could at the time.
TRWTF is that the preview indicates that I can't put a code block inside a comment quotation.
...all I know is rand() is going away eventually and you should use the <random> header instead.
For real code? Fuck yes. For one-off posts to "lively" programming forums? Ehhhh.
Edit: Hmm. Neither clang(3.6.2) nor G++(4.9.3) complains about the use of rand when I compile with --std=c++14. shrug
Barely - rand() is deprecated
Odd. G++ doesn't whine at me when I compile that code with --std=c++11 -Wall -pedantic
Edit: You did notice that that's rand(3) from stdlib.h, right?
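For completeness, the <random> way of getting the same coin flip would look roughly like this (a sketch; the engine and distribution choices here are just one reasonable option):

#include <iostream>
#include <random>

int main() {
    std::random_device rd;                         // nondeterministic seed source
    std::mt19937 gen(rd());                        // Mersenne Twister engine
    std::uniform_int_distribution<int> coin(0, 1); // fair 0/1
    std::cout << "doFun: " << coin(gen) << "\n";
    return 0;
}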
It's been a while, but I think that these are all correct statements:
Sometype x; //calls Sometype() and assigns the value to x
Sometype y(7); //calls Sometype(7) and...
x = y; //calls x.operator=(y) (or however you spell it)
Whoa. It gets stranger:
$ cat blah.cpp
#include <iostream>
#include <stdlib.h>
#include <time.h>
bool doFun() {
    return rand()%2;
}

int main() {
    srand(time(NULL));
    if(bool thing=doFun()) {
        std::cout << "thing " << thing << "\n";
    }
    else if(true) {
        if(doFun()) {
            std::cout << "df 0, thing " << thing << "\n";
        }
        else {
            std::cout << "df 0, thing " << thing << "\n";
        }
    }
    return 0;
}
$ g++ -Wall -pedantic -std=c++98 blah.cpp && ./a.out
df 0, thing 0
It's also valid C++11.
I did C++ for years and never knew that this existed, let alone its scoping rules.
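For the record, the scoping rule is that a variable declared in an if condition is in scope for the entire if/else chain and goes out of scope after it, which is exactly what the blah.cpp example above relies on. A stripped-down sketch:

#include <iostream>

int main() {
    if(int n = 0) {
        // not reached: n is 0, which converts to false
    }
    else {
        std::cout << "n is still in scope here: " << n << "\n";
    }
    // n is out of scope here; referring to it won't compile
    return 0;
}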
I think that locking a device's writer to a single process would be doable. I don't know if that's an idiom currently in use though.
Why doesn't Discourse's "highlight to quote" feature copy the formatting of the quoted text? I had to manually fixup the quote.
I'm fairly certain that if you're writing the device driver, you get to determine what happens when someone tries to open the device node for your device. You can return EACCES to all open-for-writing attempts unless it's 12:01 on the third Tuesday of the month, if you like. I know that there are nodes under either /sys or /proc that are root-writeable, but return failure when root attempts to write to them, unless certain conditions are met.
It wouldn't strike me as very strange if I ran into a device that behaved in the way you describe your theoretical TV tuner behaving.
Allowing idiots to point guns at their feet is the Microsoft Raison d'être. The problem is they let the uber-nerds develop the product to the point of un-usability...
MSFT does create power tools, yes. However, I can't agree with the second sentence in the quote. From what I've read MSFT is absolutely caked with layers and layers of bureaucracy and red tape. "Nerds running wild" is -as I understand it- not a thing that happens there.
Upgrades are non-trivial tasks.
If you wish to upgrade from version A to version Z, and you have instructions for upgrading from A->B, from B->C, from C->D, and so on down the line, then the upgrade process is trivial, but -perhaps- time consuming. Always first make correct software, then make that software fast.
Or were you speaking from the perspective of the installer author? If you were, then why should creating an upgrade package be more complicated than writing down the objects to be removed, replaced, and added, the services to be stopped, restarted, and started, and in which order all of these operations are to be performed?
Tedious? Yes. Much to keep track of? Sure. But the complexity is unavoidable: this is work that you have to do if you want to upgrade, rather than uninstall-the-old-and-install-the-new.
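To put the same argument in code form: if each hop is written down, the whole upgrade is just running the documented steps in order. (Everything here -version names and step bodies- is made up purely for illustration.)

#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    // Hypothetical per-hop upgrade instructions, one entry per documented step.
    std::vector<std::pair<std::string, std::function<void()>>> steps = {
        {"A -> B", []{ std::cout << "stop services, swap files for B, restart\n"; }},
        {"B -> C", []{ std::cout << "stop services, swap files for C, restart\n"; }},
        {"C -> Z", []{ std::cout << "stop services, swap files for Z, restart\n"; }},
    };
    // Upgrading A -> Z is then tedious but trivial: apply each step in order.
    for(const auto& step : steps) {
        std::cout << "applying " << step.first << "\n";
        step.second();
    }
    return 0;
}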
JACK should've been where PA currently is.
Agreed. For the longest time, I didn't use JACK because I had the impression that it was devilishly difficult to configure. When I went to use it, I found that it was substantially easier than configuring Pulseaudio back in the ~0.9.7 days. (Indeed, qjackctl makes it so you don't really even have to read the jackd manual. ;) )
Doesn't Apple use it...
I don't know if you're referring to JACK or PA, but a quick search doesn't raise indications of Apple using either one in OS X. I wouldn't be surprised if my search terms sucked.
All I want to do is update 3 files, WHY IS THIS SO HARD!
It might be for the same reason Windows didn't ship with even the most basic CD burning software until Windows XP or so, and -IIRC- Windows doesn't ship with CD image mounting software: MSFT doesn't want to anger the folks who make their third-party-software "ecosystem".
Better to ship a poorly documented pile of legos that enables enterprising third parties to drink from an unending money fountain than to let anyone use your software by properly documenting it and giving it a sane developer-facing interface.
Fortunately it's relatively lag free and at best just works.
That really depends on your hardware. On my laptop, PA works okay enough. On my desktop, PA introduces ~100ms latency (okay, that's totally fine) that varies by +-20% (okay, that's totally not cool). This doesn't happen on Windows, and it doesn't happen when I kill PA and use JACK.
I've changed to JACK on both machines, and am making use of the JACK<->ALSA bridge for non-JACK-aware software. I'm substantially more happy with the results. I do lose the ability to do per-application volume control in kmix, but I can manually set up such a thing with jack_mixer, so it's totally possible to do automatically, but just hasn't been done by the kmix folks.
Guess I have my next project for when I get a pile of round tuits.
...self-healing...
Isn't this just "Reinstall missing or changed files from the copy of the original install file that we squirrelled away on disk for you."? Every Linux package manager does something effectively like that, too. It's kinda a basic feature.
It also supports per-user versus per-machine installs...
It's far from the only installer that does that. ;)
... InstallCast (dedicated tab in Add/Remove Programs to grab installers from) ... But no, no one uses any of those things...
I have literally never seen anyone use this InstallCast thing. Is it big in The Enterprise or something?
But it's not context-aware...
I have no idea what you mean by "context-aware". I thought I made the limitations and intended use of SO_REUSEPORT pretty clear in my comment.
...so it's useless in this context.
That would be why the first words out of my hands were:
Not that this does what APap seemed to want to do, but...
...there's just enough difference in how things are factored to make writing portable stuff very hard.
Remember that time that MSFT went on a quest to discourage the use of the top N insecurely-used library functions in their libc, deprecated like three of the most commonly used libc functions used on Windows, Mac, or Linux, and replaced them with versions that -when you dug a little into their design- didn't actually fix the problems they set out to fix?
Good times.
You can't spin a new apache, because it wouldn't be able to listen on the same port.
Not that this does what APap seemed to want to do, but you can (theoretically) start up a new apache on the same port if you're running on Linux 3.9 or later, but you can't run the second apache instance as a different user than the first. Grep for SO_REUSEPORT here: http://man7.org/linux/man-pages/man7/socket.7.html and maybe look at the LWN article on the option: https://lwn.net/Articles/542629/
SO_REUSEPORT is designed for fair distribution of initial TCP or UDP connections between some number of applications.
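For the curious, the mechanics are roughly this: every process that wants to share the port sets the option before bind(), and the kernel spreads incoming connections across the listeners bound by the same UID. A sketch, assuming Linux 3.9+, with error handling omitted and the port number made up:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    // Must be set before bind() in every process that wants to share the port.
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(fd, 128);
    // ...accept() loop would go here; the kernel load-balances new
    // connections across all sharing listeners...
    close(fd);
    return 0;
}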
How would that help with a lack of ECC?
shrug. I figure that use of a fs that is nominally concerned with data corruption detection is better than not. I intimately understand the implications of GIGO, but lack the intimate understanding of btrfs internals required to understand when it could detect RAM-introduced corruption and when it could not. Obviously, if your checksum gets flipped after you calculate it, you're super-screwed unless you've had the foresight to do something clever.
...even assuming btrfs wouldn't eat your data...
I've been using it for many, many years. The only time I've lost data with btrfs was when I was using an SSD with buggy firmware that decided to write any old thing to disc, rather than what you asked it to write. Of course, I've had btrfs trigger PAX size overflow detection when using transparent compression, run into the recent unlink/mv deadlock issue (that should be fixed in 4.2), and -waay back in the day- had spurious ENOSPACE issues. But those are DoSs (which are obviously entirely unacceptable in situations where uptime is important), not data loss.
Any that requires some kind of resiliency against failure.
Eh. The only failure-resilience schemes I can think of that can't otherwise be achieved with vlans are failure of NIC, cabling, or the upstream switch. Am I missing something obvious? While these are real concerns, you could pop in a good GigE ExpressCard NIC and call it a day.
...you put [IPMI] on a secure OOB network... [s]o this really doesn't matter.
A) Network accessible, poorly coded, root-granting software is always a big deal.
II) If that network isn't physically separate from your other network, then it kinda does matter.
2a) I remember reading about a notable misfeature of a particular IPMI system from a major hardware vendor that would -if no cable was plugged into the maintenance port- make the IPMI system listen on any other active interface.
Note: I'm not one of those "OMG HACKERS WILL GET YOU NO MATTER WHAT YOU DO!11" people. Like most (all?) folks on this forum, I know that deadlines and budgetary constraints mean that No-one(TM) makes more than a token effort to design and build secure systems.
Plus they don't have a sensible cooling setup to be run at full tilt for a decent amount of time. ... they usually have shitty NICs.
My little servers run at full tilt 24/7, and have been doing so for years. I expect that I will -one day- have to replace the cooling fans in them. That day has not yet come.
The only thing wrong with the NICs in them is their lack of support for jumbo frames.
...a lack of ECC...
Yeah, this bothers the shit out of me, too. When btrfs becomes even more stable, I'll switch to it for the root partitions on both machines.
I absolutely require some method of sensible out of band admin...
I prefer my systems to not come with a poorly-or-un-audited network accessible backdoor that less-often-than-not grants full access to the running machine in question. (I've seen a few talks by some rather clever people. The code in these things is often really bad.)
This, and multiple NICs...
Multiple NICs simplify a few network configurations (and potentially give you Nx total throughput), but what configurations can't you replicate with multiple VLANs and a single NIC?
Yes, OK, I forgot about tower servers, jeez.
These weren't tower servers, they were desktop machines. Piles of desktop machines. Google later switched to rack-mountable hardware to reduce handling difficulty, but they started with desktop hardware in non-rack-mountable cases.
I was saying that laptops are not real servers...
...because they don't have IPMI and don't have inbuilt rackmount ears? Is there any other reason?
I'd argue [IPMI & friends are] essential in general...
Reasonable people disagree about this. shrug
Something with rack mount ears/rails and an IPMI/ILO
IPMI & friends are really nice if you're operating at scale. If you only have a few on-premises machines, they're totally optional.
@FrostCat said: What about a tower server? HP and Dell used to sell those--we've got a couple of 'em moldering away in a closet.
Given previous experience, they are a pain in the ass that don't properly fit in a rack.
You seem to be confusing form with function, while also forgetting that Google started out with an enormous pile of whitebox machines, ten percent of which were on fire at any given moment.
you seem to have done a good job of falling into the Reductio ad absurdum trap.
There's nothing absurd about talking about shipping products from major manufacturers: http://www8.hp.com/h20195/v2/GetPDF.aspx/c04384048.pdf http://www-03.ibm.com/systems/z/
So, my Thinkpad is a real server, but its Dell twin is not. One out of two ain't bad.
You've missed the point.
Ah. I mis-parsed the original statement.
I was talking about real servers...
So, I guess ARM servers aren't real servers? Or are servers only IBM mainframe hardware?
Realistically, because nearly everything that isn't based on exotic hardware is all 32 or 64-bit Intel-compatible machines, you're going to have to try hard to pull this out of the No True Scotsman trap.
(who runs Gentoo on a server?!)
I do, on both of my servers. I run stable amd64. One's a Thinkpad and the other's a Dell laptop, but they're both the same hardware, so I build binary packages on one that also get installed on the other.
...you end up redownloading every single package if there's a libc bugfix
This is why /usr/portage/distfiles isn't on tmpfs, is on a drive with a couple of GBs of space, and only gets pruned every month or so.
It still doesn't mean you can go about yelling "FIRE!" in crowded places.
Search for 'Trope Two: "Like shouting fire in a crowded theater"'. The author is a former federal prosecutor and currently a practicing lawyer.
Go to the FCC headquarters, and put in two massive antennas...
Given that you can't transmit much more than a watt, you'll either have to get on FCC property to do much -in which case your antenna will be confiscated- or you'll need a highly directional antenna -which won't wreak the havoc you seem to intend-.
Edit: Honestly, you'd get better results by using your directional antenna to spew forged DEAUTHENTICATE packets, but then you're not protesting FCC frequency management policies, but rather the insecure parts of WiFi's protocol design.
i would have suggested the WEB NC17 panda...
These are the first three results for a GIS of "Web NC17 panda": http://www.mabsland.com/Adoption.html
OTOH, I'm not at work.
... [Maybe] they were using GitHub to host something they were harassing others with instead of actually writing code.
Its first appearance in archive.org's cache looks pretty innocuous: https://web.archive.org/web/20140903025840/https://github.com/WebMBro/WebMConverter
Is it better than it was a few years ago?
I've been using Gentoo for -fuck me, I'm old- a decade. If you tell me what you hated, I can probably tell you if it's no longer there.
...SVN with Github seems to work just fine and doesn't require ANY shenanigans whatsoever to make things Just Work.
It's good that you found something that worked. From your OP, it sounds like the only problem you had with git was a misattributed commit (as found by accalia here), which was a huge pain to fix. Do I have that right?
That's irrelevant packages.
If Mint['s graphical package installer] failed to install that, it is a big bug.
I've run into my fair share of never-used-by-anyone-who-cares-to-maintain-them buggy GUIs sitting on top of totally solid package managers before. I don't understand how they get shipped, but they do.
All I know is [T]he Mint application installer thing [didn't have Mono as a dependency for Monodevelop].
That sucks. Either the Mint packagers seriously fucked up, or there's an "Also install recommended packages" checkbox in the Mint application installer thing that needs to be checked. Both Ubuntu and Gentoo Linux know that Monodevelop has a dependency on Mono:
# lsb_release -d
Description:    Ubuntu 15.04
# apt-get install monodevelop
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  [snip] libmono-2.0-dev libmono-accessibility2.0-cil libmono-accessibility4.0-cil [snip]

# lsb_release -d
Description:    Gentoo Base System release 2.2
# emerge -p monodevelop
These are the packages that would be merged, in order:
Calculating dependencies... done!
[snip]
[ebuild  N    ] dev-lang/mono-3.2.8  USE="nls pax_kernel -debug -doc -minimal -xen"
[snip]
[ebuild  N    ] dev-util/monodevelop-3.0.2-r1  USE="git subversion"
[snip]
You have my sympathies. Were you also in charge of SVN-user training and support during the transition?
Full access implies shell access...
Nah.
I guess you've not administered an SVN server. You can prevent folks from reading and/or writing to certain paths within a repo. See: http://svnbook.red-bean.com/en/1.7/svn.serverconfig.pathbasedauthz.html
Though, I guess I could have been more precise. All -I guess- you'd need would be full read access to a repo.
It's clear that NTFS uses inodes-in-everything-but-name to refer to files behind the scenes. The fact that you can rename an exclusively-write-locked file is a good demonstration of this fact. It's clear that you're aware of the first of these facts.
Duh, but on Linux, you do not know it's happening...
With the proliferation of hand-rolled installer software on Windows, you as a user also have no way of knowing whether an old DLL has been renamed out of the way and replaced with an upgraded one. Is this Doing It Wrong(TM)? Maybe. But Windows Makes It Possible(TM) so some shitlord already has done it.
...in Windows with its far superior file caching...
Speaking as a five-year Windows dev and a more-than-five-year Linux dev, I have observed that -as a developer and a user- Windows's file caching is seriously inferior to Linux's.
Maybe you have some benchmarks or something that show MSFT's implementation kicking the shit out of the default Linux implementation. As a user, I have to say that I don't care. Windows's file caching is objectively inferior.
Then goes on to talk about svnadmin hotcopy.
I guess I was unclear. That's server-side backup which (when last I used it) requires shell access.
Is there official SVN Cabal-maintained software for doing remote backups for when you have full access to a repo, but cannot acquire shell access to the server which contains the repo?
Because you never updated the buggy one. You just renamed it and then put another version of it at the old path.
This is exactly what happens on Linux. Anyone who has the old version open keeps referencing it until they go to open it by name again. Anyone else who goes to open the replaced DLL will get the new one.
This is shockingly similar to how unlink(2), (followed by a copy of the updated file) works on *nix. Some might go so far as to call it identical.
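Here's a sketch of that sequence on the *nix side (made-up path, error handling omitted), showing why anything that already has the old file open keeps seeing the old bytes:

#include <fcntl.h>
#include <unistd.h>
#include <iostream>

int main() {
    // Set up: pretend /tmp/libfoo.so is the "DLL" already installed on disk.
    int fd = open("/tmp/libfoo.so", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    write(fd, "old contents\n", 13);
    close(fd);

    // An application opens it and keeps it open.
    int oldfd = open("/tmp/libfoo.so", O_RDONLY);

    // The updater unlinks the name and writes a replacement at the same path.
    unlink("/tmp/libfoo.so");
    fd = open("/tmp/libfoo.so", O_WRONLY | O_CREAT, 0644);
    write(fd, "new contents\n", 13);
    close(fd);

    // oldfd still refers to the original inode, so it reads the old bytes;
    // any fresh open() by name from here on gets the new file.
    char buf[64];
    ssize_t n = read(oldfd, buf, sizeof(buf));
    std::cout.write(buf, n);   // prints "old contents"
    close(oldfd);
    return 0;
}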
Torvalds argues that they can do that, not that they necessarily should.
Yeah. There's plenty of room in git-land (using software like gitolite) to use git almost exactly like one uses SVN. If you slapped a suitably restrictive frontend over git, you could make the user experience exactly like SVN. (Though, one would need to do something slightly clever to simulate and store SVN properties.)
Nope. svnsync can quite easily do it...
True. There are tools that will -if you have the access rights- remotely read out an entire repo and make a backup of it. When I used them, they were slow as -what is it now?- Frozen mole asses?
The one I used was unreasonably fiddly, and failed to handle errors (like dropped network connections & etc.) well. No idea why. Maybe it was someone's freshman CS project or something.
Is there official tooling for this sort of backup, and is its speed substantially slower than one would expect for the task being performed?
That sounds like the older db repo format.
It might have been, yeah. I have vague memories of pushing really hard to move to the sharded repo backend store, but I cannot remember if that change ever happened. I guess if it happened in the 1.6.x era, was easy to do online, or was not too time-consuming offline, it happened over a weekend some time.
In Windows a file can't be deleted when it's open...
True. But it can be renamed in almost every case. Thanks, Microsoft!
...this is exactly Blakey's use case.
Blakeyrat makes a billion checkpoint commits throughout the day? You'd figure that he'd HATE SVN with a burning passion. Those server roundtrips are brutal.
That mindset is so alien to me...SVN is easy to set up, easy to use, easy to configure, and most of the time Just Works. So Torvalds set out to do the exact opposite of that? And apparently succeeded.
$ mkdir git-repo ; cd git-repo ; git init . ; touch hello ; git add hello ; git commit -m "Hello world"
If you can read bash, then you can see that it's substantially easier to get your first git repo up and running with its first commit than to stand up an SVN server. (This is a red herring, I know, but it was a red herring in a barrel!)
When I think back to the time before I knew anything about SVN, I can honestly say that basic proficiency in git was no harder to achieve than basic proficiency with SVN.
I just used Github's tutorials, which are apparently inadequate and/or not quite correct.
Github's tutorials are actually quite adequate and correct.
I'm still standing by the assertion that the Spigot tutorial (which you haven't acknowledged reading) changed your git Name and Email address. The git log from earlier in the conversation strongly suggests that this is what happened.
You can't blame a tool for attributing a commit to the Name and Email address that you had -at the time- told it to attribute the commit to. (Well, I mean you can, but then you're no better than blakeyrat, and he's a troll.)
In every group, there's always one guy who goes into his cave for a month at a time and sucks up everyone else's code, while never committing anything.
Holy Christ, dude. I hate that guy, too. I pull the drive out of his PC to get his code.
All I'm looking for is to not be forced to commit WIP code that breaks the test suite. Some features take more than eight hours to build, you know? Some features take more than a week, too. It's nice to not worry about scrutiny when you've a long, difficult heads-down project ahead of you.
How often does that actually happen?
This particular case? Not often.
Look. I went from using SVN for four years, to using git for two. I knew nothing about git going in, but -after I learned the basics and played around with it for a week or so- I found that I no longer had to refrain from making the otherwise-perfectly-reasonable changes I would avoid because I knew that SVN would choke on them, and leave me with a mess to manually clean up (at best), or roll back and start over (at worst).
I can't give specific examples, as this shit has been lost to the winds of time (and drink). Just know that I'm not a software zealot, and that git made software development with a VCS as close to a hassle-free experience as I think is possible.
Is that in your usual workflow? Man your workplace sucks.
Nope.
The fact that you can move a git repo from one host to another without doing much more than a "git init" on the remote host and a "git push" on the local host is what makes git decentralized, regardless of how "most" people use it.
For example, just because almost no one makes use of X11's -or OpenGL's- network transparency doesn't mean that they're not network-transparent protocols.
The only way that works is if your work is so isolated that no one else cares.
Just as @sloosecannon said, when I say "rebase twice a day", that means when I roll in to work in the $MORNING, I merge in the latest changes from the dev branch into my own, resolve any merge conflicts, run the test suite, then keep working. Then in the $EVENING, I do the same thing.
It works really well, actually; the process doesn't need much modification for the case where you have multiple devs working on a single branch.
If it makes you feel better, you can think of it as working on a feature branch that's destined for eventual merge into the dev branch.
the overall Git workflow is still centralized for everyone who's not Linus Torvalds.
Fuck me! I guess the ID cards of me and my friends have been misprinted. I must alert the authorities immediately!
Here's the use case to consider.
Say you have your multi-gigabyte, hundred-thousand-commit SVN repos hosted with a third party that doesn't grant you shell access, but does provide you with a web page to create new repos and do user management for each one.
You want to move your repos to a new third party. What do you have to do, and how much time does it take to do it?
With git, it's trivial, as everyone has a copy of the repo already.
With SVN, not so much. Not at all, in fact. You pretty much have to either have shell access of some kind to do a dump of the repo in order to move it, or have had the forethought to set up a postcommit hook to email the commit details to you. (But I'm not sure how well email backup would preserve SVN metadata.) It's been a very long time since I've had to think about any of that shit.
But it had better be before you leave for the day, or else you're an idiot...
If I promise to do that, may I please work in my cozy dev cave until my work is ready for public consumption, Buildmistress?
Which is probably why every single place that uses Git has a centralized server that keeps track of your data.
Nah. That's a convenient common source for that data. Everyone who has checked out a copy of that repo is also now a source of that data.
If Github dies today, the only things lost are a bunch of now-broken links and Github Issues. If -say- SourceForge decides to take their SVN servers offline, the two people who still have active projects there will lose all of their history.
Apples and oranges, man.
[I was told] ... SVN doesn't have branches.
Whoever told you this is clearly a wanker, and should maybe be fired.
Git's merge handling is magical when compared to SVN's. Change something in a file in master, but that file has moved around in dev? That'll merge no problem. Rename a file, but wanna merge? Again, no problem.
When last I checked (It's been a while... SVN ~1.7.x, right around the time of the goddamn SQLite WC rewrite), to get something approaching 10% of the power of git's merging required a ton of roundtrips to and grinding of the SVN server to reconcile all of the mergeinfo metadata.
Unless the SVN folks have done a lot of work in this area (and they very well may have!), git kicks the shit out of SVN when it comes time to do a complicated merge.
I'm much more concerned that I only allow the right people to perform the right tasks and that my audit trail is meaningful.
Then use gitolite or something similar as a central repository. It authenticates using SSH keys and lets you assign read/write/create privs to each repo based on the key presented.
But, frankly, if you don't trust your goddamn programmers, then you've seriously hired the wrong people. :)