Boost::Fuck! (the git command)


  • Banned

    @Kian said:

    I wouldn't say that's "irrelevant". If C++ libraries were a zip file with the whole source, and you then compiled that zip file when building the executable, you wouldn't have ABI issues either.

    That's why I like Open Source. I couldn't care less about the ideology, but OSS libs are the only truly portable ones.

    @Kian said:

    Being able to losslessly convert back to source means being given the source.

    No, it's... ah, forget it.

    @Kian said:

    Surprisingly, templates aren't usually the culprit of terrible compilation times. Headers are. If you want to improve compilation times, try to limit headers as much as possible (of course, templates have to live in headers, so you're double screwed there).

    When using a lot of boost::assigns, boost::spirits and similar, template specialization costs more time than parsing headers.
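
    Either way, "limiting headers" mostly means forward declarations instead of includes; a rough sketch, with made-up names:

    // widget.h -- everything that includes widget.h no longer has to parse gadget.h.
    class Gadget;               // forward declaration is enough for pointers/references

    class Widget {
    public:
        void attach(Gadget& g);
    private:
        Gadget* gadget_;
    };

    // Only widget.cpp pays for the full definition:
    //   #include "widget.h"
    //   #include "gadget.h"
    //   void Widget::attach(Gadget& g) { gadget_ = &g; }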



  • @EvanED said:

    One of these days I want to play around with trying to use extern template or something to improve things

    I've toyed with the idea of creating a tool that will append every source file in a project, then compile the whole thing as one (maybe split it into a few, to make use of multiple cores). Most projects would probably get huge gains from that.
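
    Roughly what I have in mind would be a generated file like this (all the file names are made up):

    // unity.cpp -- hypothetical output of such a tool.
    // The build compiles only this file (or a handful like it, one per core),
    // so shared headers get parsed once instead of once per source file.
    #include "lexer.cpp"
    #include "parser.cpp"
    #include "codegen.cpp"
    #include "main.cpp"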

    @Gaska said:

    OSS libs are the only truly portable ones.
    So long as you can get them to build at all :P

    @Gaska said:

    When using a lot of boost::assigns, boost::spirits and similar, template specialization costs more time than parsing headers.
    Ugh. Tried spirit once. Couldn't make sense of it and gave up. I prefer to stay away from boost. Which is sometimes a pain, because most of the answers on sites like stackoverflow (I know, a favorite around here) are "use boost!", even when you specifically ask for non-boost solutions.

    EDIT - Why does Discourse offer to replace ':P' with '😛', if it's going to show a picture anyway?



  • @Kian said:

    I've toyed with the idea of creating a tool that will append every source file in a project, then compile the whole thing as one (maybe split it into a few, to make use of multiple cores). Most projects would probably get huge gains from that.
    Only if you ignore incremental builds, I suspect. That's if it would even be able to compile large projects at all; I suspect it would run GCC out of memory on ours. (With optimization, I can only build with -O2; otherwise my 16 GB of memory fills up. And for some reason when that happens on my computer, instead of the OOM killer doing something, it pretty reliably just freezes completely.)

    @Kian said:

    most of the answers on sites like stackoverflow (I know, a favorite around here) are "use boost!", even when you specifically ask for non-boost solutions.
    Most of Boost is pretty nice. I wish that using the simpler libraries in it didn't pull in so much (again for compilation time reasons), but it has so much useful stuff that it easily wins out over cobbling together 10 other special-purpose libraries and/or writing your own replacements for the useful things it has that are absent from the standard library.


  • Banned

    @Kian said:

    I've toyed with the idea of creating a tool that will append every source file in a project, then compile the whole thing as one (maybe split it into a few, to make use of multiple cores). Most projects would probably get huge gains from that.

    There are already existing solutions to it. The problem is you throw away all the benefits of incremental compilation.

    @Kian said:

    I prefer to stay away from boost.

    +1

    @EvanED said:

    Most of Boost is pretty nice.

    -1



  • @Gaska said:

    Most of Boost is pretty nice.

    -1


    So what would you do for shared_ptr, function, unordered_map, random, regex, and static assertions in a non-C++11 environment? bimap? Command-line parsing? optional? Even some of the preprocessor library has been quite helpful. Boost.Test is borderline too; there are better competitors, and I recently "had to" dump it because of build issues.

    OK, maybe that's not "most", but there's a lot in Boost that is very very useful, especially if you don't have C++11. Like I said, you could cobble together implementations of those from other libraries and/or write your own (e.g. static_assert is easy; unordered_map would usually be a big WTF to implement yourself), but Boost provides good implementations of these in a one-stop shop.


  • Banned

    There are 133 libraries in Boost. Most of them are very domain-specific, like Octonions or Multi-Index. Others have been superseded by C++11 features like lambdas and initializer lists. There are a few that are still useful in the common case, like Optional and Filesystem. The rest are pure abominations that should never have been written in the first place - for example ScopeExit, Parameter, MSM.



  • @EvanED said:

    So what would you do for shared_ptr, function, unordered_map, random, regex, and static assertions in a non-C++11 environment?

    Install a compiler with C++11 support? It's less painful than installing boost.

    Why does a group of libraries need to be installed and bootstrapped? They should just sit there in a directory and let me grab what I need when I need it!


  • Banned

    @Kian said:

    Why does a group of libraries need to be installed and bootstrapped?

    Dynamic libraries?



  • @Kian said:

    Install a compiler with C++11 support? It's less painful than installing boost.
    So all I have to do is (1) convince everyone in charge to upgrade compiler versions on all our platforms, (2) actually upgrade it everywhere, (3) enable -std=c++11, and probably (4) fix the hundred things that break, and that's supposed to be easier than "download and unzip", which is all you need to get most of those libraries I mentioned working? And easier than apt-get install boost-parameter_options boost-regex for the others, or whatever the appropriate names are?

    (I'm probably exaggerating with "hundred things that break", but I would be quite surprised if things still compile & work cleanly with C++11 support enabled.)



    Unless you were using auto or register or stuff like that, which were of dubious benefit in the first place, I don't think there are any breaking changes from C++03 to C++11.

    Also, you should have been keeping up with compiler versions, even if you didn't enable -std=c++11. Compilers have bugs too, and there can be performance improvements.

    Using boost would have required changing the project options too, to add the boost dependencies. Enabling -std=c++11 should, in most cases, not be so costly.

    Finally, some of us code on Windows, and the steps there are more involved.

    Here's a list of the breaking changes. http://stackoverflow.com/questions/6399615/what-breaking-changes-are-introduced-in-c11



  • @Kian said:

    Unless you were using auto or register or stuff like that, which were of dubious benefit in the first place, I don't think there are any breaking changes from C++03 to C++11.
    Just tried it, and yes, there is at least one breaking change that shows up a lot in our code base, both in our own code and in third-party code we include:

    $ cat cpp11.cc
    #include <cstdio>
    
    typedef int int32;
    #define PRId32 "d"
    
    int main()
    {
        int32 x = 5;
        std::printf("The value of x is %"PRId32"\n", x);
    }
    $ g++-4.9 -Wall -Wextra -pedantic cpp11.cc  # no warning or error here
    $ g++-4.9 cpp11.cc -std=c++11 
    cpp11.cc:9:17: warning: invalid suffix on literal; C++11 requires a space between literal and string macro [-Wliteral-suffix]
         std::printf("The value of x is %"PRId32"\n", x);
                     ^
    

    That would turn into an error for us because we compile with -Werror most of the time (because we're at least mildly sane), and a similar thing will just straight-up produce errors in other contexts (e.g. error: unable to find string literal operator 'operator"" PRIxS'), though I'm not sure exactly how those contexts differ.



  • I know it would require a bit of retyping, so it's not 100% transparent, but I'm not sure what the problem in your example is? If you add the spaces that it requests:

    std::printf("The value of x is %" PRId32 "\n", x);
    

    the macro becomes

    std::printf("The value of x is %" "d" "\n", x);
    

    which passes the string

    "The value of x is %d\n"
    

    to printf. Of course, you might not be able to change the third party code, so that sucks.

    Also, that looks like a dumb macro to have. Not blaming you for it, just curious what it enables.

    And that typedef is the worst. int isn't guaranteed to be 32 bits. It should be followed by a static_assert or something.



  • @Kian said:

    I know it would require a bit of retyping, so it's not 100% transparent, but I'm not sure what the problem in your example is?
    It's still stuff that would require fixing. It's not hard fixing, but then, few of these things would be. (It's also not hard to install Boost, even on Windows if you get a prebuilt one. :-))

    @Kian said:

    Also, that looks like a dumb macro to have. Not blaming you for it, just curious what it enables.
    It means that you can write a truly platform-independent format string. You can't say %d because if int is a different size then the format string would be wrong. (Or if you had 32-bit longs and int32 were typedefed to that instead, probably %d would give a warning in that case.)

    This is exactly why <cinttypes> defines these macros.
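
    For anyone who hasn't run into them, a minimal sketch of the standard versions (assuming a compiler/library that ships <cinttypes>; some older ones also want __STDC_FORMAT_MACROS defined first):

    #include <cinttypes>
    #include <cstdio>

    int main()
    {
        std::int32_t x = 5;
        std::int64_t big = INT64_C(9000000000);
        // Note the spaces around the macros; C++11 wants them next to string literals.
        std::printf("x = %" PRId32 ", big = %" PRId64 "\n", x, big);
    }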

    @Kian said:

    And that typedef is the worst. int isn't guaranteed to be 32 bits. It should be followed by a static_assert or something.
    For crying out loud, it was an example. In the real code it's effectively

    #if SIZE_OF_INT == 4
    typedef int int32;
    typedef unsigned int uint32;
    #else
    #error "Unsupported"
    #endif
    

    Other types, especially for potentially-64-bit types such as intptr_t and int64, have other alternatives. SIZE_OF_INT is set by the configuration step. Also, I'm anonymizing slightly; our identifiers have prefixes.
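
    (And if you want something like Kian's static_assert without C++11, the old negative-array-size trick is one option; a rough sketch:)

    // Pre-C++11 "static assert": the array size becomes -1, and therefore a
    // compile error, whenever int is not 4 bytes.
    typedef char int_is_32_bits[sizeof(int) == 4 ? 1 : -1];

    typedef int int32;
    typedef unsigned int uint32;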



    Another example of the unspecified size of primitive types biting a developer.


  • Banned

    Why don't you just use <cstdint>?



  • @Gaska said:

    Why don't you just use <cstdint>?
    Did you miss "might not have C++11"?

    And actually, this even goes beyond "we aren't yet using C++11 even now for most of our code", because our code base long predates C++11. I wouldn't be at all surprised if it predates even C99's version of that header. And because our names don't agree with those in stdint.h, we would have a major search-and-replace job to convert to something that we don't always have available and that gives little benefit.



  • @EvanED said:

    For crying out loud, it was an example.

    I wasn't sure how much of it was copied from your code base and how much was shorthand for demonstration purposes. I don't put anything past a production code base.

    @EvanED said:

    You can't say %d because if int is a different size then the format string would be wrong. (Or if you had 32-bit longs and int32 were typedefed to that instead, probably %d would give a warning in that case.)

    Ah, yeah. Variadic arguments. Because C loves losing as much type information as it can as fast as possible. EVERYTHING IS AN INT!



  • @blakeyrat said:

    Now I have to use Git to develop an app that only runs on Windows servers, fucking Linux assholes.

    How is that their fault?



  • Without them, OSS wouldn't be a popular thing, Git wouldn't exist, and his boss wouldn't have forced it upon him?



  • @Bort said:

    How is that their fault?

    You... do know that Git was written by Linus Torvalds, right?


  • FoxDev

    And then the Linux kernel source was moved to git, which of course meant that so many others went ooh, shiny! and switched without thinking whether it was right for them



  • @RaceProUK said:

    And then the Linux kernel source was moved to git, which of course meant that so many others went ooh, shiny! and switched without thinking whether it was right for them

    Well... git was designed to be used to manage the source code of one of the most complicated pieces of software on the planet, with over 10,000 developers and no central work coordination. Of course it will be perfect for this tightly managed team of five.



  • @Jaime said:

    one of the most complicated pieces of software on the planet,

    Putt-Putt Joins the Circus?!



  • @powerlord said:

    You... do know that Git was written by Linus Torvalds, right?

    Linus wrote Git, so that's the reason why blakey is forced to use it? Is blakey's boss buddies with Linus - kind of like how we ended up with Discourse?

    @Kian said:

    Without them, OSS wouldn't be a popular thing, Git wouldn't exist, and his boss wouldn't have forced it upon him?

    So you're saying his boss isn't capable of independent decision making... hmmm... yeah... might be right about that one...



  • @Bort said:

    So you're saying his boss isn't capable of independent decision making...

    No, I said that if OSS wasn't popular right now, and Git didn't exist, his boss wouldn't have forced it on him. I don't know his boss well enough to judge him one way or the other.



  • @Kian said:

    No, I said that if OSS wasn't popular right now, and Git didn't exist, his boss wouldn't have forced it on him.

    You mean to tell me that his boss wouldn't have forced him to use software if it didn't exist?! No way!

    And the fact that it exists is the only reason it was chosen?

    Or is it the fact that it's popular?

    What do you think of the independent decision making skills of a person who chooses something because it's popular (when alternatives like SVN are also well-known)?

    Whatever.


    <del></del> doesn't seem to work across multiple lines



  • You know you can just click reply, right? No need to quote my entire post right after me 😄

    I see what you mean. I'm not saying Linux assholes are the only people to blame, but they certainly established the sine qua non condition required for Blakey to suffer. So it's easy to see why he blames them.

    There can be additional parties to blame as well, but it doesn't look like he cares about those as much. And he obviously doesn't blame his boss, because he's being paid to suffer and he accepts that. He simply curses the people who made the implement of his torture, not the one applying it.

    I like git, personally. I mean, the idea of git. I use git without being forced to.



  • @Kian said:

    I see what you mean... So it's easy to see why he blames them.

    Oh, look a reasonable person who tries to see both sides of the argument hurr durr...

    What are you doing here!?



  • There's a lot about git that I genuinely love.
    Much easier to use than ClearCase, SVN or CVS, and it has a lot of features that Perforce doesn't.
    The other one I've used is VSS. Which is not a real option.

    Never used TFS so can't comment. Maybe it's the ultimate solution to the world's source and configuration management problems, though I doubt it.



  • @Jaime said:

    Well... git was designed to be used to manage the source code of one of the most complicated pieces of software on the planet, with over 10,000 developers and no central work coordination. Of course it will be perfect for this tightly managed team of five.
    Actually... this is coming from a Git fan, but I think it works best for small projects as well as very large things. There's sort of an in-between area, where the project is too large (or too loosely coupled) to comfortably fit into a single repository but you could still operate in a centralized manner because you're in a company or whatever; there the drawbacks of Git make some other VCSs like Subversion look tempting. For small projects that don't have the "split into multiple repos" problem, I think Git is a better CVCS than almost any CVCS. (I also haven't used TFS or Perforce, so I can't comment too much on those.)



  • @lightsoff said:

    Never used TFS so can't comment. Maybe it's the ultimate solution to the world's source and configuration management problems, though I doubt it.

    TFS has some problems that annoy me terribly on occasion:

    1. If you add a new file to a project, then undo that add in pending changes, then check in, the build server ends up with a project file that references a file that no longer exists, and breaks, so you have to 'remove' the file in solution explorer instead. This is because TFS and the project reference the file separately, and I can't really think of a good way around it.
    2. If you move directories around, sometimes getting latest does not correctly get the new locations of directories, but updates the files in them, resulting in you being unable to build unless you do a full-overwrite-get-specific.

    But I can deal with those. Shelvesets, which are what Blakey wishes Git had, can be rather convenient. Overall, TFS is very nice to deal with, though I have little-to-no experience with anything else, and can therefore make no comparisons.


  • FoxDev

    @Magus said:

    If you add a new file to a project, then undo that add in pending changes, then check in, the build server ends up with a project file that references a file that no longer exists, and breaks, so you have to 'remove' the file in solution explorer instead. This is because TFS and the project reference the file separately, and I can't really think of a good way around it.

    That would happen with Subversion, git, hg, CVS, etc. 😛



  • Yeah, I kind of figured. It's annoying, but a consequence of how these systems work, and the only way you could do it better is to have the project somehow linked to source control rather deeply, and that would cause other problems.



  • @Kuro said:

    @cartman82 said:
    Technology is moving <del>forward</del> to the right.

    Filed Under: FTFY

    Don't do that. You'll create a culture that
    Restricts commands only to those who are privileged.
    Only certain classes will have access to rights.
    Users will be forever stuck with roles.
    1% of the users will control 99% of the system.
    There will be massive output gaps.
    Only certain devices will be allowed to input.
    Everyone will have to ask the "man" how to use programs.
    Everyone will answer to "the system".
    The administrators will control everything.



  • Besides, @cartman82 is wrong, technology can't move forward, because it is perfe
    ct as it is. We can't be having with these disgusting 'new' ways of doing things
    . It worked perfectly before, and it's still the best way to do things. Why, I c
    an't believe someone would even *try* to change things, since it would clearly b
    e worse!


  • Technology should move forward faster than we can check if the foundation can support the rails.

    Why ensure our new ideal is stable? We are wiser than anyone else, and anything that doesn't fit into our way of thinking must be wrong.

    Roles and privileges are bigoted. We should support freedom of expression for everyone that fits within our narrative.

    The only thing that matters is the user stories.

    Test driven development doesn't allow you to be flexible for the ever changing user environment.

    Can you say your software will be ready to transition into virtual reality?

    Why worry about making your software fit the current platform, when we are busy ensuring that it will change tomorrow?



  • I concede my defeat. 80-character line lengths are far less painful to read than SJW language. But I'm afraid I cannot like a post containing it, either.


  • :belt_onion:

    @Magus said:

    Shelvesets, which are what Blakey wishes his Git tools had, can be rather convenient.

    git stash is pretty much exactly the same thing :)



  • @sloosecannon said:

    git stash is pretty much exactly the same thing
    Eh, I don't think that's really true. (Unless... can you push a detached head? If so, then you could stash and then push the stash. I suspect that wouldn't work very well.) That solves one of his problems, but not the other; it sounded like a lot of his question was spawned by what you do when you leave, so that your changes don't get lost if your hard drive dies, and git stash... doesn't do one iota to solve that problem. I'll admit that's kind of a nice feature that Git doesn't have a trivial analogue to. I think you'd have to fake it by creating a short-lived branch, pushing that, then deleting it when you come back in. Which isn't the worst thing in the world, but it's a few manual steps if you don't like CLI tools.
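
    Something along these lines, give or take (the branch name is made up, and there are other ways to fold the work back in):

    # before going home:
    git checkout -b wip-end-of-day
    git add -A
    git commit -m "WIP: snapshot before leaving"
    git push origin wip-end-of-day          # now it survives a dead laptop

    # next morning:
    git checkout master
    git merge --squash wip-end-of-day       # changes come back, staged but uncommitted
    git branch -D wip-end-of-day
    git push origin --delete wip-end-of-day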



  • @EvanED said:

    I think Git is a better CVCS than almost any CVCS

    Nope. Until it has granular security, git will never be a good fit in the target market for CVCS - corporate development. Security-by-workflow (I'll accept your push if I like it) works great if you have an army of unpaid developers. It's an extra step and an annoyance for many.

    Also, commit turning into commit-push doubles the number of tasks in the most common workflow for almost no benefit to CVCS customers.



  • @Jaime said:

    Nope. Until it has granular security, git will never be a good fit in the target market for CVCS - corporate development. Security-by-workflow (I'll accept your push if I like it) works great if you have an army of unpaid developers. It's an extra step and an annoyance for many.
    1. "For small projects that don't have the 'split into multiple repos' problem', I think Git is a better CVCS than almost any CVCS." When you're talking about a small team/project like that, you probably don't care about granular security. Heck, plenty of large places could implement granular security and specifically choose not to as a matter of principle.
    2. You can bodge granular security into Git by splitting the repo or, for commit access, hooks
    3. I didn't say it doesn't have drawbacks, but I think the benefits outweigh them for the small team environment. Just a few of those benefits are (1) faster operations, (2) disconnected operation (still useful for company environments, I guess unless no one ever travels), (3) things like git commit --amend, git add --interactive (which is amazing), and rebasing, (4) the ability to commit locally without either affecting others or going to the point of creating a whole new branch for your changes.
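
    To illustrate (3), a rough sketch of the local history editing I mean (the file and commit messages are made up):

    git add --interactive              # stage only the hunks that belong in this change
    git commit -m "Fix off-by-one in the parser"
    git add forgotten_test.cpp         # whoops, this belonged in that commit too...
    git commit --amend                 # ...fold it in instead of making an "oops" commit
    git rebase -i origin/master        # reorder/squash local commits before pushing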



  • @EvanED said:

    "For small projects that don't have the 'split into multiple repos' problem', I think Git is a better CVCS than almost any CVCS." When you're talking about a small team/project like that, you probably don't care about granular security. Heck, plenty of large places could implement granular security and specifically choose not to as a matter of principle.

    OK, so sometimes you don't use the better security that every CVCS offers, but sometimes you do. Git can still never be better from a security perspective if you don't care about security; it can only be even.

    @EvanED said:

    You can bodge granular security into Git by splitting the repo or, for commit access, hooks

    That's not better, that's worse.

    @EvanED said:

    (1) faster operations

    I've never had a VCS operation slow enough that it impeded my work. You can't sell me solutions to problems I don't have.

    @EvanED said:

    (2) disconnected operation (still useful for company environments, I guess unless no one ever travels

    What operations? You can commit while disconnected, but you can't push. If it's not pushed, it's close to pointless anyways, from a CVCS perspective. Taking a checkout of the entire repo is roughly analogous to cloning a git repository and gives you the ability to do pretty much the same operations.

    I've travelled a lot during a time when the company used a CVCS. I never had a problem getting work done on a plane - as long as I had the source code with me. The only advantage git confers here is that you have no choice but to bring the entire repository with you. On the other hand, this is enough of a problem that many split up their code into multiple repositories, reintroducing the problem in the form of "Do I have a clone of the right repository with me?".

    @EvanED said:

    (4) the ability to commit locally without either affecting others or going to the point of creating a whole new branch for your changes.

    Us CVCS people don't care what you do locally. Why is it so important to commit to the local staging area? If you ain't pushing, it just doesn't matter. Besides, the benefit of "not creating a branch" is so minor that it's essentially not a benefit. Branch creation is cheap in every semi-modern VCS.

    @EvanED said:

    (3) things like git commit --amend, git add --interactive (which is amazing), and rebasing,

    Some of these have validity. I never said it was all bad, it's just not better in the role of a CVCS. None of this overcomes the primary obstacle that I can't checkout a small portion of a big repository. Git would have to have a boatload of far superior features to make up for that one weakness and it just doesn't.


  • :belt_onion:

    @EvanED said:

    Eh, I don't think that's really true. (Unless... can you push a detached head? If so, then you could stash and then push the stash. I suspect that wouldn't work very well.) That solves one of his problems, but not the other; it sounded like a lot of his question was spawned by what you do when you leave, so that your changes don't get lost if your hard drive dies, and git stash... doesn't do one iota to solve that problem. I'll admit that's kind of a nice feature that Git doesn't have a trivial analogue to. I think you'd have to fake it by creating a short-lived branch, pushing that, then deleting it when you come back in. Which isn't the worst thing in the world, but it's a few manual steps if you don't like CLI tools.

    That's true, it doesn't push it remotely, but it would fix his original problem (I have to commit when I switch branches!!!!!!!!).



  • @Jaime said:

    I've never had a VCS operation slow enough that it impeded my work.
    svn log sometimes takes a while for me. But yes, I find that one less compelling than the others. (I listed them in arbitrary order.)

    @Jaime said:

    If it's not pushed, it's close to pointless anyways, from a CVCS perspective.
    Bull. Crap. Commits serve the same purpose in CVCSs as they do in DVCSs, which is to record a logical change.

    If you are disconnected and reach the point at which you have a logical change set, but can't commit because you're using a CVCS, you have three options:

    1. Start using patchutils (interdiff in particular) to manage your logical changes, which is error-prone and a PITA
    2. Stop working until you can connect
    3. Keep working; this option is the one that is contrary to the purpose of a VCS because your aggregate changes will not be a logical unit. It also means that if you mess up your second change you can't go back to the first

    All of those suck, and the problem almost completely vanishes with a DVCS. (And only "almost" because "disconnected" = "more opportunity for conflicts.")

    @Jaime said:

    Why is it so important to commit to the local staging area?
    Again, no; same reason as above: if you have multiple logical changes that you're not quite sure are ready to be shared with the world, then committing locally and not pushing (or pushing to a "private" branch) gives you the best of both worlds -- you keep your logical changes as commits, which is good, but they're still flexible if you decide they need to be revised, which is also good.

    @Jaime said:

    None of this overcomes the primary obstacle that I can't checkout a small portion of a big repository. Git would have to have a boatload of far superior features to make up for that one weakness and it just doesn't.
    I view it the other way around. The --amend, add -i, etc. features help all projects. Disconnected operation helps a lot of projects. The lack of partial checkouts is an inconvenience for some projects, and fatal for a few. For those where it is fatal, it's not better, but I think that's a fairly small minority of even CVCS projects. For those where it's an inconvenience, I think the benefits of Git outweigh the drawbacks.

    @sloosecannon said:

    That's true, it doesn't push it remotely, but it would fix his original problem (I have to commit when I switch branches!!!!!!!!).
    That's only one of the complaints; by far the bigger one, at least in the most recent thread, seems to be "The need to check-in code at the end of each day so it'll be secure if my laptop gets run over by a bus".



  • @Jaime said:

    Us CVCS people don't care what you do locally. Why is it so important to commit to the local staging area?

    I find it useful as a "save point". I can try a bunch of things, easily return to one of many stages of development if something doesn't work quite right, etc. Maybe I'm blocked on one task and want to switch to another in the meantime. Centralized solutions always make that workflow more painful. Also, they're more of a pain to set up. To date, I haven't worked in one where everything worked correctly. There's always some functionality that the team has essentially given up because the server doesn't work. Also, I work "overseas" (yay cheap labor!) and servers are often back in the US, which means connection times are awful for even trivial operations.

    Git always makes everything it does available once you've installed it in your machine. That's handy.



  • How did this turn into a C++ flamewar? Have the requisite 26.37 days passed since the last one?

    @EvanED said:

    I'm also not sure about this, but I think that standardization of the ABI may be on the way.

    Even though 95% of my dealings with C++ are on a project where I have source to all dependencies, and only compile with a single compiler for a single platform, I would welcome this.

    It'd remove the need to spend about a day rebuilding all those dependencies when upgrading Visual Studio.

    @tarunik said:

    Yeah -- it seems that the MS ABI and the GCC (Itanium/CodeSourcery) ABIs are more-or-less the predominant ones, on the C++ side of the ball at least.

    The next step is to get them to agree on their ABI implementations, and then from there...



  • @RaceProUK said:

    BBCode works better

    Better than Markdown? Somehow, this is not shocking news.



  • @Kian said:

    I prefer to stay away from boost. Which is sometimes a pain, because most of the answers on sites like stackoverflow (I know, a favorite around here) are "use boost!", even when you specifically ask for non-boost solutions.

    Just like how every JS question gets a "use jquery!"


  • :belt_onion:

    @Groaner said:

    Just like how every JS question gets a "use jquery!"

    At least jquery is (usually) the right tool for the job


    @EvanED said:

    That's only one of the complaints; by far the bigger one, at least in the most recent thread, seems to be "The need to check-in code at the end of each day so it'll be secure if my laptop gets run over by a bus".

    Well to make it Truly secure
    you must Make sure to
    Share the Information.
    Share Everything,
    then it Is secure even If my
    House burned down


  • Banned

    @Groaner said:

    How did this turn into a C++ flamewar?

    You're late. It's Git flamewar now.

    @Groaner said:

    The next step is to get them to agree on their ABI implementations

    I doubt it will ever happen without ISO C++ requiring one particular ABI. ABI change is a very big deal on both platforms.

