Unit Fighting


  • Dupa

    @masonwheeler said in WPF best practices:

    mocking: n. What should be done, quite thoroughly and derisively, to people who advocate turning simple, straightforward source code into a mass of complexity and layers and "decoupled" pasta, in order to make it easier to run automated tests against.

    Yeah, sure. I’m doing an emergency release tomorrow because my predecessor didn’t have time for unit tests.

    Somehow, I never receive bug reports for the code that is covered with good unit tests. Go figure.


  • Impossible Mission - B

    @kt_ said in WPF best practices:

    I’m doing an emergency release tomorrow because my predecessor didn’t have time for unit tests.

    Somehow, I highly doubt that's the actual reason why you're doing an emergency release.


  • Dupa

    @masonwheeler said in WPF best practices:

    @kt_ said in WPF best practices:

    I’m doing an emergency release tomorrow because my predecessor didn’t have time for unit tests.

    Somehow, I highly doubt that's the actual reason why you're doing an emergency release.

    It’s a simple issue that good unit tests would catch.

    Think of unit tests as a fail safe: they force you to think about your code again and examine it closely, after writing it.

    Just like code review.

    Another thing: I spent a few hours verifying that what he wrote wasn’t intentional. Good unit tests serve as a spec; I’d have known it was a bug right away.


  • Impossible Mission - B

    @kt_ said in WPF best practices:

    It’s a simple issue that good unit tests would catch.

    :rolleyes:

    Hindsight's 20/20. It's easy to look at almost any bug after the fact, knowing exactly what the bug is, and imagine up a simple test that "would have caught it." Actually coming up with that test beforehand, though, is a very different matter. This is actually one of the strongest arguments against test-first development:

    1. Your tests can only catch things they were designed to test for
    2. You only write tests for things you're already aware might be a problem
    3. If you're already aware it might be a problem, that means you're paying enough attention to that specific aspect that you're going to end up getting that part right, so what good does the test do?

    Good unit tests serve as a spec

    And so do bad ones, unfortunately. If the test has a bug, that bug becomes part of the spec rather than getting fixed.



  • @masonwheeler said in WPF best practices:

    If you're already aware it might be a problem, that means you're paying enough attention to that specific aspect that you're going to end up getting that part right, so what good does the test do?

    What good does it do? It keeps you from being Jeff Atwood. More precisely, someone is less likely to break it in the future. Regressions are pure waste, and unit tests are a very effective solution to regressions.


  • Dupa

    @masonwheeler said in WPF best practices:

    @kt_ said in WPF best practices:

    It’s a simple issue that good unit tests would catch.

    :rolleyes:

    Hindsight's 20/20. It's easy to look at almost any bug after the fact, knowing exactly what the bug is, and imagine up a simple test that "would have caught it."

    It’s easy to disregard the main points of my posts. Unit tests give you: a chance to reflect, a spec, and an easy way to refactor code, even if you’re not the person who wrote it.

    This is actually one of the strongest arguments against test-first development:

    Never said anything about TDD, so I’ll happily disregard the rest of your post.

    Oh wait, I can’t, because it’s stupid.

    1. Your tests can only catch things they were designed to test for
    2. You only write tests for things you're already aware might be a problem
    3. If you're already aware it might be a problem, that means you're paying enough attention to that specific aspect that you're going to end up getting that part right, so what good does the test do?

    That’s an awful lot of non sequiturs. You write tests for what you come up with, and you do the coming up while writing the code. Ergo, with TDD you have to think about every line of code. If you don’t do TDD but still write tests, you still have to think about how your code will react, because you want to test it.

    Either way, you get one more chance to take a thorough look at your code, with a very inquiring eye: you’re trying to break it, after all. And how! Line by line, not click after click.

    And of course you purposefully don’t respond to the argument about refactoring and the fact that you can treat tests as a spec.

    I know your next argument: but that’s what good unit tests are like. Bad unit tests don’t give that, bad unit tests are shit! Well, bravo, Sherlock, you just realized that shit code is shit. Go have a drink, you deserve it.


  • Dupa

    @kt_ said in WPF best practices:

    I know your next argument: but that’s what good unit tests are like. Bad unit tests don’t give that, bad unit tests are shit! Well, bravo, Sherlock, you just realized that shit code is shit. Go have a drink, you deserve it.

    @masonwheeler Huh, what do you know. That’s the exact point you made in an edit. How’s that for anticipation?

    Bravo, @kt_. Bravo, I say!



  • @kt_ Calm down man, he's active on stack overflow. He can't help it. He probably has javascript on him.


  • Impossible Mission - B

    @kt_ said in WPF best practices:

    @masonwheeler Huh, what do you know. That’s the exact point you made in an edit. How’s that for anticipation?

    That's not the point I made in the edit; the point I made is that bad tests are actively harmful. This isn't simply an opinion; it's something that's been studied and consistently happens.


  • Dupa

    @masonwheeler said in WPF best practices:

    @kt_ said in WPF best practices:

    @masonwheeler Huh, what do you know. That’s the exact point you made in an edit. How’s that for anticipation?

    That's not the point I made in the edit; the point I made is that bad tests are actively harmful. This isn't simply an opinion; it's something that's been studied and consistently happens.

    That’s not the point I made; the point I made is that untested bad code is actively harmful. This isn’t simply an opinion; it’s something that’s been studied and consistently happens. ;;

    and one more: ; for good measure;

    ;;



  • Magus

    1. Bad code is bad.
    2. It's still bad if it's in a test.
    3. If code is in a test, it should be really explicit.
    4. Code should be reviewed before it is added to the repository.
    5. Code should be tested before it is added to the repository.
    6. Since tests are code, they should be reviewed too.
    7. If bad tests are in your repository, it's your fault.

  • Dupa

    @masonwheeler hey, but you’re the guy who still believed that the camel has two humps. In 2017.

    Explanation:

    Nevertheless, I will read your link. What the hell! Not now, but I will. I promise. None of the points you make are able to convince me, so maybe this article will be able to sway my opinion, or at least pique my interest.

    I mean this in a good way. No sarcasm intended.


  • Dupa

    @magus said in WPF best practices:

    1. Bad code is bad.
    2. It's still bad if it's in a test.
    3. If code is in a test, it should be really explicit.
    4. Code should be reviewed before it is added to the repository.
    5. Code should be tested before it is added to the repository.
    6. Since tests are code, they should be reviewed too.
    7. If bad tests are in your repository, it's your fault.

    Yeah, I think a lot of it comes down to how responsible you feel for the codebase. Because upkeep costs. Tests cost, code review costs, refactoring costs.

    I work at a place where churning out features is really important, and I work with a few people who are really good at this, who can make do without all that upkeep stuff and still create relatively bug-free features fast. But I’m not sure. I think the code is too important, and the prospect of someone else having to maintain it too real, to just let it rot. I’m with Bob Martin on that.

    Upkeep costs, and doing it wrong might lead to horrible consequences. But not doing it might cost even more.

    Of course I’m not saying that you, @masonwheeler, specifically, fail to do any of these — apart from tests, which you happily admitted yourself! It’s just a general observation.

    Either way, @mods, this should probably be Jeffed?


  • Banned

    @masonwheeler said in WPF best practices:

    Actually coming up with that test beforehand, though, is a very different matter.

    Actually, it's fairly easy. All you have to do is cover all possible code paths (100% code and branch coverage). Wherever you validate input, have at least one test for each type of valid input and at least one test for each rejection reason. Also, in addition to unit tests, have at least one test that takes everything together and tests the whole functionality from start to finish. These rules require next to no thinking and cover 95% of things that can go wrong. Also, to prevent regressions, for every bugfix have at least one test that verifies it's fixed.
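
    Something like this, as a minimal sketch of that recipe in Python with pytest (the function and all names are made up for illustration):

        import pytest

        # Hypothetical unit under test: parse a port number from a string.
        def parse_port(text):
            if not text.isdigit():
                raise ValueError("not a number")
            port = int(text)
            if not 1 <= port <= 65535:
                raise ValueError("out of range")
            return port

        # At least one test per type of valid input...
        def test_accepts_typical_port():
            assert parse_port("8080") == 8080

        def test_accepts_boundary_ports():
            assert parse_port("1") == 1
            assert parse_port("65535") == 65535

        # ...and at least one per rejection reason.
        def test_rejects_non_numeric():
            with pytest.raises(ValueError):
                parse_port("80a")

        def test_rejects_out_of_range():
            with pytest.raises(ValueError):
                parse_port("70000")

        # And per bugfix, one test that pins the fix (say port 0 was once accepted).
        def test_rejects_zero():
            with pytest.raises(ValueError):
                parse_port("0")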

    @masonwheeler said in WPF best practices:

    This is actually one of the strongest arguments against test-first development

    No one in their right mind advocates a test-first approach. The only person I know who takes it seriously also insists on all functions being 4 lines or shorter.


  • Impossible Mission - B

    @gąska said in WPF best practices:

    Actually, it's fairly easy. All you have to do is cover all possible code paths (100% code and branch coverage).

    There is nothing at all easy about that. It literally requires orders of magnitude more code in your tests than in your production codebase. (I'm aware of only one project that's actually done it: SQLite, whose tests outweigh production code by a factor of something like 20,000.)


  • ♿ (Parody)

    @kt_ said in WPF best practices:

    Either way, @mods, this should probably be Jeffed?

    With a View Jeff or a Model View Jeff or an overly Controlling Jeff?


  • Banned

    @masonwheeler said in WPF best practices:

    @gąska said in WPF best practices:

    Actually, it's fairly easy. All you have to do is cover all possible code paths (100% code and branch coverage).

    There is nothing at all easy about that. It literally requires orders of magnitude more code in your tests than in your production codebase.

    OK, maybe 100% is too much. But 95% is very realistic and requires only 2x as much test code as production code - most of which will be copy-pastes with slight variations. If you end up with much more, it's a sign a refactoring is due.
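
    Those copy-pastes with slight variations are exactly what parameterized tests are for. A sketch, reusing the hypothetical parse_port from my earlier post:

        import pytest

        from myapp.ports import parse_port  # hypothetical import of the unit above

        # The slight variations become one table instead of N pasted functions.
        @pytest.mark.parametrize("text, expected", [
            ("8080", 8080),
            ("1", 1),
            ("65535", 65535),
        ])
        def test_parse_port_valid(text, expected):
            assert parse_port(text) == expected

        @pytest.mark.parametrize("text", ["", "80a", "0", "70000"])
        def test_parse_port_invalid(text):
            with pytest.raises(ValueError):
                parse_port(text)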


  • Banned

    @boomzilla oh for fuck's sake you just had to jeff right when I was replying!



  • @boomzilla said in Unit Fighting:

    @kt_ said in WPF best practices:

    Either way, @mods, this should probably be Jeffed?

    With a View Jeff or a Model View Jeff or an overly Controlling Jeff?

    Someone should make a backronym for MVC/MVVM/whatever that is JEFF.


  • Dupa

    @boomzilla said in Unit Fighting:

    @kt_ said in WPF best practices:

    Either way, @mods, this should probably be Jeffed?

    With a View Jeff or a Model View Jeff or an overly Controlling Jeff?

    Why, with a Unit Jeff Controller Monitor Manager Widget Stuffer, of course!


  • Banned

    @ben_lubar Juice-Entity-Facade Framework. Users see Facade, it data-binds to Entities, and all the gory stuff ends up in Juice that powers your application.



  • @gąska said in Unit Fighting:

    All you have to do is cover all possible code paths

    Paths being the key word. Code coverage (every line executing at least once) is not sufficient.
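
    A contrived illustration of the difference (hypothetical code): the two tests below give 100% line and branch coverage, yet one path is never exercised.

        def describe(a, b):
            x = 0
            if a:
                x = 2
            if b:
                return 10 // x  # blows up only when b is true and a is false
            return x

        # Together, these two tests execute every line and every branch arm:
        def test_both_flags():
            assert describe(True, True) == 5

        def test_no_flags():
            assert describe(False, False) == 0

        # But the path a=False, b=True never runs, and
        # describe(False, True) raises ZeroDivisionError.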

    No one in their right mind advocates a test-first approach

    Disappointed you are claiming I am not in my "right mind".



  • @masonwheeler said in Unit Fighting:

    And so do bad ones, unfortunately.

    What I got from the linked essay (paraphrased):

    Bad tests are bad.

    D'uh.

    Writing tests has a cost.

    D'uh.

    Unit tests are no use if they are used in the wrong way.

    D'uh.

    You can't (formally) prove that unit tests cover all potentially interesting cases so use functional / integration tests instead.

    ... because, since even at the level of a single unit you can't prove anything, I'm sure you can prove it on the level of the entire freaking system...?

    Computing has come to shit because we don't use punch cards anymore

    :belt_onion:

    The Shannon entropy of a string of 1's is very low.

    Sure yeah, bravo for having kept up with your information theory. Unfortunately that insight is not really relevant to the problem at hand.

    All in all, to me he sounds like some CS / information theory high-flier who hasn't actually worked on code since mainframes went out of fashion 🤷‍♀️



  • @ixvedeusi

    He does make a few valid points though, if you are prepared to sift through the bullshit to find them. For example, I'd agree that "100% coverage" is a dangerous myth. The complexity of even the simplest modern software system is way too high to even get remotely close to testing all possible states, and insisting on "100% coverage" (by whatever flawed metric) will only produce a huge pile of essentially useless tests.

    Though IMHO the above only applies to compiled languages. For interpreted languages, or languages with "lazy" (run-time) variable and attribute bindings (e.g. Python), I'd still want to have 100% line-of-code coverage, so that I can at least verify there are no syntax errors or typos in the code. And INB4 code reviews: those are done by humans, and thus prone to missing errors that the JIT compiler or interpreter would catch.
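
    A contrived Python example of the kind of thing I mean (names made up): the typo below sails through parsing and is easy to miss in review, but any test that merely executes the line catches it.

        class Invoice:
            def __init__(self, total):
                self.total = total

            def apply_discount(self, pct):
                # Typo: reading 'self.totl' is perfectly legal as far as the
                # parser is concerned; it only raises AttributeError at run time.
                self.total = self.totl * (1 - pct / 100)

        # Simply executing the line is enough to surface the typo:
        def test_apply_discount():
            inv = Invoice(100)
            inv.apply_discount(10)
            assert inv.total == 90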


  • ♿ (Parody)

    @ixvedeusi said in Unit Fighting:

    He does make some few valid points though, if you are prepared to sift through the bullshit to find them.

    I haven't read TFA (duh) but I agree with the points you pull out. However, in the interest of stuff that interests me, I'd point out that those are all strawmen WRT the original point of contention between @masonwheeler and @kt_.


  • Impossible Mission - B

    @ixvedeusi You're completely missing the most important part of the article's line of reasoning. It goes like this:

    • Code can have bugs, and generally does, at a rather predictable rate of one serious bug per X lines of code.
    • Tests are made of code.
      • Many people acknowledge this point intellectually, but don't think about the ramifications, that tests can be buggy too. They treat tests as some sort of higher, purer, more noble form of code, that's worthy of being authoritative.
    • In order to thoroughly test a codebase, you need more test code than code code, often orders of magnitude more. (See above, where @Gąska claims it's possible to get a "reasonable" 95% code coverage with "only" about 2X as much test code as codebase.)
    • Since bugs occur at a predictable rate per lines of code, your comprehensive test bed is virtually certain to contain more bugs than your codebase, simply by virtue of being larger than it.
    • Since the test code is a noble, pure, canonical spec, its bugs become authoritative and get enforced rather than fixed.
      • This has been demonstrated experimentally by testing the tests: when bugs are randomly seeded in both the codebase and the test bed, users will consistently uncritically "fix" the codebase to match the tests, even when it's the tests that are in error.

    Therefore, a test bed large enough to be useful literally does more harm than good. It's not the simple tautology of "bad tests are bad" that some people are trying to reduce it to; it's pointing out that bad tests are inevitable, and will inevitably cause harm.


  • ♿ (Parody)

    @masonwheeler This sounds like a rationalization to never write any automated tests. Also I can authoritatively say that:

    Since the test code is a noble, pure, canonical spec, its bugs become authoritative and get enforced rather than fixed.

    Is (nearly) 100% incorrect in my experience.



  • @masonwheeler said in Unit Fighting:

    a test bed large enough to be useful literally does more harm than good.

    Complete disagreement (although it is far too common for people to have large, flawed test beds, that is a problem with their implementation, not the approach).

    A comprehensive test bed means that any code which passes all of the tests is suitable for production. If there is something that occurs in production that is a "problem" then that means there is a missing test!

    By definition "Unit" tests can not be the sole part of the testbed, because they do not test the interactions between units. Granular interaction tests (that minimize duplication of testing elements already covered by unit tests) address this.
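
    As a rough sketch of what I mean by a granular interaction test (hypothetical units): both components are real, and only the world beyond that boundary is absent.

        # Two hypothetical units and the interaction between them.
        class RateLimiter:
            def __init__(self, limit):
                self.limit, self.seen = limit, {}

            def allow(self, key):
                self.seen[key] = self.seen.get(key, 0) + 1
                return self.seen[key] <= self.limit

        class Handler:
            def __init__(self, limiter):
                self.limiter = limiter

            def handle(self, user):
                return "ok" if self.limiter.allow(user) else "throttled"

        # Unit tests cover each class alone; this covers their interaction.
        def test_handler_throttles_after_limit():
            h = Handler(RateLimiter(limit=2))
            assert [h.handle("bob") for _ in range(3)] == ["ok", "ok", "throttled"]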


  • Impossible Mission - B

    @boomzilla You really think your team would do better at a random-bug-seeding audit?

    Just think about your codebase. You've probably got a bunch of code somewhere in there that's 10+ years old, written (along with its tests) by someone who's no longer with the company, that no one is really a subject matter expert on anymore. It's there, it basically works, and no one likes to have to touch it. (I know you do, because there's been code like that everywhere, at every place I've worked in my career.)

    If a test fails in an area like that, what's a developer going to do? The tests are the only authoritative source they have! The way it always works out shouldn't surprise anyone, really.


  • Impossible Mission - B

    @thecpuwizard said in Unit Fighting:

    A comprehensive test bed means that any code which passes all of the tests is suitable for production. If there is something that occurs in production that is a "problem" then that means there is a missing test!

    The system is perfect! Any flaws in it can only be a result of people not following it correctly!


    Filed under: Every fanatical political ideologue ever


  • ♿ (Parody)

    @masonwheeler said in Unit Fighting:

    You really think your team would do better at a random-bug-seeding audit?

    At fixing the code or fixing the test? Yes. Especially since I'm the one who usually looks at that sort of thing and figures out what's going on. I've updated a lot of tests when I've discovered assumptions that were bad, either originally or due to desired code changes.

    @masonwheeler said in Unit Fighting:

    Just think about your codebase. You've probably got a bunch of code somewhere in there that's 10+ years old, written (along with its tests) by someone who's no longer with the company, that no one is really a subject matter expert on anymore. It's there, it basically works, and no one likes to have to touch it. (I know you do, because there's been code like that everywhere, at every place I've worked in my career.)
    If a test fails in an area like that, what's a developer going to do? The tests are the only authoritative source they have! The way it always works out shouldn't surprise anyone, really.

    :wtf: You go in and figure out what's going on. You don't go making blind fucking changes like you're suggesting! Yeah, I suppose that I can see why you're afraid of tests now.


  • Impossible Mission - B

    @boomzilla said in Unit Fighting:

    At fixing the code or fixing the test?

    At fixing the right things and not the wrong things.

    :wtf: You go in and figure out what's going on. You don't go making blind fucking changes like you're suggesting!

    It's not just "figure out what's going on;" it's "figure out what's supposed to be going on." Not all bugs are as obvious as "code threw an exception."

    Yeah, I suppose that I can see why you're afraid of tests now.

    Experience?


  • ♿ (Parody)

    @masonwheeler said in Unit Fighting:

    @boomzilla said in Unit Fighting:

    At fixing the code or fixing the test?

    At fixing the right things and not the wrong things.

    Yes, that's what I was saying (though admittedly not as clearly as I might have).

    :wtf: You go in and figure out what's going on. You don't go making blind fucking changes like you're suggesting!

    It's not just "figure out what's going on;" it's "figure out what's supposed to be going on." Not all bugs are as obvious as "code threw an exception."

    Well, yeah, I thought that was obvious. Interestingly, I get a lot of bugs that are users not understanding how the system was designed to work (based on the requirements from past users). So we convert those from bugs to feature requests or whatever.

    Yeah, I suppose that I can see why you're afraid of tests now.

    Experience?

    Sure, flail around in a code base without understanding it and you're going to have a bad time.


  • ♿ (Parody)

    @masonwheeler said in Unit Fighting:

    This has been demonstrated experimentally by testing the tests: when bugs are randomly seeded in both the codebase and the test bed, users will consistently uncritically "fix" the codebase to match the tests, even when it's the tests that are in error.

    Related: I'm actually looking at a failing test right now and trying to figure out if the problem is with the test or the code. But now the test is passing...



  • @masonwheeler said in Unit Fighting:

    The system is perfect! Any flaws in it can only be a result of people not following it correctly!

    Basically, YES.... There is a difference between the "conceptual system" and the "specific implementation" of said system. Perfection in the former is possible. In the latter it is not, because people make mistakes and therefore do not "follow the system properly".

    Even early in my career, there were some systems that could never be "activated" in a testable environment; so developing robust means of testing the components and interconnects was the only viable choice.

    More and more companies are adopting this approach and minimizing (sometimes eliminating) system-level testing. When doing CI/CD, with multiple pushes to production at very short intervals (daily, or even more rapidly), targeted testing [using elements such as Test Impact Analysis to determine which tests need to be run based on examination of the changes to the code base] becomes ever more important.


  • ♿ (Parody)

    @boomzilla said in Unit Fighting:

    @masonwheeler said in Unit Fighting:

    This has been demonstrated experimentally by testing the tests: when bugs are randomly seeded in both the codebase and the test bed, users will consistently uncritically "fix" the codebase to match the tests, even when it's the tests that are in error.

    Related: I'm actually looking at a failing test right now and trying to figure out if the problem is with the test or the code. But now the test is passing...

    UPDATE: It was the test. Suck it @masonwheeler!


  • Impossible Mission - B

    @boomzilla :rolleyes:

    Remember, the plural of "anecdote" is not "data."


  • ♿ (Parody)

    @masonwheeler Honestly, I don't mind you choosing the words used to describe you being wrong.



  • @kt_ said in Unit Fighting:

    @masonwheeler hey, but you’re the guy who still believed that the camel has two humps. In 2017.

    Hey, TIL about that retraction.



  • @masonwheeler said in Unit Fighting:

    You're completely missing the most important part of the article's line of reasoning. It goes like this:

    • Code can have bugs, and generally does, at a rather predictable rate of one serious bug per X lines of code.
    • Tests are made of code.
      • Many people acknowledge this point intellectually, but don't think about the ramifications, that tests can be buggy too. They treat tests as some sort of higher, purer, more noble form of code, that's worthy of being authoritative.
    • In order to thoroughly test a codebase, you need more test code than code code, often orders of magnitude more. (See above, where @Gąska claims it's possible to get a "reasonable" 95% code coverage with "only" about 2X as much test code as codebase.)
    • Since bugs occur at a predictable rate per lines of code, your comprehensive test bed is virtually certain to contain more bugs than your codebase, simply by virtue of being larger than it.
    • Since the test code is a noble, pure, canonical spec, its bugs become authoritative and get enforced rather than fixed.
      • This has been demonstrated experimentally by testing the tests: when bugs are randomly seeded in both the codebase and the test bed, users will consistently uncritically "fix" the codebase to match the tests, even when it's the tests that are in error.

    Therefore, a test bed large enough to be useful literally does more harm than good. It's not the simple tautology of "bad tests are bad" that some people are trying to reduce it to; it's pointing out that bad tests are inevitable, and will inevitably cause harm.

    I got this, IMHO it's fully covered by "Unit tests are no use if they are used in the wrong way", no need to say more.

    Mainly what he does is complain that unit tests can't verify that your system behaves according to spec. That seems rather obvious to me. Unit tests verify that a specific unit behaves (and continues to behave) as the developer intended it to behave, no more, no less; and in my personal experience they have been very valuable for that. Unit tests are somewhat like scaffolding on a construction site: they provide additional rigidity for the code base and "somewhere to stand on" during development. They are not a QA tool but a development tool.

    I've worked on code bases without unit tests and on code bases with unit tests, and from my own personal experience they are extremely helpful. There have been many times where I found subtle bugs in my new code while writing the tests, and I can assure you it was a lot easier to fix them before publishing my changes than it would have been once I'd gotten far enough to run integration tests.

    Concerning the "(unit) tests become gospel" argument, my personal experience has been very much the contrary: my co-workers seem to quite happily just change the tests so that they match what their code does rather than think about it for a moment and discover the mistake in their code.

    One final point, our unit tests do sometimes take a bit of effort to write, but once that's done, maintenance effort is in general very low. The functional and integration tests on the other hand take a lot more of our time.


  • BINNED

    @thecpuwizard said in Unit Fighting:

    Disappointed you are claiming I am not in my "right mind".

    You're posting here, aren't you? 🚃



  • The interesting part is just how thoroughly wrong @masonwheeler is. Because even bad tests are useful.

    Now, I'm going to have to make sure to point out what I mean by bad tests. Not the kind used in some cases by the morons who have been testing parts of the system we're inheriting, where they just call some stuff, assert nothing, and move on. I mean actual tests, which are consistently repeatable with the same results, but which expect the wrong outcome.

    If you have a piece of software old enough to be called legacy, the single most important thing that software can do is behave consistently. If a bug comes in, you make a change, and something somewhere goes sideways, then even if that new behavior is what it always should have been, it's almost certainly out of scope. Your legal department might have to review the consequences for a month before you can do it, or it would create more support tickets.

    In any case, what tests do is keep your system consistent. If a change needs to be made, you do need to update tests. But the tests help you know if you've inadvertently broken something somewhere else. They give you a warning, and increased safety.
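
    That's essentially what characterization tests are: you pin down whatever the legacy code does today, warts and all, so that any change in behavior is at least a deliberate one. A sketch with a made-up legacy function:

        # Hypothetical legacy function; downstream systems depend on its quirks.
        def legacy_format_name(first, last):
            # Trims only the left side and upper-cases the last name. Odd,
            # but it has shipped this way for years.
            return first.lstrip() + " " + last.upper()

        # Characterization test: assert current behavior, not ideal behavior.
        def test_format_name_behaves_as_it_always_has():
            assert legacy_format_name("  ada", "lovelace ") == "ada LOVELACE "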

    Yeah, you'll have more tests than code. Yeah, that's more code you have to write. Yeah, it might have bugs. It may cost more.

    Not doing it is not worth the risk. You either act like a professional and put in the effort to make your software be of good quality, or you stay a moron javascript kiddy and let it rot because quality is just too hard for you.



  • @masonwheeler said in Unit Fighting:

    • Code can have bugs, and generally does, at a rather predictable rate of one serious bug per X lines of code.
    • Tests are made of code.
      • Many people acknowledge this point intellectually, but don't think about the ramifications, that tests can be buggy too. They treat tests as some sort of higher, purer, more noble form of code, that's worthy of being authoritative.

    Also, I'd like to point out that this IMHO is bullshit. The rate of bugs per line of code is not actually predictable or constant over a given code base at all. If you take a large enough code base and average over it, the average may be similar to that of another code base. However, if you look into the code base, you'll very probably find that certain areas contain a lot more bugs per line than others (if that is even a measure which can be reasonably defined: to which line of code do you attribute, e.g., a bug caused by flawed design?).

    The thing is, testing code is (supposed to be) rather straight-forward: you call something, check if the result matches expectations. I'm pretty sure most people would make a lot fewer mistakes in such code than, say, in a complex and highly optimized algorithm.
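
    For instance, even the test for a hairy optimized routine can stay dead simple by checking it against an obvious slow version (contrived sketch):

        # Clever bit-twiddling version (Kernighan's trick)...
        def popcount_fast(n):
            count = 0
            while n:
                n &= n - 1  # clears the lowest set bit
                count += 1
            return count

        # ...tested against the obvious, hard-to-get-wrong oracle.
        def test_popcount_matches_naive_version():
            for n in range(1024):
                assert popcount_fast(n) == bin(n).count("1")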

    Of course there will be bugs in your test code, and if you take them as gospel you'll be in for a lot of hurt; but if you take any code as gospel you're :doing_it_wrong: and I can't help you.

    Yes, unit tests can be harmful. That's not the question. The question is "does the potential harm offset the potential benefit?" and that depends mostly on how you do unit tests.


  • Impossible Mission - B

    @magus And yet, the highest-quality software I've ever worked on, that all but took over its whole industry and drove several competitors out of business, had zero automated tests. Why? Because we didn't need them; we just acted like professionals and put in the effort to make our software be of good quality.



  • @masonwheeler Something something anecdotes.


  • Impossible Mission - B

    @magus Fair enough, but something something experience. At other places, I've actually found a pretty strong positive correlation between having lots of tests and having a big :wtf: of a codebase. My personal theory is that it has something to do with complacency, thinking the tests will take care of... well... *waves hands* all the stuff you're claiming they will take care of, and so you end up using them as a crutch instead of taking quality seriously.



  • @masonwheeler Yeah, but most people have no idea what they're doing even in this industry. If you've deluded yourself into thinking programming is an industry for 'smart people' just because that's the way people outside this industry see it, you've set yourself up for a lot of pain.

    We're in an industry where Fizzbuzz eliminates 80% of people.

    People have weird beliefs, do things horribly wrong without believing it, and are complacent. That's what this entire website is about.

    But you don't eliminate useful tools because some idiot down the hall was using them wrong, and some other guy you know can do more with Vi in a console window on the server than you'll ever be able to do with all the tools in the world at your disposal.


  • kills Dumbledore

    @magus said in Unit Fighting:

    most people have no idea what they're doing ~~even~~ especially in this industry

    Or has my maintenance of terrible legacy software made me overly cynical?



  • @antiquarian said in Unit Fighting:

    @thecpuwizard said in Unit Fighting:

    Disappointed you are claiming I am not in my "right mind".

    You're posting here, aren't you? 🚃

    You owe me for a new keyboard!!!!!! (My current one is saturated with Coffee ;) )


  • ♿ (Parody)

    @magus said in Unit Fighting:

    If you've deluded yourself into thinking programming is an industry for 'smart people' just because that's the way people outside this industry see it, you've set yourself up for a lot of pain.

    Somewhat related. This morning I was reading this:

    The human brain wasn’t built for accounting or software engineering. A few lucky people can do these things ten hours a day, every day, with a smile. The rest of us start fidgeting and checking our cell phone somewhere around the thirty minute mark. I work near the financial district of a big city, so every day a new Senior Regional Manipulator Of Tiny Numbers comes in and tells me that his brain must be broken because he can’t sit still and manipulate tiny numbers as much as he wants. How come this is so hard for him, when all of his colleagues can work so diligently?

