PSA: dnSpy is a terrifyingly good .NET decompiler


  • Discourse touched me in a no-no place

    @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    IMO low level unit tests are something invented by people on dynamic languages to compensate for the lack of a decent compiler.

    There are always errors that the programmer can have in their code that the compiler can't find. ALWAYS. The fundamental math of computer program analysis guarantees that compilers can't know everything, especially if they're going to be tractable programs that do their job reasonably quickly.


  • Fake News

    @Gribnit said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @dkf said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    You do know that the order of catch clauses matters?

    Yes. You are still catching Error and Throwable in your code.

    While you should never catch it in a library, catching it in some code which is running a thread might still be beneficial. If you know that your runnable is what's keeping the current thread alive, receiving a StackOverflowError is not an issue because you know the entire stack has unwound. At that point you want to log it and move on.
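    As a rough sketch of that pattern (hypothetical code, not from the post; the worker method is a made-up placeholder), the catch sits at the very top of the thread's loop, where the stack has already unwound:

    ```java
    import java.util.logging.Level;
    import java.util.logging.Logger;

    // Hypothetical worker loop: this Runnable is what keeps its thread alive,
    // so catching Throwable here (and only here) logs the failure and moves on.
    public class WorkerLoop implements Runnable {
        private static final Logger LOG = Logger.getLogger(WorkerLoop.class.getName());

        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    doOneUnitOfWork();
                } catch (Throwable t) {
                    // By the time control reaches this handler the entire stack
                    // below run() has unwound, so even a StackOverflowError is
                    // safe to log before continuing with the next unit of work.
                    LOG.log(Level.SEVERE, "unit of work failed", t);
                }
            }
        }

        // Placeholder for the thread's real work (assumption, not from the post).
        private void doOneUnitOfWork() { }
    }
    ```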


  • Discourse touched me in a no-no place

    @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    printf is a horrible function that fails in so many ways it belongs on the thedailywtf front page; that is why I picked it as an example.

    Actually, it's the reverse — scanf() — that's truly bust by design. Formatted prints are useful as they let you do things like pulling the actual format from a localization file; not using a little language (for that's what the format is, in very restricted form) for that is just too limiting.
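    A minimal sketch of that idea in Java terms, using MessageFormat rather than a printf-style format (the bundle name and key are made up): the format pattern comes out of a per-locale properties file, and the code only supplies the arguments.

    ```java
    import java.text.MessageFormat;
    import java.util.Locale;
    import java.util.ResourceBundle;

    // Hypothetical resource bundles:
    //   Messages_en.properties: greeting=Hello {0}, you have {1} new messages.
    //   Messages_de.properties: greeting=Hallo {0}, Sie haben {1} neue Nachrichten.
    public class LocalizedGreeting {
        public static String greeting(Locale locale, String user, int count) {
            ResourceBundle bundle = ResourceBundle.getBundle("Messages", locale);
            // The pattern is the "little language"; the code never hard-codes
            // the sentence structure or the argument order.
            return MessageFormat.format(bundle.getString("greeting"), user, count);
        }
    }
    ```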



  • @dkf I didn't say that unit tests wouldn't catch any error, just that they aren't worth the effort. Especially if you're putting in a lot of effort and making your code harder to read to keep it testable.

    There is a trade-off, and the fact that it wasn't obvious to everyone that the guy asking about testing getters and setters was being ridiculous shows how brainwashed people are about this.


  • Considered Harmful

    @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    if you're putting in a lot of effort and making your code harder to read to keep it testable.

    This seems like a personal problem.


  • Banned

    @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @Gąska IMO low level unit tests are something invented by people on dynamic languages to compensate for the lack of a decent compiler.

    Depends on what you mean by "low". For me, the lowest level it makes sense to write tests for is a functional unit (deliberately ill-defined because it's up to intuition - usually it's one file, or one class, or one small module, but none of these is perfect for every case).

    If I were to work with something testable again, I would probably write tests that map directly, or almost directly, to the program's use cases. I'm not going to write a test for a 20-line class.

    The point of unit tests is to test small fragments of the entire workflow in isolation from other fragments. They should map to use cases, but they shouldn't encompass entire use cases. For that, there's integration tests (or module tests, or component tests, or whatever you call them and however you define them).

    The code I'm working on right now would need to be written from scratch to be remotely testable, anyway.

    Yeah, that's the problem with programming principles - it's mostly impossible to apply them retroactively. I used to work on a project that got most things right - we had 94% line coverage (a few files were under 100%, and they were mostly false negatives), we did IoC wherever possible, we were extremely serious about code reviews, and we did frequent refactorings wherever needed (we dedicated some development capacity to refactoring tasks every few sprints, and did on-the-spot refactorings as part of usual tasks). And it seemed to work - our velocity was through the roof, we never missed a deadline by more than a week or two, and we had the lowest customer issue count in the company.


  • Banned

    @Gribnit said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @Gąska said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @Gribnit the thing is, if you're unit testing views or whatever, then in properly designed code you shouldn't be testing the business logic classes at the same time. Integration tests, sure - but not unit tests. In unit tests, all business logic should be abstracted away.

    why would I mock a damn pojo that just carries state? don't be a twerp.

    Likely you miss that a view that can be tested has a contract for what it expects from the model objects it displays, a contract consisting of the model objects themselves, which there is no reason to mock. How they're generated? Yes, that comes from a mock / fake / whatever for a view test.

    Christ, you keep running for that soapbox somebody's gonna kick it into splinters one of these days.

    I considered adding a paragraph to that post saying that I'm not talking about trivial objects that contain no logic of their own. But I decided it's so obvious that it doesn't have to be said. I have never been so wrong.
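    A quick sketch of the point being argued here (hypothetical names, nobody's actual code): the model object is a plain state carrier, so a view test uses a real one and only fakes the thing that produces it.

    ```java
    // Plain state carrier - nothing worth mocking.
    class UserModel {
        final String name;
        UserModel(String name) { this.name = name; }
    }

    // The dependency that *produces* the model is what gets faked in a view test.
    interface UserRepository {
        UserModel currentUser();
    }

    class GreetingView {
        private final UserRepository repo;
        GreetingView(UserRepository repo) { this.repo = repo; }
        String render() { return "Hello, " + repo.currentUser().name + "!"; }
    }

    // View test: real POJO, fake repository, business logic abstracted away.
    //   GreetingView view = new GreetingView(() -> new UserModel("Alice"));
    //   assert view.render().equals("Hello, Alice!");
    ```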


  • Banned

    @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    The platform I'm using has a broken sprintf: even when you give it a null pointer to make it return how many bytes your parameters will need, like we do on Linux and Windows, it will result in a buffer overflow if you need more than X bytes. For a moment I thought it simply didn't support the NULL argument, but in that case it should break even when writing a single byte (writing one byte at *NULL will cause a system reboot).

    Hey, don't be so hard on them. I'd program it the exact same way they did on my first try. Because what's the most obvious way to answer the question "how many bytes are left to write"? Of course, if they wrote it in Rust or some other high-level low-level language, they could use some kind of null IO sink that only recorded byte count and not actual contents, but doing the same in low-level low-level languages means lots of code repetition and weird-looking constructs. It's just much easier to write to a buffer and check its length.
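    The post is about C, but the "null sink that only records the count" idea is easy to sketch in Java, the language used for the other examples in this thread (all names here are made up): nothing is stored, the sink just measures how long the formatted output would be.

    ```java
    import java.io.Writer;
    import java.util.Formatter;

    // A Writer that discards everything and only keeps a running length.
    public class CountingWriter extends Writer {
        private long count;

        @Override public void write(char[] buf, int off, int len) { count += len; }
        @Override public void flush() { }
        @Override public void close() { }

        public long count() { return count; }

        public static void main(String[] args) {
            CountingWriter sink = new CountingWriter();
            new Formatter(sink).format("user %s has %d messages", "alice", 42);
            // Answers "how many characters would this produce?" without a real buffer.
            System.out.println(sink.count());
        }
    }
    ```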



  • @Gąska It's implemented in a way that makes it impossible to check whether it's safe to call. It wasn't supposed to be written by beginners; it's a platform on the flagship device of a large company.


  • Banned

    @sockpuppet7 you don't have to be a beginner to never think of some edge case. But yeah, I agree it's a pretty major fuckup. Especially considering that it's not sprintf but snprintf, which has the specific purpose of being like sprintf but not being subject to buffer overflows.



  • @Gąska said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    I used to work on a project that got most things right

    You can claim whatever you want to be "right". I disagree with your definition and I would hate to work on such a project.


  • Banned

    @sockpuppet7 you'd hate to work on a project where the code is high quality and you have instant feedback whenever you break something?



  • @Gąska Look at our forum bugs category. Try to find one where a unit test would help. I honestly can't recall any.

    I don't like having to remember a lot of silly dependencies I don't care about every time I instantiate a class.

    I don't like to write silly tests on stupid-simple code because some boss or co-worker is bugging me about coverage.

    I don't like to work around libraries and frameworks and make my code uglier so it's testable.



  • @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    Try to find one where a unit test would help.

    ...all of them?



  • @Gąska I may have an irrational hatred of doing anything I think is useless or stupid.



  • @ben_lubar said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    Try to find one where a unit test would help.

    ...all of them?

    What unit test would prevent my zoom bug? Can you think of any that would be reasonable to write before you know the bug exists? Can you even test CSS?


  • Banned

    @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @Gąska Look at our forum bugs category. Try to find one where a unit test would help. I honestly can't recall any.

    Frontend web development is special. Also, UI itself is special, but this forum having so many UI bugs has more to do with frontend webdev than UI.

    I don't like having to remember a lot of silly dependencies I don't care about every time I instantiate a class.

    If your class has dependencies you don't care about, something is wrong with your code.

    I don't like to write silly tests on stupid-simple code because some boss or co-worker is bugging me about coverage.

    Right now, you sound like a child who doesn't want to eat their veggies. Unit tests aren't for the sake of having unit tests. High coverage isn't for the sake of having high coverage. Hell, I'd even say that testing is the least important reason to have unit tests! But I know it can be very hard to understand without having worked in a codebase with high test coverage, done with meaningful tests.

    I don't like to work around libraries and frameworks and make my code uglier so it's testable.

    It doesn't just make code more testable - it also makes it more modular, which makes adapting code for new use cases not envisioned at first quite a bit easier. It's still hard and takes lots of work, but it's easier than otherwise.



  • @Gąska said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    Right now, you sound like a child who doesn't want to eat their veggies. Unit tests aren't for the sake of having unit tests.

    There is data demonstrating why you should eat veggies. 100% code coverage unit testing is more like vegans trying to push their diet on me. Or like those Jehovah's Witnesses knocking on the door on a Sunday morning.


  • Banned

    @sockpuppet7 you want a study showing that high code coverage improves quality? Here you are:

    https://ieeexplore.ieee.org/document/7081877/

    Sadly, everyone who does studies only focuses on bug finding and not on the real value of a comprehensive test suite - reducing regressions and easing refactorings. Probably because those are harder to measure.



  • @Gąska I don't really have a strong opinion in all this, but it might be that you're missing the forest for the trees. You said yourself earlier that coding practices are there to help good coders be more efficient, so somewhat conversely, it implies (well, not strictly speaking, but I wouldn't be surprised if it's the case) that places that use these coding practices consistently have a large population of good coders, and therefore produce good code. But that doesn't show that this good code is the result of the coding practices rather than of the good coders themselves.

    (for a simpler analogy: a Formula One driver isn't a good driver because he drives a Formula One car; he drives an F1 because he's a good driver in the first place)

    If you have a place with good coders, who are genuinely preoccupied with the quality (whichever way you measure it) of what they produce, they will end up producing good code, regardless of the method they use. They might adopt some widely known methods (of testing or other), they will probably create their own (whether they recognize them as such or not doesn't matter), and they will probably be more productive because of all that than if they didn't, but then if they didn't use those methods they wouldn't be good coders in the first place.

    At the other end of the scale, you have all the bad coders, and those might adopt any method you like, they'll still churn out bad code, because they don't care. So they'll follow the letter of the method and aim for 100% test coverage including stupid ones like tests for getters/setters and end up writing bad code. With some luck, they'll decide (or someone will decide for them...) to drop the method as it doesn't help them, but that doesn't prove anything about the method.

    Really, a lot of these coding practices, and the studies about them, sound very much like the "N things that successful people do". They're not successful because they do these things; they do these things because that's how they are, and yes it helps them be successful, but you can't just ape them and expect to be successful as well (despite what entire libraries' shelves try to tell us...).


  • Discourse touched me in a no-no place

    @remi said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    But that doesn't show that this good code is the result of the coding practices rather than of the good coders themselves.

    Good coders recognise they make fuckups all the time. Tests (and CI, and coverage analysis, and…) help catch those fuckups so that they stay minor instead of costing lots of time and money to track down when in a desperate crunch prior to a major release, and so that they're much less likely to cause problems for downstream customers/users.

    Bad coders think they don't make fuckups, or don't care whether they do or not.

    If you have a place with good coders, who are genuinely preoccupied with the quality (whichever way you measure it) of what they produce, they will end up producing good code, regardless of the method they use.

    My direct experience suggests that there's also people who are pretty good programmers who need tests and so on so that they “stay on the wagon” of being good programmers and don't lapse into being bad programmers under the pressure of trying to do releases and so on. It's very easy to say “Ugh, I can't be bothered with this right now as I'm trying to get this problem fixed right now” but that way leads to hell, one paving stone of good intentions at a time.



  • @dkf how do you do unit tests with all that custom hardware, custom everything you have?


  • Banned

    @sockpuppet7 does your code compile for the platform you develop on? Then you can test it.



  • @dkf said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @remi said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    But that doesn't show that this good code is the result of the coding practices rather than of the good coders themselves.

    Good coders recognise they make fuckups all the time. Tests (and CI, and coverage analysis, and…) help catch those fuckups

    Yes. But before we had "tests", good coders already did something to help them with this. Before we had formal code reviews, good coders asked their co-workers for advice on code. And so on.

    I'm not saying tests and what-not don't help good coders. I'm saying that a bad coder doesn't become a good one because he uses tests etc. And also that if a good coder did not worry about the problems that tests etc. are solving, then he wouldn't be a good coder in the first place. So tests are just formalizing "the N habits of successful people".

    I've been doing some DIY recently so let me offer a slightly different analogy: having some appropriate cleaning stuff at hand while painting (wet rag, sponge, white spirit... depending on the paint) makes for a much better end result, because you clean up minor fuck-ups immediately and cleanly. A good painter will therefore have them ready. Not because some painting guru said so, but because being a good painter, he cares about what he does. If you sell some fancy "cleaning kit", the good painters might buy them, or they might do as good a job with hand-made stuff. But if you force a bad painter to lug them with him, you still don't get a good painter. Being able to clean up on the spot doesn't ensure he will actually notice when he should be cleaning, nor that he will clean up properly (and that's not saying anything of the rest of the paint job).

    Like you say below, the only case where it helps is for those who care but don't know how, as it provides them with a tool to solve a problem that they are aware of (or would become aware of if they could take time to take a step back) but might not yet have found their own solution.

    Bad coders think they don't make fuckups, or don't care whether they do or not.

    And therefore even if you force them to use tests they will fuck up in some way or another. They will ignore test results, they will "fix" the tests in the wrong way, they will write stupid tests that don't test anything, the list goes on and on.

    My direct experience suggests that there's also people who are pretty good programmers who need tests and so on so that they “stay on the wagon” of being good programmers and don't lapse into being bad programmers under the pressure of trying to do releases and so on. It's very easy to say “Ugh, I can't be bothered with this right now as I'm trying to get this problem fixed right now” but that way leads to hell, one paving stone of good intentions at a time.

    I agree on that. But a wider point is that we need good coding practices to help average programmers, so that they keep producing at least average code. Good ones will always somehow manage, bad ones will always fuck up. The average ones are the ones you can shift between producing good or bad code with the right nudge.

    And never forget that they are (or should be) the most common category. You don't design roads for people with racing experience or reflexes, nor for DUI people. You design for the average driver, who's not that bad but needs permanent crutches to ensure he doesn't fuck up and kill someone.


  • Discourse touched me in a no-no place

    @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    how do you do unit tests with all that custom hardware

    The unit tests run against mocks. The integration tests run against a test board we've got (i.e., a real piece of hardware just for testing purposes).
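    A hypothetical sketch of that split (made-up names, hand-rolled fake rather than any particular mocking framework): the driver logic talks to the board through an interface, so the unit test runs host-side against a fake, and only the integration test needs the real board.

    ```java
    // The seam: production code talks to the board only through this interface.
    interface BoardIo {
        int readRegister(int address);
        void writeRegister(int address, int value);
    }

    // Driver logic under test (the register address and scaling are made up).
    class TemperatureSensor {
        private final BoardIo io;
        TemperatureSensor(BoardIo io) { this.io = io; }

        int readTenthsCelsius() {
            return io.readRegister(0x40) * 5 / 2;
        }
    }

    // Hand-rolled fake used by unit tests; no hardware involved.
    class FakeBoardIo implements BoardIo {
        private final int rawValue;
        FakeBoardIo(int rawValue) { this.rawValue = rawValue; }
        public int readRegister(int address) { return rawValue; }
        public void writeRegister(int address, int value) { /* ignored */ }
    }

    // Unit test body (an integration test would do the same against the test board):
    //   TemperatureSensor sensor = new TemperatureSensor(new FakeBoardIo(100));
    //   assert sensor.readTenthsCelsius() == 250;
    ```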



  • @remi This is very true, and very hard for some people to grasp. From my field, there's all sorts of examples:

    • Some good teachers let kids learn through inquiry. Does that mean that allowing inquiry makes one a good teacher? No.
    • Some good teachers use lectures. Does that mean that using lectures is good? No.
    • Some bad teachers use lectures too. Does that mean that lectures are bad? No.

    Good teaching practices are legion. What is a "good" practice for some people may not be for others. And more importantly, it's how you use it that matters (INB4 "that's what she said"). All these things are merely tools. The tool is the same whether it's used by a master or by a hack. Tools are useful in some circumstances, but not in others. Cargo-cult mentality ("If it's good here, it must be good everywhere!") never helps anyone.



  • @Benjamin-Hall Like I said, this is the thinking behind all the books on successful people. It gets more traction in the management domain because it's not really based on a lot of science in the first place (and I'm being generous here), so it's easy to say/sell any bullshit. But I see the same principle at work for many "coding best practices": they are being over-hyped as the solution whereas they are just a tool...



  • @remi Yup. It's the whole proxy metric vs real metric problem. Use of certain coding practices is a proxy for coder quality. Which is fine until everyone starts pushing those practices blindly (without doing the rest of the things that make a good coder good). That simply dilutes the value of the proxy.

    Advanced Placement classes/tests (used in the US to get college credit for taking rigorous classes in high school) are another example.

    1. The best students take multiple AP classes. So the number of kids taking APs is a proxy for the quality of the students.
    2. So let's give incentives for the schools based on number of kids taking APs.
    3. Oops, now AP classes are dumbed down (because they're shoving tons of unprepared kids into them) and now they're no longer a good proxy.
    4. So let's make the test (required to get credit) harder and more :wtf: to weed out those who can't jump through hoops.
    5. (The present day) AP classes are pretty much useless, both as a metric and as a learning tool. Because they became a cargo-cult metric and have been distorted beyond recognition.

  • Discourse touched me in a no-no place

    @Benjamin-Hall said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    It's the whole proxy metric vs real metric problem

    … which happens any (and every) time the real metric is difficult to measure.



  • @dkf said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @Benjamin-Hall said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    It's the whole proxy metric vs real metric problem

    … which happens any (and every) time the real metric is difficult to measure.

    Which is most of the time, really. Life is hard.



  • @remi @gaska You guys just redefined "good developers" as "developers who agree with me". I think good developers don't waste time chasing fads and bullshit.



  • @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @remi @gaska You guys just redefined "good developers" as "developers who agree with me". I think good developers don't waste time chasing fads and bullshit.

    You misunderstood. Many good developers use tests and such, not because they're chasing fads but because those practices support their other practices. Some don't, because those practices don't work for them.

    If you're chasing fads (and not doing it because you've found it works for you personally and professionally), you're :doing_it_wrong:


  • ♿ (Parody)

    @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @Gąska said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @sockpuppet7 agreed wholeheartedly, except that even in application (i.e. non-library) code, I find DI (manual via constructors, not the framework bullshit) to be great for increasing testability.

    Tests are another thing that can be useful, but I think some people have gone too far.

    I didn't RTFA (duh) but here is my case for testing getters and setters:

    As previously stated, I use integration tests more than unit tests. Our (Java) app uses JSF2 for the actual page markup, which has a particular syntax for referencing stuff from code: #{someBean.foo}. That's basically equivalent to someBean.getFoo().

    In our integration tests we can use that markup to interact with the code in approximately the same way as a page does. Of course, it's kind of a pain to keep it in sync with stuff, but that's also true of the pages themselves, so this is actually a feature for me. Like, did I forget to update some other page that also uses something I changed? So the tests help me find those things before users stumble onto that rarely used interface that I've now broken.
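    Roughly what that mapping looks like (a plain-reflection sketch to illustrate the idea, not the actual JSF/EL resolver; the bean and property names are invented):

    ```java
    import java.lang.reflect.Method;

    // "#{someBean.foo}" ends up calling someBean.getFoo(); this shows the idea
    // with ordinary reflection, which is why page-level integration tests
    // exercise getters "for free".
    public class ElPropertySketch {
        public static class SomeBean {
            private String foo = "bar";
            public String getFoo() { return foo; }
            public void setFoo(String foo) { this.foo = foo; }
        }

        static Object resolve(Object bean, String property) throws Exception {
            String getter = "get" + Character.toUpperCase(property.charAt(0)) + property.substring(1);
            Method m = bean.getClass().getMethod(getter);
            return m.invoke(bean);
        }

        public static void main(String[] args) throws Exception {
            // A test that renders a page containing #{someBean.foo} goes through
            // this getter implicitly, just like the real pages do.
            System.out.println(resolve(new SomeBean(), "foo")); // prints "bar"
        }
    }
    ```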


  • Banned

    @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @remi @gaska You guys just redefined "good developers" as "developers who agree with me".

    I didn't. Leave me out of this. I just said implementing SOLID and scrupulous unit testing can make good developers work even better.

    I think good developers don't waste time chasing fads and bullshit.

    I think good developers don't dismiss other people's opinion just because it's different from theirs.


  • Banned

    @remi said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @Gąska I don't really have a strong opinion in all this, but it might be that you're missing the forest for the trees. You said yourself earlier that coding practices are there to help good coders be more efficient, so somewhat conversely, it implies (well, not strictly speaking, but I wouldn't be surprised if it's the case) that places that use these coding practices consistently have a large population of good coders, and therefore produce good code. But that doesn't show that this good code is the result of the coding practices rather than of the good coders themselves.

    At first, we were less scrupulous about proper design principles. Then our company paid for the whole Clean Code tutorial series for us, and we started to implement (most of) the principles from it in our code as best we could (and as far as we thought made sense). Over time, we accumulated lots of code written in the "new" style, but also had lots of code written in the "old" style - and it was very noticeable that working with the "old" code was much harder, and not just because it was old. The old "new" code didn't age as fast as the old "old" code. Also, the main reason we had such high quality was that we refactored often, and we wouldn't have dared to refactor that often if we didn't have a very comprehensive test suite to check whether we'd broken something.

    I know it's very anecdotal and it might have just been that one project. But now I work at different company, and over here we're not nearly as scrupulous about any of this - and I can really feel the effects. Instead of refactoring, we're building new architecture side by side with the old architecture, duplicating functionality instead of replacing.

    (for a simpler analogy: a Formula One driver isn't a good driver because he drives a Formula One car; he drives an F1 because he's a good driver in the first place)

    Put a good driver in a Force India car and he won't perform nearly as well. Still better than an amateur, though.

    If you have a place with good coders, who are genuinely preoccupied with the quality (whichever way you measure it) of what they produce, they will end up producing good code, regardless of the method they use. They might adopt some widely known methods (of testing or other), they will probably create their own (whether they recognize them as such or not doesn't matter), and they will probably be more productive because of all that than if they didn't, but then if they didn't use those methods they wouldn't be good coders in the first place.

    It's like the PHP discussion: yes, a good programmer can write good code in any language - but they'd write even better code if they used a good language.

    At the other end of the scale, you have all the bad coders, and those might adopt any method you like, they'll still churn out bad code, because they don't care. So they'll follow the letter of the method and aim for 100% test coverage including stupid ones like tests for getters/setters and end up writing bad code. With some luck, they'll decide (or someone will decide for them...) to drop the method as it doesn't help them, but that doesn't prove anything about the method.

    Yes, I addressed that already.

    Really, a lot of these coding practices, and the studies about them, sound very much like the "N things that successful people do". They're not successful because they do these things; they do these things because that's how they are, and yes it helps them be successful, but you can't just ape them and expect to be successful as well (despite what entire libraries' shelves try to tell us...).

    As I said earlier, I'm not arguing that good coding principles will make bad programmers write good code. I'm arguing they'll make GOOD programmers write EVEN BETTER code. What makes bad programmers write good code is an entirely different discussion.



  • @Gąska said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    I think good developers don't dismiss other people's opinion just because it's different from theirs.

    I think you're mixing "good developers" and "nice people".

    <🚎>
    I've yet to see that there is a significant overlap between the two groups. And reading TDWTF doesn't help much on that front.
    </🚎>



  • @Gąska said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    As I said earlier, I'm not arguing that good coding principles will make bad programmers write good code. I'm arguing they'll make GOOD programmers write EVEN BETTER code. What makes bad programmers write good code is an entirely different discussion.

    I guess my take on this is that I don't really care that much about what makes good programmers even better, because for one thing they're already good, and for another, part of being good means that they'll find their own ways to be even better. So you're probably right, these practices help good coders, but discussing all that is like discussing why McLaren sucks even though they've ditched Honda: that's nice gossip for F1 fans, and it does hugely matter for the F1 teams themselves, but it doesn't do squat for the average driver.

    I'd rather focus on finding ways to either minimize the amount of bad code that bad coders write, or better (because bad coders being bad coders, there is little you can do to actually prevent them from writing bad code...), minimize the amount of bad code that average coders write. Because they are the ones that are the most numerous, they are the ones that write most of the code, and they are the ones where changing their output a bit might significantly improve the overall quality of the end result.

    I feel there is too much focus on a tiny minority of very good coders. They are and will only ever be a tiny minority, and only tiny start-ups might have a chance to say with a straight face that they only have good coders. "We only hire rock stars" is just as much of a lie as any other HR bullshit.

    So if you want to discuss what makes good coders even better, I'll probably just fade into the background and let you go on.


  • Discourse touched me in a no-no place

    @remi said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    I feel there is too much focus on a tiny minority of very good coders.

    There's too much focus on saying shitty practices are good just because lots of coders use them. (Case in point: SAFe.) Let's call shit what it is.


  • Impossible Mission - B

    @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    I don't approve of a compiler trying to impose those things on me. The compiler already has all the means to know what exceptions a function throws; I don't need to spell it out for it.

    That information isn't there for the compiler's benefit; it's there for the developer's, so you can be more informed in your coding.

    (That's the theory, at least. Java's implementation thereof is messy.)
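    A tiny illustration of what that declaration buys the caller (a hypothetical file-reading example, not anyone's actual code); note that the catch order, most specific first, ties back to the earlier point that the order of catch clauses matters:

    ```java
    import java.io.BufferedReader;
    import java.io.FileNotFoundException;
    import java.io.FileReader;
    import java.io.IOException;

    public class ConfigLoader {
        // The throws clause is part of the signature: it tells the caller what
        // can go wrong, whether or not the compiler could infer it by itself.
        public static String readFirstLine(String path) throws IOException {
            try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                return reader.readLine();
            }
        }

        public static void main(String[] args) {
            try {
                System.out.println(readFirstLine("config.txt"));
            } catch (FileNotFoundException e) {
                // The more specific catch must come first.
                System.err.println("no config file, using defaults");
            } catch (IOException e) {
                System.err.println("could not read config: " + e.getMessage());
            }
        }
    }
    ```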


  • Banned

    @remi said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @Gąska said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    As I said earlier, I'm not arguing that good coding principles will make bad programmers write good code. I'm arguing they'll make GOOD programmers write EVEN BETTER code. What makes bad programmers write good code is an entirely different discussion.

    I guess my take on this is that I don't really care that much about what makes good programmers even better, because for one thing they're already good, and for another, part of being good means that they'll find their own ways to be even better.

    Not always. There are lots and lots of programmers who, once they became good, stopped refining their skills any further.

    I'd rather focus on finding ways to either minimize the amount of bad code that bad coders write, or better (because bad coders being bad coders, there is little you can do to actually prevent them from writing bad code...), minimize the amount of bad code that average coders write. Because they are the ones that are the most numerous, they are the ones that write most of the code, and they are the ones where changing their output a bit might significantly improve the overall quality of the end result.

    You do you, I do me. Maybe maximizing the potential of good programmers isn't as valuable in the grand scheme of things as raising the average quality of code written by average programmers, but I don't think it's entirely pointless. Especially if you look at it from a self-development perspective - I'm not saying how to make the world better; I'm saying how to make yourself better. The theme of this website means most forum members are good programmers - I believe many would appreciate tips specifically targeted at them.

    There's also another thing. I know a few ways in which good programmers can improve. I know nothing about how bad programmers can improve, and even less about how good programmers can trick bad programmers into improving themselves automatically via code architecture. It's only natural that I don't say anything about the latter. Though if anyone has some good tips on that, I'm all ears. "Forget about everything else, focus on this one thing because it costs the industry the most bucks" isn't a good tip on how to do anything about it.



  • @Gąska said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    I think good developers don't dismiss other people's opinion just because it's different from theirs.

    I'm not dismissing your opinion, just don't like the way you pushed it. And I like to argue for the sake of arguing.



  • To get back on topic (:doing_it_wrong:) I think decompilers and similar stuff are a great thing. I always disliked software as a "black box" where you can't peek and poke inside. Stuff like the developer tools in browsers are an example of how to do it right.

    Literally the only disadvantages are that it makes cheating in online games and piracy slightly easier, but I don't think that's a big enough deal.


  • Banned

    @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @Gąska said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    I think good developers don't dismiss other people's opinion just because it's different from theirs.

    I'm not dismissing your opinion, just don't like the way you pushed it.

    Relating personal experience and saying what I think works?

    And I like to argue for the sake of arguing.

    Arguing involves two sides presenting their cases. Here, I presented my case and you were like "nope nope nope". You didn't even really address my argument - you just compared me to Jehovah's Witnesses and decided that's enough proof that testing doesn't work!



  • @anonymous234 If someone could make something that works as well as dnSpy does on .NET for C++, I'd be in heaven.



  • @ben_lubar I guess C/C++ is too optimized for decompilers to figure out. Whereas CIL leaves the hardest stuff to the runtime.



  • @anonymous234 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @ben_lubar I guess C/C++ is too optimized for decompilers to figure out. Whereas CIL leaves the hardest stuff to the runtime.

    I'd even take shitty generated C++ code over what I have access to now.


  • ♿ (Parody)

    @anonymous234 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    I think decompilers and similar stuff are a great thing. I always disliked software as a "black box" where you can't peek and poke inside. Stuff like the developer tools in browsers are an example of how to do it right.

    I had to use one to figure out :wtf: was going on inside some COTS that we were integrating. The error messages and stack traces were particularly unhelpful, but it didn't take long to see what the problem was once I could see the (Java) code.


  • Considered Harmful

    @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @remi @gaska You guys just redefined "good developers" as "developers who agree with me". I think good developers don't waste time chasing fads and bullshit.

    They're not. The fad follows the usefulness, not the other way around.



  • @sockpuppet7 said in PSA: dnSpy is a terrifyingly good .NET decompiler:

    @ben_lubar throw new PedanticDickweedException();

    Hey! I take exception to that! I'm an exceptional pedantic dickweed.

