Make sure it's initialized!



  • @TheCPUWizard said:

    @blakeyrat said:

    @TheCPUWizard said:
    These are all things which have an impact on a program [the topic of which programs this impact is significant for is a different one... and also IMHO quite interesting]. Yet they are all things which cannot be determined at the language level, requiring at least IL knowledge, and in many cases knowledge of the native code generated by the JIT.

    They are also all things you shouldn't give a **** about.

    I am sure bosses/employers/clients would love to hear that ("I don't give a **** that the code I wrote for you is useless") when their program is failing to meet performance requirements [see: which programs this impact is significant for in my post above]. Many HFT programs directly translate microseconds of execution speed to potentially thousands of dollars (per execution). Many industrial control applications translate these same timeframes to safety-related effects (equipment damage, personal injury).

    Even many "conventional business programs" suffer when scaled to significant size (tens of thousands of transactions per second). In these cases, "scale-out" may be an option, but that introduces additional concerns about synchronization which may make things worse (or at least much more expensive).

    His point might be that empirical evidence is good enough; we don't need to know why. Using blocks are obviously a good idea because it's really hard to forget to Dispose when you use one. If someone has a suspicion that Using causes a performance problem, then it's pretty easy to whip up an A/B test to see if Using does or doesn't incur a performance penalty. The time spent thinking why could better be spent on your next problem. If we run into an edge case, we'll hire you to give us a hand. Only a very small fraction of programmers really need to be able to handle the corner cases where things that normally don't cause performance problems do.
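    For what it's worth, such an A/B test is only a few lines. A minimal sketch (the `Dummy` class and the iteration count are made up for illustration; its Dispose is a no-op, so only the using/try-finally machinery itself gets timed):

```csharp
using System;
using System.Diagnostics;

class UsingBenchmark
{
    // Trivially disposable type; Dispose does nothing, so the loops
    // below measure only the overhead of the using machinery.
    sealed class Dummy : IDisposable
    {
        public void Dispose() { }
    }

    static void Main()
    {
        const int iterations = 10000000;

        // A: using block (the compiler expands this to try/finally + Dispose)
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            using (Dummy d = new Dummy()) { }
        }
        sw.Stop();
        Console.WriteLine("using block:    " + sw.ElapsedMilliseconds + " ms");

        // B: direct Dispose call, no try/finally
        sw.Reset();
        sw.Start();
        for (int i = 0; i < iterations; i++)
        {
            Dummy d = new Dummy();
            d.Dispose();
        }
        sw.Stop();
        Console.WriteLine("direct Dispose: " + sw.ElapsedMilliseconds + " ms");
    }
}
```

    If the two numbers come out within noise of each other, the question is settled without ever reading a line of IL.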

    Also, corner case optimization is actually bad for most of us. It's pretty common for the next version of a framework class to have performance improvements. If we spent a bunch of time implementing custom disposal for a class due to a performance problem, it's likely that we wouldn't benefit from the improvements of the next version. That would either cause our code to be slower than naive code or cause a lot of effort re-working it. The best example I can think of is all of the patterns of keeping a global database connection that were popular before the mid-1990s. After Microsoft introduced connection pooling, suddenly the most performant pattern was to create as late as possible and destroy as early as possible.
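    That "create late, destroy early" pattern looks roughly like this with ADO.NET pooling (the `Orders` table and the connection string parameter are purely illustrative):

```csharp
using System.Data.SqlClient;

class PooledQueries
{
    // With ADO.NET connection pooling (enabled by default), opening a
    // connection per operation is cheap: Open borrows a live connection
    // from the pool, and Dispose returns it instead of tearing it down.
    static int CountOrders(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();                      // borrowed from the pool
            return (int)cmd.ExecuteScalar();
        }                                     // returned to the pool here
    }
}
```

    Holding a single global connection instead would bypass the pool and funnel every query through one session.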



  • @Jaime said:

    @TheCPUWizard said:

    @blakeyrat said:

    @TheCPUWizard said:
    These are all things which have an impact on a program [the topic of which programs this impact is significant for is a different one... and also IMHO quite interesting]. Yet they are all things which cannot be determined at the language level, requiring at least IL knowledge, and in many cases knowledge of the native code generated by the JIT.

    They are also all things you shouldn't give a **** about.

    I am sure bosses/employers/clients would love to hear that ("I don't give a **** that the code I wrote for you is useless") when their program is failing to meet performance requirements [see: which programs this impact is significant for in my post above]. Many HFT programs directly translate microseconds of execution speed to potentially thousands of dollars (per execution). Many industrial control applications translate these same timeframes to safety-related effects (equipment damage, personal injury).

    Even many "conventional business programs" suffer when scaled to significant size (tens of thousands of transactions per second). In these cases, "scale-out" may be an option, but that introduces additional concerns about synchronization which may make things worse (or at least much more expensive).

    His point might be that empirical evidence is good enough; we don't need to know why. Using blocks are obviously a good idea because it's really hard to forget to Dispose when you use one. If someone has a suspicion that Using causes a performance problem, then it's pretty easy to whip up an A/B test to see if Using does or doesn't incur a performance penalty. The time spent thinking why could better be spent on your next problem. If we run into an edge case, we'll hire you to give us a hand. Only a very small fraction of programmers really need to be able to handle the corner cases where things that normally don't cause performance problems do.

    Also, corner case optimization is actually bad for most of us. It's pretty common for the next version of a framework class to have performance improvements. If we spent a bunch of time implementing custom disposal for a class due to a performance problem, it's likely that we wouldn't benefit from the improvements of the next version. That would either cause our code to be slower than naive code or cause a lot of effort re-working it. The best example I can think of is all of the patterns of keeping a global database connection that were popular before the mid-1990s. After Microsoft introduced connection pooling, suddenly the most performant pattern was to create as late as possible and destroy as early as possible.

    Jaime,

    I don't disagree with anything you posted; in fact I agree with most of it. As a general rule (as a statistical percentage of programs) everything you say is true. "Premature" or "over" optimization is at best counter-productive, and at worst a disaster. My point is that as soon as there is a single valid case where such things do matter, a blanket/absolute statement is incorrect - unless qualified. Blakeyrat's statement was "you shouldn't". If this is taken to mean me personally, then it would indeed have the ramifications I mentioned. If it is taken to mean "the reader of this post" (a common usage of the word "you" in online postings), then it would have to apply to every potential reader (the "you") under every circumstance.

    Now my own experience shows that these conditions do occur. If someone can conclusively prove that they have *never* occurred, then I will concede - but that is inherently impossible without having first-hand knowledge of every program that has been written.

    My bigger concern (and a core professional belief) is that more of these situations exist than the average programmer realizes. This means that developers are creating problematic code without even realizing it at the time it is written, resulting in issues that are much harder to remediate later (sometimes causing a complete rewrite - although this is rare). Wouldn't it be better if developers were aware that a) These Conditions Exist, b) The Situations where they are Applicable, and c) That there are Designs that can avoid these issues, when necessary [even if the person in question does not have the direct knowledge to implement such designs]?



  • @TheCPUWizard said:

    Wouldn't it be better if developers were aware that a) These Conditions Exist, b) The Situations where they are Applicable, and c) That there are Designs that can avoid these issues, when necessary [even if the person in question does not have the direct knowledge to implement such designs]?

    Yes.

    However, most programmers are only capable of specializing in so many things. Half of the cargo-cult programming out there came from performance optimization. Most of the worst sins made by programmers were made in the name of performance. Also, most developers will take any distraction they can get to avoid dealing with "soft skills" like usability, proper naming, and even proper architecture. It's often beneficial to steer programmers away from performance tweaking until they have matured to the point where they ignore your guidance and do the right thing.

    So, I generally tell people to ignore performance issues until they know better than to listen to me. As long as they listen to me, they are either novices or cargo-cultists and can't be trusted making performance optimizations, so they are right to listen to me. As soon as they can present a strong, well thought out argument for what they want to do, it's time to let them go.

    BTW, my world is very different from yours. I can't buy a server small enough to have real performance problems with the workloads I have.



  • @blakeyrat said:

    .ini files are fucking obsolete. They're trash. If you use them, you're trash. You need to leave the industry and get a job picking up trash. Or recycling. I boldfaced this paragraph because I think that "trash talk" is pretty damn clever, as is the pun in this sentence.

    In the Javascript thread, you posted [url=http://blogs.msdn.com/b/oldnewthing/archive/2007/11/26/6523907.aspx]this little gem[/url] and I never replied to it.  However, since this subject has reared its ugly head again . . .

    Problems with the Registry [url=http://www.virtualdub.org/blog/pivot/entry.php?id=312]here[/url], [url=http://www.codinghorror.com/blog/2007/08/was-the-windows-registry-a-good-idea.html]here[/url], and even an alternative to the Registry [url=http://msdn.microsoft.com/en-us/library/k4s6c3a0.aspx]here[/url], where Microsoft is suggesting a feature in .Net to create custom configuration files.  (It has been years since I programmed in .Net and can't attest to the usefulness of this feature; nevertheless, Microsoft documents a feature here that they suggest using for application and user settings which is in direct opposition to the "store it in the Registry" mantra.)  Finally, I have had Registry corruption cause an unrecoverable workstation that required a reinstall; I've never had a corrupted .cfg file under . . . let's just say "other operating systems" . . . cause me to not be able to get it back up and running, even if I had to boot into rescue/emergency mode.

    Now, based on what I can see, I will agree that Microsoft's implementation of text-based configuration files is shit. They've not updated the interface to process them, leading to the limitations that have existed since Win 3 (probably 2, but I had no experience with that). There's no reason an updated interface couldn't provide better .ini handling that addresses the problems you believe exist. UNIX-y OSes have used config files for decades. The Registry is but 17 years old and is only used by one common OS. If the idea was really so much better, I'd have expected other OSes to adopt something similar. But they haven't.
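    (For reference, the .NET alternative linked above amounts to an XML file next to the executable plus the ConfigurationManager class; the "CacheDir" key here is invented for illustration:)

```csharp
using System;
using System.Configuration;   // add a reference to System.Configuration.dll

class SettingsDemo
{
    static void Main()
    {
        // Reads <appSettings> from MyApp.exe.config, e.g.:
        //   <configuration>
        //     <appSettings>
        //       <add key="CacheDir" value="C:\Temp\Cache" />
        //     </appSettings>
        //   </configuration>
        string cacheDir = ConfigurationManager.AppSettings["CacheDir"];
        Console.WriteLine(cacheDir ?? "(not set)");
    }
}
```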

     



  • @blakeyrat said:

    *) Wow! Look how fast that video encoder is! It encodes videos 2 hours faster than the competition! Of course it takes 4 hours to learn the fucking arcane UI, so your net gain is -2 hours.

    It's a one-time cost.  Once the UI is understood, the program will encode each video two hours faster than the competition and there's no additional learning curve required.  If in all other aspects it was superior to the competition, I'd still purchase TheCPUWizard's product and take the temporary hit during training.



  • @Jaime said:

    @TheCPUWizard said:
    Wouldn't it be better if developers were aware that a) These Conditions Exist, b) The Situations where they are Applicable, and c) That there are Designs that can avoid these issues, when necessary [even if the person in question does not have the direct knowledge to implement such designs]?

    Yes.

    However, most programmers are only capable of specializing in so many things. Half of the cargo-cult programming out there came from performance optimization. Most of the worst sins made by programmers were made in the name of performance. Also, most developers will take any distraction they can get to avoid dealing with "soft skills" like usability, proper naming, and even proper architecture. It's often beneficial to steer programmers away from performance tweaking until they have matured to the point where they ignore your guidance and do the right thing.

    So, I generally tell people to ignore performance issues until they know better than to listen to me. As long as they listen to me, they are either novices or cargo-cultists and can't be trusted making performance optimizations, so they are right to listen to me. As soon as they can present a strong, well thought out argument for what they want to do, it's time to let them go.

    BTW, my world is very different from yours. I can't buy a server small enough to have real performance problems with the workloads I have.

    100% agreement. What I think makes software development so interesting is that you can get a fairly large group together and have each person still say "my world is very different from yours" to at least half of the people in the room. When people follow blindly (e.g. "you shouldn't care" without knowing anything about the actual situation), then cargo cultism is almost a certainty, with all of the problems that entails. But when people listen and learn from each other, adapting what applies and knowing what to ignore, it seems that everyone ends up learning more.



  • @nonpartisan said:

    Problems with the Registry here, here, and even an alternative to the Registry here, where Microsoft is suggesting a feature in .Net to create custom configuration files. (It has been years since I programmed in .Net and can't attest to the usefulness of this feature; nevertheless, Microsoft documents a feature here that they suggest using for application and user settings which is in direct opposition to the "store it in the Registry" mantra.)

    Are you trying to imply that I said the Registry doesn't have any problems? Or... what is the point of this paragraph?

    @nonpartisan said:

    Now, based on what I can see I will agree that Microsoft's implementation of text-based configuration files is shit. They've not updated the interface to process them, leading to the limitations that existed since Win 3 (probably 2, but I had no experience with that).

    That's because they've been deprecated since Windows 3. Duh?

    @nonpartisan said:

    There's no reason an updated interface couldn't provide better .ini handling that addresses the problems you believe exist.

    Raymond lists several reasons in the link I gave you. You should actually read it. If we're going to discuss the article, it would be nice if you demonstrated that you at least skimmed it briefly.

    @nonpartisan said:

    UNIX-y OSes have used config files for decades.

    Yeah, and that's why it's so popular for corporate networks-- oh wait it's not. Because the Registry provides all those fancy handy-dandy things like Group Policies that large centralized networks need so badly.

    Yes, UNIX-y OSes have used config files for decades. UNIX-y OSes have also sucked ass for decades, and they've gone decades without a substantial increase in user-base. So obviously citing that is, shall we say, not a good idea.

    @nonpartisan said:

    If the idea was really so much better, I'd have expected other OSes to adopt something similar. But they haven't.

    First of all, yes they have. NetWare had a Registry, for example, although I'm not sure if the most recent version still does. Strangely, NetWare was also very popular for managing large corporate networks-- I'm noticing a pattern here!

    Secondly, while it's true that other OSes haven't adopted something similar to the Registry, it's also true that no other OS has features the Registry enables. CAR ANALOGY: "your car doesn't have a dashboard readout for the TPMS!" "That's because my car doesn't have a TPMS."

    Here's a fun challenge for you: you're a network administrator, and you have a network full of Macs. How do you force them to "require password on wake from sleep?" In Windows, it's trivial. In OS X? The most advanced UNIX-esque OS? Well... you can do it at install time, I guess, but if you don't do it then, then you're fucked-- you're back to the 1995 solution of walking to every single workstation, changing the setting, then locking the user account from changing it back. EFFICIENT!

    What you need to point out to me, when saying the Registry is oh so horrible, is an OS that manages to have all of the features Windows has while simultaneously not having a Registry. That thing doesn't exist. Or if it does, please show it to me, because I'd love to see it.



  • @TheCPUWizard said:

    When people follow blindly (e.g. "you shouldn't care" without knowing anything about the actual situation), then cargo cultism is almost a certainty, with all of the problems that entails.

    Except if you really agreed with Jaime, you'd realize "you shouldn't care" is the default position to take when it comes to optimization, for the reasons Jaime discusses. You're trying to have it both ways, buddy, and it's just not gonna happen, ok?

    I already conceded that there is, in that 0.01% of cases, some value in knowing when to dig into this stuff for performance optimization. But:
    1) If you ever reached that case, you've probably already failed somewhere else in your app (it's freakin' 2012, we all know to scale wide, not high, right?)
    2) If you do implement your "optimizations", your code becomes much more fragile and likely to break and requires more maintenance in the long run
    3) In the time it takes you to figure out your optimization and get the code humming, computer hardware got 2 times faster and it's no longer an issue-- buying a new server, even a really beefy one, is cheaper than 3 months of an employee's time



  • @blakeyrat said:

    Here's a fun challenge for you: you're a network administrator, and you have a network full of Macs. How do you force them to "require password on wake from sleep?" In Windows, it's trivial. In OS X? The most advanced UNIX-esque OS? Well... you can do it at install time, I guess, but if you don't do it then, then you're fucked-- you're back to the 1995 solution of walking to every single workstation, changing the setting, then locking the user account from changing it back. EFFICIENT!

    Guess you don't use Macs in a corporate environment much do you???? (I do very, very little with Apple products, but even I know this)... There are plenty of solutions; there is absolutely no need for "the 1995 solution of walking to every single workstation". One system that I know is used in a couple of big (>1K machines) corporate environments: http://www.jamfsoftware.com/



  • @blakeyrat said:

    @TheCPUWizard said:
    When people follow blindly (e.g. "you shouldn't care" without knowing anything about the actual situation), then cargo cultism is almost a certainty, with all of the problems that entails.

    Except if you really agreed with Jaime, you'd realize "you shouldn't care" is the default position to take when it comes to optimization, for the reasons Jaime discusses. You're trying to have it both ways, buddy, and it's just not gonna happen, ok?

    I already conceded that there is, in that 0.01% of cases, some value in knowing when to dig into this stuff for performance optimization. But:
    1) If you ever reached that case, you've probably already failed somewhere else in your app (it's freakin' 2012, we all know to scale wide, not high, right?)
    2) If you do implement your "optimizations", your code becomes much more fragile and likely to break and requires more maintenance in the long run
    3) In the time it takes you to figure out your optimization and get the code humming, computer hardware got 2 times faster and it's no longer an issue-- buying a new server, even a really beefy one, is cheaper than 3 months of an employee's time

    There is a world of difference between "not caring" (or "not knowing") and having sufficient knowledge to know that something is not applicable. I have never said that developers need to DO anything in the vast majority of cases. Even taking your percentage, there are an estimated 20-35 million applications written per year, so this means there are thousands of cases per year. (I would place the percentage between 0.5% and 2%.)

    a) Try to "scale wide" when you have power constraints, or space/weight constraints... yes, these apply to .NET applications; in fact there was a great MSDN article about 18 months ago (http://msdn.microsoft.com/en-us/magazine/gg232761.aspx)

    b) If your code fails to meet requirements, then it is a fail. So, yes, programs which are "pushing the envelope" will be more difficult to develop and test. In my experience, however, this has NOT translated into "more fragile", "more likely to break" or "requires more maintenance", provided that the ongoing support is done by people with equivalent knowledge, skills and experience as the people who developed the original code.

    c) If one has already invested the time to "figure out your optimization and get the code humming", then there is very little time required (hours or days, worst case weeks) to apply this to future work. Additionally, the improvement continues as the hardware gets faster (it has been quite some time since pure CPU time has actually gotten "2 times faster" at a given price point). Even once the hardware is faster, the optimized software will still be faster than the non-optimized version; this can often be the opportunity to add new features while still meeting performance obligations.



  • @nonpartisan said:

    here

    Wow Atwood's a dumbass and so are his readers.

    1) He doesn't mention a single positive aspect of the Registry, probably because he's entirely ignorant of why it's there or what features it enables. He could have researched this by talking to-- oh wait research? Hahaha!
    2) In the comments he blames the Registry for a driver bug, in the paraphrased words of one comment, "maybe the Registry gets all the blame because that's where buggy software keeps all their buggy settings".

    The readers are even dumber:

    3) Registry functions are kept in the Microsoft namespace because they're specific to Microsoft OSes. .net is cross platform, dumbshits. (Atwood piles on to this retarded point.)
    4) .net doesn't use the Registry for ... the exact same reason as point 3. Not because .net developers "hate" the Registry, or whatever fake-ass reason they're assuming, but because .net is cross-platform and therefore can't rely on the Registry even existing at runtime.
    5) One genius proposed replacing the Registry with a database. What the fuck do you think the Registry is, numbnuts!? (Presumably he means "relational database", but he's still a retard. And he doesn't even explain how a relational database would behave any differently than how it behaves now. I guess it would be slightly slower.)

    Jesus. For once I'd like to read a Jeff Atwood article that didn't leave me feeling dumber than before I started reading. How the hell does that guy have a following at all?



  • @TheCPUWizard said:

    Guess you don't use Macs in a corporate environment much do you????

    Not in the last 5 years!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    It seems that Apple has finally caught up to what Microsoft was providing in 1995. Kudos to them.



  • @TheCPUWizard said:

    a) Try to "scale wide" when you have power constraints, or space/weight constraints... yes, these apply to .NET applications; in fact there was a great MSDN article about 18 months ago (http://msdn.microsoft.com/en-us/magazine/gg232761.aspx)

    What does that article have to do with optimizing an application? I mean, the entire article's point is, "hey this device is very power constrained, let's make it a dumb client of a beefy server elsewhere", which is duh.

    Did you post the wrong link? Or what?

    @TheCPUWizard said:

    provided that the ongoing support is done by people with equivalent knowledge, skills and experience as the people who developed the original code.

    You are an extremely optimistic person.

    @TheCPUWizard said:

    (it has been quite some time since pure CPU time has actually gotten "2 times faster" at a given price point).

    Spoken by a man who has never used Amazon Web Services.



  • @blakeyrat said:

    @TheCPUWizard said:
    Guess you don't use Macs in a corporate environment much do you????

    Not in the last 5 years!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    It seems that Apple has finally caught up to what Microsoft was providing in 1995. Kudos to them.

    Typical response... first you start with a challenge in the present tense: "Here's a fun challenge for you: you're a network administrator" [in case you are unaware, "you're" is a contraction of "you are", not "you were"]...

    Then you try to beg off by claiming no knowledge because you haven't used them in 5 years... when the product family I linked to has been around for 10 years (+/- a few months), meaning it was an available tool for (approx) 5 years when, by your own statement, you last worked with Macs in a corporate environment.

    But coming from you (based on your posts on this site), such a narrow-minded view is quite expected.



  • @blakeyrat said:

    @TheCPUWizard said:
    a) Try to "scale wide" when you have power constraints, or space/weight constraints... yes, these apply to .NET applications; in fact there was a great MSDN article about 18 months ago (http://msdn.microsoft.com/en-us/magazine/gg232761.aspx)

    What does that article have to do with optimizing an application? I mean, the entire article's point is, "hey this device is very power constrained, let's make it a dumb client of a beefy server elsewhere", which is duh.

    Did you post the wrong link? Or what?

    @TheCPUWizard said:

    provided that the ongoing support is done by people with equivalent knowledge, skills and experience as the people who developed the original code.

    You are an extremely optimistic person.

    @TheCPUWizard said:

    (it has been quite some time since pure CPU time has actually gotten "2 times faster" at a given price point).

    Spoken by a man who has never used Amazon Web Services.

    Double-checked the link... it is the intended article... my point was: how would you "scale out", "add another server", etc. for a device mounted on the handlebars of a bicycle?!?!?!?!?!?!

    Also, I have used AWS quite frequently. While there are conditions where it is fantastic, there are limitations. Also, the price point is not as good as it may seem for a wide variety of situations. If your needs are intermittent (or periodic), then yes, the ability to spin up instances can provide a much lower cost than building local hardware. But there have actually been a fair number of companies (mainly those doing computational research) that are finding the long-term costs are not so great, and have moved back to their own (upgraded) datacenters.



  • @blakeyrat said:

    If you're that one guy who needs to know it, fine. But don't pretend it's useful general knowledge for programmers, because the entire point of garbage collection and the CLR is to shield the programmer from having to think about any of this shit at all, ever. So stop thinking about it and build some software.

    All that time you're spending wanking around with these pointless experiments is time you could have been spending on additional QA to smooth out the rough bits, or on user testing to make sure your application isn't a wide-awake nightmare to use. (Which, from my experience, most applications overly concerned with performance are*.) Look, you disagree with me: fine, I get that.

    I can only speak for myself, but I investigated because I find it an interesting topic. Now that I know it's not an issue, I can stop worrying about it and nest try, using, catch and finally up the wazoo (for suitable values of wazoo) if that is more readable, without worrying about the performance impact.

    I wrote this test code during some idle time over the weekend, so I wouldn't have been doing QA anyway. And as an added bonus I learned about the Stopwatch class in .NET. Time well spent over a rainy weekend.



  • @blakeyrat said:

    If you're that one guy who needs to know it, fine. But don't pretend it's useful general knowledge for programmers, because the entire point of garbage collection and the CLR is to shield the programmer from having to think about any of this shit at all, ever. So stop thinking about it and build some software.

    So true. Except, the dotnet GC isn't ok, it is not a solved problem. It actually causes problems. And knowing how it works? Well, in my experience, that didn't help much.



  • @Weps said:

    Except, the dotnet GC isn't ok, it is not a solved problem. It actually causes problems.

    Do you have a reference about the sorts of problems it causes? Or even something resembling a detail?



  • @boomzilla said:

    @Weps said:
    Except, the dotnet GC isn't ok, it is not a solved problem. It actually causes problems.

    Do you have a reference about the sorts of problems it causes? Or even something resembling a detail?

    Imagine a time-critical image processing application that should not be the limiting factor for the hardware. And it should process, say, 20,000 images in 10 minutes.

    Now add a GC that itself will determine when to free memory. And, when it does, it will block threads.



  • @Weps said:

    @boomzilla said:

    @Weps said:
    Except, the dotnet GC isn't ok, it is not a solved problem. It actually causes problems.

    Do you have a reference about the sorts of problems it causes? Or even something resembling a detail?

    Imagine a time-critical image processing application that should not be the limiting factor for the hardware. And it should process, say, 20,000 images in 10 minutes.

    Now add a GC that itself will determine when to free memory. And, when it does, it will block threads.

    Uh huh...? And...? We're waiting for the second half of this amusing tale!



  • Dunno, 20,000 images in 10 minutes means 30ms per image. GC could conceivably have an impact on that timing.

    Let's wait for the conclusion of this epic mythos.

     

     



  • @pkmnfrk said:

    Uh huh...? And...? We're waiting for the second half of this amusing tale!

    Enlighten me. (referring to your hint). 



  • @Weps said:

    @pkmnfrk said:

    Uh huh...? And...? We're waiting for the second half of this amusing tale!

    Enlighten me. (referring to your hint). 

    YOU SET UP A STORY AND NEVER FINISHED IT!

    Look, so we have 20,000 images, 10 minutes, the GC decides when to free memory, and it can suspend threads to do so. That's a great set-up, now where's the FREAKING STORY!?

    If you don't provide one, I'll just assume it concludes, "and so the program ran fine with like 3 minutes to spare, man I'm an idiot why did I think the GC was bad I'm so stupid."



  • @blakeyrat said:

    @Weps said:
    @pkmnfrk said:

    Uh huh...? And...? We're waiting for the second half of this amusing tale!

    Enlighten me. (referring to your hint). 

    YOU SET UP A STORY AND NEVER FINISHED IT!

    Look, so we have 20,000 images, 10 minutes, the GC decides when to free memory, and it can suspend threads to do so. That's a great set-up, now where's the FREAKING STORY!?

    If you don't provide one, I'll just assume it concludes, "and so the program ran fine with like 3 minutes to spare, man I'm an idiot why did I think the GC was bad I'm so stupid."

    Gosh, my fault thinking I was dealing with Sherlocks over here.

    So, I guess, there is no one here who finds it abnormal that a GC blocks your application, on all threads. And in 'our' case, that was for SECONDS... Yes, we have a few terabytes of data to process.

    And I guess, there is no one here that finds it abnormal that you can actually have out of memory exceptions caused by a "full" GC.

    None of these issues are solved, just 'prevented'. More CPU power, more cores, more memory and a few more calls to the GC to clean up (which afaik it still only does when it wants to). 

    Neither of those two issues would have occured without a GC, or maybe with the java variant (I got no clue at to how the java gc works). And I assume they changed or fixed stuff in the umptieth dot net patch. I don't know. But I do know that if it was that good, I would not need anything to know about it.



  • @Weps said:

    So, I guess, there is no one here who finds it abnormal that a GC blocks your application, on all threads. And in 'our' case, that was for SECONDS.... Yes, we have a few terabytes of data to process.

    And I guess there is no one here who finds it abnormal that you can actually have out-of-memory exceptions caused by a "full" GC.

    None of these issues are solved, just 'prevented': more CPU power, more cores, more memory, and a few more calls to the GC to clean up (which AFAIK it still only does when it wants to).

    Neither of those two issues would have occurred without a GC, or maybe with the Java variant (I have no clue as to how the Java GC works). And I assume they changed or fixed stuff in the umpteenth .NET patch. I don't know. But I do know that if it was that good, I would not need to know anything about it.

    I have worked with large systems, but never where there are "a few terabytes of data" in memory (which is all the GC cares about) [the biggest server I have access to has 512GB]...

    With any memory management (automatic or hand-coded), there will be time taken. So even if the specific issues you saw did not happen, it is likely that something else would have. So it is not a choice between "problem" and "perfection", but rather a choice between problem domains.

    I have worked with real-time processing of images where there is a large amount of data being processed continually. Even here it is extremely rare to have "full" garbage collections (Gen 2), perhaps one every 2-3 days; and then they run with about 250ms to 600ms of "all threads blocked". Given that Windows is not an RTOS, there are plenty of other "normal" system conditions where application threads may not execute for even longer periods, so it has not been a determining factor.

    Based on systems I have worked on where issues such as you are reporting have occurred, I have found a few common design/implementation flaws:

        * Not selecting the "right" GC engine.
        * Keeping objects around longer than necessary (triggering promotions). Even on a small machine (i5 2.4GHz, 8GB), .NET has no problem with upwards of 5 million allocations per second (that is, distinct calls to "new", not a byte count) without a noticeable performance differential.
        * Not properly handling "large objects" [over 85,000 bytes for a single monolithic allocation].
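
    On the first point -- "selecting the right GC engine" -- note that in the .NET Framework the choice between the workstation and server collectors (and concurrent vs. non-concurrent collection) is made in the application's configuration file, not in code. Below is a minimal sketch of an app.config that opts into the server GC; the `gcServer` and `gcConcurrent` elements are the real framework settings, while the rest of the file is generic boilerplate:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <!-- Server GC: one managed heap segment and GC thread per core.
         Better throughput on multi-core machines; the default is workstation GC. -->
    <gcServer enabled="true"/>
    <!-- Concurrent/background collection: reduces "all threads blocked" time
         at some cost in total CPU. -->
    <gcConcurrent enabled="true"/>
  </runtime>
</configuration>
```

    Which combination is "right" depends on the workload; the point is that it is a deployment-time knob, so an A/B comparison costs nothing but a config edit and a restart.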



  • @Weps said:

    @pkmnfrk said:

    Uh huh...? And...? We're waiting for the second half of this amusing tale!

    Enlighten me. (referring to your hint). 

    Depending on what you mean by "image processing", the maths would likely be slower than the GC. If you're just cropping them, then maybe not.

    @Weps said:

    Gosh, my fault thinking I was dealing with Sherlocks over here.

    So, I guess, there is no one here who finds it abnormal that a GC blocks your application, on all threads. And in 'our' case, that was for SECONDS.... Yes, we have a few terabytes of data to process.

    And I guess there is no one here who finds it abnormal that you can actually have out-of-memory exceptions caused by a "full" GC.

    None of these issues are solved, just 'prevented': more CPU power, more cores, more memory, and a few more calls to the GC to clean up (which AFAIK it still only does when it wants to).

    Neither of those two issues would have occurred without a GC, or maybe with the Java variant (I have no clue as to how the Java GC works). And I assume they changed or fixed stuff in the umpteenth .NET patch. I don't know. But I do know that if it was that good, I would not need to know anything about it.

    Would you rather it doesn't suspend your thread, so that when it runs it's modifying pointers in code that is actively executing?

    If you're generating terabytes of garbage objects all the time, then yes, duh, you're going to have a lot of clean up.

    If you're generating terabytes of objects which get promoted to higher generations and then releasing them all after a bit... Well, have fun with that.



  • @TheCPUWizard said:

    @Weps said:

    So, I guess, there is no one here who finds it abnormal that a GC blocks your application, on all threads. And in 'our' case, that was for SECONDS.... Yes, we have a few terabytes of data to process.

    And I guess there is no one here who finds it abnormal that you can actually have out-of-memory exceptions caused by a "full" GC.

    None of these issues are solved, just 'prevented': more CPU power, more cores, more memory, and a few more calls to the GC to clean up (which AFAIK it still only does when it wants to).

    Neither of those two issues would have occurred without a GC, or maybe with the Java variant (I have no clue as to how the Java GC works). And I assume they changed or fixed stuff in the umpteenth .NET patch. I don't know. But I do know that if it was that good, I would not need to know anything about it.

    I have worked with large systems, but never where there are "a few terabytes of data" in memory (which is all the GC cares about) [the biggest server I have access to has 512GB]...

    With any memory management (automatic or hand-coded), there will be time taken. So even if the specific issues you saw did not happen, it is likely that something else would have. So it is not a choice between "problem" and "perfection", but rather a choice between problem domains.

    I have worked with real-time processing of images where there is a large amount of data being processed continually. Even here it is extremely rare to have "full" garbage collections (Gen 2), perhaps one every 2-3 days; and then they run with about 250ms to 600ms of "all threads blocked". Given that Windows is not an RTOS, there are plenty of other "normal" system conditions where application threads may not execute for even longer periods, so it has not been a determining factor.

    Based on systems I have worked on where issues such as you are reporting have occurred, I have found a few common design/implementation flaws:

        * Not selecting the "right" GC engine.
        * Keeping objects around longer than necessary (triggering promotions). Even on a small machine (i5 2.4GHz, 8GB), .NET has no problem with upwards of 5 million allocations per second (that is, distinct calls to "new", not a byte count) without a noticeable performance differential.
        * Not properly handling "large objects" [over 85,000 bytes for a single monolithic allocation].

     Well, we use 'em and 'free' 'em. Ain't much more to it.

    And of course, there will always be time taken by a MM, though I'd rather have that time taken directly when an instance is no longer referenced than further up the road (whenever something makes the GC do its thing).



  • @pkmnfrk said:

    @Weps said:
    @pkmnfrk said:

    Uh huh...? And...? We're waiting for the second half of this amusing tale!

    Enlighten me. (referring to your hint). 

    Depending on what you mean by "image processing", the maths would likely be slower than the GC. If you're just cropping them, then maybe not.

    So what's the hint here? Be faster than the GC?



  • @blakeyrat said:

    Are you trying to imply that I said the Registry doesn't have any problems? Or... what is the point of this paragraph?

    In your original post, you told me that I "have to admit" that the Registry solves a number of problems and provided that link.  The Registry is like those ads where they say "Take MegaDrug to cure Athlete's Tongue!!", but then "May cause drowsiness, halitosis, dizziness, vomiting, diarrhea, frequent urination, rapid heart rate, slow heart rate, rapid breathing, slow breathing, tongue swelling, permanent nerve damage, sexual attraction to [pangolins/corpses/trees/buildings/soda bottles], suicidal or homicidal ideations."  Maybe it fixed the immediate perceived problems, but the side effects just blow chunks.

    There was no reason the .INI format couldn't have been tweaked a little bit and enhanced to provide capabilities similar to the Registry's.  Want enhanced permissions?  Okay, put certain settings in one .INI file and other settings in another .INI file and set the permissions as appropriate.  That allows for a hierarchical setup of configuration files and prevents corruption of a single .INI file from completely wrecking the system.  Windows doesn't normally compress the allocated size of the Registry on its own; if I have a text config file that goes from 100 KB down to 20 KB (for any reason -- take your pick), I get that space back in the file system.  I don't get that space back in the Registry, even though internally the Registry really acts like a file system.  And it's easy to add comments to a .INI file to understand what's going on.

    @blakeyrat said:

    First of all, yes they have. Netware had a Registry, for example, although I'm not sure if the most recent version still does. Strangely, Netware was also very popular for managing large corporate networks-- I'm noticing a pattern here!

    Novell was used under DOS and Win 3.1 without the client needing a massive Windows Registry.

    @blakeyrat said:

    Here's a fun challenge for you: you're a network administrator, and you have a network full of Macs. How do you force them to "require password on wake from sleep?" In Windows, it's trivial. In OS X? The most advanced UNIX-esque OS? Well... you can do it at install time, I guess, but if you don't do it then, then you're fucked-- you're back to the 1995 solution of walking to every single workstation, changing the setting, then locking the user account from changing it back. EFFICIENT!

    Massive fail there.  I don't know the name of the product -- it's being managed by our field technical services group -- but I set up the networking for the servers that are used to manage our corporate Macs.  And TheCPUWizard already linked to a product that does exactly this and has been around for many years.  Next?

    @blakeyrat said:

    What you need to point out to me, when saying the Registry is oh so horrible, is an OS that manages to have all of the features Windows has while simultaneously not having a Registry. That thing doesn't exist. Or if it does, please show it to me, because I'd love to see it.

    Whatever I come up with, even if the feature is comparable, you're going to shoot down my examples.  So why don't you list the features that Windows provides that you feel can't even be possible without using the Registry and I'll find counterexamples to those?



  • @Weps said:

    @pkmnfrk said:
    @Weps said:
    @pkmnfrk said:

    Uh huh...? And...? We're waiting for the second half of this amusing tale!

    Enlighten me. (referring to your hint). 

    Depending on what you mean by "image processing", the maths would likely be slower than the GC. If you're just cropping them, then maybe not.

    So what's the hint here? Be faster than the GC?

    The hint here is that if you're generating a lot of garbage, then don't be surprised when Mom takes forever to clean it up.



  • @blakeyrat said:

    3) Registry functions are kept in the Microsoft namespace because they're specific to Microsoft OSes. .net is cross platform, dumbshits. (Atwood piles on to this retarded point.)

    4) .net doesn't use the Registry for ... the exact same reason as point 3. Not because .net developers "hate" the Registry, or whatever fake-ass reason they're assuming, but because .net is cross-platform and therefore can't rely on the Registry even existing at runtime.

    Okay, if we're playing the "you have to admit" game, you have to admit that, for all practical purposes, .Net is cross-platform in name only.  I know of Mono, but realistically right now anything that is .Net has to run on a Windows-based platform of some kind.  Which means there is some form of a Registry.  Which means that Microsoft is still advocating for a way to save configuration data outside of the Registry.



  • @pkmnfrk said:

    The hint here is that if you're generating a lot of garbage, then don't be surprised when Mom takes forever to clean it up.

    Ahh, the greatest fallacy... but first, a diversion....

    What is the difference between how a single woman cleans her apartment and how a bachelor does? [please, no posts about stereotypes]

    The answer is that the woman will spend hours cleaning up the garbage, while the bachelor will take the "good stuff" (non-garbage), move to a new, clean apartment, and start over...

    The relationship is that people think the performance of the GC is in any way related to the amount of garbage. It is not. It is directly related to the number of "live objects" that have to be scanned and preserved.

    Reduce the number of objects that have to be scanned, and reduce the number of objects that have to be preserved (promoted), and the time taken for a GC cycle will drop tremendously.

    During training, I often use a paired example. The first condition creates a bunch of objects and keeps them all "live". The second condition creates the exact same structure of objects, then makes them all "garbage". I ask the students which will take longer for a GC cycle, and the majority (often all) of them will reply "the one with the most garbage, of course" -- and they are all wrong. When they see the "0 garbage" case take nearly 100ms and the "2GB of garbage" case take under 100µs, the first reaction is "your test is flawed". After a bit of study, the lightbulbs start to go on.

    (But nobody "should care" about this.... right?)
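
    The classroom demo described above can be sketched in a few lines of C#. This is an illustrative reconstruction, not the trainer's actual code -- the `Node` type, chain length, and payload size are made up -- but the shape of the experiment is the point: identical allocations, differing only in whether the objects stay reachable when the collection runs.

```csharp
using System;
using System.Diagnostics;

class Node
{
    public Node Next;
    public byte[] Payload = new byte[1024]; // ~1KB per node
}

class GcDemo
{
    static Node BuildChain(int count)
    {
        Node head = null;
        for (int i = 0; i < count; i++)
            head = new Node { Next = head };
        return head;
    }

    static void Main()
    {
        // Condition 1: objects stay live -- the GC must trace and preserve them all.
        Node live = BuildChain(1000000);
        var sw = Stopwatch.StartNew();
        GC.Collect(2, GCCollectionMode.Forced, blocking: true);
        Console.WriteLine("live set:    " + sw.ElapsedMilliseconds + " ms");
        GC.KeepAlive(live);

        // Condition 2: same structure, but made garbage before the collection.
        live = null;
        Node dead = BuildChain(1000000);
        dead = null;                    // everything is now unreachable
        sw.Restart();
        GC.Collect(2, GCCollectionMode.Forced, blocking: true);
        Console.WriteLine("all garbage: " + sw.ElapsedMilliseconds + " ms");
    }
}
```

    Exact timings vary by runtime and machine, but the live-set collection should be dramatically slower, which is the trainer's point: collection cost tracks reachable objects, not garbage.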



  • @nonpartisan said:

    @blakeyrat said:

    3) Registry functions are kept in the Microsoft namespace because they're specific to Microsoft OSes. .net is cross platform, dumbshits. (Atwood piles on to this retarded point.)
    4) .net doesn't use the Registry for ... the exact same reason as point 3. Not because .net developers "hate" the Registry, or whatever fake-ass reason they're assuming, but because .net is cross-platform and therefore can't rely on the Registry even existing at runtime.
    Okay, if we're playing the "you have to admit" game, you have to admit that, for all practical purposes, .Net is cross-platform in name only.  I know of Mono, but realistically right now anything that is .Net has to run on a Windows-based platform of some kind.  Which means there is some form of a Registry.  Which means that Microsoft is still advocating for a way to save configuration data outside of the Registry.

    What are you smoking? 1) You admitted that you know of mono, it's ON LINUX.  2) IT'S AN OPEN SPEC.  The only thing stopping there from being a version for MacOS is APPLE.



  • @Sutherlands said:

    The only thing stopping there from being a version for MacOS is APPLE FANBOIS.

    FTFY. Apple would probably do it if there were money in it. As it is, they know what their users' reaction to .NET would be: ".NET? sniff That doesn't even go with my brushed aluminum case, my MINI Cooper with the 'Think Global, Act Local' bumper sticker, or my ironic army jacket."



  • @Sutherlands said:

    What are you smoking? 1) You admitted that you know of mono, it's ON LINUX.  2) IT'S AN OPEN SPEC.  The only thing stopping there from being a version for MacOS is APPLE

    You're absolutely right.  I wasn't aware that Mono was so far advanced.  I shall eat my serving of crow.  +1 to Blakey for the cross-platform bit, but -1 to Blakey for not knowing that Macs can be centrally managed without using a Registry.



  • @nonpartisan said:

    Finally, I have had Registry corruption cause an unrecoverable workstation that required a reinstall;

    In what decade?

    @nonpartisan said:

    I've never had a corrupted .cfg file under . . . let's just say "other operating systems" . . . cause me to not be able to get it back up and running, even if I had to boot into rescue/emergency mode.

    If you had backed up your registry you should have been able to get it working again (doesn't System Restore just do that automagically now?) Just as you should back up your text config files. What you're basically saying is "I don't do backups and then I get all pissy when I lose shit."

    Not to mention, Gnome, the most common FOSS desktop environment uses a registry. Of course I wouldn't expect you to know this since you just installed Ubuntu for the first time last week.

    It seems like the entire anti-registry argument boils down to "There might be some crap in there I don't need" which is stupid, unless it's causing an actual problem. A modern Linux install has thousands upon thousands of config files crammed full of shit most people never even look at. The nice thing about a registry is that configuration settings appear in a predictable place (instead of scattered wherever the upstream project and/or distro maintainer decided) and have a good API for programmatic manipulation. Also you get strict-typing, field-level security and a consistent API that's easily understood by any programmer familiar with your platform. The downside is that it appears to be slightly more opaque than a text file, which hardly seems to be much to complain about.
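
    The "good API for programmatic manipulation" point can be made concrete. In .NET the Registry is exposed through the `Microsoft.Win32` namespace; the calls below (`CreateSubKey`, `SetValue`, `GetValue`, `RegistryValueKind`) are the real API, while the key path and value names are made-up examples:

```csharp
using System;
using Microsoft.Win32;

class RegistrySketch
{
    static void Main()
    {
        // Create (or open) a per-user key -- no admin rights needed under HKCU.
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\ExampleApp"))
        {
            // Values are written with an explicit kind (DWORD, string, etc.).
            key.SetValue("RetryCount", 5, RegistryValueKind.DWord);
            key.SetValue("ServerName", "example-host", RegistryValueKind.String);

            // GetValue returns object; the second argument is a default
            // used when the value does not exist.
            int retries = (int)key.GetValue("RetryCount", 0);
            string server = (string)key.GetValue("ServerName", "");
            Console.WriteLine(server + ": " + retries + " retries");
        }
    }
}
```

    Compare that with an .INI or ad-hoc text file, where every application re-invents parsing, quoting, and type conversion on its own.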



  • @morbiuswilters said:

    @nonpartisan said:
    Finally, I have had Registry corruption cause an unrecoverable workstation that required a reinstall;

    In what decade?

    @nonpartisan said:

    I've never had a corrupted .cfg file under . . . let's just say "other operating systems" . . . cause me to not be able to get it back up and running, even if I had to boot into rescue/emergency mode.

    If you had backed up your registry you should have been able to get it working again (doesn't System Restore just do that automagically now?) Just as you should back up your text config files. What you're basically saying is "I don't do backups and then I get all pissy when I lose shit."

    Not to mention, Gnome, the most common FOSS desktop environment uses a registry. Of course I wouldn't expect you to know this since you just installed Ubuntu for the first time last week.

    It seems like the entire anti-registry argument boils down to "There might be some crap in there I don't need" which is stupid, unless it's causing an actual problem. A modern Linux install has thousands upon thousands of config files crammed full of shit most people never even look at. The nice thing about a registry is that configuration settings appear in a predictable place (instead of scattered wherever the upstream project and/or distro maintainer decided) and have a good API for programmatic manipulation. Also you get strict-typing, field-level security and a consistent API that's easily understood by any programmer familiar with your platform. The downside is that it appears to be slightly more opaque than a text file, which hardly seems to be much to complain about.

    "But I can't edit it in vi!"



  • @pkmnfrk said:

    @morbiuswilters said:
    @nonpartisan said:
    Finally, I have had Registry corruption cause an unrecoverable workstation that required a reinstall;

    In what decade?

    @nonpartisan said:

    I've never had a corrupted .cfg file under . . . let's just say "other operating systems" . . . cause me to not be able to get it back up and running, even if I had to boot into rescue/emergency mode.

    If you had backed up your registry you should have been able to get it working again (doesn't System Restore just do that automagically now?) Just as you should back up your text config files. What you're basically saying is "I don't do backups and then I get all pissy when I lose shit."

    Not to mention, Gnome, the most common FOSS desktop environment uses a registry. Of course I wouldn't expect you to know this since you just installed Ubuntu for the first time last week.

    It seems like the entire anti-registry argument boils down to "There might be some crap in there I don't need" which is stupid, unless it's causing an actual problem. A modern Linux install has thousands upon thousands of config files crammed full of shit most people never even look at. The nice thing about a registry is that configuration settings appear in a predictable place (instead of scattered wherever the upstream project and/or distro maintainer decided) and have a good API for programmatic manipulation. Also you get strict-typing, field-level security and a consistent API that's easily understood by any programmer familiar with your platform. The downside is that it appears to be slightly more opaque than a text file, which hardly seems to be much to complain about.

    "But I can't edit it in vi!"

    vi can edit anything: you're just not trying hard enough. Also: there are CLI tools for editing the Registry for Windows and Linux, if for some reason you are afraid of GUIs.



  • @morbiuswilters said:

    In what decade

    Last decade.  Windows XP.  Generic workstation in the emergency department.  Based on a generic build, so it wasn't difficult to restore.  No permanent data storage -- used for reading radiology films, patient record access, etc.  Still, it took a couple of hours.  On a machine that used text config files I'd have had the machine back up in less time than that, without a full reinstall.

    @morbiuswilters said:

    Not to mention, Gnome, the most common FOSS desktop environment uses a registry. Of course I wouldn't expect you to know this since you just installed Ubuntu for the first time last week.

    I don't have a problem with a "registry" per se, with a common API to store settings.  I have a problem with the Windows Registry, a binary blob of data that can get totally fucked if one sector gets corrupted, potentially causing the entire machine to be unbootable.  This is the context from which my other replies should be taken.  Anyone who understands the idea of "context" should have understood I was talking about the Windows implementation of a registry (thus always capitalizing the "R" to refer to the Windows Registry).  No, I don't think the binary blob method was necessary.  Yes, I believe they could have enhanced the .INI file API and achieved the same effect.  That said, the GNOME registry is all text-based; if I need to copy in a section from a different machine, I can do it using cut-and-paste between two terminal windows.

    @morbiuswilters said:

    It seems like the entire anti-registry argument boils down to "There might be some crap in there I don't need" which is stupid, unless it's causing an actual problem. A modern Linux install has thousands upon thousands of config files crammed full of shit most people never even look at. The nice thing about a registry is that configuration settings appear in a predictable place (instead of scattered wherever the upstream project and/or distro maintainer decided) and have a good API for programmatic manipulation

    Your example of GNOME proves my point . . . you do not need to have a binary-based Registry to be able to store settings.  It's all text-based (XML).  The hierarchy can be represented in the regular file system.  And even then, this applies to the user environment.  If I have a problem with my GNOME profile that I can't figure out, I can wipe it away (like wiping away the profile directory under Windows) and have it recreated.  Yes, I lose all my settings -- just like I lose those settings under my Windows profile if I do that.  And yet, the base system can keep running.  And if a config file in my /etc directory becomes corrupted, I can edit it or copy a replacement over.  If my base system Registry gets corrupted, I'm fucked.  That may not be as true a case with Windows 7 -- I've not had my Registry corrupt under it.  But my experience with XP Registry problems was not pleasant.



  • @blakeyrat said:

    Wow Atwood's a dumbass and so are his readers.

    This is the guy who thought salting passwords meant adding the word "salt" to every password before hashing it and told all of his readers to do that. I also remember some article where he said the only way to get SQL Server to work was to use read uncommitted for all queries. I believe this is when he was doing the coding for Stack Exchange.



  • @morbiuswilters said:

    Also you get strict-typing

    Not really. Registry types are little more than hints -- nothing prevents you from reading a DWORD as a string and vice versa, since the Registry only stores a binary blob and its length (which means that strings can have embedded NULs).



  • @nonpartisan said:

    On a machine that used text config files I'd have had the machine back up in less time than that without a full reinstall.

    You probably could do the same with XP - there are backup copies of registry hives in C:\System Volume Information. Been there, done that.



  • @nonpartisan said:

    @morbiuswilters said:
    In what decade
    Last decade.  Windows XP.  Generic workstation in the emergency department.  Based on a generic build so it wasn't difficult to restore.  No permanent data storage -- used for reading radiology films, patient record access, etc.  Still, took a couple of hours.  On a machine that used text config files I'd have had the machine back up in less time than that without a full reinstall.

    So you aren't backing up and now you're complaining like this is somehow M$'s fault. Let's all take a moment to bask in your competency.

    @nonpartisan said:

    Yes, I believe they could have enhanced the .INI file API and achieved the same effect.

    Yeah, let's endlessly extend a shitty, outdated file format.

    @nonpartisan said:

    That said, the GNOME registry is all text-based; if I need to copy in a section from a different machine, I could do it using cut-and-paste between two terminal windows.

    So "allowing end users to copy-and-paste the entire registry instead of using the correct tool for the job" is now a reason to make shitty technical decisions?

    @nonpartisan said:

    Your example of GNOME proves my point . . . you do not need to have a binary-based Registry to be able to store settings.  It's all text-based (XML).  The hierarchy can be represented in the regular file system.

    An RDBMS can also eschew binary data and just store everything in XML. Obviously, XML is the best format for everything. (And are you seriously editing hundreds of lines of XML in a text editor or are you using an actual tool for the job?)

    @nonpartisan said:

    And if a config file in my /etc directory becomes corrupted, I can edit it or copy a replacement over.

    So your DR plans consist of "look through the thousands of text files and try to find the one corrupted, then fix it by hand".

    Don't you think there are better uses for your 1 hour of free Internet access at the public library than trolling a community of professionals? There's probably some charity group in your area with a website you can scam. Or maybe you can look up how to cook meth using Tylenol Cold and Sinus, Coleman fuel and an empty 2-liter bottle? It seems like the short-lived thrill you get from wasting my time with your retarded comments is nothing compared to the rush you'd get from some fresh-cooked scante.



  • @ender said:

    @morbiuswilters said:
    Also you get strict-typing
    Not really. Registry types are little more than hints -- nothing prevents you from reading a DWORD as a string and vice versa, since the Registry only stores a binary blob and its length (which means that strings can have embedded NULs).

    I think most strictly-typed systems have ways to avoid the type system so they only provide "hints" as well.



  • @ender said:

    @nonpartisan said:
    On a machine that used text config files I'd have had the machine back up in less time than that without a full reinstall.
    You probably could do the same with XP - there are backup copies of registry hives in C:\System Volume Information. Been there, done that.

    Unless you disable System Restore, been there, done that



  • @morbiuswilters said:

    I think most strictly-typed systems have ways to avoid the type system so they only provide "hints" as well.

    I'd expect a strongly-typed system to require at least a flag if you're reading something as a different type -- with the Registry you just call the read function for a different format, and you get the data in that format.
    @serguey123 said:

    Unless you disable System Restore, been there, done that

    In that case, there are saved versions of the hives, which are usually enough to bring up the system without doing a reinstall.



  • @morbiuswilters said:

    < blah blah blah fucking stupid "answers" blah blah blah>

    @morbiuswilters said:

    An RDBMS can also eschew binary data and just store everything in XML. Obviously, XML is the best format for everything. (And are you seriously editing hundreds of lines of XML in a text editor or are you using an actual tool for the job?)

    Quote me where I said that XML is always the best format.  Go ahead, I'm waiting.  In the meantime, an RDBMS file corruption does not take down the whole fucking machine, preventing me from using the basic maintenance tools available for the OS.  A Registry corruption can, and has, done that.

    @morbiuswilters said:

    community of professionals?

    You're right.  My apologies to TheCPUWizard, snoofle.

    @morbiuswilters said:

    Or maybe you can look up how to cook meth using Tylenol Cold and Sinus, Coleman fuel and an empty 2-liter bottle?

    The fact that you know this much about cooking meth shows you know more about it than me.  Thankfully you're moving off the mainland.



  • @nonpartisan said:

    In the meantime, an RDBMS file corruption does not take down the whole fucking machine, preventing me from using the basic maintenance tools available for the OS.

    You're right, it just makes the machine unusable for its primary purpose until the database is recovered. Which is why people make backups and have a DR plan. You keep ignoring this point, by the way.

    @nonpartisan said:

    @morbiuswilters said:
    community of professionals?
    You're right.  My apologies to TheCPUWizard, snoofle.

    Don't forget dhromed. He's a professional*, too.

    @nonpartisan said:

    @morbiuswilters said:
    Or maybe you can look up how to cook meth using Tylenol Cold and Sinus, Coleman fuel and an empty 2-liter bottle?
    The fact that you know this much about cooking meth shows you know more about it than me.  Thankfully you're moving off the mainland.

    I'm glad I'm moving, too, although I don't know why that matters to you. Does the fact that I know basic chemistry and drug policy frighten you? Do you look down on DEA agents for knowing about drugs?



  • @nonpartisan said:

    In the meantime, an RDBMS file corruption does not take down the whole fucking machine, preventing me from using the basic maintenance tools available for the OS

    sudo sh -c 'cat /dev/random > /etc/inittab'

    Luckily, that won't corrupt a Linux system so that it won't boot* oh whoops.



  • @pkmnfrk said:

    @nonpartisan said:
    In the meantime, an RDBMS file corruption does not take down the whole fucking machine, preventing me from using the basic maintenance tools available for the OS

    sudo sh -c 'cat /dev/random > /etc/inittab'

    Luckily, that won't corrupt a Linux system so that it won't boot* oh whoops.

    Boot a rescue disk, replace /etc/inittab, reboot.  15 minutes.

     

