Best Practice'd to death



  • I ran across this code at work a while back (although I changed the subject to hats for anonymization purposes, ignore the silliness of what it seems to do). I couldn't help thinking that this code follows every best practice that a just-out-of-school programmer is taught their first week of work, but every one of them applied as annoyingly as possible. Amazingly, although the coder seems to have done everything according to the coding standards, this couldn't be any worse.



    Here is a list of practices that I see applied:

    • Naming conventions
    • Declarations at the beginning of the procedure (almost)
    • Clean up code
    • Exception handling


    However, here is what I see wrong with it:
    • Although the code follows Hungarian naming conventions, no benefit is gained. Almost every prefix is "Obj" even though there are several incompatible classes represented. The other prefixes only serve to make the code harder to read.
    • Most of the declarations screw up the scope of the variables. There is no reason for ObjHat to exist outside the loop; moving the declaration up only increases the scope and reduces the likelihood that the compiler will help you find misplaced code when it is maintained in the future. Most of the other variables have the same problem. Some, such as ObjHats, probably shouldn't exist at all.
    • Every single piece of cleanup code in this procedure is meaningless. Setting a reference to Nothing at the end of a procedure does nothing. Even in languages where memory management is all manual, simply nulling the references doesn't do it. The variable will go out of scope in a few nanoseconds anyway and the result will be exactly the same. If the objects had actually been disposed, I could see the point. BTW, the Using block was copied from all of the other procedures in the class, so the coder gets no credit for this one tiny shining piece of success.
    • Yes, there is an exception handler. However, an exception handler that catches System.Exception and does absolutely nothing other than return a null reference is a horrible idea. Now any caller has to worry that if anything goes wrong, they will get back a null reference. The caller will also have no idea why the call failed. Any attempt to find the source of failure will be fruitless. Removing all the code from the exception handler would be a functional equivalent. I can't think of a worse criticism of a block of code than the fact that it is functionally equivalent to no code at all. Had the coder not written an exception handler at all, this code would be improved. Actually, I think that's a worse criticism -- the entire exception handler has negative value.
    • The function return is done VB6 style. It's almost as if the coder prefers the old style. Personally, I was very happy when VB.Net got a proper Return.


    This rant wasn't about this code in particular, but about the stupid battle of "coding standards" that I've seen in too many jobs. I have seen very few coding standards that would actually promote reasonable code in my past jobs. At the same time, I've never worked anywhere that didn't have a three page section on variable naming, complete with examples and penalties when the standard is not followed. I would like to know how this pile of garbage would do against your current coding standards. Please refrain from "Our coding standards prohibit VB" as equivalent garbage code can be created in any language. It's just easier to find volunteers to create it in VB.

        <WebMethod()> _
        Public Function GetHatSizes(ByVal HatIDs As List(Of Integer)) As List(Of HatSize)
            Using cn As New SqlClient.SqlConnection(strConnString)
                Dim ObjHats As New List(Of Hat)
                Dim ObjHat As Hat
                Dim IntHatCircumference As Integer
                Dim ObjHatSize As HatSize
                Dim CollHats As New List(Of HatSize)

                Try
                    ObjHats = Hat.GetHatsByIDs(cn, Nothing, HatIDs)

                    For Each ObjHat In ObjHats
                        Select Case ObjHat.HatSize
                            Case Is <= 0
                                IntHatCircumference = 0
                            Case Is > 12
                                IntHatCircumference = Math.Floor((ObjHat.HatSize * 0.8))
                            Case Else
                                IntHatCircumference = (ObjHat.HatSize + 10) * 5
                        End Select
                        ObjHatSize = New HatStuffInstance()
                        ObjHatSize.HatID = ObjHat.HatID
                        ObjHatSize.Circumference = IntHatCircumference
                        CollHats.Add(ObjHatSize)
                        ObjHatSize = Nothing
                    Next
                    ObjHat = Nothing
                    ObjHats = Nothing

                    GetStuff = CollHatInventory
                Catch ex As Exception
                    If Not ObjHat Is Nothing Then ObjHat = Nothing
                    If Not ObjHats Is Nothing Then ObjHats = Nothing
                    GetStuff = Nothing
                End Try
            End Using
        End Function
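
    For contrast, here's a rough sketch of how the same method could read with those issues addressed. It assumes the same Hat, HatSize, and strConnString members the original uses, and simply lets any exception bubble up to the caller:

        <WebMethod()> _
        Public Function GetHatSizes(ByVal HatIDs As List(Of Integer)) As List(Of HatSize)
            Dim hatSizes As New List(Of HatSize)

            Using cn As New SqlClient.SqlConnection(strConnString)
                ' Declarations live in the narrowest scope that works; nothing is
                ' nulled out afterwards and no exception is swallowed.
                For Each hat As Hat In Hat.GetHatsByIDs(cn, Nothing, HatIDs)
                    Dim circumference As Integer
                    Select Case hat.HatSize
                        Case Is <= 0
                            circumference = 0
                        Case Is > 12
                            circumference = CInt(Math.Floor(hat.HatSize * 0.8))
                        Case Else
                            circumference = (hat.HatSize + 10) * 5
                    End Select

                    Dim size As New HatSize()
                    size.HatID = hat.HatID
                    size.Circumference = circumference
                    hatSizes.Add(size)
                Next
            End Using

            Return hatSizes
        End Function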
    


  • @Jaime said:

    This rant wasn't about this code in particular, but about the stupid battle of "coding standards" that I've seen in too many jobs. I have seen very few coding standards that would actually promote reasonable code in my past jobs.
     

    1. Writing code that is easy to read, follow, and understand

    is entirely separate and different from

    2. Writing code that actually works

    Coding standards give you #1 and make the WTFs a little easier to find when you don't get #2.  Without coding standards you may get #1, or #2, or both, or neither.  Nobody will ever really know for sure.

     



  • @Jaime said:

    I would like to know how this pile of garbage would do against your current coding standards
    As much as I rage about our codebase I have to say that at least I don't have to deal with pointless coding standards, mostly because we don't formally have any. Things like this are agreed upon informally (we're a tiny team) and every single coding standard has a reason for being used. None of this cargo cult nonsense. Unfortunately we are a bit more lax than we should be sometimes, but I guess you can't have everything.


  • :belt_onion:

    @Jaime said:

    Every single piece of cleanup code in this procedure is meaningless. Setting a reference to Nothing at the end of a procedure does nothing. Even in languages where memory management is all manual, simply nulling the references doesn't do it. The variable will go out of scope in a few nanoseconds anyway and the result will be exactly the same.
    Well, there used to be a bug in the VB5 (IIRC) compiler which didn't handle references going out of scope correctly and the work-around was to manually set references to Nothing.

    The rest is history



  • @Jaime said:

    Setting a reference to Nothing at the end of a procedure does nothing. Even in languages where memory management is all manual, simply nulling the references doesn't do it. The variable will go out of scope in a few nanoseconds anyway and the result will be exactly the same.
    I've seen this in a lot of VB6-style code. It's common enough that I bet someone, somewhere decided that this was a really good idea, but I can't imagine why. It might be for ensuring that loop variables aren't used outside their loops, but it would be better to declare such variables in loop scope in that case.

    Does anyone who has actually worked with VB6 know why VB6 programmers do this?

    Edit: Nevermind. Looks like bjolling has the answer. That is fucked up. So a bug in VB5 is affecting code readability more than a decade later? Fucking hell...


  • Discourse touched me in a no-no place

    @Welbog said:

    So a bug in VB5 is affecting code readability more than a decade later? Fucking hell...




    VS 2008: http://msdn.microsoft.com/en-us/library/hks5e2k6.aspx

    Unlike Class_Terminate, which executes as soon as an object is set to nothing, there is usually a delay between when an object loses scope and when Visual Basic calls the Finalize destructor. Visual Basic 2005 and later versions allow for a second kind of destructor, Dispose, which can be explicitly called at any time to immediately release resources.

    VS 2005: http://msdn.microsoft.com/en-us/library/8bz4b1h8%28VS.80%29.aspx
    Error Message
    'Class_Terminate' event is no longer supported. Use 'Sub Finalize' to free resources.

    The Class_Terminate event of previous versions of Visual Basic is replaced by class destructors.

    By default, this message is a warning.


  • "x = Nothing" and "x.Dispose" are not the same thing.  One nulls the reference and waits for the garbage collector to reclaim the object; the other explicitly releases the object's resources, as described in your first link.  Nulling a reference at the end of a procedure is pointless.  The best way to handle the situation is to use a Using block, which guarantees that Dispose is called when the block is exited.

    This is from your first link:

    Another difference between the garbage-collection systems involves the use of Nothing. To take advantage of reference counting in Visual Basic 6.0 and earlier versions, programmers sometimes assigned Nothing to object variables to release the references those variables held. If the variable held the last reference to the object, the object's resources were released immediately. In later versions of Visual Basic, while there may be cases in which this procedure is still valuable, performing it never causes the referenced object to release its resources immediately. To release resources immediately, use the object's Dispose method, if available. The only time you should set a variable to Nothing is when its lifetime is long relative to the time the garbage collector takes to detect orphaned objects.
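
    Which is to say, when deterministic cleanup actually matters, the idiom is a Using block rather than nulling. A minimal sketch, reusing the strConnString from the original post:

        Using cn As New SqlClient.SqlConnection(strConnString)
            cn.Open()
            ' ... use the connection ...
        End Using   ' Dispose is guaranteed here, even on an exception; no "cn = Nothing" needed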



  • Setting a variable to Nothing has a purpose, and it was hinted at in the original post.

    If you have a reference type, and that type has a Dispose method, then you should set the variable to Nothing after disposing.

    The reasoning behind this is simple.  Disposing of an object does not take it out of scope. An object still in scope but disposed and then reinstantiated with New will throw an exception about using a disposed variable. This is not guaranteed to happen because you have no idea when the garbage collector will grab it.  The simple way to avoid this is to set your variable to Nothing right after disposing so that the instance is gone from the variable; then you can reuse the variable to create a new instance.

    So this leads to people just setting everything to Nothing as cleanup code so you don't forget about the exceptions.  This then leads people to forget what those exceptions are.  This then leads to most of us here thinking that setting a variable to Nothing has absolutely no purpose. This is the real WTF in this case.



  • @KattMan said:

    Setting a variable to Nothing has a purpose, and it was hinted at in the original post.

    If you have a reference type, and that type has a Dispose method, then you should set the variable to Nothing after disposing.

    The reasoning behind this is simple.  Disposing of an object does not take it out of scope. An object still in scope but disposed and then reinstantiated with New will throw an exception about using a disposed variable. This is not guaranteed to happen because you have no idea when the garbage collector will grab it.  The simple way to avoid this is to set your variable to Nothing right after disposing so that the instance is gone from the variable; then you can reuse the variable to create a new instance.

    So this leads to people just setting everything to Nothing as cleanup code so you don't forget about the exceptions.  This then leads people to forget what those exceptions are.  This then leads to most of us here thinking that setting a variable to Nothing has absolutely no purpose. This is the real WTF in this case.

    A Using block will Dispose the object as the variable goes out of scope.  The real WTF is trying to manually emulate this behavior instead of just using "Using" if the class implements IDisposable.  If the class doesn't implement IDisposable then the issue is moot.  So, most instances of "x = Nothing" (when x is a small-scoped variable) are either instances of a missing Using block or instances of unnecessary nulling.
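
    For completeness, "manually emulating" it properly means a Try/Finally around a Dispose call, not a pile of "= Nothing" lines. A pre-2005-style sketch, again borrowing strConnString from the original post:

        Dim cn As SqlClient.SqlConnection = Nothing
        Try
            cn = New SqlClient.SqlConnection(strConnString)
            cn.Open()
            ' ... use the connection ...
        Finally
            ' It is the Dispose call, not "cn = Nothing", that actually releases the connection.
            If Not cn Is Nothing Then cn.Dispose()
        End Try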



  • @Jaime said:

    A Using block will Dispose the object as the variable goes out of scope.  The real WTF is trying to manually emulate this behavior instead of just using "Using" if the class implements IDisposable.  If the class doesn't implement IDisposable then the issue is moot.  So, most instances of "x = Nothing" (when x is a small-scoped variable) are either instances of a missing Using block or instances of unnecessary nulling.
     

    Did I not say that this only applies to object with Dispose?

    Also, Using is relatively new to VB.Net, so emulating it is a necessity when it is not available.

    Using also hides the fact that the object is disposed and finalized, essentially setting it to Nothing so it can be re-instantiated again if the Using is in a loop, further hiding the fact that many of us will forget about this exception to Nothing having a purpose.



  • So, are you suggesting that the five "var = Nothing" statements in the code in the original post are good because they will help you not forget those one-in-a-million times when you are using a seven year old version of the framework (nine if using C# because using was introduced in 1.1 for C#) and the object happens to implement IDisposable?  Also, consider the fact that that "best practices" code still won't help since the developer is not conditioning himself to properly Dispose objects, only to null the references, which we all agree is generally useless, except after Disposing and a few other corner cases.

    Nothing has a purpose, but it is obscure.  No amount of over-using it will make a developer remember to use it properly in the obscure cases.  This is a classic example of "cargo-cult programming" where a developer does something because other people do it, but without any actual understanding of what they are doing.  The people who understand, do it right and only do it when necessary.  The people who don't understand, do it sporadically and usually get burned in those cases where proper usage makes a difference.



  •  My company's coding standards document has, as its last standard: Understand the reasoning behind each of these rules, and do not follow them where their reasoning does not apply.

     



  • @Jaime said:

    So, are you suggesting that the five "var = Nothing" statements in the code in the original post are good because they will help you not forget those one-in-a-million times when you are using a seven year old version of the framework (nine if using C# because using was introduced in 1.1 for C#) and the object happens to implement IDisposable?
     

    No I am not suggesting that at all.

    I am saying people do this so they don't forget to do it when needed, which is wrong.  By doing this to everything they forget why it is needed in the first place.  Then since they have forgotten why it is needed, people believe that it is never needed (as many here seem to think).  This, in its entirety, is a WTF all on its own, as I stated in my original reply here.



  • @KattMan said:

    @Jaime said:

    So, are you suggesting that the five "var = Nothing" statements in the code in the original post are good because they will help you not forget those one-in-a-million times when you are using a seven year old version of the framework (nine if using C# because using was introduced in 1.1 for C#) and the object happens to implement IDisposable?
     

    No I am not suggesting that at all.

    I am saying people do this so they don't forget to do it when needed, which is wrong.  By doing this to everything they forget why it is needed in the first place.  Then since they have forgotten why it is needed, people believe that it is never needed (as many here seem to think).  This, in its entirety, is a WTF all on its own, as I stated in my original reply here.

    Then I'm with you.  I somehow took your assessment of "them" as your opinion.  Sorry for any frustration.



  • I know this may not be applicable, however, in VBA you can disconnect a dataset from its underlying database (creating an in-memory, disconnected data set to work with) by setting its datasource = Nothing.  This can significantly speed up subsequent interaction with the dataset (for read-only use; modifying, reconnecting, and updating the original source become a teensy bit trickier).



  • @Medezark said:

    I know this may not be applicable, however, in VBA you can disconnect a dataset from its underlying database (creating an in-memory, disconnected data set to work with) by setting its datasource = Nothing.  This can significantly speed up subsequent interaction with the dataset (for read-only use; modifying, reconnecting, and updating the original source become a teensy bit trickier).

    That was how to create a disconnected recordset in VB ten years ago.  Nowadays, all recordsets are automatically disconnected.  Anyway, setting a property to Nothing and setting a reference variable to Nothing are (usually) two different things.  The only Nothing that really matters in the context of this discussion is "Nothing-ing" the last thing that has a reference to an instance of an object.



  • @Jaime said:

    I ran across this code at work a while back (although I changed the subject to hats for anonymization purposes, ignore the silliness of what it seems to do). I couldn't help thinking that this code follows every best practice that a just-out-of-school programmer is taught their first week of work, but every one of them applied as annoyingly as possible. Amazingly, although the coder seems to have done everything according to the coding standards, this couldn't be any worse.

    Here is a list of practices that I see applied:

    • Naming conventions
    @Jaime said:

    However, here is what I see wrong with it:
    • Although the code follows Hungarian naming conventions, no benefit is gained. Almost every prefix is "Obj" even though there are several incompatible classes represented. The other prefixes only serve to make the code harder to read.

    Well yeah, hang on - who ever said that Hungarian was a best practice?  It's always been a stupid idea, and for exactly the reasons you mention, only ever significantly promoted by Microsoft - and even they realised it was dumb the first time they found themselves writing the (now legendary) "DWORD wParam".

     


  • Discourse touched me in a no-no place

    @DaveK said:

    @Jaime said:
    However, here is what I see wrong with it:

    • Although the code follows Hungarian naming conventions, no benefit is gained. Almost every prefix is "Obj" even though there are several incompatible classes represented. The other prefixes only serve to make the code harder to read.
    Well yeah, hang on - who ever said that Hungarian was a best practice?
    The code in the first post isn't using Hungarian.

    Hungarian is using a prefix to describe what the variable represents, not what data type it is declared as (the latter is what Microsoft did, giving Hungarian a bad name).

    Something like the sketch below is an example of how Hungarian should be implemented.
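
    A small illustrative sketch (hypothetical names): both index variables below are plain Integers, so a type prefix would tell you nothing, but a prefix that encodes meaning makes a mix-up visible at a glance.

        ' The prefix encodes what the value means, not its declared type.
        Dim rwCurrent As Integer = 0        ' a row index
        Dim colCurrent As Integer = 0       ' a column index
        Dim usInput As String = Console.ReadLine()          ' "us": unsafe, unvalidated input
        Dim sInput As String = usInput.Trim().ToUpper()     ' "s": cleaned-up, normalized value

        ' "rwCurrent = colCurrent", or using usInput where sInput belongs,
        ' now looks wrong on inspection even though the declared types match.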


  • @PJH said:

    The code in the first post isn't using Hungarian.

    Sure it is.  It's just a horrible variation of Systems Hungarian instead of the more sane, but lesser used, Apps Hungarian.  The whole WTF here is that this bad code does meet the current coding standard at many organizations.  People who work at companies that make software for a living usually aren't subject to this torture, but many IT departments in other industries are infested with people who think that setting a common naming standard and a few other rules, and then enforcing them to ridiculous extents, is the golden path to success.

    I once worked somewhere that forbade any database access that didn't go through the "standard" stored procedures; no rights were granted directly to the tables.  The standard stored procedures were machine generated.  There were always three procedures, one each for INSERT, UPDATE, and DELETE.  The system also generated a SELECT procedure for each key on the table.  Security was generated too: the application user was granted EXECUTE on all of the generated stored procedures.  In summary, no users can do anything to any table, but any user can effect any change to the database by running the generated procedures.  Brillant.



  • @PJH said:

    Hungarian is using a prefix to describe what the variable represents, not what data type it is declared as (the latter is what Microsoft did, giving Hungarian a bad name).

    When working with scripting languages that are not typed I use this bastardized form of Hungarian naming so that I can easily keep track of what is an int, string, object etc. I do this to add a layer of mental checking that what I am doing with a variable is something that should be expected of data held in that variable.



  • @OzPeter said:

    @PJH said:
    Hungarian is using a prefix to describe what the variable represents, not what data type it is declared as (the latter is what Microsoft did, giving Hungarian a bad name).
    When working with scripting languages that are not typed I use this bastardized form of Hungarian naming so that I can easily keep track of what is an int, string, object etc. I do this to add a layer of mental checking that what I am doing with a variable is something that should be expected of data held in that variable.

    Sorry, I forgot to mention that untyped languages are the one situation where it's not a complete waste of time.  Then again, I would argue that untyped languages are in themselves a waste of time and effort(*) already... or to put it another way, one of the bad consequences of designing an untyped language is the extra effort and pain users have to go to in an attempt to mitigate the bugs that this will inevitably cause, such as hideously uglifying everything with Hungarian.

    (*) - Good for fifty line scripts doing simple file processing jobs.  Bad for anything more complex.



  • @KattMan said:

    Setting a variable to Nothing has a purpose, and it was hinted at in the original post.

    If you have a reference type, and that type has a Dispose method, then you should set the variable to Nothing after disposing.

    The reasoning behind this is simple.  Disposing of an object does not take it out of scope. An object still in scope but disposed and then reinstantiated with New will throw an exception about using a disposed variable. This is not guaranteed to happen because you have no idea when the garbage collector will grab it.  The simple way to avoid this is to set your variable to Nothing right after disposing so that the instance is gone from the variable; then you can reuse the variable to create a new instance.

    So this leads to people just setting everything to Nothing as cleanup code so you don't forget about the exceptions.  This then leads people to forget what those exceptions are.  This then leads to most of us here thinking that setting a variable to Nothing has absolutely no purpose. This is the real WTF in this case.

     

    Under .Net, setting a variable to Nothing serves no purpose when the variable is defined in a function or sub, as the runtime will detect when the variable is out of scope and make its object eligible for garbage collection. In a release build this is tracked based on when the variable is last referred to in the method, not when the method exits, so deliberately setting it to Nothing serves no purpose in terms of memory usage.

    If an object only refers to managed resources then there is no need to manage the memory of objects that are scoped to a single class or routine, as their lifetime is tracked by the runtime and they are entirely the GC's responsibility. If the object holds onto non-managed resources (file handles, db connections, window handles etc.) then these resources cannot be left to the vagaries of the GC, so the .Dispose method is there to provide a deterministic way to free these non-managed resources; once it is called there is no need to set the object to Nothing, as the rest is back to being the GC's problem.
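
    To make that concrete, here is a rough sketch of the pattern (NativeBuffer is a made-up class; the unmanaged allocation is only an illustration):

        Imports System.Runtime.InteropServices

        Public Class NativeBuffer
            Implements IDisposable

            ' Hypothetical example: an unmanaged allocation the GC knows nothing about.
            Private _handle As IntPtr = Marshal.AllocHGlobal(1024)

            Public Sub Dispose() Implements IDisposable.Dispose
                If _handle <> IntPtr.Zero Then
                    Marshal.FreeHGlobal(_handle)   ' deterministic release of the unmanaged resource
                    _handle = IntPtr.Zero
                End If
                GC.SuppressFinalize(Me)            ' the finalizer is no longer needed once disposed
            End Sub

            Protected Overrides Sub Finalize()
                Try
                    Dispose()                      ' safety net if the caller never called Dispose
                Finally
                    MyBase.Finalize()
                End Try
            End Sub
        End Class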

    @KattMan said:

    An object still in scope but disposed and then reintstantiated with NEW will throw an exception about using a variable that is disposing.
    If you declare a variable and assign an instance of an object that requires Disposing (i.e. FileStream) and then call .Dispose and then assign a new instance you will not get any such error as you are not re-instantiating the object; you are creating a new instance and assigning it to the old variable. The previous object is now out of scope and has become the GC's problem. Setting the variable to Nothing serves no purpose here either.

     



  • @spenk said:

    If an object only refers to managed resources then there is no need to manage the memory of objects that are scoped to a single class or routine, as their lifetime is tracked by the runtime and they are entirely the GC's responsibility
     

    You obviously have never worked with objects with a large unmanaged block of memory inside a loop where the object is constantly being re-used.  In this case "Using" does not completely solve the problem.  The situation is this: you have a bitmap variable inside a loop (Using or not), you process the image into another image (can't go with Using here), save it, and move to the next one.  The .Net runtime sees the object constantly being reused so it does not collect.  But the unmanaged memory blocks are not always recovered and start fragmenting.  Since the variables are constantly reused, the memory is not reclaimed and eventually you blow the memory stack and the app crashes.  The only solution to this was to guarantee Dispose, set to Nothing, and GC.Collect.  I no longer care what the theory is; I have seen this in practice.

     @spenk said:

    If you declare a variable and assign an instance of an object that requires Disposing (i.e. FileStream) and then call .Dispose and then assign a new instance you will not get any such error as you are not re-instantiating the object

    Read up on the cause of System.ObjectDisposedException and think again.

     



  • @KattMan said:

     @spenk said:

    If you declare a variable and assign an instance of an object that requires Disposing (i.e. FileStream) and then call .Dispose and then assign a new instance you will not get any such error as you are not re-instantiating the object

    Read up on the cause of System.ObjectDisposedException and think again.

    I am fully aware of what an ObjectDisposedException is - it is thrown when attempting to use an object that has been disposed, which has nothing at all to do with assigning a new instance of an object to a variable that points to a disposed instance. If you were correct, then something like

            Dim fs As New FileStream("c:\test.txt", FileMode.Create)
            fs.WriteByte(0)
            fs.Dispose()

            fs = New FileStream("c:\test2.txt", FileMode.Create)

    would throw an exception, which it simply does not do.

            Dim fs As New FileStream("c:\test.txt", FileMode.Create)
            fs.WriteByte(0)
            fs.Dispose()

            fs.WriteByte(1)

    would throw such an exception, however, as it is attempting to use the disposed object.

    @KattMan said:

    @spenk said:

    If an object only refers to managed resources then there is no need to manage the memory of objects that are scoped to a single class or routine, as their lifetime is tracked by the runtime and they are entirely the GC's responsibility
     

    You obviously have never worked with objects with a large unmanaged block of memory inside a loop where the object is constantly being re-used.  In this case "Using" does not completely solve the problem.  The situation is this: you have a bitmap variable inside a loop (Using or not), you process the image into another image (can't go with Using here), save it, and move to the next one.  The .Net runtime sees the object constantly being reused so it does not collect.  But the unmanaged memory blocks are not always recovered and start fragmenting.  Since the variables are constantly reused, the memory is not reclaimed and eventually you blow the memory stack and the app crashes.  The only solution to this was to guarantee Dispose, set to Nothing, and GC.Collect.  I no longer care what the theory is; I have seen this in practice.

    The very first thing in the block you quoted says "if the object only refers to managed resources", and you claim I am wrong with an opening argument of "You obviously have never worked with objects with a large unmanaged block of memory" - I was talking about managed resources, and I am wrong because you are talking about unmanaged resources - read what I posted and then argue.


  • @Jaime said:

    The function return is done VB6 style. It's almost as if the coder prefers the old style. Personally, I was very happy when VB.Net got a proper Return.

    Actually, I think that function always returns Nothing and places its result in the GetStuff instance/shared variable.

    You see what I did there?


  • :belt_onion:

    @KattMan said:

    @Jaime said:

    So, are you suggesting that the five "var = Nothing" statements in the code in the original post are good because they will help you not forget those one-in-a-million times when you are using a seven year old version of the framework (nine if using C# because using was introduced in 1.1 for C#) and the object happens to implement IDisposable?
     

    No I am not suggesting that at all.

    I am saying people do this so they don't forget to do it when needed, which is wrong.  By doing this to everything they forget why it is needed in the first place.  Then since they have forgotten why it is needed, people believe that it is never needed (as many here seem to think).  This, in its entirety, is a WTF all on its own, as I stated in my original reply here.

    I just noticed Spenk already explained why you are wrong, but I want to add something. Not only is it NEVER necessary to set objects to Nothing manually in .NET (even the older versions), it is even a bad practice. You are interfering with the optimizations that the runtime can provide.

    Say you write this code:

    Dim myObject As New BJObject("bjolling")   ' BJObject is just a placeholder example class
    myObject.DoSomething()

    ' ... snip code where myObject isn't used ...

    myObject = Nothing


    You are preventing the GC from cleaning up "myObject" after the DoSomething call. Now the GC has to keep myObject alive until it reaches the useless "myObject = Nothing".



  • @Spectre said:

    @Jaime said:
    The function return is done VB6 style. It's almost as if the coder prefers the old style. Personally, I was very happy when VB.Net got a proper Return.

    Actually, I think that function always returns Nothing and places its result in the GetStuff instance/shared variable.

    You see what I did there?

    Sorry, I screwed up the anonymization.



  • @KattMan said:

    @spenk said:
    If an object only refers to managed resources then there is no need to manage the memory of objects that are scoped to a single class or routine, as their lifetime is tracked by the runtime and they are entirely the GC's responsibility
     

    You obviously have never worked with objects with a large unmanaged block of memory inside a loop where the object is constantly being re-used.  In this case "Using" does not completely solve the problem.  The situation is this: you have a bitmap variable inside a loop (Using or not), you process the image into another image (can't go with Using here), save it, and move to the next one.  The .Net runtime sees the object constantly being reused so it does not collect.  But the unmanaged memory blocks are not always recovered and start fragmenting.  Since the variables are constantly reused, the memory is not reclaimed and eventually you blow the memory stack and the app crashes.  The only solution to this was to guarantee Dispose, set to Nothing, and GC.Collect.  I no longer care what the theory is; I have seen this in practice.

    Sounds like a Mono thing.  It's actually known for its "conservative" GC -- that is, not collecting stuff that "looks like" it might be in use.  It doesn't know for sure, though, cause the collector apparently doesn't know the difference between a pointer and an int.

    Anyway...

    (1) Variables aren't inside a loop.  Code is inside a loop.  At most the bitmap is being created in a loop, and I assume you mean over and over, since it's the only situation that makes sense without further info.   In that case, the former bitmap gets replaced entirely, and is thus eligible for finalization.  Finalizers for IDisposable's should be calling Dispose(), and if they aren't, that's a bug in the library.

    (2) The runtime doesn't care whether an object is "used" or "reused".  It cares whether that object is reachable.  If running code has a reference to the object, even if you're not using it anymore, it can't be collected.  This means it can be a good thing to set a reference variable to null once you're done with it, if you're not intending to immediately either leave the function or reuse the variable to refer to another object.

    (3) GC.Collect() doesn't free unmanaged resources -- that's part of what "unmanaged" means, and it's the whole reason for the IDisposable interface.

    (4) Non-"value type" objects aren't created on the stack in most cases.  (It's possible, if the JIT realizes the object will never be used outside the current function, but should never be relied on.)  And any object in memory, if it's not on the stack and isn't "pinned", can be (and often is) moved around to reduce fragmentation.  Even "unmanaged" bitmaps and such are managed by the OS -- and much better than you seem to be implying here. Only unmanaged memory can create such fragmentation problems.  And if that's your problem, then TRWTF is that you're playing with P/Invoke and IntPtr's without knowing what you're doing, and then blaming the runtime for letting you.

    (5) Two different bitmaps using the same memory is a WTF in itself, unless the OS does some copy-on-write thing i've never heard about. 

    In any case, if you have some code that can demonstrate this crash, i would *love* to see it.  As would the .net team, no doubt.  But til you can produce it, i call bullshit.

     

    @KattMan said:

    @spenk said:
    If you declare a variable and assign an instance of an object that requires Disposing (i.e. FileStream) and then call .Dispose and then assign a new instance you will not get any such error as you are not re-instantiating the object

    Read up on the cause of System.ObjectDisposedException and think again.

    Variables are not objects.  They are *references* to objects.  Reassigning has little to do with the old object, other than potentially making it unreachable (and thus, collectable).  If the code below throws an exception on your VM, then your VM is broken.

    [code]
    Sub Main()
        Dim b As System.Drawing.Bitmap
        For i As Integer = Integer.MinValue To Integer.MaxValue
            b = New System.Drawing.Bitmap(1024, 1024, Drawing.Imaging.PixelFormat.Format32bppArgb)
            Console.WriteLine("{0} bytes left", GC.GetTotalMemory(False))
            b.Dispose()
        Next
    End Sub
    [/code]

    This code has been running on my machine for 20 minutes now.  And if you comment out the "b.Dispose()" line, you'll see why i called bullshit earlier.  It still runs just fine, cause the finalizer for a Bitmap does what any self-respecting IDisposable's finalizer does -- it calls Dispose() on its own.



  • @Jaime said:

    Almost every prefix is "Obj" even though there are several incompatible classes represented.
    This has got to be one of my pet hates: variable after variable prefixed with 'obj' when technically everything is an object!

