A field of mutually exclusive bits.



  • @joe.edwards said:

    Exceptions for flow control are just plain wrong.

    Well I should have clarified.

    Bad use of exceptions for flow control: @http://c2.com/cgi/wiki?DontUseExceptionsForFlowControl said:

    
    void search( TreeNode node, Object data ) throws ResultException {
    	if (node.data.equals( data ))
    		throw new ResultException( node );
    	else {
    		search( node.leftChild, data );
    		search( node.rightChild, data );
    	}
    }
    

    The cute trick here is that the exception will break out of the recursion in one step no matter how deep it has got.

    Good use of exceptions for flow control:

    
    void SomeMethod()
    {
        bool succeeded = DoStuff();
        if (!succeeded)
            throw new DoStuffFailedException("DoStuff");

        succeeded = DoSomeMoreStuff();
        if (!succeeded)
            throw new DoStuffFailedException("DoSomeMoreStuff");
    }


    and so on. You use exceptions to stop partway through the method when you realize there's no point in continuing. And instead of "goto cleanupcode;" you'd put this code in a try block and the cleanup code in the finally.

    Of course it would be better if DoStuff() threw the exception itself, but sometimes you do have to interface with someone else's code/library.
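    To make the "cleanup code in the finally" part concrete, here's a rough sketch (Cleanup is a made-up placeholder for whatever your "goto cleanupcode;" target used to do):

    void SomeMethod()
    {
        try
        {
            if (!DoStuff())
                throw new DoStuffFailedException("DoStuff");

            if (!DoSomeMoreStuff())
                throw new DoStuffFailedException("DoSomeMoreStuff");

            // ...and so on; throwing bails out as soon as there's no point continuing
        }
        finally
        {
            Cleanup(); // runs whether we finished or bailed out early
        }
    }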



  • @Ben L. said:

    @Arnavion said:
    If your language doesn't have exceptions (I'm looking at you, Go), then the only way is to have a "do X -> did X fail? -> early return with error code" block for each X.

    Not exactly.

    @Arnavion said:

    Filed under: "We put exceptions in the language but we repeatedly warn you not to use them and that we'll hate you if you do", Cue someone correcting me that Go does have exceptions


  • Considered Harmful

    @Ben L. said:

    @Arnavion said:
    If your language doesn't have exceptions (I'm looking at you, Go), then the only way is to have a "do X -> did X fail? -> early return with error code" block for each X.

    Not exactly.

    Defer sounds potentially neat. I wish morbs was around to point out the obvious pitfall I'm missing that makes it retarded.



  • @joe.edwards said:

    @Ben L. said:
    @Arnavion said:
    If your language doesn't have exceptions (I'm looking at you, Go), then the only way is to have a "do X -> did X fail? -> early return with error code" block for each X.
    Not exactly.
    Defer sounds potentially neat. I wish morbs was around to point out the obvious pitfall I'm missing that makes it retarded.

    IIRC, even Morbs said Go may have some good features hiding among the mountains of WTF (but not enough to be worth looking for them). Maybe you just found one.


  • Considered Harmful

    @HardwareGeek said:

    @joe.edwards said:

    @Ben L. said:
    @Arnavion said:
    If your language doesn't have exceptions (I'm looking at you, Go), then the only way is to have a "do X -> did X fail? -> early return with error code" block for each X.

    Not exactly.

    Defer sounds potentially neat. I wish morbs was around to point out the obvious pitfall I'm missing that makes it retarded.

    IIRC, even Morbs said Go may have some good features hiding among the mountains of WTF (but not enough to be worth looking for them). Maybe you just found one.

    So, C# (and others) has the try { ... } finally { ... } construct to handle "must happen no matter what" type actions like cleaning up file handles (there's also using, but it's syntactic sugar for try...finally).

    If the try block is lengthy, the cleanup code might be a screen or more away from the open code, so it's not immediately obvious at-a-glance if everything is being cleaned up. Also, opening multiple resources at once can lead to deeply indented blocks of code. Those are the downsides.

    Go's defer statement solves both issues: the cleanup code being separated from the resource acquisition, and the nesting. Go's approach, however, does not clean up resources until the end of the method (as far as I can tell), so if you only needed that file handle for a moment, you end up locking it for the duration of the method instead of only for the moment you actually were using it. This may be mitigated by refactoring.

    I still have to award the point to C#, because the cons are mostly "it's ugly" and the pro is "finer grained control of when resources are freed."
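    For example (sketch, made-up file name), scoping the using tightly releases the handle the moment you're done with it rather than at the end of the method:

    void DoWork()
    {
        string firstLine;
        using (var reader = File.OpenText(@"Foo.txt"))
        {
            firstLine = reader.ReadLine();
        } // file handle released right here

        // ...the rest of the method runs without holding the file open
    }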



  • @joe.edwards said:

    If the try block is lengthy, the cleanup code might be a screen or more away from the open code, so it's not immediately obvious at-a-glance if everything is being cleaned up. Also, opening multiple resources at once can lead to deeply indented blocks of code. Those are the downsides.

    Those are the downsides of using *try-finally*. "using" has none of those downsides, and is in fact better than Go because it doesn't have the issues you mention later with Go's approach.

    try-finally gives you more flexibility because you can do anything you want in the finally, rather than being restricted to calling Dispose(). You can have logging, for example. But the Go way of doing that would be worse, because you'd have to write the contents of your would-be-finally block in reverse order. So defer file.Close() goes first, then defer log("Something went wrong."), etc., which is worse for readability.
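    In C# the finally just reads in the natural order (sketch; Log is a stand-in for whatever logging you have, and the file name is made up):

    var file = File.OpenText(@"Foo.txt");
    try
    {
        // ...work with the file...
    }
    finally
    {
        Log("Something went wrong.");  // anything you like, in the order you'd expect
        file.Dispose();
    }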


  • Considered Harmful


    using( var dbConnection = Factory.GetConnection() ) {
        if( dbConnection == null ) throw new Exception();
        using( var cmd = new SqlCommand( "sp_FooBar", dbConnection ) ) {
            using( var fileOutput = File.Open( @"Foo" ) ) {
                using( var sw = new StreamWriter( fileOutput ) ) {
                    using( var reader = cmd.ExecuteReader() ) {
                        while( reader.Read() ) {
                            sw.WriteLine( reader.GetString( 0 ) );
                        }
                        reader.Close();
                    }
                    sw.Flush();
                    sw.Close();
                }
                fileOutput.Close();
            }
        }
        dbConnection.Close();
    }

    Nope, using doesn't suffer from the indention problem at all.



  • @Arnavion said:

    @joe.edwards said:
    If the try block is lengthy, the cleanup code might be a screen or more away from the open code, so it's not immediately obvious at-a-glance if everything is being cleaned up. Also, opening multiple resources at once can lead to deeply indented blocks of code. Those are the downsides.

    Those are the downsides of using *try-finally*. "using" has none of those downsides, and is in fact better than Go because it doesn't have the issues you mention later with Go's approach.

    try-finally gives you more flexibility because you can do anything you want in the finally, rather than being restricted to calling Dispose(). You can have logging, for example. But the Go way of doing that would be worse, because you'd have to write the contents of your would-be-finally block in reverse order. So defer file.Close() goes first, then defer log("Something went wrong."), etc., which is worse for readability.

    it's even worse than that: you'd have to specifically check in the deferred function whether you're panicking or not, and then react to it in the event of something going wrong. The even worse bit about this is that if you want something like a stack trace, you have to specially design your panic codes to encode the stack trace into it. So Go manages to have all the inconveniences of error codes, while also having most of the inconveniences of exceptions too. It's like the designers looked at existing languages and tried to take only the parts that produced the worst designs.



  • Despite your nice coding of doing a billion things in one function...

    @joe.edwards said:

    using( var dbConnection = Factory.GetConnection() )
    {
        if( dbConnection == null )
            throw new Exception();

        using( var cmd = new SqlCommand( "sp_FooBar", dbConnection ) )
        using( var fileOutput = File.Open( @"Foo" ) )
        using( var sw = new StreamWriter( fileOutput ) )
        using( var reader = cmd.ExecuteReader() )
        {
            while( reader.Read() ) {
                sw.WriteLine( reader.GetString( 0 ) );
            }
        }
    }


    Nope, using doesn't suffer from the indention problem at all.

    FTFY


  • Considered Harmful

    @Sutherlands said:

    Despite your nice coding of doing a billion things in one function...

    *gasp* You mean my example code to illustrate a possible problem is not best practice?

    I've also been bitten by Dispose methods that didn't fully finalize, like an IO class that would just close the handle on Dispose without Flushing the output buffer, so the output was truncated - which is why I had that Flush call in there.

    Might not be required here, but once bitten twice shy.



  • @joe.edwards said:

    using( var dbConnection = Factory.GetConnection() ) {
        if( dbConnection == null ) throw new Exception();
        using( var cmd = new SqlCommand( "sp_FooBar", dbConnection ) ) {
            using( var fileOutput = File.Open( @"Foo" ) ) {
                using( var sw = new StreamWriter( fileOutput ) ) {
                    using( var reader = cmd.ExecuteReader() ) {
                        while( reader.Read() ) {
                            sw.WriteLine( reader.GetString( 0 ) );
                        }
                        reader.Close();
                    }
                    sw.Flush();
                    sw.Close();
                }
                fileOutput.Close();
            }
        }
        dbConnection.Close();
    }

    Nope, using doesn't suffer from the indention problem at all.

    I don't think you've ever used using.

    @FTFY said:

    using( var dbConnection = Factory.GetConnection() ) {
        using( var cmd = new SqlCommand( "sp_FooBar", dbConnection ) )
        using( var fileOutput = File.Open( @"Foo" ) )
        using( var sw = new StreamWriter( fileOutput ) )
        using( var reader = cmd.ExecuteReader() )
            while( reader.Read() ) {
                sw.WriteLine( reader.GetString( 0 ) );
            }
    }

    The whole point of using is that those explicit Close() calls are not there. And if you use the default C# formatting (aka everything VS does), you'll notice usings don't get indented, for exactly this reason.

    @Sutherlands said:


    using( var dbConnection = Factory.GetConnection() )
    {
        if( dbConnection == null )
            throw new Exception();

    I think that's a superfluous null-check. I'm not near a compiler right now but I'm pretty sure if Factory.GetConnection() returns null the using will throw an NPE. I might be wrong though.



  • @joe.edwards said:

    I've also been bitten by Dispose methods that didn't fully finalize, like an IO class that would just close the handle on Dispose without Flushing the output buffer, so the output was truncated - which is why I had that Flush call in there.

    A genuine concern. But if I were in your position I'd drop back to using try-finally. Having both would-be-in-a-finally-block code and a using smells.


  • ♿ (Parody)

    @Arnavion said:

    @FTFY said:
    using( var dbConnection = Factory.GetConnection() ) {
        using( var cmd = new SqlCommand( "sp_FooBar", dbConnection ) )
        using( var fileOutput = File.Open( @"Foo" ) )
        using( var sw = new StreamWriter( fileOutput ) )
        using( var reader = cmd.ExecuteReader() )
            while( reader.Read() ) {
                sw.WriteLine( reader.GetString( 0 ) );
            }
    }

    And if you use the default C# formatting (aka everything VS does), you'll notice usings don't get indented, for exactly this reason.

    That seems like a bad idea. I also hate omitted brackets, though I suppose I could live with an exception when you "chain" usings like this. But why would you want brackets to look different here? Doesn't this also make the scope less obvious? Maybe it only does this when you have usings on consecutive lines? The positioning of "while" in your example seems to say so.

    That aside, I like the RAII-ness of using. It seems like a pretty reasonable hack around the nature of the GC.



  • @boomzilla said:

    Maybe it only does this when you have usings on consecutive lines? The positioning of "while " in your example seems to say so.

    That is correct. It's a special case in Visual Studio's indentation rules for consecutive using statements.



  • @Arnavion said:

    I think that's a superfluous null-check. I'm not near a compiler right now but I'm pretty sure if Factory.GetConnection() returns null the using will throw an NPE. I might be wrong though.
     

    It won't.  It will probably throw a NRE in new SqlCommand, or somewhere else down the line... but code throwing a NRE is a bug in the code.  Instead it should throw a more descriptive error.

    Also, read the tags.



  • @joe.edwards said:

    *gasp* You mean my example code to illustrate a possible problem is not best practice?
     

    Yes.  Point being that you don't have indention problems if you don't have so many usings.

    @joe.edwards said:

    I've also been bitten by Dispose methods that didn't fully finalize, like an IO class that would just close the handle on Dispose without Flushing the output buffer, so the output was truncated - which is why I had that Flush call in there.

    Might not be required here, but once bitten twice shy.

    I double-checked that Dispose calls Flush and Close for those methods.  Of course there can be a bug in a library, but I wouldn't put those in unless we found a bug.


  • @boomzilla said:

    It seems like a pretty reasonable hack around the nature of the GC.

    The greatest hack around the nature of the GC is IIS recycling worker processes. This works well because it is a scaled-down version of the system resource deallocation sequence defined on page 1 of the Microsoft Maintenance Manual (aka "reboot").



  • @Sutherlands said:

    It won't.  It will probably throw a NRE in new SqlCommand, or somewhere else down the line.

    Just tried it out on a compiler and I stand corrected. I thought a null value would make the using itself throw, but it doesn't: the expansion only calls Dispose() if the value is non-null, so nothing blows up until the null actually gets used.
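    For reference, the expansion of a using block is roughly this (sketch; the cast goes away for sealed types and the null check for non-nullable value types):

    var dbConnection = Factory.GetConnection();
    try
    {
        // ...body of the using block...
    }
    finally
    {
        if (dbConnection != null)
            ((IDisposable)dbConnection).Dispose();
    }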



  • @eViLegion said:

    If you have tight restrictions on memory, unions are incredibly useful for container classes that may contain values of different types. This can easily happen in situations outside of embedded systems.

    No. If you must have a field of multiple types, you duplicate your struct declaration. If you must have many fields of multiple types independent of one another, you're doing it wrong.

    @eViLegion said:

    Additionally, if you are regularly copy-constructing an object which contains a lot of booleans, it is considerably more efficient to put those booleans into a bitfield unioned with some integer, and simply copying the integer rather than the individual bits.

    Undefined behavior.

    @eViLegion said:

    In exactly the same vein, if you are populating some message object, which will transmit information about the state of some other object with lots of bools (as above), you can populate that message much faster by copying the unioned integer, rather than the individual booleans. This, however, generally requires the platform (or at the very least endianness) to be the same.

    Undefined behavior.

    @eViLegion said:

    Would you like to explain why you think there are none, rather than simply stating it without any form of justification whatsoever (with the exception of "my mind hasn't managed to think of any yet")?

    Much like "goto", there was never a situation where I found it necessary and/or more clear.


  • Discourse touched me in a no-no place

    @Faxmachinen said:

    @eViLegion said:
    Additionally, if you are regularly copy-constructing an object which contains a lot of booleans, it is considerably more efficient to put those booleans into a bitfield unioned with some integer, and simply copying the integer rather than the individual bits.

    Undefined behavior.

    @eViLegion said:

    In exactly the same vein, if you are populating some message object, which will transmit information about the state of some other object with lots of bools (as above), you can populate that message much faster by copying the unioned integer, rather than the individual booleans. This, however, generally requires the platform (or at the very least endianness) to be the same.

    Undefined behavior.

    This SO answer has the specifics about why type punning is (generally) UB in both C and C++, for those interested in language lawyering. Other languages may be available in your area.



  • I'm not sure what you mean by 'duplicate your struct definition'. I'm genuinely interested to know.

    There is nothing wrong with using undefined behaviour if you know the actual behaviour of the platform/devices you are targeting and you don't mind your code hypothetically not working on other devices that it will never run on.



  • @Ben L. said:

    Your game engine only allows one truck and one car?

    That game is going to suck.

     

    It's just for the prototype. He'll fix it when they get kickstarter funding.

     


  • Discourse touched me in a no-no place

    @joe.edwards said:

    Dispose methods that didn't fully finalize, like an IO class that would just close the handle on Dispose without Flushing the output buffer, so the output was truncated

    Ooooh, evil technical WTF there. And a REALLY good reason to not use that IO library. To the point of switching languages (if it is a standard system library).



  • @Mo6eB said:

    @Ben L. said:

    Your game engine only allows one truck and one car?

    That game is going to suck.

     

    It's just for the prototype. He'll fix it when they get kickstarter funding.

     

    I think your comprehension is lacking. The pointer to the actual vehicle object is not shown in the snippets above. But the enums describing the vehicle category and passenger category are also stored with any actor entity which is a passenger, as it is convenient for the actor's update code to know that info without having to look up the actual vehicle itself.



  • @eViLegion said:

    There is nothing wrong with using undefined behaviour if you know the actual behaviour of the platform/devices you are targeting and you don't mind your code hypothetically not working on other devices that it will never run on.

    Except likely the platform/devices you're targeting can be described as, "every OS made after {today's date}", in which case you still have no way of knowing what the next version of iOS or Windows will do in the case of undefined behavior. So while what you said is technically correct, it's practically useless.

    Don't write shitty software; write good software.



  • @blakeyrat said:

    @eViLegion said:
    There is nothing wrong with using undefined behaviour if you know the actual behaviour of the platform/devices you are targeting and you don't mind your code hypothetically not working on other devices that it will never run on.

    Except likely the platform/devices you're targeting can be described as, "every OS made after {today's date}",...

    If you're writing applications, then yeah. If you're writing XBOX games, not so much.

    @blakeyrat said:

    ...in which case you still have no way of knowing what the next version of iOS or Windows will do in the case of undefined behavior. So while what you said is technically correct, it's practically useless.

    Don't write shitty software; write good software.

    In this case, it has got sod all to do with the operating system, and everything to do with the compiler.

    So, yeah, your compiler might change what it does in future versions, I guess. But you can still just use the old compiler if you need to.



    Plus, let us consider the case of an integer unioned with a bitfield, within some class:



    You have an instance of the class, and you set the bits via some methods, then you copy construct a new instance using the integer for efficiency.



    Now, in a 32 bit integer, every single one of those bits has some significance... change one bit, and you end up with a different valid integer. So, there is no other permutation of bits which will represent that integer value. Therefore, when you copy-construct that second instance, the integer value will be the same, and more to the point the bits HAVE to be the same. If those integer bits have been unioned with a bitfield, then the bitfield must also be the same. To get them to not be the same is more difficult. So, yeah, it's "undefined", but it's actually more difficult to design something coherent that doesn't work this way.



    The "undefined" aspect about this essentially means "will be different between hardware platforms", not "the same hardware will do different things at different times" and not "will change in the future because we've arbitrarily changed x86 so that it's 'bizarre-endian'".


  • Considered Harmful

    @dkf said:

    @joe.edwards said:
    Dispose methods that didn't fully finalize, like an IO class that would just close the handle on Dispose without Flushing the output buffer, so the output was truncated

    Ooooh, evil technical WTF there. And a REALLY good reason to not use that IO library. To the point of switching languages (if it is a standard system library).

    Eh, maybe. I interpret the IDisposable contract as, "in addition to the normal steps you need to take when using this class, you also need to call Dispose when you are finished." That is, it is an additional responsibility for the caller; it doesn't necessarily free you of the normal responsibilities of flushing and closing.
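    Under that reading, the caller's side looks something like this (sketch, reusing fileOutput from the earlier snippet):

    using (var sw = new StreamWriter(fileOutput))
    {
        sw.WriteLine("some data");  // the normal work
        sw.Flush();                 // normal responsibility
        sw.Close();                 // normal responsibility
    }                               // Dispose() is the *additional* one, handled by the using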


  • Considered Harmful

    @eViLegion said:

    The "undefined" aspect about this essentially means "will be different between hardware platforms or compilers"

    FTFY. When the spec leaves something undefined, compilers have carte blanche to do whatever they want; so, different compilers on the same platform may conformantly do different things.

    It's probably fine if you're just passing data structures around internally within a single module of a single program, but the minute you marshal or serialize that data structure, all bets are off.



  • @eViLegion said:

    If you're writing applications, then yeah. If you're writing XBOX games, not so much.

    If you're writing Xbox games you should be even MORE careful, so you can port to PS3, Wii-U and PC in the future.

    @eViLegion said:

    In this case, it has got sod all to do with the operating system, and everything to do with the compiler.

    The point still applies. And if your compiler's runtime has a security hole (say, it has code to parse an image format, and someone finds a vulnerability in it), then you might not have the choice to skip an upgrade. Unless you want to ship insecure software.

    So again: your advice is technically correct, but practically useless. Code to the documentation, not to the undocumented behavior. Always. No matter the device, or program.

    @eViLegion said:

    So, yeah, it's "undefined", but it's actually more difficult to design something coherent that doesn't work this way.

    You could just not waste hours of time writing shitty code to save 20 fucking bytes you are the worst programmer the worst



  • @joe.edwards said:

    I interpret the IDisposable contract as, "in addition to the normal steps you need to take when using this class, you also need to call Dispose when you are finished." That is, it is an additional responsibility for the caller; it doesn't necessarily free you of the normal responsibilities of flushing and closing.

    That is not wrong. The documentation only talks about performing actions on *unmanaged* types (as well as propagating the Dispose() call to members, base class, etc.) @http://msdn.microsoft.com/en-us/library/system.idisposable.dispose.aspx said:

    Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.

    Use this method to close or release unmanaged resources such as files, streams, and handles held by an instance of the class that implements this interface. By convention, this method is used for all tasks associated with freeing resources held by an object, or preparing an object for reuse.

    Furthermore, while it does say: @http://msdn.microsoft.com/en-us/library/system.idisposable.dispose.aspx said:

    Users might expect a resource type to use a particular convention to denote an allocated state versus a freed state. An example of this is stream classes, which are traditionally thought of as open or closed. The implementer of a class that has such a convention might choose to implement a public method with a customized name, such as Close, that calls the Dispose method.

    it doesn't say anything about the other way around.

    So I guess your library is not technically wrong to not call Flush() and Close() in Dispose().
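    Still, the BCL's own StreamWriter does flush in Dispose, and that's the convention I'd expect: flush first, then release the underlying resource. Something like this (sketch, hypothetical class, not the library in question):

    using System;
    using System.IO;

    class BufferingWriter : IDisposable
    {
        private readonly Stream _stream;
        private readonly MemoryStream _buffer = new MemoryStream();

        public BufferingWriter(Stream stream) { _stream = stream; }

        public void Write(byte[] data) { _buffer.Write(data, 0, data.Length); }

        public void Flush()
        {
            _buffer.WriteTo(_stream); // push buffered bytes to the underlying stream
            _buffer.SetLength(0);
        }

        public void Dispose()
        {
            Flush();           // the step the offending library skipped
            _stream.Dispose(); // then release the underlying stream/handle
        }
    }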



  • @blakeyrat said:

    @eViLegion said:
    If you're writing applications, then yeah. If you're writing XBOX games, not so much.

    If you're writing Xbox games you should be even MORE careful, so you can port to PS3, Wii-U and PC in the future.

    Game programming (and game-related programming like drivers) is one of those dark recesses of software development where people ignore the standard libraries' list class for optimization concerns, abuse UB to copy bits, abuse processor timings for effects, use display drivers with undocumented fast-paths to cheat in framerate contests... If that's what eViLegion does for a living then all I can say is I understand where he's coming from.



  • @Arnavion said:

    Game programming (and game-related programming like drivers) is one of those dark recesses of software development where people ignore the standard libraries' list class for optimization concerns, abuse UB to copy bits, abuse processor timings for effects, use display drivers with undocumented fast-paths to cheat in framerate contests...

    All of which is unnecessary and a waste of time in 2013. How many times have you seen a buggy-ass game, because it was written in C++ when there was absolutely no reason to write it in a non-managed language? If XNA has contributed anything, it's to demonstrate that C# is more than sufficiently optimized to write any game you can imagine. And, meanwhile, one of the highest selling games ever isn't just written in Java (probably the WORST language to use for game development), it's not even GOOD Java.

    More like: game development is a field full of terrible programmers who operate almost solely on the "we've always done it that way" principle.



  • @blakeyrat said:

    @eViLegion said:
    If you're writing applications, then yeah. If you're writing XBOX games, not so much.

    If you're writing Xbox games you should be even MORE careful, so you can port to PS3, Wii-U and PC in the future.

    So, blakey, would you like to explain how much you know about porting between these systems, please? But before you do that, do let us know how much code you've ever written that runs on such systems, so we can get to grips with just how full of shit you are.



    Again, the only ACTUAL issue with the specific "undefined behaviour" that we've been discussing is the endianness, as previously mentioned. I have already explained a number of techniques that can be used to deal with this minor issue. And this is only an issue when trying to create data that works across platforms of different endianness.



    So, the code will work (and in fact does work) just as well on a PS3, PC and WiiU as it will on an XBOX... the only difference is that on some systems the same bit will be in a different place. Data naively serialised on one will not deserialise correctly on the other, that's all.

    @blakeyrat said:

    @eViLegion said:
    In this case, it has got sod all to do with the operating system, and everything to do with the compiler.

    The point still applies. And if your compiler's runtime has a security hole (say, it has code to parse an image format, and someone finds a vulnerability in it), then you might not have the choice to skip an upgrade. Unless you want to ship insecure software.

    So again: your advice is technically correct, but practically useless. Code to the documentation, not to the undocumented behavior. Always. No matter the device, or program.

    Does your C++ compiler parse images? Nice feature... but why?



    More to the point, even if the compiler changes, it isn't likely to change this behaviour of unioned bitfields, because the undefinedness is a product of the hardware's endianness, which a compiler cannot change.



    Blakey, you need to stop making blanket statements for which you're unable to justify yourself. 'Always' is a powerful word, and you're clearly too weak to use it properly. For one thing... what if the documentation is wrong? Should we compile programs that just don't work, or should we find out what the actual behaviour is in order to ship software which makes some actual money?

    @blakeyrat said:

    @eViLegion said:
    So, yeah, it's "undefined", but it's actually more difficult to design something coherent that doesn't work this way.

    You could just not waste hours of time writing shitty code to save 20 fucking bytes you are the worst programmer the worst

    I'm talking about the compiler you fool. It's more difficult to write a compiler that doesn't do the same thing each time (regarding unions with bitfields), yet still creates workable code that complies with language specifications. Sucks to be you blakey, but then I guess you already know that.



  • @blakeyrat said:

    @Arnavion said:
    Game programming (and game-related programming like drivers) is one of those dark recesses of software development where people ignore the standard libraries' list class for optimization concerns, abuse UB to copy bits, abuse processor timings for effects, use display drivers with undocumented fast-paths to cheat in framerate contests...

    All of which is unnecessary and a waste of time in 2013. How many times have you seen a buggy-ass game, because it was written in C++ when there was absolutely no reason to write it in a non-managed language? If XNA has contributed anything, it's to demonstrate that C# is more than sufficiently optimized to write any game you can imagine. And, meanwhile, one of the highest selling games ever isn't just written in Java (probably the WORST language to use for game development), it's not even GOOD Java.

    More like: game development is a field full of terrible programmers who operate almost solely on the "we've always done it that way" principle.

    Blakey, it isn't possible for me to be insulted by someone about whom I couldn't give a shit.



    So, attempting to troll me, just because I have totally demolished you in every single debate we've ever had, has the twin effects of making you look like a sore loser, and making me happy not to have agreed with the biggest fucking idiot subscribed to this forum.



  • @blakeyrat said:

    All of which is unnecessary and a waste of time in 2013. How many times have you seen a buggy-ass game, because it was written in C++ when there was absolutely no reason to write it in a non-managed language? If XNA has contributed anything, it's to demonstrate that C# is more than sufficiently optimized to write any game you can imagine. And, meanwhile, one of the highest selling games ever isn't just written in Java (probably the WORST language to use for game development), it's not even GOOD Java.

    C# specifically ties you to something Mono runs on, aka most PC OSes and some phone OSes but no mainstream consoles except the XBox and PS3. I don't know about Java on consoles - Google says there's no Java for the PS3 and IIRC the XBox version of Minecraft was written from scratch in C++ so there probably isn't for XBox either.

    As for perf, I don't play XBox games so I don't know how XNA games compare with native ones there, but on the PC at least Kerbal Space Program (Mono) seems to have lower "perf" (mediocre graphics quality, long-ish load times, sound choppiness when multiple things are going on at the same time, etc.) compared to what non-managed games have. Of course, subjective measurement, single sample point, differing developer competencies, talking out of my ass, etc.



  • @eViLegion said:

    Again, the only ACTUAL issue with the specific "undefined behaviour" that we've been discussing is the endianness, as previously mentioned.

    Incorrect. Reading a different member than the one that was last written to is UB. Reading a type T using any pointer other than a T* (and char*) is UB.

    @eViLegion said:

    So, the code will work (and in fact does work) just as well on a PS3, PC and WiiU as it will on an XBOX

    Of course, one of the possible behaviors of UB is to be repeatable and reliable. It's still UB as far as the spec is concerned and more importantly, there's no real guarantee it will work for all possible inputs.

    @eViLegion said:

    Does your C++ compiler parse images? Nice feature... but why?

    He said compiler runtime. AKA the OS libraries, updating which may or may not require an updated compiler, or even otherwise cause a change in the behavior of the compiler. For example, the code of memcpy() might be changed due to a security patch which might cause your previously "working" UB to stop working

    @eViLegion said:

    I'm talking about the compiler you fool. It's more difficult to write a compiler that doesn't do the same thing each time (regarding unions with bitfields), yet still creates workable code that complies with language specifications.

    Incorrect. Compilers have and continue to do optimizations that essentially remove code that they detect is using UB altogether, because the best optimization is to not have code. Go read the LLVM blog entries about clang and UB: Part 1 Part 2 Part 3

    Fun fact: The optimization mentioned in part 2 (dereferencing a pointer to a local, then checking if it's null) is actually something they have a flag to disable, because it's an idiom spread throughout the Linux kernel.


  • ♿ (Parody)

    @eViLegion said:

    Blakey, it isn't possible for me to be insulted by someone about whom I couldn't give a shit.



    So, attempting to troll me, just because I have totally demolished you in every single debate we've ever had, has the twin effects of making you look like a sore loser, and making me happy not to have agreed with the biggest fucking idiot subscribed to this forum.

    An obvious git user.


  • Discourse touched me in a no-no place

    @eViLegion said:

    Again, the only ACTUAL issue with the specific "undefined behaviour" that we've been discussing is the endianness, as previously mentioned.

    Perhaps you should start discussing integer trap representations as well then?



  • @eViLegion said:

    So, attempting to troll me, just because I have totally demolished you in every single debate we've ever had, has the twin effects of making you look like a sore loser, and making me happy not to have agreed with the biggest fucking idiot subscribed to this forum.

    Wait a minute, was there a poll on this? Who's the runner-up, the black guy dressed as a clown or the ice cream cat-rapist?



  • @Arnavion said:

    @eViLegion said:
    Again, the only ACTUAL issue with the specific "undefined behaviour" that we've been discussing is the endianness, as previously mentioned.

    Incorrect. Reading a different member than the one that was last written to is UB. Reading a type T using any pointer other than a T* (and char*) is UB.

    Re-read what I said. I know it is "undefined behaviour". But the point is, on any given system it is deterministic behaviour, because making it non deterministic (yet still comply with specs) is either very difficult, or impossible. Between systems, all bets are off, hence the need to deal with special serialisation code for your platform. In this case, therefore, the only real issue to consider is endianness while serialising.

    @Arnavion said:

    @eViLegion said:
    So, the code will work (and in fact does work) just as well on a PS3, PC and WiiU as it will on an XBOX

    Of course, one of the possible behaviors of UB is to be repeatable and reliable. It's still UB as far as the spec is concerned and more importantly, there's no real guarantee it will work for all possible inputs.

    Like I said... 32 bit integers have a known layout of bits for any given hardware. While the code is executing, that layout of bits cannot change. The fact that a compiler might change in the future doesn't mean that some software that is currently running has nondeterministic behaviour.



    E.g. If I have two instances of a class, containing the bitfield unioned with an integer, I can be certain that copying that integer from one to the other will recreate the exact pattern of bits in the other, and thereby recreate the bitfield. To not do so would mean that the representations of the same integer, on the same platform, during the same runtime, are different.



    You can also be certain that it will work for all inputs because both a bitfield of 32 bits, and a 32 bit integer have a 1-1 mapping of conceptual bits to hardware bits. Those 1-1 mappings might be different but they do not change from moment to moment. So, within THIS runtime, on THIS system, I know that I can use the integer copy trick safely. I just shouldn't serialise 57 on a big-endian system, transmit, and use 57 on a little-endian system, and expect the unioned bitfield to be the same.

    @Arnavion said:

    @eViLegion said:
    Does your C++ compiler parse images? Nice feature... but why?

    He said compiler runtime. AKA the OS libraries, updating which may or may not require an updated compiler, or even otherwise cause a change in the behavior of the compiler. For example, the code of memcpy() might be changed due to a security patch which might cause your previously "working" UB to stop working

    Ah yes... I see, yeah I didn't read that properly.

    It's still basically not that relevant...the only problem is if the union/bitfield constructs are compiled differently by the same compiler... which won't happen. If it does, the WTF is the compiler, not the union/bitfield construct. Even if a later compiler version changes the way in which the bitfield bits map to that integer, it will STILL be a 1-1 mapping, so within the same runtime, two instances will be treated the same.

    @Arnavion said:

    @eViLegion said:
    I'm talking about the compiler you fool. It's more difficult to write a compiler that doesn't do the same thing each time (regarding unions with bitfields), yet still creates workable code that complies with language specifications.

    Incorrect. Compilers have and continue to do optimizations that essentially remove code that they detect is using UB altogether, because the best optimization is to not have code. Go read the LLVM blog entries about clang and UB: Part 1 Part 2 Part 3

    Fun fact: The optimization mentioned in part 2 (dereferencing a pointer to a local, then checking if it's null) is actually something they have a flag to disable, because it's an idiom spread throughout the Linux kernel.

    I'm pretty sure the best optimisation is never to not do what the programmer told you. That's just crazy. Incidentally, I skimmed that article, and didn't see any mention of optimisations that affect unions or bitfields.


    In fact, as I understand it, if you write to a float (for example), but read from a unioned integer then the standard doesn't define what integer value is produced, but you do get SOME valid integer value, it's not a trap representation, and the compiler is NOT allowed to optimize on the assumption that you have not done this.



  • @eViLegion said:

    Again, the only ACTUAL issue with the specific "undefined behaviour" that we've been discussing is the endianness, as previously mentioned.

    Goddamned. No, the actual issue with any undefined behavior is that it's undefined. You're writing code that works by guesswork. And what are you saving? Saving time? No, it's more work. Saving runtime? I highly doubt it. So you're being dumb but with no payoff.

    @eViLegion said:

    Does your C++ compiler parse images? Nice feature... but why?

    I didn't say compiler. You said compiler. I'm not going to reply to the voices in your head.

    @eViLegion said:

    More to the point, even if the compiler changes, it isn't likely to change this behaviour of unioned bitfields, because the undefinedness is a product of the hardware's endianness, which a compiler cannot change.

    As a former coder of Mac Classic apps, I never take endianness for granted. I can't tell you how much badly-written shit C and C++ code I had to rewrite because it broke due to the original coder assuming every CPU was identical to theirs.

    @eViLegion said:

    Blakey, you need to stop making blanket statements for which you're unable to justify yourself.

    Maybe. The problem is you say something like, "hey do this hacky shit and it works ok", and when I read that I think: whoa nelly. Ok, we know that probably 2/3rds of this forum is full of idiots who don't know shit, right? So they're going to read your advice and think, "now this hacky shit is ok" and go on to write hacky shit code all over. Maybe even worse because they won't understand what it does or when to use it. So I call out doing hacky shit as wrong, because I think we should discourage idiot programmers from using it. So yeah I have an ulterior motive, if you want to call it that.

    So I was trying to be a little more generous and say something like, "your advice is technically correct but practically wrong". There *may* be a situation in which relying on undocumented and/or undefined behavior is ok. I acknowledge that. But until you're an expert enough programmer, don't fucking do it. And if you are an expert enough programmer, you don't need some other wag on a forum telling you when to do it.



  • @Arnavion said:

    C# specifically ties you to something Mono runs on, aka most PC OSes and some phone OSes but no mainstream consoles except the XBox and PS3.

    No mainstream consoles except [two consoles that represent something like 80% of the market]. If you can write the same code that runs on PC, OS X, Xbox 360 and PS3, you're pretty much hitting every human being who can game there. And Wii sucks anyway.

    @Arnavion said:

    Google says there's no Java for the PS3 and IIRC the XBox version of Minecraft was written from scratch in C++ so there probably isn't for XBox either.

    Yes; but the Xbox port of Minecraft was written by the "we've always done it that way" game developers we're talking about. Would it be a better product had it been written in C#? Maybe. It certainly would have come to market quicker.

    @Arnavion said:

    As for perf, I don't play XBox games so I don't know how XNA games compare with native ones there, but on the PC atleast Kerbal Space Program (Mono) seems to have a lower "perf" (mediocre graphics quality, long-ish load times, sound choppiness when multiple things are going on at the same time, etc.) compared to what non-managed games have.

    Kerbal is version 0.22. ZERO-point-twenty-two.

    You're seriously using a game in alpha as your benchmark? Seriously? Fuck off. You're either trolling me or incredibly stupid.

    @Arnavion said:

    Of course, subjective measurement, single sample point, differing developer competencies, talking out of my ass, etc.

    YOU PICKED A GAME THAT IS IN THE MIDDLE OF DEVELOPMENT YOU ASS!



  • @eViLegion said:

    Re-read what I said. I know it is "undefined behaviour". But the point is, on any given system it is deterministic behaviour, because making it non deterministic (yet still comply with specs) is either very difficult, or impossible.

    On any given system that you know about.

    The writer of Fallout 2's installer assumed that DirectX would never be ported to WinNT, so if you run it on a WinNT, it refuses to install-- even though, once installed, the game runs fine. At the time he wrote it, I'm sure he came to some forum and posted, "oh this installer works on any given system."

    Your code assumes a future system has internally consistent endianness (i.e. values can't change from one endianness to another). There's no guarantee that will be true. And when the MegaUltraCPUPlus comes out in 7 years, you'll be working overtime rewriting all your apps while your competitors (who followed the documentation) are relaxing in the Bahamas after stealing all your customers.

    Now is that scenario likely to happen? Not really. But why take the risk?

    Write code that's guaranteed to work, not code that just works by accident.



    Actually, it cuts down both the CPU operations needed to copy the data in that field and the memory used by that field by a factor of around 32, and that's about it. It's really not a huge amount of work.



    Yes, I misread your sentence. It's still irrelevant though - like I said above, no compiler is going to be inconsistent with itself, or the compiler is the WTF.



    I'm not taking endianness for granted either. Like I said, you can't use this technique and just serialise the integer, as you'll get problems. You have to serialise the individual bits. This is no different from having to serialise the bunch-of-bools that you would have done instead.

    @blakeyrat said:

    The problem is you say something like, "hey do this hacky shit and it works ok", and when I read that I think: whoa nelly. Ok, we know that probably 2/3rds of this forum is full of idiots who don't know shit, right? So they're going to read your advice and think, "now this hacky shit is ok" and go on to write hacky shit code all over. Maybe even worse because they won't understand what it does or when to use it. So I call out doing hacky shit as wrong, because I think we should discourage idiot programmers from using it. So yeah I have an ulterior motive, if you want to call it that.

    So I was trying to be a little more generous and say something like, "your advice is technically correct but practically wrong". There *may* be a situation in which relying on undocumented and/or undefined behavior is ok. I acknowledge that. But until you're an expert enough programmer, don't fucking do it. And if you are an expert enough programmer, you don't need some other wag on a forum telling you when to do it.

    Fair enough. I agree that bad coders could easily read this, and conclude they should do it in unsuitable situations (e.g. where memory isn't tight), and they won't consider the things that need considering, and that would be bad. Thankfully I know WTF I'm doing.



    However, I put it to you that bad coders already have a million other paths to gain bad knowledge, and I submit as evidence of this: all the bad code anyone has ever seen.



  • @blakeyrat said:

    @Arnavion said:
    C# specifically ties you to something Mono runs on, aka most PC OSes and some phone OSes but no mainstream consoles except the XBox and PS3.

    No mainstream consoles except [two consoles that represent something like 80% of the market]. If you can write the same code that runs on PC, OS X, Xbox 360 and PS3, you're pretty much hitting every human being who can game there. And Wii sucks anyway.

    Mono actually supports the WiiU according to their website, but I was talking about things like the PSP. Then again I'm not a consolefag nor do I write software for consoles, so I know nothing about that space or how game companies decide which consoles to support.

    @blakeyrat said:

    @Arnavion said:
    Google says there's no Java for the PS3 and IIRC the XBox version of Minecraft was written from scratch in C++ so there probably isn't for XBox either.

    Yes; but the Xbox port of Minecraft was written by the "we've always done it that way" game developers we're talking about. Would it be a better product had it been written in C#? Maybe. It certainly would have come to market quicker.

    I was just using that as an example to say that there's no Java for XBox. I couldn't google for it because the first page of results for "xbox native java" is all about the java console for IE or something and I can't be arsed to look any more.

    @blakeyrat said:

    @Arnavion said:
    As for perf, I don't play XBox games so I don't know how XNA games compare with native ones there, but on the PC at least Kerbal Space Program (Mono) seems to have lower "perf" (mediocre graphics quality, long-ish load times, sound choppiness when multiple things are going on at the same time, etc.) compared to what non-managed games have.

    Kerbal is version 0.22. ZERO-point-twenty-two.

    You're seriously using a game in alpha as your benchmark? Seriously? Fuck off. You're either trolling me or incredibly stupid.

    @Arnavion said:

    Of course, subjective measurement, single sample point, differing developer competencies, talking out of my ass, etc.

    YOU PICKED A GAME THAT IS IN THE MIDDLE OF DEVELOPMENT YOU ASS!

    Ouch. Perhaps I wasn't clear. My point was that I was the one with the "subjective measurement, single sample point, differing developer competencies, talking out of my ass, etc." I'm agreeing that KSP is a single data point and not statistically significant. While I have a (subjective) feeling that it's slower than a comparative native game would have been, it is a meaningless comparison because it, as you said, is still in development.



  • @blakeyrat said:

    Now is that scenario likely to happen? Not really. But why take the risk?

    Memory on PS3 is REALLY tight. Seriously.



  • @eViLegion said:

    Fair enough. I agree that bad coders could easily read this, and conclude they should do it in unsuitable situations (e.g. where memory isn't tight), and they wont consider the things that need considering, and that would be bad.

    There's a lot of trash on the streets. I can't solve that problem. But I can put my own trash in a trash can.



  • @Arnavion said:

    Mono actually supports the WiiU according to their website, but I was talking about things like the PSP. Then again I'm not a consolefag nor do I write software for consoles, so I know nothing about that space or how game companies decide which consoles to support.

    Yeah; I was 80% sure it ran on Wii-U also but not sure enough to post that. So in conclusion, this code only runs on those non-mainstream consoles, like Xbox 360, PS3 and Wii-U. All those SUPER-MAINSTREAM consoles like Ouya you're scre-- wait... it runs on Ouya too? Shit.

    @Arnavion said:

    I was just using that as an example to say that there's no Java for XBox.

    Well herpderp, who said there was?

    The real question is why they haven't reverse-ported the C++ version back to PC, where it would run like non-shit. Then thrown the Java version in the trash. Then pissed on it.

    @Arnavion said:

    I'm agreeing that KSP is a single data point and not statistically significant. While I have a (subjective) feeling that it's slower than a comparative native game would have been, it is a meaningless comparison because it, as you said, is still in development.

    THEN WHY THE FUCK WOULD YOU EVEN TYPE IT

    You deserve to be insulted.


  • Considered Harmful

    @blakeyrat said:

    Then thrown the Java version in the trash. Then pissed on it and lit it on fire.

    FTFY



  • @eViLegion said:

    Re-read what I said. I know it is "undefined behaviour". But the point is, on any given system it is deterministic behaviour, because making it non deterministic (yet still comply with specs) is either very difficult, or impossible.

    @eViLegion said:
    I'm pretty sure the best optimisation is never to not do what the programmer told you. That's just crazy.


    In fact, as I understand it, if you write to a float (for example), but read from a unioned integer then the standard doesn't define what integer value is produced, but you do get SOME valid integer value, it's not a trap representation, and the compiler is NOT allowed to optimize on the assumption that you have not done this.

    You should read the articles I linked you, because this is wrong.

    @eViLegion said:

    You can also be certain that it will work for all inputs because both a bitfield of 32 bits, and a 32 bit integer have a 1-1 mapping of conceptual bits to hardware bits. Those 1-1 mappings might be different but they do not change from moment to moment.

    @eViLegion said:
    Like I said... 32 bit integers have a known layout of bits for any given hardware. While the code is executing, that layout of bits cannot change. The fact that a compiler might change in the future doesn't mean that some software that is currently running has nondeterministic behaviour.

    This is correct but not relevant to anything I said, or anyone else said in this conversation.

    @eViLegion said:

    So, within THIS runtime, on THIS system, I know that I can use the integer copy trick safely.

    @eViLegion said:
    E.g. If I have two instances of a class, containing the bitfield unioned with an integer, I can be certain that copying that integer from one to the other will recreate the exact pattern of bits in the other, and thereby recreate the bitfield.

    You don't, and you cannot.

    @eViLegion said:

    Incidentally, I skimmed that article, and didn't see any mention of optimisations that affect unions or bitfields.

    :facepalm:

