Java is a statically typed language which couldn't care less about type safety



  • @dkf said:

    Gaska:
    Leave C++ alone. If not for legacy stuff, it would be very good language.

    And if not for the fact that compiling and linking anything written in C++ can be an ordeal, I might possibly agree with you.

    Building projects in C++ can legitimately be a bitch, but my experience with this is that it's usually a bitch because whoever set up the build process did it wrong. I think the #1 sin here is turning off warnings when creating a new project. Developers who do this seem to never go back at any point to fix those warnings... which probably explains every bug they ever have to swat.

    And, frankly, I have experienced this same hassle in managed languages more frequently, because the original devs assumed that their managed language's nature meant portability was guaranteed. To be fair, this may be true of the compiled program, but it is rarely true of compiling the source. Since this is usually touted as something devs don't have to worry about with managed languages, I'm guessing it's much like poorly written makefiles for C/C++: the dev failed to specify the project requirements in the manifest or whatnot (it's been a couple of years since I touched .NET, and my recent experience with Java was with Android x86, so I'd be remiss to cast aspersions on the languages or developers in general on the basis of these).

    TLDR: stop blaming "languages" for what really amounts to "shitty developers".


  • Discourse touched me in a no-no place

    @powerlord said:

    * java.util.Date is a giant wtf.

    Yes. But there's a really good third-party replacement (JodaTime). Indeed, the library was ported to C# too.
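
    For a sense of what that buys you, here's a minimal sketch assuming Joda-Time (org.joda.time) is on the classpath; the dates and names here are invented:

    ```java
    import org.joda.time.DateTime;
    import org.joda.time.DateTimeZone;
    import org.joda.time.LocalDate;

    public class JodaSketch {
        public static void main(String[] args) {
            // Immutable, sanely-named types instead of the mutable java.util.Date
            LocalDate today = LocalDate.now();
            LocalDate nextPayday = today.plusWeeks(2); // arithmetic without Calendar gymnastics
            DateTime meeting = new DateTime(2014, 9, 1, 14, 30, 0, DateTimeZone.UTC);
            System.out.println(nextPayday + " / " + meeting);
        }
    }
    ```
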
    @powerlord said:
    * GUI stuff tends to be really archaic (AWT) or look weird due to not using system widgets (Swing).

    You're using the wrong theme if Swing isn't looking native.
    @powerlord said:
    * enums in Java are actual objects.

    That's one of these crazy/brilliant things, as it lets the enums be experts on something rather than just purely some tag that you pass around and which mediates expertise in other types.
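
    To make that concrete, here's a small sketch (the Op enum and its constants are invented for illustration) of an enum carrying its own behaviour instead of being a bare tag:

    ```java
    // Each constant supplies its own implementation, so the enum is the
    // "expert" on how to apply itself; no switch statement needed elsewhere.
    enum Op {
        PLUS  { public int apply(int a, int b) { return a + b; } },
        TIMES { public int apply(int a, int b) { return a * b; } };

        public abstract int apply(int a, int b);
    }

    public class OpDemo {
        public static void main(String[] args) {
            System.out.println(Op.PLUS.apply(2, 3));   // 5
            System.out.println(Op.TIMES.apply(2, 3));  // 6
        }
    }
    ```
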
    @powerlord said:
    Did I mention Java doesn't support overloading operators?

    That's actually a good thing in many ways; Java being like that was a reaction to the early days of C++, when it became clear that free operator overloading tends to end up with some very obscure code being produced. One of the biggest things about Java is that you can easily look at some code and see what it is doing, at least at an immediate level. Doing the same in C++ is significantly harder.
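
    As a trivial illustration of that "readable at an immediate level" point, here's a sketch using BigDecimal (the values are made up): every step names the method being called, so nothing can have been quietly redefined the way an overloaded `+` or `*` could be.

    ```java
    import java.math.BigDecimal;

    public class NoOverloadingDemo {
        public static void main(String[] args) {
            BigDecimal price = new BigDecimal("19.99");
            BigDecimal quantity = new BigDecimal("3");
            BigDecimal shipping = new BigDecimal("4.50");
            // Explicit method calls: you can see this is BigDecimal arithmetic.
            BigDecimal total = price.multiply(quantity).add(shipping);
            System.out.println(total); // 64.47
        }
    }
    ```
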

    Operator overloading would be more allowable in my eyes only if it was done by implementing algebraic structures like sets, groups, rings and fields. But very few mainstream programming languages are that bold (or maybe too many programming language designers just aren't aware enough of mathematics).

    If you're going to allow arbitrary operator overloading, why not allow defining entirely new operators too instead of restricting to the pre-defined collection? It's not that hard to write lexers and parsers which can do it; languages were doing that 20 years ago or more…



  • @dkf said:

    That's one of these crazy/brilliant things, as it lets the enums be experts on something rather than just purely some tag that you pass around and which mediates expertise in other types.

    C++11 implemented that as well (enums as actual types, not as specializations of int; use enum struct or enum class to get at this).

    @dkf said:

    Operator overloading would be more allowable in my eyes only if it was done by implementing algebraic structures like sets, groups, rings and fields.

    Here's the problem: some very good operator overloading applications don't fit into the basic algebraic structures well. Parsing Expression Grammars are a good example of this.


  • Discourse touched me in a no-no place

    @VaelynPhi said:

    TLDR: stop blaming "languages" for what really amounts to "shitty developers".

    Which doesn't change the fact that it's a much bigger problem in C++ than anything else. Something is deeply wrong, and pointing to other languages won't change that. (It's like the fact that it's possible to write reasonable code in PHP, but the community practice is mostly abysmal.)

    I'm quite happy to accept that the code I've got (and which I didn't write this part of) is wrong in some way, but the nature of the wrong is deeply unobvious; if I use one version of a compiler — built locally by me — everything works with exactly one warning (about file extension), but if I use the next version of the compiler (with matched libraries, so far as I can tell) then it all blows up in a system header included from a system header somewhere deep inside, while referencing a type that the code makes no explicit reference to, so far as I can tell. I'm using a completely clean build in both cases; I know not to make that sort of rookie blunder.


    Huh. It seems that adding -stdlib=libc++ was what was required to fix it. Obviousness itself!


  • Banned

    @dkf said:

    That's actually a good thing in many ways; Java being like that was a reaction to the early days of C++, when it became clear that free overloading tends to end up with some very obscure code being produced.

    Note that's true for any function - just create a function that has a name sufficiently different from what it actually does.



  • @powerlord said:

    Why is Class generic, for instance?

    So that its methods can have more specific types, duh. Like newInstance, for example.
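
    A minimal sketch of what that buys you (StringBuilder is just an arbitrary stand-in class here): because Class is generic, newInstance() returns T rather than Object.

    ```java
    public class ClassGenericsDemo {
        public static void main(String[] args) throws Exception {
            // The type parameter on Class lets reflective methods return
            // something more specific than Object, so no cast is needed.
            Class<StringBuilder> c = StringBuilder.class;
            StringBuilder sb = c.newInstance();
            sb.append("typed reflection");
            System.out.println(sb);
        }
    }
    ```
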



  • @Gaska said:

    Correct me if I'm wrong, but doesn't type erasure mean that the compiler has no way to statically check type safety of generic containers when interacting with a precompiled 3rd-party library?

    Depends what you mean. The original generic signatures of classes and methods are retained, so you can still check type safety of your own code that interacts with the library. You can't determine whether the library itself was compiled with unchecked warnings or not, though.
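
    A small sketch of that distinction (all names here are invented): the caller is still checked against the retained generic signature, but erasure means a library's internals can cheat with nothing worse than an unchecked warning, and the .class file carries no record of it.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class ErasureDemo {
        // Imagine this lives in a precompiled third-party jar; its generic
        // signature survives compilation, so callers are still checked against it.
        static List<String> names() {
            List<String> result = new ArrayList<>();
            result.add("alice");
            return result;
        }

        @SuppressWarnings({"unchecked", "rawtypes"})
        public static void main(String[] args) {
            List<String> ok = names();        // checked against the retained signature
            // List<Integer> bad = names();   // would be rejected at compile time

            // Erasure makes the check a compile-time courtesy only: a raw-typed
            // cast like this compiles with just an unchecked warning.
            List<Integer> smuggled = (List) ok;
            smuggled.add(42);                 // heap pollution
            String s = ok.get(1);             // ClassCastException at runtime
            System.out.println(s);
        }
    }
    ```
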



  • @dkf said:

    I'm quite happy to accept that the code I've got (and which I didn't write this part of) is wrong in some way, but the nature of the wrong is deeply unobvious; if I use one version of a compiler — built locally by me — everything works with exactly one warning (about file extension), but if I use the next version of the compiler (with matched libraries, so far as I can tell) then it all blows up in a system header included from a system header somewhere deep inside, while referencing a type that the code makes no explicit reference to, so far as I can tell. I'm using a completely clean build in both cases; I know not to make that sort of rookie blunder.

    Indeed, compiler messages are never as helpful as a good debugger; I think one of the unfairnesses done to C++ is expecting its compilers to emit debugger-style messages. Having a canonical compiler would be nice too.


  • Discourse touched me in a no-no place

    @Gaska said:

    50-line error messages that sometimes pop up if you get verily-templated library functions wrong

    50?

    Sometimes?

    :trollface:


  • Banned

    @PJH said:

    50?

    Sometimes?

    Yes, sometimes. Usually they're around 20-30 (not counting line wraps).



  • @Gaska said:

    C++ should have a real metaprogramming language embedded that allows explicitly examining the abstract syntax tree

    Something like Roslyn will support in the near future, you mean? Or if you would prefer something usable today: in-code creation or modification of expression trees, which has been possible ever since .NET 3.5 was released almost 5 years ago?

    (This is explicitly meant for the dumbass that said C# was a 'toy' compared to C++)


  • Banned

    @Ragnax said:

    Something like Roslyn will support in the near future, you mean? Or if you would prefer something usable today: in-code creation or modification of expression trees, which has been possible ever since .NET 3.5 was released almost 5 years ago?

    Roslyn is a dev tool framework. Expression trees are for dynamically creating C# snippets at runtime. What I meant is a compile-time metaprogramming facility - something such that, if I wrote the following (pseudocode):

    class Foo<type T>
    {
        T t;
        #if T has_method get()
        {
            typeof(T::get()) get()
            {
                return t.get();
            }
        }
        #else
        {
            T get()
            {
                return t;
            }
        }
    }
    
    class A
    {
        int i;
    
        int get()
        {
            return i;
        }
    }
    
    class B
    {
    
    }
    

    Then, if I instantiated Foo<A> and Foo<B>, the compiler would automatically generate the following:

    class Foo<A>
    {
        A t;
        int get()
        {
            return t.get();
        }
    }
    
    class Foo<B>
    {
        B t;
        B get()
        {
            return t;
        }
    }


  • @blakeyrat said:

    Ok; but the only example you've given of why RAII is better is "it saves 5 characters of typing" which, and excuse me if you don't agree, is fucking stupid.

    Arrrg. You are the most frustrating person. It's not the saved keystrokes (though I'll give another example in a minute for where it really makes a difference); it's the reduced chance for an error. Even good programmers can make mistakes sometimes, or even not know that a class requires closing in the first place. Or a class that you use "normally" might be modified so now it does, and now you need to go back and add usings everywhere you used that class. (In C++ of course, you don't actually need to do anything.)

    I'm not saying that RAII makes C++ better than C#. I'm saying that RAII is better than using, and I think C# and other languages would be better for it if they added a more C++y way of doing that particular thing. (I have a strong love/hate relationship with C++. I have been trending toward "memory unsafe languages should almost never be used" over the last few years though.)

    But here's another example where, at a syntactic level, I think C++-style RAII works much better than you can do in C# (at least AFAIK). I have a class that I use for tracing what's going on in one of the components I partially "own." I find it almost essential to indent the lines in the log to correspond to where in the computation it is. (An alternative would be to output to some treeish language like XML.) In C++, you can actually do this very nicely: somewhere you keep track of the current indent level, and then you make a class that will increase that level in the constructor and decrease it in the destructor. Then you can say something like

    void foo() {
        IndentingLogger logger;
        logger.log("Computing foo()");
        int x = bar();
        int y = x * x;
        logger.log("y=", y);
    }
    int bar() {
        IndentingLogger logger;
        logger.log("Computing bar(): returning 7");
        return 7;
    }
    

    and you get the output

    Computing foo()
        Computing bar(): returning 7
    y=49
    

    You could do the same thing with using, but having to start a whole new scope for something that is ancillary to the actual work that is getting done is ugly at best. This gets even more dramatic because I have often found it useful to make two IndentingLogger objects in the same function so I can show substeps that are small enough they haven't been factored out into their own function. And of course when I discover that I need to log something else, I just make a new IndentingLogger; in C#, you'd have to add a new using block and reindent a bunch of stuff.
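
    For comparison, a rough Java sketch of the using/try-with-resources shape being described; this IndentingLogger is invented to mirror the C++ example and is deliberately single-threaded. Note how every logged region needs its own block and an extra level of source indentation:

    ```java
    final class IndentingLogger implements AutoCloseable {
        private static int level = 0;          // single-threaded, as in the example above
        IndentingLogger() { level++; }
        void log(Object... parts) {
            StringBuilder line = new StringBuilder();
            for (int i = 1; i < level; i++) line.append("    ");
            for (Object p : parts) line.append(p);
            System.out.println(line);
        }
        @Override public void close() { level--; }
    }

    class Demo {
        static int bar() {
            try (IndentingLogger logger = new IndentingLogger()) {
                logger.log("Computing bar(): returning 7");
                return 7;
            }
        }
        static void foo() {
            try (IndentingLogger logger = new IndentingLogger()) {
                logger.log("Computing foo()");
                int x = bar();
                int y = x * x;
                logger.log("y=", y);
            }
        }
        public static void main(String[] args) { foo(); }
    }
    ```
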

    I'm not going to claim these are earth-shattering differences, but IMO the difference between being able to RAII and not is quite clear, and C++ wins that bout.



  • @Ragnax said:

    Or if you would prefer something usable today: in-code creation or modification of expression trees, which has been possible ever since .NET 3.5 was released almost 5 years ago?

    You have it the wrong way around. What we want is the ability to execute parts of our program at compile time. If Forth on a freaking Z80 can do this (immediate words, bro!), and Lisp has had macros practically since its inception in 19-freaking-58, why can't our SuperWhizBangLanguages™ of 2014? Oh, that's right, nobody looked at history and saw just how powerful such a feature was. Either that, or they ran away terrified from the idea. 😛



  • This post is deleted!

  • Discourse touched me in a no-no place

    @tarunik said:

    What we want is the ability to execute parts of our program at compile time.

    Compile time is just run time of a different program.

    (You are aware that in a real system, an IndentingLogger like that would suck? But not because of the language you're using: the real issue is that you'll probably have to deal with multiple threads and the resulting log will be a godawful mess. But this is just nitpicking at an example for reasons unrelated to why you used it.)


  • Discourse touched me in a no-no place

    @Spectre said:

    You can't determine whether the library itself was compiled with unchecked warnings or not, though.

    And not everything that is actually safe can be type-checked by the compiler to the point where the safety is automatically provable. I've encountered that a few times, e.g., where someone had built a map, indexed by class, of factories for instances of that class (or its subclasses). The methods were nicely type-safe (and had the right declaration), but proving their internal safety was beyond what the compiler could do.

    Sometimes you've just got to hide some ugliness somewhere to make the rest nice.
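
    A sketch of that kind of registry (the names are made up): the public methods carry fully type-safe declarations, while the one cast the compiler can't prove is hidden inside create().

    ```java
    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Supplier;

    final class FactoryRegistry {
        private final Map<Class<?>, Supplier<?>> factories = new HashMap<>();

        // Type-safe for callers: the key and its factory are forced to agree.
        <T> void register(Class<T> type, Supplier<? extends T> factory) {
            factories.put(type, factory);
        }

        // The declaration is type-safe, but the compiler can't prove the body;
        // we rely on the invariant register() maintains, so the ugliness is
        // confined to this single cast.
        <T> T create(Class<T> type) {
            Supplier<?> factory = factories.get(type);
            return type.cast(factory.get());
        }
    }

    class RegistryDemo {
        public static void main(String[] args) {
            FactoryRegistry registry = new FactoryRegistry();
            registry.register(StringBuilder.class, StringBuilder::new);
            StringBuilder sb = registry.create(StringBuilder.class); // no cast at the call site
            System.out.println(sb.append("works"));
        }
    }
    ```
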


  • Discourse touched me in a no-no place

    @Gaska said:

    Note that's true for any function - just create a function that has a name sufficiently different from what it actually does.

    True, but operators can turn that up a few notches. Java started life as a reaction to that sort of thing, so instead it has lots of long-winded blathering. Many Java programmers make things worse though; AbstractFactoryBuilderStrategyPatternCreatorService anyone?



  • @dkf said:

    (You are aware that in a real system, an IndentingLogger like that would suck? But not because of the language you're using: the real issue is that you'll probably have to deal with multiple threads and the resulting log will be a godawful mess. But this is just nitpicking at an example for reasons unrelated to why you used it.)
    Or you could pass an output stream to the IndentingLogger and log to different things in each thread. Or do parallelism using message passing between processes instead of intra-process parallelism. (Of course, they still have to go to different streams in that case; you can't set it up and then fork and not change where it goes.) Anyway, the thing I work on most of the time is actually single-threaded. This... is a problem, because what we're doing is computationally demanding, but on the flip side it's also a problem that tends to parallelize quite poorly, so if we wanted to do better we might wind up doing a metric asston of work and not see much benefit. (Or even potentially a slowdown!)

    @dkf said:

    Compile time is just run time of a different program
    Technically true, but they're very different. But of course you know that. For something like Lisp it doesn't matter so much, but for a language like C++ where typechecking happens after your macro transformations it makes a big difference.


  • Banned

    @dkf said:

    Compile time is just run time of a different program.

    Compile-time things are run once, when the binaries are built, and never again after that. Runtime things are run at every program execution. Even JIT can't prevent that.



  • @Gaska said:

    Roslyn is a dev tool framework. Expression trees are for dynamically creating C# snippets at runtime. What I meant is a compile-time metaprogramming facility

    Then you want T4 templates.

    (And btw. Roslyn is a bit more than 'a dev tool framework'.)


  • Banned

    @Ragnax said:

    Then you want T4 templates

    Can T4-generated classes be used as generics, with varying implementations depending on the type parameter? Because it seems like a one-time find-and-replace tool (that's not even part of C# itself!).

    @Ragnax said:

    And btw. Roslyn is a bit more than 'a dev tool framework'

    What is it, then? Can it be used for the task at hand (at compile time, not runtime)? Will it be accepted in Windows Store?



  • @powerlord said:

    GUI stuff tends to be really archaic (AWT) or look weird due to not using system widgets (Swing). I guess you could use SWT, but when IBM creates better features for your language than you do...

    UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
    

    See also: NapkinLAF



  • The TOC on their home page uses Comic Sans.



  • @Ragnax said:

    (And btw. Roslyn is a bit more than 'a dev tool framework'.)

    @Gaska said:

    What is it, then?

    The compiler.

    @Gaska said:

    Can it be used for the task at hand (at compile time, not runtime)?

    Yes. You'll need to make an MSBuild task so you can do stuff at compile time (before it hits Microsoft's IL compiler), and you'll need to modify Roslyn to accept your metaprogramming syntax, but it's doable.

    @Gaska said:

    Will it be accepted in Windows Store?

    Visual Studio's validator will probably fail, but it would still be accepted into the Store. Once preprocessed, the compiler will just be digesting regular old C# code, and Microsoft will have no way of telling you've fed it anything different.



  • @aliceif said:

    The TOC on their home page uses Comic Sans.
    That's the point of that one! Using Napkin instead of system-specific or Metal serves as a constant visual reminder that they're using a mockup rather than the real, final application.

    For system widgets, you want the system look and feel.



  • They use a much prettier font in the actual theme, though.


  • Banned

    @TwelveBaud said:

    Yes. You'll need to make an MSBuild task so you can do stuff at compile time (before it hits Microsoft's IL compiler), and you'll need to modify Roslyn to accept your metaprogramming syntax, but it's doable

    Will it confuse code completion much?



  • @TwelveBaud said:

    ```java
    UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
    ```

    See also: [NapkinLAF](http://napkinlaf.sourceforge.net/)

    The System LAF only *looks* like (a version of) the native interface. However, it's subtly enough different from it to do unexpected things to users.

    Also, that "a version of" is there because it looks like a specific version of an operating system's UI. For instance, Java 6's Windows LAF will look like Windows XP regardless of which version of Windows it's actually running on.


  • @powerlord said:

    Java 6's Windows LAF will look like Windows XP regardless of which version of Windows it's actually running on.

    Doesn't look that way to me. I just did a quick Swing demo, and the widgets look like Windows 8.



  • Heh. Java has been much better since Generics were added. OO isn't a bad thing, as long as you don't overdo it. It is at least far better than the horrifying mess that was Visual Basic (VB6 and older); at least VB.NET is really just syntax sugar for C#, the "real" .NET language. And given that C# is mostly "pirated Java" anyway, that isn't a bad thing.

    I actually like some of the C# fixes made to the Java language, though. I've coded in both, and both languages have their strengths and weaknesses. C#'s downside is mostly that it's tied to the MS ecosystem, but other than that it's a fairly OK language. 😄



  • Old classes @since JDK1.0, that nobody really used and nobody cared to generify.

    I've been programming in Java for >10 years now and never used Observer or Observable...



  • Apparently this problem has since been fixed... even in Java 5 (I actually used my Oracle account to download JDK 5u22 to find out).

    However, since it's not a native GUI they're constantly playing catch up. Most noticeable is how long it takes file dialogs to look like the latest version of Windows after a new version is released. In fact, I haven't even checked to see if they always look like the newest released version at the time or not.

    Hmm, I should install Java 8 and find out...


  • Banned

    Game developers have it much easier - they don't have to care about any UI conventions at all, let alone native look&feel ;)



  • Which is why there are so many games with shitty UI out there.


  • Banned

    Nope. Shitty UI is because of something entirely different - performance. You just can't draw pixel by pixel - you have to make textures and triangles so the GPU can digest it easily, and you must also optimize control flow to reduce pointer-jumping (which leads to cache misses). That's also the reason for terrible Unicode support.


  • FoxDev

    @Gaska said:

    You just can't draw pixel by pixel - you have to make textures and triangles so the GPU can digest it easily

    that doesn't mean the colour choices of your textures should be so poorly thought out that someone who is... say red-green colour blind can't read your menus at all.

    or as every single edition of borderlands so far has done: make 3+ loot colours indistinguishable to someone who is red-green colour blind, and not note the quality of the item anywhere but the colour, forcing me to help a colour-blind friend determine loot colours for him.


  • BINNED

    @accalia said:

    say red-green colour blind can't read your menus at all.

    HATE! HATE!

    especially the highlight color ... so you have almost no way of knowing what you have selected.

    I even have that sometimes with DVD menus ...
    :angry:


  • FoxDev

    i can't say that i know that feeling personally but i know several people (i game with them regularly) that have some variant of colour blindness. so i get a lot of second hand flack from games that have issues with accessibility.


  • kills Dumbledore

    Even without colour blindness, some games make it less than obvious which of a choice of two options is actually selected. It's pretty annoying when it's on something like "Save before quitting? Yes:no" with cycling options (so you can't just go up/left and know you're on the first option)

    I can't think of any particular game where this happens but there have been some that confused me with it


  • BINNED

    I mostly ask my daughter. She's still amused by her super-hero dad having trouble with the stupid colors.


  • ♿ (Parody)

    @Jaloopa said:

    Even without colour blindness, some games make it less than obvious which of a choice of two options is actually selected.

    YES. Like @Luhmann mentioned, I get this with DVD menus. With computer stuff, at least I can usually click with a mouse. I don't console.


  • FoxDev

    what's that in the sky?

    is it a bird?

    is it a plane?

    no!

    it's SUPER LUHMANN!

    (i couldn't resist, hopefully it was as funny as it was in my head)



  • @Jaloopa said:

    I can't think of any particular game where this happens but there have been some that confused me with it

    There's some retro-style game (I thought it was Jet Gunner, but I can't find any screenshots or videos showing the screen) that I played recently with a difficulty selection screen that has white text, and the actual menu options are blue and white. It looks like blue is the highlighted colour, since only one word on the screen is blue, but that's actually the off colour in the menu.


  • BINNED

    @accalia said:

    is it a bird?

    is it a plane?

    May I present: Mega Mindy


  • kills Dumbledore

    @hungrier said:

    It looks like blue is the highlighted colour, since only one word on the screen is blue, but that's actually the off colour in the menu

    That's the kind of thing I was thinking about. The colours are obviously different; it's just entirely opaque which is which.

    @boomzilla said:

    With computer stuff, at least I can usually click with a mouse

    doesn't help on consoles or older/weird games that don't support a mouse



  • @Gaska said:

    Nope. Shitty UI is because of something entirely different - performance. You just can't draw pixel by pixel - you have to make textures and triangles so the GPU can digest it easily, and you must also optimize control flow to reduce pointer-jumping (which leads to cache misses). That's also the reason for terrible Unicode support.

    Not sure if troll, sorry.



  • @Arantor said:

    Not sure if troll, sorry.

    Yeah, apparently pointing your windowing system's 2D drawing code at an off-screen surface (for Windows GDI, a memory DC) and then blitting the contents of that into a texture isn't a thing. Why?



  • @tarunik said:

    Yeah, apparently pointing your windowing system's 2D drawing code at an off-screen surface (for Windows GDI, a memory DC) and then blitting the contents of that into a texture isn't a thing. Why?

    That wasn't even my original point. Whether you're drawing pixel by pixel or doing textures, there's still no excuse for the fuckery that is the UI in some games.

    Space Empires IV comes to mind because if ever you saw the UI on that, you'd go WTF like I did the first time I tried to play it.


  • Banned

    Well, I was speaking purely technically. Artsy choices are an entirely different matter that I don't want to argue about, because I'm terrible at artsy things (can't tell violet from purple, for example). Accessibility problems are yet another thing. Game developers usually don't care about them at all - for a good reason: the average person has two hands with five fingers each, two fully working eyes and two fully working ears. Making extensive use of all of these features makes games much more enjoyable for the majority of people. But it bans blind people from playing video games, prohibits one-handed people from playing on consoles, somewhat cripples the deaf in games like Counter-Strike, etc. As horrible as it sounds, game developers care much, much more about making buttons pretty for the non-colorblind than making them usable at all, even in a basic way, for the colorblind.

    @tarunik said:

    Yeah, apparently pointing your windowing system's 2D drawing code at an off-screen surface (for Windows GDI, a memory DC) and then blitting the contents of that into a texture isn't a thing. Why?

    Because when doing real fullscreen, it actually, literally, physically isn't a thing. But that's not how things work these days.

