Which language is the least bad?
-
I think there's a lack of agreement about what OOP is supposed to be about in the first place
- You have the "original vision" of objects being more or less independent actors that communicate via a well-defined message-passing interface
- And on the other hand you have "data structures with methods attached for convenience"
And then real life code ends up being an incoherent mix of the two ideas.
Getters and setters are obviously necessary in the first model, since there would in fact be no such thing as an "attribute" in the first place.
In the second one, they're debatable. I still think they're ugly as fuck, and should at least be hidden by the programming language.
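In Ruby, for instance, the "hidden by the language" version looks roughly like this — attribute syntax at the call site, with the option to slide logic in behind it later (a minimal sketch; `Account` and the validation rule are invented for illustration):

```ruby
class Account
  attr_reader :balance          # generates a plain getter

  def initialize(balance)
    @balance = balance
  end

  # A setter with logic added "after the fact": callers still write
  # account.balance = n and never see that a check now runs.
  def balance=(value)
    raise ArgumentError, 'balance cannot be negative' if value < 0
    @balance = value
  end
end

account = Account.new(100)
account.balance = 50            # looks like bare attribute assignment
```

The call sites never change, which is the whole argument for language-level properties over explicit `get_x()`/`set_x()` pairs.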
-
@sloosecannon said in Which language is the least bad?:
@wharrgarbl said in Which language is the least bad?:
@kt_ said in Which language is the least bad?:
Yeah, globals, not understanding the benefits of properties. You are the kind of person that makes me hate coming to work, from time to time.
I also don't see the point of functions, either. I can do everything inside main with gotos. It gets better performance because then I don't have function prologues.
Oh FFS.
Swampy, is that you?
No one deletes knowledge like this!
-
@boomzilla said in Which language is the least bad?:
@gordonjcp said in Which language is the least bad?:
@boomzilla said in Which language is the least bad?:
Python adds ambiguity and makes it more difficult to format code however the fuck I want to see it.
Name one language that doesn't.
C. C++. Java. C#.
I know, 4 != 1.
I've seen some of the discussions on C++ esoterics around here. That is not what I would call a language that reduces ambiguity.
-
@dreikin said in Which language is the least bad?:
That is not what I would call a language that reduces ambiguity.
It reduces indentation ambiguity.
-
@dkf said in Which language is the least bad?:
@dreikin said in Which language is the least bad?:
That is not what I would call a language that reduces ambiguity.
It reduces indentation ambiguity.
As long as only one or the other type of leading whitespace is allowed, and there's always at least two extra spaces for an indentation, whitespace-based indentation is much less ambiguous for (insufficiently visually impaired) humans (and not significantly harder for computers). Counting braces is (relatively) hard, especially since they can be anywhere, whereas observing the indentation level is easy (especially in editors/IDEs that use the vertical bar things and block highlighting on mouseover).
-
I waited until there were 255 posts just so I could be #256.
Now to actually read this topic...
-
@dreikin said in Which language is the least bad?:
As long as only one or the other type of leading whitespace is allowed, and there's always at least two extra spaces for an indentation, whitespace-based indentation is much less ambiguous for (insufficiently visually impaired) humans (and not significantly harder for computers).
Not for this human.
-
@boomzilla said in Which language is the least bad?:
@dreikin said in Which language is the least bad?:
As long as only one or the other type of leading whitespace is allowed, and there's always at least two extra spaces for an indentation, whitespace-based indentation is much less ambiguous for (insufficiently visually impaired) humans (and not significantly harder for computers).
Not for this human.
Are you saying you're @Tsaukpaetra's sock puppet?
-
@captain said in Which language is the least bad?:
Every time I start a Scala tutorial: "Hmm, this doesn't look too bad, kind of like simplified Java. Let me just skip a few pages ... oh"...
Maybe you shouldn't "skip a few pages..."
It wouldn't be so bad if his pagination wasn't set to 50...
-
@wharrgarbl said in Which language is the least bad?:
@sloosecannon well, I only accept something as a good practice if it demonstrably prevents more effort than it costs. So far, in 20 years of programming, I have never needed to put logic in a getter or setter after the fact.
And I have!
-
@gordonjcp said in Which language is the least bad?:
512 words of flash
Wow, that's limiting for sure...
-
@dreikin said in Which language is the least bad?:
@boomzilla said in Which language is the least bad?:
@dreikin said in Which language is the least bad?:
As long as only one or the other type of leading whitespace is allowed, and there's always at least two extra spaces for an indentation, whitespace-based indentation is much less ambiguous for (insufficiently visually impaired) humans (and not significantly harder for computers).
Not for this human.
Are you saying you're @Tsaukpaetra's sock puppet?
Do previous releases count as socks?
-
@dreikin said in Which language is the least bad?:
@boomzilla said in Which language is the least bad?:
@dreikin said in Which language is the least bad?:
As long as only one or the other type of leading whitespace is allowed, and there's always at least two extra spaces for an indentation, whitespace-based indentation is much less ambiguous for (insufficiently visually impaired) humans (and not significantly harder for computers).
Not for this human.
Are you saying you're @Tsaukpaetra's sock puppet?
You might want to try a different language next time when you use Google translate.
-
@boomzilla said in Which language is the least bad?:
@dreikin said in Which language is the least bad?:
@boomzilla said in Which language is the least bad?:
@dreikin said in Which language is the least bad?:
As long as only one or the other type of leading whitespace is allowed, and there's always at least two extra spaces for an indentation, whitespace-based indentation is much less ambiguous for (insufficiently visually impaired) humans (and not significantly harder for computers).
Not for this human.
Are you saying you're @Tsaukpaetra's sock puppet?
You might want to try a different language next time when you use Google translate.
-
@dreikin said in Which language is the least bad?:
As long as only one or the other type of leading whitespace is allowed
I've seen a lot of Python code recently where the number of spaces per indent level varied quite a bit between different parts of the same function; it was semantically valid, but awful. (User-generated test scripts have all sorts of ugly crap in them.) In most languages, you can just tell the IDE to reindent all that stuff to get something sane, but that seems to be ridiculously difficult with Python…
-
@dreikin said in Which language is the least bad?:
@boomzilla said in Which language is the least bad?:
@dreikin said in Which language is the least bad?:
@boomzilla said in Which language is the least bad?:
@dreikin said in Which language is the least bad?:
As long as only one or the other type of leading whitespace is allowed, and there's always at least two extra spaces for an indentation, whitespace-based indentation is much less ambiguous for (insufficiently visually impaired) humans (and not significantly harder for computers).
Not for this human.
Are you saying you're @Tsaukpaetra's sock puppet?
You might want to try a different language next time when you use Google translate.
I.e., I have no idea what you were trying to say.
-
@boomzilla said in Which language is the least bad?:
@dreikin said in Which language is the least bad?:
@boomzilla said in Which language is the least bad?:
@dreikin said in Which language is the least bad?:
@boomzilla said in Which language is the least bad?:
@dreikin said in Which language is the least bad?:
As long as only one or the other type of leading whitespace is allowed, and there's always at least two extra spaces for an indentation, whitespace-based indentation is much less ambiguous for (insufficiently visually impaired) humans (and not significantly harder for computers).
Not for this human.
Are you saying you're @Tsaukpaetra's sock puppet?
You might want to try a different language next time when you use Google translate.
I.e., I have no idea what you were trying to say.
That if you find brace counting easier than indentation level observation, you might be more like an AI than a human. Since @Tsaukpaetra at least affects a manner like an AI, I inverted the joke that everyone's your sock puppet to suggest you were really his.
-
@dkf said in Which language is the least bad?:
@dreikin said in Which language is the least bad?:
As long as only one or the other type of leading whitespace is allowed
I've seen a lot of Python code recently where the number of spaces per indent level varied quite a bit between different parts of the same function; it was semantically valid, but awful. (User-generated test scripts have all sorts of ugly crap in them.) In most languages, you can just tell the IDE to reindent all that stuff to get something sane, but that seems to be ridiculously difficult with Python…
Huh, that surprises me. I wonder if it's a tooling issue, or something in the language design. Maybe I'll try that in PyCharm and VS Code later.
-
@dreikin said in Which language is the least bad?:
That if you find brace counting easier than indentation level observation, you might be more like an AI than a human.
Ah. Well, I only trust the one, and the other is trivially easy to fix.
-
@dreikin said in Which language is the least bad?:
That if you find brace counting easier than indentation level observation, you might ~~be more like an AI than a human~~ have an editor that does it for you.
-
@sumireko said in Which language is the least bad?:
There are no bad languages, just bad coders.
Of course there are bad languages (which is not to say that there aren't bad coders, don't get me started on that one). There are languages designed so clumsily that, instead of creating the logic you want, you're fighting an uphill battle against the mouthbreathing moron(s) who designed the language (for an example of this, see that piece of PHP that was linked somewhere upthread, I think). Then there are languages that are mostly sane, but have this one feature that trips you up over and over and over again, and you're pretty sure the designers put it in there just to be cute (looking at you, Elixir).
@sumireko said in Which language is the least bad?:
Even ES6 can be turned into a modern masterpiece using the right people.
ES6 is not the problem itself, it's (comparatively) not as clumsy as the previous versions and (again, comparatively) a step in the right direction. The bullshit about ES6 lies partly in the fact that it's supported almost absolutely nowhere (maaaybe Node has complete support by now? No browsers tho) so if you want to use it you condemn yourself to using some of the shittiest tooling in the universe.
<rant>Oh, and if you want to support something besides the latest and greatest browsers, then you better watch your step because Babel ain't your mother and its authors admit freely that their transpiler simply fucks up sometimes because they're too lazy to fix it. And from my experience that list isn't even complete.</rant>
The other bullshit part about ES6 is that while it helps writing somewhat sanely-looking code, the internals didn't change one bit. It's all fine and dandy that they've added a `class` keyword now, but it's just syntactic sugar on top of regular objects, so that sucks anyway.
For a language that's not hard to read and write, Ruby doesn't suck. If you want something fast, though, I've been enjoying Rust lately, but I don't know it well enough yet to tell whether it sucks or not.
-
@pjw said in Which language is the least bad?:
For a language that's not hard to read and write, Ruby doesn't suck.
The single largest problem with Ruby comes in maintenance, as the language encourages cute hacks that end up being really nasty. There's a bunch of other stuff lurking once you do production use, but that's arguably more about the weirdness of Rails and many of the common Ruby libraries.
I pray that they've fixed their stupid string concatenation. That was just insane.
-
@dkf I wouldn't say the language itself encourages the cute hacks, it just enables them, but the community certainly does. Rails is pretty much one huge monkeypatch, changing the behavior of core classes. Like String.
In recent versions (can't remember which one exactly) a mechanism for scoping monkeypatches, called refinements, was introduced - basically so library authors can get cute and monkeypatchy with whatever they want and the cuteness doesn't leak - but since it's purely optional, and IMO unwieldy as fuck, hardly anyone bothers. Except for zealots.
Which stupidity about string concatenation are you talking about? ;) Actually, I feel that there has been a new one introduced recently - since all string literals are frozen by default, trying something like
```ruby
str = 'Hello'
str << ', world'
```
now raises a runtime error. It'd make for a lovely curveball in fizzbuzz whiteboarding, but annoys the guts out of most of my peers.
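For reference, the usual escape hatch looks like this (a minimal sketch, assuming Ruby >= 2.5, where `FrozenError` and unary `+` exist; the literal is frozen explicitly here to stand in for the `frozen_string_literal: true` magic comment):

```ruby
greeting = 'Hello'.freeze   # what frozen_string_literal: true does to every literal

begin
  greeting << ', world'     # mutating a frozen string...
rescue FrozenError
  # ...raises: can't modify frozen String
end

str = +greeting             # unary + returns a mutable copy (greeting.dup also works)
str << ', world'
str                         #=> "Hello, world"
```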
-
@dkf Yeah. Ruby's great for prototyping, and if you have half-decent coders it's not too bad for low volume production work, but it's hard to scale up. The main problem is that marketeers and hipsters alike think that prototype == saleable product.
-
@dkf said in Which language is the least bad?:
I pray that they've fixed their stupid string concatenation. That was just insane.
Is Ruby the one where, if you have two strings in different encodings and try to concatenate them, it just mushes the bytes together and declares the result to be UTF-8?
-
@pjw said in Which language is the least bad?:
Which stupidity about string concatenation are you talking about?
There was certainly a point where if you had two strings in different encodings (e.g., some in ISO8859-1 and others in UTF-8), a thing which it's very easy to end up with when working with legacy databases, then the concatenation would simply glom the bytes together and produce a load of brokenness. This was particularly noticeable when you then used the strings in an XML document that you fired back to a client in some other language; the other language's parser would notice the confusion and throw a really quite cryptic error message about an invalid byte sequence (which it would be completely correct about). OK, it's not a nice situation to be in in the first place, but merely concatenating the bytes when they're different encodings is really not doing the right thing. (Actually, I'm not sure that it's always correct even when the strings are in the same encoding; some of the things that go on in some of the Japanese encodings are pretty odd IIRC.)
I believe that was the day when I used the phrase “fucking shit-kicking dickwads” about the implementors of Ruby.
-
@tsaukpaetra It kind of comes with quiescent currents in the hundreds-of-nanoamp and fully awake currents in the hundreds-of-microamp range.
These are a bit specialised.
-
@tufty said in Which language is the least bad?:
not too bad for low volume production work
There at least was a horrible problem at one point with the interaction between the multi-process http server library and the database cache, where the first thing the database cache would do in a new process would be to build all its in-memory caches of ALL THE THINGS, yet the default (and strongly recommended for stability raisins) configuration of the http server library was to use a new process for each request. If you had a database of any size and anything more than the most minimal amount of traffic (i.e., other than a simple developer test) then the whole ensemble would start sucking vast amounts of information from the database all the time despite not needing it. I suspect that this sort of thing is what is really behind Rails's reputation for needing extremely beefy servers to run on.
The individual pieces were fine and the decisions reasonable, but the concatenation of circumstances was just terrible.
-
@gordonjcp said in Which language is the least bad?:
@tsaukpaetra It kind of comes with quiescent currents in the hundreds-of-nanoamp and fully awake currents in the hundreds-of-microamp range.
These are a bit specialised.
Sounds like fun. Are you allowed to share any details on the architecture?
I'm rather fond of Forth for that kind of really small footprint work.
-
@tufty I love Forth, and indeed ported it to a 1980s sampler, but the footprint is a bit small even for that. They're somewhat like PICs, but NDAed to hell and gone.
-
@tufty said in Which language is the least bad?:
I'm rather fond of Forth for that kind of really small footprint work.
Have you tried Factor?
-
@gordonjcp said in Which language is the least bad?:
@tsaukpaetra It kind of comes with quiescent currents in the hundreds-of-nanoamp and fully awake currents in the hundreds-of-microamp range.
These are a bit specialised.
It almost sounds like you're working on biofuel-powered devices, or practically-a-passive-device devices...
-
@dkf it's far from solved, I'll admit it freely. When Ruby encounters two strings of different encodings being concatenated, it'll raise an exception:
```ruby
utf8 = 'zażółć gęślą jaźń'
iso = 'zażółć gęślą jaźń'.encode('iso-8859-2')
con = utf8 + iso
#=> Encoding::CompatibilityError: incompatible character encodings: UTF-8 and ISO-8859-2
```
It's much funnier when Ruby doesn't know that the strings are in different encodings - one of the standard library's functions for reading over HTTP will happily assume it's getting UTF-8 when it's not told explicitly (e.g. via an HTTP header) what the server is sending. Then it will of course gladly concatenate strings, operate on them, write them to files... Makes for some fun debugging when it comes up all mangled on your end because the server's UTF-16LE was interpreted as UTF-8.
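The usual defensive move looks something like this (a sketch; the ISO-8859-2 bytes below are an invented stand-in for whatever the server actually sent):

```ruby
# Bytes that are really ISO-8859-2, but arrived labeled as UTF-8.
# force_encoding only changes the label; the bytes stay untouched.
data = "za\xBF\xF3\xB3\xE6".dup.force_encoding('UTF-8')

data.valid_encoding?    #=> false - a cheap sanity check before operating on the string

# Relabel with the real encoding, then transcode for internal use:
fixed = data.force_encoding('ISO-8859-2').encode('UTF-8')
fixed                   #=> "zażółć"
```

Of course, `valid_encoding?` only catches byte sequences that happen to be invalid in the assumed encoding; a lie that produces valid-but-wrong characters sails straight through.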
Similarly to dates, strings are apparently hard.
I was happily chugging along in Ruby and now you're making me remember all the quirks, fuckups and caveats :D
@dkf said in Which language is the least bad?:
I believe that was the day when I used the phrase “fucking shit-kicking dickwads” about the implementors of Ruby.
Truth be told I'd be much happier if they were actually dickwads, but they seem like a nice bunch ;)
-
@pjw said in Which language is the least bad?:
When Ruby encounters two strings of different encodings being concatenated, it'll raise an exception:
That's a great improvement over what it used to do, which would be to just blithely assume that the result is the same encoding as the first string. (Or was it the second? I digress…) At least now there's a chance that the person who has the problem can notice the fact and fix it.
It's much funnier when Ruby doesn't know that the strings are in different encodings
Yeah, well that's a different problem that will catch out lots of other languages too, unless they actually verify the encoding of the string on import. Oh, look, that's what many of them actually do. (I include both C# and Java on that list.) The general approach used is to convert to some internal unicode variant that is distinct from the external encodings usually used (often something like WTF-16 or a UTF-8 relative) on the basis that a character (codepoint in abstract Unicode space) is what is preserved, not the byte sequence. It costs some time when things are correct, but truly avoids a whole crapton of trouble otherwise. It's possible to optimise further, but a trust-but-verify approach is still advised, and Ruby still goes beyond that into highly troublesome territory.
The worst problems of all come when the DB itself contains things in a mixed encoding within a single column. (It is much easier to tolerate different encodings in different columns.) But that's its own special circle of hell, filled with bust user data and weird accents on mojibake…
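Ruby can actually do that convert-at-the-boundary dance too, if you tell it to; a sketch (the file and its ISO-8859-2 contents are invented for the example):

```ruby
require 'tempfile'

# Simulate a legacy file holding ISO-8859-2 bytes ("zażółć").
legacy = Tempfile.new('legacy')
legacy.binmode
legacy.write("za\xBF\xF3\xB3\xE6")
legacy.close

# 'r:EXTERNAL:INTERNAL' declares the encoding at the boundary and
# transcodes on read, so only UTF-8 ever enters the program.
text = File.open(legacy.path, 'r:ISO-8859-2:UTF-8', &:read)
text             #=> "zażółć", already valid UTF-8
```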
-
@pjw said in Which language is the least bad?:
It's much funnier when Ruby doesn't know that the strings are in different encodings - one of the standard library's functions for reading over HTTP will happily assume it's getting UTF-8 when it's not told explicitly (e.g. via a HTTP header) what the server is sending. Then it will of course gladly concatenate strings, operate on them, write them to files... Makes for some fun debugging when it comes up all mangled on your end because the server's UTF-16LE was interpreted as UTF-8.
If anyone ever lands in that case, I think 7-bit ASCII is the safest assumption.
-
@pleegwat said in Which language is the least bad?:
If anyone ever lands in that case, I think 7-bit ASCII is the safest assumption.
ISO8859-1 is actually the safest. It's probably wrong ;) but it at least maps characters to characters in a way that doesn't corrupt the byte sequence, meaning there's a chance (sometimes) that you can figure out what the data should have been.
-
@dkf Depending on locale. Flagging any codepoint >= 0x80, at least during development, will force the developer to take a stance on the matter.
Then again, they may just blindly apply utf-8 and iso-8859-* is much less likely to generate illegally encoded output.
-
@tsaukpaetra Apparently it's to sit running off a lithium battery for years occasionally Doing Stuff and very rarely talking. I've never seen the actual hardware.
-
@dkf said in Which language is the least bad?:
The general approach used is to convert to some internal unicode variant that is distinct from the external encodings usually used (often something like WTF-16 or a UTF-8 relative) on the basis that a character (codepoint in abstract Unicode space) is what is preserved, not the byte sequence.
This. A string type is not supposed to have an encoding, only its serializations do. Which is also why string != array of bytes and any language which uses the same type for both is broken.
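The distinction is easy to see in any language that keeps the two views apart; in Ruby, for example:

```ruby
s = 'zażółć'
s.length     #=> 6  - characters (codepoints)
s.bytesize   #=> 10 - bytes in the UTF-8 serialization
s.chars      # character view: ["z", "a", "ż", "ó", "ł", "ć"]
s.bytes      # byte view: ten integers - a different thing entirely
```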
-
@ixvedeusi said in Which language is the least bad?:
A string type is not supposed to have an encoding
Obviously it does have an encoding because it needs to be stored somehow, but you're right in the sense that the programmer shouldn't need to know or care what that internal encoding is.
A decent API in a decent language shouldn't even allow the programmer to determine the internal encoding without hackery like directly inspecting memory.
-
I like how people still cite the "fractal of bad design" post for PHP, when it's now 5 years old and a surprising number of its criticisms have been remedied, and some of the rest require major compatibility breaks, though there is every sign of them being tackled going forward.
-
@arantor once you destroy the trust in your brand, it isn't easy to rebuild it.
-
@arantor once you destroy the trust in your brand, it isn't easy to rebuild it.
(Ironically nodebb just failed to post this in the first try)
-
@twelvebaud said in Which language is the least bad?:
Fortunately, at the API level, all you need to do is rebuild; there's no syntax difference between field access and property access in most CLR languages, so as soon as the compiler notices the changed metadata you're good to go.
Unless something somewhere reflects over the class, in which case the reflection APIs for fields and properties are vastly different and things will break silently once what used to be a field is now a property.
-
@wharrgarbl said in Which language is the least bad?:
(Ironically nodebb just failed to post this in the first try)
It's a strategy, see, if nobody can trust you in the first place to be basically competent, you have no place to go but up.
-
It doesn't seem like anyone's been discussing Rust yet. I would say that it closely competes with C# for my least-hated language. What's the general opinion of Rust here?
-
@pie_flavor said in Which language is the least bad?:
What's the general opinion of Rust here?
Have they managed to write a GUI in it yet? No? Why would I think it is not a toy then?
-
@dkf said in Which language is the least bad?:
@pie_flavor said in Which language is the least bad?:
What's the general opinion of Rust here?
Have they managed to write a GUI in it yet? No? Why would I think it is not a toy then?
I don't know what 'they' you're referring to. There are certainly bindings for Gtk, Qt, WinAPI, and a bunch of others. And I wouldn't expect an entire desktop operating system to be writable in a 'toy' language.
What makes it a toy to you?
-
@pie_flavor said in Which language is the least bad?:
It doesn't seem like anyone's been discussing Rust yet.
"Quick! The Rust Evangelist Strike Team must rectify that!"