C strings
-
RAII is better,
It really depends on your data. RAII on the head of a list is an unavoidable immediate O(n) if you drop the whole list, whereas a GC can amortize deleting the list nodes over a few updates if it desires.
-
It really depends on your data. RAII on the head of a list is an unavoidable immediate O(n) if you drop the whole list, whereas a GC can amortize deleting the list nodes over a few updates if it desires.
One: Lists are shitty data structures. Don't use them where performance matters.
Two: If performance matters, that's the realm of optimization. So if you need to, reorganize your code so that it deletes as many or as few elements from the list (which you shouldn't be using because you are in a part of code where performance matters) as you can afford.
-
OK fine, but just pointing out the myth that RAII is always better than GC in any situation.
-
OK fine, but just pointing out the myth that RAII is always better than GC in any situation.
If you mean that GC lets you get away with using bad data structures in performance-critical areas, I agree. I disagree that this makes it better in that situation.
-
I'm not trying to suggest that. Let's reframe the issue a little:
Thirty-or-so years ago, people used to write performance-critical code in assembler because compiler optimizers couldn't be trusted to produce fast enough code. These days, optimizers are orders of magnitude more advanced and can probably spit out faster asm than you can write by hand. I expect that at some point the state of the art with GC will improve similarly, so that it will be a lot harder to write reference-counted code that is as fast as the machine-optimized GC. We're probably not there yet, but a lot of people reflexively kneejerk against GC in the same way that 1980s developers kneejerked against optimizers.
So all I am really taking issue with is the idea that GC will always be inferior to RAII in every case, forever.
-
The problem with GC is that it can't deal with resources in a uniform way -- it's great for memory, but lousy for database connections, file handles, etc...
-
So all I am really taking issue with is the idea that GC will always be inferior to RAII in every case, forever.
My objection to GC, aside from tarunik's point that it can only properly handle memory at this time, is that it is often used as a crutch to hide poor design.
I don't want my tools to punish me for mistakes, but I also don't like tools that coddle me. I would like my tools to encourage correct use, and discourage poor practices. I believe RAII matches this better than GC.
You may be able to find examples that give GC an advantage, but the only one such example you showed is one where GC allows you to do the wrong thing more easily. With RAII, the architecture of the language itself points out that you are doing something expensive in a part of the code where you can't afford to do expensive things, and it offers different tools to fix it. Using a better data structure is the one that immediately comes to mind.
-
crutch
Your objection to GC doesn't have anything to do with my objection to @tarunik's objection to my objection to the myth that automated GC will always beat rolling your own GC.
Filed under: objection!
-
I object!
-
Oh FFS, you are an idiot, aren't you? I was responding within the context of my defense of using dynamic types in C# when appropriate, and you seemed to forget about the context ... REALLY?
The problem with "when appropriate" is that it's either ALWAYS or NEVER appropriate EVEN IN THE SAME CONTEXT - the only difference is what the person is trying to prove.
You are a fucking tool aren't you
Yes, I am purple dildo.
The recommended way is to use undefined rather than null (it's been mentioned in the JS community thousands of times; do some fucking reading or watch a JSConf video).
All I can find is "undefined and null both mean no value, but it's a different no for each of them and each should be used appropriately".
GC is a good thing.
For lazy programmers, yes.
Even the this keyword really isn't that difficult to deal with if you spend 5 seconds reading the docs.
But it's still counter-intuitive that your function can do something completely different depending on how it was called. Even Python doesn't do that!
As I said, the most vocal opponents of JS are the ones that really don't understand anything about the language or the community. You've just proved my point.
I wasn't particularly vocal. And I wasn't even talking particularly about JS, just dynamic typing in general.
I know what your reply is going to be: "oh, it is bad language design and it is still there" etc. Well, for fucking obvious reasons they can't throw the crap stuff out overnight, just like PHP, C#, and any other language that has been around for a while.
I don't hate JS for not throwing out bad stuff; I hate it for putting in bad stuff in the first place. It's a badly designed language that works not too well, but is still okay for webpages, and it has been popularized only thanks to royal decree. This is a fact, and no amount of whining will change that.
Mind you, I also hate C, C++, C#, Pascal, Java, ActionScript, Linux shell scripting, Lua, Go, Python, PHP, HTML, Clausewitz Engine scripts, UML, x86 Assembler, and several other languages. Some more, some less, but there's something I dislike very much in each of them.
-
There are only two kinds of languages: the ones people complain about and the ones nobody uses.
— Bjarne Stroustrup
-
The problem with "when appropriate" is that it's either ALWAYS or NEVER appropriate EVEN IN THE SAME CONTEXT - the only difference is what the person is trying to prove.
Oh FFS you are an idiot aren't you?
Look, you talk to a server, and you get a bunch of JSON data. Now if you're lucky, the data will follow a strongly-typed contract and allow you to map them to a PO?O model. If you're less lucky (say, you're working with Discourse pseudo-API), they won't map nicely, because one of the fields can be either a number, string, another object, or even not be there at all.
You can fuck around with dictionaries, or you can just use a clean object syntax for that. And even map it to an actual type in the next step using that.
-
OK, that's one legitimate use case. Though I'm not sure the data will be of any use if there's no API contract (I mean, how do you parse data if you don't know the format?).
-
(I mean, how do you parse data if you don't know the format?).
You try until it works.
Seriously though, it's not that there's no contract, it's that the contract may not be strongly-typed itself. With things like "if Jupiter is in zenith and x is equal to 7, the "description" parameter is an object containing an error code and a random Shakespeare quote, otherwise it's a string with a description".
-
Ah okay. Fair enough. But someone should fire (at) spec guys somewhere...
-
Ah okay. Fair enough. But someone should fire (at) spec guys somewhere...
Now for the ramblings part:
http://i.imgur.com/ahheHQh.png
"Instead of sending your draft as a sub-object, we'll send it as a nicely escaped JSON string, because fuck you, that's why".
With JSON, you never know what to expect. With Discourse, doubly so.
(this one is actually consistent, but still...)
-
But someone should fire (at) spec guys somewhere...
Ask blakey. We do that eventually in every thread here.
-
-
Can I inherit from @boomzilla?
-
You can inherit his behavior, but his money will be handled through the Child references in the LastWill object.
-
All I can find is "undefined and null both mean no value, but it's a different no for each of them and each should be used appropriately".
> typeof null
"object"
> typeof undefined
"undefined"
Undefined is preferred, so you just use that ... problem solved.
It's badly designed language
In your opinion.
-
Casting a pointer to a uintptr_t, doing arithmetic on it
Shouldn't you be using ptrdiff_t if you're doing arithmetic on pointers?
You are a fucking tool aren't you:
Been at the gin again Lucas? ;)
I object!
That anything like I, Robot?
-
Undefined is preferred, so you just use that ... problem solved
Except JSON can't have undefined.
In your opinion.
Not just mine. But tell me, what kind of language implements bitwise operators but doesn't implement an integer type?
-
struct type_hdr * hdr = recv(...);
struct type_data * data = (struct type_data *)(hdr + 1);
-
Shouldn't you be using ptrdiff_t if you're doing arithmetic on pointers?
Useful pointer arithmetic is pretty much restricted to computing the sum of a pointer and an integer, in which case the result will be a pointer of the same type; or the difference between two pointers, which yields a ptrdiff_t integer result.
C scales the integers involved in pointer arithmetic by the sizeof whatever the pointers point to. If you want to work with raw machine addresses, cast the pointers to pointer-to-char because sizeof (char) is defined to be 1.
uintptr_t defines an integer type you can losslessly park pointers in. The relationship between the bits of the uintptr_t and the bits of a pointer is implementation-dependent (for example, there are all kinds of ways to pack an x86 far pointer into a 32-bit int). That means that once you've cast your pointer to a uintptr_t, it's best treated as an opaque value.
Of course people don't treat uintptr_t as opaque; they make assumptions about what the various bits mean, and do bit-fiddling operations on the converted pointer type that C won't allow on a char* directly. I don't like designs that do this because they almost always cause trouble down the road; I'm sure I'm not the only TDWTF reader old enough to remember the 68020 Mac debacle.
-
what kind of language implements bitwise operators but doesn't implement an integer type
The kind whose designer knows that 64-bit floats make perfectly cromulent 52-bit ints?
-
The kind whose designer knows that 64-bit floats make perfectly cromulent 52-bit ints?
I didn't say it can't be done (it was done, after all). I'm saying it's a bad idea. Bitwise operations are useful only for low-level optimizations - so implementing them in JS is doubly nonsensical, because 1) if you write in JS, you don't care about low-level optimizations, and 2) casting to int, performing the operation, and casting back to float is slower than having each bit as a separate boolean variable, so there's nothing to gain.
-
My objection to GC, aside from tarunik's point that it can only properly handle memory at this time, is that it is often used as a crutch to hide poor design.
In many ways, the biggest problem with GC is that it tends to push up memory consumption. The issue is that some memory can't usually be deleted until you do an (expensive!) mark-and-sweep collection. It's possible to hide the collections in most apps fairly easily — GC technology has come on a long way — but the overall increased hunger for memory and the related reduction in density of memory use (and so somewhat worse cache performance) remains an irritant. It's also really hard to integrate stuff that uses one GC with stuff that uses a different one.
A standardised way of doing garbage collection would be nice. We're definitely not there yet.
(RAII supporters, remember this: it only deals with the easy cases, and then in a way that many others were doing before anyway. The awkward cases such as reference loops are much more tricky.)
-
68020
So, uintptr_t (had it existed at the time) should've ideally been 3 bytes wide on this architecture?
-
So, uintptr_t (had it existed at the time) should've ideally been 3 bytes wide on this architecture?
Treating it as if it actually was 3 bytes wide is kind of what caused the problem. The original Mac design only had 128K of RAM; the idea that you'd ever put megabytes of RAM into a personal computer was generally thought of as kind of pie-in-the-sky stuff, a cultural error that also led to things like the 640K RAM limit in the IBM PC architecture.
The Mac was based on a 68000 processor. The 68000 family architecture has 32-bit address registers, but the original 68000 had only a 24-bit address bus, so the most significant 8 bits of the address registers never affected anything outside the CPU.
The Mac memory manager (part of an OS shipped in ROM) saved a fair bit of space by stuffing flag bits and whatnot into the "unused" upper 8 bits of assorted address structures. It was written in 68000 assembler, not C, so no actual uintptr_t type was ever involved; but the kinds of address manipulation done were exactly the kind of thing that uintptr_t in C lets you get away with doing.
When later Mac models shipped with 68020 processors - essentially the same architecture, but with a full 32-bit address bus - the fact that the OS memory management routines only worked with 24-bit addresses translated to an inability for the system to address more than 8MiB of RAM, even though by that time you could easily buy a Mac with more than that fitted.
-
There was a little checkbox in the Mac control panel for a few versions to turn on "24-bit aware" mode.
Then later, another one for "32-bit aware" mode. IIRC. It's obviously been a few yea-- decades.
-
The kind whose designer knows that 64-bit floats make perfectly cromulent 52-bit ints?
I know that, in the past, we had some sections of code which used 64-bit floating points to store integer values that did not fit in 32 bits. The quoted reason was that floating point was more portable.
The code has never needed to run on anything but 32-bit and 64-bit x86 architectures, and was written after 2000. I do not think it is still in place.
-
I believe that on modern processors you can use the bottom few bits for doing magic with type flagging. This is because, with the exception of char and short (and the types related to/derived from them), everything has to be aligned to at least a 4-byte boundary. Memory allocators typically use an 8-byte boundary. Try to access a 32-bit value that isn't aligned and you'll (probably, unless you take special steps) get a bus error.
This is, of course, a horribly dirty hack. I believe that Lisp implementations do this sort of thing much more commonly than C-derived languages do.
-
Lisp implementations do this sort of thing
VMs for dynamic languages in general do this. Ruby's VM did, I think, last time I checked (about 5 years ago...)
-
VMs for dynamic languages in general do this.
Some do, some don't. I never had any idea what to mark in there myself, other than that the programmer is a smartass git. (I never needed to mark that; it's a constant 1 in my programs.)
-
I wonder if 2 or 3 bits is enough to hold flag data for a GC implementation...
-
The clang compiler implementation also likes having a bit party with many of its pointers.
-
I'm told that the JVM does this too. IIRC they even use custom signal handlers for SIGSEGV/SIGBUS/..., in order to handle accesses to bit-partied-with pointers that weren't properly "unpartied".
-
JSON isn't JavaScript. It is another non-issue because null and undefined behave the same when being evaluated in a conditional.
Again, all these complaints sound like they come from someone who doesn't actually have any real experience using the language.
-
Been at the gin again Lucas?
Only a little bit. However, considering the combination of the dogma and the fact that I bothered looking up some realistic scenarios as examples, I feel completely justified.
-
Ruby solves the problem by having nil be an object that you can define methods on. Go solves the problem by having the zero-value of any type be a valid value for that type. C++ solves the problem by avoiding pointers where possible. JavaScript avoids the problem by defining two nulls, having them equal each other in some cases but not others, and crashing with the very helpful error message "null is null or not an object".
-
Ruby solves the problem by having nil be an object that you can define methods on
That's horrible.
Go solves the problem by having the zero-value of any type be a valid value for that type
As in, dereferencing a null pointer creates a new default-constructed object?
C++ solves the problem by avoiding pointers where possible
Avoiding pointers my ass.
-
Ruby solves the problem by having ~~nil~~ everything be an object that you can define methods on.
-
@ben_lubar said:
Go solves the problem by having the zero-value of any type be a valid value for that type
As in, dereferencing a null pointer creates a new default-constructed object?
A nil slice or map acts like a read-only empty slice or map. A nil channel acts like a blocking channel with nothing on the other side.
-
So basically, it puts value-type semantics on everything, and with weird defaults to boot? That's... probably causing more problems than it solves.
-
Imagine java.util.Map<java.lang.String, java.lang.Integer> x = new java.util.HashMap<>(); if you could do x["foo"]++; and it would Just Work™.
Now imagine iterating over java.util.Map<java.lang.String, java.lang.String> y; (defined as an instance variable, never explicitly initialized) without crashing.
Now imagine the statement var z string in Go. That makes a variable named z of type string. All variables default to the zero value of their type if they are not initialized. The zero value of string is "". The zero value of *string is nil. The zero value of any numeric type is 0.
-
Imagine java.util.Map<java.lang.String, java.lang.Integer> x = new java.util.HashMap<>(); if you could do x["foo"]++; and it would Just Work™.
public class DictionaryForSpecialSnowflakes<TKey, TValue> : Dictionary<TKey, TValue>
    where TValue : struct
{
    public TValue this[TKey key]
    {
        get
        {
            if (this.ContainsKey(key))
                return base[key];
            base[key] = default(TValue);
            return base[key];
        }
        set { base[key] = value; }
    }
}
There, it's trivial to implement in pretty much any language. And it's also a horrible idea.
Say, I have a Dictionary<string, int> temperatures, I ask for temperatures["Alaska"], and I get 0. Did I put that 0 in? It's a valid value, after all.