The abhorrent 🔥 rites of C
-
It honestly doesn't seem so illegible
It's just too bad it doesn't compile: http://goo.gl/0RUlxv
Pro-tip: if you're going to defend a language, you could at least be fluent enough in it to make a trivial function compile. Let's go through the errors:
`if(*out == NULL)`
`out` is the pointer to the int you have to write to. `*out` is the value of the int. So you just dereferenced a pointer without checking that the pointer isn't null, you're reading a probably uninitialized value (since it was given to the function for the function to write to it), and you're checking that some random int doesn't equal a null pointer. That line should read `if (out == NULL)`.
Next:
`*ptr = 10;`
You declared `ptr` as `void* ptr`. So `*ptr` is referring to the value of a void object. Which doesn't exist, meaning it's an error. The fix here is to declare `ptr` as `int* ptr` and cast the result of the malloc to `int*`.
Finally, the same question that confused me before:
`return TRUE;`
`TRUE` is usually defined as 1. Since you are returning a pointer, you're trying to assign an int to a pointer, which is a nonsensical operation. The compiler rightly objects. The fix is to return an int instead of a pointer to int.
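Putting the three fixes together: a minimal sketch of a corrected function (the original isn't quoted in full, so the name `make_ten` and the `int**` out-parameter are assumptions on my part):

```cpp
#include <cstdlib>

// Sketch only: allocates an int, sets it to 10, hands it back via `out`.
// Returns an int status (1/0) rather than a pointer.
int make_ten(int** out) {
    if (out == NULL)                            // test the pointer, not *out
        return 0;
    int* ptr = (int*)std::malloc(sizeof(int));  // int*, with the malloc cast
    if (ptr == NULL)
        return 0;
    *ptr = 10;
    *out = ptr;                                 // hand the new pointer back
    return 1;                                   // an int, so TRUE would be fine here
}
```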
-
The version I proposed is more cunning(1) - it is the computer-science equivalent of Gödel's Incompleteness Theorem, and the program CANNOT be proven to either terminate or loop.
But it cannot be truly written, since the Halting Problem is really a proof that you can't write the program; if the problematic piece existed, a contradiction would be derivable by trivial construction, so the part of the program that causes the problem does not exist. (Or at least not in finite space. Infinite “algorithms” don't even make formal sense.) Picking a problem where the search program is trivial to write but the answer is currently unknown is a more cunning approach; it leans on the general complexity of mathematics, showing that there's really no easy algorithm for proving termination at all.
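One classic instance of that approach (my example, not from the thread): a loop that halts iff an odd perfect number exists. The search code is trivial; whether it terminates is an open question. A sketch, ignoring integer overflow:

```cpp
#include <cstdint>

// Naive sum-of-proper-divisors perfection test.
bool is_perfect(uint64_t n) {
    uint64_t sum = 0;
    for (uint64_t d = 1; d < n; ++d)
        if (n % d == 0)
            sum += d;
    return n > 0 && sum == n;
}

// Trivial to write, yet nobody can currently prove whether it halts:
// it returns iff an odd perfect number exists. (Don't wait for it.)
uint64_t first_odd_perfect() {
    for (uint64_t n = 3; ; n += 2)
        if (is_perfect(n))
            return n;
}
```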
-
See Why Pascal Is Not My Favourite Programming Language.
I've seen it. It's an obnoxiously self-serving argument from a guy who had a direct financial interest in the success of the C programming language, seeing as how he wrote the book The C Programming Language. It doesn't change the fact that Pascal has proper strings and arrays while C never did, or that C (and C++) continues to be plagued by buffer-based security holes to this very day. (While such errors are theoretically possible in Pascal, I'm only aware of one such error actually happening in released code, as opposed to the monthly security updates I get for OSes, browsers, and other vulnerable C and C++ software.)

File pointers are different because on most operating systems you have a limited number of files that you can open simultaneously, so the special handling is justified.
Then it should handle them differently, as a file type rather than a pointer. If the compiler will let you call `free()` on something that's a valid pointer variable, when it's never actually a valid action to call `free()` on that variable, your language is .

You can do quicksort in haskell with exactly 1 line of code.
Can you really?

As sorting algorithms go, quicksort is actually pretty bad in a lot of ways, except for one thing that's right there in its name: it's really, really fast. (In the best and average cases, at least.) But this high performance is attained by mutating the array being sorted, something Haskell devotees will proudly tell you is impossible in Haskell.
Is it possible to achieve Quicksort's performance characteristics in one line of Haskell? Or is it only possible to create a recursive divide-and-conquer sort that looks like Quicksort if you squint just right, but isn't actually quick?
-
@Steve_The_Cynic said:
The version I proposed is more cunning(1) - it is the computer-science equivalent of Gödel's Incompleteness Theorem, and the program CANNOT be proven to either terminate or loop.
But it cannot be truly written, since the Halting Problem is really a proof that you can't write the program; if the problematic piece existed, a contradiction would be derivable by trivial construction, so the part of the program that causes the problem does not exist. (Or at least not in finite space. Infinite “algorithms” don't even make formal sense.) Picking a problem where the search program is trivial to write but the answer is currently unknown is a more cunning approach; it leans on the general complexity of mathematics, showing that there's really no easy algorithm for proving termination at all.
(pendantry) It's the decider that can't be written, not the otherwise trivial program that shows that the decider can't be written, but the "infinite space algorithm" part is only part of the puzzle. Even if the decider could be written in finite space and could run in finite space and time, the existence of trivially-constructable items like the "I loop if I would halt, but I halt if I would loop" program is what makes the universally-capable decider impossible. A decider that refuses to reason about itself or about programs that use itself would not be bothered by this program, but, equally, wouldn't be complete (which was more or less Gödel's point, of course).
-
A decider that refuses to reason about itself
It turns out that it is impossible to build such a thing. Equivalence over functions is another of these undecidable problems…
-
@Steve_The_Cynic said:
A decider that refuses to reason about itself
It turns out that it is impossible to build such a thing. Equivalence over functions is another of these undecidable problems…
I did mean absolutely literally itself (given that the breaker-program calls Decider X, if Decider X refuses to reason about Decider X, the breaker-program cannot break in the particular way it does). But that's just cheesing the thing, and ultimately you have to degrade the decider to refuse to reason about code that analyses code rather than just refusing to analyse itself.

And that, too, is the point. A complete/universal decider cannot be written. I think we are violently agreeing by now.
-
Pascal was~~n't a viable~~ arguably a far better systems programming language ~~(or, frankly, a viable programming language)~~ than C, despite never gaining a massive market foothold, ~~for quite some time~~ because of ~~mis~~superior features like the size of an array or string being part of its ~~type~~ representation,
Fixed that for you.
-
I did mean absolutely literally itself (given that the breaker-program calls Decider X, if Decider X refuses to reason about Decider X, the breaker-program cannot break in the particular way it does).
I know you did, but determining if something is being fed itself is genuinely difficult, since it could also be encoded in any random Gödel numbering scheme (of which there are infinitely many). The complex type systems that have sprung up in advanced programming languages (of which Haskell is just the merest shadow) are attempts to block this, but they all either are undecidable anyway or decidable and only capable of expressing primitive-recursive functions. The reasoning-completeness limits are where they are because they're utterly fundamental.
I think we are violently agreeing by now.
Yep.
-
I learned Pascal before C (and rather liked it, given that I was coming from a brain-damaged BASIC background). The main problems with it were that standard Pascal had some really odd restrictions when it came to interactions with the OS, so instead you needed some non-standard libraries. Libraries were never standard, not even by generally accepted practice. (Didn't stop there from being good libraries, but portability between platforms wasn't there.)
Other things needed to change too, such as the string length limits, but they would've been possible to fix without getting deep into the language definition.
But it didn't win. C (and its myriad progeny) did.
-
and rather liked it, given that I was coming from a brain-damaged BASIC background
Congratulations for (presumably) proving Dijkstra wrong.
-
What do you mean by this? Why wouldn't you use standard free() ?
Because you have to remember to use it on every execution path which necessitates the deallocation of memory. Because you need to know which deallocation function to use, e.g. given `Foo *foo;` should you call `free()` on it, or is there a `free_foo()` function somewhere? Should you call it at all, since someone else might claim ownership? The compiler won't tell you. In C++ you get this stuff done automatically.
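A sketch of that difference (the `Foo` and function names are invented): in the C version every early return needs its own cleanup, while the C++ version releases on every path automatically.

```cpp
#include <memory>

struct Foo { int x = 0; };

// C style, for contrast (every exit path must remember the free):
//   Foo* foo = malloc(sizeof *foo);
//   if (error) { free(foo); return -1; }   /* forget one and you leak */

// C++ style: unique_ptr frees on every path, including thrown exceptions.
int process() {
    auto foo = std::make_unique<Foo>();
    foo->x = 42;
    if (foo->x < 0)
        return -1;     // foo freed here automatically
    return foo->x;     // ...and freed here too
}
```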
Explain why then. Don't use an argument from authority. C++ was created in the 80's. Windows NT was created in 1993. If it is such a no-brainer, why didn't Microsoft programmers use C++ entirely instead of going for a mixture of C++ and C (and C#)?
Because the guys who knew how to make an OS were the guys who already had experience in it. And guess which language they had to get their experience with? That is the essence of the C vicious circle.
Incidentally, which OS are you developing?
Tizen
You can find operating systems written in Java or Haskell. How many of those caught on?
I actually can't remember any and I don't know if any caught on. What does that have to do with the language used? Nothing at all. A shitty OS is always shitty, regardless of the language, eg. Tizen will always be shit.
The "garbage collected languages" part was not referring to C++, but to the likes of Java and C#.
So why do you mix it into the discussion regarding C++? I could also add that using whitespace as keywords is a bad idea, but what would that have to do with the topic? Nothing.
The alternatives didn't seem to catch on, however, and the landscape is dominated by C. If the advantages are so many that one would have to be insane to ignore the tradeoff, why do so many people choose to do so? You cannot chalk it all up to cruft or programmer ignorance, since there are systems that started from scratch and didn't adopt C++.
The reason for that was already explained to you by various people. You can always pretend it wasn't, tho.
Can I extrapolate this to ALL the algorithms and not just sort and sqrt? I know some functions in the STL were faster in C++ than in C, but so is asking for the first element of a 10 000 element list in Haskell (or the first element of an infinite list, for that matter). What I meant was how fast it is in the general case of running programs. As it turns out, this depends on the compiler and the implementation and not on the language itself. What compiler are you using? GCC, LLVM, Intel C++ Compiler?
The point is that C++ allows for optimizations, which are impossible to do in C. You can argue that's the compiler, but without the language support, the compiler wouldn't be able to do those. And a C compiler can't, as explained in that book.
So, you're claiming that when writing low level code, you do not need to manage memory by hand?
Explain what you mean by "low-level". The lowest level for using raw pointers is custom allocators and new/delete (that's obvious). Since those are already abstracted away, you in fact never have any reason to deal with owning raw pointers to memory. Now why would someone do that? Why do this stuff manually? In fact it even takes more code to write, so even being lazy is a reason to write good code.
So what's C++ alternative to owning raw pointers?
In other words, you know shit about C++, yet you dare compare it with C and give absolute statements, like it not being memory safe. Oh, you're not getting away that easy - learning how to properly manage memory (that is, not in the idiotic C way) will be your homework. Don't post an answer until you do it.
Because people want performance more than they want a sane language I guess.
Not really. In fact I can't think of anyone who would give up good practices for pretty much no gains in performance. That's not like switching from Java to C++.
You made the claim that C++ was more performant and safe in the first place. The burden of proof is on you, not on me.
Posted an example link with an explanation. You could at least bother to read it.
-
Congratulations for (presumably) proving Dijkstra wrong.
Dijkstra was wrong about many things, but at least he was wrong for good and interesting reasons. There are other people who I only wish could be that sort of wrong.
-
The point is that C++ allows for optimizations, which are impossible to do in C.
There are plenty of optimisations possible in C.
-
I think he meant
@NeighborhoodButcher said: The point is that C++ allows for optimizations
, which are impossible to do in C.
-
I think he meant
@NeighborhoodButcher said: The point is that C++ allows for optimizations
, which are impossible to do in C.

I'm curious as to which optimisations he's thinking of.
-
I fail to see how that is an issue, though. You open() a file and you close() it. The interface is perfectly consistent
We know you fail to see the issue, since you're a C programmer. The issue is - in C you have to do this manually and remember to do it every time. In C++, you just don't do it and let the compiler do the work. In C++, if it compiles, that's enough proof it works (assuming a working standard library, but that is a rather safe assumption). In C you never know, until it explodes in your face as another exposed vulnerability.
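As a sketch of that point (standard library only; the function name is invented): the stream's destructor closes the handle on every return path.

```cpp
#include <fstream>
#include <string>

// Reads the first line of a file; the ifstream's destructor closes the
// handle on every return path -- nothing to remember, nothing to leak.
std::string first_line(const std::string& path) {
    std::ifstream in(path);          // opened here
    if (!in)
        return "";                   // closed here (trivially)
    std::string line;
    std::getline(in, line);
    return line;                     // ...and closed here, automatically
}
```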
C++ does a lot of stuff behind the scenes to make that happen. Doing that in high level applications is good. Dealing with stack unwinding and exceptions in low level code is not so ideal.
Been there, done that. Missed argument, move on.
In theory this sounds horrible, but in practice it's not such a huge problem. C was intended for low-level usage and this kind of "unsafe semantics" was intentional. Most tools would detect uses after free, though (if you bother to use them).
Oh yes, because it's better to rely on some 3rd party tool to show you your code is broken, instead of a compiler telling you that upfront. Because it's sooo much better to do all the management yourself, instead of 0 (or near-0) overhead abstractions. Want some owning memory with a lifetime guarantee?
auto foo = std::make_unique<Foo>();
done
Want thread-safe shared memory?
auto foo = std::make_shared<Foo>();
done
Now show me your compile-time error-checking thread-safe reference-counted memory management in C. (Hint: don't try - you can't)
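For reference, what makes `make_shared` the thread-safe variant is the atomically updated reference count in its control block; a minimal sketch (threads omitted for brevity, and `share_and_count` is an invented name):

```cpp
#include <memory>

struct Foo { int x = 7; };

// Shared ownership with a compiler-checked type and an atomic refcount:
// the Foo is freed exactly once, when the last owner goes away.
long share_and_count() {
    auto foo = std::make_shared<Foo>();
    auto other = foo;              // refcount bumps to 2
    return foo.use_count();        // 2 while both owners are alive
}
```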
-
I'm curious as to which optimisations he's thinking of.
That link is one of the answers. Please read it, since I don't want to copy-paste a book. And I'm too lazy to write it all myself.
-
How brain damaged? ZX Basic? Commodore Basic?
-
I'm curious as to which optimisations he's thinking of.
The typical example is algorithms, like sort. In C, you'd have a function that sorts based on a comparator function, and you call the sort function with a function pointer parameter. The program then has to do dynamic dispatch to call the comparator function on each pair of elements.
In C++, you'd have a function template that can take a functor class (a class that overloads the function call operator) parameter. During compilation, when the function template is instantiated the compiler can see who the comparator is, and it can either do a static function call or even inline the comparator if it wants to.
To get the same kind of optimization opportunities, the C programmer would have to write a specialized sorting function for each type, repeating the same code over and over with just a change in the comparator called.
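A side-by-side sketch of the two styles described above (standard library only; function names are mine):

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// C style: the comparator goes through an opaque function pointer,
// so qsort must make an indirect call per comparison.
static int cmp_int(const void* a, const void* b) {
    int x = *(const int*)a, y = *(const int*)b;
    return (x > y) - (x < y);
}

std::vector<int> sort_c_style(std::vector<int> v) {
    std::qsort(v.data(), v.size(), sizeof(int), cmp_int);
    return v;
}

// C++ style: the lambda's type is baked into the std::sort instantiation,
// so the compiler can see the comparator and inline it.
std::vector<int> sort_cpp_style(std::vector<int> v) {
    std::sort(v.begin(), v.end(), [](int a, int b) { return a < b; });
    return v;
}
```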
-
In C++, you'd have a function template that can take a functor class (a class that overloads the function call operator) or preferably a lambda parameter.
let's be modern
-
That's redundant, since a lambda is just syntax sugar for an anonymous functor class :D
But yes, lambdas are great at getting rid of boilerplate, and so making algorithms much easier to use.
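The desugaring, roughly (the names here are invented): the hand-written functor below is more or less what the compiler generates for the lambda.

```cpp
// What a lambda desugars to, roughly: a class with operator().
struct LessThan {
    bool operator()(int a, int b) const { return a < b; }
};

inline bool by_functor(int a, int b) { return LessThan{}(a, b); }

inline bool by_lambda(int a, int b) {
    auto less = [](int x, int y) { return x < y; };  // anonymous functor
    return less(a, b);
}
```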
-
Haskell programs can be proven for termination
Haskell is not a total language, so you can't prove termination (in the general case).
-
For example, since lists are allowed to be infinite (because of lazy evaluation), a function that uses all the list elements can't be total: applied to an infinite list, it will obviously never halt. But what about a function that only uses some elements?
Derp, now you're back at Rice's theorem, since that class is not trivial.
-
It doesn't change the fact that Pascal has proper strings and arrays while C never did
It didn't have proper strings or arrays when Ritchie wrote that, in 1981, because the size of the string/array was part of its type, so it was impossible to write a function that worked on 'a string' or 'an array'.
Fixed that for you
1981, remember. The size of strings and arrays was part of the type in the original version of Pascal (which was the standard in 1981), so a function that took strings of size 10 couldn't take strings of size 9 or 11. This is a crippling bug.
Similarly I consider the lack of break/continue/return combined with logical operators not having a defined order of execution to be "Oh fuck off" territory.
AFAIK this stuff was all fixed later, mostly by extensions to the language in the compilers, but in the meantime C didn't have those problems, and "Whoops you can stomp all over memory" is less of a blocker to adoption, particularly back then, when it was practically a feature.
-
@ronin said:
You mean like Pascal?

File pointers are different because on most operating systems you have a limited number of files that you can open simultaneously, so the special handling is justified.

Then it should handle them differently, as a file type rather than a pointer. If the compiler will let you call `free()` on something that's a valid pointer variable, when it's never actually a valid action to call `free()` on that variable, your language is .
Either you propose that allocated objects should carry type information that automagically associates them with their proper deallocation function, which is incompatible with the requirements for programming bare metal, or you provide some generic `malloc`/`getmem`/whatever and allow the programmer to shoot themselves in the foot.
-
It's just too bad it doesn't compile
You're right, it doesn't. I didn't have a C compiler handy at the moment and it was written in a rush. The intention was to return an int and to assign the newly created pointer to the old pointer.
Then it should handle them differently, as a file type rather than a pointer.
It's a special type of pointer. FILE is already a type. The FILE * semantics actually tell you that it is a FILE pointer: FILE is the type, and * signifies that it's a pointer to a file. What would actually change if you omit the pointer? Nothing. Virtually no one tries to use `free()` on a file pointer. It is not even an issue.

Can you really?
Okay, I lied, two lines:
qsort [] = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)
I didn't account for the first line, which handles the case of an empty list.
Is it possible to achieve Quicksort's performance characteristics in one line of Haskell?
I haven't measured the code for speed (or claimed that it was faster for that matter). Do you have a large list of randomly generated numbers and enough time? Hold on, I'll fetch my beer.
Because you have to remember to use it on every execution path which necessitates the deallocation of memory.
Doing `fclose()` on a file is not an issue amongst C programmers. Show me a large security hole that has resulted from it. It's not more complicated than doing `.close()` on a file object. You are arguing non-issues.

Because the guys who knew how to make an OS were the guys who already had experience in it. And guess which language they had to get their experience with? That is the essence of the C vicious circle.
So Donald Knuth, Theo de Raadt, Linus Torvalds and Bertrand Meyer are all wrong to distrust C++ and you are right, then?
Tizen
You mean the one that is based on the Linux kernel, which in turn is written in C? ( https://en.wikipedia.org/wiki/Tizen ). Given how unenthusiastic you are about C, why haven't you reimplemented the rest of the kernel in C++? I thought that was the meat of your argument, that you know what it is like to implement an OS from the ground up in C++.
But regardless, you must have some killer performance measurements of what writing those not-quite-userspace parts of the OS in C++11 are. Can you share them with us?
The point is that C++ allows for optimizations, which are impossible to do in C. You can argue that's the compiler, but without the language support, the compiler wouldn't be able to do those. And a C compiler can't, as explained in that book.
Some optimizations are. Also some aren't. C++ has worse spatial locality than C. In fact so does anything object-oriented.
Oh yes, because it's better to rely on some 3rd party tool to show you your code is broken, instead of a compiler telling you that upfront.
Compilers can do that too now. I was hesitant to mention sanitizer routines since I don't know if they exist in GCC. On Clang, `-fsanitize=<memory/address/etc>` would do exactly that.
In other words, you know shit about C++, yet you dare compare it with C and give absolute statements, like it not being memory safe.
I wasn't aware of a C++11 feature which uses object-oriented parts to do its job. Exception handling has an overhead. Again... do you use object-oriented constructs in the low-level parts of the kernel? A little googling tells me that this is not the case. Because your operating system has those parts implemented in C. Because it uses the Linux kernel, and that is written in C. You claim that you can get much better performance writing the entire OS in C++? Fine, so be prepared to back up your claims instead of resorting to angry outbursts against the C language or claiming that you are the only one in possession of the truth.
Yes, I read the link you posted. C++ can `qsort()` faster than C, but that doesn't prove that it's faster for coding operating systems. This is why I brought up Haskell. Generating a 10000 element list and asking for the first element is faster than both C and C++, because it computes fewer things. That doesn't make it faster for the general case, and I wouldn't even want to see the code that generates.

The claim that C has to evolve and be replaced is a valid one. The claim that it is a bad language is, at best, a dubious one. The claim that C++ can replace it for ALL THE THINGS has to have some foundations. And instead of assuming that the rest of the world is idiotic and insane for using C, and going on lengthy and angry tirades, calm down and try to provide evidence for your claims. If they are as compelling as you say they are, you will find no opposition.
-
I wasn't aware of a C++11 feature which uses object-oriented parts to do its job. Exception handling has an overhead. Again... do you use object-oriented constructs in the low-level parts of the kernel? A little googling tells me that this is not the case. Because your operating system has those parts implemented in C. Because it uses the Linux kernel, and that is written in C. You claim that you can get much better performance writing the entire OS in C++? Fine, so be prepared to back up your claims instead of resorting to angry outbursts against the C language or claiming that you are the only one in possession of the truth.
Not knowing about RAII or unique_ptr is roughly equivalent to not really understanding what structs are when writing C; it's kind of a big deal.
Again, you only pay for exceptions if you actually throw one. The happy path - the path executed the vast majority of the time - is just as fast. The slow path - the one where the compiler has to unwind the stack - might be slower than versions where you return error codes, maybe, but it's also much safer if you do it right, because you know you're not going to leak anything (destructors are guaranteed to be called, RAII, etc.).
unique_ptr does not have significant overhead, even if it's an object. No virtuals, no dynamic dispatch, it's equivalent to a bare function call that passes a pointer to a struct. The optimizer might be able to inline the function calls, as well, they're fairly simple.
I wouldn't expect unique_ptr to have any significant overhead compared to malloc()/free(), and it has the advantage of being automatic.
-
You mean the one that is based on the Linux kernel, which in turn is written in C? ( https://en.wikipedia.org/wiki/Tizen ). Given how unenthusiastic you are about C, why haven't you reimplemented the rest of the kernel in C++? I thought that was the meat of your argument, that you know what it is like to implement an OS from the ground up in C++.
Oh snap. You just went there.
waits for @NeighborhoodButcher to set off a small thermonuclear device at @ronin's place of residence
Why do you think he hates C so much? Because he has to work on that piece of excrement, that's why...
-
Haskell is not a total language, so you can't prove termination (in the general case).
Of course not. If Haskell were total, it would not be Turing complete and there would be at least one substantial and interesting class of problems it could not tackle. But Haskell (as with other general programming languages) doesn't bother with that restriction and so supports “interesting” behaviour.
Which isn't to say that programs are uninteresting just because they're bound to trivially be finite. Just that languages that can't tackle non-trivial problems are typically not very interesting.
-
In fact so does anything object-oriented.
Not strictly necessarily true, but the code to avoid it will make both you and @NeighborhoodButcher deeply nauseous.
In performance terms, the problem C++ has is that the “TEMPLATE ALL THE THINGS!” approach reduces code locality in anything beyond trivial examples. We don't have an infinite amount of L1 Icache. The bigger the object code, the more likely you are to have cache problems, and I now usually prefer `-Os` to `-O3` as a standard optimisation flag (for both C and C++) exactly because it actually generates faster code because of this non-intuitive factor. Cache busting is a real downer.
-
Again, you only pay for exceptions if you actually throw one. The happy path - the path executed the vast majority of the time - is just as fast.
Actually that part depends on the exception handling mechanism used (which can depend on OS and architecture). IIRC, it's true in Win64, which uses static tables for exception handling, but there is an overhead in Win32 on x86, because exception handling there uses an execution-stack-based linked stack of handlers with the "top" pointer in thread-local storage (which means any function that needs to handle exceptions -- be it for containing a `catch` clause or for containing a local variable with a destructor -- must suffer the overhead of pushing a handler and popping it).
-
1981, remember. The size of strings and arrays was part of the type in the original version of Pascal (which was the standard in 1981), so a function that took strings of size 10 couldn't take strings of size 9 or 11. This is a crippling bug.
...which was already fixed in most Pascal compilers of the day (just not yet in the standard, because standards take a long time to catch up to the state of the art; see also C, C++, HTML, etc.) and hasn't been relevant for decades.

in the meantime C didn't have those problems, and "Whoops you can stomp all over memory" is less of a blocker to adoption, particularly back then, when it was practically a feature.
Yeah, people didn't get it back then. (Well, some people didn't. Tony Hoare warned us in his Turing Award speech of exactly what would happen if we let stupid things like the C language happen, but we didn't listen.) The Morris Worm should have cured us of that idiocy. Unfortunately, it did not.

You mean like Pascal?
That's got to be some of the most ridiculously contrived Pascal code I've ever seen. First, no one ever in the history of ever uses a string-pointer type in Pascal, because they don't need to. String is a true primitive type (unlike in C) that doesn't need the user to manage it with pointers. Second, your example is assigning to an uninitialized variable, and the compiler does generate a warning for that. It is explicitly not "a valid pointer variable" for the purposes of this discussion.

But actually yes, I do mean like Pascal. (Read up on the `file` keyword in classic Pascal to see how file IO was handled early on. It worked pretty well.) I also mean like modern OO languages, all of which seem to have converged upon the Stream pattern as the appropriate way to handle IO, all of which have destructors/finalizers/whatever-you-call-it-in-your-language that will automagically clean up the underlying file handle appropriately.

Either you propose that allocated objects should carry type information that automagically associates them with their proper deallocation function
Yup. That works.

which is incompatible with the requirements for programming bare metal

...and that should tell you something. It seems like you can have code that works, or you can have "bare metal", but not both. (Yes, I'm serious. Again, see the unending stream of monthly security patches for bare-metal C/C++ code. Over a quarter-century since the Morris Worm and we still haven't solved buffer overruns, because it can't be solved without abandoning C.)

Okay, I lied, two lines:
qsort [] = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)

I didn't account for the first line, which handles the case of an empty list.
Some simple Googling turns up this, which claims that this implementation is 1000x slower than a true quicksort. This is actually an incorrect claim. It's 1000x slower for the sample size that whoever ran the test tested it on, because what's actually true is that this is in a slower asymptotic performance class than a true quicksort (i.e. it's at least O(n^2 log n) rather than O(n log n)). The author goes on to write an implementation that, according to him at least, successfully deals with the problems inherent in this implementation.

It's much larger and more complicated than the standard C quicksort we all learned in CS 101. Not surprising since, as I pointed out, the real problem is Haskell's immutability, which Haskellites proudly claim is a feature, not a bug. Which it is right up until it isn't.
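For contrast, here is a minimal sketch of the mutating, in-place quicksort being alluded to (simple last-element pivot, Lomuto partition, not a tuned library sort):

```cpp
#include <utility>
#include <vector>

// In-place quicksort: partitions by swapping within the array itself,
// which is exactly the mutation the naive Haskell version can't do.
void quicksort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[hi];                 // simple last-element pivot
    int i = lo;
    for (int j = lo; j < hi; ++j)
        if (a[j] < pivot)
            std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);            // pivot into its final position
    quicksort(a, lo, i - 1);
    quicksort(a, i + 1, hi);
}
```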
Not knowing about RAII or unique_ptr is roughly equivalent to not really understanding what structs are when writing C; it's kind of a big deal.
Yeah. You need ugly hacks like RAII in C++ because the language is broken and it doesn't have a proper `try/finally` construct. Which is interesting, because RAII, in its entirety, is built upon a `try/finally` construct, but it's not exposed to the C++ developer, which means that other uses of `try/finally` that don't involve freeing memory turn into ugly workarounds. It's the very definition of "abstraction inversion."

Don't believe me? Try translating this simple Object Pascal code into C++ without having to create any extraneous classes for the purpose of making RAII work:
foo.DisableUpdates();
try
  LoadDataInto(foo);
finally
  foo.EnableUpdates();
end;
This is an extremely common pattern you'd run into in real-world code all the time, particularly GUI-related code where `foo` is some sort of list or tree control and you want to load a large amount of data into it without all your attached events going off a few thousand times. And you can't do it in C++, at least not in any way that's anywhere near this simple and readable.
-
Don't believe me? Try translating this simple Object Pascal code into C++ without having to create any extraneous classes for the purpose of making RAII work:
[code]
foo.DisableUpdates();
try
LoadDataInto(foo);
finally
foo.EnableUpdates();
end;
[/code]

This is an extremely common pattern you'd run into in real-world code all the time, particularly GUI-related code where `foo` is some sort of list or tree control and you want to load a large amount of data into it without all your attached events going off a few thousand times. And you can't do it in C++, at least not in any way that's anywhere near this simple and readable.

It would look like this in C++ (to exploit the implicit `finally`):
[code]
try
{
FooDisabler foodisable(foo);
foo.LoadDataFromSomewhere(somewhere);
}
catch( ... )
{
// Explode
}
[/code]

See, what you're saying is that explicit-call lock/unlock (enable/disable, open/close, etc.) items require a `finally` construct, which I think everyone here actually knows.

Or, indeed, you can just put the `DisableUpdates`/`EnableUpdates` calls inside `Foo::LoadDataFromSomewhere()`, where they belong.

So you're now saying that a `try`/`finally` construct is an enabler for poor code design. Is that really what you were driving at?
-
One of the better code-structure inventions of recent years has been the `using`/try-with-resources of C# and Java. (Though in fact it's just a thing that's been in the likes of Lisp for utterly ages. Even the slowest donkey will eventually move, I guess…) Now, I accept that RAII has the same effect, but it has the problem of concealing the fact that a release is happening; the programmer is too readily unaware of the places in the code where significant computation might take place. In addition, the relationship with failures during resource destruction is very uncomfortable, and yet it is a necessary part of life; shit really can happen, and if you take special steps to prevent it you lose many of the benefits.

C doesn't pretend it is handling any of this stuff, of course. C's almost entirely built on the principle of giving you enough rope to hang yourself with, and not putting warning labels on things. (Except on `gets()`. Friends don't let friends use `gets()`.)
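The warning label is deserved: `gets()` cannot be told how big the destination buffer is, which is why C11 removed it outright. A hedged sketch of the usual `fgets()` replacement (the `read_line` helper name is mine):

```cpp
#include <cstdio>
#include <cstring>

// Reads one line into buf, never writing more than size bytes, and strips
// the trailing newline. gets() would happily write past the end of buf on
// a long input line; fgets() stops after size - 1 characters.
bool read_line(char* buf, std::size_t size, std::FILE* in) {
    if (!std::fgets(buf, static_cast<int>(size), in)) return false;
    buf[std::strcspn(buf, "\n")] = '\0';  // strcspn finds the '\n' or the end
    return true;
}
```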
-
It would look like this in C++ (to exploit the implicit `finally`):

```cpp
try
{
    FooDisabler foodisable(foo);
    foo.LoadDataFromSomewhere(somewhere);
}
catch( ... )
{
    // Explode
}
```

No, it would look like this:

```cpp
{
    FooDisabler foodisable(foo);
    foo.LoadDataFromSomewhere(somewhere);
}
```

plus you'd have to have that auxiliary class defined somewhere.
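For what it's worth, the auxiliary class is about ten lines. A sketch, assuming the interface from the Pascal example (the `DisableUpdates`/`EnableUpdates` names come from the thread; the `Foo` stand-in and the guard itself are mine):

```cpp
// Stand-in for a real control with update suppression.
struct Foo {
    int disable_count = 0;
    void DisableUpdates() { ++disable_count; }
    void EnableUpdates() { --disable_count; }
    void LoadDataFromSomewhere(const char* /*somewhere*/) {}
};

// The RAII guard: the constructor plays the role of the code before `try`,
// the destructor the role of the `finally` block. It runs on normal scope
// exit and during exception unwinding alike.
class FooDisabler {
public:
    explicit FooDisabler(Foo& f) : foo_(f) { foo_.DisableUpdates(); }
    ~FooDisabler() { foo_.EnableUpdates(); }
    FooDisabler(const FooDisabler&) = delete;
    FooDisabler& operator=(const FooDisabler&) = delete;
private:
    Foo& foo_;
};
```

With that in place, `{ FooDisabler g(foo); foo.LoadDataFromSomewhere("somewhere"); }` re-enables updates even if the load throws.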
-
We know you fail to see the issue, since you're a C programmer. The issue is - in C you have to do this manually and remember to do it every time. In C++, you just don't do it and let the compiler do the work. In C++, if it compiles, that's enough proof it works (assuming working standard library, but that is a rather safe assumption).
Now - do I need to use `delete` or `delete[]` on this pointer...? Oh - hang on, I used placement `new` - I don't need to do anything with it... Now that other pointer from legacy code that was `malloc()`'d, on the other hand...

So many bullets, so few feet...
-
@LaoC said:
You mean like Pascal? I had forgotten what `malloc` is called in Pascal, and the first tutorial I googled used code pretty much like that. Of course you can just insert a `q := nil` to silence the compiler and simulate the more common case that someone passes you a NULL by accident. `freemem` doesn't care.

That's got to be some of the most ridiculously contrived Pascal code I've ever seen. First, no one ever in the history of ever uses a string-pointer type in Pascal, because they don't need to. String is a true primitive type (unlike in C) that doesn't need the user to manage it with pointers. Second, your example is assigning to an uninitialized variable, and the compiler does generate a warning for that. It is explicitly not "a valid pointer variable" for the purposes of this discussion.

@LaoC said:
Either you propose that allocated objects should carry type information that automagically associates them with their proper deallocation function

Yup. That works.

@LaoC said:
Don't pretend C didn't *work*. It's just *hard* to make it safe. But that should settle the question of system programming. Of course there is a good case to be made for writing everything but a pretty thin layer over the hardware in something higher-level, but you really don't want to set up MMU tables or do ACPI shit in $HLL

which is incompatible with the requirements for programming bare metal

...and that should tell you something. It seems like you can have code that *works*, or you can have "bare metal", but not both.
-
I don't give a fuck what people think. I'd still like to learn C.
-
Either you propose that allocated objects should carry type information that automagically associates them with their proper deallocation function, which is incompatible with the requirements for programming bare metal
Why is std::unique_ptr<> incompatible with the requirements for programming bare metal?
Not knowing about RAII or unique_ptr is roughly equivalent to not really understanding what structs are when writing C
QFFT. Educate yourself, @ronin, before you claim that C++ cannot ensure that the correct deallocation function is called without any overhead.
So Donald Knuth, Theo DeRaadt, Linus Torvalds and Bertrand Meyer are all wrong to distrust C++ and you are right, then?
So the opinion of some authorities is your only argument, then?
C++ has worse spatial locality than C. In fact so does anything object-oriented.
That's just a blatant lie. Unless you willingly write code that calls virtual functions and allocates everything on the heap in random memory locations.
The claim that C++ can replace it for ALL THE THINGS has to have some foundations.
No, it's fucking obvious: Every C program can be rewritten in C++. C++ doesn't force you to use its more expensive features, you know?
You need ugly hacks like RAII in C++
RAII is anything but an ugly hack.
One of the better code-structure inventions of recent years has been the using/try-with-resources of C# and Java.
...which is super ugly to use when you need a factory function for the resource (at least in Java, I don't use C#).
No, it would look like this:
{
FooDisabler foodisable(foo);
foo.LoadDataFromSomewhere(somewhere);
}

plus you'd have to have that auxiliary class defined somewhere.
Generic RAII helper classes have been proposed for the standard library (`scope_exit` and friends, which ended up in the Library Fundamentals TS rather than C++17 proper), so you won't even need to define the auxiliary class yourself.
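For the record, such a generic guard is only a few lines with lambdas. This sketch borrows the `scope_exit` name from the Library Fundamentals TS proposal (the implementation itself is mine) and removes the need for a bespoke `FooDisabler`:

```cpp
#include <utility>

// Runs an arbitrary callable when the enclosing scope ends -- an ad-hoc
// `finally` block. Works in C++11 and later.
template <typename F>
class scope_exit {
public:
    explicit scope_exit(F f) : f_(std::move(f)) {}
    scope_exit(scope_exit&& other) : f_(std::move(other.f_)), active_(other.active_) {
        other.active_ = false;  // the moved-from guard must not fire
    }
    ~scope_exit() { if (active_) f_(); }
    void release() { active_ = false; }  // cancel the pending cleanup
    scope_exit(const scope_exit&) = delete;
    scope_exit& operator=(const scope_exit&) = delete;
private:
    F f_;
    bool active_ = true;
};

template <typename F>
scope_exit<F> make_scope_exit(F f) { return scope_exit<F>(std::move(f)); }
```

The disable/enable pair then becomes `foo.DisableUpdates(); auto g = make_scope_exit([&]{ foo.EnableUpdates(); });` with no named class per resource.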
Now - do I need to use delete or delete[] on this pointer...?
If you're asking yourself questions like that, you're still using C++98. Grab a tutorial, learn C++11.
-
which is super ugly to use when you need a factory function for the resource (at least in Java, I don't use C#)
The factory pattern is nowhere near as prevalent in C# as it is in Java
-
The factory pattern is nowhere near as prevalent in C# as it is in Java
But you still write factory functions to construct complex objects, right? To reduce code duplication?
-
Factory functions exist, sure, but IME only where they're actually useful
-
IME only where they're actually useful
Same in the Java code base I'm working on. ;)
-
...which is super ugly to use when you need a factory function for the resource
This is ugly?
[code]
try (Foo foo = bar.gimmeAFoo()) {
    // ...
}
[/code]

I guess everyone has different standards of beauty. (Some people choose shit names for things, but that's orthogonal.)
-
@PJH said:
Now - do I need to use delete or delete[] on this pointer...?
If you're asking yourself questions like that, you're still using C++98. Grab a tutorial, learn C++11.

You're going to have to be a bit more explicit on what I should be looking for.
`google("C++11 operator delete array")` didn't turn up much to enlighten me... Even the C++ FAQ fails to mention a difference between '98 and '11: https://isocpp.org/wiki/faq/freestore-mgmt#delete-array
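What "learn C++11" presumably means here is not some new delete operator but that you stop writing `delete` at all: the smart-pointer type records which deallocation applies. A sketch (the malloc'd "legacy" pointer is a made-up example):

```cpp
#include <cstdlib>
#include <memory>

int demo() {
    // Single object: the deleter baked into the type calls `delete`.
    std::unique_ptr<int> one(new int(42));

    // Array: unique_ptr<T[]> calls `delete[]` -- you can no longer mix them up.
    std::unique_ptr<int[]> many(new int[10]());  // value-initialized to zero
    many[3] = 7;

    // Legacy malloc'd memory gets a deleter that calls free() instead.
    std::unique_ptr<int, decltype(&std::free)>
        legacy(static_cast<int*>(std::malloc(sizeof(int))), &std::free);
    *legacy = 1;

    // All three are released correctly, automatically, when demo() returns.
    return *one + many[3] + *legacy;
}
```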
-
@NeighborhoodButcher said:
We know you fail to see the issue, since you're a C programmer. The issue is - in C you have to do this manually and remember to do it every time. In C++, you just don't do it and let the compiler do the work. In C++, if it compiles, that's enough proof it works (assuming working standard library, but that is a rather safe assumption).
Now - do I need to use `delete` or `delete[]` on this pointer...? Oh - hang on, I used placement `new` - I don't need to do anything with it... Now that other pointer from legacy code that was `malloc()`'d, on the other hand...

So many bullets, so few feet...
The standard "shoot yourself in the foot" joke says that C++ makes it hard to do, but when you do it takes your whole leg off.
-
Just don't use "raw" pointers in the first place. unique_ptr and shared_ptr will do the right thing for you.
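A sketch of the "right thing" in the shared case: `shared_ptr` reference-counts, so the object is destroyed exactly once, when the last owner goes away (the `Tracked` type is mine, just to make the destructor observable):

```cpp
#include <memory>

// A type whose destructor bumps a counter, so we can watch ownership end.
struct Tracked {
    int* destroyed;
    explicit Tracked(int* flag) : destroyed(flag) {}
    ~Tracked() { ++*destroyed; }
};

int count_destructions() {
    int destroyed = 0;
    {
        std::shared_ptr<Tracked> a = std::make_shared<Tracked>(&destroyed);
        {
            std::shared_ptr<Tracked> b = a;  // refcount 2; no copy of the object
        }                                    // b gone, refcount 1, nothing destroyed
    }                                        // a gone, refcount 0, destructor runs once
    return destroyed;
}
```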
-
There were a few edge cases where what you wrote won't do what you expect in Java. I have to admit that I don't remember them, though, and I'm too lazy to write a few tests.
-
There were a few edge cases where what you wrote won't do what you expect in Java.
O RLY? LNK PLZ! :)
-
you're still using C++98. Grab a tutorial, learn C++11.
That will just make me sad, though.