Light sorting
-
fDots is an array containing the dot product of the camera's forward vector and the displacement from the light to the camera. But that isn't important. Here's how they "sorted" the lights.

    // sort them
    for ( int i = 0; i < nActiveDepthShadowCount - 1; i++ )
    {
        for ( int j = 0; j < nActiveDepthShadowCount - i - 1; j++ )
        {
            if ( fDots[ j ] < fDots[ j + 1 ] )
            {
                ClientShadowHandle_t nTemp = pActiveDepthShadows[ j ];
                pActiveDepthShadows[ j ] = pActiveDepthShadows[ j + 1 ];
                pActiveDepthShadows[ j + 1 ] = nTemp;
            }
        }
    }
-
I know a bubblesort when I see one…
-
Doesn't bubblesort usually work?
-
Yes; doesn't that piece of code?
*suddenly sees the issue*
Oh…
-
-
I'm not seeing the bug.
-
The array being checked by the if test isn't the same as the array whose elements are swapped.
-
Oh geeeeez. Now I feel dumb.
-
If it makes you feel any better, I didn't see it first time either
-
Neither did Valve, apparently, as the bad sorting's effects can be seen in the game with this code on the first level.
-
So uh what programming language are they using that doesn't have a quicksort?
I mean it's wrong, but it's wrong in this hilariously triple-wrong way where it's wrong both because it's wrong and because it exists at all.
-
I got that SQL one with the missing @ on the first try, so. I'm 50/50.
-
Aren't all the cool kids using timSort now?
-
-
The array being checked by the if test isn't the same as the array whose elements are swapped
The name of the variable being used to reference the array differs. There is insufficient information to determine if they point to the same actual array!
Bronze this dickweed -- bz
-
float and ClientShadowHandle_t aren't the same type, though. I hope.
-
Well, at least they didn't use Bogosort.
-
float and ClientShadowHandle_t aren't the same type
I don't see where "float" appears..... I tend to agree with the WTF, my primary point is that assumptions are being made in the posts, leading to new WTFs
-
-
The Source Engine uses Systems Hungarian notation.
-
I am truly bogo'd.
I wonder how we could involve Ackermann's function and Graham's number in, perhaps, an Ackermann-Graham-bogo-bogo-sort. The goal would be to take all the remaining time in the universe while sorting a two-entry list.
-
See, this is what happens when you ask people to describe sorting algorithms in interviews. They go ahead and use that knowledge later.
-
But what if you write code for embedded systems that don't have a sort function in their standard library?
-
The goal would be to take all the remaining time in the universe while sorting a two-entry list.
    int currentSeed = Universe.getSeed();
    while (!universe_exploded()) {
        swap(&list[0], &list[1]);
    }
    // since the universe has now exploded, both elements are lost in the cold, empty void of space
    // therefore they're equal to each other and sorted
    rebuild_universe(currentSeed); // restore initial conditions and proceed with a sorted list
-
You could save all the numbers as files on the filesystem and then get them back sorted using the output of ls!
-
For nearly sorted data (and/or data with very few elements), a bubblesort isn't too bad, which is why it pops up every now and then in rendering/game code. (Nearly-sorted data is also where a lot of quicksort implementations have their worst case.)
Usual caveats apply, though: profile your code to make sure you're fixing something that needs fixing, etc.
Filed under: Also, make sure your damn code actually works.
-
For nearly sorted data [...], a bubblesort isn't too bad
Isn't an insertion sort pretty much always faster in such a case?
-
Isn't an insertion sort pretty much always faster in such a case?
The usual recommendations for starting points for sorting are insertion sort when you've got sorted or almost-sorted data, and mergesort or quicksort if you haven't. (Merge sort is stable, quicksort is faster. Both subject to a whole bunch of provisos.) Timsort seems reasonable provided it is implemented right, but it seems excessively tricky.
Oh, and if you've got the type of data that supports it, bin sort is the best. Rare case though. :D
-
There is insufficient information to determine if they point to the same actual array!
Have a .
-
Isn't an insertion sort pretty much always faster in such a case?
Yeah, true (for some reason I tend to think of those two methods almost interchangeably, maybe not a good thing).
There's a different feature with the bubble sort, though: for a given number of elements it always performs the same number of comparisons (regardless of the input values). So, there's no (unlikely) worst case that will make your frame-time suck at random (and very inconvenient times). So, it might be slow, but at least it's rather predictable.
(FWIW, I'm not a huge fan of using it regardless. The standard sort is typically good enough, and IIRC somewhat smart about choosing appropriate methods at different stages of the sort. Also, for on-GPU sorting, radix sorts are still king for this kind of application.)
-
For nearly sorted data (and/or data with very few elements), a bubblesort isn't too bad
AFAIK insertion sort is always at least as good, often better, and is simpler to code besides; there is no excuse for bubblesort ever.
-
AFAIK insertion sort is always at least as good, often better, and is simpler to code besides; there is no excuse for bubblesort ever.
Quickly reviewing both sorts, it seems to me that bubble sort should behave better w.r.t. caches, but YMMV (and always verify these things with a profiler etc.). Also, it seems easier to parallelize/vectorize.
Disclaimer: I'm not a huge fan of this "optimization" for nearly sorted data either way (and would only attempt it as a last resort, and with a profiler etc).
-
it seems to me that bubble sort should behave better w.r.t. caches
Can't see why that should be the case.
The nice thing about insertion sort that distinguishes it from most of its O(n²) rivals is that most of the data movement steps end up being simple copies rather than full swaps.
One particularly effective application for an isort is to use it in conjunction with a deliberately incomplete quicksort. For any quicksort call where the sublist to be sorted has p or fewer elements, where p * sizeof(element) comes to about the size of a cache line, you simply do nothing; when a top-level quicksort exits, the list will therefore be in a roughly sorted order, with no element more than about one cache line away from where it ought to be. Then you run a single pass of insertion sort over the whole lot. This will complete in O(p*n) time, and saves loads of function call and loop control overhead compared to sorting all the small partitions separately.
-
Merge sort is stable, quicksort is faster.
Not in my experience. Every time I've tried, merge sort was faster than quicksort. But quicksort needs only O(log(n)) extra memory, while merge sort needs O(n). That's why it's the darling of every CS teacher.
-
Can't see why that should be the case.
I'll admit I only looked at it briefly. What stuck out to me was that with the isort, when element n is out of order, you start comparing element n against the first element, the second, etc. If the data doesn't fit into the cache, this means you will potentially refetch a lot of stuff for each mis-sorted element during the insertion pass. This happens even if the data is almost fully sorted.
With bubble sort you'd be swapping with a close neighbour, which has a much bigger chance of being resident in the cache.
But YMMV. I imagine that isort might end up being better if few elements have to move far, and bsort would win when (potentially many) elements would move short distances (or something like that). And things will look different if the data completely fits into the cache or even a cache line (e.g., as with your example for dealing with the last level of a qsort).
-
That's why it's the darling of every CS teacher.
If you want to mess with clueless CS teachers, ask them about radix sort. Technically it's O(n) operations and can be implemented with O(1) auxiliary storage (IIRC).
It's also a good way to demonstrate that algorithmic complexity doesn't always tell the whole story... even in sequential settings.
-
with the isort you will, when element n is out of order, start comparing element n against the first element, the second, etc.
Generally you make a temporary copy of element n, then start shuffling elements n-1, n-2, n-3 and so on up by one place until you find the final resting place for n. If the list is already almost sorted, none of those passes need to reach very far back down the list; locality is quite good. Only when the list starts in reverse sorted order does insertion sort perform as badly as bubblesort.
These animated visualizations are a good guide.
-
Didn't think of going backwards. Blame the dummy animation on Wikipedia. Thanks for pointing that out. ;-)
Still, for now, I stick to my statement about parallelization/vectorization...
-
-
Every time I've tried, merge sort was faster than quicksort.
It's critically dependent on a vast number of things. Both are O(n log n) sorts so they're both pretty fast on big datasets, and the extra memory is not usually a problem. Or if it is, you'll be using a different sort anyway that's optimised for the case where you want to avoid dragging the data all into memory; typically only database engines bother with those.
-
Also, the usual reason for recommending merge sort is that it is stable. That's an ultra-useful property to have in practice, as it means you can do things like multi-field sorting without having to use complex comparators.
-
Merge sort isn't used here. They use insertion sort and SymMerge, whatever that is.
-
Merge sort isn't used here. They use insertion sort and SymMerge, whatever that is.
Excellent comments on those functions. They cite the literature you need to read to understand to get their design choices.
They seem to be trying to limit the amount of extra memory and stack space used as well as keeping things stable.
-
Timsort seems reasonable provided it is implemented right but seems excessively tricky.
What timsort is good at is making extra computations to shave off a few (assumed-to-be expensive) comparisons.
Basically, if your compareTo method contains Thread.sleep(1000), then Timsort will win on the constant factors.
-
if your compareTo method contains Thread.sleep(1000)
If your comparison function has that in there, you're a Bad Person. :D
-
If your comparison function has that in there, you're a Bad Person.
    int IComparable<Evil>.CompareTo(Evil obj)
    {
        if (Debugger.IsAttached)
        {
            Thread.sleep(1000);
        }
        return level - obj.level;
    }
-
-
It's just some type with a level property.
-
Implied sorting by evil level. It's C#, so there's an implied this. in front of the first level.
-