Weird way to do min/max

This is a very weird way to find the min/max. Assuming that is, in fact, what it does.
// Note that this function is only looking at waveform as 'ints'
// even though they're stored as floats
void CGeneralFunctions::find_min_and_max(const vector<float> &waveform, float *min, float *max)
{
    int len = waveform.size();
    int comparisons = (len-1)/2;
    if(len==0)
    {
        *min = *max = 0;
        return;
    }
    *min = *max = (int)waveform[0];
    if(len==1)
        return;
    for(int pos=1, counter=0; counter<comparisons; counter++, pos+=2)
    {
        // Compare the two adjacent values in the array. The smaller value will be
        // compared to the min value and the larger value will compared to the max
        // value.
        if((int)waveform[pos]<=(int)waveform[pos+1])
        {
            if((int)waveform[pos]<*min)
                *min = (int)waveform[pos];
            if((int)waveform[pos+1]>*max)
                *max = (int)waveform[pos+1];
        }
        else
        {
            if((int)waveform[pos+1]<*min)
                *min = (int)waveform[pos+1];
            if((int)waveform[pos]>*max)
                *max = (int)waveform[pos];
        }
    }
    // If there's one item remaining in the array
    if((len-1)%2!=0)
    {
        if((int)waveform[len-1]<*min)
            *min = (int)waveform[len-1];
        else if((int)waveform[len-1]>*max)
            *max = (int)waveform[len-1];
    }
}

It seems reasonable enough, but does assume a stereo wave. Quite confusingly written though; the author must have been absent the day they did comments.

@Eric Shinn said:
void CGeneralFunctions::find_min_and_max(const vector<float> &waveform, float *min, float *max)
OT: That looks like C++; why are they using pointers instead of references? Or shouldn't I ask?

@PJH said:
OT: That looks like C++; why are they using pointers instead of references? Or shouldn't I ask?
Some authors discourage use of non-const references, because from a function call it's not immediately obvious that the value is being changed.

@Spectre said:
because from a function call it's not immediately obvious that the value is being changed.
Isn't that what documentation and in/out specifications are for... Ah, I see.

@Eric Shinn said:
// Note that this function is only looking at waveform as 'ints'
// even though they're stored as floats
IEEE float format allows you to do this actually, assuming you have a compatibly sized int type.

It's not a bad algorithm for this but it's really no better than the obvious method. Something like this:
min = max = waveform[0];
for (int pos = 1; pos < len; ++pos)
    if (waveform[pos] > max)
        max = waveform[pos];
    else if (waveform[pos] < min)
        min = waveform[pos];
has a worst case of roughly 2n comparisons and a best case of n - 1 comparisons. The OP's has a worst (and best) case of roughly 1.5n comparisons. Hardly worth the effort.

@bstorer said:
It's not a bad algorithm for this but it's really no better than the obvious method. Something like this:
<clip>
has a worst case of roughly 2n comparisons and a best case of n - 1 comparisons. The OP's has a worst (and best) case of roughly 1.5n comparisons. Hardly worth the effort.
Aha, so that's why it was done like that. It seems all the more useless in the context that it is only called a few times when the program is run, and the values are remembered.

Well I guess it's not a bad algorithm, as in it'll still work (I think), but it's really, really stupid to do it that way when you could do it much more clearly the way you did it in less than half the number of lines (which is really the only worthwhile way of doing it unless you've got some kind of strange special requirements).

@burntfuse said:
Well I guess it's not a bad algorithm, as in it'll still work (I think), but it's really, really stupid to do it that way when you could do it much more clearly the way you did it in less than half the number of lines (which is really the only worthwhile way of doing it unless you've got some kind of strange special requirements).
This is why the rules of optimization are
Rule 1: Don't do it.
Rule 2 (for experts only): Don't do it yet.
I'm sure the author thought he was doing something clever and worthwhile, but it just obfuscates with very little gain (and potentially, loss!).

I like pie.

@Eric Shinn said:
@bstorer said:
It's not a bad algorithm for this but it's really no better than the obvious method. Something like this:
<clip>
has a worst case of roughly 2n comparisons and a best case of n - 1 comparisons. The OP's has a worst (and best) case of roughly 1.5n comparisons. Hardly worth the effort.
Aha, so that's why it was done like that. It seems all the more useless in the context that it is only called a few times when the program is run, and the values are remembered.
Ah, but you missed where the author failed miserably. He spent all this time optimizing the algorithm, but he is referencing *min and *max directly instead of locals. I know what you're thinking: "Oh, the compiler caches these so it doesn't matter." Ah, but it doesn't, because if someone called the function with min and max pointing to the same memory, then caching would not work because the output produced would be different due to overlapping writes. Obviously no one in their right mind would do this and the results would be wrong, but the compiler doesn't know that. The compiler instead knows that it needs to maintain correct memory access behavior, so it forces a write to these memory addresses every time, and a subsequent read on every comparison.
Here is a simplification:
int set_and_add(int *a, int *b)
{
    *a = 5;
    *b = 6;
    return *a + *b;
}
This would normally return 11. Unless a == b, in which case it returns 12. And this is the correct behavior. Since the compiler supports this behavior, *a and *b are written and then *a is read back before adding. Similarly with the WTF's min and max.
The C99 keyword "restrict" solves this problem, for those of you trying this at home.


@bstorer said:
@burntfuse said:
Well I guess it's not a bad algorithm, as in it'll still work (I think), but it's really, really stupid to do it that way when you could do it much more clearly the way you did it in less than half the number of lines (which is really the only worthwhile way of doing it unless you've got some kind of strange special requirements).
This is why the rules of optimization are
Rule 1: Don't do it.
Rule 2 (for experts only): Don't do it yet.
I'm sure the author thought he was doing something clever and worthwhile, but it just obfuscates with very little gain (and potentially, loss!).
And if you ever do it - comment it clearly so the poor SOB that has to maintain your code has a clue!

@bstorer said:
It's not a bad algorithm for this but it's really no better than the obvious method. Something like this:
min = max = waveform[0];
for (int pos = 1; pos < len; ++pos)
    if (waveform[pos] > max)
        max = waveform[pos];
    else if (waveform[pos] < min)
        min = waveform[pos];
has a worst case of roughly 2n comparisons and a best case of n - 1 comparisons. The OP's has a worst (and best) case of roughly 1.5n comparisons. Hardly worth the effort.
I think it is more important to look at the branches than the arithmetic operations. (Disclaimer: I design processor fetch units, so I am biased.)
On a processor with a conditional move instruction, the obvious method above would be much more efficient as it would involve no hard-to-predict branches. Even without a conditional move instruction, the two data-dependent branches would eventually be easy to predict, as the new data value would almost always be between the min & max values if the list is long enough.
The "optimized" version likely has n/2 hardtopredict branches comparing the adjacent data values (unless, of course, the "waveform" is slowly changing relative to the sampling rate) and would have to maintain state for more conditional branches in total. I expect it would benchmark slower.

In addition to the unpredictable branches, the memory is not being accessed sequentially. Some processors (Pentium 4, for example) can detect sequential memory access and prefetch the memory automatically. I doubt there's much that can improve the speed, as this is doing very little computation for each memory access, so it is severely memory limited.
It does look like an optimisation attempt without any profiling.
Skizz

@Skizz said:
It does look like an optimisation attempt without any profiling.
Skizz
Yup. If they had profiled, they would have realized that they should have been doing the data filtering in a background thread, because that is why the app takes 45 sec - 1 min to start up.