Easier Than Fizz Buzz - Why Can't Programmers Print 100 to 1? (article)
-
The wooden table
-
Just gave this to my two juniors.
Junior 1, about a year in. Talented but lazy. Done in two minutes.
for (var i = 0; i > -100; i--) { console.log(100 + i); }
If he can pick the stranger of two solutions, he will. But he pretty reliably gets the job done.
Junior 2 is The Intern... sigh.
After 7 minutes:
...
I tried to set it to 100 inside the loop, but it doesn't even enter the loop because i >= 1
...Oh, bother. BUT, after 11 minutes
Done, I thought it was harder
for (var i=0; i <= 99; i++) { console.log(100-i); }
So.... progress?
-
I was pointing out that a logical AND has the same properties as multiplication. At least for a single digit. I have no idea how, or even whether, it extends to multiple digits.
2 x 2 = 2?
2 x 3 = 2?
1 x 3 = 1?
1 x 2 = 0?
1 x 1 = 1!
1 x 0 = 0!
2 x 0 = 0!
3 x 0 = 0!
It works for all values of 1 times 1, and n times 0!
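For what it's worth, the single-bit claim is easy to check mechanically (a quick sketch of my own, not from the post above):

```javascript
// For single bits (0 or 1), bitwise AND and multiplication agree.
for (const a of [0, 1]) {
  for (const b of [0, 1]) {
    if ((a & b) !== a * b) throw new Error(`mismatch for ${a} AND ${b}`);
  }
}
console.log("all single-bit cases agree");

// It does not extend to multi-bit values, because AND works bit by bit:
console.log(2 & 2, "vs", 2 * 2); // 2 vs 4
console.log(1 & 2, "vs", 1 * 2); // 0 vs 2
```

So it holds exactly on {0, 1}, which is why AND is sometimes described as multiplication over GF(2).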
-
can't you reimplement parts of the API?
You'd be shown the door right after that line.
If the existing code is bullshit, replace it.
Do you always have the time? The resources? Hell, there's a couple megs of source I'd really love to reap, plough, and sow corn on it, but that would take about a year or so of major development. For which it's unlikely we'd be able to bill anyone.
So.... progress?
Well, the first 15 minutes of solving the problem is basically a measurement error. So yeah, I'd say he just barely managed to make it.
-
I had five minutes while uploading a site so:
for(var i = 0; i < 1; i++){ var prev = 0; while(true){ if(prev === 101) break; var rand = Math.floor(Math.random() * 101) + 1; if(rand === prev + 1){ console.log(prev); prev = rand; } } }
Oh, and 2 minutes were spent seeing what the hell this Firefox's WebIDE thing is... looks promising.
-
-
store a circular buffer of probably about 5 elements and calculate a moving average from those values
When I had a similar data smoothing issue to deal with, I did it by just remembering the previous value, then doing
value = weighting * new_value + (1 - weighting) * value
where
weighting
was a configurable number between 0.0 and 1.0 expressing responsiveness to new sensor readings.
This is a dirt-simple low-pass IIR filter; your solution with the average over a circular buffer is a dirt-simple low-pass FIR filter. Either should work reasonably well for sensor de-noising. The IIR is a better fit for a tiny micro because it uses less RAM.
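A minimal sketch of the two approaches side by side (function names are mine, purely illustrative):

```javascript
// Exponentially weighted IIR low-pass: keeps exactly one number of state.
function makeIIR(weighting) {
  let value = 0;
  return (newValue) => {
    value = weighting * newValue + (1 - weighting) * value;
    return value;
  };
}

// Moving-average FIR low-pass over a small circular buffer.
function makeFIR(size) {
  const buf = new Array(size).fill(0);
  let pos = 0;
  return (newValue) => {
    buf[pos] = newValue;
    pos = (pos + 1) % size;
    return buf.reduce((a, b) => a + b, 0) / size;
  };
}

// Feed both a step from 0 to 100 and watch them converge.
const iir = makeIIR(0.3);
const fir = makeFIR(5);
[100, 100, 100, 100, 100].forEach((v) => console.log(iir(v).toFixed(1), fir(v)));
```

Note the IIR keeps one number of state regardless of how much smoothing you want, while the FIR's RAM cost grows with the window size, hence the advantage on a tiny micro.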
-
There is a solution to this, just not in JavaScript.
Hint: it starts with a g and ends with an oto.
Filed under: Technically not a loop... statement
-
You're right... missed that!
-
There is a solution to this, just not in JavaScript.
Seems like a javascript solution should have...more anonymous function action...
for( i = 0; i < 101; ++i ){ var v; if( !v ){ v = new Array; v.push( function(y){ console.log(y); } ); } else{ v.push( function(y){ console.log(y); v.pop()(y-1); } ); } } v.pop()(100);
-
Now without the while loop:
for(var i = 0; i < 1; i++){ var while_loop = function(prev){ if(prev === 101) return; var rand = Math.floor(Math.random() * 101) + 1; if(rand === prev + 1){ console.log(prev); prev = rand; } while_loop(prev); }; while_loop(0); }
Funny, Firefox throws a "Too much recursion" error while Chrome chugs along.
-
Funny, Firefox throws a "Too much recursion" error while Chrome chugs along.
Firefox blows the stack at about 100-deep? Unless it's actually that Chrome detects and applies tail-call optimisation, and Firefox doesn't. Which would also be a WTF.
-
IDK, Chrome does fine and this shit runs close to 10k iterations.
-
Actually, it's random. Firefox sometimes throws the error and sometimes it doesn't... guess I'm pushing some limit there.
-
Firefox sometimes throws the error and sometimes it doesn't
:wtf:
-
Yeah, it's some recursion limit protection I'm hitting. With Chrome it happens too when I try to print 0-1000. Oh well.
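If anyone wants to sidestep the engine's recursion limit rather than probe it, the usual trick is a trampoline (a sketch of my own, not from the thread):

```javascript
// Trampoline: the function returns a continuation (thunk) instead of calling
// itself, so the stack never grows no matter how many "iterations" happen.
function trampoline(fn, ...args) {
  let result = fn(...args);
  while (typeof result === "function") {
    result = result();
  }
  return result;
}

function countdown(n) {
  if (n < 1) return null;
  console.log(n);
  return () => countdown(n - 1); // thunk, not a recursive call
}

trampoline(countdown, 100); // prints 100 down to 1 with a flat stack
```

Of course the trampoline's driver is itself a while loop, so it wouldn't pass the article's no-loop rule; it just shows why the recursive version blows the stack and this one doesn't.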
-
Firefox sometimes throws the error and sometimes it doesn't
Math.random()
There's your answer
-
var i=0; i <= 99; i++
For some reason, when people do <= instead of < inside loops like that, it strikes me as a warning flag. I guess because I fear they will offbyone a lot... at least in 0-based arrays where you have the count of elements in the array and go until < count.
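The idiom being described, for anyone following along (a trivial sketch, array contents made up):

```javascript
const items = ["a", "b", "c"];

// Idiomatic 0-based traversal: strictly less than the element count.
for (let i = 0; i < items.length; i++) {
  console.log(items[i]);
}

// With `i <= items.length` the last pass would read items[3],
// which is undefined: the classic off-by-one.
console.log(items[items.length]); // undefined
```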
-
I fear they will offbyone a lot...
That's a nice array you have there... It would be a shame if someone indexed... past its boundaries...
-
-
Why do you think I switched to Chrome? ;)
-
When I had a similar data smoothing issue to deal with, I did it by just remembering the previous value, then doing
value = weighting * new_value + (1 - weighting) * value
where weighting was a configurable number between 0.0 and 1.0 expressing responsiveness to new sensor readings.
This is a dirt-simple low-pass IIR filter; your solution with the average over a circular buffer is a dirt-simple low-pass FIR filter. Either should work reasonably well for sensor de-noising. The IIR is a better fit for a tiny micro because it uses less RAM.
That is elegantly simple. I probably should have suggested that. But...
that is what I came up with in seconds with a head cold.
-
I had five minutes while uploading a site so:
I suggest this piece of misery.
for(var i = 0; i < 100;) { var r = Math.floor(Math.random() * 101); if (r+i == 100) { console.log(r); i++; } }
-
when people do <= instead of < inside loops like that, it strikes me as a warning flag ... at least in 0-based arrays where you have the count of elements in the array and go until < count.
Agreed. Such a person has clearly never understood the relationship between pointers, indexes and arrays in C, and self-styled "programmers" uncomfortable with those things are not to be trusted.
-
you cannot write anything before "for(int i=0;" and you can't use two loops.
for(int i=0; i;); for(int i=100; i>0; --i) printf("%d\n", i);
First for statement exits immediately, has no loop step expression, requires no backward branches to implement, will almost certainly be optimized away entirely, and is therefore not a loop.
-
For which it's unlikely we'd be able to bill anyone
That's why you refactor as you go.
Come on, keep up, join us in the 21st century.
-
It works for all values of 1 times 1, and n times 0!
Perfect! My computer only uses 0's and 1's anyway.
-
That's why you refactor as you go.
- Start refactor. Slowly replacing one paradigm with another. Estimated time to completion is 5 years instead of 1, because developer is doing it as they have time.
- Developer moves on to new job after year 3.
- New guy gets hired, sees the half-done refactor with 2+ paradigms.
- Repeat.
-
Failure to pass on knowledge about work done.
You're saying the idea that people don't refactor over time is invalid because they won't do it right.
Well, leaving bullshit code is not doing it right, in a much worse way.
-
I'm more saying that business goals (new versions/products), personal goals (leaving for another job/project), and personal preferences (which refactor is best) trump long-term technical goals more often than not, especially when the technical goals do not align or are not justified by those three items.
Attempting a long-term technical goal within a business setting without considering the non-technical elements that could make it a waste of time is foolish.
Speaking to the third item (personal preference), I prefer code that is OK and consistent to code that is OK with some Excellent mixed in but inconsistent, because that inconsistency makes it harder to understand. Perhaps you disagree?
On the other hand, if something is already Bad and inconsistent, then, whether or not it's ever completed, I'd accept that refactoring certainly couldn't make things worse. Or could it?
-
You had n paradigms, you try to refactor, you have n+1 paradigms. That's what happens.
You don't refactor as you go, because a) it leaves your codebase wildly inconsistent, and b) there usually is no time to do things right. You're lucky if your application is modular enough to rewrite a part of it, but if you have a shower drain monster at hand, you either make an even bigger mess or accept the insanity and try to go along with it.
And again - it slows you down, and there's all too often no time to slow down.
-
And again - it slows you down, and there's all too often no time to slow down.
Refactoring is a form of maintenance. It's overhead. Yet if you don't do it, your ability to do the things that you do want becomes more and more constrained over time. Sometimes you've just got to pop the hood and fix the broken stuff anyway, and getting a head-start on it at a time you control is better than waiting for it to break when you really don't want it to…
My point is that you shouldn't promise that there will be no overheads. Life is not that kind.
-
Yeah, but there's a difference between "adding a week to a month of changes to go through the code again and clean it up" and "having a codebase so fucked up it would take a separate year-long project just for cleanup". The first you can explain to the client. The second, good luck getting paid for it.
-
Yeah, but there's a difference between "adding a week to a month of changes to go through the code again and clean it up" and "having a codebase so fucked up it would take a separate year-long project just for cleanup". The first you can explain to the client. The second, good luck getting paid for it.
Meh...whatever. I don't have an allergy to incrementally making stuff better as I go. I typically try to only touch the stuff that needs it. But sometimes that can cast a big net. I guess I'm fortunate that I can convince the powers that there's no way around the technical debt.
With multiple people having touched the code to begin with (some of whom are thankfully gone) there's plenty of different paradigms and ways of doing things already.
-
Well my project is downright fucking insane, but at least somewhat consistently so. For example, the DB abstraction layer has three sublayers which are totally redundant and require you to write three classes that consist mostly of
public string Stuff { get { return lowerLayerObject.Stuff; } set { lowerLayerObject.Stuff = value; } }
Now I can either "make it better" and bypass the whole thing in new objects I write - but woe be upon the dev who tries to make sense of why some objects go through the process and others don't - or just go along and write those three layers, keeping the code still as sucky as it was, but consistently sucky.
I generally go with the second option. We wish we could just fix all the shitty code, but by now it's too much of a behemoth, so we try to just not add to the mix.
-
I generally go with the second option.
In your case I probably would, too. I guess my project probably is a lot more modular. But having built up enough credibility to convince the customer that I need to do stuff is pretty awesome.
-
Some more WTF solutions, written in C#:
Trying to be pointlessly obtuse and use unusual checks/assignments
for (int i = 0; i.GetType() == typeof(int); ) { if (i == 0) { i = 100; } Console.WriteLine(i); if (i == 1) { break; } i += Convert.ToInt32(Math.Pow(-1, 3)); }
One liner, with a silly loop ending condition:
for (int i = 0; i % 100 > 0 || i == 0; Console.WriteLine(100 - i++)) ;
Fairly sure this one will work eventually; so far it's only printed 100 for me:
for (int i = 0, j = 0; i < 1000000000 && j < 1000000000; j++) { if (j == 999999999) { if (i % 10000000 == 0) { Console.WriteLine(100 - (i / 10000000)); } i++; j = 0; } }
-
for (int i=0; i<1; i++) {
System.out.println("100");
System.out.println("99");
System.out.println("98");
System.out.println("97");
System.out.println("96");
System.out.println("95");
System.out.println("94");
... snip ...
NOOOOOOOOOOOO!!!!!! You beat me to it!!!
Filed under: Late to the party
-
And I guess you could also do:
for (int i=0; i<1; i++) {} Enumerable.Range(0, 100).ToList().ForEach(i => Console.WriteLine(100 - i));
Yes, there is presumably a loop inside the
ForEach
, but it isn't a loop I'm writing.
-
Hmm, hold on, I could even do:
for (int i=0; i<1; i++) {} Console.WriteLine(string.Join(Environment.NewLine, Enumerable.Range(1, 100)));
-
What language is that?
In C#, the String.Join params are the other way around.
-
Fixed. Writing code in a text editor is harder than doing the same in an IDE.
-
Hmm, hold on, I could even do:
for (int i=0; i<1; i++) {} Console.WriteLine(string.Join(Environment.NewLine, Enumerable.Range(1, 100)));
.... want to do something about the fact that that doesn't solve the problem? that prints 1 to 100 not 100 to 1
-
that prints 1 to 100
for (int i=0; i<1; i++) {} Console.WriteLine(string.Join(Environment.NewLine, Enumerable.Range(1, 100).Reverse()));
-
that was the fix i was thinking of. ;-)
-
I don't think anyone has posted this fast solution yet.
for (int i = 0; i < 100; i++) { string old = string.Empty; if (File.Exists("temp.txt")) { old = File.ReadAllText("temp.txt"); } File.WriteAllText("temp.txt", (i + 1) + Environment.NewLine + old); } string f = File.ReadAllText("temp.txt"); File.Delete("temp.txt"); Console.Write(f);
-
Did really nobody think of
[code]
i=100; while(i) {log(i--)};
[/code]
resp.
[code]
for (i=100;i;) log(i--);
[/code]
-
@Keith said in Easier Than Fizz Buzz - Why Can't Programmers Print 100 to 1? (article):
FAIL!
It's an insightful contribution to the thread, sure, but 43 likes? Am I missing something here?
-
@PWolff Article says "cannot use two loops"
-
@Yamikuronue said in Easier Than Fizz Buzz - Why Can't Programmers Print 100 to 1? (article):
@PWolff Article says "cannot use two loops"
without two loops..... okay i can do that.....
(page up to the original to make sure i got the rules right)
okay let's do this!
(scroll down to find yami's post to quote)
oh.... well that takes all the fun out of it.....
@accalia said in Easier Than Fizz Buzz - Why Can't Programmers Print 100 to 1? (article):
for (var i = 0; i < 100; i += 1) { console.log(100-i); }