@Bulb said in Help Bites:
You need to think about [whatever]
But that's the thing: I don't want to have to think about things I don't have any clue about!
(this is getting way beyond a simple "help" question and turning into a rant, but since I'm the one who asked, I don't feel any guilt about derailing it)
Yes, yes, you could probably make an argument that if I'm using a library to do X then I should learn how to do X, and if that means setting some special parameters, I should learn about those too. And you have a point: blindly calling into a library without having a clue as to what it does is not a good idea, to put it mildly. But the point of using a library rather than reinventing the wheel is to avoid having to think about all those nitty-gritty details. And now we're getting into the "leaky abstraction" thing -- yes, that always happens, as this example shows, but that still doesn't make it a good thing when it does, quite the contrary.
This is one of the things I dislike about a lot of neural network stuff. To get it to work you have to set values for this or that (number of neurons, layers, activation functions and what-not), and as someone tackling whatever domain-specific problem I'm looking at, I don't have the slightest clue what those should be.
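To make the complaint concrete, here's a toy sketch (plain Python, every number made up on the spot) of the kind of knobs a NN forces on you before it will even run. None of the choices below are justified by anything, which is exactly the point:

```python
import math
import random

# Hypothetical hyperparameters -- the "magic numbers" you have to pick.
# Why 8 and 4 neurons? Why two hidden layers? Why tanh and not ReLU?
# No domain-specific reason; they are just choices someone has to make.
HIDDEN_LAYERS = [8, 4]
ACTIVATION = math.tanh

def init_layer(n_in, n_out, rng):
    """Random weights and zero biases for one fully connected layer."""
    weights = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

def forward(x, layers):
    """Push input x through each layer: activation(W @ x + b)."""
    for weights, biases in layers:
        x = [ACTIVATION(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

rng = random.Random(0)
sizes = [3] + HIDDEN_LAYERS + [1]   # 3 inputs, 1 output -- also arbitrary
layers = [init_layer(a, b, rng) for a, b in zip(sizes, sizes[1:])]
print(forward([0.5, -0.2, 0.1], layers))  # one number, meaningless until trained
```

And that's before you even get to learning rate, batch size, epochs, weight initialization scheme, regularization... each one another unexplained dial.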
I've read many, many papers in my domain where people show how to get some result with a NN, and there's always this part where they describe the setup of the thing, and there is never any justification as to why they used `n` rather than `n+1` for this parameter -- at best it's pure cargo cult: "we're reusing the values used by Smith & Smith as it worked for them."
Anyway. Moving on.