Easy 1,000 bucks



  • Solve P vs. NP for 1,000 dollars.

    The best part about this, as with most things in these "hire a coder" sites, is in the comments.



  • Hmm, that Millennium Prize money does seem tempting, but an extra $1k on top of it? I'm sold!



  • The response from Pierre de Fermat was kind of amusing, I guess.



  • The funny part for me is that you can sort of predict the quality of the solutions based on the comments people leave on the jobs. I wonder if any good code ever comes from these sites.



  • It becomes rather obvious which ones are bots when faced with something like this. Wow, you are claiming to have solved multiple projects that involve P vs NP. Quite impressive.


  • ♿ (Parody)

    My favorite is the guy who said:

    I have studied SET theory in my regular study. I think we can do your project with ARRAY (corresponding to set) Please tell me some detail of your project.
    It's nearly relevant, so it doesn't look like a generic bot, and would appear that he actually thought about the problem. What he thought is probably a mystery best left unsolved.


  • @boomzilla said:

    My favorite is the guy who said:

    I have studied SET theory in my regular study. I think we can do your project with ARRAY (corresponding to set) Please tell me some detail of your project.
    It's nearly relevant, so it doesn't look like a generic bot, and would appear that he actually thought about the problem. What he thought is probably a mystery best left unsolved.

    I know: I'll use a regex!


  • ♿ (Parody)

    @Anketam said:

    It becomes rather obvious which ones are bots when faced with something like this. Wow, you are claiming to have solved multiple projects that involve P vs NP. Quite impressive.

    Well, really, once you've solved one you've solved them all. But that doesn't mean you shouldn't be at least a little bit humble about it.



  •  Who are these people? And how do we ban them from coding permanently?

    @boomzilla said:

    My favorite is the guy who said:
    I have studied SET theory in my regular study. I think we can do your project with ARRAY (corresponding to set) Please tell me some detail of your project.
    My favourite part from that guy was this:
    I did OOP programming in VB and PHP action script platform.
    See if you can count how many things are wrong with this sentence.

     



  • @morbiuswilters said:

    The response from Pierre de Fermat was kind of amusing, I guess.
     

    It was his last response, too.

    Those comments are smirk-worthy, "plz send me the codez" in reverse.



  • There seems to be a bit of wiggle room in the lack of specification. I'm disappointed that none of the bidders tried that angle.



  • @pjt33 said:

    There seems to be a bit of wiggle room in the lack of specification. I'm disappointed that none of the bidders tried that angle.

    What wiggle room do you think there is to solve P vs NP in polynomial time?

  • BINNED

    @boomzilla said:

    I have studied SET theory in my regular study. I think we can do your project with ARRAY (corresponding to set) Please tell me some detail of your project.
    It's nearly relevant, so it doesn't look like a generic bot, and would appear that he actually thought about the problem. What he thought is probably a mystery best left unsolved.
     

    IMO, SCIgen creates way more sensible gibberish, so it could still be a spambot.

     

    @DOA said:

    I did OOP programming in VB and PHP action script platform.
    See if you can count how many things are wrong with this sentence.

    $ echo I did OOP programming in VB and PHP action script platform. | wc -w
    11


  • @morbiuswilters said:

    I know: I'll use a regex!
     

    Now you have a regex that may or may not be in the set of problematic expressions but you'll never know!


  • BINNED

    @Sutherlands said:

    What wiggle room do you think there is to solve P vs NP in polynomial time?
    Oh, that's easy:

    There are exactly two possibilities:
    a) P = NP, or
    b) P != NP
    tertium non datur

    I can write two programs that output either a) or b) respectively and do so in constant time. One of them solves the P vs NP problem (while the other is wrong) and surely does so in polynomial time.
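
    Just to make the non-constructive trick explicit, a minimal sketch (purely illustrative):

    # Two constant-time "solvers"; exactly one of them prints the correct
    # resolution of P vs NP. The argument is non-constructive: we can't
    # tell which one it is, but each certainly runs in polynomial time.
    def solver_a() -> str:
        return "P = NP"

    def solver_b() -> str:
        return "P != NP"

    print(solver_a(), "/", solver_b())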

     



  • Theory question. Is it possible to solve these problems in polynomial wall-clock time, while using above-polynomial CPU-time (e.g. by spinning up a certain number of EC2 instances or some crazy MapReduce job or the like) ?



  • @fennec said:

    Theory question. Is it possible to solve these problems in polynomial wall-clock time, while using above-polynomial CPU-time (e.g. by spinning up a certain number of EC2 instances or some crazy MapReduce job or the like) ?
    Well, sure, but if you had the hardware why wouldn't you just run the other instances in (1 / 2^n) of the time?



  • @fennec said:

    Theory question. Is it possible to solve these problems in polynomial wall-clock time, while using above-polynomial CPU-time (e.g. by spinning up a certain number of EC2 instances or some crazy MapReduce job or the like) ?

    Throwing more resources at the problem does not change the computational complexity of the underlying algorithms.



  • @Sutherlands said:

    @pjt33 said:

    There seems to be a bit of wiggle room in the lack of specification. I'm disappointed that none of the bidders tried that angle.

    What wiggle room do you think there is to solve P vs NP in polynomial time?

    The wiggle room is in not solving P vs NP. The spec talks about taking input of integers, but doesn't define integer. If you use a language with a 32-bit type called integer you could argue that you're fulfilling the spec as written, and the problem will be solvable in polynomial time.
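
    For what it's worth, here is a sketch of how that reading could be exploited, assuming the posted task is something like subset sum over non-negative integers (the actual spec isn't quoted here, so that's an assumption). The textbook dynamic program is pseudo-polynomial, O(n·S), which becomes polynomial in n once "integer" is pinned to a fixed-width type, because S is then bounded by n·2^32.

    # Hedged sketch: assumes the job is something subset-sum-like over
    # non-negative integers (the real spec isn't visible in this thread).
    # The DP below is pseudo-polynomial, O(n * target); with 32-bit inputs
    # the target is bounded, so this counts as "polynomial time" by the
    # letter of such a spec.
    def subset_sum(values: list[int], target: int) -> bool:
        """True if some subset of `values` (non-negative ints) sums to `target`."""
        reachable = {0}
        for v in values:
            reachable |= {s + v for s in reachable if s + v <= target}
        return target in reachable

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
    print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False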



  • @pjt33 said:

    There seems to be a bit of wiggle room in the lack of specification. I'm impressed that many of the bidders fully understand the requirements from such a vague specification.

    FTFY.



  • @fennec said:

    Theory question. Is it possible to solve these problems in polynomial wall-clock time, while using above-polynomial CPU-time (e.g. by spinning up a certain number of EC2 instances or some crazy MapReduce job or the like) ?

    He IS asking for "PS: The program should run in polynomial time." and does not specify complexity there, just (wall-clock) time.

    But, the number of Amazon EC2 instances is probably bounded, so this won't work.

    However, due to Moore's Law, the performance of a single computer will exponentially increase over time. This suffices to solve any exponential-time problem in polynomial time, if you just save the current state at regular intervals, upgrade to a newer machine, restore the state, and continue where you left off.



  • @OperatorBastardusInfernalis said:

    However, due to Moore's Law, the performance of a single computer will exponentially increase over time.
     

    It's a lie, mine only becomes slower.


  • Discourse touched me in a no-no place

    @OperatorBastardusInfernalis said:

    However, due to Moore's Law, the performance of a single computer will exponentially increase over time.

    Where can I buy that computer?



  • You need to water your PC to allow it to grow.



  • @steenbergh said:

    You need to water your PC to allow it to grow.

    That only works with smart phones.  Smart phones grow into PCs.  <Resisting urge to make a Harvest Moon/Rune Factory reference>


  • @OperatorBastardusInfernalis said:

    @fennec said:
    Theory question. Is it possible to solve these problems in polynomial wall-clock time, while using above-polynomial CPU-time (e.g. by spinning up a certain number of EC2 instances or some crazy MapReduce job or the like) ?

    He IS asking for "PS: The program should run in polynomial time." and does not specify complexity there, just (wall-clock) time.

    But, the number of Amazon EC2 instances is probably bounded, so this won't work.

    However, due to Moore's Law, the performance of a single computer will exponentially increase over time. This suffices to solve any exponential-time problem in polynomial time, if you just save the current state at regular intervals, upgrade to a newer machine, restore the state, and continue where you left off.

    You misunderstand the whole concept of P versus NP. The reason the distinction is interesting is that polynomial algorithms scale rather nicely, while non-polynomial ones do not.
    The idea is clearly visible if you take a look at the following table, using this definition: given processing power P and a time limit T, how large a problem (given by n) can you solve?

    Complexity   P      10·P         1000·P
    O(n)         n1     10·n1        1000·n1
    O(n^3)       n2     ~2.15·n2     10·n2
    O(2^n)       n3     n3 + ~3.3    n3 + ~10
    O(n!)        n4     n4 *         n4 **
    (* for n > 10;  ** for n > 1000)

    As you see, for exponential algorithms throwing more processing power at the problem is not a solution, since n tends to get very large even in simple real-world cases. E.g. if an algorithm for TSP can solve an n=20 problem in 1 hour on a single computer (P), even the Amazon cloud (say 1 million P) won't be able to solve it for every city in Germany (~2000).

    Furthermore, even a high-exponent polynomial algorithm usually implicated a good understanding of the problem (for nontrivial problems), which means that the exponent can usually be brought down to 3-4 with further study.
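
    For anyone who wants to play with those table entries, a small sketch (the 10^12-step budget standing in for "P" is an arbitrary illustrative choice, not something from the post):

    import math

    # Sketch of the table above: with processing power P and a fixed time
    # limit you get a budget of elementary steps, and the largest solvable n
    # is the biggest n with cost(n) <= budget.
    def largest_n(cost, budget, hi):
        lo, best = 0, 0
        while lo <= hi:
            mid = (lo + hi) // 2
            if cost(mid) <= budget:
                best, lo = mid, mid + 1
            else:
                hi = mid - 1
        return best

    P = 10**12  # arbitrary step budget for "P"
    rows = [("O(n)",   lambda n: n,                 10**16),
            ("O(n^3)", lambda n: n**3,              10**6),
            ("O(2^n)", lambda n: 2**n,              200),
            ("O(n!)",  lambda n: math.factorial(n), 200)]

    for name, cost, hi in rows:
        ns = [largest_n(cost, P * mult, hi) for mult in (1, 10, 1000)]
        print(f"{name:6}  P: n={ns[0]}   10P: n={ns[1]}   1000P: n={ns[2]}")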



  • Of course, I know all that, and the reference to Moore's Law was nothing but a joke. Not a bad one, I thought, though.

    Basically, every 1.5 years, CPU performance doubles. So if you buy the then-current computer every 1.5 years and continue your task on that one:

    After 1.5 years, you have 1.5 years of work of the initial CPU done.

    After 3 years, you have 4.5 years of work of the initial CPU done.

    After 4.5 years, you have 10.5 years of work of the initial CPU done.

    After 6 years, you have 22.5 years of work of the initial CPU done.

    After 7.5 years, you have 46.5 years of work of the initial CPU done.

    After 9 years, you have 94.5 years of work of the initial CPU done.

    After 10.5 years, you have 190.5 years of work of the initial CPU done.

    After 12 years, you have 382.5 years of work of the initial CPU done.

    After 13.5 years, you have 766.5 years of work of the initial CPU done.

    After 15 years, you have 1534.5 years of work of the initial CPU done.

    After 16.5 years, you have 3070.5 years of work of the initial CPU done.

    After 18 years, you have 6142.5 years of work of the initial CPU done.

    After 19.5 years, you have 12286.5 years of work of the initial CPU done.

    This actually has an important practical consequence for judging the quality of cryptography: if someone claims their cryptography would take millennia to break on current computers, that doesn't mean it'll actually take that long, if we assume growth continues! It may be broken in just 20 years!
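
    For anyone who wants to check those numbers, a quick sketch of the idealized schedule (performance doubling every 1.5 years, upgrading at every step):

    # Reproduces the schedule above: speed doubles every 1.5 years, the job
    # moves to the new machine each time, so after k upgrades the work done
    # equals 1.5 * (2**k - 1) years of the *initial* CPU.
    STEP = 1.5          # years between upgrades
    done, speed = 0.0, 1.0
    for k in range(1, 14):
        done += STEP * speed
        speed *= 2
        print(f"After {k * STEP:4.1f} years: {done:7.1f} years of initial-CPU work")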



  • @OperatorBastardusInfernalis said:

    Of course, I know all that, and the reference to Moore's Law was nothing but a joke. Not a bad one, I thought, though.

    Basically, every 1.5 years, CPU performance doubles

    In before the obv "but that's not Moore's Law" post...



  • @OperatorBastardusInfernalis said:

    Of course, I know all that, and the reference to Moore's Law was nothing but a joke. Not a bad one, I thought, though.

    Basically, every 1.5 years, CPU performance doubles. So if you buy the then-current computer every 1.5 years and continue your task on that one:

    But that's not Moore's law...


  • I have studied SET theory in my regular study. I think we can do your project with ARRAY (corresponding to set) Please tell me some detail of your project.
    I wonder if he would respond in the same way when asked to make a simple calculator.

    "I have studied NUMBERS theory in my regular study. I think we can do your project with OPERATIONS (corresponding to numbers) Please tell me some detail of your project."

    @morbiuswilters said:

    @boomzilla said:
    My favorite is the guy who said:

    I have studied SET theory in my regular study. I think we can do your project with ARRAY (corresponding to set) Please tell me some detail of your project.
    It's nearly relevant, so it doesn't look like a generic bot, and would appear that he actually thought about the problem. What he thought is probably a mystery best left unsolved.

    I know: I'll use a regex!

    I actually wondered how regexes could be used to solve this. You only need to add a few things before they become Turing-complete anyway.



  • @Strolskon said:

    I actually wondered how regexes could be used to solve this.

    What? You seriously considered regex as a solution? O_o



  • @morbiuswilters said:

    @Strolskon said:
    I actually wondered how regexes could be used to solve this.

    What? You seriously considered regex as a solution? O_o


    Not in a serious way. More like "writing a brainfuck-to-C optimizer-compiler in brainfuck" for the lulz (it exists BTW).



  • @OperatorBastardusInfernalis said:

    This actually has an important practical consequence for judging the quality of cryptography: if someone claims their cryptography would take millennia to break on current computers, that doesn't mean it'll actually take that long, if we assume growth continues! It may be broken in just 20 years!

     

    If P=NP, then breaking encryption becomes trivial: it would mean that any encryption function can be transformed into something trivially solvable - i.e. all difficult problems can be turned into much easier ones.

     

     



  • @zipfruder said:

    If P=NP, then breaking encryption becomes trivial: it would mean that any encryption function can be transformed into something trivially solvable - i.e. all difficult problems can be turned into much easier ones.

    No, just certain encryption methods. True, it's most of the ones currently used for practical security.



  • @morbiuswilters said:

    @zipfruder said:
    If P=NP, then breaking encryption becomes trivial: it would mean that any encryption function can be transformed into something trivially solvable - i.e. all difficult problems can be turned into much easier ones.

    No, just certain encryption methods. True, it's most of the ones currently used for practical security.

    I'm safe here; I encrypt all of my data with one-time pads. Unfortunately, since I need a constant source of new pad data at least as long as the data, I use my mirrored drive as the pad. The cipher-text is totally uncrackable (000000000000...), but now I have a key-management problem.
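
    A tiny sketch of why that ciphertext comes out as all zero bytes (a one-time pad is just XOR, and anything XORed with an identical copy of itself cancels out):

    # One-time pad = XOR with the pad. Using a mirror of the data as the pad
    # cancels every bit -- hence the impressively uniform ciphertext.
    data = b"my extremely secret plaintext"
    pad = data                                     # the "mirrored drive"
    print(bytes(a ^ b for a, b in zip(data, pad)))  # b'\x00\x00\x00...'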



  • @Strolskon said:

    I actually wondered how regexes could be used to solve this. You only need to add a few things before they become Turing-complete anyway.

    Perl's regexes are NP-complete.


  • BINNED

    @pjt33 said:

    Perl's regexes are NP-complete.
    So is this proof that Perl is slow as fuck?



  • @pjt33 said:

    @Strolskon said:
    I actually wondered how regexes could be used to solve this. You only need to add a few things before they become Turing-complete anyway.

    Perl's regexes are NP-complete.

    What does this mean, practically? That Perl's regexes are difficult to solve? (wat) Or that they can be used to solve NP-complete problems?



  • @lettucemode said:

    @pjt33 said:
    @Strolskon said:
    I actually wondered how regexes could be used to solve this. You only need to add a few things before they become Turing-complete anyway.
    Perl's regexes are NP-complete.
    What does this mean, practically? That Perl's regexes are difficult to solve? (wat) Or that they can be used to solve NP-complete problems?
    It means that no one can say "my regex engine will evaluate a string against a regular expression in a scalable manner" without putting limits on the expression.  As an example, greedy matching causes a lot of backtracking and creates Shlemiel the Painter type performance problems.  So, it would be a very bad idea for Google to allow unlimited regular expression based searching of its corpus.
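
    A small illustration of that backtracking blow-up in Python's re module (the pattern is the classic pathological example, not something from the thread); the failing match roughly doubles in cost with every extra character:

    import re
    import time

    # Classic catastrophic-backtracking demo: the nested quantifiers in
    # (a+)+$ make the engine try exponentially many ways to split the a's
    # before it can conclude that the trailing 'b' kills the match.
    pattern = re.compile(r"(a+)+$")
    for n in range(16, 24):
        text = "a" * n + "b"
        start = time.perf_counter()
        assert pattern.match(text) is None
        print(f"n={n}: {time.perf_counter() - start:.3f}s")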



  • @lettucemode said:

    @pjt33 said:
    Perl's regexes are NP-complete.

    What does this mean, practically? That Perl's regexes are difficult to solve? (wat) Or that they can be used to solve NP-complete problems?

    Practically, it means that you should minimise your use of backreferences.

    In answer to your tag, the pedantic statement would be that the problem of determining whether a string is in the language accepted by a PCRE is NP-complete.

    If you really want to be pedantic, you can observe that the page linked only actually shows that it's NP-hard. Feel free to prove to your own satisfaction that it's in NP.
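
    As a taste of what a single backreference buys you (a well-known trick, not the NP-hardness construction from the linked page): matching unary strings against a backreference pattern amounts to searching for a divisor, i.e. a primality test by brute-force backtracking.

    import re

    # Well-known backreference trick: a unary string of length n matches iff
    # n is 0, 1, or composite -- (11+?) guesses a divisor d >= 2 and \1+
    # checks that the rest of the string is made of copies of it.
    not_prime = re.compile(r"^1?$|^(11+?)\1+$")

    def is_prime(n: int) -> bool:
        return not not_prime.match("1" * n)

    print([n for n in range(2, 30) if is_prime(n)])
    # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]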



  • @Jaime said:

    It means that no one can say "my regex engine will evaluate a string against a regular expression in a scalable manner" without putting limits on the expression.

    It's a good thing, then, that Google wrote their own regular expression library which does scale.


  • ♿ (Parody)

    @Salamander said:

    @Jaime said:
    It means that no one can say "my regex engine will evaluate a string against a regular expression in a scalable manner" without putting limits on the expression.

    It's a good thing, then, that Google wrote their own regular expression library which does scale.

    Yeah, that was Jaime's point. From your link:

    RE2 disallows PCRE features that cannot be implemented efficiently using automata. (The most notable such feature is backreferences.)


  • @dtech said:

    Furthermore, even a high-exponent polynomial algorithm usually implicated a good understanding of the problem (for nontrivial problems), which means that the exponent can usually be brought down to 3-4 with further study.

    I think you mean, "Furthermore, even a high-exponent polynomial algorithm usually indicates a poor understanding of the problem (for nontrivial problems), which means that the exponent can usually be brought down to 3-4 with further study."  Is that right?  Because you actually said that a good understanding of the problem is usually responsible for a high-exponent polynomial algorithm.

    On this note, I've actually optimized a routine that originally ran in O(n^10) time and made, thanks to the magic of associative arrays, a version that ran in O(n) time.  I can't remember exactly how the bad algorithm worked, but I remember one of the big keys was to not sort data that didn't need to be sorted, and especially don't use a sort algorithm with worst-case performance on a reversed input if your processing to that point has a side effect of reversing the input from the last time you (unnecessarily) sorted it.  Also, I'm pretty sure there were also some unnecessary re-calculation costs, similar to the naive solution for computing combinations.

    For those poor at math, O(n^10) is bad enough it may as well be NP, for most intents and purposes.  Most of its input was n=1 data, with occasional n=2.  But one input with n=3 would cause a long pause (several minutes), and an input with n=4 was enough to get my manager to approve me seeing what I could do to improve it (about an hour - if you discount the fact that the person who gave it the n=4 job submitted it a bunch of times because it didn't seem to be working, and, oh, this was an Apache/CGI environment, so they were all running in parallel).  Note the original author was sufficiently competent to have it simply error on any n>4 data.  But that's still not very competent, when a correct solution was far more trivial, and even a version that used unsorted arrays with an exhaustive find would have run many times faster.
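
    The original routine isn't described in enough detail to reconstruct, but the general shape of that kind of fix usually looks like this (a purely hypothetical illustration of the associative-array point):

    # Hypothetical illustration only: the slow version re-scans (and pointlessly
    # re-sorts) the whole list for every record; the fast one builds the answer
    # in a single pass with a dict.
    def totals_slow(records):                    # ~O(n^2), plus needless sorts
        totals = {}
        for key, _ in records:
            matching = sorted(v for k, v in records if k == key)
            totals[key] = sum(matching)
        return totals

    def totals_fast(records):                    # O(n)
        totals = {}
        for key, value in records:
            totals[key] = totals.get(key, 0) + value
        return totals

    records = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5)]
    assert totals_slow(records) == totals_fast(records) == {"a": 4, "b": 7, "c": 9}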


  • Discourse touched me in a no-no place

    @tgape said:

    I can't remember exactly how the bad algorithm worked, but[...]
    Couldn't you find a margin big enough to write it in?



  • @tgape said:

    @dtech said:
    Furthermore, even a high-exponent polynomial algorithm usually implicated a good understanding of the problem (for nontrivial problems), which means that the exponent can usually be brought down to 3-4 with further study.

    I think you mean, "Furthermore, even a high-exponent polynomial algorithm usually indicates a poor understanding of the problem (for nontrivial problems), which means that the exponent can usually be brought down to 3-4 with further study."  Is that right?  Because you actually said that a good understanding of the problem is usually responsible for a high-exponent polynomial algorithm.

    No, I meant what I wrote. I was comparing a high-exponent polynomial algorithm to a non-polynomial algorithm. In scientific literature a P algorithm for a problem previously unknown to be in P usually starts with a high exponent (e.g. as a proof of concept), with later papers bringing the exponent down.

    In everyday programming a high exponent may indeed indicate a poor understanding of a problem, because for most everyday problems exponents shouldn't be high.

