Dotnetperls



  • [url=http://dotnetperls.com/]dotnetperls.com[/url]

    has nothing to do with Perl.

    Perhaps he misspelled?



  • Look at his photo!

    Like SGU's smart kid, only shaved.

    Also demented.

     



  • Yes, he misspelled "Pearls", as in "Pearls of Wisdom", although I doubt you will find any on his website.

    "A few years ago I decided I wanted to read every computer science and
    programming book ever written. After a while of that, I decided to only
    read the good ones."



  • Or it is a play on the 'pearls of wisdom' phrase.



  • affine expressions based on int i induction variable

    [url]http://dotnetperls.com/i[/url]
    Click that URL.
    Read its fragile confusion.
    Pearl of ... something.



  • [url]http://dotnetperls.com/loop-jamming[/url]
    Oh no, it's "jamming"!
    Prematurely optimized!
    Don't tell Spectate Swamp!



  • [url]http://dotnetperls.com/switch-slow[/url]
    C# and .Net,
    I don't know much about them,
    But this can't be right.



  • Xiro

    Not totally wrong, not totally right



  • stackoverflow.com/questions/767821/is-else-if-faster-than-switch-case

    Tried posting a link but CS Bombed



  • I agree with them,

    and it contradicts the "Perl",

    hence why it's funny.



  • @Xyro 

    The least common vals,

    Should have a separate switch,

    In the default case.
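
    A haiku, but it describes a real pattern: keep the hot values in the outer switch and push the rare ones into a second switch inside the default case. A minimal C# sketch, assuming made-up status codes (none of this is from the site):

    using System;

    class SwitchDemo
    {
        // Hypothetical status lookup: hot values in the outer switch,
        // rare values handled by a second switch inside default.
        static string Describe(int code)
        {
            switch (code)
            {
                case 1: return "ok";
                case 2: return "retry";
                case 3: return "timeout";
                default:
                    switch (code)
                    {
                        case 100: return "maintenance";
                        case 200: return "decommissioned";
                        default:  return "unknown";
                    }
            }
        }

        static void Main()
        {
            Console.WriteLine(Describe(2));    // retry
            Console.WriteLine(Describe(200));  // decommissioned
        }
    }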



  •  "39.81 ns 18.79 ns"

     

    Anyone concerned

    about nanos in .net

    is missing the point



  • One could then claim that it's 50% of the time, but the real business code will probably dwarf any structural code in terms of execution time. Put a Grid in there and run it again.



  • Look at the source code.

    It's lacking <body> and <head>.

    <html>, too.



  • Good uses of sleep()

    are manifold, but this page

    mentions none of them.



  • I'm getting VBs,

    no problem with Cs. But what

    in the world are Perls?



  • Do you ever find
    Integer.ToString too slow?
    Go [url=http://dotnetperls.com/tostring-integer-optimization]optimize[/url] it!



  • OMFG

    That's just too wrong, wrong, wrong, wrong.

    Can I hit him first?



  • @b-redeker said:

    OMFG

    O M F G is

    an acronym with only

    four letters. Season.



  • Optimization of ToString?

    Are they serious?

    or simply retarded?



  •  But in the winter

    when read out loud it still has

     A fifth syllable.



  • @Xyro said:

    Integer.ToString

    I ported this gem

    to Ruby and improved it.

    It actually "works".



  •  http://dotnetperls.com/try

    int.Parse("0")
     Thank you sir.  This site is filled with gems.


  • If by "works", you mean

    the code compiles and runs,

    then I agree. Look:

                  user     system      total        real
    old way:  1.891000   0.000000   1.891000 (  1.895000)
    new way: 14.375000   0.031000  14.406000 ( 14.507000)
    


  • Also, I must say,

    I am impressed with both how

    Ruby's slow and cute.



  • @noitcif said:

    Thank you sir. This site is filled with gems perls pearls.

    FTFY



  • HMS Ruby

    Was a 40-gun frigate

    Neither slow nor cute.



  • Re: O M F G is an acronym with only four letters. Season.

     

    Why didn't CS

    Send this gem to my email

    Fuck You until Spring



  • @Xyro said:

    Look:

    It's supposed to show

    that the lookup thing is slow

    but I could not say.



  • I like how dotnetpearls.com redirects to dotnetperls.com :)



  • @derula said:

    Look at the source code.

    It's lacking <body> and <head>.

    <html>, too.

    In HTML 5, they're optional.
    It's catering for the lazy.
    And stupid too.



  • @Kyanar said:

    @derula said:
    Look at the source code.

    It's lacking <body> and <head>.

    <html>, too.

    In HTML 5, they're optional.

    HTML5 is weird.

    Also, he forgot to define the encoding on document level, so it only tentatively passes W3C's validator. Shame on laziness!



  • Ironically, the loop-jamming can actually slow down performance quite considerably. Instead of doing a test with 3 arrays, run the same test with 100 arrays... have phun!

    (Hint: what is L1's favorite Pat Benatar song?)
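
    For reference, "loop jamming" here just means fusing several loops over the same index into one. A rough C# sketch of the two shapes being compared, with the array count cranked up as suggested above (the sizes and names are illustrative, not from the article):

    using System;
    using System.Diagnostics;

    class LoopJamDemo
    {
        const int Arrays = 100;      // try 3 vs. 100 to see the effect described above
        const int Length = 100_000;

        static void Main()
        {
            var data = new int[Arrays][];
            for (int a = 0; a < Arrays; a++) data[a] = new int[Length];

            // Separate loops: each array is filled in one sequential sweep.
            var sw = Stopwatch.StartNew();
            for (int a = 0; a < Arrays; a++)
                for (int i = 0; i < Length; i++)
                    data[a][i] = i;
            Console.WriteLine($"separate: {sw.ElapsedMilliseconds} ms");

            // "Jammed" loop: every iteration touches all arrays, so the working
            // set per iteration is Arrays cache lines instead of one.
            sw.Restart();
            for (int i = 0; i < Length; i++)
                for (int a = 0; a < Arrays; a++)
                    data[a][i] = i;
            Console.WriteLine($"jammed:   {sw.ElapsedMilliseconds} ms");
        }
    }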



  • @derula said:

    @Xyro said:
    Look:

    It's supposed to show

    that the lookup thing is slow

    but I could not say.

     

    But you're aware that you're doing something entirely different?

    Using a static array, you indeed get a significant speedup as long as the overwhelming majority of values you need are cached.

    Of course there's still the point that if that speedup matters in your app, you're probably doing something very wrong.
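
    For context, the "optimization" under discussion is essentially a precomputed string table for small non-negative ints, with a fallback to the normal conversion for everything else. A hedged C# sketch (the class name and cache bound are illustrative, not taken from the site):

    using System;

    static class CachedToString
    {
        // Precompute strings for 0..999; anything else falls back to the normal path.
        const int Limit = 1000;
        static readonly string[] Cache = BuildCache();

        static string[] BuildCache()
        {
            var table = new string[Limit];
            for (int i = 0; i < Limit; i++) table[i] = i.ToString();
            return table;
        }

        public static string Convert(int value) =>
            (uint)value < Limit ? Cache[value] : value.ToString();
    }

    class Demo
    {
        static void Main()
        {
            Console.WriteLine(CachedToString.Convert(42));   // served from the cache
            Console.WriteLine(CachedToString.Convert(-7));   // falls back to ToString()
        }
    }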



  • @Ilya Ehrenburg said:

    But you're aware that you're doing something entirely different?

    Sure, I just had no room to clarify because the site didn't allow messages longer than 187 characters at that moment. I tried to clarify in a tag, but it was cut short, and I was pissed.

    @Ilya Ehrenburg said:

    Using a static array, you indeed get a significant speedup as long as the overwhelming majority of values you need are cached.

    I'm not sure any more, but I think I did try it with an array and cached entries for positive numbers and it turned out to be slower than direct conversion as well. Of course, this is mainly because it was in Ruby, which is slow, especially its arrays.

    My point was: This only works if array access is actually fast enough. I could imagine that Delphi / Ada with runtime range checks turned on might also have some issues to get actual speed improvement.



  • @derula said:

    I'm not sure any more, but I think I did try it with an array and cached entries for positive numbers and it turned out to be slower than direct conversion as well. Of course, this is mainly because it was in Ruby, which is slow, especially its arrays.

    Okay, I can't speak for Ruby, I was thinking of C-like arrays (maybe wrapped in an object to store the length alongside the payload) with [fast] O(1) access. And I tested only with C#. If Ruby's arrays are done differently, it's no surprise it doesn't work in Ruby.

    My point was: This only works if array access is actually fast enough. I could imagine that Delphi / Ada with runtime range checks turned on might also have some issues to get actual speed improvement.
     

    Your point is correct, of course. I don't think runtime range checks will entirely  destroy the speedup for the cached values, but it would make the cases where something like this makes sense even rarer (if there are any situations at all where it makes sense).



  • @Ilya Ehrenburg said:

    I was thinking of C-like arrays ... with [fast] O(1) access.

    Assuming O(1) access for array elements means a constant access time is a FAIL. This is the fundamental problem. There are at least 5 distinctly different values (assuming a "PC-like" architecture) for how long it will take to access a given element (or any variable). If your implementation consistently uses the slowest of the 5, then the performance could be tens of thousands of times slower than one which used the fastest of the 5.



  • tried to post haiku
    during winter's CS fail;
    i forget them now

    fading memories
    like the melting snow; CS
    has 4 gig limit.



  • @DaveK said:

    fading memories
    like the melting snow; CS
    has 4 gig limit.

     

    Awww. This is beautiful.



  • @TheCPUWizard said:

    Assuming O(1) access for array elements means a constant access time is a FAIL. This is the fundamental problem. There are at least 5 distinctly different values (assuming a "PC-like" architecture) for how long it will take to access a given element (or any variable). If your implementation consistently uses the slowest of the 5, then the performance could be tens of thousands of times slower than one which used the fastest of the 5.
    Er... Do you know what O(1) means? I'm guessing not, given your incoherent comments. Pop quiz: you are given two arrays. Array A is sized to carry 5 ints, array B is sized for 5000 ints. On average, which is faster, A[4] or B[4999]? Now perform the same exercise with singly-linked lists instead of arrays. Contrast these differences using big O notation.
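
    For anyone following along, the quiz is about growth rate, not raw nanoseconds: indexing an array is O(1) wherever the element sits, while reaching the n-th node of a singly-linked list is O(n). A tiny C# illustration (the sizes are the ones from the quiz, the rest is made up):

    using System;
    using System.Collections.Generic;

    class BigODemo
    {
        static void Main()
        {
            int[] a = new int[5];
            int[] b = new int[5000];
            // Each of these is a bounds check plus one load: O(1),
            // independent of how big the array is.
            int x = a[4];
            int y = b[4999];

            var list = new LinkedList<int>();
            for (int i = 0; i < 5000; i++) list.AddLast(i);
            // Reaching the 5000th node means walking 4999 links: O(n).
            var node = list.First;
            for (int i = 0; i < 4999; i++) node = node.Next;
            Console.WriteLine(x + y + node.Value);
        }
    }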



  • @Xyro said:

    @TheCPUWizard said:
    Assuming O(1) access for array elements means a constant access time is a FAIL. This is the fundamental problem. There are at least 5 distinctly different values (assuming a "PC-like" architecture) for how long it will take to access a given element (or any variable). If your implementation consistently uses the slowest of the 5, then the performance could be tens of thousands of times slower than one which used the fastest of the 5.
    Er... Do you know what O(1) means? I'm guessing not, given your incoherent comments. Pop quiz: you are given two arrays. Array A is sized to carry 5 ints, array B is sized for 5000 ints. On average, which is faster, A[4] or B[4999]? Now perform the same exercise with singly-linked lists instead of arrays. Contrast these differences using big O notation.

    I do know exactly what it means, and it does NOT mean that the access time will be constant across all of the elements. Consider an array whose entire content fits in a single L1 cache page (the second-fastest access): after the first element (which presumably had to be faulted in), accessing all of the remaining elements will be faster. As soon as the array grows past the size of a single page, you will get a minimum of two faults.

    Let us consider a situation where memory resources are tight and there is significant contention for available L1 pages. If you have a big array and access the elements sequentially, you will have minimum time. However, if you perform the exact same number of array accesses, but change the order so that each access is to a memory address that will be on a different L1 page, you will begin to see a slower response.

    If you progress from where the content is being shuffled between L1 and L2 to the point where it is between L1 and main RAM, the difference is even more significant. Since "PC-like" architectures use virtual memory, you will potentially get to the point where a given access pattern causes a page to be faulted to/from disk. Your performance has just been destroyed.

    This is why the suggested optimization may well cause things to be MUCH slower. In the original code the memory accesses would be sequential, resulting in each page of memory being completely initialized before moving to another page. With the optimization, there are now multiple disjoint memory areas contending for the available cache space.

    Thus the question "On average, which is faster, A[4] or B[4999]?" has nothing to do with the numerical value of the index, but it does have everything to do with knowing what memory addresses were recently accessed and are in the cache.

    The result is that while an array has a better scalability factor than a singly-linked list at O(n), or an indexed/tree structure at typically O(log N), in PC-like environments it has a WORSE scalability factor than O(1), simply because as the size gets bigger the probability of a cache miss and its inherent overhead become higher and the performance decreases. [I could provide the full mathematical calculation, based on over a dozen variables, that is used for accurately predicting the distribution of access times.]
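
    A minimal sketch of the access-pattern effect described above, as a plain C# micro-benchmark (the array size and stride are arbitrary choices, not from the post):

    using System;
    using System.Diagnostics;

    class AccessPatternDemo
    {
        static void Main()
        {
            const int N = 1 << 24;          // 16M ints, ~64 MB: well past any cache
            var data = new int[N];
            long sum = 0;

            // Sequential sweep: cache lines and the prefetcher work in our favour.
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < N; i++) sum += data[i];
            Console.WriteLine($"sequential: {sw.ElapsedMilliseconds} ms");

            // Same number of reads, but striding by 4096 ints (16 KB) so that
            // consecutive accesses land on different pages / cache lines.
            sw.Restart();
            const int Stride = 4096;
            for (int start = 0; start < Stride; start++)
                for (int i = start; i < N; i += Stride)
                    sum += data[i];
            Console.WriteLine($"strided:    {sw.ElapsedMilliseconds} ms  (sum={sum})");
        }
    }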

     

     



  • I wrote a reply, but then I deleted it because it occurred to me that we're really talking about different layers of abstraction. And because my fingers are tired now from typing what I deleted, I don't really feel like expounding, but I think you know what I mean by that. In any case, the day page boundaries become relevant to a coder of a high-level language is the day that coder changes languages.



  •  

    Isn't the classic example of O(1) indexing a constant-sized lookup table?  There is a provision for using a reasonable constant, such as the maximum access time for an array, in determining Big-O notation.  Let's not forget that asymptotic notation (in computer science) has to do with growth rate of an algorithm, not bits-and-bytes of a given hardware configuration.


  • @Xyro said:

    I wrote a reply, but then I deleted it because it occurred to me that we're really talking about different layers of abstraction. And because my fingers are tired now from typing what I deleted, I don't really feel like expounding, but I think you know what I mean by that. In any case, the day page boundaries become relevant to a coder of a high-level language is the day that coder changes languages.

     NOTHING is ever deleted....

     @Xyro said:

    Why are you even talking about constant time? Ilya was talking about fast O(1) access, and that's exactly what it is. Algorithmic complexity has nothing to do with architecture layout! And truthfully, memory layout only matters to scalability if we're talking about nanoseconds on embedded devices or somesuch. Re-reading the posts, YOU are the one who brought up constant-time access, and then you started ranting about it!

    You do know that O(1) is not talking about the access time, right? Because otherwise nothing would ever be O(1) and it would be useless to mention.

    #1 - READ the original material. The topic was about optimizing the TIME to perform the entire operation (and the technique listed can have the exact opposite effect). I was not the one who brought up big-O; I was responding to those who thought it had something to do with the ORIGINAL MATERIAL.

    #2 - Big-O is about how the average (typically) time (it could be space or some other metric) SCALES as a function of size. If something is truly O(1) then the average access time would remain CONSTANT regardless of the number of elements. If there is any difference that correlates to the size, then it is NOT O(1). Do a simple experiment. Create an array of 100 integers, then create a loop that RANDOMLY reads an element 100,000,000 times (this should access each element approximately 1M times) [make sure it does not get optimized away], and measure the time. Now change NOTHING except the size of the array. Make it 100,000,000 items [a 400 MB array] and re-run the test (this should access each item approximately once); the time WILL be significantly different. The average access time INCREASES as a function of size. If you want "real fun", and have a 64-bit system, create a HUGE array that far exceeds your physical memory. The average time will again increase.

    #3 - "memory layout only matters...": this is definitely false. Consider high-frequency trading applications (something I work on frequently): a difference of a few microseconds for a highly complex set of calculations can result in a major financial impact (beating the competitor gets you a lower price, and a higher profit - if you make the correct trades).
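
    The experiment in #2 is easy to sketch in C#; the element counts are the ones given in the post, everything else (names, seed) is illustrative:

    using System;
    using System.Diagnostics;

    class CacheMissDemo
    {
        static long RandomReads(int[] data, int reads)
        {
            var rng = new Random(12345);
            long sum = 0;
            // Random index per read, so the work can't be optimized away
            // and the access pattern defeats the prefetcher.
            for (int i = 0; i < reads; i++)
                sum += data[rng.Next(data.Length)];
            return sum;
        }

        static void Main()
        {
            const int Reads = 100_000_000;

            var small = new int[100];            // fits easily in L1
            var large = new int[100_000_000];    // ~400 MB, far bigger than any cache

            var sw = Stopwatch.StartNew();
            long s1 = RandomReads(small, Reads);
            Console.WriteLine($"100 elements:         {sw.ElapsedMilliseconds} ms");

            sw.Restart();
            long s2 = RandomReads(large, Reads);
            Console.WriteLine($"100,000,000 elements: {sw.ElapsedMilliseconds} ms  ({s1 + s2})");
        }
    }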



  • Sigh... Look, by your definition of Big-O, [i]NOTHING IS EVER O(1)[/i]! Can you name me one thing in programming that could ever possibly be O(1) so long as physical limits exist?

    Big-O is about algorithmic complexity. Read the [url=http://en.wikipedia.org/wiki/Big_O_notation]'pedia article[/url] if you don't like my words: @Wikipedia said:

    This allows algorithm designers to predict the behavior of their algorithms and to determine which of multiple algorithms to use, in a way that is independent of computer architecture or clock rate.

    Good grief, man.


  • @frits said:

     

    Isn't the classic example of O(1) indexing a constant-sized lookup table?  There is a provision for using a reasonable constant, such as the maximum access time for an array, in determining Big-O notation.  Let's not forget that asymptotic notation (in computer science) has to do with growth rate of an algorithm, not bits-and-bytes of a given hardware configuration.

    You are 100% correct. What I am pointing out is that most of the fundamental work done in that area was done in a purely theoretical environment, and attempting to apply it to actual cases (without deep understanding) is problematic. People see O(1) and make false assumptions about the performance characteristics.

    One of my favorite questions to ask is: "Given a choice between an O(n) implementation and an O(logN) implementation, which would you select?" The common answer is "O(logN), because it will be faster for larger sets". Unfortunately this does not answer the question. The only possible answer, given just the above information, is "There is insufficient information available to answer the question".
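
    The "insufficient information" point is that Big-O hides the constants. A toy C# illustration where a plain O(n) scan can keep up with an O(log n) binary search on a tiny array (the data and sizes are made up):

    using System;

    class ScanVsSearch
    {
        // O(n) linear scan: trivial branch, sequential memory access.
        // Returns the index of key, or the complement of its insertion point,
        // mirroring Array.BinarySearch's convention.
        static int LinearFind(int[] sorted, int key)
        {
            for (int i = 0; i < sorted.Length; i++)
                if (sorted[i] >= key) return sorted[i] == key ? i : ~i;
            return ~sorted.Length;
        }

        // O(log n) binary search: fewer comparisons, but each one is a
        // less predictable branch and a scattered memory access.
        static int BinaryFind(int[] sorted, int key) => Array.BinarySearch(sorted, key);

        static void Main()
        {
            var small = new int[16];
            for (int i = 0; i < small.Length; i++) small[i] = i * 2;

            // For a handful of elements the "worse" O(n) scan is often as fast as
            // the O(log n) search; only the growth rate favours the latter.
            Console.WriteLine(LinearFind(small, 20));
            Console.WriteLine(BinaryFind(small, 20));
        }
    }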



  • @TheCPUWizard said:

    People see O(1) and make false assumptions about the performance characteristics.
    And, what, you were just trying to nip it in the bud?



  • What is all this talk
    About Big-O Notation?
    Cowboy code yehaw!



  • @frits said:

    What is all this talk
    About Big-O Notation?
    Cowboy code yehaw!

    When it comes to Cowboy code, a "Big-O" is usually followed by {censored term for excrement}..

    @Xyro... Yes, my intention actually was to "nip it in the bud"... alas, it did not have that effect...



  • Ah. I took your quoting of Ilya and declaration of "FAIL" to be some sort of chastisement. Well, I'm glad we have that out of our systems.

    What is constant time?
    Neither the flag, nor the wind:
    The mind is moving.

    O(1) constant time?
    What is constant about time?
    O(1) in our minds.

