Iron Man is now powered by Oracle



  • Clearly we need a language where you can declare a 0array or a 1array. That would end problems once and for all.



  • This is still my favourite though - http://www.youtube.com/watch?v=hkDD03yeLnU

    The fun is the fact that they probably know it's utter bollocks themselves, but they manage to deliver lines like that with 100% seriousness



  • @boomzilla said:

    but it amazes me how many people love to make 1900 part of the 19th century (i.e., the 1800s) as a consequence of counting backwards.
    How does that work exactly? Because the way I see it 1900 is part of the 19th century if you count forwards, ie correctly. Year 1 to 100 (1st century), 101 to 200 (2nd), ..., 1801 to 1900 (19th).

    I'm guessing you made a typo and meant 1800.

    @blatant_mcfakename said:

    http://www.youtube.com/watch?v=hkDD03yeLnU
    Community server doesn't automatically create links. You can either use an <a href="..."> tag or use TinyMCE's Insert/edit link button.



  • @da Doctah said:

     What I don't get is how product placement can possibly co-exist with the standard "Any resemblance to actual anything is purely coincidental" disclaimer.

     

    I don't see the conflict there.  In the movie we see a super powered, nigh-infallible force of computing capable of providing answers to the most complex problems facing the defenders of the free world.

    In reality, we have... Oracle.  No resemblance at all, other than computers being involved in some way.

     

     



  • Mathematicians do start counting at zero ... factorials, Fibonacci, etc. all start at zero. This also makes sense in computer science, since it is mathematically based. Pointer arithmetic has its basis in mathematics; the clue is in the name.

    As for filtering elements in JavaScript or C#, you are doing it wrong. Both have predicate functions for sorting, filtering, etc. With JavaScript, if the browser doesn't support the newer methods, most popular client-side libraries add support.
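
    A minimal sketch of what I mean, in C# (the data here is made up; in JavaScript the equivalent is Array.prototype.filter with the same kind of predicate):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class FilterSketch
        {
            static void Main()
            {
                // Made-up data; the point is the predicate, not the values.
                var numbers = new List<int> { 3, 8, 15, 4, 23, 42 };

                // Where() takes a predicate, so there's no index bookkeeping at all.
                var bigOnes = numbers.Where(n => n > 10).ToList();

                Console.WriteLine(string.Join(", ", bigOnes)); // prints: 15, 23, 42
            }
        }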


  • ♿ (Parody)

    @Zecc said:

    @boomzilla said:
    but it amazes me how many people love to make 1900 part of the 19th century (i.e., the 1800s) as a consequence of counting backwards.

    How does that work exactly? Because the way I see it 1900 is part of the 19th century if you count forwards, ie correctly. Year 1 to 100 (1st century), 101 to 200 (2nd), ..., 1801 to 1900 (19th).

    I'm guessing you made a typo and meant 1800.

    If you think it was a typo, then you didn't read what I wrote.

    Who starts at the year 1? Why so much emphasis on that? Why should we succumb to ridiculous OCDism regarding a year that no one ever referred to as "year 1" during the actual time period in question? The counting of the years was something created after the fact, and likely isn't even the right year based on the reason for its choice. So your insistence that we count from 1 creates, as I said, an unending procession of special cases.

    The point is that it's more intuitive to group things together when they are similar to each other. IOW, 1900 and 1901 are more similar than 1899 and 1900. So why do things backwards? Is 1900 part of the 1900s or not? Will 2020 be part of the decade known as the twenties? Your fascination with the lack of a year 0 does all sorts of counterintuitive things, so why not just have one special case? I think the answer is that it's a rare occasion for normal people to exercise their pedantic dickweedery.
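
    To put the arithmetic behind that in one place (a throwaway C# sketch; integer division, and the variable names are mine, not anyone's standard):

        using System;

        class CenturyMath
        {
            static void Main()
            {
                int year = 1900;

                // Counting centuries from 1 (no year 0) drags a +99 fudge through the math:
                int ordinalCentury = (year + 99) / 100;   // 19 -> "the 19th century" (1801-1900)

                // Grouping by the hundreds is plain truncation, no special case:
                int hundredsGroup = year / 100;           // 19 -> "the 1900s" (1900-1999)

                Console.WriteLine($"{year}: {ordinalCentury}th century, the {hundredsGroup}00s");
            }
        }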



  • @morbiuswilters said:

    Exactly. It's just sad because we've been doing this long enough that better stuff should have bubbled up. But trends and fads and "make the programmer feel special" seem to do more to drive technology than critical thinking.

    It's bubbling up in the wrong direction. The shitty tools are getting promoted, while the good ones are... I dunno... too boring? There is a certain brand of engineer who will ignore and deride anything made by Microsoft, regardless of its quality. There's also a certain brand of engineer who will promote CLI tools over GUI ones, in the mistaken and unexamined belief that they are "more efficient". There's also a certain brand of engineer who believes tools should be difficult to use, so as to make their jobs more secure.

    Those asshats are taking over IT. And tearing down everything great we ever built. Slashdot is winning. "Worse is better" turned out to be right, and things are worse than ever now.

    For example, Apple got a load of those asshats from NeXT, and next (haha) thing you know, Mac OS is a ball of shit. What happened to Mac Classic's legendary usability? Gone. Spatial memory? That only benefits non-geeks, so gone. Window borders? Why would you put the close box on the opposite side from the maximize box, it's not like anybody would click that by accident. And why would you give it a meaningful icon instead of making it a ball of red? The red ball vaguely looks like a traffic light so obviously that's superior. Oh and let's make a whole new type of window that looks vaguely like it's a plate of aluminum and behaves totally differently from the existing windows, then let's sprinkle those in our various apps with no rhyme or reason. Is there a single person alive who can internalize the logic of the Dock? There is? Better make it even more confusing then! Notification system? Just because Mac Classic had one, and just because Windows has had one since fucking 1998, we'll take our sweet time and maybe add one in 2011, maybe, if our users are lucky. (And only then because a third-party one got too popular to ignore.)

    Apple was on the Right Path. Now they aren't. Ironically, Microsoft is closer to the Right Path than Apple is now. (Of course, even Apple is much closer to the Right Path when it comes to dev tools-- at least they have a decent IDE, that's better than anything in Linux- or Java-land.)

    Now I'm depressed.



  • @boomzilla said:

    If you think it was a typo, then you didn't read what I wrote.
    I read what you wrote. Doesn't seem I read it as you meant, though. I think we've got a different understanding of what counting "forwards" and "backwards" is.

    But your next post clarified things for me, and I agree with you. FWIW, I promote "1900 is in the 19th century" as being "correct" just because there is no year zero. Whether or not there should be a year zero is another, more delicate question.

    Btw, wouldn't it be wonderful if the 1800s were the 18th century, even if that meant there was a 0th century? (and a 0th BCE too) We'd be in the first decade of the zeroth century of the second millennium.

    PS, just because: www.timeanddate.com/calendar/monthly.html?year=1582&month=10 (not sure if I was expecting something else)



  • @DCRoss said:

    @da Doctah said:

     What I don't get is how product placement can possibly co-exist with the standard "Any resemblance to actual anything is purely coincidental" disclaimer.

     

    I don't see the conflict there.  In the movie we see a super powered, nigh-infallible force of computing capable of providing answers to the most complex problems facing the defenders of the free world.

    In reality, we have... Oracle.  No resemblance at all, other than computers being involved in some way.

     

     

    Let's consider a classic film, Mars Attacks.  On the one hand, the Slim Whitman people pay big bucks to have their product featured prominently, with the implication that Slim Whitman can put an end to the alien invasion and save the world.  Yay for Slim Whitman, and yay for all the people who make money from him.

    On the other hand, the disclaimer says that the Slim Whitman in the movie is a work of fiction, and that any resemblance to Slim Whitman living or dead is purely coincidental.  For shame!  There it is, right at the foot of the credits, an open admission that Slim Whitman doesn't have the ability to repel alien invaders.

    QED: direct conceptual conflict.

     



  • @Zecc said:

    PS, just because: www.timeanddate.com/calendar/monthly.html?year=1582&month=10 (not sure if I was expecting something else)
    Actually, I just realized this depends on the country you're in. In France for example, the missing days are(n't) in December; in Denmark, in February of 1700; in the UK, the US and Australia, in September 1752.

    I did know the Russian October Revolution was in the Gregorian calendar's November, though.



  • @morbiuswilters said:

    I actually think there is a market for a well-designed language that fills a specific need--something high-level, object-oriented, garbage-collected but also compiled, with an eye towards performance and parallelism.
    Like C#?



  • @Sutherlands said:

    @morbiuswilters said:
    I actually think there is a market for a well-designed language that fills a specific need--something high-level, object-oriented, garbage-collected but also compiled, with an eye towards performance and parallelism.
    Like C#?

    Yeah, basically C# and the whole .net framework solve all these problems; the only remaining issue is you can't get the morons to use it because Microsoft is TEH EBIL! (Well, the other problem is some mobile platforms don't support it because Microsoft is TEH COMPETITER!)

    The fact that C# and .net are ECMA and ISO standards seems to not matter in the slightest. Among idiots.



  • @boomzilla said:

    You're still in pointer arithmetic mode, and really saying, get me the element that's offset n places from the start. I've come across places where 0-based and 1-based have advantages over each other. In general, I prefer 1-based, because it's more intuitive for the most common cases. 0-based is often easier when you have to calculate things. Anyways, I've had off by one errors in both cases.

    True, 0-indexing works a bit better for pointer arithmetic, but that's why I keep qualifying my statement by saying "modern, high-level language". And you can do pointer arithmetic with 1-based indexing, it just requires the compiler to be a bit more clever.
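
    For illustration only (a rough C# sketch of the idea, not any language's actual mechanism): all the "cleverness" amounts to is folding a minus-one into the offset calculation, i.e. base + (i - 1) * elementSize instead of base + i * elementSize.

        using System;

        // Hypothetical 1-based view over an ordinary array; the names are made up.
        public class OneBasedArray<T>
        {
            private readonly T[] _items;

            public OneBasedArray(int length)
            {
                _items = new T[length];
            }

            public int Length => _items.Length;

            // Valid indexes run 1..Length; the wrapper shifts them onto 0..Length-1.
            public T this[int index]
            {
                get => _items[index - 1];
                set => _items[index - 1] = value;
            }
        }

    (Usage would just be var days = new OneBasedArray<string>(7); days[1] = "Monday"; and so on.)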

    @boomzilla said:

    What's really inexcusable is when someone puts months into an enumeration and January is zero. That's when you know that the cargo cultism has gone too far.

    Yes. That is flat-out insanity.



  • @Snowyowl said:

    Why would you want to jump to the fifth element? Is there something special about the value at the fifth element? Put it in a variable, then, not in an array. Arrays are for iterating over; it shouldn't matter that array[4] is actually the fifth element because there's nothing  to distinguish it from array[3] or array[5].

    What? No. Arrays are not only for iterating over. Nor are they only accessed in a for (int i = 0; i < size; ++i) loop. Try doing some advanced work with arrays that requires you to jump around in the array or jump to a specific point, and you'll quickly learn to curse zero-indexing.



  • I honestly can't remember the last time I accessed something by an index.



  • @PSWorx said:

    You want to implement a simple XOR-based "encryption" (read: obfuscation) scheme. Or an animation script. Or anything else where you have to continuously loop over your array, depending on a counter.

    Which one is more intuitive?

    This: currentValue = array[counter % array.size]

    Or this: currentValue = array[((counter - 1) % array.size) + 1]

    Yes, yes, I figured somebody was going to bring up the modulus thing. But: 1) all other array accesses are far more common than the modulus thing; and 2) it's trivial to implement the modulus case as a function/macro like get_mod_offset(array, counter).

    So, no, I don't consider this a valid reason to stick with 0-indexing. It's more important for languages to be intuitive and resistant to bugs than it is to make edge cases look elegant.
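
    Roughly what that helper could look like, sketched in C# (the name get_mod_offset is just the one used above; this is an extension method I made up, not part of any real framework):

        using System;

        public static class ArrayExtensions
        {
            // Wraps a running counter onto a valid index, whatever the indexing convention is.
            public static T GetModOffset<T>(this T[] array, int counter)
            {
                if (array.Length == 0)
                    throw new ArgumentException("array must not be empty", nameof(array));

                // 0-based arrays: plain counter % Length (assumes counter >= 0; C#'s % can go negative otherwise).
                // In a 1-based language the body would be ((counter - 1) % Length) + 1,
                // but the caller never has to see that math either way.
                return array[counter % array.Length];
            }
        }

    So the loop body from the earlier post just becomes currentValue = array.GetModOffset(counter) and the ugly index math lives in one place.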



  • @lucas said:

    They start at zero because mathematicians start counting at zero, coming from a maths heavy background I find it makes sense.

    1) No, they don't. They start at zero to make pointer arithmetic easier for the C compiler to implement. 2) This is irrelevant. Modern languages should strive to make bugs difficult, not adhere to some arbitrary mathematics conventions.

    @lucas said:

    If a senior-level programmer hasn't grasped how to write a for loop correctly, I doubt they should be called "senior".

    It's not an issue of "grasped". It's an issue of "this is implemented in a way that guarantees a certain amount of bugs, even from experienced programmers". You might as well say the same thing about buffer overflows: they're easy to understand and check for, but they still slip through production software created by senior-level programmers all the time. Most modern languages do bounds-checking because humans are not perfect; if your tool is built on the premise that humans aren't going to make mistakes, it's a shoddy tool.
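
    (To make the bounds-checking point concrete, a trivial C# sketch with a deliberate off-by-one:)

        using System;

        class OffByOne
        {
            static void Main()
            {
                int[] buffer = new int[10];

                // Deliberate fencepost error: <= should be <.
                // A bounds-checked language fails loudly here (IndexOutOfRangeException at i == 10)
                // instead of quietly scribbling past the end of the buffer the way C would.
                for (int i = 0; i <= buffer.Length; i++)
                {
                    buffer[i] = i;
                }
            }
        }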

    @lucas said:

    If you don't like zero based indexing, functional languages or languages that let you do foreach are probably more your cup of tea.

    Lots of languages have foreach and zero-based indexing; the former is great, the latter not. Besides which, I do actual work with programming languages, so functional languages are a no-go.

    @lucas said:

    In C# and JavaScript I rarely have to iterate over a collection while counting, or I manipulate the array/collection before looping.

    Then why does it even matter to you? It matters to me because when I do have to use indexes, I find it irritating that we're still stuck with this ass-backwards idiom.



  • @PSWorx said:

    ...molybdomancy...

    Holy crap, my preacher was right when he told me Europeans were witchcraft-practicing devil worshipers.



  • @lucas said:

    Pointer arithmetic has its basis in mathematics; the clue is in the name.

    It's also been fucking outmoded for a quarter century. And you don't need 0-based indexes with pointer arithmetic, it just makes the compiler-writer's job easier while making the compiler-user's harder (which is the entire philosophical underpinning of C). Stop with this nonsense.



  • @morbiuswilters said:

    @PSWorx said:
    ...molybdomancy...

    Holy crap, my preacher was right when he told me Europeans were witchcraft-practicing devil worshipers.

    Hey, I do not practice witchcraft.

    You seem to be agreeing a lot with yourself.



  • @blakeyrat said:

    There's also a certain brand of engineer who will promote CLI tools over GUI ones, in the mistaken and unexamined belief that they are "more efficient". There's also a certain brand of engineer who believes tools should be difficult to use, so as to make their jobs more secure.

    Yes. There's also the "let's make the programmer feel like a special little genius" attitude which is killing this industry. So companies pander to this shit with their employees and we end up with "engineers" who chase after every fad technology. That shit drives me crazy.

    Look: if you enjoy doing your job, that's great, but your first duty is to do your job. So many software "engineers" today seem to think their primary duty is to have fun, stimulate their brain, and then maybe get some work done if they feel like it. I hate those people. And I pretty much hate computers and programming, I really do, but I'm better than the vast majority of people in this industry just because I actually dedicate myself to being good at it. I don't read Hacker News or Slashdot or any of that junk, because it's just filled with the idiotic braying of jackasses (how much you wanna bet somebody will quote that single line and say something like "Yeah, I wouldn't know what that's like" or if they're less clever "Now you know how we feel" or if they're Zylon "HERP UR TALKIN BOUT URSLF HA HA").

    About the only positive thing I can say about software engineering nowadays is that it's improved the quality of french fries. Most of these idiots would be no better working a fryer than they are a debugger and if it weren't for software engineering, they might worm their ways into positions of actual importance where they end up doing real harm, like burning my onion rings.

    @blakeyrat said:

    For example, Apple got a load of those asshats from NeXT, and next (haha) thing you know, Mac OS is a ball of shit. What happened to Mac Classic's legendary usability? Gone. Spatial memory? That only benefits non-geeks, so gone. Window borders? Why would you put the close box on the opposite side from the maximize box, it's not like anybody would click that by accident. And why would you give it a meaningful icon instead of making it a ball of red? The red ball vaguely looks like a traffic light so obviously that's superior. Oh and let's make a whole new type of window that looks vaguely like it's a plate of aluminum and behaves totally differently from the existing windows, then let's sprinkle those in our various apps with no rhyme or reason. Is there a single person alive who can internalize the logic of the Dock? There is? Better make it even more confusing then! Notification system? Just because Mac Classic had one, and just because Windows has had one since fucking 1998, we'll take our sweet time and maybe add one in 2011, maybe, if our users are lucky. (And only then because a third-party one got too popular to ignore.)

    I love when people tell me OSX has the best usability. Then why the fuck is there only a single toolbar at the top of the screen? That shit made sense back when screens were 7" wide and 300px across because it saved space and it was easy to move the mouse up to the toolbar. On a 30" desktop you start getting the feeling Apple has no fucking clue what they're doing anymore, other than turning out overpriced, shiny garbage that hipsters can buy with their parents' credit cards.

    Then there's shit too like the built-in terminal is a fucking joke. There are two WTFs in that sentence: the first is that we're still building terminals into modern OSes (I know Microsoft does it, too, but it's much more essential in Unix-based OSes) and the second is that, even though it's fundamental to doing a lot of shit in OSX, they still managed to fuck it up. I mean, seriously, the first time I sat down to use OSX within five minutes I was like "I've got to find a replacement terminal to install". That's some pretty hard-core failure there: five minutes in and I've already used the terminal, found it wanting and decided it's essential enough that I need to find a replacement to keep working.

    The iPhone seems okay, although I'd hardly call it great. It's just way better than Android, which is a fucking disaster. When I bought my first Android phone the chick at the Verizon store was like "You should download this 'task killer' app so you can stop any background applications from running and free up memory!" Really? So I'm carrying a goddamn Windows 95 PC in my pants now? And the saddest thing is, even with aggressive killing of tasks it's still constantly running out of memory. "Oh, you want to open another Chrome tab? You got 2 open already, motherfucker. I'm just gonna freeze up for about 30 seconds and then pop up a dialog about Chrome hanging. Then when you go to close it the whole goddamn phone will lock up. Ha ha!"



  • For tasks I perform rarely, I prefer GUIs. For things I do frequently, command line really is more efficient. I often learn the GUI first, then the command line later, because I'm faster that way.

    Where I work, we very rarely use Microsoft products, and only on customer request. They're too easy to break.

    Like when I build a manufacturing tester on Windows and install the software on a preconfigured user account. Ship it out, and they tell me it doesn't work. 90 minutes of debugging over the phone later, and it turns out they added the machine to their domain because IS/IT told them to. Really, this isn't a computer, it's an appliance. It may look like a computer and act somewhat like a computer, but it must be treated like an appliance. If I use Linux, then they can't break it nearly as easily. This statement has actually been proven.

    On the embedded front, device drivers for Windows CE are an absolute pain because they are so frequently obfuscated. Add in that a Windows CE build requires many gigabytes of disk traffic, so a CE build becomes far more disk-bound than CPU-bound, and Windows doesn't manage file-system caches well enough to take advantage of large amounts of RAM. An SSD helps, but it's still disk-bound.

    On the other hand, most of the stuff we make winds up on MCUs with <128KB flash and <32KB RAM, so C# isn't much of an option - the .NET Micro Framework is way too big.

    Regarding MacOS X (note: I don't have a mac, I've just used them), care to explain what you don't like about the Terminal? It seems to work fine for me. As an example of using the command line, a friend had "deleted" a bunch of TimeMachine backups to clear out space on her external hard drive. Well, it turned out they were in the trash, and emptying the trash showed an ETA of over an hour. She wanted to leave the house and have the machine sleep when it was done. Couldn't find a GUI tool to do that, so I wrote a one-line shell script that deleted the contents of the trash folder and told the machine to sleep. I haven't seen many GUI tools that can do sequenced commands like that.



  • @Circuitsoft said:

    For things I do frequently, command line really is more efficient. I often learn the GUI first, then the command line later, because I'm faster that way.

    Prove it.

    With data. Not "gut feelings".

    @Circuitsoft said:

    Ship it out, and they tell me it doesn't work. 90 minutes of debugging over the phone later, and it turns out they added the machine to their domain because IS/IT told them to.

    So you wrote your software wrong, and it broke when exposed to a slightly-different network environment. And the problem is Microsoft's?

    @Circuitsoft said:

    If I use Linux, then they can't break it nearly as easily. This statement has actually been proven.

    Then you have a cite to provide us?

    @Circuitsoft said:

    Well, it turned out they were in the trash, and emptying the trash showed an ETA of over an hour. She wanted to leave the house and have the machine sleep when it was done. Couldn't find a GUI tool to do that, so I wrote a one-line shell script that deleted the contents of the trash folder and told the machine to sleep.

    Much easier than just telling her "hey leave the house, it'll sleep after a half hour or so anyway."

    Why would you even LOOK for a GUI tool to do something so unnecessary? "I couldn't find a GUI tool to do what the OS already does by default", well GEE WHIZ PROFESSOR, why do you think nobody's written that brilliant piece of software yet? Man yesterday I tried to find a GUI tool that would make it so when I typed "a" on the keyboard an "a" appeared in the active text box, but I couldn't find shit! UNBELIEVABLE!

    @Circuitsoft said:

    I haven't seen many GUI tools that can do sequenced commands like that.

    If she had a far-superior Mac Classic OS, you'd be able to use AppleScript to do it. And Mac Classic literally didn't even have a command-line at all.



  • @Circuitsoft said:

    For things I do frequently, command line really is more efficient.

    The only benefit CLIs have is scriptability, and you can script a well-designed GUI, too.

    @Circuitsoft said:

    Where I work, we very rarely use Microsoft products, and only on customer request. They're too easy to break.

    You lie. Linux and OSX are far easier to break than Windows. Hell, I don't even have to do anything to have pulseaudio be like "Ha ha, I'm going to just stop making noises!"

    @Circuitsoft said:

    Like when I build a manufacturing tester on Windows and install the software on a preconfigured user account. Ship it out, and they tell me it doesn't work. 90 minutes of debugging over the phone later, and it turns out they added the machine to their domain because IS/IT told them to.

    So the problem is you don't know how to write software that works with machines joined to a domain? How is this anybody's fault except yours and your parents'?

    @Circuitsoft said:

    If I use Linux, then they can't break it nearly as easily. This statement has actually been proven.

    Please share your proof with the class. Otherwise, I will have to say you're just making shit up.

    @Circuitsoft said:

    Windows doesn't manage file-system caches well enough to take advantage of large amounts of RAM.

    [citation needed]

    @Circuitsoft said:

    Regarding MacOS X (note: I don't have a mac, I've just used them), care to explain what you don't like about the Terminal?

    It looks like shit, the keys are mis-mapped compared to every other fucking terminal on Earth so if you try to use something like vim it just shits itself. And the last time I used it you couldn't even change the colors so it was basically like being raped in the eyes.

    @Circuitsoft said:

    She wanted to leave the house and have the machine sleep when it was done. Couldn't find a GUI tool to do that, so I wrote a one-line shell script that deleted the contents of the trash folder and told the machine to sleep.

    This is asinine. Why did she need to force the machine to sleep? Most modern computers can just sleep automatically. Seriously, I want you to take a long, hard look at yourself. This is the argument you've chosen to support CLIs.



  • @Circuitsoft said:

    On the other hand, most of the stuff we make winds up on MCUs with <128K Flash and <32K RAM, so .net Micro Framework barely fits and leaves no room for an application.

    I hate community server. The entire rest of the world has standardized on BBCode.



  • @blakeyrat said:

    If she had a far-superior Mac Classic OS, you'd be able to use AppleScript to do it. And Mac Classic literally didn't even have a command-line at all.

    Exactly. Command lines only seem superior because nobody who writes GUI software even gives a shit anymore (especially on the Unix side of things.) It's like a poor Latin American country: hey, burros are superior to cars because cars require a functional fucking industrial economy to support!

    The saddest part is that rather than learn how to actually build good software, we have hundreds of thousands of man hours pissed away into creating 100 shitty terminals and 100 shitty windowing managers. It would be like a carpenter saying "Well, I could build myself a single house that would last a lifetime, or a series of tar paper shacks that have a Mean Time Between Collapses (MTBC) of a few months."



  • @Circuitsoft said:

    I agree with whatever Morbs just said!



  • @Circuitsoft said:

    @Circuitsoft said:

    On the other hand, most of the stuff we make winds up on MCUs with <128K Flash and <32K RAM, so .net Micro Framework barely fits and leaves no room for an application.

    I hate community server. The entire rest of the world has standardized on BBCode.

    Looks like you're a master of text-based interfaces.



  • @morbiuswilters said:

    You lie. Linux and OSX are far easier to break than Windows. Hell, I don't even have to do anything to have pulseaudio be like "Ha ha, I'm going to just stop making noises!"

    That's actually pretty convenient.

    The view of Windows that most open source fans have is so divergent from reality that it boggles my mind sometimes.

    Here's what I was doing with my Windows last night:
    1) Playing a brand-new CPU/GPU-intensive game
    2) Running the Steam overlay in it
    3) Running the XSplit video grabber in it, broadcasting to the Internet (720p, CD-quality audio)
    4) Recording the entire 1920x1080 video & audio game at 30 FPS with FRAPS to my secondary (spinning) HD, FRAPS does *ZERO* compression so imagine the data rate there
    5) Running a VOIP program to talk to my buddy (Ventrillo, with server set at highest settings)
    6) Running Audacity to record my own audio

    This was all being done simultaneously, and Windows (and the other pieces of software involved) was rock-fucking-solid. The only limit to my recording time wasn't "how long until shit crashes" but "how much disk space is left on my secondary HD." This is what I do like 4-5 days a week; this is my workflow. (Ignoring game availability issues) is it even possible to do this workload in Linux? Or OS X for that matter?



  • @morbiuswilters said:

    @Circuitsoft said:
    For things I do frequently, command line really is more efficient.

    The only benefit CLIs have is scriptability, and you can script a well-designed GUI, too.

    I'm used to using tools that assume they're running in a scriptable environment. That way they don't need to embed their own almost-useful language.

    @morbiuswilters said:

    @Circuitsoft said:
    Where I work, we very rarely use Microsoft products, and only on customer request. They're too easy to break.

    You lie. Linux and OSX are far easier to break than Windows. Hell, I don't even have to do anything to have pulseaudio be like "Ha ha, I'm going to just stop making noises!"

    Very few appliances make noise at all, let alone use PulseAudio. While I do have that issue on my desktop (using Debian Stable), I've had no problems on my Gentoo laptop, and on the desktop it can be fixed by restarting pulseaudio (pulseaudio --kill; pulseaudio --start) and I don't even have to kill clients.

    @morbiuswilters said:

    @Circuitsoft said:
    Like when I build a manufacturing tester on Windows and install the software on a preconfigured user account. Ship it out, and they tell me it doesn't work. 90 minutes of debugging over the phone later, and it turns out they added the machine to their domain because IS/IT told them to.

    So the problem is you don't know how to write software that works with machines joined to a domain? How is this anybody's fault except yours and your parents'?

    No, the problem is that I wrote the user manual with the assumption that the software had already been installed. Also, connecting it to the domain screwed with the network settings causing it to no longer connect to the ethernet-connected test equipment that was part of the testing system.

    @morbiuswilters said:

    @Circuitsoft said:
    If I use Linux, then they can't break it nearly as easily. This statement has actually been proven.

    Please share your proof with the class. Otherwise, I will have to say you're just making shit up.

    You'll have to go ask the contract manufacturer (I can't name names) but their network people only know Windows and are afraid of it so they don't touch it. That's why it didn't break as soon as they turned it on.

    @morbiuswilters said:

    @Circuitsoft said:
    Windows doesn't manage file-system caches well enough to take advantage of large amounts of RAM.

    [citation needed]

    Basic build time measurements for similar file counts and line number counts. Linux can start and finish compiler processes, open and close files, and manage caching far, far better than any Windows I've seen. 7 is much better than XP in that regard, and Windows Server may be better yet, but Windows Server isn't supposed to be a desktop OS. I don't need to download Ubuntu Server to run a compilation job that will effectively use 64GB RAM and all 24 CPU cores.

    @morbiuswilters said:

    @Circuitsoft said:
    Regarding MacOS X (note: I don't have a mac, I've just used them), care to explain what you don't like about the Terminal?

    It looks like shit, the keys are mis-mapped compared to every other fucking terminal on Earth so if you try to use something like vim it just shits itself. And the last time I used it you couldn't even change the colors so it was basically like being raped in the eyes.

    Fair enough. I haven't spent enough time to try vim in it. I think I've seen people who have configured it to a black background and white text, but that may have been 3rd party tools.

    @blakeyrat said:

    @Circuitsoft said:
    Well, it turned out they were in the trash, and emptying the trash showed an ETA of over an hour. She wanted to leave the house and have the machine sleep when it was done. Couldn't find a GUI tool to do that, so I wrote a one-line shell script that deleted the contents of the trash folder and told the machine to sleep.

    Much easier than just telling her "hey leave the house, it'll sleep after a half hour or so anyway."

    She is a power user and has it configured to only do that on battery, and it was plugged in.



  • @morbiuswilters said:

    @blakeyrat said:
    If she had a far-superior Mac Classic OS, you'd be able to use AppleScript to do it. And Mac Classic literally didn't even have a command-line at all.

    Exactly. Command lines only seem superior because nobody who writes GUI software even gives a shit anymore (especially on the Unix side of things.) It's like a poor Latin American country: hey, burros are superior to cars because cars require a functional fucking industrial economy to support!

    There actually is still AppleScript. I just didn't want to bother learning it for this one little job when I had another ready option that worked. And, while I have done a tiny bit of AppleScript quite a while ago, it's so f***ing verbose that it makes my eyes hurt. Which of the following is easier for your eyes to parse quickly:

    set myvar to the width of mytextbox

    myvar = mytextbox.width

    One is a wall of text that actually needs to be read, while one has characters of different heights to easily pick out what's going on. The . is shorter than the rest of the line, and the = is spaced from the top and the bottom making it easier for your eyes to grab it.

    @morbiuswilters said:

    The saddest part is that rather than learn how to actually build good software, we have hundreds of thousands of man hours pissed away into creating 100 shitty terminals and 100 shitty windowing managers. It would be like a carpenter saying "Well, I could build myself a single house that would last a lifetime, or a series of tar paper shacks that have a Mean Time Between Collapses (MTBC) of a few months."
    There is a lot of NIH syndrome in Window Managers. But, there's also a lot of experimentation. I quite liked one called Ion3, but it's gone EOL, and Awesome (similar) is not as easy to use. At this point, I just use KDE because newer Gnome drives me nuts, and KDE is actually lighter than it seems on first glance.



  • @blakeyrat said:

    ...is it even possible to do this workload in Linux?

    I doubt it. The Linux kernel and core system libraries are pretty rock-solid (they've had 40 years to perfect this shit, so they better be) but anything GUI is going to lack features, have mediocre performance and have bugs. And since Linux is more of a general-purpose kernel, tuned more for server and embedded use, it tends to suck for doing interactive GUI stuff, even when tweaked. As long as I've used The Linux Desktop I've experienced random lag, freeze-ups that last a few seconds, choppy audio and video...

    The most sophisticated multimedia stuff I do is sometimes watch Flash videos, and even those will sometimes just die (although I blame Flash and not Linux for that one.)



  • @Circuitsoft said:

    I'm used to using tools that assume they're running in a scriptable environment. That way they don't need to embed their own almost-useful language.

    Thus Apple's solution, to make the scripting language belong to the OS (so it's standardized) and require Mac applications to send window messages to themselves (to force exposing all useful program functions to the scripting language, also to provide recordability.) They thought of this shit, back in like 1990. The system they came up with is far superior to anything that exists today, including OS X's AppleScript.

    @Circuitsoft said:

    You'll have to go ask the contract manufacturer (I can't name names) but their network people only know Windows and are afraid of it so they don't touch it. That's why it didn't break as soon as they turned it on.

    Lemme guess, healthcare or government?

    You can't blame Windows for shitty-ass admins. Imagine what kind of havoc those shitty-ass admins would wreak with Linux!

    @Circuitsoft said:

    Basic build time measurements for similar file counts and line number counts. Linux can start and finish compiler processes, open and close files, and manage caching far, far better than any Windows I've seen. 7 is much better than XP in that regard, and Windows Server may be better yet, but Windows Server isn't supposed to be a desktop OS. I don't need to download Ubuntu Server to run a compilation job that will effectively use 64GB RAM and all 24 CPU cores.

    Bullshit. You're not going to find anybody, anybody, even in the deepest darkest depths of Linux wankery who will claim that Linux memory management is better than Windows. Because that's a flat-out lie and everybody knows it. (At best, you'll get: Linux is good enough so it doesn't matter if Windows is better.) You have to be truly delusional to think Linux memory management holds a candle to Windows, desktop or server.

    @Circuitsoft said:

    She is a power user and has it configured to only do that on battery, and it was plugged in.

    So unplug it. Problem solved. You honestly didn't think of that?


  • ♿ (Parody)

    @blakeyrat said:

    This was all being done simultaneously, and Windows (and the other pieces of software involved) was rock-fucking-solid. The only limit to my recording time wasn't "how long until shit crashes" but "how much disk space is left on my secondary HD. This is what I do like 4-5 days a week; this is my workflow. (Ignoring game availability issues) is it even possible to do this workload in Linux? Or OS X for that matter?

    The only thing on your list that I regularly do (or have any desire to do) is VOIP, usually over google voice. I regularly have multiple VMs running various things and building / testing software, etc. In general, the only crashes I get are from my own software as I'm building / testing.

    OTOH, I can't even listen to music on my Windows box due to random static that gets injected.



  • @blakeyrat said:

    @morbiuswilters said:
    You lie. Linux and OSX are far easier to break than Windows. Hell, I don't even have to do anything to have pulseaudio be like "Ha ha, I'm going to just stop making noises!"

    That's actually pretty convenient.

    The view of Windows that most open source fans have is so divergent from reality that it boggles my mind sometimes.

    Here's what I was doing with my Windows last night:
    1) Playing a brand-new CPU/GPU-intensive game
    2) Running the Steam overlay in it
    3) Running the XSplit video grabber in it, broadcasting to the Internet (720p, CD-quality audio)
    4) Recording the entire 1920x1080 video&audio game at 30 FPS with FRAPS to my secondary (spinning) HD, FRAPs does *ZERO* compression so imagine the data rate there
    5) Running a VOIP program to talk to my buddy (Ventrillo, with server set at highest settings)
    6) Running Audacity to record my own audio

    This was all being done simultaneously, and Windows (and the other pieces of software involved) was rock-fucking-solid. The only limit to my recording time wasn't "how long until shit crashes" but "how much disk space is left on my secondary HD. This is what I do like 4-5 days a week; this is my workflow. (Ignoring game availability issues) is it even possible to do this workload in Linux? Or OS X for that matter?

    Yes.

    1. Unfortunately, the best Linux GPU drivers are actually Intel, right now, but if you have something that can be done on IvyBridge graphics, it'll actually run faster under Linux.
    2. Steam Overlay (from what I can tell) seems like a pretty lightweight app, so it shouldn't hurt. Overlays are trivial on any halfway-recent video card. Pretty much everyone supports AIGLX now, and has since ~2005. Even the XComposite extension was supported back before XP came out.
    3. Not sure how XSplit works, but, again, it shouldn't be an issue. I think I've seen VNC-based solutions that will do this - VNC server is plenty lightweight, and then a lightweight client can connect to it and grab the data it wants. I'm pretty sure OpenAL can dump your audio stream to other grabber apps. PulseAudio and GStreamer (DirectShow-like framework) definitely can.
    4. Recording the stream to disk would work the same way, and disk bandwidth management is fantastic.
    5. Never heard of a problem with any VOIP on Linux, at least not since Skype finally figured out how it works.
    6. Audacity works perfectly, and there are other options as well. One called TimeMachine (from long before Apple's product of the same name) has a record button that starts 20 seconds before you click it, because it keeps a rolling history buffer in RAM.

    Really, this isn't a hard job. If I were doing this on Linux, I'd also have some batch job running in the background at a low priority. If you're doing audio recording and/or effects, then setting up near-realtime latency is much easier on Linux. I can easily get response within a few milliseconds with a MIDI SoftSynth running on my Pentium-III laptop.



  • @Circuitsoft said:

    I'm used to using tools that assume they're running in a scriptable environment. That way they don't need to embed their own almost-useful language.

    You're used to using inferior environments where the only thing scriptable is the CLI.

    @Circuitsoft said:

    Very few appliances make noise at all, let alone use PulseAudio.

    Way to move the goalposts. So one minute we're talking about GUIs and the next minute it's like "Well, this is embedded so none of that shit has to work."

    @Circuitsoft said:

    While I do have that issue on my desktop (using Debian Stable), I've had no problems on my Gentoo laptop, and on the desktop it can be fixed by restarting pulseaudio (pulseaudio --kill; pulseaudio --start) and I don't even have to kill clients.

    Stability: having to restart your audio daemon because it shit itself again. Man, I remember all those times I had to restart the audio daemon on Windows... Oh, and restarting pulseaudio doesn't always fix the issue, sometimes rebooting is the only way out.

    @Circuitsoft said:

    No, the problem is that I wrote the user manual with the assumption that the software had already been installed. Also, connecting it to the domain screwed with the network settings causing it to no longer connect to the ethernet-connected test equipment that was part of the testing system.

    So you have extremely-specific requirements and somebody didn't follow them: must be the OS' fault! Meanwhile, if they diverge from the Linux instructions at all they'll probably overwrite their filesystem superblock and fry the whole system. But Windows is easier to break, gotcha.

    @Circuitsoft said:

    ...but their network people only know Windows and are afraid of it so they don't touch it. That's why it didn't break as soon as they turned it on.

    Apparently you never learned that an anecdote is not a proof. And what's more, your entire argument here seems to be "Linux is so user-unfriendly that most people are afraid to even touch it, hence why things were stable!" Hey, my car is rusting through in several places but people are too afraid of getting tetanus to steal it, so it's better than a Ferrari!

    @Circuitsoft said:

    Linux can start and finish compiler processes, open and close files, and manage caching far, far better than any Windows I've seen.

    This is not a citation, this is one person's opinion, with absolutely no backing whatsoever.

    @Circuitsoft said:

    7 is much better than XP in that regard...

    And Linux beats the pants off Multics. Whoo, let's all compare all modern software to shit that is nearly half my age!

    I have a serious question: is this kind of brain damage a prerequisite for using FOSS? Because apparently I missed out on that requirement. But I've hardly ever read a pro-FOSS argument that didn't bring up Win9X or DOS or some other fucking obsolete technology, and then tried to compare it to modern FOSS. And I'm not talking about shit I read a decade ago (although people were doing it then, too) but about shit I'm reading today, in 2013.

    @Circuitsoft said:

    I think I've seen people who have configured it to a black background and white text, but that may have been 3rd party tools.

    I haven't used it for a few years, so I have no idea what they've added, but I also don't care: it's a terminal, it shouldn't even be there. But back when I did get stuck having to use OSX for a few months, it was S.O.P. to throw out Terminal.app and download iTerm.

    @Circuitsoft said:

    She is a power user and has it configured to only do that on battery, and it was plugged in.

    So she's a "power user" who can't let it go to sleep automatically but has to manually put it to sleep every single time? Clearly she doesn't care about wasting her time, so what's wrong with wasting a little electricity and letting it stay on for the few hours after it finished emptying the trash?

    And what's more, I dispute that it wouldn't have still been faster to just open the settings and set the sleep time and then set it back when she got home, compared to looking up the command to make it go to sleep and then typing in the command.



  • @Circuitsoft said:

    Yes.

    Yes.

    @Circuitsoft said:

    Not sure how XSplit works, but, again, it shouldn't be an issue. I think I've seen VNC-based solutions that will do this - VNC server is plenty lightweight, and then a lightweight client can connect to it and grab the data it wants. I'm pretty sure OpenAL can dump your audio stream to other grabber apps. PulseAudio and GStreamer (DirectShow-like framework) definitely can.

    That doesn't sound like "yes". That sounds like "maybe" or "pretty sure". It also sounds like I'd need to run 2-3 apps to replace the one app I'm using now-- which doesn't even make sense, because then the audio stream would be separate from the video stream? How would you keep them in-sync?

    (BTW, no way in hell VNC is fast enough to stream video, I'm calling bullshit on that right now.)

    @Circuitsoft said:

    Recording the stream to disk would work the same way, and disk bandwidth management is fantastic.

    So what's the Linux equivalent to FRAPS?

    @Circuitsoft said:

    Never heard of a problem with any VOIP on Linux, at least not since Skype finally figured out how it works.

    Well, no, I expect VOIP is perfected enough by this point to not be an issue on any OS.

    @Circuitsoft said:

    Really, this isn't a hard job.

    Then where are all the Linux-using let's players on YouTube? How come this article and its comments make it sound like a near-impossibility? (And he's talking about doing SOLO let's plays!)

    @Circuitsoft said:

    If I were doing this on Linux, I'd also have some batch job running in the background at a low priority.

    Huh? For what purpose?

    @Circuitsoft said:

    If you're doing audio recording and/or effects, then setting up near-realtime latency is much easier on Linux.

    On Windows you don't have to "set it up", it's just there, working, all the time. Even if the only sound your computer ever makes is an Excel error beep.

    @Circuitsoft said:

    I can easily get response within a few milliseconds with a MIDI SoftSynth running on my Pentium-III laptop.

    Seriously? You're a fucking troll. Fuck this noise.



  • @blakeyrat said:

    So unplug it. Problem solved. You honestly didn't think of that?

    How GUI of you. No, the correct solution is to buy a PIC and wire it into the outlet the computer is plugged into, then write a C application to shut off the outlet after it receives a notification on the serial interface that is connected to the Mac.



  • @Circuitsoft said:

    @blakeyrat said:
    @morbiuswilters said:
    You lie. Linux and OSX are far easier to break than Windows. Hell, I don't even have to do anything to have pulseaudio be like "Ha ha, I'm going to just stop making noises!"

    That's actually pretty convenient.

    The view of Windows that most open source fans have is so divergent from reality that it boggles my mind sometimes.

    Here's what I was doing with my Windows last night:
    1) Playing a brand-new CPU/GPU-intensive game
    2) Running the Steam overlay in it
    3) Running the XSplit video grabber in it, broadcasting to the Internet (720p, CD-quality audio)
    4) Recording the entire 1920x1080 video&audio game at 30 FPS with FRAPS to my secondary (spinning) HD, FRAPs does *ZERO* compression so imagine the data rate there
    5) Running a VOIP program to talk to my buddy (Ventrillo, with server set at highest settings)
    6) Running Audacity to record my own audio

    This was all being done simultaneously, and Windows (and the other pieces of software involved) was rock-fucking-solid. The only limit to my recording time wasn't "how long until shit crashes" but "how much disk space is left on my secondary HD. This is what I do like 4-5 days a week; this is my workflow. (Ignoring game availability issues) is it even possible to do this workload in Linux? Or OS X for that matter?

    Yes.

    Here we see the blakey-rat in its natural habitat, bashing OSes that are not his own. The blakey-rat has no way of knowing what Linux or BSD are like, as he has never seen one himself. Foolishly, the Circuitsoft steps into the blakey-rat's territory. Scene omitted for family audiences. Once again, the fallacy of "it runs on this, so this must be it" has defeated logic. Terrifying. And amazing.

    The reproductive habits



  • @Ben L. said:

    The reproductive habits

    Nice effect.

    But no, the point I was getting at is that the stuff I do on my computer literally cannot be done on Linux. So it doesn't matter how "superior" Linux is to me-- until Linux can do what I need, it's a non-starter.



  • @blakeyrat said:

    @Ben L. said:
    The reproductive habits

    Nice effect.

    But no, the point I was getting at is that the stuff I do on my computer literally cannot be done on Linux. So it doesn't matter how "superior" Linux is to me-- until Linux can do what I need, it's a non-starter.

    What do you mean "literally cannot be done"? I can quite easily record audio and video while playing games and talking on Mumble. And who is this candleja


  • Discourse touched me in a no-no place

    @Circuitsoft said:

    I haven't seen many GUI tools that can do sequenced commands like that.
    That's because such graphical workflow systems get very complicated, very quickly. You can get 90% of the effect with 10% of the effort by using a scripting language of some sort. Even an old MS-DOS batch script let you do a useful amount without spending much time fussing with it.

    Sequenced stuff is just very easy to describe with written text.



  • @dkf said:

    Sequenced stuff is just very easy to describe with written text.

    And it's of course utterly impossible to have written text in a GUI-- that's why when you do word processing with Word, you have to type in hieroglyphics.

    Wait, what?



  • @blakeyrat said:

    @dkf said:
    Sequenced stuff is just very easy to describe with written text.

    And it's of course utterly impossible to have written text in a GUI-- that's why when you do word processing with Word, you have to type in hieroglyphics.

    Wait, what?

    That quote and the response juxtaposed confuse me. I think I need to go lie down.



  • @morbiuswilters said:

    They start at zero to make pointer arithmetic easier for the C compiler to implement. 2) This is irrelevant. Modern languages should strive to make bugs difficult, not adhere to some arbitrary mathematics conventions.

    Sorry, it isn't some arbitrary mathematical convention. I am sorry you have a massive problem with the concept of zero, but most educated people stopped having problems with the concept of zero sometime before the birth of Christ. Like it or not, development/computer science has quite a lot of concepts pulled from mathematics, one of which is zero-based counting. I will leave this here:

    http://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD831.html

    @morbiuswilters said:

    It's not an issue of "grapsed". It's an issue of "this is implemented in a way that guarantees a certain amount of bugs, even from experienced programmers". You might as well say the same thing about buffer overflows: they're easy to understand and check for, but they still slip through production software created by senior-level programmers all the time. Most modern languages do bounds-checking because humans are not perfect; if your tool is built on the premise that humans aren't going to make mistakes, it's a shoddy tool.

    I wasn't saying that, but seriously, having problems with basic programming concepts like iteration and controlling iteration... oh, come on.

    Besides, I have never seen one "senior"-level programmer worth the title have problems with zero-based counting. Also, I have seen more bugs because somebody decided to break with convention and start counting at 1.

    @morbiuswilters said:

    Lots of languages have foreach and zero-based indexing; the former is great, the latter not. Besides which, I do actual work with programming languages, so functional languages are a no-go.

    Functional languages are programming languages. I have no idea what you are talking about, unless this is some form of elitism, and tbh I don't have much time for that anymore. JavaScript is functional, and C# has some functional bits put in.

    @morbiuswilters said:

    Then why does it even matter to you? It matters to me because when I do have to use indexes, I find it irritating that we're still stuck with this ass-backwards idiom.
     

    It's only ass-backwards because you think so. The rest of the development community doesn't really agree with you, and tbh this is the first time I have even seen it discussed.



  • @Circuitsoft said:

    1. Unfortunately, the best Linux GPU drivers are actually Intel, right now, but if you have something that can be done on IvyBridge graphics, it'll actually run faster under Linux.

     

    The best Linux open-source drivers are Intel's; however, that doesn't mean they are the best overall. The Nvidia drivers have always been miles ahead of anything else performance-wise on Linux, and their cards (even lower-end ones) are far faster at 3D than anything Intel can currently produce.

     



  • @boomzilla said:

    @lucas said:
    They start at zero because mathematicians start counting at zero, coming from a maths heavy background I find it makes sense.

    Bullshit. Mathematicians count from 1 just like anyone else.

     

    First, mathematicians define a one-dimensional vector space.

     


  • Trolleybus Mechanic

    @morbiuswilters said:

    The saddest part is that rather than learn how to actually build good software, we have hundreds of thousands of man hours pissed away into creating 100 shitty terminals and 100 shitty windowing managers. It would be like a carpenter saying "Well, I could build myself a single house that would last a lifetime, or a series of tar paper shacks that have a Mean Time Between Collapses (MTBC) of a few months."
     

    That's what happens when:

    a) The architect's boss thinks that blueprints are a silly waste of non-productive time, because every minute you spend doing stupid drawings is a minute you're not building a house

    b) The development company's board of directors is constantly measuring dollars per hour, and believes in the "80%" rule as the gospel truth-- but they interpret that as "as long as it's good enough, it's good"

    c) The consumers want to spend the absolute bare minimum possible on the house. There's a constant race to the bottom, and that race has pushed the value of the average house several orders of magnitude below market value. You're now competing with pre-fabs from Walmart, and the customer's nephew who "did some woodworking in grade school" and "knows houses"

    d) The terms "architect" and "builder" get ill-defined to the point where they are used interchangably. You now have a massive overstock of "qualified" personnel to work as architects. The education system pumps them out because they're cheap to train ("This is a nail. 50% of them will have the head on the wrong way so throw them out."), and running their programs are extremely profitable. Because of the mass glut of workers, builders are hired as architects on a builder's salary-- or a reasonable fraction of a builders salary, because there's still competition

    e) So now with clueless managers, greedy owners, unreasonable consumers and incompetent workers, you get your tar shacks as "industry standard", and that's just the way it is.

    * this isn't even taking into account that no one can even agree on which fucking screw type to use. Phillips? Nah-- I invented my own that uses a circular indent with a magnet. Except that the screws themselves are ALSO magnetized, so the magnet sticks extra hard. Of course, there's no standard as to North-Pole Head Screws or South-Pole Head Screws. And they aren't visually distinguishable. And when you buy a pack of them, 40% are NPH, 40% are SPH, 10% didn't get magnetized and 10% lost their magnetization. So you'll need both a North-Pole Head screwdriver and a South-Pole Head screwdriver and you'll need to try them both each time. And if you don't like it you can always magnetize them yourself, you screw-noob!



  • @lucas said:

    @Circuitsoft said:
    Unfortunately, the best Linux GPU drivers are actually Intel, right now, but if you have something that can be done on IvyBridge graphics, it'll actually run faster under Linux.
    The best Linux open source drivers are Intel's, however it doesn't mean they are the best. The nvidia drivers have always been miles ahead of anything else performance wise on Linux and their cards (even lower end) are far faster at 3D than intel can produce currently.
    Ivy Bridge graphics are actually a lot faster than anything Intel has ever done before. The Intel HD 4600 is comparable to an nVidia GTX260 and right up there with an 8800GTS. While that's not high-end, it is quite respectable out of an on-die graphics controller. One good thing Apple has done for us lately is pushed Intel to make better on-chip graphics.



  • While they are improving, the 8800GTS is about six years old; I have an 8800GT sitting in a box in my garage. It depends what you want to do: Intel graphics are fine as long as you're not doing any serious 3D. I am glad they are catching up; hopefully my next notebook's graphics chip won't suck.

