Top 10 dead or dying computer skills - in 2007


  • BINNED

    C++ can be saved, but only if they drop features instead of adding more.
    C is going to stay (unless Rust takes off even in the embedded world); it is a portable way of writing assembly language. C is shameless and does not try to be what it is not.



  • IPX, for example.


  • Trolleybus Mechanic

    @fbmac said:

    (post withdrawn by author, will be automatically deleted in 24 hours unless flagged)

    You keep saying that term, but I don't think you know what it means.



  • Have you ever worked on a real-time embedded system? One that is required to have response times measured in microseconds?

    Oh, you haven't? Maybe you should stop bitching about the tools that other people use to do their jobs, then.

    C# and Java are pretty useless for tasks like that. C and Ada, on the other hand, are pretty good for that type of work. I could spend quite a bit of time telling you why C#, Java, and countless other languages aren't any good for the type of work I do, but they do have their place. I wouldn't presume to tell you that your pet language is shitty and needs to die (though if your pet language happened to be Malbolge or Brainfuck, one could make a compelling argument ...).

    Instead I'd just be telling you that your pet language isn't suited for the types of work I do every day. Maybe you should try to understand other people's perspectives, instead of being such a short-sighted argumentative prick all the time. You don't know everything, and your "everybody else is stupid and the way I see things is the right way to see things 100% of the time, no exceptions" attitude is going to bite you in the ass hard one of these days. Which, you know, is one way to learn things, but I've found that it is often more productive to learn from other people's mistakes rather than insisting on making the exact same mistakes on your own.



  • @arthurdent421 said:

    Have you ever worked on a real-time embedded system?

    Nope!

    I also didn't read the rest of your post!



  • @arthurdent421 said:

    your "everybody else is stupid and the way I see things is the right way to see things 100% of the time, no exceptions" attitude is going to bite you in the ass hard one of these days.

    We get to see that in his bitching on a regular basis.


  • 🚽 Regular

    @mott555 said:

    I'm curious about Rust but haven't actually seen it in use yet, and I doubt half the platforms we support will have a Rust compiler anytime soon.

    Yep, Rust looks very interesting, but it would be way out in the future if it happens at all. Even if there were a compiler for the target, there is no way most places/customers would let me use a community-made compiler instead of the manufacturer-supported ones, and even on the large parts C++ support is pretty grudging. It's C, C, C and C, and we'll like it ;)



  • I expect they are mostly talking about networks using protocols such as OSI (which was not a protocol per se, but a model of protocols that was subsumed by the TCP/IP model; several OSI protocols existed, all meant to replace TCP/IP, most of which are long forgotten), X.25 (mostly used by Big Iron in the days of CompuServe and Tymnet), ARCNET (the story of the rise and fall of DataPoint is one for the record books, truly bizarre), NetWare (IPX/SPX), AppleTalk, the various store-and-forward (S&F) networks like Fido, BITNET and UUCPNET, things like that.

    OTOH, they could be talking about the physical layer, too, and just being sloppy about saying it. Token Ring, older forms of Ethernet such as thicknet, the varied coax standards, all pretty much gone with the snows of yesteryear. One hopes.

    And I was recently shocked to see an ad for a ColdFusion position. Seriously? In 2015? WhoTF uses that still (or ever did)?


  • Winner of the 2016 Presidential Election

    @ScholRLEA said:

    ColdFusion […] WhoTF uses that still (or ever did)?

    Paging @Yamikuronue



  • @xaade said:

    The problem isn't that young people don't have good ideas.

    It's that they don't have enough life experience to separate the good ideas from the bad ones.


    https://www.youtube.com/watch?v=3ABg0Vnpo50





  • Thanks, I'd forgotten about IPX and X.25.

    Ah! Those times when you had to install TCP/IP support in Windows 95-98


  • Discourse touched me in a no-no place

    @Eldelshell said:

    Those times when you had to install TCP/IP support in Windows 95-98

    I can't remember if 98 came with TCP/IP support, but I do remember installing stuff from my ISP to make the modem work nicely in Windows 3.1 (which I later self-upgraded to 95, so I have no idea what state the OS was in on that machine).

    IPX was nice provided all you were doing was a LAN. The suck happened when you wanted to work with a MAN or WAN. A broadcast discovery protocol does not scale. DNS was the first killer app (err, protocol) of the Internet; it made it possible to reach a global scale without driving every single admin completely crazy.


  • BINNED

    @Eldelshell said:

    X.25

    I'm repressing those memories.


  • BINNED

    98 had built-in TCP/IP


  • I survived the hour long Uno hand

    We're switching to Node.js :)



  • @Yamikuronue said:

    We're switching to Node.js :)

    :facepalm:


  • Grade A Premium Asshole

    @arthurdent421 said:

    Have you ever worked on a real-time embedded system? One that is required to have response times measured in microseconds?

    Does it have a filesystem?



  • I've worked on some hard real-time systems that didn't have file systems, but that was several years ago. The stuff I'm working on at the moment is really only soft real-time, and I get to have a real filesystem. Oh, and displays, and ethernet! It is pretty nice.


  • BINNED

    @arthurdent421 said:

    soft real-time

    Is it audio processing? That is the only case where I kind of get the "soft" in soft real-time. Otherwise, real-time is about guarantees on worst-case latency; there is no "soft" in it.



  • Keep in mind that many games can be considered soft real-time: Occasional dropped frames are okay, but it reduces the playability and enjoyment of the game.

    From Wikipedia (yeah, yeah, I know): Real-time systems, as well as their deadlines, are classified by the consequence of missing a deadline:

    • Hard – missing a deadline is a total system failure.
    • Firm – infrequent deadline misses are tolerable, but may degrade the system's quality of service. The usefulness of a result is zero after its deadline.
    • Soft – the usefulness of a result degrades after its deadline, thereby degrading the system's quality of service.
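
    To make the distinction concrete, here is a minimal sketch (a hypothetical frame loop in C, not taken from any real engine): under a soft deadline a miss is just logged and the loop carries on, whereas a hard-real-time controller would have to treat the same miss as a system failure.

    ```c
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical soft-real-time frame loop: a roughly 16.7 ms deadline per
     * frame. Missing it degrades quality (a dropped frame) but is not a
     * failure, which is exactly the "soft" case in the list above. */
    #define FRAME_NS 16666667L

    static long elapsed_ns(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
    }

    int main(void)
    {
        for (int frame = 0; frame < 600; frame++) {
            struct timespec start, end;
            clock_gettime(CLOCK_MONOTONIC, &start);

            /* render_frame();  placeholder for the actual per-frame work */

            clock_gettime(CLOCK_MONOTONIC, &end);
            if (elapsed_ns(start, end) > FRAME_NS) {
                /* Soft: log it and move on. Hard: this would be a fault path. */
                fprintf(stderr, "frame %d missed its deadline\n", frame);
            }
        }
        return 0;
    }
    ```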


  • @cartman82 said:

    7) PowerBuilder

    1. ColdFusion

    We have some old PowerBuilder that still needs to be maintained.
    We only officially stopped new projects from being in ColdFusion last year (when we got a new boss's boss who decided to move away from it finally). We still have to occasionally add new features to existing projects.

    Mostly we are moving to .NET. But we had one lone wolf guy who decided to do his own stuff in JSP/Java...I think for job security...though it looks like I will be part of the team to take it over (I did some JSP/Java in 2000--should be no problem, right?).


  • BINNED

    Good point, but with this definition hard real-time does not exist. A good system guarantees, say, 99.99% availability of data within 1 us, and maybe 99.999999% within 2 us, and so on. And these guarantees usually have to be measured, by running different loads under different conditions, checking for failures, and logging the failure mode. Even critical systems (a fast missile) have to tolerate errors; missing a deadline should not just panic the kernel and let the missile drop. So there is just real-time, with different QoS assurances.
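
    For illustration, a minimal sketch (hypothetical, plain POSIX clock_gettime timing) of the kind of measurement this implies: run the operation many times and count how often it stays within each latency budget, instead of assuming a single absolute worst case.

    ```c
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical measurement harness in the spirit of "99.99% within 1 us,
     * 99.999999% within 2 us": time the operation repeatedly and report what
     * fraction of runs met each budget. */
    #define RUNS 1000000L

    int main(void)
    {
        long under_1us = 0, under_2us = 0;

        for (long i = 0; i < RUNS; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);

            /* do_one_operation();  placeholder for the real workload */

            clock_gettime(CLOCK_MONOTONIC, &t1);
            long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
                    + (t1.tv_nsec - t0.tv_nsec);
            if (ns <= 1000) under_1us++;
            if (ns <= 2000) under_2us++;
        }

        printf("<= 1 us: %.6f%%   <= 2 us: %.6f%%\n",
               100.0 * under_1us / RUNS, 100.0 * under_2us / RUNS);
        return 0;
    }
    ```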



  • AFAIR it did have it, but it wasn't enabled, so you had to add it manually.



  • The definition is incorrect, in any case, because (like far too many interpretations of 'real-time') it fails to consider the minimum time boundary. Real-time performance is a promise or contract to perform an action between time t and time t', which can be enforced (to a greater or lesser degree of QoS) by the software and any underlying system it runs under.

    Thing is, most people forget t and focus exclusively on t'. This is sort of an understandable confusion, since most of the things people (mis-)understand to be real-time either have a computational lag L ≥ t, or have a de facto t = 0.

    Still, if you were to write a bottling-line controller that operated the capper at its maximum physical speed, you would drop a lot of bottle caps, and probably spill a lot of whatever liquid was in the bottles as the capper came down when a bottle was only half-way under it.
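
    To put the [t, t'] window in code, here is a minimal sketch (a hypothetical capper routine using POSIX clock_nanosleep with TIMER_ABSTIME; the 5 ms and 2 ms numbers are made up for illustration): it refuses to actuate before t and reports a contract violation if it finishes after t'.

    ```c
    #include <stdio.h>
    #include <time.h>

    /* Add ns nanoseconds to a timespec, normalizing the result. */
    static void add_ns(struct timespec *ts, long ns)
    {
        ts->tv_nsec += ns;
        while (ts->tv_nsec >= 1000000000L) {
            ts->tv_nsec -= 1000000000L;
            ts->tv_sec += 1;
        }
    }

    /* Hypothetical [t, t'] contract for one capping cycle: do not actuate
     * before 'earliest' (the bottle is not under the capper yet) and be
     * finished before 'latest' (the bottle has already moved on). */
    static int cap_one_bottle(struct timespec earliest, struct timespec latest)
    {
        struct timespec now;

        /* Wait until t; sleeping to an absolute time avoids drift. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &earliest, NULL);

        /* actuate_capper();  placeholder for the real output action */

        clock_gettime(CLOCK_MONOTONIC, &now);
        if (now.tv_sec > latest.tv_sec ||
            (now.tv_sec == latest.tv_sec && now.tv_nsec > latest.tv_nsec)) {
            fprintf(stderr, "missed the upper bound t'\n");
            return -1;   /* contract violated */
        }
        return 0;
    }

    int main(void)
    {
        struct timespec t, t_prime;
        clock_gettime(CLOCK_MONOTONIC, &t);
        add_ns(&t, 5000000L);          /* t  = now + 5 ms (illustrative)      */
        t_prime = t;
        add_ns(&t_prime, 2000000L);    /* t' = t + 2 ms window (illustrative) */
        return cap_one_bottle(t, t_prime) ? 1 : 0;
    }
    ```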

    Now consider what that would mean for an aircraft's surface controls, or for a waldo in a nuclear power plant. Right.

    One thing that can be said with certainty is that there are effectively no real-time applications for a desktop OS like Windows, MacOS, or an unmodified Linux distro (there are real-time variants of the Linux kernel, which rip out things like virtual memory in order to make it an RTOS, but they rarely get used, since no one in their right mind would do that even if it weren't far too large for a reasonable embedded system). Games and audio don't actually count, even if they have properties similar to real time, since there is no actual requirement for RT performance, even a soft one. Yes, a game may not run correctly if the refresh is off, and a sound player may skip, but unlike in an RT system, these interruptions may be caused by the operating system, or even by other applications. If the OS isn't real-time, then neither is the application, period.


  • BINNED

    @ScholRLEA said:

    there are real-time variants of the Linux kernel, which rip out things like virtual memory in order to make it an RTOS, but they rarely get used since no one in their right mind would do that even if it weren't far too large for a reasonable embedded system

    I have used linux-rt in a medical device ;). There are two ways to achieve real-time: one is the dual-kernel approach, and the other is to basically make everything preemptible, plus, yes, changes to virtual memory, and so on.
    In my design I had it run on a dedicated CPU core, tested the hell out of it for weeks, and measured the maximum latency; it was really good, actually.
    Granted, it was not a traditional embedded system (it had 4 cores and a flash drive to load from), and the processing is very well controlled. But no matter what the application layer is, latency has to be measured, because the only real hard real-time is with an FPGA or ASIC and a stable clock.
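
    For anyone wondering what "run it on a dedicated core" looks like from user space, here is a minimal sketch (hypothetical, standard Linux/glibc calls; it assumes a PREEMPT_RT-style kernel underneath, and the core number and priority are made up): pin the thread to one CPU, give it an RT scheduling class, and lock memory so page faults cannot add latency.

    ```c
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* Hypothetical real-time worker pinned to an isolated core (CPU 3 here).
     * The kernel still has to be RT-capable for the latency figures to hold. */
    static void *rt_worker(void *arg)
    {
        (void)arg;
        /* the actual real-time processing loop would go here */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = 80 };   /* illustrative */
        cpu_set_t cpus;

        /* Keep pages resident so the RT path never takes a page fault. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &sp);

        CPU_ZERO(&cpus);
        CPU_SET(3, &cpus);                       /* the dedicated core */
        pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

        int err = pthread_create(&tid, &attr, rt_worker, NULL);
        if (err != 0) {
            fprintf(stderr, "pthread_create failed (%d); "
                            "SCHED_FIFO needs CAP_SYS_NICE or root\n", err);
            return 1;
        }
        pthread_join(tid, NULL);
        return 0;
    }
    ```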

    @ScholRLEA said:

    One thing that can be said with certainly is that there are effectively no real-time applications for a desktop OS like Windows, MacOS, or an unmodified Linux distro

    The dual-kernel mode can actually do that, easily. Think of it as running the desktop OS as a VM under the RT kernel; they communicate through special channels.



  • This I did not know. Thanks for the correction.

    Still, it does take at least a modified kernel for the one running in the VM domain server, if I understand that right. But it's interesting to hear that, nonetheless.

    EDIT: OK, I fixed my initial misunderstanding. I hope.


  • BINNED

    Yes, both approaches need special patches, PREEMPT_RT or Ipipe, but they are on their way to being mainlined (the former is taking a little too long). You can check out Xenomai; they pretty much streamline the whole thing in a nice framework.



  • @cartman82 said:

    WTF is a "PC network administrator"?

    Go back and look at the TIOBE page again; specifically the graph of the top-10 languages. Every single one of those languages is in decline, except maybe VB .NET.

    Scary. Time for a new profession?



  • @dse said:

    the only real Hard realtime is with FPGA or ASIC and a stable clock.

    Not true. You can do hard realtime with a processor as well, sometimes even without requiring you to program it in assembler. Trying to do that on top of a traditional task scheduler is asking for trouble, though.

    I quite like the approach that TI has taken with its Sitara SoC, as used in the BeagleBone Black: the main processing core is a nice capable ARM running at a GHz or so, well suited to running standard OS kernels with high throughput but not particularly predictable timing, and hanging off the side are a couple of much simpler 32-bit peripheral processors running at a couple hundred MHz. Those are completely independent of the core CPU and have tightly specified instruction timings - they're specifically intended for offloading real-time tasks onto.
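
    For a flavour of what programming those peripheral processors looks like, here is a minimal sketch (hypothetical PRU firmware, assuming TI's PRU C compiler with its __R30 output register and __delay_cycles intrinsic; the pin bit and cycle counts are made up): the output timing depends only on the PRU's own fixed instruction clock, not on whatever Linux happens to be doing on the ARM core.

    ```c
    /* Hypothetical PRU firmware sketch (TI PRU C compiler assumed).
     * __R30 is the PRU's direct output register; writes to it and the
     * __delay_cycles() intrinsic have fixed, documented cycle timing,
     * independent of the ARM core and its OS. */
    volatile register unsigned int __R30;

    #define OUT_PIN (1u << 0)   /* made-up output bit for illustration */

    void main(void)
    {
        while (1) {
            __R30 |= OUT_PIN;        /* drive the pin high                 */
            __delay_cycles(100);     /* fixed delay, counted in PRU cycles */
            __R30 &= ~OUT_PIN;       /* drive the pin low                  */
            __delay_cycles(100);
        }
    }
    ```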


  • BINNED

    Yes, very interesting approach! That would make them ideal for many real-time tasks; most people will probably use them to generate PWM waveforms. I will have to buy one now for my digital camera project (once I can have a ~~hubby~~ hobby again).
    There is also the NOHZ patch, which can basically get the scheduler's hands off of some CPU cores. I have not used it, but ideally (together with controlling where interrupts get routed) it can achieve something similar. Reserving a CPU core for real-time work (like a co-processor), though, is only good if you have full control over your real-time applications (i.e. if it is an appliance). While some sort of cooperative threading is doable, any other complex application with multiple threads needs a scheduler and priority boosting (the latter needs an RT kernel).



  • @dse said:

    once I can have a hubby again

    Not sure why this is required. But what did you do with/to your last one?

