Mill CPU



  • @scholrlea said in Intel making us slow down:

    Similarly, any comments about the Mill

    Thanks for that link @ScholRLEA! Really interesting project. I think this deserves its own thread, because IMHO this thing looks awesome, but I'm not intimate enough with the sort of low-level detail of modern CPU execution to really judge. So, I'd like to hear your opinions.

    From what I got from the ~15 hrs of videos they have as the only real high-level documentation on their site, the idea is basically to abandon out-of-order execution and go for highly parallel execution (VLIW style, they speak of ~30 ops per instruction) with an exposed pipeline and static compile-time instruction scheduling instead. They claim they can achieve a substantial (10x) improvement of the performance / power ratio that way.

    They have quite a few very interesting ideas for how to make that actually work and perform well with general purpose code (as opposed to DSP-style data flow processing), including hardware "None" and "NaR" ("Not a Result") values, two instruction pointers, tight control of the side effects of operations, using virtual addresses in the caches (single 64 bits address space), no general purpose registers but a sort of single-assignment FIFO they call the "Belt" instead, etc etc.
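    To make the "Belt" and NaR ideas a bit more concrete, here's a tiny toy model (entirely my own sketch, not the actual Mill ISA; the names, widths, and belt length are invented): results drop onto a fixed-length FIFO instead of into named registers, and a "Not a Result" marker propagates through arithmetic much like a floating-point NaN does.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// A value that may instead be "Not a Result" (NaR).
struct Value {
    bool nar;   // true => this slot holds no usable result
    long v;
};

// Toy belt: new results land at the front, the oldest falls off the back.
class Belt {
    std::deque<Value> slots;
    std::size_t capacity;
public:
    explicit Belt(std::size_t n) : capacity(n) {}

    void drop(Value x) {
        slots.push_front(x);
        if (slots.size() > capacity) slots.pop_back();
    }
    // Operands are named by position relative to the newest result.
    Value get(std::size_t pos) const { return slots.at(pos); }
};

// An add that quietly propagates NaR instead of faulting.
Value add(Value a, Value b) {
    if (a.nar || b.nar) return {true, 0};
    return {false, a.v + b.v};
}
```

    The point being that operands are addressed by how recently they were produced (single-assignment, so no rename hardware needed), and a NaR quietly poisons downstream results instead of trapping, which is part of what lets the compiler speculate so freely.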

    They also have some very interesting security features, with byte granularity for access permissions, per-thread permissions, a protected stack, isolated functions, and more.

    Anyone more familiar with the project? What do you think? Do you think it can deliver what they promise?

    Most of the criticism I heard was doubts that one can get enough instruction level parallelism to actually use its instruction width in general purpose code. Their counter argument is that the novel features of the CPU actually make it possible to get it, which I'm inclined to believe; or at least it seems plausible that most of the limitations of existing CPUs which people have experience with don't apply on this thing.



  • @ixvedeusi It sounds a lot like the Intel Itanium, which sunk like the iTanic because it wasn't x86 and included a terrible hardware x86 emulator. When companies just threw their existing x86 binaries on there and didn't see massive improvements (because you have to compile for Itanium to get them, stupids!) and instead saw ridiculous slowdowns, they completely ignored the platform until Microsoft, HP, and Intel gave up and went away. I think the Mill won't see any kind of broader release because there's no x86 emulation and because people still feel burned by Itanium.



  • @twelvebaud said in Mill CPU:

    @ixvedeusi It sounds a lot like the Intel Itanium, which sunk like the iTanic because it wasn't x86 and included a terrible hardware x86 emulator. When companies just threw their existing x86 binaries on there and didn't see massive improvements (because you have to compile for Itanium to get them, stupids!) and instead saw ridiculous slowdowns, they completely ignored the platform until Microsoft, HP, and Intel gave up and went away. I think the Mill won't see any kind of broader release because there's no x86 emulation and because people still feel burned by Itanium.

    A very good analogy. The Itanium was a fantastic processor, but as you point out it was doomed because of the x86 issue. On the other hand, the DEC Alphas made good use of the processor's capabilities.


  • Impossible Mission - B

    @thecpuwizard said in Mill CPU:

    The Itanium was a fantastic processor

    Not sure about that. I actually asked about that a while back on Programmers.StackExchange, and got some very interesting replies back detailing some fundamental-level flaws with the Itanium, including stuff that had nothing to do with the general narrative about the whole compilation story being difficult.



  • @twelvebaud said in Mill CPU:

    Intel Itanium

    Yeah, I don't know much about the Itanium (off to wikipedia I go); Godard mentions it several times in his talks; at one point he says something along the lines of "the Itanium could have become the Mill if [the original designer] could have finished it". I think this might stand more of a chance because it's so massively different that x86 support is not something anyone with any sense could expect. There is no doubt that code would have to be re-compiled for the Mill; this is way too radical for any hope of binary-level compatibility, and the talks are quite clear about that.

    I think it all depends on whether the Mill finds a suitable niche where binary compatibility is less of an issue. Phones or other portables might be a promising target; the alleged decrease in power consumption would be particularly welcome there.


  • Discourse touched me in a no-no place

    @ixvedeusi said in Mill CPU:

    Really interesting project.

    Hard to tell without comparing it on real code. They make lots of claims, but all sorts of people do that all the time; the proof of the pudding is in the eating.

    Also, there's no indication of actual hardware specs. They've not done a tapeout for fabrication (well, either that or the test builds had a criminally low success rate and they're going to have to redo the masks from scratch). That's a bad sign for a processor architecture; it puts them firmly in the area of vapourware. Without knowing a little bit about the type of fab process they're targeting, it's really difficult to say much about the performance claims, but they're really unlikely to beat Intel or AMD on sheer performance any time soon (because they won't have access to the required fabs), assuming we're talking general workloads where the major costs are due to actual data movement.


  • Discourse touched me in a no-no place

    @ixvedeusi said in Mill CPU:

    Phones or other portables might be a promising target, the alleged decrease in power consumption would be particularly welcome there.

    Not as much as you'd think. The big energy costs of a phone are in running the radio (for all types of communication path) and the screen. The CPU itself isn't as big a contributor (provided it idles well) and ARMs designed for mobile use don't really use that much power; if they did, they would have big opportunities for energy reduction. (The ARM instruction decoder is a lot simpler than the x86 one ever was.)

    IME, on-chip memory uses far more space than the actual CPU core itself.



  • @dkf said in Mill CPU:

    They've not done a tapeout for fabrication (...). That's a bad sign for a processor architecture; it puts them firmly in the area of vapourware.

    They're still a long way off a tapeout, it seems; they claim to be working on an FPGA implementation as proof of concept. They still run in simulation, have a compiler toolchain, and claim to be working on porting "the L4 kernel". So yeah, that's still deep in the vapourware stage.

    @dkf said in Mill CPU:

    Without knowing a little bit about the type of fab process they're targeting

    They say they target "normal" fab processes, they claim their advantage comes purely from architecture.

    @dkf said in Mill CPU:

    major costs are due to actual data movement

    They claim to have eliminated a lot of that, at least as concerns in-CPU data movement.


  • Discourse touched me in a no-no place

    @ixvedeusi said in Mill CPU:

    They're still a long way off a tapeout it seems, they claim to be working on an FPGA implementation as proof of concept.

    That could be 5–10 years out from general fabrication, especially if they're looking to get into being a provider in the SoC market (that is the main area for really low power work; desktops/servers mostly go for sheer grunt-power). As such, they're competing with processors that haven't even started to be designed yet, but where they're being done by companies with more established fabbing workflow solutions available to them. Heck, even our next-gen hyper-parallel research chip is well in advance of them, and we're just a university.

    Right now, they're more likely to run out of VC money than they are to make it to delivery in a receptive market.



  • @dkf One thing to keep in mind is that this is being done more or less as a scratch project, with little up-front funding (none, initially) by people who mostly have day jobs. They are taking things very slowly, on purpose. Doing hardware on a volunteer basis, with only small amounts of funding offered (and mostly turned down) but not really solicited, is a slow process at best.

    They also have been slowly putting together a large number of patent applications, which has been another reason to go slow about things.

    This means that after 13 years (I think) of part-time development, they are only just now looking at an FPGA implementation (not taping out for silicon, not even as an ASIC) to test how the instructions work beyond their simulations. It also means that they haven't actually published much other than some of those patent applications and the aforementioned videos, nor have they publicly released a simulator for it AFAIK. They do say they have a LLVM back-end that targets it, but again, I don't think it is publicly available yet.

    My impression is that they are being particularly cagey about it, but not in a pig-in-a-poke way so much as a "we don't want to screw this up by going too fast" way. It seems like those in the project, especially Ivan Godard, are seeing it as a labor of love, as something they wanted to try but knew from the start was too risky for most corporate backers and too long-term for your typical GO-GO-GO venture capitalists. In other words, they are doing it mostly for e-peen, ego satisfaction, and their 'legacy' as designers, and are well aware that they probably won't get a cent back out of it.

    They certainly speak as if they don't expect it to succeed - they want it to at least be enough of something to make their money back, but they talk as if they are figuring it is a longshot. Godard in particular makes a big deal about being cautiously restrained, though guardedly optimistic, in many of his videos and interviews.

    They also have the sense to realize that they aren't going to take on Intel, or even ARM Holdings. They are targeting things like HPC blade systems, industrial control systems and avionics as the markets they are most likely to get some leverage in, ones where legacy hardware isn't a consideration in the first place.

    All in all, it appeals to me personally because, as I said elsewhere, this looks like a more practical solution to the sets of problems I had in mind when I floated the RSVP idea years ago, by people who actually know what they are doing (Godard apparently has been designing DSPs since the 1980s, with the TriMedia being his big claim to fame prior to this).

    That, and the fact that they really do seem aware that this isn't a sure thing. That alone goes a long way towards establishing credibility, in my mind. Now, it could be a strategic thing, trying to get people to take them seriously, but if so, that by itself shows a certain savvy often missing in projects like this.



  • @dkf said in Mill CPU:

    @ixvedeusi said in Mill CPU:

    They're still a long way off a tapeout it seems, they claim to be working on an FPGA implementation as proof of concept.

    That could be 5–10 years out from general fabrication, especially if they're looking to get into being a provider in the SoC market (that is the main area for really low power work; desktops/servers mostly go for sheer grunt-power).

    Neither of those are even in their consideration, AFAICT. They are going straight for industrial embedded systems, avionics for airliners and spacecraft, and HPC servers, and aren't looking at consumer markets at all. They have specifically mentioned that they want to design it to be easily radiation and temperature hardened, even if it isn't a necessary part of their design specs.

    Keep in mind, too, that they are going with a fabless, IP holder model, and don't intend to run a fab themselves at all - it is only a contingency for the case where no fabricator licenses the design.

    They also have deliberately decoupled the programmer's ABI and the actual hardware - they intend to operate with a formal-to-actual opcode resolver (in software, I think, not hardware, meaning that the program binaries would have to be processed and saved separately before they could be used on a specific implementation), similar to how some older mainframe families (specifically the System/360) worked. In this approach, the assemblers or compilers would generate one set of cross-family-portable opcodes to represent the operations, but the particular implementation would be free to use a different binary representation in order to allow CPU designers to tweak the real opcodes to match their specific designs.
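    A toy version of that formal-to-actual resolution step might look like this (all opcode numbers are invented for illustration; this is just to show the shape of the idea, not their actual tooling):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

// Family-wide "formal" opcodes that compilers would emit.
enum FormalOp : std::uint16_t { F_ADD = 1, F_LOAD = 2, F_STORE = 3 };

// Install-time "specializer": rewrite portable opcodes into whatever
// binary encoding this particular family member actually uses.
std::vector<std::uint16_t>
specialize(const std::vector<std::uint16_t>& portable,
           const std::map<std::uint16_t, std::uint16_t>& member_encoding) {
    std::vector<std::uint16_t> concrete;
    concrete.reserve(portable.size());
    for (std::uint16_t op : portable)
        // .at() throws for an op this member doesn't encode,
        // i.e. "not supported on this chip".
        concrete.push_back(member_encoding.at(op));
    return concrete;
}
```

    Each family member would ship its own table, so the same portable binary gets specialized per chip before it runs.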

    This indicates to me that they are not seeing binary compatibility as a priority, something that is pretty much necessary for a system used for general application software in the current market (though for things like Android, this is less important as most of the software is distributed as JVM bytecode rather than native code). While some other mainframe manufacturers (particularly Honeywell, IIRC) followed IBM's lead on this back in the day, the approach fell out of favor precisely because it didn't fit well with a consumer-driven software market.

    They very specifically have stated that using it in anything like an SoC would be the call of the people licensing the design, not theirs, and that they don't actually expect it to happen in the first several years of it being a marketable product, if ever.


  • Discourse touched me in a no-no place

    @scholrlea said in Mill CPU:

    They also have been slowly putting together a large number of patent applications, which has been another reason to go slow about things.

    That's idiotic. Patents have a limited lifespan; if they take it slow, others will be able to use the IP before they've had a chance to properly monetise it.



  • @dkf said in Mill CPU:

    @scholrlea said in Mill CPU:

    They also have been slowly putting together a large number of patent applications, which has been another reason to go slow about things.

    That's idiotic. Patents have a limited lifespan; if they take it slow, others will be able to use the IP before they've had a chance to properly monetise it.

    I said they were going slow putting the patents together; they do, apparently, intend to speed things up now that they are submitted, they just wanted to make sure that the i's are dotted and the t's are crossed before they filed them. Yes, there was a risk of someone beating them to the USPTO, but since no one is really working in this particular space right now - most others in the field are taking things in a very different direction - they seem to have felt they could afford the time to get it nailed down tight first.

    At least, that's the story they give. Make of it what you will; just because I like the ideas I am hearing from them doesn't mean I have any ego commitment or fiscal investment in what they do.

    I mean, realistically, the whole thing of putting together the patents before it is working in the first place sounds dicey to me, both in terms of the cost of filing, and in terms of defending the patent if they needed to. But that's not my call, and IANAL even if it were.



  • @dkf It's time to make patents last 95 years like copyrights!

    This way transistors would still be patented by someone and cost $15 each, and we wouldn't have to worry about processor architectures.



  • @anonymous234 said in Mill CPU:

    make patents last 95 yearsforever like copyrights

    FTFY 🐠

    The Public Domain Report said:

    Hey gang! It's time for Public Domain Day again, where we list all of the music, film, books, and other pieces of art leaving copyright today. And here's that list again, just like last year:

    Nothing. Nothing at all.

    Happy New Year.


  • Discourse touched me in a no-no place

    @scholrlea said in Mill CPU:

    no one is really working in this particular space right now

    It reminds me of stuff that was worked on by an old friend of mine back when he was doing his PhD. Which was in about 2000. And was based at least partially on work done by IBM back in the 1960s and '70s…



  • @ixvedeusi said in Mill CPU:

    The Public Domain Report said:

    Hey gang! It's time for Public Domain Day again, where we list all of the music, film, books, and other pieces of art leaving copyright today. And here's that list again, just like last year:
    Nothing. Nothing at all.
    Happy New Year.

    With any luck, this is the last year for that, especially since RIAA, MPAA, and a certain Floridian company don't appear to be pursuing further extensions this time around.



  • @dkf said in Mill CPU:

    It reminds me of stuff that was worked on by an old friend of mine back when he was doing his PhD. Which was in about 2000.

    Well, they've been at this since around 2005 and are quite open about how they took inspiration from the original EPIC, which according to Wikipedia originated in 1989. So yes, I suppose there are quite a few non-new ideas in there; but I doubt that these early projects went quite as far as they intend to (I'd be happy for you to prove me wrong on this), and this is explicitly a commercial endeavor, not university research.



  • @twelvebaud That's what I'm hoping for.

    Steamboat Willie is due to enter public domain in 2023, that's 5 years from now. The two previous copyright law extensions happened 8 and 5 years before it was due, so if it was going to happen this time it's starting to run out of time. Plus it would cross the 100 year barrier now which would probably get too much bad publicity to be worth it. And Disney just bought Fox so their assets are more diversified.


  • Discourse touched me in a no-no place

    @ixvedeusi said in Mill CPU:

    this is explicitly a commercial endeavor, not university research

    So?

    The real concerns are things like “what is the memory bandwidth?” and “what sort of co-packaging options will they be targeting?” as those make a very big difference to both gross speed and practical power consumption per speed unit. It's good that they've managed to hook their simulated version behind a conventional compiler (with some funky stuff involved, but that's fine) as that means that they can get software ported fairly easily, yet given the comparative design novelty involved, the concern is that reality won't be nearly as accommodating with their designs.

    And they've got to persuade someone to actually make a chip with their CPU on it. That means persuading someone who is probably happy with their current setup (existing compiled software, etc.) to change… They're going to have to be pretty wonderful to get over that roadbump.



  • @dkf said in Mill CPU:

    So?

    So their main interest is to have that thing actually be used, and as soon as possible.

    @dkf said in Mill CPU:

    The real concerns are things like “what is the memory bandwidth?” and “what sort of co-packaging options will they be targeting?” as those make a very big difference to both gross speed and practical power consumption per speed unit.

    Yes, I'm sure these are important, but not what they differentiate on, so I'd suppose that would be "the usual", whatever that means. I've mostly watched the talks and there's not much being said about these topics, it's mostly about the core.

    One thing I wonder is whether they'll really manage to keep the CPU occupied during DRAM access as well as out-of-order machines do; I'd suppose tricks like hyper-threading can go a long way to fill those holes. There are also quite a few components which seem to go to DRAM behind the scenes. In particular that spiller thing, which holds all the function state, sounds like a potential bottleneck to me.


  • Discourse touched me in a no-no place

    @ixvedeusi said in Mill CPU:

    One thing I wonder is whether they'll really manage to keep the CPU occupied during DRAM access as well as out-of-order machines do; I'd suppose tricks like hyper-threading can go a long way to fill those holes.

    That depends on the clock speed and the overall system architecture. If the design includes a chunk of local RAM in the functional module (a not-unheard-of design decision for embedded systems) then the local RAM will be able to keep up with the CPU core. External RAM will be slower, but might be handled in practice by some sort of DMA controller so that the CPU doesn't (normally) block. Or the whole system could just run incredibly slowly (that's a good way to get great performance-per-watt statistics) and then the RAM will have no trouble keeping up.

    I'm more concerned about how they claim that there's going to be no way to fool the security of the context boundaries. That's the sort of claim that runs deep into “O RLY?” territory. Yes, it will be pretty easy to enforce if everyone uses their compiler, but that's a whole world of difference away from what happens when serious hackers get involved; when the rules are being broken anyway, software-enforced rules have a tendency to not add all that much security. In some contexts, it doesn't matter; in others, it matters a lot. At the very least, they need those security claims verified by someone else not financially affiliated with the success of the processor…



  • @dkf said in Mill CPU:

    software-enforced rules have a tendency to not add all that much security

    The security they talk about is hardware based, they go quite far on that level and there's no mention of security enforcement in the compiler (details are in the security talk).

    For example, a function cannot access the caller's "registers", return addresses are on a separate stack inaccessible by software1, and each protection context has "its own" stack space, even within a single thread. What they describe does seem quite robust in the abstract processor model; I can believe that most known attack vectors are simply not there on this system, but I suppose there would be new vulnerabilities which would exploit leaky abstractions, implementation imperfections, or side effects of speculative execution like Spectre.

    I'd think that judiciously setting up the protection environments in software could go a long way to mitigating the speculation leakage variants. They also have "portal calls" which are supposed to provide very cheap changes of protection environment (function call + 2 cache lines of memory access) which should limit the performance impact of such a setup.

    1 they promise there'll be an API for debugging, which can access all that stuff if you allow it to by giving it permission to read the right memory addresses. Yes, this could poke a hole into the security, but that "bypass" system is in fact subject to the very same protection mechanisms as the rest of the program, so something in the software must give the required access rights for it to work.
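    For illustration, the separate return-address stack amounts to something like this software "shadow stack" sketch (my own toy model; on the Mill this lives in hardware). The classic stack smash works by overwriting a return address sitting in ordinary memory next to a buffer; giving return addresses their own stack removes that target entirely.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Return addresses kept apart from ordinary data: nothing the program
// can overflow sits next to them.
class ShadowStack {
    std::vector<std::uintptr_t> addrs;  // not reachable via normal loads/stores
public:
    void on_call(std::uintptr_t ret) { addrs.push_back(ret); }
    std::uintptr_t on_return() {
        std::uintptr_t r = addrs.back();
        addrs.pop_back();
        return r;  // control flow consumes this, not attacker-writable memory
    }
};
```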


  • Discourse touched me in a no-no place

    @ixvedeusi said in Mill CPU:

    I'd think that judiciously setting up the protection environments in software could go a long way to mitigating the speculation leakage variants.

    If they're going for embedded, it's quite possibly a minor thing in the first place. Most embedded systems don't actually run arbitrary code, but rather just what their manufacturer intended. The general computing market is more security conscious, but the market effects due to the very large incumbents and existing software bases make that a very difficult route to making any money.



  • @dkf said in Mill CPU:

    Most embedded systems don't actually run arbitrary code, but rather just what their manufacturer intended.

    Ya might think...but...

    About two years ago I was involved in a large industrial system. One component was a thermal sensor and display. The only "programmatic access" was via an RS-485 link for establishing trip points, etc..... Occasionally the display (old school 7-segment) would start displaying strange sets of segments. We thought it might be some type of diagnostic code and contacted the vendor. They said no, so it must be a defective device. We exchanged it, and the new one did the same thing. After the second exchange (three devices replicating the behavior) it was clearly not something at that level. Still the vendor could not reproduce.

    On a lark, we attached a recorder to the 485 line and waited.... After the next occurrence we played back the recorder, but it did not reproduce.... We went one step further and put a simple signal recorder on (the 485 recorder captured characters; the signal recorder, simply transitions)..... BINGO, a reproducible situation.

    Yup, there was a vulnerability that caused the device to execute code that was NOT what the manufacturer intended....


  • Discourse touched me in a no-no place

    @thecpuwizard said in Mill CPU:

    Ya might think...but...

    The whole system needs to be secure, but that's entirely different from saying that the CPU needs to be running a partitioned-security-model OS.



  • @dkf said in Mill CPU:

    @thecpuwizard said in Mill CPU:

    Ya might think...but...

    The whole system needs to be secure, but that's entirely different from saying that the CPU needs to be running a partitioned-security-model OS.

    Quite so. Security is a process.



  • @dkf said in Mill CPU:

    The general computing market is more security conscious

    They claim to be going for general computing, and the architecture seems clearly aimed at that. For the reasons you give, I can't really see how they would be able to enter that market directly, so I think for them to stand any chance at all they'd have to find some niche first where they can establish a strong presence and prove the worth of their approach. This niche might be embedded, avionics, automotive, automation, IoT fads or some other relatively closed environment.



  • @ixvedeusi said in Mill CPU:

    @dkf said in Mill CPU:

    The general computing market is more security conscious

    They claim to be going for general computing, and the architecture seems clearly aimed at that. For the reasons you give, I can't really see how they would be able to enter that market directly, so I think for them to stand any chance at all they'd have to find some niche first where they can establish a strong presence and prove the worth of their approach. This niche might be embedded, avionics, automotive, automation, IoT fads or some other relatively closed environment.

    Uhm, yes, that's exactly their plan. Which they state repeatedly in almost every one of their videos.

    Wait, no one has posted that playlist yet? Yeah, that's probably something I or someone else should have done earlier...

    Stanford Seminar - Drinking from the Firehose: How the Mill CPU Decodes 30+ Instructions per Cycle – 1:16:59
    — stanfordonline



  • I should have inlined the Hackaday interviews as well, probably.

    Though watching the first one again, I see that I got the 'fab vs. IP holder' equation backwards (what a surprise, coming from me...). Their main goal is to produce their own chips, but they are looking to sell the IP to fabs if they can't afford that.

    Mill CPU for Humans - Part 1 – 11:35
    — HACKADAY


    Mill CPU for Humans - Part 2 – 17:26
    — HACKADAY

    Mill CPU for Humans - Part 3 – 10:20
    — HACKADAY

    Mill CPU for Humans - Part 4 – 12:28
    — HACKADAY



  • @dkf said in Mill CPU:

    The whole system needs to be secure,

    I agree 100%.... My reply was specific to the quoted material regarding "embedded systems" and running ONLY what the Manufacturer Intended....


  • Discourse touched me in a no-no place

    @scholrlea said in Mill CPU:

    Their main goal is to produce their own chips, but they are looking to sell the IP to fabs if they can't afford that.

    Unless they've got a few billion to build their own fab, they'll be in the IP business. That's not the end of the world, there can be a lot of profits made in that space (ARM worked that way for decades; I don't know whether their owner has that sort of facility now) but they're going up against some extremely well established players with a system that will require large amounts of software retooling. Also, we don't know whether their object code is as dense so we can't make any kind of guess at the RAM or ROM capacities required and those are a big factor in a lot of the higher-end embedded. The low-end embedded won't be interested; they're still on 8-bit CPUs and are happy that way.

    Chip manufacturing plants are close to the most expensive industrial facilities ever created by man.



  • @dkf This is true. I think it is more of a "well, we wish we could, but..." sort of thing, but I could be wrong.



  • @dkf said in Mill CPU:

    @scholrlea said in Mill CPU:

    Their main goal is to produce their own chips, but they are looking to sell the IP to fabs if they can't afford that.

    Unless they've got a few billion to build their own fab, they'll be in the IP business.

    AMD keeps only one fab (or zero), last I heard. It contracts the fabrication out to the spun-off company and also to TSMC or similar, no?


  • Discourse touched me in a no-no place

    @cabrito said in Mill CPU:

    AMD keeps only one fab (or zero), last I heard. It contracts the fabrication out to the spun-off company and also to TSMC or similar, no?

    As far as I know, only Intel keep things totally in-house. The economics of these things are pretty brutal.



  • This video was not as interesting (to me, anyway) as I expected, but I got two nice quotes from it:

    @30:05:

    This is the only time I've ever seen an actual, useful use of overloading the comma operator.

    @56:23:

    Has anybody ever written a template taking template template arguments that themselves take template template arguments?

    That's used inside, and there's this marvelous piece of code with 42 consecutive right angle brackets.

    It works! It works amazingly well

    LLVM Meets the Truly Alien: Mill CPU Architecture – 57:54
    — Janus Troelsen


  • Discourse touched me in a no-no place

    @zecc said in Mill CPU:

    42 consecutive right angle brackets

    (image)



  • @dkf said in Mill CPU:

    42 consecutive right angle brackets

    Yes, the limit should clearly be no more than 39 🙂

    Seriously, if one starts doing heavy meta programming, this can happen. Take a look at Andrei Alexandrescu's work...

    One of my favorites [NOT related to Andrei] was a program to play all possible games of tic-tac-toe and print out the results in a simple format... It used C++ templates to provide the maximum amount of compile-time decision making.... The end result was a C++ executable whose only runtime work was printing a single literal string!
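    For a modern-flavoured taste of the same trick (constexpr rather than raw template metaprogramming, but the same "all the work happens in the compiler" effect; this tiny example is mine, not from that tic-tac-toe program):

```cpp
#include <cassert>

// Naive recursive Fibonacci, but marked constexpr so the compiler can
// evaluate it entirely at compile time.
constexpr long fib(int n) {
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// Forced compile-time evaluation: the binary just carries the constant.
constexpr long answer = fib(20);
static_assert(answer == 6765, "folded by the compiler");
```

    A program that only prints `answer` does all its "computation" before it ever runs, which is the same idea taken to its logical extreme.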


  • Discourse touched me in a no-no place

    @thecpuwizard said in Mill CPU:

    Seriously, if one starts doing heavy meta programming, this can happen.

    I've done pretty hefty metaprogramming in my time. I prefer to avoid it with C++ as it gets a bit impenetrable to debug when it goes wrong.



  • @dkf said in Mill CPU:

    I prefer to avoid it with C++ as it gets a bit impenetrable to debug when it goes wrong.

    IME the error messages have gotten a lot better in the last few years, but "a bit impenetrable" is still quite a bit of an understatement. I guess the only hope we have for actually understandable error messages with complex C++ templates is to get broad compiler support for C++ Concepts.



  • @dkf said in Mill CPU:

    @thecpuwizard said in Mill CPU:

    Seriously, if one starts doing heavy meta programming, this can happen.

    I've done pretty hefty metaprogramming in my time. I prefer to avoid it with C++ as it gets a bit impenetrable to debug when it goes wrong.

    Try understanding (let alone debugging) a 100 character APL program 😵 😵 😵 😱



  • Just for those not familiar...

    rawbits←,⍉(8/2)⊤¯1+ASCII⍳msg
    bits←512{⍵↑⍨⍺×⊃0 ⍺⊤⊃⍴⍵}rawbits,512↑1
    (¯64↑bits)←,⊖8 8⍴,(64⍴2)⊤⍴rawbits


  • Discourse touched me in a no-no place

    @thecpuwizard said in Mill CPU:

    APL

    I know I don't know APL. 😉



  • SOMEONE HELP! @TheCPUWizard IS HAVING A STROKE! 🚑



  • @zecc said in Mill CPU:

    SOMEONE HELP! @TheCPUWizard IS HAVING A STROKE! 🚑

    Only stroke I have is on the gold course, and I have far too many of them there.... 🙂

    My wander into APL [and yes, I worked heavily in it for a number of years, many decades ago] was relevant to this thread because it is just one scenario where the entire approach to software development was changed because of the implementation tool.

    In terms of the Mill processor, I am sure that it will be, at best, on the low end when it comes to "writing software the same way". At the same time, developing an alternate approach that leverages the power and avoids the more common pitfalls (or at least mitigates them) is something that I am fairly confident is possible.



  • @thecpuwizard said in Mill CPU:

    it is just one scenario where the entire approach to software development was changed because of the implementation tool.

    One of the points I tried to make to Geri the SubLEq Guy was that even if the whole OISCes-everywhere idea wasn't total ass, it still was a bad idea to use C as the primary language targeting it, because C was so thoroughly designed around targeting a register machine with an instruction set similar to the PDP-11 (this is an issue even with RISC systems, which often suck balls with code which expects the same suite of addressing modes as the Eleven).

    I tried to get him to look at either state machines or dataflow programming, both of which seemed a better fit than conventional procedural or OOP models. His reply was somewhere between "C the only language I already know that doesn't suck", and "Spare me your space age technobabble, Attila the Hun!" Amusingly, he initially thought I was complaining that C was too high level, which is almost, but not quite, exactly the opposite of what I meant.
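    A concrete illustration of the C/PDP-11 fit (my own example, a K&R-style string copy): the idiom `*dst++ = *src++` maps almost one-to-one onto the PDP-11's autoincrement addressing mode, roughly a single `MOVB (R1)+,(R0)+` per iteration.

    ```c
    #include <stdio.h>

    /* K&R-style copy: each "*dst++ = *src++" corresponds closely to
     * one PDP-11 instruction using autoincrement addressing. */
    static char *copy(char *dst, const char *src) {
        char *ret = dst;
        while ((*dst++ = *src++) != '\0')
            ;  /* empty body: all the work is in the condition */
        return ret;
    }

    int main(void) {
        char buf[16];
        copy(buf, "PDP-11");
        printf("%s\n", buf);
        return 0;
    }
    ```

    On a machine with no registers and no such addressing modes, none of this falls out naturally, which is the point about C being a poor fit for an OISC target.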



  • @scholrlea said in Mill CPU:

    C was so thoroughly designed around targeting a register machine with an instruction set similar to the PDP-11

    Not sure if I should Vote++ or ++Vote 🙂



  • @thecpuwizard said in Mill CPU:

    @dkf said in Mill CPU:

    42 consecutive right angle brackets

    Yes, the limit should clearly be no more than 39 🙂

    Seriously, if one starts doing heavy meta programming, this can happen. Take a look at Andrei Alexandrescu's work...

    One of my favorites [NOT related to Andrei] was a program to play all possible games of tic-tac-toe and print out the results in a simple format... It used C++ templates to do the maximum amount of compile-time decision making... The end result was a C++ executable whose runtime behavior consisted of outputting a single literal string!

    Suddenly I lost all interest in d-lang or anything touched by this Alexandrescu guy


  • Impossible Mission - B

    @thecpuwizard said in Mill CPU:

    gold course

    🤑❓


  • Impossible Mission - B

    @dkf said in Mill CPU:

    I've done pretty hefty metaprogramming in my time. I prefer to avoid it with C++ as it gets a bit impenetrable to debug when it goes wrong.

    Agreed. A big part of the problem is that templates are a completely separate language that's only very loosely related to the rest of C++, which means that even if you know the rest of C++ well, doing template metaprogramming requires a whole different set of language skills.

    Sane metaprogramming is done in the target language itself.
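    C++ itself has been inching in that direction: `constexpr` (since C++11) lets you write compile-time computation in ordinary C++ rather than in the template sublanguage. A minimal sketch, contrasting with the recursive-template style:

    ```cpp
    #include <iostream>

    // Ordinary-looking C++, but fully evaluable at compile time --
    // metaprogramming in the target language itself.
    constexpr unsigned long long fib(unsigned n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    int main() {
        static_assert(fib(10) == 55, "evaluated at compile time");
        std::cout << fib(10) << '\n';  // prints 55
    }
    ```

    This is essentially the same approach D takes with compile-time function evaluation (CTFE): the metaprogram is just the language.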

