The state of web development


  • area_deu

    Stumbled across this:

    Blah blah, a CNN page showing a simple article makes 200+ HTTP requests and uses 2 MB of data. It sucks, but it's par for the course, and the answer to that is Adblock and Ghostery, not moving to a walled garden. If you bought a phone where you can't install ad blockers: Sucks to be you, idiot.
    Not the WTF.

    But one comment mentioned this:
    https://github.com/reddit/reddit-mobile/issues/247

    And reading that thread really made me say :wtf: out loud like ten times. Go ahead, read it, and then tell me if you understood and recognized even a third of the things (frameworks? libraries? methods?) he talks about. I do (intranet) web apps for a living, and I didn't.

    IT'S A FUCKING WEB PAGE. It does the same things web pages always have done - it shows some stuff the user can read and click on.
    Why do you need a MINIFIED client.js that's 1.1 MB for that? Why does it take 45 fucking seconds to load a bunch of boxes with pictures in them? Why do I need animated SVG icons that my browser has to calculate and render every time instead of SHOWING A FUCKING PICTURE IT CAN CACHE?

    And why, when doing a performance audit, does nobody stand back and ask what the FUCK they are doing? Instead they are proudly shaving a few milliseconds here and there and removing a few JS functions they make you load every time but never use, without addressing the HUGE STEAMING PILE of SHIT that is the app's core design?
    It's not reddit specifically; you can take pretty much every modern "web app" (and yes, Discourse as well).

    I'm guessing the answer is that everybody tells us web apps are the future, but browsers are such a shitty platform and the HTML/JS/CSS stack is such a huge goat-sucking mountain range of excrement that developers see no choice but to pile on frameworks, libraries and plugins just to stay sane.
    And to make matters worse, for every browser-related problem there is, someone has created a shitty JS plugin that does 17 other things you don't need, dumped it on Github and never touched it again. And you use it, because you have "solved" your problem by adding one script reference and can move on.

    Or devs today are just lazy, I don't know. I only know I feel old now.


    Edit: So, escaping <script> is not a thing we do anymore? Fuck you, Discourse. And fuck your lying preview window.



  • I think the problem is that many web designers have lost sight of what they’re (supposed to be) doing because of all the possibilities modern browsers offer. That, and reinventing the wheel: with all this in-depth knowledge they have, they forget about the basics the browser can already do by itself.

    Google Books is one of those that have me going :wtf: every time I come across it. They’re serving scans of pages from books, magazines, etc. They could just stick a bunch of images one below the other in a div and set it to overflow-y: scroll — but no, the site implements its own scroll bar, which means you can’t just give the thing a good swipe and scroll a long way, you have to scroll one screen-length at a time all the time. Great if you’re browsing through a 300-page book …
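
    Something like this would already give you native scrolling, momentum and all (a minimal sketch; the markup and image paths are made up):

    ```html
    <!-- Minimal sketch (made-up paths): a plain scrolling container. The browser
         handles the scroll bar, swipes and momentum itself; no custom scroll code. -->
    <div style="height: 100vh; overflow-y: scroll;">
      <img src="/scans/page-001.png" alt="Page 1" style="display: block; width: 100%;">
      <img src="/scans/page-002.png" alt="Page 2" style="display: block; width: 100%;">
      <img src="/scans/page-003.png" alt="Page 3" style="display: block; width: 100%;">
      <!-- ... one img per scanned page ... -->
    </div>
    ```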



  • #### Re: the article

    Yup, commercial web is in deep shit. Their primary business model (ads) is becoming less profitable, and the only viable alternative (paywalls) has failed. For some reason, people are willing to pay for an app while they aren't willing to pay for an equivalent website.

    All the bitching about frameworks and request count won't solve this fundamental problem. The only way to save the web as a commercial content platform is to find a way to make getting through a paywall just as easy and enjoyable as buying an app from the app store. But I don't see that happening right now.

    Mobile apps have the lead.

    #### Re: the rant

    The problem is, these aren't just the simple web pages that they used to be. If you stick to the "documents with links" paradigm, then yes, you can get away with static pages with a few jQuery touches.

    But as you start adding more and more widgets that need to remain synchronized and live-updated from the backend, the "simple" web becomes unmaintainable. Thus the whole mess of the modern SPA ecosystem.

    I'm still convinced that we're seeing the very early stages of this process, and that 5-10 years from now, no one will care about the "2 MB of minified js!!1!". The same way as no one cares whether you unrolled your loops and manually memory managed every last byte in modern desktop apps. Yes, managed code and high-level abstractions are less efficient, but they're just good enough for most use cases.



  • @cartman82 said:

    I'm still convinced that we're seeing the very early stages of this process, and that 5-10 years from now, no one will care about the "2 MB of minified js!!1!". The same way as no one cares whether you unrolled your loops and manually memory managed every last byte in modern desktop apps. Yes, managed code and high-level abstractions are less efficient, but it's just good enough for most use cases.

    I agree! Web stuff has to follow the same business model [maximize ROI for the company] so that drives the development. Add the issues with advertiser revenue, paywalls, etc. that are decreasing the funding, and something will have to give [I am hopeful it will be in the 3-7 year time frame, but that is a detail].



  • @ChrisH said:

    Go ahead, read it, and then tell me if you understood and recognized even a third of the things

    This sentence I understood: "There's way too much javascript here."



  • @Hanzo said:

    This sentence I understood: "There's way too much javascript here."

    Define "way too much", or heck, even define "too much"... remember all of the 'normal' framework type libraries should be referenced from common CDN's so they are a) Fast and b) Cached locally - so I would not count them based on size (but quantity can be a consideration)



  • @TheCPUWizard said:

    b) Cached locally

    Urban myth.



  • @Ragnax said:

    Urban myth.

    Not at all... though I do agree that the vast majority of sites are doing it wrong, and are serving local copies (from their own server or, even worse, from the specific application) and/or have their webserver incorrectly configured.



  • If I were president of the internet, the second thing I would do would be add some proper low-level bytecode language to replace javascript.

    The first thing would be, of course, integration with bittorrent, or any other P2P protocol, to use when downloading common resources.

    Maybe it's a good thing I'm not president of the internet. We may never know.


    I can tell you, though, that yes, Discourse is horrifically slow. An order of magnitude slower than other sites, in fact. 4chan takes less than 5 seconds to load and render a thread with almost 800 posts, with nothing cached, including several hundred image thumbnails. Discourse takes half a second to load like 20 posts.



  • @anonymous234 said:

    If I were president of the internet, the second thing I would do would be add some proper low-level bytecode language to replace javascript.

    You're getting your wish: WebAssembly.


  • area_deu

    WebAssembly solves the wrong problem (at least with that crowd backing it, it won't be the abortion Silverlight was from the start).
    The problem is not "complex mathematical calculations in Javascript that need to be moved to assembler". The problem is "people are trying to build desktop apps in browsers, forcing them to do things they were never meant to do. And because that hurts so much, they use shitty frameworks to make developing just bearable enough".
    Writing the same shit (that still must be compilable to Javascript because that's how it's gonna work until all browsers support the new stuff) in a different language will maybe make it faster shit, but it's still shit.

    And if I read stuff like "yeah, the DOM is slow, use this huge-ass javascript framework that will do everything in a virtual DOM which is faster" I just want to puke.
    Our shit is so shitty it's really slow! Oh no problem, just pour on more shit emulating virtual shit, that will make it better somehow.



  • Dude, I'm telling you.

    They just Lego-together 19,000 buggy open source JS libraries.

    That's all they know how to do. I seem to recall you also employed the same development philosophy in previous conversations.

    This is the end result of that.



  • @TheCPUWizard said:

    Define "way too much", or heck, even define "too much"... remember all of the 'normal' framework type libraries should be referenced from common CDN's so they are a) Fast and b) Cached locally - so I would not count them based on size (but quantity can be a consideration)

    Now your app is dependent on some wanker at Google or Akamai doing his job correctly?

    No thanks.



  • @anonymous234 said:

    The first thing would be, of course, integration with bittorrent, or any other P2P protocol, to use when downloading common resources.

    Even a 5 MB site would download directly quicker than any bittorrent client could find peers. You'd be wasting time and bandwidth.



  • @ChrisH said:

    The problem is "people are trying to build desktop apps in browsers, forcing them to do things they were never meant to do. And because that hurts so much, they use shitty frameworks to make developing just bearable enough".

    Right; but that means replacing DOM, not JavaScript. (Well, ... both probably. But you're talking about replacing DOM.)

    Replacing DOM is a... tricky proposition.


  • area_deu

    @cartman82 said:

    But as you start adding more and more widgets, that need to remain synchronized and live-updated from the backend, "simple" web becomes unmaintainable. Thus the whole mess of the modern SPA ecosystem.

    That raises two fundamental sets of questions: Do we really need every single one of those widgets? Does anybody really use them? Enough to justify the effort? Or are they just there because we can and other people do it, too?

    And if yes, is there really no other way but to use shitty MVC SPA frameworks on the client?
    I write web stuff that updates live from server triggers. But I somehow manage to do that without four frameworks spewing 5,415 events all over my browser every time I open a page.

    @cartman82 said:

    I'm still convinced that we're seeing the very early stages of this process, and that 5-10 years from now, no one will care about the "2 MB of minified js!!1!".

    Hopefully because JS will be replaced by something better by then.

    @cartman82 said:

    The same way as no one cares whether you unrolled your loops and manually memory managed every last byte in modern desktop apps. Yes, managed code and high-level abstractions are less efficient, but they're just good enough for most use cases.

    The difference is: Desktops were always meant to run desktop apps. Browsers were meant to display HTML pages. And just because a billion monkeys finally managed to write HTML pages that almost look like desktop apps doesn't make it "right". Just look at the shit people do to "optimize" web pages. Load CSS the way it was meant to load? Nah, use Javascript to load it "below the fold". Load script files the way the standard defines? Nah, use a Javascript loader script to load them. Render your stuff on the client, because it knows best how? Nah, use "React" to pre-render the layout in a virtual DOM and hope the browser likes it and doesn't re-render it anyway. The sheer amount of fucking workarounds people use to make browsers do things they don't want to do and were never meant to should tell everyone to stop doing that shit or switch to a better platform. If there was one.

  • area_deu

    @blakeyrat said:

    They just Lego-together 19,000 buggy open source JS libraries.

    Right.

    @blakeyrat said:

    I seem to recall you also employed the same development philosophy in previous conversations.

    Who, me? I sincerely hope not.


  • @blakeyrat said:

    Now your app is dependent on some wanker at Google or Akamai doing his job correctly?

    No thanks.

    As soon as an app is deployed over the Internet (I am excluding in-house browser-based apps that are used only within a corporate network), you are already dependent on many companies having people doing their job correctly. [And I would put my money on Google, Akamai, and others well before in-house staff at 90% of the companies]



  • @ChrisH said:

    Who, me? I sincerely hope not.

    Oh sorry, I got mixed up and thought Cartman was the OP.



  • @blakeyrat said:

    Now your app is dependent on some wanker at Google or Akamai doing his job correctly?

    No thanks.

    I still don't see why you can't just do `<script src="/somefile.js" sha256="123456asdfxyz">` and let the browser share cached files between domains if the hash matches. No CDNs needed.
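
    Spelled out, the idea is just this (to be clear, the sha256 attribute is hypothetical; no browser implements it):

    ```html
    <!-- Hypothetical markup: no browser implements a sha256 attribute like this.
         The idea is that any cached file with a matching hash could be reused,
         regardless of which domain originally served it. -->
    <script src="/somefile.js" sha256="123456asdfxyz"></script>
    ```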

    If Apple, Google, Mozilla or Microsoft want to send me a million dollars for inventing this, I'll gratefully accept it, but it's not necessary.

    I think we need to get the WHATWG and W3C here so we can tell them what they are doing wrong.


  • area_deu

    I really regret Microsoft fucked up their one shot at that with Silverlight so badly. I could REALLY like the idea of having a browser natively run a XAML-ish layout with C# code behind it. Maybe if they had open sourced .NET back then...


  • area_deu

    @anonymous234 said:

    I still don't see why you can't just do `<script src="/somefile.js" sha256="123456asdfxyz">` and let the browser share cached files between domains if the hash matches. No CDNs needed.

    I like the idea, but Google would probably tell you to optimize that away because computing the hash takes some milliseconds.



  • I was thinking along the lines of bigger applications, like those 3D games in HTML5... or you could make your own YouTube with a fraction of the bandwidth. There's definitely potential here.

    But if someone could come up with some super-lightweight P2P that would work for smaller stuff, even better. Bittorrent and the rest are overly complicated by things that would be unnecessary here (mostly to bypass anti-piracy measures), like DHTs.



  • @blakeyrat said:

    That's all they know how to do. I seem to recall you also employed the same development philosophy in previous conversations.

    This is the end result of that.

    I still do!

    My uncompressed js is like 5MB!



  • @ChrisH said:

    That raises two fundamental sets of questions: Do we really need every single one of those widgets? Does anybody really use them? Enough to justify the effort? Or are they just there because we can and other people do it, too?

    Depends on the use case. If you make a publicly accessible web page, maybe not. If you're making a panel-like intranet app, some SPA framework is a fine choice.

    @ChrisH said:

    And if yes, is there really no other way but to use shitty MVC SPA frameworks on the client? I write web stuff that updates live from server triggers. But I somehow manage to do that without four frameworks spewing 5,415 events all over my browser every time I open a page.

    Once again, depends on complexity and your (and your team's) skill level.

    I'll just say, despite all my bitching about Ember, we recently did a shitload of new features and it went really well. New stuff just slid into the existing view layout, no problems.

    Without a framework, the update would have turned into a jQuery spaghetti nightmare. Or, if we had a home-grown framework, it would have meant rewriting a bunch of things we didn't predict when we first wrote it (if the crufty old code could be touched at all).

    @ChrisH said:

    Hopefully because JS will be replaced by something better by then.

    No it won't.

    @ChrisH said:

    The difference is: Desktops were always meant to run desktop apps. Browsers were meant to display HTML pages. And just because a billion monkeys finally managed to write HTML pages that almost look like desktop apps that doesn't make it "right".

    Things change. Platforms evolve.

    Sure, the browser isn't an ideal app platform, but it obviously has serious advantages over desktop apps. Otherwise, we wouldn't have seen the exodus to the web during the 2000s.

    @ChrisH said:

    Just look at the shit people do to "optimize" web pages. Load CSS the way it was meant to load? Nah, use Javascript to load it "below the fold". Load script files the way the standard defines? Nah, use a Javascript loader script to load them. Render your stuff on the client, because it knows best how? Nah, use "React" to pre-render the layout in a virtual DOM and hope the browser likes it and doesn't re-render it anyway. The sheer amount of fucking workarounds people use to make browsers do things they don't want to do and were never meant to should tell everyone to stop doing that shit or switch to a better platform. If there was one.

    Yup, hacks on top of hacks. Yet it still works.

    And no, we aren't getting something different. The switching costs are too big for most people, and there's not enough incentive. It's much more likely that things will go towards native mobile apps than that we'll switch to a different open web platform.


  • area_deu

    @cartman82 said:

    Once again, depends on complexity and your (and your team's) skill level.

    Agreed.

    @cartman82 said:

    Sure, the browser isn't an ideal app platform, but it obviously has serious advantages over desktop apps.

    I know it has. Like I said, I do intranet web apps. It's still shit; it's just that the alternatives are non-existent or even worse.

    @cartman82 said:

    It's much more likely that things will go towards native mobile apps than that we'll switch to a different open web platform.

    Yes. But native mobile apps are the next goat-sucking mountain range of shit. Things like Phonegap exist because it's actually easier to do a multi-platform mobile app using shitty HTML/JS/CSS techniques. (Although things are a bit better now with Swift. I hate Objective-C even more passionately than I hate Apple in general.)

  • Java Dev

    @ChrisH said:

    I like the idea, but Google would probably tell you to optimize that away because computing the hash takes some milliseconds.

    No it doesn't.

    Well, it does, but not during page load. Clientside, at most, you need to calculate the hash during cache store for verification. But that can be deferred till the page load is completed. Serverside the hash is about as constant as the filename.


  • area_can

    @anonymous234 said:

    I still don't see why you can't just do `<script src="/somefile.js" sha256="123456asdfxyz">` and let the browser share cached files between domains if the hash matches. No CDNs needed.

    The W3C is considering something called the Content Security Policy that kind of uses hashes and js.

    But it's for security purposes, not for caching.
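
    Roughly, a hash-sourced policy looks like this (a sketch; the digest below is a placeholder, and it whitelists a specific inline script rather than caching anything):

    ```html
    <!-- Sketch of CSP's hash sources (the digest below is a made-up placeholder).
         The policy only allows inline scripts whose SHA-256 hash is listed;
         it's an anti-tampering/anti-injection measure, not a cache key. -->
    <meta http-equiv="Content-Security-Policy"
          content="script-src 'sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA='">
    <script>console.log('runs only if its hash matches the policy');</script>
    ```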



  • @ChrisH said:

    The problem is not "complex mathematical calculations in Javascript that need to be moved to assembler". The problem is "people are trying to build desktop apps in browsers, forcing them to do things they were never meant to do. And because that hurts so much, they use shitty frameworks to make developing just bearable enough".

    The problem is shitty developers using shitty frameworks or using good frameworks shittily, instead of competent developers using good frameworks properly. Nothing else.

    Sure; WebAssembly won't help with that. It will, however, help with bridging the performance gap that exists with native code.


  • area_deu

    @Ragnax said:

    The problem is shitty developers using shitty frameworks or using good frameworks shittily, instead of competent developers using good frameworks properly. Nothing else.

    No, I think the problem goes deeper than that.

    @Ragnax said:

    Sure; WebAssembly won't help with that. It *will*, however, help with bridging the performance gap that exists with native code.

    Prepare to be massively underwhelmed by what it actually will do for your actual code. Unless your code is something covered by asm.js right now.


  • @ChrisH said:

    Prepare to be massively underwhelmed by what it actually will do for your actual code. Unless your code is something covered by asm.js right now.

    I don't think so. The fact that it uses a binary AST representation vs pure source code alone is going to provide compression and parse-time benefits to all JavaScript. Experiments already show the parsing is up to 20x faster than plain JS source. That kind of thing really matters for a web-app's boot times or for page load times on a JS-heavy website.


  • 🚽 Regular

    @cartman82 said:

    I'm still convinced that we're seeing the very early stages of this process, and that 5-10 years from now, no one will care about the "2 MB of minified js!!1!". The same way as no one cares whether you unrolled your loops and manually memory managed every last byte in modern desktop apps. Yes, managed code and high-level abstractions are less efficient, but they're just good enough for most use cases.

    This is happening in embedded development now. Generally, gone are the days of hand-crafting your code and interrupt handlers to squeeze every last cycle out. Microprocessors/microcontrollers (even cheap 10c ones) are seriously fast now, and tools like Simulink will spaff out horrific C straight from a model. It's inefficient crap but it works and people are using it.

    It makes me unhappy that's the way it's going, but I'm not sure if that's just pride (because I do it better my way dammit!) or just the worry that things are becoming less testable and less deterministic due to the abstraction layers involved.
    I feel that some engineers are losing the intuition that crafting things more directly (and on more constrained devices) brings you. Looking at recent product failures, a lot of them are due to edge conditions that weren't caught; there should have been a nagging voice that said to them 'I need an extra check for that critical variable, single-bit corruption does happen' or 'I should think about what happens on roll-over of those counters'. Meh, might be sour grapes but it's worrying that important things seem to be being lost in the new wave of doing things.


  • area_deu

    Time will tell.


  • Discourse touched me in a no-no place

    @Cursorkeys said:

    Meh, might be sour grapes but it's worrying that important things seem to be being lost in the new wave of doing things.

    But is anything being gained at the same time? Are products coming to market sooner and at less cost?



  • @Cursorkeys said:

    It makes me unhappy that's the way it's going, but I'm not sure if that's just pride (because I do it better my way dammit!)

    As someone who has spent weeks crafting out a few bytes, or machine cycles, I completely understand the sentiment....BUT, the word "better" can be applied against many different metrics, and for the majority it is ROI.

    If the machine generated "crap" meets the requirements (and can be maintained to do so), then it is virtually impossible to hand craft code at a lower cost...So the reality is that the "crap" is indeed better than the most elegant stuff you (or I) could create.

    [and yes, sometimes this makes me cry]



  • @TheCPUWizard said:

    remember all of the 'normal' framework type libraries should be referenced from common CDNs

    What we actually need here is content-addressable data.

    If it became fashionable for frameworks or other widely-shared static content (including media) to be addressed via URIs containing their SHA hash, and a .hash TLD existed, then not only could caching work properly for shared components but it would become possible for streaming and download services to save a shitload of bandwidth by responding to SHA URL requests over multicast from their nearest CDN mirror.

    Back in the pre-dawn of the Internet, I used to work for an outfit that made a networked hard drive for Apple II computers in the classroom. It ran over either a flat ribbon that was essentially an Apple II floppy drive cable with some address bits tacked on the side (250 kbits/s data rate) or the same kind of multidrop RS-485 twisted pair that AppleTalk used (230 kbits/s, later upgraded to 920 kbits/s) and its main innovative feature was support for a broadcast mode for disk data.

    It was a poll select network, so clients would just sit and read the shared bus while waiting for their poll slot from the server; but if a block appeared on the bus that they were about to request anyway, they just read that instead of submitting their own request, essentially free-riding on some other client's request/response exchange.

    When we initially dreamed up this scheme I was all for implementing some kind of queue and cache on the clients to take best advantage of it, but we never actually had to: once we had a prototype version where only a single outstanding request could ever be short-circuited in this fashion, it turned out to be good enough. If you booted up a room full of 30 Apple IIs at the same time and watched all their screens, you'd see the room settle rapidly into two groups of lockstep machines. For such a dirt-simple optimization it worked astoundingly well.

    Our scheme relied on the use of unambiguous disk block addresses for the data we were shipping over the wire. Requesting data based on its SHA effectively gives every possible chunk of data an unambiguous address, so the same kind of bandwidth sharing should be achievable. But even without the multicast refinement, the 100% reliable cacheability of something like http://43dc554608df885a59ddeece1598c6ace434d747.sha1.hash/jquery-2.1.4.min.js, and the handballing of the problem of which physical server(s) to find that on back to a network of .hash DNS resolvers, might be quite interesting.
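
    To make that concrete, here is a rough sketch of how a client could fetch and verify such a URI (Node-style JavaScript; the .sha1.hash TLD and its resolver network are of course entirely hypothetical):

    ```js
    // Sketch only: the .sha1.hash TLD and its resolvers are hypothetical.
    // The point is that the URI itself says what the bytes must hash to, so any
    // mirror can serve them and the client can still verify what it got.
    const crypto = require('crypto');
    const https = require('https');
    const { URL } = require('url');

    function fetchByHash(uri, callback) {
      // e.g. uri = 'https://43dc554608df885a59ddeece1598c6ace434d747.sha1.hash/jquery-2.1.4.min.js'
      const expected = new URL(uri).hostname.split('.')[0]; // hash embedded in the host name
      https.get(uri, (res) => {
        const chunks = [];
        res.on('data', (chunk) => chunks.push(chunk));
        res.on('end', () => {
          const body = Buffer.concat(chunks);
          const actual = crypto.createHash('sha1').update(body).digest('hex');
          callback(actual === expected ? null : new Error('hash mismatch'), body);
        });
      }).on('error', callback);
    }
    ```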



    I am familiar with the concept [though I had never heard of your product]... and it could definitely be useful...

    My one question is:

    @flabdablet said:

    Requesting data based on its SHA effectively gives every possible chunk of data an unambiguous address,

    By definition, a hash has fewer bits than the data, and therefore there WILL be collisions if applied against every possible data content...

    FWIW: current "data deduplication" hardware for storage basically uses this, but without the multicast part...



  • @TheCPUWizard said:

    By definition, a hash has fewer bits than the data, and therefore there WILL be collisions if applied against every possible data content

    Nope. We covered this in excruciating detail a while back on the old forums, and I can't be arsed looking up the discussion to link to it, but the gist is that if a hash is large enough, then the effective address space it forms will never have time to get enough used addresses allocated inside it to make the birthday-paradox chance of a hash collision higher than that of machine failure due to alpha particle hits, lightning strikes, catching fire and whatnot.

    "Every possible data content" is certainly an open-ended theoretical construct, but physics imposes limits on the speed at which unique chunks of content can be created, and it's always possible to leave those limits panting in the rear view mirror by just making your hash code bigger.

    For content-addressable URIs inside .sha512.hash, the birthday-paradox collision probability works out to be well below that of any other conceivable source of machine failure even assuming the hashing of one piece of unique content every single Planck time since the start of the Big Bang.



  • @flabdablet said:

    ...but the gist is that if a hash is large enough, then the effective address space it forms will never have time to get enough used addresses allocated inside it to make the birthday-paradox chance of a hash collision...

    Well there are a few other considerations:

    1. You do not need to hit birthday-paradox levels of probability; even a minuscule chance [granted, higher than alpha particle hits, lightning strikes, catching fire and whatnot] would have major ramifications.

    2. For random data, I would largely agree, but there is a non-normal distribution [e.g. for JavaScript all of the characters will be in the printable text range, thereby reducing the "all possible" by a huge margin]...

    3. Your position discounts the situation where someone deliberately searches out an "alternate" pattern for a specific hash to corrupt the system.

    The de-dup hardware takes care of this in some interesting ways. [Over]simplified, it does a full content compare in the unlikely event that a hash collision is found and then alters the key to be some other value than the collision. Once this is done, then time does become the only enemy, and I will agree that there will not be enough of it...

    So, I still agree with your concept, but remain convinced that something other than a pure hash would be better.



  • @TheCPUWizard said:

    current "data deduplication" hardware for storage basically uses this, but without the multicast part

    I had a psychotic break fifteen years ago during which I dreamed up a plan for a single world-dominating content-addressable network filesystem all linked together using 512-bit hashes of 4KiB data blocks, and still have a certain amount of affection for that mad scheme 😄



    One thing with these million web development trends is that they come and go. With JS libraries it seems there's a new one every year that all the hipsters use and you should too. The only library that still seems to be relevant from five years back is jQuery, but you're a bad person if you use it because it's not cool anymore.

    Sites become heavier and heavier on the browser every time someone gets the idea of doing full-blown MVC with JavaScript only. Everyone tells me to use AngularJS now.

    Less, Sass, SCSS? I never learned any of them, and the first two seem to be forgotten already - or maybe Sass and SCSS were about the same thing. I've briefly had to mess with SCSS, and then come all these fucking compilers, and the chosen piece of shit for SCSS is written in Ruby, so now I need to install Ruby, and wait, I need Grunt to set up auto-compiling jobs, and now how do I deploy and what do I store in VCS?! Fuck I hate web dev.


  • area_deu

    I still use Less. It makes CSS a little less shitty to write at no cost (just save it in VS and the CSS gets compiled automatically). I don't care if it's still trendy or not.


  • 🚽 Regular

    @dkf said:

    But is anything being gained at the same time? Are products coming to market sooner and at less cost?

    @TheCPUWizard said:

    As someone who has spent weeks crafting out a few bytes, or machine cycles, I completely understand the sentiment....BUT, the word "better" can be applied against many different metrics, and for the majority it is ROI.

    If the machine generated "crap" meets the requirements (and can be maintained to do so), then it is virtually impossible to hand craft code at a lower cost...So the reality is that the "crap" is indeed better than the most elegant stuff you (or I) could create.

    [and yes, sometimes this makes me cry]

    Yeah, you and dkf are right. Time to market and developer cost are lower. The trouble is the general quality of products on the software side seems to be going down. As the field matures you'd hope it should be going up instead.
    The car companies especially are feeling the pain now of a ridiculous number of recalls for embedded issues; maybe that will start to improve things if it's hitting them financially (and brand perception by customers).



  • @Cursorkeys said:

    The trouble is the general quality of products on the software side seems to be going down. As the field matures you'd hope it should be going up instead.

    Actually, a complicated issue. The complexity and expectations have also gone up. Also I am not sure, given the current growth rate (in many directions), that the field has really "matured"... It would be interesting to see a broad-spectrum graph that attempted to correlate these items...

    @Cursorkeys said:

    The car companies especially are feeling the pain now of a ridiculous number of recalls for embedded issues; maybe that will start to improve things if it's hitting them financially (and brand perception by customers).

    It is.



  • @TheCPUWizard said:

    You do not need to hit birthday paradox levels of probability

    Sorry, I was using birthday-paradox as a shorthand term for unintentional hash collision of any two unrelated documents across the corpus, not necessarily implying the canonical 1/2 probability.

    A decent cryptographic hash function, of which the SHA family are examples, has a randomly distributed output negligibly sensitive to patterned input data, and also makes deliberate collision creation computationally unfeasible. So if a data de-duper is based on a decent crypto hash function, then all its additional collision-avoidance logic can be made completely redundant simply by choosing a wide enough hash function.

    [quote=Wikipedia]A good rule of thumb which can be used for mental calculation is the relation

    p(n) ≈ n² / (2m)

    which can also be written as

    n ≈ sqrt(2m × p(n))

    This works well for probabilities less than or equal to 0.5.[/quote]

    So if we hash unique data blocks at one per Planck time (1e-43 s) since the start of the Big Bang (13.8e9 years ago), we will have 13.8e9 years * 365 days/year * 86400 sec/day / 1e-43 sec/item = 4.35e60 items. For SHA512, m is 2^512 = 1.34e154; so the probability of one collision is (4.35e60)² / (2 * 1.34e154) ≈ 7e-34.

    In other words: not gonna happen.

    Even the far narrower SHA1, at 160 bits, can accommodate sqrt(2^161 * 1e-18) = 1.7 million billion unique data items before the probability of a single collision rises above one in a billion billion.
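
    If you want to sanity-check that arithmetic, it's a couple of lines of plain JavaScript, just plugging into the rule of thumb above:

    ```js
    // Plugging the numbers above into the rule of thumb p ≈ n² / (2m).
    const n = 13.8e9 * 365 * 86400 / 1e-43; // unique blocks: one per Planck time since the Big Bang
    const mSha512 = 2 ** 512;               // size of the SHA-512 output space
    console.log(n ** 2 / (2 * mSha512));    // ≈ 7e-34

    // And the SHA-1 capacity figure: n ≈ sqrt(2m · p) items before p exceeds 1e-18.
    const mSha1 = 2 ** 160;
    console.log(Math.sqrt(2 * mSha1 * 1e-18)); // ≈ 1.7e15, i.e. 1.7 million billion
    ```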



  • @ChrisH said:

    It makes CSS a little less shitty to write at no cost (just save it in VS and the CSS gets compiled automatically)

    Only if you install the bloated Web Essentials add-on which is an entire can of worms (and very little else) in and of itself.



  • @TheCPUWizard said:

    convinced that something other than a pure hash would be better

    The main advantage of the pure hash is that it makes static content truly location-independent; you can verify that no malicious server has tampered with what's supposed to be mirrored content, just by checking the hash of what you're served against the one in the URI you requested it with.

    The DNS resolver network would need robust mechanisms for registering content, and detecting and deregistering failed or malicious mirror servers, but those ought to be designable.


  • BINNED

    Simulink (really Realtime Workshop) code is an infinite loop, but compilers are an order of magnitude smarter now. The problem with block-diagram programming is not efficiency but the development tools and methodology.
    A for-loop is much more elegant than a box with lines, and electrical engineers (those that are more into circuits and layouts) are bad software engineers; I have seen them write their entire code in header files before.
    When I want to change a for-loop to a while-loop in software it is 2 diff modifications; in LabVIEW it is many clicks and hand movements. This means you cannot experiment as easily with block diagrams as with text-based code.
    The last time I used Simulink, its development environment was a bunch of open tabs, no real IDE. A cluttered workspace only appeals to the group that has a cluttered desk and a messy mind, and the result is off-the-shelf crap.

    The other problem is that no matter how integrated these systems look (like Simulink), the first poor engineer who wants to do something real with them has to work twice: once to get the graphics in line and once to review the crappy code, and then connect them with Tcl or whatever other glue-layer crap (the abstraction is leaky, and sooner or later you have to know it and cannot just use the provided blocks). It is never a good idea to use them for fast prototyping, contrary to what is claimed.

    I have seen people refer to phones as embedded systems, which is stupid; they are hundreds of times faster and more powerful than my first desktop.
    The traditional embedded systems are still around, those that you have to write in C and occasionally assembly, but now it is the insanely low power consumption that keeps them in business.

    @dkf said:

    Are products coming to market sooner and at less cost?

    It depends on whether the solution is tried and you are licensing it, or whether you want to develop it yourself. In the first case, perhaps yes; otherwise, no.



  • @dse said:

    now it is the insanely low power consumption

    This.

    TI has parts offering 500 nanoamps standby, 95 microamps per megahertz active.


  • FoxDev

    @flabdablet said:

    Even the far narrower SHA1, at 160 bits, can accommodate sqrt(2161 * 1e-18) = 1.7 million billion unique data items before the probability of a single collision rises above one in a billion billion.

    which is all well and good... until your data deduper uses those hashes to dedup data and merges your children's saturday morning cartoons with your collection of ~~German dungeon porn~~ R-rated movies

