However, it's also not not a number. Don't believe me?
NaN doesn't incorporate positive or negative infinity, which are in fact defined as numbers, but are not finitely quantifiable.
E.g.
isFinite(1/0)
// false
You can force a function declaration not to accept nulls, but the compiler has no way to verify with 100 percent certainty that a variable has been initialized. So that's kinda out too.
It doesn't need to. Like dkf mentioned:
That's an easy case, actually. It would chew you out for not defining it on all code paths.
Generally speaking it works that way, because static analysis tools mostly operate on the level of preconditions and postconditions. They're not interested in what happens inside a method as much as they are interested in what a method declares must go into it and what it guarantees will come out of it.
Good static analyzers can infer missing conditions based on pre- and postconditions from other methods and can handle simple branching logic as well, giving you the illusion that they're quite a bit more intelligent than they really are (.NET Code Contracts are quite amazing at this, actually, especially for a free solution) but in the end it still boils down to 'all logical branches must fulfill all guarantees needed by the (inferred or not) post-conditions'.
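As a minimal sketch of what that looks like with .NET Code Contracts (the Account type and Withdraw method below are invented for illustration): the checker only cares that every code path out of the method honors the declared postcondition.

```csharp
using System.Diagnostics.Contracts;

public class Account // hypothetical example type
{
    private decimal _balance;

    public decimal Withdraw(decimal amount)
    {
        // Precondition: what must go into the method.
        Contract.Requires(amount > 0);
        // Postcondition: what the method guarantees to hand back,
        // on every logical branch through its body.
        Contract.Ensures(Contract.Result<decimal>() >= 0);

        _balance -= amount;
        if (_balance < 0)
            _balance = 0; // keep this path in line with the declared postcondition
        return _balance;
    }
}
```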
(new Action(() => {}))()
I think does the job.
Not if your lambda returns several lines of LINQ - because all the readability gains are immediately lost to the giant cast to Func<>. Also, good luck guessing which Func<>.
Easily solved:
public static class Lambda
{
public static Func<TOut> Func<TOut>(Func<TOut> func) { return func; }
public static Func<T1,TOut> Func<T1,TOut>(Func<T1,TOut> func) { return func; }
//etc.
public static Expression<Func<TOut>> Expr<TOut>(Expression<Func<TOut>> expr) { return expr; }
public static Expression<Func<T1,TOut>> Expr<T1,TOut>(Expression<Func<T1,TOut>> expr) { return expr; }
//etc.
}
var func = Lambda.Func((int x, int y) => x + y);
var expr = Lambda.Expr((int x, int y) => x + y);
And then chain on from there, if you need, using something like LinqKit's predicate builders.
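Chaining then looks roughly like this with LinqKit's PredicateBuilder; the Product type and its properties are invented for the sketch:

```csharp
using System;
using System.Linq.Expressions;
using LinqKit;

public class Product // hypothetical entity used only for illustration
{
    public decimal Price { get; set; }
    public bool Discontinued { get; set; }
}

static class PredicateChaining
{
    static void Main()
    {
        // Start from a strongly typed expression via the helper above...
        Expression<Func<Product, bool>> predicate =
            Lambda.Expr((Product p) => p.Price > 100m);

        // ...and chain further conditions onto it with LinqKit's extension methods.
        predicate = predicate.Or(p => p.Discontinued);
        predicate = predicate.And(p => p.Price < 1000m);
    }
}
```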
won't compile because of a name conflict. (The name of a bound variable isn't abstracted, obviously.)
That happens because C# doesn't support variable shadowing.
Yes, there's an Rx library from Microsoft for that. Uses IObservable instead of IEnumerable, and seems quite powerful.
That's exactly what I was gunning for, yes.
And indeed; it's running on top of IObservable<T> instead of IEnumerable<T>.
It seems like yield cannot be combined with async / await just yet. It has been proposed a few times though, and there is at least already an IAsyncEnumerable<T> implemented as part of Rx's sister project Ix (Interactive Extensions). It even resides in the System namespaces already, instead of the Microsoft namespaces, which is a pretty clear sign that async enumerables are something that's expected to arrive at some point in time.
(Uhm... Ok. Apparently someone has an early prototype of async yield working in a branch of the Roslyn compiler. Nice...)
Doubt it.
While not directly useful for a foreach statement, infinite sequences are definitely a thing.
Generating pseudo-random numbers with each MoveNext (you know; like Random.Next already does...) and using the Zip operator to pair them to a sequence is one possible application.
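Something along these lines, say (all names invented for the sketch); thanks to pull semantics, Zip only pulls as many items from the infinite generator as the finite side provides:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class InfiniteSequences
{
    // An infinite iterator: each MoveNext() produces the next pseudo-random number.
    public static IEnumerable<int> RandomNumbers(Random rng)
    {
        while (true)
            yield return rng.Next();
    }

    static void Main()
    {
        // Pair three labels with numbers from the infinite source;
        // only three values are ever generated.
        var labelled = RandomNumbers(new Random())
            .Zip(new[] { "a", "b", "c" }, (n, label) => $"{label}: {n}");

        Console.WriteLine(string.Join(Environment.NewLine, labelled));
    }
}
```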
Reactive programming? Infinite sequences of asynchronously generated events, processed via IEnumerable's pull semantics so you can easily Take, Skip, Aggregate, Join, etc. and create complex workflows. If you can yield return await to block on the next item arriving in the enumerator, that could totally work. (Does C# support that? Because that'd be all kinds of awesome as a programming paradigm to try out...)
In other words, foreach doesn't make any guarantee at all - the guarantee is the responsibility of the enumeration itself.
Which is what I said to begin with...
I wouldn't write code like that at all. It sounds like a terrible design, mixing content and design at exactly the wrong layer.
At one point, you're going to have to take your content and spit back out something resembling the design.
At that point, you're going to have to use an index in some way or form to add those bullets to the finished, processed output.
This has nothing to do with separation of concerns (with which I agree, mind you) and everything to do with the fact that at some point during the pipeline from input to output you - will - need - that - index.
But as to foreach: I've learned that foreach guarantees only that each element of a collection will be passed once and only once to the body of the loop, but explicitly doesn't guarantee a definite order of the elements. I've never encountered an example where the order changed between successive loops, though. Did the definition of foreach change on this point since I learned it?
foreach is syntactic sugar over IEnumerator.Current and IEnumerator.MoveNext(). These are implemented by the collection classes in the .NET framework, and you're free to implement them yourself.
Enumerators that return elements in indeterminate order do exist and some can even be explicitly random. E.g. many people at one point roll a random shuffle implementation as an extension method on IEnumerable<T> sequences.
There is also no guarantee by foreach that each element will be passed only once. That is a guarantee made by the individual collection classes only. You can easily build an implementation of an IEnumerable or IEnumerator that returns each element twice.
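For instance, a minimal sketch (the method name is invented here) of an iterator that foreach will happily walk, handing every element to the loop body twice:

```csharp
using System;
using System.Collections.Generic;

static class EnumerableTricks
{
    // Yields every element of the source twice in a row;
    // foreach has no way of knowing, and no reason to care.
    public static IEnumerable<T> Twice<T>(this IEnumerable<T> source)
    {
        foreach (var item in source)
        {
            yield return item;
            yield return item;
        }
    }

    static void Main()
    {
        foreach (var n in new[] { 1, 2 }.Twice())
            Console.Write(n); // prints 1122
    }
}
```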
Like this, perhaps?
You changed the way you generate the index and introduce it into the loop, but it's still an index.
(But what I dislike a bit is that the procedure first builds an array and then applies a Join on it. (Unless Join itself is implemented in a different way, which I can't tell.))
It doesn't build arrays.
IEnumerable sequences use pull semantics on top of the IEnumerator interface. The Zip operator will only pull as many numbers from the range source as is necessary to process the input parameter and it again produces an IEnumerable sequence with pull semantics. string.Join has an overload that takes an IEnumerable<string> and there's possibly some kind of streaming operation happening there as well, rather than converting everything into an array.
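Concretely, the numbered-bullets case from earlier could look something like this (the sample data and formatting are made up); everything stays a lazy sequence until string.Join consumes it:

```csharp
using System;
using System.Linq;

static class NumberedBullets
{
    static void Main()
    {
        var items = new[] { "alpha", "beta", "gamma" }; // illustrative input

        // Pair each item with its ordinal by zipping against a lazy range;
        // nothing is materialized into an array along the way.
        var bullets = items.Zip(
            Enumerable.Range(1, items.Length),
            (text, n) => $"  {n}. {text}");

        // string.Join has an IEnumerable<string> overload,
        // so it can consume the sequence directly.
        Console.WriteLine(string.Join(Environment.NewLine, bullets));
    }
}
```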
Give me an example so I can tell you how you're wrong.
Generating output text from an API-consumer supplied ordered list of items that should be presented as an indented list with numeral bullets.
Plain text, no HTML! So you don't get to use the <ol> element as an escape hatch. Better yet! Scoring a little higher on practical applicability; let's say that you're actually implementing the rendering of such a type of layout construct in $rendering_engine_of_choice$.
How are you going to do that without somehow keeping an index?
attempting to use ink from other regions bricks your printer
Wow. And which bright light at Xerox thought that holding the hardware to ransom like this would fly under the radar of consumer protection laws? The EU is going to have Xerox for lunch...
I'm not sure, but I can check. It's a standalone application, though, so I wouldn't be able to add one if it wasn't there.
It doesn't matter anyway.
As of IE9 a webpage can only render in one document mode, determined by the top level page. Child document contexts, which includes iframes, are forced to run in the same document mode as the parent webpage.
As of IE10 the situation probably got worse: there is an emulated 'HTML5 Quirks mode' present that can be used to mix-and-match standards mode and legacy content spread across frames. However, this mode bunches together all non-HTML5 legacy document types, which includes the HTML 4.0 and XHTML 1.x ones, with documents that have no doctype at all, and apparently treats them all as needing to be rendered in IE5-level quirks mode.
No, it does not; only Google and Firefox do that, and then only for sites on a tiny, tiny list.
Visual Studio apparently also does it.
So there's precedent for Microsoft doing it in other products as well.
Have any of the mobile developers using the hamburger menu concept done usability testing on it? What were the results?
Surprisingly; yes, there are mobile developers that have done some actual real user testing regarding 'the hamburger icon'.
Afaik the results were split about fifty-fifty between users that managed to locate the hamburger and managed to understand that it hid a menu of sorts with actionable items and users that didn't.
Add the text 'menu' or similar to the hamburger icon and it raises success by about 10% to a better 60%. Add a clear demarcation around the icon, mimicking a button, and the hint of the item being tappable/clickable raises success to about that same 60%. Do both and get somewhere between 60% and 70% success rate.
And now for the kicker: LEAVE OUT THE HAMBURGER ICON and keep only the text and button markings and your success rate will remain between 60% and 70%.
In other words; the contribution of the hamburger icon itself is zero and we can probably safely assume that the 50% of users that managed success in the icon-only case, only managed success due to prior exposure to this broken piece of user interface to the point where they underwent the virtual equivalent of having it beaten into them with a stick.
Are you certain that's true at that point? The way I hear it, Son and Spirit were monkey-patched in later.
So, reality was programmed in a weakly-typed language? "Always bet on JavaScript." strikes again.
This is the typical way Node works. Code is called from the event loop, but Require directly constructs dependencies.
Yes. CommonJS Require operates as a service locator pattern.
DI and IoC are two completely separate concepts that just happen to be implemented in the exact same way.
They're not "implemented in the exact same way" at all.
Inversion of Control means you hand away control over the program flow to an external code-unit and trust it to call into your code. Anything based on callback functions, e.g. event handlers, iteration/filtering/etc. on lists or sets based on lambdas, asynchronous control flow with promises/futures; all of that is IoC.
While IoC is about handing away control over the program flow, DI is about taking (or being given) control over an application's components or units of work, instead of relying on a code-unit to autonomously regulate its own.
You can have IoC without DI at all, simply by using evented programming, reactive programming, etc.
You can have DI without IoC by employing a service locator pattern or by injecting and constructing by hand.
Or you can leverage IoC to automate and abstract away the DI aspects by handing responsibility for the complete object graph construction over to a third party framework. That's how the big frameworks like Spring, Unity, StructureMap, etc. do their thing.
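As a small sketch of that middle case, DI without any IoC container (all names below are hypothetical): the dependency is injected, but nothing ever takes control of your program flow.

```csharp
public interface IMailSender
{
    void Send(string to, string body);
}

public class SmtpMailSender : IMailSender
{
    public void Send(string to, string body) { /* talk to an SMTP server here */ }
}

public class SignupService
{
    private readonly IMailSender _mail;

    // The dependency is handed in from outside instead of being constructed here...
    public SignupService(IMailSender mail) { _mail = mail; }

    public void Register(string email) => _mail.Send(email, "Welcome!");
}

static class CompositionRoot
{
    static void Main()
    {
        // ...and the object graph is composed entirely by hand;
        // no container took control of anything.
        var service = new SignupService(new SmtpMailSender());
        service.Register("user@example.com");
    }
}
```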
Dependency Injection is a type of Inversion of Control.
The other way around: Inversion of Control is a type of (or rather: can be used as a means to provide) Dependency Injection.
Redeploying adware and bloatware straight from BIOS/UEFI firmware to thwart laptop re-installations from clean install media.
Oh wait...
I prefer moving my eyes down the page for each paragraph over moving my eyes back and forth over the entire monitor.
That's because you and I, along with 99.99% of the gross human population, are average normal humans with average normal brains capable of average normal spatial relational mapping and average normal eyes capable of average normal eye scanning. Thus we prefer a 75 to 80 character line limit that has been widely proven via usability testing as optimal for the average normal human at the average font-size for normal reading.
Sadly we can't all be part of the 0.01% of super-human outliers like PJH...
web pages with actual content that takes up only 20% of the screen width cares about optimal line length for reading...
FTFY
The real question is why you'd think that to be a bad thing...
Admin panel = perfect for angular or ember
User facing site = you're better off being able to mess around with DOM elements, without bindings getting in your way
Or you do both; MVVM-style bindings where applicable, hard-tied DOM manipulations from explicit mediator or controller classes where applicable. I'm using CanJS as an application framework on top of jQuery, which is really not too bad a way of doing both. (It also doesn't fall into the trap of using prohibitively expensive dirty-checking as a means to implement observable data models, like Angular does...)
I knew I liked telerik. They actually know what the problem is. They're Infragistics' main competitor, and they have demos that work.
I had the displeasure of having to work with both. The biggest reason Telerik's web stuff has improved to usable levels is that they rode on jQuery and jQuery UI's coattails. Yet like all RAD tool- and widget-suites they still heavily favor configuration over composition, which still means it's 'their way or the highway' and any attempt at building something outside of the box will result in uphill battles.
The fact that at least one of the guys behind Discourse is considering 'issuing a fake scroll event' as the fix should tell you all you need to know.
The way you fix this properly is of course to check the bounding client rectangle using the getBoundingClientRect function. The bounding rectangle is given in window space coordinates, meaning you need only compare the bottom of the last item in the list to the window height to figure out if you've scrolled 'beyond the end of the list'...
what about them? do they need a swift application of the cluebat?
They need a VM with differencing disks, externalized and isolated storage of application data that is heavily vetted and scrutinized, and a nightly reset policy on the VM.
It's not supported by the manufacturer and not supposed to be possible, but I can do it.
Huge red flag right there. Probably means absolutely zero defense in depth added to the car's systems. It probably doesn't even have basic filtering of which devices connected to the bus are allowed to talk to each other.
I'm building a DLL and it doesn't have an app.config. The application that would be loading my dll doesn't have one either, although there's a manifest.
Ok. Didn't realize you were the library producer and not the consumer.
Assembly redirects are under the control of the process that loads the app domain, i.e., the application executable, and not under the control of the assemblies loaded into the app domain. You could try using a publisher policy, but I think that requires your assemblies (or at least their policies) to be GAC-ed.
Are you delivering an assembly to be used with third-party programs, as in a plugin, or are you delivering it to be used with third-party code, as in being used to build an application? In case of the latter: VS2013 and up should already add binding redirects when it detects versioning conflicts related to assemblies built against multiple versions of other assemblies within the same project.
(This actually is all still in the same segment of MSDN documentation that I already linked you to...)
but I'm not 100% sure where to apply it; I guess in the Progress executable[1]'s manifest? That's not really optimal for raisins.
In the app.config or web.config file (depending on what kind of application you're building). It's just plaintext editable XML; no fuss editing it, and it's completely local to that one application.
https://msdn.microsoft.com/en-us/library/7wd6ex19(v=vs.110).aspx
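For reference, a binding redirect in app.config looks roughly like this; the assembly name, public key token and version numbers below are placeholders, not taken from your project:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Identify the assembly whose version references should be redirected. -->
        <assemblyIdentity name="Some.Library" publicKeyToken="32ab4ba45e0a69a1" culture="neutral" />
        <!-- Redirect references to any old version onto the version actually deployed. -->
        <bindingRedirect oldVersion="0.0.0.0-2.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```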
Nothing more to see here.
Moving on...
You can be on the "correct" side of the argument and still be a useless piece of shit who doesn't accomplish anything at all. These two things are not mutually-exclusive.
Mozilla accomplished the replacement of GIF with APNG just fine in their own browser and all other browsers gracefully fall back to normal PNG where APNG is not supported. However, the PNG standards committee cock-blocked Mozilla on elevating it to a full standard because MNG was their own little precious snowflake they wanted to push.
No other browser is interested in touching non-standardized image formats thanks to the fallout ensuing from the whole <video> element codec debacle ... and so we come full circle back to post #1 in the thread...
I'm not 100% sure that the fallback to a single image is a good idea... I'd rather see no content at all than see a static image with no indication that something is wrong. If things have to fail, make the failure clear and make it happen as soon as possible. Wasn't that one of the basic ideas of modern programming?
The primary tenet of the web is the robustness principle: be conservative in what you do and be liberal in what you accept. Graceful fallback to a static image fits that principle. We tried failing early and failing hard before; do you remember XHTML?
I don't get why the MNG format never took off.
Well; we have APNG. Fairly broad support, except of course in IE, and weirdly: Chrome.
Even Apple added support for it to Safari.
the whole thing feels faster
While the UI itself overall feels faster, some operations definitely take a lot -- a lot -- longer. E.g. the time to fully load up a basic MVC5 solution has more than quadrupled as opposed to loading up the same solution in VS 2013 for me.
Why the fuck.
Two reasons:
Firstly, because H.264 and AAC have hardware decoding chips in lots of existing Apple and Nokia hardware and those are both more cost-efficient to produce (or buy from a third-party) and more energy-efficient for the end-user's phone than having to build and use a new implementation in software that has no dedicated hardware backing it.
Secondly, if I recall correctly, both Apple and Nokia have or had an arrangement that weasels them out of having to pay the full license fees to the MPEG group, which gives them a monetary advantage over competitors in the mobile smartphone/tablet market.
Don't worry, there'll be a wireless internet based approach soon enough. Because people want to be able to start their cars by their smartphones.
I swear, if that becomes a reality then the first thing I'll do with any purchased car is to have the receiver disconnected or: no sale.
None of which will help in this case if the equipment is vehicle-mounted. "We've worked out that it's somewhere in the vicinity of Indianapolis…"
Or if the equipment in question is a compromised car itself that was hacked to re-broadcast the signal, building up a swarm with every compromisible car it passes and infects. Not quite practical with RF and an exploit aimed at a radio, but if we're talking reinfection via Bluetooth and the average car-to-car distance in congested inner city traffic...
I'm pretty sure when a media player opens a local file it doesn't have to read the entire file to determine which codec to load.
Iirc this actually is the case in legacy AVI containers when several not-quite compatible streams have been shoe-horned into the container.
Interestingly, it's probably those kinds of degenerate use-cases with not-quite-compliant streams that are the reason for the 'quirky' playback enumeration values "probably" and "maybe", because they don't allow for a-priori 100% confidence that a file can be played back correctly.
From http://www.w3.org/TR/html5/embedded-content-0.html#dom-navigator-canplaytype
The `canPlayType(type)` method must return the empty string if type is a type that the user agent knows it cannot render or is the type `"application/octet-stream"`; it must return `"probably"` if the user agent is confident that the type represents a media resource that it can render if used in with this audio or video element; and it must return `"maybe"` otherwise. Implementors are encouraged to return `"maybe"` unless the type can be confidently established as being supported or not. Generally, a user agent should never return `"probably"` for a type that allows the codecs parameter if that parameter is not present.
The whole reason we're in this bloody mess is because a number of prominent stakeholders (notably; the browser vendors) were unable to settle on a core set of containers and codecs (and encoding profiles) that must always be supported. Namely; Apple demanded H264; Mozilla demanded an open-source container without submarine patents attached; and Google wanted to push its WebM format as an alternative.
Though there's plenty of other stuff you're free to shit on the W3C as an organization for, this isn't one of them. If anything; blame the stubborn-headed browser vendors and the people responsible for the lack of sane standards for reading back video and audio codec support in general. (The most bulletproof method on Windows is iirc still to just try to play back a file into a null audio/video output and monitor the pins to see if any failed to render. Obviously that doesn't work well with stream format selection...)
Or values that are actually subtypes of that single type. If you define all values as being instances of subclasses of a value type, you can do the strongly typed schtick.
Right. Dictionary<string,object> it is then; because that's more or less the only way to capture the expressiveness of holding 'anything' as a member property on a weakly typed object literal.
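Roughly what I mean (the keys and values here are invented): the values come back as object, so every access site needs a cast (or dynamic) to get the real type back.

```csharp
using System.Collections.Generic;

static class WeaklyTypedBag
{
    static void Main()
    {
        var literal = new Dictionary<string, object>
        {
            ["name"]  = "widget",
            ["price"] = 12.5m,
            ["tags"]  = new[] { "new", "shiny" }
        };

        // The dictionary is strongly typed over object, not over what was stored,
        // so the static type information is gone until you cast it back by hand.
        var price = (decimal)literal["price"];
        var tags  = (string[])literal["tags"];
    }
}
```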
In a way that will annoy people, sure, but it's legal.
So I don't really see where you're coming from when you say you have to write a lot of code to integrate things in strongly typed languages. Because the only difference between a weakly typed object and a dictionary is the getter syntax,
A strongly typed dictionary has strongly typed values of a single type. That's not the same as a weakly typed object literal. Try again please.
Yeesh...not trivial, but this is just getting worse and worse all the time.
The (not so) nice thing about technology?
It's only non-trivial until someone wraps it in a neat little package to go that makes it trivial.
Consider angry-at-the-world teen script-kiddies hacking your car and going to play a round of IRL Carmageddon.
Unless car manufacturers wake up and finally start getting their act together, this will happen. It's unavoidable and only a matter of time. The genie is out of the bottle now that Chrysler has taken this to the general public with their recall.
The problem was the invisible fox would somehow get into combat rarely (to this day I have no idea how, I had all his AI set so nothing should attack him and he should attack nothing, but it still happened sometimes.)
Scripts can override AI packages. And there are a ton of trippy scripts that set fight or flight behavior. Some quest driven, some random encounter driven, etc.
In the case of Markarth, its major questline is one giant cesspit of bugs that will in most cases either corrupt the quest and put you into an infinite loop of 'being sent to jail' or will set the entire city's populace as hostile to the player. It took the guys working on the unofficial Skyrim patch ages to get that junk sorted out, and I think you can still occasionally glitch it.
:sigh: Bethesda games...
brain dead type coercion
The actual rules are quite simple and the type of degenerate cases you have to cook up to prove it 'brain dead' are not how you would write day-to-day JS. If you want brain-dead non-intuitive type coercion, try PHP instead.
function-scoped variables
Hi, let me introduce you to the let keyword.
globals out the ass
Let me introduce you to ES6 modules, CommonJS require and AMD define/require.
Long before those arrived on scene a common and well-established practice was to wrap modules of code into immediately invoked function expressions and manually export only a limited set of entry-points into your code to the global scope.
Also, it's DOM and the browser implementation of JS that decided on having the global object be the DOM window object. That's strictly not part of the language.
insane type annotation system
That's because the annotation system is meant for aspect-oriented programming and not for declarative annotations in the vein of C#'s [Attribute] annotations.
typeof everything is object Object
That's because you're supposed to use instanceof to determine what type an Object is. The typeof keyword is strictly to find out what primitive type (or Object) a value is. Also, typeof returns just "object". What you're thinking of is the so-called string tag "[object Object]" produced by Object.prototype.toString. And that actually returns different values for built-in Object types, e.g., "[object Array]" for an array or "[object RegExp]" for a regular expression...
Having said that: ES6 is going to give you a means of overriding the base Object.prototype.toString implementation of string tag generation via the Symbol.toStringTag symbol, and nothing is preventing you from implementing a regular toString method on your own objects to override the one inherited from Object.prototype.
callback hell
An artefact of mis-use of (or rather; a complete lack of) user-defined flow control structures for async operations. You cannot really fault the language for not providing those, as asynchronous continuations were never part of its original design scope. And either way, ES6 and 7 are fixing that with generator functions, the yield keyword and later on the async and await keywords.
You can fault Joyent for the single biggest flaw in NodeJS's design here: centering on callback continuation style programming when superior alternatives like Promises/A were already a thing.
everything else
You know; I take it back. Your beef is not with DOM. You're just an idiot that believes JavaScript is still stuck in 1999 and hasn't advanced since.
Why, oh why, do the marketoids think that advertising should basically do the equivalent of shouting at the top of its lungs over the show you're trying to watch?
Tell me; have you ever been to an old-fashioned open air market?
You don't do much javascript, I gather?
As has been said so many times before; there is very little wrong with JavaScript.
Your beef is with DOM. And while they're often encountered in an unhappy / unholy marriage; DOM != JS
It makes CSS a little less shitty to write at no cost (just save it in VS and the CSS gets compiled automatically)
Only if you install the bloated Web Essentials add-on which is an entire can of worms (and very little else) in and of itself.
Prepare to be massively underwhelmed by what it actually will do for your actual code. Unless your code is something covered by asm.js right now.
I don't think so. The fact that it uses a binary AST representation instead of pure source code is alone going to provide compression and parse-time benefits to all JavaScript. Experiments already show the parsing is up to 20x faster than plain JS source. That kind of thing really matters for a web-app's boot times or for page load times on a JS-heavy website.
The problem is not "complex mathematical calculations in Javascript that need to be moved to assembler". The problem is "people are trying to build desktop apps in browsers, forcing them to do things they were never meant to do. And because that hurts so much, they use shitty frameworks to make developing just bearable enough".
The problem is shitty developers using shitty frameworks or using good frameworks shittily, instead of competent developers using good frameworks properly. Nothing else.
Sure; Webassembly won't help with that. It will, however, help with bridging the performance gap that exists with native code.
If I were president of the internet, the second thing I would do would be add some proper low-level bytecode language to replace javascript.
You're getting your wish: