JavaScript is more complicated because it's more simple.
Yes. Or as I once heard someone describe it in a moment of zen: JavaScript doesn't have a few screws loose. Its screws are firmly fixed, but everything else in the language is loose.
Me: "But but... we already sent the previews to the client... and he approved.... AGGGGGHHHH!"
Think a few images are bad? On the whole you could probably still have those renegotiated and replaced fairly easily.
I (or rather the project manager assigned to this project. Heheh....) had the pleasure of dealing with this particular problem with an entire batch of font families that the customer had not only signed off on, but had promoted to being part of their brand identity. Yeah ... that was not nice.
Which means, as your code snippet shows, the text can be lifted directly by a bot
The source order of the letters is not the on-screen order, which is assembled by shifting coordinates using transformation matrices. The on-screen order is what you actually have to enter, not the source order. So a bot cannot lift the text directly; it at least has to discern how the characters were shuffled.
(Of course, with humanly readable `translate()` transformations like this, that's still easy to the point of being trivial, so this is still a complete failure.)
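To illustrate just how trivial: here is a sketch (with made-up glyph data, not taken from any actual page) of a scraper recovering text from translate()-shuffled glyphs simply by sorting on the on-screen coordinate.

```javascript
// Hypothetical scraped glyphs: source order is scrambled, but each node's
// translate() x-offset gives away the on-screen order.
const glyphs = [
  { char: 'l', x: 30 },
  { char: 'h', x: 0 },
  { char: 'o', x: 40 },
  { char: 'e', x: 10 },
  { char: 'l', x: 20 },
];

// Sort by the on-screen coordinate and read the characters back off.
const text = glyphs
  .slice()
  .sort((a, b) => a.x - b.x)
  .map(g => g.char)
  .join('');

console.log(text); // "hello"
```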
Realtek is known for absolute shit software; their drivers are generally worse than any other manufacturer's.
Remember when Microsoft re-engineered audio drivers to run outside of kernel space, so that 'bad drivers' would stop taking down systems?
In hindsight... that was probably to mitigate the Realtek problem...
Have any of the mobile developers using the hamburger menu concept done usability testing on it? What were the results?
Surprisingly, yes: there are mobile developers that have done actual real user testing regarding 'the hamburger icon'.
Afaik the results were split about fifty-fifty between users that managed to locate the hamburger and managed to understand that it hid a menu of sorts with actionable items and users that didn't.
Add the text 'menu' or similar to the hamburger icon and it raises success by about 10% to a better 60%. Add a clear demarcation around the icon, mimicking a button, and the hint that the item is tappable/clickable raises success to about that same 60%. Do both and you get somewhere between a 60% and 70% success rate.
And now for the kicker: LEAVE OUT THE HAMBURGER ICON and keep only the text and button markings and your success rate will remain between 60% and 70%.
In other words: the contribution of the hamburger icon itself is zero, and we can probably safely assume that the 50% of users that managed success in the icon-only case only managed it due to prior exposure to this broken piece of user interface, to the point where they underwent the virtual equivalent of having it beaten into them with a stick.
Sam comes out with a nice summary of Discourse's perf problems.
which states that:
This specific example is just an illustration of the endemic issue. In React and some other frameworks you just render and don't carry around the luggage of "binding", instead you just rerender when needed. This approach means that you don't need bookeeping, the bookeeping is the feature that is killing android perf.
Oh right. Because diffing the live DOM tree against a newly cooked string and performing piecemeal updates isn't slow at all, right? Well sunshine; guess what...
No: the real problem is inefficient bookkeeping and creating multiple bindings that all update class name lists and all touch the DOM in their own time. Change that to perform class name bindings completely in JS space and coalesce all of it using batched event handling. Then you have one single string that gets assigned directly to a node's `className` property, once.
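A minimal sketch of that idea (all names here are hypothetical, not Discourse's or any framework's actual API): do the bookkeeping in plain JS and write `className` in one go.

```javascript
// Sketch (hypothetical names): collect class-name changes in JS space,
// then touch the DOM exactly once with a single string assignment.
function createClassBatcher(node) {
  const classes = new Set();
  return {
    add(name) { classes.add(name); },       // bookkeeping stays in JS space
    remove(name) { classes.delete(name); },
    flush() {                               // one DOM write, coalescing all changes
      node.className = [...classes].join(' ');
    }
  };
}

// Stand-in object so the sketch runs outside a browser.
const node = { className: '' };
const batcher = createClassBatcher(node);
batcher.add('active');
batcher.add('selected');
batcher.remove('selected');
batcher.flush();
console.log(node.className); // "active"
```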
I think devs are realising that building everything clientside isn't the way to go.
Building everything clientside isn't the problem.
Trusting people that have poor insight into algorithmic performance and excruciatingly poor insight into what makes JavaScript and the DOM tick to build everything clientside; that is the problem. And that's par for the course when you take a bunch of people that have been treating JavaScript as a 'toy' for 10+ years while they happily shat out the most grotesque pieces of code possible, whose poor performance was all masked away behind the immense amount of horsepower available on the server.
Frameworks help: they simplify and provide application guidance, but most can still be easily subverted into doing the wrong thing if you do not have the first clue about what you are doing or the environment you are doing it in. In fact, they may very well make the situation worse. {{ Insert witty reference to C++ and loaded footguns here. }}
Of course you can cite education or evangelism as the solution, but that is a battle you will LOSE when you are by far outgunned by idiots that will all too happily provide further fuel to the fire by giving 'helpful advice' or offering their 'profound knowledge' to others.
Effectively SPAs and JS are stuck in the uncanny 1999 PHP valley. It's going to hurt immensely to get it to move out of that sinkhole.
You have a working scrollbar, it's right there at the bottom right of the topic:
The substitute you are offering to fill this gap is nothing more than a bastard lovechild of basic previous/next paging and a progress bar. It is nothing close to what a scrollbar represents.
Scrollbars operate on a stable locality of reference and have a grip that you can grab and move up or down a fixed amount to move up or down a corresponding amount of viewport 'pages', in a ratio that you have mentally mapped to the area of the scrollbar.
Your poor excuse of a substitute neither offers the ability for direct interaction with the scroll bar nor offers any grounds to establish the cognitive relation between the progress bar part of this bastard child and the position in the scrolled page. I'd go so far as to say that it actively interferes with user cognition: it operates on a horizontal axis, perpendicular to the actual scrolling axis, obscuring the fact that there is any relation there to begin with.
The paging portion of this substitute doesn't even offer normal paging. Click the up or down arrows -- Up & down arrows on a horizontal bar, another cognitive mismatch. -- and you move all the way back to the first item or all the way forward to the last.
Which means it basically fails on all fronts.
This guy tries to use his street cred to make her sound less terrible. His claim to fame is, basically, knowing jQuery and CSS. Big fucking deal, guy.
Not a big surprise.
The few times Coyier writes something original and current for his CSS Tricks site, it usually ends up in sub-optimal hacks riddled with undisclosed/undiscovered caveats. The rest of the content is (often outdated) rehashes and summaries, or 'guest' posts.
I place that between quotation marks, because most are actually 'copied-with-permission' posts masquerading as content written originally for CSS Tricks. A lot of that material is also donated by people that seem to have equally questionable knowledge and are simply parroting others or reinventing square wheels.
All signs point to the guy being a firm part of the 'copied-from-SO' generation himself, and in this comment he even freely admits so, while he continues to see himself as some kind of 'rockstar' or 'cowboy' (i.e. roughing it out). A lot of the vitriol that is rightfully being spewed back in Lara's face is maybe hitting a liiii---tle bit too close to home.
Are you planning to do anything about it, or to just sit there ~~bitching~~ complaining about it?
I petitioned for some project time to develop an alternative in-house for my employer.
Our first implementation was just finished last week and we're currently reviewing code, touching up where necessary and gearing up to run a beta inside an internal project before shipping it out on our clients' websites and webapps.
The current architecture/design is two-tiered with a logic tier that handles composition of month calendar data objects and a second, modular UI tier. The second tier is currently implemented using controllers, view templates and computed bindings that are part of the MVVM framework we base most of our modern stuff on.
The whole thing runs on top of MomentJS for date handling and potential future time handling. (As stated; the UI tier is modular and the base is extensible enough to add it on without causing breakage.)
I'm already way past planning to do something about it.
That would lead to an security pop-up avalanche.
Not if you design your app around that particular limitation and don't assume (like a lazy shithead) that your code will be running with full admin permissions, Win98-style.
The fact that at least one of the guys behind Discourse is considering 'issuing a fake scroll event' as the fix should tell you all you need to know.
The way you fix this properly is of course to check the bounding client rectangle using the `getBoundingClientRect` function. The bounding rectangle is given in window-space coordinates, meaning you need only compare the bottom of the last item in the list to the window height to figure out if you've scrolled 'beyond the end of the list'...
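The comparison itself is a one-liner. A sketch of just the threshold logic (in a real page `lastItemBottom` would come from `lastItem.getBoundingClientRect().bottom` and `viewportHeight` from `window.innerHeight`):

```javascript
// getBoundingClientRect() reports coordinates relative to the viewport,
// so no scroll offsets are needed: just compare against the window height.
function pastEndOfList(lastItemBottom, viewportHeight) {
  return lastItemBottom <= viewportHeight;
}

console.log(pastEndOfList(400, 800));  // true: the last item is on screen
console.log(pastEndOfList(1200, 800)); // false: more list below the fold
```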
i wonder how we could get XML in there for extra enterprise goodness?
XSLT to perform the time truncation?
My special helper
A special helper for a 'special' framework.
(A nice example of why you don't drink the Ember Kool-aid, btw. )
people who think that engineering is not at least as much about art as about process and metrics.
Absolutely true.
Iirc, it was one of the world's most renowned artists that phrased this best. I believe it was Pablo Picasso who once stated (translated to English, of course) that one should "learn the rules like a professional, so that one may break them like an artist."
Engineering at the level of abstraction that software engineering takes place at requires a certain kind of finesse and feeling for the subject matter, as well as a rigorous understanding of the rules; a combination that you typically find in artisan occupations. It's this combination that allows you to engineer solid products while also being able to break out of ill-defined boxes to arrive at better solutions to situational problems.
Well; here we go again. Wonder who's going to end up being suckered into this one.
@CodeNinja said:
Actually freaked my direct supervisor off today, he came in to ask me to do something, I looked him in the eye and said, "Ask me to do that, and I'll quit right here". Now I don't have to do that, and he's doing a bunch of leg-work to try and get me hardware to test with.
Congrats. You've made a little wave and it paid off short-term. Now use that opened-up time to start looking for job alternatives and get the fuck out of that hell hole of a sweatshop. You have no future there.
Most of your colleagues accept structural unpaid overtime and have kept on working there under those conditions for years. There are two types of people that stick around in places like that and under those conditions:
Neither type makes for a productive environment to foster your own career, and both combined are enough to kill it in its tracks.
Why are their products incompatible with the default drivers?
Do you remember those dirt-cheap copy-cat USB chipsets that brick when the official driver is applied?
Have you looked at Samsung's prices?
Gary Bernhardt already documented a number of things in his 2012 lightning talk titled "Wat", but the video seems to have gone missing. Here's the abridged version:
It's quite interesting if you actually know about the type conversion rules applied by the addition operator and the general parsing rules.
```js
> [] + []   // Array plus array
""          // Evaluates to empty string
```
Type conversion starts off by obtaining the primitive value of the left and right hand operands, as addition is only defined on primitives. To try to obtain a primitive value, first the `valueOf` method is run. However, for an array this returns the same array, which is still an object and not a primitive value. Because the `valueOf` method failed to produce a usable primitive value, the `toString` method is tried. For an empty array, this produces an empty string. A string is a primitive value and is returned.
Both the left and the right hand operand yield empty strings for their primitive values. As at least one of the operands is a string, both are cast to strings and the addition operator is executed as a string concatenation operation. Concatenation of two empty strings yields another empty string.
```js
> [] + {}            // Array plus object
"[object Object]"    // Is object
```
Again, start by obtaining the primitive values. The primitive value of the empty array is again the empty string, as before. The `valueOf` method of an object produces the object itself, which is not a primitive value. Again, the `toString` method is used, which produces `"[object Object]"`.
As at least one of the operands is a string, both are cast to strings and the addition operator is executed as a string concatenation operation. Concatenating an empty string with `"[object Object]"` results in an identical `"[object Object]"` string.
```js
> {} + []   // And likewise, the reverse
0           // Equals zero
```
The parser favors treating code as a statement when it can, not as an expression. The parser treats the whole as a statement where the leading `{}` is an empty code block (which is skipped) and then treats `+ []` as an expression formed by a unary plus operator on an empty array. This operator converts the array to a primitive value, producing an empty string, and then casts the primitive value to a number. Casting the empty string to a number produces the numerical zero.
If you add parentheses, then the whole is treated as an expression:
```js
> ({} + [])          // Added parentheses turn the whole into a single expression to evaluate
"[object Object]"
```
This again yields concatenation of the string `"[object Object]"` with an empty string, again resulting in an identical `"[object Object]"` string.
```js
> {} + {}   // Object plus object
NaN         // Not a Number, obviously
```
The same parsing rule applies and the empty code block is skipped. The `+{}` expression yields `NaN` because the primitive value of the object literal is the string `"[object Object]"`, which cannot be numerically represented when an attempt is made to cast it to a number.
If you add parentheses, then the whole is treated as an expression:
```js
> ({} + {})          // Added parentheses turn the whole into a single expression to evaluate
"[object Object][object Object]"
```
As expected, this yields a concatenation of two `"[object Object]"` strings.
I can personally add the following:
```js
> (function () { return this; }).call(null)   // Set a property on a null object?
Window { ... }                                // Whoops, it's a global
```
This is because the value assigned to `this` is auto-boxed into an object. The result of the boxing operation on `null` or `undefined` is the global object, which in browsers corresponds to the window object.
(It's actually no longer possible to do this in ES5 strict mode, which passes values through to the `this` keyword without any kind of boxing, specifically because of the security problem of re-exposing the global object in potentially sandboxed environments.)
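You can observe the difference directly:

```javascript
// Functions created via the Function constructor are sloppy-mode by default:
// a `this` of null is boxed to the global object.
const sloppyThis = Function('return this;').call(null);
console.log(sloppyThis === globalThis); // true

// In strict mode, null passes through to `this` unboxed.
const strictThis = (function () { 'use strict'; return this; }).call(null);
console.log(strictThis); // null
```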
```js
> var undefined = 12   // Set a variable that happens to be a reserved word?
12                     // Error? What error?
```
`undefined` is not a reserved keyword. It is a property of the global object that is initially unassigned and thus is of type 'undefined'. Historically its value was assignable by user code (ES5 made the global property read-only), and it can still be shadowed by variables of the same name.
```js
> {}          // Sanity check
undefined     // Congratulations, you're insane
```
This is executed as a statement and is again interpreted as an empty code block. An empty code block yields an undefined value. If you want to get your empty literal back, turn it into an expression by adding parentheses:
```js
> ({})        // Added parentheses turn this into an expression
Object { }
```
The real WTF here is not JavaScript. The real WTF is people not learning about the language's rules and taking their initial interpretation as canon.
For example, the teacher cannot control how prepared the students are.
Nice anecdote incoming:
One of my professors at university once opened his first lecture of the semester with a round of questions regarding previous knowledge acquired. Just a few basic terms and such. A few by counting of hands, a few via answering some direct questions shot at random people. (And if you dared snicker, you were next on the list or asked to critique on your peer's explanation.)
At the end of that round, he went back and grabbed his paperwork containing the notes for that lecture, tidying the stack theatrically by tapping on his desk a few times. You could almost feel the air becoming ionized.
He walked back forward to address everyone, straightened his back and said: "Right! You, you, you and you!" (pointing directly at a few of the students) "You either lack the required prerequisite knowledge or think this is going to be a free ride. It will not be a free ride and it will be hard. Frankly, it will be hard enough to find the requisite time to get everyone else that is actually intent on succeeding across the finish line. I do not want to waste any of my time or your peers' time on you and likely neither do they. Stand up, leave now and do not return next class. We will wait."
Best damn course EVER.
Sheesh...just use the CLI like God intended already.
Contrary to popular belief; Torvalds is not God. ;-)
It's not supported by the manufacturer and not supposed to be possible, but I can do it.
Huge red flag right there. Probably means absolutely zero defense in depth added to the car's systems. It probably doesn't even have basic filtering of which devices connected to the bus are allowed to talk to each other.
None of which will help in this case if the equipment is vehicle-mounted. “We've worked out that it's somewhere in the vicinity of Indianapolis…”
Or if the equipment in question is a compromised car itself that was hacked to re-broadcast the signal, building up a swarm with every compromisible car it passes and infects. Not quite practical with RF and an exploit aimed at a radio, but if we're talking reinfection via Bluetooth and the average car-to-car distance in congested inner city traffic...
I'm pretty sure when a media player opens a local file it doesn't have to read the entire file to determine which codec to load.
Iirc this actually is the case in legacy AVI containers when several not-quite compatible streams have been shoe-horned into the container.
Interestingly, it's probably those kinds of degenerate use cases with not-quite-compliant streams that are the reason for the 'quirky' playback enumeration values "probably" and "maybe", because they don't allow for a priori 100% confidence that a file can be played back correctly.
From http://www.w3.org/TR/html5/embedded-content-0.html#dom-navigator-canplaytype
The `canPlayType(type)` method must return the empty string if type is a type that the user agent knows it cannot render or is the type `"application/octet-stream"`; it must return `"probably"` if the user agent is confident that the type represents a media resource that it can render if used in with this audio or video element; and it must return `"maybe"` otherwise. Implementors are encouraged to return `"maybe"` unless the type can be confidently established as being supported or not. Generally, a user agent should never return `"probably"` for a type that allows the codecs parameter if that parameter is not present.
The whole reason we're in this bloody mess is that a number of prominent stakeholders (notably the browser vendors) were unable to settle on a core set of containers and codecs (and encoding profiles) that must always be supported. Namely: Apple demanded H.264; Mozilla demanded an open-source container without submarine patents attached; and Google wanted to push its WebM format as an alternative.
Though there's plenty of other stuff you're free to shit on the W3C as an organization for, this isn't one of them. If anything; blame the stubborn-headed browser vendors and the people responsible for the lack of sane standards for reading back video and audio codec support in general. (The most bulletproof method on Windows is iirc still to just try to play back a file into a null audio/video output and monitor the pins to see if any failed to render. Obviously that doesn't work well with stream format selection...)
Redeploying adware and bloatware straight from BIOS/UEFI firmware to thwart laptop re-installations from clean install media.
Oh wait...
what happens if you literally light a computer on fire while it's running
Are you familiar with the expression "crash and burn"?
Proprotip: Be nice to secretaries in general.
Proproprotip: Just be nice/polite/understanding in general.
If you act like a dick to people, you can be sure that people will do the same to you. You never know when and with whom you'll have to cash in on some built-up good will.
( Pro^4tip: that doesn't mean "let people take advantage of you". )
Why the fuck.
Two reasons:
Firstly, because H.264 and AAC have hardware decoding chips in lots of existing Apple and Nokia devices, and those are both more cost-efficient to produce (or buy from a third party) and more energy-efficient for the end-user's phone than building and using a new software implementation that has no dedicated hardware backing it.
Secondly, if I recall correctly, both Apple and Nokia have or had an arrangement that weasels them out of having to pay the full license fees to the MPEG group, which gives them a monetary advantage over competitors in the mobile smartphone/tablet market.
That's what the Special Disc Marker Pen folks tell you, at any rate. I have never actually seen an old disc with evidence of damage from marker writing - which, given the fairly extreme volatility of all those "aggressive" solvents, I find completely unsurprising.
I've seen a fair few exhibiting flaking of the top layers where permanent marker ink was used.
Just pulled up an archive DVD, created May 6, 2004 per the Sharpie-ink on the surface. Read data from it just fine.
Not all permanent markers are bad. Those that use isopropyl alcohol as a solvent have a negligible reaction with the disc's polycarbonate and its various protective coatings. (I think original Sharpie markers use this, which would explain why your archive DVD is still good.) However, there are also plenty of markers based on strong organic solvents like acetone or benzene. Those are the ones you need to avoid. They will react with the disc over time.
So, before you use a marker not explicitly meant for use on disc media, investigate the label and see if you can find the type of solvent used. If it's isopropyl alcohol, you should be fine. If you find heavier stuff like acetone; best not to use it. If you don't find anything, it depends on how disposable your data is, I guess?
Agree with thegoryone.
OP shows an example of atomic design and OOCSS being followed through to their absolute, but horrific and nonsensical, end. The idea and concept behind these techniques is good, but like all good things: you can have too much of it.
When applied with moderation you get a really robust set of ground rules with which to assemble a website in a way that leaves it highly malleable, stylistically highly cohesive and ultimately far more maintainable than the mess of conflicting specificities any project of comparable size using 'classic' CSS would eventually devolve towards.
It would seem potentially possible as "who does have permission to access a FILE_NOT_FOUND ?"
Strictly speaking, that is actually a file-system-dependent criterion, and one that is made quite interesting by the existence of filesystems that cascade access permissions. Taking NTFS as an example:
In NTFS, everyone that is granted read access to a folder via its ACL is by default recursively given read access to the underlying folders and files.
(Note that in truth, things are a bit more complex, with both `allow` and `deny` entries and the fact that a user can belong to multiple groups that combine both types of entries. But let's forego that for a bit, lest we go cross-eyed...)
As a non-existent file also has no ACL to speak of, technically everyone that has access to its parent folder also has access to read the non-existent properties of the non-existent file as well. If parent folders also do not exist that relation recurses all the way back up to the root of the tree, a.k.a. the volume root, to which typically all users have read access.
An explicit `FileNotFoundException` is imho still the only correct exception to throw when attempting to fetch meta-data on a non-existent file if you want to guarantee any kind of predictability. ;)
Dude, the warnings you get from everybody here if you disable UAC in windows are just as bad....
Which is hilarious if you consider that Windows 8 and up contain a bug in the handling of split-token administrators that allows the prompt to be bypassed anyway.
You can only protect yourself against this by either putting UAC at the highest protection level or by logging in as a regular user and elevating to a fully different administrator account on demand. And that means ---
Typing the root password every 15 seconds for every little fucking thing
And so we've come full circle.
(Oh, and btw, if we are to believe the extended discussion in the cited bug report, Microsoft no longer treats UAC bypasses as security issues either, meaning this probably won't be fixed.)
I just invented one. It's called Yamiscript. There are no operators, no keywords, and no compiled output. Anything you enter is a syntax error. Boom, no bad code ever.
Someone will invent a way to call out to the runtime and rely on the short wait time until the syntax error bailout occurs to defeat a race-condition or default onto a fallback code path of a larger process.
Bad code happened.
Logging in with KeePass is super-smooth because input focus automatically goes to the user ID field on page load, and you get a choice of phone or dongle 2FA. I can honestly say that I have never had cause to complain about this bank. It Just Works. Recommended.
Nonces sent via SMS are not that great either, really, as SMS is not secure.
If you want real two-factor authentication, you need something only you know and something only you have. SMS is neither because it can be intercepted (and even modified) mid-flight.
To log in to my internet banking environment I supply the card number and bank account number. Those are not exclusively known to me, of course. However, I then have to take a small, dedicated, tamper-proof closed-system card reader supplied by my bank, insert my bank card (something only I have) and use my PIN (something only I know) to generate a temporary 8-digit login code.
The encryption method used by the reader has a time component to it that removes the possibility of replay attacks should the TLS connection on top of the transfer have been broken.
Each individual transaction I set up has to be signed with the same card reader; the website presents an initial 8-digit seed token the reader takes as its first input. As its next input it takes up to 8 digits of the amount of money involved in the transaction. For large amounts, as a third input it takes the last numbers of the recipient's bank account. And again, I have to use my bank card and my PIN to complete the token, which again has a time component added to it as well.
That, ladies and gentlemen, is the only correct way to secure online banking.
And don't worry about CSS, nobody understands CSS. You just need to be able to bullshit past it. (Say stuff like, "we need to increase the specificity of these selectors" and you'll sound like a genius.)
CSS selectors and most of the properties themselves actually aren't that hard to understand or keep in check.
It's when nutjobs that don't know what they're doing start plowing through a nicely organized CSS codebase that nightmares start, because as part of the openness of how you can write rules and how specificity can be abused, shit will quickly snowball out of control to produce even greater shit to the point where you can only 'fix' things within a reasonable timeframe by piling on even more shit.
If I were interviewing and your answer to an interview problem were to increase the specificity of a set of selectors, you'd better have a damn good argument for wanting to do so, because it'd be almost exclusively the wrong answer.
Keep selectors shallow. Componentize. Assemble small components to achieve bigger ones. Keep layout and structural components strictly separate from content or content components. (That also means avoiding dependencies such as a content component 'knowing' about physical dimensions of a layout component.)
Basically: with great power comes great responsibility, and even greater footguns.
At least WebAPI has reasonable scope. It's still way more bloated than it needs to be, though. And engineered in a wack-ass way which makes something really fucking simple, like submitting a form with a file upload, nearly impossible.
Of course it still is. It's still built by the same crew that built WCF, making WebAPI the unholy marriage of ASP.NET MVC's convention-over-configuration and WCF's configuration-over-convention, which in far too many cases leaves you smack in the middle of nowhere without a clue what to do when something inevitably breaks and keels over.
```cs
// 'empty instance'
SomeClass value = null;

// equality comparison
Object.Equals(null, other);

// predicate filter (convoluted just to illustrate that it can be done with a one-liner)
var value = new[] { obj }.Where(x => x != null).Where(predicate).FirstOrDefault();

// mapping
var value = new[] { obj }.Where(x => x != null).Select(mapper).FirstOrDefault();

// get value (Java's Optional.get() throws NoSuchElementException here)
if (obj == null) { throw new InvalidOperationException(); }
var value = obj;

// get hashcode
int hashcode = obj != null ? obj.GetHashCode() : 0;

// if present
if (obj != null) { consumer(obj); }

// is present
bool value = (obj != null);

// or else
object value = obj ?? new object();

// or else get
object value = obj ?? other();
```
Trivial, and all expressed using `null` in the internals. In other words, `Optional<T>` does indeed not do anything that `null` doesn't do.
Are you certain that's true at that point? The way I hear it, Son and Spirit were monkey-patched in later.
So, reality was programmed in a weakly-typed language? "Always bet on JavaScript." strikes again.
brain dead type coercion
The actual rules are quite simple, and the kind of degenerate cases you have to cook up to prove it 'brain dead' are not how you would write day-to-day JS. If you want brain-dead, non-intuitive type coercion, try PHP instead.
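For reference, the day-to-day cases behave exactly as the rules say: `+` concatenates when either operand is a string, while the other arithmetic operators always coerce to numbers.

```javascript
console.log(1 + '2');     // "12" — one operand is a string, so concatenation
console.log('3' * 2);     // 6    — * is numeric-only, "3" is cast to a number
console.log(1 + 2 + '3'); // "33" — evaluated left to right: 3 + "3"
console.log(+'');         // 0    — unary plus casts the empty string to zero
```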
function-scoped variables
Hi, let me introduce you to the `let` keyword.
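The difference in one snippet:

```javascript
// var is function-scoped: the loop variable leaks out of the block.
for (var i = 0; i < 3; i++) {}
console.log(i); // 3

// let is block-scoped: j does not exist after the loop.
for (let j = 0; j < 3; j++) {}
let leaked;
try { leaked = j; } catch (e) { leaked = e.name; }
console.log(leaked); // "ReferenceError"
```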
globals out the ass
Let me introduce you to ES6 modules, CommonJS `require` and AMD `define`/`require`.
Long before those arrived on the scene, a common and well-established practice was to wrap modules of code into immediately invoked function expressions and manually export only a limited set of entry points into your code to the global scope.
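For reference, the classic pre-ES6 module pattern looks like this (names are made up for illustration):

```javascript
// An immediately invoked function expression keeps helpers private and
// exports only one entry point to the enclosing (global) scope.
var counter = (function () {
  var count = 0; // private: invisible outside the IIFE
  function increment() { return ++count; }
  function current() { return count; }
  // Only this object leaks out.
  return { increment: increment, current: current };
})();

counter.increment();
counter.increment();
console.log(counter.current()); // 2
console.log(typeof count); // "undefined" — the internals stay hidden
```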
Also, it's the DOM and the browser implementation of JS that decided on having the global object be the DOM window object. That's strictly not part of the language.
insane type annotation system
That's because the annotation system is meant for aspect-oriented programming and not for declarative annotations in the vein of C#'s `[Attribute]` annotations.
typeof everything is object Object
That's because you're supposed to use instanceof to determine what type an Object is. The typeof keyword is strictly for finding out what primitive type (or Object) a value is. Also, typeof returns just "object". What you're thinking of is the so-called string tag "[object Object]" produced by Object.prototype.toString. And that actually returns different values for built-in Object types, e.g., "[object Array]" for an array or "[object RegExp]" for a regular expression...
Having said that: ES6 is going to give you a means of overriding the base Object.prototype.toString implementation of string-tag generation via the Symbol.toStringTag symbol, and nothing is preventing you from implementing a regular toString method on your own objects to override the one inherited from Object.prototype.
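To make the distinction concrete:

```javascript
// typeof reports the primitive category, instanceof checks the
// prototype chain, and Object.prototype.toString yields the string tag.
var arr = [];
var t = typeof arr;                                 // "object"
var isArr = arr instanceof Array;                   // true
var tag = Object.prototype.toString.call( arr );    // "[object Array]"
var reTag = Object.prototype.toString.call( /x/ );  // "[object RegExp]"
```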
callback hell
An artefact of misuse (or rather, complete lack) of user-defined flow-control structures for async operations. You cannot really fault the language for not providing those, as asynchronous continuations were never part of its original design scope. And either way, ES6 and ES7 are fixing that with generator functions, the yield keyword, and later the async and await keywords.
You can fault Joyent for the single biggest flaw in NodeJS's design here: centering on callback continuation style programming when superior alternatives like Promises/A were already a thing.
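The generator half of that fix can be sketched synchronously; a real runner would resume on a callback or promise resolution rather than immediately, and `run` here is a made-up helper, not a standard API:

```javascript
// A trivial generator runner: each `yield` suspends the function, and
// the runner feeds the result back in, so nested callbacks flatten
// into straight-line code.
function run( genFn ) {
    var gen = genFn();
    var step = gen.next();
    while ( !step.done ) {
        // A real runner would wait on a callback/promise before resuming.
        step = gen.next( step.value );
    }
    return step.value;
}

var total = run(function* () {
    var a = yield 1;      // reads like synchronous code
    var b = yield a + 1;  // but each line is a resumable suspension
    return a + b;         // 1 + 2
});
```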
everything else
You know, I take it back. Your beef is not with the DOM. You're just an idiot who believes JavaScript is still stuck in 1999 and hasn't advanced since.
The real issue is that Discourse doesn't really parse anything. It's just a toxic hellstew of regular expressions.
FTFY
Touché!
[EDIT]
And for added irony, Discourse completely shat over the nested quote for whatever reason, so I had to edit to compromise.
Go Discourse! [/sarcasm]
Well it is a good benchmarking procedure to have the other components of the system good enough not to interfere with the testing.
In general, I agree that when running a scientifically valid test you should have as many constants as possible and only vary what you are actually measuring.
What I'm opposed to is the fact that a system with lopsided specs is needed to demonstrate the software even has any positive effect at all. Basically, the benchmark was written against conditions that favor the software, instead of against the real-life conditions in which the software is expected to operate.
DI and IoC are often muddled by a lot of people.
Inversion of Control means inverting the traditional control flow of procedural programming, where your application code drives library code to get something done. Instead, the library code drives bits and pieces of your application.
Lambda delegates, callbacks, promises, evented systems, etc. are all things typically used for or with Inversion of Control.
DI (dependency injection) means the dependent bits of code you are calling out to are not looked up from inside your own code, but injected from the outside. You can either set all of that up by hand, or you can use a DI framework that uses IoC to instrument the construction of your classes for you in such a way that it can inject the dependencies for you.
I.e. IoC is one way to achieve DI.
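In miniature (names made up for illustration):

```javascript
// Dependency injection by hand: the report never looks up a logger;
// whoever constructs it decides which logger it gets.
function makeReport( logger ) {
    return {
        save: function( rows ) {
            logger.log( "saving " + rows.length + " rows" );
            return rows.length;
        }
    };
}

// The composition root wires in a concrete dependency.
var messages = [];
var report = makeReport({ log: function( msg ) { messages.push( msg ); } });
var saved = report.save([ 1, 2, 3 ]); // 3
```

Swap in a real logger, a mock, or a no-op without touching makeReport; that substitutability is the entire point of injecting the dependency.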
If I were president of the internet, the second thing I would do would be add some proper low-level bytecode language to replace javascript.
You're getting your wish:
Is that what they mean when they say to do a 'clean install'?
No. That also involves some kind of soap.
Yes, this we knew. It's so helpful that it allows this - as opposed to doing it as standard then only unescaping the entities that make known legal tags.
It shouldn't attempt to parse inside code regions
<div>like this</div>
The real issue of course being that Markdown without GitHub-style fenced code blocks is a complete DICK for rendering code blocks. And this on a forum dedicated to WTFs in programming, of course... (Hey Discourse guy; see the problem there?)
.length
Think again:
define(function() {

    function Sparse() {
        if ( !( this instanceof Sparse )) return new Sparse();
    }

    Sparse.prototype = { constructor : Sparse };

    Object.defineProperty( Sparse.prototype, "length", {
        get : function() {
            var me = this;
            var numeric = Object.keys( me ).map( function( name ) {
                var num = Number( name );
                return ( Number.isInteger( num ) && ( name === String( num ))) ? num : 0;
            });
            // Seed with 0 so an empty object reports length 0 instead of -Infinity.
            return Math.max.apply( Math, [ 0 ].concat( numeric ));
        },
        set : function( value ) {
            var me = this;
            if ( !Number.isInteger( value ) || ( 0 > value )) {
                throw new Error( "Not a non-negative integer." );
            }
            Object.keys( me ).forEach( function( name ) {
                var num = Number( name );
                if ( Number.isInteger( num ) && ( name === String( num ))) {
                    if ( num >= value ) {
                        delete me[ name ];
                    }
                }
            });
        }
    });

    return Sparse;
});
require([ "sparse" ], function( Sparse ) {
    var test = new Sparse();
    console.log( test.length );          // 0
    test[2] = "a";
    console.log( test.length );          // 2
    test[5] = "b";
    console.log( test.length );          // 5
    console.log( JSON.stringify( test )); // {"2":"a","5":"b"}
    test["foo"] = "c";
    console.log( test.length );          // 5
    console.log( JSON.stringify( test )); // {"2":"a","5":"b","foo":"c"}
    test.length = 3;
    console.log( test.length );          // 2
    console.log( JSON.stringify( test )); // {"2":"a","foo":"c"}
});