WTF Bites
-
@Polygeekery said in WTF Bites:
He has sent zero emails today. He has made zero calls. He is a salesperson. Holy fuck.
Not for long...
-
Ha.
The human brain has about 86 billion neurons. The biggest chips to date (the latest Xilinx FPGAs, according to Wikipedia) have 50 billion transistors, and I'm pretty sure you need a whole bunch of those to simulate even the simplest mathematical "neuron". And the rest of the slides basically say "our chip is faster than everything else" over and over. Yeah, I'm sure these guys with $25 million can totally outperform Intel, which made $21 billion in profits last year.
-
WhyTF does my espresso machine take longer to boot up than my laptop?
Is it running Android? That can take quite a while to boot…
-
A startup company claims to have a CPU design that's faster than Intel's, smaller than ARM's, uses 10x less power (than what? Doesn't say), and is 3x cheaper (again, cheaper than what?).
The “faster than Intel, smaller than ARM” is because they're relying on compilers to do VLIW magic. The history of that approach has been chequered at best. The fact that they're planning to use the TSMC 7nm fab explains the power claim (but I really want to see that verified) and the cheaper claim probably follows from them not buying into either the x86 or ARM IP stacks. The number of cores per chip isn't very impressive (our next gen chip is looking at 2–3 times that on a much cheaper 20nm process), and I suspect their interconnect is the usual shit (because almost everyone follows that same well-trodden path) and so their system won't scale up well to big neural networks.
IOW, what they're doing is so revolutionary it's significantly behind what academics are up to. For sure, they're not critical for any human brain project; I'd have heard of them before if they were.
And most of that $24M will go on getting stuff to and through initial fabbing. Until they get that sorted, they won't know their actual yield figures and can't figure out what their costs per chip are. They probably also are looking to ship some chips with individual (broken) cores turned off; that'll cut costs quite a lot, but it's quite tricky to characterise all the ways that hardware actually breaks during manufacture…
-
@anonymous234 said in WTF Bites:
The human brain has about 86 billion neurons.
It's the number of synapses (connections between neurons) that really hurts. Some neurons (I can't remember if they're cerebellar or hippocampal) have on the order of a quarter of a million synapses each. We have absolutely nothing designed to be able to tackle that scale.
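For a sense of the scale, a back-of-envelope in Python (the ~7,000 average synapses per neuron and the 4 bytes per weight are my own rough assumptions, not anyone's measured figures):

```python
# Back-of-envelope: memory just to store one weight per synapse.
NEURONS = 86e9          # ~86 billion neurons in a human brain
SYNAPSES_PER = 7_000    # rough average; some cells reach ~250,000
BYTES_PER_WEIGHT = 4    # one float32 per synapse, ignoring indices

total_synapses = NEURONS * SYNAPSES_PER
petabytes = total_synapses * BYTES_PER_WEIGHT / 1e15
print(f"{total_synapses:.2e} synapses ~ {petabytes:.1f} PB of weights")
```

That's a couple of petabytes just for bare weights, before you store any connectivity, delays, or state per synapse.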
-
@anonymous234 said in WTF Bites:
The human brain has about 86 billion neurons.
It's the number of synapses (connections between neurons) that really hurts. Some neurons (I can't remember if they're cerebellar or hippocampal) have on the order of a quarter of a million synapses each. We have absolutely nothing designed to be able to tackle that scale.
*mumble* Quantumsomething *mumble*
-
@anonymous234 and isn't a lot of the complexity in connections between neurons? That's something that takes serious power (and we don't understand very well at all, AFAIK).
I see.
-
I was typing a command in a CLI window, and a UAC prompt appeared about 8ms before my finger hit enter. I have no idea what I just approved/denied.
I love Windows Control Panel. Open it, wait, wait, wait, finally go to click something and bam, everything moves and you just opened Mail options.
-
@TimeBandit said in WTF Bites:
@Polygeekery said in WTF Bites:
he has spent 14m23s doing work so far today
That's probably more than the average user of this forum
I did about 10m and that’s probably still twice as productive as the average monkey.
Filed under: okay, I spent at least 2 hours on PowerPoint. I only counted work I don’t hate.
-
Holy fuck, M.2 is just so much faster. I just installed Office 2016 in ~15 seconds.
If I had to go back to spinning rust for OS and applications I would likely end up on a tall building, naked, with a high powered rifle just plinking people off due to the frustration involved in the regression.
-
@Polygeekery are you only just now discovering this? I keep Overwatch on my SSD because the map loads about ten times faster so I can always get the character I want due to being first.
-
Almost all of my games are on M.2. Speedy.
-
@Polygeekery said in WTF Bites:
If I had to go back to spinning rust for OS and applications I would likely end up on a tall building, naked, with a high powered rifle just plinking people off due to the frustration involved in the regression.
Coincidence does not imply causation.
-
@pie_flavor said in WTF Bites:
I keep Overwatch on my SSD because the map loads about ten times faster so I can always get the character I want due to being first.
Okay? Now all SSDs are M.2? I was not consulted on this change in terminology.
-
@Polygeekery did it mean anything else?
-
@Benjamin-Hall said in WTF Bites:
That's something that takes serious power (and we don't understand very well at all, AFAIK).
Individual synapses are fairly well understood as they're relatively easy to examine in vitro. There may be some nuances left to uncover, but they're unlikely to change the wider picture much. It's the pattern of communication, how they interact (dendritic tree computation is apparently a thing), how they change over time, that's what's not well understood and where the cutting edge of research is.
The only reason it “takes serious power” to simulate is because the communication technology being used is entirely wrong. People creating comms architectures for computers usually focus heavily on how to ensure that comparatively large messages get through accurately to a single target without loss; optimise for low-power small messages and multicast and you instead get something better suited to modelling synapses.
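A toy sketch of that idea (all the names here are made up, this is no real API): keep the packet down to a bare source id and let a routing table do the fan-out, so the per-spike cost stays tiny no matter how many synapses a neuron has:

```python
# Toy "address-event" multicast: a spike packet is just the source
# neuron's id; a routing table fans it out to every subscribed target,
# so the payload never grows with the number of synapses.
from collections import defaultdict

routing_table = defaultdict(list)   # source neuron -> list of targets

def connect(src, dst):
    """Register one synapse: spikes from src should reach dst."""
    routing_table[src].append(dst)

def spike(src):
    """Deliver a source-id-only event to all targets (multicast)."""
    return [(dst, src) for dst in routing_table[src]]

connect(0, 1); connect(0, 2); connect(0, 3)
events = spike(0)       # one tiny 'packet', three synapses reached
```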
-
@pie_flavor said in WTF Bites:
@Polygeekery did it mean anything else?
I keep thinking it means some entry in a star-cluster, nebula, galaxy and other non-comet catalogue…
-
@pie_flavor said in WTF Bites:
@Polygeekery did it mean anything else?
M.2 is a form factor specification.
And does not necessarily mean SSD, though it is a popular application thereof.
-
@Benjamin-Hall said in WTF Bites:
What if the definition of 1 changes?
I love the fact that this was actually a real concern in some old languages.
Of all the retarded things Fortran does, this might be in the top 10.
To be fair, it's not FORTRAN itself that does it. It's one of the usual "undefined behavior" cases where the standard specifies what a correct program must do but leaves it to the compiler writers to silently fuck things up for non-compliant ones, with what may or may not have been an optimization on ancient hardware, like merging all constants of the same value into one writable memory location.
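The modern descendant of "merge all constants of the same value" is still around, just made safe: CPython shares one object per small integer, but ints are immutable, so nobody can redefine 1. A quick demo (this is a CPython implementation detail, not guaranteed by the language spec):

```python
# CPython keeps one shared object for each small integer (roughly
# -5..256), much as old FORTRAN compilers kept one writable copy of
# each constant.  Python gets away with the sharing because ints are
# immutable: there is no way to "change the definition of 1".
a = 1
b = int("1")        # built at runtime, yet it's the same cached object
print(a is b)       # True on CPython

big = int("1000")
again = int("1000") # outside the small-int cache: two distinct objects
print(big is again) # False on CPython
print(big == again) # True, of course
```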
-
WhyTF does my espresso machine take longer to boot up than my laptop?
If it was a Long Black or a Vietnamese coffee machine, fine. But it's supposed to be a fucking espresso!!!eleven
-
@anonymous234 said in WTF Bites:
The human brain has about 86 billion neurons.
It's the number of synapses (connections between neurons) that really hurts. Some neurons (I can't remember if they're cerebellar or hippocampal) have on the order of a quarter of a million synapses each. We have absolutely nothing designed to be able to tackle that scale.
It will necessarily have to be some molecular-scale system. Something like using protein membranes with electrochemical communication channels -- I hear they take quite some time to compile and don't give very consistent results but they sure are fun to develop.
-
It will necessarily have to be some molecular-scale system.
That doesn't follow at all.
-
@LaoC :thatwasthependant:
-
@Polygeekery said in WTF Bites:
on a tall building, naked, with a high powered rifle just plinking people off
Isn't that your usual weekend anyway?
-
@Jaloopa I believe his usual workweek involves a lot of spinning rust computers.
-
involves a lot of spinning rust
So more of a revolver guy instead of a semi?
-
The “faster than Intel, smaller than ARM” is because they're relying on compilers to do VLIW magic.
Back at university, some colleagues were involved in implementing the IA-64 (Itanium) target for gcc. It definitely couldn't do VLIW magic and had to rely on swaths of nops to produce code that worked at all, and at nowhere near the promised performance.
Now, gcc was used because its codegen is fairly generic and can be taught new instruction sets relatively easily, but it was still quite a lot of work. Tachyum claims they'll have gcc and llvm backends this year, including compiling Linux and FreeBSD with them. I am really, really curious how they want to do it. Or how they want to realize the performance, since I doubt gcc has learned much about VLIW even though it's been almost 20 years since the IA-64 thing.
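For anyone who hasn't met the problem: here's a toy greedy bundler (nothing like a real gcc pass, and the three-slot width is just for illustration) that shows where all those nops come from. Any operand produced in the current bundle forces a flush, and every flush pads with nops:

```python
# Toy VLIW bundler: pack instructions in order into fixed-width bundles,
# flushing (and padding with nops) whenever an operand was produced too
# recently to be visible.  Real schedulers also model latencies and
# reorder across blocks; this is just the naive core.
def bundle(instrs, width=3):
    """instrs: list of (dest_reg, {source_regs}); returns list of bundles."""
    produced = {d for d, _ in instrs}
    ready = set()                      # defs from already-flushed bundles
    bundles, cur, cur_defs = [], [], set()

    def flush():
        nonlocal cur, cur_defs
        bundles.append(cur + ["nop"] * (width - len(cur)))
        ready.update(cur_defs)
        cur, cur_defs = [], set()

    for dest, srcs in instrs:
        deps = {s for s in srcs if s in produced}  # live-ins: always ready
        if len(cur) == width or not deps <= ready:
            flush()
        cur.append(dest)
        cur_defs.add(dest)
    if cur:
        flush()
    return bundles

# Five instructions, two dependency chains -> four wasted nop slots.
prog = [("a", set()), ("b", set()), ("c", {"a"}),
        ("d", {"b"}), ("e", {"c", "d"})]
packed = bundle(prog)
```

Five real instructions end up occupying nine slots; that ratio is roughly what the early Itanium output looked like.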
-
had to rely on swaths of nops to produce code that worked at all
The nope thread is .
-
had to rely on swaths of nops to produce code that worked at all
The nop thread is .
FTFY. Also known as the thread.
-
I am really, really curious how they want to do it.
They'll probably be trying to keep the number of jumps down, but the result is going to be hairy as hell. I suspect their performance on real code (as opposed to carefully selected benchmarks) will be irritatingly shit. Which isn't to say that GCC hasn't got better at handling complex inter-instruction dependencies — it most certainly has, to the point where only the most deeply hacked around hand-generated assembler can possibly go faster than what it spits out (some of the commercial compilers do even better; they've usually got better cost models for the target ISA) — and that's exactly what you need for powering VLIW trickery, yet it's just really hard to get right. If someone had truly succeeded, they'd be making a lot of noise about it as it would let CPUs get cheaper/faster and they'd be in line for becoming both seriously rich and rather famous (inside computing at least). I've also no idea what happens when you start adding function calls or interrupts (either software- or hardware-defined) to VLIW systems; those have potential to be deeply annoying!
VLIW is just one of these things that'd be great… if only it could survive contact with reality. But there's never been great evidence that it actually can provide meaningful benefits on a general CPU. (It's much easier to think of using it on a GPU or other specialized coprocessor; people have different expectations there.)
-
Which isn't to say that GCC hasn't got better at handling complex inter-instruction dependencies — it most certainly has, to the point where only the most deeply hacked around hand-generated assembler can possibly go faster than what it spits out (some of the commercial compilers do even better; they've usually got better cost models for the target ISA) — and that's exactly what you need for powering VLIW trickery, yet it's just really hard to get right.
I think the problem was particularly in how the parallel blocks in Itanium depended on instruction alignment, which the code generator wasn't taking into consideration, especially as the alignment depended on target of last jump or something like that. So it was not just VLIW, but a badly designed one. If they design the parallelism control better, the compiler might be able to generate it right more easily.
-
Filed under: Some of these emojis could stand to be bigger. Especially if they're on a line alone.
At this point, why don't we just move the entire forum to Slack?
-
@Gąska For starters, Slack is blocked by my work proxy.
-
VLIW is just one of these things that'd be great… if only it could survive contact with reality. But there's never been great evidence that it actually can provide meaningful benefits on a general CPU. (It's much easier to think of using it on a GPU or other specialized coprocessor; people have different expectations there.)
I think some of the AMD GPUs used to be somewhat VLIW-ish (~2010?). I never dealt with them specifically, but programming them looked painful. Later GPUs went to a more traditional "scalar" instruction encoding.
-
Found in an ASP.Net page recently created by the Indians. This page had many problems; the particular one that stood out as worthy of a post over here is that an unordered list had been required, and rather than use something crazy and new-fangled like
<ul>
and <li>
there was an alternative horror. I assume auto-generated by some tool:
<p> <span style="font-family:Symbol;mso-fareast-font-family:Symbol;mso-bidi-font-family:Symbol"><span style="mso-list:Ignore">·<span style='font:10.0pt "Times New Roman"'> </span></span></span> Item 1</p>
<p> <span style="font-family:Symbol;mso-fareast-font-family:Symbol;mso-bidi-font-family:Symbol"><span style="mso-list:Ignore">·<span style='font:10.0pt "Times New Roman"'> </span></span></span> Item 2</p>
-
Sometimes when moving my work laptop between the normal dock and the boardroom TV, Windows forgets how to properly render parts of itself and wacky things start happening.
This "Save changes?" dialog is about a million pixels wide, and I had to drag it across the entire span of both screens several times before the buttons would show up.
-
@levicki said in WTF Bites:
there are agencies booking all the slots available in advance and selling them to people at 10-20€ a pop.
That should not be allowed. The system is designed incorrectly. Sounds like I (as a citizen) could write a bot to snipe slots eBay-style with no consequence. The only challenge is properly timing the reservation requests to come in before the fraud agency does.
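The timing bit is the easy part. A sketch of the arithmetic (everything here is hypothetical, no real endpoint involved): transmit one measured one-way latency before the window opens, so the request arrives just as the slots go live:

```python
# When to transmit so the request *arrives* the instant slots open:
# lead by half the median of a few measured round-trip times.
def fire_at(open_time, rtt_samples):
    """open_time: epoch seconds when slots open; rtt_samples: seconds."""
    rtts = sorted(rtt_samples)
    median_rtt = rtts[len(rtts) // 2]
    return open_time - median_rtt / 2   # lead by one-way latency

t = fire_at(1_700_000_000.0, [0.080, 0.120, 0.100])
# with a 100 ms median RTT, send 50 ms before the window opens
```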
-
@levicki said in WTF Bites:
Remind me again what is the advantage of having e-Government here?
You don't have to pay line-standers
-
@Tsaukpaetra said in WTF Bites:
@levicki said in WTF Bites:
there are agencies booking all the slots available in advance and selling them to people at 10-20€ a pop.
That should not be allowed. The system is designed incorrectly. Sounds like I (as a citizen) could write a bot to snipe slots eBay-style with no consequence. The only challenge is properly timing the reservation requests to come in before the fraud agency does.
It's South-East Europe. There's a second, much harder challenge: obtaining a promise from local politicians that the police won't pursue you for it.
-
@Tsaukpaetra said in WTF Bites:
@levicki said in WTF Bites:
there are agencies booking all the slots available in advance and selling them to people at 10-20€ a pop.
That should not be allowed. The system is designed incorrectly. Sounds like I (as a citizen) could write a bot to snipe slots eBay-style with no consequence. The only challenge is properly timing the reservation requests to come in before the fraud agency does.
It's South-East Europe. There's a second, much harder challenge: obtaining a promise from local politicians that the police won't pursue you for it.
Once I figure out how the agencies get away with booking a non-person...
-
For starters, Slack is blocked by my work proxy.
And by the proxy used by the wifi on the train when I'm commuting. (I could use my phone as a hotspot… but the train has a lot better connectivity, especially when in a tunnel or deep cutting.) It seems to be a feature of how they (Slack) are using secure DNS; StackOverflow is blocked for the same (technical) reason and I can't think of any (policy) reason at all for enforcing such a block.
-
Our company is super afraid of us leaking confidential data so they block anything that can transfer files or proxy servers they don't control. Also, they have a CA cert installed that lets them MITM snoop on my encrypted traffic.
-
Our company is super afraid of us leaking confidential data so they block anything that can transfer files or proxy servers they don't control.
We have Slack and many other things blocked for the same reason.
-
@error Yes, but that doesn't apply to a train company enforcing a block on its customers. Apparently, they're keen on stopping anything that might be embarrassing to the very uptight or that might work like streaming, and they're not very good at actually implementing their policy sanely. If it wasn't for the fact that their connectivity and bandwidth for here is so much better than doing it myself, I'd stick to using my 4G phone.
-
There's a second, much harder challenge: obtain a promise from local politicians
Not necessarily difficult, just expensive, perhaps.