makes a note to look into SSDT
(we don't have our DB schema in source control)
@anonymous234 If you're using any calculator to generate crypto keys, you're already beyond help, except maybe by
@ben_lubar well, I did report details of it to them along with the selfie, and it seemed like the obvious conclusion in hindsight. The mission title should really have given it away to us.
But also: (someone) threw out 17k of damage in a single hit and I didn't get downed by it. Since I'm not running as a scourge at present, I still have my full complement of two health bars, plus the relevant trait that auto-switches if I would otherwise be downed... we were debating whether it was weird scaling against my necro skills, or simply that I didn't get downed that time around, since I'd figured out that the best place to stand when fighting (someone) was directly behind them at all times, provided aggro was held on the opposite side. Fortunately... a party plus flesh golem and blood fiend made that completely possible - and the first time around, fighting the (someone) was much easier than the immediately preceding battle against their little helper.
In conclusion, less likely it was broken in a party and more likely the party leader didn't get downed at all.
One time I was looking to improve the performance of a hash table implementation in an interpreted language. I ended up porting Python's dictionary implementation.
There are a few really fast ones, but this is an area where there's a truly enormous amount of BS. In particular, the only tests worth a damn are those that run against a realistic corpus of hash keys and measure the actual performance in production. Python's hashing is pretty reasonable; not quite the best, but not at all shabby.
I once did a lot of work studying hashing for simple hash tables. It turned out that there was a best algorithm, but it was so complicated that I went with something that consistently came within about 0.1% of it but used trivial code. I then wrote up what I discovered in comments in the code, and the comment-to-code ratio there is now somewhere in the region of 10 or 20 to 1; it's tiny code that happens to work extremely well, and yet it has several pages of comments beforehand to act as a “don't screw with this even if you're very smart; we really mean it!” marker.
So long as nobody ever uses the first character of the string or the length of the string as hash code (naming no names) it'll be OK.
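To make the failure mode concrete, here is a small, hypothetical sketch (the key names and bucket count are invented for illustration) of how those two infamous "hash functions" behave on a realistic-looking key set:

```python
# Hashing on the first character or the length of a string collapses
# realistic key sets into a handful of buckets, so lookups degrade
# towards linear scans of long collision chains.
keys = ["getName", "getValue", "getIndex", "setName", "setValue"]

def first_char_hash(s):
    return ord(s[0])          # every "get*" key collides

def length_hash(s):
    return len(s)             # "getName" and "setName" collide, etc.

def buckets(hash_fn, keys, size=8):
    """Group keys by bucket index under the given hash function."""
    table = {}
    for k in keys:
        table.setdefault(hash_fn(k) % size, []).append(k)
    return table

print(len(buckets(first_char_hash, keys)))  # 2 distinct buckets for 5 keys
print(len(buckets(length_hash, keys)))      # 2 distinct buckets for 5 keys
print(len(buckets(hash, keys)))             # builtin hash usually does far better
```

With five keys and eight buckets, both naive functions land everything in just two buckets; a decent hash function typically spreads the same keys out.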
That said, it's useful to switch hashing algorithms around occasionally in development. It lets you check where you've got code that makes unsafe assumptions about hash table iteration order…
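A toy sketch of that trick (the table, the probing scheme, and both hash functions are invented for illustration, not any particular library's implementation): the same keys come back in a different order once the hash function changes, which is exactly what flushes out order-dependent code.

```python
# A toy fixed-size hash table whose iteration order depends on the hash
# function in use -- swapping the function in development exposes code
# that silently relies on a particular iteration order.
class ToyTable:
    def __init__(self, hash_fn, size=16):
        self.hash_fn = hash_fn
        self.slots = [None] * size

    def add(self, key):
        i = self.hash_fn(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i] != key:
            i = (i + 1) % len(self.slots)   # linear probing on collision
        self.slots[i] = key

    def __iter__(self):                     # slot order, not insertion order
        return (k for k in self.slots if k is not None)

def hash_a(s):
    return sum(map(ord, s))                 # additive hash

def hash_b(s):
    h = 0
    for c in s:
        h = h * 31 + ord(c)                 # polynomial hash, base 31
    return h

for fn in (hash_a, hash_b):
    t = ToyTable(fn)
    for key in ("alpha", "beta", "gamma"):
        t.add(key)
    print(list(t))   # same keys, different order under each hash function
```

Any code that iterates the table and assumes a stable order breaks as soon as the hash function is swapped, which is the point of the exercise.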
per-user licensing instead of per-developer
Just double checking: is that per user of their tools, or per user of the database including every single customer that you have?
@tsaukpaetra DOIs are “easy” to read: 10.14989 identifies the publisher, and 139379 the individual item. The link resolves to a metadata page, which links on to the actual content (depending on a bunch of conditions). The big difference from a normal URL is what happens later: DOIs will continue to resolve for a long time, even after the publisher changes how it internally arranges its files or goes bust, because there are agreements with repository institutions (e.g., the Library of Congress) to keep them going, and they will even survive if we stop using HTTP and HTTPS and so on.
Properly, a DOI is a URN and not a URL: doi:10.14989/139379. The resolver service dx.doi.org is nothing like as important; it just provides a simple way to do the resolution with current browsers.
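The split into those two parts is mechanical. A minimal sketch (the function name is mine) that separates the registrant prefix from the item suffix, accepting either the URN form or the bare name:

```python
# A DOI has the form <prefix>/<suffix>: the prefix (always starting
# "10.") identifies the registrant, the suffix the individual item.
# Suffixes may themselves contain "/", so only the first slash splits.
def split_doi(doi):
    # Accept the URN form "doi:10.14989/139379" as well as the bare name.
    if doi.lower().startswith("doi:"):
        doi = doi[4:]
    prefix, _, suffix = doi.partition("/")
    if not prefix.startswith("10."):
        raise ValueError("not a DOI: %r" % doi)
    return prefix, suffix

print(split_doi("doi:10.14989/139379"))   # ('10.14989', '139379')
```

To resolve one in a browser today, prepend the resolver host, e.g. https://doi.org/10.14989/139379; the name itself carries no protocol.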