I'm from Germany and I can confirm that using IE here is a capital offense. Convicts are to be immediately shot with a correctly registered gun.
Posts made by PSWorx
-
RE: Anyone from Germany?
-
RE: Best scripting language for my game
There is also edge.js, which seems to be in active development. It's a bridge between node.js and C# (and apparently python too).
It seems more built for embedding C# modules in nodejs scripts than the other way around, but if you're fine with building a JS "launcher" for your game, this might not be that much of a problem. (You could still keep game scripts from abusing the nodejs API/knowing they run on nodejs at all by e.g. sandboxing them with contextify.) -
RE: Automated Acceptance Testing
@RTapeLoadingError said:
In that case they should link to here so they have something for the project documentation file. For example...
I agree, though they should make sure they choose the right design. -
RE: SID playing: Looking stuff up from a flat file has never been easier
Whoa, first morbiuswilters, then Gene Wirchenko, now WWWWolf and asuffield - it's like some crazy kind of reuni- oh, wait, it's just another necro. Nevermind...
@dhromed said:
Apparently I wrote an insightful post six years ago and nothing has changed since then.
Around here, one of the largest ice cream producers runs two different brands of ice cream. The products of one brand are sold in cafes and upper-middle class shopping malls, have all sorts of sophisticated-sounding names, are backed by an extensive advertising campaign, and their distinctly-shaped package is plastered with tantalizing photos of vanilla whirlpools and whole fruits which apparently had just survived a tropical rainstorm. Almost all sentences on the package contain one occurrence of "natural", "traditional recipe" or "carefully selected ingredients". Last time I checked, they cost about €7 per package.
The other line is sold in lower-class stores, has no advertising at all and has a plain white package out of the cheapest plastic they could find. The only photo on it makes you almost smell the artificial flavours. Price (for the same amount) is €2.
A few years ago, some magazine actually bothered to check the ingredient listings of both products. Turns out they were exactly the same. This hasn't kept the manufacturer from selling both brands up until now and making a huge profit off both of them.
So yeah, I fear you had already been wrong six years ago. -
RE: Hard drive babysitting on linux
@morbiuswilters said:
Check out the -S option to hdparm. If your hardware is anything approaching new, it should work. Be sure to read the man page for the -S option, the way you specify time is really fucked-in-the-head.
Previously on this thread:
@PSWorx said:Changed settings via hdparm -S or -B have no noticeable effect at all.
@PSWorx said:Didn't know of that option yet. Though, if I read this correctly, doesn't this just set the same timer hdparm -S does? I've already tried the latter without any effect. (And yes, APM is 127)
So yeah, I tried that already and even if I set it to APM 1, timeout 5s (according to the manpages), no spindown happened.
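For reference (and for anyone else who trips over it), the -S encoding as the man page describes it can be captured in a small converter - `hdparm_s_value` is just an illustrative helper name I made up, not anything hdparm ships:

```python
def hdparm_s_value(seconds):
    """Translate an idle timeout in seconds into an `hdparm -S` argument.

    Encoding per the hdparm man page:
      0        -> spindown timer disabled
      1..240   -> multiples of 5 seconds (5 s to 20 min)
      241..251 -> (value - 240) * 30 minutes (30 min to 5.5 h)
    """
    if seconds <= 0:
        return 0
    if seconds <= 20 * 60:
        # Round up to the next 5-second step so we never undershoot.
        return (seconds + 4) // 5
    if seconds <= 330 * 60:
        # Round up to the next 30-minute step.
        return 240 + (seconds + 1799) // 1800
    raise ValueError("timeouts above 5.5 hours need vendor-specific values")
```

So `hdparm -S 241 /dev/sdb` means a 30-minute timeout, not 241 of anything sensible.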
@morbiuswilters said:
Also consider checking out the -M option to hdparm. It will let you control how fast the head moves, which may reduce your noise to acceptable levels without having to spin down the HDD.
Hmm, will try out that one, thanks.
@morbiuswilters said:
All of this assumes you've nuked anything that is doing frequent I/O on the drive. Obviously spinning down doesn't do much good if some process is constantly spinning the disk back up.
I've not yet installed anything that should access the drive on its own. I'm reasonably sure that's the case because the drives don't spin up on their own either after I've shut them down manually (as they should do if something was accessing stuff on them). -
RE: Subversion comments
@Mason Wheeler said:
@Vanders said:
@Planar said:
TRWTF is threads. If you have any programmer on your team with less than 20 years of experience, you have no hope of making a multi-threaded program work correctly.
I must be some sort of miracle worker then. Seriously, threading isn't hard. You were taught what a critical section is. You know what shared data is. Where's the confusion?
This. Is there some special broken thing in Java that makes multithreading extra hard? Because I never have problems with it in Delphi. You just have to keep the basic principles in mind.
You know those principles. Apparently the whole concept of "shared data" has been so overloaded with fear by now that cutting-edge software stacks like node.js would rather clone half their runtime for each thread and require developers to pass objects (and functions) around as strings than allow a single bit of shared state. See (the second half of) this. -
RE: Hard drive babysitting on linux
@PJH said:
http://info4admins.com/tips-to-spindown-your-hard-disk-in-debian-or-ubuntu/
Well, like you said, the logs/cache/etc should be on the SSD, not on the disks. The kernel options look quite useful though.
@PJH said:
do you have the noatime option set on those mounts?
yup - that is, I think so. The disks are managed by lvm2 and are exposed as a single logical 1TB volume. I'm mounting that volume with noatime. I shouldn't need to tell lvm anything about noatime, should I?
@PJH said:
Also do you have spindown_time set in /etc/hdparm.conf?
Didn't know of that option yet. Though, if I read this correctly, doesn't this just set the same timer hdparm -S does? I've already tried the latter without any effect. (And yes, APM is 127)
@Cassidy said:
You looked into any distros that are purpose-built with those uses in mind? Just thinking of FreeNAS, or an HTPC-type machine. They may already have this kinda thing covered.
Well, part of the reason I'm doing this is to have a fun way to get more familiar with the linux utils, standard daemons, init process, etc. Hence I don't necessarily want to reinvent the wheel, but I would like to know what exactly is running on the system, why, and why it's configured the way it is. I also might use this box for some other things than a media center later on. I'd rather not use a ready-to-go distro where I've no idea which settings I can safely change without stuff breaking in obscure ways. -
Hard drive babysitting on linux
Hey everyone,
I'm currently trying my hand at a small home server to use as a TiVo, file store and general linux tinkering tool. I've got a few configuration headaches however, and, since I'm still kind of a linux n00b, I'd like to ask if you guys have an idea.
For the OS, I'm using Arch Linux 3.8. Storage-wise, my setup consists of an SSD for the OS and two 500GB hard drives for data.
As the data disks are quite noisy and will be accessed relatively rarely, I'd like to keep them spun down most of the time. After some playing around with hdparm, I found that the disks spin down perfectly fine if I tell them to (via hdparm -y) but don't seem to ever spin down on their own. Changed settings via hdparm -S or -B have no noticeable effect at all.
The disks are basically empty and they stay off after I've put them into standby manually. That's why I suspect it's a hardware issue and not anything constantly accessing them.
To work around this, I thought of writing a script that periodically checks disk activity and calls hdparm -y if one of the disks hasn't been accessed for a certain time.
So, long story short, my questions are:
- Is there any command or utility in linux that I can use to get the last access time for filesystems or block devices?
- Is my approach the Real WTF? Are there additional settings I should check first? Is there already a daemon that does something like this?
If, by any chance, we have any resident Linux geeks here (that haven't been scared away yet by blakeyrat) who could give some advice about this sort of stuff, I'd be very thankful.
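In case anyone wants to play along, here's a rough sketch of the babysitting script I have in mind. Everything here is an assumption: `read_io_counts` and `babysit` are names I made up, the device names are placeholders, and the /proc/diskstats field layout is per the kernel's Documentation/iostats.txt. Treat it as a starting point, not a finished daemon.

```python
import subprocess
import time

def read_io_counts(diskstats_text, devices):
    """Parse /proc/diskstats text; return {device: reads + writes completed}.

    Fields after major/minor/name: field 1 is "reads completed",
    field 5 is "writes completed" (indices 3 and 7 of the split line).
    """
    counts = {}
    for line in diskstats_text.splitlines():
        fields = line.split()
        if len(fields) >= 8 and fields[2] in devices:
            counts[fields[2]] = int(fields[3]) + int(fields[7])
    return counts

def babysit(devices=("sdb", "sdc"), interval=300):
    """Spin a disk down with `hdparm -y` once its I/O counters stop moving."""
    last = {}
    while True:
        with open("/proc/diskstats") as f:
            now = read_io_counts(f.read(), devices)
        for dev, count in now.items():
            if last.get(dev) == count:
                # No I/O since the last poll; a repeated -y on an already
                # sleeping disk should be harmless, but you could track state.
                subprocess.call(["hdparm", "-y", "/dev/" + dev])
        last = now
        time.sleep(interval)
```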
-
RE: TDWTF Markov chain
@mott555 said:
I wonder if SpectateSwamp would notice if we only posted on his thread using a Markov generator.
For best results, you'd probably have to run it as a daemon, so it responds to his posts. But it could work. -
RE: XML is too difficult
4) Because I don't know how the hell they'll escape "&" and "=" in their NVP implementation, so it's now more work for me to QA their format
Actually, if they already decided to use their own home-grown not-quite-url-encoder, what makes you think they'd get escaping XML right?
5) And speaking of which, I also need to figure out how they escape everything else. It looks like some quasi combination of URL encoding and something else. So not only do I have to implement a NVP parser, I also have to make sure to implement a decoder on the values. -
RE: Mozilla credits list is not updated.
Judging by how he keeps repeating the same three (?) topics, I think he is trying to tell us something. What, though, I have no idea...
Hey Tae Wong, if you want this to get anywhere, ask someone else to translate your posts to English - or at the very least get a better translator. -
RE: WeHostBotnets.com
@swayde said:
Regarding botnets, i really hope this paper is bullshit : paper
The pictures are pretty, but I'm more concerned about the download section. (assuming this is genuine) So he just released, to the general public, a list of all open ports for basically every machine on the internet, ever? Including the aforementioned VPN gateways, industry device controllers, door locks, half a million printers and several million web cams? What could possibly go wrong?
The quick reply is even worse than the default editor, colour me surprised -
RE: The Abyss of 64-bit Compatibility
@Vanders said:
Can you do me one favour? Can you please tell me that the TMB-100 & TMB-200 weren't some safety or life critical device, like say a nuclear reactor safety system, or an X-ray machine?
Don't worry, we don't need the TMB-100 for that.
@This report about the THERAC-25 said:Currently, AECL's primary business is the design and installation of nuclear reactors.
-
RE: Flash vs HTML 5
I'm not an expert on Flash and I agree that HTML5 has many flaws, but I think if you compare it with Flash, it actually looks rather good.
@Flash vs HTML 5 said:Flash is easily blocked, HTML 5 is not.
Depends on what exactly you mean by "blocking HTML5" - or, put another way, why you're blocking Flash in the first place. If you want to get rid of interactive content, you can still block JavaScript as you always could. (The degree to which you can selectively block certain JS functions varies between browsers AFAIK, but almost all relevant browsers support per-site blocks and muting audio tracks)
If you want to get rid of ads (which was a lucky but pretty coincidental side-effect of blocking Flash) get a decent ad-blocker.
@Flash vs HTML 5 said:Flash can be allowed on a per-instance basis, HTML 5 will be hard or impossible to run on a per-instance basis.
If by "instance" you mean "page", see above. If you want to block specific elements, you'll probably get a roughly equal effect by filtering <audio>, <video> and <canvas> elements.
@Flash vs HTML 5 said:Flash segregates a lot of multimedia away from other content, this is a very good thing. HTML 5 does the opposite.
I do agree on that - apparently, the WHATWG not just decided that "separation of content and presentation" is a thing of the past, they seem to have declared it an antipattern. Then again, if you're a developer, you can still keep the segregation in your toolchain; if you're a search engine or otherwise want to work with other people's pages, an HTML/JS app is probably not that much harder to disassemble than a compiled SWF file.
@Flash vs HTML 5 said:Do you want Flash vulnerabilities or HTML 5 vulnerabilities? You can pick one or both, but you can't pick none.
Well, duh. The same goes for any other technology that runs code of unknown origin and intent on your computer whenever you open a web page. Blame the crazy way the web has developed for this apparently being a requirement today. With JS/HTML, you at least have multiple competing implementors that you can switch between when there is an unfixed vulnerability. With Flash you pretty much only have Adobe. (And now Google apparently)
Where I see a big advantage for Flash, though, is authoring. From what I know, the Flash IDE largely lets you design animations with much the same tools you'd use for graphics or movie editing. In contrast, it's hard enough to design non-WTF static HTML using an authoring tool - don't even think about doing a moderately complex animation with canvas if you have never programmed before. And as the skill to design appealing user interfaces is usually not related to experience in programming (proof: Ubuntu Unity), this might create the wrong kind of entry barrier.
@blakeyrat said:I've yet to see a HTML5 game that has a volume control. JUST PUTTING THAT ONE OUT THERE!
At least the API is apparently there. No idea how far/if it's implemented though. -
RE: Why is source code stored as text?
Incoming rant in 3... 2... 1...
I've heard the argument about text files being "human-readable" pretty often, and frankly, I believe it's bullshit. Firstly, no one would argue that office documents, spreadsheets or graphics are not human-readable, despite them being stored as quite complex binary formats. Secondly, as others have pointed out, programming code and config files actually have little in common with free text - they're a textual serialisation of a data structure - an AST, a table, a graph, etc. - that usually is not at all related to text.
I'd argue it has more to do with the tools that are available. You find a decent text editor on almost any machine, every self-respecting programming language can at least process ASCII, and a large part of the Unix toolchain seems to be built around manipulating strings that match a certain subset of regular grammars. On the other hand, there are no efficient and widely available tools with which you could edit an AST directly.
Does that mean it's impossible to build one? I don't know - but the fact that most IDEs already keep an approximation of your code's AST in memory, so they can do syntax highlighting, error checking, code completion, etc., hints that such a tool could actually be useful. It could also maybe reduce the amount of "string thinking" that led to SQL injections, XSS and bloaty text-based network protocols. But so far, there don't seem to be many good ideas about how a good UI for such a tool could work. That, and the fact that it would be incompatible with most existing toolchains, probably prevents much effort being spent in this direction.
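To illustrate how close we already are: some languages expose that AST directly. A minimal Python sketch (ast.unparse needs Python 3.9+; the snippet and names are made up) that edits a program as a tree rather than as a string:

```python
import ast

# Parse a snippet into its AST instead of treating it as text.
source = "total = price * qty\nprint(total)"
tree = ast.parse(source)

class Rename(ast.NodeTransformer):
    """Rename every occurrence of one identifier across the whole tree."""
    def visit_Name(self, node):
        if node.id == "total":
            node.id = "subtotal"
        return node

# Transform the tree, then serialize it back to source text.
new_source = ast.unparse(Rename().visit(tree))
```

A rename done this way can never touch a string literal or a comment by accident - exactly the kind of guarantee "string thinking" tools can't give you.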
-
RE: Time Zone Converter
@boomzilla said:
The real worse than failures
@Jonathan said:
What is the worse than failure?
@Jonathan said:
So the worse than failure here is expecting requirements to be assumed, which is the worst of all worse than failures.
FTFY -
RE: Time Zone Converter
@Jonathan said:
Dates as strings are necessary in order for them to be human readable.
If you want to output your date, use DateFormat.format(). If you want to view the date during debugging, any decent debugger will call toString for you. Where else would it be important that a date is human readable?
@Jonathan said:
So far we haven't been given any requirements so we don't know if any other operations are needed. The unknown requirements may or may not call for this code.
We haven't. But the fact that this bit is repeated in dozens of different classes makes it rather unlikely that it's only used to read dates from one text file and write them into another. -
RE: TDWTF's Community Server doesn't like me
@henke37 said:
Remind me again why we still use it.
Because it's so ironic.
@dhromed said:I really like the select-quoting feature. I haven't seen that in any other forums, and I miss it a lot in other communities I participate in. Which.
Which what?
-
RE: I need a simple, light database to accumulate some data from a JSON feed. Ideas?
CouchDB comes to mind. It's a NoSQL document DB like MongoDB, but especially designed for JSON-structured data and accessible via HTTP. (Apparently there is also a free cloud service for personal projects) It should be sufficient for your use case, though it has some catches for more sophisticated data models. (read: anything that requires a join)
Or, of course, you could just learn SQL. For your project, you should be fine with some basic SELECT and UPDATE queries and it'll probably be worth the investment in the future.
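To give an impression of how thin the CouchDB interface is: storing a document really is just an HTTP PUT with a JSON body. A Python sketch - the host/port and names below are made up, and `couch_put` is my own helper, not part of any library; you'd pass its result to urllib.request.urlopen against a running server:

```python
import json
from urllib.request import Request

def couch_put(base_url, db, doc_id, doc):
    """Build the HTTP request CouchDB expects for storing a JSON document.

    CouchDB's document API is plain HTTP: PUT /<db>/<doc_id> with a JSON
    body and a JSON content type. No driver or wire protocol needed.
    """
    return Request(
        url="%s/%s/%s" % (base_url, db, doc_id),
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
```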
-
RE: Debugger statement in production code
@spamcourt said:
Minified javascript is TRWTF. Either send readable code or send bytecode, but this just looks silly.
@dhromed said:
Minified JS isn't a WTF, but that said I do like the idea of precompiled javascript sent from the server, perhaps in a script tag/link with type="binary/javascript" or somesuch.
That is, assuming that javascript interpretation is a significant problem for browsers, which I don't think it is, and if it is, I don't think it should be because then you're overstepping your boundaries for a javascript program, and I think rendering a page from complex grammars and ludicrous standards like HTML/CSS is far more taxing than a little JS compilation.
This, also. You guys are forgetting eval(), setTimeout("...") and the like, coupled with the usual WTF styles of JS programming. I think the current DOM APIs are broken enough that you'd probably have to ship a parser/compiler with your JS engine in any case - and given how often eval() is still being used, I doubt it would even make a performance difference.
This doesn't mean that current JS compilers don't try to be clever. From what I've read, Google's V8 uses some crazy runtime analytics to basically turn all the dynamically-typed JS hashes into a number of statically-typed structs - then JIT-compiles the whole thing into native code and caches it on disk - but only if the code path is executed often enough that it would be faster than interpreting it. I'm not sure what Mozilla's VM does, but probably similar stuff.
Generally, a lot of time and effort seems to be invested by a lot of extremely smart people around the world so that a quick-and-dirty prototyping language with one of the most WTF APIs ever can run almost as fast as compiled C 10 years ago. Compiling it, ironically, doesn't seem to be an option... -
RE: Wine (also QR codes)
As for a somewhat clever use of QR codes (among other things), I've recently found this game. Sadly, I haven't been able to try it out yet because I don't have an iThing.
-
RE: Movie suggestions?
@Zemm said:
@Zecc said:
48 fps (HFR)
I saw the Hobbit in HFR - paying $25 (!) for the privilege.
To me it was like playing a FPS game where the video card is only just keeping up. Some scenes - like the wide panning shots or hand-held camera - felt like it was accelerating rather than being smooth. It was fairly disconcerting, not sure if it was from the HFR or what.
Just saw the Hobbit in 3D+HFR too (for "only" €3 more than the normal price) and had the same experience. I also noticed some quick movements of the actors felt weirdly out of sync, kind of as if the movie were playing at an accelerated speed for a second, then flipping back to normal speed. (e.g. old Bilbo rummaging through his chests in the beginning)
My personal theory is that you start noticing very small, quick movements that were previously swallowed by the low framerate - whereas slow and steady movements (which I think were supposed to become "smoother" somehow) didn't change much at all, because the brain already did a good job of interpolating them before. You've basically weakened the low-pass filter that motion went through before.
Speaking of questionable new movie technologies: 3D. I know it's getting old, but seriously, guys, could you at least do some research on visual perception before using this stuff? Protip: you can't just throw stereoscopic imaging on everything and expect zooms, transitions and depth of field to work just like before. Wasn't the whole point of 3D once that everything should feel more realistic? Yeah, that worked beautifully here.
-
RE: Hacker News is the DeviantART of developer side projects
@ObiWayneKenobi said:
@MiffTheFox said:
@C-Octothorpe said:
@ObiWayneKenobi said:
with their bastardization of how the web works
I hate webforms as much as the next developer, but I'm curious what you mean by this. Are you talking about viewstate?
90% chance he's talking about postbacks.
Both. The whole way WebForms works is meant to ignore the way the web works and pretend it's a desktop app that runs in the browser. Viewstate, postbacks, the whole shebang. It encourages event-driven programming like VB6 where you wire everything up behind a button click.
The irony is that newer technologies like Node.js or Vert.x actually work like this and manage to make it look halfway sane. -
RE: Nobody shares knowledge better than this
@Gazzonyx said:
You should start a blog over on blogspot
You're aware he did that already? And boy, does he know how to blog! -
RE: One field table
(addendum)
I find it interesting that there is apparently a well-known, somewhat similar antipattern that actually has a single-column table as its fix: link (slides 80+). This isn't exactly the same problem as ours, but the pros/cons discussion seems similar. (Basically, replace "enum" with "arbitrary unique ID"). -
RE: One field table
@RTapeLoadingError said:
@stratos said:
This particular part of the database is about product attributes. So a table for attributes of a product. Think "Natural wooden finish", "sterling silver", etc.. These attributes are accompanied with a title and a bit of summary text.
However, this is a multilingual setup with the texts saved in the database as well. Also, meta information like who edited what and when is saved at language level.
So you more or less end up with a structure like this:
attribute:
  id: PK
attribute_lang:
  prod_attr_id: PK, FK
  lang_id: PK, FK
  name:
  desc:
  user_id: FK
  last_update:
It's late in the day so I could be totally wrong... but isn't joining on...
attribute_lang.prod_attr_id = attribute.id
...where the attribute table has only one column logically equivalent to not joining the tables at all? All the information in the row in the "attribute" table is already included in "attribute_lang" in the FK.
The only thing it gives you I guess is the ability to have products with zero attributes but that sounds a bit philosophical to me.
Well, the one piece of information the "attribute" table (and also the join above) would give you is whether an attribute exists, irrespective of the languages it's formulated in. I could see this as somewhat useful if you want to have your DB validate PK/FK constraints:
If you, for some reason, ever need to clear and re-import your "languages" table (e.g., because some other department changed the captions or descriptions), you'd have to keep one "alias" language for each attribute, so the primary key doesn't vanish. This could potentially make reimport more complicated and messy. But I agree, I'm not sure how much impact that would actually have.
I'd also say it will make it easier to add language-independent properties later, when you discover you need some of those. Depends on how likely it is that you WILL need them later. -
RE: Microsoft Surface - WTF?
@El_Heffe said:
So Microsoft has a new tablet out called Surface. I don't know anything about it, but I've been wondering if it might be a worthwhile product. Unfortunately, after watching these two videos, I still know absolutely nothing about the Surface.
Well, you can learn a little more from this video. (After installing silverlight, because how else would you be able to play a video in your browser?)
Like that it's for people whose most important moment in life was solving that Rubik's Cube puzzle, that it lets you work the way you used to work before you bought it, and that it has a touch-sensitive keyboard. -
RE: Hacker News is the DeviantART of developer side projects
@Soviut said:
@Zylon said:
@GNU Pepper said:
HN is the DeviantART of developer side projects.
So there's quite a lot of really awesome code on it?
No, it means it's filled with 95% of the code equivalent of anime, furries and creepy fan art.
Please tell me how I can write anime, furries and creepy fan art in code. I'd be interested in a tutorial. -
RE: Logic
@Lorne Kates said:
FTFY. Do you think the masterminds at SnoofleCorp would tolerate copy-pasting if you can make it generic?
var a_returner = false;
function a(sql)
{
$.ajax({
url:"www.snooflecorp.com/WebService/QueryDatabase?sql=" + sql,
async: false,
success: function(data) { a_returner = data; },
error: function() { a_returner = null; }
});
return a_returner;
}That way not only do you have 26 database hits, but you also have 26 asynchronous web requests!
-
RE: Bumping Threads WTF
@ekolis said:
Since this thread was originally from 2006, and revived once in 2009 before lying dormant 3 more years until now, does that make it a double necropost?
If you count the post the OP referred to, which apparently was from 2003, that would make it a triple necropost. -
RE: Bumping Threads WTF
Nice try, guys.
@tster's younger self from 3 years ago said:When I read that post I was like, OMFG, I remember posting that.
In contrast, I don't remember posting in here before. WTF? -
RE: Xamarin multi-threading/ui
I'm not familiar with Xamarin (or the perks of Android development), but this looks like the usual design pattern for doing responsive UI. The RunOnUiThread() method exists exactly to solve your "multiple async workers" problem - It'll put each worker's callback in the UI thread's event queue and run them (in arbitrary order) as soon as all pending UI events have been processed.
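The pattern itself is toolkit-agnostic; stripped to its core it's just a queue of callbacks drained by the UI thread. A Python sketch of what RunOnUiThread amounts to - all names here are mine, and the "UI thread" is simulated by the main thread:

```python
import queue
import threading

# Workers never touch the UI directly; they enqueue callbacks that the
# UI thread runs in between processing its normal events.
ui_events = queue.Queue()
results = []

def worker(n):
    value = n * n                                  # pretend: slow background work
    ui_events.put(lambda: results.append(value))   # marshal result to the "UI"

threads = [threading.Thread(target=worker, args=(n,)) for n in (2, 3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The "UI thread" drains its queue whenever it is idle, so callbacks run
# in arbitrary order but always on the same (UI) thread - no locking needed.
while not ui_events.empty():
    ui_events.get()()
```

Because only one thread ever mutates UI state, the workers can finish in any order without races.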
-
RE: The Google Algorithm Has Become Self-Aware and Self-Hating
In oddly fitting irony, that article will also cause Google Chrome to crash if I try to open it - not just the current tab, but somehow the whole browser...
As for Google's spam filter, it was probably just synced with the Google Books database and, after analyzing The Casual Vacancy, correctly derived that it would be best for all involved if as few people as possible get to read that book. I can't see anything broken with it. -
RE: 2 features, 1 name
@flabdablet said:
KeePass lets me group accounts into folders and subfolders, so it never becomes a "whole mess" of accounts. For example, one of my folders is "Has CC info" and any organization that has my credit card details on file goes in there; when my card expires, I know exactly which sites I need to log onto and update my details to keep e.g. domain autoregistrations and VoIP telephony working. And I would much rather do it that way than have a whole bunch of who-can-remember-who-I-authorized-to-do-what linked to a centralized server where my CC info is held.
Checking my email in internet cafes is not something I generally do, because I can't trust their workstations not to have hardware keyloggers installed. I pay for a cheap VPN service for when I'm using one of my own devices on somebody else's wifi. If I'm using a workstation that isn't mine but is still trustworthy, I use the copy of my KeePass database and the portable KeePass executable I carry with me in USB memory attached to my car keys.
That's surely the responsible way to do it; however, I'm going out on a limb and guessing that the average joe user won't think like that. To adopt this as a really widespread solution, it would have to be easier to use than just reusing the same password on every account (or clicking once on the shiny Facebook Connect button and being done with it). And keeping your KeePass database updated, backed up and properly organized for every blog you might want to comment on once or twice certainly isn't. Let alone maintaining a custom VPN for off-site access and never using any kind of public computer.
We might be talking about slightly different things. Your approach is probably useful for tech people and corporate environments, but I can't imagine how it would be adopted by average consumers or your facebook-addicted neighbour's daughter. Which, I think were some of the use-cases of OpenID and Passport. (Not that the facebook-addicted daughter should ideally have access to your credit card...) -
RE: 2 features, 1 name
@flabdablet said:
@PSWorx said:
@flabdablet said:
OpenID is a non-solution to a solved problem.
I wouldn't exactly call it "solved", given how many passwords you're still required to remember on the web today.
Personally, I'm required to remember one - the master password for my personal KeePass database. I no longer even know what any of my actual web or email or banking passwords are; KeePass made them all up for me (they all have at least 100 bits of entropy). It also remembers them for me, and autotypes them into sites as needed. I don't even need to sync my important bookmarks across multiple browsers and browser installations any more, because KeePass remembers all those as well. It's absolutely a solved problem. Has been so for ten years.
Well, depends on your use-case. If you only want something to remember your passwords, you're right (which was arguably what I was saying in the previous post, I admit) - but that still leaves you with a whole mess of accounts that you have to remember and maintain separately.
What Passport, OpenID, Persona, Facebook Connect etc. try to do, after all, is let you log into (ideally) arbitrary pages with the same account, not just the same password - allowing consumers to use information from your account as well and, ideally, allowing you to configure your account/read notifications/etc at a centralized place. Of course, because of what El_Heffe wrote, that problem is probably unsolvable in practice. But that doesn't mean people shouldn't think about ways of how to alleviate it.
... also, after skimming the documentation of KeePass, what do you do if you need to check your e-mail in an internet cafe? -
RE: 2 features, 1 name
At the risk of sounding noobish: if you're using the login credentials of the user's mailbox anyway (which basically would make the mail provider the actual auth provider), why would you need a 4th party (Mozilla) at all?
Why couldn't you just - I don't know - have a website that wants to use the service e-mail you some kind of key (on first login) and then have some module in Firefox connect to your mailbox and retrieve the key on subsequent logins? -
RE: 2 features, 1 name
@flabdablet said:
OpenID is a non-solution to a solved problem.
I wouldn't exactly call it "solved", given how many passwords you're still required to remember on the web today. But yeah, I agree that it's a non-solution.
@TwelveBaud said:If it does have binary sludge, like Fiddler's COM+ connection to Fiddler to figure out when it's running and what port to use, then you need to update it. Every six weeks. Whether it actually breaks or not.
And that's a desirable feature because ...?
@TwelveBaud said:There is no analogue to Mozilla Weave (bookmark, history, and tab synchronization).
Um? (though I'll give you that this will probably dump even more information into Google's data pool. Then again, Mozilla would get the same data using Weave.) -
RE: 2 features, 1 name
@blakeyrat said:
@PSWorx said:
I think the web would really benefit from a (working) single sign on solution
It had one. Passport.com. But being owned by Microsoft meant the 2/3rds of the web run by the "Microsoft SUX!!!! OPEN SORESS!!!!" people would never adopt it.
Have there ever been any non-Microsoft auth providers/login servers/etc that you could use with Passport though?
-
RE: 2 features, 1 name
@blakeyrat said:
Name gripes aside, I'm cautiously optimistic that Persona solves some of the horrible glaring problems with OpenID.
I'm almost positive it doesn't, and adds brand new horrible glaring problems of its own. But I'm trying to be cautiously optimistic about it!
<commencing rant about single sign on in 3 ... 2 ... 1 ... >
I think the web would really benefit from a (working) single sign on solution - it would force services to invest more in interoperability and would make life of users a lot easier - but I don't see how something like this would be possible at all from a commercial perspective.
I mean, take a look at the sites that use the (in)famous "Log in with Twitter/Facebook/whatever" functions, which I'd say are the closest thing to a working single sign on we have so far. Even most sites that use those functions don't allow you to actually sign in with them - they're basically a disguised "register" button that conveniently connects your new account with your Facebook identity. You still have to answer the same annoying questions as before.
The reason, of course, is that right now you can earn big bucks if you provide an account API - but you get almost nothing but disadvantages if you're the consumer of such an API. I think until this changes, we'll keep seeing everyone and their grandma present their new killer single sign on solution - and no one using it except themselves.
<end of rant>
@MiffTheFox said:
*** Userscript support has been removed for whatever reason though, but reinstated via an extension.
It technically hasn't been removed - it's just a consequence of Google's new policy that all remotely extension-ish things for Chrome are now to be installed via their web store, and only via their web store. Of course that makes Chrome's native userscript support completely pointless, but it's still technically there...
I'm rather surprised that the above extension is still allowed in the web store, because it effectively breaks that exact policy. I'd like to stick up for the conspiracy theorists in this one and guess that it won't be available for very long anymore.
-
RE: Has anybody ever done a usability study on the Linux CLI interface?
@boomzilla said:
This is simply human nature and, uh, reality. While there probably is a much better system, it may or may not be worth it to replace everything currently in existence, not to mention obsolete all of the knowledge that exists all over the place. For the readers who like to imagine things, this isn't an argument saying that we shouldn't try to improve things, no matter how much you want it to be that.
Fair enough. I can understand if you have to stick with sub-par solutions due to this being the real world. What I can't understand is actually praising this as the superior thing and selling it as one of the prime features of Linux. -
RE: Has anybody ever done a usability study on the Linux CLI interface?
@bannedfromcoding said:
@PSWorx said:
So, please, I'd really like to know this: Apart from "because we've always done it like this", why would the former EVER be a good idea?
"Because using an API would require you to use a tool to create and debug the script, and it'd be a separate way from the way you're doing manually in the command line".
Except, I dare saying that powershell solves this exact issue. Of course, in order to create equivalent of powershell in Unix, you'd first need to create an API for all the features...
No, using the CLI requires the exact same tools. It's just that, for historical reasons, those tools already exist - they're called "shells". Like you said, Powershell already bridges that gap in Windows. (Though, thanks to ActiveX and stuff, there is a longer tradition of tools that directly access libraries.) - I'm no Linux expert, but I don't see why the same wouldn't be possible using shared objects on Linux. -
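To make the "call the library directly instead of spawning a tool" idea concrete, here's a tiny illustrative sketch in Python's `ctypes` - not an existing shell, just the mechanism a Powershell-style Unix shell could build on: load a shared object, call a function, get a typed value back, no child process and no string parsing.

```python
import ctypes
import os

# Load the already-present C library of the current process. On Linux,
# ctypes.CDLL(None) hands back a handle to the symbols of the main
# program and its loaded shared objects, including libc.
libc = ctypes.CDLL(None)

# A direct function call into the shared object - the "API as protocol"
# route. The CLI route would be: spawn a process, capture stdout, parse.
pid_from_library = libc.getpid()
```

The point isn't `getpid()` itself; it's that the result arrives as an integer, not as a string you have to regex apart.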
RE: Has anybody ever done a usability study on the Linux CLI interface?
@Kittemon said:
@blakeyrat said:
So if a library exists, WHY THE HOLY SHIT IN HELL IS THE GUI TOOL CALLING THE CLI APP!??!??!!?>!?@>
Your post explains nothing, it just raises more questions!
I'm surprised nobody brought up this point previously: the CLI defines a protocol. Anything built against that protocol can be insulated from irrelevant internal changes to the underlying library, so by default, it's preferable to write other software against an app's CLI rather than its library API. It's a basic *nixy philosophy.
So that explains 1) why it's done, and 2) why there is such strong resistance to changing the way it's done.
The weird fixation with strings that seems so prevalent in the *nix world is something that has been bugging me for a long time already. Of course you can treat the input syntax and output formatting of a specific CLI utility as a protocol - just like you can treat the, I dunno, public Application Programming Interface of the underlying library as one. Comparing the two approaches, I see the following pros and cons:
- Using the CLI as a protocol:
- + Output can be interpreted by humans
- - You're restricted to exchanging strings. Good luck solving the problems of packaging structured information into strings for the 100th time.
- - Your tools waste CPU by formatting data into a human-readable string, which is then parsed back by the next tool - without any human ever looking at it.
- - There can only be one version of the protocol at any time. You have to take great pains to stay backwards compatible with every previous version ever.
- - Actually, there are no versions at all. There is no structured description of the protocol, except convention and a likely outdated manpage.
- - Corollary: Every client of your protocol probably parses it subtly differently, depending on which particular regex the developer felt like using.
- - If you want to do anything non-trivial, you'll want to chain different tools together, which all have their own rules on string parsing. Good luck figuring out the correct syntax when multiple tools interact. Examples: find -exec, using bash wildcards on directories with more than 10,000 files.
- Ok, now let's treat the API of the library as a protocol:
- + Debuggers can generate a "human readable" view of the protocol exchange for you - no need to burden the protocol itself with that.
- + You have an explicit description of the protocol as header files, type libraries, etc. You can add new functions while keeping your old API completely unchanged for legacy clients.
- + There is a standard way of launching tools (loading the library), invoking commands (calling functions) and exchanging all kinds of data (arguments). You can chain two arbitrary libraries together and don't have to think about how to pass arguments between them! Unbelievable, I know!
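Here's a toy Python contrast between the two protocols. The `ls -l`-style line is hardcoded for illustration - in real life it would come from spawning the tool and hoping the column layout, locale and date format never change - while the API side is one structured call.

```python
import os
import re
import tempfile

# String-protocol way: scrape the file size out of ls -l-style text.
# Fragile: it silently breaks if a column shifts or the format changes.
ls_line = "-rw-r--r-- 1 psworx users 4096 Jan  1 12:00 notes.txt"
size_from_text = int(re.split(r"\s+", ls_line)[4])  # size is column #5

# Library-API way: ask the library, get a typed struct back. No parsing.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 4096)
os.close(fd)
size_from_api = os.stat(path).st_size
os.unlink(path)
```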
So, please, I'd really like to know this: Apart from "because we've always done it like this", why would the former EVER be a good idea?
-
RE: Yet another sleep
@Xyro said:
...is the monitor bright enough to light up the room? Or, maybe I can just fiddle with the contrast and brightness settings. Wait a second, how on earth does a 486 have a DVI port??? WHERE AM I????
@dhromed said: @Xyro said:
LOOK COMPUTER
It's a crappy old 486 case. Beige and lightly stained. It's off.
@dhromed said: You carefully remove the back screws and the cover. You are greeted by a very long flexible PURPLE DILDO that has been crammed in and around the hardware, dislodging some CONNECTORS and jamming a few fans.
The hardware itself looks absolutely pristine and new, as if it was purchased and installed only yesterday.
@Onyx said: Windows 7 on a 486...
... I don't think there are any emulators running on that computer.
-
RE: Pick your own, but don't go to the source
@ekolis said:
Freaking MEDIA-CONDITIONAL STYLESHEETS? CHECK...
Sorry, that fad is back in style again. -
RE: Oauth is awful-- illustrated!
@blakeyrat said:
How would they know who to push it to?
You could have your app register the upcoming authentication with your own server before you start, e.g. like this:
- User clicks on "Connect with Twitter" button in awesomesoft app.
- App connects to twitter, gets request token X.
- App connects to awesomesoft.com, announces that it's about to do an OAuth authentication, sends the request token and keeps the connection open.
- Awesomesoft.com associates the request token with the connection it just got sent through.
- App asks user to somehow open the Twitter authorisation page. User opens the page on his smartphone, clicks "Yes".
- User gets redirected to awesomesoft.com/landing?oauth_token=X.
- The landing script retrieves the open connection associated with X and sends a push notification.
- The app plays a cheerful jingle to indicate the setup is complete. User is completely baffled because he has just remote-controlled his playstation from his smartphone.
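The steps above can be sketched as a toy model in Python. All names here (`awaiting_auth`, `app_announces`, `landing_page`) are invented for this sketch, and a `queue.Queue` stands in for the long-poll HTTP connection the app keeps open to awesomesoft.com.

```python
import queue

# request token -> the open channel back to the app that announced it
awaiting_auth = {}

def app_announces(request_token):
    # Steps 3-4: the app announces the pending OAuth authentication;
    # the server associates the request token with the open connection.
    channel = queue.Queue()
    awaiting_auth[request_token] = channel
    return channel

def landing_page(oauth_token):
    # Steps 6-7: the user's browser hits /landing?oauth_token=X after
    # clicking "Yes"; the server pushes a notification to the waiting app.
    channel = awaiting_auth.pop(oauth_token, None)
    if channel is not None:
        channel.put("authorized")

channel = app_announces("X")  # app side, connection held open
landing_page("X")             # the redirect from Twitter arrives
```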
Disclaimer: I know this setup is something between needlessly complicated and completely insane. But, well, it's OAuth, what do you expect? -
RE: Oauth is awful-- illustrated!
Wow, I didn't know there is actually an application that uses that ridiculous PIN authentication flow...
There is one thing I don't get here. (Ignoring for a moment the fact that OAuth is just wrong on more levels than one can count)
So this is the official Twitter app? Last time I checked, Twitter was a cloud service. They have servers. That can serve web pages. Why can't they just use the normal flow, redirect you to some landing page on their site when you're done and notify the app via server push?
I could kind of understand a crazy setup like this when you have a pure client app with no server-side infrastructure at all. But for Twitter of all things? -
RE: Not So Friendly Greeting on Wikipedia
ITT: A forum RP about posting in a forum RP.