var x = 'MM_callJS(x+x);';
MM_callJS(x);
//muhahaha...!
I'm not really experienced with IE 6, but that sounds like some internal buffer cutoff in IE to me. How about putting the warning inside an invisible div instead and showing it when the user clicks the button?
In any case it would probably help if you posted a bit of the code. Is that possible?
@tster said:
wait a minute.... is this help message associated with the people using the board or the people placing the board on the website? in the first case it is completely asinine. In the second it is completely perfect.
I'd find it perfectly concise and helpful if it were in the administrator panel. But, I kid you not, it's on the "edit profile" page where users can change their personal settings.
@tster said:
As for help messages, I actually wish Windows had 2 or 3 sets of error messages. [...]
I agree with you, that would solve many problems. But I also think that detecting the experience level automatically would probably lead to more problems and confusion than it's worth. A simple option in the control panel would suffice. That, or make more use of the "Details >>" button.
[Quote user="Iago"]Um, there are probably more users in the world who find the American date format totally incomprehensible than who are used to it. It's not a case of being "picky", it's a case of expecting an application to work for the user, not the other way round. A good interface here would have had predefined options for "Month-Day-Year", "Day-Month-Year", and "Year-Month-Day", and an "Other" option that revealed the underlying editable format string. Advanced functionality tucked away where only advanced users will see it, and common functionality provided in a way that common users can use easily.[/Quote]
Seconded! Couldn't have said it better!
@emurphy said:
@PSWorx said:
I believe the focus is more on not taking knowledge of technical terms for granted (like the phpBB "help" message "the syntax of this field is the same as in php's date() function")
phpBB hyperlinks "date()" to the relevant section of the PHP manual. The only thing it takes for granted is that the subset of users who are both (1) picky enough to want a custom date format, and (2) unable and/or unwilling to RTFM, is small enough not to worry about.
(3) the user knows what a "syntax" is
(4) the user knows what a "function" is
(5) the user knows what a "string" and a "pattern" are (in the documentation)
(6) the user is willing to dig through the documentation of a programming language and skip over paragraphs of highly technical and completely irrelevant information only to configure how some forum displays dates.
I really don't think the forum should reproduce the complete documentation in the help text. What I mean is, this is knowledge the user couldn't care less about, and that's his right.
RTFM is a good point if you're arguing with programmers. But those aren't programmers, those are just users.
Suppose some hobby gardening site adds a board for some chit-chat. Then some hobby gardener discovers that function and gets hit in the face with "how dare you surf the internet without being a computer geek".
I like how the guidelines don't only address programmers and designers but management and human resources too, though:
[quote user="Rule 4: Use icons and graphics consistent with the Windows Vista style and quality"][...] Spend the time necessary to get it right. If you do not have an in-house graphic designer, outsource the task to experts at one of the many design agencies.[/quote]
I believe the focus is more on not taking knowledge of technical terms for granted (like the phpBB "help" message "the syntax of this field is the same as in php's date() function")
I think the problem is that many users "just want things to work" and don't have the time and/or will to learn new terminology. And besides, a lot of terminology is already common sense ("files", "directory", "firewall", "browser", etc.), so I think you can get pretty far without getting into the really technical stuff. That's my opinion at least.
If you've ever gotten mad that your spec wasn't in-depth enough, you'll probably like the Windows Vista UI Guidelines. They don't stop at describing the semantics and correct usage of UI controls; they get down to every detail of the interface, right down to the attitude of the text in dialog boxes...
(I wouldn't necessarily classify this as "Worse Than Failure", maybe it's even an improvement, but it still made me call out "WTF" out loud)
@JamesKilton said:
I love these filters that work on parts of words as well. Blizzard has a wonderful filter for their World of Warcraft forums which includes words like rape (how this fits in with the normal curse words is beyond me). So you get g&*@! (grape), and it can make posts very hard to read.
The "smarter" the word filter, the more it gets wrong. And they still don't know how to handle s h i t.
Be glad you never played the MMO Maple Story. A prominent one among its many WTFs was its word filter. Basically, if it found something that looked like a curse word to it, it would
I like the word "buttumption" though... Shows pretty well, IMO, with which body part the developers probably thought that regex up...
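The classic failure mode here (often called the Scunthorpe problem) is easy to reproduce. Here's a minimal sketch of a substring-based filter; the word table and function name are invented for illustration, not taken from any real game:

```python
import re

# Hypothetical replacement table -- any pure substring filter behaves this way.
BAD_WORDS = {"ass": "butt"}

def naive_filter(text):
    # Replaces banned substrings wherever they occur, even inside
    # perfectly innocent words.
    for bad, repl in BAD_WORDS.items():
        text = re.sub(bad, repl, text, flags=re.IGNORECASE)
    return text

print(naive_filter("That assumption is classy."))
# -> That buttumption is clbutty.
```

Fixing it properly needs word boundaries (or better, a human), which is exactly what these filters skip.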
@Ice^^Heat said:
Then it's time to think about something new, right?? The revolution starts here!
I better start reading cyberpunk!
There ARE many really interesting projects going on (XHTML 2.0, RDF, microformats, WIDEX, to name only a few). The W3C itself seems to pump out new standards like crazy. And I'd really love it if those would establish themselves some time, but I have that eerie feeling that many will go the same way as apparently Apache 2.0 (and RDF):
The shiny new standards will be there and everyone will agree that "in theory", "in an ideal world", etc. they'd make everything better. However, no one will take the risk of actually using them because they'd lose compatibility with the old buggy and messy but "established" technologies - and in doing so actually prevent the new standards from ever being established.
I mean, it's the same way with IE. Even in 2007 it has lots of obnoxious quirks. But it's still the no. 1 browser on the market. Hence web developers need to "code around it", and other people who don't think much about compatibility and the like even still happily exploit non-standard behavior. And, well, since everything seems to work, there is of course no urgent reason for Microsoft to actually fix the quirks. On the contrary, making it more standards-compliant might [i]break[/i] dozens of sites relying on those quirks.
Ok, this is maybe a bit too pessimistic, but from what I've seen so far it really looks like some kind of vicious circle to me...
Also, finally, please forgive that offtopic rant... I guess I had to blow off some steam... *goes in the corner*
Or more accurately, what's its purpose in the W3C philosophy? I'm trying to make myself familiar with the whole "semantic web" and "future of the web" stuff, because, although a few years old already, it sounds pretty interesting in my opinion. (Yes, I am a nerd, I know...)
While XHTML seems clean and nifty at first, the more I look at it, the less I get how it is supposed to "fit in the greater scheme of things". Basically, from what I've got so far, the W3C wants to promote XML and RDF datasets for semantic data and CSS and XSL for presentational data. While this sounds like an improvement in my opinion, what would be left as "valid" XHTML content then?
I mean, I see what good ol' HTML was good for (or in fact, still is). It combined semantic and presentational data because it was being developed a long time before the whole standardisation stuff started. But XHTML was created long after RDF, XML and CSS, so why was it created in the first place?
That, and people should spend more time including intelligent caching behavior in their scripts. It annoys me to no end when pages disable caching completely only because they end in .php and MIGHT change their content some time in this millennium... You can use header() for more things than just redirecting, you know...
The page is served as "no cache". I could imagine that, since the browser is not allowed to keep a local copy, it has to fetch it again when you use the back button, "forgetting" the scrolling position in the process.
I remember a friend of mine way back when he had win98 on his pc. The first thing he did after starting up the pc was to
@Mikademus said:
@Steeldragon said:
@Mikademus said:
@PSWorx said:
/me slaps Steeldragon around with a large trout for using "ur"
/me misquotes Mikademus in an attempt to start a silly quote-flood forum game; slaps Steeldragon with tuna for good measure
/me slaps self twice for using "ur" twice
/me realises that THE SLAPFEST IS ON! pulls out swordfish
PsWorx replaces himself with a symbolic link and redirects all slaps to Steeldragon and Mikademus, decided randomly
...mahaha!
/me slaps steeldragon around with a large trout for using "ur"
@db2 said:
@malfist said:
(BTW, that function only works with a live connection to a database, any reason why?)
I've wondered that myself. Maybe it's getting some information from the server about how it should properly escape the strings. Or maybe it's just poorly designed. :) I don't know for sure either way.
According to the article, it does exactly that: it queries the database server for which characters it uses for escaping. That way you're safe from bad surprises if the database server uses escape characters you didn't know of.
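For the curious, here is why the connection's character set matters at all. The sketch below reproduces the well-known multi-byte escaping pitfall (using GBK as the example encoding): a naive byte-level escaper inserts a backslash, but under GBK that backslash fuses with the preceding byte into one two-byte character, and the quote survives unescaped. This illustrates the general problem only; it is not the library's actual code:

```python
# A quote preceded by the byte 0xBF, as an attacker might send it.
payload = b"\xbf\x27 OR 1=1 -- "

# Naive escaping that knows nothing about the connection charset:
escaped = payload.replace(b"'", b"\\'")   # byte sequence: BF 5C 27 ...

# Interpreted as GBK, BF 5C is a single two-byte character,
# so the 0x27 that follows is a bare, unescaped quote again.
decoded = escaped.decode("gbk")
print("'" in decoded)   # -> True
```

An escaper that knows the connection uses GBK would treat BF 27 as one unit instead of blindly inserting bytes.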
Note how she uses XHTML to display that picture...
Sorry... care to explain for the baseball-challenged Europeans?
<edit>
...oh...yeah, being able to read might help too... my fault... still, what does "homered to deep right/left" mean?
(sorry for the double post, thank those stupid post permissions...)
Check this: http://badgers.badgers.badgers.worsethanfailure.com/forums
@Kemp said:
Why it apparently jumps through hoops in order to give inconsistent behaviour I have no idea.
Not necessarily. I could imagine the www is not processed by the redirect script but by the DNS resolving mechanism. Since there is no entry for "forums.forums.forums.....www.worsethanfailure.com", the default entry for "anything.www.worsethanfailure.com" is returned: the address of the front page. But the browser still sends the original address in the Host header, from where it then gets copied into some "request uri" server variable.
The redirect script, however, assumes that the only way to reach the page is via "worsethanfailure.com", thus blindly slaps a "forums." in front of the server variable, and hilarity ensues...
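If that theory is right, the bug boils down to something like the following (a hypothetical reconstruction of the redirect logic, not the site's actual script):

```python
def redirect_target(host_header):
    # Hypothetical redirect logic: assume every request arrived for the
    # bare domain and blindly prepend "forums." to the Host header.
    return "http://forums." + host_header + "/"

# The intended case works fine:
print(redirect_target("worsethanfailure.com"))
# -> http://forums.worsethanfailure.com/

# But a host that already starts with "forums." just keeps growing,
# and a wildcard DNS entry happily resolves every new name:
print(redirect_target("forums.www.worsethanfailure.com"))
# -> http://forums.forums.www.worsethanfailure.com/
```

Checking whether the host already carries the prefix before prepending would break the loop.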
Filed under: BadgerBadgerBadger
Seconded!
Aah, ok, seeing it now. Weird but somehow fun =)
Weren't web browsers supposed to stop following redirects after the fifth time or so?
Hmm, either it's already fixed or it's browser-dependent in some weird way. In any case forums.www.worsethanfailure.com loaded the front page fine for me; Tamper Data didn't report any HTTP redirects either...
While we're at it though, what's the meaning of those "Home" and "Forum" links at the top of the page where the one brings you to the forums and the other one brings you ... to the forums too?
Also, queries that cross the ocean without touching the American continent get rejected with a "can't process". Still, roflmao...
@dhromed said:
@joe.edwards@imaginuity.com said:
I don't remember seeing NextPage in the (X)HTML specs. Nonstandard attributes make baby Jesus cry.
No they don't.
No really.
In fact, they make him smile and coo, because you used a custom attribute, relevant to your operation, instead of misusing an existing one.
@KattMan said:
Aside from that I can see one place where using history -1 would be helpful as a link on the page. Think about search results and paged result sets. At the bottom of the page you would have "back 1 2 3 4 next"; clicking on a number takes you to that page, clicking next takes you to the appropriate page. All of these process the result set again before displaying. The "back" option can be a forward link causing a re-processing of the page, or it can be a history -1 thing that simply pulls it from your cache. Of course if you have your page set to NOCACHE or the user turns caching off in their browser this doesn't help, so why not just process the page anyway.
As someone who sometimes browses search engine results or forum topics or any other kind of pages you have to browse by number backwards, I find that extremely irritating. You have a "next" link that takes you to the page one number ahead. So I as the user would expect that the "back" link does the opposite - taking you one page back, and not to the page you came from. I agree, this is maybe not so much a problem for search engines, but consider a forum: Someone posted a link to the second page of the topic. It turns out interesting, but I'd like to read the start of the topic again too, so I click on the "back" link - only to land at the post with the link again. Useless.
I agree, the caching advantage sounds tempting, but what prevents you from simply adding cache headers to the output of your script and checking incoming requests for If-None-Match and If-Modified-Since headers before doing the hard work? You get the same efficiency win at the cost of maybe a couple more bytes over the wire.
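A minimal sketch of that idea (a hypothetical handler, not any particular framework's API): compute an ETag for the response and answer 304 with an empty body when the client already has the current version:

```python
import hashlib

def handle_request(content, if_none_match=None):
    # Derive a strong ETag from the body; a real script would hash
    # whatever cheaply identifies the current version of the data,
    # ideally *before* doing the expensive page generation.
    etag = '"%s"' % hashlib.sha1(content).hexdigest()
    if if_none_match == etag:
        # Client's cached copy is current: 304 Not Modified, no body.
        return 304, etag, b""
    return 200, etag, content

body = b"<html>...expensively generated page...</html>"
status, etag, payload = handle_request(body)          # first visit: 200 + body
status2, _, payload2 = handle_request(body, etag)     # revisit: 304, empty body
```

In PHP the same shape falls out of header(), $_SERVER['HTTP_IF_NONE_MATCH'], and an early exit.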
@plazmo said:
Something I don't like about that website, and that I find on many other sites:
It's making the back button link on the page go to 'javascript:onclick:history.go(-1)'.
It's most annoying, I think, when you navigate to the page from a search engine. I think the back button should at least go to the index page. Just seems like a lazy way to do navigation.
@Imroy said:
@PSWorx said:
It shouldn't matter what version of XHTML is involved, it's still just XML. To be valid XML however, you should at least define your own namespace in the top-level tag for use with your own tags or attributes:
I thought that the point of xhtml was that you COULD make things up?
I believe that's the difference between XHTML 1.x and XHTML 2.0.
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:wtf="http://worsethanfailure.com/xml/1.0">
... <div wtf:nextpage="...">...
Things would be nice if it were that way, but apparently the spec says differently:
@XHTML 1.0 said:
The XHTML namespace may be used with other XML namespaces as per XMLNS, although such documents are not strictly conforming XHTML 1.0 documents as defined above.
That is a somewhat borderline definition in my opinion. But the fact is that W3C's official validator doesn't even validate the example from the spec itself. And that seems to be enough for people to rule out that solution...
Take a look at the source too... those hidden fields with non-ASCII values don't look very healthy either...
I thought that the point of xhtml was that you COULD make things up?
I believe that's the difference between XHTML 1.x and XHTML 2.0.
1.x seems (from as much as I have understood) to be little more than an XML rendition of HTML. It uses XML as the underlying data format but is nevertheless not really extensible beyond what HTML offers (in the XML sense at least) and even comes with a "you shall not have any namespaces other than me" policy.
2.0 however is fundamentally different, rebuilt almost from scratch. Functionality is organized into different modules, each with its own namespace. In the same way, the language can simply be extended with custom XML markup or with other languages, RDF for example. While in theory that would make the language incredibly clean and flexible, it comes at the cost of completely breaking the web as it is. Which is one of the reasons it probably never will be used in its current form, despite sticking around for several years already...
I remeber a public affair about the (privatized) german postal services some years ago. If you get a packet around here, you'll actually find a notification in the mailbox that you can collect the packet in your local postal bureau. Packets that aren't collected are stored in the bureau for some time, then are released to an auction where everyone can get it. The revenues of the auctions of course go to the postal company. Well, turned out in many cases the postal company didn't quite make ... all possible efforts to deliver the packet before sending it off to the auction. In fact, many packets on there had "mysteriously vanished" during transit before...
@RevEng said:
@PSWorx said:
The real WTF IMO is that the same captcha URL will always produce the same image.
That's the worst WTF you can come up with? The whole point of a CAPTCHA is to be difficult for a machine to understand, but easy for a human being. I think putting the answer to the CAPTCHA in the URL makes it rather trivial for the machine (indeed, easier than for the human!), entirely defeating the purpose of a CAPTCHA.
I was referring to the URL of the captcha image, not the URL of the comment form, so maybe we're talking about the same thing? In any case, where exactly do you see the answer to the captcha in the URL?
The number in it seems to be the only piece of information the captcha is based on, true, but you still don't know the algorithm they use to generate the captcha from the number. That could be anything after all, from a hash to a table with random letters in it.
So, yeah, if you had seen enough captchas there to associate each number with a captcha string, you could solve this captcha automatically, but how do you think you can decode the number "on the fly"? Or did I miss something?
...all that of course not taking into account that any halfway self-respecting OCR software should have no problems reading that thing anyway... or is the shark background supposed to scare them away? :p
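To make the point concrete, here is one way such a scheme could work. This is purely hypothetical -- the alphabet, secret, and hash choice are all invented -- but it shows how the number in the URL can be deterministic without revealing the answer:

```python
import hashlib

# Alphabet without easily confused glyphs (0/O, 1/I/L) -- a common captcha choice.
ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"

def captcha_text(number, secret="server-only-secret"):
    # Same number always yields the same challenge (hence the repeating
    # images), but deriving the text requires knowing the secret.
    digest = hashlib.sha256(("%s:%d" % (secret, number)).encode()).digest()
    return "".join(ALPHABET[b % len(ALPHABET)] for b in digest[:6])

print(captcha_text(42) == captcha_text(42))   # -> True: stable per number
```

An attacker who only sees the numbers would still have to solve each image once, exactly as described above.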
@iwpg said:
@KattMan said:
I thought it was caching also, but when I go in the first time it is always the same captcha; after submitting I get a different captcha because the previous one failed. Submitting again still fails and I get another captcha. Leave the site, shut down the browser, go back, attempt to post again, and I get the exact same captcha as I did the first time. There is no pattern to subsequent captchas, but the first one on every attempt is the same.
If it were caching, wouldn't it either always be the same or the same as the last one I had? The duplication is always on the first attempt and of course every attempt fails.
Possibly because when you go to the form for the first time it's a GET, but when you post a wrong CAPTCHA and it tells you to try again it's a POST?
Yup, that's it. The "submit comment" page will be cached and with it the URL of the captcha image. Click the "add comment" links on two different articles: You will get two different captchas but the captcha for each article will stay the same.
The real WTF IMO is that the same captcha URL will always produce the same image.
@malfist said:
How do you do that? Make a URL point to 127.0.0.1, is that not part of the whole hidden IP addresses (besides being localhost too?).
It may have a special meaning, but it is still an IP address and can be used everywhere where "normal" addresses can be used.
Blind stab in the dark, maybe it has the option to treat the database contents like nodes in an XML file? With DOM access, XQuery/XPath navigation, XSLT and the whole stuff...
(The emphasis for the non-WTF product would be on *treat*, in that you just provide the XML APIs but store the data binary internally)
Judging from the speed it seems to run on just now, he should better think about adding a cooling system...
@pauluskc said:
Very interesting, because IE doesn't complain like this... I wonder if IE has an explicit trust for *.microsoft.com* in its certificate pool... how interesting is that.
Besides, think about it for a second: doing that would just increase the trouble for Microsoft. Usually, if some kind of hacker/virus/malware/etc. redirects users to its own website (hosts file change, etc.), SSL authentication could uncover this and prevent any harm. If MS really included such a "hack" to bypass authentication checks, they would just make it easier for hackers to take over their site.
I mean, come on, which company of all companies probably has the fewest issues affording a correctly signed certificate...
"Acrobat Reader sucks all your base..."
This is getting dangerous
Was gazing a few seconds too long at our shiny new tag cloud... and suddenly discovered it! Patterns! The tag cloud wants to tell us something!
Just starting my studies in computer science after having played around with VB, PHP, ECMAScript and Java (in roughly that order) as a hobby.
I like how they make use of not one, not two but three different 3D systems, each with its very own set of angles:
First there is the "flat screen", complete with skewed window content
Second there are the 3D features of the fake scroll bar and the border
Third there is the graph itself...
Talk about consistency...
Quote:
Well, I wrote a function named "strCommaIze" once, in C, because there wasn't anything built-in. I still use it sometimes.
<*sigh*> I remember the good old days of COBOL. It was so easy. Just like this:
77 WITH-COMMAS PICTURE Z,ZZZ,ZZZ,ZZ9.
MOVE RESULT-VALUE TO WITH-COMMAS.
...but how am I supposed to order that ISS vacation trip for 100 billion from your catalogue? :p
Stumbled over this ad for a German community website that is apparently open for business during its alpha stage... and advertising with this...
(granted, it's a pretty new spinoff of another, larger community website (with a full load of WTFs of its own), but still...)
@utoxin said:
And yeah, while this may be a 'string programmer' symptom... that doesn't excuse it.
Pardon my bad English, I meant to say "in addition to" instead of "despite". I didn't want to excuse it in any way :)
@kirchhoff said:
The problem is that if that were to work the way you would want it to work, then the exists/defined operators would have to block the code parser from emitting hash traversal opcodes for the parameter.
Because if it didn't autovivify in an lvalue sense, then doing exists($a->{b}->{c}) would cause a warning or return undef if 'b' didn't exist in hash a.
So that adds another special case to the perl interpreter... parameters to exists are not evaluated before passing. They would have to be evaluated implicitly at runtime inside an eval, trapping the undefined reference error.
I agree that exists() throwing a warning would probably not be very DWIM-ish. Then again, I wonder if an exists() call that silently changes data is so much more intuitive.
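For readers who don't speak Perl, Python's defaultdict produces the same "a read silently creates data" surprise, which is roughly what autovivification inside exists() does. This is an analogy only, not Perl's semantics verbatim:

```python
from collections import defaultdict

def tree():
    # A recursively self-creating dict: looking up a missing key
    # silently inserts a new empty subtree, much like Perl
    # autovivifying intermediate hash levels.
    return defaultdict(tree)

d = tree()
_ = d["a"]["b"]        # a pure *read*...
print("a" in d)        # -> True: the lookup itself created the key
```

Just as with Perl's exists($a->{b}->{c}), merely asking the question has changed the data structure.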
Despite the obvious WTFness, this seems to me like a typical case of the pathological string programmer, who tries to solve all problems by converting the input to a string, bludgeoning it with string manipulation functions until it kinda looks like the desired result and then converting it back.
Btw, am I the only one who finds
function dollarfy ($num='',$dec='2') {
$dec = $dec ? $dec : 0;
a bit um ... wrongfully redundant? Please tell me when exactly the 'else' expression of the ternary operator is evaluated there.
Correction: The Real Real WTF is that I can't read. >< Shame on me... Mods, please delete...
@Volmarias said:
Sign up. You know you want to. You can host images, like this one!