What (Who?) needs research



  • @Eldelshell said:

    Distributed HTTP.

    Didn't Opera try that at some point?



  • What I know from Opera is that they did some pre-rendering on their servers so it was faster client-side. Never heard of them doing dHTTP.



  • I remember there being some Opera thing where you could host a website using the browser and it would be reverse proxied by their servers or something.



  • someone is doing that at the moment with bittorrent..


  • Discourse touched me in a no-no place

    @Eldelshell said:

    Distributed HTTP. The guys behind BitTorrent are trying to do this and there are other efforts but it would be great if you cracked that one.

    That's basically content-addressable referencing (a rough sketch follows below). The main problems with it right now as I understand it are:

    • The referencing schemes which work (see magnet: URLs) are not even slightly user-friendly, so you need some sort of resolution scheme and that stuffs most of the good features of the scheme. Or at least it looks that way to me…
    • Getting the data associated with a particular content-id in a timely fashion is (probably) going to be hard.

    And nobody's worked out the implications of really going for commercialisation. Anything that cannot be commercialised will probably remain just a research curiosity…
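
    Roughly, content addressing looks like this (a minimal Python sketch; the URN scheme and hash here are only illustrative, not any particular project's format), which is also why the addresses are so hostile to humans:

    ```python
    import hashlib

    def content_address(data: bytes) -> str:
        """Derive an address from the content itself, not from where it lives."""
        digest = hashlib.sha256(data).hexdigest()
        # Magnet-style URN; real schemes differ, this is only illustrative.
        return f"magnet:?xt=urn:sha256:{digest}"

    page = b"<html><body>hello, distributed web</body></html>"
    addr = content_address(page)
    print(addr)  # an opaque hex blob -- nobody types this into an address bar

    # Anyone who fetches the bytes can verify them by re-hashing:
    assert content_address(page) == addr
    ```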



  • I actually considered what it would take to use the BitTorrent protocol to implement distributed forum software.

    I mean, if bbsync can sync my files in near-realtime, then forum software should be implementable as well, right?


  • Discourse touched me in a no-no place

    @Mikael_Svahnberg said:

    if bbsync can sync my files in near-realtime

    The key is that it knows where to sync with and the place it is syncing with is adequately provisioned.



  • @dkf said:

    @ijij said:
    I've forgotten the details, it must have something to do with Duals, but memory says there are subtle ways.

    You can do all sorts of tricks (and have to: sometimes you need to start with log scales or work in the reciprocal domain) but you still need a way to convert things to a common “goodness” scale, to say that gaining 3 points of A is worth giving up 4 points of B (or not). If you don't, you end up making bizarre decisions, such as giving up one bar of gold to get 3 wooden nickels.

    :angry: I'm warning you... I may have to dig out a book if you don't stop... 😉

    you're calling me out on stuff I worked on 20 years ago

    The ideas all relate to staying in the realm of feasible solutions and switching to "if you want 3 A and 4 B that implies 'some gold == a few wooden nickels' " only at the end.

    AHP is related...


  • Discourse touched me in a no-no place

    @ijij said:

    I may have to dig out a book if you don't stop...

    The article you link to still makes my point. 😄 You still need scaling factors. You just don't need (in my view) to produce a score in the range 0.0–1.0…



  • @dkf said:

    You still need scaling factors. You just don't need (in my view) to produce a score in the range 0.0–1.0…

    :facepalm: but they're outputs ... :headdesk: :headdesk:


  • Discourse touched me in a no-no place

    Well, I did read the Wikipedia article and some related ones. That stuff seemed to be all about trying to come up with ways to have a factor be subsumed entirely by another, so that it only needs to be used for breaking ties once more important factors have been considered. Which is a way of doing it, I suppose, but it is quite capable of leading to poor decisions. A generalised scoring mechanism (with qualifying thresholds for critical factors) is a strong technique that can balance many factors.

    Though the weights still need to be borne in mind (my point at the start of this sub-thread) and some scales will need converting to log scales first or other similar mathematical black magic to get something approaching linear importance. 😄
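
    To make that concrete, a sketch (the factors, weights and thresholds below are invented purely for illustration): critical factors carry a qualifying threshold that knocks an option out entirely, skewed factors get a log transform first, and the rest is a weighted sum on a common scale.

    ```python
    import math
    from typing import Optional

    # Invented factors: weight = importance on a common "goodness" scale,
    # threshold = qualifying minimum (None = no hard requirement),
    # log = transform so the raw scale behaves roughly linearly.
    FACTORS = {
        "reliability": {"weight": 5.0, "threshold": 0.8, "log": False},
        "throughput":  {"weight": 3.0, "threshold": None, "log": True},
        "cheapness":   {"weight": 2.0, "threshold": None, "log": True},
    }

    def score(option: dict) -> Optional[float]:
        """Weighted score on a common scale, or None if a critical factor fails."""
        total = 0.0
        for name, spec in FACTORS.items():
            value = option[name]
            if spec["threshold"] is not None and value < spec["threshold"]:
                return None  # disqualified outright; no amount of B buys back A
            if spec["log"]:
                value = math.log(value)
            total += spec["weight"] * value
        return total

    options = {
        "A": {"reliability": 0.95, "throughput": 120.0, "cheapness": 3.0},
        "B": {"reliability": 0.70, "throughput": 900.0, "cheapness": 9.0},
    }
    for name, opt in options.items():
        print(name, score(opt))  # B prints None: it fails the reliability threshold
    ```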



  • @dkf said:

    Though the weights still need to be borne in

    In spite of my insistence on the existence of alternate voodoo - bearing the weights in mind is what I do all day...



  • Thought of mentioning AHP earlier. In our experience, though, AHP is perceived as quite opaque -- it is not obvious to the users how you get from the pair-wise comparisons to the 0-1.0-on-a-ratio-scale result (which I gather was something of a showstopper for @dkf). For this reason we use cumulative voting (or hierarchical cumulative voting, if we need to) whenever we can get away with it, but then you are back to directly assigning values to the alternatives.


  • Discourse touched me in a no-no place

    @Mikael_Svahnberg said:

    the 0-1.0-on-a-ratio-scale

    You can always scale to that range afterwards if you want. But you might instead prefer to use 1 ⭐ – 5 ⭐ if talking to plebs. 😃

    FUCK YOU DISCOURSE! I WANT THIN SPACES THERE, STOP REPLACING THEM DURING COOKING!



  • If I ask a decision maker "please rate from one to five ⭐s how important this item/thing/requirement/whatever is", I am going to get the answer "they're all important!". Cumulative Voting (aka the $100 method) or AHP forces them to realise that not all options are equally important. This is where we have found that CV is much more transparent than AHP in getting people to subsequently accept that the priorities are actually a result of the rating they initially did, without some intermediate magic (Eigenvector, anyone?).
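
    The whole of cumulative voting fits in a few lines, which is exactly its appeal (the stakeholders and numbers below are made up): each person spreads a fixed budget over the items, and the priorities are just the normalised totals, so there is nothing to explain away afterwards.

    ```python
    # Each stakeholder spreads a fixed budget (say $100) over the items.
    votes = {
        "alice": {"security": 60, "performance": 30, "dark mode": 10},
        "bob":   {"security": 20, "performance": 70, "dark mode": 10},
    }

    # Sanity check: everyone spent exactly their budget.
    for person, allocation in votes.items():
        assert sum(allocation.values()) == 100, person

    # Priorities are the normalised totals -- no intermediate magic.
    totals = {}
    for allocation in votes.values():
        for item, amount in allocation.items():
            totals[item] = totals.get(item, 0) + amount

    grand_total = sum(totals.values())
    priorities = {item: amount / grand_total for item, amount in totals.items()}
    print(priorities)  # {'security': 0.4, 'performance': 0.5, 'dark mode': 0.1}
    ```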


  • Discourse touched me in a no-no place

    Reducing things to financially-based scores is definitely one way to get the weightings, especially as people get to think that the scores might have some meaning.



  • @dkf said:

    You can always scale to that range afterwards if you want. But you might instead prefer to use 1 ⭐ – 5 ⭐ if talking to plebs.

    :headdesk: I always cringe when I see the stars nonsense. But obviously the 0-1 scale is in reality no better and you'd get no answers at all.

    The nerd appeal of AHP is the ability to calculate a measure of the internal consistency.

    WRT one large-scale use of AHP that I have experience with, the opaqueness of the calculation was actually (eventually) viewed as a positive since it shielded the decision-makers from direct responsibility for the specific outcomes.



  • There is one big difference between the 0-1.0 scale that you get out of e.g. AHP and the stars system: 0-1.0 is on a ratio scale, meaning it makes sense to say things like "Item A is twice as important as item B".

    The five stars are usually used as an ordinal scale (rank matters but nothing else makes sense), but when the results are interpreted, they are read as an interval scale (item A (⭐) + item B (⭐⭐) == item C (⭐⭐⭐)).

    This is usually difficult to get across in a discussion, because you have to throw nerd-talk at them, and when they ignore that and continue doing meaningless comparisons you have no means by which to reel them back in.



  • @ijij said:

    The nerd appeal to AHP is the ability to calculate a measure of the internal consistency.

    Yeah, that's a load of bull. Especially the "compensate for randomness by multiplying by this magic number that increases in an ad-hoc way based on the number of elements you compare, except for when you have 12 or 13 elements in which case the magic number is lower" (the random-index table sketched below).

    And what are you going to do with the consistency measure? Go back and tell the highly paid decision makers "Velly Solly, this no Clicket -- you do again!"?
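
    For the record, this is the intermediate magic and the magic-number table in question, as a sketch (the random-index values are the commonly cited Saaty ones for up to ten elements; the rest is illustrative code, not a reference implementation):

    ```python
    import numpy as np

    # Saaty's random-index values for matrices of size 1..10.
    RANDOM_INDEX = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

    def ahp(pairwise: np.ndarray):
        """Return (priority vector, consistency ratio) for one comparison matrix."""
        n = pairwise.shape[0]
        eigenvalues, eigenvectors = np.linalg.eig(pairwise)
        k = np.argmax(eigenvalues.real)
        lambda_max = eigenvalues[k].real
        weights = np.abs(eigenvectors[:, k].real)
        weights /= weights.sum()              # the 0-1.0 ratio-scale priorities
        ci = (lambda_max - n) / (n - 1)       # consistency index
        cr = ci / RANDOM_INDEX[n - 1]         # consistency ratio
        return weights, cr

    # Pairwise judgements: A vs B = 3 ("A is moderately more important"), etc.
    matrix = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 3.0],
        [1/5, 1/3, 1.0],
    ])
    weights, cr = ahp(matrix)
    print(weights)  # roughly [0.64, 0.26, 0.10]
    print(cr)       # below 0.1 is conventionally "consistent enough"
    ```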



  • @Mikael_Svahnberg said:

    The five stars are usually used as an ordinal scale (rank matters but nothing else makes sense), but when the results are interpreted, they are read as an interval scale (item A (⭐) + item B (⭐⭐) == item C (⭐⭐⭐)).

    This is usually difficult to get across in a discussion, because you have to throw nerd-talk at them, and when they ignore that and continue doing meaningless comparisons you have no means by which to reel them back in.

    We are apparently not speaking of the same thing...

    Throw some nerd-talk at me...

    (But don't tell my wife.)



  • With the five-star system there is simply no way of checking that they have interpreted the stars in the intended way. Yes, you can tell them, but the appeal of the system is that they think they already understand it.

    Example: a person ranks element A as ⭐, element B as ⭐ ⭐ and element C as ⭐ ⭐ ⭐.

    • Is C three times as important as A?
    • Is C "moderatly important" and A "not important"?
    • Is A+B==C in importance?
    • So if I take A and C, are they together twice as important as B?
    • Is it even A<B<C, and A<<C?

    You don't know which system the person used when answering, but everyone is going to use the interpretation that best suits their needs during the analysis of the results. And then we have the "Let's just take the average for each element over all participants!" <shudders>



  • @Mikael_Svahnberg said:

    And what are you going to do with the consistency measure? Go back and tell the highly payed decision makers "Velly Solly, this no Clicket -- you do again!"?

    ... I actually think in our special case, the answer was: "Yes."

    The long answer might be - let's re-examine the proxies that we're using for comparing the "goodness" of these things and tease out why there's disagreement about it.

    Trivial example: comparing small SUVs - is a third row good? Some decision-makers may implicitly be thinking about hauling adults and others about hauling kids (and some may be short adults who don't realize how small the back is!), so putting that issue on the table could improve your choice.



  • Agreed. Different stakeholders may and will think of different things, so it is important to discuss their different results to uncover these implicit assumptions.

    The consistency index in AHP does not help you with that, since it only measures how internally consistent one stakeholder has been in their particular answers. So at best you can tell a single stakeholder that "you're messed up".



  • Ah. Absolutely... I thought you were somehow defending stars.... or some glorious stat-based method that works. ;)

    A co-worker is currently in the position of supporting an application where two different people are rating the exact same things, and then reporting on the gaps between the ratings. That would be the straw that would break my back.



  • @Mikael_Svahnberg said:

    The consistency index in AHP does not help you with that,

    ❓ TIDNR

    You've been AHPing more recently than I...

    TODO: reread on AHP.



  • I may very well have misinterpreted something. Most applications of AHP I've seen (including Saaty's whole book of examples) are usually pretty flat (neglecting the 'H'). I suppose if you use the H to represent different stakeholders at different levels and start merging upwards you may be able to get some global measure of consistency, but IIRC there was no way to propagate the consistency index of the different sub-trees upwards -- it's just the consistency of one particular group of pair-wise comparisons.



  • and I may be neglecting to group the comparisons... I think I was remembering it as tossing all the comparisons in together.



  • Each individual would need to do the pair-wise comparisons for all elements. I don't think having duplicate answers for the same pair is supported (it is at least not as simple as using the mean value), so you need to do the merging based on the eigenvector (i.e., the outcome of AHP, where you've already calculated the CI).
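
    In other words, something like this (the numbers are made up, and whether the arithmetic or the geometric mean is the "right" aggregation is a separate argument): each stakeholder's own comparisons yield a priority vector, and only those vectors get merged.

    ```python
    import numpy as np

    # Priority vectors already computed per stakeholder (each sums to 1.0);
    # the values here are invented.
    alice = np.array([0.64, 0.26, 0.10])
    bob   = np.array([0.30, 0.50, 0.20])

    # Geometric mean per element, renormalised, is one common way to merge.
    # Note that the individual consistency indices do not propagate: each CI
    # only ever described one person's own set of pair-wise comparisons.
    merged = np.exp(np.mean(np.log([alice, bob]), axis=0))
    merged /= merged.sum()
    print(merged)
    ```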



  • @ijij said:

    I think I was mis-remembering it

    FTFM ;)

