Sentiment Mining in Twitter?



  • Has anybody here done sentiment mining from Twitter data, or have any ideas on a good way to go about it? It looks like the best tools have an accuracy rate of about 65%... is that really the best it gets?



  • Ian Barber does a fair bit of blogging about these kinds of algorithms for PHP; I think he even has a post doing specifically what you're looking for. Probably not the accuracy you need, though.

    http://phpir.com/bayesian-opinion-mining



  • @stratos said:

    Ian Barber does a fair bit of blogging about these kinds of algorithms for PHP; I think he even has a post doing specifically what you're looking for. Probably not the accuracy you need, though.

    http://phpir.com/bayesian-opinion-mining

    That's more or less what I'm doing now*. The reason I'm looking for an alternative is that I'm trying to minimize the amount of human classification required (probably impossible, but there's no budget for this, so if a human has to classify, it's gonna be me), and because of the amount of data the Bayesian approach is... well, it's a strain on the DB. (Classifying a single tweet takes over a second, and we have millions of tweets.)

    *) One slight difference is that I'm including multiple-word tokens when tokenizing the tweet, so that "is shit" and "is the shit" are different tokens.
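
    For what it's worth, here's a minimal Python sketch of that kind of approach: unigrams plus multi-word tokens (only two-word tokens shown, for brevity) feeding a naive Bayes score whose counts live entirely in memory, so classifying a tweet never has to round-trip to the DB per token. Everything here is a placeholder; it assumes nothing about the real schema or training data.

    ```python
    import re
    from collections import defaultdict
    from math import log

    def tokenize(tweet):
        """Lowercase, keep word characters, return unigrams plus adjacent-word pairs."""
        words = re.findall(r"[a-z']+", tweet.lower())
        pairs = [" ".join(p) for p in zip(words, words[1:])]
        return words + pairs

    class NaiveBayes:
        """Counts live in plain dicts, so scoring a tweet never touches the database."""

        def __init__(self):
            self.token_counts = {"pos": defaultdict(int), "neg": defaultdict(int)}
            self.doc_counts = {"pos": 0, "neg": 0}

        def train(self, tweet, label):
            self.doc_counts[label] += 1
            for token in tokenize(tweet):
                self.token_counts[label][token] += 1

        def classify(self, tweet):
            total_docs = sum(self.doc_counts.values())
            scores = {}
            for label in ("pos", "neg"):
                counts = self.token_counts[label]
                total_tokens = sum(counts.values())
                vocab_size = len(counts) + 1
                score = log(self.doc_counts[label] / total_docs)
                for token in tokenize(tweet):
                    # Laplace smoothing so unseen tokens don't zero out a class.
                    score += log((counts.get(token, 0) + 1) / (total_tokens + vocab_size))
                scores[label] = score
            return max(scores, key=scores.get)

    nb = NaiveBayes()
    nb.train("this phone is the shit", "pos")
    nb.train("this phone is shit", "neg")
    print(nb.classify("the new phone is shit"))   # -> "neg"
    ```

    Whether the full multi-word vocabulary still fits comfortably in RAM at millions of tweets is worth checking before committing to this, but it sidesteps the per-token DB hit.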


  • I don't remember where I read it, but someone made the argument that sentiment mining on social media feeds like Twitter and Facebook is so grossly inaccurate as to be useless, because oftentimes the "sentiment" is simply a retweet or a share of an existing review, which does not necessarily match the opinion of the reader. For example, if someone shared a link on Facebook to a blog review like:

    "This new Motorola phone causes brain damage" with the Facebook user blasting the review adding an addendum to the link: "This whole review is bullshit and full of lies." even the best tools would likely inaccurately flag this review as a scathing review, even though the Facebook user is actually disagreeing with the original review, and not the product. Reading the comments to that wall post add to the confusion, as you don't always know who is responding to whom.

    The more accurate way to mine opinions is to look at store websites and blogs, as those opinions are almost always posted by the very person who authored them, rather than by a bunch of squawking parrots who just feel like hitting the "Recommend" or "Retweet" button. The article I read on it is a bit old, though, so things might have changed, or maybe the article itself is bullshit and full of lies.



  • The problem with Facebook is that the only access we have, without doing dubious screen-scraping, is comments on the company's own fan site. If we mined sentiment from a fan site, it'd be 99.9% positive every single day and the data would be useless.

    Note that there's a good chance the data's useless anyway.



  • As the data sample gets larger, each individual tweet becomes less and less important. If you analyze 10k tweets mentioning Nike, the 30-odd retweets of some blog post titled "why Nike sucks" should have no noticeable impact.

    I of course don't know what blakey is doing, but the most common use of these kinds of systems is to follow the trend, and when the trend changes, find out why. Perhaps that new product is getting a lot of flak/praise? It's a trend-over-time thing, not a single-point-in-time deal, and humans (alright, alright, marketing people) will need to find out why changes happen. Which means that a large enough group of false positives will be easily spotted as the reason for a given change.
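
    To make the trend-over-time point concrete, here's a toy sketch (the numbers are invented): bucket scored tweets by day and average them, so a handful of contrary retweets in a pile of thousands barely moves the line, while a sustained shift shows up as the line actually moving.

    ```python
    from collections import defaultdict
    from datetime import date
    from statistics import mean

    # Hypothetical input: (day, sentiment score in [-1, 1]) pairs for tweets mentioning
    # the brand; whatever classifier is in use would feed this.
    scored_tweets = [
        (date(2011, 5, 1), 0.6),
        (date(2011, 5, 1), -1.0),   # e.g. a "why Nike sucks" retweet
        (date(2011, 5, 2), 0.4),
        # ... thousands more per day
    ]

    def daily_trend(scored):
        """Average sentiment per day; individual tweets wash out at volume."""
        buckets = defaultdict(list)
        for day, score in scored:
            buckets[day].append(score)
        return {day: mean(scores) for day, scores in sorted(buckets.items())}

    for day, avg in daily_trend(scored_tweets).items():
        print(day, round(avg, 3))
    ```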



  • @blakeyrat said:

    Note that there's a good chance the data's useless anyway.

    I was going to make a suggestion, then realised I'm probably wrong. Make it a question, instead: how do you test/verify your results to find out whether they are actually meaningful in any way?



  • @intertravel said:

    @blakeyrat said:
    Note that there's a good chance the data's useless anyway.

    I was going to make a suggestion, then realised I'm probably wrong. Make it a question, instead: how do you test/verify your results to find out whether they are actually meaningful in any way?

    That's an extremely good question.

    You can judge the accuracy of the classifier just by having a human re-classify tweets that the software already classified and seeing whether the scores match.

    But to figure out if there's any actual relevance to the entire affair? ... if anybody has ideas, I'd love to hear them.
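
    The spot check for the first part could be as simple as this sketch (the labels below are placeholders, not real output): pull a random sample of machine-classified tweets, re-label the same ones by hand, and report the agreement rate.

    ```python
    import random

    # Placeholder data: tweet_id -> label as assigned by the classifier.
    machine_labels = {1: "pos", 2: "neg", 3: "neg", 4: "pos", 5: "neg"}

    # The same tweets re-labelled by hand (i.e. by whoever got stuck doing it).
    human_labels = {1: "pos", 2: "neg", 3: "pos", 4: "pos", 5: "neg"}

    # Spot-check a random sample rather than everything: 5 here, a few hundred in practice.
    sample_ids = random.sample(sorted(machine_labels), k=5)
    agreed = sum(machine_labels[i] == human_labels[i] for i in sample_ids)
    print(f"agreement: {agreed}/{len(sample_ids)} = {agreed / len(sample_ids):.0%}")
    ```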

    To be honest, what's happening is my company is buying this service from a vendor who:
    1) has very little data (they use blog posts from "approved" bloggers, which means their classifier is very accurate, but it only finds 6-7 posts a month for most clients)
    2) is very, very expensive
    3) also doesn't seem to talk about the real-world usefulness of their results (some things in advertising are just taken as gospel with no actual data; "TV advertising is effective," for example. This might just be another.)

    So the goal is to find something roughly comparable we could run in-house, save a ton of scratch, and impress our clients with how technically savvy we are.



  • @blakeyrat said:

    You can judge the accuracy of the classifier just by having a human re-classify tweets that the software already classified and seeing whether the scores match.

    But to figure out if there's any actual relevance to the entire affair? ... if anybody has ideas, I'd love to hear them.

    To be honest, what's happening is my company is buying this service from a vendor who:
    1) has very little data (they use blog posts from "approved" bloggers, which means their classifier is very accurate, but it only finds 6-7 posts a month for most clients)
    2) is very, very expensive
    3) also doesn't seem to talk about the real-world usefulness of their results (some things in advertising are just taken as gospel with no actual data; "TV advertising is effective," for example. This might just be another.)

    Well, I started off thinking that surely the way to weed out the retweets of disagreement, and so on, would be to rank all of a tweeter's other tweets as well and see if the sentiment matches; presumably you could also factor in all the other tracking data you can get to profile them with.

    Presumably there is some ultimate end-product - a prediction, of sorts - that can be tested against reality once things have panned out. For example, I get the impression you might in some cases be dealing with film reviews - and, a couple of years later, when the sales figures are in, you can check if the sentiment-mining was telling you anything useful about what would happen. Or have I got hold of the wrong end of the stick?

    @blakeyrat said:

    So the goal is to find something roughly comparable we could run in-house, save a ton of scratch, and impress our clients with how technically savvy we are.

    How about producing something nice and techy that proves the 3rd-party product is ineffective? You could get client-points by telling them to stop wasting time and money on something which doesn't work.



  • @intertravel said:

    Well, I started off thinking that surely the way to weed out the retweets of disagreement, and so on, would be to rank all of a tweeter's other tweets as well and see if the sentiment matches; presumably you could also factor in all the other tracking data you can get to profile them with.

    I was just gonna exclude retweets altogether, although I could see people retweeting tweet X serving as a score multiplier for X.
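
    Purely as a sketch of that, with made-up field names rather than anything from the actual Twitter API: skip anything that is itself a retweet, and let the retweet count scale the score of the original tweet instead.

    ```python
    # Hypothetical pre-parsed tweets; "retweet_of" and "retweet_count" are invented
    # field names for this sketch.
    tweets = [
        {"id": 1, "text": "the new nike shoes are great", "retweet_of": None, "retweet_count": 40},
        {"id": 2, "text": "RT @someblog: why nike sucks", "retweet_of": 99, "retweet_count": 0},
    ]

    def weighted_scores(tweets, classify):
        """Score only original tweets; retweet volume acts as a score multiplier."""
        scores = {}
        for t in tweets:
            if t["retweet_of"] is not None or t["text"].startswith("RT "):
                continue  # drop retweets rather than classifying the copy
            base = classify(t["text"])        # e.g. +1 / -1 from the classifier
            weight = 1 + t["retweet_count"]   # each retweet amplifies the original
            scores[t["id"]] = base * weight
        return scores

    # Toy stand-in classifier, just for the example.
    print(weighted_scores(tweets, lambda text: -1 if "sucks" in text else 1))   # {1: 41}
    ```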

    @intertravel said:

    Presumably there is some ultimate end-product - a prediction, of sorts - that can be tested against reality once things have panned out. For example, I get the impression you might in some cases be dealing with film reviews - and, a couple of years later, when the sales figures are in, you can check if the sentiment-mining was telling you anything useful about what would happen. Or have I got hold of the wrong end of the stick?

    This is the nebulous, retard world of brand advertising. There's data, sure, but the problem is the data includes: the brand impressions, the non-brand impressions, sales/coupons, news/rumors, online social opinion networks (consumerist.com alone can have a noticeable effect), friends talking amongst each other, stock price, etc. There's so much noise, anybody claiming they have good analytics data on brand campaigns is probably lying... even if you stop all non-brand advertising, you still have all those other factors.

    The real problem is, we're an ethical, rational, measurement-based company that's competing against BS-spewing idiots who say things like "you can get reliable sentiment monitoring from social networks." Because they've come up with some bullshit measure that works in their extremely limited testing, and now they think they can open it up to the biggest brands in the world (Nike, to use the above example) without modification.

    @intertravel said:

    How about producing something nice and techy that proves the 3rd-party product is ineffective? You could get client-points by telling them to stop wasting time and money on something which doesn't work.

    I'm not in a position to tell our clients that a service they're paying for is shit. In fact, even if it was shit, and we could prove it's shit, that doesn't necessarily mean you take that data to the client... especially not to some of the dimmer CMOs we deal with. (Believe me, we serve CMOs who don't know how pie charts work. Never assume "high-ranking company officer" == "competent". The Peter Principle goes all the way to the top.)

    Edit: to clarify, the goal here is to create an in-house service with roughly the same accuracy, but without spending much budget. We can use its data to augment the data from the shit company, and we can potentially use it to impress our other clients who haven't started attempting it yet. This is something I'm doing in what Google would call my "20% time", but what we call bench time, so if nothing comes of it the world won't end.



  • @blakeyrat said:

    There's so much noise, anybody claiming they have good analytics data on brand campaigns is probably lying...
    @blakeyrat said:
    they've come up with some bullshit measure that works in their extremely limited testing
    @blakeyrat said:
    to clarify, the goal here is to create an in-house service with roughly the same accuracy

    When you put it like that, doesn't sound so hard...

    Still, it sounds like you're trying to build a system to do something you acknowledge is probably impossible, or at least impractical, to do properly. What you really need, particularly if you want to be ethical and honest, is to find better ways of telling clients when their favoured idea is shit. I'm pretty good at that, if you want to hire me as an idiot-communications consultant. :)

    Anyway, if you're just doing it because it's interesting to try, and they're paying you to do it, it still makes sense. I'm thinking there may be a lot of meta-information you can use - stuff like the number of followers a tweeter has, the number of retweets, and so on. After all, the easiest way to classify stuff is to let other people do it for you.
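
    For the meta-information idea, a toy sketch (the weighting scheme is invented for illustration): scale each tweet's score by a damped function of follower and retweet counts, so a big account counts for more than a tiny one without drowning everything else out.

    ```python
    from math import log10

    def audience_weight(followers, retweets):
        """Damped weight: bigger audiences count for more, but not linearly."""
        return 1 + log10(1 + followers) + log10(1 + retweets)

    # Hypothetical scored tweets: (sentiment in [-1, 1], follower count, retweet count).
    scored = [(0.8, 120, 0), (-0.9, 250_000, 310), (0.5, 40, 1)]

    weighted = sum(s * audience_weight(f, r) for s, f, r in scored)
    total_weight = sum(audience_weight(f, r) for _, f, r in scored)
    print(f"weighted sentiment: {weighted / total_weight:+.2f}")
    ```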

