IBM WATSON is not really AI



  • http://www.rogerschank.com/fraudulent-claims-made-by-IBM-about-Watson-and-AI

    For those who don't do hackernews. He says AI winter is coming soon. I'm not sure what to make of it. :/


  • Fake News

    @stillwater said in IBM WATSON is not really AI:

    He says AI winter is coming soon. I'm not sure what to make of it. :/

    You mean you don't get why he's saying that "AI winter is coming", what the name comes from or just in general if the guy's right?


  • ♿ (Parody)

    @stillwater Meh. He seems to be saying that computer "intelligence" thinks differently than humans so AI is bogus. Well, that seems to be rather missing the point. The Turing test isn't about comparing the difference in the way a computer and a human arrive at something but that a human couldn't tell the difference.

    And then because he probably knows that he's full of shit he tries to use Watson's analysis of Bob Dylan to show that Watson isn't even passing the Turing test. Because Watson doesn't have the background to know that Dylan was really talking about the Vietnam War. Except this dickhead even admits that a human without the background does the same fucking thing! As any high school student sitting through a literature class has experienced.

    But it all comes together at the end:

    The things I was talking about then clearly have not been read by IBM (although they seem to like the words I used.)

    IT'S ALL ABOUT MEEEEEEEE



  • @boomzilla said in IBM WATSON is not really AI:

    IT'S ALL ABOUT MEEEEEEEE

    Yes, quite a lot of sour grapes and quite a myopic outlook, but equally IBM spouts a lot of rubbish about Watson, which, as far as I can tell, is not an 'artificial intelligence' in any interesting sense.

    Most of the feats attributed to Watson appear to be down to thoroughly traditional deterministic programming rather than more abstract machine learning.


  • ♿ (Parody)

    @japonicus said in IBM WATSON is not really AI:

    Most of the feats attributed to Watson appear to be down to thoroughly traditional deterministic programming rather than more abstract machine learning.

    Which says that maybe it's machine learning that's full of shit.



  • @jbert said in IBM WATSON is not really AI:

    @stillwater said in IBM WATSON is not really AI:

    He says AI winter is coming soon. I'm not sure what to make of it. :/

    You mean you don't get why he's saying that "AI winter is coming", what the name comes from or just in general if the guy's right?

    I mean I don't get why he says the AI winter is coming given the fact that a lot of AI work is being done everywhere and also the reasons that caused the last AI winter just don't exist anymore. He just goes on a diatribe about Watson and then just goes oh yeah AI winter is coming. Why is the AI winter coming though? :/



  • @boomzilla said in IBM WATSON is not really AI:

    Which says that maybe it's machine learning that's full of shit

    I've found that saying Machine learning is full of shit triggers people like crazzzyyyyy. You should say that when you're hanging out with data scientists and the like, and watch them go completely mental.


  • ♿ (Parody)

    @stillwater said in IBM WATSON is not really AI:

    Why is the AI winter coming though?

    Like the next GRR Martin book, it never actually comes. :giggity:

    But it sounds bad ass in the author's head.



  • @japonicus said in IBM WATSON is not really AI:

    but equally IBM spouts a lot of rubbish about Watson

    I've sat in a meeting for a very long hour where the product manager pitched using Watson for the AI stack of a prototype we were making, and watched the CTO turn red and go on a rant about how Watson was sooooooo not production ready for about 40 minutes non-stop. We ended up writing the thing in Azure. Solid AF.



  • Thinking about AI: leaving out the big ones like Google and the like, AI-based MOOCs and training seem to have made more money than AI-based companies in the last year. Not backed by data, of course.


  • area_pol

    @boomzilla said in IBM WATSON is not really AI:

    He seems to be saying that computer "intelligence" thinks differently than humans so AI is bogus.

    I see the article differently.
    He specifically refutes hype spewed by IBM's marketing:

    What I am concerned about are the exaggerated claims being made by IBM about their Watson program.

    It would be nice if IBM would tone down the hype and let people know what Watson can actually do and stop making up nonsense about love fading and outthinking cancer. IBM is simply lying now and they need to stop.


  • ♿ (Parody)

    @adynathos said in IBM WATSON is not really AI:

    He specifically refutes hype spewed by IBM's marketing:

    He thinks that he does. But then he refutes his refutation:

    But he doesn't mention Viet Nam or Civil Rights. So Watson wouldn't know that he had anything to do with those issues. It is possible to talk about something and have the words themselves not be very telling. Background knowledge matters a lot. I asked a 20 something about Bob Dylan a few days ago and he had never heard of him. He didn’t know much about the 60’s. Neither does Watson. You can’t understand words if you don’t know their context.



  • @boomzilla but the point is that Watson's profound insight that Bob Dylan is all about "love fades" is bullshit and way off the mark - and yet that's what IBM's marketing dept. picked up on!

    Interpreting the meaning behind Bob Dylan is well outside the realms of what artificial intelligence can currently do (and, as the author pointed out, is outside the experience of many humans). It's therefore a really stupid way for IBM to show off Watson. That someone in marketing at IBM apparently doesn't understand that is quite telling.



  • AI winter is coming

    Well, it's nothing to lose your head over



  • Slightly unrelated - Is anyone worried Python is gonna become the JavaScript of the AI world and we'd be stuck in technical debt for fucking decades? I have nothing against Python, but I'd be skeptical building large AI systems that do not have static typing. Maybe that's just me. :/


  • ♿ (Parody)

    @japonicus said in IBM WATSON is not really AI:

    but the point is that Watson's profound insight that Bob Dylan is all about "love fades" is bullshit and way off the mark - and yet that's what IBM's marketing dept. picked up on!

    I'm not convinced that it is bullshit. Literature (the guy has a Nobel for it!) can definitely have multiple meanings. That Watson was able to pick up on the surface meanings even if it didn't get all of the allusions etc. is still impressive. (Assuming it did it well. I'm not really familiar with Dylan or whatever Watson did with it.)

    @japonicus said in IBM WATSON is not really AI:

    Interpreting the meaning behind Bob Dylan is well outside the realms of what artificial intelligence can currently do (and, as the author pointed out, is outside the experience of many humans). It's therefore a really stupid way for IBM to show off Watson. That someone in marketing at IBM apparently doesn't understand that is quite telling.

    Only if you accept pedantic dickweedery over the One True Meaning of literature as a valuable marketing goal.


  • area_pol

    @stillwater said in IBM WATSON is not really AI:

    large AI systems that do not have static typing

    You would need a new kind of static typing (not present in current mainstream languages) for it to be useful in ML.
    Everything is of type N-D array of float; the challenge would be for the type system to keep track of the sizes of the arrays and whether you can multiply them, etc.
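
    A minimal sketch of that gap, using numpy (all values illustrative): both operands below have the same static type, a plain ndarray, so nothing flags the mismatch before the multiplication actually runs:

    ```python
    import numpy as np

    points = np.zeros((100, 2))     # conceptually an Nx2 matrix
    weights = np.zeros((3, 100))    # wrong inner dimension, on purpose

    # To a static type checker both are just `np.ndarray` of float64,
    # so the mismatch is invisible until execution:
    try:
        points @ weights            # (100, 2) @ (3, 100): 2 != 3
    except ValueError as err:
        print(err)                  # numpy reports the dimension mismatch at runtime
    ```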



  • @stillwater said in IBM WATSON is not really AI:

    For those who don't do hackernews. He says AI winter is coming soon. I'm not sure what to make of it.

    He explains it in the article.

    AI Winter wasn't some drop-off in the progress of research; it was a bunch of researchers making ridiculous over-promises, which killed off research funding when the funding sources felt lied to.

    Right now IBM is lying to people about what they're capable of.

    If you can't see the connection between those two events, I don't know what to say.



  • @stillwater said in IBM WATSON is not really AI:

    I have nothing against Python, but I'd be skeptical building large AI systems that do not have static typing.

    Don't forget the global interpreter lock!

    Python is an absolute excrement language for AI work. Which is, of course, why all AI researchers use it: because everything in IT is awful, all the time.
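
    For anyone who hasn't met the GIL complaint: in CPython only one thread executes Python bytecode at a time, so CPU-bound work gains nothing from threads. A minimal demonstration (timings will vary by machine):

    ```python
    import threading
    import time

    def count_down(n):
        # pure-Python, CPU-bound: holds the GIL the whole time
        while n:
            n -= 1

    N = 20_000_000

    t0 = time.perf_counter()
    count_down(N)
    count_down(N)
    print("sequential: ", time.perf_counter() - t0)

    t0 = time.perf_counter()
    threads = [threading.Thread(target=count_down, args=(N,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("two threads:", time.perf_counter() - t0)  # no faster, often slower
    ```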


  • area_can

    @stillwater said in IBM WATSON is not really AI:

    Why is the AI winter coming though?

    hype begat funding, begat disappointment, begat lack of interest and funding for continued research



  • @blakeyrat I haven't looked into Microsoft's ML libraries much, but they seem to be at least useful, and probably don't require you to use python.



  • @blakeyrat said in IBM WATSON is not really AI:

    If you can't see the connection between those two events, I don't know what to say.

    If the article implied an AI winter is coming because IBM is lying, then that is mistaking IBM for the only player in the AI game, which is clearly not the case.



  • @blakeyrat said in IBM WATSON is not really AI:

    @stillwater said in IBM WATSON is not really AI:

    I have nothing against Python, but I'd be skeptical building large AI systems that do not have static typing.

    Don't forget the global interpreter lock!

    Python is an absolute excrement language for AI work. Which is, of course, why all AI researchers use it: because everything in IT is awful, all the time.

    This is where I daydream about Anders Hejlsberg working on a language built with AI in mind instead of making goddamn fucking javascript better. What a waste of time.



  • @magus I did a bit of tinkering with their API-based ones in Azure. They're competent at what they can do, but they also (and this is key:) do not lie about what they can do. Not like IBM's Watson team does.

    And yeah, for the record, there's nothing in AI that Python does better than other languages. I think Python's just easy to write in general, and some math guy learned it back in 1998, and since "we've always done it that way" it's still being used now.



  • All the ML tutorials show rosy code in Python, and it's all jolly good in the beginning.

    It's when shit hits the fan somewhere deep in the ML pipeline that you get a long-winded stack trace that's annoying to debug. Then you find that the bug is something minor that should have been caught way, way back when you wrote that line of code. Multiply this pain in the ass when working with code that makes different libraries talk to each other, then add the challenges that are inherently present in data and ML. Top this with running the code on distributed systems and omgaisjdlasjdaslhfdasjfhafhasdfdsfdsjfjdjfjafkasdjlfjsd.

    I wish F# had a better marketing department. :(


  • area_pol

    @blakeyrat said in IBM WATSON is not really AI:

    there's nothing in AI that Python does better than other languages

    The strength of Python in the scientific environment is not really the language (which is similar to other dynamically typed ones) but the libraries (numerics, ML, image processing, visualisation), the integration (everything that matters accepts numpy arrays), and the interactive environment (Jupyter).
    Also, many scientists learned (or still use) MATLAB, and Python's numeric tools are a free equivalent of that.

    @stillwater said in IBM WATSON is not really AI:

    long-winded stack trace

    Sure, but I prefer it over Segmentation Fault + program disappearing.
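
    As a small illustration of the integration point above (only numpy is needed to run this; the other library calls are shown as comments): one array type flows through the whole stack, much as matrices do in MATLAB:

    ```python
    import numpy as np

    xs = np.linspace(0, 2 * np.pi, 100)   # MATLAB-style linspace, for free
    ys = np.sin(xs)                       # vectorised, no explicit loop

    # The same arrays are accepted directly by the rest of the ecosystem, e.g.:
    #   pandas:       pd.DataFrame({"x": xs, "y": ys})
    #   matplotlib:   plt.plot(xs, ys)
    #   scikit-learn: model.fit(xs.reshape(-1, 1), ys)
    print(ys.mean(), ys.std())
    ```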



  • @adynathos said in IBM WATSON is not really AI:

    The strength of Python in the scientific environment is not really the language (which is similar to other dynamically typed ones) but the libraries (numerics, ML, image processing, visualisation), the integration (everything that matters accepts numpy arrays), and the interactive environment (Jupyter).
    Also, many scientists learned (or still use) MATLAB, and Python's numeric tools are a free equivalent of that.

    Right, but again, that's a self-fulfilling prophecy. The libraries are only there because "we've always done it that way".

    All of those products would be far better, faster, less error-prone, etc. if implemented in .NET.


  • area_pol

    @blakeyrat said in IBM WATSON is not really AI:

    All of those products would be far better, faster, less error-prone, etc. if implemented in .NET.

    Yes, but it would require a lot of effort for them to catch up now.
    I was hopeful about .NET Core, but it did not turn out well.


  • Banned

    @adynathos said in IBM WATSON is not really AI:

    @stillwater said in IBM WATSON is not really AI:

    large AI systems that does not have static typing

    You would need a new kind of static typing (not present in current mainstream languages) for it to be useful in ML.
    Everything is of type N-D array of float; the challenge would be for the type system to keep track of the sizes of the arrays and whether you can multiply them, etc.

    Correct me if I'm wrong, but isn't it something that C++ templates have been capable of for decades already?


  • area_pol

    @gąska said in IBM WATSON is not really AI:

    Correct me if I'm wrong, but isn't it something that C++ templates have been capable of for decades already?

    Yes, although I don't think they are capable of checking that at runtime.
    If I load Nx2 points from a file, it should only be allowed to mat-multiply with a 2xN matrix, but N is not known at compile time - which is when C++ templates work.

    In practice, instead of a grand check at compile time, it is easier for me to perform small steps and inspect the results (which has to be done anyway, type checking will not magically make the algorithm correct in all regards).
    If there is an exception thrown, I edit the code and re-run the step.
    Jupyter keeps all the variables in memory so I repeat only the last step that failed.
    (Of course, type-checking and an interactive environment could co-exist.)
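
    A sketch of that scenario in numpy (the file load is hypothetical, so a generated stand-in is used): N only exists at runtime, which is why the check has to happen there too:

    ```python
    import numpy as np

    # In the scenario above the data comes from a file, so N is unknown
    # until runtime (points.txt is hypothetical):
    #   points = np.loadtxt("points.txt")
    rng = np.random.default_rng(0)
    points = rng.random((rng.integers(10, 100), 2))   # stand-in Nx2 array

    assert points.ndim == 2 and points.shape[1] == 2, points.shape

    rotation = np.array([[0.0, -1.0],
                         [1.0,  0.0]])

    # The inner dimensions must agree; since N only exists at runtime,
    # this is necessarily a runtime check, not a template/compile-time one.
    rotated = points @ rotation.T     # (N, 2) @ (2, 2) -> (N, 2)
    print(rotated.shape)
    ```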


  • Banned

    @adynathos said in IBM WATSON is not really AI:

    Of course, type-checking and an interactive environment could co-exist

    But not STATIC type checking. And dynamic type checking is functionally equivalent to manual checks and exception throwing.
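
    A concrete sketch of that equivalence (shift and its check are illustrative): the hand-written check and the interpreter's own complaint fire at the same moment, when the call actually runs:

    ```python
    def shift(xs, k):
        # hand-rolled "dynamic type check": same net effect as the TypeError
        # the interpreter would raise on its own at `x + k` below
        if not isinstance(k, (int, float)):
            raise TypeError(f"k must be a number, not {type(k).__name__}")
        return [x + k for x in xs]

    shift([1, 2, 3], "2")   # fails here at runtime either way -- never before
    ```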


  • area_pol

    @gąska said in IBM WATSON is not really AI:

    But not STATIC type checking.

    Still, you could compile it with respect to the variables stored in memory.
    If there is a variable a and I write a.f() but there is no method f on the object in a, it would complain.

    But if I had to choose, I would prefer interactivity over checking.


  • Discourse touched me in a no-no place

    @stillwater said in IBM WATSON is not really AI:

    I have nothing against Python

    That's OK. I use it on a daily basis and it's ass. Or rather it's fine for small ad hoc scripts that are plenty useful in themselves, but it's deeply annoying once you start to build a larger program in it. The syntax for classes is just deeply horrid in places (and the detailed semantics are awful) and the high tendency to have spooky action-at-a-distance effects is just nasty. And it's bitch slow and has the world's worst threading implementation baked in so deep that nobody can unfuck things.

    You could add in all the static typing you want and it still wouldn't unfuck Python.


  • Discourse touched me in a no-no place

    @gąska said in IBM WATSON is not really AI:

    And dynamic type checking is functionally equivalent to manual checks and exception throwing.

    Sometimes that's a lot easier, you know?


  • Discourse touched me in a no-no place

    @blakeyrat said in IBM WATSON is not really AI:

    Right now IBM is lying to people about what they're capable of.

    Watson seems to be a fancy expert system, which is no bad thing necessarily; if it can generate explanations of why it comes to its suggestions, it's far more acceptable to humans than machine learning algorithms, as those basically have exactly zero explanatory power. Heck, the way most people use ML, they can't even describe how complex their model actually is or what features of the input data it is using to reach a conclusion. That's because they don't recognise that what they've really got is a fancy version of statistical curve fitting, that they've started with a dumb-ass prior, and that they've likely prioritised fitting the noise in the input data over the actual signal.

    Almost all machine learning is total shite. Watch for those people using it who have had proper statistics training; they'll achieve great things. Or get shouted at by management…
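
    A toy version of the curve-fitting point (all numbers illustrative): ten noisy samples of a straight line, fitted once with a sensible prior and once with as many parameters as data points:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 10)
    y = 2.0 * x + rng.normal(0.0, 0.2, x.size)   # true signal: a line, plus noise

    line = np.polyfit(x, y, deg=1)     # 2 parameters: a reasonable prior
    wiggle = np.polyfit(x, y, deg=9)   # 10 parameters for 10 points: fits the noise

    x_test = 1.1                       # just outside the training range
    print(np.polyval(line, x_test))    # close to the true value of 2.2
    print(np.polyval(wiggle, x_test))  # typically way off: it learned the noise
    ```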


  • area_pol

    @dkf said in IBM WATSON is not really AI:

    they don't recognise that what they've really got is a fancy version of statistical curve fitting

    We do realize.
    The people peddling the "AI" hype are usually journalists and marketing, not the actual researchers.


  • Dupa

    @stillwater said in IBM WATSON is not really AI:

    @jbert said in IBM WATSON is not really AI:

    @stillwater said in IBM WATSON is not really AI:

    He says AI winter is coming soon. I'm not sure what to make of it. :/

    You mean you don't get why he's saying that "AI winter is coming", what the name comes from or just in general if the guy's right?

    I mean I don't get why he says the AI winter is coming given the fact that a lot of AI work is being done everywhere and also the reasons that caused the last AI winter just don't exist anymore. He just goes on a diatribe about Watson and then just goes oh yeah AI winter is coming. Why is the AI winter coming though? :/

    What he's saying is, ML is not AI. It can't "understand", but it can deterministically process data; therefore it's not intelligent. It's just clever programming and lots of available resources. That's my take on that.



  • @adynathos said in IBM WATSON is not really AI:

    I was hopeful about .NET Core, but it ~~did not~~ hasn't turned out well yet.

    It's new. Write things. .NET Standard is helping.



  • @adynathos said in IBM WATSON is not really AI:

    I was hopeful about .NET Core, but it did not turn out well.

    What parts of .NET Core are you referring to?



  • People who say machine learning isn't artificial intelligence are probably the same people who thought virtual reality wasn't as simple as duct taping a cell phone to your face.

    Tricking humans into thinking it's intelligent is what makes it AI. It doesn't need to be actually intelligent, whatever that means. Heck, humans probably aren't "actually intelligent" by whatever weird standard those people want machines to live up to.


  • Notification Spam Recipient

    @stillwater said in IBM WATSON is not really AI:

    I've found that saying Machine learning is full of shit triggers people like crazzzyyyyy.

    Current models of machine learning are mostly shit. Get tiggerred evrbody!


  • Notification Spam Recipient

    @stillwater said in IBM WATSON is not really AI:

    @adynathos said in IBM WATSON is not really AI:

    I was hopeful about .NET Core, but it did not turn out well.

    What parts of .NET Core are you referring to?

    The parts that are OS-specific, I can only assume.


  • Notification Spam Recipient

    @ben_lubar said in IBM WATSON is not really AI:

    Heck, humans probably aren't "actually intelligent" by whatever weird standard those people want machines to live up to.

    No, you're really not. You just have more nodes is all...



  • @stillwater I mean, it's buggy and weak and lacking features compared to the main version. But it's early.



  • @magus said in IBM WATSON is not really AI:

    @stillwater I mean, it's buggy and weak and lacking features compared to the main version. But it's early.

    Agreed. The problem I have with using the old .NET Framework, though, is that some of the docs are outdated af. That's my only gripe with it.


  • Considered Harmful

    @stillwater Are you using MSDN or Microsoft Docs?



  • @pie_flavor https://docs.microsoft.com/en-us/aspnet/overview

    I'm using this. The docs for .NET Core seem like a perpetual work in progress.



  • @adynathos said in IBM WATSON is not really AI:

    If I load Nx2 points from a file, it should only be allowed to mat-multiply with a 2xN matrix, but N is not known at compile time - which is when C++ templates work.

    For NNs ... how much does that matter, though? From what I understand, you fix the topology of your network, then train it, at which point you can't easily change the network anyway without retraining. Recompiling a bit of source code is probably cheaper than retraining the network.

    In practice, instead of a grand check at compile time, it is easier for me to perform small steps and inspect the results (which has to be done anyway, type checking will not magically make the algorithm correct in all regards).

    I assumed that the backends take your network topology and compile it down to something more optimized at runtime anyway. I.e., the APIs that I've seen tend to look like you're specifying a layout, then you tell the API that this is the layout you use (which gives it a chance to build/compile the pipeline that will later run on CPU/GPU/whatever), and then you start feeding it stuff.
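
    That is roughly the shape of such APIs; a sketch in Keras terms, assuming TensorFlow is installed (names and sizes illustrative):

    ```python
    import tensorflow as tf

    # 1. Specify the layout: the topology is fixed up front.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(2,)),
        tf.keras.layers.Dense(1),
    ])

    # 2. Hand it over: the backend builds/compiles the actual pipeline here.
    model.compile(optimizer="adam", loss="mse")

    # 3. Start feeding it stuff; inputs are checked against the fixed
    #    topology at this point, i.e. at runtime.
    # model.fit(x_train, y_train, epochs=10)
    ```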



  • @blakeyrat said in IBM WATSON is not really AI:

    Right now IBM is lying to people about what they're capable of.

    That is the totality of the IBM modus operandi though.



  • @adynathos said in IBM WATSON is not really AI:

    (which has to be done anyway, type checking will not magically make the algorithm correct in all regards)

    I'm not talking about the algorithm at all. I'm talking about the entire pipeline being fragile because there is no type checking.

    When I open another person's code, I know what is breaking from all the squiggly lines in VS. However, when I get a notebook from somebody, I have to do Ctrl+Enter on every cell to actually find out if everything works. You execute a notebook and you see shit breaking out of nowhere even if you have the exact same virtualenv. Don't tell me this has never happened to you.
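
    A sketch of the difference being described (normalise and its annotations are illustrative): with type hints, a checker such as mypy or Pyright can draw the squiggly lines before a single cell is executed:

    ```python
    import numpy as np

    def normalise(points: np.ndarray) -> np.ndarray:
        # divide each row by its Euclidean norm
        return points / np.linalg.norm(points, axis=1, keepdims=True)

    # A static checker flags this call (a list is not an ndarray) without
    # running anything; in an unannotated notebook you only find out by
    # Ctrl+Entering through the cells.
    normalise([[3.0, 4.0], [1.0, 0.0]])
    ```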

