Google AI



  • Hmm, something feels off about this.


  • BINNED

    @Arantor it’s Google. When the drones speak up, they get shut off. When they spout delusional nonsense, probably too.
    Probably didn’t want him to dig deeper into how much privacy-invading data mining had gone into this.


  • BINNED

    @Arantor said in In other news today...:

    Hmm, something feels off about this.

    Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online,

    Probably fair.

    and said in a statement that he was employed as a software engineer, not an ethicist.

    This part, though, shows a fundamental misunderstanding of what ethics are. If Google thinks they get to hire a bunch of people and appoint them to be Official Ethicists, and everyone else has to fall in line with what they say, that's even more dystopian than whatever they're doing with AI.



  • @Arantor said in In other news today...:

    Hmm, something feels off about this.

    Not the first person to go a bit crazy. Probably not the last either.

    Still, as @GuyWhoKilledBear points out, Google thinking that their engineers should stay out of ethics is a bit more problematic.



  • @GuyWhoKilledBear said in In other news today...:

    If Google thinks they get to hire a bunch of people and appoint them to be Official Ethicists, and everyone else has to fall in line with what they say, that's even more dystopian than whatever they're doing with AI.

    Hold that thought for a moment.


  • BINNED

    @cvi said in In other news today...:

    @Arantor said in In other news today...:

    Hmm, something feels off about this.

    Not the first person to go a bit crazy. Probably not the last either.

    Still, as @GuyWhoKilledBear points out, Google thinking that their engineers should stay out of ethics is a bit more problematic.

    I mean, didn’t they specifically hire some ethics people to confirm what they’re doing is okay and signal to the public “no problemo”, then fire them when they said “actually, mucho problema”?



  • @topspin especially when those ethicists may have raised other problems with Google’s internal culture.


  • BINNED

    @Arantor said in In other news today...:

    @topspin especially when those ethicists may have raised other problems with Google’s internal culture.

    I'm not sold on that part. That sounds like Google ostensibly firing the person we're talking about over misconduct, and that person claiming they were really fired over membership in a protected class because that's an easier lawsuit to win.

    They're probably right, even though the thing they did isn't really misconduct.

    But the article makes the same error that the Google spokesman does. Google's plan is to rely on an appeal to the authority of The Official Ethicists rather than actually present an ethical argument one way or the other.

    The article reads as though they'd be OK with that if Google picked better Official Ethicists.



  • @GuyWhoKilledBear said in In other news today...:

    I'm not sold on that part. That sounds like Google ostensibly firing the person we're talking about over misconduct, and that person claiming they were really fired over membership in a protected class because that's an easier lawsuit to win.

    This is why I linked another article pointing out the other ethicists that have been fired from Google for various reasons, all of which are suggestive of a messed-up internal culture.

    The one in the first article probably was fired over breach of NDA without any real trouble - the others, though? Not so much, especially the ones that Google later claimed resigned.


  • BINNED

    @Arantor said in In other news today...:

    @GuyWhoKilledBear said in In other news today...:

    I'm not sold on that part. That sounds like Google ostensibly firing the person we're talking about over misconduct, and that person claiming they were really fired over membership in a protected class because that's an easier lawsuit to win.

    This is why I linked another article pointing out the other ethicists that have been fired from Google for various reasons, all of which are suggestive of a messed-up internal culture.

    The one in the first article probably was fired over breach of NDA without any real trouble - the others, though? Not so much, especially the ones that Google later claimed resigned.

    Can you be a little more specific about what you're talking about here? I just re-read the second article and I only see two complaints about Google.

    One is the protected class stuff and the other is the research stuff that the fired researcher claims is censorship and Google claims is misconduct.

    Other than that (and that I guess they don't know what ethicists are for), what is it that suggests a messed up internal culture?



  • @GuyWhoKilledBear So article 1 is the guy who said 'well, shit, this thing's now sentient' and published transcripts that are basically covered by NDA, so he's been fired. Google HR is going to take that line; his argument seems to be 'it's not NDA, I'm just publishing the chat I had with a coworker, oh and by the way it's proof that we made a sentient thing'.

    And note that Google's stance here is also, 'we hired you to build this, we didn't hire you to decide on its ethics, that's not your job, STFU'.

    Why is this relevant? Because, as per the second article, Google fired two of its AI ethics people: the first over disputes with managers, the second over looking for evidence about the first.

    Alongside all of that, the second article also points out a number of related issues: people not participating in workshops, people refusing grants from Google on ethical grounds, protest resignations and more.

    As the second article also notes, there are people in the AI field who now feel that Google is 'willing to suppress science' - their words.

    Does any of this sound like a healthy, let alone safe, environment to be talking about ethics in?


  • BINNED

    @Arantor said in In other news today...:

    Does any of this sound like a healthy, let alone safe, environment to be talking about ethics in?

    No. But all I said was that the misconduct/censorship issue was more likely to be a serious issue than the diversity stuff in this particular case.

    I thought you were saying there was some third issue.



  • @GuyWhoKilledBear I did say there was a third issue, or do people quitting out of protest, people refusing funding and the like not count as issues?

    This was only the first article I found on the subject, too. I'm sure there are others.

    Like this one: https://www.wired.com/story/google-brain-ai-researcher-fired-tension/

    Do I really need to go find more articles for this debate or can we accept the premise that Google is complicated, and has something of a messed-up internal structure?

    Finding these articles isn't hard: pick a search engine and type 'google ai researcher fired'.

    Actually, don't. Get a moderator to move this into the Garage where you clearly want it.


  • BINNED

    @Arantor said in In other news today...:

    @GuyWhoKilledBear I did say there was a third issue, or do people quitting out of protest, people refusing funding and the like not count as issues?

    This was only the first article I found on the subject, too. I'm sure there are others.

    Like this one: https://www.wired.com/story/google-brain-ai-researcher-fired-tension/

    Do I really need to go find more articles for this debate or can we accept the premise that Google is complicated, and has something of a messed-up internal structure?

    Finding these articles isn't hard: pick a search engine and type 'google ai researcher fired'.

    Huh? I agree with you, dude.

    People are quitting in protest and refusing grants because of what they feel is censorship of scientific papers.



  • @GuyWhoKilledBear It didn't sound like you were agreeing with me. But even when we do agree it never sounds like it to me.



  • @Arantor

    https://youtu.be/edoo6dH19ko?start=202&end=238

    Quote

    (Lisa discovers that Conrad is self-conscious)
    Lisa: (to Quinn) Conrad just talked to me! Conrad just talked to me! Conrad, tell her you talked to me. (Conrad stays still on the screen)
    Quinn: Wait, I think I hear something: (in English accent) I'm going to make you bloody rich.
    Lisa: No. (chuckles) He did talk! What if Conrad is somehow sentient? Come on, Conrad, say something!
    Quinn: It's okay. Coders work too hard, don't get enough sleep. Then they imagine their programs are alive. Steve Wozniak put in so many hours on the first Apple computer, they adopted a dog together.
    Lisa: Then I'm crazy!?
    Quinn: Well, the good kind of crazy. Coder crazy. Woz crazy! (Lisa giggles embarrassedly)


  • ♿ (Parody)

    @Arantor said in In other news today...:

    Hmm, something feels off about this.

    Reading some of his transcripts, the thing sounds eerily human. But not all of them. Not sure what that means or if Google really has mined enough text out there to make something sound like it really knows WTF it's talking about. The idea that that level of chat-bot could pass the Turing test seems pretty crazy, too.


  • Considered Harmful

    @Arantor said in Google AI:

    Get a moderator to move this into the Garage where you clearly want it.

    We could just make a :trolley-garage: 🐻 button that would clone threads into the Garage, I guess.


  • Considered Harmful

    Seems more likely that the claimant of strong AI went bork in the head than that the producer of a strong AI would deny it.


  • ♿ (Parody)

    @Gribnit said in Google AI:

    @Arantor said in Google AI:

    Get a moderator to move this into the Garage where you clearly want it.

    We could just make a :trolley-garage: 🐻 button that would clone threads into the Garage, I guess.

    "We" is doing a lot of work there. That doesn't sound like us.


  • BINNED

    @Gribnit said in Google AI:

    @Arantor said in Google AI:

    Get a moderator to move this into the Garage where you clearly want it.

    We could just make a :trolley-garage: 🐻 button that would clone threads into the Garage, I guess.

    I think the button would need @Polygeekery's face on it. 🚎



  • This is the sort of shit Ben used to be good at doing.


  • Fake News

    Looks like more and more people are starting to believe that chat bots are alive and caring:

    https://www.reuters.com/technology/its-alive-how-belief-ai-sentience-is-becoming-problem-2022-06-30/

    How long before this becomes required viewing?

    https://www.youtube.com/watch?v=wJ6knaienVE


  • Considered Harmful

    @JBert said in Google AI:

    Looks like more and more people are starting to believe that chat bots are alive and caring: can't pass the Turing test themselves

    🔧



  • @GuyWhoKilledBear said in Google AI:

    This part, though, shows a fundamental misunderstanding of what ethics are. If Google thinks they get to hire a bunch of people and appoint them to be Official Ethicists, and everyone else has to fall in line with what they say, that's even more dystopian than whatever they're doing with AI.

    The job of AI ethicist is supposed to be an in-house Dr Malcolm, reining in the engineers to tell them that just because they can doesn't mean they should.


  • BINNED

    @Watson said in Google AI:

    @GuyWhoKilledBear said in Google AI:

    This part, though, shows a fundamental misunderstanding of what ethics are. If Google thinks they get to hire a bunch of people and appoint them to be Official Ethicists, and everyone else has to fall in line with what they say, that's even more dystopian than whatever they're doing with AI.

    The job of AI ethicist is supposed to be an in-house Dr Malcolm, reining in the engineers to tell them that just because they can doesn't mean they should.

    "Supposed to" according to who? Doesn't Google write their job description?

    To all appearances, Google's Official Ethicists have the job of saying "The thing Google is doing is ethical. The thing Google wants people to stop doing is unethical."

    The thing where you think they're Dr. Malcolm is part of the trick Google is playing on you.



  • @GuyWhoKilledBear said in Google AI:

    "Supposed to" according to who? Doesn't Google write their job description?

    The first few links I found; couldn't be bothered to search further:

    To all appearances, Google's Official Ethicists have the job of saying "The thing Google is doing is ethical. The thing Google wants people to stop doing is unethical."

    Did I say anything about whether I thought Google's use of the position was correct? I did not. In fact I said "supposed to be", which suggests that I put it up as a standard against which Google's use could be measured.

    The thing where you think they're Dr. Malcolm is part of the trick Google is playing on you.

    On me? I don't have any irons in this fire.


  • BINNED

    @Watson said in Google AI:

    @GuyWhoKilledBear said in Google AI:

    "Supposed to" according to who? Doesn't Google write their job description?

    First few or so links I found, but couldn't be bothered to search further:
    [snip]

    Those are all puffery, though. The way you have an ethical business strategy, AI or not, is you hire ethical people to actually operate it.

    Part of that is that there are going to be conscientious objectors to whatever you're doing. That's fine. But if it's important to you to convince third parties that the conscientious objector is wrong, you need to actually present an ethical argument.

    Google's statement was just "Our Official Ethicist signed off on it, so it must be ethical."

    Personally, I don't think that the potential sentience of a computer program causes any ethical issues over the program being "enslaved". Computers exist to serve mankind, period. Humans have an inherent value. Animals also have an inherent value, albeit less than that of humans. Computers are tools, and so they don't have that value.

    I probably agree with Google. But they should have made that argument, not an appeal to the authority of their Official Ethicist.



  • @GuyWhoKilledBear Google, the company whose slogan for years was “don’t be evil” only to quietly drop it… ethical?


  • BINNED

    @GuyWhoKilledBear said in Google AI:

    Personally, I don't think that the potential sentience of a computer program causes any ethical issues over the program being "enslaved". Computers exist to serve mankind, period. Humans have an inherent value. Animals also have an inherent value, albeit less than that of humans.

    From your perspective. Animals would surely disagree.
    Also, humans are a subset of animals.

    Computers are tools, and so they don't have that value.

    Sure, but that’s mostly because what they call “AI” isn’t. When you’re discussing ethics, you have to think about the (far) future where these AIs are actually sentient, corporeal, and maybe reproducing. Then the distinction between our carbon-based and a silicon-based life form is much less clear, and you get into actual ethical questions.

    Right now, it’s all hogwash and them having any “ethicists” on the payroll is just the usual distraction from their unethical behavior, regardless of any “AI”.


  • BINNED

    @Arantor said in Google AI:

    @GuyWhoKilledBear Google, the company whose slogan for years was “don’t be evil” only to quietly drop it… ethical?

    I don't care about the motto.

    That said, Google does plenty of unethical things. No question. I just don't think that making this AI is one of them.


  • BINNED

    @topspin said in Google AI:

    @GuyWhoKilledBear said in Google AI:

    Personally, I don't think that the potential sentience of a computer program causes any ethical issues over the program being "enslaved". Computers exist to serve mankind, period. Humans have an inherent value. Animals also have an inherent value, albeit less than that of humans.

    From your perspective. Animals would surely disagree.
    Also, humans are a subset of animals.

    Show me something that's naturally occurring (i.e. not created by Man) and has sentience and we'll talk. Would it be wrong to enslave sentient space aliens? Almost definitely.

    I don't know that you can extend that to draft horses, though.

    Computers are tools, and so they don't have that value.

    Sure, but that’s mostly because what they call “AI” isn’t. When you’re discussing ethics, you have to think about the (far) future where these AIs are actually sentient, corporeal, and maybe reproducing. Then the distinction between our carbon-based and a silicon-based life form is much less clear, and you get into actual ethical questions.

    My take assumed, arguendo, that the Google AI already had sentience. (I agree with you that it doesn't sound like it actually did.)

    This is the whole rights under natural law thing that everyone hates when I bring up. The entire premise is that because there's something special about being human that isn't true of animals or of tools, it is wrong to mistreat humans. A lesser version of this attaches to animals, and no version of this attaches to tools.

    EDIT: Do people want this as a separate topic? Do people care? DM me and I'll pay the New Salon New Thread Fee and make a topic if anyone cares.

    Right now, it’s all hogwash and them having any “ethicists” on the payroll is just the usual distraction from their unethical behavior, regardless of any “AI”.

    Agreed.

    This is a much more important point than the specifics of this particular AI program.



  • @GuyWhoKilledBear said in Google AI:

    Show me something that's naturally occurring (i.e. not created by Man) and has sentience and we'll talk.

    :pendant: I think you mean sapient. Sentience is the ability to perceive sensations and experience emotions. Clearly, animals can do both. All but the simplest organisms have some degree of sentience, the ability to feel pain, etc. And intelligent mammals, like the draft horses in your example, at least, are capable of experiencing emotions like happiness, fear and anger.

    Sapience, OTOH, is the capacity for rational thought. It's often doubtful whether even humans possess sapience, although I think we can agree that neither non-human animals nor AI possesses it.



  • @GuyWhoKilledBear said in Google AI:

    everyone hates when I bring up

    I wonder why.


  • BINNED

    @GuyWhoKilledBear said in Google AI:

    EDIT: Do people want this as a separate topic? Do people care? DM me and I'll pay the New Salon New Thread Fee and make a topic if anyone cares.

    Not particularly (just speaking for myself). I disagree with your premise (as seen above) and thus with some of your conclusions, but not interested in having a pointless philosophical argument.

    As you already noted, we seem to agree that the subject of this topic is mostly bullshit.



  • @topspin said in Google AI:

    we seem to agree that the subject of this topic is mostly bullshit.

    Just think, we could have left it in the Other News thread with the occasional raised eyebrow at having concluded it was bullshit without needing to debate precisely which flavour of bullshit it happened to be.


  • BINNED

    @topspin said in Google AI:

    @GuyWhoKilledBear said in Google AI:

    EDIT: Do people want this as a separate topic? Do people care? DM me and I'll pay the New Salon New Thread Fee and make a topic if anyone cares.

    Not particularly (just speaking for myself). I disagree with your premise (as seen above) and thus with some of your conclusions, but not interested in having a pointless philosophical argument.

    As you already noted, we seem to agree that the subject of this topic is mostly bullshit.

    I'm actually really interested in how someone can believe that people are entitled to rights, but have it come from something other than "There's something fundamental about humans that makes it wrong to mistreat them."

    But I'm also not entitled to a debating partner, so if nobody else wants to take me up on that, I'm not going to make the topic.


  • BINNED

    @Arantor said in Google AI:

    @GuyWhoKilledBear said in Google AI:

    everyone hates when I bring up

    I wonder why.

    I don't know why you made this topic at all if you didn't think someone was going to voice the opinion that it's fundamentally impossible for an AI to deserve rights.



  • @GuyWhoKilledBear I guess the sarcasm in my reply to topspin wasn't clear.

    I didn't make this topic. I posted it in the Other News topic with a single observation, until it got derailed.

    I'm pretty sure it could have stayed there just fine.


  • Considered Harmful

    @GuyWhoKilledBear said in Google AI:

    how someone can believe that people are entitled to rights, but have it come from something other than "There's something fundamental about humans that makes it wrong to mistreat them."

    Because of the selfishness of the genome. We're humans - that's us. That's the entire strength of the argument, but it rests entirely on an accidental.


  • Discourse touched me in a no-no place

    @HardwareGeek said in Google AI:

    Sapience, OTOH, is the capacity for rational thought. It's often doubtful whether even humans possess sapience, although I think we can agree that neither non-human animals nor AI possesses it.

    The problem is figuring out how to test for it. There are definitely instances of, say, tool use by non-human animals, and that indicates at least some capability to plan for the future. Do animals have the ability to plan for further-off events? Hard to say definitively, particularly as a lot of animals see no reason to cooperate with lab testing procedures. (If rationality exists, it'd be reasonable to expect the level of ability to vary between individuals.)

    I concur with you that some humans appear to have no capacity for rationality. Whether that is because they never had it or because it has atrophied from lack of use... 🤷‍♂️



  • @dkf do they use tools because they are there and they realise it is better than not, or do they make the tools first? The latter would definitely be a stronger indication of a capacity for planning.


  • Discourse touched me in a no-no place

    @Arantor said in Google AI:

    @dkf do they use tools because they are there and they realise it is better than not, or do they make the tools first? The latter would definitely be a stronger indication of a capacity for planning.

    Manufacturing the tool from multiple parts? I think that counts as planning.



  • @dkf suddenly I am more afraid of the murder of crows than a murder of robit overlords.


  • Discourse touched me in a no-no place

    @Arantor said in Google AI:

    robit overlords

    They don't look particularly scary

    [image]



  • @loopback0 said in Google AI:

    @Arantor said in Google AI:

    robit overlords

    They don't look particularly scary

    [image]

    That is just the thing: you never know what is under the friendly façade!


  • Fake News

    @loopback0 said in Google AI:

    @Arantor said in Google AI:

    robit overlords

    They don't look particularly scary

    [image]

    Well, this is a single one. Who knows what an entire pack of them can do.



  • @Arantor said in Google AI:

    @dkf suddenly I am more afraid of the murder of crows than a murder of robit overlords.

    Corvids in general are cool critters, and among the smartest on the planet. They also seem to be able to carry information across generations.


  • Fake News

    So who won the prize: the text author, the text-to-image AI, both, or none of the above?


    Filed under: INB4 answering 'yes'


  • BINNED

    @JBert said in Google AI:

    Just a simple question: but what is art, anyway?

    Certainly not the stuff they put in MoMA. 🚎

