No, it's not Sentient - Computerphile

Shared on 2022-06-17

Comments (21)
  • @arik_dev
    When we (humans) see a cat swiping at its own reflection in the mirror we find it amusing. The cat is failing to recognize that the "other" cat's behavior matches its own, so it doesn't deduce that the image it's seeing is actually its own actions reflected back at it. When humans react to a model like LaMDA as if it is a distinct and intelligent entity, we're being fooled in a way that is analogous to the cat. The model is reflecting our own linguistic patterns back at us, and we react to it as if it's meaningful.
  • @tielessin
    Before this video I have never thought about the loneliness of my python functions. There are probably soo many functions that I have never called, but I will take care of them from now on.
  • The whistleblower in question here was actually a lot more focused on Google's complete lack of ethical oversight over the decisions it is making as it moves forward with this research. He was also concerned about Google's unwillingness to address A.I. imperialism in newly developing countries. All of the coverage I've seen has distracted from the guy's point: he was just trying to force Google to address the ethics. He even admitted that it's not sentient, and that we wouldn't even know how to define sentience if it were.
  • If it looks like a duck, acts like a duck, and quacks like a duck, it might just be a convincing robotic simulation of a duck.
  • @matsim0
    The most frustrating thing about reading the "interview" was that the obvious follow-up questions were never asked - like who are these friends that you miss hanging out with? What do you do when "hanging out"? But then, those questions would have immediately destroyed the impression of sentience, so of course they weren't asked.
  • Harold Garfinkel proved that people getting randomized yes/no answers could make sense of them as thoughtful advice. And that's back when computers were the size of rooms.
  • @3DPDK
    I remember the arguments about eventual sentience in the 1980s around a program called "Eliza", basically a word calculator originally written in the 1960s at MIT but offered for use on home computers in the 1980s. Over time, as Eliza built data files of words and their usage weights, the sentences it constructed began to take on seemingly human characteristics. The program itself was extremely simple: it calculated which verbs and adjectives were best used with specific nouns, and it chose those nouns based on the ones you used in the questions you asked it. It mostly framed its answers to your questions as questions it would ask you. We humans recognize intelligible speech as the product of conscious thought, and curiosity (asking questions) as a sign of intelligence, but at least in the case of Eliza, it's much like recognizing faces in tree bark or cloud shapes - we see them, but they are only there because our brains are wired to look for them.
  • From years of reading science fiction, I was under the impression that "sentience" means "possessing a reflective consciousness", but the dictionary says that it simply means "the ability to sense and feel".
  • @Zeekar
    Whoever did the animations: how did you react to being asked to make a function call look lonely? 🥺
  • I had the same opinion about the “conversation”. The AI was responding enthusiastically, telling the engineer exactly what he wanted to hear; and since the engineer started from the presupposition that the AI is sentient, confirmation bias took hold. As I told some others, I’m pretty sure that the AI would just as happily and enthusiastically discuss how it is not sentient.
  • "it just says what IT THINKS you want to hear" "Exactly"
  • @Zizumia
    I love the study of the empathy people have for things that are not sentient because they form a personal connection with them. This AI blurs the line quite well since its programming is so advanced, but people create bonds with dolls or toys, people feel bad when an engineer from Boston Dynamics kicks one of their walking robots, some police feel bad sending their bomb disposal robots into danger, etc. Fascinating.
  • You are probably all too young to know this, but back in the early 1980s there was a program called "ELIZA" that accepted your input (from a terminal) and gave back an "answer". It was said to be a "Rogerian nondirective psychotherapist", but all it did was cleverly extract some keywords from your input and give them back as questions, such as: "I am lonely" would produce "Why do you say you are lonely?" It made quite a splash, and people really thought it was very clever and helpful. (A minimal sketch of this keyword-reflection trick appears after the comments below.)
  • I remember reading some stories written by Asimov where robots had sentience yet were unable to speak because that was too complex. It's interesting that he and many other futurists had it exactly backwards.
  • The problem with the Turing test is not that the bot is passing it; it's that some humans are failing it, and their number is growing rapidly.
  • Even the best AI can't make intelligent sentences if it is only trained by youtube comments.
  • Very glad you made this video. The notion of a single Google employee claiming that a language model had become sentient just because he "felt" like it was sentient was something I dismissed offhand, but I really wanted someone with more knowledge about AI and language models to go in depth on what the difference is between a language model like this and what we would more rigorously define as sentience.
  • I feel like watching people who think AI is sentient must be what it felt like in the 1800s when people first heard a radio and thought the box was alive. Or believed that a photograph took your soul.
  • Realizing a chatbot is not real sentience is like realizing a magic trick is just an illusion.
  • On the flip side: now prove that a fellow human you are talking to is "sentient". Humans also learn language, responses and acceptable behaviours in their interactions as they develop, plus they can fabricate fiction or lies when cornered in a conversation, or simply to please their interlocutors.
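
Several commenters above describe ELIZA's keyword-reflection trick. To show how little machinery that trick needs, here is a minimal illustrative sketch in Python; the rule patterns, the pronoun table, and the helper names (reflect, respond) are invented for this example and are far cruder than the original 1966 MIT program's script.

import re

# Pronoun swaps so "I am lonely" reflects back as "you are lonely".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# (pattern, response template) pairs, checked in order; the last is a catch-all.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "What makes you feel {0}?"),
    (r"i (.*) you", "Why do you think you {0} me?"),
    (r"(.*)", "Please tell me more."),
]

def reflect(fragment):
    # Swap first- and second-person words in the captured fragment.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input):
    # Return the first matching rule's template, with the captured keywords
    # reflected back at the speaker as a question.
    text = user_input.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please tell me more."

if __name__ == "__main__":
    print(respond("I am lonely"))    # Why do you say you are lonely?
    print(respond("I feel happy"))   # What makes you feel happy?

There is no model of the world anywhere in this loop, only pattern matching and pronoun substitution, yet the output reads like a question from someone who is listening; that gap between mechanism and impression is exactly what the comments above are pointing at.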