Rob Ager on the SENTIENT A.I. delusion (a game dev perspective) Collative Learning

Shared on 2024-07-02
Does increasing complexity make A.I. programs more conscious? By Rob Ager.

Comments (18)
  • @PortlyPete
I've messed around with AI chatbots and they just feel like sophisticated search engines. They ignore your inputs all the time because they aren't actually listening to you; they're just reading off scripts.
  • @jomesias
Great arguments, friend! Great to hear from someone who knows his stuff!! 🎉
• Great video Mr. Ager. I've had many similar experiences being surprised by my own code while making games myself. You're correct that modern chatbots, aka Large Language Models, are ultimately just a bunch of variables like any other program, but that doesn't quite tell the whole story, I think. You mentioned that you've had trouble finding a good explanation of how these things differ from regular programs, so I thought I would take a stab at it.

    I suppose the difference between LLMs and traditional programming is that, with LLMs, the variables are determined by the model itself as it is "trained" on relevant data, rather than being constructed by a human programmer directly. The end result is a program with millions or even billions of variables, referred to as parameters, that all have interconnected relations and different ways of affecting each other. So with that in mind, I think the reason people consider it a separate domain from regular programming is that, instead of us building the machine, it's more like we built a machine that builds a machine, and we don't really have the ability to understand how that machine-built machine actually works, because it's just too complicated. It's a kind of black box: we feed it data to make it grow, then give it inputs and receive outputs (a rough sketch of the idea follows at the end of this comment).

    I agree with you that it isn't even slightly conscious, though. It's ultimately still just a very, very complex structure of our own making.
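    To make the "machine that builds a machine" point concrete, here is a minimal sketch of my own (plain Python, a toy two-parameter "model", nothing remotely like a real LLM): the same conversion rule, first written directly by a programmer, then "learned" from example data.

    ```python
    # Toy illustration: a programmer-written rule vs. a "trained" one.

    def scripted_convert(celsius):
        # Traditional programming: a human chose the 9/5 and the 32.
        return celsius * 9 / 5 + 32

    def train(examples, steps=10_000, lr=0.005):
        # "Training": parameters w and b start meaningless and are nudged,
        # step by step, in whatever direction shrinks the prediction error.
        w, b = 0.0, 0.0
        n = len(examples)
        for _ in range(steps):
            grad_w = sum((w * c + b - f) * c for c, f in examples) / n
            grad_b = sum((w * c + b - f) for c, f in examples) / n
            w -= lr * grad_w
            b -= lr * grad_b
        return w, b

    examples = [(c, scripted_convert(c)) for c in range(-10, 11)]
    w, b = train(examples)
    print(f"learned parameters: w={w:.3f}, b={b:.3f}")  # heads toward 1.8 and 32
    ```

    No programmer ever typed the 1.8 or the 32 into the trained version; the numbers emerged from the data. Scale that from two parameters to billions and you get the black box described above.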
  • @Ytnzy250
    Looks great, when's the sequel coming out?
  • @Savoy1984
Just had a weird flashback. I remember a shopping centre opening in the '80s where they had a robot talking to people, and I remember being close to the guy talking to the people through a mic. AI is a bit like that.
• Your tank and its grenade lobbing basically simulated that human behaviour: "...yeah, I meant to do that". I can't remember where or from which author I got it, but 'simple systems, complex behaviour' (and vice versa) always comes to mind when AI is the topic.
• Well, my friend, I suspect you may have been shining after all. I had forgotten you had begun to delve into actually creating games, and I'm impressed, not surprised, at how quickly you have learned. I'll be honest, the first half was a bit over my head; I just watched in awe, disappointed that you hadn't been shining after all. But in the second half, you appear to be talking about the things I was thinking. Things that I may have omitted from my opening lecture, and that now seem disreputable, having heard you speak of them. I could easily just be copying what you say to gain your trust. And I think I should probably leave you alone for at least a month.
• I would like to introduce another similar question: are humans sentient? Now, the answer may seem like an obvious yes, but aren't we all just taking input in the form of our senses and generating output in the form of actions, using some analogue version of a computer we call a brain? The only differences I see between us and LLMs (and other similar AI things) are that our CPUs were designed by more natural processes and are not digital. I'm not saying that they are sentient at the moment, but if you think that they can't be sentient, what does that say about the sentience of ourselves? Since (with our current understanding of the brain) we have no way to determine if something is sentient through external means, we should analyze why we care whether something is sentient, and use that instead. I think that if we can easily predict what something will do, like an enemy in a game, there's no need to bring in the concept of sentience. A less predictable thing, like a human or a dog, I would say is sentient for practical purposes.
  • Why am I only now finding out that you have a channel dedicated to games? You have so many channels!
• So, as far as I understand it, when it comes to neural network "AI", all the neural network is doing is generating a program for the computer to run. It's fundamentally no different from any other program that could be written by a human, just very big and complex (and often bloated). But it's still just common programmatic arguments. The neural network itself is just an inverse statistical algorithm: it compares prior output to training data and makes random iterations until its output begins to match that training data (rough sketch below). In principle it mimics evolution, but at the end of the day the product is just code, because it has to run on a digital device. It still all runs on transistors, taking input, doing math, giving output like any other algorithm. The big tech innovation is just a new technique for brute-force pattern recognition. This method is also very inefficient and inelegant; most of the perceived complexity is actually just junk code left over from the thousands/millions of training cycles.
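    A minimal sketch of the "random iterations compared against training data" loop described above (real systems mostly use gradient descent rather than pure random mutation, so treat this as the evolutionary caricature, not the industrial method; all names are illustrative):

    ```python
    import random

    # Toy "training data": input/output pairs generated by the rule y = 3x + 7.
    training_data = [(x, 3 * x + 7) for x in range(-5, 6)]

    def loss(w, b):
        # How badly the current parameters reproduce the training data.
        return sum((w * x + b - y) ** 2 for x, y in training_data)

    w, b = random.random(), random.random()
    best = loss(w, b)
    for _ in range(100_000):
        # Random mutation; keep it only if the output matches the data better.
        cand_w = w + random.gauss(0, 0.1)
        cand_b = b + random.gauss(0, 0.1)
        cand_loss = loss(cand_w, cand_b)
        if cand_loss < best:
            w, b, best = cand_w, cand_b, cand_loss

    print(f"w={w:.3f}, b={b:.3f}")  # drifts toward 3 and 7
    ```

    Whatever the loop discovers, the end product is exactly what the comment says: a pile of numbers fed through ordinary arithmetic on transistors.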
  • @ryszakowy
I love to cheat in games to see how much the AI can pretend it doesn't see me.
• I don't think that the focus should be on how neural networks are "fundamentally different". They aren't. Neural networks have been around as an idea and a practical, useful programming technique since long before the recent LLM stuff, so anyone saying AI is fundamentally new is just not correct. The thing I really heavily disagree on is the implicit idea that humans alone (or just biological brains) possess experience, and that everything else, from a philosophical perspective, SHOULD be considered "dark", or incapable of sensation. I could present you with a similar (though its actual answer is inherently unprovable) alternative to your question of "how is an LLM any different from a basic game AI?": "How is the matter that makes up your brain capable of experience, while the matter in a computer chip is not?" And I think that there is similarly no answer. As far as I am aware we do not have a root cause of consciousness figured out, but this side of the argument always seems to start from the base assumption that no system made of silicon and copper COULD ever have experience, simply because it's the default assumption, since we cannot prove anything outside of our own mind.
  • @Savoy1984
What’s funny is that I love robots, especially in films, but I also love your videos about AI; it’s super overrated.
• When you design the game, please nerf all the weapons to fuck and introduce rainbow 🌈 politics, cos that goes down well with players 😉😂👌
• This is nothing but a strawman. Your failure to understand how LLMs work does not make them the same as scripted NPCs. Your understanding of scripted "AI" lends you an illusion of authority. It's an illusion, as demonstrated by your comparing LLMs to scripted systems.