AI's Future, GPT-5, Synthetic Data, Ilya/Helen Drama, Humanoid Robots- Sam Altman Interview

Published 2024-06-04
Sam Altman was interviewed about a wide range of topics, including GPT-5, languages, UBI, synthetic data, and seeing inside the "black box."

Be sure to check out Pinecone for all your Vector DB needs: www.pinecone.io/

Learn more about ASM: lnk.bio/ASMOfficial

Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com/

Need AI Consulting? 📈
forwardfuture.ai/

My Links 🔗
👉🏻 Subscribe:    / @matthew_berman  
👉🏻 Twitter: twitter.com/matthewberman
👉🏻 Discord: discord.gg/xxysSXBxFW
👉🏻 Patreon: patreon.com/MatthewBerman
👉🏻 Instagram: www.instagram.com/matthewberman_ai
👉🏻 Threads: www.threads.net/@matthewberman_ai
👉🏻 LinkedIn: www.linkedin.com/company/forward-future-ai

Media/Sponsorship Inquiries ✅
bit.ly/44TC45V

Links:
   • AI for Good Global Summit Day 2  

All Comments (21)
  • @4.0.4
    Safety, for the government: "More censorship". Safety, for corporations: "More profit". Safety, for me: "open, locally runnable, user-aligned".
  • @vaisakhkm783
    Questions:
    0:42 What is the first big good thing we'll see happen, and what is the first big bad thing we'll see happen?
    3:48 You've just announced that you have begun training the next iteration, whether it's GPT-5 or whatever you're going to call it. One of the big concerns in this room in Geneva is that GPT-4 and the other large language models are much better at English, Spanish, and French than they are at, say, Swahili. How important is that to you? How important is language equity as you train the next big iteration of your product?
    5:14 As you train it, what level of improvement do you think we're likely to see? Are we likely to see a linear improvement, an asymptotic improvement, or some kind of exponential, very surprising improvement?
    6:41 What do you think those hugely better areas are going to be, and what do you think the not-so-better areas are going to be?
    7:49 You're going to have a model that will be trained in large part on synthetic data. How worried are you that training a large language model on data created by large language models will lead to corruption of the system?
    10:56 Have you created massive amounts of synthetic data to train your model on? Have you self-generated data for training?
    12:20 Patrick Collison, the founder of Stripe, asked this great question: "Is there anything that could change about AI that would make you much less concerned that AI will have dramatic bad effects in the world?" And you said, "Well, if we could understand what exactly is happening behind the scenes, if we could understand what is happening with one neuron..." Is that the right way to think about it, and have you solved this problem?
    16:40 Is there anything close to where you would say, "Everybody can go home, we've got this figured out"?
    17:34 If you don't understand what's happening, isn't that an argument not to keep releasing new, more powerful models?
    19:03 What is the most progress we've made, or have there been any real breakthroughs, on this question of interpretability?
    19:36 Tristan Harris made a proposal this morning as we were talking about safety: for every million dollars a large language model company puts into making its models more powerful, it should also put a million dollars into safety, one for one. Do you think that's a good idea?
    22:45 One of the reasons this is on my mind, of course, is that the co-founder most associated with safety, Ilya, just left; Jan, one of the lead workers on safety, left to go work at Anthropic and tweeted that the company is not prioritizing safety. Convince everybody here that they're not. You're flying this thing; we're all on your aircraft right now, Sam. Convince us that the wing is not going to fall off after these folks have left.
    24:55 I understand why AGI has been such a focus. It has been the thing that everybody in AI wants to build; it has been part of science fiction. Building a machine that thinks like a human means building a machine like the most capable creation we have on Earth. But I would be very concerned, because a lot of the problems with AI, a lot of the bad things with AI, seem to come from its ability to impersonate a human. Why do you keep making machines that seem more like humans instead of saying, "You know what, we understand the risks, we're going to change directions here"?
    29:12 What about doing more in that direction? What about, for example, saying that ChatGPT can never use "I"? This gets to the point of human compatibility.
    29:46 We're about to enter this period of elections, and everybody here is concerned about deepfakes and misinformation. How do you verify what is real? What can you do at the core design level so that's less of a problem?
    31:49 You demonstrated these voices; she then put out a statement, which got a lot of attention (everybody here probably saw it), saying: they asked me if they could use my voice, I said no; they came back two days before the product was released, I said no again; they released it anyway. OpenAI then put out a statement saying that's not quite right: we had a whole bunch of actors come in and audition, we selected five voices, and after that we asked her whether she'd be part of it; she would have been the sixth voice. What I don't get is that one of the five voices sounds just like Scarlett Johansson, so it sounds almost like you were asking for there to be six voices, two of which sound just like her. Can you explain that to me?
    33:45 I asked GPT-4o how, when you're interviewing someone on a video screen, to prove that they're real, and it suggested asking them about something that happened in the last couple of hours and seeing if they can answer. So: what just happened to Magnus Carlsen?
    34:32 It's in your interest for there to be one or a few large language models, but where do you see the world going? Do you think that three years from now there will be many base large language models, or very few? And importantly, will there be a separate large language model used in China, one used differently in Nigeria, one used differently in India? Where are we going?
    36:21 What I'm most concerned about as we head to the next iteration of AI is that the web becomes almost incomprehensible: there's so much content being put up, because it's so easy to create web pages, so easy to create stories, so easy to do everything, that the web almost becomes impossible to navigate and get through. Do you worry about this, and if you think it's a real possibility, what can be done to make it less likely?
    39:10 I've kept this list of questions where very smart people in AI disagree, and to me one of the most interesting is whether it will make income inequality worse or better. Has this changed your view of what will happen with income inequality in the world, both within and across countries?
    42:44 That reconfiguration will be led by the large language model companies? "No, no, no, just the way the whole economy works." And that's no big deal, just the way the whole economy works?
    43:10 Let's talk about governance of OpenAI. One of my favorite quotes (I can't read the whole thing because of UN prohibitions) is from an interview you gave to The New Yorker eight years ago, when I worked there, and you were talking about governance of OpenAI. You said, "We're planning a way to allow wide swaths of the world to elect representatives to a new governance board of the company. Because if I weren't in on this, I'd be like, why do these effers get to decide what happens to me?" So tell me about that quote and your thoughts on it now.
    44:10 Let me ask you about the critique of governance now. Two of your former board members, Tasha McCauley and Helen Toner, just put out an op-ed in The Economist, and they said, after "our disappointing experiences with OpenAI" (these are the board members who voted to fire you before you came back and were reinstated as CEO), that you can't trust self-governance at an AI company. Then earlier this week Toner gave an interview on the TED AI podcast, which was quite tough, and she said that the oversight had been entirely dysfunctional, and in fact that she and the board had learned about the release of ChatGPT from Twitter. Is that accurate?
  • @anta-zj3bw
    Thank you for the pause to explain Asymptote
  • @Odysseum04
    I mean... GPT4 IS intelligent. But I guess it could be MUCH more intelligent given the amount of data it has been in contact with.
  • @othermod
    This (regulatory capture) push for equity and safety on LLMs needs to end. Just give us a model that responds to prompts, and let us choose what to do with the information.
  • @erb34
    Sam Altman, the guy who owns the startup fund, scares me a lot.
  • @ginebro1930
    We need a model to remove voice fry from Altman, I mean right now.
  • @darth-sidious
    I would like to see someone collect all his interviews, run them through a model trained in body language and lie detection, and show, in the form of simple statistics, how much of what he says is true and how much is pure talk.
  • @Masterfuron
    This will be the video they show in the future when they have to explain "When it all went bad."
  • @user-hu9eg7ck9b
    Regarding asymptotes, I think it's the vertical asymptote shown in the top left picture that they mean. Basically, it increases so quickly that it ends up being almost vertical.
  • @zerorusher
    Wow! Congrats on the ASM supporting this video!
  • Sam Altman out there painting every wall he can see with the 'lovely brush'. Everything is lovely, all is fine... a little dab of lovely here, a spattering of 'nicey nice' there... ohh, it's all lovely. [rolls eyes]
  • @TheStuntman81
    Interview summary: Host asking direct questions - Sam beating around the bush.
  • @quaterman1270
    When I hear Sam Altman talk about AI and security, it's like listening to an alcoholic saying that he doesn't drink much.
  • @adangerzz
    Yes please to Claude GGB video! Sam does political speak well. There was a politician who answered a direct yes or no question the other day with a simple, "No." I thought I might have crossed over to a new paradigm for a moment.
  • @MDougiamas
    On needing more data - once the models are in robots worldwide and experiencing the world that way (as we do), there will be an infinite amount of new REAL data available for training