Joe Rogan: "I Wasn't Afraid of AI Until I Learned This"

5,390,648 views
Published 2023-12-19
FREE Alpha Brain Trial ► onnit.sjv.io/LPvLgM
CODE: jredaily for 10% off other purchases ► onnit.sjv.io/jWPr2e
Sub for daily JRE clips! ► youtube.com/@JREDailyClips
Onnit Affiliated.

Tristan Harris and Aza Raskin are the co-founders of the Center for Humane Technology and the hosts of its podcast, "Your Undivided Attention." Watch the Center's new film "The A.I. Dilemma" on YouTube.

Clip taken from JRE #2076 w/ Aza Raskin & Tristan Harris

Host: Joe Rogan
Guests: Aza Raskin & Tristan Harris
Producer: Jamie Vernon

#jre #joerogan #ai #chatgpt

All Comments (21)
  • @danielswann3319
    Fascinating that at the same time AI is learning how to think, our children are being dumbed down.
  • @jayok7609
    The '80s and '90s. The good ole days.
  • @KermitOfWar
    Commander Shepard did warn us about the Reapers...
  • One second they say they “weren’t aware” that these breakthroughs even happened. Nobody knew. Then this guy says the “Board” was being “honest” about what they knew in terms of what big breakthroughs happened. I just love dumb smart people so much! Nothing to worry about at all.
  • @user-hc2yd4vw7t
    One of my favorite things about listening to Joe Rogan: he and his guests SO RARELY interrupt one another. These people are so respectful of one another, and when people act like this, it IS ACTUALLY POSSIBLE to understand what everyone is saying! Thank you, Joe, for another fascinating show.
  • @rogerjordan8998
    Here is a thought experiment. In Isaac Asimov's Foundation Trilogy, there was a scientist (Hari Seldon) who founded the science of psychohistory. This allowed for the prediction of the future based on predicting society's reaction to narratives, policies, events, etc. This sounds a lot like "predictive" AI. AI doesn't need to exterminate or enslave us; it only needs to psychologically convince us that it is all good.
  • These guys would be a great rap duo. Dude hops in, comments and hops out strategically.
  • @Dennis-nc3vw
    What drives me up the wall is that no one talks about the threat AI poses as a means of censorship. Censorship is 1000X more dangerous if people don't know its happening, and AI could detect "problematic" opinions and censor them instantaneously. It would be an invisible form of censorship.
  • @mitchevans4597
    I don't feel comfortable with something we created when we no longer understand how it works.
  • I began studying Turkish. A year and a half into my studies, I started doing warm-up conversations with AI in Turkish. I would have my husband (a native speaker of Turkish from Turkey) take a look, and he was flabbergasted. He said the AI's output was the most natural feedback he'd ever seen from a machine. Unlike Google Translate, which uses "default" or "standard" language that doesn't always pair up well, the AI used more idiomatic phrases, slang, and culturally specific uses of the language that would be more common in conversation in Turkey. My husband said he was kind of creeped out, and he got quiet after that.
  • There are so many things wrong about this, it's crazy. First of all, a transformer is simply an architecture of a neural network. In the beginning he states that transformers are new AI models that learn more if you give them more data; that is literally the case with EVERY AI. That also comes with many advantages AND disadvantages. The other thing he said was that one of the neurons in the transformer model OpenAI was testing was able to be the world's best at sentiment analysis; this is simply not true, and that is not how AI works. All these neurons simply hold values and work together to solve an objective. So, for example, if you were asked to tell whether a message is happy or sad (0 or 1), the model's neuron at the end says either 0 or 1 based on the activation function. So, to summarize, a single neuron can't be crazy good at sentiment analysis. Also, he says that AI is something we don't completely understand yet: not true AT ALL. Honestly, it's really all math. If you wanted, you could literally take a notebook and do the exact same process an AI follows. We literally made AIs and we know exactly what happens. It's written out to be this "Black Box," which is simply not true. The only instance when this is true is during its training process, which doesn't really matter much anyway. I'm not tryna make these guys sound stupid or anything; they're prolly wayyy smarter than I am. It's just that I don't want people to be afraid of AI for reasons they shouldn't be right now. This is nothing compared to some of the things AI can already do lol.
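The 0-or-1 final-neuron setup this comment describes can be sketched in a few lines. This is a toy illustration only, not the actual model discussed in the clip; the hidden activations, weights, and bias below are all made up for the example.

```python
import math

def sigmoid(x):
    # Standard logistic activation, squashing any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def output_neuron(hidden, weights, bias):
    # One output neuron: weighted sum of upstream activations plus a bias,
    # passed through sigmoid, then thresholded to 0 (sad) or 1 (happy).
    z = sum(h * w for h, w in zip(hidden, weights)) + bias
    return 1 if sigmoid(z) >= 0.5 else 0

hidden = [0.9, -0.2, 0.4]    # hypothetical activations from earlier layers
weights = [1.5, 0.7, -0.3]   # hypothetical learned weights
print(output_neuron(hidden, weights, bias=0.1))  # prints 1
```

The point of the sketch is the commenter's framing: the final neuron only reports a thresholded combination of everything upstream, so the "sentiment" lives in the whole network, not in that one unit by itself.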
  • @JohnnyAquaholic
    Every time I listen to or read something about AI, I immediately think of Dr. Malcolm in Jurassic Park. "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should."
  • @navneetnair
    I asked GPT and Gemini to read a tarot spread, and the way it read it was amazing, weaving in nuances relating the questions to the cards drawn. I never thought it would have that data, but it's pretty incredible how it just performs. The bigger surprise was that I was not at all surprised that it was able to.
  • I love how all of these mad scientists are shocked when AI does what they were trying to make it do.
  • @Teacherofall
    For me, I just don't want to have to prove I am not a robot to a robot while at the same time being unable to speak with an actual human.
  • If I am not mistaken, the core contains gradient descent mechanisms. The critique being that intelligence is not a huge stack of slider controls or any fixed geometry. The settings and changes themselves change, and the changes of the change change. This is qualitative change, and it is non-commutative, meaning non-time-reversible. We have yet to crack meta-bootstrapping, as far as I know. The method of always adding new variables to make up for model shortcomings is the wrong path; you must also distill and prune down a model's variable space to the least-complex but still prudently comprehensive yet elegant model, and that's a deeper art to build than any model dedicated to particular tasks. The process of generalization itself must be generalized, and that resulting construct generalized as well, as a series. It's got to be super-meta, and beyond. Recursion is not the same thing, because each level of abstraction has its own place in the irreducible bootstrapping hierarchy.
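Gradient descent, which this comment calls the "core" mechanism, is simple to show on a one-variable function. A minimal sketch, with a made-up objective f(x) = (x - 3)^2 chosen purely for illustration:

```python
# Minimal gradient descent on f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
# Repeatedly step opposite the gradient; the iterate slides toward the minimum.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # move downhill by a fraction (lr) of the slope
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # prints 3.0, the minimizer of (x - 3)^2
```

Training a neural network applies this same update to millions of weights at once, with the gradient computed by backpropagation; the commenter's "slider controls" image maps onto those weights, each nudged a little per step.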
  • @Wall_E.
    It's crazy that AI was somehow capable of learning and doing more than it was meant or programmed to do, basically going beyond the desired results. And it's even crazier that there's not much space in this particular podcast for Joe to slip in that Bear card.