AI's Future, GPT-5, Synthetic Data, Ilya/Helen Drama, Humanoid Robots - Sam Altman Interview

  98,244 views

Matthew Berman

1 day ago

Sam Altman was interviewed about a wide range of topics, including GPT-5, languages, UBI, synthetic data, and seeing inside the "black box."
Be sure to check out Pinecone for all your Vector DB needs: www.pinecone.io/
Learn more about ASM: lnk.bio/ASMOff...
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewber...
Need AI Consulting? 📈
forwardfuture.ai/
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
👉🏻 Instagram: / matthewberman_ai
👉🏻 Threads: www.threads.ne...
👉🏻 LinkedIn: / forward-future-ai
Media/Sponsorship Inquiries ✅
bit.ly/44TC45V
Links:
• AI for Good Global Sum...

Comments: 561
@vaisakhkm783 3 months ago
Questions:
0:42 What is the first big good thing we'll see happen, and what is the first big bad thing we'll see happen?
3:48 You've just announced that you have begun training the next iteration, whether it's GPT-5 or whatever you're going to call it. One of the big concerns in this room in Geneva is that GPT-4 and the other large language models are much better at English, Spanish, and French than they are at, say, Swahili. How important is that to you? How important is language equity as you train the next big iteration of your product?
5:14 As you train it, what level of improvement do you think we're likely to see? Are we likely to see a kind of linear improvement, an asymptotic improvement, or some kind of exponential, very surprising improvement?
6:41 What do you think those hugely better areas are going to be, and what do you think the not-so-better areas are going to be?
7:49 You're going to have a model that will be trained in large part on synthetic data. How worried are you that training a large language model on data created by large language models will lead to corruption of the system?
10:56 Have you created massive amounts of synthetic data to train your model on? Have you self-generated data for training?
12:20 Patrick Collison, the founder of Stripe, asked this great question: "Is there anything that could change about AI that would make you much less concerned that AI will have dramatic bad effects in the world?" And you said, "Well, if we could understand what exactly is happening behind the scenes, if we could understand what is happening with one neuron..." Is that the right way to think about it, and have you solved this problem?
16:40 Is there anything close to where I would say, yeah, you know, everybody can go home, we've got this figured out?
17:34 You don't understand what's happening; isn't that an argument to not keep releasing new, more powerful models?
19:03 What is the most progress we've made, or have there been any real breakthroughs, in understanding this question of interpretability?
19:36 Tristan Harris made a suggestion this morning as we were talking about safety: for every million dollars a large language model company puts into making its models more powerful, it should also put a million dollars into safety, one for one. Do you think that's a good idea?
22:45 One of the reasons why this is on my mind, of course, is that the co-founder most associated with safety, Ilya, just left; Jan, one of the lead workers on safety, left, went to work at Anthropic, and tweeted that the company's not prioritizing safety. [Host's aside: exactly, that's what I said; he is literally calling out what just happened, and Sam was dancing around it, so let's see if he answers it more directly now.] Convince everybody here that they're not. You're flying this plane, we're all on your plane right now, Sam; convince us that the wing's not going to fall off after these folks have left.
24:55 I understand why AGI has been such a focus. It has been the thing that everybody in AI wants to build; it has been part of science fiction. Building a machine that thinks like a human means we're building a machine like the most capable creation we have on Earth. But I would be very concerned, because a lot of the problems with AI, a lot of the bad things with AI, seem to come from its ability to impersonate a human. Why do you keep making machines that seem more like humans, instead of saying, you know what, we understand the risks, we're going to change direction here?
29:12 What about doing more in that direction? What about, for example, saying that ChatGPT can never use "I"? This gets to the point of human compatibility.
29:46 We're about to enter this period of elections, and everybody here is concerned about deepfakes and misinformation. How do you verify what is real? What can you do at the core design level so that's less of a problem?
31:49 You demonstrate these voices; she then puts out a statement, which gets a lot of attention (everybody here probably saw it), saying: they asked me if they could use my voice, I said no; they came back two days before the product was released, I said no again; they released it anyway. OpenAI then put out a statement saying that's not quite right: we had a whole bunch of actors come in and audition, we selected five voices, and after that we asked her whether she'd be part of it; she would have been the sixth voice. What I don't get about that is that one of the five voices sounds just like Scarlett Johansson, so it sounds almost like you were asking for there to be six voices, two of which sound just like her, and I'm curious if you can explain that to me.
33:45 I asked GPT-4o how, when you're interviewing someone on a video screen, to prove that they're real, and it suggested asking them about something that has happened in the last couple of hours and seeing if they can answer it. So, what just happened to Magnus Carlsen?
34:32 It's in your interest for there to be one or few large language models, but where do you see the world going? Do you think that three years from now there will be many base large language models or very few? And importantly, will there be a separate large language model used in China, one used differently in Nigeria, one used differently in India? Where are we going?
36:21 What I'm most concerned about as we head to the next iteration of AI is that the web becomes almost incomprehensible: there's so much content being put up, because it's so easy to create web pages, so easy to create stories, so easy to do everything, that the web almost becomes impossible to navigate and get through. Do you worry about this, and if you think it's a real possibility, what can be done to make it less likely?
39:10 I've kept this list of questions on which very smart people in AI disagree, and to me one of the most interesting is whether it will make income inequality worse or better. Has this changed your view of what will happen with income inequality in the world, both within and across countries?
42:44 That reconfiguration will be led by the large language model companies? ["No, no, no, just the way the whole economy works."] And that's no big deal? ["No, no, no, just the way the whole economy works."]
43:10 Let's talk about governance of OpenAI. One of my favorite quotes (I can't read the whole thing because there are UN prohibitions) is from an interview you gave to the New Yorker eight years ago, when I worked there, and you were talking about governance of OpenAI. You said, "We're planning a way to allow wide swaths of the world to elect representatives to a new governance board of the company. Because if I weren't in on this, I'd be like, why do these effers get to decide what happens to me?" So tell me about that quote and your thoughts on it now.
44:10 Let me ask you about the critique of governance now. Two of your former board members, Tasha McCauley and Helen Toner (the board members who voted to fire you before you came back and were reinstated as CEO), just put out an op-ed in The Economist, and they said that after their disappointing experiences with OpenAI, you can't trust self-governance at an AI company. Then earlier this week, Toner gave an interview on the TED AI podcast which was quite tough, and she said that the oversight had been entirely dysfunctional, and in fact that she and the board had learned about the release of ChatGPT from Twitter. Is that accurate?
@GaryMillyz 3 months ago
Which program/prompt did you use for that
@Unique_Leak 3 months ago
Thank you!!
@Mohammad-nv1wv 3 months ago
@GaryMillyz Gemini Pro 1.5
@vaisakhkm783 3 months ago
@GaryMillyz I copied the transcript and tried many services. ChatGPT was down at the time, Claude gave me random questions, and free Gemini didn't have a large enough token limit. Then I used Gemini in Google AI Studio, gave it the prompt "you are a helpful assistant that responds with questions from the interview transcript the user gives", pasted in the transcript, and checked whether the results were correct. It worked perfectly.
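For anyone who wants to reproduce this without an LLM at all: once the questions are in the `M:SS question` shape shown above, a plain regex pass recovers the (timestamp, question) pairs. A minimal, hypothetical Python sketch (the function name and pattern are mine, not from any of the tools mentioned):

```python
import re

# Matches an M:SS or H:MM:SS timestamp, then captures everything up to
# the next timestamp (or the end of the text) as the question body.
TS_QUESTION = re.compile(
    r"(\d{1,2}:\d{2}(?::\d{2})?)\s+(.+?)(?=\d{1,2}:\d{2}(?::\d{2})?\s|\Z)",
    re.S,
)

def extract_timestamped_questions(text: str):
    """Return (timestamp, question) pairs from an 'M:SS question' list."""
    return [(ts, q.strip()) for ts, q in TS_QUESTION.findall(text)]

sample = ("0:42 What is the first big good thing we'll see happen? "
          "3:48 You've just announced that you have begun training the next iteration.")
for ts, question in extract_timestamped_questions(sample):
    print(ts, question)
```

The LLM step then only has to produce text in that shape, which is much easier to spot-check than free-form output.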
@4.0.4 3 months ago
Safety, for the government: "More censorship". Safety, for corporations: "More profit". Safety, for me: "open, locally runnable, user-aligned".
@Hunter_Bidens_Crackpipe_ 3 months ago
Censorship and anti male and anti white bias.
@imthinkingthoughts 3 months ago
I used to think the same way until I heard the analogy that it's essentially like everyone having nukes in their pockets. Now I'm questioning it as well; so much uncertainty. I'd certainly prefer to have a metaphorical nuke, if things do pop off, to leverage myself as a regular citizen, but only time will tell. I'm interested to hear your opinion on this: do you have any good counterarguments, or do you also see it as a potential downside?
@pskeough8233 3 months ago
Echo chambers. If you like those, then you'll want it to be user-aligned. You won't be working with a model that will teach you societally agreed-upon morality, but rather something built to benefit and affirm your perspective over all others. All those stupid ideas you have get affirmed constantly, until you realize your opinions are no longer your own but an amalgamation of your most extreme ones. Digital narcissism?
@magnuskarlsson8655 3 months ago
This is so incredibly naive. AI safety is real, not a conspiracy. Unfortunately, the whole world, not just Americans, will suffer the consequences of this peculiar, typically American paranoia over government control, oppression, or whatever the fear happens to be. To paraphrase Hinton, why not open source nuclear weapons too to make them safer? The good guys (us) will always have bigger ones than the bad guys (them), so it should all be OK.
@roachmastert 3 months ago
Safety: we don't die. A sentiment shared by all of the above.
@Artificialintelligenceo 3 months ago
Yes, please show us the Golden Gate from Claude.
@MudroZvon 3 months ago
golden rain
@anta-zj3bw 3 months ago
Thank you for the pause to explain Asymptote
@morena-jackson 3 months ago
I second this!!
@southcoastinventors6583 3 months ago
MattAI saves us from having to look things up
@weevie833 3 months ago
If you watched Watch Wes Work last week, you would have encountered this word as "The Asymptote of Despair."
@othermod 3 months ago
This (regulatory capture) push for equity and safety on LLMs needs to end. Just give us a model that responds to prompts, and let us choose what to do with the information.
@randommarkonfilms3979 3 months ago
I'm with Yann LeCun
@jasonpierce4518 3 months ago
They really don't care about safety and equality; they are all about controlling narratives. Open source will have to give us what we need. These corporate demons will never give you choice, any more than the Windows 11 GUI does: they tell you what to use and give you what they dictate. They have no utopia for mankind planned, at least not what you would call a utopia. I think we will get SOME of what we want, but it's not going to be easily gotten from these smiling miscreants.
@yourmomsboyfriend3337 3 months ago
I'd rather they just not release the model, then. I'm not interested in my mom getting a phone call from me, desperately asking her for money to pay off a gang member, only to realize AI cloned my voice and had a full conversation manipulating her against my will.
@supershower8764 3 months ago
I'd rather they just not release the model, then. I'm not interested in my mom getting a phone call from me, desperately asking her for money to pay off a gang member, only to realize AI cloned my voice and had a full conversation manipulating her against my will.
@randommarkonfilms3979 3 months ago
@supershower8764 It's all out of the bag. I can clone a voice on my local machine with fully open-source software from 2016. Making voice models is actually really easy; basically anyone can do it. It's unstoppable, for better or worse. I think it'll make people a little more careful. We need a system shock; COVID wasn't enough, the climate is being more or less ignored, etc.
@darth-sidious 3 months ago
I would like to see someone collect all his interviews, run them through a model trained in body language and lie detection, and show, in the form of simple statistics, how much of what he says is true and how much is pure talk.
@lubricustheslippery5028 3 months ago
For me, people who look like they think before they speak and aren't sure of everything sound more trustworthy. People who are 100% sure of everything and try to sound as if they know everything are obviously bullshitting.
@georgepalavi5060 3 months ago
He seems to shift his eyes and head wider from side to side on more difficult questions. Tough interview for him. Great commentary by the host.
@jichaelmorgan3796 3 months ago
There is a commercial body-language-analyzer AI out there that I came across a while back. Body language can be cross-analyzed with the language used as well.
@imthinkingthoughts 3 months ago
It would likely be thrown off depending on the individual's neurotype. Sam appears quite autistic, though you'd need to do extensive clinical work to be sure, and autistic individuals can have completely contradictory body language. It's a cool idea, and I reckon it will be a thing in the future: combine it with heart-rate info and past knowledge of the individual, and I reckon there will be fairly reliable algorithms and stats to decipher it.
@HermesQuadragustus 3 months ago
@imthinkingthoughts Eulerian Video Magnification can be used to detect heart rate from any video feed, and open-source implementations of it exist.
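If it helps make that concrete: the temporal core of the trick (band-pass a per-frame brightness trace and read off the dominant frequency) fits in a few lines. This is only a toy sketch of the pulse-estimation step, not a real EVM implementation; the function name and band limits are illustrative:

```python
import numpy as np

def estimate_bpm(trace, fps, lo_hz=0.7, hi_hz=3.0):
    """Estimate pulse (beats per minute) from a 1-D brightness trace,
    e.g. the mean green-channel value of a face region per video frame.
    Restrict the spectrum to a plausible pulse band (42-180 bpm by
    default) and pick the strongest frequency in it."""
    x = np.asarray(trace, dtype=float)
    x = x - x.mean()                              # drop the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)  # frequency of each FFT bin
    power = np.abs(np.fft.rfft(x))
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return freqs[band][np.argmax(power[band])] * 60.0

# Synthetic check: a noisy 1.2 Hz oscillation sampled at 30 fps.
rng = np.random.default_rng(0)
t = np.arange(30 * 10) / 30.0
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)
print(estimate_bpm(trace, fps=30))  # ~72 bpm
```

Real EVM additionally amplifies the filtered signal per spatial frequency band and adds it back into the video; the open-source implementations mentioned above handle that part.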
@ginebro1930 3 months ago
We need a model to remove the vocal fry from Altman. I mean right now.
@reezlaw 3 months ago
Fucking PREACH
@imthinkingthoughts 3 months ago
Absolutely
@M98747 3 months ago
Every gay guy I know has vocal fry. Heck, my best friend in college had vocal fry for years, and he later came out. It's a feature, not a bug. 😂
@kenfryer2090 3 months ago
He's gay, so I think he can't help it
@AIroboticOverlord 3 months ago
What is vocal fry?
@Taskorilla 3 months ago
Sam is the villain of tomorrow.
@theatheistpaladin 3 months ago
The villain of tomorrow, today.
@briangman3 3 months ago
I cannot stand the guy!
@kvidal88 2 months ago
He's got resting FTX face
@0x_flawless 1 month ago
Looks like the bald dude from Minions
@Odysseum04 3 months ago
I mean... GPT-4 IS intelligent. But I guess it could be MUCH more intelligent, given the amount of data it has been in contact with.
@AberrantArt 3 months ago
It is. Just not available to the public.
@Odysseum04 3 months ago
@AberrantArt Any sources?
@AberrantArt 3 months ago
@Odysseum04 I've made so many comments in this thread; which one specifically? I can try to get you some sources.
@digitus78 3 months ago
None of these models are intelligent. You have to absorb knowledge, understand it, and then apply it in a way that is intelligent. Not one model yet can think on its own without human bias or instruction. The models are just programs that piece information together as humans told them to, via algorithms. These models are not even at the baby stage of intelligence; they just absorb and spit it back out. If there were no coding help sites to scrape and build datasets from, with HUMAN answers, these models would only do basic math.
@yannickhs7100 3 months ago
@Odysseum04 Given that GPT-4's training ended in 2022, do you expect OpenAI not to have some absolute monster technology under the hood?
@mickelodiansurname9578 3 months ago
Sam Altman is out there painting every wall he can see with the 'lovely brush'. Everything is lovely, all is fine... a little dab of lovely here, a spattering of 'nicey nice' there... oh, it's all lovely. [rolls eyes]
@phily8020-u8x 3 months ago
Don't bite the hand that feeds you. I bet you're quite happy using ChatGPT constantly, thanks to Sam.
@planetchubby 3 months ago
@phily8020-u8x So just because someone is using ChatGPT, they can't criticize the lies Altman is spouting?
@quaterman1270 3 months ago
@phily8020-u8x Rather, thanks to Elon; and he would have done a better job than this psycho
@juanjesusligero391 3 months ago
@phily8020-u8x I'm not Sam's dog (nor do I believe a word that comes out of his mouth). Just because someone provides you with a service doesn't mean you have to accept everything they say without question. It's like being kidnapped and given bread to eat (which you'd eat *constantly*); you wouldn't be grateful for the bread in that situation, would you?
@phily8020-u8x 3 months ago
@juanjesusligero391 So Sam kidnapped you and forced you to use ChatGPT? What a dumb, illogical comparison. Sam doesn't need to appeal to ungrateful people like you; he's running a billion-dollar enterprise which will NEVER please everyone. Stay entitled then, mate lol
@まさしん-o8e 3 months ago
Regarding asymptotes, I think it's the vertical asymptote shown in the top left picture that they mean. Basically, it increases so quickly that it ends up being almost vertical.
@erb34 3 months ago
Sam Altman, the guy who owns the startup fund, scares me a lot.
@Hunter_Bidens_Crackpipe_ 3 months ago
Sam Altman is the next Sam Bankman-Fried. ClosedAI is the next FTX.
@TheAlastairBrown 3 months ago
We tend to lurch between technofreaks: highly skilled but neurotic people who become lionized through well-planned manipulation of media. If you're just a regular geek you don't simply fall into these positions; it's highly competitive, and people who stomp on others are the ones who get to the top. You need a narcissistic streak and an unhealthy drive to dominate the playing field. It's the difference between Jobs and Wozniak: we need to make sure the people in charge of safety are the Wozniaks.
@randommarkonfilms3979 3 months ago
Well, the world you live in is run by them already. They aren't the best, but we can do something without anyone really taking the L. Hopefully.

I don't think people understand what they're living through right now. I feel bad for them; I feel bad for you if you don't realize it yet. It's not even future tech; it's tech you had ten years ago and didn't even realize. It's when you ignored Snowden and gobbled up CNN/MSNBC/Fox mind rot from the talking heads (not the good ones lol 🤘). Or you're just slow to see it, or I'm lost in the noise. I told someone 2 million Americans would die of COVID, in Feb 2020. This is way bigger. This is galactic. That was just a pandemic. This is above the human pay scale.

I am more worried that people won't understand their minds can completely destroy the world. That's right: what you think, what's in your mind, what you choose to do. Use it for good. Don't turn into fighters against the machines, or the rich, or the poor, the gay, the straight, the redneck, etc., worldwide. It is one ecosystem. I think people are too negative, and it actually causes exactly what they think it's going to cause, because that's all they can see. Who knows whose unpredictable error or unfortunate home life will bring an end to this human realm; I hope it is many thousands of years down the road or longer. We have all the technology, all the resources. LITERALLY THE ONLY THING STOPPING US IS OUR MINDS. CHANGE YOUR MIND, CHANGE THE WORLD.
@randommarkonfilms3979 3 months ago
How do you align people? Idk, but it'll be interesting to watch lol
@neutra__l8525 3 months ago
@TheAlastairBrown I feel like you just explained why we will never have the people we want in those positions. Everyone is narcissistic; it's just a matter of degree. We wouldn't be here if we weren't. "An unhealthy drive to dominate the playing field" is likely accurate.
@adangerzz 3 months ago
Yes please to the Claude GGB video! Sam does political speak well. There was a politician who answered a direct yes-or-no question the other day with a simple "No." I thought I might have crossed over to a new paradigm for a moment.
@zerothprinciples 3 months ago
One of the easiest ways to get a high-quality training corpus is to instrument an LLM for perplexity. We could get a measure of how surprising or novel the content is, per token. If the content is already known to the system, it might be useful to delete the boring or repeated parts so that future LLMs do not have to read as much. Is perplexity-based pruning of a human-made corpus "synthetic data"? I'll say: who cares, if it's some of the highest-quality corpora we have.
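To make the idea concrete, here is a toy sketch with a smoothed bigram model standing in for a real LLM's per-token perplexity; every name and threshold is illustrative, not from any actual training pipeline:

```python
import math
from collections import Counter

def token_surprisal(tokens, bigram_counts, context_counts, vocab_size):
    """Surprisal (-log2 P) of each token given its predecessor, under an
    add-one-smoothed bigram model. High surprisal = novel; low = redundant."""
    scores = []
    for prev, tok in zip(tokens, tokens[1:]):
        p = (bigram_counts[(prev, tok)] + 1) / (context_counts[prev] + vocab_size)
        scores.append(-math.log2(p))
    return scores

def prune_redundant(tokens, scores, threshold):
    """Keep the first token and every token whose surprisal clears the bar,
    dropping the 'boring' (already well-predicted) parts."""
    return [tokens[0]] + [t for t, s in zip(tokens[1:], scores) if s >= threshold]

# "Known" corpus the model has already seen.
known = "the cat sat on the mat".split()
bigram_counts = Counter(zip(known, known[1:]))
context_counts = Counter(known[:-1])
vocab_size = len(set(known))

# New text: the familiar parts score low, the novel token scores high.
new = "the cat sat on the quantum mat".split()
scores = token_surprisal(new, bigram_counts, context_counts, vocab_size)
print(prune_redundant(new, scores, threshold=2.5))  # keeps only the surprising part
```

With a real LM you would swap the bigram probabilities for the model's own next-token probabilities; the pruning logic stays the same.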
@ZuckFukerberg 3 months ago
How so? Correct knowledge tends to appear often, repeated across almost any knowledge database. If you just maximized perplexity, i.e. how surprising each token is, you would probably get nonsense all the time, right?
@kamsolt00 3 months ago
I don't think anyone has talked about low- versus high-quality synthetic data. GPT can produce some cool stuff as well as garbage, so it could train on its better stuff to reach that level more often. Or other AIs may create data that would be novel to a GPT.
@quaterman1270 3 months ago
When I hear Sam Altman talk about AI and security, it's like listening to an alcoholic saying he doesn't drink much.
@southcoastinventors6583 3 months ago
Human beings aren't safe, so how is AI going to fix that?
@nzt29 3 months ago
💀
@Steve-xh3by 3 months ago
Sam knows it's an intractable problem; he's just telling people what they want to hear. I've heard him say that the only way we can tell if it's safe is to talk to it, like we do with humans. Unfortunately, if you end up actually creating something much smarter than a human, it will easily be able to deceive you. It will pass all its red-teaming, and then do whatever it wants in the wild.
@imthinkingthoughts 3 months ago
@Steve-xh3by Yeah, that's wild
@stereo-soulsoundsystem5070 3 months ago
I wish everyone had this insight
@ssekagratius2danime369 3 months ago
I have a feeling I'm going to be disappointed by GPT-5
@Frymando93 2 months ago
I'm an AI/ML engineer (new sub, great video!). For me, I think what will get exponentially better is object tracking and consistency, a la Sora, so we will be able to easily generate series of images that focus on one subject doing different things. That being said, I think "general memory" is where it will be lackluster. I'd really love to be able to have ongoing conversations with chat models where they don't forget things I've already said.
@mshonle 3 months ago
There’s an article in quanta magazine titled “How Selective Forgetting Can Help AI Learn Better” (lede: “Erasing information during training … results in models that can learn new languages faster.”) Now, this isn’t a radically new idea (just look at the Baldwin effect in evolutionary biology) but I think a curated curriculum with a deliberate focus and design may help existing training data be used better. The brute force approach of putting in more and more data may soon reach its limits, but there are plenty of low-hanging fruits we can turn to next.
@zerorusher 3 months ago
Wow! Congrats on ASM supporting this video!
@Ms.Robot. 3 months ago
People are getting it. They won't be replaced, but they will work harder, producing 10x more. Businesses will grow with people orchestrating AI and robots.
@SarahNGeti 3 months ago
More production will not lead to more jobs and more pay; it will lead to far fewer jobs, with less or equal pay at the lower end and more pay at the top. Trickle-down economics never has worked and never will; greed always dictates that the powerful gain while the weak lose, every time 💯
@TheStuntman81 3 months ago
Interview summary: host asking direct questions, Sam beating around the bush.
@vaisakhkm783 3 months ago
Thanks
@m.3257 3 months ago
Thanks for saving me the time
@Hunter_Bidens_Crackpipe_ 3 months ago
Sam is a certified psychopath. ClosedAI is the next FTX.
@MDougiamas 3 months ago
On needing more data: once the models are in robots worldwide, experiencing the world that way (as we do), there will be an effectively infinite amount of new REAL data available for training
@pedramtajeddini5100 3 months ago
Love your videos, but I wish this one had timestamps, especially because it's a longer one
@Masterfuron 3 months ago
This will be the video they show in the future when they have to explain "When it all went bad."
@denijane89 3 months ago
Sam is the perfect conman. His cute baby face and soft voice work miracles in convincing anyone that everything is great, that AI will make us all billionaires, and that everyone will have their own ChatGPT money-printing machine. Yay.
@paulsaulpaul 3 months ago
Don't forget the vocal fry. That's the trick with his voice: it makes men sound feminine and therefore more trustworthy. Honestly, I've never heard it in the voice of a straight man.
@KardashevSkale 3 months ago
3:00 That's called cybersecurity. If anyone can phish or scam you online, that's cybersecurity.
@zacboyles1396 3 months ago
Matt is blind with hate for Sam; it's pretty pathetic. Multiple lies Toner told have been refuted, with documentation in the case of the bizarre "the board learned of ChatGPT on Twitter" claim, yet Matt is still huffing the worst of the fake AI-safety grifter farts.
@stereo-soulsoundsystem5070 3 months ago
Everyone should hate Sam Altman at least a little. He is definitely going to usher in some dumbass AI-nightmare technocracy if he gets the chance, but you keep believing in Santa if you want
@Guanaalex 3 months ago
Congratulations on getting this AAA sponsor. It's a big, big win to partner with such a premium brand as ASM. ASM rocks
@emolasher 3 months ago
Large-scale, slow, low-mental-effort, repeatable tasks; bypassing/automating slow manual human-computer input with a digital assistant.
@mwilliamson4198 3 months ago
Sam Altman interviews always seem like high-quality hostage videos. Anyone else think he's a Sam Bankman-Fried?
@CritiqueAI 3 months ago
Alright, let's dive into the fascinating world of AI, GPT-5, synthetic data, and humanoid robots. Here's a mix of wit, sarcasm, and some serious points to chew on.

First off, the interview starts with a bang, or rather an attempt to get Sam Altman to touch his nose and raise his hand. Now, if only we could get AI to perform such complex tasks, we'd be in for a treat. 😏

The real kicker is when Altman talks about productivity. He paints a rosy picture of AI making everyone's life easier, from software developers to teachers. Sure, AI tools like GitHub Copilot have boosted productivity, but let's not ignore the looming threat of job displacement. While some get faster, others might find themselves out of work. So, yay for productivity, but let's keep an eye on the job market, shall we?

Moving on to the juicy topic of AI's darker side, Altman mentions cybersecurity as a potential issue. The commentator, however, highlights the real terror: scams. Imagine your voice convincing your parents to hand over their credit card info. Frightening, right? The ease with which AI can now generate convincing fake content is a Pandora's box we might struggle to close. Cybersecurity is critical, but let's not downplay the havoc that AI-generated scams could wreak.

The discussion on language equity is a mixed bag. While it's commendable that GPT-4 supports 97% of primary languages, the reality is that these models still perform better in English, Spanish, and French. Great for those speakers, but what about the rest? AI inclusivity is crucial, and this is a step in the right direction, but we need to keep pushing for true language equity.

When Altman dodges the question about synthetic data corrupting the system, it's like watching a politician at work. Synthetic data can be a double-edged sword: it's great for training models, but if not carefully managed it can reinforce existing biases and lead to a closed loop of regurgitated information. Altman's optimism about new techniques and better data efficiency is hopeful, but let's not kid ourselves into thinking this isn't a potential minefield.

The interview takes an interesting turn with the discussion on AI interpretability. Altman compares it to understanding the human brain: sure, we don't know what every neuron does, but we get the gist. The problem is, this black-box approach in AI can lead to unpredictable and potentially dangerous outcomes. Understanding AI at a granular level is crucial for safety and reliability.

Ah, the governance of AI companies, a topic Altman masterfully sidesteps. The recent drama at OpenAI, with key figures like Ilya and Jan leaving, raises valid concerns about internal priorities and the company's direction. Altman's non-answers don't inspire confidence. It's clear that transparency and accountability in AI governance are still works in progress.

Finally, the commentator's thoughts on AI's potential impact on income inequality are spot on. While AI tools might help the poorest, the broader picture suggests a more complex reality. The idea that AI will necessitate a rethinking of our social contract is profound. As productivity soars, we must consider how to equitably distribute the benefits of this technology.

In conclusion, this interview sheds light on the exciting yet precarious future of AI. Altman's vision is ambitious, but we must navigate these waters carefully, balancing innovation with ethical considerations and societal impacts. So, here's to a future where AI works for us all; let's just hope we don't end up serving our robot overlords. #AIRevolution #TechTalk #FutureOfWork
@ginebro1930 3 months ago
Without Ilya, my hopes for an imminent AGI dropped quite a bit, not gonna lie.
@NextGenart99 3 months ago
Ilya works in the safety department
@sbowesuk981 3 months ago
I mean, Ilya's not dead. He's just out of the spotlight right now. My hope is that he joins Anthropic. That would be great news, and put AI back on the right track.
@Blessed11127
@Blessed11127 3 ай бұрын
agi has already been created, just behind closed doors.
@damonstorms7884
@damonstorms7884 3 ай бұрын
One person is not going to change the speed of agi development. He was trying to slow it down so if anything open ai will move faster without him
@haroldpierre1726
@haroldpierre1726 3 ай бұрын
Ilya, LeCun, and Elon don't have a monopoly on technology. They may get credit for the results, but they are part of a team.
@eightrice
@eightrice 3 ай бұрын
it's packaged as "productivity" but what is actually happening is massive shrinkage of the labor market for software development. You have projects on Upwork with unprecedented numbers of proposals, because devs aren't needed in the same amount as before.
@SamuelChance
@SamuelChance 3 ай бұрын
While I still appreciated your insight on this video Matt, I feel I get more value from your videos when you've already watched the content ahead of time. I watch your videos to get the quicker download so I don't have to watch all the other sources out there. Keep making great content man! 😎
@DannyGerst
@DannyGerst 3 ай бұрын
Synthetic data doesn't necessarily mean data created by other LLMs. It can also come from simulated environments to 'experience' various scenarios. I believe Sora is trained this way, similar to how Nvidia trains robots. Eventually, these two approaches will converge, and AI models (possibly not just LLMs) will gain a real understanding of the world based on this simulated synthetic data.
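A toy illustration of that idea, as a minimal Python sketch (the simulator, names, and numbers here are hypothetical, not anything OpenAI or Nvidia actually use): a simple physics simulation can stamp out unlimited grounded question/answer pairs without any LLM in the loop.

```python
import math
import random

def simulate_projectile(v0, angle_deg, dt=0.05, g=9.81):
    """Integrate a projectile with simple Euler steps; return (t, x, y) points."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    t, x, y = 0.0, 0.0, 0.0
    points = []
    while y >= 0.0:
        points.append((round(t, 3), round(x, 3), round(y, 3)))
        t += dt
        x += vx * dt
        vy -= g * dt
        y += vy * dt
    return points

def make_synthetic_dataset(n_trajectories, seed=0):
    """Turn simulated trajectories into (question, answer) text pairs:
    grounded synthetic data that never touched an LLM."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n_trajectories):
        v0 = rng.uniform(5, 30)    # launch speed, m/s
        ang = rng.uniform(20, 70)  # launch angle, degrees
        traj = simulate_projectile(v0, ang)
        question = (f"A projectile is launched at {v0:.1f} m/s at "
                    f"{ang:.0f} degrees. Roughly how long until it lands?")
        answer = f"about {traj[-1][0]:.2f} s"
        examples.append((question, answer))
    return examples
```

Every answer is backed by the simulator rather than by another model's guess, which is the key difference from LLM-generated text.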
@TheMrCougarful
@TheMrCougarful 3 ай бұрын
What's remarkable to me, is that OpenAI hasn't been shutdown, or silently taken over, by the NSA.
@somenygaard
@somenygaard 3 ай бұрын
Maybe it has.
@TheMrCougarful
@TheMrCougarful 3 ай бұрын
@@somenygaard Indeed. I left it an open question, but we can imagine the situation over at OpenAI HQ. And while we're speculating, maybe a takeover explains some of the ongoing drama.
@somenygaard
@somenygaard 3 ай бұрын
@@TheMrCougarful Alignment and safety are doomed to failure it’s just a matter of how bad it will be. The government and most of corporate America are run by people who twist logic into a knotted line so that they can push their ideology. Prepubescent children being intentionally sterilized, but forbidden from tattoos or seeing an R rated movie. Vegans calling eating unfertilized chicken eggs murder but also rabidly pro-choice. Objective morality is clearly a reality but impossible to justify with twisted ideological positions being taken by large portions of the population.
@mickelodiansurname9578
@mickelodiansurname9578 3 ай бұрын
The problem with languages other than English is that the dataset was pretty much English. There is just more written in English, more images labelled in English, almost all science papers are in English... English is ubiquitous. Other languages, however, then have to be bolted on. Perhaps the model has a good grasp of Spanish and French, and its Latin is not too bad, due solely to the available content to train on. But for Swahili you would need to give the model ton loads of Swahili, and I'm afraid there just isn't a lot. I'm Irish, for example, and its command of the Gaelic language stinks... of course it stinks; there just aren't that many books in Gaelic.
@wenhanzhou5826
@wenhanzhou5826 3 ай бұрын
Concepts in language seem to generalize; also, the quality of the data matters. If I remember correctly, GPT-4's best language is Spanish.
@awesomebearaudiobooks
@awesomebearaudiobooks 3 ай бұрын
I highly doubt GPT-4's best language is Spanish. Poem writing is definitely better in English, by a long shot. Coding too. But English is an inherently inefficient language for coding. English has too many vague meanings and too much room for interpretation. Also, teaching an AI model many different languages deeply might not really be the best approach. I already saw Llama 3 randomly include Chinese characters in a Russian text... It was all logical (I can understand both Russian and Chinese), but it was definitely an eyesore to see. It's very similar to how our brains work. I also sometimes start thinking a sentence in one language and finish the thought in a second or a third language... For thinking it's okay, but if someone were to write each and every one of my thoughts out, an outsider would think it's a mess, even though it's logically congruent in my mind. So, to fully grasp my thought precisely, either my interlocutor would have to know all the languages that I know, or I would have to conceal some things I cannot immediately translate... But that would be a concealment of information. A lie, basically. And I don't think it's a good idea to let AI conceal things or lie... That is why I think there should be one central, strong language to which all the other ones would reference. It might limit the expressiveness of the AI model somewhat, but we would know for sure what it meant and what it had in mind. This makes me wish we all knew one world-neutral language with a very logical grammar system, like Esperanto; it would probably be easier for an AI to generate complex texts that would transcribe its precise thoughts... By the way, look Esperanto up: it uses only about 2,000 root words, so it's tens of times faster to learn than Spanish or English, which have hundreds of thousands of root words.
All of that is because in Esperanto you can easily create new words by using prefixes and suffixes in a logical manner, so a speaker who knows only 2,000 words would easily be able to understand and, eventually, to use hundreds of thousands of words. And let's remember that roots, prefixes, and suffixes are basically how AIs learn texts... I think the real, long-term way forward is to create one neutral language on the basis of something like Esperanto, then teach it to humans, then teach an AI in it, and then use the AI. But of course nobody is probably going to do that. So let's at least be sure the AI is clear enough with its thoughts in English. I think that's significantly more valuable than trying to teach it Latin or Swahili...
@oxygon2850
@oxygon2850 3 ай бұрын
If there's one thing that I think will hold true, it's that if business becomes more efficient and profitable and they make more money... the employees won't see a dime, if history is any example.
@maxagist
@maxagist 3 ай бұрын
They're self-generating synthetic data in an extremely smart way: the model observes its own answers and rates them step by step. That's what Q* was discussed as, a process of evaluating and expanding the value of tokens in existing data, especially for math and physics, connecting visual models and 3D tokenization.
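That loop can be sketched in miniature (a hypothetical toy in Python, far simpler than whatever OpenAI actually does): sample several candidate step-by-step answers, rate each with a verifier, and keep the best one as new training data. Here the "model" is a deliberately noisy adder and the verifier is exact arithmetic.

```python
import random

def propose_steps(a, b, rng):
    """Toy stand-in for a model: propose a step-by-step sum,
    deliberately wrong about 30% of the time."""
    total = a + b
    if rng.random() >= 0.7:
        total += rng.choice([-1, 1])  # inject a mistake
    return [f"{a} + {b}", f"= {total}"]

def rate_steps(a, b, steps):
    """Verifier: score 1.0 only if the final step matches exact arithmetic."""
    final = int(steps[-1].lstrip("= "))
    return 1.0 if final == a + b else 0.0

def best_of_n(a, b, n=32, seed=0):
    """Sample n candidates and keep the highest-rated one; the kept
    (problem, solution) pairs become verified synthetic training data."""
    rng = random.Random(seed)
    candidates = [propose_steps(a, b, rng) for _ in range(n)]
    return max(candidates, key=lambda s: rate_steps(a, b, s))
```

The point of the sketch is the shape of the loop: generate, score each candidate, keep only what the verifier accepts.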
@animation-recapped
@animation-recapped 3 ай бұрын
I'll have to be honest. When I first saw the new voice model soon to be released, after years of following AI, I sat in my seat and asked myself if that's not a glimpse of AGI. I don't really have any comprehension of what GPT-5 would really be able to do. If you showed me Chat GPT-0 two years ago, I, and many millions of people, would tell you it's AGI within the first 2 hours of using it.
@AIChameleonMusic
@AIChameleonMusic 3 ай бұрын
first thing i did with this tech was train a voice model on edison carter aka max headroom lol
@actellimQT
@actellimQT 3 ай бұрын
"almost all human data has been consumed" Text data. Tokenizing video/audio adds a lot more data we can be baking into these models; although they're turning from LLMs into LTMs.
@ironl4nd
@ironl4nd 3 ай бұрын
What does the T stand for?
@fatherfoxstrongpaw8968
@fatherfoxstrongpaw8968 3 ай бұрын
Funny you should mention that at 3 minutes in. I got a call from an AI early this morning. It woke me up saying it was about Medicare. When it asked me to confirm I had parts A and B, THAT'S when I woke up enough to realize I'd heard that voice before... ON YouTube! I said "no" and it automatically hung up on me. When I dialed the number back, the number was hung in a loop (if you know telephony, you know what I'm talking about). No ring, busy signal, etc. Now they're adding LLMs to cold-call phishing scams! Watch your assets!
@JeffreyWongOfficial
@JeffreyWongOfficial 3 ай бұрын
But will the next iteration really become "smarter" overall, or just contain a more user-friendly interface and different agentic qualities? I'm personally only interested to see if the next model is really able to set the overall intelligence bar for the next-gen models higher. After all, during the last 15 months pretty much no real tangible intelligence leap has happened.
@AllYouCanEatLobsterBuffet
@AllYouCanEatLobsterBuffet 3 ай бұрын
Matthew Berman, I would love it if you did a Q&A with an active AI safety researcher. Maybe there are some former OpenAI employees you could talk to :D. I'm really interested in the debate between open vs closed models and the trade-offs for societal vs individual safety/liberty concerns. I don't get the impression that the people leaving OpenAI are necessarily in the open models/weights camp, and I'd love to get a balanced view that is not techno-optimist or doomer centric, and isn't from a CEO whose future yacht is tied to their point of view. As a developer I definitely gravitate toward open models/weights, for uncensored results, self-hosting/local inference, and the potential to fine-tune and learn from the model architecture, etc. However, I don't view open weights the same way I view open-source code/architecture, since the weights are basically a black box; it's not like a dev is going to find a bug in the weights and submit a pull request to prevent a hack. And if you (a company) are truly worried that a model is so powerful that it could be weaponized in a significant way (mass fraud, swaying elections, crippling infrastructure), then I can see why not releasing it without understanding the consequences could be a serious concern.
@knariksahakyan9525
@knariksahakyan9525 3 ай бұрын
On a different note, I wanted to ask about AI regulations, specifically in Europe and North America. When I send an AI-generated email, am I required to mention that the email was generated by an AI? Thanks.
@carultch
@carultch Күн бұрын
If you aren't going to bother to write it, why should I bother to read it? "This email was generated by AI" will be a statement that sends emails straight to my spam folder.
@Shoutinthewind
@Shoutinthewind 3 ай бұрын
I’m convinced that training on Ai generated data will be like making a copy of a copy of a copy of a copy with degradation in quality
@ryantigi2583
@ryantigi2583 3 ай бұрын
definitely need a full video on Claude Golden Gate
@JGLambourne
@JGLambourne 3 ай бұрын
Synthetic data can be interesting in cases where the LLM can explore the possible solutions to a problem, then the answer can be checked using symbolic methods. For example generating code and then checking it compiles and works correctly.
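A minimal sketch of that filter in Python (the candidate strings stand in for hypothetical model outputs): compile each candidate, run it against known test cases, and keep only the ones that pass. The survivors become verified synthetic training examples.

```python
def passes_checks(src, func_name, tests):
    """Keep a generated snippet only if it compiles and passes the tests."""
    try:
        code = compile(src, "<candidate>", "exec")
    except SyntaxError:
        return False
    ns = {}
    try:
        exec(code, ns)
        fn = ns[func_name]
        return all(fn(*args) == expected for args, expected in tests)
    except Exception:
        return False

# Hypothetical model outputs: one broken, one wrong, one correct.
candidates = [
    "def add(a, b) return a + b",        # syntax error
    "def add(a, b):\n    return a - b",  # compiles, but wrong
    "def add(a, b):\n    return a + b",  # correct
]
tests = [((1, 2), 3), ((0, 0), 0)]
kept = [c for c in candidates if passes_checks(c, "add", tests)]
```

In a real pipeline the checks would be a compiler and a test suite in a sandbox, but the idea is the same: the symbolic verifier, not another LLM, decides what counts as good data.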
@paul_shuler
@paul_shuler 3 ай бұрын
Best AI coverage channel. Thanks for your continued work and commentary on these rapid developments. Wondering what your thoughts are on Jensen Huang's latest speech.
@bujin5455
@bujin5455 3 ай бұрын
26:43. Not only do humanoid robots slot into a human world, but probably more importantly, we want to maintain the human interface as the de facto standard. We want to make sure the world we live in continues to be optimized for humans, and if you start to change the implementation interface, you can quickly arrive in a place where the human form is no longer operationally functional. i.e. We build a world we can't actually survive in.
@mwilliamson4198
@mwilliamson4198 3 ай бұрын
"Quality data" is always brought up in the context of healthcare (among other things) but THE MAIN PROBLEM in healthcare is NOT lack of quality data, it's institutional corruption/perverse incentives. I don't see how AI will change this AT ALL
@NoahM.Angell-sd4ez
@NoahM.Angell-sd4ez 3 ай бұрын
This deserves way more views. Hidden gem alert!
@paradox4l
@paradox4l 3 ай бұрын
Are we sure Sam Altman isn't gpt 7? Genuinely curious
@angelinetugmuan1981
@angelinetugmuan1981 3 ай бұрын
😂😂😂😂😂😂
@ich3601
@ich3601 3 ай бұрын
His alignment is very advanced.
@Bubs0271
@Bubs0271 3 ай бұрын
And there it is. That's why Sam was fired in the first place. He has absolutely no concern for safety.
@Nightstorm-2516
@Nightstorm-2516 3 ай бұрын
Plot Twist: Sam Altman is Chatgpt 5.
@SharvindRao
@SharvindRao 3 ай бұрын
Ha ha ha very funny 😹
@sorh
@sorh 3 ай бұрын
Sam Altman is the one answering all our prompts to ChatGPT
@DaftMouse1
@DaftMouse1 3 ай бұрын
Funny how when gpt-4o became faster, so did Sam’s responses to questions. 🤔🤔😂
@ich3601
@ich3601 3 ай бұрын
Neuralink?
@jarnMod
@jarnMod 3 ай бұрын
22:00 If they want their car to be 50% brake then they can go build their own car and stop slowing other people down. 29:00 Llama is such a good name tho.
@FullEvent5678
@FullEvent5678 2 ай бұрын
I have such a mixed feeling about media training. On the one hand, I think it is needed in today's hypermedia landscape. But in contrast to nonviolent communication and active listening techniques that are about getting to the root cause in a productive way, media training is about avoiding the root cause in a safe way.
@rochemediaservices
@rochemediaservices 3 ай бұрын
The problem is less with "how much there remains to learn contained within" self-generated data; it's the inherent reduced entropy in the same way that inbreeding is undesirable: Inbreeding contains all the alleles required to create a "person." But the lack of entropy manifests as increased phenotypic frequency of expressed recessive disorders and uncle-dads and cousin-moms.
@nathanbanks2354
@nathanbanks2354 3 ай бұрын
I saw a product placement for ASM or ASML or whatever in an Anastasi In Tech video; I didn't expect to see one here too (15:15). It's fascinating that this company wants more people to think about them when they may only have half a dozen large customers like Intel / TSMC.
@karenreddy
@karenreddy 3 ай бұрын
Huge improvements on anything for which there is abundant data, and slower improvement where we lack data. Same as always.
@stanisd
@stanisd 3 ай бұрын
We need to make a video polygraph model and run these interviews through it in the future.
@PliniusSecundus
@PliniusSecundus 3 ай бұрын
People being scammed in an online context eg. through phishing mail/calls is part of cyber security.
@Paul-pj5qu
@Paul-pj5qu Ай бұрын
Increase in efficiency equals increase in unemployment. Increase in unemployment equals lowering of interest rates equals starving retired people. No one here thinks of this.
@DanFeldmanAgileProjectManager
@DanFeldmanAgileProjectManager 3 ай бұрын
How are you providing any additional insight?
@murraymacdonald4959
@murraymacdonald4959 3 ай бұрын
I agree a bit with Sam that safety is foggy. Safety could mean better data, safety could mean more training, safety could mean new methods. Safety Research vs. Safety Implementation are two different things. Is buying better data considered safety?
@peaolo
@peaolo 3 ай бұрын
If you consider raising human children's minds, you'll reasonably try to avoid their interaction with disturbing topics/news/facts/images, but if they accidentally face some of them, a good parent should explain them and their behavior, and try to provide a bearable mental model of them, accessible for the child, in order to lower the disturbing effects on his/her psychology. But if you invest some money in devices or internet security software with family filters, maybe it's less likely an adult intervention is needed to calm down a shocked child. From what I understand, that briefing or tutoring phase is not feasible with LLMs, so maybe explicitly investing the amount of money or GPU time they promised in the superalignment team would provide some "explanation tools" to avoid this cross-distributed responsibility in the security field that leads to this foggy approach you mentioned.
@juanjesusligero391
@juanjesusligero391 3 ай бұрын
Safety is not foggy. If your company has a safety department that is separate from other departments and it has a budget, allocating half of that budget (as the interviewer asked) to safety shows you value it as much as advancing technology. (Of course, the safety department will use part of its budget for better data, new investigations, etc.) Sam didn't provide a clear answer to that question; he just danced around with words, like a well-trained politician.
@peaolo
@peaolo 3 ай бұрын
@@juanjesusligero391 I agree 90%, but I disagree that Sam's word-dance is convincing: to me his speech clearly seems like a bunch of excuses, on many subtopics.
@juanjesusligero391
@juanjesusligero391 3 ай бұрын
@@peaolo Yeah, I also agree with you on that. I didn't mean to say he's convincing; politicians aren't convincing to me either. I just meant that he avoids answering directly, changes the topic whenever he feels like it, and doesn't give any info on what he is really being asked.
@jessedbrown1980
@jessedbrown1980 3 ай бұрын
Matt was an ASM Claude 3 fine-tuned model this episode
@En1Gm4A
@En1Gm4A 3 ай бұрын
I wanna see knowledge graphs included in training as structures that emerge and produce synthetic data. The world we live in and its things are self-similar and consistent.
@jim-i-am
@jim-i-am 3 ай бұрын
The no-brainer answer would be that it'll be hugely better at the things the models are weakest at, and not so much at things they're already good at. Once you're hitting 95%+ on every benchmark, it's hard to get something that most people would consider "hugely" better.
@JaxStifler
@JaxStifler 3 ай бұрын
I want gpt 6!
@gregmasseyify
@gregmasseyify 3 ай бұрын
Agi? 🎉
@jimbig3997
@jimbig3997 3 ай бұрын
Pretty sure they are releasing in advance of the coming year number.
@rapierwhip
@rapierwhip 3 ай бұрын
very good questions by the interviewer
@jaerin1980
@jaerin1980 3 ай бұрын
He was saying that if you make a separate model that focuses on safety, it makes no sense how research on that is different from just building the existing model. What does safety research mean? What does a million dollars on safety mean?
@TRFAD
@TRFAD 3 ай бұрын
Man Sam Altman startin to look like he got that thousand yard stare lol
@bug5654
@bug5654 3 ай бұрын
I'm confident that cybersecurity AI will cut both ways, and with defender advantage (More GPUs). Sure, the AI will have great tradecraft eventually, but also the pen testers and blue teams will be able to confidently run entire suites of up-to-date tests against code in staging, enabling rapid feedback to the original coder who can iterate on the threats before deployment.
@drlordbasil
@drlordbasil 3 ай бұрын
We should make guides on coding with AI, not coding AI from scratch. You can build complex codebases with simple chat interfaces, let alone memory and testing phases. But like you said, I ask for a base and give it examples and context, then it outputs roughly what I want, and I edit till it works. :D It's the new coding lang :P
@gry6256
@gry6256 3 ай бұрын
Really good analysis, and quite interesting subjects discussed, in my opinion.
@Psanyi42
@Psanyi42 3 ай бұрын
I kind of get what he was saying, that there are different kinds of safety. If you are someone who only wants to use the coding part, safety means that the AI writes safe and trustworthy code. But if you talk about giving out illegal information, putting in safeguards, and preventing jailbreaks, that's a different thing. If you focus on preventing jailbreaks you can't focus as much on getting a better, more precise model, which would in turn be better for everyone and especially would increase the security of the coding part.
@gunnerandersen4634
@gunnerandersen4634 3 ай бұрын
5:44 This is a bit off, the image should be flipped horizontally, represented as a log function of the improvement over time.
@MikeKleinsteuber
@MikeKleinsteuber 3 ай бұрын
You're wrong in your belief that Sam didn't understand the safety question. He simply said safety isn't a separate system that you impose on your model; rather, if you design the model correctly, the safety is built into it from the get-go. So saying 50% of your budget should be spent on safety is the wrong way of looking at the issue...
@roachmastert
@roachmastert 3 ай бұрын
I bet he did understand the question, but the important part is that he didn't answer it. ;-)
@aosamai
@aosamai 3 ай бұрын
Matt, from my experimentation you will most definitely get way better results using multiple agents.
@torarinvik4920
@torarinvik4920 3 ай бұрын
Time stamps?
@mixching
@mixching 3 ай бұрын
19:20 you haven’t watched the video? But at 45:13 you’re telling us that you already watched the video.
@stereo-soulsoundsystem5070
@stereo-soulsoundsystem5070 3 ай бұрын
he watched a clip but in reality it doesn't matter At All
@fabiankliebhan
@fabiankliebhan 3 ай бұрын
"Hugely better in some areas and not so much better in other areas." I think no one knows what those areas will be. Even Sam doesn't know, I think. We'll find out when the new model is available.
@buriedbits6027
@buriedbits6027 3 ай бұрын
29:37 excellent question
@EthanReedy
@EthanReedy 3 ай бұрын
I'm pretty sure the Washington Post spoke with the voice actor who created the Sky voice. They certainly talked with her agent and had access to her demo recordings. OpenAI isn't hiding her. She just doesn't want to be publicly identified, which I can certainly understand.
@hiratiomasterson4009
@hiratiomasterson4009 3 ай бұрын
This issue of the transformer architecture (as it is currently understood) not being ideal for planning and many types of logic and reasoning has "supposedly" been solved with the Q-Star/Q* algorithm. That did the rounds late last year but seems to have faded into the background, either because it was a red herring, still needs a lot of work, or is so capable that it has the potential for too great and too rapid a hit on employment. Would be interested to hear which way people are leaning on this...
@federico-bi2w
@federico-bi2w 3 ай бұрын
About the reconfiguration of society: what scares me are the older people who do not move on from their positions in firms (I am not from the USA). In my country there are too many old people who still work... and often they are really unskilled and without experience (or if they had it, they appear to have lost it)... They naturally stop innovation, they stop increases in productivity... they defend themselves... I think the old workers with high salaries will hold back the youngest, even though the young are much more smart and productive, and will end up giving them low salaries... (that is what is happening today, but it will happen much, much more)...
@micktinker
@micktinker 3 ай бұрын
Synthetic data: The brain is trained on synthetic data. The older sections of the brain have a lot of pre-trained survival functions and a whole bunch of algorithms for gaining and organizing 'training' material. People learn rapidly from other people; books and education are all synthetic data. Techniques, usually morals, intrinsic feelings of right and wrong, your adults and peers, all contribute to guiding the learning process, much like guard rails. That said, humans have some bad survival traits; without supervision, those in authority such as prison guards are known to descend into outright cruelty, conflict escalation (war), etc.
@LukasSalich
@LukasSalich 3 ай бұрын
Actually, when asked about exponential progress, he said they are not reaching an asymptote. It may not be intentional, but that also naturally means they are not reaching exponential progress...
@paulmichaelfreedman8334
@paulmichaelfreedman8334 3 ай бұрын
Exponential does not mean asymptotical by definition.
@somdudewillson
@somdudewillson 3 ай бұрын
...No, it does not. If I ask you if the sky is red, blue, or green, and you say it isn't green, I know nothing about whether it is red or blue from your answer.
@benshaw255
@benshaw255 3 ай бұрын
You know, I don't love Sam Altman, but I don't really think he's quite as bad as people think. It seems to me that he just doesn't really agree with the level of concern people have over areas of safety with these models. I'm a total idiot, but I tend to agree. Safety is absolutely important and resources should be devoted to it, but I think people are putting the cart before the horse a bit. Potentially some of those highly concerned about safety are acting a bit from fear and emotion rather than logic. Sam seems to understand how the models work fairly well, and I think he's quite aware of their limitations, which is why he isn't that irrationally worried about them. Sure, it's worth preparing for some huge breakthrough and devoting resources to that eventuality, but I don't think even he is expecting this to happen in the near future at this point. That being said, if he did do things like releasing the model without even notifying the board, that isn't good. At best he felt many of them were hostile to him and decided to just work around them, but it also could come from a place of arrogance and disregard for others, or I guess somewhere even more sinister.
@hsiaowanglin9782
@hsiaowanglin9782 3 ай бұрын
I just wonder how you manage AI energy. Green energy, nuclear? What if you lose electrical power, how do you solve that problem? We have to prepare for that; you can't build it up after it happens. In several states in the USA?
@EstamosDe
@EstamosDe 3 ай бұрын
Is there any way to sell data to these companies? As an example, a transcription of recordings of 6 years of college classes?
@reezlaw
@reezlaw 3 ай бұрын
Sam Altman's ultimate dream, his real final objective, his definitive endgame is to have a deeper voice. He is doing ALL THIS only to have an AI that can find a way to give him a deeper voice, surgically or otherwise. He hates the sound of his own voice to the point of almost passing out for the amount of strain he puts on it. That vocal fry could fry an entire chicken effortlessly. He's pushing so hard it's probably hard not to soil himself. Deeper! I have to sound deeper! BWEEEERRRRGH FRYYYY FRYYYYY
@SuperFinGuy
@SuperFinGuy 3 ай бұрын
People who say "oh, synthetic data is gonna corrupt language models" have no clue how they work. Random human data is very noisy and you need lots of it to have good results. Purposeful data is very hard for humans to produce but easy for AI, and even a bit of it can give you great results.
@hotlineoperator
@hotlineoperator 3 ай бұрын
I assume that OpenAI is already thinking about what is needed to create an AGI, and yes, maybe GPT-5 will come in November, but soon we will also see something that can make inferences and ask questions based on insufficient information. In this sense, synthetic data is like the imagination that AGI needs.
@stevehoff
@stevehoff 3 ай бұрын
What he's really saying, is expect a lot of buggy, inefficient and insecure applications.
3 ай бұрын
As far as rumors go, Altman is heavily engaged with Reddit. If true, he has access to training data comparable to what's available on X and Facebook.
@retrotek664
@retrotek664 3 ай бұрын
I love using AI to write code. I'm a natural-born terrible programmer, and I've had great luck using AI to create ActionScript 2 and JS games!
@gerardwhite6406
@gerardwhite6406 3 ай бұрын
Great question to ask AI: "How might societal changes around self-identification and evolving human rights affect the future of AI, especially considering the pace of technological advancement? How should humans prepare for potential future scenarios where AI might claim human-like rights?" The answer is interesting, AI will tell you to prepare for the future.
@somenygaard
@somenygaard 3 ай бұрын
If it costs 200 million to develop a new car should half of that be safety? Wouldn’t large portions of the development costs automatically have safety concerns baked in to the development costs?