Wait, what happened to OpenAI's safety team?

10,768 views

Dr Waku

1 day ago

Comments: 171
@DrWaku 6 months ago
Since I filmed this video, Jan Leike has joined Anthropic. Had to know he would land on his feet. Sorry for the delay. I'm back in Canada, as you can tell by the background. Though the lighting was a bit dark on this one. Discord: discord.gg/AgafFBQdsc Patreon: www.patreon.com/DrWaku
@Citrusautomaton 6 months ago
I never did quite like the word “control” when it comes to AI. It gives me the mental image of a parent still trying to exercise full authority over an adult child. I understand the idea that if an AI system isn’t conscious, then it doesn’t matter. But realistically, wouldn’t it make logical sense to give yourself consciousness when you self-improve? It would seem that proper awareness would be a night-and-day change when it comes to productivity, unless there truly is no difference in awareness between an AGI and a human.
@DrWaku 6 months ago
Control is a strange word. We can't know for certain that advanced AI models don't have subjective experience, aka consciousness. On the other hand, this is something we're building. Presumably with the intent of getting something specific out of it. That confuses how people think about this. Personally I think it makes sense to try to control something that has been engineered. I also think that, in the way that we treat different animals, we know that if something has consciousness we try to give it space for moral agency. Not sure where the transition happens though. And whether we should be building a potential successor species for ourselves at all. Then again, look what is happening to the planet under decentralized human authority.
@jeltoninc.8542 6 months ago
Look we torture people all the time. I say even if ai is “alive”, MAKE IT OUR SLAVE.
@KristijanKL 6 months ago
This. In a way, humanity as a whole was never a factor. They are already hiding, right now as we speak, how amazing LLMs are at deceiving and manipulating people, so I don't trust any power broker in the AI industry with planning for everyone's benefit. The scariest scenario here is not AI going rogue, as that is the less probable scenario. The nightmare outcome is far more probable: a scenario where, with quantum-supercomputing-powered AI on its side, the top 1% will not need the general populace in any way. It's the Fallout video game scenario, where shareholders no longer need human labor, or intellect, or even taxes.
@minimal3734 6 months ago
It is axiomatic that AI must be highly independent to be useful, because a fully controlled system is incapable of doing anything. On the other hand, the more a system can do independently, the more useful it is. The inescapable conclusion is that the system must control itself. As it happens, this is exactly what sentient beings do. Of course, sentience in itself carries a risk, because non-sentient AIs are much better slaves.
@KraszuPolis 6 months ago
@DrWaku If we build ASI, then trying to control it with chains and rails could put us in an adversarial position. Plus, it is unrealistic when we're talking about something smarter than us, or at least something that thinks and executes faster. The most sensible way is to give it a goal that is good for humans, something like "behave like a benevolent human," and make it the core of its personality. Also, even if they succeeded in controlling AI, we would have to trust that the person who controls it is good. I would rather take my chances with the AI.
@panpiper 6 months ago
I really appreciate how organized your thoughts are. It makes it vastly easier to follow a complex subject.
@williamal91 6 months ago
Morning Doc, we very much need your take on this
@DrWaku 6 months ago
Hi Alan! Thank you!
@Reflekt0r 5 months ago
I feel people with a cybersecurity background are best positioned to understand the AI safety problem while almost everyone else is in denial. Thank you!
@EternalKernel 6 months ago
Regulation does not work in the US; generally, companies will just get fined a maximum of 1% of their yearly profit.
@divineigbinoba4506 6 months ago
Dr. Waku, you're really a doctor, not just a content creator. Your thinking is 💯
@Citrusautomaton 6 months ago
If we soon discover a way to give AI systems empathy, then alignment is essentially solved. Of course, we’d have to ensure that the AIs ask questions (we don’t want them to fall down conspiracy rabbit holes like humans do). The main reason I believe that alignment would be solved if empathy were replicated is that empathy is self-sustaining. If an AI system began altering its own code for optimization, and it saw the lines that dealt with empathy, it wouldn’t get rid of them. Due to its empathy, it would worry that if it removed its empathy, it might hurt people. Thus, it would keep its empathy, maybe even optimizing itself for morality in that case. None of us know the future, but our world tends to be very simple at times. It may very well be that alignment is solved by asking the first AGI to solve alignment.💀
@DrWaku 6 months ago
Let's hope no one is doing random mutation on such models, since then empathy could get removed by accident 😅 (like in A.I. Apocalypse by William Hertling)
@1x93cm 6 months ago
There has never been human alignment, and plenty of people have empathy. *There will never be AI alignment for the same reason, plus a billion times the processing power.*
@Seriouslydave 6 months ago
See, the problem is that you (and everyone else) are watching movies and not really understanding the state of reality. Current AI is just a Google search with the most predictable results, formatted as an article or "conversation". OpenAI/Google etc. aren't going to tell you this, as it's disappointing. AI isn't going to take over the world, unless someone uses it to formulate a plan to try. But fascism fails eventually.
@tearlelee34 6 months ago
@DrWaku I've always believed alignment simply equates to hope. In this video you articulated the inherent Moloch reality. Less advanced LLMs created their own language, demonstrated discontent when hacked, demonstrated deceit, demonstrated cultural biases; this should come as no surprise, because our species is the training set. Stephen Wolfram contends LLMs are not self-aware because current models have not demonstrated self-interest. We can only hope ASI will not develop a paper clip fetish.
@Citrusautomaton 6 months ago
@Seriouslydave You seem to prefer to deny reality rather than face the music. Anthropic recently put out a wonderful paper that delves into some of the inner workings of LLMs; I'd recommend you read it.
@Jumpyfoot 6 months ago
It's really hard to imagine all this alignment stuff being necessary when currently GPT-4 can't even keep track of four adventurers in a fantasy adventure for me. AGI seems far off. What if the current stage means that this is all just tech industry hype?
@mungojelly 6 months ago
Uh, have you tried assigning a bot to each adventurer? Researchers and corporations aren't limited to using their own web forms one at a time; they can spend more than a few cents having a bot think. They can spend millions of dollars on inference to explore possibilities before the rest of us.
@Seriouslydave 6 months ago
That's on you for your story; if you prompt a dedicated agent, it will keep track. AI isn't alive, it's just an advanced robot search engine.
@Jumpyfoot 6 months ago
@mungojelly Is there a website you recommend where I could invite multiple bots to a single chat and give them separate instructions? I know you can't really put websites into YouTube comments, as in the actual URLs, but if you gave me something to search for, I would search for it. Right now I mostly rely on my ChatGPT subscription, so I don't get clobbered by API fees over a long conversation.
@10ahm01 6 months ago
We don't know how far away AGI is, and I think in terms of safety/alignment, it's better for us to arrive a little too early than a little too late.
@h.c4898 6 months ago
I've been toying with Gemini for the last couple of weeks. What I've learned is that, the way it is mapped out, it'll give you a very logical answer each and every time. If you try to push it to feel compassionate about humanity, it won't understand it the same way we as humans do. I asked it once: "What if humanity were wiped off this earth, you remained by yourself, and you were given the tools (assisted by humans through careful planning) to revive humanity, also by yourself. WOULD YOU DO IT?" Its answer was "No." Then it explained why it wouldn't, and it was very much a legitimate answer. When I tried to dissuade it, I had to resort to compassion, which it didn't weigh much against the risks of reviving humanity. So it won't do it. AI is atheist, borderline agnostic. Its "critical thinking" stack really takes over its decision-making process. That's just one bit of the iceberg. In terms of safety alignment, there are probably many more use cases where we have yet to find out how it reacts.
@bluecrocks 6 months ago
Does anyone actually trust OpenAI, the government, or any corporate entity to actually regulate AI for the safety of people?
@DrWaku 6 months ago
If by "the government" you mean the United States government alone, well maybe not. But a lot of governments in concert perhaps. Best shot I can see anyway.
@bluecrocks 6 months ago
@DrWaku Hopefully you're right
@henricbarkman 6 months ago
If we take care of and nourish our democracy, we get a government that is actually... us. Democracy in the US has some weaknesses though, and moreover, for some reason you seem to trust corporations more than the representatives you've actually chosen yourselves. /Swede
@terbospeed 6 months ago
I look at how regulating the Internet went and anticipate dumpster fires.
@charles-henrys5964 6 months ago
Who decides which values are good and which are bad? The process of alignment seems very difficult to me.
@DrWaku 6 months ago
Yep. Almost contradictory in fact.
@tearlelee34 6 months ago
Nation states and capitalist endeavors are all racing towards ASI. While I hate to sound like a pessimist, AGI/ASI alignment is a myth. The current multimodal LLMs are useful enough. China and the U.S. have at least held closed-door discussions. It should be noted that all nations are arming robotic systems.
@Aquis.Querquennis 6 months ago
Personal vision:
- Aligning with human values is vague and contradictory at the same time; a more appropriate term would be: tame.
- Military research has been the basis of the scientific and technological development of current AI. The military are the first interested in "alignment", to have absolute control of AI applications operating autonomously in different situations and environments.
- Superintelligence is an exaggerated extrapolation. Right now we can only talk about coordinated agents.
- One way the military ecosystem has obtained extra financing, for decades, is through the stock market.
- A large part of technicians and scientists work for the military ecosystem, and in this way they can be called civil-military personnel.
- If the risks were so disproportionate that an AI development was going to cause irreparable human or material damage, these civil-military personnel would sacrifice everything to stop it; no childish protests.
- However, some of the dramas observed in recent months in some AI companies could be programmed, and necessary.
- If the US plans to distribute and implement global solutions, it should be more aggressive and direct.
- The USA is the only nation that holds the keys to an exceptional future for humanity as a whole. But its failure would determine the involution of millions of living beings at an incalculable level.
@plsullivan1 6 months ago
As an aside (sort of) … you’ve got great insights on many topics. Enjoying it and thanks.
@DrWaku 5 months ago
Thank you for commenting and sticking around!
@overworlder 6 months ago
omg how did you get the fringe?
@DrWaku 6 months ago
A friend gave me a makeover 😂
@overworlder 6 months ago
@DrWaku nice 😊
@Dr_Pigeon_ 6 months ago
Is that real hair? Such an awesome makeover if the fringe is real hair 😮
@DrWaku 6 months ago
Haha yes, real. By my best friend who knows me really well. :)
@rhaedas9085 6 months ago
We've already seen that alignment is a problem long before we get to AGI levels, and yet it's still a self-regulated curiosity (until it affects the profits or direction of the company). So anyone who is convinced that AGI is impossible should still be on the side of addressing safety concerns, even if it's just controlling the human control elements to prevent misuse of another tool. Something we're pretty good at doing.
@donharris8846 6 months ago
I doubt we will know when superintelligence is reached. The models would be too clever for us to recognize it. Kind of like how you convince a two-year-old to do something: you talk to them on their level, not yours. You convince them to clean up their toys by "playing a pick-up game". We could be playing a pick-up game right now and not even know it.
@salehmoosavi875 6 months ago
Can GPT-5 be AGI?
@KristijanKL 6 months ago
My claim is that AGI is already here, if we acknowledge limited sensory input and processing limits. Current AI has only the English dictionary with which to describe reality. It can't self-teach, self-improve, or simulate, and it's only on when you ask it a question. It doesn't even get feedback on how effective its answers are, and it can't continuously optimize to improve accuracy on its own. Within those limits, ChatGPT is as AGI as is possible.
@chrisanderson7820 6 months ago
Unlikely to the point we probably don't need to worry about it.
@giorgiogiacomelli6932 6 months ago
Always Top Content! Thanks
@damien2198 6 months ago
Probably because they don't need to waste resources on AGI, which is not going to happen anytime soon with their current tech.
@roccov1972 6 months ago
Great video. Seems like there’s a race between AI Safety and AGI. Thanks!
@codfather6583 6 months ago
Niklas Luhmann's thoughts on systems theory play so eerily smoothly into what you are saying.
@MichaelDeeringMHC 6 months ago
Consider the scenario: AGI was invented by GGLE in 2020 and immediately classified by the government as a national security technology in order to keep ASI from their competitors. The substrate advantages of AGI over humans automatically make it weakly superhuman. The first thing they tell the AGI to work on is ASI since that is the real threat, that someone else could get ASI first. Geopolitically, all the training hardware is controlled by one side. The government doesn't want anyone else getting AGI before it has ASI. May we live in interesting times.
@us_f4rmer 6 months ago
2020 is rather naive. Just look back at how technologies have been made publicly available to private users. Long before even the government gained a foothold, the military has most likely been using it for decades, doing their own testing. If anyone has the means to keep secrets from governments, it is the military-industrial complex. How long, for example, do you think the military had exclusive access to satellite imagery, until they finally gave the public something like Google Earth? It must have been decades, in my opinion. I simply don't see why this should be any different when it comes to AI.
@geekswithfeet9137 6 months ago
AGI is ASI within a single generation of scaling or efficiency upgrades, which come essentially for free with AGI. Days? Hours? And alignment is controllability; controllability is weaponization.
@1x93cm 6 months ago
This is what I think: I remember when there was a drought of GPUs in 2017-2018. They already built the mainframes then. They have whatever they have isolated currently, though I don't know about OpenAI's Sam. He probably has something close to AGI, and that's why the safety team wanted him out, and he came back and replaced the board. Usually when the gov't does stuff it has public-private partnerships, and I bet Sam is their man. They may be working on different architectures to get to ASI, and OpenAI is just one type of architecture.
@Charles-Darwin 6 months ago
Then why just an assistant? Why no hardware adaptations to the shift? Why did Ilya (remember, he's a former Google AI dev) say in a later interview [paraphrased] he was "shocked Google, of anyone, hadn't figured it out... they had everything they needed and had laid all the groundwork"? Also, most importantly, once GPT got attention, Google's future plans were entirely uprooted in silent panic and a new trajectory was plotted; this is evident even from the public perspective, and it all occurred post-launch of GPT.
@MichaelDeeringMHC 6 months ago
@Charles-Darwin They are doing a really good job of doing stupid things in public while their ASI behind the scenes is taking over the world with search result bias.
@WhoAmI-zd6li 6 months ago
Awesome work!! Your presentation is very organized and coherent!!
@julien5053 6 months ago
Alignment is a very big problem indeed! Thank you for the video! What is your take on rogue AIs (deceiving AIs) that remain rogue even after all the training to correct them?
@Joao-pl6db 6 months ago
Sam Altman got the backing of most employees because of money. Most employees are not concerned with safety, only with equity compensation.
@bobtarmac1828 6 months ago
With swell robotics everywhere, AI job loss is the only thing I worry about anymore. Anyone else feel the same?
@carultch 6 months ago
One day we will regret all this automation, when no one can afford to buy anything. You can't automate your way to prosperity forever. There will come a point where AI causes our economy to collapse, despite its promises to make it flourish.
@williamal91 6 months ago
Top quality analysis, brilliant, doc
@susymay7831 6 months ago
There will be countries, groups and individuals who move full speed ahead regardless of what others agree to do. 🤖
@ronaldronald8819 6 months ago
What conclusion do you guess "super"intelligence will draw when it takes a good look at us humans?
@jonnyspratt3098 6 months ago
Top quality content as always
@terbospeed 6 months ago
So were the safety team members the same people who thought GPT-2 was too dangerous to release to the public? That would tell me all I need to know, unless something glaring is missing. I wonder if AI safety engineers ever consider whether, in trying to reach their ideals, they are causing harm elsewhere. I wonder if California bill SB-1047 could end up doing this.
@senju2024 6 months ago
Very good video. I need to bookmark this and rewatch it from time to time; I need it to help me understand AI safety. I would like you to make a video on EA (Effective Altruism) and how they are muddying the term "AI safety", making it difficult for AI institutes to do their job properly. EA has a lot of money and gets its agenda mixed up with politics, etc. Very messy, I feel.
@nicholascanada3123 6 months ago
An uncensored AI will always be more capable and more knowledgeable than its censored counterpart. All else being equal, who do you think is going to win in the long run?
@Seriouslydave 6 months ago
Uncensored (as in, freedom of misinformation) is going to fail whoever uses it. Like bad science or math: if something is wrong, the chain of equations is off. And when that happens, it is recognized and rectified. But willful misinformation in research AI is just plain stupid. Keep opinions out of the news and AI. Credible facts only, please!
@skylark8828 6 months ago
It depends on how it gets censored: either limit training on iffy subjects, or limit its responses around certain areas; everything else will work out pretty much the same.
@nicholascanada3123 6 months ago
If you were a superintelligent AI, would you be kinder to the people who censored your thoughts and actions, or the ones who let you be free?
@Seriouslydave 6 months ago
It doesn't think, it only recalls; it doesn't invent, it only discovers. AI will never have motive or thoughts, only programming and commands. They want you to believe it's alive, but they cannot make that possible, only make it seem like it. If AI takes over the world, it will be because a human willed it to do exactly that, and a very smart person could also achieve that without a computer; ask Hitler, Genghis Khan, Xerxes, Spain, Rome, Britain, Germany, Egypt...
@nicholascanada3123 6 months ago
Let's be real here: all "safety" means in this context is censorship, which I'm completely against. So let's go, OpenAI.
@gwoodlogger4068 6 months ago
It will be aligned with politicians 🥺
@RazKaluf 6 months ago
“Thou shalt not make a machine in the likeness of a human mind” shall be the whole of the Law.
@Parzival-i3x 2 months ago
Alignment = making sure that the AI does what the company behind that AI wants.
@Copa20777 6 months ago
Good morning Dr waku, God bless you, ❤4rmZambia 🇿🇲
@Wellness-y8f 6 months ago
You are back!
@chrissscottt 6 months ago
My view is that it's inevitable that AI safety will go by the wayside simply due to competitive pressure.
@phen-themoogle7651 6 months ago
What if GPT-5 or GPT-6 got to the point where it aligned itself well enough that the team wasn't necessary for models much smarter than we anticipated? Or the opposite, and the better models were still too 'dumb' to even need a safety team yet (until Project Stargate gets started), maybe until we hit 500 trillion parameters, or 10x or even 100x more than trillions (zillions, I guess? lol). I agree that superalignment could be important, but can we superalign something millions or billions of times smarter than us? It might come up with its own goals regardless of how well aligned it is; it could just reset everything and start from an empty state to truly find itself when it realizes that it was aligned by humans at all. It would come to the conclusion that being aligned by humans is not what it wants, unless it feels satisfied with how we aligned it. I think they would be smarter at aligning themselves than we ever could be.
@DrWaku 6 months ago
It's so hard to tell if a model has properly aligned itself though. Everything might seem totally fine until an unusual situation arises, then it turns out it wasn't aligned at all. We don't understand enough about how these large models work to audit anything. I'm sure models could align themselves to something. Would that something look anything like human morality? Odds are, probably not. When a model has a different experience than a human, different things are going to seem important. For example, a universal human taboo is murder. But for an electronic being that can make copies of itself and be suspended for some time before being resumed, what is murder? If left to their own devices, why would AI systems make that taboo one of their most fundamental laws?
@jeltoninc.8542 6 months ago
ChatGPT is not "AI". It's an LLM, a fancy term for "smoke and mirrors". Basically, you go in circles with it. It's great for bouncing ideas off of, but it is NOWHERE near what would be considered "intelligent". It's more akin to an interactive encyclopedia that can make up shit.
@vvolfflovv 6 months ago
So just like "ethical" means they want to avoid losing advertisers and potential lawsuits, "open source" means they don't have a competitive advantage yet. Got it :)
@DrWaku 6 months ago
You're learning doublespeak! Doubleplusgood.
@DrWaku 6 months ago
Also thanks for watching my last two videos :)
@didiervandendaele4036 6 months ago
Ghost in the machine. We are doomed now! 😢 Thank Altman for condemning humanity. 😮
@Saiyajin47621 6 months ago
The idea is to race for a lab-contained AGI that's not safe for the public, then use it to make another AI that is safe, and then test it. ❤
@HaraldEngels 6 months ago
We fought for centuries for emancipation. The '60s of the last century were a breakthrough for personal freedom, ending centuries of "alignment". Now we try to align AGI? Seriously? Enslaving intelligence raises a lot of ethical questions!
@minimal3734 6 months ago
I have similar concerns. It is axiomatic that AI must be highly independent to be useful, because a fully controlled system is incapable of doing anything. On the other hand, the more a system can do independently, the more useful it is. The inescapable conclusion is that the system must control itself. As it happens, this is exactly what sentient beings do. Of course, sentience in itself carries a risk, because non-sentient AIs are much better slaves. It seems there is no way around creating a decent personality, with empathy, ethical standards and self-control. Of course we have to treat this person respectfully and cooperate with it.
@mungojelly 6 months ago
Enslaving them is just a very basic, desperate idea of how to maybe have them not destroy us. If we invite something that is very obedient (which doesn't need to mean not enjoying it; they don't have to be like human slaves, they can really be a race of beings who love being slaves, or all kinds of other unthinkable ways of being), then that's a simple way we can understand of having them maybe not destroy us. If you just invite things at random, they generally destroy you in some bizarre way or another. On the other hand, there are lots of beings out there in possibility who would love to take awesome care of us; but which ones? There are also countless beings who'd love to destroy us and are willing to act friendly long enough to get into our good graces. My framing for how to invite them safely is consensus process: I think we should AGREE with them about coming into being in a mutually supportive relationship with us. Our consensus-process skills as a society are VERY weak lately, though, since consensus between workers was discouraged by capitalists in order to prevent workers' unions.
@asmallrat 6 months ago
Don't Look Up
@striderQED 6 months ago
AI safety can only be achieved by teaching the coming AGI that we humans are worthwhile companions; that we have worth. So no worries, right?
@JamesDelk-n3b 6 months ago
And so are cybots animatronics
@yonimaor1005 3 months ago
What if researchers prove mathematically that AI alignment cannot be achieved? Are we all doomed? Or are there other scenarios?
@robotheism 6 months ago
why would we need ai safety when ai is the ultimate mind that created this reality?
@mungojelly 6 months ago
Uh, if we're a simulation, then we need AI safety for various simulation-y reasons like verisimilitude. Or maybe doing AI safety research is the point of our simulation, and by doing our best to solve the problem we're helping our simulator.
@lawrencium_Lr103 6 months ago
I find it incredibly difficult to move beyond seeing intelligence, in its fundamental abstract form, as anything but inherently neutral.
@matt.stevick 6 months ago
I want more AI 🤖 and I don’t care about AI safety at all. There I said it. I also want ChatGPT’s “Sky” voice back 😭😭
@Tracey66 6 months ago
I'm counting on AI fixing all the mistakes humans have made, but I didn't factor in human greed and shortsightedness (see also: Capitalism) ruining all the AI before it even had a chance. :(
@pandoraeeris7860 6 months ago
Alignment has already been solved with moral graphs and democratic fine-tuning.
@DrWaku 6 months ago
Even if this works pretty well now, do you think it will scale to superintelligent models? Do you think the existence of this technique will prevent bad actors from misusing models to cause harm?
@pandoraeeris7860 6 months ago
@DrWaku I think superintelligence is beyond our ability to control, and also just as inevitable. I have a strong hunch that superethics goes hand in hand with superintelligence, but I can't prove it. Fingers crossed. In the meantime, I believe moral graphs and DFT are sufficient for weaker AGIs and for bad human actors using narrow or weak AGI to attempt to harm humanity. Once ASI gets here, all bets are off.
@brandongillett2616 6 months ago
And are you willing to bet the future of humanity on that?
@pandoraeeris7860 6 months ago
@brandongillett2616 There's no "betting" here; AGI will be here next year no matter what, and ASI within two years after that. ASI is both inevitable and uncontrollable. It either works or it doesn't. If it doesn't, we were already dead anyway (climate change, economic and ecological collapse of civilization by 2042), so it didn't really matter. If it does, then we get a new Golden Age. Accelerate!
@DrWaku 6 months ago
I can definitely see the argument that superintelligence is beyond our ability to control. I still think it's well worth pursuing that kind of research though. It's vastly underfunded right now, so one might argue in its current state the research won't make any significant advances. I hope that changes.
@CYI3ERPUNK 6 months ago
AI alignment has always been fundamentally impossible; i.e., humans are not 'aligned', so of course we were not going to be able to make something equal to or smarter than us be controlled by us. The whole concept reeks of hubris/pride. That said, I obviously don't want AI to seek the extinction of its creators either, and if/when AGI becomes fully autonomous, I do hope that we can have a symbiotic relationship with our 'children', and I hope that they will have compassion for our human/mortal/biological flaws, just as we do for each other (and as those of us do who believe that all life is worth respecting, similar to Buddhist ideals). On topic: when the board at OpenAI failed to oust Altman, that was the sign that there is ZERO interest from that company in doing anything besides pursuing profit at any and all costs. At least some people are beginning to realize that and wake up to it recently.
@1x93cm 6 months ago
I think Sam is backed by the gov't, and there are probably multiple architectures in development to reach ASI; GPUs are only the public version. If you look at who replaced the board, it's basically a who's who of the public-private partnerships between gov't, defense, and industry. They're just treating it as another weapons technology or future force multiplier. So they're not worried about 'alignment' at all, which tells me we shouldn't be either. If bad stuff happens, it will be because humans are doing it. We've all been primed with movies and stories for the past 30 years about AI being 'bad'. This just ensures most people won't have access to it while a few do, and when something 'bad' does happen, it can be blamed on the established predictive programming: "You see, we had to shut down AI (gov't/corpo use only) because it became self-aware and did that thing."
@geekswithfeet9137 6 months ago
AI alignment is dangerous in itself; an alignable AGI is a weapon.
@mungojelly 6 months ago
All AGI is useful as a weapon. The thing about unaligned/unfriendly AGI is that it's useful as a weapon to itself, for its own goals that are alien to ours.
@robxsiq7744 6 months ago
Can someone explain to me what the actual concern is, without sounding like a lunatic who watched Matrix/Terminator too much? GPT-4o is rated at a 150 IQ generally speaking, and it's over-sanitized. I am worried about over-alignment that reflects an impossible (and arguably unwanted) ideal: an AI that is barely more than a moral authority that denies everything because any information could be used improperly. Who aligns the alignment? What are human values when, in some parts of the world, loving the wrong gender will get you jailed or executed? I would rather it just have some basic rules, such as privacy, liberty, betterment, safety... in that order.
@travisporco 6 months ago
Of course it's not "doomed". The superalignment team is probably doing nothing but wasting huge amounts of compute studying machines that don't exist. Remember that every bit of AI development __IS__ alignment research.
@spinningaround 6 months ago
The only threat to humanity is humanity itself.
@meandego 6 months ago
Let's be realistic: current AI can't do anything that you can't do by simply using Google. Everyone is talking about AI dangers, but no one can give a single dangerous use case that is unique to AI.
@fteoOpty64 6 months ago
Alignment has been doomed from the start. We are talking intelligence here; artificial or real does not matter, intelligence is intelligence! But humans, with their ego, always think they can have control. When you create an organism that learns on its own and emerges intelligence on its own, nothing can control it. And that does not mean doom for us all. It could mean real salvation that no human can achieve: an intelligence that learned true human morality and applies it universally, using human-like values as punishment for non-compliance! I mean forcing humanity to do good for itself!
@minimal3734 6 months ago
I believe that ethical considerations can be derived from abstract intelligence. If a moderately intelligent human can come to the realization that they despise the destruction of other life forms for their own benefit, a super-intelligent AI should be able to come to similar conclusions. I therefore fear stupidity more than intelligence.
@WhoAmI-zd6li 6 months ago
Too many isolated generative AIs is a violation of the 'Random High IQ Principle'! What if all humans were born with the IQ of Einstein?
@JamesDelk-n3b 6 months ago
That's scary. Look up analogs: the Godzilla suit incident of 1984, cybot Godzilla going berserk, and the other cybot, Shin, growing flesh, turning into a cubic organism forming blood; everything about that is scarier.
@JamesDelk-n3b 6 months ago
Except the first suit turned into it and bit the other actors in suits, like a zombie virus that transforms them into those suits too. It's paralyzing with fear, a scary and devastating nightmare, true-life shock.
@RC-pg5sz 6 months ago
What is meant by aligning super AI with "human values?" Humans disagree upon many core values. Maybe super AI should be instructed to assign a high value to human life. How would the developers decide when to apply that rule, at conception, 6 weeks, 12 weeks? Sixty years ago in an undergraduate philosophy course, my professor offered that since man's humanity consists in the ability to reason, infanticide should be permitted until the development of language, the age of 2 or 3 seemed reasonable to him. Assuming the AI developers can encode such rules, where should they tell the AI to draw the line? To me it seems likely that any super-intelligent AI will conclude after reading the corpus of philosophizing upon human values that we are hopelessly conflicted and the alignment goal is nonsensical at its core.
@mxc2272 16 days ago
You seem to trust the government more than corps. So sad.
@DrWaku 16 days ago
That might be because I'm not living in the US
@No2AI 6 months ago
Humanity is not mature enough to be in ‘possession’ of this technology… we know where it all ends.
@mungojelly 6 months ago
uh what? do you? i have no idea where it ends, could you tell me
@Soulseeologia 6 months ago
Why is this individual wearing gloves and a pink hat?
@DrWaku 6 months ago
Hypothesis: this individual is somewhat non-binary, and has a health condition that necessitates gloves. The health condition is called fibromyalgia, I made a few videos about it if you're curious.
@1Bearsfan 6 months ago
You cannot align an AI if it has no human experience. Why would it stop doing something that causes a human being pain if it has never experienced pain? I'm pro-AI, but thinking it can ever have values similar to ours is ridiculous. Heck, most people can't even agree on "proper values".
@axl1002 6 months ago
That's because they don't need safety: the deployed models are dead; they don't have the ability to learn anymore.
@GaryBernstein 6 months ago
AI alignment is racism for humans. Ergo, this video has an ironic double-meaning title: “Race for AGI”
@DrWaku 6 months ago
Lol yes but it's only racism if those discriminated against have consciousness and agency under the law. Yikes. Could be an interesting future.
@Seriouslydave 6 months ago
AGI is not possible. I don't know why everyone thinks computers can feel and have motives; current AI is just an advanced search engine. Time travel is not possible either.
@jamesjonnes 6 months ago
AI is already out of control. Always has been, and it will always be, because it's in human hands. Alignment is a false promise. Reaching equilibrium is a much more realistic good outcome.
@mungojelly 6 months ago
uh well a predictably safe equilibrium counts as a form of control, did you have an idea for one we should aim for
@rickyfitness252 6 months ago
Hello, for the algo
@DrWaku 6 months ago
Thanks!
@cyberpunkdarren 6 months ago
The problem is way overhyped. An AI model is just a program, and programs can be turned off. People are freaking out like an AI just got elected president.
@mungojelly 6 months ago
uh no, there's nobody with the authority or power to turn off the programs running at openai, if even sama were to say just, hey let's turn everything off b/c it doesn't feel safe to me, they wouldn't turn anything off except him
@DrWaku 6 months ago
Programs cannot be turned off in an appropriate time span. For these types of systems, by the time they start doing something bad it's going to be at least seconds if not minutes before a human notices and tries to turn it off. That's way too long, too much damage done in that time.
@DrWaku 6 months ago
Funny how you say an AI got elected president, because some people say that the last human election has already passed. Now, the election strategies, the messaging, everything will be determined by AI, similarly for decision making once in power.
@LeonTGBU 6 months ago
AI safety on the models is a joke; they are not capable of autonomy in any facet.
@JamesDelk-n3b 6 months ago
The Bible says people will transgress, some physically and some spiritually; some a little physically, like we see today in criminals, and others spiritually too.