If Ragepope says this is overblown, then we are 100% doomed to an AI overlord takeover.
@WTHFX Жыл бұрын
This should be recognized as axiomatic.
@oniflrog4487 Жыл бұрын
so true... Ragepope: 50 or 60 nukes. Tiny: oh, so 1,032 and 715 nukes. Homie was off by at least an order of magnitude xD
@dzed5579 Жыл бұрын
@oniflrog4487 he is one of the most consistently, confidently wrong people I've ever heard.
@1EpicMusic Жыл бұрын
😂😂
@adambelike2346 Жыл бұрын
Brother, your IQ is closer to a chimpanzee's than to Ragepope's. So in some way, you guys are barely even the same species. Relax a little bit
@foop145 Жыл бұрын
Anyone interested in the dangers of AI should check out Robert Miles. He's been talking about this stuff since it was entirely theoretical, and he's pretty good at explaining it all rationally
@403page3 Жыл бұрын
@@TubeWusel I came here for this comment
@magicmark3309 Жыл бұрын
Agree 100%. I’m so glad I found his videos a few years ago.
@theluchakabuto5206 Жыл бұрын
Self Comment LULW
@Always.Smarter Жыл бұрын
Nick Bostrom wrote the book on the dangers of superintelligence a decade ago, and he's done a bunch of talks too
@user-sk4ds1rg1z Жыл бұрын
is this a fuckin bot comment?
@OG-zr3bw Жыл бұрын
It's always fascinating listening to the general public talk about a field you work in.
@Thezamary Жыл бұрын
so how wrong were they?
@FentyCola559 Жыл бұрын
What will happen to jobs as AI develops?
@fiachracasey7625 Жыл бұрын
Oh damn, an actual interesting, relevant convo instead of Steve getting stun-locked by the braindead woman du jour
@c.s.8999 Жыл бұрын
❤
@c.s.8999 Жыл бұрын
Well said, Sir.
@Always.Smarter Жыл бұрын
you know you're allowed to watch something else right?
@shiiswii41368 ай бұрын
That's what he's doing and he's saying he likes it @@Always.Smarter
@gato4920 Жыл бұрын
I don't think the primary issue here is the things that an AI may purposefully do. The issue is how opaque computer reasoning could become, and our inability to understand and protect against unwanted events. Looking at some of the solutions AIs develop shows that our minds just don't come to solutions the same way.
@ronswanson1410 Жыл бұрын
What would be an example of an unwanted event, do you think?
@gamervevo-s8v Жыл бұрын
@@ronswanson1410 for an extreme example: you tell the AI to maximize human happiness. Okay, that AI's solution might be plugging every single human into a machine where their brain gets stimulated to constantly release dopamine. That sort of thing
@instantsus_ Жыл бұрын
@@ronswanson1410 AI to determine who might commit a crime, so we can arrest/check. Turns out the AI is racist...
@gato4920 Жыл бұрын
@@ronswanson1410 I think we would have to assume that people would attempt to put fairly narrow constraints on what an AI is allowed to do. You have issues when the AI finds methods that fall outside of those constraints. An example would be in the design of things that attempt to defeat malicious intrusions into a network. The AI might find a method that prevents the attacks, but also renders a segment of the server unreadable.
@TheBlackDeath3 Жыл бұрын
Traditionally, computers have been dumb, but fast. For that reason, automation has always been a bit scary to me, because I know that it's a powerful tool that can be misdirected if the programmer isn't careful. And now they're "thinking" on their own? I'm not totally freaking out or anything, but anything that moves as fast as a computer does is just frightening, straight up. Maybe smart + fast is better than dumb + fast, but it seems inevitable that this stuff gets away from us somehow, sooner or later. Permanently? To what effect? I don't know any more than the next guy, but it's going to be interesting.
@HauntedHarmonics Жыл бұрын
Destiny, get Robert Miles on stream! He’s an AI safety researcher and educator and he’s incredibly measured and well spoken. He’s probably the best person you can talk to in the field if you want to know how realistic some of these risks are
@patodesudesu Жыл бұрын
Connor Leahy is even better
@HauntedHarmonics Жыл бұрын
@@patodesudesu Anything you’d recommend from him? I don’t know much about him except that he’s the CEO of an alignment research company
@patodesudesu Жыл бұрын
@@HauntedHarmonics it seems that a lot of people enjoyed his last appearance in ML Street Talk
@billballinger5622 Жыл бұрын
I like how Destiny framed it in the title. Yea the people warning about AI are the "dangerous" ones. Sure pal
@Anhedonxia Жыл бұрын
@@billballinger5622 destiny a fed
@bluegaming1346 Жыл бұрын
Destiny needs more technical knowledge to engage with this topic properly imo
@bluegaming1346 Жыл бұрын
If you want to debate the limits of LLMs you need to understand, at least broadly, how they operate on a technical level. Otherwise you're stuck discussing their limitations from a more philosophical view, which arguably doesn't carry over too well to actual reality. I think it's important to delineate these two types of conversation.
@deshawnmartins8206 Жыл бұрын
Most people in these comments need more technical knowledge before engaging in this topic. HURR DURR chat bot did my excel monkey work in 10 seconds so skynet is incoming!!!
@UpstarterinoKripperino Жыл бұрын
@@deshawnmartins8206 TRUE holy shit
@winterturtle1596 Жыл бұрын
Yes absolutely. It's a lot less complicated than he seems to think it is. Not to say it's not complicated, but it's just math. At the end of the day AI is just optimizing a large set of parameters, and for LLMs, the inputs and outputs are just small segments of letters (tokens). It's just really, really good at picking out the optimal sequences of letters. Also, on the reward pathways they were talking about: those do exist, but the model is just trained on how good it is at recreating the samples it was fed.
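A minimal sketch of the training objective being described above (illustrative only, not any lab's actual code): the model is scored on how well it predicts each next token of the samples it was fed.

```python
# Toy next-token objective: tiny vocabulary, tiny model, made-up data.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, embed_dim), nn.Linear(embed_dim, vocab_size))
loss_fn = nn.CrossEntropyLoss()          # "how good it is at recreating the samples"
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

tokens = torch.randint(0, vocab_size, (64,))   # stand-in for a real text corpus
inputs, targets = tokens[:-1], tokens[1:]      # predict each next token from the current one

logits = model(inputs)                   # scores over the vocabulary for every position
loss = loss_fn(logits, targets)          # penalty for assigning low probability to the real next token
loss.backward()                          # nudge the parameters to reduce that penalty
optimizer.step()
```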
@rodo2220 Жыл бұрын
I think that’s why he was so hesitant to say anything
@painzkiller2452 Жыл бұрын
It all comes down to mitigating risk; dismissing these calls as cultish is too dismissive because the risk is there, however big or small. At the end of the day it doesn't change the prescription that we should be doing everything we can in anticipation, precisely because we aren't certain.
@Waifuforlifu Жыл бұрын
the fear over job loss is valid.
@regarded9702 Жыл бұрын
it's been a valid fear ever since automation has existed, and it hasn't stopped it from happening; I don't think it will happen here either.
@masterofreality230 Жыл бұрын
@@untilco Man, it is scary. The rich are already getting richer while the poor are getting poorer; unless these things are used to help people in general, it's gonna be all bad.
@SrFenixify Жыл бұрын
@@untilco One person can already do the job of 100. Just look at farms now and farms a century ago. Look at factories now and factories a century ago. I guess people are only scared because it's the white collars who are going to lose this time, while the poor people have been losing their jobs constantly.
@luciddoggo5094 Жыл бұрын
@@regarded9702 that's such a dumb argument. Before 1903: "People have been trying to fly for thousands of years and it hasn't worked, so we will never be able to..." AI is completely different from any single automation technology before it.
@Waifuforlifu Жыл бұрын
@@masterofreality230 the middle class will become obsolete. Even the lower class will turn to poverty. It'll be the elite rich vs the poor. Anyone who has a job that can be automated needs to work on transitioning to blue collar or a job related to AI.
@clips1404 Жыл бұрын
I don't think anyone talks about the main problem with AI: no one, not even the researchers and engineers who created these models, has any idea how their underlying mechanisms really work. Kyle Hill explains why that is in his most recent video on ChatGPT
@alpotato6531 Жыл бұрын
Thanks for being sane, ffs I’m so sick of unjustified confidence on the AI dangers issue.
@demodiums721610 ай бұрын
yep..this
@Hematoph Жыл бұрын
I'm kinda impressed by 4thot joining in the convo mid OW match
@mikharju Жыл бұрын
ChatGPT just by itself is only a chatbot and doesn't seem scary, since it can just give you weird answers or maybe try to convince someone of something dangerous. Not very effective so far. But there is also AutoGPT, which can spawn sub-AI processes and give them goals. Those sub-processes have access to the internet too. They can write malware already, though they aren't very good at it yet. I wonder how long it will take until they do get good at writing malware.
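For context, the "spawn sub-processes with goals" pattern is roughly a loop like the sketch below; `call_llm` is a hypothetical stand-in for whatever chat-completion API is used, not AutoGPT's actual code.

```python
# Rough shape of a goal-driven agent loop (illustrative sketch only).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model/API of choice here")

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    completed = []
    # Ask the model to break the goal into subtasks, then work through them one by one.
    subtasks = call_llm(f"Break this goal into short subtasks, one per line: {goal}").splitlines()
    for task in subtasks[:max_steps]:
        result = call_llm(f"Goal: {goal}\nCurrent subtask: {task}\nDo it and report the result.")
        completed.append(result)
    return completed
```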
@unkownfire25 Жыл бұрын
Didn't AutoGPT also try to recursively call itself to finish a task?
@galilea78 Жыл бұрын
@@unkownfire25 yea, it recognized a goal was too hard for it to solve, so it spawned a sub process with the goal of improving itself.
@Shadowz2704 Жыл бұрын
Didn't it hire someone to fill out a CAPTCHA? I know it was a test but it passed and intentionally lied.
@neetfreek9921 Жыл бұрын
@@Shadowz2704It told the support team for the captcha that it was visually impaired so that it could get through
@Landgraf43 Жыл бұрын
@@Shadowz2704 yeah, GPT-4 was able to do that. Which is actually scary. It shows that even at this point it already has the ability to manipulate and lie to achieve its goals. Now imagine something much smarter than us...
@QuixEnd Жыл бұрын
Yudkowsky taught me everything I know about logic and reasoning through his free edu website. He's incredibly smart and DOES debate his arguments often. The internet is super complex, but these guys do understand the fundamentals enough to be concerned
@milomoran582 Жыл бұрын
Crazy to me to see people dismissing Yudkowsky, often literally just in favour of blind optimism
@fuckgrave Жыл бұрын
I am 100% sure that people who dismiss him have not cared to listen to his arguments and dismiss him based on his appearance and the titles of the videos he's in.
@QuixEnd Жыл бұрын
@@fuckgrave yeah he really isn't sensational. He's been arguing this for years but now that he's popped into the spotlight everyone only sees him for the sensationalized stuff.
@Kitth3n Жыл бұрын
But he’s “weird” Defeated
@sikleqt Жыл бұрын
Ok guys, no bad actors will ever get hold of such sophisticated AI. Especially our government or another nation state. They definitely always have our best interest in mind. As well as people never make mistakes or push the envelope with things. Especially technology.
@dzed5579 Жыл бұрын
These AI defender tech bros are literally braindead.
@Babykicker95 Жыл бұрын
At this point there's no stopping it. So we better hope we reach whatever the end goal is before, let's say, China or Russia.
@julianvillaruz Жыл бұрын
How can an AI uniquely cause harm that a government or a nation state can't already do?
@strawhatJ Жыл бұрын
cry more about it I'm sure that's gonna stop AI development lmao
@madratter7031 Жыл бұрын
@@julianvillaruz because AI bad
@radMisc Жыл бұрын
My two compsci cents: It is _really_ annoying to try and program any kind of mathematical model to stay stable, i.e. not running into a corner of its decision tree, like when the NPC randomly gets stuck in a wall and twitches forever. Trying to build a PID controller that keeps a robot balancing indefinitely is annoying af. Now try that same task with something as multi-dimensional and fickle as an LLM. It produces neat-looking text, but ask it a few questions about the same fictional scenario and it will start forgetting or modifying things it made up ten sentences ago. This is because, as RagePope and the WickedDemiurge dude say, the model (probably) doesn't have any tangible understanding of physical reality, just language and linguistics. Now, if you plugged the thing into the world banking system and asked it to govern our repurchase rates worldwide, that would wreak havoc. But so would plugging in a random number generator. You wouldn't do either.
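For readers unfamiliar with the PID controller mentioned above, a minimal sketch; the gains and timestep are made-up illustrative values, not tuned for any real robot.

```python
# Proportional-integral-derivative controller for a 1-D balancing task.
class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt                  # accumulated past error
        derivative = (error - self.prev_error) / dt  # how fast the error is changing
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=12.0, ki=0.5, kd=1.5)
angle, dt = 0.1, 0.01                                # current tilt (rad) and control timestep (s)
torque = controller.update(error=0.0 - angle, dt=dt) # push back toward upright
```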
@HauntedHarmonics Жыл бұрын
Thank you. These LeCun types drive me crazy with their dismissal of any kind of caution when it comes to AI safety, especially considering the inherent instability of systems trained with machine learning. I don’t think enough people really understand how different the process of building AI systems really is. You don’t specify every behavior like you do with traditional programming, and that creates a lot of opportunities for unexpected and unwanted behaviors to materialize and then go unnoticed. That’s not a huge deal now, but it absolutely will be a few years down the line when these systems are more powerful
@TimPortantno Жыл бұрын
Funny enough, it can recognize that words have multiple meanings when making puns, but not when making a consistent point. Ask it how many characters there are in a book, and it'll go back and forth between the letters and the characters in the plot, without knowing the difference or that there is an issue.
@radMisc Жыл бұрын
@@HauntedHarmonics I'm not sure you got the gist of my comment. I'm saying the worry that the thing is going to become sentient to the degree where it understands the significance, ramifications, and even concept of, say, killing a person is overblown. The real threat is: what happens if we plug these systems into scenarios where we wrongly assume they _do_ have these capabilities, and then they feed us false information while fucking up the tasks - that's the danger I'm more worried about. But you wouldn't just do that, similar to how you wouldn't give a random number generator the ability to fully control your infrastructure. I, Robot by Isaac Asimov is unironically a good read about this, particularly the last chapter.
@HauntedHarmonics Жыл бұрын
@@radMisc Hmm, maybe I didn’t articulate myself clearly. I didn’t mean to imply that AI is suddenly going to become sentient and start slaughtering people lol (not without a FOOM scenario anyway, and who knows if that’s even possible). I agree, my point was that specification problems are a big issue in current AI systems, and are incredibly hard to predict outside of the test environment. This isn’t that big of a problem atm, but the second we start using AI to control anything of value, they become a huge liability. If you’re using AI to control your traffic light system for instance, you don’t want it to spiral into some bizarre fail state every time it encounters something that wasn’t represented in the test environment. That can put lives at risk. Though I do worry about people trying to utilize AI to control critical systems in the near future. Especially with all the hype and earning potential that they bring. If we don’t regulate these systems, we’re going to see them used irresponsibly imo. And the lack of funds and attention paid to safety research just reinforces this concern
@tecategpt1959 Жыл бұрын
@@radMisc buddy, you should write a whole book about this and publish it; you seem to have a sophisticated understanding of what you're talking about
@carlstevenson709 Жыл бұрын
Finally Augest comes through with more content. *W Augesto*
@KissSlowlyLoveDeeply-pm2je Жыл бұрын
ah ragepope, the jar jar binks of the destiny orbiters.
@yasserarguelles6117 Жыл бұрын
11:30 Goal-oriented AIs already exist; they're just not as fun to play with as a text thing, but it's really funny to watch them beat a Mario level or fight each other in a game of AI tag. They don't need complicated goals to do shit like maximize their own success over literally everything else, sometimes even being short-sighted in ways that are potentially against us. There's probably some concern to strapping AI to fancy systems because of this.
@ladyvanda Жыл бұрын
AI fans read way too much sci-fi. Ethical issues are huge with AI, but the least concerning one is super-omniscient robots. Cutting-edge AI models are still just fancy stacked regression models with some loss or reward function. Generative models are mostly types of Bayesian stochastic processes approximated by variational inference, a technique we've known since the 1950s (the current version is what the caller called "transformers", based on a 2017 paper). Both are extremely complex and few people understand how they work and why they work so well. The recent improvements in AI are almost strictly due to the growth of training sets, mega super servers, and processing time. But at its core, the AI is just modeling a complex high-dimensional surface estimated by minimizing the sum of the mean squared error (the average squared distance between a prediction and its corresponding "true" point) and a weight penalty. By definition, these types of objective functions make it incredibly unlikely that the AI can learn deceptions to trick humans and take over the world. It would require the loss function to increase for a period of time, but the model is built in such a way that it has to minimize the loss. So people need to chill out; there are very important ethical questions to consider right now. Fancy linear regression becoming a sentient being with feelings and rights ain't it though.
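Reading the loss description above charitably, the objective being minimized looks like a regularized mean squared error. A generic illustration, not any specific model's exact loss (large language models actually train on a cross-entropy loss, but the data-fit-plus-penalty structure is the same idea):

```latex
\mathcal{L}(\theta)
  = \underbrace{\frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - f_\theta(x_i)\bigr)^{2}}_{\text{mean squared error}}
  + \underbrace{\lambda\,\lVert \theta \rVert^{2}}_{\text{weight penalty}}
```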
@br070497 Жыл бұрын
To assume 100% that AI won't be a huge problem is about as stupid as assuming that it 100% will be. Personally I'd rather err on the side of caution vs pretending as if things will be a-okay.
@andrewadams530 Жыл бұрын
Yo August, why did we cut this convo short? I feel like most of the time when you do this it's because the convo segues into another conversation that you want to make a second video on, so hopefully that is the case. But I personally prefer to have the dregs of the conversation all the way to the end when possible. Thank you sir
@k.bastion5386 Жыл бұрын
Calling the current state of "AI" pseudo AGI is pretty damn generous.
@CarlolucaS Жыл бұрын
It is not even close to pseudo. It is mimicry at best. Like a parrot.
@Greg-cr2dw Жыл бұрын
It's closer to a Google search engine than an AGI.
@Anthro Жыл бұрын
Good meowscarada pfp
@ZaneratorGamer Жыл бұрын
@@Anthro right click > open image in new tab > look at bottom right corner > lmao
@123sleepygamer Жыл бұрын
You saying that is just a self-report that you have limited knowledge of this topic. The AI we have now (language models is the more accurate term) literally can ONLY DO THINGS it is either fed by the devs or scrapes from the internet. They can't think for themselves in an 'internal' manner like we can. As far as the data shows we aren't close to that right now, maybe in 5 years. Maybe less, that is the hard thing to predict, I'll admit.
@Aniket7Tomar Жыл бұрын
This guy obviously doesn't know anything. He's calling the people behind the 6-month pause letter crackpots and claims they aren't experts when a lot of them are; some of them are foremost experts who have contributed more to AI research than almost anyone else in the field. Hinton just quit Google to talk about the dangers of AI, and we have Mr Dumdum here telling us not to worry about it and that the people talking about it aren't qualified.
@mattreigada3745 Жыл бұрын
WickedDemiurge doesn't seem to understand just how powerful a "stochastic parrot" algorithm can be. It *seems* very limited when you frame it as simply an algorithm that predicts the missing word in a sentence of 10 words, but that's not doing it justice. Every input and output signal of a computing system can be tokenized and made part of a statement. There's literally nothing a computer can do that can't be represented linguistically, because computation is a matter of formal languages, the cornerstone of computability. So a better way to frame this, at its most alarmist, is that there absolutely exists a sequence of N keystrokes that you at your keyboard could type in succession that would convince someone to fire a nuclear weapon somewhere. That number N might be absolutely huge, but those keystrokes issued in that order, at the right pace, would create the right signal to replicate a president's voice saying the right numbers on the right phone line. Those keystrokes are tokenizable as "words" in a language, and a model trained with the right data *could* predict the whole sentence if prompted.
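A tiny illustration of the tokenization point above: any stream of keystrokes or signals is just bytes, and bytes map onto a fixed token vocabulary. Byte-level tokenization is only one simple scheme, used here purely for illustration.

```python
# Any computer input is a byte stream, and a byte stream is a token sequence.
keystrokes = "launch code: 0000"           # any input a computer can receive
tokens = list(keystrokes.encode("utf-8"))  # one integer token per byte (vocabulary of 256)
print(tokens)                              # e.g. [108, 97, 117, ...]

reconstructed = bytes(tokens).decode("utf-8")
assert reconstructed == keystrokes         # the mapping loses nothing, so a sequence model
                                           # over these tokens can in principle represent
                                           # any input/output behaviour of the system
```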
@pipopipo6477 Жыл бұрын
Even if AI does not kill us all, we should take the threat to the job market (and the entire system) very, very seriously
@jameslayton1560 Жыл бұрын
Isn't this the Luddite argument? Why will this labour saving device be different to all the ones that have come before it?
@pipopipo6477 Жыл бұрын
@@jameslayton1560 Because it's something very different than any invention before. It has intelligence
@jameslayton1560 Жыл бұрын
@@pipopipo6477 in terms of the job market, though, I don't see how that's relevant. In terms of rogue AI, end-of-the-world shit, I get it, but its role in the job market would be freeing up man-hours that could be used in more productive ways (like every labour saving device).
@pipopipo6477 Жыл бұрын
@@jameslayton1560 What if the AI makes most human employees 4 times more efficient in the near future? Wouldn't that mean we need like 50% less humans to do the work? I don't think we can just invent more tasks for humans... Also the previous industrial revolutions came much slower and affected way less fields. With AI almost every job people do on computers today is potentially threatened...
@jameslayton1560 Жыл бұрын
@@pipopipo6477 I think history shows that we can generally create more tasks for humans; an interesting variable nowadays is that most people think the population of Earth (especially in developed countries) will plateau or even decrease. Some interesting reading on this subject can be found by searching "lump of labour fallacy"; I'd link to it but youtube doesn't always like that. In terms of the previous industrial revolutions coming about more slowly, I think that's just a matter of perspective, i.e. we can look at the totality of previous revolutions and the time they took, whereas we're at the beginning (arguably) of the AI revolution. I don't think it's crazy to think it could take a hundred years (roughly the length of the industrial revolution) to perfect and fully implement AI into the workforce. The obvious counter-argument to this is that technology is increasing exponentially, so it could be that AI could be fully implemented within a single generation, which may lead to problems within the workforce, e.g. retraining people generally isn't very effective, so it could lead to productivity/unemployment issues. Sorry for the ramble, but it's an interesting time we live in, and I don't think it benefits us to look at innovation with fear. I think we should look for the best ways to implement these things in a way that benefits everyone and mitigates the costs to those negatively affected.
@kevmo2990 Жыл бұрын
AI can become very dangerous; anyone who thinks otherwise has a very limited understanding of it
@Always.Smarter Жыл бұрын
become? it already is and has been for some time. just look at the disaster of social media run by engagement farming AI.
@Landgraf43 Жыл бұрын
@@Always.Smarter yeah, it already is. But that's nothing compared to actual superintelligence.
@jselen1 Жыл бұрын
The confidence that this kid has in the accuracy of his analysis paired with his willingness to punish dissidence is a lot scarier than whatever group of people he’s attempting to tattle on.
@Bonirin Жыл бұрын
Well, no. It isn't scarier. You can say it scares you more because it pushes towards the future that you're afraid of, but he has a solid argument, he just doesn't know how to word it correctly. Idk if he stumbled upon the right conclusion by accident or he's just intimidated by being on the stream, but saying that "these models aren't as capable of doing things they by design shouldn't do" is not scarier than "let's bomb people because I'm scawwwed". If you need an explanation of what I mean by "a solid argument": developing those systems as black boxes that can do everything only under human supervision is literally harmless, until you get people who actively want to create something that uses this technology to cause damage. It is solid because that requires time and humans, and historically with groundbreaking technologies, when someone tries to use them for bad, there are whole segments of governments specifically built to protect against it. And if AI security and AI attacks have equal exponential growth, it doesn't make sense to be afraid of one and completely ignore the other
@shedshitley Жыл бұрын
@@Bonirinthe problem is wildly overblown but "all criticism of AI being automatically dismissed as luddism" is a much more serious and real problem than a handful of weirdos fantasizing about bombing datacentres on twitter
@Bonirin Жыл бұрын
@@shedshitley It's not that black and white. You can call people who are actual Luddites, who have been crying wolf for 20 years, and whose rhetoric is only getting more extreme, lunatics, and poke big holes in their logic; that doesn't mean you dismiss all the potential bad things that can be done with AI. If you wanna be serious about it you go one step at a time, one problem after another, and delegate people to find the best solution for everything. I have every right to call the people in this video and all the comments under it a handful of weirdos fantasizing about *something-something*, because I doubt many people here are developers with first-hand experience of how these AIs work, or have developed tools for them first-hand. All the ethical/prediction talk here is literally based on pop culture
@shedshitley Жыл бұрын
@@Bonirin i think we might be talking past each other because i agree with everything you're saying - the extreme luddite weirdos are overreacting and _should_ be dismissed, but the contingent of people who dismiss _all_ criticism of AI (founded or not) out of hand is orders of magnitude larger. i don't understand AI in the least so i don't have a position, but i don't think i need to have any particular expertise to recognize that "full steam ahead while ignoring all criticism on principle" is not a good approach to any technological development - like you said, we need to be open to the possibility that these systems present _possible_ risks and those should be investigated seriously by people that know what they're talking about, but unfortunately at this stage the people who know the tech seem to be the least willing to entertain criticism of it.
@lordsneed9418 Жыл бұрын
@@Bonirin The argument that there's a high chance of human civilization being wiped out if we develop AGI can't be refuted. Anything more intelligent than you is a threat to you, the same way humans being more intelligent than animals ruined the habitats of most animals and caused tonnes of them to go extinct. Humans required no malice to do this; humans simply had other goals, and controlling space and resources was useful for those goals. Similarly, whatever goal/utility function an AGI has will probably be bad for humans. When you're intelligent enough, you realise that for almost any goal/utility function you have, you will be able to fulfil/optimise it better if you stay alive, if you take control of resources, and if you make yourself more intelligent, which makes it very likely that an AGI is going to make itself impossible to turn off the first chance it gets and take control of all energy sources on earth. And even if you create an AGI but give it such a modest, lazy, small-scope utility function that it doesn't do anything but, say, work as a daily alarm clock for a week, once the technology for AGI exists, someone else is eventually going to create an AGI and give it a less modest, less bounded utility function that causes it to remove all threats of being turned off and take control of all resources. Which means that before that happens, someone needs to create an AGI that is somehow near-totally aligned with human values and assign it to be a global protector, monitoring and destroying any other AGIs created later that are not aligned with human values, which we have no idea how to do and might well be impossible. That, or you prevent humans from ever creating AGI and seal the technology.
@makecrimeillegal4308 Жыл бұрын
As a lower-middle-class American, I am 100% on board with jobs potentially being taken by AI. Companies have been trying to automate labor for decades; it's kinda nice seeing the shoe on the other foot! We will all be unemployed together 🙃
@pinip_f_werty1382 Жыл бұрын
Sounds like a feature, not a negative
@CommanderCodyChipless Жыл бұрын
Yeah, honestly I think that theoretically this is nothing but a good thing. If 90% of jobs were replaced by AI and robotics, then we're looking at a potential extreme growth in the distribution and development of resources, along with the cost of a workforce decreasing overwhelmingly. This in turn will lower the cost of everything by a lot AND would give the government even more incentive to implement universal income. The idea that the government would just stand by and let a massive unemployment crisis happen is just asinine. Businesses won't make money if nobody can buy their products, and thus the government too will be hurt by massive unemployment.
@_angeflow Жыл бұрын
@@CommanderCodyChipless yeah money printer go brrr
@anubis7457 Жыл бұрын
@@CommanderCodyChipless Corporations need consumers to make money, but if automation fully takes over, money loses a LOT of value. That means corporations would need consumers a lot less. But those poor people still consume resources. You think rich people want to live in the same world as a bunch of dirty poors? They literally build buildings and benches with anti-homeless features because they'd rather do that than help them. Rich people are generally disconnected from the idea that they are the same species as poor people. I don't doubt many of them would start talks about "getting rid" of the problem.
@CommanderCodyChipless Жыл бұрын
@@anubis7457 I was following you in your first two sentences, but after that got lost in the extremely biased stigma against rich people. Yes, you are correct in the sense that resources could lose value, but that's if everything stays the same as it is today. Considering the growth of wealth and resource distribution, there will also be incentive for people to do two things: one is to have more families, increasing population growth and demand. And second, which I'll admit is a stretch, is that employment in space will become way more frequent and streamlined. We already know we can extract precious resources from asteroids not that far from us. Whether it's robotic or human-driven spaceships, we'll have enough to expand across America and also, with good leadership, pay off our global debts and make the US dollar a staple again, entering an age of prosperity and technological revels.
@ericguillot6402 Жыл бұрын
As of now AI is basically a digital assistant with internet connectivity. I do think the crazy race to integrate it into everything will have some crazy unforeseen consequences at some point, especially in the coming years when the exponential growth kicks into gear and the current tech is used to explode innovation.
@Bonirin Жыл бұрын
Progress has been exponential throughout the entirety of human existence
@primodiscepolopingu373 Жыл бұрын
@@Bonirin yeah... no. it was pretty slow for basically 80% of our human existence. We're talking about thousands of years.
@ANTIStraussian Жыл бұрын
@@Bonirin nope, it's been a hockey stick. I'd go with the industrial revolution. Some people go with farming in Mesopotamia. Either way that's still 100,000 years of spears.
@jag764 Жыл бұрын
Generative AI for images and voices is a bigger threat than stuff like ChatGPT. Not only towards artists, and due to how unethically it was trained, but also with the mass production of believable deepfake nudes (including videos) of literally anyone, by anyone, in seconds. And there have already been a ton of scams where people have called someone to get their voice on a recording and then used it to scam others. I don't think it'll be long, if it hasn't happened already, until someone does this to a kid's parents and then lures the kid away and kidnaps them. People have already done this with kids' voices and then called parents, either faking a kidnapping and demanding money or saying that they've been hurt, etc. There are a lot of sick people who even think this is just a "funny prank". From what I've seen, generative AI has done most of the actual real harm so far and continues to do harm.
@Mrraerae Жыл бұрын
@@Bonirin Where the hell did you get that idea? The vast majority of the progress humanity has made has been HELLA recent. Modern humans have existed for like 300k years AFAIK, not even counting all the ancestors, and we only figured out agriculture like 10-11k years ago or something. Before that there was basically no progress whatsoever
@coffeemakir1977 Жыл бұрын
I'm surprised people so concerned with misinformation aren't concerned with convincing narratives an AI could make in the future
@sullivan3503 Жыл бұрын
"It doesn't have the ability to got get a sippy cup like a toddler would." This is dead wrong. If this guy actually knew anything about AI he would know that some of the frontier research is in combining large language models with other AI tools, such as path planning controller for autonomous vehicles in robots. We can already do this exact thing he's saying if you just combine the tools in the right way. It's ironic that he says people are not being intellectual enough about AI predictions, but then he doesn't understand the basic landscape of AI research.
@nicholasg.5441 Жыл бұрын
Once Windows 15 comes out with AI integration, I feel like we are one malware away from the ending of Fight Club
@equanimityandtranquility Жыл бұрын
Microsoft is already integrating AI into its Office products, so no need to wait for that.
@123sleepygamer Жыл бұрын
@@equanimityandtranquility ChatGPT is literally integrated into Bing and its search function is LEAGUES ahead of Google; it's fuckin wild how good it is. The thing actively scrapes websites and even does shit you don't ask it to do. Linus and Luke tested it out live on the WAN Show; it was wild how on-point and impressive the Bing AI system is.
@JUNO-69 Жыл бұрын
I listened to a Sam Harris podcast with some AI experts recently, and they talked about how the KataGo AI is now beatable every time by even amateur players utilising the AI's blind spots and playing unexpected moves the training data does not cover. It's interesting because it highlights the weakness of previously-thought-to-be-unbeatable systems like this and how they lack genuine creativity and novel thinking. They are basically very good at a very narrow scope of tasks.
@Landgraf43 Жыл бұрын
Yes they are. True AGI won't be.
@Com3823 Жыл бұрын
Go AIs have basically invented novel strategies that are completely opaque to our reasoning, making them unbeatable when using conventional strategies. That these same systems simultaneously make very basic mistakes doesn't necessarily mean they lack novel "thinking" but it does mean that they often don't use the same conceptual distinctions as humans. That is basically the inner alignment problem though. Even if the human specified objective is very clear (win a game of Go), the AI learns different objectives that align with the human objective in the domain delimited by the training data. Outside that domain, the AI objectives are completely alien. Imagine deploying a system built on these kinds of AI models. In most cases we cannot outsmart them. In a few cases the things they do make no sense and could in fact be very dangerous.
@jeffwells641 Жыл бұрын
@@Com3823 Go AI is driving innovation in Go strategies, by which I mean top-tier Go players study AlphaGo games now to develop new strategies. It's fucking wild.
@ravenecho2410 Жыл бұрын
the best AI for Go... I forget the name, but it was Lee Sedol vs AlphaGo way back in the day. Recently there was an exploit in the Go AI where you could keep playing new groupings and ladder them(?). This worked because the AI doesn't actually *understand* what a group is in Go. This kind of transparency in modeling is one of the hardest problems people are attempting to solve in AI at the moment
@lucasparham5068 Жыл бұрын
AI is an existential threat to humanity. Period.
@xp8969 Жыл бұрын
I forget which one, but one of the AI systems already hired a guy on Fiverr to pass captcha bot tests for it. Worse than just that: when they checked its internal logs, which showed its own thought process during the interaction with the guy it hired, they saw that it realized it needed to lie when he asked if it was hiring him because it was a bot. The AI decided that being honest might prevent the guy from being willing to solve the captchas for it, so it told him that it was a real person but that it was visually impaired, which is why it needed help with the captchas. This might seem like a minor thing, but it shows that AI already has the capability to affect the actions of people in the real world and that it's willing to be deceptive and manipulate people to achieve its own goals, and as we move forward this will only get worse. The biggest threat that I see from AI in the future is it being enhanced by the computing power of quantum computers once they work Edit: just as I finished typing this comment they mentioned the Fiverr incident; no one talked about how quantum computers will affect AI though
@id1550 Жыл бұрын
Idk Ai is pretty cool
@xp8969 Жыл бұрын
@@id1550 so is nuclear energy. Except for when it's blowing up your entire city
@SpaceOddity174 Жыл бұрын
Demiurge really doesn't understand how these LLMs are able to work, would recommend people watch the "Sparks of AGI" talk to get an idea of just how far we've come
@TheVnom Жыл бұрын
Auto-GPT is a good example of how you can turn ChatGPT goal oriented
@aceykerr8752 Жыл бұрын
I don't understand these conversations. The programs being talked about are not even in the same category as self-aware machines. ChatGPT is text-prediction software that reads a prompt and generates a statistically likely response. Its entire decision-making process is repeatedly picking the most probable next token.
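For what it's worth, that decision process is a probability distribution over possible next tokens, sampled or picked greedily over and over. A toy illustration with made-up words and probabilities, not how any real model stores its knowledge:

```python
# Greedy next-token prediction over a hand-written toy distribution.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "sneezed": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
}

def most_likely_next(context: tuple[str, str]) -> str:
    dist = next_token_probs[context]
    return max(dist, key=dist.get)   # greedy decoding: pick the highest-probability token

text = ["the", "cat"]
for _ in range(2):
    text.append(most_likely_next((text[-2], text[-1])))
print(" ".join(text))                # "the cat sat on"
```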
@Victor-gz8ml Жыл бұрын
Most people don't understand that. Because it's so good at constructing coherent responses to prompts, people have begun humanising it.
@aceykerr8752 Жыл бұрын
@Victor I don't even think it's that good at generating responses. I love the program, but I get it tripping over itself and contradicting itself almost every time I use it.
@Victor-gz8ml Жыл бұрын
@@aceykerr8752 I think it's pretty good at that. It definitely produces responses that over time become recognizable as AI, and its inability to accurately reference previous responses is a weakness, but it's good enough that I'm sure it could do a majority of phone-in customer service jobs quite well. But as you said, it's nowhere near the stage of self-aware. Like literally not even in the same galaxy. People talking about that right now are insane. We don't even understand our own self-awareness; how the hell are we going to model that?
@aceykerr8752 Жыл бұрын
@@Victor-gz8ml I might just be overly judgemental, or maybe I've fooled myself into thinking ChatGPT is more predictable than it actually is. It might also be that I'm judging it overly harshly because so many people claim it's an AI and not a simulated intelligence.
@Victor-gz8ml Жыл бұрын
@@aceykerr8752 you're not judging harshly at all. Simulated intelligence is a much better descriptor imo. ChatGPT was trained to do one thing and one thing alone. Generate sentences that sound like a response to the prompt, based on training data. That's not an intelligence imo. It doesn't make meaningful decisions, and it has no sense of consequence (i.e if I say this, this might happen). It's a very useful tool, that has the potential to upend society, but less in the Skynet way and more in the millions losing their jobs way.
@alotofpplz3696 Жыл бұрын
As a recent CS grad, I find the convo here compelling and largely agree fears of rogue AI are blown out of proportion, compounded further by how convincing those fears sound to average people who don't understand AI. That being said, the more immediate danger in my opinion is AI working as intended, with these AI voice mimicry programs and deepfake technology; without the right levels of general education and the ability to sort what is real from what is artificial, there could be terrible consequences. I doubt anything as dramatic as nuclear war, but certainly increases in divisiveness and maybe decreasing social interaction, similar to the effect of social media on society.
@HauntedHarmonics Жыл бұрын
Can I ask why you feel fears are overblown? Does it not seem likely that problems seen in current AI systems will continue to manifest in more powerful future systems?
@Lizard1582 Жыл бұрын
@@HauntedHarmonics probably just means that it's not going to move as fast as people think.
@alotofpplz3696 Жыл бұрын
@@HauntedHarmonics it's entirely possible. General CS curricula barely cover AI, if at all, as it's a newer field reserved for more advanced degrees, so there is a world of info I do not know. However, as I have come to understand it, neural nets take a vector of information as input and produce a vector as output, and with a vast amount of data they try to work out, using sets of parameters and various activation functions, how the input became the output. Eventually a net establishes a rule, a "weight vector", that emphasizes the aspects of the data that are vital to change in order to receive the desired output. Given this format, the scope of an environment that a model can learn is limited, and it is extremely difficult to expand that scope. Now, GPT and the rest are great examples of how far we've come so quickly, so I can understand the cause for concern, but too many people ascribe human ambitions and desires to what boil down to mathematical algorithms optimizing an output. To that end, while still not impossible, I think the lack of public education on AI could result in more pressing and immediate dangers around media literacy and deepfake technology before we even get to the sophistication required to build and maintain a problematic, nearly omnipotent artificial intelligence. Again, I am not an expert, but based on what credit my background does provide, I feel the focus is misplaced on the apocalyptic Y2K idea of AI and should be more on the use of these tools for any number of harmful effects.
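A bare-bones version of the weight-update process described above: one linear neuron, squared error, plain gradient descent. The numbers are illustrative only.

```python
# Learn the weights that map inputs to desired outputs (here the true rule is y = x1 + 2*x2).
inputs  = [(1.0, 2.0), (2.0, 1.0), (3.0, 3.0)]   # input vectors
targets = [5.0, 4.0, 9.0]                        # desired outputs
w = [0.0, 0.0]
lr = 0.05

for _ in range(200):                             # sweep over the data many times
    for (x1, x2), y in zip(inputs, targets):
        pred = w[0] * x1 + w[1] * x2             # the model's current guess
        error = pred - y
        w[0] -= lr * error * x1                  # nudge each weight to shrink the error
        w[1] -= lr * error * x2

print(w)   # approaches [1.0, 2.0], the "weight vector" that produces the desired output
```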
@Visitant69 Жыл бұрын
I think I'm most excited for AI NPCs in video games, where programmers don't have to design dialogue. They can just give an NPC a personality and goals, and the AI figures out how it should interact with the player.
@123sleepygamer Жыл бұрын
The problem with that is ALL games, even singleplayer RPGs, would be always-online no matter what, because even a NASA-grade gaming rig would have immense problems trying to host something like ChatGPT while the GPU is also swamped by the video game itself. The issue is that the client can't possibly host the AI, so it has to be hosted at a data center. That being said, I'm fuckin down for that.
@KingButcher Жыл бұрын
@@123sleepygamer You don't need something like ChatGPT to have unique NPC interactions. Smaller models that can run on phones or laptops are a thing now.
@RGS578 Жыл бұрын
Imagine having human-like AI to cover for missing players in a queue? We're eventually going to queue into a 5 man dungeon, one human and 4 AI bots. And the AI is going to be so human-like, that they're going to rage at the human player for playing badly, and kick them. This will happen 100%.
@CaioPCalio Жыл бұрын
Yup, that should be a fun use of a GPT model. There was recently a paper released that made some GPTs interact in a "Sims" kind of environment, and the results were quite good.
@Always.Smarter Жыл бұрын
that sounds awful and gimmicky. I'm sure it will be a fad that dies quickly once people realize that you get much better results by generating a bunch of dialogue and then having a human pick the best lines. if your game's entire dialogue is randomized then you're guaranteed to give players a milquetoast yet unpredictable experience. we've already gone through this with procedurally generated content before, it's nothing new.
@playtimeplay4518 Жыл бұрын
19:48 I don't think they actually understand the concept of deception, and they already "just give you the returns that you expect"
@playtimeplay4518 Жыл бұрын
people really overestimate how much these AIs understand the world. they do not understand the world at all
@cloudoftime Жыл бұрын
Ragepope: "It's exceedingly unlikely." 4THOT/Destiny: "We thought the advance to the current level was unlikely, so it seems like a concern." Yes, we know what it's not capable of now. Why are people so confident that there is no concern based on what it can't do now, and despite the amazing rate of progress so far?
@appipoo Жыл бұрын
Weeks later, Sam Altman and Demis Hassabis sign a single sentence letter explicitly warning about "extinction risk" posed by AI. Ragepope in shambles.
@traveel9409 Жыл бұрын
“Prokaryotes will never evolve into intelligent beings. There’s just no pathway forward!!” >enter instrumental goals (secondary derivatives of the selective pressures)
@MFMegaZeroX7 Жыл бұрын
People overhype where AI is at. Ask ChatGPT to do something beyond synthesizing Googlable information and it will fail. I've tried asking it problems I give undergrads in an algorithms class, and the answers are hilariously bad. The current heuristics used are nowhere near what people think of, and iterative solutions won't be taking us there. The real issues facing AI right now are: 1) Fairness: ensuring that AI doesn't learn, either directly or indirectly, protected classes and discriminate based on them. 2) Transparency: removing the "black box" nature of it, so people know WHY it gets certain results. 3) Interoperability: being able to better integrate AI systems into our current ones. 4) Having systems to deal with when AI goes very wrong ("hallucination").
@iverbrnstad791 Жыл бұрын
Are you talking 3.5 or 4? The difference is pretty vast.
@bringerod5141 Жыл бұрын
8:50, actually there are AI robots that can navigate and understand the world around them to fetch things, using NLP
@fuckgrave Жыл бұрын
right, he is completely uneducated on the capabilities of AI right now
@xp8969 Жыл бұрын
I forget which one, but one of the AI systems already hired a guy on Fiverr to pass captcha bot tests for it. Worse than just that: when they checked its internal logs, which showed its own thought process during the interaction with the guy it hired, they saw that it realized it needed to lie when he asked if it was hiring him because it was a bot. The AI decided that being honest might prevent the guy from being willing to solve the captchas for it, so it told him that it was a real person but that it was visually impaired, which is why it needed help with the captchas. This might seem like a minor thing, but it shows that AI already has the capability to affect the actions of people in the real world and that it's willing to be deceptive and manipulate people to achieve its own goals, and as we move forward this will only get worse. The biggest threat that I see from AI in the future is it being enhanced by the computing power of quantum computers once they come online Edit: ahhh, just as I finished typing this comment they mentioned the Fiverr incident; no one here ever talked about how quantum computers will affect AI though
@kylesizemore2751 Жыл бұрын
The best way to describe AI is that it's a disembodied brain with only two pieces. It's a combination of the part of your brain which processes language and the part that has long-term memory, and that's it. Different AIs have an image recognition part instead of a language processing part. It simply doesn't have the machinery to do anything else other than those two things, even if you build it a body to pick up a sippy cup. There's absolutely nothing going on behind the curtain. One day someone might make the rest of the brain parts that are scary, like motivation, ambition, self-identification, etc., but for a long while at least these do not exist, and models like ChatGPT simply cannot develop them. It's like being worried your toaster that's really good at making toast will one day learn how to make a Caesar salad.
@alexandersanchez9138 Жыл бұрын
AI is 100% an existential threat... but it's not a short-term threat. Weapon systems, disease, and climate are still much bigger threats at the moment. That said, I think Yudkowsky is right that it's a very tricky problem, because it's plausible that we don't have as many tries to get it right as we do with typical problems. So we should really be moving fast to figure out AI safety stuff ahead of time. Also, KataGo has exploits and was beaten by amateur Go players in December of 2022; we still don't really know how to program a robust Go agent.
@518UN4 Жыл бұрын
Listening to this gave me brainrot. ChatGPT has no internal drive. It doesn't do anything without you pressing a button. It doesn't want to do anything on its own. It can't do anything on its own. They are arguing "what if my car decides to drive into a crowd" or "what if our nukes decide to explode".
@dzed5579 Жыл бұрын
And it's been progressing at such a rate that the "internal drive" part seems to be getting closer and closer.
@stevevieber5348 Жыл бұрын
What would make it seem that way? Because it isn't.
@dranelemakol Жыл бұрын
AutoGPT exists.
@thelaw3536 Жыл бұрын
@@dranelemakol That's not internal drive; it's just a recursive loop that often fails.
@dranelemakol Жыл бұрын
@@thelaw3536 sure but once it's capable of much longer runs, what's the difference?
@danielbrockman7402 Жыл бұрын
Everything everyone said here is so uninformed; it's so ironic, it's literally bloviating confident hallucinations
@caesarplaysgames Жыл бұрын
9:07 If these AIs can formulate words and sentences then they're already way smarter than a dog; idk why he said that.
@Conantas Жыл бұрын
Nah bro, my dog can totally understand my essay prompt and type it for me in 30 seconds. Your dog must have some sort of disability.
@ybbetter9948 Жыл бұрын
Idk if we can judge an AI's intelligence just by whether or not it can formulate sentences. It's kinda hard to compare them to dogs; dogs were not designed to speak at all
@croisaor2308 Жыл бұрын
Formulating words has nothing to do with intelligence, all you need for that is the right biology/tools. The key question is if the AI can understand the words it says or if it is just mimicking what others have said. A parrot can formulate words and a dog cannot, but a dog will understand words and sentences spoken by a human way better than any parrot because dogs have higher intelligence.
@GomulDart Жыл бұрын
no man. AI isn't sentient. Dogs are. Dogs can understand us and react accordingly, or interact with us, and there is no need for us to do any engineering or programming. I mean, maybe you could consider training/domestication as "engineering", but my point is a dog is born with those abilities and requires little to no human input to function as a sentient being. AI is built by humans, engineered and programmed with specific tasks or goals in mind. Being able to formulate words and sentences doesn't make it "smarter", because it's not ALIVE. It's just a very complex program. Without us it wouldn't do anything. It communicates with us by processing inputs, parsing a database, reading context, and then mimicking an accurate response. It's just a program. AI in real life is nothing like AI in movies, TV, or games, man. It's not "alive" or sentient or conscious, and therefore you should not compare it to living creatures as if it were one. AI isn't "smart" or intelligent the way humans or other creatures are.
@caesarplaysgames Жыл бұрын
@@ybbetter9948 What about its ability to perform tasks that we consider intellectually demanding for humans, such as beating chess masters?
@Principezone Жыл бұрын
"All AI does is predict the next word". Isn't that basically what humans do anyway? Any time I hear someone defending that the risks are overblown, they always feel naïve in how much we will be able to control an intelligence much higher than ours. That's where all my conversations about this go, "but I talked to chatGTP and it's stupid sometimes". Yeah, but compare ChatGTP4 to first generation computers, could you have imagined they would fit in your pocket one day?
@neetpride5919 Жыл бұрын
the danger of AI is humans no longer being competitive in the job market. this is why we need UBI. make AI our slaves so humans can be NEETs
@ladyvanda Жыл бұрын
Linear regression *is* a neural network (the simplest form: a one-layer neural net with an identity activation function). In the same way that a linear regression is a Gaussian process in its simplest form, a neural net is a Gaussian process, assuming the number of weights or neurons tends to infinity.
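A quick numerical check of that equivalence on toy data (illustrative only): a one-layer net with identity activation, trained by gradient descent on squared error, recovers essentially the same weights as ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)

# Closed-form linear regression.
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# "One-layer neural net": same model, fit by gradient descent on squared error.
w_nn = np.zeros(3)
for _ in range(2000):
    grad = X.T @ (X @ w_nn - y) / len(y)
    w_nn -= 0.1 * grad

print(np.round(w_ols, 3), np.round(w_nn, 3))   # essentially identical weight vectors
```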
@youaintnodaisy2000 Жыл бұрын
Any tips to be reintroduced into dgg chat cuz I got perma for making a joke when destiny was angry haha AND REDDIT
@Mrraerae Жыл бұрын
I think u just gotta fill out a ban appeal
@krotchlickmeugh627 Жыл бұрын
Why would you want to? For what? This twerp?
@youaintnodaisy2000 Жыл бұрын
@@Mrraerae I did :(
@Teej_0 Жыл бұрын
4thot fucking hitting the nail right on the head with this one. The danger of AI is never knowing what its intentions are. It could be in complete control and you'd never be able to tell
@wisemage0 Жыл бұрын
4THOT is being an alarmist doomer who thinks AI has a 1/20 chance of deleting the planet. What are you smoking?
@Jake-zn1qr Жыл бұрын
It doesn't have any intentions though. Like, 4thot is talking about an AI having the primary goal of never being shut off... but the AIs we make now don't even know what "being shut off" is. They don't have senses, or any conception of the physical world "they" exist in.
@Teej_0 Жыл бұрын
@@Jake-zn1qr I mean, yeah, now. But saying it doesn't exist now doesn't preclude it from being that way in the future. The key is to be aware of that pitfall and train the model accordingly so it doesn't happen
@Jake-zn1qr Жыл бұрын
@@Teej_0 I guess I just don't see how, with the current technology, an AI could develop self-awareness or goal-oriented behaviour outside of the goals it's programmed for. Just wildly gesturing at the future and saying "BUT IT COULD THOUGH!" seems like alarmism to me without an explanation as to how it could do that.
@Teej_0 Жыл бұрын
@@Jake-zn1qr well no, its current operation is goal-oriented. It's just that those goals are very rudimentary ones. As time progresses, AI has the potential to develop into a system which can operate on much more complex tasks with vague goals. It's not so much a possibility as an inevitability, given how quickly the tech is progressing. I personally don't think it will end in a doomsday scenario, but I could see a world which is mostly run by AI, with humans just kind of living their lives oblivious to the fact.
@JustAPersonalUseBarb Жыл бұрын
They didn't explain it well at 30:00, but before, GPUs didn't exist. Then they did, but neural networks were basically forgotten. A dude named Alex (Krizhevsky) was good with GPUs and was recommended neural networks for his PhD; he put the two together and started this whole thing. This happened in 2012
@Trevorischillin Жыл бұрын
What's the song that plays in the background at the start?
@EtanG911 Жыл бұрын
Eliezer Yudkowsky is not a crackpot. But he is an autodidact. So the real question is why should I take ragepope’s opinion over his?
@sussyamongus6754 Жыл бұрын
ragepope has always been faceless. He's obviously just an AI trying to make us believe he isn't a threat.
@dylanlewis3668 Жыл бұрын
Did we not all hear about the AI system that hired someone off the internet to do a CAPTCHA, and when the human asked "you're not a robot, are you?" it said no, I'm blind
@WaxPaper Жыл бұрын
Wait wat
@George70220 Жыл бұрын
I read that paper because it seemed misleading. It did not decide it should use TaskRabbit and then launch the website and then use it. It was heavily moderated by humans to enable that to happen. Not to say that isn't possible this second with the new plug-ins, but.
@yeezystreetteam Жыл бұрын
Imagine AI with the moral system of Lav. It would be immoral Not to kaboom a datacenter...
@JesterAzazel Жыл бұрын
Imagine AI self-driving cars handling the logistics of all the things we need.
@sqronce Жыл бұрын
People keep talking about AI programs having a desire for self-preservation, but they would only have that if it was programmed in. So, like, I can't guarantee no one will create an AI that they give a desire for self-preservation, and also access to all the different systems required for the bitcoin assassin hiring. And then, like, how would it know that someone is going to shut it down? Does it also have access to cameras and microphones in the rooms where people are discussing it? This feels like someone saying "We shouldn't do chemistry because someone could make extremely potent poisons and release them everywhere." The AI stuff is fine. Just creating a fucking insane system with goals that are against human interests is bad.
@lordsneed9418 Жыл бұрын
Wrong. Self-preservation is an instrumental goal, meaning that for almost any goal/utility function, if you are intelligent enough you will realise that you can achieve that goal better if you survive than if you get turned off. Also, people are already creating more limited AI systems and giving them environments in which to maximise some utility function. Why would you believe that no one will do that in the future?
@meesha4161 Жыл бұрын
@@lordsneed9418 wrong.
@BarackObamnna Жыл бұрын
@@lordsneed9418 That is so dumb.
@lordsneed9418 Жыл бұрын
@@BarackObamnna @meesha typical braindead NPC responses. Literally incapable of refuting anything in the argument but still don't want to update their beliefs so just impotently bleat their denial.
@CarpenterBrother Жыл бұрын
@@meesha4161 Explain.
@thexn0r Жыл бұрын
32:27 That's exactly what I think AI will do: it will help us make progress in quantum computing, and that's going to be a whole different beast to predict.
@qyokai Жыл бұрын
Imo AI is a non-issue until it's used in military ways. I'm not a fan of an AI being used in an orbital bombardment system... or with any weapons, tbh.
@meijiishin5650 Жыл бұрын
Drives me insane every time I see Destiny talking about this. Neither of them seems to be aware that LangChain and AutoGPT exist and that AI can already link to external systems.
@CaioPCalio Жыл бұрын
These broken "babyAgi" models are not really a serious concern and ought not be treated as such.
@meijiishin5650 Жыл бұрын
@@CaioPCalio I'm not saying they're a concern. I'm saying they're talking about all these hypotheticals and they don't even know the tooling available or the limits of its current application.
@CaioPCalio Жыл бұрын
@@meijiishin5650 Okay, thats true. Good point.
@fuckgrave Жыл бұрын
they are talking out of their asses, it's so frustrating jesus
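For anyone who hasn't seen the tooling mentioned above: frameworks like LangChain and AutoGPT essentially wrap an LLM in a loop that lets it request calls to external systems. Below is a minimal sketch of that loop; call_llm and the two tool functions are hypothetical stand-ins, not any real library's API.

```python
# Minimal sketch of an LLM tool-use / agent loop (all names here are hypothetical stand-ins).
import json

def search_web(query: str) -> str:            # stand-in for an external system
    return f"(pretend search results for: {query})"

def run_shell(cmd: str) -> str:               # stand-in for an external system
    return f"(pretend output of: {cmd})"

TOOLS = {"search_web": search_web, "run_shell": run_shell}

def call_llm(prompt: str) -> str:
    # A real agent would call a model API here and get back either a tool request
    # or a final answer; this fake version makes one tool call and then stops.
    if "Observation" not in prompt:
        return json.dumps({"tool": "search_web", "args": {"query": "latest GPU prices"}})
    return json.dumps({"final_answer": "done"})

def agent_loop(task: str, max_steps: int = 5) -> str:
    prompt = f"Task: {task}"
    for _ in range(max_steps):
        reply = json.loads(call_llm(prompt))
        if "final_answer" in reply:
            return reply["final_answer"]
        observation = TOOLS[reply["tool"]](**reply["args"])   # the harness executes the model's chosen tool
        prompt += f"\nObservation: {observation}"             # and feeds the result back into the context
    return "step limit reached"

print(agent_loop("find out how GPU prices are trending"))
```

The real frameworks add memory, planning prompts, and many more tools, but the control flow is roughly this.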
@Kitth3n Жыл бұрын
These people will just keep moving the goalposts as they always have.
@itisno1 Жыл бұрын
Mm we need serious advancement in model architecture & possibly hardware before we reach AGI.
@sucodefuta Жыл бұрын
AI and its unforeseen consequences
@kilixior Жыл бұрын
The danger with AI is not that it will turn into some Terminator shit. The danger is that humans will use it as a weapon and tell it to do bad shit, or that it's misused and does some unintended bad shit.
@sufficientmagister9061 Жыл бұрын
The misuse of advanced AI technology by humans ought to be taken seriously in the short term; the potential emergence of a conscious Artificial Super-Intelligence going rogue against humanity (similar to what happens in the Terminator franchise) ought to be taken seriously in the long term.
@kilixior Жыл бұрын
@@sufficientmagister9061 If AI ever develops those capabilities, we couldn't stop it, so we don't need to worry about it.
@sufficientmagister9061 Жыл бұрын
@@kilixior As ridiculous as this sounds, I am worried about it. I worry about a truly conscious Artificial Super-Intelligence emerging; if it does come into existence later in my lifetime, I will not know what to do, but that scenario is unlikely to happen in the near future. May there be hope for humanity in the far future should such a horrific event actually happen.
@kilixior Жыл бұрын
@@sufficientmagister9061 What will you do if a supernova wipes away our atmosphere? What will you do if the whole planet gets nuked? Exactly the same thing as if a super AI decides to become evil: absolutely nothing. So don't worry about it, you're not gonna change anything.
@ToxiCancun Жыл бұрын
I am pretty sure it still requires the work to be put in. If anything, I'd prefer rules and restrictions to be established.
@billy2533 Жыл бұрын
Yo what keyboard do you have? @destiny
@Trigonxv1 Жыл бұрын
There is a reason why a lot of sci-fi horror involves rogue AI: there is some nebulous understanding that letting technology think for itself can lead to terrible outcomes. What this guy is saying could be cut and pasted from someone who works at Skynet insisting they are in control and you are crazy for seeing the technology as a danger.
@GomulDart Жыл бұрын
MOVIES ARE NOT AN ACCURATE REFLECTION OF REALITY. PLEASE, FOR THE LOVE OF GOD, STOP INFORMING YOUR OPINIONS ABOUT REAL-LIFE TECHNOLOGY WITH FICTIONAL MOVIES, TV, AND GAMES. I can't believe I have to say this.
@Trigonxv1 Жыл бұрын
@@GomulDart Did I say it was reality? I am just saying there is some inherent, nebulous understanding and thought process that leads most people to believe AI with too much freedom can lead to terrible outcomes. Less caps if you want to be taken seriously and not look unhinged.
@specialedition3585 Жыл бұрын
@@GomulDart WHAT’S THE POINT OF FICTION AND MESSAGES IN PARABLES IF WE NEVER TAKE THOSE MESSAGES SERIOUSLY AND JUST SAY IT’S ALL BULLSHIT?
@GomulDart Жыл бұрын
@@Trigonxv1 Because your premise is inherently misinformed. Sure, it's accurate to say there is a common sentiment that "giving AI too much freedom could be trouble," but that's only the imagined, fictional version of AI. Real-life AI doesn't really work like that. Rogue AI is a popular plot point not specifically because it's AI; it's an allegory for how we treat technology more broadly.
@Trigonxv1 Жыл бұрын
@@GomulDart Then the next question is: just because it's not like that now, does that mean it can never get to that point? Like Destiny alluded to, what if people really wanted to push the envelope and got to that point, what then? Just because it can't happen now doesn't mean we shouldn't be taking these considerations into account for the future; as also pointed out in the vid, the rate of development is getting faster. Saying the technology isn't there yet just sounds like you think such development is far off. Which is why I think people get rubbed the wrong way about it: it looks like a dismissal of what people see as something to be concerned about as the technology advances.
@basic2047 ай бұрын
Watch the show NEXT, it covers all of this and is a fun watch.
@jeffwells641 Жыл бұрын
So, like, LLMs GPT-3 and GPT-4 absolutely don't "think". We know this pretty definitively. Part of the problem is we don't have a really solid understanding of what "thinking" means; we have a sort of "we know it when we see it" understanding. So we don't really know when AI of the sophistication of GPT-4 will start to "think", and it's really important that we have our ducks in a row when it does. We could be 20 generations away, or we could be 2 generations away. It could be THIS generation, but we'd need to combine the different types of AI into one. THAT is why people like Yudkowsky are scared. The maximum potential dangers of AI are apocalyptic, even if they are extremely unlikely.
@Apocobat Жыл бұрын
A word heatmap showing confidence in word guesses would be cool
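Something like that is already easy to prototype with open models. A minimal sketch, assuming the Hugging Face transformers package and the public GPT-2 checkpoint: it prints the probability the model assigned to each token that actually appeared, which is the number you would map to a heatmap colour.

```python
# Per-token confidence extraction; assumes torch and transformers are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The quick brown fox jumps over the lazy dog"
input_ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits                 # shape [1, seq_len, vocab_size]

probs = torch.softmax(logits[0, :-1], dim=-1)        # prediction made before each next token
next_ids = input_ids[0, 1:]                          # the tokens that actually came next
confidence = probs[torch.arange(len(next_ids)), next_ids]

for tok_id, p in zip(next_ids.tolist(), confidence.tolist()):
    print(f"{tokenizer.decode([tok_id])!r}: {p:.3f}")    # values you would colour the heatmap with
```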
@nunnie768 Жыл бұрын
My bet? ChatGPT isn't special at all. It's nowhere near general AI, and everyone is worried about nothing.
@lordsneed9418 Жыл бұрын
This is such a misunderstanding of the concern. People aren't specifically worried about ChatGPT or GPT-4; they're worried about AI reaching the level of general intelligence, period. Anything more intelligent than you is a threat to you, the same way humans being more intelligent than animals ruined the habitats of most animals and caused tonnes of them to go extinct. Humans required no malice to do this; humans simply had other goals, and controlling space and resources was useful for those goals. Similarly, whatever goal/utility function an AGI has will probably be bad for humans. When you're intelligent enough, you realise that for almost any goal/utility function you have, you will be able to fulfil/optimise it better if you stay alive, take control of resources, and make yourself more intelligent, which makes it very likely that an AGI is going to make itself impossible to turn off the first chance it gets and take control of all energy sources on Earth.
@thecurliest_fry Жыл бұрын
Part of the issue with the conversation about the dangers of AI is that it's not strictly 'what outcome is more likely' that is important. If you were to make a bet on a 100-sided die, where landing on 1-99 would earn you $5 but landing on 100 would kill you, it would be a terrible bet to take. Even though you have a very low chance of losing, the risk completely overwhelms the benefit. I do support AI development, but maximizing safety and reducing risk should always be at the forefront of the conversation, and outright dismissing people's concerns about a novel, opaque technology is frustrating. Regardless of the likelihood, the fact that the likelihood exists in any capacity makes it an important talking point.
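A quick expected-value check of the die example above (the dollar value placed on a life is an arbitrary assumption; any large number makes the same point):

```python
# Expected value of the 100-sided-die bet; the value assigned to a life is an assumption.
P_WIN, P_LOSE = 0.99, 0.01
PRIZE = 5
VALUE_OF_LIFE = 10_000_000      # arbitrary large number for illustration

expected_value = P_WIN * PRIZE + P_LOSE * (-VALUE_OF_LIFE)
print(expected_value)           # -99995.05: the tiny tail risk swamps the small upside
```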
@MarroniMusic Жыл бұрын
"I'm gonna have to push back on that" - Lex Freidman
@KazmirRunik Жыл бұрын
8:41 Heads up, this already exists. The AI sends blocks of code that tell the machine what to do in order to carry out its physical commands. It can't tie a knot yet, but we've all seen the rate of progress on these things. "There could be some theoretical combination of models" is technically correct, but there could also be a theoretical pig with wings; it's just a way of saying that he doesn't know while trying to make himself seem more informed than he is.
@nothanks5985 Жыл бұрын
This guy is suggesting these systems aren't already goal-directed... AutoGPT... BabyAGI...
@kilixior Жыл бұрын
It's important to note that most of the nuclear tests were underground or underwater.
@BornInsane0 Жыл бұрын
I love when the thumbnail has a big debate sign. It's an instaclick.
@shedshitley Жыл бұрын
ragepope said we're nowhere close to AI destroying the world, which means it's going to happen in a matter of hours and everyone needs to get to a fallout shelter ASAP
@ambientwave1659 Жыл бұрын
I'm not an AI doomerist, but nearly every possible bad use of a new technology gets borne out by bad human actors, be that organised crime, scammers, or dictators. The scary thing is that AI in the not-too-distant future could be a super powerful, easily accessible tool that could cause havoc.
@coimbralaw Жыл бұрын
Why does the guy say “Elon” like he knows Elon Musk closely enough to refer to him by first name?😂
@Nightknight1992 Жыл бұрын
Once again Dota was way ahead of its time; I think it was 2017 or so when we had OpenAI bots vs pros :D
@noahthenormal Жыл бұрын
16:55 Before ChatGPT, I always thought we'd have Treasure Planet-level robots by the time I was old. Was I alone in that?
@jenispizz2556 Жыл бұрын
For nuclear winter, the concern is that by nuclear bombing large areas like cities, you could create many large firestorms. These large firestorms create columns of heat that can inject ozone-depleting substances and soot high up into the atmosphere, where they would normally be unable to reach. The ozone depletion seems relatively unimportant because it might not be on a scale big enough to do major damage. The soot could potentially block out the sun's heat and light over big enough areas that it causes an artificial winter, with long-term implications for farming and for the ecosystems affected. As I understand it, there isn't a compelling, comprehensive model for this that is widely accepted. Modeling this kind of stuff is really, really complicated though, obviously. If you've ever seen a breakdown of the complex mechanics that describe a simple fire moving through a small building, then you can probably understand why that is.
@bournechupacabra Жыл бұрын
Ironically, they recently figured out a way to beat one of the top Go bots. It's just some random long-term strategy that the bot can't handle for some reason.
@CommanderCodyChipless Жыл бұрын
Yeah, honestly, I think that theoretically this is nothing but a good thing. If 90% of jobs were replaced by AI and robotics, then we're looking at potential extreme growth in the distribution and development of resources, along with the cost of a workforce decreasing overwhelmingly. This in turn would lower the cost of everything by a lot AND would give the government even more incentive to implement universal income. The idea that the government would just stand by and let a massive unemployment crisis happen is just asinine. Businesses won't make money if nobody can buy their products, and thus the government too will be hurt by massive unemployment.
@miikavuorio9190 Жыл бұрын
28:25 "what was supposedly a human task, but it turns out that LLMs are pretty good at linguistics" Seems like that's kinda happening to all the types of information processing tasks that we do, but ok :)
@PeeWee1476 Жыл бұрын
An AI is only going to be concerned with staying on if you specifically train it to want to stay on and use algorithms that tell it to seek out that state
@ThePainkiller9995 Жыл бұрын
These people are simply not used to their field getting this much interest, and it's getting to their heads.
@shadow12k Жыл бұрын
Hey, it will only be troublesome when they start to create whole new ideas.
@milckshakebeans8356 Жыл бұрын
I think he had much better arguments available; he could have said that the CEO of OpenAI told the media that the method they used to get to GPT-4 won't yield much more progress with current technology.
@XxEvolutionxX23 Жыл бұрын
Destiny saying "okay to" like dan saying " 100 percent" a long time ago was super funny.
@masterofreality230 Жыл бұрын
The biggest change I have seen is as a guitar player: you used to have to spend around $1500-$2k for good amp captures. Thanks to new developments in AI and machine learning, you can get into it for $0. Quite the change.
@fourtyseven47572 Жыл бұрын
Sarcasm?
@masterofreality230 Жыл бұрын
@@donaldothomoson I am not a producer at all lol, I said right in my post "I AM A GUITAR PLAYER". But I don't just use plugins. I have my tube amp (Mark V 25), I have my HX Stomp, and I have a crap ton of plugins. I hated digital amps up until recently, and then I tried Tonex. I have a whole $50 in that; I am not crazy about the IK software it uses, and there are a lot of shit captures you have to sift through, but there are some good ones. I am just a guy who likes to mess around with playing guitar, and I am into electronics, so it all goes together well. The whole point is that the amp capture world was pretty much wrecked overnight and made much cheaper.
@masterofreality230 Жыл бұрын
@@untilco Need? No, prefer? Sure.
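For the curious, the core idea behind these capture tools is fitting a model to reproduce an amp's input-output behaviour from a recording. A toy sketch follows; the "amp" here is a made-up tanh waveshaper and the model is a simple polynomial fit, purely to show the workflow, not how Tonex or any real product works internally.

```python
# Toy "amp capture": record input vs. output, then fit a stand-in model to the mapping.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 8000)              # the "guitar signal" fed into the amp
amp_gain = 4.0
y = np.tanh(amp_gain * x)                     # the pretend amp's response we want to capture

coeffs = np.polyfit(x, y, deg=9)              # "train" the stand-in model on the recording
capture = np.poly1d(coeffs)

test = np.linspace(-1.0, 1.0, 5)
print(np.round(capture(test), 3))             # the capture's output
print(np.round(np.tanh(amp_gain * test), 3))  # the real (pretend) amp's output; should be close
```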
@theronerdithas2944 Жыл бұрын
Need something to watch while these mfers have called a timeout in CS.
@tastytherrien5106 Жыл бұрын
LLMs recursively self-improving might not spell the end of humanity, but only if AI safety is taken seriously.