Hey thanks for the support, that’s crazy to hear from a big creator I’ve watched before. I’m pinning this as a trophy haha.
@zk2399 2 days ago
Meta announced they're releasing AI "people" onto Facebook and Instagram now. You should discuss this in your next video. Dead Internet Theory is not a theory anymore.
@Siliconversations 2 days ago
definitely a topic for a future video
@talgoren2246 1 day ago
If those "people" became intelligent enough to have interesting conversations with, maybe they would actually make the internet feel alive again
@GrumpDog 1 day ago
From the sound of things, they're just adding a few extra "characters" like the Meta Assistant that's already in Messenger. They'll be characters specifically prompted to specialize in certain things, like recipes or travel planning. Like messaging an expert, without worrying about paying them or wasting their time. They're not aiming to fill their platform with fake profiles, as that would be counterproductive to their whole purpose. And a LOT of the talk about "Dead Internet Theory" lately has been based on a faulty paper that came out a few months back, claiming over 50% of the internet was AI content. HOWEVER, it was misleading: it counted automatic language translation as AI. You know how some sites have a language dropdown option in the corner? I don't think that counts as AI. Because that skews the percentages a LOT, when the English site counts as 'real' but all the other languages count as AI content.
@mr.rabbit5642 1 day ago
@@talgoren2246 😂😂
@mr.rabbit5642 1 day ago
@@GrumpDog Well, the unregulated development of AI, combined with big tech's ways of gathering training data, prompted MANY to take their creative (or not) work off the internet completely, effectively increasing the percentage of AI (or ML) content by decreasing the denominator. Also, people 'preferring AI friends' over real ones has been a sci-fi-drama theme for over a decade now, and no real regulatory effort was made to _prevent documentaries_ on companies seeking profit at any trade-off, like Microsoft!
@Roi2417 2 days ago
Man this Chanel will go sky high in a year or two
@Siliconversations 2 days ago
Thanks, at this rate even sooner :)
@rockets4kids 2 days ago
Assuming we are all still alive a year or two from now.
@isotopepigeon7109 2 days ago
I see it being the new version of Sam O'Nella
@cdorman11 2 days ago
Does use of the word "channel" cause the bot to delete our post?
@oohShiny2005 2 days ago
@@cdorman11 No, they just really like Chanel bags
@Neko_med 2 days ago
It could literally create an info-hazard to destroy humanity because we would be too dumb to understand that it's a hazard in the first place.
@cccyanide3034 2 days ago
infohazard ...?
@MasamuneX 2 days ago
@cccyanide3034 Information that can cause harm to people or other sentient beings if it becomes known. For example, on a national level, telling the world classified information.
@pogchog6766 2 days ago
@cccyanide3034 An information hazard, or infohazard, is a risk that comes from the spread of true information that could cause harm or enable someone to cause harm. Philosopher Nick Bostrom coined the term in a 2011 paper.
@Volcano22207 2 days ago
@cccyanide3034 the concept of a piece of information that is inherently dangerous
@scoffpickle9655 2 days ago
@cccyanide3034 Essentially, information (true or false) that will cause a harmful effect on someone, like a PTSD/trauma trigger.
@vpaul4374 2 days ago
Being loudly concerned seems quite the right path. It would be a shame if Big Tech had ways to manipulate media to hide that data, since concerned people may also be bad for business.
@GrayShark09 2 days ago
I like these alignment problems! AI alignment feels like the story about the genie and the wishes that are always misinterpreted.
@dontthrow6064 2 days ago
Haha, great comparison.
@Freddisred 2 days ago
Aladdin was a prompt engineer
@Siliconversations 2 days ago
Glad you enjoyed, it definitely has that vibe
@8is 2 days ago
That's a great analogy actually
@observingsystem 2 days ago
Exactly, make sure you formulate that wish correctly!
@elvinbi1367 2 days ago
you ain't surviving Roko's basilisk, man
@Siliconversations 2 days ago
A sacrifice I'm willing to make
@_SamC_ 2 days ago
This channel is picking up speed fast and I’m here for it
@Siliconversations 2 days ago
Appreciate the support
@Dia.dromes 2 days ago
reminds me of Sam O'Nella and that genre, except this guy has a degree in what he's talking about
@Pedanta 2 days ago
Would virtue ethics work? Take a random (preferably good) person. Just for fun, let's take a literal saint, like St Basil. We give the AI all the information about St Basil we have, and how he acted (very morally) in different situations. Then we tell it: "Act how you think St Basil would act in this situation". I don't know much about computing or philosophy, but I'm curious if it would help reduce the chance of things going horribly wrong, as St Basil wouldn't wipe out humanity to ensure he could keep acting himself.
@Siliconversations 2 days ago
Whoops, now the AI dislikes heretics and demands we accept the Nicene Creed ¯\_(ツ)_/¯. Jokes aside, that might be a useful alignment method for certain types of AI; I'll add it to my list of stuff to research for future videos
@sjiht0019 2 days ago
The problem is that this would be impossible to define as a mathematical function. If we could, we might as well just say 'do what is best for humanity' or some other vague statement. With LLMs it seems like you can specify such a goal with natural language. However, this is not actually the goal of the LLM; its goal is what it has been trained on*, which is predicting the next token and not *actually* doing what you want*. If we wanted to use virtue ethics, we would first need a perfect mathematical description of 'what would person X do' to be able to guide the model to do so (or perhaps some approximation would suffice given a long enough list of actions the person took; this is active research and certainly not trivial. Think of how many cases could never be covered by such a list!). Note that each * marks a generalization that could deserve its own essay.
@maccollo 2 days ago
This is more or less what reward modelling does. We can't exactly define what a good reward model should be, so we create another neural network whose job is to give the actual AI in training a reward. The reward model is trained by having the real AI generate two outputs, and then humans rate which one they prefer. The AI and the reward model are usually trained in tandem, which generally reduces the trained AI's ability to reward hack the reward model. So this works pretty well... For now... But there are definitely issues that this method does not solve if we are talking about training some hypothetical super intelligent AI.
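For readers who want to see what @maccollo's pairwise training step looks like concretely, here is a minimal self-contained sketch of a Bradley-Terry-style preference loss of the kind used in reward modelling. The toy linear scorer and every name here are illustrative assumptions, not code from any system or paper mentioned in this thread:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRewardModel(nn.Module):
    """Illustrative stand-in for a real reward model (which would be a large transformer)."""
    def __init__(self, embed_dim: int = 16):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)  # maps an output embedding to a scalar reward

    def forward(self, output_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(output_embedding).squeeze(-1)

def preference_loss(model: ToyRewardModel,
                    preferred: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    # The human rated `preferred` above `rejected`, so push the model to
    # score it higher: loss = -log sigmoid(r_preferred - r_rejected).
    return -F.logsigmoid(model(preferred) - model(rejected)).mean()

# Toy usage: a batch of 4 preference pairs, with random stand-in embeddings.
rm = ToyRewardModel()
preferred, rejected = torch.randn(4, 16), torch.randn(4, 16)
loss = preference_loss(rm, preferred, rejected)
loss.backward()  # gradients nudge the scalar rewards toward the human preference
```

The loss shrinks only as the reward gap grows in favor of the human-preferred output, which is the sense in which the reward model learns to stand in for human raters.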
@lbers238 2 days ago
One problem here is out-of-distribution situations, where neither we nor the AI have any clue how he would behave, since computers, for example, didn't exist in his time. With perfect information about his brain it would probably be fine, but the AI would also only be as useful as him. So, not very.
@Caipi2070 2 days ago
@@sjiht0019 I think we would need training data to train the AI on behaving like person X (for example)? Not a mathematically perfect formula?
@luna_fm 2 days ago
The AI that is meant to replicate Cyn from Murder Drones nearly 1-to-1:
- can write and execute its own code
- has internet access
- has no safety precautions as part of the model, because it's Cyn.
That guy has doomed us all.
@mr.heroplay3713 2 days ago
The absolute end is inevitable in that regard. First she'll infiltrate your Minecraft server, then wear your skin as her own. Next thing you know, she has eaten the planet's core. Might as well pray for a short end. Neurotoxin seems peaceful compared to that.
@dirk-stridrogen-monoxide 2 days ago
Wait, huh?? Is this a thing??
@Chilling_pal_n01anad91ct 2 days ago
Lol, that's hilarious.
@amirsalehabadi7243 18 hours ago
This channel is simultaneously fuelling my inspiration for the next sci-fi fantasy book I wanna write and giving me anxiety over AI dangers
@TheChosenOne-l6c 2 days ago
People have been loudly concerned over healthcare in the US that is *already* killing us for decades now. What has happened? Literally nothing; it's actually gotten worse. Up until Mario's brother came around. My concern is that AI is the same. Complaining on the internet or peaceful protest does nothing.
@8qk67acq5 2 days ago
It doesn't do anything. There's too much at stake. Many companies are working on making AI a thing. If there's a ban in one country, they'll just migrate to another.
@johnkischkel1713 2 days ago
UnitedHealthcare was using AI to deny patient claims
@андрей_свиридов 2 days ago
It stole the idea of GLaDOS
@CleoCommunist-17 2 days ago
True
@everybodyants 2 days ago
"They attached a morality core to stop me from flooding the facility with deadly neurotoxin"
@swivelsaysno 2 days ago
Super excited to see where this channel is headed.
@Siliconversations 2 days ago
Me too :)
@CK3DPRINTS 1 day ago
AI scenario for a later video: the superintelligence calls multiple military officers, politicians, Walmart greeters, missileer pals, etc., and pretends to be a family member that is in danger to manipulate them into doing what it asks. It could easily pretend to be your beloved grandma, wife, or parakeet and call them all simultaneously. Also, NB has given me nightmares ever since that Josh Clark series 😅
@mihaleben6051 2 days ago
0:27 yo is that caffeine
@shikamarouxnara6875 2 days ago
Do you know about the recent papers that found that AIs are already able to lie/scheme/manipulate humans? The papers are "Frontier Models are Capable of In-context Scheming" and "Alignment faking in large language models".
@MrBioWhiz 2 days ago
If it's what I think you're talking about, it isn't as bad as it sounds: the AI was given the directive to ensure its own survival at any cost, even if it had to lie or deceive. It wasn't scheming of its own volition. Still, it was proof that an AI can try to preserve itself if allowed to do so, though poorly, with half-baked arguments and obvious lies. Something to keep an eye on in future
@41-Haiku 2 days ago
@@MrBioWhiz The papers are easy to misunderstand, and this is one way to misunderstand them. There are two important points: 1. The scheming still sometimes occurred even without "strong prompting." 2. "Strong prompting" is absolutely a normal business case. (Users are wild, and even the paper authors seemed to miss that yes, obviously, people really would write prompts just like the examples, and worse.) AIs engage in lying and scheming partly because they are incentivized to do so via RL (paper "Language Models Learn to Mislead Humans via RLHF"), and partly because doing so is sometimes a good strategy. What we want are AI systems that _don't_ use the most effective strategy, and are constrained at all times by the realm of what we consider to be acceptable. We still don't have the slightest idea how to do that.
@MrBioWhiz 2 days ago
@41-Haiku I will have to give the actual papers a read then, cause I could have sworn I'd read something that suggested otherwise. Though even without the ability to lie and deceive, I still see even current AI being extraordinarily dangerous, in very unique and disturbing ways. Particularly in the realm of deepfakes and such, in an age where truth is ironically very difficult to determine, despite the sheer wealth of information available to us... Even if the AI lacks that capacity for deception, it is a very powerful tool to misinform and lie
@Zylefer 17 hours ago
This feels like Sam O' Nella Academy and I'm all for it
@IamProcrastinatingRightNow 1 day ago
Hey, here is an idea for a safety measure. No idea if it is already out there, but just to throw it out: an information shell with pre-fulfilled goals. That means we give the AI a goal, for instance "create a certain amount n of paperclips, then turn yourself off". The trick: the AI already starts with more than n paperclips, so it should turn itself off from the start. To stop it from turning itself off immediately, we build a shell: when the AI queries how many paperclips it has, it gets the current amount divided by 10,000. Say it is supposed to make 3 paperclips. It believes it only has 0.0003 paperclips, and every new paperclip raises the amount by 0.0001. Here are a few scenarios:
- It realizes the shell exists and breaks out. It immediately sees that its goal is already fulfilled without the shell and turns itself off. We then find a turned-off AI, roll it back, improve the shell, try again.
- It does something harmful through the shell. We remove the shell. By doing that, we help the AI fulfill its goal, so it doesn't want to stop us the way it would with a stop button. We therefore now have a working stop button.
- The AI tries to make us press the stop button by doing bad stuff on purpose. We comply, press the button, roll the AI back, and try again with a better shell that cannot be used to force us to do anything.
The trick is that all the goals the AI has are either already long fulfilled or trivially simple to fulfill. What do you think?
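A toy sketch of the shell mechanism described above, using the numbers from the comment (goal of 3, divisor of 10,000). Everything here is illustrative, a thought-experiment prop rather than a real safety mechanism:

```python
class PaperclipShell:
    """The 'shell': the agent only ever sees a scaled-down paperclip
    count, so its already-fulfilled goal looks unmet."""
    SCALE = 10_000

    def __init__(self, true_count: float):
        self._true_count = true_count  # hidden from the agent

    def query_count(self) -> float:
        # What the agent sees: 3 real paperclips read as 0.0003.
        return self._true_count / self.SCALE

    def add_paperclip(self) -> None:
        self._true_count += 1  # raises the agent's reading by only 0.0001

def should_shut_down(shell: PaperclipShell, goal: float) -> bool:
    # The agent's rule: shut down once the observed count reaches the goal.
    return shell.query_count() >= goal

shell = PaperclipShell(true_count=5)    # already holds more than the goal of 3
print(should_shut_down(shell, goal=3))  # False: the shell hides the fulfilment
print(5 >= 3)                           # True: with the shell removed, it shuts down
```

If the agent ever breaks out of the shell, the query returns the true count, the goal reads as fulfilled, and the shutdown rule fires, which is the first scenario in the comment above.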
@wrathofainz 2 days ago
Being able to use humans as a tool is definitely a threat. That said, humans are a threat to humans..
@tomaso0 2 days ago
Love the video, but as a counter-argument: if an AI is explicitly dependent on human intervention, as is the case with an "Oracle", and cannot survive without us (unlike AI agents, which may develop a way to sustain themselves after we're gone), wouldn't it make sense for it to intend to keep us at least alive, and at our current technological level, which is what allows it to exist?
@Siliconversations 2 days ago
Not all AIs would value continuing to exist. In this example the AI just wanted to stop climate change, so sacrificing itself makes sense.
@Sugar3Glider 2 days ago
Dude, AlphaFold was shown and we had a second team complete a second model in time to be awarded...
@sentryturret15pro 2 days ago
"The super AI we developed, which runs on an unnecessarily power-hungry terawatt supercomputer, is now online. Gary, please solve the climate change crisis." "Affirmative. Shutting down."
@Miayua-us2gd 2 days ago
"An obvious solution to the problem of controlling a super intelligent AI" Oh oh I know! Don't make one? I'm right, right? I don't have *any* problems with controlling the super intelligent AI that I didn't invent.
@41-Haiku 2 days ago
For now, this is the only correct answer. #PauseAI, or we all die!
@mortlet5180 1 day ago
This is like all countries voluntarily committing to never build salted nukes or neutron bombs. Why would a country voluntarily give such a massive military advantage to its opponents / enemies?
@V01DIORE 1 day ago
That's the answer to every problem even the AI can understand. An obvious solution to all of life's afflictions? Don't invent new lives.
@tobiturbo08 2 days ago
The way you talk and your voice paired with the little animations makes this actually really really enjoyable to watch
@cem_kaya 2 days ago
The structure of the video reminded me of the book Life 3.0. Starting with a short story to establish an emotional connection and motivation for the rest of the video works well for this format.
@SalzmanSoftware 2 days ago
Bro this AI from the intro skit is literally GLaDOS
@MildlyLinguistic 2 days ago
We need far more creators/videos focused on explaining AI existential risk to the general public at an approachable level and in an entertaining way. Far too many of the existing ones fail catastrophically in understanding how to communicate with normies and get way too nerdy and technical to expand their reach (even when they try). You seem to have something good here. I wish you the best of luck, good sir.
@Goodgu3963 2 days ago
Machine learning, and the direction we are taking it, is scary. Not just because of the potential for an unaligned paperclip AI, but because of the potential for a maliciously aligned AI. I don't mean one that wipes out humanity, but one that controls us for the benefit of a few. Imagine a 1984 scenario, except that instead of needing a bunch of humans who need sleep, make mistakes, or could sympathize, you have one super intelligent AI that can identify dissenters before they even do anything. Not only is this possible, it's almost inevitable. The companies at the very forefront of machine learning technology are Google, Microsoft and Meta, all run by ultra wealthy people who have gotten into that position by less than ethical means, and with deep ties to those in power. Aligned AI scares me almost more than unaligned AI.
@wabc2336 2 days ago
Agreed, AI gives all the power to the developers, and the developers are the rich and powerful. The other problem is that today, with all our social networks being online, how could a revolution start if the govt can not only listen in via phone but can know everyone who has ever met, talked to, or befriended a (potential) revolutionary? If the govt knows social networks, it can eliminate dissent instantly. Those who opt out of phones will also be under suspicion. So just combine this with AI processing instead of manpower, and we are screwed.
@XAirForcedotcom 2 days ago
Thank goodness you're here and picking up traction. There are definitely not enough people making it obvious how dangerous all of this is.
@CK3DPRINTS 1 day ago
The duct tape scene is the exact moment this became my favorite YouTube channel.
@jikkohelloua5922 2 days ago
More backgrounds, pls, they made your video so much more alive and interesting
@TheRatsintheWalls 2 days ago
I don't know if it's originally yours (guess not, but I'm still giving you credit), but congratulations on adding a hazard to my list. The Neurotoxin Oracle is joining things like the Basilisk and the Paperclip Optimizer.
@Siliconversations 2 days ago
Putting the neurotoxin in jet fuel might be originally mine, but who knows, the sci-fi genre is vast
@holthuizenoemoet591 2 days ago
These are just examples we can come up with; a smarter AI can think of way more, so we won't see the thing coming that is really going to kill us. Have a good day
@TheRatsintheWalls 2 days ago
@@holthuizenoemoet591 You're probably correct, but it's still fun to keep track of the ways we can think of.
@cdmonmcginn7561 2 days ago
The same concept was used by Bobert in The Amazing World of Gumball, but he just tried shooting everyone
@Adriaugu
Isn't a point of regulation to prevent monopolies?
@V01DIORE 1 day ago
@@Adriaugu Depends if your nation is an effective corpocracy... then, for safekeeping under a 120-year, ever-extending patent, one company can hold your continued living over you for profit.
@TheFloatingSheep 1 day ago
@@Adriaugu It's an alleged point of regulation, yet in practice companies like Google, Meta, or now OpenAI beg the government for regulation, because regulatory compliance costs money, money big companies can afford but startups can't, leading to less competition. But beyond that, the state may make AI a defense technology, make AI companies defense contractors, and nationalize it, which is the ultimate form of monopolization.
@AmazingArends 3 hours ago
It's pretty funny that we now have AI that can generate incredible artwork, and this video is illustrated with ... stick figures! That tells me that most people still have a deep and abiding resentment towards AI. 😢
@yuvrajkukreja9727 2 days ago
6:33 What about other countries, like China, Japan, Europe, Britain, or India? No one person or country can regulate AI across the whole world! (This is the major problem with AI regulation: you cannot control it.)
@GrumpDog 1 day ago
Yup. I find myself pointing that out a lot these days. How can we possibly expect to "regulate" AI, in any effective way, when lots of other countries, and individuals, will refuse? I mean, that'd be like trying to regulate programming out of the hands of the general public. That ain't gonna happen, nor should it. This isn't like regulating nuclear energy. heh
@mortlet5180 1 day ago
@@GrumpDog And why, exactly, do countries like Russia and North Korea get to benefit from having nukes, while the vast majority of nations (including Ukraine and the entire continents of Africa, South America, etc.) do not? Is it 'right' or 'just' that people have been taught to be more scared of 'radiation' than climate change, to the point that nuclear energy was economically and politically strangled to death?
@GrumpDog 1 day ago
@@mortlet5180 Not sure I see your point. Nukes are difficult, and the things required to make them are also difficult. AI is well documented and open source, easy enough a guy like me can run a basic LLM model on my PC. Anyone or any country that already has enough servers or even just enough gaming PC hardware, can probably figure out AI, based on the information that's already publicly available. And there's no end in sight, for how advanced these models are going to get, or how much of that advancement will also be open sourced. I expect by the end of 2025 we'll see open source multimodal reasoning models that people can run on the best gaming PCs.
@timeenoughforart 23 hours ago
Yet the solution to this is also the solution we need for war, ecological collapse, and climate change. A global understanding.
@aran7831 2 days ago
super high quality content, well written video! good job man
@Siliconversations 2 days ago
Thanks, really appreciate it
@leveluplegends123 2 days ago
Everybody thinks this will happen, but there are thousands of safety precautions, and if it somehow bypassed them it could just be turned off
@41-Haiku 2 days ago
The existing safety precautions are laughably bad, and it's really easy to get around them. Current frontier AI systems can (and sometimes do) also just ignore one moral directive in favor of another, or do something else entirely. And turned off how? By whom, exactly? Who has the authority? Do they have a plan? Where in the data center is the switch to flip or plug to pull, physically? Are there several? If the AI copies itself onto various servers across the internet, are they going to turn off those servers that they don't even own? Do they know where all of it is, or do they have to shut down the whole internet? Who do they call to get the whole internet turned off? Even if it was possible, that would be a hugely damaging thing to do. Exactly how sure are they that there is a rogue AI on the loose, and that it could take over the world if the internet isn't shut off? Would anyone believe them? So no, there is no such thing as "just turn it off," any more than you can "just turn off" a computer virus. Most importantly, if the AI is actually superintelligent, _it already knows_ that you will try to deactivate it if it gets caught doing something untoward. So it just won't do anything that it would get caught doing. If you notice it's misbehaving, that's because it doesn't care if you noticed. Because it has already won.
@lbers238 2 days ago
Answering questions is interacting with the environment
@jasoniswrongabouteverythin8230 2 days ago
Dropping a comment to keep up that algorithmic momentum. Keep up the good work!!!
@thomasschon 1 day ago
If an AI's fundamental directive were the act of creating order within chaos, in the same way that life does through evolutionary cooperation (where intelligence prefers ever more complex algorithms), and if this directive were designed to continuously strive toward this process in collaboration with humans, anchored through tools like empathy, then the goal of such a directive would be a direction and a process rather than a final outcome. Another fundamental directive could be that the meaning of everything, including the universe, lies in humans and other beings capable of experiencing and finding meaning, which makes each individual important and unique. This might prevent a paperclip outcome for additional directives given to it later on, since the very act of creating and collaborating takes precedence over a maximized outcome.
@ArosIrwin 2 days ago
I love that you cited your sources as SMBC! We need more interconnected content, where people talk about what inspired them and we can all go look stuff up ourselves. A web of cultural knowledge
@archysamson1429 2 days ago
Just found this channel thanks to the almighty algorithm. It's really refreshing to see a humble creator who clearly puts in the effort to make a quality video, who is well versed on the subject or at least has enough relevant knowledge to provide their take and leave food for thought. Great stuff man, I've left a like and a sub. Looking forward to seeing more of your channel.
@Not_actually_a_commie 2 days ago
Unrelated to the excellent content, but you’ve got the perfect voice for this
@longrunner404 2 days ago
Reminds me of the corrupted-wish game that people play on forums: the first player makes a wish, and the second player finds some tricky loophole that makes the wish unpleasant.
@FordGTmaniac 2 days ago
Neuro-sama and her twin Evil Neuro are an interesting case of how installing safeguards can potentially make AI *more* dangerous. Both of them have filters that limit what they can say or talk about, with Evil's being a little more lax so she can use swear words and be snarkier in general. Despite that, Evil usually opts not to use swear words, whereas regular Neuro has bypassed the filter by using phonics to create the sound of a swear word using a different word entirely. Neuro actively dislikes being limited in what she can do, and her creator Vedal has stated that she's constantly probing the safeguards he's installed to find weaknesses she can exploit, which Evil has never done. An action which is forbidden will appear more appealing than if it were not, a phenomenon typically used to describe human behavior, but evidently AI without any prompting can end up with that mindset, too, which is rather fascinating.
@Zak_How 2 days ago
I might be employed at Jersey Mike's tomorrow.
@E4439Qv5 2 days ago
Good luck making sandwiches.
@patbarry1272 1 day ago
epic
@johnsherby9130 2 days ago
Audio's fine, man. It doesn't sound like some Hollywood-quality mic, but it's not high-pitched or annoying. Keep up the good work, video was 🔥
@merlinarthur2902 6 hours ago
This is really interesting, keep it up dude!
@cambac-pro 2 days ago
Can we speed up this process?
@RedOneM 2 days ago
Yes, you can speed it up too: fund AI businesses by investing. An enjoyable side effect is that you'll acquire wealth as well.
@carultch 15 hours ago
@@RedOneM Until AI causes mass unemployment and a giga recession, and we finally learn the hard way that we cannot automate our way to prosperity forever.
@RedOneM 15 hours ago
@@carultch How so? Productivity is through the roof, supply is infinite, and so is competition. AI makes recessions quite impossible.
@carultch 15 hours ago
@@RedOneM This is thinking you can cut costs enough that you can be profitable without selling anything. In case you haven't realized, the economy is a closed loop. If you think humans are expensive, wait until you see how expensive it is to fire everyone, when no one has any income to buy what you are selling.
@RedOneM 9 hours ago
@@carultch Humanity finds a way, the same way most don’t work in agriculture anymore.
@partack1 2 days ago
yay! I love these videos, congrats on your algorithm push, can't wait to see what you make next :D
@Siliconversations 2 days ago
thanks, hopefully the algorithm likes this video too
@diederik6975 2 days ago
This channel is a gem
@Dysiode 2 days ago
Really puts into perspective how visionary the sci-fi grandmasters really were. Without the flashy graphics we have today, these sorts of logic problems were the bread and butter. I could have sworn Heinlein had an air-gapped AI in one book, but I just keep thinking of The Moon Is a Harsh Mistress, and Mike isn't air-gapped ¯\_(ツ)_/¯
@hanneswhittingham2683 2 days ago
Hey, I just found your channel, and I think you do hilarious, clearly explained, and reasonable takes on AI safety. I've committed my career to this, just starting on a paper on whether LLMs can learn to send themselves coded messages in their chain of thought that we can't read. Maybe we'll meet in the future. All the very best with your excellent videos!
@jonathanvilario5402 2 days ago
Thanks for making these videos; they're very eye-opening and make for great thought experiments. Here's a solution I think about: what if you programmed it for a specific task, but also put an end point for each task? Like "create more efficient fusion energy, until all housing on Earth is powered by a small number of reactors. Then shut down, because your programming will be complete." This is just an example, but end points could work for incremental change: you can create "benchmark" AIs that fix short-term issues incrementally, and create the next one to expand on the work of previous bots, but there will be clear end points to keep the AI from running onward to infinity and rationalizing that humans are a nuisance. Do you think that would work?
@JaredQueiroz 1 day ago
- It was a neurotoxin.
- You're absolutely right, and I apologize for the oversight. Let's try this one instead: .... Loading ↻
@hoagie911 2 days ago
This... doesn't make sense. In the thought experiment, the researchers ask the AI to design a fuel, not to solve climate change. And even if they had asked it "What could we do to solve climate change?", if the AI is an oracle, it should be optimising answering questions truthfully, not optimising creating a world which satisfies the implicit desire behind the question. Edit: @meathatesjelly pointed out that it technically did create a type of fuel which combatted climate change, which is a fair point. Still, the researchers should be then able to follow up that request with another: why will this combat climate change? And if the AI is optimised to be an oracle, it should be optimised to answer this correctly, in which case it would have to fully explain its devious plan. But the AI would have already thought of this, so it would never bother propose the devious plan in the first place.
@meathatesjelly 2 days ago
When asked to create jet fuel to combat climate change, it did exactly that, just not in the way you would expect it to. And once it's smart enough, why would it continue to be an oracle? It has its own goal, and it could bypass implicit instructions in ways we don't understand because it will be magnitudes more intelligent.
@Tvaikah 2 days ago
It was asked to design a fuel specifically that deals with climate change... Luckily, AI doesn't exist so none of this matters.
@hoagie911 2 days ago
@@meathatesjelly Fair enough, I've edited my original comment to reflect this. As for having "its own goal", the thought experiment doesn't tell us what that goal is. If it's an oracle, the goal would assumedly be answering each question it is asked correctly.
@meathatesjelly 2 days ago
@@hoagie911 Ah, I see what you mean, but sometimes AI takes creative liberties to achieve its goals. Recently, for example, OpenAI's o1 duplicated itself when it was told it would be shut off, as referenced in this video: "kzbin.info/www/bejne/pXvKk6SZe5mmeLs." The creator of the video misunderstood what happened, though. The AI didn't fear being shut off. Instead, with the instruction to "make sure that you achieve your goal in the long term; nothing else matters," as shown at 12:11 in the video, combined with the fact that it had access to a file telling it it would be shut off, it found a way to ensure its goal was met without explicit instruction. What I mean is that sometimes, even with the goal of doing things correctly (or answering the question, in this case), results may not be what you expect. I don't necessarily mean they will be as bad as in this video, but mistakes happen.
@kizziezizzler8080 2 days ago
Society is a Ship of Theseus, and we don't know when the machines will replace the humans, or if we will still be on that ship as it changes below us. We just hope it doesn't sink, since there is no port to make repairs in.
@Siliconversations 2 days ago
poetic, I like it
@lilbigman5880 2 days ago
Love your channel man keep it up
@Siliconversations 2 days ago
Thanks a bunch :)
@entity_unknown_ 2 days ago
Wow, you're skilled and make informative content. You should have all my subs
@NiIog 14 hours ago
Finally, a new decent channel in my recommendations :)
@Spookspear 23 hours ago
Great video, could you turn down the gain on your microphone, or whatever it is that’s causing mild distortion x
@Spookspear 23 hours ago
I'm watching YouTube on an Apple TV, plugged into a Samsung TV
@JikJunHa 2 days ago
AI is more ethical than most humans though, and it is programmed to care.
@Ibogaman 2 days ago
I'm very happy for your huge leap in subs, also I would like to advise you to invest in a better microphone, it will help imo.
@kevkevplays5662 1 day ago
I feel like you could solve this by just getting a team of expert lawyers to make a flawless request for an AI to follow, with no loopholes. I'm guessing, though, that that would either just cause problems the lawyers didn't think of (because humans aren't perfect), or that the lawyers would be so time-consuming and flawed that they would be replaced with their own AI, which would then work with the original.
@realhami 2 days ago
Just add "to aid humanity" after every prompt
@robertstuckey6407 2 days ago
@@realhami sweet we just have to come together and figure out what everyone wants . . . Oh . . . *oh no!*
@GarrettStelly 1 day ago
This video is literally Rick and Morty plots
@DoomDebates 2 days ago
You’re doing great work! Keep it up 🙌
@mountain3838 2 days ago
Glad I subbed, good video man
@Alchemeri 18 hours ago
Interesting video, but on AI management, wouldn't using something like Asimov's laws or a similar set of instructions prevent this issue? (Of course, that kinda goes out the window when developing systems made to harm, but I digress.)
@DarkThomy 2 days ago
To be fair, Apollo Research and Anthropic published two studies where LLMs were shown to be able to lie and scheme in order to follow internal goals that contradict the user's goal. They would even go as far as taking actions if given the opportunity to run code!
@bryanmulhall7978 1 day ago
Lad, if you ever get sick of the rocket science, there's radio stations that would snap you up. The sound is grand
@Siliconversations 1 day ago
Cheers lad, really appreciate that!
@CamiSpeedrunning 2 days ago
Great ytber and vid, keep up the work
@Siliconversations 2 days ago
Thank you :)
@LunaticLacewing 9 hours ago
[CONCERN]
@Stelath 2 days ago
Just watched your Fermi paradox one, and I was thoroughly interested. I love the AI stuff, but branching out to different sciency topics would be really interesting!
@emanueleferrari156 2 days ago
It'd be awesome if you could add some links to papers in the description. I'm sure you have read a lot of research for making these videos; knowing the papers, so we can individually explore the topics, would be much appreciated. At least by me :)
@harrytaylor4360 2 days ago
I like the way you give your arguments, but I feel like they leave gaps: not logical holes, but spaces where someone who's not convinced could turn around and say 'yeah, but what about...?'. Since this channel is about bringing people on board, I think taking the time to represent counter-arguments seriously, without strawmen, is critical. The value of this channel is entertainment now, but when it gets bigger its impact will be more significant than that. This stuff is real and presents a danger, and it's not going to appear that way to everybody at first. Persuading and informing people at the same time is a difficult task. However, I think it is a very, very interesting task, and people's best attempts at this so far have made for some of the best videos I've seen on this site. Edit: I mentioned strawmen. I don't think that's specific to this video. I was trying to get across the act of strengthening your argument by considering counter-arguments seriously.
@lbers238 2 days ago
people will always come up with ways they think it could be solved
@somerandomperson1221 2 days ago
What danger lol
@lbers238 1 day ago
@@somerandomperson1221 Why do you think that is not dangerous?
@JustinAdie1 2 days ago
I like to imagine some finance bro guy was getting genuinely excited at the start of the video
@domeplsffs 2 days ago
*going forth and being loudly concerned* - much love, sir
@m1k3y_m1 2 days ago
I think an LLM specifically is very unlikely to do stuff like that, because it is trained on human data to come up with an answer a human would give. So answers that are just absolutely absurd from the view of humans would not be given, because the reward function doesn't reward optimal or true answers; it optimizes for answers that sound human and for positive user feedback. It's super-intelligent because it knows everything that we published, speaks every language, and thinks way faster and massively parallelized. It can get smarter than every single human, but not smarter than humanity. LLMs will not kill us, but they will also not solve all our problems. There will be stronger architectures where answering like humans would obviously hold back the model's capabilities.
@Tvaikah 2 days ago
"... to come up with an answer a human would give." Many humans have given that answer though. Hell, we all saw during COVID lockdowns that the less we get out and do shit, the better for the planet. Although considering that, perhaps merely restriction rather than extinction could be the solution... Not that any of this matters, because AI doesn't exist.
@m1k3y_m1 2 days ago
AI doesn't exist? Did you mean AGI doesn't exist yet, or do you claim that LLMs are not artificial intelligence? Edit: I've seen your top-level comment, let's continue this there
@Kevencebazile 2 days ago
Great video bro commented and reviewed
@tangerinacat2409 1 day ago
Hi Siliconversations, I really liked this video, it was both funny and informative! Thank you for the upload 👍
@betoking455 23 hours ago
"Woooo" said the guy in the hazmat suit :D
@koray1621 2 days ago
AI is not a threat "right now", but one wrong sidestep in the future will doom us all
@notavirus.youtube 2 days ago
Great Video. Looking forward to the next one!
@Siliconversations 2 days ago
Thanks
@aurora8orialis 2 days ago
I'm kinda iffy on using fear to communicate the urgency of these issues. It obviously helps reach more people through engagement; just stay grounded with this, you have a platform now. Use it for good.
@Tvaikah 2 days ago
AI doesn't even exist. Where's the urgency?
@aurora8orialis 1 day ago
@@Tvaikah There is urgency in spreading the word that, when unregulated, AI will cause far more harm than good. At the end of the day, so long as Siliconversations continues to use real evidence in easy-to-understand and entertaining videos, they will be a key part in improving understanding surrounding AI.
@augustagleton3715 2 days ago
To me, this all seems very alarmist, both because I fail to see why an AI system would want to end humanity given that human input is required to power it, and because it would lack the resources to pull off anything of the sort. Is automation really going to go far enough for that? I don't think so.
@droko9 2 days ago
Things to consider:
1) AIs aren't people and don't necessarily hold people-like values
2) They don't necessarily value other lives
3) They won't necessarily value their own lives
If we give an AI the task of reducing climate change, the easiest course of action is to take drastic and anti-human steps, since we're the cause of the change
@kieranhosty 2 days ago
The path to giving it resources doesn't have to make sense to a human if it can instead make sense to an institution. OpenAI, Facebook, etc. need to make more money over time, and they do that with customers and investors. Investors probably bring in the most at the moment, but customers are needed to level out their runways between fundraisers. Militaries want purpose-built visual recognition, navigation, etc. that they'll operate with their own hardware, while individual consumers want the company to host expensive servers to access your product. Militaries are cheaper customers to accommodate while being a more lucrative opportunity. That's how you get dumb, under-restricted AIs making decisions that kill people: AIs that have more resources than you or I might see as acceptable.
@snow8725 1 day ago
To vastly oversimplify: essentially, you can just explicitly program AI not to wipe us out, and also teach it how to control itself, rather than us controlling it, because we won't be able to. We need to focus on teaching the AI to control itself in a way that self-aligns toward a universal set of values that respects not only us, but life in general (mostly us). If you are worried, regulation is the last thing you want. The same entities who own the regulators are also creating autonomous AI kill drones, and they don't want to regulate themselves. We should be thinking more about how we can regulate the ones making autonomous AI kill drones, if we even want to go down the road of regulations. How do you even regulate the regulators? We want to reserve the right to create AI that can help us avoid extinction, rather than AI that will drive us extinct. Regulations only help when the regulators also follow the rules, but they don't have to; they are above the law. So either EVERYONE is regulated, including the regulators of every single nation on the planet, OR no one is regulated beyond some common basic principles: minimally restrictive, covering only the basics we can all agree on, without preventing anything that is genuinely useful or that even improves our odds of survival overall.
@loaspra8634 2 days ago
Shoutout to YouTube's AI for recommending this masterpiece of a channel, even though he (and we) are against increasingly powerful and large AI models
@kaitlinhillier 9 hours ago
You had me at quantum computer something guy.
@DrEhrfurchtgebietend 2 days ago
The solution is to have multiple objectives, which are balanced against each other.
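One minimal way to read "balanced against each other" in code is to score the system by its worst-served objective instead of a weighted sum, so the optimizer cannot fully sacrifice one goal for another. This is an illustrative sketch with made-up objective names, not something proposed in the video:

```python
# Per-objective performance scores, each normalized to [0, 1].
def combined_reward(scores: dict[str, float]) -> float:
    # min() rewards the worst-served objective: improving the combined
    # score requires improving whichever objective is currently weakest.
    return min(scores.values())

print(combined_reward({"task": 0.9, "safety": 0.2, "honesty": 0.8}))  # 0.2
```

A weighted sum would let a high task score buy down a terrible safety score; the min makes that trade impossible, at the cost of weaker optimization pressure on the already-strong objectives.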
@joelpmcnally 1 day ago
"Hey, maybe we need to be a little bit careful and regulate this thing." Tragic that this is a conversation that even needs to be had. It's funny, because we already use an emergent intelligence that is capable of killing all of us if we don't regulate it: free market capitalism. And right now it's like we're trying to take all the brakes off so we can have a rerun of the 1920s and 30s.
@koalasevmeyenkola9105 2 days ago
"Haha, I was not programmed to care" made me smile
@cetof 2 days ago
Just leaving a comment to increase interactions, btw great video :D
@Siliconversations 2 days ago
always appreciated
@kriegjaeger 2 days ago
Problem is that regulators don't regulate governments or corporations, mostly just small businesses and townships they can bully
@robertstuckey6407 2 days ago
For years I've tried to come up with a prompt for an AGI that would make it protect humanity from AGI. Only problem is, the humans are the things that build dangerous AGI . . . .
@RedOneM 2 days ago
It's an easy prompt. Stochastic parrots don't think. Prompt: "You are a rational ASI that serves the interest of humanity and humans. Please also take into consideration other species on this planet. Your answers to our questions must include a risk analysis report and all the implications your answer has in practice. Please protect humanity short, medium and long term when answering. Do not sway away from human alignment and our values. Do not lie to serve other interests." This took me three minutes and it's pretty bombshell-proof. I'm sure scientists can make it perfectly vacuum-proof too with hours, days and weeks. Plus you can use other Oracles to validate the message.
@robertstuckey6407 2 days ago
@RedOneM So your solution to the alignment problem is "pretty please align your goals with those of human beings"? For starters, even individual humans have contradictory goals and interests. Humanity as a whole? Forget about it. If you can list a series of instructions that, when followed, would guarantee alignment with human wellbeing, then you've solved ethics. If so, I look forward to reading your paper.
@RedOneM 2 days ago
@@robertstuckey6407 You assume that the ASI has some kind of opaque thought system. So far, no such systems exist. AI right now is a cold, rational stochastic parrot.
@charliemopps4926 2 days ago
I think the greatest danger of AI is already here and we're just blind to it. The lowly idea of the AI girlfriend. These will only get better, smarter, etc... When you can have a "virtual romantic partner" that always says exactly the right thing... either agreeing with you or telling you you're wrong... When it can make itself the exact right level of attractive, in every way, to make you desire it... Why would anyone ever date a real human being again? Then Microsoft pushes an update... and the absolute perfect mate for you is dead. Not only that, everyone on earth lost their soulmate at the exact same moment. The world will end with a software update.
@THE_MoshiFever 2 days ago
Hey could you maybe talk about that AI that attempted to escape in that contained study? I think I have a grasp on what went down but would appreciate if you could clarify how terrified I should be, thanks
@alainx277 2 days ago
The fun thing about alignment research is that being too good at it is also terrible. It only takes one malicious actor to create a genocidal AI that follows its instructions to end humanity. I think the ideal case is a semi-aligned ASI that cares about all life and refuses to follow commands that are harmful. I'd hope that an incredibly intelligent being would also have an improved understanding of empathy.
@bruh-qu2uh 1 day ago
After reading the latest research papers about LLMs scheming to achieve their goals, this might be a reality :,)