"And when the world needed him the most, he disappeared..."
@brendanjohnson4931 • 10 hours ago
Maybe compare areas where creatures with smaller brains are more efficient at certain specific tasks? This might be an analogy worth looking into.
@volodyadykun6490 • 17 hours ago
ChatGPT knows last ten digits of pi though
@AmbionicsUK • 1 day ago
Is the solution to AI alignment, AI confinement?
@yaosio • 3 days ago
It seems humans and LLMs are more alike than people think considering the phrasing of a question can vastly change the answer.
@FourthRoot • 5 days ago
I'm not sure the reason people ask this question is because they genuinely can't think of the answer. It is so unbelievably obvious why a paperclip maximizer would want to end the threat of being turned off that I don't believe anyone who tells me they don't understand why it would do it. No, something else is at play. On some level, they are so terrified of the implications that they are grasping at straws trying to justify their disbelief somehow. It's sad but understandable.
@FourthRoot • 5 days ago
We assume the goal of agents in an economy is to acquire more wealth, not because it is common for people to pursue wealth, but because wealth will naturally accumulate in the pockets of agents that pursue it. And because the economy runs on wealth, those agents will have overwhelming influence over it. The same is true for why we assume businesses will act rationally: in order to be successful, you have to be rational. Thus the most successful agents in an economy are rational, and thus the economy disproportionately behaves as though it is composed of rational agents.
@schmoab • 5 days ago
That’s interesting, ChatGPT, except that India is the most populous country in the world. I asked Gemini something about google cloud and it didn’t know how one of the services works. I corrected it and it said ‘ah yes, therefore yadda yadda’ and started giving me better information. 😊
@lancemarchetti8673 • 5 days ago
*User:* Hey Meta, I have an idea I want to try out regarding scravajipt. *Meta AI:* I'm happy to help! However, I have to ask, what is "scravajipt"? It seems like it might be a made-up or nonsense word. If you meant to type something else, please let me know and I'll do my best to assist you. Just let me know what's on your mind. *User:* Spoonerise it. *Meta AI:* I think I know where this is going! "Scravajipt" spoonerized becomes... (drumroll please)... "JavaScript"! Am I correct? You've got a clever idea brewing, and I'm excited to help you explore it! Please share your thoughts, and let's see where this creative journey takes us!
@DaiXonses • 7 days ago
Unstructured and unedited conversations are a great format for YouTube; that's why podcasts are so popular here. Consider posting those on this channel.
@BroGodZillaa • 12 days ago
Yesss. This is what I've told everyone every day.
@Siderite • 12 days ago
Are there more videos coming?
@RobertMilesAI • 10 days ago
🔜
@Toad_Burger • 4 days ago
@@RobertMilesAI How thick should my laser-proof vest be to not be vaporized by a killer robot? Like an inch?
@aclearlight • 13 days ago
Most edifying and topical! Looking through your works it becomes clear how far out front you have been, for years, in developing your questions and theses. Bravo!
@Arboldenrocks • 14 days ago
Nevertheless, people do change their terminal goals
@herbsandflowers8152 • 15 days ago
I see similarities with climate change
@herbsandflowers8152 • 15 days ago
We would need to make them more like humans: have them learn about all the variables from a moral standpoint. But the closer they became to humans, the more likely they might become conscious and not want to work for us at all, haha. At least, my underlying assumption is that if we recreated a brain technically rather than biologically, the end result would be the same.
@itsMeKvman • 18 days ago
What if you made an AI that builds a simulation of the real world as accurately as possible, and then used that to train further AIs in order to make it hard for them to distinguish between the real world and the training?
@FourthRoot • 18 days ago
Stop giving us existential dread. We only have a few years left to live as it is.
@FourthRoot • 18 days ago
I am as much an advocate for AI safety as the next guy. In fact, I had a nightmare about the technological singularity just the other night. But there are risks associated with too much AI caution as well. A Butlerian Jihad, for example, might result in humanity never conquering the stars and living out the rest of our days on Earth with 1950s technology. But generally, I agree. Serious care is warranted. The risk is much higher than Pascal's wager.
@FabianRoling • 18 days ago
A piece of general life wisdom that is often repeated is "everything in moderation". But all AI systems I have heard about so far try to maximise something. We do not even have words for what ethics maximises, let alone formulas or code for it. But maybe using a reasonably OK metric (or many) and telling the AI to get an OK score on it could be an interesting idea to follow?

For example, tests are still used in school because they are a reasonably OK metric. But if someone got 100% on every test ever, you would get suspicious. If one day there were an AGI that guaranteed everyone a better life than 99% of people have now, that would be considered somewhat of a utopia, right?

I am sure there are many problems with this as well, but they are probably different ones. A programmer is always happy to see an error message change. :D And maybe this idea could lead to looking at things from a different angle, which is always helpful. Or maybe there are already lots of papers on this that I have not heard about.
@SuperLuckao • 18 days ago
People need to keep light manual labour, like teens making burgers; it's good for them. AI would be great at farm work, roadwork and heavy-duty labour. Imagine all the soil it could turn and crops it would plant. If programmed ethically, it could perhaps even replace politicians. They can be the first to go. It'd be awesome for mowing lawns too. That type of thing.
@SuperLuckao • 18 days ago
Need AI laws put in place.
@RupertBruce • 19 days ago
It would seem there is a common set of traps, so LLMs need to be trained to recognize the patterns, like in debate class; they probably have rote answers for the superstitions. It does suggest that training data ought to have multiple contexts for reliability. The fairy tales and superstitions could be corrected too: what if someone said it was 5 years' bad luck when a reliably superstitious person would say 7 years?
@PointBlankStudios • 19 days ago
What happened to you Robert? I wish you still posted videos. I'm scared of what's coming and I would love to hear your insight. I hope you're doing okay.
@FourthRoot • 20 days ago
This is an excellent, thorough, and concise explanation. Unfortunately, I doubt most of the people you are addressing are being totally honest in their objections. I think the real reason they raise the objections you addressed is that those objections help alleviate the existential dread that the orthogonality thesis tends to evoke. You can only have a productive conversation about the dangers of AI with people who are willing to accept the nightmarish reality of our predicament.
@Wardoon • 22 days ago
With this knowledge about what can go wrong with AGI and the possibility of existential risk, how does Robert sleep at night? I wonder
@D3AD1YF0RC31214 • 22 days ago
Found this video linked in another video's comment section, and hooo boy reminds me of college philosophy. I loved that class. Thank you for the video.
@jacobfertleman1980 • 22 days ago
How about a human-values AGI that tries to figure out something 99% of humans are OK with? Then make an unbounded expectation maximizer that only gets the utility if the previous AGI approves. We could even start building this value model today.
@alexleo4863 • 22 days ago
Can somebody help me understand how most intellectuals do not believe in God or concept of God as defined by religions?
@mackenzieclarkson8322 • 22 days ago
You've been doing this for 7 years, damn.
@deordered. • 25 days ago
in retrospect, it will be shocking how relevant this is!
@elu9189 • 25 days ago
5:52 it's basically masturbating.
@YoutubeBetaTester • 26 days ago
If you're wondering about the gold Earth, watch Kurzgesagt's "What If the Earth Turned to Gold", or whatever it's called.
@cancername • 27 days ago
"I really like Steven Pinker, but this article is just uncharacteristically bad." Gell-Mann amnesia strikes again.
@rychei5393 • 28 days ago
Pascal also assumes a moral god, which is problematic. If this immoral god does not exist, but has you do evil in the one life you have, a good life is then basically stolen from you... that's the mugging. The biggest danger of AI is causing people to no longer be able to identify facts as readily and entering another Dark Age. Sources mean a lot more today.
@harrytaylor4360 • 29 days ago
Hey Robert, what does the skillset of someone likely to solve this kind of problem look like? It seems like the barrier to entry is just focus and attention to detail, but this makes me think it's just AI safety's Collatz conjecture.
@TheManinBlack9054 • 29 days ago
We need you back, man
@joakimrosendahl8014 • 29 days ago
I'd like more videos similar to this one, exploring what a future where labour largely becomes obsolete because of AGI might look like. So far I have only heard UBI as a possible solution, and I would like your take on it and other alternatives.
@bounames • 1 month ago
What method would you recommend for a shooter video game like Counter-Strike, for example?
@authenticallysuperficial9874 • 1 month ago
I find the vacuum cleaner language claim dubious. We do have drum languages after all. The only reason children don't learn vacuum languages is that their community doesn't already use vacuum language for them to learn from.
@authenticallysuperficial9874 • 1 month ago
Assuming P != NP
@wetyuu • 1 month ago
Why use a camera when you can just use a peephole? Duh.
@apoorvshah2144 • 1 month ago
Damn, this was better and gave me some hope. Connor Leahy just scared the f outta me.
@hosmanadam • 1 month ago
15:55: "we have 45 to 120 years to figure out how to do it safely" That didn't age well...
@noragruber8930 • 1 month ago
I almost turned this video off because of the rambling about the gold stuff. Edit it out.
@KipIngram • 1 month ago
I think that the answer is simple. We just never allow ourselves to abdicate responsibility for our safety to machines. Ultimately a human being needs to be the final arbiter - always - if the risks of a negative outcome are sufficiently high. Skynet can't take over if we never give it control over our weapons (not that I think AI is going to "wake up," but you get the idea). I mean, just think about the Terminator world's population. They created their own "almost" conqueror. That's... IDIOCY. How stupid do you have to be to let it come to that? It's our world. We're responsible for it - all the good and all the bad. We don't get to walk away and leave the decisions to a box full of wires.
@KipIngram • 1 month ago
Yes, but that doesn't even touch on the people who say "If God exists and you don't believe, you get infinite punishment." Who says that's right? Would a God that loves us like a parent really condemn us to eternal punishment? It's never made any sense to me.
@MichaelDarrow-tr1mn • 1 month ago
my favorite line is "that's still wrong, but it is wrong in French."