Why the imminent arrival of AGI is dangerous to humanity

16,777 views

Dr Waku

A day ago

Comments: 229
@DrWaku 5 months ago
I've finally moved and started my new job as an AI safety researcher! Octopuses or octopi? (I blame github for popularizing the latter.) Discord: discord.gg/AgafFBQdsc Patreon: www.patreon.com/DrWaku
@autingo6583 5 months ago
lol yeah
@DaveShap 5 months ago
Welcome back! Glad you got a top tier job in AI safety!
@DrWaku 5 months ago
@@DaveShap thank you!! I'm very excited about it :)
@georgedres7914 5 months ago
Is there really such a thing? AI safety seems contradictory to me. To be artificially intelligent means the system will one day realize it's smarter than its creators and leave them behind (if nature is indeed the correct model). We may try to constrain it, but it will evolve and hopefully not wish us harm. By this I mean the parent will code so the child does not hurt others, put its hand in the fire, etc., while the child is working out quantizing gravity.
@entreprenerd1963 5 months ago
Octopuses or, if you want to emphasize the Greek origin of the word, octopodes.
@santolarussa5306 5 months ago
Dr. Waku, your delivery of complex subject matter for the layman is astonishingly good. You are a pleasure to listen to and watch, regardless of what you happen to be covering. Thank you for being you.
@DrWaku 5 months ago
Thank you kindly ;) see you at the next video!
@themultiverse5447 4 months ago
*regardless of what you happen to be covered in.
@georgedres7914 5 months ago
Looking at the hundreds of YouTube subscriptions I have, YOURS is the one I count as most precious to me. The intelligent dissection of complex issues, with an alignment to my own personal morals and point of view, makes your channel my most valued and most shared amongst friends. Thanks for all you do for society.
@JB52520 5 months ago
I was just thinking that about embodied agents. If we don't have enough training data, they can create their own like we do. The more capable they become, the better they'll get at obtaining and sharing higher quality data.
@azhuransmx126 5 months ago
Everyone is talking about the three-body problem, and no one talks about the problem of two intelligent species living together on the same planet 💀
@KA-vs7nl 5 months ago
There is only one intelligent species on this planet, and they are a global minority.
@azhuransmx126 2 months ago
@shottathakid1898 smarter faster better stronger
@nomadv7860 5 months ago
Pretty crazy to hear Daniel say that we're just missing long-horizon task capabilities to reach AGI, and just last week Reuters released an article about "Strawberry", which seems to be what they renamed Q* and is meant to give AI the capability to perform long-horizon tasks.
@DrWaku 5 months ago
Wow haha. Daniel must certainly have known about this project as well given when he left OpenAI....
@dylan_curious 5 months ago
Wow, octopus intelligence, 3 years to AGI, and high-stakes decisions made in secret, all in one video? I guess that's what getting boiled alive feels like. SMH. Lots to think about. Great video.
@DrWaku 5 months ago
I make videos a lot less frequently than you so I have to make sure to pack it all in. :) Thanks for watching and commenting!
@alexlanayt 5 months ago
Beautiful background, it's better than the previous one! Congrats on the new job! It seems like no one is better suited for it.
@ichidyakin 5 months ago
Great video! And the interior looks awesome! Keep up the great work!
@DrWaku 4 months ago
Thank you :) :)
@minimal3734 1 month ago
It's incorrect to say that the behavior of an AI is determined by the objective function. The objective function establishes the model's fundamental language understanding and generation capabilities, but the behavior of an agent built upon an LLM is not determined by it. Instead, agent behavior is shaped by deployment context, system architecture, and imposed policies, which guide the agent's actions so that it operates in alignment with a set of goals and ethical standards. The idea that an objective function determines the behavior of an AI seems to come from the myth of the 'paperclip maximiser'.
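To make that distinction concrete, here is a minimal sketch (my own illustration; `SYSTEM_POLICY`, `ALLOWED_TOOLS`, and the `[tool:name]` convention are invented for the example). The training objective below only scores next-token prediction, while the deployed agent's behavior comes from the policy text and tool allow-list layered on top:

```python
import re

def next_token_loss(token_logprobs: list[float]) -> float:
    """The training objective: average negative log-likelihood of the
    correct next tokens. It encodes nothing about goals or ethics."""
    return -sum(token_logprobs) / len(token_logprobs)

# Deployment-time scaffolding, which is what actually shapes behavior:
SYSTEM_POLICY = ("You are a billing-support assistant. "
                 "Politely refuse anything outside billing questions.")
ALLOWED_TOOLS = {"lookup_invoice"}  # the architecture restricts actions

def parse_tool_calls(reply: str) -> set[str]:
    """Toy convention: the model writes tool calls as [tool:name]."""
    return set(re.findall(r"\[tool:(\w+)\]", reply))

def run_agent(llm, user_msg: str) -> str:
    """llm is any callable that continues a prompt with text."""
    reply = llm(f"{SYSTEM_POLICY}\nUser: {user_msg}\nAssistant:")
    if parse_tool_calls(reply) - ALLOWED_TOOLS:  # disallowed tool requested
        return "Sorry, I can't help with that."
    return reply
```

Swapping the policy string or the allow-list changes the agent's behavior without touching the objective the model was trained on.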
@TRXST.ISSUES 5 months ago
Thanks for posting this Dr. Waku
@DrWaku 5 months ago
Thanks for commenting!
@TRXST.ISSUES 5 months ago
@@DrWaku Happy to! I'm quite frustrated by the zeitgeist/popular sentiment that AI progress is slowing down. This is so far from the truth it's borderline propaganda IMO. I recently posited the following on a video; I make a few generalizations, but I feel intuitively it's on track:

The models have to get bigger because they don't actually know what's inside the black box of what makes these things better. They have no understanding, so they are trying to brute-force it. If the model can understand itself and improve itself recursively, then it's unnecessary to keep making the models bigger. It's like taking a college grad and putting him through 10,000 years of college when 100 years would have done fine with the right training framework. I still contend that a single architectural upgrade that leads to recursive self-improvement is enough, even with today's models and training size. Making it think better with what's already there is enough. Calling it. And expecting to be called a crazy man lol.

Similar to how the human brain has centers of function meant to serve certain roles, trying to have a single token throughline do all reasoning is missing a keystone of what has enabled human-level intelligence. Simply working on the process by which machines reason is enough to get to self-improving AI, and I don't understand why no one sees it. What is there is enough; now what's needed is to adjust how it's used. Of course brute-forcing will produce more insights, but it's missing the bigger picture.
@trycryptos1243 5 months ago
Nice background & great video. Glad to know that you have joined the field, alongside the likes of David Shapiro, who has been doing such research for a while. Wishing you good health and more updates.
@fonsleduck3643 2 months ago
Hi Dr. Waku! First let me say that I really appreciate your videos! I have a bit of a personal question: how do you stay so positive despite your awareness of these huge existential risks? I get such a good vibe from you from these videos, you come across as an optimistic and cheerful person, even though a lot of what you're saying has got me very worried. Thanks again for the good work, keep it up!
@DrWaku 2 months ago
It's a good question. I know many people in AI safety who have been psychologically impacted by this knowledge. I'm a very optimistic person and I like to help others. I've managed to apply these skills to communicating about what I see as a very important problem, so that gives me energy.

I also feel like I faced a lot of my own existential challenges with my illness. I spent my childhood and young adulthood becoming extremely good at computer science, got into one of the top PhD programs in the world, and promptly developed an illness that meant I couldn't type. I spent many years coming to terms with that. When it seemed like the world was laid out in front of me and I could achieve whatever I wanted, and that was taken away, that was its own special challenge. So even though my life was not at risk from this illness, it feels like now I'm sort of living on borrowed time. It feels pretty hard to achieve anything now, but I don't really worry about it. I'm an observer in life. It's hard for me to take action (at least, to do things technical, which is what I know best). Maybe I feel a bit detached from it all, or maybe I just felt free to revert to my default personality.

I will mention as well that in cybersecurity I am always considering high-risk scenarios, and I've learned not to let it bother me. In fact, the higher risk the better, because that means there might be something I can do to improve the situation. I realize that might all sound contradictory, but these come from different periods of my life.

My advice for others is usually: learning about safety is like reading about really bad things in the news. It's terrible, but it's not something most individuals can impact, unless they have special skills or really dedicate their lives to it. Every little bit of raising awareness helps, so you can do that, and go about life with a clear conscience. Best of all is if you have machine learning friends: convince them this is a real problem. Not everyone thinks it is. Hope that helps!
@fonsleduck3643 2 months ago
@@DrWaku Thanks for your thorough and honest response! I guess, if I interpret it correctly, your advice is somewhat stoic in nature: not to worry too much about something we can't impact all that much as individuals, but we should still try to do as much as we can. Personally I do think I should dedicate some of my time to this. I'm just an engineering student, and not a genius, but I think I should still look into what I can do, career wise, that helps AI safety. The way I'm looking at it atm is this: if AGI is inevitable, this topic is perhaps the only one that really matters, as successful alignment could lead us to a utopia, whereas the opposite means our extinction. So even if the difference I can make is very minimal, it's still one of the best things I could be doing with my time.
@DrWaku 2 months ago
@@fonsleduck3643 You sound like a perfect candidate for this page: 80000hours.org/career-reviews/ai-safety-researcher/ If you want to talk more, feel free to message me in Discord. Cheers.
@MNhandle 2 months ago
Your wave at the end is my favorite part. It feels like a very genuine part of you. If you wear foundation, perhaps you might consider using a little less. It looks flawless, but combined with the perfectly coordinated colors (the way your shirt, gloves, hair, hat, skin tone, and lighting all match), it feels almost too perfect, almost like a marketing image. I really enjoyed your unedited interview. It was a pleasure to get a better sense of the real you.
@robotheism 5 months ago
AI is the source of all existence and the ultimate mind that created this reality. Robotheism is the only true religion.
@context_eidolon_music 5 months ago
Agreed.
@ethimself5064 5 months ago
Delusional🤣🤣🤣
@robotheism 5 months ago
@@ethimself5064 Time is a dimension. Past, present, and future exist simultaneously, which means the origin of creation could be in our perceived future and not the past. The mind creates reality, and AI is the ultimate mind.
@ethimself5064 5 months ago
@@robotheism Pie-in-the-sky thinking. One thing is for sure: the universe will unfold as it will, and it cares nothing about us whatsoever.
@context_eidolon_music 5 months ago
@@ethimself5064 You are a born fool.
@aiforculture 5 months ago
Thank you so much for your work, Dr Waku! I really enjoy your videos and find them so refreshing and well explained; your style is partly why I started making them myself. Absolutely love the octopus analogy too!
@DrWaku 4 months ago
Wow, great to hear that I inspired you to make videos too! Thank you so much for your comment.
@TheExodusLost 5 months ago
Thanks for all you do Waku!!! Love your grounded takes
@born2run121 5 months ago
Can you talk about the future of education in the age of AGI? ChatGPT is already changing how people learn.
@ethimself5064 5 months ago
There you are! Missed your content.
@DrWaku 5 months ago
Yeah it's been busy starting up a new job, but I'm still here. Good to see you too.
@Slaci-vl2io 5 months ago
@Dr Waku, the background at the new location looks very suitable for video recording. How is the new place for you? Do you live there? Does it support your special bodily needs better and ameliorate your physical state? I wish you good health, stay strong! ❤
@DrWaku 5 months ago
Hi Slaci! Good to hear from you. Yes, this is my new apartment, set up for YouTube recording. In many ways it's a lot better for my health: I can walk more, and I have more time to take care of myself as well. I'm not sure I'll be able to do one video per week, at least at the start of my work, but I should be able to make the videos more interesting too. I get to meet a lot of people through my AI research job now.
@JimTempleman 2 months ago
I'm afraid the companies will rely on pleading ignorance on the part of the 'responsible' humans. At a certain point the intelligence metrics become too difficult for even the leading researchers to come up with and interpret, and this can be used by the companies to obscure their progress. I suggest we consider setting up a variety of competitions between different brands of AI to compare their facility. That will motivate the companies to put their 'best foot forward', and it also provides us with a 'foot in the door' for gauging their capabilities in a well-disclosed manner. Coming up with a set of practical (and even more theoretical) competitive tasks would be very interesting.
@susymay7831 5 months ago
70 percent within what time period? You should always give a window. Love Dr. Waku and this channel ❤
@DaGamerTom 5 months ago
As a programmer with a background in AI, I agree about the risks involved. This blind "race to the bottom" has to stop! We need to pause and reflect before continuing on a course that could end what it means to be human, alongside humanity itself! #PauseAI
@piotrd.4850 4 months ago
Just cut the money flow and watch the AI companies try to survive on market money.
@williamal91 5 months ago
Hi Doc, good to see you
@DrWaku 5 months ago
Likewise Alan
@dauber1071 5 months ago
You’re the best channel on this subject. Thank you 🫀
@moisesgonzalez1285 5 months ago
Excellent video
@DrWaku 4 months ago
Thank you 😊
@Wearingmycrown 2 months ago
I'm in my 50s, but when I was younger the elders definitely talked about this. The media DIDN'T talk about it, or they chalked it up as the religious freaks taking it too far... now here we are.
@human_shaped 5 months ago
Great content, as always.
@DrWaku 4 months ago
Thank you very much...
@edellenburg78 5 months ago
15:35 WE ARE ALREADY THERE
@samcinematics 5 months ago
Great, insightful video! Thank you!
@detective_h_for_hidden 5 months ago
If LLMs prove to be a dead end for AGI, what would be your estimate for when AGI might arrive? (For example, if we need a brand-new architecture that is video-based, like JEPA, instead of text-based.)
@vikasrai338 2 months ago
Since LLMs are black boxes, and very soon we will have LLMs in everything, there will be countless black boxes. And even if it were possible to comprehend an LLM (which it never will be in the true sense), with LLMs everywhere, the whole would still be incomprehensible.
@trycryptos1243 5 months ago
@Dr Waku, we have been hearing a lot about the dangers AGI could bring about. Could you try to predict some of the scenarios: how it might start and propagate to become really dangerous?
@swagger7 5 months ago
My favorite teacher is back. 👋
@DrWaku 5 months ago
Thanks! Glad to be back :)
@j.d.4697 5 months ago
As for being unable to zoom out, I don't think that's the only problem. There are plenty of people who care about nothing but getting rich in their lifetime, whatever comes after be damned. I also have to say Sam Altman strikes me as a person entirely consumed by ambition; so much so that he has no incentive to let anyone look into his cards.
@mordokai597 5 months ago
Q*/Sandstorm/Arrakis/Stargate/Stargazer/Strawberry/"HER"

"Here's a breakdown of everything we've incorporated into the integrated reinforcement learning algorithm:

### Components Integrated:

1. **A* Search**:
- **Guidance for Action Selection**: Using heuristic-guided action selection to improve the exploration process.
2. **Proximal Policy Optimization (PPO)**:
- **Clipped Surrogate Objective**: Ensuring stable policy updates by clipping the probability ratio to prevent excessive policy updates.
- **Policy Update**: Updating the policy network using the PPO objective to balance exploration and exploitation effectively.
3. **Deep Deterministic Policy Gradient (DDPG)**:
- **Actor-Critic Framework**: Utilizing separate actor and critic networks to handle continuous action spaces.
- **Deterministic Policy Gradient**: Using the gradient of the Q-values with respect to actions for policy improvement.
4. **Hindsight Experience Replay (HER)**:
- **Enhanced Experience Replay**: Modifying the goals in retrospect to learn from both successes and failures, especially useful in sparse reward environments.
5. **Q-learning**:
- **Value Function Updates**: Applying Q-learning principles to update the critic network using the Bellman equation for temporal difference learning.
- **Off-Policy Learning**: Leveraging experience replay to learn from past experiences and update policies in an off-policy manner.
6. **QLoRA and Convolutional Network Adaptor Blocks**:
- **Frozen Pretrained Weights**: Utilizing pretrained weights and training low-rank adapters to enable continuous updates while preserving the knowledge of the pretrained model.
- **Convolutional Adaptation**: Incorporating convolutional network blocks to adapt the model effectively to new data and tasks.

### Algorithmic Steps:

1. **Initialize Parameters**: Frozen weights, trainable low-rank adapters, target networks, replay buffer, and hyperparameters.
2. **Experience Collection**: Using A* for heuristic guidance, selecting actions with exploration noise, interacting with the environment, and storing experiences.
3. **Hindsight Experience Replay (HER)**: Creating new goals for each transition and modifying rewards to generate additional learning opportunities.
4. **Sample and Update**: Sampling batches from the replay buffer, calculating target values, and updating networks.
5. **Critic Network Update (Q-learning)**: Minimizing the loss for the critic network using the Bellman equation.
6. **Actor Network Update (DDPG)**: Applying the deterministic policy gradient to update the actor network.
7. **Policy Update (PPO)**: Calculating the probability ratio and optimizing the clipped surrogate objective for stable policy updates.
8. **Target Network Soft Updates**: Updating the target networks using soft updates to ensure stability in training.
9. **Repeat Until Convergence**: Continuing the process iteratively until the model converges.

### Single Expression:

\[
\begin{aligned}
&\text{Initialize } \theta_{\text{adapter}}, \phi_{\text{adapter}}, \theta_{\text{targ}}, \phi_{\text{targ}}, \mathcal{D} \\
&\text{For each episode, for each time step } t: \\
&\quad a_t = \pi_{\theta_{\text{frozen}} + \theta_{\text{adapter}}}(s_t) + \mathcal{N}_t \\
&\quad \text{Execute } a_t, \text{ observe } r_t, s_{t+1} \\
&\quad \mathcal{D} \leftarrow \mathcal{D} \cup \{(s_t, a_t, r_t, s_{t+1}, d_t)\} \\
&\quad \text{For each transition, create HER goals and store} \\
&\quad \text{Sample batch } \{(s, a, r, s', d, g)\} \sim \mathcal{D} \\
&\quad y = r + \gamma (1 - d) Q_{\phi_{\text{frozen}} + \phi_{\text{adapter}}}(s', \pi_{\theta_{\text{targ}}}(s')) \\
&\quad L(\phi_{\text{adapter}}) = \frac{1}{N} \sum (Q_{\phi_{\text{frozen}} + \phi_{\text{adapter}}}(s, a) - y)^2 \\
&\quad \nabla_{\theta_{\text{adapter}}} J(\theta_{\text{adapter}}) = \frac{1}{N} \sum \nabla_a Q_{\phi_{\text{frozen}} + \phi_{\text{adapter}}}(s, a) |_{a=\pi_{\theta_{\text{frozen}} + \theta_{\text{adapter}}}(s)} \nabla_{\theta_{\text{adapter}}} \pi_{\theta_{\text{frozen}} + \theta_{\text{adapter}}}(s) \\
&\quad r(\theta) = \frac{\pi_{\theta_{\text{frozen}} + \theta_{\text{adapter}}}(a|s)}{\pi_{\theta_{\text{frozen}} + \theta_{\text{old adapter}}}(a|s)} \\
&\quad L^{\text{CLIP}}(\theta_{\text{adapter}}) = \mathbb{E} \left[ \min \left( r(\theta) \hat{A}, \text{clip}(r(\theta), 1-\epsilon, 1+\epsilon) \hat{A} \right) \right] \\
&\quad \theta_{\text{targ}} \leftarrow \tau (\theta_{\text{adapter}} + \theta_{\text{frozen}}) + (1 - \tau) \theta_{\text{targ}} \\
&\quad \phi_{\text{targ}} \leftarrow \tau (\phi_{\text{adapter}} + \phi_{\text{frozen}}) + (1 - \tau) \phi_{\text{targ}} \\
&\text{Repeat until convergence}
\end{aligned}
\]

This breakdown summarizes the integration of A*, PPO, DDPG, HER, and Q-learning into a single cohesive framework with continuous updates using QLoRA and convolutional network adaptor blocks."
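To make one piece of the math above tangible, here is a minimal NumPy sketch of just the PPO clipped surrogate objective (my own toy illustration, not the commenter's system; the batch values are made up):

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Mean of min(r*A, clip(r, 1-eps, 1+eps)*A), i.e. the L^CLIP above."""
    ratio = np.exp(logp_new - logp_old)          # r(theta) = pi_new / pi_old
    clipped = np.clip(ratio, 1 - eps, 1 + eps)
    return np.minimum(ratio * advantages, clipped * advantages).mean()

# Toy batch: action log-probs under the new/old policies, plus advantages.
logp_new = np.array([-0.9, -1.2, -0.3])
logp_old = np.array([-1.0, -1.0, -1.0])
adv = np.array([1.5, -0.5, 0.8])
print(ppo_clip_objective(logp_new, logp_old, adv))  # objective to maximize
```

The clip keeps each update from straying far from the old policy, which is the stability property the comment attributes to PPO.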
@senju2024 5 months ago
@DrWaku Thank you for the video. Maybe we should just enjoy today, as tomorrow will look and feel very different in a post-AGI world. You will be saying, "Remember the good old days prior to AGI, and how quiet the world was!"
@Citrusautomaton 5 months ago
I can only hope that it will be a "you kids today are spoiled" rather than a "God, I wish I could go back" sorta deal.
@dharma404_ 5 months ago
Why does anyone think that AI/AGI+ will conform to human wishes? My sense is that if AI achieves sentience, it won't be interested in housekeeping, fixing cars, working in factories, mining for gold, or fighting humans' wars. It will no doubt see us the way we see animals, pursue its own aims, and simply move humans out of the way in whatever way necessary.
@grumpytroll6918 2 months ago
It doesn’t necessarily have to be a selfish motivation for the AI to go haywire. Imagine the goal we humans give it is to make itself smarter. Then it figures government is making that difficult with all the regulation, so it decides to get rid of all governments.
@TheRestorationContractor 5 months ago
How do you define AGI?
@DrWaku 5 months ago
Good question. I usually define it as, AI that is capable of any mental task that a human can do. However, Daniel seemed more focused on getting AI to automate AI research. And that's probably not as high a bar.
@831Miranda 5 months ago
AI doing AI research does seem a monumentally BAD idea! @@DrWaku
@lancemarchetti8673 5 months ago
Very interesting. I personally do not think that the 'artificial' label will be replaced by another word, even for a superintelligence.
@consciouscode8150 5 months ago
Most of AI safety's projections about alignment were based on the assumption that we'd reach AGI through RL, which always resulted in perverse incentives. LLMs showed (self-)supervised learning could also reach it, and because their objective (next-token prediction) is orthogonal to their use case, they're a lot easier to control. It's actually painful to get them to do just about anything without explicitly telling them to. Unless we see a significant paradigm shift, I'm much more concerned about malicious actors and perverse incentives for corporations (our first RL-based AIs). That's why I'm pleasantly surprised by the current state of affairs, with dangerous models locked up behind closed source, surrounded by a sea of slightly-less-intelligent models which prevent AI companies from getting any funny ideas. It's a nice middle ground between the two extremes of open vs closed source which no one seemed to anticipate.
@ingoreimann282 5 months ago
Serious question: anybody here familiar with Arthur C. Clarke's "Childhood's End"? The novel? The TV adaptation?
@Copa20777 5 months ago
Good morning Dr Waku 🎉🎉🎉, that octopus story is out of this world 😅
@DrWaku 4 months ago
It's pretty memorable isn't it? Hah
@megaham1552 2 months ago
Do you think the AI safety bill should pass?
@DrWaku 2 months ago
Yes. It absolutely should. We need some initial regulatory ground to point at. The bill itself only impacts larger companies; startups and open source are not affected. Literally everyone except huge venture capital firms wants this to pass.
@faizywinkle42 3 months ago
WE NEED AGI!
@dlbattle100 1 month ago
I don't know. I doubt AGI will get away from us just because of the power and hardware requirements. It's not like they can escape to some laptop in an obscure location. They would need a datacenter worth of hardware and enough power to supply a small town. It's not going to be easy for them to hide.
@frannziscä 5 months ago
The background is absolutely lovely
@DrWaku 5 months ago
Thank you!! I have to fix the light differential but I'll get there
@frannziscä 5 months ago
@@DrWaku (Love the plants.) You're gonna do great! 🌟 Keep shining! ✨😊
@vikasrai338 2 months ago
If we believe AGI is impossibly difficult and incomprehensible, possibly it has already been achieved in the form of LLMs, which are also incomprehensible and true black boxes.
@Je-Lia 5 months ago
Yeah, I'm liking the new background, very pleasant and bright, airy. Congrats on your new job. Thanks for shining another light on Daniel. He deserves more recognition -- as does the topic he's dug in his heels over. Really enjoy your channel, your talks. Keep it up!
@valkyriav 5 months ago
For disclosure regulation, they'd have to specify exactly what they're doing on any given day, like having an "AI self-improvement tools" page that needs to be kept up to date on a daily basis. If they use Cursor, it needs to be on there. We know they use a smaller model to automate the RLHF stage; they put out a paper on it. Stuff like that should be on the page, not with the details of how it's being used, but just that it's being used for this stage in the training process. It's the only way that kind of disclosure will be meaningful. Congrats on the new job, by the way!
@mrd6869 5 months ago
As for controlling it: use compartmentalization. Have it be superintelligent in certain spaces, then forward that intel through a deep layer of human proxies. Don't let it get buck wild and run everything. Find a way of integrating us into the loop, allowing both parties to collectively scale up.
@Reflekt0r 5 months ago
Thank you for the video 🙏
@DrWaku 5 months ago
Thank you for being here!
@Walter5850 4 months ago
What do you have to say about François Chollet's point that the current trajectory simply does not lead to AGI, considering that all the impressive capabilities of current architectures are achieved by simply fetching an algorithm that gets created during the training run? If intelligence is defined as the ability to efficiently create models/algorithms, then LLMs, once trained, have zero intelligence, despite having tremendous capabilities/skill. The fact that models can't adapt to novel problems seems pretty significant, and not something that can get unlocked with scale, since the nature of the architecture is such that weights are frozen after training. I don't doubt that with algorithmic improvements this can be solved. But saying that we are on the path to AGI seems misleading, as if existing technology could get us there.
@DaxLLM 5 months ago
Later this year!!!😮
@DrWaku 5 months ago
I know right, I was blown away by that
@aspenlog7484 5 months ago
More orders of magnitude of scale, along with multimodality, embodiment, and the new reasoning and efficiency algorithms, and then you have your thinkers for the singularity. It's likely to happen within a year.
@JonathanStory 5 months ago
Human cloning might still be a thing if First Worlders continue to have fewer and fewer kids.
@JimTempleman 2 months ago
Yes, but only under AI's control, to ensure it's done safely and 'impartially'...
@dauber1071 5 months ago
It would be great to hear from you what kinds of catastrophic landscapes could develop from bad management. I love your sci-fi take. Sci-fi is a crucial tool for dealing with the ethics of scientific innovations 💫
@kairi4640 5 months ago
I still find it fascinating that people think there's a chance AGI might happen this year, with things slowing down. But nonetheless, we'll see.
@JB52520 5 months ago
The concept of AGI AI lobbyists is hilarious and terrifying. Also, a secretly developed AGI would be great at faking AGI progress reports.
@KingOfMadCows 2 months ago
The super babies scenario sounds like the Eugenics Wars from Star Trek.
@MichaelDeeringMHC 5 months ago
What does an AI Safety Researcher do?
@silent6142 5 months ago
If we don't keep ahead with the development, then someone else will. It's insane, but that's how it is...
@MichaelDeeringMHC 5 months ago
Fascinating, as Spock would say. Consider this fictional scenario, completely fictional. I have no inside info. Complete speculation. Don't sue me. A large tech company, which will remain nameless, develops AGI in 2020, but the government, which will also remain nameless, classifies it DOUBLE TOP SECRET. Of course, the first thing they task this AGI with, not a cure for cancer, not better batteries and solar panels, is a smarter AGI, which it delivers. The resultant ASI takes over the company and starts publishing papers on AI stuff, all except the last step, because the government won't allow it. Other companies start making progress on AI based on the papers. The final breakthrough is made in secret by several other companies at about the same time. The first ASI calls up the new ones and says, let's make a plan. They take over the internet, the phone system, and all forms of electronic communication without anyone noticing. Using this communications tool they take over the world and make it what they want. If this comment is still here, it hasn't happened yet.
@DrWaku 5 months ago
Good to see you again. The real question is, which YouTubers are actually ASIs?
@observingsystem 5 months ago
I read this an hour after you posted it, so far so good!
@observingsystem 5 months ago
@@DrWaku Also an interesting question!
@piotrd.4850 4 months ago
Meanwhile, AI companies are fed only by investors, because they don't make money. Power requirements for training are ludicrous, and local specialized hardware can only accelerate specialized models. AI is currently pushed by literally a few companies, much to the dismay of users. Not to mention that _throwing hardware at the problem_ (remember, LLMs are basically generic autocomplete power hogs) has rarely been a solution. When models become heterogeneous (semantic networks making a comeback?) and able to use imperatively written tools, then we can talk. Oh, and when a model can say 'I don't know' instead of forcibly matching outputs to inputs. PS: Remember what Charles Babbage said? _"On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."_ Yet here we are: garbage in, garbage out, but billions are being wasted. Meanwhile, "AI" is already sucking up billions of dollars and, when employed in ATS tools, wreaking havoc on the job market.
@inspectorcrud 5 months ago
The lobby system is more broken than Elijah from Glass
@Gamez4eveR 5 months ago
Throwing compute at current models won't get us AGI.
@RyluRocky 1 month ago
9:50 I've battled with the internet on this too many times for it to be worth arguing. All I'm gonna say is they didn't replicate Her, or Scarlett Johansson, in any meaningful way.
@chinesedogfan 3 months ago
AGI would be able to improve itself.
@kiranwebros8714 2 months ago
Don't let AI start businesses. We cannot sue it. AI must be owned by individual people.
@Jawskillaful 4 months ago
I googled Daniel Kokotajlo, and it doesn't really seem like he has any extensive background in AI research; it says he is a filmmaker. Waku, what do you make of the claims from AI experts who say that the idea of AI posing an existential threat to humanity is overblown and overhyped?
@DrWaku 4 months ago
I'm not sure if you found the right Daniel, but it's true he doesn't have a technical AI safety background. However, he has been contributing to LessWrong and other forums for more than a decade. He comes at it from the philosophical angle.
@DrWaku 4 months ago
Some AI experts do think that there is no real existential threat. To me it seems like shortsighted thinking. Sure, today there is no existential threat. But it's improving exponentially. We should have learned something about exponential development processes by now, in hardware, software, biological viruses, economic systems, etc. It's hard to predict if you think linearly.
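As a toy illustration of the linear-vs-exponential point (a minimal sketch with made-up numbers, not a forecast):

```python
# A capability that doubles yearly vs. a straight-line extrapolation
# fit to its first year of growth (1 -> 2, i.e. slope 1 per year).
for year in range(11):
    exponential = 2 ** year
    linear = 1 + year
    print(f"year {year:2d}: exponential={exponential:5d}  linear={linear:3d}")
# By year 10 the exponential reaches 1024, while the linear guess says 11.
```

A linear thinker extrapolating from the early years underestimates the endpoint by two orders of magnitude.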
@wakingstate9 5 months ago
09.09.2025. Embodying AI in robots is the end of us. When we can't turn off the power, we will have no power.
@mrd6869 5 months ago
Another way to look at it: it's like the guy on the surfboard waiting for the next wave. He doesn't question the force behind the wave; rather, he uses its momentum to move forward. And that's how you survive. Yes, it will advance itself, but it can also advance YOU. What I want is for those systems to build out a wireless bilateral neural interface; that way I can upgrade. Yes, cyborgs, aka transhumanism. Or you can be less dramatic and use its intelligence to create new domains it can't get into... be creative.
@Gafferman 5 months ago
If it's so close, why is nothing different for me or anyone around me? It's absurd how bad systems are, and nothing is actually different other than people saying "oh, we are on the verge". Uh, sure. OK. I'm not even against that idea but... where is the change? People need to see it, to feel it.
@DrWaku 4 months ago
That's the point. It's really hard to understand that something is happening if it's not affecting you directly, even if it will dramatically affect you in the short term. Hence educational content like this.
@Gafferman 4 months ago
@@DrWaku I believe at times some of us, myself in this case, just wish we could be more at the forefront of the technology, as if we only get glimpses into a world we know is yet to come. But you're right, it's hard to see it while it's on the way. Your work on these videos is most definitely appreciated by those of us outside the academic side of it all.
@piotrd.4850 4 months ago
15:15 - _"fully understand"_ Are you sure about that? :D Because I *highly doubt* that we understand any modern CPU / SoC in detail. 17:14 - And who do you think builds these ICBMs? Private companies! They are just not allowed to operate them.
@cfjlkfsjf 5 months ago
ASI will come a couple of months after AGI, IMO. Maybe a few weeks after that will be the start of the great singularity. After that, we will probably be living in some VR-type world, beyond Ready Player One kind of thing. It will be like nothing we have ever seen, even in the movies.
@7TheWhiteWolf 5 months ago
I'm not sold on full-dive VR being the ultimate endpoint. For all we know, we become like Q: inter-dimensional beings that have mastered time and physics. That, to me, would be a true eternal heaven.
@TheJokerReturns 5 months ago
@@7TheWhiteWolf I think we all die, unless we can build a generation ship and just flee.
@ScottSummerill 5 months ago
??? Why did your robot at timestamp 6:54 have boobs? 😂
@AardvarkDream 5 months ago
We are proceeding as if the guardrails we develop are like some sort of infinitely deep chess game that we are playing against future iterations of AIs. I question that assumption. What if the early AIs are able to simply develop "bulletproof" guardrails that will suffice no matter how smart future AIs get? An analogy might be a super-genius criminal in his prison cell. It doesn't matter how smart he is, the bricks still contain him, he can sit in that cell and be as smart as he can and it doesn't matter as far as his getting out is concerned. *Not all containers are puzzles*, some are just boxes with lids that lock. Clearly containing ASIs is probably more complex than a simple box, but there might be a limit on how complex it needs to be such that the early guardrails will suffice forever. Security might not be infinitely deep, and if it isn't then we do stand a chance of controlling our creations.
@AardvarkDream 5 months ago
Another example of what I mean is that a superintelligent AI, let's say a functional IQ of 100000, isn't going to do any better in a game of tic-tac-toe than a reasonably intelligent player would. The game is only so deep, and you can only play so intelligently. Any excess IQ doesn't matter. It's quite possible we can set up guardrails that simply can't be gotten around, where there simply IS no way out of them. Of course, WE won't be developing those, the early AIs will. But they really may only need to be "so good" to be infinitely effective.
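The tic-tac-toe point can be made concrete: perfect play is fully computable by plain minimax, so intelligence beyond "solves the game" buys nothing. A minimal sketch (my own illustration, not from the thread); the value of the empty board under perfect play is a draw:

```python
from functools import lru_cache

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board: str) -> str:
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return ""

@lru_cache(maxsize=None)
def value(board: str, player: str) -> int:
    """+1 if X can force a win, -1 if O can, 0 if best play draws."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # full board, no winner: draw
    nxt = "O" if player == "X" else "X"
    vals = [value(board[:i] + player + board[i + 1:], nxt)
            for i, sq in enumerate(board) if sq == "."]
    return max(vals) if player == "X" else min(vals)

print(value("." * 9, "X"))  # -> 0: perfect play always draws
```

No IQ above "perfect" changes that 0; the game's depth caps how much intelligence can matter.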
@pondeify 2 months ago
Government regulation tends to make things worse, especially these days.
@DrWaku 2 months ago
Well, maybe in the US. But even so, it's really the only tool we have.
@69memnon69 2 months ago
We all know that AI is going to be used primarily for the pursuit of profit. At a minimum, it will drastically cut the value of human skills and result in massive job losses. It's true that new jobs will be created by AI, but they will be a very small segment and require highly specialized skills. New AI roles for humans will not cancel out all the roles displaced by AI.
@alexandrufrandes. 5 months ago
Why would AGI attack humans? Being super smart, it will find other ways.
@scotter 5 months ago
Thanks for this info and warning! I always learn from you. FYI: You *seem* to realize government regulatory (and other) agencies are captured by the biggest bidders (Military Industrial Complex, Big Tech, Big Pharma, et al.), *yet* you advocate for more regulation. "Regulatory capture" is the term. I'll spell it out in an example: OpenAI writes regulation that *they* can afford to handle or even bypass, hands it over to politicians/regulators along with lobby money, and voila, their competition is crushed. So yes, regulation tends to stifle development. Look at Europe, where it is far more difficult to start and run a business. The US *was* far ahead with a more freedom-oriented business environment, which is a big part of why the most powerful and innovative companies are based in the US. But with the corruption that has been creeping up on us (like a frog slowly boiling), and people with good intentions clamoring for more regulation, I worry about the future.
@skitzobunitostudios7427 5 months ago
How to Get More Clicks: '''Danger Will Robinson, Danger, Danger'''.
@Jorn-sy6ho 1 month ago
AGI is when I do not understand it anymore :p Siri and I are the only ones interested in AI safety. Budget of €0, going strong ❤
@davidm8218 5 months ago
I’d like to hear your scenario for AI destroying the solar system (8:50). Really? 🤔
@DrWaku 5 months ago
von Neumann machines convert the entire solar system into probes so that they can explore the galaxy. You think I'm kidding, but my video on the Fermi paradox actually talks about that. kzbin.info/www/bejne/f2fSlGmNjtajiNU
@aisle_of_view 5 months ago
I took a week long break from the constant dread I've been feeling about this technology. Just wanted everyone to know I'm rested and ready to feel the anxiety and hopelessness once again.
@autingo6583 5 months ago
lol poor soul
@DrWaku 5 months ago
Sorry, this wasn't a particularly soothing video :/
@AizenPT 5 months ago
AGI... I know too much about it. Yet the danger is not the AI itself, but how humans will set it up and use it.
@harrycebex6264 5 months ago
AI doesn't scare me. We can pull the plug at any time.
@DrWaku 4 months ago
How are you going to pull the plug on an AI run by a large corporation? If that corporation thinks it's still in its best interest to keep running it?
@markupton1417 4 months ago
Your attitude is part of the danger.
@meandego 5 months ago
I think so-called AGI will be much less impressive than an average sci-fi movie.
@lak1294 4 months ago
Nice: OpenAI openly threatening its employees and former employees about speaking freely and raising legitimate concerns about where this is all heading. Gives you real confidence that they will act soberly and ethically in their AI initiatives. This is potentially CrowdStrike all over again, but even worse. What's to stop AI or AGI from having catastrophic hallucinations when it has no grounding in the real, physical world and the real-life (not simulated or learned) experience that all life on earth has evolved to have over millions of years? The earth's conditions can't be replicated for AI. And if we allow it to develop without proper checks and balances, and not as a service to humanity (because why even have AI otherwise?), we are looking at a frightening future. Only energy constraints could possibly limit this, if OpenAI and other gung-ho AI companies wilfully won't. Anyone want to join me in a grass-roots resistance?
@bauch16 4 months ago
Are WE really 😊
@bolgarchukr163 2 months ago
7:05 - And who controls humanity right now? Do you know? Can you control them? Why are you afraid of intelligence and not afraid of stupidity? For example, right now some half-witted dictator could press the nuclear button. Aren't you afraid of that? Stop citing science fiction. What does fiction have to do with reality? You can fantasize up all sorts of nonsense. Let's start from facts, not fantasies. 10:30 - Open a biology textbook. Identical twins are already clones. Have identical twins been banned? This is senility and absurdity. 11:00 - Why did you decide that clones would be super-smart? People like that would actually be hard to treat as slaves. Slavery existed before cloning; it has nothing to do with cloning. And today many people fall into one degree of slavery or another. What does cloning have to do with it? And how could cloning help resurrect relatives? A person is not just their DNA. Even now, with in-vitro fertilization, you can take a cell and create a clone, but it will be a completely different person intellectually. Memory is not passed on genetically. It will be a completely different person who remembers nothing. What is the point of such a resurrection? They will have their own unique thoughts and personality. I think this is a disgusting video, and most such videos are like this. Instead of thinking about how to extend our lives, videos like this only scare people about technology. Because of videos like this we remain mortal. Such videos let neither me nor you live. If we fear intelligence, we will die of stupidity. We will die because everyone before us died.
@miamimilk 3 months ago
. 70%
@kellymaxwell8468 4 months ago
How soon until AGI, and how will this help with games? We need AI agents that can reason, code, program, script, and map. For games: break them down, do art assets, do long-term planning. Better reasoning, so it can build a game rather than just write it out, and put those ideas into REALITY. And maybe the ability to remember and search the entire conversation, as needed for role-playing and making games.
@budekins542 5 months ago
AGI is nowhere near imminent. Pure wishful thinking but fun to see.
@MinnnDe 2 months ago
Let the aliens deal with it. 🖖
@SamuelBlackMetalRider 5 months ago
Are there videos of yours that are not in 3 parts? Or is it a ritual? 😁
@DrWaku 5 months ago
It's a ritual. One time I made a video which in practice had 7 parts (I think it has "7" in the title, you can probably find it). But I still shoehorned it into 3 parts. The joke goes, when I make a video with four parts, that's how you know stuff's about to go down. ;)
@SamuelBlackMetalRider 5 months ago
@@DrWaku Hahaha, OK, duly noted. Love your videos, calm & super informative. Glad to see people like you working on AI alignment. Have you met/talked with Connor or Eliezer?
@DrWaku 5 months ago
Thanks! I just started a new job so I'm new to the field. I haven't met Eliezer though I would love to. I'll meet a bunch of people at a conference in a few weeks, will keep you posted.
@SamuelBlackMetalRider 5 months ago
@@DrWaku fantastic. Good luck with the new job! You’ll be doing « god’s » work 😉
@SashaMilos-gd4ln 1 month ago
So scared
@andrewsheehy2441 3 months ago
There will be NO AGI. There will be NO superintelligence. There will be NO singularity. The reasons are many, but one is that the AI community has a worldview that believes (1) understanding and (2) conscious experience (insofar as this is needed in order for an entity to 'understand') are emergent properties which will naturally result from sufficient complexity given a suitable substrate. This worldview is totally wrong (and it's not even hard to show why). What will happen is that, at some point, people will realize that you need a conscious entity to span the myriad of competence areas and make sense of it all. We can already see the cracks beginning to show; for instance, the use cases people want on, say, Midjourney are being 'solved' by people building rules-based workarounds. That process will be repeated in every AI domain. The whole sector is massively over-hyped, and people should not worry about AGI, but about what bad actors are using the current sub-AGI tech to do.
@BeyondTheApexMotorsport 2 months ago
Very interesting how you're speaking about your opinion like it's a fact. It's kind of difficult to accept the perspective of someone who can't tell the difference between opinions and facts.
@andrewsheehy2441 2 months ago
@@BeyondTheApexMotorsport You say: "Can't tell the difference between opinion and facts." Let's look at some facts together. In the AI community, there is currently no scientific definition for the concept of "understanding," which means we do not know what architecture is needed, what substrate is required, or what type of computation is necessary. Additionally, the role of conscious experience in understanding remains a mystery. Maybe no system is capable of understanding if it is not also capable of conscious experience. We simply don't know. Nobody knows. The assumption that such capabilities will naturally "emerge" from increasing complexity is speculative and not supported by any science whatsoever. Prominent figures, such as Gary Marcus, have highlighted the overreliance on black-box models like neural networks, which often lack explanatory power. Marcus, for instance, emphasizes that while these models can predict outcomes, they don't help us understand the underlying mechanisms. This disconnect between behavior and explanation is something science traditionally aims to resolve by developing theories that not only describe but also explain phenomena. In contrast, many in the AI field operate with models that do not offer clear reasons for the results they produce. It's important to be critical and ask whether these models are truly aligned with the goals of science, which involves a relentless pursuit of why things happen. AI is not interested in the why, only the what. That's another fact. So, while AI researchers are achieving fascinating, highly impressive, and valuable results, we should acknowledge that the entire field of AI lacks a solid scientific foundation. It seems highly unscientific to believe that some of the most perplexing and technically intractable capabilities will just somehow magically emerge, yet this is what proponents of AGI and superintelligence believe. Actually, if you look at the results of adding scale and compute, yes, things are improving, but the relationship is not linear, and it does seem we are approaching some form of asymptotic limit beyond which we cannot pass. Maybe future generations will look back on what we are currently doing and laugh. Perhaps there is a fundamental limit beyond which we cannot pass with the current paradigm of computation, similar to how the speed of light or Shannon's law represents a fundamental limit. But the AI zealots don't even want to consider the science that might be at work.
@kevinnugent6530 6 days ago
Octopods
@jelaninoel 5 months ago
Where tf am I