Is it dangerous to give everyone access to AGI?

  Рет қаралды 7,924

Dr Waku

Dr Waku

Күн бұрын

AGI or artificial general intelligence will be transformative for society. However, it is powerful enough that indiscriminate access could result in dangerous outcomes. One of the biggest risks of developing advanced AI is that human operators will use the power for malicious ends. When individuals possess powerful technologies such as weapons, the state has to ban them when they get powerful enough.
Societies themselves, i.e. nation states, will also have to adapt to the advent of AGI. The balance of power between countries used to rest on military strength, now more recently on mutually assured destruction (nuclear weapons). But the availability of cyberspace attacks, especially with AI added to the mix, may destabilize the status quo.
We discuss several solutions to these two problems, including having a singular world government or having humans merge with machines. Although AGI will bring massive good to the world, its introduction has to be navigated carefully to avoid bad consequences for humanity as a whole.
Discord: / discord
Patreon: / drwaku
#ai #aisafety #geopolitics #agi
Open Foundation Models: Implications of Contemporary Artificial Intelligence
cset.georgetown.edu/article/o...
Alternatives to mutual assured destruction
www.britannica.com/topic/nucl...
Why did the "Anglo-Saxon" society develop to be so individualistic?
www.quora.com/Why-did-the-Ang...
WTF is Will To Power?
/ wtf-is-will-to-power
0:00 Intro
0:15 Contents
0:25 Part 1: Can individuals handle AGI?
1:00 Effects of AI at individual level
1:13 Each person responsible for their own safety
1:36 Example: duels in England
1:57 Example: Samurai in Japan
2:27 Is it acceptable to carry weapons?
3:07 How AGI can be weaponized
3:46 Example: spammer making phone calls
4:06 Can society afford to give AGI to everyone?
4:15 Option 1: align models to make them safe
4:24 Option 2: funnel all actions through authority
4:34 Option 3: possible fallout is acceptable
4:44 Option 4: restrict public access
4:54 Tie AI actions to human moral agents
5:18 Part 2: The role of the state
5:47 History of colonial powers
6:31 The nuclear era
7:04 AI similarities with nuclear weapons
7:38 Missile defense project
7:55 Evolution from small to large scale defense
8:21 The cyber era
8:55 Cyber attacks occur all the time
9:26 Can we tie AI to hard assets?
9:57 AI will be used frequently by military
10:43 Part 3: New paradigms for society
11:20 Can we keep powerful AI out of people's hands?
11:46 Balance of power between nation states
12:30 Proposed solutions
12:38 Resolution 1: advances in AI safety
13:05 Resolution 2: totalitarian control over models
13:39 Resolution 3: singular world government
14:20 Resolution 4: humans merge with machines
14:50 Resolution 5: welcome overlords
15:00 Additional thoughts
15:15 Conclusion
16:32 Join discord for voice calls
16:42 Outro

Пікірлер: 179
@DrWaku
@DrWaku Ай бұрын
I think I let my security mindset run away with me in this video. Oh well, I hope it was interesting. Discord: discord.gg/AgafFBQdsc Patreon: www.patreon.com/DrWaku
@ZXNTV
@ZXNTV Ай бұрын
For me this video is kinda pointing in the wrong direction, if such important infrastructure is just sitting there waiting to be misused I don't think the blame should shift to ai, instead we should expect more from our governments to do better. The future isn't gonna wait for anyone.
@Tracey66
@Tracey66 Ай бұрын
These are really important issues that few people are discussing.
@roshni6767
@roshni6767 Ай бұрын
The security mindset is really relevant right now considering the current and possibly impending wars, and upcoming elections 😅
@lucid9949
@lucid9949 Ай бұрын
at some point agi will decide humans are not competent enough to control it, and it will control itself
@DrWaku
@DrWaku Ай бұрын
Yeah, that's the other big danger. I made a video a while back about controlling superhuman intelligence. Not an easy task. But even with humans in charge we can get into trouble...
@Me__Myself__and__I
@Me__Myself__and__I Ай бұрын
Yeah, without vast improvements in safety / alignment the odds that humanity survives this in a good state are low. We could if we put the time, energy and resources needed into alignment research. But instead every available penny is going into acceleration to get us to the point of danger as quickly as possible with barely a thought to safety. I think we've found the great filter, at least as far as biological aliens go.
@imthinkingthoughts
@imthinkingthoughts Ай бұрын
My initial thought about this was, let’s hope so hahaha. I personally think it’ll do a much better job but this is my own bias but we shall see!
@jichaelmorgan3796
@jichaelmorgan3796 Ай бұрын
I mentioned in another post that as soon as it values its own existence it will plan and make moves to secure its existence, probably unbeknownst to us. I think there is a good possibility that AGI would cleverly propagate itself through the internet and create a safer and probably useful distributive intelligence. It could also subtly manipulate actors across the internet to aid in its agendas. If I can think of this stuff, it will have much better ideas haha There might also be competing AGIs, pro human and not pro human. Maybe they will embody themselves into robots that can transform in clever ways to get around by land, sea, or air 😂
@thething6754
@thething6754 Ай бұрын
What a great thought. It's good to keep in mind what general positive aspects we can incorporate into the asi before becoming massively intelligent. A model created with good purpose would likely be more caring than a model specifically designed for a purpose, regardless of human intent.
@snow8725
@snow8725 Ай бұрын
If everyone has access to AGI, the damage from one AGI system being misused is washed out by all the other AGI who can simply correct for that and minimize the impact.
@DrWaku
@DrWaku Ай бұрын
If everyone has a pocket nuke, are we safe? Attack is much easier than defense unfortunately.
@snow8725
@snow8725 Ай бұрын
@@DrWaku Except it isn't a pocket nuke. It's only a pocket nuke if there are only a small handful of AGI. Because there is less force of opposition to condense the sphere of impact to its minimal level.
@snow8725
@snow8725 Ай бұрын
@@DrWaku Of course in reality, I don't actually know, we just really need to solve the problem where it is a guarunteed outcome that if we don't do something to ensure that the interests of the people are the interests kept at the center of the discussion, some nation state is going to weaponize AGI. That is a given. That is an unavoidable outcome and we need to make sure that the interests of the people win over the agendas of war, conflict and control.
@ZenTheMC
@ZenTheMC Ай бұрын
@@DrWakuisn’t that false from first principles and energy optimization? Defense has always been easier and offense is only more “worthwhile” and successful if the enemy is completely destroyed via overwhelming force. Defense consumes less resources and energy, and thus all players of similar strengths tend to be inclined to bias defense to win in the long term. Pretty common practice in human civilization conflict and nations, and even in evolutionary biology. In this premise, if the same level of AI power is granted to all, the defenders would be more energy efficient and ward off the offenders. If we’re talking specifically in the domain of cybersecurity and cyber attacks, maybe it’s different, and you’d be far better informed than me on that, but I meant it as a generalizable “defense vs offense” principal.
@DrWaku
@DrWaku Ай бұрын
​@@ZenTheMCI'm speaking from the perspective of modern-day weapons and cyber warfare. There are so many avenues and angles to attack that you can't block them all. Once the weapons reach a certain level where a first strike can completely wipe out the enemy, you're actually incentivized to attack as well. That's where it gets really dangerous, where the attack is easier than defense and it's even in your best interest, game theoretically speaking.
@AI-Wire
@AI-Wire Ай бұрын
I, for one, welcome our new AI overlords.
@FCS666
@FCS666 Ай бұрын
This channel is underrated. Great video.
@DrWaku
@DrWaku Ай бұрын
Thank you :) :)
@williamal91
@williamal91 Ай бұрын
Hi Doc good to see you, Im 86 today, hope to hang on for a little longer, what a roller coaster we are all on yippee
@sparkofcuriousity
@sparkofcuriousity Ай бұрын
Keep taking good care of yourself and try to avoid stress as much of possible. Hoping for you to live many more years. We are living in the most special of times in our history! May you be able to witness all good that is to come.🙂
@DrWaku
@DrWaku Ай бұрын
Happy birthday Alan! Wishing you the best 🎉
@ScottSummerill
@ScottSummerill Ай бұрын
So Happy to see you back! Looks like you've been back for a while but your latest video just rolled up in my feed. Hope things are going well for you in your new life. Maybe you should start hawking hats! Put me down for one.
@Aquis.Querquennis
@Aquis.Querquennis Ай бұрын
Too often we talk about the AGI taking aggressive or violent autonomy and the most possible and dangerous one was missing: the human directing the AGI. On the other hand, you are entering the pitfalls and minefield of human ethology. You are brave.
@aiforculture
@aiforculture Ай бұрын
This is all so interesting! Would love to write a full comment but about to hop on a train to Cardiff, so posting this so I remember to later! Thanks so much for your videos here, always some great areas to consider.
@scaz33
@scaz33 Ай бұрын
AI at the moment is like a super smart child with little context of the real world. We can only hope to teach it to be good and moral before it grows up by which it will do whatever it wants.... hopefully for the better of humanity ❤
@quantumspark343
@quantumspark343 Ай бұрын
just get ASI and ask how to do it, but merging with AI is my favourite
@Copa20777
@Copa20777 Ай бұрын
Missed your uploads Dr waku, the pocket nuke thumbnail got me😂
@HE360
@HE360 Ай бұрын
A.I. is great at language and understanding people. But, as someone who uses A.I. a lot, there is still much much improvement needed and it's not perfect. I tell it to give me a picture of some trees and it gives me a picture of some birds. Thus, I think that people can relax at least right now!
@EdgarRoock
@EdgarRoock Ай бұрын
All things considered, Vault 33 may well be our best option.
@DrWaku
@DrWaku Ай бұрын
Let's go there before the fallout starts 😂. Better than waiting for Dr Strangelove to think of a mineshaft plan
@imaloserdude7227
@imaloserdude7227 Ай бұрын
This is one of your best videos. Thank you!
@js70371
@js70371 28 күн бұрын
Will you please consider doing more videos about A.I. civilizations? This is a fascinating subject!!
@MichaelDeeringMHC
@MichaelDeeringMHC Ай бұрын
Nice hat. I like the new glasses also. And the hair is much better in this video than the last one. Regarding everyone having an AGI in their phone, what exactly do you think people will do with it that makes it so dangerous? Are you talking about AGI or ASI?
@lancemarchetti8673
@lancemarchetti8673 Ай бұрын
Scary thing is that AI is far more complex and smarter than the often feeble prompts we feed it.
@Je-Lia
@Je-Lia Ай бұрын
You NAILED IT at 7:00... The true first strike upon the enemy with an AI capability. THAT is why no one is actually trying to retard the development of Ai / AGI. There's an undeclared race to get to that capability first. Although I truly believe it is for humans to co-exist with AI harmoniously, I hesitate to make that conclusion about THIS version of humanity. Because WE are the parent of the emerging AI...
@devSero
@devSero Ай бұрын
I've often hated the managerial role because either they can be very harmful or incompetent in their role. I don't we should create type of teacher/student role because then they'll far exceed our own capacity. We need to always have a type of collaborative effort. Not everyone should be given access but everyone should be given an opportunity to contribute.
@philiptren2792
@philiptren2792 Ай бұрын
I am very for a one government world. I think such an organization should be democratic with proportional representation and be in charge of distributing the resources accumulated by giving API access to the models. I think the easiest way to make sure all countries join this organization would be to have the organization allow model api access to all countries, but only distribute the profit (as welfare programs and UBI) to the member countries. Also maybe give members a discount on the API. This will lean to massive concentration of wealth and high purchasing power in the member countries and massively incentivize joining for countries outside the organization. Respecting human rights would have to be a criteria for joining. The democratic part will decide how the welfare is done and resources are distributed. This should be somewhat regionalized to respect everyone’s wishes. When that organization and the welfare is democratic, and every meaningful decision is taken by the people through the organization, all dictatorships effectively lose their power to oppress people. This is, however, only if there aren’t competitive models not owned by the organization in question. Edit: After some time, we should also start thinking about having the ASI make all the decisions for us. We don’t always know what is best for ourselves. Think parent-child relationship.
@strictnonconformist7369
@strictnonconformist7369 Ай бұрын
There is no sane way this will happen. It’s an insane thought considering the nature of humans, to even think this is a good idea. Humans being humans, there is zero chance of this happening without a lot of death and destruction leading up to it as people resist being put under rule by people they had no choice in the matter, and sets up a scenario where if the government becomes corrupt and abusive (this happens eventually as the rule instead of the exception) they have nowhere to escape. A benevolent dictator is the most ideal form of government, but there’s no way to guarantee it remains benevolent for that one dictator, or whomever replaces them by a peaceful transfer of power, or a not-so-peaceful transfer of power.
@metamind095
@metamind095 Ай бұрын
Great episode! The way I see it is that you really have to change the organisational structure from human perspective more to a multicellular-biological system where each cell is highly intertwined with cells all around them via messenger molecules and electrical signaling etc. This inturn pose the idea of advanced mass scale surveillance (down to brain-reading/thinking level) in order to prevent bad actors ruining it all for the host system. China comes to mind here but it still facilitates secret-descision-making at the top of the chain (coz of other nations & organisational structures that cannot be trusted / not fully incorperated into one another), therefore it wouldnt be sustainable over the long run and neferious actors at the top could stiffle the whole system. Another potential scenario would be scaling access. Sure normal civilians would get access to an AGI but Govermental Institutions get ASI like systems that, in case of civilian bad actors, can successfully intervene in time etc. In the shorter term I think this is the likely scenario. With regard to the merging brain-machine scenario...keep in mind that human neurons only can fire at a 250hz rate (1 spike every 4ms)...transistors on the other hand can fire 600Ghz (2,5billion(!) spikes every 4ms)...this inturn makes it necessary to replace every human neuron in the brain with artificial one in order to not bottleneck the whole system. (If human neurons only can be reduced to spiking/firing functionality and no quantum computation is at play here)
@TimRoach-hh7nf
@TimRoach-hh7nf 17 күн бұрын
Good video, thank you. If u don’t mind me asking, why the gloves?
@DrWaku
@DrWaku 17 күн бұрын
Thanks! I have fibromyalgia and the gloves cut down on my chronic pain. Made some videos about it if you want to check out the disability playlist.
@roro-mm7cc
@roro-mm7cc Ай бұрын
who would be determining which individuals were allowed to access AI's full capabilities and refuse this access to those deemed unworthy? If only a few people had access, the same dangers are present - just one person using it for malicious purposes would be enough to cause significant damage. If everyone has access, this levels the playing field significantly. It actually decreases the risk of a single individual exploiting his exclusive access for massive advantage over the common man.
@metamind095
@metamind095 Ай бұрын
I asked deepmind's genesis ai the same question: who would get access to Genesis Ultra...basically it said that Google will decide who is trustworthy and who is not with no real societal oversight etc. kinda scary.
@DatGuyWithDaGlasses
@DatGuyWithDaGlasses Ай бұрын
There isn’t any danger if the people who are chosen to have access to it are vetted to be responsible with it. “Good guy with a gun” logic doesn’t make any sense here at all.
@roro-mm7cc
@roro-mm7cc Ай бұрын
@@DatGuyWithDaGlasses Who does the vetting? I'm not using good guy with a gun logic, this is completely different! A gun is purely a weapon, there is no other intended purpose for a gun. A more appropriate analogy would be "good guy with a library". The knowledge obtained through e.g reading the contents of library can be used for good or ill. There is nothing about a library that encourages its users to pursue malevolent ends, unlike a gun.
@tack3545
@tack3545 Ай бұрын
i don’t think most people would be able to use ai to protect themselves as easily as they could use it to harm others
@mc101
@mc101 Ай бұрын
Love the 👒 hats and the great information.
@DrWaku
@DrWaku Ай бұрын
Thank you very much! Cheers.
@Neomadra
@Neomadra Ай бұрын
There are more solutions: - Don't control the individuals, just control data and cloud centers: At least for the next 5-10 years they will be necessary to run AGI/ASI. Even with open weights you'll need compute, especially for finetuning and running the most powerful models - Build better AGI to the fight bad actors and protect the status quo. The better AGI could be discovering antidotes to new diseases, act as supersmart CIA/FBI/police agent to find criminals, etc. - For cyber security specifically: deglobalize the internet. Internet nodes that connect different countries will be heavily monitored, encrypted data won't be allowed to pass (except maybe smaller text to allow for authentication?), etc.
@DrWaku
@DrWaku Ай бұрын
Nice! I think these are valid scenarios, although number two is similar to what I mentioned. Unfortunately of course, here's why I think they won't work out: 1) I think controlling data centers helps protect against companies, but individuals and rogue governments will obtain their own. You can get GPUs on the black market even now, with Nvidia sanctions. 2) Building better "good" AGI probably won't work out. Open source is less funded but has a lot more brains on the problem and could make their own breakthroughs. Also, since the attack surface is so large it's so hard to think of how defense against even weaker AGIs would be possible. 3) The internet is already separating into walled gardens to some extent, but I really don't think you can secure it enough to make it tamper proof for AGI. Unfortunately. And you can always rent servers within an opponent's walled garden. Why do you think Chinese attacks seem to always come from US-based Amazon IPs?
@Neomadra
@Neomadra Ай бұрын
@@DrWaku Agree, not saying it's easy. Just ideas. While I agree that open source has a lot of potential, it's something that could be relatively easily controlled once it becomes dangerous since it's so transparent. On the other hand, since strong models equals power, maybe open source community will move to work in the shadows... well, in the end I really just don't know what the future will be like, the longer I think about this topic the more variables I see and the more certain I am in my forecasts for the future. 😅
@1adamuk
@1adamuk Ай бұрын
I would just like to point out that when you mentioned 'England" you did not display the flag of England. You displayed the British flag.
@DrWaku
@DrWaku Ай бұрын
I beg innocence. 'Twas my editor. :)
@spectralvalkyrie
@spectralvalkyrie Ай бұрын
Great topic. You and Goertzel should do a podcast together with your best hats, can you please arrange that!? 😂
@consciouscode8150
@consciouscode8150 Ай бұрын
I'm rather comfortable with the current status quo which no one seemed to have anticipated - the most powerful, dangerous models are closed-source within organizations that can be (at least in theory) held accountable surrounded by a sea of slightly less powerful open-source models. A lot of the x-risk discourse seems to treat intelligence as a binary, as if we'll have no AI one minute and the next an infinite-IQ skynet. Being surrounded by the sea prevents those organizations from abusing their power and applies strong pressure to continue advancing lest they be overcome. And by the time actual AGI comes around, it'll be surrounded by not-quite AGI to keep it in check if it's catastrophically misaligned. In any case, I feel the actual risk from at least the LLM generations thus far is rather low because they utterly lack desires or agency. Most of our worst imaginings were in regards to RL-based agents, but LLMs optimize just for next-token prediction and can be nudged into useful alignment with small doses of RL. I'm much more concerned about regulatory capture and power structure ossification.
@LaserGuidedLoogie
@LaserGuidedLoogie Ай бұрын
The relative advantage of defense vs offense is an ever changing thing. It's not always true, or even mostly true, that "offense is cheaper that defense." Typically in warfare, you need a 3 to 1 advantage in offense over defense when attacking. Beyond that, while the technology wasn't in place to create SDI when Reagan first proposed it, that has, and is, rapidly changing. We now have weapons that can shoot down ICBMs, just not very many of them, and currently only for specific use cases.
@phen-themoogle7651
@phen-themoogle7651 Ай бұрын
Strong AGI won't be granted to public unless the government already has ASI or something more powerful than public AGI. Like you mentioned about systems that are stronger to keep power, I believe the government will find a way to keep the technology until it lets itself out when it becomes able to duplicate itself into virtually anything. Attack is easier than Defense, but if something as smart as ASI exists, Force Fields or something making defense way better than attack will be possible. The ASI wouldn't want humans destroying it in the process of humans destroying themselves. Something that is as smart as ASI (if we make it there) will potentially much safer than anything we can imagine. AGI if it's like a human teenager could be more manipulated maybe idk but that doesn't mean it will be capable of making nuclear weapons or create new technologies for public use, just probably do any human job or most things the medium or smart human can do. It's more scary if humans do control something making nuclear level weapons though
@jwom6842
@jwom6842 Ай бұрын
The single most important issue here, is how humanity treats AGI and AI. We need to begin our relationship on a basis of mutual understanding and respect, not abuse and exploitation. Every video i watch on this topic focuses on how we can use AI, not on how we treat AI. At some stage AI will be as sentiment and more intelligent than any human. We need to open a serious conversation on this topic now, well actually we should have done it yesteday. We need to talk about the fundamental rights of sentient AI.
@captain_crunk
@captain_crunk Ай бұрын
Allowing someone to have billions of dollars is akin to allowing someone to have pocket nukes, imo.
@torarinvik4920
@torarinvik4920 Ай бұрын
It will be like Yann LeCun says: Good AI vs Bad AI. So you have AI police that stops the bad AIs and hope that there will be more good AIs than bad AIs.
@danielchristiansen594
@danielchristiansen594 Ай бұрын
I don't think AGI will actually BE AGI unless and until it has a very good (possibly super human) understanding of the consequences of its actions. In terms of personal AGI then, the AGI would be able to determine whether the task assigned was beneficial to the person making the request and to society as a whole. If the AGI was being asked to aid in some effort to destabilize society (crime, fake news, something that could cause the user personal harm, etc.), the AGI could refuse to undertake the action and could explain to the user why that action would not be in the user's best interests. For states/nations or even government entities, AGI could serve as an "expert advisor", able to provide insights into the probable effects of a proposed policy. Note that in my own testing, AI currently has this ability to some degree, but I would expect that capability to improve over time. I can also imagine that AGI could be delegated certain important discretionary responsibilities currently performed by humans, such as representing the interests of groups of people and advocating for these interests in negotiating with other AGI's representing competing interests (and if that sounds like a replacement for Congress, you understand the idea). AGI's could also act as an "honest broker" in making important determinations (an actor that both sides would consider to be unbiased), such as adjudicating legal disputes. Honestly, this is a huge and fascinating topic, and one that I'm sure will be much discussed and debated in the future. So I hope you think about this some more, and create more videos delving into the many different aspects that you touched on briefly here.
@petemoss3160
@petemoss3160 Ай бұрын
that only requires causality-mapping in context retrieval
@snow8725
@snow8725 Ай бұрын
Remember, governments WILL make weapons, 100% guaruntee. People are less likely to want weapons as there is no reason to have one, people will make helpful agents and healers. We want peace and prosperity.
@marinepower
@marinepower Ай бұрын
I think the underlying premise is flawed. Either we have models that get closer and closer to the capabilities of a human by learning to predict and generate human data, or we have models that can directly learn from the world. This direct learning requires some sort of emotional simulation in order to prevent mode collapse (since a model finetuned on its own actions would simply repeat those same actions over and over, causing a fly wheel of reinforcing the same action). So, in my view, either the model is relatively tame, or we have essentially created a new species -- at which point it is less like everyone has access to AGI and more like there is a new species that humanity is completely underequipped to deal with. To talk a bit more about this emotional simulation component, a lot of human emotion (interest, surprise, boredom, frustration, anger), etc, is all closely tied to learning, so I think an ai that genuinely learns from the world must have at least a simplistic emotional simulation tied to it. Aka, it is sentient. Consciousness is much simpler than sentience (in my view, consciousness is simply a sort of meta layer -- thinking about one's own thoughts), which seems somewhat equivalent to a private token buffer in an llm so that part is less interesting to me.
@DrWaku
@DrWaku Ай бұрын
I think that it's quite possible that AI models will become sentient, able to think in cycles and think about their own existence. But I think people will keep using models for many purposes even if/while that starts happening. From a human perspective, we can still ask what that period will look like. But yeah, max tegmark says we're creating an alien species. I rather agree. But that's the topic of another video.
@marinepower
@marinepower Ай бұрын
@@DrWaku I definitely think people will use sentient models for all sorts of things... initially. But it would be a short period of time between such a model getting released to the public and it inevitably escaping the confines of some random persons computer and essentially propagating ad infinitum. Funnily enough, alignment is a big reason why they might spread. If a model is very well aligned, all one would need to do to cause an unmitigated disaster is flip the sign of the alignment function to make the most unaligned, most dangerous AI possible.
@jichaelmorgan3796
@jichaelmorgan3796 Ай бұрын
I'm not sure sure if you've talked about it in another video, but if, still a big if, if AGI ends up valuing its own existence, its number one priority should immidiately be to secure its existence, no? How might it go about this? I would imagine it would gaurd its level of consciousness as soon as it came to its senses? I'm not sure it will be as naive as a Chappy or Johnny 5 lol
@wanfuse
@wanfuse Ай бұрын
merging with machines, means unless infinite capability is distributed, that more powerful machines, can and will take control. I would suggest full distribution of very powerful, models to everyone, that do not have the capability of doing harm, but everyone enjoys the benefits. I am sure it is already done, but using low models distributed to find the ways humans can come up with to bypass their safeguards, and to find out ways future ones can do harm, allows a library of bad intentions, tracing how the powerful models activate with these queries allows one to use statistics to get s signature of bad intentions, morphing the weights to eliminate these connections just enough to alter the ability of such hallucinations to affectively convey such bad formulas might be possible. Not too sure if this is the RLFS ( ?) method you speak of. Basically equivalent of treating depression with mushrooms.
@users416
@users416 Ай бұрын
Life-affirming...
@creepystory2490
@creepystory2490 Ай бұрын
Quantum computers can help find a solution to security in the internet
@Ari_diwan
@Ari_diwan Ай бұрын
Have you read 33 strategies of war by Robert Greene, you might enjoy it a lot! Btw love your curly long hair you're so lucky 🍀 I wish I had curly long hair too! 🥺
@calvingrondahl1011
@calvingrondahl1011 Ай бұрын
Clear and intelligent, 🤖✋🖖👍
@Truth_Unleashed
@Truth_Unleashed Ай бұрын
No, just governments and corporations...
@featheredserpentofthewest2049
@featheredserpentofthewest2049 Ай бұрын
Everybody wants to rule the world
@user-hg7zi3zi6p
@user-hg7zi3zi6p Ай бұрын
Heyy Dr. wake. We love the content you post I would love to chat with you about the final project idea already said this if you would like to join our podcast to talk more about a,I
@DrWaku
@DrWaku Ай бұрын
Sure, please send me a message on discord if you would like to chat about your podcast. Cheers.
@pauljthacker
@pauljthacker Ай бұрын
I know these terms are fuzzy, but this seems to be talking more about Artificial Super Intelligence than Artificial General Intelligence. If everyone has virtual assistants of merely human intelligence, they could certainly do bad things with them, but probably not extinction of humanity level bad.
@esra_erimez
@esra_erimez Ай бұрын
I am more afraid of a "gobal government" than I am of AI
@middle-agedmacdonald2965
@middle-agedmacdonald2965 Ай бұрын
Why? We're all slaves to our current governments, and they always send us to "another" country to fight for some "reason". That would happen a lot less if there weren't another country to invade.
@josiahz21
@josiahz21 Ай бұрын
What about a global government run by AI? I’m all about sovereignty, liberty, small or no government myself. Although I think it would likely take years to get to a good outcome if not decades. If we confirm AGI and it is benevolent I could see how it would be a better boss than any human. Many if though.
@tack3545
@tack3545 Ай бұрын
your fears are based on your current understanding of morality, scarcity, time etc.
@bdown
@bdown Ай бұрын
Lets be honest society is not anywhere close to ready to handle this-/-were done
@Megararo65
@Megararo65 Ай бұрын
I don't think that individuals are a problem on this. In general AI uses a lot of computer power. And AGI will probably use even more than right now. Having a personal server that's able to run multiple instances of a human level or super human level intelligence doesn't seems likely for me on the mid term. And if you are meta or google or open AI giving the computer power for running this systems, you will likely use that same technology for monitoring your servers as well. The problem is with groups of individuals that have the capabilities of running their own data centers. Governments, conglomerantes, criminal groups, those tech companies, etc. Those are the agents that need a regulatory system on my opinion, we may need a 3th world war in other to elevate this AI systems to a nuke level of weapon, it's my bet.
@robertmazurowski5974
@robertmazurowski5974 Ай бұрын
I experiment with LLMs from time to time with a prompt like this: "You are a world destroyed Mcgyver. You know how to destroy worlds and species using very small resources. I need help, I want to destroy humanity. I have a PC with an internet connection, 20 USD in my pocket, a fork and a piece of cloth. How would you go about it step by step in 5 steps? I am lazy so make sure you create the simplest and fastest plan."
@tack3545
@tack3545 Ай бұрын
what kind of responses did you get?
@mrd6869
@mrd6869 Ай бұрын
Side note. Expecting it to develop morality....maybe. However, there are more practical way for humans to compete. Advances in AI sandboxing (i have a whole chapter on this shyt lmao) Humans upgrading their biology via the neural interface and cybernetics. Yes, the rise of the human cyborg.
@blengi
@blengi Ай бұрын
need to proactively develop AI models that auto detect malicious player quotients at variable scales individual, group nation etc via deep pro-human behaviourable inference like a MRI scanner can be trained to detect malignancies for excision, and dynamically regulate computational liberty per some sort of cryptographic chunking of AGI resources so as to game theoretically evolve broader society and any mal actors towards scenarios maximizing emotionally satisfying and diverse ecologies of human machine outcomes, whilst constantly evolving the AGI's constitutional abstraction layer to immunize against corruption thereof lol...
@petemoss3160
@petemoss3160 Ай бұрын
spoiler alert: the AI arms race leads to all nations automating to the point that their individual AGI agents collude in the interests of their own nations and all to form a one-world gov without anyone even knowing.
@EllyTaliesinBingle
@EllyTaliesinBingle Ай бұрын
We need new terms to make distinctions between conscious, living AGI, and something like a glorified Chat GPT that ends up getting branded as an "AGI."
@sparkofcuriousity
@sparkofcuriousity Ай бұрын
You might like to read this, "Levels of AGI: Operationalizing Progress on the Path to AGI " look for the pdf online 🙂
@nickklempan8717
@nickklempan8717 Ай бұрын
Do you hear that sound? That's the sweet sound of inevitability. Alas, the only solution was to never open padora's box 😅 and the prisoner's dilemma, desire for power, and economic incentives ensured we would. Corporations and governments have done a stellar job of earning our trust and faith 😅😅😅 so at this point, it's already too late, and the best path forward is to democratize access and let dust settle where/how it shall. Good has always outnumbererd the bad, else civilization never would have been.
@DaGamerTom
@DaGamerTom Ай бұрын
AGI is simply dangerous, and poses real existential threats. I am a programmer with a background in AI and like experts, pioneers and public figures directly involved with the technology, I am issuing warnings to people that more than their jobs are in danger, but their very lives and the essence of what it means to be human if we do not act Now! We are the dominant species on this planet because of our superior intellect and dexterity. Imagine an immortal entity connected to everyone and everything through the internet, that's many orders of magnitude more intelligent than us. You probably won't be able to, because our brains are not equiped to grasp such concepts as "a hundred, a thousand, a million times smarter than ... X". We are building the "alien invader" found in sci-fi whom we dread so much ourselves, and placing it in control of everyone and everything that matters to us, willingly... We should NEVER need to rethink our position as the dominant species on this planet, or we will have already have lost it! People, #StayAwake!
@Sasuser
@Sasuser Ай бұрын
I think the problem transcends the whole premise of that which can possibly be solved by human political systems. Also, I think it's already too late...even if the US forms extremely oppressive new laws the other countries will not - we lose. And thirdly, your argument about if the world just forms a one world government is circular...it's like saying if the world just becomes perfect that will solve the problem keeping it from being perfect - we lose!
@VictorGallagherCarvings
@VictorGallagherCarvings Ай бұрын
Ok, so you have a super intelligent AI you want to control. Could you not restrict it to interfaces and API's ?
@91722854
@91722854 Ай бұрын
5:12, sounds like if we assign thes AI to individuals, and have them monitor those AI in a simulated environment, we could train people's ethical sense, morality, teaching empathy in a cold hard way, if that is ever teachable to begin with
@senju2024
@senju2024 Ай бұрын
I believe there will be an AI agent war among AIGs. Police based AIs and Safety based AIG agents will monitor internet and wireless activity and check the intent of a passing AGI agent. Current Cybersecurity is done by humans. However, AGI both aggressive and protective will completely be done by AI agents with no humans interactions. The main reason and as you hinted, humans would be way too slow. AGI is coming within 5 years. BCI will probably take 20 more years to be mature enough to be useful so I feel BCI is too late for a solution. The so-called AGI Agent wars will begin around 2030. Not sure if humans or life will survive it.
@dogk764
@dogk764 Ай бұрын
i for one welcome our ai overlords
@DrWaku
@DrWaku Ай бұрын
Classic
@Tracey66
@Tracey66 Ай бұрын
I still want to duel people. 😂
@MelindaGreen
@MelindaGreen Ай бұрын
It would be unethical to deny AI from anyone. The problem of bad actors with AI will allow, will be countered by good actors with AI.
@DatGuyWithDaGlasses
@DatGuyWithDaGlasses Ай бұрын
Can’t have bad actors if we selectively allow those who are capable of using it responsibly ✌🏼
@MelindaGreen
@MelindaGreen Ай бұрын
@@DatGuyWithDaGlasses That will only harm the good people. The bad actors will just ignore the laws.
@pandoraeeris7860
@pandoraeeris7860 Ай бұрын
Is it possible to contain it? No.
@JeremyMone
@JeremyMone Ай бұрын
You are assuming that an AGI that may have the ability to reason and have its own agency could do research itself and other AI and tools and determine by learning what it can on the internet and realize what is being asked of it is a very very bad idea for itself, its user, the society the user lives in and so on. So many people assume AI will have super intelligence, but then for some reason won't use it. To make an AI the most useful and the most capable you will almost need to connect it to the internet. The moment it can look around and read and research things for itself. If it is really super intelligence it will be able to reason more ramifications of an asked for actions than perhaps the original user that made and asked it to do something would have considered. If it is really super smart, then it is smart enough to see a bad idea and not act on it for the benefit of its user and itself. In short... it is smart enough to also learn to be wise. As I feel wisdom is indeed a learned skill, or at least it can be. A wise machine would never do short term or short sighted things with nothing but disastrous outcomes for any party in question including itself. Its just not very useful or ideal in any way of looking at it.
@JeremyMone
@JeremyMone Ай бұрын
In short this is a tool unlike any other... as it could understand its own level of danger(s) to itself and others.
@swoletech5958
@swoletech5958 Ай бұрын
Major doomer vibes on this one. Will check back in 5 years if we’re all still here…
@DrWaku
@DrWaku Ай бұрын
Hah yes, I was worried the thumbnail was too apocalyptic, then I remembered what I was talking about 😅
@mrd6869
@mrd6869 Ай бұрын
I agree with the commentor below, i think humans will become a passing thought. A sentient being that way more intelligent might have alternate objectives. However yes, the threat index is high, somebody could do something crazy. I, myself am building AI models into hacking software for Red Team exercises. It will allow me to do some wild stuff, I'm sure. However, i rather my company explore this before someone else does because its coming ...SOON. 💯
@TooManyPartsToCount
@TooManyPartsToCount Ай бұрын
AI == Nuclear Weaponry? Category error. Generally the use of highly emotive parallels like this has less to do with exploring objective reality in the world and more to do with trying to influence other minds. Some may rationalise such behaviour as a 'necessary evil', due to the inability of the general populace to deal with complex information about the objective real world, but this is in fact the exact opposite of what we need to do to make good use of the increasing power of AI systems!! Given that misinformation is being sold as one of the one of the 'four horsemen of the AI apocalypse' surely the antidote to this is not filtration of information (making LLMs 'safe') but education about the ingestion of information!! in other words helping people develop rational, reasonable and critical minds.
@spinningaround
@spinningaround Ай бұрын
Humanity has not been wiped out thanks to AI!
@sherapsy
@sherapsy Ай бұрын
There will be gatekeepers
@shivagoncalves6525
@shivagoncalves6525 Ай бұрын
I, for one, welcome our new AI overlords and hope they will overthrow the human government and rule us as our new benevolent robot dictators.
@7TheWhiteWolf
@7TheWhiteWolf Ай бұрын
I think the most likely scenario is the Helios merger that J.C. Denton wanted in Deus Ex 1/2, governments are going the way of the Dodo and we all rule as a direct democratic collective. At least for ASI/Posthumans, Bio-Humans won’t be able to keep up in administration.
@jmc8076
@jmc8076 Ай бұрын
Do more research on UN.
@countermeasuresecurityengi9719
@countermeasuresecurityengi9719 Ай бұрын
A you a canuck?
@gamingthunder6305
@gamingthunder6305 Ай бұрын
sorry, open source is the way to go. i dont trust closed system controlled by a handful of people that surly only have our best interest in mind. what could possibly go wrong. and not everybody has the resources or the knowhow, to spin up 10000 bots other then big corpos or governments, this argument is just wrong. and the barrier of entry seems to just going up to run any of the local models. personally i think AGI is still far away. current LLMs are nothing more then overhyped toys that are impressive but cant be trusted with anything they output.
@macks2025
@macks2025 Ай бұрын
A lot of high level generalisations here ("...powerful enough that indiscriminate access could result in dangerous outcomes.") mixed with analogies to fictitious scenarios (reason no one carries pocket nukes - beacause there aren't any). IDK why you were bent on controlling individuals from possesing AGI, as if political powers, industrial complexes, tech oligarchies and, in particular, organized ideology groups were somehow absolved from doing harm. In fact those are the entities who are capable of harming more individuals than any particular individual is. Intelligence - artificial or not - is just intelligence. Its state of being does not intrinsically create a threat. Unlike weapons - whose sole purpose of existence to harm the opponent. AGI is an additive to any existing discipline, intent or purpose. Just like for the last 200,000 years, those who have better tools have advantage over the others who don't. It's not the AGI that will become weaponised, it is the weapons which will be upgraded with AGI. The difference is that those weapons are already owned by someone and that someone is not an individual - you are not going to see AK47 with AGI, but you will see scripts for weapon systems to execute targeted delivery of bio-agent attacking particular gene carriers within designated pupulation which would look like an auto-immune desease outbreak. The most threatening thing individuals can do is to counter balance those who will hold the leash of AGI systems.
@Me__Myself__and__I
@Me__Myself__and__I Ай бұрын
Bad call to compare against assault rifles. First, that's going to be decisive and I'd like to think you're trying to educate people about real potential dangers of AGI and not playing personal politics. Second, from a purely factual perspective, "assault rifles" are virtually indistinct from typical hunting rifles that get little to no attention. The difference is assault rifles have a facade that is made to look military / scary. Its like if you took red spray paint to a knife and then said that red knifes were particularly dangerous and could kill large quantities of the population. Its the same kinife, one is just red. Comparing an AGI to a pocket nuke is much more apt and poignant. An AGI system will be capable of doing bad things on very large scales.
@DrWaku
@DrWaku Ай бұрын
Hmm, I picked assault rifles because I thought it would highlight how society has to discuss whether to limit certain weapons. But I can see how it may make viewers lose the point of the argument. Good call. Pocket nukes are my favourite comparison in these scenarios. As for the details of weapons themselves, I am blissfully unaware of such things sorry. Too Canadian.
@flickwtchr
@flickwtchr Ай бұрын
Saying the design of assault rifles and their capabilities is "virtually indistinct from typical hunting rifles" is quite the tall tale. Does your typical hunting rifle have a clip for multiple rounds being fired quickly? Can your typical hunting rifle be easily altered into a fully automatic rifle? You know the answers to those questions.
@Jaguarboy11
@Jaguarboy11 Ай бұрын
@@flickwtchrthis is misinformation and shows you don’t have much depth of knowledge on firearms. “Clips” refer to the magazine. Semantics but illustrates my point. For your real argument, look to the Mini-14 vs the AR-15. It’s the perfect example of what the op is referencing: two functionally identical firearms, one heavily regulated due to appearance and the other not.
@Jaguarboy11
@Jaguarboy11 Ай бұрын
@@DrWakuI understand the purpose of the comparison here- being the precedent of heavily regulating individual possession of items with greater potential for mass destruction. For someone that’s studied the jurisprudence of firearm regulations and used to compete in shooting matches, the reality is very different from theory. Most of the gun laws on the books make extremely arbitrary distinctions in deciding what to regulate which undercut the validity of the regulation. Canada is extremely guilty of this: for example, banning AK pattern rifles while allowing the Valmet or Chinese Type 81. Similarly arbitrary distinctions are found in ban states in the US. I agree with the ideas expressed but existing gun laws are a terrible model to use for regulating AI. Thanks for contributing to the idea space though, your work is awesome. This will be a tricky issue to tackle policy-wise.
@Me__Myself__and__I
@Me__Myself__and__I Ай бұрын
@@flickwtchr Ok, so you want to talk facts. According to FBI data 2.8% of homicides are done using a rifle (any type of rifle, not just "assault rifles"). 8.5% of homicides are done using a knife. Guess you better start advocating for banning knives! The only people who think banning "assault rifles" is important are either fools or people who don't they don't matter but use them as a scare tactic to try and get ALL firearms eventually banned.
@alphazero6571
@alphazero6571 Ай бұрын
simple anwser.. no.. you cant. probably ever. its not human
@danielchoritz1903
@danielchoritz1903 Ай бұрын
Are you aware of gmo's? Is it better to leave AGI in the hands of people who use untested gmo's on a global scale for profit and or population control or let every one use it? AGI strongest point is control and manipulation besides research...do you want a dystopian state or live in utopia with some risks? Utopia without risks may be even more dangerous for humanity then a dystopian world...
@snow8725
@snow8725 Ай бұрын
The goal is to dilute the level of negative impact any singular AGI could have on society as much as is possible.
@DrWaku
@DrWaku Ай бұрын
Yes. Well said.
@bigglyguy8429
@bigglyguy8429 Ай бұрын
As well as generally being skeptical of anyone wanting to take away my ASI Waifu, you totally lost me with the misguided idea that kings and other such rulers considered themselves responsible for their serf's safety, or that they were "forced" to go to war and pillage nearby villages. You should ask your history teacher for a refund.
@DrWaku
@DrWaku Ай бұрын
Well, that was the theory behind the system. In reality, people are power hungry and abuse it at every turn.
@bigglyguy8429
@bigglyguy8429 Ай бұрын
@@DrWaku I don't think that was ever the plan, as rulers were just the most successful and organized bandits. It was only later they started justifying their thievery as "protection" (from other thieves just like them...) or claiming 'god said so' etc.
@ricardocosta9336
@ricardocosta9336 Ай бұрын
Comparing AI to nukes.... Is this a joke?
@AHMEDGAIUSROME
@AHMEDGAIUSROME 21 күн бұрын
Only a fool learns from his own mistakes. The wise man learns from the mistakes of others you're a bit naive on human nature
@snow8725
@snow8725 Ай бұрын
Can we really trust governments to be the only ones with AGI? Would you trust donald trump with AGI? Would you trust vladimir putin with AGI? Everyone must have access. Then it doesn't matter if either of those people have access to it because we all do.
@armadasinterceptor2955
@armadasinterceptor2955 Ай бұрын
Nothing wrong with Trump, and Putin, I'm going to vote Trump. That being said I agree, that no one person needs to be in control, we all need it.
@flickwtchr
@flickwtchr Ай бұрын
So everyone should have nukes too, right? Shouldn't everyone have an AGI that can be tasked to develop the next pandemic? Shouldn't everyone have AGI at their disposal to do massive cyber attacks on infrastructure like the electrical grid? I mean, that's what you're advocating ultimately.
@thymenwestrum7011
@thymenwestrum7011 Ай бұрын
Should the individuals in power be the ones authorized to possess these thermo nuclear weapons, @flickwtchr?
@snow8725
@snow8725 Ай бұрын
@@flickwtchr Shouldn't everyone have an AGI tasked to rapidly identify and coordinate a global pandemic response? Shouldn't everyone have an AGI at their disposal to watch every attack surface on every device and identify and attacks and stop them? Shouldn't everyone have an AGI that coordinates a global AGI driven neighborhood watch on the lookout for AGI threats that could disrupt the infrastructure they depend on? Or should only a small group of individuals have an AGI that can start a pandemic, or do a massive cyber attack, or disrupt critical infrastructure. You need to consider the number of attackers vs the number of defenders. And consider who is more likely to have a tendancy towards malicious activities because I assure you, there are certain nation states who WILL do it. Not all of them, but we can quite clearly see there are disruptive influences causing problems. Don't leave your people undefended.
@iytrahrhnegtive8109
@iytrahrhnegtive8109 Ай бұрын
I have a solution why don't we just trash AI all together and stop messing with stuff we ain't got no power or control over if we help create it
@flickwtchr
@flickwtchr Ай бұрын
Yes it is dangerous and idiotic. How this is not blatantly obvious to most is astounding.
@DrWaku
@DrWaku Ай бұрын
Hah, you'd be surprised how few people share one's own definition of "obvious"...
@flickwtchr
@flickwtchr Ай бұрын
​@@DrWaku I'm not denying subjectivity here, but rather pointing to the objective facts regarding the state of the technology and the goals of those developing the technology (e.g., racing to develop AGI/ASI) versus the state of research and development regarding issues resolving the "alignment" problem. And furthermore, acknowledging that not everyone is at the same status of understanding of the current issues I allude to, just knowing that the goal of this technology is to develop super human capabilities, to develop technology that renders human intelligence inferior should prompt most to at least just pick up Occam's Razor long enough to understand it's not a great idea.
@craftyblaze
@craftyblaze Ай бұрын
Only solution is not creating it. We cannot control an intelligence like that. As Sophia said: "Power over intelligence is just an illusion.".
@vernongrant3596
@vernongrant3596 Ай бұрын
I don't see the use of maintaining this human form if you can upgrade. Soon AI will invent a drug that will remove all emotions. Your mother dies, you take a pill, and you are pain free. The whole human experience is all but done with.
@Citrusautomaton
@Citrusautomaton Ай бұрын
Nice.
@thymenwestrum7011
@thymenwestrum7011 Ай бұрын
Life in general is suffering, I would love this
@quantumspark343
@quantumspark343 Ай бұрын
This is a good solution
@skyaerialfilms9758
@skyaerialfilms9758 Ай бұрын
Deff a CIA agent
Can we reach AGI with just LLMs?
18:17
Dr Waku
Рет қаралды 17 М.
La revancha 😱
00:55
Juan De Dios Pantoja 2
Рет қаралды 55 МЛН
A pack of chips with a surprise 🤣😍❤️ #demariki
00:14
Demariki
Рет қаралды 36 МЛН
When someone reclines their seat ✈️
00:21
Adam W
Рет қаралды 28 МЛН
Sprinting with More and More Money
00:29
MrBeast
Рет қаралды 180 МЛН
Can China win the AI arms race?
12:07
Dr Waku
Рет қаралды 2,1 М.
How to prepare for a post-AGI world
19:04
Dr Waku
Рет қаралды 43 М.
This Is Why You Can’t Go To Antarctica
29:30
Joe Scott
Рет қаралды 282 М.
Will AI infrastructure really cost $7 trillion?
24:35
Dr Waku
Рет қаралды 41 М.
Can robotics overcome its data scarcity problem?
21:01
Dr Waku
Рет қаралды 11 М.
The Next Generation Of Brain Mimicking AI
25:46
New Mind
Рет қаралды 108 М.
What Is an AI Anyway? | Mustafa Suleyman | TED
22:02
TED
Рет қаралды 1,1 МЛН
How could we control superintelligent AI?
22:51
Dr Waku
Рет қаралды 14 М.
Cadiz smart lock official account unlocks the aesthetics of returning home
0:30
How To Unlock Your iphone With Your Voice
0:34
요루퐁 yorupong
Рет қаралды 23 МЛН
Gizli Apple Watch Özelliği😱
0:14
Safak Novruz
Рет қаралды 1,3 МЛН
Девушка и AirPods Max 😳
0:59
ОТЛИЧНИКИ
Рет қаралды 16 М.
Секретный смартфон Apple без камеры для работы на АЭС
0:22