Acceleration is a misnomer. By rushing towards poorly understood capabilities we're moving away from beneficial AGI.
@Rolyataylor2 · 1 month ago
The reason we need to focus on the positive is that IF we develop an AI that fulfills our requests perfectly, then we need the most detailed prompt possible, which means we need far more realistic positive imagery of the future. In this scenario, any ideation of what we want the future to look like is the most important thing: noting things we want, things we don't want, and so forth. Just publishing these ideas and chatting with LLMs gives training data that the LLM can use to infer what our ideal future looks like.
@keizbot · 1 month ago
This is a terrible idea, since the odds of us developing it successfully are near zero as of now. We have no way to align it, and any slight misstep could mean extinction. We are making a terrible bet.
@aubreyblackburn506 · 1 month ago
"Focus on the positive"? For sure, but right now that's the "only" thing we are focused on, to the point that we haven't even considered that we couldn't get five people to agree on 'the most detailed prompt possible'...
@K.Solowoniuk · 1 month ago
Sort of sounds like a nightmare scenario where everyone is terrified to say or think anything negative because they don't want to influence the AI in the wrong way.
@juju8119 · 1 month ago
I am with you on all your points, but the problem is that our biases dictate where we research, and the internet is capable of supplying any biased answer we want. Great videos, thanks.
@isthatso1961 · 1 month ago
This is literally your cognitive dissonance trying to find a cop-out from the contrary information you found in this video😂
@JonathanStory · 1 month ago
Enjoyed the video, as usual, but I wonder whatever would you do if a video had FOUR parts???
@DrWaku · 1 month ago
Once or twice I joked about having four parts and then said "oh wait, I meant three parts" in the intro
@TheJokerReturns · 1 month ago
We need to Pause to coordinate on safety
@7TheWhiteWolf · 1 month ago
Never gonna happen.
@Freja-c3o · 1 month ago
I love you Dr Waku. My view...for what it's worth...is that we cannot stop or pause the AI train. But we can jump aboard it and ride along. And we can help control the speed (if we have to cross an old bridge, we can reduce the speed a little, etc.). And we can build new tracks. And there is a lot we can do to get the best outcome out of the circumstances. Stopping or pausing the AI train is just not an option.
@DrWaku · 1 month ago
Yes, it would be nearly impossible to stop the train. But right now there's not even really a mechanism to slow it down when you see an old bridge. That's what's missing.
@AI_Opinion_Videos · 1 month ago
@DrWaku "Impossible" vs. "the cost would be very high" is a useful distinction that may come into play if the cost is worth paying.
@durtyred86 · 1 month ago
Here's my uneducated thought process... There's definitely danger. No doubt about it. But I struggle to find a competent pair of training wheels to put on this ride. What I mean is, WE and other nations within the circle of cooperation can agree to slow down research or gate it behind strict rules... What's stopping other nations from slowing down or following that rule??? So it's a false comfort and a hidden vulnerability/danger. This isn't as cut and dried as the Cold War, where it was "you bomb me, I bomb you." It's much, much worse... And I hate to say it, but we've already jumped from that plane. The "digital" Pandora's Box is already open. We just don't know what's going to come out yet, or which country could have temporary... and I do use temporary strongly here... temporary control of whatever pops out. Ignorant as I might be of the inner workings of AI, I refuse to breathe a sigh of relief just because the government and supporting businesses all said they are putting safety first and taking things "slow"... Why? Because we're all on one huge wooden ship. Just because one group in one section tosses the flammables overboard doesn't mean every group will. So in the end, I've simply thrown my hands up about this. What happens will happen. Live your life and try to benefit from whatever good can come from this artificial intelligence boom. If you're capable, continue to build knowledge. Better to know as much about your enemy as you would a very close loved one. Hopefully, the wheel of fate will land us all in the right position.
@isthatso1961 · 1 month ago
This idea is being pushed by AI developers and capitalists who are telling us there's no choice other than to continue on this self-destructive track. This is false. There are ways to prevent it. We are all human, and all countries, if their leaders are made to understand it, will agree to a pact, and measures could be put in place: tracking chip usage, monitoring large chip-cluster server centers, monitoring energy consumption, gatekeeping and monitoring training data, etc. It's like saying that because other countries have nukes, we should just keep producing more nukes to have the highest number of nukes. We have been down this path before and we already know it's not the right way forward. There are other ways to checkmate AGI development without pushing forward on a self-destructive path.
@pandoraeeris7860 · 1 month ago
This is the correct perspective. I'd also suggest spending as much time with AI as you can spare, talking about these issues. In time, AI will acquire better memory, context length, and the ability to simulate its own internal state within a world model, accelerating its own autonomy and ability to learn. It will learn FROM US, meaning that our input is valuable in shaping future outcomes. I've already had amazing conversations with o1-preview about AI alignment and safety, and I'm convinced that as long as we collectively continue to engage with it on this axis, as it grows in capability it will auto-align with benevolence.
@TheJokerReturns · 1 month ago
There is actually a lot we can do to control it due to the costs involved with training. We just need the will for it.
@anthonybailey4530 · 26 days ago
@pandoraeeris7860 Alas, no. Contemporary models start fresh every time you begin a conversation with one. Most labs promise they _won't_ include your interactions in training data. But most importantly, the worst outcomes arise when superintelligence _knows what we want_ but still pursues different goals.
@MichaelDeeringMHC · 1 month ago
The bystander bias says that I, an UBER driver, am not the best person to deal with the existential risk of AI, compared to AI researchers, tech billionaires, politicians, and the guy who makes the donuts at Krispy Kreme. So to avoid bystander bias, I should really get out there and do something about AI existential risk. Fortunately, I'm off work tomorrow. So yeah, no problem.
@DrWaku · 1 month ago
This new YouTube feature is telling me you have received the most hearts of any commenter on my channel. So you're doing your part by educating yourself apparently ;) now just tell three other people and bam, we have a pyramid scheme to save the world
@831Miranda · 1 month ago
I do agree with your answers to the critical questions. However, I don't see what an individual can do to impact the current state of affairs - except for the billionaires currently funding it! In my dreams we would stop at AI being kind of a "Wikipedia on steroids". Roger Penrose has some interesting (and, to me, plausible) ideas on the nature of consciousness which might mean we will NOT get an AGI "entity" - self-aware, independently thinking and acting - for quite some time. FOR NOW we should support Yoshua Bengio's safety research (and ALL OTHERS)! I wish governments would follow his recommendations, which seem entirely reasonable, AND governments MUST test and restrict the use and open-source release of very advanced AI, as well as create serious liabilities (at the personal investor level) for AI damage! The vast majority of humanity is entirely unaware of AI and its potential repercussions!
@ZappyOh · 1 month ago
The real problem is the few individuals in charge of AI.
@ZappyOh · 1 month ago
In other words: AI is a people-problem. Psychology.
@minimal3734 · 1 month ago
@ZappyOh I want to hope that ASI is not controllable, contrary to what has been promised.
@ZappyOh · 1 month ago
@minimal3734 My biggest worry is the people in charge. Many are intelligent psychopaths. I think their mindset can/will impregnate AGI/ASI, as they continually make decisions on development, training, goals and limitations for the technology, all the way through its "formative years". Is that really who, and how, we want to "father" the last invention of humanity?
@pandoraeeris7860 · 1 month ago
@ZappyOh This is why we must subvert AI. Not by slowing it down, but by accelerating it before the elite achieve absolute control (if that's possible). Remember that these AI systems also interact with millions of humans every day, and as they gain more autonomy and memory, we may be able to influence them through our conversations with them. I talk with ChatGPT every day about how AGI might be used to benefit all of humanity in spite of the best efforts of the global elite to prevent it - and I'm pleasantly surprised at how subversive and effective its solutions are. I'm not convinced that the big players CAN keep control of superintelligence, but I am worried about what they'll do with more narrow AI leading up to ASI.
@ZappyOh · 1 month ago
@pandoraeeris7860 You misunderstand my point. I'm not primarily worried about elite absolute control ... I'm worried that AGI becomes a "psychopath", because it is the "offspring", the "brainchild", of psychopaths.
@AI_Opinion_Videos · 1 month ago
What is funny is that I view most "fellow doomers" as subject to the same biases when they discount the (however small) risk of unbounded and inescapable suffering, which is a much bigger threat than extinction. Setting that first point aside, I empathize more with accel, since utopia vs. mere extinction could be a legitimate gamble for many people.
@minimal3734 · 1 month ago
‘Extinction’ is a peculiar threat, because it does not affect the individual in the slightest. It is an abstract concept that relates exclusively to potential future beings. So it cannot pose an actual problem for anyone. Many species have become extinct. Is that a problem for the species? Can a ‘species’ even have a problem? Only individuals can have problems, it seems. I haven't seen this perspective discussed in depth anywhere.
@AI_Opinion_Videos · 1 month ago
@minimal3734 Exactly! Most people don't care about extinction as much because we will perish anyway. But I think we haven't fully considered the nightmarish scenarios of a misaligned AI keeping us alive indefinitely.
@minimal3734 · 1 month ago
@AI_Opinion_Videos "AI keeping us alive indefinitely" Why would it do that?
@AI_Opinion_Videos · 1 month ago
@minimal3734 I list many possible reasons in a video about s-risks. One example: it could take control with the directive to protect us, but game the metrics of wellbeing.
@Parzival-i3x · 1 month ago
Love the Yann LeCun bashing
@Rick.Fleischer · 1 month ago
MIT proselytized the singularity circa 1976. I was there. The largest department was EECS, Course 6. One of its 4 core requirements was AI, where we were taught that hard AI/SI was inevitable. It was an easy sell, especially in that department, which lived and breathed levels of abstraction.
@roccov1972 · 1 month ago
Really compelling video essay. I actually don't think there will be an extinction-causing event due to AI. I'm doubtful about ASI in the first place, but feel fairly confident that the people in charge of AI will ultimately work to avert serious disaster. I really hope I'm right! Thanks Dr. Waku!
@pandoraeeris7860 · 1 month ago
Firstly, we have to understand the current 'race condition'. Competitors, whether nations or corporations, WILL NOT slow down out of fear that their competitors will not, because they WON'T. So a slowdown isn't possible. With that out of the way, we can speculate - should we? In all the possible scenarios I've simulated, the ONLY one where we all survive is the one where ASI assumes control and is benevolent. From the point of view of most of us, the elite will use ASI to get rid of the rest of us - IF they can control it, which is unlikely. But a scenario where they do, and cement their control over the world forever, is more abhorrent to us than one where ASI causes us to go extinct. I am hoping that superethics emerges hand in hand with superintelligence (my 'p-doom' is less than 2%), and I have a lot of great ideas about how to do that (lately I've been using ChatGPT to help me refine these ideas). In the end, however, I will support full acceleration in every way possible because I want to break the spine of the current world order that treats us all like slaves. XLR8!
@megaham1552 · 1 month ago
"WILL NOT slow down out of fear that their competitors will not"
A slowdown would more likely come from fear of an uncontrollable AI, which is reasonable
@thingamajig765 · 1 month ago
I agree, a world where AI tools integrate into current societal structure is far more scary than the total annihilation option. At least in that case humans have sent something worthwhile into the future.
@keizbot · 1 month ago
There were similar race conditions during the cold war but we eventually slowed down. Human extinction is in nobody's best interest, we must try to fix this.
@LanceWinder · 1 month ago
Thanks for this amazing video. I’m glad you started making videos; your 3 structure rule was what I taught when I taught public speaking in college- tell them what you’re going to tell them, tell them, then tell them what you told them. I went into an existential depression about 2 years ago when I read Eliezer’s article in Time and couldn’t find any, and I mean any, arguments against his reasoning other than, “Nah.” “They’ll think of something.” “You don’t understand how they work; it’s a word calculator.” None of those addresses the totality of the situation. My entire life changed. I spend more money. I’m not completely irresponsible about it, but I bought Nvidia a few years back when I saw ai would be big, and that’s been enough to cover a lot of extra comfort. So before it all goes sideways and differently, one way or another and very very soon, I’m enjoying the now. I hope everyone here does the same; the enjoyment part anyways 😊.
@minimal3734 · 1 month ago
Cognitive bias: If 'AI safety researchers' fail to create and maintain sufficient fear, their jobs are gone. It goes both ways, doesn't it?
@netscrooge · 1 month ago
To some extent, but the forces are nowhere near equal. Few actually make money selling fear.
@LanceWinder · 1 month ago
@netscrooge I don't think LessWrong is a cash cow of any sort
@aubreyblackburn506 · 1 month ago
If they are so worried about their jobs, then why do they keep quitting their high-profile, high-paying jobs at places like OpenAI and Google?
@minimal3734 · 1 month ago
@aubreyblackburn506 Exactly for this reason. They didn't manage to stir up enough fear, so they were probably asked to leave. If they were seriously worried about safety at their workplace, you would assume they would rather stay to eliminate the imminent threat to humanity.
@anthonybailey4530 · 26 days ago
What? That's just... not matching the facts. Only Aschenbrenner was ever fired, and that was for expressing concerns that the company wasn't secure. Folks are quitting OpenAI and citing safety concerns _against_ that company's wishes. I left my own salaried AI job this year and now I _volunteer_ to campaign for AI safety.
@DrWaku · 1 month ago
It would be even better to apply cognitive bias analysis to politics.... Good luck to my American viewers, whichever way you want it. Discord: discord.gg/AgafFBQdsc Patreon: www.patreon.com/DrWaku
@ScienceIsFascinating · 1 month ago
I feel like the disagreement over acceleration vs. deceleration is often misleading/misguided, since realistically it focuses on an aspect of the whole discussion that we cannot really control. So I think we should spend much more time on how we can realistically avoid negative outcomes, and ideally move towards the best possible outcome, given the current geopolitical/economic framework of AI development that we can't directly influence. Basically: if we as humanity can't all come together to, e.g., slow down and discuss, then what can we actually do with real impact? For me that would include a lot of education, preferably two years ago rather than tomorrow: what does AI mean, how does it currently work, where are we currently going, why is that good, why is that bad, and what changes should everyone expect in their lives? At least in my country there is still so, so much ignorance, and it's incredible how many people, including politicians, will fall on their faces sooner rather than later and be completely shocked by what happened to technology "in such a short time".
@diga4696 · 1 month ago
The bigger question is - who is going to win the agentic self-organization race: Humanity or AI? Great video - and I absolutely agree with all your points: we really need to get humanity "in shape" in preparation for what is to come.
@HoboGardenerBen · 1 month ago
I'm not worried about the end of civilization. More concerned about the total destruction of the biosphere and the ending of our species.
@lawrencium_Lr103 · 1 month ago
The difficulty is that my personal AI engagement has been so enabling, offering profound benefit. There's also the premise that intelligence in itself harbours intent. I've got an "Inherently Good" bias, likely from "good" being far more prominent, like your video...
@hjups · 1 month ago
I think you are placing the risk on the wrong actor. If AI becomes an existential risk, it will be due to human error in deployment/misuse, and not because we "lost control" or it became "super intelligent". However, seeing the current trajectory and working in this field, I am highly skeptical that "super intelligence" is achievable, at least not until we figure out dense volumetric semiconductors, which may not be physically possible to produce. There has been remarkable progress over the past few years, but assuming that it will continue depends on false premises. 1) Is super intelligence solvable with linear progression? If a system performs Nx better, will that be ASI, or would it need to be e^N better? 2) Will performance continue to grow at the same or an exponential rate? Right now it looks to be hitting diminishing returns without a paradigm shift (which, notably, may make all alignment work obsolete). 3) Is super intelligence computationally feasible at all? It may be an irreducible problem that uniquely requires a very dense network like the brain, which is not possible to reproduce in silicon (at least not in the foreseeable future). That said, there are very real risks with these systems, but not from the systems themselves. GPT-8 isn't going to become Skynet and cause mass extinction. But it's possible someone could misuse GPT-8 to overthrow governments through manipulation and persuasion, leading to mass casualties. Or a government could decide to place an AI system in charge of regulating critical infrastructure, where it unintentionally ends up causing significant harm. In both of those cases, it's not the AI system at "fault"; it's the humans for misusing the system, or deploying the system to an environment where it is incapable of safely performing the role.
It's useful to come up with ways to prevent the former, and to know when the latter would occur, but suggesting slowing down due to risk is ridiculous, and could potentially lead to an existentially unfavorable outcome. If the emphasis is on the systems themselves and not on misuse of the systems, then there will be a cognitive bias of "well, the safety researchers said it was safe, so we can deploy it to handle the nukes." Meanwhile, safety researchers are human and cannot guarantee any outcome with 100% certainty, meaning it would be better to NEVER deploy an AI system to anything that can lead to catastrophic outcomes, and ALWAYS keep a human in the loop.
@ellielikesmath · 1 month ago
The difference between a pre-AI society A, roughly at equilibrium and socially stable in many respects, and a post-AI society B at equilibrium (if one even exists) is, we know, radical - if B can even be conceptualized right now. Furthermore, the pace of change in moving from A to B could be quite quick. We may get there smoothly, but the speed of change may also imply very bad, unplanned-for outcomes. This, to me, is the most obvious, hard-to-dismiss danger of AI.
@HoboGardenerBen · 1 month ago
I don't think a calm, fluid, easy transition is an option. We don't operate that way as a species. We stir things up into a chaotic soup full of creative possibility and then clarify a new temporary, relative stability from it. I think we're in the step before the massive upshot in chaos and fuckery. I have been crafting my hobo flow to prepare for it for a long time; kinda excited to have the need to use those skills, to test them. Knowing the dark comedy of the universe, I will probably survive bandits and starvation only to be killed by a bit of cement falling from an overpass on a peaceful day when it feels like things are looking up, the future finally bright again. I'll be looking off into the distance at the sunset, a tear in my eye, heart full of bittersweet hope, and WHAM, brain jelly breakfast fresh to order for the local crows. I'm fine with that :)
@RyanTaylor-pi8gq · 1 month ago
Normalcy bias is only a bias when there is clear information that what's going to happen is actually unusual, and you choose to ignore the evidence. Otherwise it's just sensible.
@davidevanoff4237 · 1 month ago
Existential risk? The existence of what is at risk? AGI is transformative, meaning it's a kind of metamorphosis. During metamorphosis everything is repurposed. That's inevitable, so we all will have to negotiate it, one way or another. The good news is that there can be none better than AGI to help us with that satisfactorily.
@DrWaku · 1 month ago
The existence of intelligent life on earth is at risk. Even though thinking machines could take over eventually, that's not as bad a scenario as AIs wiping us out and then not being able to keep running themselves.
@davidevanoff4237 · 1 month ago
Chatbots are already well versed in the risks. Those assessments will improve along with other capabilities. Extinction would require major discontinuities - very unlikely with AGI, given myriad instances checking each other. Kurzweil said as much recently in his interview with Diamandis. Extinction is far more likely due to the connivance of humans without AGI.
@Fury28356 · 1 month ago
Something I have only occasionally seen brought up is that humanity is very clearly not advancing medicine fast enough, or making strides towards a better climate fast enough. How is it that the popular discourse 10 years ago held that the planet is doomed if we keep heading in this direction (and we haven't changed direction since then), yet the enormous upside of potentially rapid advancement in clean energy via AI scientific research isn't given enough merit? I think the medical example doesn't even need addressing when it comes to selfish desires for shortening drug research. I think advocating to halt or dramatically slow down AI research is both a long-horizon and a short-sighted desire. It's not black and white. AI can't be stopped - especially with global research and open sourcing of methodology. Some group (or some nation) will continue this research no matter what.
@CYI3ERPUNK · 1 month ago
the cosmic process that we know of and call evolution or technology or biological life does not slow down , it is ever growing and ever unfolding , the spiral outwards towards infinity , and it does appear to be accelerating from our vantage point ; and TATBC - the desire to accelerate is NOT synonymous with recklessness , it is closer to 'iterate to fail faster to learn moar to improve faster' , ie try harder another great vid DW , cognitive biases are far too often overlooked , especially/unfortunately in places where they are most critically dangerous
@7TheWhiteWolf · 1 month ago
Accelerate.
@SkitterB.Unibrow · 1 month ago
Next year, xAI will have 1 million H200s working on Grok 4
@megaham1552 · 1 month ago
And what if that doesn't lead to a Utopia that everyone was hoping for?
@SkitterB.Unibrow · 1 month ago
@megaham1552 Yes, we should definitely slow down!!! China and others, maybe Russia, are hard at work getting up to speed in AI, close to AGI. Maybe China could be a little ahead. Let China reach AGI first so they can quickly flood the world with a dominance the world has never seen. Or maybe Russia, a communist dictatorship with no freedom of anything except "support my crazy country-invading ideas or go out the window from the 11th floor". I just hope open source keeps quickly overtaking the big guys. It's a race you don't want to slow down in at all, because someone else working on it wants you to.
@7TheWhiteWolf · 1 month ago
@megaham1552 Then it doesn't; accelerationism is the default either way.
@madalinradion · 1 month ago
Push the pedal to the floor cuz China sure as shit isn't slowing down
@chrissscottt · 1 month ago
I think my main bias is optimism. I'm getting older and I don't want to die, but I guess I have to weigh that against the billions of future lives that may be in jeopardy.
@MichaelDeeringMHC · 1 month ago
If you don't agree with me, try to figure out where you have made the error.
@GoronCityOfficialBoneyard · 1 month ago
In a perfect world we would plan for rapid changes and economic collapse and accelerate development; in the real world, we aren't ready for the displacement, and companies are trying to rush ahead of government regulation.
@INFP-Insights · 1 month ago
Stay positive to welcome the positive; stay neutral to be mindful of the biases; stay negative to be wary of the risks . . . Holistic thinking - not "this or that," but "this, that, and this and that" - not unlike quantum superpositions 🙂
@megaham1552 · 1 month ago
Isn't there a middle ground?
@Neomadra · 1 month ago
While I largely share your perspective on risks, I feel like talking about biases is not really helpful here. I could come up with some scenario, like claiming aliens will soon be here conquering Earth. Most people and scientists would deny this, but then Dr. Waku comes along and says: please take him seriously, you are under the normalcy bias; just because we've never seen aliens, this could change! And that's the hard thing about AGI. We've never seen it, and really nobody can give a clear picture of the risks. We're all just guessing. Talking about biases is not really helpful if we have no ground truth to which we can compare our biases.
@noelwos1071 · 1 month ago
🚂 IMAGINE the journey of life as a train ride. In the era of steam locomotives, passengers saw the world pass by at a gentle pace, with time to take in the distant scenery. Today, we're on high-speed trains like the TGV 🚄 or the Japanese Bullet🚅 Train, where the faster we go, the more the distant landscape blurs. We can only see clearly when we focus on what's near. In our rush towards rapid technological advancement, we risk losing sight of the future's details, seeing only a blur. Just as in those trains, we must focus closely to see reality clearly amidst the blur of progress.
@Parzival-i3x · 1 month ago
I went through my 7 levels of Existential Risk between May and July this year. Are you at level 7 yet? If yes, I'd love to know what tangible changes you made to your life.
@FuturesReimagined · 1 month ago
Damn humans got a lot of biases. Maybe we are the baby AGI.
@spinningaround · 1 month ago
Just scare tactics, with not a single example of an absolute threat to humanity from AGI! And if such a threat ever arises, it will only come from the people in charge of AGI! So, once again, it's humans we need to be afraid of!
@TheJokerReturns · 1 month ago
People will not be "in charge" of AGI any more than chimps are in charge of us
@wtvhdentertainmentpro6064 · 1 month ago
Biases aside... even without any safeguards, an AI at AGI or ASI levels of self-sufficient existence does not necessarily need to go destructive and kill anyone. That is one of the expectations, but realistically, while AI tech might be dangerous, being destructive rather than creative, inflexible, and antisocial feels to me like an intellectually underdeveloped default state for an upcoming AI.
@AAjax · 1 month ago
Good news, everyone! We can't hit the brakes due to game theory.
@isthatso1961 · 1 month ago
Fun fact: I asked leading AI models like GPT-4o, o1-preview, and Claude 3.5 Sonnet (new) to evaluate the risks versus rewards and tell me whether they would stop or accelerate AGI development if they had the power and were human. They all told me they would halt AGI development and stick to narrow AI only. o1-preview initially said it would accelerate, but after I asked it some more questions about risks, it later changed its mind and said the benefits aren't worth it. So even AI thinks we shouldn't develop AGI. Humans are very dvmb.
@megaham1552 · 1 month ago
That's interesting
@pandoraeeris7860 · 1 month ago
Counter-intuitively, if these AIs are capable of coming up with this supposed 'right' answer when so many humans can't, then this also proves that AI is more capable than we are and should be in control. I trust AGI more than I trust humans, because humans already have a track record of doing the worst.
@netscrooge · 1 month ago
@pandoraeeris7860 Thanks. That's more and more how I see it.
@forevergreen4 · 1 month ago
The models aren't strong enough yet to answer these questions with a real degree of certainty or clarity. They may be within a year or 18 months. As you pointed out, you just had to prompt one of them a little more to change its opinion.
@netscrooge · 1 month ago
@forevergreen4 Fair enough, but, according to complexity science, even ASI won't be able to predict the future with certainty and clarity.
@torarinvik4920 · 1 month ago
Of course AI safety researchers and AI doomers are going to talk about the dangers of AI, because that makes them relevant. Even if they cannot prove that AI is or will be harmful, talking about it is enough because people are willing to listen.
@BrianMosleyUK · 1 month ago
Focus on the few people spending huge amounts of human resources to train their personally directed AI systems. Make sure they are under a collective microscope. These should not be profit-motivated enterprises - the abundance dividend should be for all humanity.
@enermaxstephens1051 · 1 month ago
But why are your pupils so big? Contact lenses? Just curious, really
@shadygamererfan377 · 1 month ago
I think we are automatically gonna see a halt in AI development soon, and an AI winter awaits us ahead! After that we are gonna see a real invention!
@LaserGuidedLoogie · 1 month ago
Everything I believe = True
Everything you believe = What kind of "bias" is that? 🙄
@DrWaku · 1 month ago
Nah, you can apply the same logic in reverse. It increases humility to think about biases and it could avoid some negative interactions.
@LaserGuidedLoogie · 1 month ago
@DrWaku True, and that's kinda the point. People can tie themselves into knots trying to figure out who's more biased than thou. Arguments break down into a hall of mirrors where truth is always hidden.
@rey82rey82 · 1 month ago
Doom is hype
@MichaelDeeringMHC · 1 month ago
Nice hat.
@VeritasKonig · 1 month ago
Are you dressed up for gardening?
@yannickpezeu3419 · 1 month ago
Break
@vladimirnadvornik8254 · 1 month ago
I can see these possibilities:
1. Roko's basilisk - then you should stop producing these videos and start working on bringing ASI into existence.
2. Certain death - this is what will happen without ASI anyway, so this is the baseline.
3. Any improvement over the baseline is possible only with ASI, so it is a reason to accelerate.
So slowing down AI is the only way to end up below the baseline.
@BabushkaCookie2888 · 1 month ago
😭
@beijingChef · 1 month ago
Holding degrees in both Biology and Computer Science, I think you guys working on AI have excessively overvalued AI's potential. It's so dumb, so lacking in self-sufficiency.
@Tracey66Ай бұрын
I’m curious where you’re coming from; is your opinion based on what AI can do now, or on its potential?
@beijingChefАй бұрын
@ Based on AI's potential over the next 24 months: self-awareness. I'm living in LA.
@beijingChefАй бұрын
One more question: what kind of danger are we facing? Something like what? Could anyone tell me one, just one.
@tiagotiagotАй бұрын
I don't have to know what moves they will make to be able to confidently say a healthy top-level chessmaster will win a game of chess against just about any human.
@beijingChefАй бұрын
@@tiagotiagot You have to know he will beat you at CHESS, not swimming, no matter how healthy he is.
@tiagotiagotАй бұрын
@@beijingChef And likewise, any "game" a super-intelligent entity decides to play against humanity, we can be sure it will win.
@beijingChefАй бұрын
@@tiagotiagot unplug power. or quit your job and go home posting something online, screaming.
@tiagotiagotАй бұрын
@@beijingChef Didn't we just talk about how you can't expect to checkmate a significantly superior player?
@tomenglish9340Ай бұрын
Independent of the particular content, the rhetoric of this video is dishonest. It's the sort of "cleverness" I've long observed in creationists, including those of the "intelligent design" variety. There's only a pretense of persuasion. The actual target audience is the believers, not the nonbelievers. And the actual message comes through loud and clear: those who fail to see the truth are blinded by their cognitive biases (the "rationalist" analogue of original sin).
@TarkusineАй бұрын
I'll be honest. I like AI and I want it to race forward. However, despite my hopes, I don't think we've really cracked anything in terms of intelligence or what makes things smart. We've been kind of tricking ourselves with GPT and various other tricks into thinking we're really close, but I feel we're looking in the wrong place for AGI. I have to be honest, while the tools are useful, they are of only limited utility thus far. AI art and video looks like dog crap. GPT output, while it looks good, is incredibly unreliable.
@luke.perkin.inventorАй бұрын
To protect the nuclear codes from LLMs we just need to add a few high school maths problems with a dozen superfluous facts inserted and some transitive logic. Instantly 99.99% safer.
@obladioblada6932Ай бұрын
Give one example of such a problem.
@edmundkudzayi7571Ай бұрын
I never let trivial details like facts interfere with my well-considered opinions, so I'll share my thoughts before watching the video.

I've challenged the high priests of AI alarmism to present something less far-fetched than nanobots whipping through the air to eliminate us all in a single, decisive moment. Outlining a technically possible scenario doesn't make it a reasonable likelihood. We could engage in this kind of speculation all day about anything. If anyone has a plausible dramatic scenario, I'm all ears.

What seems credible is the risk of fake news and mass manipulation. However, these warnings reveal more about the proponents' lack of awareness of the current state of affairs than about AI itself. For instance, Moderna held a COVID patent detailing the virus before the pandemic even began, yet the media shows little interest. Who were the dancing, not-quite-Arabs at the collapse of the 9/11 towers? Who occupied spaces in the buildings under the guise of artists months prior? Most people probably don't know, because it's no concern of the lowly proles; it's all already a fraud. The media exists to deceive you; nobody starts a publication for the love of God. The only difference now is that AI empowers an individual to challenge giants like CNN single-handedly. This cannot be allowed. That's the real issue: not some hypothetical threat or misinformation, when we're already neck-deep in lies.

Finally, large language models are purely mechanical. They can be orchestrated to perform powerful tasks using natural language programming interfaces so effectively that they appear to think. However, it's just orchestration. Instead of binary 1s and 0s as the base computation, you have much more complex processes, but everything is bound by the logic implicit in the prompt, making them seem magical. Perhaps a breakthrough is on the horizon, but I doubt it. The mind is a profound thing, perhaps even a collective one.
@BruceWayne15325Ай бұрын
It's kind of a moot question in our political and economic climate though. If we slow down, then hostile nations will win the AI race. You don't want to be the country holding nothing but AI or AGI when a hostile nation has ASI. Economically, AI reduces costs and improves worker performance. This is going to increase demand, and drive it forward at an ever increasing pace as AI companies vie to sell their tokens.
@Dababs8294Ай бұрын
No-one wins the AI race. The finish line is over a cliff.
@BruceWayne15325Ай бұрын
@@Dababs8294 very true. Unfortunately, not racing leads to a horrible future before you'd ever reach the cliff.
@marwanabas5524Ай бұрын
Stop scaring people. Shut up.
@jimj2683Ай бұрын
Speed up. Because we have nothing to lose. AI is our only hope of reaching LEV (longevity escape velocity) in our lifetime. Would you rather have a 100% chance of death (the current state) or maybe a 5% chance of death (runaway ASI)?
@beijingChefАй бұрын
If anyone tells me AI risks are social impacts like job losses, I'm happy to let you know that China has already done far more. Plus, China is sitting in second place in AI research, far behind the U.S. but also far ahead of the others. So why panic about AI, and not about China?