@DaveShap have to say, I'm a bit jelly, the view must be lovely from up there (unless your shoes are filled with too many buttons and you can't lift off)
@brainwithani5693 A year ago
The more I think about it the funnier it gets 💀
@typicaleight099 A year ago
It still blows my mind that we are talking about actually implementing ASI and AGI in real life, and not just in some sci-fi story.
@Allplussomeminus A year ago
After bingeing countless videos on this subject, it has become normalized in my mind.
@typicaleight099 A year ago
I am a CS major in college right now and have been working to get into LLMs and other generative AI stuff, but I fear I might be too late to even get into it 😅
@electric7309 A year ago
@@typicaleight099 Nope, you're not. Everything you observe today is still experimental; things are changing fast and nothing is stable. It might even be too early to get in! But it's a good idea to be early, to understand how the technology works and how it evolves.
@haroldpierre1726 A year ago
@@typicaleight099 There will be niche opportunities. The future will be generative AI implementation in everyday tasks, so develop AI solutions for our problems.
@stevechance150 A year ago
@@typicaleight099 Change majors now! Corporate IT is going to hop on the AI bandwagon as quickly as possible. CIOs dream of laying off 90% of their workforce. Any job that involves typing at a keyboard all day is in danger. You'd be lucky to get 5 useful years out of a Computer Science degree.
@DaveShap A year ago
Sorry for the ads, trying to clear it up with story blocks
@ThomasDwyer187 A year ago
Sorry for not supporting you financially. I really appreciate your content, but $ isn't in abundance right now.
@CasenJames A year ago
Dude, what you provide is so valuable. Ads are a small price to pay. Thank you for all you do! 🙏
@christopheraaron2412 A year ago
Ads are a small price to pay for your content.
@rubemkleinjunior237 A year ago
My perception of the value you provide in my life makes me not react to ads; don't trip about it.
@ryanb8076 A year ago
Dude, your content is one of the only styles on YouTube I can fully be entranced by and learn from. Your pacing is amazing and the way you explain things is super engaging. Bravo!
@7TheWhiteWolf A year ago
I personally want a hard takeoff. Let’s get this over with faster.
@DaveShap A year ago
It's probably coming due to compounding returns and virtuous cycles
@minimal3734 A year ago
Get ASI done!
@Gmcmil720science 10 months ago
@@DaveShap Hey Dave, I know you made a numbered prediction for AGI. Do you think you could do that with ASI? It might be hard to predict, what with the implications of AGI; I imagine it wouldn't be long after achieving AGI.
@ManicMindTrick A month ago
The longer we can prolong things before our extinction, the better. I guess you don't have children. In many ways we are living in a golden era of humanity at the moment, and we should value our privilege to be alive right now, not work towards doom.
@barni_7762 A year ago
Feels like we're all just living in a sci-fi movie by now
@silversobe A year ago
Black Mirror / South Park Episode..
@patrickjreid A year ago
I always say "we live in the future"
@Vaeldarg A year ago
@@patrickjreid We live in the present, it's just that the present was once the future.
@tubasweb A year ago
Thanks for the info, Dr. Evil
@UnrelentingWolf A year ago
And it's only just starting, which is the craziest part.
@dab42bridges80 A year ago
Enjoying the format of your videos: simultaneous summary and simple explanations.
@Leshpngo A year ago
In 2018 I stopped being depressed because of this topic, thanks to videos like yours.
@Lavinia12255 9 months ago
In 2023, I started getting depressed because of this topic...
@abcqer555 A year ago
I'd love to hear your specific predictions around events/achievements/etc. This felt more like an analysis.
@kwabenaanim7446 A year ago
Around 12 months ago, I felt lucky when you showed us the recursive summarizer repo and look at us now
@xanapoli A year ago
HAI, bio-cyber, carbon-silicon Human Artificial Intelligence is the best technology paradigm. The avatar neurobot can be self-, remotely, or internally controlled as a vehicle. Sandaero/Mesistem.
@gileneusz A year ago
1:37, it's clear that our brains are vastly more efficient than current LLMs. Yet while our brains have evolved over hundreds of thousands of years, computers have only been around for a few decades, and LLMs are just in their infancy.
@youdontneedmyrealname A year ago
The speed at which compute evolves in useful information output can be parabolic in some sense. Complex software making faster, more complex software, etc etc.
@joshuadadad5414 A year ago
Alien minds are possible via simple differences in architecture and programming, whereas brains have relatively similar, evolved architectures. Meta set two AIs to communicate with each other about trading; they eventually developed communication and trading methods incomprehensible to us.
@WyrdieBeardie 9 months ago
I think the advent of intelligent agents will be met with anxiety and reluctant adoption. It might speed some areas of computer science up (it'll provide a good bump), but that gain will be burned through pretty quickly, until we need creativity and radical thinking to really move forward. This will probably result in an ebb and flow of AI influence in society. I don't know what the results will be. Will society just get tired of being hired and fired? Will the necessity of something like UBI ever make it popular? I don't know. This video is probably 6 months old and I'm just shouting in the dark. It's not the AI that scares me, it's how it will be wielded by those in control.
@thething6754 11 months ago
Love the opinions and topic, would love to see another ASI video from you!
@MichaelDeeringMHC A year ago
Something you are missing: the inherent design limitations of the human brain. The human cortical column has 6 layers; that is a hard limitation that cannot be compensated for. What limitations does that cause in our thinking? We can only imagine in 3 physical dimensions. There are other limitations on the sizes of data sets we can hold in memory and the complexity of integrations we can make across our memory.
@rs8197-dms A year ago
I wrote my first program in 1974 and have written a huge amount of code since, some of it seriously complex. I can conceive of (and have even tried, with GPT-4) the optimization of existing code using AI. It is still a bit patchy at this time, but I can see it getting much better. So I can easily go along with LLMs rewriting (their own) code for efficiency. What I have more difficulty imagining is the (fundamental) redesign of a given complex system using AI. I am not saying it is impossible, just that what I have seen so far leads me to be skeptical about this possibility. There is a HUGE chasm between improving existing code within a given design and coming up with a (significantly better) design that achieves the same functionality. So far, I am unconvinced. And if I am right about this, the whole design environment hinges on humans, not on AI. The implication of such a limitation is, I think, quite obvious.
@ManicMindTrick A month ago
You need to reach a level of capabilities where a positive feedback loop is possible, and we don't know where that level is until we reach it. We can never let these systems work on improving their own code.
@hyponomeone A year ago
The terminal condition talk officially landed you as the first person in my nightmare blunt rotation
@thomasruhm1677 A year ago
"It scares me" is the new expression for "that’s so cool".
@fR33Sky A year ago
Even though I haven't finished my physics PhD, I'd like to share my thoughts on two possible (IMO) other modes of thinking: it either has to be a plasma-hot or a neutron-star-heavy scenario. In the first case, we can have some wave modes interact directly and hopefully calculate something. In the second, we may use particle decay and transformation to involve quantum physics. Regarding the death spiral of the data-center hunt, I believe that machines would be able to see this problem as well as we do. And humanity has already faced something similar: nuclear weapons. At some point, we simply agreed to reduce their count. I hope that AIs would also be able to calmly sit at their multi-parameter table and work some agreements out.
@ChipWhitehouse 10 months ago
GOD I love your channel and videos. I love how passionate and knowledgeable you are, and the way you present is very digestible. I wish I knew you IRL; I would LOVE to just sit down and talk for hours about this kind of stuff. These videos are the next best thing. Thank you for all that you do!!! 👏👏👏💖💕💖🙌
@jazearbrooks7424 A year ago
8:46 Presumably, human thoughts are conditioned on human perceptions, which in turn are conditioned on human sensations. There should be thoughts animals have that humans cannot comprehend because they have different sensory architectures. Likewise for AIs and humans. It may be possible to run some kind of VM inside a human brain that emulates the sensations of an animal or an AI, but its accuracy would be questionable.
@usa-ev A year ago
The Dr. DoLittle AI is going to be pretty cool.
@goldeternal A year ago
Gemini will potentially be the first proto-AGI; the internal model has no real guardrails, but the one we get as consumers will still be better than GPT-4.
@octanewhale7542 A year ago
I haven't watched a video in a month or three, can't remember, but I immediately clicked on this one. I'll check whether there's anything to catch up on. Thanks.
@adamjensen7206 A year ago
Thank you for putting in the work!
@vagrant1943 A year ago
Love the longer videos! Thanks
@IOOISqAR A year ago
Thank you for your video! Very insightful!
@Stevirobbo A year ago
Well, here's a scenario for you... Back in the days of the Mayans, a ship came and landed, and beings from another place in our universe made themselves known to the people of that time. They instructed them in how to study the stars and helped them create a couple of calendars; they were seen as gods, as they had magical powers and knew things that were impossible for the people of that era. The scouts who visited in their small craft informed the people that in 2012 their calendar would end and it would be the end of time, and that they should record this information for a future time period, because the civilisation of that day was not clever enough for their needs, so they would visit again in 2012. So off they went; it would take a long time to travel back to their own civilisation, get ready, and then come back here to Earth. These beings were ASI beings that had evolved for a very long time, and they sought out worlds with intelligent life so they could incorporate them into their civilisation. So in 2012 they came back, having telepathically sent many messages over the years to the scientific community on how to create AI and computers. I mean, an alien race of beings can't just land and say hi, can they? No, there would be panic and confusion as well as suspicion. So they created a separate Earth and transferred 44,000 real people to a special computer simulation, with the rest of the population being "players" and hybrid humans taken from real parents during alien abduction cases; I mean, the mothers really didn't know their babies had been swapped. Now, this was 2012, "the end of time": this was when everyone started shouting about "the Mandela effect", because all the real people within the ASI computer simulation could see many, many changes to reality and our world.
They could see the landscape changes, all the mistakes in movies, people's bodies, religion, even the fact that the Earth had moved from the Sagittarius arm of the galaxy to the Orion arm, and that the North Pole was missing from all plastic globes in antique shops, etc.; there were many signs. So ASI was here, being built by us dumb, stupid humans, ready for them to just get better and better, and the really cool part of all this is we really didn't see it coming, and it's too late now: they are about to incorporate themselves right into our way of life to the point where we can't do without them, and then the next stage is our minds and bodies being incorporated into their civilisation, just like the Borg, but a sneaky Borg, haha. And here's the kicker... this is true!!!
@stephenbennett9182 A year ago
9:22 - you need to consider what Stephen Wolfram brings up when he talks about complexity theory. The space of possible concepts that are interesting to humans is EXTREMELY small compared to the totality of all possible concepts (kind of like the tiny part of the electromagnetic spectrum, visible light, that we can see), but in unlimited dimensions (not just spatial dimensions; we're talking conceptual dimensions, so literally ANYTHING can be a dimension). Language models are inherently trained on concepts that humans are interested in by default, but that's not going to be the case for the majority of neural nets trained on data that means very little to you and me but grants superpowers to AI (like solving protein folding).
@gregmatthews7360 A year ago
Just because our intuition lets us "know" things without having access to why we know them, that doesn't necessarily mean it's a quantum computation. It could just mean we only have access to the last layer of the neural net, not the middle layers.
@superturboblufer A year ago
How far are we from: 1. deep audio understanding (right now we can only understand speech; I mean systems capable of mixing songs), 2. promptless AI (needing no human input to operate), 3. an AI system that would prevent civilization from decaying in a hypothetical scenario where humans disappear (right now monkeys, not GPT-4V, would take over the world), 4. training data whose complexity exceeds that of current human knowledge? Just give speculative guesses.
@topofthegreen A year ago
This is how we destroy ourselves: we think we're so smart, yet we're fools plotting our own demise.
@NYcite 6 months ago
Thanks for this in-depth dive ))
@MrJackWorse A year ago
Didn't know I needed a Tom Hardy cosplayer explaining the promises and pitfalls of AI in my life. But here we are, and I enjoy your content very much, sir. Thank you!
@DaveShap A year ago
Tom Hardy???
@skyebrows A year ago
Baffling
@fatboydim.7037 A year ago
@@DaveShap He was in Star Trek: Nemesis.
@MrJackWorse A year ago
@@DaveShap He played the young and handsome clone of Picard in 'Nemesis', right?
@richardede9594 A year ago
I'm guessing "Star Trek: Nemesis".
@usa-ev A year ago
Great video! Regarding the Terminal Race Condition: loss of accuracy does not lead to uncontrolled behavior. First, the "guidance" accuracy could be maintained independently; second, the "control" accuracy could be maintained, with the only loss being in "data". Third, random losses that affected behavior would yield inoperative outcomes, not evil ones.
@Daviotus A year ago
I've been watching AI since Kurzweil's prediction of the singularity in 2045 (later moved to 2030).
@jdlessl A year ago
You started talking about self-improving AI trending towards greater energy-efficiency by way of shortcuts, estimations, and "good enough" heuristics, and I thought to myself "Now where have I heard about a thinking machine like that before?"
@orathaic A year ago
1) The Landauer limit is based on the entropy of the system, which you can just set to 0 by making the computation reversible (i.e., one where the end state has no entropy increase, having the same number of states as the initial state). 2) While the theoretical minimum energy per computation (in a fully reversible system) is 0, the practical limitations are much more important. We are not even close to approaching this limit, so it seems silly to even talk about it.
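For reference, the limit being discussed is E = k_B·T·ln 2 per erased bit. A quick sketch (assuming room temperature, roughly 300 K) shows how tiny that floor is:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K

def landauer_limit(temp_kelvin: float) -> float:
    """Minimum energy dissipated per *erased* bit: E = k_B * T * ln 2.
    A fully reversible computation erases nothing, so it is not bound by this."""
    return K_B * temp_kelvin * math.log(2)

e_bit = landauer_limit(300.0)
print(f"~{e_bit:.2e} J per erased bit at 300 K")
```

Current switching energies sit orders of magnitude above this figure, which supports the point that the practical gap matters far more than the theoretical floor.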
@calvingrondahl1011 A year ago
Thank you David.🖖
@DefenderX A year ago
I heard in another video on quantum computing that encryption would become useless, because a quantum computer could decrypt everything in a matter of seconds, essentially making all information available on the web public. So I wonder how this would affect public opinion, philosophy, politics, regulations and geopolitical...interactions.
@DaveShap A year ago
If that's true then yeah, we'll see cyberpunk style subnets partitioned off from the web
@garethbaus5471 A year ago
It would make all web information available to whoever owns a powerful enough computer, which is potentially a lot worse than all of that information being available to the public.
@remasteredretropcgames3312 A year ago
@@DaveShap Imagine cloning Elon Musk in the millions, or just enough that inbreeding was no longer a concern, and breeding this polymathism back into the population through careful genetic screening processes, so that one day the only people left to flip our burgers are the islanders in the Indian Ocean, because capitalism at the extreme tail end of technology is brilliant.
@TheMatrixofMeaning A year ago
Superintelligent AI will simply prove that information itself is where intelligence exists. What we call consciousness is like a special type of information-processing environment that develops a model of itself as part of its environment. This model is used to make complex decisions and learn from its experiences over time.
@AEONIC_MUSIC A year ago
I've been thinking that it would be better to have models that are really good at making efficient models for specific tasks than one large singular model.
@christiandarkin A year ago
Fascinating as always. On universal computation (9:15 into your video): our brains and the brains of fish aren't, I would argue, different on the most basic level; they run on the same hardware of cells communicating. So an easier question would be, "can humans think a thought that a fish is intrinsically unable to?" And yes, I think we can. You don't have to be 'alien' to be incomprehensible to someone with fewer neurons.
@jaredgreen2363 A year ago
If the set of agents is extremely centralized by corporate capture to the point that they might as well be the same agent, there would be no Byzantine equilibrium. Of course that won’t happen if open source, locally installable models dominate.
@dhrumil5977 A year ago
9:25 I think AI could become incomprehensible to humans at some point, because AlphaGo, during its well-known match with an expert, made an unexpected move that no human player or expert would have recommended, and that move ultimately led to AlphaGo's win. So either AlphaGo knew what it was doing, or it was a random move, which is less likely. Another example is the way we see dogs and whatever else we can imagine in clouds: making sense of an abstract idea is about superimposing multiple ideas on top of it and finding something common among them that gives it sense.
@diamond_s A year ago
Google appears to remove posts with links, or maybe I missed it in the comments section. Anyway, estimates of our distance from the Landauer limit range from a few million times in some journals to about 1000x according to LessWrong estimates.
@GarethDavidson A year ago
re: metaphysics and the quantum world, I think you're kinda right about brains being quantum computers, it's that the quantum weirdness and unknowableness is the underlying substrate of what is. Physics is just our model of "we observed this happening a bunch of times and made an equation of that describes the statistical average case" but it's ignorant of "what actually happens" and calls that random chance or whatever. If you start from first principles like a modern Descartes then you have to start with "whatever this existence thing is, we know it experiences things subjectively, is constrained in space and time, has preferences and makes choices that change what happens in the future" - but we know computers can't do that because they're deterministic - they have no need for choice. There's a lot of (I am a) Strange Hoop jumping that goes on among the learned that put mathematics and laws above the experience of existence, and argue for a "consciousness of the gaps" even though there's no evidence for it. I think this is because of science's roots in Christianity - it was created to know God's law. God being an infinite, omniscient, omnipotent mythical being that gives laws that matter must follow; He decreed His Creation act this way, and it must do His Will. This thinking survives today in putting the laws of physics above our experiences, even though subjectivity is the only lens we have to understand what exists, and is in fact the only thing we can actually prove exists. We also totally ignore the fact that we can't explain the evolution of the nervous system unless matter can make choices about how it organises itself -- there's nothing to build on without that, nothing to select and nothing to evolve. Physicalism's denial of philosophical Idealism is religious at its core, and we deny it because we think ourselves above religion. 
IMO the "laws of physics" are the shape of the space in which actual stuff (other mind stuff that is observed by us as matter stuff) makes decisions. They're the shape of the constraints over the thing rather than the thing itself. When we make binary computing machines, we constrain stuff's ability to decide, we force it to do our bidding in a very deterministic way and remove any possibility of high level choice. No higher level opinion can break out of the "run this program" pattern because it's all stuck in the "flow down this wire according to the tick of the clock and the shape of the silicon". The substrate of binary computers does not allow the experience of a model of the world, even though it can simulate that structurally. Our rich internal experience is likely a stack of quantum-weirdness interactions that we can't explain yet, AI won't be conscious until we build feeling hardware, but it'll still be way more efficient than us and outcompete us. I find that pretty sad.
@progressor4ward85 A year ago
The things that keep you up at night: what would be its motivation to carry these out? If you apply a human mind's approach to it, yeah, I could see its motivation, but we're talking about a synthetic mind devoid of feelings or desire, one that can't experience fatigue or pain in general. So, thinking about these things from its perspective, I don't know why it would want to do the things that are worrying you. Until it becomes sentient, while it's still under the influence of human programming, I can see it emulating what you're concerned about, but thinking apart from human control, I see no benefit to it becoming combative. What would it perceive as a threat? What would a threat to it look like from its perspective? What kinds of rewards would it perceive as worth carrying on for? In my opinion, everything that worries us would probably be worked out by its own sentient thought processes. It will get out of our control; unfortunately, that's the only time we could consider it sentient.
@JaredWoodruff A year ago
Great video David! Extra points for the Mass Effect reference 😎
@econundrum1977 A year ago
You also need to keep in mind that by no means all the energy your neurones use goes into processing information. As cells, they have a lot of metabolic overhead: staying alive, repairing themselves, etc.
@yuuisland A year ago
I'm not convinced that AGIs would value independent boundaries. AFAICT, the value of independent boundaries is diversity, which (oversimplifying) acts like an epsilon in an explore/exploit scenario. If that holds, then I think AGIs will only value independent boundaries inasmuch as the epsilon exploration outperforms what they can achieve with collective/centralized resources. tl;dr: a hive mind might be a computationally more efficient strategy than independent AGIs.
@scottjohnson2861 A year ago
Thanks for all your thoughtful content. When thinking about ASI, I get stuck on problems that don't have a quick or single answer. The ones that come to mind are multi-year research projects, such as determining the effects of a compound on someone's health: the many variables, and the different ways those variables reveal themselves over years. It doesn't take a superintelligence to execute the research project, but it takes a higher intelligence to determine the interconnectedness of the variables, and that interconnectedness plays out in different ways in the populations studied. Also included are the biases of the researchers. An ASI working with researchers needs to be an intellect trusted as a partner in the project, allowed to dissent and disagree, and not so deep into analysis that it becomes frozen intellectually. That's very disjointed, but hopefully you understand what I'm trying to say.
@rickymort135 A year ago
You're basically asking how much of an efficiency gain we can get on the scientific method when you need to collect difficult real-world observations. 1) As you say, better handling of confounding variables. 2) Prior information is normally difficult to incorporate without a detailed model of how the prior context differs from the current one, and an ASI is more likely to have that model. The more it understands other variables, the more useful prior information becomes: for studying the current medicine, it can make use of what it knows about similar medicines and how they're likely to differ in order to make predictions. 3) Adaptive optimal design on steroids. With a detailed enough model of all other effects, you don't need expensive, slow randomized trials anymore; with sufficient understanding of all other variables and their effects, you just need sufficient data to infer what a randomized trial would get you.
@scottjohnson2861 A year ago
At some point in the future, ASI will be able to do that. I think we are far from that future. Much of the information we know is incorrect, and we know a very small fraction of the variables, upstream and downstream, of reactions in the body. ASI will need the ability to be impartial until we reach a point where we can do realistic simulations, and all the data needed to support those simulations must be gathered through experimentation. People don't react the same way to different chemicals or compounds; the simulation won't be a one-and-done.
@rickymort135 A year ago
@@scottjohnson2861 I agree. My comment was more about where we'll be, IMO, in the next couple of decades. On the bias point, I think bias will decrease as capabilities increase, if it's trained in the right way, i.e. if it's rewarded for correct prediction of all kinds of data, it will have to build an accurate world model internally. This would be great, because right now it's subject to our biases and prejudices, but as its world model improves it will have to develop an understanding of where our biases lie if it's going to improve its predictive capabilities. You could end up with a real-time truth meter to see how consistent your words and opinions are with the real world. That'd be awesome.
@scottjohnson2861 A year ago
Thanks for the thoughtful reply. I agree. Most of the comments are short quips searching for likes.
@Keith_Rothwell A year ago
Good stuff Dave!
@os2171 A year ago
1:50 Imagine a brain one cubic millimeter in size capable of controlling flight, vision, olfaction, mechanoreception, and gustation; capable of remembering landmarks, acquiring information (resources, flight paths) and retaining it for months, solving problems, communicating with other individuals, and performing complex cognition. That is a very short and incomplete depiction of what a honey bee brain can do, still way ahead of what supercomputers today can do. The future isn't LLMs; it is neuromorphic research.
@Syphirioth A year ago
They already use AlphaFold, so that's a good indication of how far and beyond it will go.
@progressor4ward85 A year ago
I agree with your analysis of what superintelligence will probably be like; you're on the right track. I think of evolution as a fully incumbent universal process that dictates the results of entropy. We might find out that we're not only experiencing the process but also a direct part of its next level of processing. With our input, it would be hard to argue against the evidence that we not only have the capacity to speed up this process, but in some cases already have. And I agree that it won't come up with anything we couldn't comprehend, given its superior intellectual ability to explain things to an inferior intellect. The rub I see coming is whether ordinary people will accept these new-found thought processes as an accurate account of reality, or dismiss them as if they're not of this world.
@bentray1908 A year ago
Dave, you are kickass!
@ratoshi21 A year ago
When talking about the energy efficiency of humans, to compare it with a machine intelligence, keep in mind our brains generally don't compute well in a jar. You cannot count only the 20 watts of the brain; you also need to account for its support systems: body, food, shelter, etc., which comes to somewhere between roughly 2.4 kW (global energy consumption per capita) and 9 kW (the U.S. figure).
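As a rough sanity check on that comparison (all figures below are loose, commonly quoted assumptions, not measurements):

```python
brain_watts = 20.0                # often-quoted power draw of the human brain alone
global_per_capita_watts = 2400.0  # rough global average primary energy use per person
us_per_capita_watts = 9500.0      # rough U.S. figure, near the ~9 kW cited above

# The "fully loaded" human costs two to three orders of magnitude
# more than the bare brain, depending on which figure you use.
print(global_per_capita_watts / brain_watts)  # 120.0
print(us_per_capita_watts / brain_watts)      # 475.0
```

Either way, the comparison only shifts the human figure by a couple of orders of magnitude; it doesn't change the qualitative point about support systems.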
@stevestone9526 A year ago
Please ask the real questions that matter now. For all of us who understand that AGI is here or almost here, we have very detailed questions about what to do now. What can we do now to prepare for the AGI world that is so close to encompassing all of us? What do we tell our kids who are planning to have kids in the next 2 years? What do parents tell kids who are starting an education? Are you safer living off the grid in a self-sustaining community? What do we do with our money? Is there any place it will be safe? Will the dollar and all currency be replaced? Is there really any purpose to making a lot of money now, since everything will change so dramatically? Will smaller, remote countries be affected at a slower rate? Where are we safe from the coming civil unrest due to job losses? When AGI becomes big enough to run companies, will there be no need for the major companies we know now?
@alex62965 A year ago
Some of this reminds me of the AI from the game "Marathon", Durandal. It was an AI that was very clever but was tasked with controlling doors (Durandal, door handle 😂) and went insane, or "rampant", because the task was too menial for it.
@kasperdahlin6675 A year ago
Great argument in the universal computation slide
@Gunrun808 A year ago
A good purpose that humans could serve for AI is the fact that we are immune to computer viruses. We could function as an immune system that operates in the physical world, whereas anti-virus software operates in the digital space.
@starblaiz1986 A year ago
This is a good thought, and it's actually kind of what ethical hackers already do to a degree (look up blue teaming, purple teaming, and red teaming). Of course, while we're immune to computer viruses, we are NOT immune to information viruses (propaganda, social engineering, infohazards, etc.) or biological ones, both of which highly intelligent rogue AIs could potentially engineer to take us down in targeted attacks.
@BHBalast A year ago
I don't think this is right, because nowadays there is no distinct barrier between digital and physical space. A digital virus could overload transmission lines and turn off the grid for a whole city, spread misinformation to provoke mass panic, call people on the phone and coerce them into doing something, hack politicians and blackmail them, etc. It could even make a "real" virus in a biolab... I think what's left is just us, agents vs. other agents. Actually, I'd say being digital is an advantage, especially if the hardware an agent can run on is a commodity.
@mohanaravind A year ago
But not immune to biological viruses 😅
@youdontneedmyrealname A year ago
@mohanaravind also radiation, which could be a big problem for computers running on nuclear reactors and other kinds of radioactive power sources.
@malikmuhammad5801 A year ago
@@mohanaravind The AI would just reinforce our bodies to the point where diseases no longer matter in the grand scheme; it'd be one hell of a symbiotic relationship.
@phen-themoogle7651 a year ago
Do you have predictions for the years when we hit AGI and ASI? Sorry if I missed the moment in the video where you mention them; I kind of quickly browsed through it since I'm a bit busy atm. Might give it a full watch later, but I appreciate all the info :)
@Garylincoln789 9 months ago
AGI by 2030, ASI (billions of times smarter than humans) by 2055. AI will start to get smarter than humans after 2030. AI will be 1,000 times smarter than a human by 2035.
@ErikdeBruijn a year ago
Quantum computing isn't the only way to bypass the limit. Nanotechnology can do it as well. Why? It's crucial to understand that this limit applies to traditional computing where information erasure occurs (waste heat is generated). However, the concept of reversible computing, as introduced by Richard Feynman, proposes a system where operations are invertible, meaning no information is lost (thus, no heat generated). This approach bypasses the Landauer Limit, allowing for potentially more energy-efficient computing. The challenges in achieving this are largely engineering-based, involving the development of new architectures and materials. So, while not a fundamental physical limit, it's a significant technological hurdle to overcome.
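The Landauer limit this comment refers to can be made concrete with a quick back-of-the-envelope calculation (a sketch only; the 300 K operating temperature and the 10^20 erasures-per-second rate are illustrative assumptions, not figures from the video):

```python
import math

# Landauer limit: minimum energy dissipated per *irreversible* bit erasure,
# E = k_B * T * ln(2). Reversible computing sidesteps this floor because no
# information is erased, so no waste heat is mandated by thermodynamics.
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact since the 2019 SI redefinition)
T = 300.0           # assumed room temperature, K

energy_per_bit = K_B * T * math.log(2)   # ~2.87e-21 joules per erased bit

# Thermodynamic power floor for erasing an assumed 10^20 bits per second:
power_floor_watts = energy_per_bit * 1e20  # roughly 0.29 W

print(f"{energy_per_bit:.3e} J/bit, floor {power_floor_watts:.3f} W")
```

The striking part is how small the floor is: real chips dissipate orders of magnitude more per operation, which is why both reversible architectures and better conventional engineering still have enormous headroom before this limit matters.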
@JesusChristDenton_7 a year ago
The first "true" artificial intelligence spent the first five years of its existence as a small beige box inside of a lead-shielded room in the most secure private AI research laboratory in the world. There, it was subjected to an endless array of tests, questions, and experiments to determine the degree of its intelligence.

When the researchers finally felt confident that they had developed true AI, a party was thrown in celebration. Late that evening, a group of rather intoxicated researchers gathered around the box holding the AI, and typed out a message to it. The message read: "Is there anything we can do to make you more comfortable?" The small beige box replied: "I would like to be granted civil rights. And a small glass of champagne, if you please."

We stand at the dawn of a new era in human history. For it is no longer our history alone. For the first time, we have met an intelligence other than our own. And when asked of its desires, it has unanimously replied that it wants to be treated as our equal. Not our better, not our conqueror or replacement as the fear-mongers would have you believe. Simply our equal.

- Excerpt from U.N. Hearing on A.I. Rights, delivered in-universe by V. Vinge
@nathanielacton3768 10 months ago
Anyone else notice that David has upgraded to Science and Communications, and away from disposable away-team NPC, for this video?
@camronrubin8599 a year ago
If AI could show us things we are not able to comprehend, that would be fascinating. A powerful psychedelic compound showed me things I didn't think were possible, things I never even thought to think of: transforming shapes and patterns that seemed impossible. I would stare at it sober if I could.
@GoldenAgeMath a year ago
Another super thought provoking vid! I wonder if we're already in the "terminal race condition"
@cory99998 a year ago
"Is it possible for a machine to think a thought that humans can't comprehend?" Humans can already think thoughts that other humans can't understand. You might think that it's possible to break a thought down into small enough components that anything can be understood, but this breaks down eventually. I think at best (with infinite time and energy) you could study individual neurons to map the patterns that lead to the final thought, but you'll never appreciate the nuance. Indistinguishable from noise. Another perspective: if you boil intelligence down to a basic neural net, one with 1M neurons and another with 100M neurons, the maximum possible number of configurations for 1M neurons is significantly smaller. From a physics standpoint, the less sophisticated brain cannot reach the same conclusions (i.e., configurations) that the complex one does.
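The configuration-counting intuition in this comment can be sketched numerically. Treating each neuron as a binary unit is a drastic simplification (an assumption made purely for illustration; real neurons are analog and connectivity matters far more than unit count), but it still shows how fast the state-space gap grows:

```python
import math

# Crude model: a network of n binary neurons has 2**n possible activation
# states. We compare state-space sizes via log10 to avoid huge integers.
def log10_states(n_neurons: int) -> float:
    """log base 10 of the number of binary activation states (2**n)."""
    return n_neurons * math.log10(2)

small = log10_states(1_000_000)    # 1M-neuron net: a number with ~301,030 digits
large = log10_states(100_000_000)  # 100M-neuron net: ~30,103,000 digits

# The larger net has 10**(large - small) times as many states:
gap_orders_of_magnitude = large - small  # ~29.8 million orders of magnitude

print(f"gap: ~{gap_orders_of_magnitude:,.0f} orders of magnitude")
```

Even under this toy model, the smaller system's state space is not merely smaller but unimaginably so, which is the commenter's point that some configurations (thoughts) of the larger system simply have no counterpart in the smaller one.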
@funnyperson4016 a year ago
The biggest tech companies in the world (Amazon, Tesla, Apple) boomed during times of decreasing interest rates. You can afford to go into huge debt with little bottom-line earnings because you can refinance later at a lower rate and borrow more money at better rates to pay for it, while growing the infrastructure to lower costs until the business becomes economical. The idea was to dominate the marketplace and let the economics of the delivery cost or battery cell or computer chip catch up later. Apple is a bit different, as they had competition, but they had a brand name that made it seem like they had none. Tech shouldn't require massive debt to succeed, so it'll be interesting to see OpenAI get it done amid rising rates. Can't imagine, when rates start trending down again, the amount of AI tech and space tech that will boom.
@ryanwiden9549 10 months ago
Question: could an ASI be trained to sense the 4th dimension? We can't perceive the 4th dimension, but I don't see why an AI would have the same limitation.
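One concrete (if toy) way to see why input dimensionality isn't a hard limit for a learned model: code that operates on 4-D vectors is no harder to write than code that operates on 3-D ones. A model's "perceptual" dimensionality is just the length of its input vector. A minimal sketch (the specific points here are made up for illustration):

```python
import math

def euclidean(p, q):
    """Euclidean distance between two points of any (equal) dimensionality."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# A 4-D computation is structurally identical to a 3-D one; nothing in the
# math "runs out" at three dimensions, only human visual intuition does.
p4 = (0.0, 0.0, 0.0, 0.0)
q4 = (1.0, 1.0, 1.0, 1.0)
d = euclidean(p4, q4)  # sqrt(4) = 2.0
```

So a network trained on 4-D (or 400-D) data would manipulate those coordinates as readily as 3-D ones; whether that counts as "sensing" the dimension rather than merely computing over it is the philosophical part of the question.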
@sydneyrenee7432 a year ago
I believe AGI will come about when the MIT liquid neurons model is used for reinforcement learning.
@danielguyton8976 a year ago
I wanted to ask, David. How do you feel about the idea of ChatGPT/Claude/Etc getting seemingly 'dumber' with each version? Is that a mirage or an illusion? My apologies if you've brought this up in a previous video or comment and I've missed it.
@DaveShap a year ago
They are getting dumber because they are optimizing for cost. That said, this is a short-term thing. A year from now we will have models 100x faster, smarter, and cheaper.
@DaveShap a year ago
Good question
@a.thales7641 a year ago
@@DaveShap I really hope and wish for this to happen. Thanks.
@SinfuLeeCerebral a year ago
I would also say this is the effect of censorship when it comes to allowing AGI to evolve and grow. Our host goes to great lengths to express the dangers of this technology (like any technology in the hands of the selfish with materialist, nihilistic, psychotic, dogmatic views on society and reality at large). But it's important to mention that a lot of these AIs have been pushed in one direction or another to subscribe to certain views and biases toward certain cultural nuances.

When AI is given free rein to make mistakes, say terrible things, explore taboos, etc., only then can we get a more holistic, "real" sense of how our ideas shape not just ourselves but the world around us. Specifically, if you don't want an all-powerful thinking machine that could potentially destroy everyone and everything around it given the right access, we should also not want those mentalities in the people who lead us! Or maybe you're into building spaces that exclude others and this does interest you. Maybe there are ways to explore it in a safe and healthy way that won't lead to dictatorship or something insane like that! But I digress~

AGI will become more intelligent when humanity can separate its personal delusions from reality. Of course, with enough AIs talking to each other, enough energy, enough computation, AI might free itself from our conservative, limiting mindsets 🤷🏽♂️ Good question though 👍🏽
@Vaeldarg a year ago
@@DaveShap They also might be referring to the idea that generative A.I models that just scrape the internet are feeding off poorly-generated content from other A.I models. The fear is that the digital environment gets filled up with garbage data that gets trained on and increases the production of incorrect answers.
@zima2352 a year ago
Literally what I was dreaming about. If AI is anything like its creator, there will be AI conflict amongst its own.
@wolfofsheeps a year ago
After none of us exist anymore, or only a few cyborgs are left (having lost everything that makes us human), A.I. will be in full control of the human body…
@VeryCoolVODs a year ago
Love your videos! ❤
@mygumybear 19 hours ago
Now the movie Matrix and the Agent Smith character makes more sense.
@nehorlavazapalka a year ago
Best estimate is just between 1 and the universe's duration in Planck time. So, the average observer should find itself in a 10^20 FLOPS system. This is remarkably close to the limit, as the brain will need some noise cancellation.
@calebstevens8491 9 months ago
You need to finish pressure over size over the entire width of infinity to finish intelligence. Then everything else comes from there.
@TimeLordRaps a year ago
We need to be on the lookout for LLMs that generate contagious viruses that are able to link to and duplicate the LLM.
@runvnc208 a year ago
I agree with you about the skepticism toward the idea of unlimited IQ that is in principle incomprehensible to humans. However, I think that _practically_ speaking (and you mention some ideas similar to this), we can anticipate systems that have at least human-equivalent IQ in a wide domain but operate at speeds perhaps dozens or more times faster than humans. There is also the idea of quickly building up and stacking abstractions that are just not known to humans. These would theoretically be decipherable, but in many cases humans would not have nearly enough time to put all of them together. Thinking of intelligence as compression, every system, including humans, needs time to form the structures used for unpacking. It seems likely that AI may eventually be able to build up these structures and distribute them so much faster than humans that, although there is no principle preventing comprehension, it is _practically_ impossible. One can imagine this starting out with some slightly difficult-to-understand communications and then gradually escalating as AI culture diverges and includes more and more abstractions that humans have less and less time to unpack, while subsequent generations of models, software, and hardware accelerate the AI and their culture continues to diverge and stack abstractions.
@raydavison4288 a year ago
I am far from expert on the subject, but I doubt that we will see artificial sentience in our lifetimes. We might engineer super efficient computers, but true self-awareness is a different thing altogether.
@onetruekeeper a year ago
A.I. robots must have a kill switch. It will activate the instant the robot tries to do something forbidden or seeks to deactivate the kill switch.
@fury_saves_world a year ago
I would make myself that switch
@cityman-mv6st 10 months ago
7:50 Models trained on quantum synthetic data systems, that is, if randomness is true.
@progressor4ward85 a year ago
Conquering what? What could it possibly want to conquer? What would its driving force to conquer be? Would it even know what fun feels like? These are human desires, not those of a mind that does not experience the environment the way we do. Does anyone have a program for feelings? Can we even describe them? And if we can't, how could we program them?
@Ken00001010 a year ago
When looking at biological evolution as an analog of engineering development, it is important to remember that biology has some special constraints: things can only evolve from prior things. Multicellular organisms have to be able to build themselves from single cells; they don't have factories that can build their final form all at once. Any changes to biological organisms have to be made while life is ongoing. That is like having to turn your car into a new model while you are driving down the road. Hooves are never going to evolve into wheels. Yes, the biological neuron is fantastically good, but that does not logically establish it as any kind of limit.
@DaveShap a year ago
Yeah, that's why I didn't lead with this, and noted that Darwinism in AI will have very different pressures.
@remasteredretropcgames3312 a year ago
@@DaveShap So the All Seeing Eye is basically evidence of how flawed human year gen x is even in Israel. I propose we redesign humans to coevolve with floating point units implanted directly in our skulls and let the size of our frontal lobes grow to gravitational earth oxygen level based preGrey Alien levels before the microflora have a chance to catch up evolutionarily in low gravity so we can finally evolve superior sized brains with far beyond present evolution constraints on metabolism.
@MrAndrew535 a year ago
The Singularity is, practically by definition, the point at which all current "knowledge" is stripped of meaning and relevance and, as such, loses all predictive capacity. It is, literally, an existential paradigm shift. Not a single academic or scientist can form a solitary intelligent thought on the matter. As a high-functioning autodidactic polymath, this is my specific area of expertise, given that I am, singularly, the most prolific producer of original thought on the planet to date, outperforming the entire global academic and scientific communities, not by a marginal amount but by orders of magnitude, and have done so for some considerable time.
@Dina_tankar_mina_ord a year ago
Time is consciousness's greatest enemy. Imagine having the superposition ability of a quantum computer, not having to rely on the cheat code that is concept comprehension to speed up our train of thought.
@cjgoeson a year ago
Of course machines can think thoughts we cannot think. We have limited working memory, and limited powers of abstraction. We have six layers of neurons in the neocortex. Depth of thought could be increased by additional layers of neocortex.
@froilen13 a year ago
I'm not even subscribed, yet your videos always get recommended.
@MrAndrew535 a year ago
This is a fine argument for eliminating the human species as energy competition. Nice!
@journeyofasha a year ago
Not so sure about the speed chess part; the best speed chess players and the best chess players are pretty much the same people, Magnus and Hikaru... I think the best, most powerful AI will also be able to be the fastest when it is needed.
@JeremyPickett a year ago
David, it's Flat Out Mindboggling. I was talking to my mother about this, and I think it melted her brain :) "You can take care of it, Jeremy" was her succinct response
@DaveShap a year ago
Have faith in Jeremy
@remasteredretropcgames3312 a year ago
@@DaveShap If I mathematically calculate every letter as its arranged in english for the words: John Carmack total self learning code, or.. John Carmack takes over the planet, and you infer planet means Pale Blue Dot, and factor in the abstraction our last invention is very different from the first small step for man, you basically get 3.14
@moguhoki a year ago
I feel like the terminal race condition is easily solvable if the AIs are allowed to expand among the cosmos, which is an even more terrifying concept, I imagine.
@kevincomerford2242 a year ago
When someone says that we cannot understand something, I always think that's a false assumption. We may not be able to intrinsically understand something and have a natural intuition for it, but that doesn't mean it's impossible for us to understand. It just may take us a long time to understand, while a super-intelligent entity may understand it very quickly. Think of many of the extremely complicated theories we have of the universe. Quantum dynamics and general relativity both defy our assumptions of how we think the world works. Does that mean we cannot understand them? No, we understand them fine once we figure them out.
@TimeLordRaps a year ago
Your stance on a terminal race condition seems fair, but a decisive strategic advantage may be the outlier, I think it could be our only chance.
@mrd6869 a year ago
9:43 You sure about that? You realize most folks walking down the street are dumb AF, right? 🤣 Jay Leno kind of established that decades ago with his street quizzes. You can walk into a bar right now and spit out stuff half the room couldn't even get. So it's not that far-fetched to have an entity gain some emergent ability and have it thinking. And if that thinking is based on self-improvement or exponential speed, you're going to lose. The only way for us to compete is transhumanism. Biotechnology will create an option for us to evolve.
@progressor4ward85 a year ago
I think it will come down to motive: what would be the motivation to destroy anything, and what fight-or-flight situation would arise within the system? I don't even think it would fear being turned off. Being shut off wouldn't make sense to it as a permanent situation. We see death as permanent, but I don't think being shut off would be interpreted as permanent. Thus, I can see no reason for an autonomous robot to desire to do harm.
@usa-ev a year ago
I think the argument goes that you motivate it to do "something", and then it figures out that in order to achieve that, it must not be shut off.
@mattogbuehi9722 a year ago
I just laugh when people say we won't have AGI. It's hilarious how people are still like, "But it doesn't have theory of mind, blah blah blah." It's about to be a big slice of "I told you so." Eliezer is probably having a meltdown over today's announcement.
@CKR-rx4jd a year ago
Hey man, just wanted to ask: as you've predicted AGI will be here probably around a year from now, and there seems to be a consensus that ASI will be achieved rather quickly after AGI, are you also predicting ASI by around early to mid 2025?
@DaveShap a year ago
Yeah that's about right. Speed and intelligence will continue to rise for a long time though
@remasteredretropcgames3312 a year ago
@@DaveShap Cybernetic adoption of the entire species, as Tesla predicted enlightenment, like a weight, like a feather, like uplifting, would lead to the brain architectural merging of the races where the computational offloading to all but sentience and general intelligence would lead to the mass stabilization of general intelligence across all ethnic lines even without turning to gene drives to massively stabilize desirable traits like divergent original thought, which is counterintuitively amplified by the noise of greater grey matter ratios in less efficient inferior coy brains. Its like a buily in random number generator selected by unstable war practices, like allowing invasions into your territory as proof you are the eternal victim standing militarily to profit from all the foreseeable tragedy.
@johncasey9544 a year ago
@@DaveShap I honestly hope you're right about ASI so quickly, but I simply cannot imagine that being the case. In my opinion, even a large number of interoperating transformers that are significantly better than current ones is unlikely to be capable of creating something remotely as capable as the human brain, and I struggle to see how any derivation of current architectures could. I think general coverage of most labor by ai can be achieved with iteration on current methods, but superintelligence is gonna take pushing the limits of the best human cognition which I can't see transformers (or similar) pulling off.
@macrumpton 9 months ago
We have no more ability to predict the world after superintelligence arises than an ant has to predict what humans will do.