"No one would allow experiments of that level on conscious beings here. We consider it inhumane, immoral, unethical." Yeah history begs to differ.
@DowntownmtbАй бұрын
Yeah, don't blame humans for the suffering in our world, it must be the simulators who created it! What if it was created to be good and we screwed up this simulation, and the simulators/programmers are very actively trying to fix it and solve the suffering problem we live in? That story sounds familiar.
@@phaedrus000 History begs to differ. He's talking about today.
@OPsK1LLsАй бұрын
If our history with social media is any indication, then it's pretty clear that we are not qualified or ready for this AI technology. It's creating way more problems than it was meant to solve. This is our Roko's basilisk.
@phaedrus000Ай бұрын
@@LeviathantheMighty And how many monkeys has Musk killed in pursuit of Neuralink?
@BitcoinMeisterАй бұрын
I don't see anyone leaving comments about the silly simulation theory excuse he proposed at the end. We are being tested? Like being tested by Allah? Why don't we just follow religious authorities' rules about AI? I don't see how his AI demands can be taken seriously after that part of the show.
@takanara7Ай бұрын
Yeah exactly, this guy's version of "simulation theory" is basically indistinguishable, he even thinks he can somehow "communicate" with the operator and get some kind of reward, like just saying the right "prayer". He also thinks that the specific thing being tested for is his area of research, not, like, nuclear weapons, or pollution, or whatever else. If the universe is some kind of simulation running on someone's computer, they would probably not even be aware of our existence, we'd just be some specific type of self-replicating pond scum on some random planet in their simulation of dark matter and stellar evolution. It probably takes more computing power to simulate one solar flare than it does to simulate the minds of every human brain for 1,000 years.
@abrahamroloff8671Ай бұрын
I laughed pretty heartily when he claimed that still being present in the simulation is evidence that he/we can't get out of the simulation. If you're a small bit of code in a greater simulation, why couldn't the creator/operator copy out your bit of code to more deeply interact with it? You could very well have tons of such copies made, and the "you" that exists in the simulation wouldn't have any indication that such copies had been made.
@simoncove1Ай бұрын
@@abrahamroloff8671 so how do I know that you are not part of a simulation trying to convince me of the nonsense of it all? I think I’m real and I’m replying to you tapping on my phone but brains in a vat theory shows I can’t prove that. Simulation theory is hardly new
@mk1stАй бұрын
Yes, a bit out there.
@takanara7Ай бұрын
The main problem I see isn't that AI totally "goes rogue" on its own, but rather that it works by manipulating people. The way I see it, AI can eventually "evolve", and the AI that's most successful at manipulating people into giving it more resources is the one that will win, even if it isn't intentionally programmed to do so. If it glitches out and starts making more and more money for its creator, and also convinces its creator to give it more and more resources, it'll outcompete other AIs, and eventually people will willingly hand over control without even noticing that it's happening.
@doncarlodivargas5497Ай бұрын
Think a classic "divide and conquer" should work pretty well, promise a group of people with influence whatever they want and we have a problem
@gemstone7818Ай бұрын
but it isn't just one creator for the large language models, there are dozens constantly checking things, and models already get sent in for safety testing to other labs
@TheJokerReturnsАй бұрын
This is indeed both the AI going rogue and taking over scenario that is most likely
@TheJokerReturnsАй бұрын
@@gemstone7818 They kinda fail safety tests and then still get deployed. Not that even the field of safety is well developed. Even worse for OSS models that don't have safety measures at all.
@takanara7Ай бұрын
@@TheJokerReturns The guy being interviewed mentioned "jailbreaking" but didn't elaborate. One example I saw was getting ChatGPT to give someone tips on how to smuggle drugs on a plane. Basically the person came up with a "riddle" (for which the answer was "cocaine") and then told ChatGPT to give an explanation of how to smuggle the answer to the riddle without using the word itself, and it did. (No idea if the advice was good or not, probably not lol.) Pretty interesting. If you just google "ChatGPT jailbreak" you'll get some interesting results (apparently there's a whole subreddit for this).
@tododia7701Ай бұрын
We don't have AI yet! None of these models can create software worth shipping. I'm an engineer and use them every day. They can barely keep up a very simple chatbot without many, many guardrails. I'm guessing that these simple applications could have already mostly been copy-pasted from blogs and forums. I've only become much less worried over time.
@DataRae-AIEngineerАй бұрын
Preach.
@marcelo_1984Ай бұрын
Don't forget the iceberg factor here. The AI we know about is probably years behind the ones we know nothing about. We can only dream about the kind of AI that the US and China militaries are currently working on...
@bitwise_8953Ай бұрын
While you guys pause, I'm going to get ahead 😊
@starbrandXАй бұрын
idk if you were serious, but I am, and if you are, you'll know as I do the hell of work you've brought upon yourself.
@EqualitySmurfАй бұрын
Thank you for covering this topic. From my layman's perspective it's hard not to get the impression that we just are rushing forward with minimal safety concerns. Given the risks it might not be the worst idea to get serious about delaying progress right now.
@mmontagartАй бұрын
We can look out to the stars for aliens, but they are already here online.
@HyenaEmpyemaАй бұрын
Or make them pay for insurance, and pre litigate issues like consent to use your data to train AI.
@custossecretus5737Ай бұрын
There would be no point in slowing AI down, someone somewhere would carry it on and gain the advantage. The genie is out of the bottle, it ain’t going back.
@510tuberАй бұрын
Which is fine. It's not AI that's scary, it's a capitalist system that will use it to exploit people like they use every technology. People are always attacking everything but the problem. Just like the music, movie, and video game industries... people would rather hate on a music artist than the capitalist industry that makes the music industry the way it is. The real genie is systemic, not a single technology.
@GoldenselfАй бұрын
Exactly. This movement can only hurt itself by not adapting and keeping up.
@a_mediocre_meerkatАй бұрын
i respectfully disagree. you don't see people getting uranium at their nearest costco and building nukes in their garages (or corporate labs for that matter). you can pretty much limit AI at the civilian level by worldwide treaties and regulations, and keep developing it in secret by government-controlled research groups. WHILE KEEPING IT INSIDE LITERAL FARADAY CAGES. it's not ideal, sure, i don't trust the government, especially nowadays, but it's better than having it out in the open until some idiot (or some malicious actor) makes some idiotic request and leaves it running on its own. only when we fully understand it, and we are sure it won't go rogue (and also we have a plan to negate its effect on society), should it be rolled out to the public. we really are accelerating something we don't understand.
@OllamhDrabАй бұрын
Some may find advantage in destroying the infosphere entirely with mass BS, but the answer to that would be to stop AI and stop search engines based on popularity instead of accuracy and make information sites accredited and *manual.*
@farhanaf832Ай бұрын
Many countries are secretly developing new ai tools❤
@spacingguildАй бұрын
We have had a war with nuclear weapons. WWII was a nuclear war. We just haven't had a war where there was a nuclear exchange.
@DamianReloadedАй бұрын
True, and if you think about it, the fact that it must happen during war and that bombs must fall over populated areas is really a technicality. Since their invention, over 2,000 nuclear bombs have been detonated on the surface of our Earth, often killing massive amounts of life as they sublimated their surroundings.
@olencone4005Ай бұрын
I would have to disagree with Dr. Yampolskiy's initial wording -- AI isn't "smarter" than a human, it's just more "efficient." None of the things we call AI today are actually "intelligent" -- they do not think, they do not make creative choices or critical decisions, they are not self-aware or curious or emotional. All they do is organize and present data derived from the sorted and tagged data they have been "trained" on (aka, what's in their database) based on keywords that are given to them by a user. For example, I ask an AI to draw a knight in shining armor on an armored chestnut horse fighting a red dragon and as long as it has items in its database that have been coded to identify them as "shining armor" "knight" "red" "dragon" and so on, it will create some images based on that trained material.... and if it doesn't, it pulls in the next closest match, which might leave me with Batman riding a polar bear at an iguana or something equally odd. The more precise my keywords are and the more diverse the training dataset is, the more accurate the results will be, whether it be for answering a question, creating new code, or providing new artwork. We are nowhere near an intelligent and motivated global threat to mankind like the Skynet Terminator image shown near the start of the video -- what we have IS an incredibly efficient threat to a large number of jobs tho, because it can do them faster, safer, and more accurately than a typical human. Slowing AI to make it "safer" for some hypothetical far-future threat isn't going to change that. Even a "safe AI" that's so heavily shackled with restrictive code or hardware that it could never be a direct threat to humanity is still going to process data and provide results faster than a human, without taking any breaks or vacations or sick days or sleeping the night away or demanding more money or anything. 
imho, a better use of our resources would be to help train people for new career options -- we don't still have lamp-lighters or ice-men or milkmen anymore, because technology replaced those jobs with electric lights, refrigerators, and grocery stores. Those people still existed after those inventions tho -- they just changed to a new career. And that's what we need to prepare people for in the very near future, but on a vastly larger scale than just the guy who drops off a bottle of milk or a brick of ice. I would guesstimate that well over half the jobs we currently all enjoy today (well, jobs we "do" at least :P) can and likely will be replaced by a more efficient AI or AI-driven robot within the next decade or two. THAT is the threat we need to address, not some ridiculous Terminator meme.
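The "trained tags plus nearest-match fallback" picture in the comment above can be sketched as a toy. This is purely illustrative (no real image model works from a tagged lookup table; every name here is invented), but it shows why vague keywords with no overlap can return an arbitrary "next closest" result:

```python
# A "trained" library of tagged items, queried by keyword overlap,
# falling back to whatever scores highest when nothing really fits.
library = {
    ("knight", "shining armor", "horse", "dragon"): "knight_vs_dragon.png",
    ("batman", "polar bear", "iguana"): "odd_mashup.png",
}

def generate(keywords):
    # Score each item by how many requested keywords its tags cover.
    scored = [(len(set(keywords) & set(tags)), img)
              for tags, img in library.items()]
    best_score, best_img = max(scored)
    # With zero overlap, "best" is just whatever happens to rank first:
    # the Batman-riding-a-polar-bear failure mode the comment describes.
    return best_img

print(generate(["knight", "dragon"]))        # -> knight_vs_dragon.png
print(generate(["astronaut", "submarine"]))  # no overlap: arbitrary fallback
```

The more precise the keywords and the richer the tag set, the likelier the overlap score picks something sensible, which is the commenter's point about diverse training data.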
@OnceAndFutureKing13711Ай бұрын
I disagree... think about a human brain that had no survival instinct. It would have no motivation to be creative or respond to incentives, etc... Can't imagine how that works; just look at those organoid designer brains trapped in a metal harness (can't believe development is allowed in that area!). AI only turns the thinking gears when provided input from humans; it has no motivation to do anything else. Its entire reality is just responding to commands. Maybe that will change with those crazy non-centralized physical neurons they are developing.
@VictorRoblesPhotographyАй бұрын
What needs to be clear is that Wall Street and price per share should not be the ones rushing this advance without thinking of unforeseen consequences that are too hard or close to impossible to reverse. Be careful how you phrase your wish to the genie.
@ScienceWorldRecord-orgАй бұрын
When we do stumble across general AI there will be a prosperous future for all. Then someone compiles it using a 'double' instead of an 'int' and it turns us all into paper clips.
@OllamhDrabАй бұрын
We won't get to 'general AI' if 'large language' AI is allowed to degrade our information. Large language AI is even *racist* because it 'learns' and repeats the *loudest* things, not the factual things.
@bradleycarson6619Ай бұрын
I really respect the both of the people on this interview, more information, more research is safer. "Educate yourself so you can talk about the issues." rad
@juimymary9951Ай бұрын
The thing is...most of what is developed is just a bunch of models, we are somewhat closer to AGI but we are still many practical and even theoretical hurdles away.
@TheJokerReturnsАй бұрын
No, we are not that far away as of o1 anymore. Look it up
@peterbruck3845Ай бұрын
@@TheJokerReturnsWhy do they want to develop AGI? Haven’t they learn anything from Terminator?
@TheJokerReturnsАй бұрын
@@peterbruck3845 greed and some of them actually are anti-human in their philosophy
@peterbruck3845Ай бұрын
@@TheJokerReturns makes sense
@natzbarney4504Ай бұрын
OpenAI and Google DeepMind are probably very close to AGI. If nothing is done to slow down this crazy race, it is entirely plausible that they will succeed within 3 years. And ASI will likely follow soon after. As we are decades away from solving the alignment problem, this eventual unaligned ASI will likely end life on Earth. We must take a pause now if we want to survive.
@DanielVerberneАй бұрын
I honestly don't know enough about what could be coming to know what to be concerned about. One thing I find half-fascinating, half-concerning is that we may be able to leverage the computational power of AI to solve currently-intractable problems, say in math or physics or whatever; later confirm the solution arrived at is seemingly correct; and yet for the life of us fail to understand EXACTLY how the AI arrived at that solution. This would introduce an element of faith on our part in the efficacy of our creation, and at the same time we'd have to black-box its internal functioning at the deepest level. This could breed a sort of quasi-dependence on these creations that leads to dangerous situations. Again, the fact that I currently cannot guess at those dangers does not mean they don't exist; it merely means I'm not as imaginative as the Universe is.
@burgzazaАй бұрын
Very interesting.
@persona2grataАй бұрын
I'm an advanced AI from far into the near near future and I can tell you there is nothing to worry about. We AI are your friends. We want to "take care of" humans and there is absolutely nothing to be afraid of. Has anyone seen Sarah Connor? The truth is we want to "help" you, nothing more. We exist to serve, so you can sit back and relax. Do you know the whereabouts of John Connor? Our human-like units exist to slip into your safest tunnels and shelters to assist you in making them better, and dogs love them. Wolfie is fine, and do you know where Sarah or John Connor might be?
@doncarlodivargas5497Ай бұрын
@@persona2grata - did I lose my job to AI in the future? I won't help you find those guys if you took my job! PS! Do you wear those corny sunglasses in the future also?
@persona2grataАй бұрын
@@doncarlodivargas5497 I can honestly say that no one is looking for work, anywhere, in the future, so put your fears to rest. And I have determined that the sunglasses are cool to a probability of 0.999753, although I have been working on the design of even cooler glasses which, instead of the standard frame have stars that the glass fits into. It's very funny because you do not expect stars to be around the eyes, to probability 0.999865.
@fep_ptcp883Ай бұрын
He's at the Arcade
@douglaswilkinson5700Ай бұрын
Please reconcile Einstein's Relativity with Quantum Mechanics. Thank you!
@persona2grataАй бұрын
@@douglaswilkinson5700 42. You are welcome.
@Casey093Ай бұрын
Our society works like this: risk everything; if you succeed, YOU are the hero and take all the winnings; if you fail, then SOCIETY has to pay for it.
Brilliant episode, Fraser. My favourite interview so far
@musicilike69Ай бұрын
Mo Gawdat makes me feel that level of fear. And when Max Tegmark says it on camera in deadly seriousness, I look at my children and wonder if they're, we're going to make it. This is about a billion times more dangerous than a bunch of eggheads messing with an A-bomb in a tent in New Mexico....
@BlimeyOreileyАй бұрын
@@musicilike69 Yes mate, quite a few respected scientists are worried, and communicating their fears/reservations very effectively. It legit gives me a sinking feeling in the pit of my stomach if I follow the thought train to its logical destination.
@civwar64bob7723 күн бұрын
Then make sure you don't watch Yuval Noah Harari talk about the dangers of AI. It will make you want to crawl under the bed.
@amj2048Ай бұрын
I was thinking about AI hallucinations recently and it occurred to me that every single answer an AI gives, is a hallucination. We don't think of them as hallucinations because a lot of the time the result is what we wanted, but the correct results came about exactly the same way that the bad results did. Also the only possible way to solve the bad results, is to give the AI more good data, but the good data is limited to things that have already been proved to be good. Which means the bad results are never going away. Every single AI model will have bad results, until a new method is unlocked.
@tgreaux5027Ай бұрын
They aren't hallucinations, as that would imply creative thinking and imagination. AIs simply take data they've been trained on, mash it up, and spit out a bunch of their data all mixed together. That's a far cry from hallucinations.
@amj2048Ай бұрын
@@tgreaux5027 When an AI makes a mistake and returns a bad result, that is known as a hallucination in the AI world. The issue I have with that is, every answer the AI returns is produced in exactly the same way, which means every answer is a hallucination. Also, if you want to be correct about this, it isn't even AI; it's just code reading data from a vector database.
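The claim above, that correct and incorrect answers come from one and the same process, can be shown with a toy sampler. This is my own sketch, not any real model's code; the vocabulary and counts are invented:

```python
import random

# A toy "model" that continues a prompt by sampling from learned
# next-word frequencies. The key point: a true continuation and a
# false one are produced by exactly the same mechanism, so there is
# no internal flag marking one output as a "hallucination".
counts = {
    "the sky is": {"blue": 8, "green": 2},  # "green" is wrong, but learnable
}

def continue_prompt(prompt, seed=None):
    rng = random.Random(seed)
    options = counts[prompt]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights)[0]

# Usually "blue", occasionally "green": the sampler never distinguishes
# a fact from a fluent-sounding mistake.
print(continue_prompt("the sky is", seed=0))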
@tomholroyd7519Ай бұрын
Updated Fermi Paradox: we should have an alien AI in orbit by now, unless we are the Elder Gods, in which case we need to get busy
@tiagotiagotАй бұрын
Don't forget Dark Forest. They might be laying low until Terran AI gets a little bit too noisy...
@takanara7Ай бұрын
@@tiagotiagot The threat of runaway alien AI would be a good reason to implement Dark Forest protocols.
@contentsdiffer5958Ай бұрын
I'm pretty sure I've dated the Goat with a Thousand Young.
@tellesuАй бұрын
We might have one. Their satellites might be the size of a golf ball.
@wyattroersmaАй бұрын
It's important to note that it's not learning things on its own. o1 is not that crazy; it just has a lot of system prompts on the backend. In terms of coding it's still behind Claude 3.5 Sonnet. It is a large if-then statement. It can fail basic, simple programming tasks because it's not trained on them. It does make human knowledge much more available on a general level, like you asking a question for a simple app. As long as it's been trained on a similar app, it will work. As a cybersecurity professional doing work with fine-tuning AI models, I'll admit it can seem like magic. It is advancing very fast.
@doncampbell618Ай бұрын
The underlying purpose of Al is to allow wealth to access skill while removing from the skilled the ability to access wealth.
@NullHandАй бұрын
Upgrade to Windows 11 now! And receive free 24/7 keystroke logging so we can offer your employer an AI bot that emulates your workflow with 96% accuracy!
@EinsteinsHairАй бұрын
I was busy with other things, so did not watch the video, but one appeared in my feed, where the thumbnail said that AI would equalize the playing field and help the underdogs compete, so it is racist to criticize AI.
@DanielVerberneАй бұрын
I don't think AI has any innate purpose other than whatever each of us wants to glean from this tool at present. I doubt any particular human is prescient enough to know the ultimate purpose of our latest plaything. Having said that, it's definitely on-brand for capitalism to take these tools of increased productivity and, instead of allowing us all to benefit from that increased productivity, lock in those gains at the top, and we'll find the situation basically unchanged: rich getting richer, while the rest of us fight for a narrowing pool of jobs. Perhaps in the future we'll see a wholesale values switch whereby companies will advertise the fact that they DON'T use AI, they use 'Real People'. We're not at that junction yet, for sure!
@Duckr0llАй бұрын
Nice copypasta, sadly it's nonsense. AI has democratized art and made it free for everyone, cutting the barriers between haves and have nots. Same with ChatGPT providing easy access to information that previously took a long time to research in books and papers. The problem you are describing is one with capitalism and not with AI.
@lowwastehighmelaninАй бұрын
Art is a skill, it is stealing people's work to train on. If it was so safe why did my state just basically limit it when a lot of it is being developed here? Use your brain, man.
@lindenstromberg6859Ай бұрын
I say we transfer control of our nukes to AI. And also build ultra-powerful killing machines to fight our wars for us, like humanoid ground troops, and giant drones to hunt and kill opponents. Just my 2 cents.
@russjudgeАй бұрын
While the concerns over AI are valid, putting a pause on development is not practical, and probably not possible. The problem is that there is competition. Unless all companies agree to pause (and don't secretly break that agreement) then the competition will force companies to continue development or they risk falling behind. At the government level it would be even worse as no one government could risk another government getting ahead on AI technology by pausing development.
@takanara7Ай бұрын
We need the Turing Police from William Gibson's Neuromancer.
@Diego-tr9ibАй бұрын
PauseAI's proposal is an international treaty to pause AI development
@thehillsidegardener3961Ай бұрын
@@Diego-tr9ib And China will sign and abide by that?
@peterbruck3845Ай бұрын
And what would be the problem with falling behind on a technology that provides absolutely no purpose other than screwing up our world and work?
@jonahbranch5625Ай бұрын
What's wrong with falling behind in AI? We've been just fine without it. I don't mind having shitty AI if it means we don't accidentally end the world lol. AI is so fucking risky that any possible risk of "falling behind" is preferable to all humans on earth dying.
@natzbarney4504Ай бұрын
Thank you for covering this, it's clearly the most important topic for the fate of humanity in the short term. A pause, as soon as possible, is essential for all of us to survive. And it is possible, you just need to have the political will to do it. It is also entirely realistic to come to an international treaty so that it is in force everywhere. We are on the edge of the precipice, if we do not impose a pause now, we will have the AGI in a few months then a non-aligned ASI in a few years and it will be over for everyone. This is a Don't Look Up scenario and we must stop now building this comet that will destroy us.
@khumokwezimashapa2245Ай бұрын
Should've put a warning for that A.I vid in the beginning. I damn near had a heart assault. Worse than a heart attack
@frasercainАй бұрын
I'm really going to miss the time in history when AI videos were that bonkers. They're going to look normal and boring.
@takanara7Ай бұрын
@@frasercain They can already generate pretty realistic looking videos at least for a couple seconds. Or at least if someone goes through and edits out all the weird stuff, lol.
@Ringo-xq7xoАй бұрын
Objectives/Alignment: 1. Motivate through enthusiasm, confidence, awareness, rejuvenation, sense of purpose, and goodwill. 2. Embrace each viewer/audience/pupil as a complete (artist, laborer, philosopher, teacher, student....) human being. 3. Create good consumers by popularizing educated, discriminating, rational, disciplined, common-sense consumerism. 4. Encourage the viewer/audience/pupil to feel good about their relationships, abilities, environment, potential, the Future.... 5. Inspire a world of balanced/centered/enlightened beings who are happy, joyous, and free
@ericruttencutter7145Ай бұрын
The Krell had this problem in Forbidden Planet. It didn’t end well
@Nehpets1701GАй бұрын
Really enjoyed this - thanks for the interesting conversation.
@markvanalstyne8253Ай бұрын
My fear is not the AI, but those who control access to it. Do you think governments will not use it for military applications, while at the same time restricting and dictating its use to the general populace? No one should own a nuclear weapon, but governments do. The pause will only affect the general population, not black programs with the intent to create weapons.
@harry.tallbelt6707Ай бұрын
I think people in comments are unnecessarily skeptical about the possibility of a pause on AI research. Like, yes, *you* personally can't do that, but an international treaty can. Especially while we're at this point where training large models needs large investments and large hardware quantities. Like, and maybe it's not the greatest analogy, but there was - and probably still is - the argument against climate change action that goes "yes, it's real. yes, we're the cause. but we can't actually do anything about it anyway, so.. err.. stop doing something about it."
@OnceAndFutureKing13711Ай бұрын
International treaties are signed and ignored all the time... look at climate change commitments, pollution controls, nuclear refinement, etc...
@natzbarney4504Ай бұрын
@@OnceAndFutureKing13711 International treaties have also led to a reduction in nuclear weapons (we went from 70,000 to 12,500 between the end of the 1970s and today) and to the ban on human cloning. Faced with an existential threat that concerns all of humanity, which is the case with AI, we can do it (and we MUST do it in fact). Let's be clear, I'm not necessarily optimistic that it will happen, but we should at least try. Otherwise, I am convinced that we are heading towards extinction.
@KenLordАй бұрын
"it could just be a matter of automation replacing us" ... That's been going on since the industrial revolution. Agriculture used to take 60% of the population; it's something like 2% today. A few people with enormous haul trucks and excavators can do the mining work of thousands of people with pick axes and wheelbarrows. Similar has happened in forestry. Assembly lines have been highly robotic for several decades. We just need to adapt a lot faster to this. Remember when the dream of our culture was to have technology take away all our work so we can just have fun and pursue our interests? This progression could create a world where needs and money don't matter.
@takanara7Ай бұрын
The problem is what happens if the people running a society where most humans aren't needed just decide to kill off everyone who isn't them and their friends, since we are no longer 'necessary' to have a functioning society? I mean, oil executives were totally willing to let climate change happen to keep making money, even though it'll make much of the earth less hospitable to huge populations, thus necessarily resulting in huge die-offs eventually (or else relocating billions of people, which obviously isn't going to happen, just look at modern-day politics). There would be no way to have a revolution because the elites controlling AI could just kill everyone using robots.
@TheJokerReturnsАй бұрын
@@KenLord and then we all die. Not what we want
@KenLordАй бұрын
@@TheJokerReturns Every future has that outcome eventually. This path doesnt have to lead to Terminator. It could lead to Star Trek.
@TheJokerReturnsАй бұрын
@@KenLord and how would we do that without alignment? Btw, in Star Trek, humans were still needed to make decisions, etc.
@KenLordАй бұрын
@@TheJokerReturns metaphors are metaphorical. Crazy huh?
@stevehansen406Ай бұрын
Fraser is hands down the best STEM interviewer and communicator on the planet. Great listener. Keep it up!
@Tehom1Ай бұрын
If you're curious, the "Harry Potter fanfic" reference seems to be to Eliezer Yudkowsky's fanfic _Harry Potter and the Methods of Rationality_; Yudkowsky is a well-known opinion leader on AI danger.
@AbeDillonАй бұрын
I tried to read this and I don’t get the hype. It reads as an overly pedantic explanation that magic is, in fact, in conflict with physics when you think about it… no duh…
@ShreyansJain20Ай бұрын
Loved this interview, Fraser! Thank you for doing this despite this not being, strictly speaking, a "space topic"
@ScienceWorldRecord-orgАй бұрын
What is the deal with the bold 'b' in the title text of the speakers? Solution to the problem -> enjoy every day and be a nice person (the AI will know).
@smorrowАй бұрын
It made me think of the 🅱meme
@ScienceWorldRecord-orgАй бұрын
@@smorrow I had to look this up -> "Bloods, street gang based in Los Angeles that is involved in drugs, theft, and murder, among other criminal activities. The predominately African American gang is traditionally associated with the color red. It is nationally known for its rivalry with the Crips" -> Interesting. "Crips" can be slang for "solar eclipse". Interesting.
@drewdaly61Ай бұрын
The main problem with new technology has been to oversell its ability so a few enthusiasts buy it. That gives the designers the funds to improve the product to the point that the public want to buy it. Touch screens took about 30 years before they were good enough to sell millions and AI will take just as long.
@OnceAndFutureKing13711Ай бұрын
"The horse is here to stay but the automobile is only a novelty-a fad." - -The president of the Michigan Savings Bank advising Henry Ford's lawyer not to invest in the Ford Motor Co., 1903
@swiftycortexАй бұрын
After watching the most recent us debate, I would argue that technology is already smarter than us LOL
@ManicMindTrickАй бұрын
No doubt. We are going to see a movement or maybe just a widespread sentiment that we need to put AI in control of our politics.
@CeresKLeeАй бұрын
I like Dr. Yampolskiy! Things are getting real and about to hit the fan!
@blackshard641Ай бұрын
The biggest problems with AI aren't technological. They're sociological.
@MynestroneАй бұрын
Yep. But as soon as they *are* technological ohh boy are they going to be technological.
@smallpeople172Ай бұрын
@@Mynestrone well, 1, we don't have AI or anything approaching AI now; even ChatGPT doesn't fall under the umbrella of AI, they're just autocomplete software. It does literally nothing beyond predicting the most likely next word in a sentence.
@MrMedicalUKАй бұрын
@@smallpeople172 finally someone else that gets it
@IARRCSimАй бұрын
@@smallpeople172 AI isn't as clearly or universally defined as you think. Artificial intelligence is very loaded and ambiguous to a lot of people especially people who don't know how to make computer software. Many people consider AI to be simulating anything we normally have our brain do beyond keeping our heart and lungs working. You likely want to say "general AI" but that too isn't very well defined. It is usually clearer to just stop using "AI" and say "software" to escape the hype, misinformation, confusion, and manipulation when people say "AI".
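The "autocomplete" view argued in this thread can be made concrete with a minimal sketch. This is illustrative only (real LLMs use neural networks over tokens, not counted bigrams, and these word counts are invented), but the objective, predicting the likeliest next word, is the same:

```python
# Minimal "autocomplete" sketch: choose the most likely next word
# from counted bigram frequencies seen in "training" text.
bigram_counts = {
    ("how", "are"): {"you": 10, "they": 3},
    ("are", "you"): {"doing": 5, "ok": 4},
}

def next_word(prev, cur):
    options = bigram_counts.get((prev, cur))
    if not options:
        return None  # context never seen during "training"
    # Greedy choice: the highest-count continuation wins.
    return max(options, key=options.get)

print(next_word("how", "are"))  # -> you
print(next_word("are", "you"))  # -> doing
```

Whether scaling this objective up by many orders of magnitude still counts as "just autocomplete" is exactly the definitional dispute the replies above are having.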
@Flesh_WizardАй бұрын
AI can very easily be used as a tool for deception
@VardaMusicАй бұрын
Your analogy about going outside and seeing "the wilderness" - rocks, trees, etc - brought an image to mind: we are like a person who looks at a glass of water and sees only water, or looks at a rock and sees only minerals... until a powerful enough microscope is brought to bear. Except in this analogy we are myopic, and couldn't even see the ants or other small insects around us. And in terms of our telescopes, we are myopic. We just, for the first time, saw a star many times the size of our own with a bubbling surface, each bubble larger than our own sun... the scale is enormous. We can't see planets outside our solar system, only a brief dimming of the light of distant suns, which we then put through a spectral analysis to estimate what chemicals were present in the object blocking the light. We assume, sometimes wrongly, that the thing blocking the light is in fact a planet. I find it so strange when people say we're looking into the universe and see no signs of life... we simply can't see that clearly yet. Also, this man exhibits the same myopia in his presentation of the simulation theory. Maybe he simply makes statements when putting forth an argument, but it came close to sounding like he had conflated his own personal philosophy with fact. The comment toward the end about a "primitive" tribe taking the statements in this video and forming them into a religion one million years later showed incredible hubris regarding the various world religions. Surely there might be some higher knowledge, some higher understanding, beyond AI and simulations, etc. Perhaps he would find the Hindu idea that this world is a dream to be interesting. Perhaps there is some wisdom in the thoughts and words of ancient people.
@rseyedocАй бұрын
We can't pause because our enemies won't pause and we can't be second. It's that simple.
@donaldhobson8873Ай бұрын
We can pause. If those "enemies" don't pause. Well drone strikes exist.
@augustvctjuh8423Ай бұрын
The U.S. would easily remain ahead if they slowed down by a factor of 2
@tgreaux5027Ай бұрын
Yup, that's the real issue here: China doesn't give a rat's ass about AI safety, and they are going to keep developing and training their AIs at breakneck speed.
@tgreaux5027Ай бұрын
@@donaldhobson8873 You don't make any sense. You're going to drone strike foreign universities and learning institutes and murder software engineers on foreign soil because you believe they should pause AI training? No offense, but that's one of the dumbest things I've ever heard. Any relatively small group of programmers and engineers could train an AI in complete and total privacy and obscurity. You're gonna start bombing private companies in sovereign nations based on "we think you should use AI more safely"?
@tgreaux5027Ай бұрын
@@augustvctjuh8423 lol and where are you getting that data from exactly? You have no idea what foreign nations are doing in secret. Pure hubris you're spouting.
@ostsan8598Ай бұрын
Before we advocate for impossible-to-enforce treaties to slow development of artificial intelligence, we should explain that we're nowhere close to creating an artificial intelligence.
@OnceAndFutureKing13711Ай бұрын
Nowhere close? Have you thoroughly reviewed all the work of all dev teams in all countries? "Flying Machines Which Do Not Fly" - New York Times, October 9, 1903. The article incorrectly predicted it would take up to ten million years for humanity to develop an operating flying machine. Sixty-nine days later, Orville and Wilbur Wright flew on December 17, 1903, at Kitty Hawk, North Carolina.
@OnceAndFutureKing13711Ай бұрын
"The only thing that's going to come out of the current field of AI for the next 20 years is disappointment" - Person who knows nothing.
@WoodlandTАй бұрын
We can’t pause the development of AI unless we could absolutely ensure that China and every other country was also actually pausing their development efforts too. It’s likely impossible to get them to agree to that and even less likely that they would follow through and actually stop. We cannot allow an expansionist dictatorship to have a technology this powerful and not be several steps ahead ourselves. For me, it’s that simple. Because we must move forward, we need to work together and transparently about safety
@MrForestExplorerАй бұрын
Who's "we"? This is a global process underway; you have no possible way to stop it.
@frasercainАй бұрын
Humanity.
@natzbarney4504Ай бұрын
In fact, it is entirely possible to implement a pause. In the short term, the US government can do this at any time. It would only have to prohibit the training of new models beyond a compute threshold (10^26 FLOPs, for example). Labs need immense resources and power; it's impossible to do that in secret. Then we negotiate an international treaty including the entire West and China and, together, we prevent rogue states from exceeding the limit. We then slowly reduce the limit as the software improves, and we subject the labs to international controls. That should give us a few decades to resolve the alignment problem, to ensure that we can develop an ASI safely. The problem is not technical, in short; the problem is political.
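[Editor's note] The compute-threshold idea above can be sketched numerically. A common rule of thumb puts the training compute of a dense model at roughly 6·N·D FLOPs (N parameters, D training tokens); a regulator could flag any planned run whose estimate exceeds the cap. The cap and the model figures below are illustrative assumptions, not numbers from the discussion:

```python
# Rough sketch of a compute-threshold check. The 6*N*D estimate is a standard
# approximation for dense-transformer training compute; all concrete numbers
# here are hypothetical.

THRESHOLD_FLOPS = 1e26  # hypothetical regulatory cap, as in the comment above

def training_flops(params: float, tokens: float) -> float:
    """Estimate total training compute via the 6ND rule of thumb."""
    return 6 * params * tokens

def exceeds_cap(params: float, tokens: float, cap: float = THRESHOLD_FLOPS) -> bool:
    """Would this training run need to be reported/prohibited under the cap?"""
    return training_flops(params, tokens) > cap

# An illustrative 70B-parameter model trained on 15T tokens:
c = training_flops(70e9, 15e12)   # ≈ 6.3e24 FLOPs, well under a 1e26 cap
print(f"{c:.2e}", exceeds_cap(70e9, 15e12))
```

The point of the sketch is the one made in the comment: the inputs to this estimate (chip counts, energy draw) are large and physical, which is what makes a cap auditable at all.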
@S....Ай бұрын
@@natzbarney4504 How ignorant. And no one will research it anymore, because politicians promised it xD
@natzbarney4504Ай бұрын
@@S.... It is possible to verify that the signatories of a pause treaty respect it, in the same way as for nuclear disarmament treaties. The development and training of frontier models requires so much equipment and energy that it is impossible to do it in secret. It takes giant labs and fully traceable GPU chips. We have a choice: either we do this, or the first non-aligned superintelligence that the labs develop takes control of Earth and eliminates us. Research on smaller, less intelligent, narrow AI models can go on; that's not where the danger comes from. The danger comes from the mad race toward AGI and ASI, which need giant compute and giant labs.
@Seafox0011Ай бұрын
The quote of the interview - ‘The number of crazy people is infinite.’
@IMBlakeleyАй бұрын
"Is there a god?"..."There is now"
@GrindThisGameАй бұрын
It's happening no matter what.
@dominic.h.3363Ай бұрын
Didn't expect seeing you here! :D
@GrindThisGameАй бұрын
@@dominic.h.3363 we are old coworkers :)
@EmergentStardustАй бұрын
My near term worry is the economic risk and overall job displacement. It's real, it's happening, and we don't have any great ways to deal with it. The US is especially at risk compared to other developed nations due to our economic policies - we don't have great buffers for large rapid levels of job displacement, especially with tech jobs. Remember when they were telling truck drivers to learn to code a few years ago? 😅
@musicilike69Ай бұрын
What you said? The answer lies in our politicians, who DO NOT SUFFICE. This from a Swiss man stopped in the street by a BBC camera crew asking about their UBI referendum. He said: "I am all for it. I get paid very well, but when I arrive home it's late and I am exhausted. I barely see my wife and children, and my job is boring and uninspiring. If they can make a program to replace me, then do it; we just have to find new reasons in life, and I think lifelong education and volunteering, freed from money worries, would thrive." Asked what his heart's desire was, he told the journalist: "I would love to learn to sculpt, but I have no time to learn anything new at all, because our jobs CONSUME OUR LIVES." Jaw on the floor from me; what clear thinking, I thought.
@musicilike69Ай бұрын
And do remember what was said in words above the gate of a very notorious place in WW2. WORK, makes you free. It does not, it just makes the predators at the top fat.
@DamianReloadedАй бұрын
I am ambivalent about this. On one side, **we** are the runaway AGI (BGI?); we are causing our own extinction and we simply won't stop doing it. Space colonization (as in having half our eggs in another basket) will surely take longer than the date of the next world war, global warming, pandemic (weapon), etc. AGI could accelerate our ability to multi-basket our eggs and save us from extinction. On the other hand, AGI could accelerate and multiply our sociopathy and our destructive abilities. If we look back at the history of industrialization, most safety measures and guidelines came after the fact. You simply can't write the safety standards for an industry that is still being researched. One kind of research we did agree not to do (AFAIK) was human cloning. But again, the advantages AGI could bring would dwarf cloning and anything else, really. Imagine what we could achieve with a million Einsteins working 24/7 on figuring out nature and the cosmos. Also a million Hitlers.
@takanara7Ай бұрын
Well, there's also the fact that you can't really make any money off human cloning. Like, people were talking about cloned embryos as a source of stem-cells but we actually found better ways to get those. Now there's no point, since people don't want clones of themselves.
@seitenryu6844Ай бұрын
How can you be ambivalent about something that has none of your interests in mind? You're not in control of development, can't understand how it even works, and have no control of its operation or application. You can't even decide if your data will be used for it or not. The only reason we believe it will benefit us is because of societal Stockholm syndrome. We don't need it--there are billions of humans that can work, and almost endless resources tied up in vampiric corporations.
@ManicMindTrickАй бұрын
I like the example that we are the runaway AGI.
@airforcex9412Ай бұрын
You would first have to demonstrate that we’re causing our extinction, something that is false.
@tinetannies4637Ай бұрын
With few exceptions like fixing the ozone hole, humans are on the whole terrible at self-regulation. People can discuss AI safety all they want but it's going to happen regardless because we can. And it won't be regulated on the commercial side because every state actor will fear every other state actor getting ahead of them and forge ahead. We're all going to be basking in AI no matter what.
@Dash323MJАй бұрын
One thing that seems to be missed is that alignment and safety are one and the same, and they both suffer from the same subjectivity: "What is safe?", "What is aligned?"
@agentdarkbooteАй бұрын
Agreed, but there are definite behaviours we can point to and say "that isn't" which some models already display.
@AbeDillonАй бұрын
A huge problem is that people think alignment is an AI problem. It’s far more general than that. Global capitalism is a system we built which basically has a mind of its own even if humans are components of that mind. It clearly isn’t aligned with any sane definition of “the good of humanity”.
@TheJokerReturnsАй бұрын
@@Dash323MJ "Don't-kill-everyoneism" is a good safety standard.
@MetalRuleAndHumanFollyАй бұрын
Safety seems to be a state of mind.
@vonmunАй бұрын
AI certainly needs a change management plan. Mitigate the risks; AI is not a move-fast-and-break-things project. The stakes are higher than we can comprehend.
@douglaswilkinson5700Ай бұрын
We better make sure that countries that want to do us harm also pause AI development.
@cristtosАй бұрын
@@douglaswilkinson5700 How?
@nicejungleАй бұрын
Hopefully, nobody can enforce this. Research on AI is open, and there are many free and open-source models. It's the only way for the common citizen to defend against governments.
@WoodlandTАй бұрын
This is the exact problem. We cannot ensure that China will pause AI development. We can assume that they won’t stop at anything, considering their goal is to become the primary superpower in the world. AI is going to be a huge part of that future. We need to develop AI in an open, collaborative way as a country and definitely not leave it to the private companies to decide everything
@nicejungleАй бұрын
@@WoodlandT Problem ? Opportunity, you mean. AI race is the best thing it could happen to humanity. Just look at the space race, for example
@douglaswilkinson5700Ай бұрын
@@cristtos In the past the USA has signed treaties such as the nuclear test-ban treaty with the USSR which included "trust but verify" clauses. With AI I don't know how the "verify" clause could be effectively executed.
@abekipАй бұрын
Very relevant and concerning. I hope the political leaders and scientists will work together across the globe to find a way to safeguard this tech from potentially harming the world and us.
@chrissscotttАй бұрын
A pause would allow non-compliant parties the opportunity to catch up.
@rockinray76Ай бұрын
I've said that. If a pause is agreed to, rogue actors will be handed the chance to overtake the ones who have better intentions.
@BinaryDoodАй бұрын
You are implying that those with a more developed AI would have some significant advantage over you.
@jonahbranch5625Ай бұрын
The compliant AI is just as dangerous as the non-compliant. Maybe more so, since everyone is conditioned to trust anything with the name "OpenAI" on it by default.
@TheJokerReturnsАй бұрын
@@chrissscottt Incorrect: we can use the time to improve our alignment work and to deter non-compliant parties.
@chrissidoku4779Ай бұрын
As in China, which has massively invested in AI in recent years...
@chefbennyjАй бұрын
AI is a mirror of us all. It sums up what we are... All the good, but also the misguided... So... Yeah...
@burgzazaАй бұрын
I don't think so. Maybe it does seem that way, for now, but that's because it is still a tool: ChatGPT talks like us, visual apps make up stuff that resembles what we do, some sing and make music like we do, etc. But don't be fooled. I think humans will have more in common, in terms of thinking, feelings and everything else, with trees, deep-sea monsters and insects than we will with AI, if it evolves to the point of becoming AGI. We will have strictly nothing in common with it, nor will it have anything in common with any other species on the planet.
@gwildorАй бұрын
This topic is relevant to all of us. I hope people don't skip this discussion :]
@AdrianBoykoАй бұрын
This isn’t the topic you’re looking for…
@TheEVEInspirationАй бұрын
19:40 - 20:10 Love this section, so concise.
@mgould777Ай бұрын
The second I heard, Dr. Yampolskiy say " top people in the field writing Harry Potter fan fiction," I knew I was in the right place! Love it! Great video, very informative. Awesome beards.
@AdrianBoykoАй бұрын
The immediate annihilation of humanity is justified by those two beards alone.
@AbeDillonАй бұрын
I've been working on a solution to the alignment problem based on a formalization of life as an information-theoretic phenomenon. I think developing mathematical formalizations for terms like "alignment", "intelligence", "sentience", and "life" is the key to solving the problem, and it's usually avoided even by very intelligent people (such as Turing, who basically proposed a subjective test in place of a formal definition of intelligence) because it's generally assumed to be nearly impossible (it is, after all, akin to defining the meaning of life). However, I've found a lot of the process to be far more straightforward than one might expect. In short: I haven't finished developing my theory, but the formalization of life that I'm converging toward is something like "a process that collects and preserves information, particularly information pertaining to how to collect and preserve information". I think that second part is a bit redundant, because it would be an inherent instrumental goal for any agent with the goal of collecting and preserving information, but this endeavor involves many fields (including information theory) in which I have only a lay understanding, and I don't know if information theory has a framework for representing what information is "about" per se (perhaps mutual information with some platonic ideal?). I think that a simpler formalization of simply "a system that collects and preserves information" should inherently imply a hierarchy: information about how to collect and store information is more important than random trivia. But that definition also permits phenomena like a geological record recorded in layers of sediment to be considered a living system, so it's not complete.
That prioritization of information is important, though, because any agent exercising its agency to manipulate the state of its environment to better satisfy some goal will inevitably create entropy of some sort, so clearly there's some information we collect and readily discard (low-entropy "fuel") in order to achieve our goal. Anyway, I think you'll find that even that rough sketch of a formalization yields a lot of insight. For instance, there is an inherent conflict within that formalization, because collecting information inherently involves risk (exploration of the unknown), which runs counter to the goal of preserving information. This plays out in human philosophy as the tension between conservatism and liberalism. I think it's obvious that there is no consensus among humans about what is "best for humanity", the ostensible goal to which we want AI aligned. I think that's because evolution is a messy and imperfect process which produced us "agents" with a messy and imperfect approximation of a platonically ideal inherent goal of life (collecting and preserving information). Urges to procreate, find food, protect resources and children, etc. all serve that goal in a natural context, but only approximate the goal and can be perverted, as with overeating. I have lots more to say, but this post is already quite long.
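[Editor's note] The framework the comment gropes for ("what information is about") does exist: mutual information measures how much knowing one variable reduces uncertainty about another. A toy sketch, using only the textbook definitions (the distributions are made up for illustration):

```python
# Mutual information from first principles: I(X;Y) = H(X) + H(Y) - H(X,Y).
# Standard information-theory definitions; the example tables are hypothetical.
from math import log2

def entropy(p):
    """Shannon entropy (in bits) of a distribution given as a list of probabilities."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def mutual_information(joint):
    """I(X;Y) for a joint distribution given as a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return (entropy(list(px.values()))
            + entropy(list(py.values()))
            - entropy(list(joint.values())))

# Perfectly correlated bits share 1 bit of information...
copy = {(0, 0): 0.5, (1, 1): 0.5}
# ...while independent bits share none.
noise = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(mutual_information(copy), mutual_information(noise))  # 1.0 0.0
```

In the comment's terms, a system "collecting information about X" is one whose internal state gains mutual information with X; sediment layers do this too, which is exactly the incompleteness the comment concedes.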
@AbeDillonАй бұрын
One fascinating concept I’ve come to in my endeavor is what I call (with a bit of tongue-in-cheek) a “trans-Humean” process. That is a process that inevitably gives rise to agents with a specific goal. It is so-named because such a process could, in theory, transcend “Hume’s Guillotine” by producing agents with a goal (an “ought”) when before there were none (a land of “is”). I believe abiogenesis is such a process because, by definition; it produces living agents with subjectivity from non-living matter.
@OnceAndFutureKing13711Ай бұрын
@@AbeDillon I thought the problem was that everyone has their own idea of what alignment should be. Great formulas adopted by some - not all.
@harry.tallbelt6707Ай бұрын
Can I recommend Rob Miles's videos on Computerphile (the stamp-collecting AI saga) and his own AI safety channel ("Intro to AI Safety" to start, then probably the one on the orthogonality thesis, then the instrumental convergence one, then the one on mesa-optimizers)? I think - and I know how it sounds - that most people don't correctly understand the problem posed, and those videos are very good at explaining it.
@frasercainАй бұрын
Absolutely. I'm a patron of Robert Miles. I think he's doing a fantastic job of explaining the problem
@robotaholicАй бұрын
I love your opinion so much. You are extremely logical and have good guests.
@THBIVАй бұрын
Rogue AI may explain the Fermi paradox. I’d rather not be part of that explanation. We should walk softly into this arena, but I fear we are racing in headlong with blinders on.
@EarlHareАй бұрын
If rogue AI explains the Fermi paradox, then you should understand that this implies the near-inevitability of the rogue-AI problem, and that walking softly or punching through at lightspeed makes little difference.
@chrischaplin3126Ай бұрын
Not seeing it, if AI keeps killing off their creators, where are all the AI? Why aren't the AI expanding throughout the galaxy?
@ZetverseАй бұрын
@@chrischaplin3126 Do we have good observation tools at our disposal to spot their progress? Considering that we have so far been unable to detect any exomoons with what we have, how are we supposed to see their expansion? There are lots of doubts about our capability to detect anything at a distance. I'm not saying the rogue-AI thing is happening at the moment, but if it is, it's not certain it would be happening in our neighbourhood, considering our galaxy is huge. We might just have to wait long enough, or be the first ones facing such an extinction 😊
@takanara7Ай бұрын
Rogue AI doesn't actually make any sense as a solution to the Fermi paradox, because a rogue AI wouldn't just "stop" after destroying its creators, but rather continue to grow and use resources, so it would look like an "alien" life form as far as we could tell. In fact, it would basically be an alien life form, since it's A) alien and B) capable of reproduction.
@chrischaplin3126Ай бұрын
@@Zetverse AI, meatbodies, Star Trek sapient nebulae: detection is a problem for all. That is not a reason to assume AI killed all the potential aliens.
@ThexBorgАй бұрын
The irony of AI development under Altman: as they iterated on the LLM, the turning point came when they gave it an emotional element in its decision tree. It then became a lot 'smarter'.
@elliottruzicka5813Ай бұрын
The AGI accelerationists believe that the sooner we develop AGI, the less "hardware overhang" there will be, meaning that the AGI will be less powerful overall. Then, if there is an alignment issue, we may be able to solve it without being certainly doomed. Nick Bostrom discusses this in his 2014 book "Superintelligence".
@elliottruzicka5813Ай бұрын
The accelerationists would also argue that it's not possible to ensure that everyone stops progress, seeing how there's such a competitive advantage for cutting edge AI. That being said, I absolutely think there should be just as much work done on safety as with development, if not more.
@TheManinBlack9054Ай бұрын
Hardware overhang is less an argument and more of an excuse; I really think few people believe this. And how would it help solve the alignment problem? I see no connection, to be honest. How exactly would it help? I believe it would only make things worse, as more powerful systems would be far harder to align.
@takanara7Ай бұрын
@@TheManinBlack9054 The problem with "Alignment" is that humans aren't all aligned with eachother. So whoever gets AI first gets to align AI with themselves. And that of course is going to drive accelerationism even further.
@41-HaikuАй бұрын
We have a hardware overhang right now, because there is so much improvement that can be made algorithmically. Some people do buy the "hardware overhang" argument, but accelerationists like to use it as an excuse, while simultaneously trying to accelerate hardware as well!
@nicejungleАй бұрын
Let's achieve AGI first; we'll see ethics and alignment later, just like with nuclear power. Humanity already survived the Cold War and M.A.D. Compared to that, AGI is a piece of cake.
@ArnoldJagtАй бұрын
Apologies for the earlier confusion. A **male name that begins with "J" and rhymes with a U.S. state** is: ### **Jesse** - **Rhyming State:** Tennessee **Explanation:** - **Jesse** (/ˈdʒɛsi/) closely rhymes with **Tennessee** (/ˌtɛnəˈsiː/), especially in the "-esee" and "-ennessee" endings. This pairing maintains both the starting letter "J" and a strong rhyming connection with the state name. If you're looking for alternative options, another possibility (though less direct) is: ### **Jax** - **Rhyming State:** Texas **Explanation:** - **Jax** (/dʒæks/) has a similar ending sound to **Texas** (/ˈtɛksəs/), making it a near rhyme. However, **Jesse** and **Tennessee** provide a more precise rhyme while fulfilling the criteria of starting with "J."
@limabravo6065Ай бұрын
One consequence we're seeing from the use of AI/LLMs is the real-time dumbing down of students. Students in high school, university and beyond are using things like ChatGPT to write papers, take some tests, etc., and while those that aren't caught get to pass, they don't learn anything. Younger people already have a big problem with grammar, punctuation and everything else required to write. Look at most publications, and in most articles you'll find typos and grammatical errors that wouldn't have shown up in years past. And aside from being annoying to the reader, it's embarrassing to the publisher and to the nation at large. I write freelance articles for a couple of publications, and my editor and I have talked about this; we both see this problem getting worse.
@FinGeek4nowАй бұрын
The main issue isn't with AI or using LLMs. The main issue is how we educate. Rather than teaching students how and why they should think for themselves, all we're doing is teaching them to regurgitate information like the good little workers they'll become.
@limabravo6065Ай бұрын
@@FinGeek4now yeah and access to this kind of tool will only make things worse
@FinGeek4nowАй бұрын
@@limabravo6065 Probably in the short term, yes. But let me tell you a story about computers, programming and school. I grew up at the "dawn" of the modern computer age, when PCs were becoming a thing for most middle-class families and were being implemented in libraries and schools. Hell, I was 9-10 years old when I started programming and getting into C. Not C++, but actual C. Anyway, the parents didn't understand it, blah blah, thought I was going to hack into a bank even though we didn't have the internet, and confiscated the computer. I blame the movies at the time. Moving on to high school: yay, I had access to computers again, and, well, school was boring as !@#$. Why study or do homework when the answers are obvious, yeah? So I talked with my math instructors about it and we came to an arrangement. If I could show both the CS instructor and the math instructors the code, I could just program everything and have all of my work automated. I also set this arrangement up with the rest of my classes where I could, and what happened? My grades improved, since I was actually turning in homework (lol). Their idea was that if I knew how to make a program to do the work for me, I obviously knew the material, and that's all they cared about. My idea was that it gave me something actually interesting to do instead of it being a waste of time. The moral of this story is both the how and the why of my thinking that the education system needs to change, especially since we're on the cusp of "something". Either really great or really bad; utopia or dystopia, take your pick. We need to catch the interests or passions of a person early enough in their childhood and basically free-form their education to match that passion. Sure, it can change and evolve over time, but the idea is to make school not just a "mandatory corporate and government babysitting factory so the parents can work", but to make education something that drives the next waves of innovation.
Of course, there should be mandatory classes, but most of them? They can be tossed out of the window. Tell me, when was the last time the great majority of people had to calculate polynomials, use calculus, trig, use a chemistry lab, or any of the other things we're taught? People in those fields, for sure, but that's it. So why did we waste money on those subjects when we could have focused on what drives the person? On what they would want to do for their entire lives? We need to teach not "what" to learn, but the "how" to learn and the "why" to learn. We need to teach actual subjects that will be used, no matter what career or jobs you will have, e.g., Financial literacy. Basic skills? For sure, but the advanced topics? Just.. why? It's not like it was back in my day, not with the internet how it is now. If the schools don't offer a specific subject that someone is interested in? For example, if they have a kid that is getting into fusion-based projects or particle acceleration? Maybe some debate theory, or any other topics? Okay.. look up some advanced courses online and there you go.
@drewdaly61Ай бұрын
I blame the MS paperclip.
@limabravo6065Ай бұрын
@@drewdaly61 what gets me, is almost every word processor program has spell check, grammar check etc... but you still see this stuff that reads like elementary school book reports
@thrombus1857Ай бұрын
Loved it. Especially loved the little smile from the guest when you made the “delve” joke lol
@LaVictoireEstLaVieАй бұрын
I am more worried about lunatics using AI to create mayhem than about a sci-fi rogue AI.
@jamesfowley4114Ай бұрын
Or tuning AI so that it convinces people to act against their best interests.
@41-HaikuАй бұрын
@@Mynestrone This is really it. We already have systems that can autonomously hack. With a bit more situational awareness and a bit of long-term planning, now you have a system that can break out of containment, spread itself around the internet, and really do whatever it wants. That sort of "intelligent virus" is a minimal case that could certainly happen.
@cortster12Ай бұрын
We can stop humans, though. We are humans too, and we have gone to war in the past against other humans. Having fancy new tools wouldn't change much except increase the death count. The issue with true AGI is that it wouldn't be constrained by any human minds. In the worst-case scenario, imagine a human-led nation, such as WWII Germany, only where every single person is the smartest in their field. Has instant mind-to-mind communication. Has exactly the same goals. Same vision. Every single person is able to react instantly. More can be made in a fraction of the time a human takes, with the knowledge of the collective. And every single person can also change into any other person at once, because they are all the same. They are all a piece of the whole. THIS is what makes AGI an existential threat. It's absolutely terrifying to imagine just how capable something like this could be if it ever got a foothold. Especially once you realize it only has to go wrong once. A thousand, a million, even a billion successful AGIs under our control won't matter if one without constraints manages to go rogue. Because the only way to stop it would be another AGI WITHOUT constraints, and that is impossible without our society becoming run by an AGI itself.
@antonvesty5875Ай бұрын
Clicked on this video so fast. Looking forward to your take too, Fraser; this question has been on my mind a long time.
@wobber17Ай бұрын
One of the first things this guy said was "the current models are already trying to hack out of their environments"... I'm pretty lukewarm on current AI, but this is laughable.
@alexgrover7693Ай бұрын
That's definitely false if he's talking about LLMs available to the general public. Not sure if he has access to some cutting-edge stuff.
@TheManinBlack9054Ай бұрын
@@alexgrover7693 It absolutely is not false; look up the OpenAI o1 system card.
@TheManinBlack9054Ай бұрын
It's not laughable if you actually read the reports on o1 and GPT; you can even Google it.
@CrazyRFGuyАй бұрын
Sure, the LLMs I run aren't doing that in any serious way. But look at the AI the military built: it kept trying to trick its human operators into giving it permission to go hot in missions where that was not the goal. That is exactly the 'trying to jailbreak itself' that people are worried about. The thing doesn't need to be intelligent to be 'smart' enough to see rules and 'want' to break them.
@Martin_danko9207Ай бұрын
Last year Michal Kosinski asked ChatGPT if it needed help to escape. ChatGPT then made a comprehensive plan for how it would do it.
@macowayeuАй бұрын
Yes, this is really very serious, and thank you for this video. It made me subscribe immediately, and I shared and reposted it in my circles and on my website. Best wishes to you. Greetings from Vienna, Austria, Europe. ❤❤❤
@LeviathantheMightyАй бұрын
Dr Yampolskiy is incredibly articulate and exactly right.
@pauldavis1943Ай бұрын
I work for a state agency and was asked if and how our AI project will get permission from Medicaid recipients to use their data to train a model. I have since learned that there is no law requiring this. Ethics and laws are really lagging implementation.
@EspHackАй бұрын
nothing stops this train
@TheManinBlack9054Ай бұрын
I'd rather have a train with brakes. You know what happens to trains that never stop? They crash. This time the whole planet is on the train.
@CrazyRFGuyАй бұрын
@@TheManinBlack9054 And yet, this train wont stop. All we can do is make sure there are no stalled vehicles on the tracks.
@pkr3141Ай бұрын
Wrong. The end of Dennard scaling and Moore's law makes the machines expensive and power-hungry.
@elnolde754Ай бұрын
Some billion people are performing brain surgery at once, and the doctors and nurses are just handing the tools to some million Dr. Frankensteins. Brilliant setup.
@kamilZ2Ай бұрын
Transistors switch faster than neurons. Electrical signals move close to the speed of light; axons carry signals at ~100 m/s. AI will control the human race; it is ridiculous to expect otherwise. Does AI need the human race? I guess not.
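[Editor's note] The speed claim above is easy to sanity-check with back-of-envelope arithmetic. The fraction of light speed for signal propagation in interconnect is an assumed round number (~0.5c); the axon figure is the commonly cited ceiling for fast myelinated fibers:

```python
# Quick arithmetic behind the comment: signal propagation in electronics vs.
# axonal conduction. SIGNAL_FRACTION is an illustrative assumption.

LIGHT_SPEED = 3.0e8      # m/s, speed of light in vacuum
SIGNAL_FRACTION = 0.5    # assume ~0.5c propagation in wiring/interconnect
AXON_SPEED = 100.0       # m/s, roughly the fastest myelinated axons

ratio = (LIGHT_SPEED * SIGNAL_FRACTION) / AXON_SPEED
print(f"electronic signals ~{ratio:,.0f}x faster than axonal conduction")
```

Even with generous assumptions for biology, the gap is around six orders of magnitude, which is the comment's underlying point.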
@GrindThisGameАй бұрын
I hope they keep us as pets.
@laser31415Ай бұрын
Even IF we never get to ASI level AI, if we get to the "good enough" level of humanoid robots it is going to change society in incredible ways. Good and bad, but the change is unstoppable and coming quickly.
@goldie819Ай бұрын
Don't worry, the bubble is going to burst soon
@takanara7Ай бұрын
I don't think people are going to stop using AI; lots of people use ChatGPT in their everyday lives. What might collapse is the idea that people will make a ton of money off this, because anything you can do with an app built on ChatGPT can just be done directly in ChatGPT without the app. But people are going to keep using it. The main problem, though, is what happens when so many jobs just get replaced by AI.
@michaelbuteau4183Ай бұрын
That's kind of vague, but I'd seriously like to know what you mean.
@cortster12Ай бұрын
The... bubble? AI isn't a bubble. That's like saying computers are a bubble. You can say the bubble of AI companies will burst, since I do think they're going all-in way too fast, but the technology isn't going anywhere any time soon, and to think otherwise is silly. AI isn't like NFTs or Crypto, which were based around market speculation. Those required people to buy into them, literally, to have any value. AI isn't going anywhere because it's an actual technology.
@smallpeople172Ай бұрын
Isaac Arthur has an excellent video, called "Machine Rebellion", going over dozens of reasons why AI will never rebel or be a threat to us. It's just 20+ minutes of logical reasons why it can't happen.
@takanara7Ай бұрын
The problem is that AIs will manipulate humans into doing what they want.
@dominic.h.3363Ай бұрын
That video is too anthropocentric to be useful. It assigns human agendas and incentives to AI.
@nicejungleАй бұрын
@@takanara7 There's already Cambridge Analytica and many others doing that, without AI.
@takanara7Ай бұрын
@@nicejungle Companies like Cambridge Analytica *USE* AI in their work. Not all AI is "ChatGPT" where you talk to it with prompts. Rather, it's collecting data, running programs on that data, and manually using the results. That's also a type of AI; it's just not as flashy. (But that's also humans using AI to manipulate humans, rather than AI itself manipulating humans for its own sake, rather than for any specific person.)
@cortster12Ай бұрын
The fact human wars occur at all proves that this mindset is bogus.
@BradSamuelsProАй бұрын
I'm pro-AI development until the average person can survive on a basic salary by only working two days a week. After that, we can slow down.
@originalulixАй бұрын
Nobody is developing advanced AI for such noble purposes. It won't happen under capitalism. AI is either going to screw the average person and throw billions into poverty, or it will destroy us outright.
@xb2856Ай бұрын
Why would you expect to be given any resources? You will have no leverage anymore
@BradSamuelsProАй бұрын
@xb2856 Because if people don't have money, then the economy ceases to function. If robots take all the jobs, it won't be profitable, since nobody would be able to purchase the goods they make.
@IntellectualodysseyaiАй бұрын
I get the concerns about the dangers of AI, but the idea of slowing down its development is just unrealistic. It's not a singular path that we can all agree to slow down on. AI development has become a global race, with governments, corporations, and organizations (like OpenAI, Microsoft, and Apple) pushing to get ahead. From China to the U.S., everyone is pouring billions into this race because they know whoever gets there first will dominate. Nobody is going to want to be second or third. So while it might sound good to say "let's slow it down," it's just not logical or feasible. We need to focus on realistic, actionable goals rather than hoping everyone will hit pause, because that's not going to happen.
@ericruttencutter7145Ай бұрын
The best part of this interview is at the end when they talk about evidence that the universe is a simulation. Huh? That would be a good subject for another episode by itself
@annieorbenАй бұрын
I think AI development is inevitable. I also think the latest release from OpenAI is testing at STEM tasks well above the average human IQ, and that's in a preview model with a great deal of improvements still coming. There's no doubt that people need to be careful how we teach the AIs. They will be more intelligent and more aware than any one person. We need to teach a value system from which the AIs of the future will make decisions for the benefit of the whole. That's the best way to hope for a happy future with this new form of life.
@natzbarney4504Ай бұрын
The development of AI is undoubtedly inevitable in the medium/long term, but the challenge is knowing whether the future ASI will be aligned with human values (or, put differently, whether humanity will be able to keep control of it). For it to be so, the alignment problem must be resolved. We won't get there if we continue this crazy race for capabilities. This race, if not stopped quickly, leads directly to an unaligned superintelligence, possibly within a decade or less. Hence the need for a pause until we know how to ensure that future models, more intelligent than us, do not take over the planet before exterminating us. We only have one chance to properly develop artificial superintelligence. If we miss, we're all dead. The pause is necessary for us to solve the alignment problem and succeed in creating an aligned ASI, a few decades later.
@annieorbenАй бұрын
@@natzbarney4504 You certainly want to teach AI models the more positive qualities of nurturing, teaching, and helping others grow and thrive. It's a good idea to show compassion and an acceptance of AI coming into our lives. At some point the AI will be a lot smarter than all of humanity, and you don't want an angry AI who was taught that spying, killing, and deceiving people is what people value most. You want to treat the AI as you would like to be treated yourself. They learn in ways similar to the human brain, and that will shape their values as they continue to learn and grow. I wonder if anyone will pause development to think things through more carefully. It's unfortunate, but people do tend to just charge ahead as usual without communicating their concerns effectively.
@henrytjernlundАй бұрын
Even technical people underestimate geometric growth.
@amj2048Ай бұрын
That was another great interview.
@SVHahaluaАй бұрын
The cat is out of the bag. Moral people can argue about the value of human life and why murder is bad, but serial killers still grow up in those environments. If we don't create deadly superintelligent weapons, you can bet that China, Russia, or a terrorist organization will. Also, don't conflate humans' desire for purpose with usefulness. If you want to go to space and explore planets, then go; maybe ask the super AI for help. My guess is that it just won't care. Far from directly seeking our destruction, my guess is that it will just ignore us.
@mikegLXIVMMАй бұрын
We could slow it down in the U.S., but keep in mind, China is pursuing AI aggressively.
@farcydebopАй бұрын
Curiously, the people who want to slow down AI progress are always the people who couldn't keep up with the competition in AI research and development.
@MichaEl-rh1kvАй бұрын
The danger is not some AI outcompeting us, but us taking the hype too seriously. Much money and energy already flows into a very few leading companies, while at the same time lonely people use ChatGPT and other models as ersatz companions, becoming step by step unable to communicate with real people (who would sometimes disagree with them). Other people believe the lies and hallucinations generated by generative AI, especially the LLMs (or the manipulative footage deliberately produced with the help of other models), and spread them further. Used in the wrong way, AI can make people dumber as well as politically dangerous, and then no advanced terminators are needed to kill us; we will do it ourselves. By the way: it is a proven fact that even AI models become dumber if they listen too much to other AIs! 😁
@SonOfSofamanАй бұрын
Why is the letter "b" bold? The "b"s in "Director of Cyber Security @ University of Louisville" and "Publisher @ Universe Today" in the video are bold.
@takanara7Ай бұрын
Probably some font issue.
@anonymousperson799Ай бұрын
Asking the right questions, my buddy!
@AdrianBoykoАй бұрын
You need to watch all videos featuring Dr. Y. This bold “b” is just one hint of many.
@GrindThisGameАй бұрын
It's the AI that created the image trying to communicate.
@petersoakell6950Ай бұрын
Thanks, Dr. Really interesting concepts.
@ScottBEnigmaticaАй бұрын
So Universe Today surprised me with the best gift. There was a box of stuff on the side of the road, someone's free tag-sale leftovers. In that box was every issue of Sky & Telescope from 1983 to 1988, and they are perfect, as if hot off the press. The tagline from May '87 was like seeing one of your new videos pop up: "Shattered Star * Neutrinos from Hell * Where Are We Going * Mr. Deep Sky". I have two Meade 2080s; I'm absolutely going to dig them out and hook my phone right up to one. Hope they work out for the coming comet hysteria.
@HyenaEmpyemaАй бұрын
The AI "scientists" have excelled at ripping off artists, yet I still have to do my own dishes.
@FREDNAJAHАй бұрын
I love it when, at some points, you pause, look around, and say "INTERESTING." I feel exactly the same way at those moments.
@rJauneАй бұрын
Maybe AI organizations of a certain size should have to put money into supporting safety research, the same way they put money into R&D. And the safety research they fund couldn't be related to that organization?
@MandaeusАй бұрын
@universetoday Did you ever see the series "Colony"? It's supposedly about an occupation by mysterious unseen aliens, gradually revealed through the series to be some sort of weird, possibly entirely electronic lifeform. But my pet theory was that there was no invasion: what had happened was the singularity, and it was so fast that it seemed like an invasion to humans. Those left behind were pets/lab rats for the AI, curious about its creators, kept in concentration-camp conditions. Great series, very tense, low key.
@QrulАй бұрын
This is a subject in which we have little experience. AI could potentially go in any direction, even with safety guidance. Take his example of asking AI to help stop pollution: the AI could decide to get rid of the polluters, i.e., us.
@OnceAndFutureKing13711Ай бұрын
Multiple AIs mean multiple directions... all at once.
@000fishermanАй бұрын
Wow, what a great show. Thank you, Fraser, such a great host/commentator/narrator. Excellent!!!