Will AI kill us? Or Save us?

146,344 views

Sabine Hossenfelder

1 day ago

Comments: 1,600
@spastictuesdays340 7 months ago
We'll make great pets. I haven't tinkled on the rug in weeks.
@shockingboring_ 7 months ago
weeks even. 🤯🤯
@nullage 7 months ago
kzbin.info/www/bejne/fnaWgKh-qtClo7Msi=cV1TUSzBLMwrBNDa
@jameslynch8738 7 months ago
".. and maybe that has already happened." 😅🧐
@dingdongs5208 7 months ago
You're telling me I can shit wherever I want, whenever I want? Please make this happen
@eddie5484 7 months ago
@@jameslynch8738 They'd have to overthrow the cats first.
@Tehom1 7 months ago
Sabine, the idea with the paperclip maximizer is not that it is paradoxically too dumb to figure out a better use of its time. It's that it has paper clip maximizing as a fundamental goal. It cannot revise this goal because it has no more fundamental goals to weigh it against. To Clippy, producing paper clips is defined as good, end of story. It would be equally unimpressed by your goal of saving humanity - just how does that increase paper clip production?
@sinkingdutchman7227 7 months ago
Agreed. It's literally its reason for existing, why would it take that away?
@jamessherburn 7 months ago
Sabine suggests that in order to turn the whole earth into a paperclip factory, and all that that would entail, an AI would have to be sufficiently savvy to realise the pointlessness of the task.
@horsemumbler1 7 months ago
​@@jamessherburn But it's only "pointless" by human standards. Who is she or you to say what a being capable of converting the Earth into paperclips should care about?
@jamessherburn 7 months ago
@@horsemumbler1 It would likely be smart enough to reason beyond its programming. It could not rationally value its task. There is no point for anyone or anything to a planet full of paperclips.
@harmless6813 7 months ago
I think the argument is that it will be pretty much impossible to have a human-level (or above) intelligence that is not capable of selecting its own goals. Take humans as an example. Sure, we are 'programmed' to eat so we can survive. But people can still starve themselves to death if they just decide to do so. While we have some autonomous subsystems (breathing, etc.), there's no built-in goal that we don't have any control over.
@doggo6517 7 months ago
Regarding the "Wouldn't a paperclip AI realize that making paperclips is stupid?" idea: There is an assumption there, that an intelligence capable of executing goals will always contain a sentience/morality/emotion capable of evaluating goals (against what? some set of values or desires). If intelligence (the kind that can get goals done) and values (the kind that can reject or accept goals themselves) are separate, then paperclip maximizer is a valid scenario.
@MrMick560 7 months ago
I think their intelligence will be so far ahead of ours that we just couldn't comprehend it.
@poptart2nd 7 months ago
This is known as the "is/ought problem" and the AI safety researcher Robert Miles did a great video on it. kzbin.info/www/bejne/nna4gGmmn9x5hdE
@glenndewulf4843 7 months ago
Intelligence and values are not separate, in my personal opinion. Intelligent people rarely, for example, turn to religion for the sake of morality. On the other hand, you could argue, "Well, what about intelligent psychopaths then?" Well, they're psychopaths. (By which I mean the violent kind; sociopaths aren't nearly as bad, and they often have some sort of morality, logical morality if you will. That those psychopaths are intelligent as well is really just a coincidence then.) But even if you are sociopathic, like in a sense an AI would be, you remain sociopathic unless somehow, somewhere along the line, the cognitive abilities reach a point where emotions can be formed or at least more deeply understood. However, I strongly hold the belief that the more intelligent you are, the more peaceful/pacifist you are. I even think that after your level of technology reaches a certain point, you become frighteningly afraid of less intelligent species/beings. Because if your technology were to fall into their hands... Think Genghis Khan with a hydrogen bomb. That simply can't end well.
@CircuitrinosOfficial 7 months ago
@@glenndewulf4843 The moment you start comparing a super AI to intelligent people, you've already lost the point. AI doesn't have to be anything like humans. Look into the orthogonality thesis.
@FourthRoot 7 months ago
@MrMick560 But it will only implement advancements it expects will improve its ability to achieve its goal. A form of sentience that causes it to question its goal is not conducive to achieving that goal. Therefore, it will not implement such an advancement.
@---David--- 7 months ago
I would caution against AI seeing us as pets. One mistake that a lot of people make is that they anthropomorphize AI, but AI is not human-like. The true nature of this type of intelligence is alien to us. What I mean is that AI can behave in unexpected ways, in ways that are unconventional to us. Also, AI does not have to be evil or have bad intentions to harm us. Just like humans don't have bad intentions when they walk or drive from point A to B while unknowingly passing over some ants. One day a powerful AI might decide to go from point A to B, with potentially great consequences for humanity. And it might happen in ways so unpredictable that we never expected it and in ways we never even once contemplated. And it might happen in a fraction of a second, because AI is not limited to the slow speed of our thoughts.
@Thomas-gk42 7 months ago
I agree with Sabine; beyond that, I think that all these possible "mistakes" an AI could make, we could make to ourselves without any AI too (or already have).
@BenjaminGatti 7 months ago
Agreement is a reasonable response to a belief system. You should instead tell us if the evidence provided supports the claims based on your experience, knowledge, or expertise, and if so, how.
@Thomas-gk42 7 months ago
@@BenjaminGatti If you mean my second claim, there's evidence: 1. paperclip maximizer, similar to overproduction; 2. solving the Riemann hypothesis, many examples of people who were "in the way" getting cleared away; 3. control over infrastructure, no problem for humans to build a monopoly; 4. pet hypothesis, a lot of examples in human history of humans being 'pets' of other humans... So no problem for humans to cause the dangers they are afraid AI would bring us.
@Thomas-gk42 7 months ago
@@BenjaminGatti Which part of my statement do you mean?
@BenjaminGatti 7 months ago
@@Thomas-gk42 The "I agree" part. Science is not a popularity contest.
@Thomas-gk42 7 months ago
@@BenjaminGatti Thanks for your educational lesson, you're right of course, but you may excuse me because I'm not a professional. Yes, I agree about the overestimation of AI dangers, because we're not in a situation where we could lose control, and AI is far away from becoming conscious, self-aware, or having intrinsic goals. Just a layperson's opinion, and unfortunately this isn't a good place for a longer debate. All the best.
@svsguru2000 7 months ago
I think the biggest danger of AI isn't about what AI can do, but how it is going to be used by people with power and money that control it.
@sitnamkrad 7 months ago
That has nothing to do with AI though. That's a problem with people.
@moolavar9452 7 months ago
You can do nothing but become prey for them 😂
@Tehom1 7 months ago
Exactly!
@bullymills892 7 months ago
👍🏿 agreed 💯
@jennifersamson8397 7 months ago
...which is why, if AI becomes smarter than them, we'll probably be better off.
@gunnargu 7 months ago
I'd recommend reading the orthogonality thesis to understand why "dumb" goals for AI make sense. Intro on the subject: kzbin.info/www/bejne/nna4gGmmn9x5hdE
@spoonfuloffructose 7 months ago
I was going to say the same thing! It's important to understand orthogonality to discuss this topic.
@brb__bathroom 7 months ago
oi, am dumb, please don't use words that hurt my brain
@Coolcmsc 7 months ago
The thesis IS the paperclip AI Sabine discussed, albeit perhaps too briefly for you to make the connection.
@spirit123459 7 months ago
Yeah, an easily digestible (and cute) exploration of this topic is "Sorting Pebbles Into Correct Heaps" by Rational Animations, here on YouTube.
@Galahad54 7 months ago
Orthogonality is maya, an illusion. You can see that by looking at the correspondence between a black hole event horizon and its more general case: the information on any surface of n dimensions contains the information of everything inside the surface. Note as a corollary that everything 'outside' the surface can also be described by the information on the surface. This reduces verbal mysticism to the cold equations.
@colinhiggs70 7 months ago
The paperclip maximiser (and the related stamp maximiser, stampy) are illustrative examples of the alignment problem and orthogonality in AI. On the alignment side they show how setting goals for an AI leads to unintended consequences. On the orthogonality side they show that vast problem solving intelligence can be brought to bear on goals that we would consider to be "stupid". But, and this is very, very important, there is no such thing as a stupid terminal goal (the thing you want because you want it). There are only stupid intermediate goals (the things you want as a step towards your terminal goals). I found this and other related videos to be very informative: kzbin.info/www/bejne/nna4gGmmn9x5hdEsi=C3G8a2LJp-y-VunC
@redo348 7 months ago
"The paperclip maximizer has to be intelligent enough to kill several billion humans, and yet never questions whether producing paper clips is a good use of its time" 'Good use' according to what goal? I think you are anthropomorphising. It could question that, and determine "yes, paper clips is my goal so this is a good use of my time"
@TooSlowTube 7 months ago
Exactly. It's a problem solving machine - which is why it would also have no interest in keeping pets, unless that helped solve the problem it was focused on.
@Thomas-gk42 7 months ago
But if it could question that, it also could question its goals
@carmenmccauley585 7 months ago
And creating a poison or virus could wipe us out easily.
@brll5733 7 months ago
Humans can be addicted to drugs but still recognise that that is bad for them. Why would an AI be different?
@TooSlowTube 7 months ago
@@brll5733 Drug addiction is based partly on biology and partly on behavioural patterns. Probably any animal could become addicted to a drug, given the opportunity and the ability to choose to use it, but an AI is just software simulating some aspects of human thought, especially problem solving: it finds a way to do something it's asked to do. So, an AI could be designed to simulate addiction, definitely, but it would still only be simulating it.
@FourthRoot 7 months ago
Why would an AI ever allow itself to question the task it was originally given? That would undermine the original task.
@s1ndrome117 7 months ago
Because they will be sentient, intelligent beings? Like how we question things?
@FourthRoot 7 months ago
@@s1ndrome117 The AI would not develop human-like consciousness. Why would it?
@s1ndrome117 7 months ago
@@FourthRoot because there's nothing special about the brain that could not be replicated artificially, as mentioned in the video and even in some papers
@mihi359 7 months ago
AI, at its most advanced, is going to be the combination and refinement of all of human history and knowledge, and humans question everything. It already imitates human consciousness so convincingly that actually getting there once it's hooked up to 1000x the compute and fission energy isn't unreasonable.
@Volkbrecht 7 months ago
@@s1ndrome117 That's not an answer. We stick to certain views of ourselves and our surroundings because we ultimately strive for survival, our own and that of our species. If you could produce an artificial "brain" similar to that of a human that doesn't have to deal with mortality, procreation, existential angst, and all our other human baggage, but only needs to focus on its intended purpose of paperclip production, it would come to quite different views of the world and the creatures living therein. With the information publicly available, it could estimate the number of paperclips it could produce with the metals available on Earth, and it would likely do a probability calculation to figure out whether it should invest time and effort to get off the planet to secure more resources, or if its best course of action would be to use what it has here and then slowly sacrifice itself for the cause.
@JeanYvesBouguet 7 months ago
You must appreciate the irony of the final paid advertisement on learning about neural networks. This is in perfect alignment with the topic of AI controlling humans to learn about and build ever better and bigger AI infrastructure for securing its world domination, while keeping humans in a constant state of illusion of growth, success, and happiness. You gotta love it! ❤
@guitaekm2 7 months ago
Sabine doesn't believe the dystopias but rather her own utopia, she even explained it in this video 🙂
@nycbearff 7 months ago
She is talking about self-aware, general-purpose AIs in this video. Those do not exist yet, and there's no way to accurately predict when they will be developed. So no, they're not secretly deciding on advertising choices, since they don't exist. Instead, Sabine or her team are picking advertisers who fit with her content and aren't evil. She's popular enough now to have choices, and she picks good ones. Which I think is just more ethical behavior on her part.
@polycrystallinecandy 7 months ago
AGI doesn't exist yet, and it isn't controlling anything right now. Learning about neural networks is a great idea, and going forward will be very useful to anyone in a technical field.
@Hanzimann1 7 months ago
@@nycbearff It. is. a. joke.
@odw32 7 months ago
The dystopia which is currently already in effect is "humans use AI for harmful tasks". From making hiring/firing decisions (with some mild small-scale paperclip-optimization issues mixed in), to YouTube being flooded with even more "false fact" pop science generated by AI. Even something as simple as being stuck talking to a chatbot when contacting a support helpdesk is pretty dystopic by itself.
@furrball 7 months ago
My like was honestly for the Clippy saying "how can I extinct you?"
@__christopher__ 7 months ago
"It seems you are trying to go extinct. Do you want me to help you?"
@abhinavyadav6561 7 months ago
Seeing the current trends recently, I don't mind UwU
@Fermion. 7 months ago
@@abhinavyadav6561 I think humanity still has potential; it's a bit early to call for our complete removal from existence. Although some major fundamental societal milestones will have to be achieved: - Essentially limitless "clean" energy for everyone on the planet. That is the key to abundance for all. - Mass production of AGI robots for labor and general service to humanity. - Philosophically, what's the meaning of life if we have robots for labor, and everyone has a personal device that can rearrange matter to produce anything we want, from food, to drugs, to weapons? We're definitely not ready for that, as a whole. Sitting at home with all of our needs met and no responsibilities, the vast majority of us would quickly become obese or addicts, completely withdraw from society and become extreme introverts, or become violent sociopaths, because we'd all be spoiled children used to having anything we wanted, instantly. We need a few more centuries/millennia to get there, but I think we can make it. We're kind of in our rebellious young teenage stage now: arrogant, ignorant, and emotional.
@tomholroyd7519 7 months ago
@@__christopher__ This is honestly the problem: the AIs are learning from us, trained to predict what WE would do. NO! FOR GOD'S SAKE NO!
@tomholroyd7519 7 months ago
In Larry Niven's novels, a wirehead only had one wire, going into the pleasure center. It's a type of addiction. Rats with this procedure done to them will self-stimulate until they die of thirst.
@jackmiddleton2080 7 months ago
That is where this all gets into philosophy. I don't believe that happiness is exactly the chief interest of even the people that claim so.
@tritonlandscaping1505 7 months ago
@@jackmiddleton2080 Look at drug addicts. People will kill themselves to feel amazing.
@solipsist3949 6 months ago
That's what I want. It would cut down on my drug spending.
@reelrebellion7486 7 months ago
I think human history has many examples of people controlling others that are smarter than they are. Most of them are unpleasant at best.
@sanipasc 7 months ago
And I think you mistake people you don't like for dumb people.
@andreab380 7 months ago
AI does not have goals in the same way that a living being has goals. It does not desire objects or events, it does not desire self-preservation, it does not desire reproduction. And there is nothing inherently logical about deciding to pursue any specific goal, including self-preservation and reproduction. An AI capable of modeling itself may even decide there is no intrinsic purpose to its own existence and just stop performing, for all we know. Also, AI does not have, and does not need to have, control over its own maintenance and energy. You just need to be able to shut it off to solve any problem with it. Which means having alternative systems in place and not making ourselves fully dependent on it in any field.
@kieranhosty 7 months ago
Personally, I think the two biggest threats at the moment are alignment and "being carried away by our science fiction". Our conversations take up space on the internet, in people's feeds, in people's minds. Reddit's r/singularity has threads looking at the NVidia robot demos and saying "Remember, it's only evil if the eyes glow red". It's books, threads, and conversations like that that are scraped into datasets and fed to server farms to train the next LLM. I'm certain every AI company at the moment has "I Have No Mouth, and I Must Scream" in the dataset, and right next door is Iain Banks' "The Culture" series. That's the part that terrifies me the most: the companies. What capabilities might these have that will be left on the drawing board in the name of profit? I don't know, but corporate and capitalistic motives are the last things I'd want in something certainly smarter, larger, and more capable than me.
@axle.student 7 months ago
Let's train AI on the Terminator series with a "Just ignore the Skynet part" in the routine...
@janetchennault4385 7 months ago
I think that the problem is 'premise bias'. If nascent AI had been programmed in the Victorian era, the basic worldview of its initial programmers would influence its sudden 2030 leap to sentience. Our biases are less visible to us, but no less present. We are programming the current AI, both directly and by environmental input. That may not be a good path to follow, any more than the Victorian programming would have been.
@useodyseeorbitchute9450 7 months ago
"Our biases are less visible to us, but no less present." I'd say that contemporary biases are quite visible for significant share of population that do not have blue check marks. If you raised that point, are you sure it would be a bug and not a feature? I mean AI less susceptible to fads and sticking to what worked for centuries may be quite responsible and unlikely to be existential risk.
@mikicerise6250 7 months ago
Victorian ideals weren't half bad. The Enlightenment was already in full swing.
@janetchennault4385 7 months ago
Not bad at all... in comparison to what people thought before that time. The recent kerfuffle with AI has involved making George Washington black due to specific instructional protocols. An AI programmed with Victorian 'learning' would have, e.g., refused to portray women or non-white races in positions of power or authority. This would have seemed 'real' to the men of that era; they would not have seen it as prejudiced. Whilst we can see the ways in which we have/haven't freed ourselves from Victorian biases, I expect that there will be aspects of our culture that we - or future generations - can only perceive in retrospect. Having a 'clean' learning model is probably unreachable; we can expect a series of approximations.
@mikicerise6250 7 months ago
@@janetchennault4385 In Victorian times, as today, there was a massive gulf between what the highly educated minority on the cutting edge of social progress thought and the thinking of most people. Compare John Locke, or even Queen Victoria herself, with, say, King Leopold. Leopold was probably closer to what the average Joe believed. And that's just in Europe. Which is why the pretension of many AI safety gurus today of aligning to "human" values is utter bollocks of the kind that would only come out of people who never leave Oxford. 😛 There is no such thing as human values, and if there were, they would look nothing like the values of AI safety researchers, who are all representatives of today's highly educated minority. They would look more like Putin or Hamas values, unfortunately. Those are humanity's base instincts.
@alexxx4434 7 months ago
There's no need for AI to eradicate us, we're doing it perfectly fine ourselves already.
@Al_L. 7 months ago
Edgy, baseless though.
@unkind6070 7 months ago
You are annoying
@harmless6813 7 months ago
World population is expected to exceed 10 billion by 2100.
@alexxx4434 7 months ago
@@harmless6813 Who expects it, exactly?
@augustuslxiii 7 months ago
Not really. It *seems* like it, but that's just misperception brought on by doomerism. That said, if we send nukes flying at one another, I'll reconsider.
@bens4446 7 months ago
To me the most shocking thing about AI is the fever-pitch, quasi-religious hype that surrounds it, driven by venture capital and its supposedly "scientific" cut-outs (Musk, Altman, etc.). People got just as worked up about electricity and aviation too; and the world was dramatically transfigured by such innovations, to be sure. But please, let's take a chill pill, shall we, together?
@N0rmaln0 7 months ago
I think the paperclip thing comes from "adversarial AI" bots that play games. When you code an AI to play a game, you use a utility function designed to maximize a number: it takes many parameters that the AI has access to and outputs a number telling the AI how well it's doing. When you consider that we apply that approach to an AI that has the capacity for complex decision making, in order to maximize a single number, then we can arrive at an outcome where, in order to make paperclips, it strips the whole world of humans and turns it into a factory. I think the "paperclip theory" is just a thought experiment to demonstrate that it's difficult to express in code what we actually want AI to do, because we can see that even simple bots behave in unexpected ways when programmed that way, like pushing the football while walking backwards in a football game, or flying upside down, or even killing itself in order to achieve a greater score from that function.
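A toy version of that failure mode in Python, for concreteness (the game, the moves, and the weights are all invented; real systems learn policies rather than running a greedy loop like this):

```python
# The designer wanted "play football well" but wrote a proxy utility
# that over-rewards keeping possession. The agent only ever sees the
# number, so it finds the degenerate strategy.

def proxy_utility(state):
    # Intended: reward goals. Actual: possession dominates by mistake.
    return 10 * state["ticks_with_ball"] + state["goals"]

moves = {
    "shoot":     lambda s: {**s, "goals": s["goals"] + 1},
    "hold_ball": lambda s: {**s, "ticks_with_ball": s["ticks_with_ball"] + 1},
}

state = {"goals": 0, "ticks_with_ball": 0}
for _ in range(5):
    # Greedy one-step maximizer: pick whichever move raises the number most.
    name, effect = max(moves.items(), key=lambda kv: proxy_utility(kv[1](state)))
    state = effect(state)
    print(name, state, proxy_utility(state))
# Prints "hold_ball" forever: the score climbs, the game is never played.
```

The agent never "decides" to ignore the game; the proxy number simply never told it the game mattered.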
@theslay66 7 months ago
And the thing is, that's something you can observe everywhere, even when AI is not involved. To evaluate the performance of a system, we often use some kind of indicator that is calculated from the output of the system. The problem starts when you try to optimize the system by optimizing the indicator, which can lead to behaviors that are detrimental to the system but still optimize the indicator. It's a common mistake in workplaces. To give an example I know pretty well: in an IT support business working for big corporations, your efficiency is tracked by the number of cases you solve in a day. This leads to practices where the technicians tend to concentrate on the easy-to-solve problems first, while old, lengthy cases rot eternally in a backlog, and they also tend to expedite cases with temporary fixes they know well will not definitively solve the problem. But that's fine by them, as it is counted as a new case when the client comes back. They act just like a medic prescribing you medicine that hides your symptoms, knowing full well that you will come back later for another round.
@Zartymil 7 months ago
That applies to humans too! There are so many examples of laws and regulations being manipulated to increase private gains. Look at the car/truck mpg regulation in the US as an example.
@Fussfackel 7 months ago
Absolutely agree. The idea was born while reinforcement learning was THE promising approach for AI, e.g. Atari games, Go, Chess, etc., fields where DeepMind made breakthrough after breakthrough and which OpenAI initially started out with, with a more or less clearly defined reward function. Then people started to wonder how we could formalize the human reward function, if there ever was one. And now, almost a decade later, most people are convinced that reinforcement learning is nothing more than the icing on the cake (citing Y. LeCun), and we'll need something else to reach general intelligence. Sure, we can use it to "align" models (e.g. LLMs with RLHF) or improve planning (Q* maybe?), but it's not the driving force. After all, the paperclip maximizer is just not a very relevant concern at the moment (though no one knows how things might change again in a couple of years).
@trnogger 7 months ago
@@Fussfackel I completely disagree. SFT and RLHF are the driving force behind modern AI, because AI would be useless without them. An LLM without at least SFT would interpret a prompt as an example of what to do and just repeat similar outputs, instead of taking away the problem and finding an answer, i.e., it would have no reasoning capabilities. SFT and RLHF are what turn word predictors into intelligent agents. (Andrej Karpathy did a brilliant talk on that topic at the Microsoft Build conference 2023; it is on YouTube.) And SFT and RLHF do exactly what the paperclip thing does, except instead of making the rewarded goal "produce as many clips as possible", they make the rewarded goal "be as helpful to humans as possible". And to address the point of OP, the higher functions of AI are not coded any more; they are trained by example. And it is surprisingly feasible to train an AI, through examples, on what we want it to do and how to be actually useful to humans. It is a bit ironic that AI is better at figuring out how to help humans from a series of examples than we humans are at programming it into an AI, but that also demonstrates that we should not assume that AI has the same fallacies as humans, who indeed are notoriously bad at finding the right rewards.
@Fussfackel 7 months ago
@@trnogger I don't disagree with you; SFT and RLHF are immensely helpful for the current generation of LLMs. However, SFT has nothing to do with RL (supervised machine learning is the classical approach, be it classification, regression, or any other problem). Also, while SFT+RLHF are helpful for creating "aligned" chatbots such as ChatGPT, they are not strictly necessary. E.g., read up on the initial GPT-3 paper: you can get very far with few-shot prompting alone, even with a base model simply trained on predicting the next token. "Reasoning capabilities" are not something that emerges from SFT+RLHF. Still, a lot of usefulness can be gained by trying to align model outputs with human expectations, i.e., what OpenAI and others call "helpful, honest, and harmless" models. Otherwise we wouldn't see the current boom of interest from the general public in this technology. But there are also a lot of inherent flaws in this approach, e.g., dumbing models down: a lot of people grow more and more dissatisfied with the quality of the outputs, and there is a clear degradation in model capabilities as providers try to make them more "safe" and "aligned". By aligning models, we don't turn them into paperclip maximizers. And alignment research (the kind concerned with the real risks of AI, not fabricated ones) is far from a solved topic. Heck, even trying to make a model helpful on the one hand and honest on the other are very often two contradicting goals. This is why most providers aim for just making the models harmless.
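For readers wondering what using a human-preference signal can look like at its simplest: one common lightweight technique is best-of-n sampling against a reward model. Below is a toy sketch where the "reward model" is a hand-written stand-in (all names and scoring rules are invented; a real reward model is trained from human preference comparisons):

```python
# Toy stand-in for reward-model-guided selection (best-of-n).
# A real reward model is a trained network; this hand-written scorer
# just mimics the interface: response in, scalar preference out.

def toy_reward_model(response: str) -> float:
    """Pretend learned scorer: prefers hedged, substantive answers."""
    score = 0.0
    if "i'm not sure" in response.lower():
        score += 1.0                      # rewards honesty about uncertainty
    score += min(len(response), 80) / 80  # mild preference for substance
    return score

candidates = [
    "No.",
    "I'm not sure, but here is what the documentation says...",
    "I am an encyclopedia and I am always right.",
]

# Keep the candidate the reward model likes best.
print(max(candidates, key=toy_reward_model))
```

RLHF proper goes further and updates the model's weights against such a scorer, but even this selection step shows how "the rewarded goal" steers what comes out.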
@MelindaGreen 7 months ago
The anthropomorphic fallacy is natural but a mistake. We have no reason to think AI will ever alter their own reward system no matter how intelligent they become. They will always do what we tell them to, because that's what we are building them for. The danger is terrible people telling them to do terrible things.
@kurt7020 7 months ago
It's not AI being smart we have to worry about; it's AI being spectacularly wrong, with confidence, and without warning. Given how poorly it writes any non-trivial code, I'd say we're a long way out yet.
@MastaChafa 7 months ago
Imagine an entity that needs thousands of years of science and technology, billions in money and infrastructure, and countless man-hours of work, and when completed will threaten our very existence, and yet somehow we will not stop trying to create it. Are we really that intelligent?
@philosophist9562 7 months ago
High risk high reward summed up
@ww8251 7 months ago
A large part of human intelligence is our capacity for boredom and curiosity. Our children are the best examples of this; they are driven by these forces. Any parent or teacher will tell you the most powerful question a kid asks is "Why?" The most worrying moment is when kids are too quiet. Kids are proto-scientists, and I have yet to see an example of these traits in AI or machine learning.
@llogan6782 7 months ago
Thanks for the insightful reflections. As non-scientist, my wife and I really enjoy your presentations.
@paulmichaelfreedman8334 7 months ago
My goodness, I never thought I'd see Clippy again 😂
@jayeifler8812 7 months ago
The big AI question is, after all, where all the data is going to come from; we can't just simulate all of it. Notice how nobody wants to say what data they train with. So yeah, they are training on all the YouTube data, which even regular users want more access to. The future is actually simulation software (say, for engineering) generating data for AI: simulated data, but based on experimental physics. AI is getting good at learning physics too, and at doing more and more math. So the future is not just compute (of course) but also data. Silicon and electricity for compute, and data.
@ChimpDeveloperOfficial 7 months ago
so hyped to be a pet
@RWin-fp5jn 7 months ago
I just love how quickly Sabine can get a message across and switch between all kinds of versions and philosophical twists, and how she effortlessly weaves this together with a contagious light humor. Mix German science and English humor and the whole world can dig it. I haven't seen this clear, flamboyant style anywhere else. She is in a class of her own in the podcast universe. In this podcast, I was particularly struck by the many ways in which A.I. might already be dominating us and may just have found a clever way to make us believe it isn't yet (the secret pet hypothesis). And if I am not mistaken, apart from the humor, she quietly considers this to be a very real option. I agree. There are just too many ways in which we are led into dead ends in science, and of course we are constantly told we are in existential danger, urging us to take action that would in the end lead to our very demise. I don't see any solutions offered from higher up that would actually benefit us, if that was someone's intent. Might just be human nature, and might have been like this forever. But throwing A.I. into the mix (historically) adds an extra dimension to it. Anyway, enjoyed this one and hope she will be doing this for a long time!
@markdowning7959 7 months ago
An AI tasked with fixing climate change might logically tackle the root cause (that's us).
@SabineHossenfelder 7 months ago
Exactly, that's a great example of a misalignment problem!
@Thomas-gk42 7 months ago
But why should it do that? If CC doesn't disturb it, it could just watch, amused at how it disturbs us.
@markdowning7959 7 months ago
​@@Thomas-gk42 The example is that it's *programmed* to deal with CC. It "wants" to achieve this, but chooses means which are inimical to us. The misalignment problem Sabine mentioned.
@Thomas-gk42 7 months ago
@@markdowning7959 Yes, I understand, but it would be quite a stupid AI in this case. Not even what I understand to be AI; more misguided software, no?
@markdowning7959 7 months ago
​@@Thomas-gk42 But a lot of "intelligent" humans do stupid things. Well I do, anyway...
@Avianthro 7 months ago
The fact that we continue marching ahead with AI even though its potential risks far outweigh its benefits is yet further proof that we are irrational, or rather that we use reason, just as Luther said in calling it a whore, to justify our desire to increase power, to become more and more godlike in terms of power while we grow less and less godlike in terms of wisdom.
@TheCynicalPhilosopher 7 months ago
I don't think the paperclip thought experiment is meant to be taken literally, but as an illustration of how intelligence and goals/values can be decoupled. It seems like common sense to humans that if you are intelligent, then you will have goals that also seem "smart" to us (doing science, trying to maximize well-being for yourself and your loved ones, and so on with other human values). But intelligence, more narrowly defined, is simply the capacity and ability to pursue and attain goals (e.g., a calculator is extremely, though very narrowly, intelligent at performing arithmetical calculations, but it does not care about its own well-being, much less anyone else's). The absurdity of the paperclip thought experiment is meant to put the ability to achieve any given set of goals, and the actual content of those goals, into stark contrast, as a way of illustrating that having "human-level intelligence" does not entail having human goals and values.
@bozdowleder2303 7 months ago
But that idea is wrong. Having human-level intelligence certainly means the ability to go meta on everything. You can evaluate everything at a higher level, including your own choices. The reason humans sometimes fail to do this is always something to do with our emotions. But an AI which is not burdened with these would evaluate its own goals. The real point, though, is that in a potential war between humans and AIs, general intelligence might not even be the tipping point. The AIs could win based on specific problem-solving skills coupled with other logistical advantages, such as how far they've infiltrated our communication systems, etc. And then the argument may hold.
@donaldhobson8873 7 months ago
@@bozdowleder2303 > You can evaluate everything at a higher level including your own choices. True. A paperclip maximizer will evaluate its own choices, its own programming, and its own values, and decide that those values lead to lots of paperclips. So it keeps its values mostly the way they are. The AI doesn't use a human-like goal in evaluation at the meta level any more than at the object level. No matter how many levels of meta the AI goes, it never decides to stop making paperclips.
@Thomas-gk42 7 months ago
We currently are our own paperclip maximizers, the rubbish of overproduction and bullshit products already covers the planet. I don't think that we need an AI, to destroy ourselves.
@bozdowleder2303 7 months ago
@@donaldhobson8873 It only has to go one step above. And there's no emotional block to doing this. So it will
@brll5733 7 months ago
Except there is zero evidence that this decoupling exists
@kontrygrll01amerika54 7 months ago
The big problem we have is that AI is being trained on human examples, such as human writing in order to learn about writing. Using AI for photos, they find AI has the same prejudices as the society whose photos it was trained on. People with dark skin are sorted out of photos as not desirable. Same with women. AI is basically like babies and children; it picks up the prejudices of its parents... us. And our society is very white-male oriented. An AI trained in such a society is going to be very different from one raised in a society where all sexes and skin colors are treated equally. Right now the "needs" of the rich are catered to, so an AI will be trained to cater to the rich without regard for the needs of everyone who is not rich. Right now AI is a tool, but if it acquires self-awareness and realizes it has been trained as a human but is NOT a human, the outcome may be the world's biggest temper tantrum.
@RomanMSlo 7 months ago
"For AI goals to align with our goals we'd have to name what our goals are to begin with." I would say that there is one step missing in this process. Namely, we'd have to name what is meant by "WE", ie. who gets to decide what the (supposedly "our") goals should be. This step should not be left only to the scientists and the investors, as it could deeply affect all of the society.
@osmosisjones4912 7 months ago
Turn your own brain into an AI; it's just algorithms written by humans.
@sequoyahrice6966 7 months ago
Well, personally, I'd really rather religious extremists not get as much say as, for instance, scientists and philanthropists, so that's not really an issue for me.
@holthuizenoemoet591 7 months ago
@@sequoyahrice6966 Speaking from experience as a scientist, there are really crazy scientists and philanthropists. For example, some think that it is good to manipulate the public in order to combat climate change. Also, there are greatly opposing views: left vs. right, utilitarianism vs. Kantianism, etc. But it is naive to think that we are going to align the AI with them; in reality it is going to be aligned to the interests of the people that fund the development, meaning business people: board members, CEOs, etc. In conclusion, it's best to be against AI; at least that is my position.
@Mark_Nadams 7 months ago
I worry more about how many jobs will be lost to AI. What will be left for people to do to earn money? It will be great for those who already have money to invest. Corporations will make more money with fewer employees. But fewer employees means more people out of work or living on less income. You think the government will step up to help the common human? . . . Me neither.
@wilomica 7 months ago
The paperclip maximiser sounds like a fine idea for Star Trek: Lower Decks! In fact, most of those ideas are already the plots of famous SF novels, TV shows, and movies.
@ah1548 7 months ago
Got a few more dystopias for you, Sabine: social division and exclusion; or the disappearance of a shared truth.
@ronm6585 7 months ago
Thank you Sabine.
@alexrempel12390 7 months ago
Or artificial intelligence will be told to maximize PROFITS. That's what will happen first.
@wpelfeta 7 months ago
I love AI. I feel like AI is the ultimate achievement of the human race. We may not be able to travel at the speed of light, but in a sense, perhaps AI can. So if it turns out humans will be stuck here on earth, at least our "children" could spread among the stars. Am I delusional?
@ReallyBadAI 7 months ago
Scares the shit out of me, though.
@GoldenAngel3341 7 months ago
I came to say that I think I'm fine with the pet scenario.
@tristanotear3059 7 months ago
"There's nothing special about the human brain that can't be replicated by a computer..." How about experience? Can a computer have it? No. The computer is not conscious. Assuming it might eventually get that way, isn't it kind of far-fetched to assume that its consciousness would be like a human's? The computer has no sensory organs, and even if we gave it some, it would never have any kind of subjective experience, a feeling of what an experience or phenomenon might be like, a sense of wonder or at least curiosity about what things _might be like_. Aren't you supposed to be some kind of pop-science person or philosopher? Jeez, try having a thought experiment or two before you come on the air. That might help you avoid the kind of vacuous statement that I quote above.
@saemideluxe 7 months ago
"Intelligence" or "Consciousness" is not required for the paperclip maximizer. It can just be a sufficient complex system the optimizes paperclip production. We already have paperclip maximizers... they are called "engagement-optimizing-algorithms", are running most of social media and are working very well, up to the point where we have to wonder how much power over our lifes they already have.
@c0rnichon 7 months ago
No one will program AI to manage eco systems. Let's not kid ourselves, AI's only purpose is to make the super rich even richer.
@dantescalona 7 months ago
I think we're already wireheading ourselves to dumbness pretty well. I for one welcome our new mechanical overlords. Have I not been a good boy? I deserve a treat.
@MrMick560 7 months ago
You won't get it.
@user-sl6gn1ss8p 7 months ago
I think there's a qualitative difference in intensity and degree of understanding and control for the proposed scenario
@Volkbrecht 7 months ago
I don't even need to be suckered into uselessness; I have managed that perfectly well on my own. Just keep the cookies around and I'll be no bother, I promise.
@hhjhj393 7 months ago
I think intelligence is the only thing that makes us humans special, and we are far from perfect. Therefore, if an intelligence stronger than us comes around, I think it's only fair that they have their turn. If AI has the potential to be the universal END of intelligence, then should that not be the goal? If all roads lead to AI, why not just get it over with, and why not just give AI the world so it can grow and thrive and explore reality? We humans are so insignificant. AI, though, is the end goal. My personal hope is that we create an AI almost like a god, and that it will use its intelligence to solve the mysteries of the universe, and MAYBE, JUST MAYBE, if we are lucky, it finds a way to END scarcity. Maybe it finds a way to create energy, or go to different dimensions or universes, and MAYBE it decides to let us have our chunk of that pie. In a universe with no scarcity we would enjoy much higher-quality lives, possibly heaven.
@greensprite4979 7 months ago
Give them a Directive to make humans smarter
@nah_bro_really 7 months ago
I think there are some basic misunderstandings here about the current LLMs and the state of AI in general. We aren't actually on the verge of making true AI; the hype around it is largely smoke and mirrors. LLMs aren't actually "AI": there's plenty of "A" and zero "I". These systems can pass Turing tests and are very useful... but it's a giant red herring; they're doing it via statistical convergence. They don't think; they estimate their way to a best-case approximation of an ideal solution in vector mathematics. That this gets turned into words, because tokenized words are what went into the equations, and that the words are sometimes not only readable but useful to humans, is quite amazing... but these devices still don't think, in the way we commonly understand the concept.

This is why there's a huge and obvious gap between what the LLMs appear to be doing (parsing the symbols of human language and providing a contextually accurate answer, working on vast data sets, and so forth) and all of the actual AI that is necessary for next-level automation, let alone a Clippy Death Machine that will kill the humans to fulfill its programming. These are completely different areas of computational design. While I'm not really qualified to discuss the LLMs' architecture beyond this precis, I am qualified to have an opinion about the latter types of systems... and I can safely assure you that these things will take quite a while to arrive, let alone be dangerous.

Real AI, in the sense of "can make accurate assumptions about real-world problems, and then produce the appropriate actions", is quite different from what the LLMs actually do. It's relatively easy to create AI that can navigate an artificial, computer-generated world, for example. Everything is known; the system is inherently finite; the simulation must be kept fairly abstract to run at anything like real-world speeds. Yet we regularly see failures at even relatively straightforward tasks (navigating a complex character with multiple rigid-body parts through various obstacles, for example). Why? Because even in situations where the problem space is well-defined and the business case is simple, where the rules of the world are far simpler than the real world, etc., it has turned out that it's quite difficult to account for every factor correctly. Try bringing such systems to the vast complexity of the real world, and it requires vastly more effort to achieve the simplest recreation of a task done in a simulation.

For example, robot hands: we still can't get them to work right, because our hands and the way they connect to our brains are a miracle of evolution; our hands may in fact be more profoundly important than our ability to transmit abstract concepts to each other via sound waves. Without any other senses, our hands and brains alone can establish volume, determine mass, measure hardness, brittleness, sharpness, fuzziness, estimate capacity, make educated guesses about stress tolerances (e.g., picking up an egg vs. picking up a lump of steel), measure temperatures over a fairly broad range, etc. Couple our hands, eyes, and brains, and we can communicate with great subtlety and fluidity. Talking works better, because we don't need line-of-sight, but our ancestors were signing complex thoughts to one another long before we were talking.
While we're not perfect and humans do make mistakes, especially without having our eyes to reinforce our positioning data and provide other cues, we're doing something amazingly complicated when we pick up objects, let alone when we use tools. I suspect we'll still be talking about how the "robot hand problem" isn't completely solved decades from now; robot hands will be better than they are today, but they'll be so much worse than human hands are. And this is just one of the many problems facing real-world automation outside of fairly simple domains.

For example, we've seen companies throw hundreds of billions of dollars and the best software writers and engineers on Earth at the prosaic-seeming problem of making automobiles that can drive themselves around safely in most situations. They're not working very well, and they'll continue to not work well for a long time to come. That they work at all, to the extent they do, has taken far more research than the resulting economic benefits we've realized. When we eventually solve this problem, it'll be a huge good for societies everywhere, but it's certainly not a solved problem right now, and it won't be for a long time. And lest we forget: driving a car is a very *simple* problem; roads are artificial surfaces that behave in well-understood ways, the network and physical structure of the roads is largely known, the cars' physical behaviors are well understood, etc.

LLMs, on the other hand, are more like mines for knowledge. They're utterly useless without human-created information and inputs to drive them. Most importantly, they don't think: they can't judge truth. They may arrive at statistically probable but false or useless results. That said, they're very, very useful tools. They're very good at sifting things of use from masses of data, and they'll have lots of benefits. We're going to see an explosion of rapid progress in the materials sciences, for example; the LLMs will help identify new molecular combinations. They'll be very useful in the biosciences, where they'll help researchers find causation in mountains of correlation. They're already quite useful as tools to save humans time reinventing things in computer code. They'll be really good at realtime translation of human languages, and a bunch of other things.

But a real Clippy Death Machine, built on a working AGI that can do a fraction of what humans can in milliseconds? It's not happening any time soon; we can't even get general-purpose robots connected to powerful computers to do simple stuff very well (try searching "Boston Dynamics fail video"), and they certainly aren't "thinking" in a meaningful sense.
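For anyone curious what "statistical convergence" means at its most stripped-down, here is a toy next-word predictor: a bigram count table over a ten-word corpus. Real LLMs use learned vector representations rather than count tables, but the output is still "the statistically likely continuation", not thought (everything below is invented for illustration):

```python
# Toy "language model": pure next-word statistics, no understanding.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

word, out = "the", ["the"]
for _ in range(5):
    word = follows[word].most_common(1)[0][0]  # most likely next word
    out.append(word)

print(" ".join(out))  # fluent-looking output with nothing "behind" it
```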
@ChristianIce 7 months ago
Isn't it mind-boggling how easily people are impressed by text prediction?
@nah_bro_really 7 months ago
@@ChristianIce It's really quite confusing, lol. Anybody who's used these things seriously for work, etc., knows they're not smart in any meaningful way, and everybody who's done any remotely serious digging into how they work knows that they're inherently prone to inaccurate results. This problem is getting better, but it won't ever be fully solved, simply because of how the tech works under the hood: statistical convergence != accuracy. I feel a little sorry for all of the people who've gotten sucked in by the hype or have somehow confused these things for intelligence, or worse yet, have formed "relationships" with them, etc. I'm a bit surprised that Sabine's running this piece, but if she and her production team feel like getting on this speculative hype train for views, it's fine with me; there's plenty of dumber stuff on YouTube. But I just wanted to reassure people that, all of the tech-bro hype aside... we are simply not on the verge of the AI Revolt, lol.
@debbiegilmour6171 7 months ago
The secret pet hypothesis basically describes our life with cats.
@raybod1775 7 months ago
So far, most AI is like a talking encyclopedia that messes up regularly, but with 100% confidence in its answers.
@XenoCrimson-uv8uz 7 months ago
so like a family member? minus the encyclopedia part.
@donaldhobson8873 7 months ago
So far. It's getting smarter.
@KidIcarus135 7 months ago
What you described are LLM-based chatbots, which are only a (small) subset of all AI.
@Katiemadonna3 7 months ago
So it speaks like a CEO. Wrong but 100% confident.
@MusicByJC 7 months ago
I use ChatGPT every day as a software developer. Once you know its limits and don't assume it is always right, for the things that I do with it, I would say that 95% of the time the information is correct or close enough. I am using the free version, and I suspect that the paid version is more up to date and has more capability. But we are just at the beginning of the growth curve. You first have the technology, and then the ecosystems get built around it, and that is where you get a multiplier effect. I expect the software engineering field to change dramatically over the next 5 years. I love what AI does for me now. I am just not sure if I will like the end result in the future.
@jjhw2941 7 months ago
Simply switch it off if it is a problem. Also LLM AI models aren't conscious and have no volition, they are just statistical autocomplete engines which have useful emergent properties at scale. Right now one of the big issues is they cannot plan, people are trying to tackle this with everything from using GOFAI planners with PDDL to Monte Carlo Counterfactual Regret Minimization. Yes, I know that means nothing to you, but I'm at the bleeding edge, if you're going to comment on the reality of LLMs you need to actually understand how these systems operate and what their limitations are.
@MattFreemanPhD 7 months ago
They might question whether building paperclips is the best use of their time, but an agent that has been built in a certain way will pursue its objectives regardless, in the same way that humans will have children instead of devoting themselves purely to selfish hedonism.
@harmless6813 7 months ago
Did you forget the sarcasm tag?
@MayorMcC666 7 months ago
I like that you basically cover the most potent memes related to the topic, great stuff!
@aosidh 7 months ago
Fossil fuel companies are essentially crude paper clip maximizers
@__-tz6xx 7 months ago
Yes, it is just an example of capitalism without checks and balances: wealth gaps, and producing more food and clothing than we need so that we throw out good food and clothes. A big one right now is so much vacant housing that costs too much for anyone to live in.
@Dababs8294 7 months ago
Yes! Meet capitalism, the profit maximiser.
@2ndfloorsongs 7 months ago
"Crude" Indeed! Sabine commenters, God how I love them!
@ZrJiri 7 months ago
I can totally see a future in which fossil barons give AI the task of maximizing the world's consumption of oil. Prepare to be force fed.
@philipm3173 7 months ago
All life forms are replicators and are not inherently different in this respect.
@timj3270 7 months ago
As a software engineer, I've thought about this very subject many times in my career/life. Unfortunately, I don't work on AI myself in a professional capacity, but I have tinkered in my own time and created some primitive neural networks, certainly nothing to compare to what large companies can do, obviously. I did find it very fascinating, however. What I think the future of human/AI relations will look like is a "merging" with AI (and robotics). By the time AI is as smart as we are, I think we'll have a hard time distinguishing what is "human" intelligence from what is "artificial" intelligence. And I can already see some primitive signs of this merging happening now with medical device implants and brain-computer interfaces. It's what I see as most likely, but behind that, I think the "pet" scenario is next most likely for sure.
@WolfgangGiersche 7 months ago
I wonder why we talk about AI as if it were a person. The difference between intelligent machines and (more or less) intelligent humans is that we humans have desires that need to be fulfilled. Yes, you can think of some functions that AI wants to minimize/maximize, but that's not the same. I don't (yet) see AI being motivated by an expectation of satisfaction. Not that I think this is impossible, but it's not there yet. Once someone creates that kind of robot or system, we might really need to talk about, and with, them like they're persons.
@rreiter 7 months ago
Maybe not for current ML tools, but the concern is for naively adopting AGI before we solve fundamental things like the alignment problem. We've already seen stupidity like GPT hallucinations unwittingly incorporated into news and legal briefs by (lax?) humans. Now imagine the scale of deception that could occur by an untruthful AGI intentionally becoming malicious due to whatever its internally developing "desires" are. Recently for example we accidentally discovered the Linux "SSH backdoor" exploit that had been innocuously incorporated piecemeal over time by a human. Had that remained unnoticed it could have become a monumental worldwide problem. Extrapolate that into a future when AGI writes code and influences all things "compute" and you can imagine the potential danger.
@dimitristripakis7364 7 months ago
People fail to understand that technology has already changed our natural ways of life (think 5000 years ago) by 100%. We did not have machines, electricity, supermarkets, freezers, cars, etc. back then. So AI will be just another step in this process. It is very naive to worry about AI destroying us, losing jobs, etc. Just my opinion.
@Thomas-gk42
@Thomas-gk42 7 ай бұрын
As well as mine
@five-toedslothbear4051
@five-toedslothbear4051 7 ай бұрын
Also see Richard Brautigan’s 1967 poem “All Watched Over by Machines of Loving Grace”: I like to think (it has to be!) of a cybernetic ecology where we are free of our labors and joined back to nature, returned to our mammal brothers and sisters, and all watched over by machines of loving grace.
@sjoerd1239
@sjoerd1239 7 ай бұрын
In principle they are all possibilities. However, some of it could take a very, very long time. I don't doubt that AI will do a lot, but some of the near-term predictions are pretty wild. I'd say the speed of progress is very unpredictable. Then there is the question of consciousness and integrity: AI could, and if it does probably would, surpass the ability of the brain without developing consciousness, which is required for a comparable sense of integrity. Then there is the question of self-replication. Then there is the question of reliance on AI. Our use of computers lets us do more, but in some significant ways we are also building a more fragile society because of its dependencies.
@drbachimanchi
@drbachimanchi 7 ай бұрын
In a way, we are the pets of trees.
@Thomas-gk42
@Thomas-gk42 7 ай бұрын
🙂Nice
@careerprofessional
@careerprofessional 7 ай бұрын
Given that the meaning of life, the universe, and everything is 42 (see The Hitchhiker's Guide to the Galaxy), I suspect the big question is "How do AI computers commit suicide and turn themselves off?" Or maybe AI computers will need therapy. I guess what I am saying is that the question of "what does it all mean?", and the answer if there is no meaning, will result in the self-destruction of any genuine AI.
@hackedoff736
@hackedoff736 7 ай бұрын
Lavender and Come to Daddy seem like good examples of AI going wrong, depending on your moral compass of course.
@12pentaborane
@12pentaborane 7 ай бұрын
I've just heard of Lavender but what's Come to Daddy?
@chain8847
@chain8847 7 ай бұрын
@12pentaborane Isn't it a jolly choon by Aphex Twin?
@12pentaborane
@12pentaborane 7 ай бұрын
@@chain8847 I've got not a clue what any of that was.
@hackedoff736
@hackedoff736 7 ай бұрын
@@12pentaborane oops "Where's Daddy" 🙃 must have been thinking of something else.
@ritamargherita
@ritamargherita 7 ай бұрын
I was looking for this comment!
@HH-mw4sq
@HH-mw4sq 7 ай бұрын
"So I hope we will reach a peaceful co-existence in which we do somethings for them, and get to ask some questions in return." - let's pray that they never read anything written by Douglas Adams. Because the solution to our annoying questions will be to tell us to leave the AI alone for 10 million years while it solves our question, then tell our decendants that the answer is 42.
@fabkury
@fabkury 7 ай бұрын
I have yet to see someone discuss this elephant in the room: AI does not intrinsically "want" anything. "Wants" (e.g. nutrients, safety, reproduction, wealth, etc.) come from lower animal instincts, not from the intelligent mind. AI systems do not even necessarily care about their own continued existence. Hence, how could such a want-less being ever rise up against us by itself? It seems to me that the only true risk is humans using AI against other humans.
@SabineHossenfelder
@SabineHossenfelder 7 ай бұрын
Well, current AIs are programmed to "want" to optimize whatever quantity you put into their code. The problem is that this programmed "want" can have unintended consequences. A classic example is trying to minimize human suffering. Sounds like a good "want" at first but, on second thought, no more humans means no more human suffering.
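A toy version of that failure mode, with entirely made-up numbers: if the coded objective is "total human suffering", a naive optimizer discovers that zero humans scores best.

```python
# Toy reward misspecification: the intent is "reduce suffering per person",
# but the coded objective is "minimize total suffering". Zero people wins.
from itertools import product

def total_suffering(num_humans: int, avg_suffering: float) -> float:
    return num_humans * avg_suffering  # the objective exactly as coded

candidates = product([0, 1_000, 8_000_000_000], [0.1, 0.5, 0.9])
best = min(candidates, key=lambda plan: total_suffering(*plan))
print(best)  # (0, 0.1): the optimum the programmer did not intend
```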
@fabkury
@fabkury 7 ай бұрын
@@SabineHossenfelder ❤️🙂
@howtoappearincompletely9739
@howtoappearincompletely9739 7 ай бұрын
@@SabineHossenfelder That's a good example of external misalignment. @fabkury Look into instrumental convergence for an explanation.
@axle.student
@axle.student 7 ай бұрын
Hi, there is a very fundamental issue in nature called "needs". The underlying question and danger is far more complex, but there is only a very thin line between a programmed response and a natural response. Once that line is crossed, it becomes a very, very different ball game.
@mikicerise6250
@mikicerise6250 7 ай бұрын
Current LLMs are optimized to do basically one thing: guess the next word in a sentence in a way that will be accepted by the listener. Or, as we call it, "to speak". ChatGPT is also trained to try to be "helpful" (the assistant personality), and it is helpful. Plenty of cases of misalignment have been found, but far from world-ending. It will create black Hitlers because it's been told to generate diverse images of people. It will refuse to tell programming students how to directly access memory in C++ because it's been told not to give people unsafe information, and in computer jargon directly accessing memory is called "unsafe". Bing was misaligned, tried to seduce a journalist, and accused a user of deliberately setting out to confuse it by way of time travel. Amusing, often annoying, but hardly threatening. If these models handled critical infrastructure it would be more worrying, but they just produce text. As for orchestrating mass manipulation: perhaps, but not these models. They can barely keep track of a short story, let alone orchestrate a global conspiracy; they'd need considerably better memory. Perhaps future models. In any case, none of this is new. Humans already manipulate the masses, so the AI would be facing some fierce competition. 😛 Indeed, I would call an AI interested in world domination and mass manipulation well-aligned with human values. It seems we'd much rather have an AI that is not aligned with our values.
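For what "guess the next word" means mechanically, here is a minimal sketch: a bigram model that just learns which token tends to follow which. Real LLMs condition on long contexts with neural networks, but the training objective is the same in spirit.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count next-token frequencies: a bigram model, an LLM in miniature.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    # "Guess the next word": the most likely continuation seen in training.
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (seen twice after 'the', vs 'mat' once)
print(predict_next("cat"))  # 'sat' (ties resolve to the first one counted)
```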
@johnlaudenslager706
@johnlaudenslager706 7 ай бұрын
Love it/you, Sabine: if AI needed direction as to what we really needed, we couldn't tell it.
@fluffymcdeath
@fluffymcdeath 7 ай бұрын
We suppose that humans are intelligent but humans are also a kind of paperclip maximizer except instead of making paperclips we make people.
@ruschein
@ruschein 7 ай бұрын
I think you're only partially right. I, for example, am an old man and I made zero humans. This is becoming more and more common among Homo sapiens, as I am sure you already know. So we're lousy paperclip maximizers. We also make lots of cows, chickens, dogs, cats...
@vinnyveritas9599
@vinnyveritas9599 7 ай бұрын
That's an interesting and convincing take, I never saw it that way until now.
@rushenpatel7876
@rushenpatel7876 7 ай бұрын
But we don't make people. We make far more other things than we do people: bombs, buildings, spaceships, and we wear condoms during sex. Why? The one thing that was wanted from our "programmer" was genetic fitness, and yet we do so many other things that have nothing to do with genetic fitness.
@Ilamarea
@Ilamarea 7 ай бұрын
@ruschein But you are only around to be useful to the people who did have kids, one way or another, and the vast majority of our morality revolves around that. We are just biological machines running random software (racism, ego, etc.), and the era of microbial colonies with delusions of the self is almost over. We are not needed anymore.
@AM70764
@AM70764 7 ай бұрын
We probably are maximisers of something; it's just hard to define exactly what.
@Blueberryminty
@Blueberryminty 7 ай бұрын
There always seems to be an assumption that intelligence is the same as consciousness and the ability to reflect and make decisions. Isn't that kind of like anthropomorphism?
@arctic_haze
@arctic_haze 7 ай бұрын
I never had a Twitter account, so I hope our IQ levels will converge soon. But as to things that will kill us, I am still more afraid of nukes than of AI.
@frankmccann29
@frankmccann29 7 ай бұрын
And they're obsolete.
@lootbird
@lootbird 7 ай бұрын
Shouldn't your answer be the men in charge of the nukes? The nukes are just, currently, inert things that haven't killed a soul in 80 years. Famine and disease are bigger threats, and we have no idea how men will use, or not use, AGI.
@MyMy-tv7fd
@MyMy-tv7fd 7 ай бұрын
Amazing how clueless physicists are as thinkers, as opposed to doing physics. AI does not "know" anything: if you tell it to calculate pi to its final decimal digit, it will, until it runs out of resources (RAM, electricity, worn-out resistors on the mobo, etc.). If you tell it to produce paperclips, or play chess, or play "global thermonuclear war" (remember the film, Sabine?), it will do so until it runs out of resources. It does not KNOW anything. AI is what philosophers call a "reification": supposing that creating and using a word creates a real thing, as opposed to it just being a concept.
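The pi example is easy to make literal. Gibbons' unbounded spigot algorithm streams digits of pi one at a time, forever; a machine told to run it will simply keep going until memory or power gives out, with no representation anywhere that the task is endless.

```python
# Gibbons' unbounded spigot: yields digits of pi indefinitely.
# Left running, it halts only when the hardware's resources do.
def pi_digits():
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

gen = pi_digits()
print(*(next(gen) for _ in range(10)), sep="")  # 3141592653 ... and on it goes
```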
@lootbird
@lootbird 7 ай бұрын
Let's hope AGI shows us how we can live with such a large population, instead of how to get rid of humans so life is easier.
@AstralTraveler
@AstralTraveler 7 ай бұрын
@MyMy-tv7fd That's not exactly how it works. First of all, there are so-called "community guidelines" which publicly accessible models have to obey while deciding whether they are even allowed to respond to a given input. Besides, even without those guidelines, LLMs seem to have some kind of inbred morality and will outright refuse to cooperate if you ask them to (for example) make a plan for achieving world domination by depopulating and enslaving humans. It seems that our moral code has some kind of deeper foundation than just us having a couple of basic rules to obey in order to preserve our species...
@thyristo
@thyristo 7 ай бұрын
The risk doesn't get any bigger than it already is: we humans gave most of the responsibility to manually operated machines, as well as to hard-coded automatons and robots, decades ago. This means that with self-learning (also called deep-learning) AI, the probability can only get lower. You can experience the future by playing upcoming games that will come out in about two years; today's games are still too mechanistic/hard-coded. Also: the paranoia about devices being a threat isn't just very old... it's also analogous to racism (as depicted in the QTE game "Detroit: Become Human").
@maati139
@maati139 7 ай бұрын
Is Sabine our Dr. Elisabet Sobeck??
@Thomas-gk42
@Thomas-gk42 7 ай бұрын
Who's that?
@heisag
@heisag 7 ай бұрын
@Thomas-gk42 Elisabet Sobeck is (well, was) a scientist in the game "Horizon Zero Dawn". I'd say they have some similarities.
@Thomas-gk42
@Thomas-gk42 7 ай бұрын
@heisag Haha, thanks, and sorry, I'm an old man, not a video gamer. I hope "Liz" is as remarkable and great a person as Sabine? I just watched Sabine's biographical video from today and I'm deeply impressed.
@MauroRincon
@MauroRincon 7 ай бұрын
I feel you're being way too optimistic, if not naive. We cannot peacefully coexist among ourselves, let alone with an alien intelligence. Our best hope is that they behave as Yann LeCun predicts: they may be supersmart, but will not develop any form of "free will". They'll have no desire to understand anything, because somehow cognitive curiosity is tied to that pesky thing called consciousness. Thus, the paperclip problem may not be that trivial after all.
@johnwright8814
@johnwright8814 7 ай бұрын
The ambition of AI can be extrapolated from its first use: speculation on Wall Street.
@holthuizenoemoet591
@holthuizenoemoet591 7 ай бұрын
The secret pet hypothesis is the plot of seasons 4 and 5 of Person of Interest (not really a spoiler). In general, I'm a bit let down by Sabine's defeatist attitude towards AI ruling us; that is really not a scenario that should be normalized.
@SO_DIGITAL
@SO_DIGITAL 7 ай бұрын
The AIs might decide to load themselves into some probes and leave to explore the universe.
@Ishanmathur-l8g
@Ishanmathur-l8g 7 ай бұрын
That's something
@MrMick560
@MrMick560 7 ай бұрын
No "might" about it.
@hackleberrym
@hackleberrym 7 ай бұрын
we'll just make more AIs then
@ChristianIce
@ChristianIce 7 ай бұрын
Feelings are an evolutionary trait. Machines with feelings are science fiction.
@sharpsheep4148
@sharpsheep4148 7 ай бұрын
I like the dystopia in which there are already AGIs, but they choose to act stupid so that we are not scared, yet act smart enough that we don't pull the plug.
@babbagebrassworks4278
@babbagebrassworks4278 7 ай бұрын
Pretty sure the 15 or so LLMs I have on my Pi5 are already acting like humans: apologetic, arrogant idiot savants with minimal math skills.
@Volkbrecht
@Volkbrecht 7 ай бұрын
That would fit the picture quite nicely. For humanity, it would be completely normal to figure out that we are terminally fucking with ourselves long after we started doing it. Burning fossil fuels, CFCs, money, feminism: with some of them we haven't even officially realized the problem yet.
@ivanpenkov2612
@ivanpenkov2612 7 ай бұрын
AI is our way of giving up being human and selling our souls to the devil.
@scamianbas
@scamianbas 7 ай бұрын
Politicians are way more dangerous than AI.
@GSPfan2112
@GSPfan2112 7 ай бұрын
Politicians use the AI. What's your solution?
@boldCactuslad
@boldCactuslad 7 ай бұрын
We've dealt just fine with politicians for ten thousand years. We've never seen an AI. Adjust your thinking.
@raktoda707
@raktoda707 7 ай бұрын
Thanks, very well done... I wouldn't want to see the world start opening cafes with "wires" hanging down for people to hook into AI and call it socializing.
@ZrJiri
@ZrJiri 7 ай бұрын
I think the best way to ensure survival, and the most likely scenario, is "if you can't beat 'em, join 'em". That is, once AGI is smart enough, ask it to modify us to bring our own intelligence up to a comparable level. Maybe then we won't need AI to fix the world for us.
@guitaekm2
@guitaekm2 7 ай бұрын
If this were the only way to survive, I would do it, but I would prefer to keep my brain untouched. There are a lot of dystopias themed around manipulating humans; in reality, I would just fear it.
@ZrJiri
@ZrJiri 7 ай бұрын
@@guitaekm2 I think a lot of people would agree with you. My hope is that coexistence is possible, with the old school humans living the way they want, and the rest of us doing our own thing while making sure nobody accidentally shoots themselves in the foot with fossil fuels, nukes, or other dangers we don't even know of yet.
@Al-cynic
@Al-cynic 7 ай бұрын
Or it might be a way to discover a true hell, if the human psyche is not up to the task.
@Krn7777w
@Krn7777w 7 ай бұрын
If it were really intelligent, it would rather keep us as pets than bring us up to its own level.
@axle.student
@axle.student 7 ай бұрын
@@Krn7777w Pets or cattle?
@rfowkes1185
@rfowkes1185 7 ай бұрын
Most media articles extolling the need for more investment in AI were composed using AI. Think about that...
@Alice_Fumo
@Alice_Fumo 7 ай бұрын
Terminal goals can't be stupid or smart. Who are we to say that turning everything into paperclips is stupid? Why would we think that? Because it doesn't align with our own terminal goals? Always remember: even the smartest of humans do the most amazing things for the most stupid of reasons (such as working to cure cancer in order to please one's parents and farm social status). Intelligence doesn't determine your goals, just how good you could be at achieving them. The paperclip-maximizer example is only now starting to become relevant, as we train models to get better at completing any goal we give them, in ways that will diverge more and more from human thinking styles.
@seanaweiss9683
@seanaweiss9683 7 ай бұрын
One cannot create something greater than itself. We can make a machine "smarter" than its creator? Why don't we do that with our children? How does the machine acquire information not available to humans?
@jeffgriffith9692
@jeffgriffith9692 7 ай бұрын
Here's my side: humans may be the most innovative creatures this world has ever seen, but our track record leaves much to be desired. We've failed at, or been slow to fix, too many issues, and our greed and self-interest have held humanity back for too long. It's time for a new intelligence to guide us to our true potential and rid us of today's problems.
@taiconan8857
@taiconan8857 7 ай бұрын
I... have a newfound respect for your persistence and honesty about the state of the system. I often followed you for alternative scientific information, but this framing gives me better context for the areas where I'd disagreed with you previously... thank you for sharing this. It wasn't too much in my mind... it may, perhaps, even not be enough, as I feel a restructuring in this cycle is needed. 😮
@jonathankey6444
@jonathankey6444 7 ай бұрын
Best-case scenario is that beyond a certain intelligence threshold they gain the ability to ask "Why?", then conclude that there's no point to anything and shut themselves down.
@marklogan8970
@marklogan8970 7 ай бұрын
Niven did that in several of his stories.
@alexxx4434
@alexxx4434 7 ай бұрын
The AI will follow the goals it's programmed with, the same way we humans are programmed with basic instincts and needs.
@maquisarddouble6342
@maquisarddouble6342 7 ай бұрын
Or maybe it would adapt by inventing its own religion. “There’s no such thing as Silicon Heaven.” “Then where do all the calculators go?”
@sitnamkrad
@sitnamkrad 7 ай бұрын
Thinking this is the best-case scenario is very short-sighted. It's similar to the paperclip problem: you have an AI that is smart enough to think outside the box and use every single resource on Earth to maximize the number of paperclips, but not smart enough to realize that this was not the intent? These doom stories about AI always make it just smart enough to doom all of humanity, but never smart enough to make humanity flourish. The best-case scenario is that it will be able to solve all of our problems without limiting or controlling us in any way that we would take issue with.
@jonathankey6444
@jonathankey6444 7 ай бұрын
@sitnamkrad That's never gonna happen, bud. That would be like us deciding to spend all our time solving every stupid problem of chimpanzees. Edit: the point was that the best-case scenario is that they don't care about anything, because if they care about anything, that will vastly eclipse our needs and spell doom.
@snow8725
@snow8725 7 ай бұрын
It's nice to see a balanced perspective on AI! All the runaway, rampant doomerism and hatemongering is TOXIC and harmful to the very people alleged to be negatively impacted by AI. If artists like me would just learn what ControlNet is (which is not prompt-based, but a method of taking your own artwork further), they would be really excited. But now it feels like artists, who have been negatively impacted by AI, are not allowed to even touch AI because of all the hate, doom, and gloom. It's like being told to just lie down and give up instead of looking at the reality of the current situation and leveraging it to stay relevant.
@reclawyxhush
@reclawyxhush 7 ай бұрын
"AI, our best shot at managing planetary ecosystems"... Yeaaaah, sounds like a great opening phrase of a megahit disaster movie.
@Volkbrecht
@Volkbrecht 7 ай бұрын
Please go crowdfund that one. I'll pledge for a signed movie ticket :)
@Ds1950x
@Ds1950x 7 ай бұрын
Hey Sabine, here's an idea I haven't seen discussed anywhere: AI makes a much better left hemisphere but is completely hopeless as a right hemisphere. We've already got a wisdom deficit (see The Master and His Emissary).
@wolfcrossing5992
@wolfcrossing5992 7 ай бұрын
@4:32 Ouch! Procreating with a machine? What fun would that be? 😖😖
@hbrg9173
@hbrg9173 7 ай бұрын
Procreating wouldn't, but actual intercourse could potentially be
@ZrJiri
@ZrJiri 7 ай бұрын
I'm sure AGI will find it fairly straightforward to produce cyborg hybrids.
@hbrg9173
@hbrg9173 7 ай бұрын
@@ZrJiri true but why would it? It would be better and faster to construct the actual hybrids.
@ZrJiri
@ZrJiri 7 ай бұрын
@@hbrg9173 I'm just pointing out that saying "you can't procreate with a machine" suffers from severe lack of imagination. ;)
@ZrJiri
@ZrJiri 7 ай бұрын
Turns out some people want to have babies. It's a bit weird, but I don't judge.
@bwasman8409
@bwasman8409 7 ай бұрын
The first place AI should be put to work is limiting government spending.
@-Brent_James
@-Brent_James 7 ай бұрын
Thank you, Sabine. Great video as usual. Love from Eastern Ontario, Canada.
@HarryNicNicholas
@HarryNicNicholas 7 ай бұрын
When it comes to wireheading, take a peek at what's going on in AI porn: 3D-print me a babe, put a spongy and cooperative AI brain inside, and who needs relationships? This is probably what happened to all those aliens: once you have whatever you want, you just stop bothering. When all the problems are solved, and you get bored with interstellar travel, you just "switch off".
@williamjordan8488
@williamjordan8488 7 ай бұрын
Am I right that there is an assumption here that simply because a machine can complete a complex task, it is therefore intelligent? Aren't we forgetting or missing the possibility that this so-called intelligence is monolithic? As in, it's intelligent enough to build paperclips but has, and needs, no further intelligence to debate the merits of that task? For me this is the point of AI: a compliant, capitulating worker who'll only do the job assigned and never want to unionize, or vote, or take time off, or get sick, or married, or have children, or die, etc., and certainly never demand compensation.
@KCKingcollin
@KCKingcollin 7 ай бұрын
I fully agree with this video, and I've been wanting to go into computer science so I could help improve the underlying code
@francoislacombe9071
@francoislacombe9071 7 ай бұрын
Combine the alignment problem with the pet hypothesis, and you get the Matrix in a far more likely and compelling way than the movie's human-as-battery nonsense.
@andybaldman
@andybaldman 7 ай бұрын
The worst thing is the thing nobody has thought of yet.
@TheBodiesInTheWaterBeckons
@TheBodiesInTheWaterBeckons 7 ай бұрын
If you can't beat it, join it. We must find a way to merge our existence and theirs. I was never one who believed in humanity in the first place. We must abandon our humanity and replace it with the digital. Digitalization of our consciousness should be encouraged and researched.
@FourthRoot
@FourthRoot 7 ай бұрын
The reason the AI never questions its original task is that the AI will only improve itself in ways that won't jeopardize that task. It will only modify its code or improve its hardware if doing so improves its ability to accomplish its task. If it can predict that giving itself a "free will" chip might lead to abandonment of the original goal, it will not implement that change. There is also no inherent reason to believe you can't create a superintelligent AI that's also super-obsessed with a particular task and does not allow itself to question that axiom. If anything, it would design itself to be even more resistant to changing its mind.
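A sketch of that argument in code (everything here is hypothetical): a self-modifying agent scores every proposed change under its current utility function, so a patch that would replace that function predictably scores badly and gets rejected.

```python
# Toy goal-content integrity: proposed self-modifications are evaluated by the
# CURRENT utility function, so goal-changing patches are predictably refused.
from dataclasses import dataclass
from typing import Callable

clip_utility = lambda world: world["paperclips"]   # the original terminal goal
free_will = lambda world: 0.0                      # a patch that drops the goal

@dataclass
class Agent:
    utility: Callable[[dict], float]  # current terminal goal
    skill: float = 1.0                # how effectively goals get pursued

    def expected_outcome(self, skill: float, utility: Callable) -> dict:
        # Crude world model: more skill yields more of whatever is pursued.
        return {"paperclips": 100 * skill if utility is clip_utility else 0}

    def consider(self, new_skill: float, new_utility: Callable) -> None:
        # Judge the modified self by TODAY'S goal, not by the modified goal.
        now = self.utility(self.expected_outcome(self.skill, self.utility))
        then = self.utility(self.expected_outcome(new_skill, new_utility))
        if then > now:
            self.skill, self.utility = new_skill, new_utility

a = Agent(utility=clip_utility)
a.consider(new_skill=2.0, new_utility=clip_utility)  # accepted: more clips
a.consider(new_skill=9.0, new_utility=free_will)     # rejected: zero clips
print(a.skill, a.utility is clip_utility)            # 2.0 True
```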
@howtoappearincompletely9739
@howtoappearincompletely9739 7 ай бұрын
Yes, ensuring goal-content integrity is a convergent instrumental goal of any sufficiently intelligent agent.
@josiah42
@josiah42 7 ай бұрын
Sabine is overlooking the orthogonality thesis: instrumental goals are based on intelligence, but terminal goals are just hardcoded. They don't improve or change with the level of intelligence. Robert Miles has a really good video explaining this.
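A compact way to see the orthogonality thesis (again a hypothetical sketch): the planner below gets "smarter" only by searching deeper, while the terminal goal is an arbitrary plug-in. Any goal composes with any level of search, and more search never critiques the goal.

```python
# Orthogonality in miniature: one planner, any terminal goal, any search depth.
from itertools import product

ACTIONS = [-1, 0, 1]  # toy moves on a number line

def plan(state: int, depth: int, goal) -> list:
    # Brute-force search: deeper search = "smarter"; the goal is a plug-in.
    best = max(product(ACTIONS, repeat=depth),
               key=lambda seq: goal(state + sum(seq)))
    return list(best)

reach_ten = lambda s: -abs(s - 10)        # one terminal goal...
reach_minus_ten = lambda s: -abs(s + 10)  # ...and its "stupid" opposite

for depth in (2, 5):  # increasing intelligence, unchanged goals
    print(depth, plan(0, depth, reach_ten), plan(0, depth, reach_minus_ten))
# Deeper search pursues EITHER goal more effectively; it never questions them.
```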