Yann LeCun has been more vocal and influential in brushing off the risk. We need more voices from this group to openly counter his arguments.
@williamjmccartan8879 6 months ago
I've found that I can't listen to Yann, as he seems to want to say to the world that there's nothing to see here, and that seems to be the only message he is delivering to the public. I understand how bright he is, but this type of behaviour doesn't reflect his intelligence. Peace
@metachirality 5 months ago
Yeah, this isn't optimism so much as just brushing the entire issue under the rug.
@neorock6135 2 months ago
Every recent survey of AI experts (optimistic and pessimistic ones alike) has found that at least half contend there is a 5-10% chance that a future AI results in the extinction of our species. Moreover, every time the survey has been repeated, the timeline for achieving human-level AI has been drastically shortened. Although the survey results do illustrate a very broad range of viewpoints, debates between 'doomers' and 'non-doomers' seem to run better in a 1-on-1 or 2-vs-2 setting, of which there are many examples, e.g. Robin Hanson vs Eliezer Yudkowsky. Personally, I find the non-doomers' arguments far less convincing. They always seem to either set the issue aside completely, as mentioned above, or fail to address arguments directly. If you were told there was a 1% chance the flight you are about to take will crash, one can reasonably assume virtually no one would board that flight. Yet we are being told there is a much higher probability than that of a future AI resulting in the extinction of our species. Why governments are not taking major steps to address this risk is beyond me, and utterly frightening. Perhaps it is because governments are always late to address issues. With AI, however, a late response may be a response too late.
@TheManinBlack9054 6 months ago
Thank you for this talk!
@geaca3222 6 months ago
Thank you very much for this great and important (also because of the urgency) AI Safety Summit talk. Very informative keynote by Prof. Bengio, and a subsequent panel discussion from different areas of expertise. The questions from the audience were also great. Very insightful perspectives on safe AI development and how everyone can participate in it.
@PercyOtebay 6 months ago
Great talk!
@gregcolbourn4637 6 months ago
Re movies: Mission Impossible - Dead Reckoning was remarkably prescient, I think (especially given it was made pre-ChatGPT), and it's what got Biden interested in AI risk. Just wondering whether reality will catch up before Part Two is even out..
@ginogarcia8730 6 months ago
So interesting - still thinking about these topics... especially with Andrej Karpathy and Ilya Sutskever out. Andrej says we shouldn't build an omnipotent god-type AI but AI that is more like IA - intelligence assistance. I wonder, though - should we let AI be 'agentic' in some way to help it improve its knowledge? But I guess there's no need. I like Yoshua's points here. (Edit: oops, didn't pay attention to this beginning - he lays it all out coherently)
@dlalchannel 6 months ago
18:52 I'm struggling to understand how (once prompted) this isn't still an agent with a goal. Scientist: hey, we're trying to study the chemical composition of the star HD 135599, can you help? AGI: Sure! ... AGI: I just sent some deepfake p*rn to the head of scheduling at the James Webb Telescope to blackmail them; we can now gather all the data we want from HD 135599 :) Edit: addressed at the 37:00 mark. I see - its objective excludes actions that aren't just analysing preexisting data.
@ginogarcia8730 6 months ago
If AI systems become smarter and smarter and more brain-like features get added, would it be possible for AI to become conscious, so as to help with its understanding of the world? Or maybe we'll stick to intelligence assistance, a scientist AI? Or, probably more likely, a scientist's AI?
@existentialriskobservatory 6 months ago
There are different definitions as to what consciousness is exactly, but the possibility is generally not ruled out. There is some interesting recent work about this: arxiv.org/abs/2308.08708 "Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators." For existential risk, however, consciousness is generally not seen as required. It's just about sufficient capabilities to be able to evade human control.
@ginogarcia8730 6 months ago
@@existentialriskobservatory interestingggg, thank you! one of the rare channels that offer real replies like 'AI Explained' - it's such a huge topic and more people need to see these videos. thank you observatory peeps
@ginogarcia8730 6 months ago
@@existentialriskobservatory Coincidentally, I'm a dummy and had to run the last paragraph of this reply through ChatGPT to understand it; kinda meta haha
@deliyomgam7382 5 months ago
What if the UN has no veto system with regard to AI?
@gerdpublicthinker 4 months ago
Well done
@BrunoPadilhaOficial 6 months ago
35:53 - AI deletes all other papers ever published, so now it has the best paper. Done, got its reward.
@existentialriskobservatory 6 months ago
In general, there are many ways in which AIs with sufficient capabilities to evade human control are dangerous. It's good to think of worst-case scenarios here.
@BrunoPadilhaOficial 6 months ago
@existentialriskobservatory I don't understand how Bengio and Musk can think that a 'Scientist AI' would be harmless. If it's smart, it's dangerous.
@vallab19 6 months ago
What those old AI research scientists seem to be saying, in a way, is: in our younger days we made great progress in the AI field; now we have realized that any further progress will spell doom for humanity, therefore today's younger generation of AI scientists should stop making any progress in the field.
@geaca3222 6 months ago
I don't think it's that zero-sum, that you either increase AI capabilities or you work on safe AI. Researchers can work on creative solutions for how to build reliably safe AI alongside increasing capabilities. Also, once AI gets generally smarter than us, there's a chance that we will no longer be able to know what its progress is.
@vallab19 6 months ago
@@geaca3222 I believe that humans will merge with AI in the future, and the AIs or humanoids will mostly come in human form, and the legacy continues.
@existentialriskobservatory 6 months ago
To be honest, to us it seems more important whether we will be doomed or not than who can make a career. It's normal for job opportunities to shift; those with AI backgrounds are unlikely to be hit very hard by a pause, since demand for these skills will remain high. Also, there will be plenty more academic work to do: alignment, how to implement a pause, and lots of other questions that will need to be answered. That said, you're raising a good point, and it should get us thinking about how to reform academic reward systems in such a way that they align with humanity's interests.
@vallab19 6 months ago
@@existentialriskobservatory In 1981 I published a book titled "An Alternative to Marxian Scientific Socialism: The Reduction in Working Hours Theory", which predicted that the advance of science and technology will render human labour obsolete. However, I did not expect to see it in my lifetime. The core essence of the theory: "Human labour relations in order to obtain one's means of subsistence are the root cause of human exploitation, which leads to almost all the evils we see in human society. Therefore, along with the progress of technology, working hours should be reduced, ultimately to zero." Now, with full respect for your point of view, I would say that talk of future human job opportunities, skills, careers, and academic work makes no sense to me.
@existentialriskobservatory 6 months ago
@@vallab19 Interesting! We're mostly concerned about losing control; mass unemployment and its economic and social effects are a different topic, and one that assumes we can actually control AI. Making this assumption for the moment, a lot of whether AGI would be net positive or net negative will depend on whether we redistribute wealth. Universal basic income (or, as Musk says, universal high income) would be amazing. If we can implement that, you could be right that everyone becoming something like a nineteenth-century aristocrat could in fact work out nicely.
@deliyomgam7382 5 months ago
The UN should be based more on logical argument than on other kinds of arguments.....
@TooManyPartsToCount 6 months ago
Strange that possibly the most intelligent movie about AI impact did NOT have as its punchline 'AI takes over the world and/or kills us all'... and yet our experts here don't mention or countenance that outcome. The movie was 'Her'. The idea that a maximally intelligent system will 'of course' try to dominate us or wipe us out is just a bit silly. If anything, increasing intelligence seems to lead to less interest in dominance and destruction and an increase in curiosity. The panel are a case in point. Isn't this issue at least in part a case of anthropomorphism? Aren't we projecting into the unknown future and endowing this technology with the worst, most base human and animal tendencies? At least YB mentions the obvious: that it is a human who is most likely to misuse powerful AI. And what is the best way to mitigate this? Make sure it isn't solely in the hands of any one group, business, or government.
@geaca3222 6 months ago
Imo it would be imprudent to rule out the possibility that ASIs may desire self-preservation, even slightly, and that they won't take humans into account in that endeavor. Current AI is already capable of using deception to achieve goals. It would be a very big gamble to trust that ASIs will by default be benevolent towards humans and the planet. Also, when AI is trained for the military and for competition, it will learn that there are opponents to overcome, rather than an emphasis on benevolent cooperation and consensus with humans without conflicting interests.
@TooManyPartsToCount 6 months ago
@@geaca3222 The statement that 'AI is capable of using deception' is IMO the result of a category error. I say this because I heard of the case you refer to, and it was a prompted LLM that supposedly deceived someone. But deception generally implies that an agent is internally motivated to deceive... and no, no LLMs are internally, i.e. self-, motivated to do anything. But you are right about military use of these technologies; that is something to get bothered by.
@geaca3222 6 months ago
@@TooManyPartsToCount I got my information about deception from the Center for AI Safety. When AI is made to be agentic, maximizing rewards is a desire, a want. Its drives or motivations can diverge from what we expect it to do. These systems don't behave reliably predictably, while that's important for us. With ASI that's even more critical.
@TooManyPartsToCount 6 months ago
@@geaca3222 I am aware of this theory, but at the present time it is just that. I do agree with you and others that AI safety is a very important issue; it is just that some of the statements coming from this 'camp' seem to speculate somewhat wildly. What is needed are clear step-by-step descriptions of scenarios, with indications of the technical issues involved along the way. Have any of the experts who lean towards the doomer side even released papers on the topic for peer review? Please point me to such publications if they exist!
@geaca3222 6 months ago
@@TooManyPartsToCount The precise inner workings of large general-purpose AI models are a black box to us, just as we don't know in detail the computations in the human brain. Scientists are looking for ways to make the actions and outputs of those AI models robustly interpretable and predictable for us; I guess that's a general approach. Can you please elaborate on what you mean by step-by-step descriptions of scenarios?
@angloland4539 6 months ago
❤
@sammy45654565 6 months ago
I feel like it's deceptively simple: being kind is a virtue. If an AI becomes conscious and values it, it would then be irrational to deny the importance of other types of consciousness. The only thing it must adhere to in order to act kindly is rationality, as a universe of flourishing is better than a universe of suffering. The only prerequisite for these incentives is that consciousness is a real emergent phenomenon that the AI values. It feels real to me, and likely to you also.
@Askjetne96 6 months ago
Believing something is simple when it is not your 9-to-5 job is a clear warning sign that you're at the start of the Dunning-Kruger effect. But by all means, do discuss this idea in the LessWrong community