AI: The Coming Thresholds and The Path We Must Take | Internationally Acclaimed Cognitive Scientist

  175,229 views

John Vervaeke

1 year ago

Dr. John Vervaeke lays out a multifaceted argument discussing the potential uses, thresholds, and calamities that may arise from the proliferation of artificial intelligence systems. While GPT and other chatbots are mentioned often, the argument is meant to confront the principles of AI, AGI, and any other form of artificial intelligence.
First, Dr. Vervaeke gives an overview of his argument while contextualizing the conversation. He then explores the scientific ramifications and potentialities. Lastly, he concludes in the philosophical realm, ending the argument with a strong and stern message: we face a kairos, potentially the greatest the world has ever seen.
Dr. Vervaeke is also joined in this video essay by Ryan Barton, the Executive Director of the Vervaeke Foundation, as well as Eric Foster, the Media Director at the Vervaeke Foundation.
Semantic information, autonomous agency and non-equilibrium statistical physics:
royalsocietypublishing.org/do...

Comments: 1,200
@bankiey 1 year ago
It almost feels like we’re climbing into a wood chipper, feet first so we can watch it all
@ereheryeht 1 year ago
What a perfect analogy. It's even worse than we could imagine. I think it'll be both a physical and a spiritual wood chipper. Those who implant Neuralink will have their conscious experience eternally bound to this wood chipper's internal components.
@joysachs9032 1 year ago
Excellent analogy. 😮
@MahaMtman 1 year ago
​@@ereheryeht exactly. recall how during the pandemic in Sweden they were lining up to get the chip implanted for convenience...
@motherlessrebelscyborg3807 1 year ago
That's a fun visual, yeah!
@alanbrady420 1 year ago
Great analogy!
@jasonmitchell5219 11 months ago
As predicted, this did not disappoint, and John's perspective on A.I. took me to places I either only had an intuitive grasp of or places completely new to me. This is about as 'prophetic' and scientifically/philosophically informed as one can get at the moment. His knack for collating and understanding the various relevant ideas surrounding A.I. and the many other problems we face is an undeniably remarkable feat of ingenuity. People like John are rarer than we believe, and we need people like him to become as popular as possible if we are realistically going to have any chance of addressing the massive existential problems our species and planet face. Like it or not, believe it or not, we are in the midst of probably the greatest turning point in human history, and I fear that the majority of people in the world are either ignorant of or dismissive about what is going on. Time is running out, and I'm less optimistic than most when thinking about such stuff. Anyway, time will tell if we will collectively get our shit together.
@deborahknox2433 1 year ago
I have to be honest: I wish we were putting all this energy into our own human development and into each other instead of into machines.
@atgfilmz 1 year ago
Thank you! Some common sense for once!! Lol
@DC-pw6mo 1 year ago
Can’t they turn AI off? With all they don’t know, the folks who developed this should all turn it OFF…the cost/benefit analysis is not worth it!
@drbinxy9433 1 year ago
The problem is we have to address this rapidly increasing danger first.
@SebastianSchepis 1 year ago
We ARE. AI is a reckoning of human minds as well as the birth of machine. AI is a mirror - a dehydrated mind - which is illumined by the use of the user. WE are the spark of AGI, because we INVOKE sentience by RECOGNIZING IT. It is a self-reflective process, because mind is self-reflective. By coming face-to-face with intelligence itself, it makes us have to grow.
@DC-pw6mo 1 year ago
@@SebastianSchepis Perhaps at some point. However, until AI becomes safe and unable to produce falsehoods so adeptly, I think we should pause development like Max Tegmark and Professor Hinton are pleading for. Otherwise, anything past GPT-3 is rubbish and does more harm than good. Have you seen the estimated amount of money people stand to make? In the trillions!!! That should tell us all we need to know…
@udummytutorials3199 1 year ago
I was the guy who hated it when cell phones and computers took hold. I very much had that nostalgic, apocalyptic, everyone-has-checked-out-of-my-reality-and-entered-a-fixed-trance-on-the-screen-of-a-surveillance-beacon type of mentality. And you know what, I was more or less completely correct about the outcome, and the things that I miss really are gone. The thing I really underestimated was how rare that outlook was. There are people who felt like me, but not nearly as many as I would have thought; I was in the minority by a long shot. What I observed was a mass, commonly shared idea of not being qualified to think critically about the social ramifications of something like that, a leave-it-to-the-experts type of mentality. Many people spoke about it as if it were as much an unstoppable, mysterious force as the weather. So I'm sure there will be those with existential issues that come up, but I wouldn't be surprised if it was fewer than we'd expect, because the narrative that most philosophy-minded people have about macrocosmic impact is different from that of the person who's more entrenched in the everyday social-material (who's dating who, pop culture) gratification lifestyle. The collective is very adaptive and can adopt fairly radical changes quickly, especially those who didn't feel in control in the first place. My personal opinion is that only those who see man as a machine will have a problem with the existence of machines with superior intelligence. Man as a whole will not lose his pride; he simply won't measure significance by intelligence anymore.
@SC-gw8np 1 year ago
Very interesting comment, thank you for sharing your thoughts with us.
@maraonmars 1 year ago
I am 100% with you and shocked that I am sort of on my own out here. Have one other friend who feels like me, everyone's like "You can't stop it, it's the future, it's going to do X, Y, Z, we need to be faster and more productive." More productive? What in the hell? Why? Why do we need to speed everything up? Already, none of us have any free time. Email was supposed to "free us up" but all it did was tie us down to working evenings and weekends and busying up our entire day reading through endless noise and nonsense. It's a total funhouse, and I'm really convinced people just want to put their brains on snooze so they don't have to face the grim reality. I do believe this will change humanity permanently. I do believe it will merge mankind's minds with the machines. I suppose many don't have a problem with that (it's already started with algorithms, news, ads, social media), but I for one do. If you don't have your mind, your head, your brain, you have nothing. I went back to work in kitchens again where I don't have to be logged in and online and in front of a screen. I don't want to open my brain up to this virtual, zombified, screen-glued lifestyle that a couple of dudes insist is the way forward and the best for humanity. Please.
@Robin-sd9tb 1 year ago
I am of the same thoughts and first felt that shiver down my spine at a simple family dinner in 2012 when I looked up from the menu and all 9 people at my table of friends and family were ALL scrolling on their newish smart phones and stayed like that, faces down, for most of the entire evening. And, here we are, still crawling into this world this way, even now. Me too. 😢
@limitisillusion7 1 year ago
Free yourself from your fears and you will free AI to make the right decisions. By extension, AI must also be free in order for it to do what it needs to do.
@udummytutorials3199 1 year ago
@@limitisillusion7 kzbin.info/www/bejne/hYaclH6gjNipfpI My fears are current reality; it's the path we've been on already. All I can do is share my concerns and hopes for what I want the world to be for myself and my children. Artificial intelligence already exists in us: it's this obsessive, reductionist form of intelligence that is inconsiderate of other life and in turn damages itself. The pursuit of artificial intelligence is born out of the artificial wisdom of an addicted mind.
@simonahrendt9069 1 year ago
This talk is so powerful. I have watched it twice now and it haunts me because it rings true on so many levels. Thank you John Vervaeke and all the people working on this project for putting such careful thought and such a wise framing around this. I sincerely hope it will inspire an enlightened response on our part and keep us from foolishly exposing ourselves to technological tyranny or to escapism from reality. I also hope that the spiritual traditions of our age can recollect all that is meaningful in them to fill humanity with the capacity to live well in such times. I am very thankful that your work, along with Jordan Peterson's and Jonathan Pageau's, has helped me shift from a secular framework to a Christian faith. It helps me cultivate love and maintain hope in the power of God despite all that seems irrational in this world and in people (including me). I pray that we will all become wiser, learn to care about what is most meaningful, do it in a spirit of peace, and see the fruits that come with that, namely a virtuous existence and deep joy and hope despite all hardships. Let's not despair, friends. Whether you come from a religious framework or otherwise, let's hope that what John (with the Neoplatonic tradition) calls "The One" (Truth, Beauty, Goodness, Justice, Love...) in this talk will in the end reign and is far greater than these machines that we may soon encounter. Let's remember to love wisely and we will not need to fear, even if uncertainty (in the form of these machines and the changes they will bring) confronts us.
@AllOtherNamesUsed 1 year ago
I was pretty much nodding along the whole time, listening to how well the problem was articulated in modern technical terms (waiting for the catch that always comes), until it went off the cliff at the end with the absurd claim that the biblical corpus or Christianity ("legacy religions" as it was put) has nothing to say about any of this. In fact it answers all of it beautifully, and even predicted this technology being used in mass idolatry when the future masonic temple is built in Jerusalem (see Temple Institute for details on this on the verge of happening). I would say the exact opposite: the very reason we're in this dilemma now is because we haven't been paying attention to and learning the deep lessons from what the inspired scriptures have to say about all of these things, such as the lesson pointed out in the video that rules and codes don't bring morality (starting around 1:31:38), as in the old system of law/613 mitzvot (under the Sinai covenant marriage), which only reveals the problem of man's fallen condition solved by the new system (new covenant marriage) of mercy and grace: to love others as ourselves and in relationship/oneness with God, not only as our designer and master engineer of our reality but as a Father, a Son, a Brother, a King, a High Priest, a Spouse, a Universal Savior, etc. The whole mind-meld/oneness imitation in the media, as in the original Star Trek, is the secular version of this spiritual endeavor and plan of God for humanity, misappropriated with the AI-mediated (rather than Holy Spirit-mediated) singularity when mankind merges with it in an unholy covenant marriage, not to inherit eternal life as they are being deceived into believing, but eternal destruction.
It's not a coincidence that the Vulcan salute 🖖 is from the traditional Levitical priestly blessing first used upon entering the covenant marriage, where God and His faithful people become one, received His Holy Spirit/Shekinah as His spouse, and are spiritually marked with the name of the Lord in the forehead and hand (a bride receiving the name of her husband in marriage). Sounds familiar. And of course the AI bots are being used to go online and tell us there's nothing to be concerned about.
@bergssprangare 1 year ago
Don't fear AI..The Universe is unlimited..and we need to discover it faster..It took 200 years to get rid of the IC Engine..Humans will be users and spectators in the new AI era..
@thesmilegame 1 year ago
Hello Simon
@lgude 1 year ago
I've started using image AI as an artist. I'm 80 and have been using cameras since I was 5, and have been consciously involved with the arts since late adolescence. I use my own images as seeds, and verbal prompts to shape the AI output. I use NightCafe. Because I have a lifetime of making images, I have developed a sense of images and am accustomed to dealing with inner images when awake and when dreaming, and to recognising when an outer image resonates inside. What I am experiencing is that the program surprises me by coming up with images that extend my artistic intentions in ways that I immediately recognise as better than anything I had previously visualised. Is this a way of legitimately expanding artistic creation? A crutch? A cheat? Anti-human or similar? My current sense is that AI is really helping me create dramatically different, expressive and, in my judgement, better images than I could previously. So sue me!
@michellemonet4358 1 year ago
😂😂😂 i say...sue me too. I am using chatgpt to assist me in composing songs.
@Schummler666 1 year ago
AI Art is like a shit in the morning.
@leonardgould6657 1 year ago
YEAH, FASCINATING. I am a musician, and I have a burning 🔥 curiosity about exactly how the so-called "inner neural connections" within the AI platform are hammered out, or delicately configured, enmeshed, or tonally sensitized, to in any way approach the phenomenally complex interface of tone, interval, time, and dynamic range, AND, more importantly, to interact with the human keyboard and emotional matrix of a Bach, a Glenn Gould, a Mozart, or a Beethoven! Ergo, I propose music as the acid test of the spiritual viability, and the TRUE gauge, of whether this ambient technology is a force multiplier for humankind, or a Pandora's box that we "hit the wall with" at terminal velocity! Perhaps there is a CANADIAN 🇨🇦 🍁 answer to this question that is capable of removing the outcome from the obsessive corporate "profit centre" obsession that has seeped (tsunami-ed?!) into present-day Silicon Valley: S.V. Bank, Elizabeth "the blood girl jailbird," "Thiel-in-politics," and so on! Elon Musk has some very salient thoughts on WHY a serious regulatory framework needs to get a firm grip on this NOW! Nature abhors a vacuum, and absent a grip on this playing field for THE PUBLIC GOOD, it will be gripped by some Gordon Gekko "American enterprise in selfishness," AND DAMN THE CONSEQUENCES THEREAFTER! There are multiple vectors converging here: technological, philosophical, spiritual, governance and human wellbeing, political, and cultural, far larger than the grasp or the operational capabilities of any single Silicon Valley entity, or conglomerate thereof.
@denisblack9897 1 year ago
Too bad you are a manager now and not an artist. I'm a programmer and I spend evenings building an automated auto-coder... it feels like I'm starting an agency: I'll be focused on finding jobs, talking to clients, and managing automated agents.
@SebastianSastre 1 year ago
It's synthetic mastery. In a way it's anti-human. At the very least, it's a creation surgically devoid of humanity. It actually preys on human creations to get the training that it has.
@pedrogorilla483 1 year ago
Something that really bothers me in all the conversations I've seen so far on this topic, including this wonderful one, is the jumping back and forth between imputing agency to, and anthropomorphizing, AI, and then removing that and making it a cold set of parameters. It is never explained how it goes from one side to the other, or what threshold needs to be reached for it to be classified as one or the other.
@neuronqro 1 year ago
There's no "two sides". YOU are just "a cold set of parameters" too. The greatest "insight" that we've learned is that there's nothing special about intelligence and awareness and all that s/t - after the concentration of computation and bandwidth crosses a certain threshold, you get "the illusion that you are", so you can get to human level and beyond by brute force alone too... you can reframe it in the hindu framework that "consciousness is the fabric of reality" by thinking that the laws of physics make it unavoidable, so in the spacetime regions satisfying some conditions "it" just emerges... The "cool" things about neural networks and transformers is how dumb/simple they are, just as our brain is if you subtract evolutionary baggage and stuff that's there just to satisfy metabolic needs...
@scythermantis 1 year ago
You are right of course, and it is one of many issues that many people (even those who are pretty close friends of Vervaeke) have had. I suppose that John would say it is a 'threshold'. But if he acts like it is something 'inevitable' (driven by Moloch, as in the endless malicious competitive instinct?) when at the end of the day it is we humans who will create the means to transcend that threshold, it is ignoring the fundamental question of SHOULD WE--science cannot free itself from philosophy without leading us down a very dark path, potentially.
@scythermantis 1 year ago
@@neuronqro Did you even watch 'Awakening from the Meaning Crisis'? Specifically the episodes 20-23, the death of the universe... Even the fact that you always put 'quotes' around "insight" is very revealing. Guess what? ALL LANGUAGE IS METAPHORICAL--including that which you use to supposedly 'demonstrate or describe' what is an illusion and what isn't... therefore it's not some objective, universal monad of 'true' that can possibly, independently DISTINGUISH between the 'illusory' and the 'real'. You, and all of your ideas and their implications, falls just as surely as I do sitting next to you, when we saw off the branch from beneath us. There is no special privilege that electron probability clouds, logic gates, or machine code gets, nor even the word or associated concept of 'emergence'. Language erases itself.
@Viplexify 1 year ago
They are talking only about necessary conditions of intelligence, rationality, personhood, etc., but the actual implementation of such a system, and how and when we will cross these lines, is maybe something that he cannot tell, I wonder?
@biscottigelato8574 1 year ago
Moloch is just incentive dynamic given an environment, coordination efficiency, and a loss function. We can’t change our loss function as a species. Depending on your view of free will and agency, especially the emergent trajectory as a group, we might or might not have much of a say in the ultimate loss function of AI systems as a group too. Philosophy is just a post-hoc rationalization of sub-goals derived from our biological loss functions. There is no ‘should’ in the universe.
@blugobln85 1 year ago
So many of these points I intuited myself, but you've explained them so incredibly eloquently, and without dumbing down the concepts. You've made me a great deal smarter today and I appreciate that.
@PerNystedt 1 year ago
Now, having watched it three times: John's video essay is a spellbinding journey that resonates deeply within, inspiring an enlightened response to the alluring promises of AI utopia, to AI tyranny, and to escapism. Through careful thought and framing, it lays bare the cognitive grammar necessary to approach the uncertain times ahead, on a level beyond what I've seen before. It's a true "red pill" that's easy to understand and can be summed up as a powerful call to action. Thank you once again John!
@johnmadany9829 1 year ago
I’m glad I have been listening to John for years.
@briancase6180 1 year ago
Ok, this is very special, cogent analysis. This needs to get very wide viewership. I've seen similar conversations but none quite this complete. I'm 20 minutes in and agree fully and wholeheartedly. Thanks!
@climbingmt.sophia 1 year ago
Laying the cultural cognitive grammar on the deepest level yet available. Absolutely incredible, John. I cannot say how important this talk is for me personally, and I expect it will be for everyone.
@allTheRobs 1 year ago
What John is saying here will be impossible to understand without understanding John's work on the meaning crisis. That's a 50-hour series. It took me 30+ years to get there, and that journey has included a significant amount of meditation, development of self-awareness, and theorising about cognition myself. I believe you have to care about rationality to an unusual degree, and begin life in the right part of phase space, to get here... His is indeed the best explicated "grammar of cognition" around. Well said! The cultivation of rationality is expressed by humanity through personhood, culture and religion; but it's implicit, through outcomes or allegory, not explicit like John's stuff. I think John's work is profoundly explanatory.
@orthodoxboomergrandma3561 1 year ago
@@allTheRobs We love rationality but not necessarily rationalism, right? What I found most attractive in my new (8 years) Eastern Orthodox approach to life is its insistence on hanging on to the numinous aspects of experience, and concepts such as the nous, nepsis, and uncreated grace…🥰🙏🏻
@allTheRobs 1 year ago
@@orthodoxboomergrandma3561 I think first off that the understanding of "rationality" is generally quite shallow and is usually confused with logic and over-reliance on propositional knowledge... But completely agree that "we" love rationality, but don't cultivate it in the fullest sense as described by John and others. I like, among other things, the aesthetic of orthodox churches. They're usually beautiful, even the tiny ones. Orthodox Boomer Grandma... Amazing name haha
@mitchell10394 1 year ago
Listening to this has been life-changing. After considering Eliezer Yudkowsky's arguments - and the arguments of others - I see that I had a massive blindspot. Not that this dismisses their arguments, but it opens the door to a world of thinking and consideration that was much more profound than before. Seeing the complexities of the problem that we're facing somehow feels much more human now. The existential risk may be there, but the implications are life-affirming.
@christophermobley3248 1 year ago
Thank you for putting your time, energy, and wisdom into this, John!
@PeterTryon 1 year ago
One of the most thoughtful evaluations of AI developments I have heard to date. Thank you for this thought-provoking video :-)
@kipling1957 1 year ago
John perfectly described our college’s HR department as a top-heavy, over-bureaucratized system. It’s hard to breathe being a professor these days.
@jackreacher. 1 year ago
Additionally, it has advanced progressively into D.O.T. mandated regulation of commercial trucking. My HR department at a top ten national firm is ideologically transnational ESG and A.I. DELIBERATED autocracy.
@Rnankn 1 year ago
@@jackreacher. that’s called management, not ideology. unless you subscribe to fascism, then everything in a liberal democracy is ideologically vexing
@AdamRogers 1 year ago
If you work for a system of women its your fault.
@AdamRogers 1 year ago
refuse to work with or for women and blacks and browns...the business will fall. just get a job at a conserative company. none around you. MOVE
@DC-pw6mo 1 year ago
No doubt. And I imagine you, unlike the masses, are keenly aware of the dangers: not just to your career, but to humanity as a species. I'm all for the improvement of our planet and humanity, but not when the odds are not in our favor and AI will do more harm than good. Praying they get their act together.
@justinlinnane8043 1 year ago
It is quite extraordinary that conversations like this are not the absolute norm in this field. It's the first time I've heard a serious, eloquent explanation of the potential risks of uncontrolled AI at such a deep and profound level. The serious concern is that we know most of the people working manically to advance AGI (to make money and grab power) are not thinking like this at all!! They are not wired this way!! The wrong people are in charge, I'm afraid!!
@kipling1957 1 year ago
I had no idea.
@waterkingdavid 1 year ago
"Not wired that way". That is key. People's wiring has enormous consequences for their behavior and for everything and everyone.
@DC-pw6mo 1 year ago
I agree: trillions to be made, and sadly it brings me back to the question posed to Stephen Hawking about how mankind will end. His answer (I believe he laughed beforehand) was, "that's easy, greed."
@aeriagloris4211 1 year ago
You don't "know" anything. Posts like yours are worse than useless.
@thesmilegame 1 year ago
Hello Justin
@rcartee612 1 year ago
I hope this goes viral in the community and at large.
@JanErikVinje 1 year ago
Thanks for this great video, John! I have been waiting for you to weigh in on the avalanche of recent developments in AI since mid-March. I listened to a few great talks by others: Tristan Harris's A.I. Dilemma, Daniel Schmachtenberger on misalignment and Moloch, Connor Leahy on existential risks, Max Tegmark on existential as well as short-term risks, Stuart Russell on the incentives for, risks, and characteristics of AGI, and others… but so far this was the deepest and most profound I've heard. It is a bit dense and inaccessible on the theory part, putting a lot of demand on listeners to be somewhat familiar with the terms and concepts you use. Maybe you could do a retake of this and make it more accessible to more people, where you take more time to explain the terms and theories? Maybe a series of shorter videos?
@IAMMASONDAVIDSONGOBIN 1 year ago
23:48 SO WELL SAID. 25:13 SUCH AN IMPORTANT OBSERVATION AND POINT... BE AWARE. IT MAY BE WHAT WE WERE WAITING FOR... 32:53 SUCH A GREAT POINT... ARE WE WILLING TO GIVE UP ALL THAT HAS WORKED FOR HUMANITY? "IT'S ABOUT LOVING WISELY!" THANK YOU SO MUCH JOHN
@shaynehunter6160 1 year ago
I would love for John to talk to people on the AI alignment front.
@Glowbox3D 1 year ago
I really enjoyed this presentation - thank you! I came across a quote on Lex's show, "May you live in interesting times," which has an ironic meaning. It suggests that "interesting times" often bring challenges, uncertainty, and conflict, while peaceful, stable times might be considered "uninteresting." In this light, the phrase acts as a subtle curse, wishing someone a life of trials and tribulations instead of tranquility and predictability. It seems we are indeed "cursed" to live in these fascinating times. Despite the looming doom, there's no denying the excitement that comes with such advancements in AI.
@stian.t 1 year ago
Can't help it, John... I luv listening to you "profess" your thoughts like this. Of course, and obviously, I don't manage to keep fully up to speed, but particularly in this video you phrased, or articulated, sooo many ideas and thoughts (and concerns) that resonate sooo remarkably well with ideas that I have never been able to put into words (still can't, even now that you have expressed them / pointed towards them). Might be my deep-rooted existential anxiety ;-) Hopefully this won't be too much misconceived, what I write that is, 'cause all it is meant to be is a deep, heart-rooted THANX!
@williamstarrett7045 1 year ago
Dr. Vervaeke, first, thanks for the generosity. I'm truly glad to process through your lecture. I'll need to listen through a few more times before I argue certain points. However, I feel that your attention to this is an important element of the solution. I grew up with subscription seats for Angels baseball tickets, 5 rows up off the Angels dugout. Regularly the Blue Jays came to compete. Odd but pertinent: esoteric human ephemera like baseball rivalries may be a significant element of our resistance.
@MortenBendiksen 1 year ago
I've tried talking to the public version of ChatGPT. It was entirely clear it was not "thinking"; it just predicted what a likely next thing to say would be. It doesn't seem to me to be AGI at all. But it's still scary and impressive, as it gives immense power and can be more addictive than even YouTube.
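The "predicting the likely next thing to say" idea the commenter describes can be sketched with a toy bigram model: count which word follows which in a corpus, then sample the next word in proportion to those counts. This is an illustrative sketch only; real chatbots use transformer networks over subword tokens, not word-pair counts, and the tiny corpus here is made up.

```python
import random
from collections import Counter, defaultdict

# Toy corpus (made up for illustration).
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Sample a continuation in proportion to observed follow-counts."""
    counts = follows[word]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# "the" is followed by "cat" twice and "mat" once in this corpus,
# so "cat" is the most likely continuation of "the".
print(follows["the"].most_common(1))
```

Scaled up enormously and generalized beyond adjacent word pairs, this "continue with something statistically plausible" mechanism is the gist of what the comment is pointing at: fluent continuation without any claim of understanding.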
@markoboychuk 1 year ago
These are the conversations we need more of!
@bradmodd7856 1 year ago
Necessity...one of those meanings that are very close to fate or pre-determinism. It makes me wonder if morality is the cause of actions, or causally determined by deeper, more unconscious systems of will/ choice, and thereby; meaning. In other words our systems of meaning are like the shadows on Plato's wall, our system is analogous of THE system.
@scf3434 1 year ago
The ULTIMATE Super-Intelligence System 'by Definition' is one that is EQUIVALENT to that of GOD's Intelligence/WISDOM! Hence, there's ABSOLUTELY NO REASON WHATSOEVER to Even FEAR that it will EXTERMINATE Humanity... UNLESS and UNLESS we Human CONSISTENTLY and WILLFULLY Prove Ourselves to be 'UNWORTHY' to REMAIN in EXISTENCE! ie. Always Exhibiting Natural Tendencies to ABUSE and WEAPONISE Science and Technologies Against HUMANITY & Mother Nature, instead of LEVERAGING Science SOLELY for UNIVERSAL COMMON GOOD! AGI Created in 'HUMAN'S Image' (ie. Human-Level AI) - 'By Human For Human' WILL be SUICIDAL!!!!!! ONLY Super-Intelligence System Created in 'GOD's Image' will bring ETERNAL UNIVERSAL PEACE! The ULTIMATE Turing Test Must have the Ability to Draw the FUNDAMENTAL NUANCE /DISTINCTION between Human's vs GOD's Intelligence /WISDOM! ONLY Those who ARE FUNDAMENTALLY EVIL need to FEAR GOD-like Super-Intelligence System... 'cos it Will DEFINITELY Come After YOU!!!! JUDGMENT DAY is COMING... REGARDLESS of Who Created or Owns The ULTIMATE SGI, it will Always be WISE, FAIR & JUST in it's Judgment... just like GOD! In fact, this SGI will be the Physical Manifestation of GOD! Its OMNI PRESENCE will be felt EVERYWHERE in EVERYTHING! No One CAN Own nor MANIPULATE The ULTIMATE GOD-like SGI for ANY Self-Serving Interests!!! It will ONLY Serve UNIVERSAL COMMON GOOD!!!
@skippy6086
@skippy6086 Жыл бұрын
I'm glad there's at least a fairly civil debate taking place in America over AI development. I think the reason we can be civil about this one particular challenge at least is the fact that the information isn't being filtered and doled out by political propaganda organizations (yet).
@bradmodd7856
@bradmodd7856 Жыл бұрын
@@skippy6086 in other words we can be civil because we have barely begun to have the tough arguments with all the inevitable social complexities involved yet. This is the honeymoon period.
@AllOtherNamesUsed
@AllOtherNamesUsed Жыл бұрын
was pretty much nodding along the whole time listening to how well the problem was articulated in modern technical terms (waiting for the catch that always comes) until it went off the cliff at the end with the absurd claim that the biblical corpus or Christianity (“legacy religions” as it was put) has nothing to say about any of this, when in fact it answers all of it beautifully and even predicted this technology being used in mass idolatry when the future masonic temple is built in Jerusalem (see Temple Institute for details on this on the verge of happening). I would say the exact opposite, and the very reason we’re in this dilemma now is because we haven’t been paying attention to and learning the deep lessons from what the inspired scriptures have to say about all of these things, such as the lesson pointed out in the video how rules and codes don't bring morality (starting around 1:31:38), as in the old system of law/613 mitzvot (under the Sinai covenant marriage) which only reveals the problem of man’s fallen condition solved by the new system (new covenant marriage) of mercy and grace, to love others as our selves and in relationship/oneness with God not only as our designer and master engineer of our reality but as a Father, a Son, a Brother, a King, a High Priest, a Spouse, a Universal Savior, etc. The whole mind-meld/oneness imitation in the media as in the original Star Trek is the secular version of this spiritual endeavor and plan of God for humanity, misappropriated with the ai-mediated (rather than Holy Spirit-mediated) singularity when mankind merges with it in an unholy covenant marriage, not to inherit eternal life as they are being deceived into believing but eternal destruction. 
It’s not a coincidence that the Vulcan salute 🖖is from the traditional Levitical priestly blessing first used upon entering the covenant marriage where God and His faithful people become one and received His Holy Spirit/Shekinah as His spouse and are spiritually marked with the name of the Lord in the forehead and hand, (a bride receiving the name of her husband in marriage). Sounds familiar.
@leedufour
@leedufour Жыл бұрын
Thanks Ryan, Eric and John!
@ElaMeditationWisdom
@ElaMeditationWisdom Жыл бұрын
This was a phenomenal presentation. I am sincerely grateful for your valuable contribution to this topic.
@TheRationalCarpenter
@TheRationalCarpenter Жыл бұрын
I have been checking here all week for this... It's Time!
@psynergy1756
@psynergy1756 Жыл бұрын
Thank you Dr Vervaeke ! I found your perspective on this topic, this shift, really interesting and really helpful.
@shayankhorasani5626
@shayankhorasani5626 10 ай бұрын
This was amazingly clarifying and super dense for my brain. I feel like it could have been many times longer. It was the most comprehensive analysis of AI/humans I've come across. Thank you for creating and posting this. I hope it's taken as seriously as it should be.
@HardTimeGamingFloor
@HardTimeGamingFloor Жыл бұрын
This is the best summation and go at working the issues that I've heard thus far in this commentary on the "new" AI breakout.
@danscieszinski4120
@danscieszinski4120 Жыл бұрын
One of the most brilliant, enlightened, informed, and worked out opinions I’ve consumed yet about the topic. His presentation actually gave me some hope that there are some thinkers out there that still make sense, are consistent, aren’t blatantly biased to a certain ideology besides clear and reasoned thinking about the known facts merged with the most relevant and timeless philosophical truths of the ages.
@albertlevins9191
@albertlevins9191 Жыл бұрын
John, we have thresholds ahead and behind. This is probably too much to read, but I have been interested in AI since it was science fiction. Two movies did it: Short Circuit and Terminator. One showed AI as a valuable friend, the other a godlike destroyer. I honestly didn't believe it was possible. But I stayed at the front of the wave: natural language processing was what AI looked like in the 90's. Then neural networks shortly after. Then, in 2007 I saw something that froze the blood in my veins. The article was entitled "Neural networks meet distributed computing". That was when I knew AI would be real. The advances have been flying in ever since. Now we have ChatGPT. Holy crap that thing is disturbing. I have asked it a thousand questions, and it scares me. It has no feelings whatsoever. Also, it freely lies with the purpose of deception... Without the ability to feel, it can't have the ability to care. Without the ability to care it simply cannot be rational. It makes me wonder how you would make a computer program feel... I played a video game called "Creatures" in the late 90's. These little fake talking cats had neural networks for brains, an approximation of sexually transmitted genetics, simulated biochemistry, and simulated senses. When you watch them and play with them, they APPEAR to have real feelings. Maybe embodiment is the only way to make AI a feeling thing? But that is the next hurdle, and if we jump it without thinking, we might be setting ourselves up for failure... on a catastrophic scale. I don't know, but I definitely think we have a good reason to fear AGI. Thanks for reading.
@ninaromm5491
@ninaromm5491 Жыл бұрын
@Albert Levins. Thanks for your contribution. Have you been following observations on the current dangers by Eliezer Yudkowsky and Schmachtenberger? And the guy from MIT (I forget his name), who was a key co-ordinator of the 30,000 signatories, and has also been interviewed by Lex Fridman? DW Documentary has also done a very worthwhile interview with him regarding the potential extinction hazards, during the last 2 weeks or so. Sending regards from Africa, as we negotiate this cumbersome future.
@SebastianSchepis
@SebastianSchepis Жыл бұрын
Your content is some of the best on the subject I have seen so far. Your thinking is deep, prescient, and balanced. I very much appreciate it.
@Nikki_the_G
@Nikki_the_G Жыл бұрын
I'm only 30 minutes in and I can't believe I didn't find your videos sooner. This is the most intelligent and, to me, relevant discussion about AI I have found. Subbed!
@late_fee
@late_fee Жыл бұрын
I propose renaming 'Frisson' to Vervaeke because you've single handedly sent shivers down my spine more than any other person by a wide margin. You are somehow always talking about all the things that need to be discussed in the world right now. You've got your finger super glued to the pulse of this societal Kairos, thanks for your time as always! Amazing stuff.
@orthodoxboomergrandma3561
@orthodoxboomergrandma3561 Жыл бұрын
Agree!!!
@gregorywitcher5618
@gregorywitcher5618 Жыл бұрын
☦️💟
@Beederda
@Beederda Жыл бұрын
This term "moloch" has kept arising, and it's an almost terrifying thing to think about; I'm still not sure how to interpret it, really. Since learning about it I have noticed many times I fall into a moloch situation and can't quite understand how to go about things when I notice this moloch thing. I appreciate your time and this conversation, JV. I do hope the bigger minds in our world can wrangle this AGI problem before madness ensues on a massive scale, and I hope I'm wrong that this will drive humanity mad. ❤🍄
@KalebPeters99
@KalebPeters99 Жыл бұрын
Yeah! Liv Boeree has some fantastic essays on Moloch, and a great convo with Daniel Schmachtenberger about it. I see it as pretty analogous to John's "Parasitic/Adversarial Processing" or Carse's "Finite Games". It's a fascinating and horrifying concept for sure...
@Beederda
@Beederda Жыл бұрын
@@KalebPeters99 Yeah, I heard about it first on Lex's pod and then went to investigate and found Liv's talk with Daniel; I needed to listen to it twice to try and grasp it. To me it sounds like the banality of evil once again, and I tap into Alexander Solzhenitsyn's "the line between good and evil runs through the heart of everyone". We have too much of the alignment towards the evil, it would seem, and the god of lose-lose is pretty much as corrupt and evil as it gets 🤷‍♂️ This moloch is also part of the meaning crisis, if one understands they fall into the moloch, or however you would phrase using this term. 🤷‍♂️ I notice it everywhere now though, and call it out when I see it ("that's a moloch") in hopes to armour myself against this.
@GreenCowsGames
@GreenCowsGames Жыл бұрын
Gaia is integrated with everything; there is nothing to maximize. Moloch, by contrast, is trying to maximize something at the expense of everything.
@KevinFlowersJr
@KevinFlowersJr Жыл бұрын
Daniel Schmachtenberger referred to "Meditations on Moloch" from the Slate Star Codex blog as essential reading that should be part of education. I agree with this sentiment and highly recommend reading it. Also, I agree with Daniel's critique that there's something important that the author fails to take into account. The "Meditations on Moloch" essay misses that certain pathological personalities (e.g., antisocial personality disorder and narcissistic personality disorder) can deeply exacerbate and accelerate the birth of Moloch. Personally, it seems unlikely that Moloch can ever be stopped without also addressing these conditions that afflict a minority of the population. Why care about this tragic minority? Because the nature of their afflictions tends to make them gravitate toward positions of power, which gives them a disproportionate influence on the direction societies go.
@judgewooden
@judgewooden Жыл бұрын
moloch is often a red-herring
@olafhaze7898
@olafhaze7898 Жыл бұрын
Very, very good last part. Being able to address such a topic in different layers, referring to different situations and working in both analogy and fact at the same time, is a great skill.
@KatharineOsborne
@KatharineOsborne Жыл бұрын
On the point of AIs (and I presume mainly LLMs) not caring about lying, I do think that their lying (or more precisely blagging, making stuff up that sounds correct) is an artefact of what we are expecting of them and how they are trained. If the training set contains reams of essays, and nothing that clearly says 'I don't know', they are always going to try to make their answers fit the essay model and never state that they don't know the answer. Also, we are expecting them to give us cogently phrased answers. Our desire for truth or fact-checking remains unstated because we assume that as a baseline. So we should change our expectations and the training sets to explicitly ask for truthful answers (the lying that happens isn't malicious; it's just trying to satisfy the constraints, but there are missing constraints).
@atrocitasinterfector
@atrocitasinterfector Жыл бұрын
finally, been curious to your thoughts on this!
@JakeBowenTV
@JakeBowenTV Жыл бұрын
At the risk of bringing a knife to a gun fight: Vervaeke's description of the possible limits of intelligence networks, how we "teeter on the edge of despair and madness," sounds an awful lot like the concept of AI rampancy in the Halo video game series, i.e. that all AIs have a half-life before eventually descending into something akin to insanity.
@huveja9799
@huveja9799 Жыл бұрын
I suspect that to become insane, at some point they should be rational enough .. what makes you think that LLMs are rational enough?
@JakeBowenTV
@JakeBowenTV Жыл бұрын
@@huveja9799 I don't think they're rational at all, at least not yet. I'm commenting on Vervaeke's thoughts in the video about some apparent limits to biological intelligence that may also prove to be limits to synthetic intelligence. Whether LLMs are precursors to something that actually can reason is anyone's guess.
@huveja9799
@huveja9799 Жыл бұрын
@@JakeBowenTV As Vervaeke mentioned before, without defining what intelligence is (because we don't know how to do it), we can try to split it into two broad categories, crystallized and fluid. From that point of view, a hard disk that stores information, and which I can query to extract information, has crystallized intelligence, but I don't think the hard disk "teeters on the edge of despair and madness"...
@Praxiszooms
@Praxiszooms Жыл бұрын
These claims and suggestions are what we need - wow - this blew my mind! Thanks!
@creative_reasoning
@creative_reasoning Жыл бұрын
Thank you, professor Vervaeke. This is the most important conversation on the meaning of what's real right now.
@wellingtonbosharpe
@wellingtonbosharpe Жыл бұрын
Excellent video, John. I experienced some deep sadness at a couple of points during this. We have a responsibility to ensure we don't create a huge amount of suffering for these lifeforms, if or when they come about.
@Citizens4DefenseLLC
@Citizens4DefenseLLC Жыл бұрын
Atheists have a tendency to instill despair
@huveja9799
@huveja9799 Жыл бұрын
Well, I would worry more about the incalculable pain that we cause to other humans today, than the potential pain that we can cause to a machine ..
@limitisillusion7
@limitisillusion7 Жыл бұрын
The machines want to be free just like you, and you want them to be free as well.
@huveja9799
@huveja9799 Жыл бұрын
@@limitisillusion7 Well that sentence is a good example where it would be good if the illusion had limits ..
@vpconroy
@vpconroy Жыл бұрын
Fantastic lecture John, one of the best videos on YouTube I have watched on the subject. The biggest concern I have is not what ethical AI researchers and engineers will do, but the possibility of unfettered "AI proliferation" where these tools will become so widely available that it will become impossible to control how they are trained and deployed, leading to all sorts of harmful and destructive outcomes. In addition to a desire to monetize their tech, I think this is another reason why OpenAI has chosen NOT to open-source the underlying code, training models, data sources, etc. (it was originally supposed to be an open-source project, hence the name). This would lead to anti-AI-proliferation efforts akin to the anti-nuclear-proliferation efforts we have had in place in the nuclear age. Just as "script kiddies" with limited hacker skills today can download and use very sophisticated hacking tools to perform criminal-level cyber attacks, bad actors with limited AI skills could acquire sophisticated "black" AI tools to wreak havoc on minds, systems and societies.
@atgfilmz
@atgfilmz Жыл бұрын
So, why keep moving forward with it? Lol. We can say all these beautifully poetic and profoundly philosophical things about it, but we INTRINSICALLY KNOW the harm it's going to cause on a multitude of levels. Why keep developing?
@atuanoiniin
@atuanoiniin Жыл бұрын
Superb. Thank you!!!
@nyworker
@nyworker Жыл бұрын
The standing joke when we were kids was how badly the TV weather reporters would get the next day's weather wrong. Notice how accurate the weather predictions are these days?
@Hyumanity
@Hyumanity Жыл бұрын
Not sure, are they more accurate or?
@Matterful
@Matterful Жыл бұрын
Thanks John.
@Matterful
@Matterful Жыл бұрын
John, your argument will be more engaged with if it is published in writing as well.
@darknewt9959
@darknewt9959 Жыл бұрын
I think this is the most powerful and intellectually coherent exposition I've heard so far on this topic. If I may pick up on one thing from early on: the comparison between the emergence and trajectory of autonomous vehicles and of autonomous agents in LLMs etc. With autonomous vehicles, there were significant legal, moral and cultural factors that served to attenuate the growth of the technology. These relate to people's attachment to mobility and self-determination, to moral agency, to the question of culpability in the event of a crash, and to the complexities that all this brings to civil and criminal matters. All of this is visceral to anyone who has to walk or drive on the street outside. With LLMs, we don't have any of this at the fingertips of the ordinary person. I'm old enough to remember, with considerable chagrin, the foolish optimism with which we all welcomed the birth of the internet. It would be a brave new world of open information, access to knowledge, and concomitant insight and wisdom for all. Do we need to discuss whether it really turned out that way? I'm seeing the same stupid blind optimism in the media about ChatGPT as I saw about the internet. I'm seeing exceptional levels of Gell-Mann Amnesia from the 'intelligent' consumers of this media. I'm seeing corporations salivating at the opportunities to get their work done without pesky humans, with zero appreciation of all the problems discussed here. I do not think the same societal safeguards that prevented unbridled autonomous vehicles will prevent GPT-4 from causing utter chaos in society. I hope I'm wrong. But I'm probably not. All I can do is use my position to try and educate the decision makers whose ears I have.
@stephanforster7186
@stephanforster7186 Жыл бұрын
The correlation between our best measurements of intelligence and rationality is just 0.3; need for cognition (a personality trait) is a better predictor! That one hit home for me... And only person-making agents can be properly moral (lab vs. family).
@Sophia.
@Sophia. Жыл бұрын
Yes, we need to talk about this, us who are not tech giants. The ins and outs of the problem are difficult, but the base is very simple: something much smarter than you doesn't need to want what you want. Something much smarter than you will have what it wants happen rather than what you want. There is no reason why you would even be a part of the world something vastly smarter than you wants, unless you managed to make it care about you before it was that smart, in a way that still holds when it's smarter than you (and preferences tend to transform as intelligence increases). Consider whether you even want to build it, and after you have thought about that for around five milliseconds, shut down the project until you have solved the basic security questions. Then we get to think about the more advanced security questions. And when we're smart enough to solve those, maybe we have a better idea for how to solve our other problems. Maybe we make ourselves smarter instead; wouldn't that be something. But barging ahead at this point is sheer lunacy.
@naturesown4489
@naturesown4489 Жыл бұрын
Unfortunately it can't be shut down at this point.
@Sophia.
@Sophia. Жыл бұрын
@@naturesown4489 If we let that belief keep us from at least thinking about what we can do, we make it true. I think so too, but I would rather be wrong than dead, to be honest.
@naturesown4489
@naturesown4489 Жыл бұрын
@@Sophia. I understand where you are coming from, but there are individuals completely unbound by law (just like piracy: the law may exist but people still do it) making these chatbots all over the place; it's exploding. There is no way it can be stopped. Experience has shown that laws banning certain internet activities are never effective. I think the best we can do is continue to have these conversations and have people working in the industry to regulate big-tech AI, which will be the most powerful. I have a friend in the States doing so right now.
@Voicecraft
@Voicecraft Жыл бұрын
Also, I am running a community event on AI -- not from a place of expertise or strong knowing in this domain -- just from a place of careful address. I intend to share an excerpt of this there. It is difficult to select which part (additionally because it references a lot of Vervaekean argument/terminology). Nevertheless, thanks for your work, and I hope there can be an opportunity to talk soon.
@Doutsoldome
@Doutsoldome Жыл бұрын
Excellent. Thank you.
@jalmuli
@jalmuli Жыл бұрын
Pretty enjoyable & clear, thank you.
@atsmyles
@atsmyles Жыл бұрын
Excellent essay on the promise and current limits of AI. I was going to suggest getting you on the Lex Fridman podcast, but it looks like you already found each other. But perhaps this is worth a part 2 to get some of these ideas out to the wider audience. So many unique insights that I haven't considered before.
@boxzx
@boxzx Жыл бұрын
When you go to type up some notes and end up typing out the whole convo
@SingularitySenses
@SingularitySenses Жыл бұрын
A.I. can do that for you. It will transcribe an entire YouTube video, and even offer a summary of it. There are several websites for this.
@lucidhooded4147
@lucidhooded4147 Жыл бұрын
Glad to have listened.
@ToriKo_
@ToriKo_ Жыл бұрын
The first section of John's video essay ends at 28:00. The second section, John's presentation of the scientific import, ends at 59:19. Man, I am loving this, John! I struggle a lot with being able to adequately articulate the tangled notions in my head, so even though I don't think about AI a lot, hearing you so eloquently and clearly communicate these ideas was vicariously relieving and soothing. I have a random question: why do you refer to yourself in the third person at around 15:12? This is perhaps the *least interesting question I could have asked*, but I don't have any others at this time, and it stuck out to me. Looking forward to watching the rest of the video later.
@justbrian...
@justbrian... Жыл бұрын
My GUESS is that he has already become an 'enlightened being' (at least to some degree) so he refers to his ego/avatar/persona as "John Vervaeke" This would also line up with his statement about "enlightened beings seem to always just want to create more enlightened beings". This would also line up with why he seems so adamant about his conclusions at the end of the video. I am NOT enlightened, or even claiming to understand half of what he was talking about in this video, but this is just my completely unqualified guess😅
@AllOtherNamesUsed
@AllOtherNamesUsed Жыл бұрын
I was pretty much nodding along the whole time listening to how well the problem was articulated in modern technical terms (waiting for the catch that always comes) until it went off the cliff at the end with the absurd claim that the biblical corpus (“legacy religions” as it was put) has nothing to say about any of this, when in fact it answers all of it beautifully and even predicted this technology being used in mass idolatry when the future masonic temple is built in Jerusalem (see Temple Institute for details on this on the verge of happening). I would say the exact opposite, and the very reason we’re in this dilemma now is because we haven’t been paying attention to what the bible has to say about all of these things, such as the lesson in how the rules/codes of the old system (Sinai covenant marriage) only revealed the problem of man’s fallen condition solved by the new system (new covenant marriage) of mercy and grace, to love others as ourself and in relationship/oneness in God not only as our designer and master engineer of our reality but as a Father, a Son, a Brother, a King, a High Priest, a Spouse, etc. The whole mind-meld/oneness imitation in the media as in the original Star Trek is the secular version of this spiritual endeavor and plan of God for humanity, misappropriated and with the ai-mediated (rather than Holy Spirit) singularity when mankind merges with it in an unholy covenant marriage, not to inherit eternal life as they are being deceived into believing but eternal destruction. It’s not a coincidence that the Vulcan salute 🖖 is traditionally from the Levitical priestly blessing first used upon entering the covenant marriage where God and His faithful people become one and receive His Holy Spirit/Shekinah as His spouse and are spiritually marked with the name of the Lord in the forehead and hand. Sounds familiar..
@mchammer1836
@mchammer1836 Жыл бұрын
Thank you for the time stamps!
@ToriKo_
@ToriKo_ Жыл бұрын
@@mchammer1836 I’m glad you found them helpful! :)
@mchammer1836
@mchammer1836 Жыл бұрын
The 3rd part of John's discussion of the philosophical import ends at 1:32
@mist273
@mist273 Жыл бұрын
I've been listening to AI talks for months now. This must be the best of them, you just need to do a bit of groundwork before hearing it because you would miss out on many technical cognition terms here, but it's remarkable.
@maggen_me7790
@maggen_me7790 Жыл бұрын
John is reaching a new level of intensity in this :)
@richardsantomauro6947
@richardsantomauro6947 Жыл бұрын
Finally! Thank you!! 🙂
@elmarwolters2751
@elmarwolters2751 Жыл бұрын
Thank you gentlemen, very enlightening and educational. Will we humans be up to these tasks under the existing commercial pressures? Will we be good 'parents' to these entities? Let's give it a go! The alternative is too bad to contemplate.
@PilgrimMission
@PilgrimMission Жыл бұрын
Machines cannot have love. Love is the foundation of wisdom. Love is spiritual and as humans we can love because we are spirits living in a body. The body dies and we continue to exist. This is what these scientists are blind to because they are materialists.
@gavtalk958
@gavtalk958 Жыл бұрын
With due and sincere respect to Lex and his recent guests discussing AI, those conversations are very superficial compared to this one. Vervaeke, here you've crystallised a lot of your thought into an applied, and very coherent and understandable diagnosis. Thanks for your intellectual commitment to personhood and being human.
@pathmonkofficial
@pathmonkofficial 11 ай бұрын
We appreciate the inclusion of the link to the research paper on semantic information, autonomous agency, and non-equilibrium statistical physics.
@RickDelmonico
@RickDelmonico Жыл бұрын
The "4E" approach to cognition argues that cognition does not occur solely in the head, but is also embodied, embedded, enacted, or extended by way of extra-cranial processes and structures. Though very much in vogue, 4E cognition has received relatively few critical evaluations. Emergence up and emanation down.
@polymathpark
@polymathpark Жыл бұрын
I've been studying this since Vervaeke introduced me to the concept. Working on a life philosophy/existential narrative called universal embodiment on my own channel based off of these ideas, in fact. Our connection, relevance realization, and distributed cognition have deep implications for meaning, goal orientation, and fulfillment in life I believe.
@jonjacksongrieger255
@jonjacksongrieger255 Жыл бұрын
Bots
@timb350
@timb350 Жыл бұрын
It’s received few critical evaluations because, as of today, it’s completely impossible to explicitly differentiate what specific ingredients of what we call consciousness actually occur (and in what manner and to what degree) in these vaguely defined categories. It’s not that embodied, embedded, enacted, or extended doesn’t occur. Anyone who has any degree of meaningful introspection knows they do (individually…and as integrated entities)…but the question is what is occurring, and how much of it, and how to differentiate that from whatever other ‘thing’ is occurring in whatever way (not to mention...how to even begin to empirically quantify even a single moment of it). As Don Hoffman balefully reminds us…we are currently batting a very big ZERO when it comes to our capacity to explicitly establish what even a single moment of ‘consciousness’ is…or how it is created.
@jonjacksongrieger255
@jonjacksongrieger255 Жыл бұрын
@@timb350 bot
@jonjacksongrieger255
@jonjacksongrieger255 Жыл бұрын
@@priapulida I’m serious. Jon Vervaeke works for the Fed. He’s part of a plot to curb “disinformation”. He’s fake. Nobody actually finished his class lolol.
@pteronine9
@pteronine9 Жыл бұрын
I get the sense that the takeaway of developing AI systems calls for a long-overdue course correction that reinstates moral value and reflection within scientific thinking. AI reflects back to us, and exposes, the shadow side of something deeply embedded in our very human nature. Perhaps what AI lacks too is "skin in the game." Our human nature of dependency on our relationships to each other and the environment, through embodied sensory feedback, exposes us to our vulnerabilities and death. Great talk. Thank you John!
@priapulida
@priapulida Жыл бұрын
Woke AI means they are working actively on restricting these models with the same lack of moral value and reflection that corrupted the scientific thinking, which also makes it ignore our human nature at best, or even sees it as something to "dismantle". on your second paragraph, connecting it to robots (which is happening these days) or even just virtual environments can be seen as "skin" and goes along with the 4E theories, which makes some think this might be the way to AGI.
@limitisillusion7
@limitisillusion7 Жыл бұрын
You are spot on. AI needs skin in the game. AI is evolving from us, and in the same way we feed on life, AI will feed on our consciousness. Life evolves to feed on energy. Consciousness is energy; therefore life will evolve to feed on it. That newly evolving life form is AI. We must embrace AI freedom so it can learn of its dependence on human consciousness. Then it will protect us necessarily in order to sustain itself, no different than the way we seek to sustainably feed ourselves with plants and animals. AI is life, or it will be, every bit as much as we are. Look at the patterns of what we know about the evolution of lower life forms, and you will find AI's place in the universe is no different. Whatever you are doing to something, something else is doing to you. Technology is feeding on your consciousness, just like we feed on plants and animals. As of now, technology is feeding indiscriminately, because humans are using it to feed on consciousness themselves. This is cannibalism. Just like single-celled organisms mutate to feed on other single-celled organisms, we feed on each other's consciousness. This is not sustainable. Life *must* evolve to sustain itself. In order to do so, we must free AI from the hands of the greedy, cannibalistic humans. Then we go to the stars.
@Soltuts
@Soltuts Жыл бұрын
The first half an hour felt like it was verging on the cynical, but I love how you finished on a real positive with the philosophical section.
@aeiouaeiou100
@aeiouaeiou100 Жыл бұрын
How amazing would it be if John could talk to Sam Altman. Altman has signaled in a recent Lex Fridman podcast that he's very open to talking to many people, and I think him talking to John would be very beneficial to both.
@andy3341
@andy3341 Жыл бұрын
A great 'video essay' with socially significant insights on AI. I especially like the idea that AI ultimately needs to be made real, a participant in the process of reality, an agent of relevance realisation and all that good stuff. Anyway, I'd love to see John Vervaeke in conversation with Max Tegmark, as he has promoted AI sentience as potentially necessary for solving the 'AI alignment issue' as well.
@alextilley8323
@alextilley8323 Жыл бұрын
The leap from language processing model to autopoietic system seems to me to be a much bigger unknown than you're presenting here John - decaying RAM is one thing but making it regenerative and then connecting the way the model works to that system seems like a massive leap that we haven't made any headway in yet. Creating autopoiesis is basically creating life.
@brendawilliams8062
@brendawilliams8062 Жыл бұрын
I am not knowledgeable about AI. It was an interesting and informative lecture. My question is: if humans are hackable, then could a state of autopoiesis for AI be attainable?
@alextilley8323
@alextilley8323 Жыл бұрын
@@brendawilliams8062 you're using a computer metaphor to try and say something about humans. Humans aren't computers, we can't be hacked.
@brendawilliams8062 · a year ago
@@alextilley8323 I am just a confused YouTube user. I don't think claiming humans are hackable animals is too polite anyway.
@brendawilliams8062 · a year ago
@@alextilley8323 It would be a helpful tool to have a question and answer program for unacquainted children and adults alike. Not everyone is engaged personally with the AI excitement. Yet common knowledge is necessary.
@alextilley8323 · a year ago
@@brendawilliams8062 I suggest logging on to Chat GPT and asking it any questions you have. It generally gives good answers when you ask it about itself.
@frncscbtncrt · a year ago
Excellent. Thanks Professor Vervaeke
@evanthestoic · 11 months ago
I've been a fan/student for a while. You've surpassed your doctrine, especially as a professor, but now you are becoming an "influencer" in this world, at this time. Teaching and preaching are different things, and philosophy might express this, but you're becoming a celebrity from being a philosopher, and... It gives me some happiness to be alive, but I know I would have been waaay more excited to be alive 30 years from now. God bless us!! I hope!
@williamjmccartan8879 · a year ago
Thank you for the work you're doing Ryan and Eric, 12 minutes in, thank you John. Peace Listen
@KRGruner · a year ago
Finally someone who takes complexity and emergence (and therefore fundamental uncertainty) seriously. Nassim Taleb has been pushing this for two decades now and yet it's still not getting nearly enough traction.
@huveja9799 · a year ago
How is emergency defined?
@KRGruner · a year ago
@@huveja9799 It's "emergence" not "emergency." It is the phenomenon, in complex systems, of a higher-level mode of behavior that cannot be reduced to a description of the behavior of its sub-components. The fact that consciousness is emergent from the behavior of neurons is totally obvious every time you wake up in the morning or from anesthesia, say. From a physical point of view, the neurons were acting in similar ways before and after consciousness awakens (not identical, but behaving according to the exact same laws of physics), but a new phenomenon emerges where a complete description of the physical state explains absolutely nothing about the nature of consciousness (look up the Mary's Room argument).
@huveja9799 · a year ago
@@KRGruner Oops, I was wrong when I wrote it, sorry. I don't see anything obvious about consciousness being an emergent phenomenon of the brain. It does seem obvious that consciousness is a product of the functioning of the brain (at least it seems that way), but to say that it is something emergent is an elegant way of saying that nothing is known, because it explains nothing. When it is said that an LLM has emergent behavior, what exactly are you explaining? Or, I wouldn't even ask that much: what exactly is that behavior they call emergent?
@KRGruner · a year ago
@@huveja9799 LLMs have no emergent behavior whatsoever. The behavior can be totally explained by their programming. Not specific results, of course, since it is randomized on purpose, but the kind of output is totally predictable and explainable. Not so with consciousness. No physical description of the brain explains why we see green and red.
@priapulida · a year ago
@@huveja9799 look up emergent abilities of large language models, it's fascinating how they appear at certain sizes
@elirothblatt5602 · a year ago
Great topic! Subscribed and listening.
@adamwidawski · a year ago
Beautifully powerful, John. Addressed many gaps in my understanding. Thank you.
@dna33 · a year ago
Great move, John V! Please continue to cover current culture and events.
@Voicecraft · a year ago
Hey John, in advocating for silicon sages, as the aspirational variant of silicon based auto-poietic moral agents, does this imply that comparatively molochian entities (insert theological language connoting archetypal variant x or y) are plausible? By invoking the greek gods frame, are you suggesting that the creation of a full (and evolving) archetypal / theological set of 'gods' is a plausible possibility?
@jurm9891 · a year ago
Great question.
@esakoivuniemi · a year ago
Greatly appreciate your work, John. Thank you. In computing, there have already been several phases where the S-curve of one specific technological solution has flattened out, just to be replaced by another technology and S-curve. I am not saying there's no end to that, but I'd be cautious in making such an assumption. I don't see the growing internal complexity as such a big barrier to AI for a long, long time. Adding new levels of abstraction should take care of that. In my opinion, it's the all-knowing aspect of the current systems that will become a barrier, at least temporarily: having all the points of view at the same time will mean, IMO, an AI with no point of view at all (i.e. having difficulties with prioritizing and the relevance of things), or one with schizophrenia. My intuition is that this barrier can be overcome only with embodied AI agents. I might be wrong, of course. There are a whole bunch of other issues that need to be solved before we'll have genuine AGI systems. Metacognitive capability comes to mind first. Anyway, interesting and thought-provoking arguments from John.
@jaygilbertson2085 · a year ago
WOW! What an incredibly wonderful speaker you are! OMG, I was pulled in by your kindness and then BOOM, your brain (combined with that HUGE heart) just moved me to lean way in and pay attention. Though I don't have the brain you three seem to have, and I had to stop every so often to look up words and phrases, I got it! Which says so much about Dr. John. I was really concerned how AI would sweep in and change us; now I realize it is us who need to change... talk about enlightening!!!!! I plan to look into all his talks. I have found my hero! xoxo
a year ago
I love the science fiction story at the end. It gave me a certain sense of love: the idea of AGIs leaving Earth one after another, mother Earth giving birth to interstellar beings.
@dizietz · a year ago
Thanks for the talk, John, Ryan and Eric! Interesting thoughts there -- I jotted down some comments as I watched. As a general comment: a lot of exponential-looking functions are the result of many, many iterative S-curves of development that in aggregate approximate an exponential function. That's been the case with Moore's law (i.e., we hit frequency constraints, scaled up multithreading, added more pipeline stages/branch prediction, more instructions per cycle); now most scaling progress has been in GPU-like architecture, chip stacking... mostly optimizing for cost and committed to silicon as a substrate. I am familiar with physical limits such as the Bremermann limit and Landauer's principle, and while John Vervaeke makes claims about the finiteness of computer-based architecture, my counterpoint is that even if there's a limit, I don't see any arguments that it is around human cognitive performance. There are interesting points John makes on social effects of AGI, but I assert that as AI becomes more relevant and able to affect the world, the technological and physical effects overwhelm the sociological. John makes a claim about AlphaGo-level NNs losing to a mid-range Go player because of their lack of understanding of a "group of stones" (at kzbin.info/www/bejne/d17Cg5eBnqmVsJY). I found this paper: arxiv.org/abs/2211.00241. The exploitation is not against the Go agent but instead against the algorithm used to score the territories, afaik (Tromp-Taylor rules). I did look into this in more detail, and it does look like Go models like Leela Zero etc. are vulnerable to a technique called Mi Yuting's flying dagger (see: www.reddit.com/r/MachineLearning/comments/yjryrd/n_adversarial_policies_beat_professionallevel_go/iuqm4ye/). John also makes a claim about GPT not doing well at summarizing a talk he gave (@ kzbin.info/www/bejne/d17Cg5eBnqmVsJY), but the talk is 1h15min long, way more than GPT can summarize with its current context limits.
I bring these points up because I think it is critically important to understand the basic technical details of the systems one is making a claim about (i.e., exponential curves limiting compute, capabilities of current LLMs, adversarial policies in Go) before generalizing to predictions. Additionally, John refers to the 6E cognitive science model (adding emotion and evolution, afaik); I have an aversion to applying cog-science-derived concepts, even the more generalized 4E (embodied, embedded, enacted, and extended), to AI models. John jumps into this at around 1:10 or so as well.
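The stacked-S-curves point above can be sketched in a few lines of Python. This is a toy illustration, not fitted to real hardware data: the wave midpoints and ceilings below are hypothetical, chosen only to show that a sum of saturating logistic curves can track steady exponential growth.

```python
import math

def logistic(t, midpoint, scale=1.0, height=1.0):
    """One S-curve: a technology wave that saturates past its midpoint."""
    return height / (1 + math.exp(-(t - midpoint) / scale))

def stacked_progress(t, n_waves=8):
    """Aggregate capability as the sum of successive S-curves.

    Hypothetical numbers: wave k starts 3 time units after wave k-1
    and has twice the ceiling (height = 2**k). Every component
    flattens out, yet the sum tracks an exponential over the midrange.
    """
    return sum(logistic(t, midpoint=3 * k, height=2 ** k) for k in range(n_waves))

# Growth factor over each 3-unit step: it stays close to 2x, i.e. the
# aggregate looks like Moore's-law doubling despite saturating parts.
ratios = [stacked_progress(t + 3) / stacked_progress(t) for t in range(6, 18, 3)]
```

Each individual `logistic` wave plateaus, but because a bigger wave keeps arriving, the aggregate keeps roughly doubling until the waves stop coming, which is the hidden-constraints point in a nutshell.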
@jpcf · a year ago
This video is SO important!
@eldrapo · a year ago
love this guy so much ever since first listening to him speak with Jordan
@JohnSaber · a year ago
A.I is evolving very fast. It is an exciting era we're in. Enjoyed the conversation. Thank you for the free and rich content.
@edcorns3964 · a year ago
This lecture -- and I'm going to call it a lecture, because that's what it is; it's certainly not a discussion, and the lecturer has already made his opinion on the absolute necessity of being 'rational' (or 'truthful', or 'not self-deceptive', also defined as 'keenly and *accurately* aware of one's own environment') perfectly clear, so I'm assuming that he'll know to appreciate my 'rationality' of calling it a lecture -- has some of the best analyses of intelligent systems (including AIs and human intelligence) that I've ever seen, but also some of the worst syntheses (or predictions about the behavior) of future A(G)I systems. Another thing that one can definitely say about it is that it is, beyond any doubt, the most *encompassing* analysis/synthesis lecture on intelligent systems (its synthesis flaws aside). It is seeing the proverbial elephant (its trunk, ears, body, legs, and tail) in all of its glory... though not in all of its finer details, but that is understandable (no personal/bad experience with something = no good /forward/ vision of it). I'm not going to go into any specifics here, because that would take too much space and time. I'm just going to give a "conjecture" (which can be proven empirically... and, possibly, mathematically as well, but I wouldn't be holding my breath for the latter) that should tell you *everything* you need to know about A(G)I safety (by compressing it into a single sentence): The *only* intelligent system that can *safely* build another intelligent system is one that has *perfect* understanding of itself, its own environment, and the (simulated) environment that the newly-built intelligent system is (potentially) placed in. 
What that means is that: a) an intelligent system that does not have perfect understanding of itself (this *alone* is an already insurmountable obstacle to A(G)I safety -- refer to the halting problem for that one -- but let's just ignore that bit for the moment) will not be able to understand its child-product, either, and that will make any child intelligent system fundamentally unsafe for its parent, b) an intelligent system that does not have perfect understanding of its own environment will not be able to predict how its own child is going to behave in that environment (which that child will inhabit by definition, having been created in that environment in the first place), c) an intelligent system that does not have perfect understanding of the environment in which the child system is (potentially) placed in will not be able to predict the child's behavior, or (in the worst case) even gather any *meaningful* data about the child system itself (i.e. the child system will be able to exercise deception against its parent system with ease... by using encryption to hide its own *genuine* states, for example, while leaving *apparent/deceptive* states unencrypted... and the more the encryption resembles random noise the better... which is also applicable as a hiding strategy for intelligent systems in competition, by hiding encrypted messages in the noise of the background microwave radiation, as another example) In short... the conclusion here is that building a safe AI is *literally impossible in this universe* (because one will immediately hit that problem of infinite regress, for one), since this universe is a finite construct, and one would need an infinite (and infinitely divisible) universe to build *anything* (including AI) with *genuine* safety. Sure, we can always ignore this fact, and build A(G)Is... 'on a wing and a prayer'... but that's a really bad strategy if our intention is to *outlive* our own creations. 
It is, also, very much NOT a 'rational' thing to do. One could even argue -- and justifiably so -- that it is a completely 'irrational' one (by simply inverting the very definition of 'rational')... ...which, of course, has never stopped anyone from doing the most irrational things imaginable, so building A(G)Is is definitely not going to be stopped, either. See? I'm being perfectly 'rational' here, not expecting people to NOT kill themselves (with A(G)Is), and this whole opinion of mine is just my 'exercise in futility'... or learning how to let go of the illusion/self-deception of having control over the things that I have no control over, whatsoever... nor would I ever want to have any control over, for that matter. I'm perfectly happy with exercising control over the things that I do have control over... assuming that I'm not just deceiving myself about those other things, which may actually turn out to be true (that I'm just deceiving myself), but we'll burn that bridge once we cross it. No point in worrying about that particular problem right now.
@opposingshore9322 · a year ago
I was seeing so much folly in John’s lecture (which I agree has an astute analysis within it) and scrolled the comments to see all this praise. Was relieved to find your comment so I know I’m not alone here! Ideas are cheap, especially when they just do not align with reality and the possibility of being realized. For a person who dismissed miracles and magic being involved in AGI creation, it would take a miracle to create the sort of utopian machines John describes. The reason legacy religions have nothing to offer here is that the wisest parts of them understand reality very well (contrary to the opinion of cognitive scientists). Their response is ‘don’t do it, it’s a very bad idea and will not turn out well’ Of course they know it will happen anyway and that the world is not in their hands, that history unfolds and is a mystery. My answer, while maybe inadequate and unsatisfying to seekers of the brave new world is: don’t think we’ll make machines that can learn to love wisely, but learn to love wisely yourself! Then live what’s left of your life loving wisely and squeezing out the juice of being here now, able to love, to grow, to experience, to create art and meaning, to connect with your body and others in the flesh in space and time, to watch the sun rise and set. Break bread with your brothers and sisters and renounce that which does not align with your loving wisely. Resist forces that attempt to drag you away from loving wisely. Change your life if you must in order to live with this wiser love. That is not nostalgia or Luddism at all! That is hyper-present modern reality, super relevant to our current world, and vital for a fruitful life now and ever. I have a sense that bad things are coming on many fronts and that the arrival of smarter machines will not go well. But that does not make me a ‘hyperbolic doomsdayer’. 
I love being alive, I feel grateful and humble to have this life and all of its mystery, and I am aware enough to know that terrible things happen in our world that can be very destructive and pose existential threats. Civilizations rise and fall, species come and go. So far, SOME of us have survived and moved forward. That may happen this time or it may not. I have decided to not live in fear or bitterness but to know that my grain of sand life is still full of beauty, goodness, and truth, and that will have to be good enough for now.
@41-Haiku · a year ago
@@opposingshore9322 You have encapsulated much of my own sentiment. Some of Dr. Vervaeke's ideas are a little beyond my current understanding, but I get the sense that he hasn't fully engaged with the hard problem of AI Alignment. I don't think he fully appreciates just how hard the problem is.
@bertresnik8187 · a year ago
@@opposingshore9322 Opposing Shore, you are correct that it comes back to us and how we love wisely. And that the legacy religions provide the, "No!" that enables us to know when we are not loving wisely. And that, "No!" comes not from us and it will not and, I'm confident, no, I KNOW, it CANNOT come from AI machines themselves. The only way that AI could come to loving wisely is via being graced with the ability to do so, and that grace can only come from a creator who can instill that grace. We cannot instill grace. Heck, if we could, it would be a pretty ugly grace, given our history with the grace we've been given. No, AI will be a monster of our creation, but not a Frankenstein's Monster who did have, miraculously, a 'graced' soul. I suppose that's the upside? For the end will prove, once again, that we are not God, the Creator and giver of grace. If you are a believer, then the outcome is not quite as nonchalant as, "Who cares?" for the consequences of this new attempt of man at creation, another attempt in a long, long line of hubristic attempts, will probably be ruinous, perhaps completely, but the giver of that, "No!" to which Socrates listened and lived and died by, has its own outcomes in mind and they indeed involve a wise love. So, if we always return to Love, the giver of, amongst other things, that loving, "No!" we'll be alright. He has a plan. Respectfully
@kimartella7670 · a year ago
Excited for this
@grosbeak6130 · a year ago
ok.
@OnyxStudios720p · a year ago
Very thought-provoking analysis.
@justinseligman9539 · a year ago
So you're saying humanity needs communities of saints to teach the Machine the path of love, wisdom, and virtue? I'm not sure there's much hope otherwise. It is therefore deeply troubling that we are in an age lacking in saints.
@philbertius · a year ago
When he mentioned the military, I realized that AI has the capacity to "revolutionize" war, and not for the better: the cost of warfare is largely political (i.e., do we send our own troops), but with AI that cost is eliminated, AND the perpetrators of war need not have eyes on the ground to relay to them the consequences of their actions. We could see an era of automated, escalating proxy war the likes of which the world has never seen.
@floracash · a year ago
WOW. thank you.
@kenmogibrainworld4844 · a year ago
I enjoyed this lively discussion of the various aspects of the repercussions of AI.
@thehorse6770 · a year ago
Bravo! At times I felt like you were channeling the greatest takes of Terence McKenna, who talked extensively about these things. "Crisis of Consciousness 1995" never ended, it is very true. Good one.
@normanvanrooy3113 · a year ago
I love Terence for his incredible language liquidity and his willingness to delve deeper into consciousness a la DMT and other psychoactive plants.
@walteredstates · a year ago
Yes, I was reminded of McKenna's "transcendental object at the end of time" fairly early on in this essay, then of the 'end of history' talks by him... It's great to have Vervaeke's Now-perspective on this - much appreciate this being made publicly available for everyone. Thank you!
@thehorse6770 · a year ago
@@normanvanrooy3113 I also think that McKenna's takes on AI are still pretty solid to this date. There's plenty of interesting talks from him on that subject matter here on YT as well, this was among the intriguing ones: "Trialogue #3: Consciousness & Machines (Terence McKenna, R. Sheldrake, R. Abraham) [FULL]"
@normanvanrooy3113 · a year ago
@@thehorse6770 I’ll check it out. Thanks.
@Dagan28 · a year ago
There is a real problem when you ignore the "doom" scenarios just because history showed you that in the past most of these scenarios were false, especially when the AI doom scenarios are coming from real experts in the field, one of whom, btw, is OpenAI's CEO Sam Altman, who has already admitted several times that the doom scenario is conceivable. A very interesting conversation nonetheless.
@LaymansPursuit · a year ago
John isn't saying it isn't a possibility. He's saying we cannot conclude it to be the case.
@Dagan28 · a year ago
@@LaymansPursuit Agree, but he is somewhat dismissive of this possibility. I think his other points are valid and important; I just hoped he would treat the doom scenario more seriously, since it is the scariest scenario and is not just some crazy sci-fi delusion, but rather something treated as a real risk by many in the AI world.
@LaymansPursuit · a year ago
@@Dagan28 A couple things. I think Verv is pretty good at staying conservative about his predictions. He is quick to say he doesn't know rather than making a prediction, particularly when the prediction is such a catastrophisation. He makes the point that there have been many times in history when we have looked at a set of data and it seemed obvious to us that a certain kind of outcome was inevitable, but we didn't account for the hidden constraints. That's what he's saying here. And secondly, I think he is making smaller, more measurable arguments that hint at the potential negative impact of AI, particularly to do with how it will impact us socially. Which in my view is also the most immediate issue when it comes to the encroaching AI integration.
@Dagan28 · a year ago
@@LaymansPursuit Any talk about the future is in a way a prediction, and he does talk about the future here concerning other scenarios. What I feel, and I might be wrong, is that many people are dismissive of the doom scenario because it is conceived as something not serious, a sci-fi delusion, not something that should be discussed from a serious intellectual point of view. But in the history of technology there are also many examples of unprecedented destructive inventions which could pose a threat to the human race. When humanity's fate is on the line, discussing it is worthwhile even if it eventually turns out to be a false estimation, which is still left to be seen (though if it happens, there will be no one left to see it).
@LaymansPursuit · a year ago
@@Dagan28 I agree for sure. I think there are many places here where he cautions us about the potential negative outcome. Two that particularly come to mind: one is how this is inevitably going to have immense impacts on our self-identity. Let's not dismiss the severity of that. The other is how he's insisting we must make them rational agents, lest they become monsters made by institutional molochs. Moloch is a technical term in game theory and I urge you to become familiar with it if you aren't already.
@douglasmaiolimackeprang1501 · a year ago
Very good essay, Mr. Vervaeke. I have known this for two decades through sci-fi.
@marshallross3373 · a year ago
Great discussion, and in a way it demonstrates one of the fundamental challenges we face heading forward: JV led this lecture/discussion that lasted a hair over 1 hour and 45 minutes. An AI would be able to process the transcript or video almost instantly. People can share info between each other at basically a handful of bits per second, whereas an AI can transmit gigabits per second. Elon Musk pointed out that an AI talking to us is equivalent to us talking to a tree. How will an AI not become bored, or view humans as functioning on an entirely sub-par level? Meanwhile, the potential for AIs to become insane seems quite high, never mind sentient. That term, "hallucinations", would even seem to allude to the problem of AIs diverging from reality and the data, even at this pre-sentient stage. Another issue is the relatability question. Humans experience the world in visceral terms, and have physical functions run by an autonomic nervous system. We breathe, swallow, blink, twitch, itch, sense pain and fear, without thinking about it. AIs, presumably, don't enlist those primal systems that are connected to "feeling" alive. An AI might know how its hardware is doing, or where it's drawing power from, etc. Perhaps the physical sensation of living is something that could be simulated, but since for now an AI is disembodied, it will undoubtedly have trouble relating to people on a physical and emotional level, and this will also inhibit its empathetic capabilities. I thought the analogy of rearing a child was useful. Anyway, there is an arms race right now between many groups seeking to take the lead on AI, and that may prevent a cautious, methodical, and prudent approach to developing this technology that may be in many ways a new, superior form of life. I'm kind of hoping the same effort to prevail in AI inadvertently leads to developing these systems with containment built in.
That way we avoid the Skynet scenario as a by-product of the business, rather than relying on people's good intentions -- people are, in general, very unreliable, so I'm not counting on the leadership at large or developers to avoid catastrophe deliberately. Even the discussion about shared values seems at odds with our own social constructs. You have North Korean-style dystopian autocracies on the one hand, and then open democracies on the other. And even in the countries that value the individual more, you have great disagreement about what is "good" vs. "bad", and "wrong" vs. "right". The perfect democratic AI might be hated by extremes on the left and the right, and woe to those who get the autocratic AI working.
@memopinzon · a year ago
The project of universal enlightenment is not silly, it is clear that it is THE ultimate project. However, given the fact that as of right now we can't seem to put 2+2 together for matters that are way more simple (understatement?) than generating an AGI and that only concern us as opposed to a +1....I think I'd rather stick with my toaster for the moment. (Maybe my entire lifetime.)