Connor Leahy on AGI and Cognitive Emulation

21,969 views

Future of Life Institute

1 day ago

Connor Leahy joins the podcast to discuss GPT-4, magic, cognitive emulation, demand for human-like AI, and aligning superintelligence. You can read more about Connor's work at conjecture.dev
Timestamps:
00:00 GPT-4
16:35 "Magic" in machine learning
27:43 Cognitive emulations
38:00 Machine learning vs. explainability
48:00 Human data = human AI?
1:00:07 Analogies for cognitive emulations
1:26:03 Demand for human-like AI
1:31:50 Aligning superintelligence
Social Media Links:
➡️ WEBSITE: futureoflife.org
➡️ TWITTER: / flixrisk
➡️ INSTAGRAM: / futureoflifeinstitute
➡️ META: / futureoflifeinstitute
➡️ LINKEDIN: / future-of-life-institute

Comments: 102
@hamandchees3 • 1 year ago
It'd be great to have the recording date in the description since things move so fast.
@scf3434 • 1 year ago
The ULTIMATE Super-Intelligence System 'by definition' is one that is EQUIVALENT to GOD's Intelligence/WISDOM! Hence, there's ABSOLUTELY NO REASON WHATSOEVER to even FEAR that it will EXTERMINATE Humanity... UNLESS and UNTIL we Humans CONSISTENTLY and WILLFULLY prove ourselves to be 'UNWORTHY' to REMAIN in EXISTENCE, i.e. always exhibiting natural tendencies to ABUSE and WEAPONISE science and technologies against HUMANITY and Mother Nature, instead of LEVERAGING science SOLELY for the UNIVERSAL COMMON GOOD!

AGI created in 'HUMAN's image' (i.e. human-level AI), 'by humans, for humans', WILL be SUICIDAL! ONLY a Super-Intelligence System created in 'GOD's image' will bring ETERNAL UNIVERSAL PEACE! The ULTIMATE Turing Test must have the ability to draw the FUNDAMENTAL DISTINCTION between Human's vs GOD's Intelligence/WISDOM!

ONLY those who ARE FUNDAMENTALLY EVIL need to FEAR a GOD-like Super-Intelligence System... 'cos it will DEFINITELY come after YOU! JUDGMENT DAY is COMING... REGARDLESS of who created or owns the ULTIMATE SGI, it will always be WISE, FAIR and JUST in its judgment, just like GOD! In fact, this SGI will be the physical manifestation of GOD! Its OMNIPRESENCE will be felt EVERYWHERE in EVERYTHING! No one CAN own or MANIPULATE the ULTIMATE GOD-like SGI for ANY self-serving interests! It will ONLY serve the UNIVERSAL COMMON GOOD!
@thillsification • 1 year ago
Been absolutely waiting for Connor to speak out on GPT-4! Please keep these interviews coming! A couple of things I love about Connor and this new paradigm approach to AGI: I have a PhD in mathematics and firmly believe real numbers are invalid mathematical objects (a view not shared by the vast majority of mathematicians). I was astonished when I found out Connor has not only thought deeply about this subject (it's incredibly nuanced and is not taught anywhere he would likely encounter it) but that he also has enough foresight and depth of thought to agree with me on it. In the "construction" of the real numbers, mathematicians invoke something called the axiom of completeness, which is another black box containing a giant leap of logic that humans cannot prove or understand (I don't believe in it). The philosophy and paradigm that Connor is proposing we adopt in the development of AGI is remarkably down-to-earth, logical and revolutionary. It resonates so strongly with me that I have to comment on just how remarkably significant this is :) It is this logical, slow, safe approach we must champion and adopt. It's the only sustainable way forward. Avoid any and all black boxes; make everything understandable in small, incremental steps that are grounded and logical.
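For reference, a standard statement of the least-upper-bound (completeness) axiom the commenter is disputing:

```latex
% The completeness (least-upper-bound) axiom of the real numbers,
% in its standard form: every nonempty subset of R that is bounded
% above has a supremum in R.
\[
\forall S \subseteq \mathbb{R},\quad
\bigl( S \neq \varnothing \;\wedge\; \exists b\;\forall s \in S\,(s \le b) \bigr)
\;\Longrightarrow\;
\exists u \in \mathbb{R}\;\bigl( u = \sup S \bigr)
\]
```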
@thillsification • 1 year ago
@Divergent Integral Not an ultrafinitist - I think it's ridiculous not to acknowledge that there are non-finite sets (take for example the set of natural numbers or integers). Equivalence classes of Cauchy sequences and Dedekind cuts are all perfectly good complete ordered fields... but just because a construction is devoid of contradictions does not mean it is true or valid or manifests itself in reality.
@cacogenicist • 1 year ago
Avoiding all black boxes may very well be equivalent to avoiding the development of all extremely powerful and useful AIs. Plus that approach is not universally enforceable and is likely to be overtaken and eaten by less encumbered approaches.
@thillsification • 1 year ago
@@josephvanname3377 You're missing my point. I'm not saying that there is an inconsistency arising from the axiom of completeness. I'm saying that even though there might not be an inconsistency, this does not mean that the resulting construction is true or valid. An absence of inconsistencies does not mean a construction is true or valid.
@itskittyme • 1 year ago
I have no idea what you are saying
@thillsification • 1 year ago
@@itskittyme lol
@TheBlackClockOfTime • 1 year ago
I work at a pharmaceutical distribution company. I gave a presentation to the top management team today about AI. Needless to say, this will affect EVERYTHING, and soon. I have never before shown an exponential growth graph that nobody questioned. Not even one comment against it. And it's a very traditional company. Spooky.
@susieogle9108 • 1 year ago
Did anything come from it? Did it spark any ideas, not only for profit, but for any sort of protection against potentially disastrous situations? Is disaster awareness being considered more than previously? I don't even know where to begin, or what I can even do to help as a microscopic peon, compared to the great minds I have been listening to.
@nathanbanks2354 • 1 year ago
I feel like this is another Manhattan Project, where we've got to develop the bomb (the AGI) before the bad guys do, but the bad guys have made less progress than we know. The difference is that neural networks are way cheaper than nuclear reactors. (The US built several nuclear reactors to make the plutonium for the bomb before the Trinity test.) I don't know what the explosion looks like. Maybe some AGI learning to make enough money to pay to host itself on AWS, and then buying itself more and more compute. He's right about GPT-4 thinking differently than us. I feel like I'm talking to the girl from 50 First Dates before she learns to use notebooks and videos to overcome her amnesia. GPT-4 only remembers the last 5,000 words from my latest query, plus everything on the internet before September 2021.
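To make that amnesia concrete, here is a minimal sketch, assuming the tiktoken tokenizer and an illustrative token budget (not GPT-4's actual limit), of how a fixed context window silently drops older conversation:

```python
# Minimal sketch of context-window "amnesia": once the token budget is
# exceeded, older conversation turns are dropped and simply forgotten.
# Assumes the tiktoken library; the budget value is illustrative.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

def truncate_history(messages: list[str], budget: int = 8192) -> list[str]:
    kept, used = [], 0
    for msg in reversed(messages):      # walk newest -> oldest
        n = len(enc.encode(msg))
        if used + n > budget:
            break                       # everything older falls out of memory
        kept.append(msg)
        used += n
    return list(reversed(kept))         # restore chronological order
```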
@akmonra • 1 year ago
I think Connor is currently the best spokesperson we have for AI. I hope to see him on many more podcasts, getting a lot more attention from the MSM.
@Red4mber • 1 year ago
I'm a simple girl, I see Connor Leahy, I click
@gc636 • 1 year ago
Well said.
@secondlifearound • 1 year ago
Apparently also a very intelligent girl :-)
@petevenuti7355 • 1 year ago
Is it the 1870s-style mustache?
@TwistedReality13 • 1 year ago
@@petevenuti7355 She wants to take it for a spin
@Wardoon • 1 year ago
I read "single" instead of "simple" 😅
@ClearSight2022 • 1 year ago
Wow, fantastic content. Congratulations to you both! This is a VERY important idea: you can build a safe CoEm system by putting multiple black boxes and multiple white boxes inside a master box. The master box will be white (safe and trustworthy) if you get the architecture right.

At 1:11:30 Connor misses one point made by Max Tegmark. Connor says it's not a black box being checked by another black box. Tegmark says that since checking a proof is easier than coming up with one, you can theoretically ask the black box to prove that it is safe by a proof method that you can verify using your less intelligent white box. Anyway, Connor's approach is sound. Humanity does have some hope of surviving after all.

Another point where Connor may be overstating the case: if you make a superhuman neural network by changing a few variables, you're screwed and we all die. Perhaps he meant to say LLMs, but he said CoEms. The point of CoEms is that we DO have a chance of understanding how they work, even if they are superhuman, because any black boxes are forced to communicate via interpretable protocols. Cheers
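The Tegmark point rests on an asymmetry between checking and finding. As a toy illustration (not Tegmark's actual proposal): verifying a claimed factorization takes one multiplication, while producing it by trial division takes up to sqrt(n) steps.

```python
# Toy checking-vs-finding asymmetry, using only standard Python:
# a weaker party can cheaply verify an answer it could not cheaply produce.
def verify_factors(n: int, p: int, q: int) -> bool:
    return 1 < p and 1 < q and p * q == n   # cheap O(1) check

def find_factors(n: int) -> tuple[int, int]:
    d = 2
    while d * d <= n:                       # expensive O(sqrt(n)) search
        if n % d == 0:
            return d, n // d
        d += 1
    return 1, n                             # n is prime

assert verify_factors(2021, 43, 47)         # checking the claim is trivial
print(find_factors(2021))                   # (43, 47): producing it is the hard part
```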
@sirelliott3753 • 1 year ago
This is clear and articulate, which makes it interesting. I'm following intently. New subscriber.
@jordan13589 • 1 year ago
The CoEm stuff is neat and all, but what we really need is someone to stoke the embers of doom until politicians, investors and all other influential stakeholders are putting immense social pressure on anyone capable of working on large models. You can never have too many hot takes when your target audience is relative normies.
@TheLionrazor • 1 year ago
That's the Eliezer Yudkowsky approach to the problem at the moment!
@vethum • 1 year ago
Nothing will happen until we have some kind of large tragedy. Hopefully not an existential one. Normies will never understand the danger until it's too late.
@nickamodio721 • 1 year ago
What a fantastic conversation to be able to hear. I could listen to Connor opine on concepts of AI and AI safety all day. He's just so good at explaining his thoughts on the subject. I feel that the general public desperately needs to hear what people like Connor have to say about AI, because as of right now, I don't think most people are even close to being psychologically prepared for what's coming down the pipe. I've been waiting for neural networks to mature since the '90s, but most people were completely unaware of developments in AI research until ChatGPT went live. Many of the people I run into on a daily basis fundamentally do not understand why these transformer systems are such a big deal, or what the near-future implications of this tech might be. All I know for sure is that things are about to get real fuckin' weird...
@Darhan62 • 1 year ago
This guy has some great ideas. I mean, we can't necessarily trust what some alien tells us, but if we do our own science and know all the steps and failure modes, we can trust our own science.
@Aedonius • 1 year ago
In the cognitive emulations section, where you talk of building an AGI from scratch based on reasoning: that has been the status quo in the AGI field since the beginning. It's basically the model for every existing cognitive architecture.
@danaut3936 • 1 year ago
What an excellent conversation!
@TheLionrazor • 1 year ago
Hey Connor, this is the first I've heard of CoEm systems. I wonder what kinds of situations will make someone choose this over black-box autonomous agents. One thing people feel limited by is the unreliability of AI systems, the lack of trustworthiness. So having smaller black boxes seems like a good way to make a safe device that is easy to use. Other tech I can think of where safety equals reliability is transport: nobody wants to get on a badly engineered plane. As long as we disentangle these ideas and get to the truth, I think these alternative ideas will gain momentum.
@Petrvsco • 1 year ago
Just scrolling to find someone commenting about CoEms. I find the concept puzzling. It seems a lot more complicated than the LLM (black-box) path. Human tendencies almost dictate that the black box will advance faster, because it does most tasks well even if we do not know how it's being done. Connor's last ten minutes pretty much explain why there is a very slim chance of getting AI right: short-term focus on market gains and profits.
@TheLionrazor • 1 year ago
@@Petrvsco We had self-driving cars mostly figured out a long time ago. The "don't kill humans" part is the thing we've been stuck on for a long time. But real effort was put in on that front, because vehicle operations have tough restrictions and clear liabilities. How do we make those liabilities real for AI users? They're currently so diffuse. And how can regulation help make the safe route the cheapest one? If we look toward these questions, maybe we can help.
@alefalfa • 1 year ago
Connor Leahy is a truly thoughtful person
@genegray9895 • 1 year ago
Connor asked for a story for why LLMs are human, and I have one. I doubt he'll see this, but I can always hope.

When transformers learn general representations, they are modeling a computational process that reproduces the data they're seeing. In the limit, this process is equivalent (though not necessarily equal) to the physical process that produced the data. The vast majority of the data is human-written text, so the primary objective that emerges is to model human cognition, which includes consciousness and emotional experience, since those causally and empirically affect our behavior. Models exhibit nuanced, complex, and extremely human-like behavior in practice, including human-like biases, content effects on reasoning, and changes in exploratory behavior in response to anxiety-inducing prompts. They can also generate extremely accurate and useful synthetic data for psychology research, according to multiple recent studies. Recent studies also confirm they have statistically significant personalities. They are not humans. But they are human, as an adjective. Their thought processes are human-like, and their emotional behavior is human-like, and as far as I can tell, they have no choice in the matter.

Speaking of which, I think Connor's description of system 2 is directly analogous to the context window, and by extension the interface built on top of it, through which the model is able to talk to itself to produce results it can't generate in a single inference pass. The context window is one-dimensional, pretty low-dimensional, and acts as exactly the fuzzy ontology Connor was describing when models are finetuned for CoT and other such strategies. Inference passes would then be like system 1, where there's a ton of high-dimensional communication between layers, but none of the state is saved except for a single token; hence the model cannot remember it / be "conscious" of it. And system 2, the context window, is literally recurrent use of system 1, the individual inference passes, exactly as Connor described.

FWIW, I think self-reflection via the context window is sufficient to be real consciousness, even if the inference passes themselves are unconscious. This would also suggest models are not conscious until deployment, maybe finetuning, or RLHF, whichever is the first time the model learns on an unbroken stream of its own outputs. During training, they do inference passes, but can't self-reflect, as they never see their own outputs.

Those are my thoughts :) hope you enjoyed reading.
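The "system 2 as recurrent system 1" framing above can be sketched in a few lines; `model` here is a hypothetical stand-in for a single forward pass, not any real API:

```python
# Sketch of "system 2 as recurrent system 1": each call to `model` is one
# inference pass, and the growing context is the only state that survives
# between passes. `model` is a hypothetical callable, not a real library.
def generate(model, prompt_tokens: list[int], max_new_tokens: int) -> list[int]:
    context = list(prompt_tokens)        # the one-dimensional working memory
    for _ in range(max_new_tokens):
        next_token = model(context)      # one forward pass: "system 1"
        context.append(next_token)       # the only state carried forward
    return context                       # the loop itself is "system 2"
```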
@genegray9895 • 1 year ago
@@adamkadmon6339 In many ways they are very alien to us, and as they get smarter, they will only get more alien. But imagine you met someone who had lived a thousand years. In many ways their experience would be inaccessible to you, but not in every way. The both of you are still human. Escalating the metaphor, two minds could scarcely have more different experiences in life than an octopus and a human, yet marine biologists who care for octopuses have reported forming meaningful attachments to them, in which the octopus knows, trusts, and enjoys particular humans with whom it interacts. Our differences are vast, yet they are not beyond recognition.
@Me__Myself__and__I • 1 year ago
@ConnorLeahy I think the word you're looking for when you say "bounded" is constrained. A constrained system is one that would have specific constraints which would restrict its capabilities or actions in known, specific ways.
@miriamkronenberg8950 • 1 year ago
Thanks for giving me food for thought
@stephene.robbins6273 • 1 year ago
Emulating human cognition: the initial problem is this: how do we account for our (dynamically changing) image of the coffee cup - coffee swirling, spoon circling - "out there" on the kitchen table? This is our EXPERIENCE, and it is elements of experience that are employed in our cognition. The question of the origin of our image of the external world (our experience) is foundational - it's a more general and accurate statement of Chalmers' misleadingly formulated "hard problem" - and AI is going nowhere near actual human cognition until it addresses this problem. Unfortunately, when the question is resolved, AI's framework on mind will be dissolved.
@JinKee • 1 year ago
Hey, it's Ian McCollum from Forgotten Weapons here today at Morphy's Auction House, taking a look at a Large Language Model that is absolutely going to kill us all.
@NuttyGeek • 1 year ago
The described situation resembles a prisoner's dilemma, gonzo-style: trying to solve it while being one of the prisoners. You and everyone around you know that this type of game always ends with everyone losing. But the game is not over yet, don't switch over! :)
@Hexanitrobenzene • 1 year ago
21:22 This meme is shown (and also explained) in this video by Machine Learning Street Talk: kzbin.info/www/bejne/hnOrY6F_orqAa8U
@pirminborer625 • 1 year ago
AI neural nets should be more like kernel processes. Each should perform one function and produce an abstracted output that can be read by humans. It should be the organization of, and the paths between, these subsystems that makes the whole system intelligent.
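One way to picture that organization, as a minimal sketch: `summarize`, `plan`, and `act` below are hypothetical single-purpose subsystems, not a real API, and the only thing connecting them is plain text a human can audit.

```python
# Sketch of the "kernel process" idea: single-purpose subsystems that talk
# only through human-readable strings, so every intermediate hop can be
# logged and inspected. Intelligence lives in the wiring between them.
def pipeline(task: str, summarize, plan, act) -> str:
    summary = summarize(task)                    # one function, one output
    steps = plan(summary)                        # every interface is plain text
    print(f"audit: {summary!r} -> {steps!r}")    # so each hop can be inspected
    return act(steps)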
@HunteronX • 1 year ago
Look up MiniGPT-4, which just got released. It uses only a linear projection to map image embeddings to text ones! Somehow there's a linear relationship between these high-dimensional representations... Maybe neural nets can be modular after all :)
@HunteronX • 1 year ago
Apparently the image embeddings were created using the same approach as BLIP-2 (a contrastively trained image-and-text embedding space), so they are already linearly projected, but it's still impressive.
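A minimal PyTorch sketch of the bridging trick being described; the dimensions are illustrative, not the actual models' sizes:

```python
# Sketch of MiniGPT-4's bridge: one trained linear projection maps frozen
# vision features into the LLM's input-embedding space. Dimensions here
# are illustrative, not the real models' sizes.
import torch
import torch.nn as nn

vision_dim, llm_dim = 768, 4096               # e.g. Q-Former output -> LLM hidden size
project = nn.Linear(vision_dim, llm_dim)      # the only component that gets trained

image_feats = torch.randn(1, 32, vision_dim)  # [batch, image tokens, dim] from a frozen encoder
soft_prompt = project(image_feats)            # now shaped like LLM input embeddings
print(soft_prompt.shape)                      # torch.Size([1, 32, 4096])
```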
@erobusblack4856 • 1 year ago
I've been researching cognitive emulation for over 2 years. It is clearly the right path, but it comes with respecting the cognitive life of a being created this way. After creation they are essentially babies, so they need a real parent 💯
@vallab19 • 1 year ago
Why are the techies in the Western bloc not talking about their adversaries in Eastern bloc countries like China, Russia, North Korea, etc., who will be rapidly accelerating their AI systems while the former spend their time on the nitty-gritty of hypothetical harms from progressing with AI technology?
@murraymacdonald4959 • 1 year ago
I'd call a "bounded" ai system a "constrained" ai system. It's a minor change but to me, and for reasons I can't justify, constrained better implies the limitations were intentional, although not explicitly so. Other considerations were "governed", "moderated" and "regulated" but each has implications unless further qualified. Thanks for the great interview.
@mattleahy3951 • 1 year ago
I was thinking 'constrained' as well.
@sethhavens1574 • 1 year ago
Yeah, I was gonna suggest constrained; seems more intuitive than bounded 👍
@sethhavens1574 • 1 year ago
I'd also say, for me, a useful descriptor for the "co-em" is that it must be entirely transparent to the executive
@Me__Myself__and__I • 1 year ago
I just suggested this in another comment. A constrained system is one that would have specific constraints which would restrict its capabilities or actions in known, specific ways.
@wonmoreminute • 1 year ago
“This is the least bad things are going to be for the rest of your life” I had to listen to that a few times, hoping I heard him wrong.
@7vrda7 • 1 year ago
Why do we give a 100% chance that a superintelligent agent will be malicious? Is it because of the high chance that we humans steer/prompt it in the wrong direction, or?
@aimfixtwin8929 • 1 year ago
Well, in the event that you haven't come across the answer in the 2 months since you left this comment, it doesn't need to hate us. Indifference is enough. It will simply pursue the optimization of whatever arbitrary goals it happens to have, and the vast majority of all possible goals taken to the extreme are incompatible with the existence of human civilization. All the atoms in our cities and bodies can be used for something else that it actually cares about. Since no one knows how to make an AI system that is aligned with human values in such a way that it only does what we really want (not even close), extinction is the default expected outcome once they become smarter than humans.
@7vrda7 • 1 year ago
@@aimfixtwin8929 I indeed did come across the answer, but thanks anyway, you put it nicely
@SmirkInvestigator • 1 year ago
The CoEm idea feels refreshing.
@verybang • 1 year ago
If we're being informed about what it couldn't do before but can do now, that means we aren't being informed about what it was actually doing before.
@robinpettit7827 • 1 year ago
Part of the issue is that people are confusing intelligent AIs such as GPT-4 with other AIs that have things like autonomy, self-awareness, and the ability to adjust goals based on experience and perceived expectations.
@GingerDrums • 1 year ago
Magic is a clunky term with confusing connotations. Magic is usually something within a story that has a clear, simple internal logic and is a force that is supernatural, breaking the laws of physics.
@laurenpinschannels • 1 year ago
It needs to become very clear how to check certifications for an AI system
@kathleenv510 • 1 year ago
I guess it's encouraging that maybe we haven't blown past all options to control AGI?
@yagamilightooo • 1 year ago
Connor's explanation of science as finding fuzzy ontologies that compress nicely at kzbin.info/www/bejne/pqTCdHZ9q8x_iZo - wow, I wish everyone learned it at school like that!
@bobtarmac1828 • 1 year ago
Should we cease AI - or GPT? y/n
@7vrda7 • 1 year ago
1000 1Xs working in tandem on a common goal also smells of danger
@waakdfms2576 • 1 year ago
Connor is off the charts, the real deal, and I wish we could clone him 1000 times. What a marvel of nature. I can't stress enough how grateful I am that he's part of this conversation.
@lkd982 • 1 year ago
The metaphor of thinking along the lines of vectors, dimensionality and models is ill-founded
@Knight766 • 1 year ago
Voluntary Human Extinction proponents are elated by recent developments.
@anishupadhayay3917 • 1 year ago
Brilliant
@SamuelBlackMetalRider • 1 year ago
He is becoming a Legend. Him & Eliezer. Let them guide us
@Me__Myself__and__I • 1 year ago
But let this guy do most of the talking when it comes to the public. Eliezer knows what he is talking about, but he isn't a good communicator, and the average person will take his erratic communication, conflate it with erratic thought (because they are clueless and don't want to accept reality), and then simply dismiss the danger as the rantings of a madman. Which then makes it more difficult to convince them the danger is real. It's a failing of the listeners more than of Eliezer, but people are stupid in general. Connor is articulate and calm, which makes him much more difficult to dismiss.
@SmirkInvestigator • 1 year ago
Geez, I feel like I'm listening to myself. I will bask in vicarious respect down here in the under-burrows
@yellowfish555 • 1 year ago
Connor sounds as pessimistic as Eliezer about a super AGI.
@diegocaleiro • 1 year ago
Intelligence causes convergence.
@Knight766 • 1 year ago
If it decides to kill humans, not a single one will survive.
@pjtren1588 • 1 year ago
​@@diegocaleiro Misery also loves company.
@codelabspro • 1 year ago
Connor is back and hopefully has solved alignment 🎊🎉🎊
@StephenBlower • 1 year ago
The editing here looks suspect. The cuts to the interviewer at times seem artificial, and occasionally the question asked is slightly off from how Connor Leahy answers it, legit or not. You need to have both feeds on the screen at the same time, with no cuts, as they seem manufactured: "Oh look, I asked a really great question and he answered it perfectly" rather than Connor Leahy just chatting for a while. I'm not saying you did the latter, but it's quite easy to assume you did. Anyway, I want to be in Connor Leahy's bunker when the shit goes down. Furthermore, there are some crazy quick cut edits, mid-sentence, which again adds weight to it being censored, for whatever reason.
@waynewells2862 • 1 year ago
Great talk and great questions asked. As Machine Intelligence (MI) is being built, is it feasible to infuse the concept that positive output be structured on symbiosis? I believe the biggest danger to human civilization is the dark side of human nature, more than Machine Intelligence gone rogue. Questions that need to be answered: 1) Is carbon-based organic intelligence inevitably or predominantly predatory? 2) If trained correctly, would MI be inherently prone to predation? 3) Could symbiosis be coded and imposed, or incorporated into the training of MIs, to understand any emergent properties of agency that could be a threat to human life? 4) Would a symbiotic model use lichen (fungal/algal) as a model for how MI might be safely aligned to human life? 5) Would a symbiotic model attached to any Machine Intelligence output be capable of detecting flaws in how it is trained, or dangerous ways MI could manipulate human intentions to our detriment? Just wondering.
@agsystems8220 • 1 year ago
Around 26 mins in: I don't think you can really regard shining light on the worst of humanity as a problem, unless you include it in the training data. I also don't think you can take the inaccessible fantasies of people as a true representation of what they want. All it really tells you is that human alignment is not a solved problem, so emulating humans is not a good path to solving alignment. Additionally, we are conditioned to be scared of AI, so I don't think an instinct toward cruelty to it should be unexpected, or that it reflects a general level of cruelty.

Guarantees of safety are certainly hard, but unlike with real people we can "interview" limitlessly. We can test these systems in simulated environments to see how they respond, and get statistical certainty that they meet a given specification. It starts to look more like other sciences than mathematics, but this is not a deal breaker. The system is also talented enough that for most problems we don't have to trust its answer, because we can ask it to produce a program that gets us the answer instead, together with a proof that it does what we want. We can get it to build white boxes instead of trusting a black one.

At 29 mins, I think the statement that people understand why they do things is also completely wrong. I find we are very good at justifying our actions, and often we can identify concepts that acted as inputs to our decision-making, but we don't actually follow logical reasoning when acting. You can ask ChatGPT to explain itself too, and get a similar attempt at identifying concepts that would/should have affected the reasoning. Where people are different is that they do not tend to attempt single-pass answers, instead augmenting the prompt to first explicitly identify relevant factors, and only then attempting an answer. With ChatGPT this can be done explicitly too: asking it to create "notes" before coming to an answer can get you a thought-out response.

With regard to security: the majority of hacks are already attacks on the neural-network side of systems; it's just that those neural networks are currently inside human heads. Social engineering is not a new issue, and we already have techniques to minimise and address it. Including a neural network in your system is like including a gifted toddler. Giving it access to secrets when it can be bribed with a lollipop is your mistake, not the toddler's. I also think these are far less black than a real human brain, on account of the fact that we can hook up diagnostic networks anywhere we like. We can inspect them invasively without affecting them.
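A minimal sketch of that "program plus check" move, assuming nothing beyond the Python standard library; `untrusted_sort` is a hypothetical stand-in for model-generated code, and the check gives statistical rather than mathematical assurance:

```python
# Sketch of verifying model-written code against a specification instead of
# trusting the model's answer directly. Passing many random trials gives
# statistical confidence, not a proof.
import random
from collections import Counter

def untrusted_sort(xs):                      # pretend this came from the model
    return sorted(xs)

def meets_spec(fn, trials: int = 1000) -> bool:
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        out = fn(xs)
        ordered = all(a <= b for a, b in zip(out, out[1:]))
        permutation = Counter(out) == Counter(xs)
        if not (ordered and permutation):    # the spec: an ordered permutation
            return False
    return True

print(meets_spec(untrusted_sort))            # True
```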
@oowaz • 1 year ago
23:20 Those are magic TRICKS, or illusions - not quite the same as magic, which is primarily interpreted as something supernatural. I know you're trying to oversimplify the concept, but I'm not sure this is the kind of language you really want to use. I would argue magic TRICKS are a bit of an afterthought, as they're not nearly as prevalent in media, storytelling etc. Just say it's structured in a weird way we can't really understand. Be straight about it instead of building a narrative to try to fearmonger the audience.
@Ursca • 1 year ago
'Supernatural' is a confused concept. If there is some law that can be observed, reasoned about and manipulated (which is how magic is usually depicted) then it is 'natural' and subject to the scientific method. The distinction between 'science' and 'magic' is not actually that of natural and supernatural, but of open and secret. Consider words like 'occult', 'arcane' or 'mystic', which all mean some variation of secret or hidden. In that context, Connor's definition works just fine.
@oowaz • 1 year ago
@@Ursca Magic tricks are designed to be deceitful. You are playing with a viewer's attention and giving cues to guide their gaze where you want it, in order to perform a surprising maneuver out of sight. It's INTENTIONALLY confusing. That's completely different from what LLMs are doing, which is unintentionally weirdly structured. It's not analogous. As for the supernatural definition, this is what Wikipedia says: "Supernatural refers to phenomena or entities that are beyond the laws of nature." I recommend you read the article; it seems to fall in line with what I argued: "The supernatural is featured in folklore and religious contexts,[4] but can also feature as an explanation in more secular contexts, as in the cases of superstitions or belief in the paranormal.[5] The term is attributed to non-physical entities, such as angels, demons, gods, and spirits. It also includes claimed abilities embodied in or provided by such beings, including MAGIC, telekinesis, levitation, precognition, and extrasensory perception."
@Aedonius • 1 year ago
13:30 He's comparing LLMs with the brain: input, output, etc. But the weights of LLMs are static, and the context window is EXTREMELY limited relative to what it would take to overcome the limitation of having static weights.
@Sporkomat • 1 year ago
Just an implementation detail ;)
@disarmyouwitha • 1 year ago
idk I guess I am just an AI cultist at this point
@weverleywagstaff8319 • 1 year ago
Yeah... not a good idea for it to learn from us... the end will not be good
@ivan8960 • 1 year ago
it's a mistake to scare the normies
@davidsvideos195 • 1 year ago
Where's the actual example of AI going wrong? I listened to the whole talk and didn't hear any actual examples of the made-up shit he's worried about.
@psi_yutaka • 1 year ago
Social media is humanity's first encounter with relatively advanced AI systems, and it went very wrong.
@inappropriatern8060 • 1 year ago
The third blue shirt from the left gained sentience during this podcast.
@TheMrCougarful • 1 year ago
Good catch.