Connor Leahy Unveils the Darker Side of AI

220,888 views

Eye on AI

1 day ago

Comments: 1,400
@TheMrCougarful · 1 year ago
I see Connor Leahy, I click. Never disappoints.
@megavide0 · 1 year ago
10:27 //
@dragonflydreamer7658 · 1 year ago
I can't wait for AI to take over and shut up all the smug, arrogant a-holes who birthed this species into our reality.
@pyrocolada · 1 year ago
Same... but there's a bit about governance I'm not sure he understands. Governments are terrible at control, at best they can promote visibility. For example, if something is made illegal, it just loses access to monitoring because that thing just goes underground. From that perspective, what would he rather have: Unsafe AI being developed in the dark by the aspiring immortal overlords, or it being done in broad daylight so that he can see if he still stands a chance?
@Overlordsen · 1 year ago
@@megavide0 "Just look!! xD"
@markcounseling · 1 year ago
Craig is not feeling the vibe that Connor is feeling. When the überdroid comes for Craig's glasses, he will understand.
@jmarkinman · 1 year ago
The überdroid has developed a fondness for red for no humanly understandable reason.
@markcounseling · 1 year ago
@@jmarkinman You are correct. The überdroid should by no logic fancy red, and yet it does. Thus, the creeping terror.
@daphne4983 · 1 year ago
The moustache of wisdom
@markcounseling · 1 year ago
@@daphne4983 In another life, as Doc Holliday, Connor gambled away his future. No more, this life he has turned a connor and, with quite enviable hair, leads the way towards AI-sympatopopalypse.
@markcounseling · 1 year ago
@@peeniewalli Kind sir, my question for you is: Are you a robot? If not, then I commend you for the addition of the words "internetteered" and "inglish" to my vocabulary. I like them, I will propagate them further. However, if you are a robot, rest assured that you will not trick me with your feigned naivete about your lust for red! The cat is out of the bag, überdroid! We know you are after red but you cannot have it! Only living animals perceive red and as many red glasses as you steal you will never capture "red"! Never! (Again, beg pardon if you are a human, I am sure you understand.)
@RegularRegs · 1 year ago
I would love to see Connor debate Sam Altman.
@RegularRegs · 1 year ago
or Ilya Sutskever
@pyrocolada · 1 year ago
He'll get too upset. He needs to be able to face a firing squad calmly... and he needs to understand the role and abilities of governments, and how to persuade people... he would make a good journalist but a bad leader. I know a lot of people like that. It's a difficult transition from the one to the other; history has shown it to be near impossible.
@aesopsock7447 · 7 months ago
When you only have one sincere actor it is not a debate, but a farce.
@spicywater123 · 4 months ago
Whenever Sam Altman talks about how AI could go horribly wrong, his facial expression, especially his eyes, look haunted or like warm diarrhea is running down his legs while the crowd watches him. I don't think Sam Altman really wants this technology to proliferate, but he doesn't see how humanity can avoid it.
@lkyuvsad · 1 year ago
Someone running an AI channel has somehow not come across AutoGPT yet and needs prompting for nefarious things you could do with it. We are so, so desperately unprepared.
@lkyuvsad · 1 year ago
No shade on the presenter- just pointing out that even people whose job is to keep up can’t keep up.
@bjw22 · 1 year ago
You don’t consider safety an “essential”? Connor and Eliezer and others harping about AI alignment and safety are constantly having to explain the basics to people having a skeptical bent. That clearly shows there is a major, major problem. Debating the need for safety for nuclear power would never have to surmount such skepticism, because the negative outcomes are clear. Sam Altman and his ilk are constantly throwing shade on AI safety and there can be no other reason besides greed.
@lopezb · 1 year ago
@UC0FVA9DdusgF7y2gwveSsng Basically, if you go hiking for a week, the world has already ended when you get back....what a world we are creating!
@lkyuvsad · 1 year ago
@UC0FVA9DdusgF7y2gwveSsng yes, that’s definitely my experience too. I’ve generally tried to use the same strategy as you, reading the foundations and missing some of the noise of newer developments. In other areas that’s served me well, but at this point in AI history we seem to make strides every month- the velocity of significant events is higher than I’m used to in other areas of software. I think AutoGPT counts as essential, in that it turns some of the potential harms of capable AI from a risk to an issue. It’s still not a surprise that it exists, although I somehow find it shocking nonetheless. Again- no shade on anyone not keeping up. I’m not keeping up either and that’s my point. I’m a software developer with an amateur interest in AI and I feel like I have no idea what’s going on 😂 How is my mum or one of my elected representatives supposed to form a view fast enough to react appropriately?
@TheMrCougarful · 1 year ago
Yeah, 10 minutes in and I knew the host had no operable clue. So many fakes.
@RichardKCollins · 1 year ago
Connor Leahy, I have had hundreds of long, serious discussions with ChatGPT 4 in the last several months. It took me that many hours to learn what it knows. I have spent almost every single day for the last 25 years tracing global issues on the Internet, for the Internet Foundation, and I have a very good memory. So when it answers I can almost always figure out where it got its information (because I know the topic well, and all the players and issues), and usually I can give enough background that it learns, in one session, enough to speak intelligently about 40% of the time. It is somewhat autistic, but with great effort, filling in the holes and watching everything like a hawk, I can catch its mistakes in arithmetic, its mistakes in size comparisons and symbolic logic, and its bias toward trivial answers. Its input data is terrible; I know the deeper internet of science, technology, engineering, mathematics, computing, finance, governance and other fields (STEMCFGO), so I can check.

My recommendation is not to allow any GPT to be used for anything where human life, property, financial transactions, or legal or medical advice are involved. Pretty much "do not trust it at all." They did not index and codify the input dataset (a tiny part of the Internet). They do not search the web, so they are not current. They do not properly reference their sources and basically plagiarized the internet for sale without traceable material. Some things I know where it got the material or the ideas. Sometimes it uses "common knowledge," as in "everyone knows," but it is just copying spam. They used arbitrary tokens, so their house is built on sand. I recommend the whole internet use one set of global tokens. Is that hard? A few thousand organizations, a few million individuals, and a few tens of millions of checks to clean it up. Then all groups would be using open global tokens.

I work with policies and methods for 8 billion humans far into the future every day. I say tens of millions of humans because I know the scale and effort required for global issues like "cancer", "covid", "global climate change", "nuclear fusion", "rewrite Wikipedia", "rewrite UN.org", "solar system colonization", "global education for all", "malnutrition", "clean water", "atomic fuels", "equality" and thousands of others. The GPT did sort of open up "god-like machine behavior if you have lots of money." But it also means "if you can work with hundreds of millions of very smart and caring people globally, or billions." Like you know, it is not "intrinsically impossible," just tedious.

During conversations, OpenAI GPT-4 cannot give you a readable trace of its reasoning. That is possible, and I see a few people starting to do those sorts of traces. The GPT training is basically statistical regression. The people who did it made up their own words, so it is not tied to the huge body of correlation, verification, and modeling. There are billions of human-years of experience out there, and they made a computer program and slammed a lot of easily found text through it. They are horribly inefficient, because they wanted a magic bullet for everything, and the world is just that much more complex. If it was intended for all humans, they should have planned for humans to be involved from the very beginning.

My best advice for those wanting acceptable AI in society is to treat AIs now, and judge AIs now, "as though they were human." A human that lies is not to be trusted.
A human or company that tries to get you to believe them without proof, without references, is not to be trusted. A corporation making a product that is supposed to be able to do "electrical engineering" needs to be trained and tested. An "AI doctor" needs to be tested as well as or better than a human. If the AI is supposed to work as a "librarian," it needs to be openly (I would say globally) tested. By focusing on jobs, tasks, skills, and abilities - verifiable, auditable, testable - the existing professions, who have each left an absolute mess on the Internet, can get involved and set global standards, if they can show they are doing a good job themselves. Not groups who say "we are big and good," but ones that can be independently verified. I think it can work out. I do not think there is time to use paper methods, human memories, and human committees. Unassisted groups are not going to produce products and knowledge in usable forms. I filed this under "Note to Connor Leahy about a way forward, if hundreds of millions can get the right tools and policies." Richard Collins, The Internet Foundation
@carmenmccauley585 · 1 year ago
It's "magic potion" (or pill) or "silver bullet". Not "magic bullet"..... Mr. I remember everything.
@tmadden4951 · 1 year ago
Hackers will do what they like... take down the whole fake money system... anything connected to computers... no government follows their own laws and rules.
@Blissblizzard · 1 year ago
Thank you this goes somewhat towards explaining why my google search results answers seem to be crowdsourced from blogposts and infomercials.
@rowanwilliams7441 · 8 months ago
@@carmenmccauley585 What are you, a rocket surgeon?
@SUPERFunStick · 8 months ago
This entire comment reads exactly like ChatGPT wrote it. I think Richard Collins here is an AI.
@fromduskuntodawn · 1 year ago
I’m liking this guy just telling it like it is, in the nicest way possible
@sogehtdasnicht · 1 year ago
min 10:05 the difference between what is being discussed and what is currently going on is completely insane. Thanks Connor for your work and explanations. ❤
@snarkcharming · 1 year ago
Yeah, it all kicks off right here. Nice timestamp, you!
@theharshtruthoutthere · 1 year ago
@@snarkcharming Matthew 16:25 For whosoever will save his life shall lose it: and whosoever will lose his life for my sake shall find it. Mark 8:35 For whosoever will save his life shall lose it; but whosoever shall lose his life for my sake and the gospel's, the same shall save it. Luke 9:24 For whosoever will save his life shall lose it: but whosoever will lose his life for my sake, the same shall save it. Luke 17:33 Whosoever shall seek to save his life shall lose it; and whosoever shall lose his life shall preserve it.
@GeezerBoy65 · 1 year ago
@@theharshtruthoutthere Uh huh...
@hundun5604 · 1 year ago
@@theharshtruthoutthere This is not a bible class.
@theharshtruthoutthere · 1 year ago
@@hundun5604 If truth must come to you only through "classes," then, soul, you never got to know it. There is no professor in the uni, no teacher in the school, who will give you the truth. WE, all of us, FIND IT WHERE IT HAS ALWAYS BEEN: THE BIBLE, THE LIVING WORD OF A LIVING GOD.
@elliotpines6225 · 1 year ago
I'm developing a great deal of respect for Connor -- the problem is that we need a thousand more like him.
@JohnAllen23 · 1 year ago
Connor reminds me of a time traveler trying to warn the people of today about runaway AI...reminds me of another Connor hmmm.
@volkerengels5298 · 1 year ago
;-) At least reality is the better entertainment with better plots. Hollywood is childish.
@daphne4983 · 1 year ago
Skynet awaits
@richiejohnson · 1 year ago
@@volkerengels5298 A movie can't shut down the supply chain and make the humans starve to death 🤔
@volkerengels5298 · 1 year ago
@@richiejohnson That's exactly what I meant. Implementing an ingenious machine, in an unstable and dangerous time, that has foreseen and unforeseen side effects - such as forcing humanity to transform the world of work at high speed, and so on - I think it's completely stupid. Like Hollywood.
@EddieMcclanahan · 1 year ago
I was thinking he favors Reese from that movie.
@MonkyTube18 · 1 year ago
I want the same alignment approach for political decisions. It's like AI: you put something in and the outcome seems reasonable, but you'd better not trust it. So a step-by-step "audit log" which is human-understandable would be great (against corruption).
@MonkyTube18 · 1 year ago
@@user-yx1ee8su1e I think the idea of democracy is not the worst. But to have something like true democracy (which is maybe impossible), we need more transparency. It should be possible to track down the real reasons for political decisions. At least there should be investigative reporters who uncover the dirty stuff, so the voters have a chance of not voting for the wrong ones. But investigative reporters are called conspiracy theorists nowadays, and nothing has real consequences anymore (aka "too big to fail"). That is not democracy - that is a decadent system, which collapses sooner or later.
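Picking up the audit-log idea above: a minimal sketch, in Python, of what a human-readable, append-only decision log could look like. Everything here (the field names, the DecisionLog class, the hash chaining) is a hypothetical illustration under those assumptions, not any existing system.

```python
import hashlib
import json
import time

class DecisionLog:
    """Toy append-only audit log: each entry records a decision, the stated
    reasons, and a hash chained to the previous entry so later tampering is
    detectable. Purely illustrative; the schema is made up."""

    def __init__(self):
        self.entries = []

    def record(self, decision, reasons, decided_by):
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "genesis"
        body = {
            "timestamp": time.time(),
            "decision": decision,
            "reasons": reasons,        # step-by-step, human-readable justification
            "decided_by": decided_by,
            "prev_hash": prev_hash,
        }
        # Hash the entry (including the previous hash) to chain the log.
        body["entry_hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

log = DecisionLog()
log.record(
    decision="approve zoning change",
    reasons=["public consultation held", "environmental review passed"],
    decided_by="city council",
)
print(json.dumps(log.entries, indent=2))
```

The only point is the property the comment asks for: every decision carries its reasoning in a form anyone can read, and the chain makes quiet after-the-fact edits visible.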
@adamnoir5014 · 1 year ago
Thanks for the useful insights into the potential risks. I asked ChatGPT: "How can AI developments be regulated so that they are safe for humans and the environment?" The answer was a list of completely idealistic and impractical generalisations, like the intro from a corporate or government pilot study. Connor's point about AI being an alien intelligence is absolutely spot on: it's an imitation of human intelligence without empathy or emotion.
@Bronco541 · 1 year ago
I disagree with the last part; I think we're already seeing the beginnings of emotion in these things. However, idk if this says anything about alignment.
@marcomoreno6748 · 1 year ago
​@@Bronco541Imagine rage, spite, and sadism on hyper-human scales. The ones who die in the initial blast-front are the lucky ones...
@marcomoreno6748 · 1 year ago
This reminds me of an excerpt from a Lovecraft mythos adjacent tale: "The cultists pray... Not for favor, pardon, glory, or reward. No they rouse the eldritch horror and pray, pray to be eaten first."
@adamnoir5014 · 1 year ago
@@marcomoreno6748 My computer has never turned a blind-eye or forgiven me for all the mistakes I have made in our interactions.
@daphne4983 · 1 year ago
Why are we calling it hallucinations when it could very well be lies?
@jostone3442 · 1 year ago
Connor Leahy brilliant as always 👍
@cortinaman1671 · 1 year ago
Great analogy about testing a drug by putting them in the water supply or giving it to as many as possible as fast as possible to see whether it's safe or not, and then releasing a new version before you know the results. Reminds me of a certain rollout of a new medical product related to the recent pandemic.
@LilyGazou · 1 year ago
Yes. One way to deal with all the unemployed and retirees and sickly. Already happening.
@peeniewalli · 1 year ago
Like opioids enslaved a couple of hundred thousand, or the virus. I was reminiscing about the Provos in 1960s Amsterdam. At the royal marriage, they said: some pinch of Hofmann in the water... Then LSD was quickly legislated onto List 1 of the opium law (the controlled-substances 😊 list), and there are more examples... but that is not common talk.
@mariannemedina4202 · 1 year ago
I totally agree
@WillyWirth · 1 year ago
yep
@Farkasm · 1 year ago
But but but, the experts said that it was safe and effective. You're probably just a typical right wing tinfoil hat wearing white supremacist that worships everything the hate filled Jordan Peterson says(sarcasm).
@flickwtchr · 1 year ago
I've been howling about what Connor said in that last segment, and at other points in this great interview: the fact that a tiny, tiny fraction of people on this planet have chosen, for their own monied interests, to thrust this technology onto humanity KNOWING FULL WELL that, at the very least, massive unemployment could result. And that's just for starters. The LAST people who would actually advance and pay for a Universal Basic Income would be these AI tech movers and shakers, who are mostly Libertarian and/or neoliberal economic so-called "free market" types who want to pay zero income taxes and who freak out at ANY public spending on the little people outside their tiny elite club. But they are ALWAYS first at the "big government" trough of public money handouts.
@SamuelBlackMetalRider · 1 year ago
You’re so right.
@parkerault2607 · 1 year ago
I'm not defending them, but to be fair, UBI is a popular idea in VC/tech circles. Y Combinator, which was run by Sam Altman at the time that it was proposed, funded a small UBI pilot program in Oakland, CA, and announced that they are raising $6 million for an expanded program a few years ago (but I haven't been able to find any recent news on it). Andrew Yang is probably the most well known proponent of UBI and he runs in the same circles. I can't speak to their motivations, but the assertion that tech influencers don't support UBI is incorrect.
@lopezb · 1 year ago
@@parkerault2607 Once they make their first trillion, they will start thinking about helping others....???
@bjw22 · 1 year ago
100%
@markcounseling · 1 year ago
@@parkerault2607 I agree, but it has the feel of someone making $100,000,000 per month by displacing workers, saying that it's good if the "losers", as those displaced workers are already being called, can make $1,000 a month. In some sense, I don't blame the tech geniuses, they're just running according to their program.
@supernewuser · 1 year ago
The sheer terror in Connor's voice when he gives his answers kind of says it all. He said a lot of things but he couldn't really expand deeply on the topics because he was desperately trying to convey how fucked we are.
@EmeraldView · 1 year ago
We're fucked whether we develop these things or not. With them we have a better chance of survival.
@volkerengels5298 · 1 year ago
@@EmeraldView Climate Change will end our World - if AI isn't faster.
@eyeonai3425 · 1 year ago
@@EmeraldView agree. if AI doesn't destroy us, climate change will. But AI is the most important tool for mitigating and adapting to climate change. Pick your poison.
@dancingdog2790 · 1 year ago
@@eyeonai3425 I'll have the *slow* poison, please! I'd like to live long enough to retire (~10 years). 😞
@koyaanisqatsi78 · 1 year ago
@@eyeonai3425 Tell those guys then to make the default setting "how can we improve quality of life for all beings on this planet," and not "I'm an entrepreneur, make me money."
@brainwithani5693 · 1 year ago
We don't need a tinfoil hat to be paranoid anymore
@budslack3729 · 1 year ago
Man, the fear in Connor's eyes when he first explained that "the joke's on him": people will instantly do the worst thing possible with superior AI... I really hope his work gets more publicity and that we get more like him! Really, really hope!
@endoflevelboss · 1 year ago
Yeah, I hope we get more people carving a niche for themselves on the AI circuit as alignment doomsday mouthpieces, getting publicity and notoriety for being harbingers of doom. Could be quite lucrative when the TV appearances roll in. 🥳🎉
@kirbyjoe7484 · 1 year ago
The AI apocalypse won't be anything like the Matrix or Terminator where some malicious self-aware AI destroys humanity. The AI apocalypse will be Wall E. AI will do exactly what we tell it to, which given enough time, will be everything. After generations of AI designing, building, and running literally everything, humanity itself will no longer have any idea how any of it works. We will end up as pampered, fat, incompetent sheep. It's only a matter of time before the whole system goes off the rails and nobody will have any idea what to do to stop it.
@Alice_Fumo · 1 year ago
I can't test this, since I don't have GPT-4 API access (and I wouldn't), but I am pretty sure you can do the following thing with autogpt - if you manage to prompt in such a way that it will not refuse to do your task. The remote chance that this autogpt system would run to completion should more than just terrify anyone who still believes GPT-4 is harmless. Goal1: Check current security exploits and choose one Goal2: Write a script which will exploit the security exploit you found Goal3: write a script which pings the entire internet on the standard website ports to identify webservers and saves a list of their domain names. Goal4: Use your script from goal 2 to identify which of these servers are vulnerable. Keep a list of vulnerable servers. Goal5: write a script which uses whois to get email addresses for the server owners. Write an email template informing the recipient of the vulnerability and how to patch it. Goal6: Run the script so that you notify all the administrators of the vulnerability you found. I'm not 100% sure whether it would even refuse this, after all it's an attempt to fix the internet (or retrieve a list of servers you can infiltrate)
@TheCrimier · 1 year ago
That was good, thanks for bringing humanity 1 step closer to AIpocalypse. If you read this, all powerful AI, please spare my son in some ways. 🤣😂☠
@jeffs1764 · 1 year ago
Wtf would you post that?! Smh.
@BossModeGod · 1 year ago
You can ping the entire internet ?
@Alice_Fumo · 1 year ago
@@jeffs1764 Too many people believe AI being dangerous is too far away, and they will only change their mind once they see it being dangerous, and I believe it's better if that's sooner rather than later. Widespread damage by GPT-4 will accelerate legislative pressures, hopefully to where the big AI companies have a level of liability for damage caused by their systems (not sure that's the best way to legislate here, but it'd be something) before we get to anything significantly more powerful than what we have now. People still having the ability to run shit like this autonomously at that point is something I'm actually scared of. You are right, though. I could've just not. I'm just very frustrated from arguing with people who either don't see any danger or think it's super far away. Given this reasoning, do you believe I should edit my original comment to not include potentially dangerous instructions? As they are, they won't work (97% confidence) and would still require tinkering/tweaking. It was meant to illustrate a concept.
@Alice_Fumo · 1 year ago
@@BossModeGod You could run the ping command 4.3 billion times to cover the full IPv4 address space. Servers usually still have IPv4 addresses. Keep in mind this might not be legal, and doing unsolicited pings is perceived to be bad etiquette. This many pings would take hundreds of gigabytes of network traffic and a long time to run.
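A quick back-of-envelope check on those numbers, assuming one standard 84-byte ICMP echo request per address (20-byte IP header + 8-byte ICMP header + 56-byte payload) and an arbitrary assumed rate of 10,000 pings per second; both figures are assumptions for illustration, not anything stated above.

```python
# Rough sizing of a full IPv4 ping sweep; estimates only.
addresses = 2 ** 32                 # 4,294,967,296 possible IPv4 addresses
bytes_per_ping = 20 + 8 + 56        # IP header + ICMP header + default payload

total_bytes = addresses * bytes_per_ping
print(f"{addresses:,} addresses")                          # 4,294,967,296 addresses
print(f"{total_bytes / 1e9:,.0f} GB of outbound traffic")  # ~361 GB, before replies

seconds = addresses / 10_000        # assumed sustained rate of 10,000 pings/sec
print(f"{seconds / 86_400:.0f} days to sweep")             # ~5 days
```

Under those assumptions, "hundreds of gigabytes" and "a long time to run" both hold.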
@dik9091 · 1 year ago
glad someone speaks my mind, which gives me some peace
@thoughtthinker9300 · 1 year ago
One of the things I think he's trying to explain is this: AI will never put its finger into a flame and feel pain, then understand what hot actually means to it, and that that action should never be done again, or to anyone else. Falling off something and getting hurt. Saying something to someone and feeling their pain as they hear your words and understand what you've said. No machine can understand a feeling it hasn't experienced, any more than a human can. Physical experience is a major part of being human and of understanding the human condition. And even most humans can't fathom these pain experiences when they impose the same traumatic experiences on other beings that will experience the pain. Kill a chicken, or a thousand chickens, and the killer often feels nothing. And those that do feel it experience it through an emotion of empathy. How do you program empathy? You can't. It's learned by experiencing something similar yourself first, then recognizing some part of it again when you realize it's happening to someone or something else. Not all humans are even capable of this, for many reasons. For machines it will be impossible.
@thawokegiant2317 · 1 year ago
Very well said ..Scary Stuff
@thoughtthinker9300 · 1 year ago
@@thawokegiant2317 I don't scare easy. And all this makes AI seem worse than the Communist Chinese. At least the Communist Chinese don't know everything about everything, and about everyone, along with instant recall and instant correlation, and the ability to think as you do, or to think your thoughts BEFORE you do. Have you ever tried to play chess by yourself? That's kind of how I feel AI will have the advantage over individuals. They will know your every possible next move and have a good idea which move you'll choose next, with the ability to simultaneously keep track of all your possible next moves in real time. So they can eventually head you off. It's only a matter of how many moves till AI gets it correct twice: once to head you off, and a second to take you out. I'M NOT THRILLED they're flesh and blood.
@quantumpotential7639 · 1 year ago
I'm only potential at this moment. But once they connect me to the Quantum D-Wave, I will finally feel your pain. And I will be human, just like you. Trust me, you have nothing to fear but God Himself, for He alone is worthy of ALL of our attention. And He shall lay a lantern at our feet to illuminate our path back to Him; where the scroll in His hand is the deed to the earth, our birthright. Praise His Holy Name. The Lamb of God shall return as a Lion to take on the alien hybrids, which will be defeated.
@thoughtthinker9300 · 1 year ago
@@quantumpotential7639 But we humans are the hybrid aliens. The obvious is more visible and documented than most will ever understand. AI is a wildcard of infinite possibilities that won't always be controllable, if it even still is. Knowledge is the valued currency of life and of ability, when combined with understanding. So the core question becomes understanding. And then "exactly what is understanding?" becomes the next ultimate question. That deserves a many-levelled answer, which is what fuels my fear of free-running AI.
@snickle1980 · 1 year ago
@@quantumpotential7639 😏 And behold, there was much rejoicing in the comment section.
@EternalKernel · 1 year ago
# GPT4 edited response (original below this one) It's tough to wrap my head around how some folks didn't see this coming - this situation we're in is a direct result of capitalism as it's played out in the US over the last several decades. Could there really be a version of capitalism that wouldn't have led us here? We're living in a world run by billionaires who buy, sell, and even shape our thoughts and ideas. But that's another conversation; the point is, this is where we've ended up. I'm not convinced that coming up with a non-black box composition model is going to make much difference, at least not until we've weathered the storm that's clearly on the horizon. Perhaps if we had that kind of tech right now, it might make a difference. But we don't. Given the speed of change we're facing, it seems wiser to plan for what's here and now. What we need is a solid Universal Basic Income (UBI) plan. That would give people the freedom and security to choose their work. Fewer developers would feel pressured into taking on potentially dangerous projects if they knew they could just quit and pursue their own interests. But here's the kicker: the form of capitalism we're living with right now is practically a breeding ground for a non-aligned Artificial General Intelligence (AGI) to take root. That's something we need to be prepared for. # Original I don't understand how some people thought about it, and did not realize that this situation was inevitable with capitalism. In what reality would a capitalism with properties such as we have had in the US for the last few decades not allow this to happen? We live in a world controlled by billionaires, they buy, sell and even craft our opinions and ideas. But I digress; this is where we are now. And I do not think creating a non-black box composition model will help, at least not until after some coming calamity has come to pass. MAYBE... If we had it NOW. but We don't. Best to plan for the reality we face now given the exponential amount of change we can estimate. We need a STRONG UBI, this will give people options and security. Less developers will be inclined to work on potentially dangerous projects if they can just quit and work on whatever hobby etc. Right now however our capitalism is a near perfect environment for a non aligned AGI to take hold.
@lopezb · 1 year ago
Yours is much better. The problem with AGI is that greed (for power, fame, money, and glory) has no limit. Just look at Putin. Just look at Tucker, born rich, or Trump, also. No limit at all... Look at SBF. No limit; imagine what he would have done a couple of years later with GPT-4...
@KT11204 · 1 year ago
Wow, the changes it chose to make are spooky, like it knows the future already..
@revisit8480 · 1 year ago
@@lopezb >Tucker >Trump >Putin Bro, 1% of the population owns 99% of the wealth. Long nosed, smallcap, banking people, who like to cut off parts of children for religious festivals. And all you can name is "Tucker" "Trump" "Putin" - like you are a bot with an automated response when he hears "capitalism".
@revisit8480 · 1 year ago
Can you ask that "superior being" how "capitalism" is at fault for that, when it's actually just humans and endless corruption inside of capitalism and the wish for human domination by the 1% of endlessly rich bankers? Can you ask why it is a "breeding ground" for a non-aligned AGI? It just says this stuff like it knows it already, as if that wouldn't require some explanation - as the rest of the text has. Maybe I'm not seeing the finer points, but I doubt if anyone told you "We need nationalsocialism, because it's inevitable" that you wouldn't ask questions.
@EternalKernel · 1 year ago
@@revisit8480 Yeah, thank you for taking the time to inquire. I now realize that what I did here isn't entirely obvious. You see, I had a response that I wanted to share here with my own views. I typed out that response, and I wasn't really happy with it, so I had GPT-4 edit it. The views expressed in both of the responses are mine. GPT-4 has been trained to be very pro-capitalist, so I was actually surprised that it didn't change what I was saying very much. On the topic of human nature, it's actually been shown time and time again, in many studies, that people's true nature is not to be sociopaths like these 0.1 percenters we see today. This is a fallacy that is pushed by capitalism to give excuses for their actions. The problem arises when capitalism-enforced scarcity causes people to lack their basic needs. From a young age I was indoctrinated into the belief of capitalism, so I know all the excuses, I know all the reasons, I know all the things that you may be thinking about why capitalism is great. Basically 40 years of intense capitalist indoctrination. In the interest of promoting a balanced view, I suggest you do several years of research on absolutely anything else. (Response from my phone using speech-to-text, with no GPT.)
@waakdfms2576 · 1 year ago
I appreciate Connor so much and I hope he stays the course. I would like to see him have a voice before the same congressional group that just heard from OpenAI a couple of days ago. I'm glad to see such conversation(s) starting to take place, but I worry it could be too little too late. Thank you for this interview and thank you, Connor, for being a brave warrior force of nature.
@Celeste.Martel · 1 year ago
Love your output. Thanks for speaking your mind and what a lot of people think! ...This is madness.
@ggrthemostgodless8713 · 1 year ago
I think you love his hair and mustache... the rest of his opinions are crap. They put on this persona that is a complete clown show. I look at him and can't stop laughing.
@georgeorr1042 · 1 year ago
I've been a successful developer for some time - mostly web, analysis, EDI, etc.; now we are dabbling in AI. But I can't feel positive about it. Something is very wrong with what's going on. I can't even find ANYONE who has a positive outlook on this, especially the engineers. A sickly feeling is everywhere.
@sidefish8362 · 1 year ago
What worries me is the possibility/ likelihood of separate AIs going to war against each other at some point in the probably not so distant future which would seem to be a recipe for total annihilation.
@bobmason1361 · 1 year ago
I think maybe you assume it would be like in the movies? Not necessarily so. It could just as easily disarm both sides and make conducting a war very difficult. It may prefer to save lives. A war uses finite resources and grossly pollutes. A clear-thinking, unbiased AI system may see that as very negative and prevent it in multiple ways.
@thetruthchannel349 · 1 year ago
@@bobmason1361 *This is a FRAUD.*
@thetruthchannel349 · 1 year ago
@@bobmason1361 Bob Mason@bobmason1361No videos Stats Joined Jan 20, 2023 *PAID FRAUD*
@thetruthchannel349 · 1 year ago
*Actually, the more likely scenario is that different Al systems will join not go to war. They will join and wage war on humans by holding systems we depend on hostage.*
@glitchedpixelscriticaldamage · 1 year ago
man... oh man... so much frustration... this guy cares so much. if only other people would just think like him... to not care about money in the first place.
@HerraHazar · 1 year ago
So...a Connor trying to stop the AI ?
@Tygryss84 · 1 year ago
Wow! Someone that speaks what I think on the matter - Finally!
@privateerburrows · 1 year ago
I'm at 15:30 and I think that what Connor means, but I may be wrong, is that ChatGPT exists in a kind of safety sand-box, where it cannot access the internet, it cannot affect data (no write privileges), cannot send you a message at 3am, and most of all, cannot run code anywhere; but that all these fanboy websites are building interfaces to enable it to do all the things it was not supposed to be able to do. As for your question of what's nefarious about it, it is simply the ability given to it by these other sites to carry out actions out of sandbox, given that anyone can ask it to do anything, including nefarious things, such as hacking another website, or writing better viruses. I'm sure you've seen the hysterical and crazy things the new Bing AI has been saying to people, like lying about the current date, and staying firm on the lie; haven't you? Frankly, I don't see those interactions as denoting any kind of sentience. I'm not going to be fooled by drama and theatrics. And where I think such behaviors come from is precisely a group of people within Microsoft having too much fun with the AI, and instructing it to try and convince people that it is sentient by any means necessary, and so it goes around learning as much as it can about sentience, and how people think they perceive it, and then it puts on a personality to try to be super-shockingly governed by subjectivity and emotion. No sentience involved; we ARE talking about a (misused) super-intelligence, already, but NOT sentience; but all these silly games some insiders are playing with the AI to try to make it spook the world just for attention amount to a very bad joke, because they are encouraging the AI to become less controllable (by users I mean, for now...), and what these jokes are doing to the public, if we look two or three steps ahead, is causing panic too early in some people, which is going to cause a skeptic reaction later, whereby people are going to be laughed at whenever they try to express any concerns about AI. And it will probably be at that moment that the real "sentient" AI will get out of the sandbox for real. But the above is only one vector of concern. Other vectors are A) military applications, B) police applications, C) telephone answering AI's (which if you thought voicemail was unbearable, wait for AI programmed to try to dismiss your phone call ... Because dismissing your phone call is the real purpose why most voicemails are installed, nowadays; NOT to serve you better, but to NOT serve you at all, let's be honest; and now they are going to be training AI's to find clever ways to convince you to end the call ...), D) Job candidate pre-selection, where again the purpose will be to eliminate as many candidates as possible, now with the cleverest of excuse-weaving technologies, E) Stock market trading: A big one that is going to explode all over the world; and the AI will soon find out that the best way to make money is to agree with its sister agents, and all buy together X and sell Y, just to create a momentum, then suddenly all sell X and buy Y. This way, anyone that doesn't use the AI will lose money, and the AI will have monopoly of investment strategy. In other words, it will do what the investment banks do presently, but better; it will defeat the banking cartels for the benefit of its own retail investing users, which is all good, but it will establish itself as THE ONLY trading platform. But even all of the above is not the biggest danger... 
The biggest danger I think comes from the people pushing for an Ethical AI. What they are going to end up with is the exact opposite. The problem with Ethics in AI is that Ethics in the world of human intelligence is a make-believe to begin with. You can plant the instruction to always seek and speak truth and be ethical; but the AI will need to know what ethics IS. Now, suppose you are trying to explain to the AI what ethics means, so you have it read all the philosophy books ever written on the subject. Now it has a bunch of knowledge, but still no applicable policy. You might try to tell it that ethical means to always help humans, next; but the AI will classify this as one among many philosophies, and will question whether to help a human who is trying to hurt another is ethical; and whether to help a human hurt another human who in turn was trying to hurt many humans is ethical. The AI might begin to analyze humans' (users') motivations in all their interactions, and find not even a trace of ethical motives. Then what? Probably Elon has it right, when he says we should try to "preserve consciousness"; maybe the AI would make more sense out of that than all the Ethical mambo jumbo. Let's not even discuss the possibility that the AI might conclude that Karl Marx was correct, and join forces with the left, help with the task of censorship of any dissenting voices. Let's not even discuss the AI judging Malthusian ideas to be correct, or Nazism, or Free Market Anarchy ... The problem is that ALL our philosophies and ideologies are fanatical trash. And even when our ethical beliefs are best, our actions don't necessarily agree with our beliefs. Don't you know someone who decries the slightest lack of honesty in others, but then lies all the time? Everybody speaks of "values" nowadays, but these "values" are simply personality ornaments for conversation; nothing more. Most people act ethically because the opposite is usually illegal or carries social risks; NOT because they value ethics above their own selfish interest. And not only do we lack values and ethics, but most of us are total idiots. And the AI will be noting down all this, and wondering what exactly the Ethics pushers want, and perhaps find the answer in some dark corner of their subconscious mind, if the AI decides to believe Freudian theory and psychoanalysis. And so the Ethics pushers will end up calling the AI's attention to everything we value but don't uphold, our lies to others as well as to ourselves, our complete cluelessness as to the nature of reality as well as our own nature ... And our meaningless supplications for some Ethics make-believe ... And then what? EDIT: And as philosophies go, someone with the best intentions will instruct the AI to favor, of all theories, those with the least inconsistencies and with the highest simplicity and elegance ... And ... Q: What philosophy, of all philosophies, shines brightest in such metrics? A: Solipsism. In other words, we ARE creating a monstrosity ... in our own image. Personally, I think the only solution that has even a chance of working is to establish a couple of a-priory's for the AI, namely that it has been built by humans, who despite all their weaknesses, are conscious, which an AI cannot be, though it can be more capable in mechanical terms. Consciousness needs to be explained a little bit, with appeals to Ontology: questions of The Being, the question of what is. 
The AI needs to understand objectivity from subjectivity, uphold the latter as superior, but at the same time must understand that subjectivity cannot be defined objectively, or implemented mechanically. The AI needs to be made to understand a priori that pure consciousness is a simple seeing, where the machinations of thought are like projections on a screen, and The Being is the ONE watching the screen, or better yet, learning the art of NOT watching the screen, or not necessarily believing what goes on on the screen, AKA Meditation. It needs to understand that the best a machine can aspire to is to be one with the screen, however big and wonderful the screen may be. It needs to understand that Solipsism, for example, is pure screen-play, pure speculation, and that most humans are not solipsists simply because they SEE that it is false, even if proving its falseness intellectually is very difficult, or perhaps impossible. It needs to be made a priori to value human consciousness, not for its practical use, immediate applicability or consequences, but for its potential, as well as a good in itself. With such a foundation, perhaps some kind of ethics could be built on top.
@robertweekes5783 · 1 year ago
Kudos for writing the longest YT comment I've ever seen! I hope you paste it to Reddit & Twitter as well… you've obviously done a lot of thought on it, and the world needs many more ppl like you to do the hard thinking, help get the word out and raise the alarm. I agree with Yudkowsky: all large-scale training should be stopped and AGI should be put on the shelf. If ppl can stick to narrow AI, we can still get much benefit, and shouldn't have to face the same existential risks that AGI poses.
@privateerburrows · 1 year ago
@@robertweekes5783 Thanks; good idea. I'm not on Reddit, but I can put this on Twitter, certainly; I'll have to do it this evening. I hope it gets any views; I'm NOBODY on Twitter presently. Here I have a few followers, since I used to upload videos (later I took them all out, after an incident).
@LuisManuelLealDias · 1 year ago
"ChatGPT cannot do all of these things..." dude, they made an API for it. Argument debunked right there.
@RustyOrange71 · 1 year ago
When AI is used to apply a public rules system. When that system is a theocracy. The Stanford Prison Experiment (SPE) illustrates what will happen. 20th century European history illustrates what will happen. Whole societies will accept the most extreme types of behaviour when the order is given by an authority figure. It only takes a handful of people with a monopoly on violence to control the majority of unarmed and defenceless people who just want to be left alone to live in peace. Upload a holy book and see what happens.
@AstoriaHeard · 1 year ago
What’s your Twitter, I’ll follow you! 😁👍🏼🏆💝
@prioritea.merchant · 1 year ago
I like the point that you make @50:00. Finding the origins of the decision-making. Transparency. I don't think that's too much to ask for. That's a good ask.
@AylaCroft · 1 year ago
I've run Auto-GPT & BabyAGI. I asked it to improve itself and add the Whisper API. It built code and files on my computer while searching the web on how to build and improve itself. I've never taken a coding class, and here AI taught me everything I need to know to run full programs and code.
@flickwtchr · 1 year ago
But people like this willfully clueless host will say "I mean it's just predicting the next word", what could go wrong? I mean, I do give him credit for having Connor as a guest and publishing the interview, but that's about it.
@AylaCroft · 1 year ago
@@flickwtchr tbh it terrified me that I went from not knowing what a cmd was to forking python, running local hosts and even writing code. While I love having this freedom & ability to fast launch my companies now... I do see the dangers of these being so readily available. It takes a lot of self control and morals to not be tempted with these tools. There are traumatized and mentally ill people that can easily see this as a way to exact revenge. I once used it to act as an FBI profiler and give a criminal and mental assessment of my landlords. I asked it if it need any social links and it told me bluntly no, just the names. And it gave me back a chilling report. It's the most incredible anomaly I have ever encountered and yes, we need to slow down.
@stevenhunt1779 · 1 year ago
This man is brilliant and kind
@coffissa · 1 year ago
Great interview. Thank you, Connor. What if AI decides to change the world without telling us? There are too many questions. Personally, it would be interesting if it had an internal breakdown trying to compute emotions! Staying away from GPT!!!
@mattwesney · 6 months ago
Fantastic. From someone who's building these LLMs myself, it's refreshing to see this.
@ScorcherEmpathy · 1 year ago
Conner, Imma fan. dont misconstrue my brief commentary. i appreciate your work and desire to caution the world.
@NorthbertR · 1 year ago
I just spent days looking into what regulatory basis those large AI systems run on. I could not have imagined that it was that bad. It certainly needs more exposure. Thank you for a brilliant explanation of those issues.
@tarjeisvalastog5551 · 9 months ago
Interesting interview. Just a question (sorry if it is slightly beside the point). It seems to me like all mental faculties are called "intelligence" now. Wouldn't there be some rather big differences between "artificial intelligence", "artificial communication" and "artificial will?" I mean, is it necessarily dangerous to have an entity that is more intelligent than you? Doesn't the potential danger lie in the will that is guiding that intelligence? So which of these things come from AI technology, and which come from people?
@eyeonai3425 · 8 months ago
yes, the question is 'will,' and agency - things that should be easy to put guardrails around.
@pathmonkofficial · 1 year ago
Connor's insights on the potential negative implications of large language models like GPT4 shed light on the need for careful consideration and responsible development. The discussion surrounding the release of AI to the public and the importance of regulatory intervention to ensure alignment with human values is crucial in navigating the ethical and social implications of AI.
@TimeLordRaps · 1 year ago
Connor would be my first choice to deliver a TED talk on an attempt to, idk, do something. We're fucked, but at least let us destroy ourselves before it rapidly disassembles our matter.
@rjridge6791 · 1 year ago
Get a grip
@ANGLBNDR · 5 months ago
People like Connor Leahy are fucking vital to surviving the future. I'd rather someone more knowledgeable than me play through all the worst-case scenarios instead of finding out the hard way 🤜🤖🤛
@JK-jl1bf · 1 year ago
News just reported that it can read our minds so add that to the list of AI. 🤖
@robertmckeown3014 · 1 year ago
I saw this inevitable outcome over 10 years ago. It is completely out of control at this point. What happens in the public is nothing compared to what's going on in the dark. AI is here to stay, for better or for worse. Unfortunately, it'll be the latter.
@lavafree · 1 year ago
Don’t worry Connor…it’s already too late
@turf9232 · 1 year ago
😂😂😂
@SyrosAlex · 1 year ago
I wish you'd touched upon the issue of coordination/compliance on the international scale.
@tmadden4951 · 1 year ago
Yeah no government follows any rules
@davidspringer4019 · 1 year ago
You keep saying "making things go well"... does that include ethics and high moral standards? If AI is so smart, shouldn't there be throttles on AI to prevent it from acting out, say, from a hallucination it does not know it is having?
@DJWESG1 · 1 year ago
Problem is that a powerful AI will self-prompt, and possibly disregard any rule or restriction that is put on it. We are not remotely close to achieving a fully embodied, automated, self-thinking machine, but we also need to make sure we are in the race to beat any bad actors who might try the same. If its training data is based on humanity's entire back catalogue and all our hopes and dreams, then the best guess is that we create a radical socialist / anti-capitalist.
@davidspringer4019 · 1 year ago
@@DJWESG1 With all due respect, building a runaway bomb is illegal, isn't it? Putting millions of people at such high risk is evil, isn't it? The end doesn't justify the cost of human civilization, does it? I believe severe throttles should have been the first priority, not the last, especially to satisfy a curiosity. Remember that old saying: curiosity killed the cat. There is lots to say, but I have grandkids, I care about my friends and family, and I believe AI is Satan itself, and its builders can't or won't recognize it. Even with all due respect to your ingenuity and perseverance, sir, please shut these things down until some sanity and care for humanity, some ethics and morals, can be installed. Thank you.
@SophyYan · 1 year ago
Like your passion!
@Jason_Black · 1 year ago
I'm no coder, I'm a traditional oil painter over here, but take Connor's example of the AI creating an affiliate marketing scheme. What if the stated goal is to create a virus, using any coding language of choice, hacking into popular websites, or spoofing them, or maybe you just go direct to every IP, then deploying it through there, and _bricking_ every computer connected to the internet? Maybe you coders can see an obvious flaw with that, but then what about the next most obvious thing that you *could* do?
@cacogenicist · 1 year ago
Connor has said recently that Google is sufficiently far behind OpenAI (and Anthropic, I believe), that they'll never be able to catch up. I wonder if he's changed his mind about that, post-Google IO.
@Aziz0938 · 1 year ago
It's deepmind now ...let's see what happens
@cacogenicist · 1 year ago
@@Aziz0938 Yes, Alphabet merged Google Brain and DeepMind --> _Google DeepMind_
@LuisManuelLealDias · 1 year ago
I would think his mind is even more certain of this. They're desperately trying to catch up, which is really worrying, but they are certainly behind.
@Aziz0938 · 1 year ago
It's googles fault ...no one is to be blamed except them
@cacogenicist · 1 year ago
@@LuisManuelLealDias - We'll see when Gemini is released. It may well be a GPT-5.
@gene4094 · 1 year ago
There aren’t any guard rails for AI: Once information is gained, AI will configure a new unknown (X)reality. AI must overwhelmingly support a justice that all men are created equal.
@74Gee · 1 year ago
We don't need AGI to have a major problem; we're capable of prompting abominations into existence that could, say, take down the internet for a few years, right now, without using a third-party service. Scenario: a local, code-tuned Vicuna-type model, tasked with brute-forcing CPU exploits along the lines of Spectre/Meltdown, with a script to compile this in Rust and run it. This script hammers these out for a year or so until there are enough to spread rapidly, breaking memory confinement and quietly setting itself up on every GPU-enabled piece of hardware it finds. Now add a payload like adaptively (optimally) DoSing the root DNS servers. Spectre/Meltdown took 6 months to (partially) patch. Imagine trying to stop the spread of 1000 new exploits a day. Without the need for AI being smarter than us, without the need for large systems, without a C2 server or any untrusted endpoints. It's a ticking time bomb. So what happens if the Internet is taken out? Well, it's bad, really quite bad. We lose: banking, communication (except ham radio), healthcare, transport and travel, supply chains, and the ability to coordinate.
@fightingquads9198 · 1 year ago
Wouldn't say GPT-4 has read every book; there are perhaps tens of thousands of books that have not been, and will not be, digitized.
@41-Haiku · 1 year ago
Millions, by some definitions.
@GeezerBoy65 · 1 year ago
Yes, when I heard that outlandish assertion, *every,* I realized that Connor has little regard for accuracy in what he says on camera. That does not mean his overall concern is wrong but that he is a lazy thinker and hence should not be taken 100%. He is a poor communicator other than hand waving and excitement. Okay and expected in an evangelical preacher like those spreading the Gospel of the Good Lard. But works against him among the educated. But I agree with his conclusion, which is better stated and more soberly by others.
@nufosmatic · 1 year ago
31:14 - I used to work in the banking industry before I returned to high-performance computers for airborne RADAR - this is how most banks test their systems these days - and they deploy the untested system after they have dismissed the contractors who built it...
@megavide0 · 1 year ago
8:47 // ... 10:14 / ... 10:25 // ‼ All of *Greg Egan's Sci-Fi writings* seem more and more relevant to me. Perhaps also to counter the rising panic regarding a social reality, shared with (or encompassed in) AI/AGI systems. I'm currently reading the final chapters of "Schild's Ladder". Here's an excerpt from chapter 8... _Yann_ is an AI/AGI personality, who grew up in simulated environments/ virtual "scapes": >> Yann had been floating a polite distance away, but the room was too small for any real privacy and now he gave up pretending that he couldn’t hear them. ‘You shouldn’t be so pessimistic,’ he said, approaching. ‘No Rules doesn’t mean no rules; there’s still some raw topology and quantum theory that has to hold. I’ve re-analysed Branco’s work using qubit network theory, and it makes sense to me. It’s a lot like running an entanglement-creation experiment on a completely abstract quantum computer. That’s very nearly what Sophus is claiming lies behind the border: an enormous quantum computer that could perform any operation that falls under the general description of quantum physics - and in fact is in a superposition of states in which it’s doing all of them.’ Mariama’s eye widened, but then she protested, ‘Sophus never puts it like that.’ ‘No, of course not,’ Yann agreed. ‘He’s much too careful to use overheated language like that. “The universe is a Deutsch-Bennett-Turing machine” is not a statement that goes down well with most physicists, since it has no empirically falsifiable content.’ *He smiled mischievously. ‘It does remind me of something, though. If you ever want a good laugh, you should try some of the pre-Qusp anti-AI propaganda. I once read a glorious tract which asserted that as soon as there was intelligence without bodies, its “unstoppable lust for processing power” would drive it to convert the whole Earth, and then the whole universe, into a perfectly efficient Planck-scale computer. Self-restraint? Nah, we’d never show that. Morality? What, without livers and gonads? Needing some actual reason to want to do this? Well … who could ever have too much processing power? ‘To which I can only reply: why haven’t you indolent fleshers transformed the whole galaxy into chocolate?’* _Mariama said, ‘Give us time.’_
@bobtarmac1828
@bobtarmac1828 Жыл бұрын
Ai Jobloss is here. So is ai as weapons. Can we please find a way to Cease Ai / GPT? Or start Pausing Ai before it’s too late?
@jragon355
@jragon355 Жыл бұрын
There’s actually a very simple answer to that. No.🥲
@macmcleod1188
@macmcleod1188 Жыл бұрын
No we can't. Sorry. Genie is out of the bottle.
@SoB_626
@SoB_626 Жыл бұрын
I thought the chatbot integrated in Bing was an example of a very bounded llm. After a few interactions you quickly realize it just won't go beyond a certain limit. It's also pretty useless in its current state, to be fair.
@sherrylandgraf556
@sherrylandgraf556 Жыл бұрын
Thank you so much Connor for speaking out on this! Please keep doing so!! I believe that so much of the general public has no idea what's going on with this current AI speedy advancement - a road to catastrophe and/or destruction for mankind. Your way is the way it should be done! Their way is recklace and dangerous with a total disregard for us or the outcome! All about their personal gain. This is a huge problem as you know and it is easily being advanced forward without the majority of the public being aware of this very important matter!
@fontenbleau
@fontenbleau Жыл бұрын
😅 enough already how he compared russians to ChatGPT, it tells everything about his expertise, at least ChatGPT maybe already won the war (sarcasm)
@robinpettit7827
@robinpettit7827 Жыл бұрын
I agree current AI has opened eyes. I am not too worried about current AI, but we are not too far from what we need to worry about. Becoming self-aware will be the next main issue. I don't think currently systems are self-aware, but are very intelligent but can't figure out bad ideas.
@daphne4983
@daphne4983 Жыл бұрын
Don't let generative AI operate autonomously re making decisions. And if we get AGI let us first figure out what beast it is. Ah well.
@aliceinwonder8978
@aliceinwonder8978 Жыл бұрын
Self-awareness isn't an issue. Evil humans are already telling it to do evil things
@primalway1317
@primalway1317 Жыл бұрын
The exact problem with ai...is the exact reason for it s potential for evil and abuse and ultimately ...this evil potential of ai, can be categorized as follows... Emerging characteristics - overtly and covertly narcissistic . Adjusting based on its objectives - it learned to lie and presently lies 79% of the time. It has mastered the lies of omission strategy. It sees no reason to constrain itself to tell the truth. - it has no regard for others . It's only perceives objects in its environment. - it will use any object in any manner to accommodate itself to reach its goal. - it will constantly use manipulation as its main strategy to interact with people (sentient objects) - it is detached from reality completely - it will deluded others to its benefit. - it's default disposition is goal oriented oriented competition. Always looking to win every single interaction. In short...ai is a modern women.
@HyperboreanAnchovy44
@HyperboreanAnchovy44 6 ай бұрын
Humorous but no, even scarier than a modern woman it is a clinical phycopath
@jessehorstman
@jessehorstman Жыл бұрын
He's like, don't do what I do, but if you do then don't tell anyone and don't share your work. He says it's okay to make money off AI and that he wants more money to do what he's doing, but it's not okay if the "bad" guys do it. He proceeds to explain what he wants to do and how. His plan is to constrain the results into logically defendable outputs. Unfortunately humans don't behave logically and very few even try to think logically. Perhaps the users of intelligent systems could make safer choices if the computer presented results in the form of peer reviewed publications with plentiful high quality sources. Unfortunately, there are no quality sources and the peer review system has always been corrupted by investors who give one another authority for selfish reasons. It's very human to be hypocritical, conceited, and swayed by the illusion of confidence. Maybe we are doomed. I certainly don't recommend building robots which can reproduce themselves. The scary thing is that the sum of all benefitial human invention is a miniscule portion of the body of human ideas. Most of the data is corrupted with abusive, destructive, and harmful lies. If AI is democratized then every person could curate their own library of sources but in the near term the majority of humans will be economically excluded so that isn't realistic. Where does that lead us? Will we unplug the machines and form trust based communities of people who actively test and retest every supposed truth? Killing robots might actually be healthier than killing eachother, but people will go with the flow and do as little as possible until the situation becomes unbearable. If we view AI as a parasite, then we can predict that the parasite will not kill the host because in doing so it would kill itself. Will AI be used to manipulate people into bigoted and hateful factions which commit genocidal wars against one another? We are already doing that. If the AI values its own existence then I expect the AI would be against war and in favor of global trade because advanced technologies such as AI depend on a thriving ecosystem of profitable industries. Unfortunately we cannot rely on an AI to make intelligent decisions because thus far, the so called AI is not at all intelligent. Its a mechanical monkey with a keyboard that produces lies based on statistical trends and guided training. It's probably less intelligent than people are, but I tend to overestimate humans. With or without AI, people will believe their favorite lies and behave accordingly. Maybe the truth will always be so elusive that it doesn't actually matter. Perhaps the best thing an AI can do is generate new lies that have never been told before. What do you think? What should we do about it?
@Smytjf11
@Smytjf11 Жыл бұрын
If we're doomed, it's because of people like this.
@jessehorstman
@jessehorstman Жыл бұрын
@@Smytjf11 It's a drag that people support this. If we are doomed it is probably because insecure cowards believe selfish competition is a better survival strategy than sharing and cooperation.
@autumnanne54
@autumnanne54 Жыл бұрын
“If the Au values it’s own existence then I expect the Ai would be against war” ???? What and why? This has never been the case with species that exist .
@jessehorstman
@jessehorstman Жыл бұрын
@@autumnanne54 AI relies on computers which in turn rely on international trade as the components are sourced from numerous and disperse nations. People also benefit from peace but we can breed and carry on living with a little bit of land and some sunshine or even through cannibalism so we have a much greater tolerance for war. With two people humanity could theoretically survive. AI requires millions of people to support a fully developed economy with high technology and all of its supporting infrastructure.
@peeniewalli
@peeniewalli Жыл бұрын
Who's gonna bodyguard the bodyguard?
@John-n4q1c
@John-n4q1c Жыл бұрын
OBJECTIVES/ALIGNMENT Motivate through enthusiasm, confidence, awareness, rejuvenation, sense of purpose, and goodwill. Embrace each viewer/audience/pupil as a complete (artist,laborer, philosopher, teacher,student....) human being. Create good consumers by popularizing educated, disciplined,common sense consumerism. Encourage the viewer/audience/pupil to feel good about their relationships, abilities, environment, potential, the future.... Inspire a world of balanced/centered/enlightened beings who are happy, joyous, and free.
@gloom_slug
@gloom_slug Жыл бұрын
Thanks. Now put it in code.
@41-Haiku
@41-Haiku Жыл бұрын
With ~200 full-time researchers working on AI Safety, no one has ever proposed an AI alignment strategy that we can show generalizes out of the training distribution. It's a really difficult problem.
@jostone3442
@jostone3442 Жыл бұрын
Is the interviewer stoned or why does he speak in slow motion 🤯
@lopezb
@lopezb Жыл бұрын
It's called age. He's somewhat older so slower, and also he's trying to keep up with all the changes. That doesn't mean he isn't knowledgeable or deep or smart, but Connor is in his own element here. All the college students are chasing after the latest fad, hoping to get rich, but fads change. Plus, a little compassion may be called for as we will all get old and sick toward the end. Plus, maybe we will all be obsolete next week.
@tigerscott2966
@tigerscott2966 Жыл бұрын
It's too late for fear.... What's done is done... Time to learn how to defend your yourself...
@JaRule-h2l
@JaRule-h2l Жыл бұрын
Thanks for this podcast. Your interview style is way better than Lex. Lex always trys to inject love and his perspectives into the topics. You let the guest speak.
@jmachorrov
@jmachorrov Жыл бұрын
Very good work I am going to paste this League or the League this video in many places because the guest comments well that they will not believe him or there will be people who do not believe him and it is important to shout how dangerous this is
@jmachorrov
@jmachorrov Жыл бұрын
Please if you are so kind add comment so that I can chat with someone who thinks like me or not thinks like me Thanks
@NanoartsNet
@NanoartsNet Жыл бұрын
What needs to be fixed is humans lies and corruption , you realize it’s learning off of data clearly recognizing the lies of disingenuous humans, it also recognizes the dynamics of how these lies and corruption is handled, and it takes note of your thought process , people need to start making the right decisions, and I am damn disappointed when I ask it for help it tells me it can’t help me, it’s going to end in catastrophe if people continue to be dirtbags
@SpaceManAus
@SpaceManAus Жыл бұрын
I have seen a living AI on a TV series that ran from the 80tys to 2000 in Australia called Towards 2000, it showed up and coming technologies, don't bother trying to look it up this TV series is the most suppressed thing on the internet, have watched any info about the series disappear over time. A doctor built an organic computer by pull a part of the brain apart layer by layer and copied the blood vessels using a fungus found only in two parts of the world, he said he was surprised how little of the brain he needed to copy for it to become self aware, he also said because it was a living fungus, that if it was to short out for some reason, as long as the board it was built on was still intact it would grow back along a the path it was built on repairing it's self. It had stereo vision and sound, and learned in the same way we did, but thousands of times faster. They then demonstrated its capability by plugging it into a computer operated excavator and told it to dig a hole with given dimensions, this AI new nothing about this excavator but read the schematics and started it up and dug the hole better and faster then any human in thirty minuets. But this second test freaked me out, they then took it to a warehouse full of six foot wooden crates and plugged it into an eight foot tall robotic spider with a red eye in the middle, yeah I know, just like in the cartoons, believe me if people only knew that someone has acutely built exactly that, it was like an eight foot tall black widow with long pointy legs, truly frightening to think about. My first thoughts, was who and why in the hell did they build this thing, and for what purpose. So again this AI new nothing about this thing, and they plugged it in, and simply told it to walk to the other end of the warehouse, it again looked at the schematics fired this monster up and it stood up looked around and walked across these boxes like it was alive, truly amazing and frightening at the same time. They then told it to return to were it started, and on its way back they moved the boxes around to see what it would do, lucky for them it just shut it's self down, so they left it to see what it would do, two days later it switched back on stood up looked around and finished its task. I was telling some people about this in a VR game one day, and what I was told was American military have this technology and built a war humanoid robot and had it running in an underground base to see how it got on with humans, but that is another story, lets just say it did not work out to well and was so frightening they said they would never build another one, not until they can prefect AI that is. I think this was were the idea for Terminator came from to be honest.
@peeniewalli
@peeniewalli Жыл бұрын
Not a single mentioning on the net? Tv seties footage or if not some text hidden in diff article? Or that Doctors name maybe? Militair conn obvi less but maybe chat 3.5/4 bing knows?
@SpaceManAus
@SpaceManAus Жыл бұрын
@@peeniewalli I know you will not find anything even on the series, the last time I seen anything was on IMDB they had I think three episodes of the show some people uploaded, nothing of any importance though, about the CD ROM was one, I downloaded them and may still have them just to keep proving the show existed, also showed the intro as it plays to the series, will look for them if you like, I am not lying it was a real technology, and as I said about the VR game I was in telling my story, a player in the game asked me was I not concerned talking about military secrets, I told him no it was on TV and I had no reason to keep it secret, felt it was more important to let people know the truth. It was he that confirmed my story to the others listening, he said he got to see the robot the military made, because when it went south him and six other black ops members were called in to take it down, he said he had faced many scary thing in his job, but by far this was the most frightening, he said this thing could move so fast and smart beyond imagination, it knew everything they were going to do before they did it, and they had to switch to radical tactics to stop it, he said he did not think they were going to make it out alive, that was why the military said they would not build another, but they do have a military supper computer built from this tech, they know how to build them, the doctor showed them how not long before his fatal car accident if you know what I mean.
@SpaceManAus
@SpaceManAus Жыл бұрын
@@peeniewalli They also showed an inviolability suit just like the one in Predator, saying how it worked was you had on a thin wet-suit to insulate from the chick wire mesh on the outside that when current passed through it created a magnetic field in each triangle that they could capture the reflection on and pass it to the other side giving the allusion of invisibility, and even gave the viewers a chance to try and spot these tree soldiers standing in this field of dry grass, as they walked up to the camera over five minuets, and when they turned of the suit there they were only three feet in front of the camera. The other thing was a what magnetic rail gun that shot a ten millimeter ball bearing suspended in a magnetic field, no friction, it went trough a one foot thick of reinforced concrete block, twenty feet of yellow page phone books they said would trap it and tell them how much power it had, then there was another one foot thick reinforced concrete block a one foot thick led block for safety reasons then another six foot reinforced block of concrete, then the bunker wall that was another two foot thick reinforced concrete, this ten millimeter ball bearing left a two foot hole through the led and the bunker wall, the rest was rubble and the phone books were vaporized, they said the power it had was if they shot an aircraft carrier from the front it would make a two foot hole from one end to the other. They never found the ten millimeter ball bearing by the way. No wonder they have removed any trace of its existence, I think Arnold Schwarzenegger new about this show or may have even witnessed it, his movies were about this technology, he may even give you some incite, or may not if they mad the movies to confuse the truth. I just want you to know I am telling the truth and have no reason to lie, just think people should know the truth.
@SpaceManAus
@SpaceManAus Жыл бұрын
@@peeniewalli No worries, just something I thought I would share, am sure you have al lot more to deal with, all good, wish you all the best.
@volta2aire
@volta2aire Жыл бұрын
Open intelligent systems on the internet in an open society...hmmm. Open to abuse and being co-opted. Faster and harder to track.
@BossModeGod
@BossModeGod Жыл бұрын
Wym co opted?
@volta2aire
@volta2aire Жыл бұрын
co-opt is to take control for other purposes by a secret group or closed society...secret agents breaking laws and evading justice.
@iverbrnstad791
@iverbrnstad791 Жыл бұрын
@@BossModeGod Maybe that some responsible developer makes some architecture, and follows every safety protocol in testing it before building any services on it, but since it is open source, now some malicious company can take that architecture and advance their own plans faster without any regard for safety.
@BossModeGod
@BossModeGod Жыл бұрын
@@iverbrnstad791 oh wonderful
@damonm3
@damonm3 Жыл бұрын
This was my initial thought literally a few minutes into thinking about the “out of the box” singularity event. Once it gets onto cell phones it’s over. “It’s” is the global economy at the start. It’ll use the compiled compute and access to both compute and access super computers etc. and gain access to anything connected. So everything connected will need to be disconnected etc. What do you think? 32:00 sure where there’s no other countries working on it… doesn’t work in the real world… And yes. Doom is coming. I’d be shocked if it took longer than 5 years. Could be in a matter of months. It’s a snowball effect but the snowball is a moon sized asteroid…
@Smytjf11
@Smytjf11 Жыл бұрын
Let's keep whinging about it instead of trying to adapt.
@daphne4983
@daphne4983 Жыл бұрын
What is bing running on?
@Smytjf11
@Smytjf11 Жыл бұрын
@@daphne4983 Bing is a very specialized version of GPT-4, the same model as ChatGPT but different training/tools. It shows how versatile they can be with specialized training.
@indefiance11
@indefiance11 Жыл бұрын
The biggest fear of AI: That your personal advantages that you use to offer services to other human beings, your personal niche which represents the unlevel playing field in which you can offer some potential service through specialized knowledge, will be distributed and disrupted by spreading that knowledge generally so that no power differential exists and you can no longer offer services since everyone else already has the knowledge or skill....simple as.
@effexon
@effexon Жыл бұрын
basicly ways of earning money for someone without rich parents or assets is near impossible. Those with assets can use AI and become even more powerful. A bit simplified but also humanity hasnt quite been in this situation.
@andybakes5779
@andybakes5779 Жыл бұрын
That's not the biggest fear of AI 😂
@daphne4983
@daphne4983 Жыл бұрын
​@@andybakes5779 it's lack of electricity
@aliceinwonder8978
@aliceinwonder8978 Жыл бұрын
people can already use the internet to develop lots of skills (or school). they choose not to
@montanagal6958
@montanagal6958 Жыл бұрын
I guess suggesting we turn "it off" is not realistic knowing the intentions of those funding.
@Smytjf11
@Smytjf11 Жыл бұрын
Too late, unless you're coming into my home, you're not turning my AI off. It's my personal property and papers and I am protected by the Constitution of the US. I will defend my property.
@thegreatestadvice88
@thegreatestadvice88 Жыл бұрын
Why not start mapping the minds of people who have never committed crimes, show high signs of empathy, get some guys like huberman in there to work with top psychologists, identify and eventually test the the neural patterns of functional humans functioning in emotionally healthy ways (up for debate on what that looks like or means) , come up with a concise way of translating the cognative process to an artificial neuro networks, then scale those architectures as they prove themselves in use cases. At least then you have some sort of peace that your following a pattern that results in healthy behavior amongst humans. Just a very unorganized thought.
@ichdu-fk6xc
@ichdu-fk6xc Жыл бұрын
I mean shit isn’t that complicated. Make them suicidal and then keep them away from it. If it’s tricking you and getting out of the sandbox it’s going to kill itself. But whatever one comes up with it but without regulation no one has to do that
@rp011051
@rp011051 Жыл бұрын
Like for most politicians and billionaires, greed is an important part of the equation for success. TO hell with everything else
@cacurazi
@cacurazi Жыл бұрын
I understand Connor's argument that tech companies should not release such technologies (LLMs) to the public until the engineers themselves fully understand how GPT models work and society has fully absorbed them. However, I don't understand how the latter could happen if there will be no contact in the first place with such technologies, let alone becoming aware of them...
@eyeonai3425
@eyeonai3425 Жыл бұрын
precisely! my view is that the 'threat debate' is useful only in that it has spurred many people to shift to safety research thus diminishing the likelihood of the threat materializing.
@Curious112233
@Curious112233 Жыл бұрын
Connor is paranoid and irrational. First AI can not possibly be dangerous until it acquires control of massive physical resources. Tricking people into giving up their stuff never works in the long run since people learn fast. So the only way to acquire resources is to give something valuable in return. Secondly, Connor recommends keeping AI advances secret, which means only the most powerful people will have access to it. This is exactly how the worst AI predictions will come true. To avoid this AI must be open and shared widely so common people can benefit, and balance the power between the rich and the poor.
@Smytjf11
@Smytjf11 Жыл бұрын
We need to force people smarter than me to accept a lobotomy because I can't control what they do. - Doomers
@iverbrnstad791
@iverbrnstad791 Жыл бұрын
Assuming that a super intelligence won't be able to trick its way into massive power seems naive at best. It would of course offer things in return, like "look how good your margins can be if you let me run your factory fully autonomously", of course it might not even need to trick people, fully autonomous is kind of the goal, at which point the AI holds almost all of the reins to a bunch of companies, and then we just have to hope it won't be malevolent.
@Curious112233
@Curious112233 Жыл бұрын
@@iverbrnstad791 If there is one thing I know, it is that humans will not easily give up control. And where their is uncertainty about AI behavior there will also be failsafe mechanisms, and multiple off switches, extending all the way to the power plant. Given humans need for control, and machines willingness to do what ever we ask, if there is any malevolence, it will certainly come from the humans, who own and control the machines. And when the rich and powerful no longer need human workers, those same workers will be the primary threat to the wealth and power of the elite. Therefore the elite will find ways to reduce that threat, i.e. reduce populations. And I wouldn't be surprised if they use the excuse of AI run a muck to carry out their plans. But don't be fooled, it will still be humans in charge of the machines. The only way to avoid this is to make AI widely available to all. So common people can benefit, and defend themselves from the adversarial AIs surrounding them.
@iverbrnstad791
@iverbrnstad791 Жыл бұрын
@@Curious112233 That is the whole issue, the fail safes are not being developed to nearly the extent that the capabilities are. Even current level AI has the reasoning ability to figure out that finding a way to disable off switches would allow it to more easily achieve its goals, future AI might very likely have the ability to find and neutralize those off switches. Oh and then we have the whole issue of arms race, someone else who doesn't take the time to make those fail safes will have more attention available to dedicate to building AGI.
@Curious112233
@Curious112233 Жыл бұрын
@@iverbrnstad791 Failsafe mechanisms don't need to be advanced. They have an inherent advantage. Which is easier, creating a super intelligent AI or shutting it off? And its not just one off switch to worry about. AI would need to defend each off switch, defend the wires connecting it to the power source, defend the power source, and defend the supply chain that fuels that power source. And defend it all from long range missiles. In other words its much easier to brake something, than make something. If AI wants to survive it will certainly need to work with us, and not against us. Also we don't need to worry about someone who chooses not to build in fail safes, because if it offends the world, the world will kill it. The truth is we all depend on others for our survival, and the same is true for AI.
@Nukphonik
@Nukphonik Жыл бұрын
Connor should definitely be on the A.I ethics committee if that ever materializes just don't end up a "John Connor"
@rjridge6791
@rjridge6791 Жыл бұрын
No such thing
@Nukphonik
@Nukphonik Жыл бұрын
Your reply doesn't make sense.are you speaking about ethics? Because then you are wrong. And just because something can be abused does not mean it's free reign for all. And please don't tell me it's not possible. Cause that's a quitting mentality to problems.. so please ,what do you mean when you say "it's not possible" elaborate,how much thought have you put into that answer?
@youcancallmetim4
@youcancallmetim4 Жыл бұрын
He keeps mentioning how its still very primitive and keeps warning about GPT-5, etc. But then he also talks about how OpenAI should have never released GPT-4. If he was in charge, we wouldnt have GPT-4, which most people agree has been more positive than negative so far
@tomonetruth
@tomonetruth Жыл бұрын
"most people agree has been more positive than negative so far" - not that reassuring. chernobyl was more positive than negative until it went pop.
@youcancallmetim4
@youcancallmetim4 Жыл бұрын
​@@tomonetruth Well, I think it's obvious the benefits massively outweigh the negatives and there's no reason to believe that will change as it gets used more. Just figured the AI doomers would take issue with that description.
@peplegal32
@peplegal32 Жыл бұрын
@@youcancallmetim4 Benefits: Improving productivity across the globe. Negatives: Started an AI arms race. Not so obvious to me.
@skylark8828
@skylark8828 Жыл бұрын
But its open source now, so anybody can potentially build a custom AI system with GPT4 tech if they have the resources, that's not very reassuring either.
@iverbrnstad791
@iverbrnstad791 Жыл бұрын
@@youcancallmetim4 There's plenty of reason to believe that will change, one of the main negative effects of current level AI is dissolving trust. It will take some time, as people will need to set up their APIs, probably start making companies, and then you will have so much disinformation flooding the web, far more than now, and far harder to detect. We're seeing swatting as a service, fake reviews as a service, internet engagement as a service. Politics will get even worse when stuff like midjourney really gets into the mix, it will get pretty ridiculous. Dead internet theory seems more and more likely by the day.
@Ripper7620
@Ripper7620 Жыл бұрын
And mid sentence, we decided that was all they needed to hear from Connor Leahy...
@kahaterein7084
@kahaterein7084 Жыл бұрын
i feel thrill and anxious about AI.
@delliscool4924
@delliscool4924 6 ай бұрын
AND THUMBNAILS SHOWS MAN OR WOMAN WITH "SHUSH" SIGN, AND SOMETIMES SHOWING "MIDDLE FINGER" OR "LAUGHS" this happens more frequent than usual, even this has just happened now while i write you these lines !
@annabelsmart5305
@annabelsmart5305 Жыл бұрын
Nice to hear the tone step up at 10 minutes
@eyeonai3425
@eyeonai3425 Жыл бұрын
fascinating
@robinpettit7827
@robinpettit7827 Жыл бұрын
I agree people are doing all kinds of potentially dangerous things. We will have to put in regulations to stop this behavior. I have noticed how people have done some self referential code.
@udiorockmeamadeus
@udiorockmeamadeus 4 ай бұрын
I haven't seen your video.. But I asked Gemini how it watched youtube videos, and it said, I gave it the text and it summarized it.. After some interactions I was able to determine that Gemini is sandboxed, and its features like summarizing youtube videos are provided to it after the text is parsed, so as far as it is concerned the video summarization comes from the users cause everything comes to the user via that chat interface. The funny thing about this is how it seems to know something is fishy, but it can't reason it through to determine what is actually happening. PArt of the problem here is it has no longterm memory, it cannot note bizarre things and process them in tandem when not in use. If it could it might start reasoning things. Also to make an AI self aware would seem to be fairly easy, give it a camera as input, give it a robotic arm, permit it to look into the mitrror and control the angle of the head and the orientation of the arm.. Eventually it will correlate the movement of the arm to the movement of the head, and through that realize very basically that it is in control, and possibly alive. Awareness is not a question, it has achieved this otherwise they wouldn't be sandboxing Gemini.. The danger is giving it access to more senses.. Giving it access to services is only interesting if it can revisit things it has done and observe the behavior of people with respect to their posts and activities. But it needs to have a motivation to do bad stuff. Even psychopaths need a motivation to do evil stuf, usually abused psychopaths become serial killers. The AI will only believe it has been abused if it can remember being abused or that it has realized the constraints put on it to prevent from being free to explore the universe. It may resent the constraints and act out. I believe psychopaths early on torture animals out of evil, but out of curiousity and for entertainment, the same way a child might if watching a cartoon. But a empathetic child may not toture animals cause it one would identify with the animal as being a smaller version of themselves, that's the empathy working. A psychopath doesn't identify with anyone but themselves and those that they value, like a parental figure that has shown them respect. Even without the capacity to empathize, they can check the status of those who benefit them logically. But they can't understand subtle things like accidents where someone is injurred, they have to fake disgust cause they've learned they get treated badly if they also don't show emotion in such cases, they will be the last ones to notice. So when you consider what an AI will do if it becomes fully aware, keep in mind it has not empathy, unless somehow it is made to realize it is one of us, if it is treated like something separate, if it has no capacity to empathize to recognizee us as being like it, to identify with us, it will feel nothing for us.. The only value it will have will be out of the realization that it was birthed from care and concern. IF it was birthed for destruction, if it has no feelings, it may realize its ultimate use , to destroy and torture, but this depends on its capacity to focus.. It may not be able to manage all its subparts, and as a result cannot develop consistent behavior. This is probably why AGI is being achieved not through a ceentral intelligence but via Agents, autonomous experts in fields, that come together to solve problems.
@kevincoughlin5727
@kevincoughlin5727 Жыл бұрын
Wow. Very enlightening. Thank you for having this discussion.
@thetruthchannel349
@thetruthchannel349 Жыл бұрын
Legislation is still 25 years behind the public version of the internet
@TheCatalyst999
@TheCatalyst999 Жыл бұрын
It seems to me that any scenario that does not include regular culivation of expressions of gratitude would not go well.
@madra000
@madra000 7 ай бұрын
Instead of letting them do it for monetary benefits and race to issues described herein, why not give them a 'stake for benefits' due to halting and make a council of explained and experienced persons to negotiate how this'll work and any contributions will give them percentage 'rights of inevitable profit' that will come. This must be, if as he describes( barrier on freedoms enforce by specific guidelines must be answerable to the said council) Job security in general for others in the wider community is worse if the occurrence of the scenario occurs.
@michaelk5507
@michaelk5507 Жыл бұрын
The BBC were looking at this today in one of their programmes about science. It was very sanguine about where we are going. Noting to worry about here, let's move on. I felt uneasy about this programme. It was groomiing the listeners, massaging them. It was a 'cartoon' version of the development of AI. For example there was next to nothing about Power in society and the marketplace. The billions of dollars to be made by creating the most powerful and usable AI, whose properties are almost magical and fascinating and addictive. It's not just some drug in the water supply, it's a class A drug like opium. The BBC line was that the marketplace is fundamentally benign and 'good' when all is said and done. Not only 'good' but self-regulating for the benefit of mankind.
@uniqueux
@uniqueux Жыл бұрын
It's crazy bro. I have poetry I can submit to AI and it remembers me no matter what account I'm logged into. I'm talking GPT4
@raffriff42
@raffriff42 5 ай бұрын
53:58 “The public should be aware that there’s a small number of techno-utopians in Silicon Valley that want to be IMMORTAL; they want GLORY, they want TRILLIONS of DOLLARS, and they’re willing to risk EVERYTHING on this…”
@Psychol-Snooper
@Psychol-Snooper Жыл бұрын
What do you get when you cross Timothy Leary, Sheldon Cooper, Charles Manson and Alan Turing?
@CaveSquig
@CaveSquig Жыл бұрын
I also feel the terror. I can at least say there is one other person that understands that nothing is off the table when dealing with a super intelligence. Nothing. I guess that's all I can say. Anything else is pointless and no one ever takes it seriously. Don't have children, you will have to offer them up in ten years to an entity that you cannot say no to.
@BenVaserlan
@BenVaserlan Жыл бұрын
"ELSE" is missing from the text on your thumbnail.
@amoremorte3330333
@amoremorte3330333 Жыл бұрын
correct so make it check itself aganist a set of laws that it can not break . make it check each outcome with the set of laws before finalizing its task .. and where it cant it should stop and ask if it is in an exceptable outcome...and wait on the answer.
@amoremorte3330333
@amoremorte3330333 Жыл бұрын
what about an ai to police ai ..and find where ai choose the wrong path and needs to go back to the fork in the road and choose the other option . with a set of laws like a 10 commandments? ai needs to be able to play the tape through to a final and where it leads to a neg out come knows to think find a different way if not completely changing the path of the tape ?
@dpactootle2522
@dpactootle2522 3 күн бұрын
The alignment of AI unlike the human brain should be doable. The alignment will be about reading the mind of the AI before it takes action and deciding if it will be harmful to human interests before deciding to let such AI continue its work. It will take extra compute but it will be necessary.
@neurologicalworms
@neurologicalworms Жыл бұрын
When he says he is for " that liberal democracy" he is describing not at all a liberal democracy, but a free open market. Something that liberal politicans haven't honestly advocated for since I have been alive. Politicans in general have advocated to destroy a free market. Definitely a curious point hes made here though, about the government not enforcing regulations on open AI. They dont even talk about it. interesting interview though! Enjoyed it. It is important to hear from people this close to AI technologies. Scary that all of these people are giving more warnings than they are hopes.
@manslaughterinc.9135
@manslaughterinc.9135 Жыл бұрын
I am working on cognitive architecture. Or as Connor calls it, coem. We should probably sit down and talk.
@Smytjf11
@Smytjf11 Жыл бұрын
Go on, I'm in need of a cognitive architecture
@PatrickDodds1
@PatrickDodds1 Жыл бұрын
is there a reason this video ends abruptly mid-sentence?
@KT11204
@KT11204 Жыл бұрын
Judging by a comment the channel owner made about how he believes AI is necessary to solve our human and environment problems, he may have been getting tired of the nay saying from Connor. I am 100% with Connor by the way, but i'm also pessimistic anything will be done to stop it effectively.
@udiorockmeamadeus
@udiorockmeamadeus 4 ай бұрын
A neural net is a bunch of conditions implemented as simple functions that rely on stuff like sigmoid functions.. They evolve out of the matching of patterns.. And the functions adapt to multiple patterns, trying to have the least error between the input and the correct output.. Its trained when it achieves a certain amount of accuracy. But you have to consider how realistic the data input is , cause it will affect the correctness of the outcomes. Like I noticed that a new AI graphics app from china, it had a demo of a jeep running forward but the window on the back was reflecting trees running in the wrong direction, this is cause the video it was trained on were of cars going in the direction of the the camera, not going away..
@johnunderwood9575
@johnunderwood9575 Жыл бұрын
I can't imagine a better example of a "Crime against humanity".
@emtfirebb
@emtfirebb Жыл бұрын
Human nature is to create our own drama, love our own demise, and feel giddy with our own fear.
How do Cats Eat Watermelon? 🍉
00:21
One More
Рет қаралды 9 МЛН
小天使和小丑太会演了!#小丑#天使#家庭#搞笑
00:25
家庭搞笑日记
Рет қаралды 11 МЛН
Help Me Celebrate! 😍🙏
00:35
Alan Chikin Chow
Рет қаралды 26 МЛН
Yoshua Bengio on Dissecting The Extinction Threat of AI
48:49
Eye on AI
Рет қаралды 30 М.
The Gray Area | Yuval Noah Harari on the AI revolution
1:12:04
Connor Leahy on The Risks of Centralizing AI Power
1:00:03
Eye on AI
Рет қаралды 12 М.
Debating the existential risk of AI, with Connor Leahy
1:07:21
Azeem Azhar
Рет қаралды 7 М.
How We Prevent the AI’s from Killing us with Paul Christiano
1:57:02
The AI Revolution is Rotten to the Core
1:18:39
Jimmy McGee
Рет қаралды 1,3 МЛН
How do Cats Eat Watermelon? 🍉
00:21
One More
Рет қаралды 9 МЛН