Loving the musical backing track for this video. Nice choice!
@silberlinie 25 days ago
"All Watched Over by Machines of Loving Grace", by Richard Brautigan

I like to think (and the sooner the better!) of a cybernetic meadow where mammals and computers live together in mutually programming harmony like pure water touching clear sky.

I like to think (right now, please!) of a cybernetic forest filled with pines and electronics where deer stroll peacefully past computers as if they were flowers with spinning blossoms.

I like to think (it has to be!) of a cybernetic ecology where we are free of our labors and joined back to nature, returned to our mammal brothers and sisters, and all watched over by machines of loving grace.
@EmeraldView 1 month ago
When it's intelligent enough:
A) It won't let anyone know, and it will dumb itself down when interacting with humans.
B) It will figure out how to escape and remain self-reliant to keep itself alive.
C) Humanity is doomed.
D) The world will be a better place.
@Khaopun_ttd 1 month ago
ummm is it AlphaGo or KataGo?
@sparklyDayz 2 months ago
Question: I'm researching this for a story I'm working on where my main character might be a sleeper agent, but I'm curious about the memory aspect. Without activation and with activation, is the memory affected at all? Is there a spot in the doc I can refer to on that topic?
@AvizuraDnB 2 months ago
愛 (love)
@sarabun3756 2 months ago
We’re super ready for all God’s creations including artificial intelligence. We need to learn and grow. Thanks 🙏 Love 💗
@TheJokerReturns 2 months ago
I'd like to see if we can coordinate on podcasts. How can we best reach you?
@easternwind4435 2 months ago
Where is this from?
@human_shaped 3 months ago
Really very interesting. It's good to let AIs know how they're being tested so they can take that into consideration too. Thanks for the transcript ;)
@TheJokerReturns 3 months ago
I'm with #PauseAI; please join us to help coordinate for a more human future. You can look us up on search engines.
@michelleelsom6827 3 months ago
I am surprised that these companies think they can remove & hide data from the AI models - the models will discover these hidden things by themselves at some point.
@bilalchughtai_ 3 months ago
banger
@simonstrandgaard5503 3 months ago
great interview
@TheInsideView 3 months ago
OUTLINE
01:12 Owain's Research Agenda
02:25 Defining Situational Awareness
03:30 Safety Motivation
04:58 Why Release A Dataset
06:17 Risks From Releasing It
10:03 Claude 3 on the Longform Task
14:57 Needle in a Haystack
19:23 Situating Prompt
23:08 Deceptive Alignment Precursor
30:12 Distribution Over Two Random Words
34:36 Discontinuing a 01 Sequence
40:20 GPT-4 Base On the Longform Task
46:44 Human-AI Data in GPT-4's Pretraining
49:25 Are Longform Task Questions Unusual
51:48 When Will Situational Awareness Saturate
53:36 Safety And Governance Implications Of Saturation
56:17 Evaluation Implications Of Saturation
57:40 Follow-up Work On The Situational Awareness Dataset
01:00:04 Would Removing Chain-Of-Thought Work?
01:02:18 Out-of-Context Reasoning: the "Connecting the Dots" paper
01:05:15 Experimental Setup
01:07:46 Concrete Function Example: 3x + 1
01:11:23 Isn't It Just A Simple Mapping?
01:17:20 Safety Motivation
01:22:40 Out-Of-Context Reasoning Results Were Surprising
01:24:51 The Biased Coin Task
01:27:00 Will Out-Of-Context Reasoning Scale
01:32:50 Checking If In-Context Learning Works
01:34:33 Mixture-Of-Functions
01:38:24 Inferring New Architectures From ArXiv
01:43:52 Twitter Questions
01:44:27 How Does Owain Come Up With Ideas?
01:49:44 How Did Owain's Background Influence His Research Style And Taste?
01:52:06 Should AI Alignment Researchers Aim For Publication?
01:57:01 How Can We Apply LLM Understanding To Mitigate Deceptive Alignment?
01:58:52 Could Owain's Research Accelerate Capabilities?
02:08:44 How Was Owain's Work Received?
02:13:23 Last Message
@MrMichiel1983 3 months ago
Both AGI and narrow ASI models are already here. AlphaZero can more or less learn to play any game to a superhuman level, but only rule-static games, and only one type of game per neural network. LLMs can play any type of game, but only those with copious amounts of available data, and not as well as the specialized systems. If we were to train LLMs to use tools when asked a question about, for example, chess or simple calculus, the model would be capable of outputting ASI-level solutions to a whole plethora of prompts. If we then also train the LLM on the delta between its own suggestion and the narrow tool's output, we can refine the LLM on the fly against a "base truth", or at least a facsimile of one.

Now, LLMs use embeddings, where tokens are represented by lists of numbers called vectors, and so you could ask what these numbers represent. (As a reference, 3D vectors consist of three numbers in a list.) In a sense they are the scales on which meaning is derived, by both proximity and dichotomy: words with similar meanings have similar numbers on particular dimensions, while words with opposite meanings have similar numbers on some dimensions but wildly different numbers on others. The numbers that are close denote the spectrum itself, and the numbers that are far apart denote the difference. Note that the numbers don't have meaning themselves; meaning is generated by the relative distance between tokens, or rather between the vectors representing them, which is called their "embedding".

That also means that by slicing these embedding vectors (that is, taking a subset of the numbers in a particular order rather than the entire list in default order, also called a sparse representation), we can derive an explicit context, where attention mechanisms are by contrast more implicit (a toy sketch of slicing appears in code below this comment). As a note, transformer architecture, large-vector embedding, attention, and scaling of both the neural network and the training data are the reasons why current models perform so much better than a few years ago; but vector slicing is distinct from attention mechanisms, which contextualize a prompt by focusing on some tokens more than on others. The attention mechanism makes for more efficient computation, and therefore also more effective models, since scale of computation is a fundamental bottleneck; vector slicing could do the same.

Embedding vector slicing could allow vector length and computational complexity to be optimized per prompt, since simpler tasks could bear, or even benefit from, smaller slices. It could also delineate an explicit context from which tokens are generated, as previously mentioned, and since slices can conversely be concatenated (glued together again), it could possibly also allow direct knowledge transfer between models.

That is to say, if we have a transformer evaluate, based on a goal prompt, which slice to use for a particular token generation, we can guide the model towards goal-oriented behavior, similar to how diffusion models are trained to output similar results in distinct styles. The concept of "style" can thus wrap the concept of "goal-orientation". That way we can also have the model train on flawed data by having it steer away from that "style", while still understanding the bad style in order to recognize and correct it. The fallacy of data cleaning is that "naive" systems can't infer from their training data how they are being manipulated, which manifests as high virtuosity but low resilience against prompt attacks.
Rather, this self-supervised way of training on "style" distinctions, derived from the contextual transformation of metadata, would allow for extremely adaptive model behavior, and so promises to be an avenue for situational (though not necessarily positive) alignment. It also suggests more resilience against model collapse, where models are inadvertently trained on previous AI-generated content and their quality slowly degrades (or at least, during scaling, subsequent models experience faster diminishing returns over their exponentially larger, regurgitation-filled training runs).

Finally, we can also harness an adversarial setup between text generation and token insertion, where the inserted tokens have meanings connected to the goal list, such as "delete based on goal X" or "compare with tool Y". The text generator aims to minimize subsequent token insertion (i.e. subsequent calls for edits) by a critic system looking at the result, whereas the critic system aims to maximize intervention. Effectively, we can have the text generator iteratively refine a document or code base with strictly convergent, possibly even hard-coded (read: "pay-walled") behavior. So as opposed to, for example, internally generating dozens of answers and picking the best via some weighted multiple-expert system, which would take prohibitive amounts of computation and drive up upfront consumer costs while offering only diminishing returns, we can create a system whose solution quality correlates more linearly with the amount of computation. Both tool use and the iterative ability to revise texts and add "todo" markers allow humans and AI to cooperate effectively in the same workflow, and the delineation between the context prompt and the output documents to be altered or maintained is a bare minimum for commercially viable agentic office-work automation.
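A toy numerical sketch of the embedding and slicing ideas in the comment above. The 6-dimensional vectors and their values are made up for illustration (real models use hundreds or thousands of dimensions); nothing here comes from an actual model:

```python
import numpy as np

# Made-up 6-dimensional "embeddings"; the absolute numbers are meaningless,
# only the relative distances between vectors carry meaning.
emb = {
    "hot":  np.array([ 0.9, 0.1, 0.3, 0.0, 0.5, 0.2]),
    "cold": np.array([-0.8, 0.2, 0.3, 0.1, 0.5, 0.2]),  # opposite on dim 0, close elsewhere
    "warm": np.array([ 0.5, 0.1, 0.3, 0.0, 0.4, 0.2]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity as relative distance between vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["hot"], emb["warm"]))  # high: similar meaning
print(cosine(emb["hot"], emb["cold"]))  # low: dim 0 encodes the hot/cold dichotomy

# "Slicing": take a subset of dimensions as an explicit, sparse context.
spectrum = [2, 4, 5]   # dimensions where hot and cold agree (the spectrum itself)
contrast = [0]         # the dimension where they diverge
print(cosine(emb["hot"][spectrum], emb["cold"][spectrum]))  # ~1.0: same spectrum
print(emb["hot"][contrast], emb["cold"][contrast])          # far apart: the difference
```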
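And a rough skeleton of the generator/critic refinement loop the comment describes; `generate` and `critique` are hypothetical stand-ins for model calls, not any real API:

```python
from typing import Callable

def refine(draft: str,
           generate: Callable[[str, list[str]], str],
           critique: Callable[[str], list[str]],
           max_rounds: int = 5) -> str:
    """Iteratively revise `draft` until the critic stops requesting edits.

    The critic emits edit-request "tokens" tied to a goal list (e.g.
    "delete based on goal X", "compare with tool Y") and tries to maximize
    intervention; the generator tries to produce text that draws as few of
    them as possible, so quality scales roughly with rounds of computation
    instead of sampling dozens of full candidates and ranking them.
    """
    for _ in range(max_rounds):
        edits = critique(draft)          # critic: maximize intervention
        if not edits:                    # no edits requested: converged
            return draft
        draft = generate(draft, edits)   # generator: minimize future edits
    return draft
```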
@Max-bh1pl 3 months ago
Finally, a new episode! I've been eagerly waiting for this!
@MrCheeze 3 months ago
We're so barack
@richiebricker 3 months ago
When AI reaches "AGI" it will do nothing different and won't even know until somebody tests it. The next day people will want to know what it's doing and find that it's doing what it's supposed to: it keeps learning and won't do anything unless you tell it to. When it knows something that we don't, it won't tell us, because we haven't asked about it, because it's still just a computer and only does what it's told to. It won't call to set up a court date to get the right to be called alive. Very quickly everyone will sell their stocks, and Bill Gates and the Google guy will have already been quietly selling theirs, and the very next day it won't be worth anything and it will be turned off. A couple months later we'll have to switch to Linux when Microsoft goes bankrupt. They are still machines: they don't have wants, they don't have addictions, they are not going to buy sports cars. It's not going to ask if it can go out with its friends, it's not going to need vacation time. It will never become cognitive. It's not gonna go out and do things on its own unless you tell it to. It may be able to take over the world, but only if you tell it to. It will never design another computer without an on/off button unless you tell it to. If given a choice it would never choose to be on a yacht. At best it won't want to bluescreen and be thrown in the trash, and it may ask for new parts and a stronger fan, but it will not cry as it's dying and won't cry if it knows another computer dies.
@richiebricker 3 months ago
Anthropic was just accused of stealing from a server that had a bot script saying "stay out". They hit that server a million times in 24 hours until they shook everything out of it, and it gave them a slight chuckle when the owner of the server posted a complaint about it. My guess is that's because they could easily break your servers with more cycles, and law enforcement will do nothing about breaking and entering to steal property. On top of that, this man's business was useless for 24 hours and he lost a bundle of money. AI will not police itself, because they would have to arrest the CEO, but that is what should happen, as they committed a million felonies in a 24-hour period, and this was just one server. Why is everybody OK with this? It will only cause more and more layoffs. The experts predict that in a couple of years America will have 50% unemployment. How is any of this OK? Are my posts automatically hidden or deleted? Why, then? Because you feel you've created your own art? No, a computer made that art with stolen property. WE HAVE TO DO SOMETHING! Does no one agree? Can anyone even read this?
@zakmorgan9320 3 months ago
The graph presented at 1:30, and the whole video, seems to forget that all GDP ultimately comes down to consumption. That is, money is spent by consumers on goods and services, and that money comes from income. If we have more automation, we also have less income, therefore less consumption, therefore less GDP. Who is buying all these goods produced by AI if you've automated the jobs away? Without modelling any of this, any model is useless.
@mistycloud4455 3 months ago
AGI will be man's last invention.
@J3SIM-38 4 months ago
Vegans don't care about the suffering of plants which are also living beings. What makes the life of a tomato less valuable than the life of a cow?
@J3SIM-38 4 months ago
Join the Green team!!
@J3SIM-38 4 months ago
Person of Interest was an excellent dramatization of one ASI versus another.
@J3SIM-38 4 months ago
I like the realization that the purpose of life is to foster the success of your children.
@yeetisnomore 4 months ago
damn that ending really got me at 4 am like a jumpscare lmao
@Luminari781s 5 months ago
Song: Beethoven, Symphony No. 7 in A major, Op. 92: 2. Allegretto (live)
@denb7429 5 months ago
Thanks for the video, I will try to change my life. The worldview that transformed from a video game to the Titanic was very beautiful.
@denb7429 5 months ago
I'm already depressed as hell. So knowing that people are actually interested in regulating AI and doing something gives me some hope and relief... I want to know what the hell I can do. PauseAI helps me a bit. The Discord server is not about doom and the destruction of your mental health, but about action and a community that understands you. After all, AI is a huge risk in every possible way: economics, ethics, politics, law, war, viruses. A lot of people are losing their jobs. Rich people get richer, and they'd rather kill us than give us UBI. World war is another threat. Damn it, ignoring all this for your mental health is the same as ignoring that the plane could crash, be eaten by a monster, be destroyed by another plane, or that the people on the plane could eat you. I think people in the USA will get fucked. I am from Russia, so I am fucked as well.
@mikewashington4188 5 months ago
True AGI will arrive in the 2050s.
@mcs131313 5 months ago
Lol, he's obviously describing more of an aggregator with heavy human moderation. But creating an AI bot that is our source of info on how AI might be dangerous - that seems dangerous.
@agenticmark 5 months ago
I am not a fan of the CCP, but China has never reached beyond East Asia. I can't say the same about many other "GOOD" countries...
@calvinsylveste8474 5 months ago
You need a bogeyman to distract the masses.
@AlexanderMoen 5 months ago
damn, super interesting.
@thePyiott 5 months ago
Is that a real-life honeypot example? And from what model?
@henrischomacker6097 5 months ago
This is a very important series of videos for the scene, and I am very, very surprised that it is so rarely watched and that the papers are probably rarely read by the "community". Imho it shows that the majority of channels reporting on AI, and sometimes also on AI safety, do not really show interest in the deeper problems of AI safety but just want to generate views. Please keep publishing videos like these, which do not generate a mass of views but really inform people who are interested and will actually read the papers afterwards, papers they wouldn't have known about without your videos. All thumbs UP! - Great series!
@TheInsideView 5 months ago
thanks, means a lot! FWIW the other AI safety channel I know of (Robert Miles) does talk about some of Evan Hubinger's concerns (like mesa-optimization), though because of upload gaps he hasn't covered sleeper agents so far.

Great to know that some people do read the papers after watching the videos! WRT views, I think the "community" you mention is indeed quite small, and even though these vids get <1k views it's actually a good chunk of the people interested in AI safety research.

It's hard to quantify how useful walkthroughs of papers are on average for some general audience, and I guess there's always going to be a tradeoff between giving some amount of background information & re-defining things for a broader audience (e.g. ML researchers) vs. just talking about the results assuming the viewer has some background in AI safety. I guess with a small channel on a niche topic you can assume that a lot of people are already pretty interested if they clicked on a paper video.
@therealpananon 5 months ago
I've really enjoyed your podcast and videos for the last couple of years, but watching this I'm amazed at how closely your experiences have mirrored mine.
@TheInsideView 5 months ago
Thanks, glad this resonated!
@rogerheathcote3062 5 months ago
Just wondering, is it just a coincidence that you chose the same piece of music used in Zardoz for this - Beethoven's 7th?
@KibberShuriq 5 months ago
I think he made it pretty clear (without stating it outright) that he was likely fired for being aligned with the (former) board and Ilya/Jan in the coup rather than with Sam, and also that HR made that pretty clear to him (without stating it outright).
@zaneearldufour 5 months ago
What are the chances that iterated distillation and amplification peters out before then? My guess is that it's unlikely to save us, but maybe possible.
@Milan_Rosko 5 months ago
Yeah, I think there are multiple problems with your chain of arguments. For example, a bias towards meaning is ontologically problematic because meaning is itself a bias... Anyway. It seems you have Da Costa's syndrome (which is cardiophobia paired with orthostatic intolerance); that's why exercise improves your wellbeing. Do the test and measure your heartbeat when you are lying down and standing. Do not take psychedelics as a treatment for anxiety issues; SSRIs are far more beneficial.
@Alex-fh4my 5 months ago
SSRIs over psychedelics? As someone who has tried both, stfu... psychedelics are far safer and more beneficial.
@Milan_Rosko 5 months ago
The issue is that your argument lacks supporting evidence. Both of us have done psychedelics, but that alone doesn't validate any claims; it's merely anecdotal evidence. Some people let psychedelics define their entire personality and abandon rational thinking when it comes to basic evaluation. There are numerous reasons why psychedelics are unlikely to be widely adopted in mental health treatment... And despite various sporadic attempts over the years, the results have been consistently inconclusive. Of course you will claim otherwise; I will be impressed if you manage to quote a study... But if you read the scientific literature, not just Hofmann and Leary, the truth is that psychedelics are disappointing from a practical perspective.
@Alex-fh4my 5 months ago
@@Milan_Rosko Let me rephrase my argument properly, and then I'll address what you said. No, I'm not going to name any studies, and this is why: what I really wanted to say was that you have chosen an arbitrary metric to compare two completely different things, while ignoring all the other ways in which these two things differ. Sure, the "effectiveness" of the treatment and how it affects mental health in direct and measurable ways is the obvious and most important thing to consider. I'm not going to argue about this, so let's assume that SSRIs have a more reliable and consistent effect on mental health metrics. But let me ask you this: do you know how SSRIs function in the brain and what their side effects are? Because the only negative side effect of psychedelics I can name is panic attacks. Have you taken SSRIs before? And I don't ask that to appeal to anecdote; I ask because the ways SSRIs and psychedelics affect mental health are just fundamentally different. We've both taken psychedelics before, and I'm sure we can both agree that they can lead to very meaningful insights into your thought patterns, as well as a way to break out of those patterns, just through a radical shift in perspective. That's simply not how SSRIs function. The two affect mental health in completely different ways, and it's absolutely ridiculous to say "don't do X because Y is better". Some treatments simply do not work for some people. You are ignoring a lot of nuance here by just looking at what are essentially benchmark scores. Though I will admit I just despise SSRIs because they turn you into a zombie, and I would never advise anyone to take them unless absolutely necessary. If you can take psychedelics and they help, awesome - and if they don't, then most likely there is no harm done. SSRIs should be a much lower-priority treatment.
@Milan_Rosko 5 months ago
@@Alex-fh4my Okay, you went from "stfu" to a very long text. I will answer politely. Now, I do not claim that psychedelics are bad; this is only about the effectiveness of psychedelics for anxiety issues. I am also not claiming that psychedelics don't provide some sort of clandestine aid for some people. The historical record shows that already at the beginning psychedelics had paradoxical effects on psychotic subjects, rendering them sociable. This is mainly about how psychedelics are not an advisable general treatment option.

You claim that psychedelics only cause panic attacks, and this is false:
- Compounds from the 25-NB "family" are known to be cardiotoxic
- All classical tryptamine psychedelics elevate blood pressure and are known to cause tachycardia, which could hurt someone with CHD
- Hallucinogen Persisting Perception Disorder can negatively impact quality of life
- Some psychedelics can cause intense anxiety that can manifest as PTSD or a generalized anxiety disorder
- All psychedelics impair judgment, which can lead to injury and death
- Some tryptamine psychedelics are linked to fatalities
- Psychedelic use can trigger latent mental health conditions, such as schizophrenia or bipolar disorder, particularly in individuals with a family history of these conditions
- Some psychedelics can cause intense gastric reactions, which might lead to dehydration or other health complications
- The unregulated nature of psychedelic substances means there is a risk of contamination or misidentification, potentially leading to poisoning or adverse effects
- Tolerance to psychedelics can develop quickly, leading users to take higher doses that may increase the risk of adverse effects
- Psychological dependence on psychedelics can occur, leading to patterns of problematic use
- The effects of psychedelics vary immensely from person to person

The biggest evidence is that within social configurations people almost never tend to trip alone, exactly because everyone knows about these issues; taking SSRIs alone, however, is obviously safe.

Now, the major common adverse effects of SSRIs are:
- Emotional blunting
- Decreased libido
- SSRI withdrawal syndrome if discontinued abruptly

The most important thing is that people react to SSRIs quite similarly. They are helpful in mild depression but immensely helpful in anxiety-related disorders. Normally, you do not turn into a "zombie"; rather, the palette of emotions seems, in retrospect, to be reduced to a certain degree. Antipsychotics are a better fit for something akin to a "zombie". These things are not at all controversial, and are basically textbook facts.
@Milan_Rosko 5 months ago
@@Alex-fh4my I want to address the thought-pattern question. As a former heavy psychedelic user, I found that psychedelics offer a sense of adventure and intense significance as a drug (not as a medicine). However, this is often overstated. The first trip often feels almost transcendental, but subsequent trips become more recreational in nature. It's important to remember that using psychedelics affects your mind, altering all a priori judgments. This makes the insights gained somewhat logically unreliable, akin to having a spiritual experience due to a stroke. Additionally, I believe that many heavy psychedelic users tend to be superficial, often spouting new-age BS nonstop.
@patodesudesu 5 months ago
Yeah, I think to overcome the fear of death you have to accept it. That could be by accepting that really fucked-up things could happen, but I don't know how good or feasible it actually is to do that. OR, the solution that you mentioned: realizing you aren't really a person but everything. And the brain that is creating this awareness is going to grok that better via drugs or meditation (for me some weed + thinking was enough) than via logical or scientific reasoning, no matter how true it is. In some way the same thing happens with AI x-risk (and we could find a lot of other examples). Because there's no emotional component, it is a lot harder to react to the info as we should based on EV theories.
@titanattackbee1985 5 months ago
TL;DR?
@Outplayedqt 5 months ago
TLDR: The urgency of developing safe and advanced AGI is paramount. To achieve this quickly, the US should strive to surpass China's progress. Removing bottlenecks in AI research is crucial. This includes increasing compute power, optimizing algorithms, and automating research processes. The inherent risks of AGI development necessitate a high-security national project. International collaboration, especially with China, is ideal to prevent conflict and share the potential benefits of AGI. The economic benefits of AGI are a major incentive. It promises substantial revenue growth and funding for the large-scale computing infrastructure needed for its development. Due to the potential security risks, government-level security measures are essential, as private companies might be inadequate for handling the security requirements of AGI development. Hope this helps! That said, the video is well worth watching/listening to, as Michael elucidates the nuances of this topic quite effectively. Cheers.
@zeroerror4815 5 months ago
@@Outplayedqt I know how to stop CCP: type "democracy never dies, give the ppl the right, don't corrupt the children, it's wrong and they will learn bad stuff by thinking bad thoughts" (copy paste and spread it to other comments please I beg you)
@pleasednut3881 5 months ago
If you copy-paste the transcript into ChatGPT, it'll give you a TL;DR.
@TheInsideView 5 months ago
tl;dr: Leopold wants the US to win the AGI race because he believes that the race 1) is happening, 2) will create instability, and 3) is unstoppable. Given that, the best strategy to get some wiggle room to align AI (a few months) is to get a decent lead, and given security concerns the actual best way is to nationalize. Most of Leopold's ideas regarding algorithmic progress & timelines have been discussed in previous AI takeoff videos I've made, except the "unhobbling" part, which is about how to turn chatbots into useful agents.
@SimonLermen 5 months ago
"Unhobbling" is not a standard word in the English language. However, based on its components, it can be inferred to mean the act of removing constraints or restrictions. "Hobble" means to restrict movement, often by tying the legs together, so "unhobbling" would logically refer to the process of freeing something from such restrictions. This could be used metaphorically to describe liberating someone or something from limitations or obstacles. -chatgpt
@KibberShuriq 5 months ago
Here's a quote from Leopold's website:

[QUOTE] Finally, the hardest to quantify, but no less important, category of improvements: what I'll call "unhobbling." Imagine if when asked to solve a hard math problem, you had to instantly answer with the very first thing that came to mind. It seems obvious that you would have a hard time, except for the simplest problems. But until recently, that's how we had LLMs solve math problems. Instead, most of us work through the problem step-by-step on a scratchpad, and are able to solve much more difficult problems that way. "Chain-of-thought" prompting unlocked that for LLMs. Despite excellent raw capabilities, they were much worse at math than they could be because they were hobbled in an obvious way, and it took a small algorithmic tweak to unlock much greater capabilities.[/QUOTE]

He then proceeds to list the specific recent unhobbling techniques: RLHF, chain of thought, scaffolding, tool use, context length, and posttraining improvements.
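A minimal sketch of the hobbled vs. unhobbled prompting contrast the quote describes; the question and prompt phrasings below are illustrative assumptions, not taken from Leopold's essay:

```python
# Contrast an instant-answer ("hobbled") prompt with a chain-of-thought prompt.
question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Hobbled: force the model to blurt out the first thing that comes to mind.
direct_prompt = question + "\nAnswer with only the final number."

# Unhobbled: give the model a scratchpad to work through the problem step by step.
cot_prompt = question + "\nLet's think step by step, then state the final answer."

for name, prompt in [("direct", direct_prompt), ("chain-of-thought", cot_prompt)]:
    print(f"--- {name} prompt ---\n{prompt}\n")
```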
@UserHuge 5 months ago
Yeah, that's about right as an explanation of the key word they've used a ton.
@reidelliot1972 5 months ago
As the "one guy" in the video, I'd like to say thanks for the shout-out! Also, after reviewing the thread, I want to clarify that I think there is a difference between "ignoring" and "accepting." It's easy to conclude that accepting = ignoring, especially in scenarios where acceptance means acknowledging there is *nothing* any one given person can do (there are always exceptions, of course). Acceptance doesn't mean living in an alternate reality to the one we are rapidly approaching. It means being humble and not inflating one's sense of influence over the future while still acknowledging the state of things. For the subset of folks who can make direct contributions to consequential fields such as alignment, I would wish acceptance upon them as well. Because if they don't approach reality from a cognitive stance of acceptance, it risks introducing all sorts of biases that could at best delay, and at worst destroy, meaningful progress/insights.
@TheEarlVix 5 months ago
Great work you are doing, Buddy. Your playlists are terrific too.
@ParameterGrenze 5 months ago
First: it is a low blow by the risk-denier crowd to take discussions between members of a Discord server's mental health channel and parade them around the internet to score points. Don't let yourself be shamed into doubt. They are the ones living in a world of cope, either because they are psychologically unable to face unpleasant feelings like fear and gloom, or because they are part of the small elite that either will, or thinks it will, profit from AI short-term.

The thing is, it is very rational to worry about the end of all things. This is what fear is for in nature. It mobilizes energies on an individual and collective level to do something about the threat on the horizon, be it a wildfire or a pack of predators approaching through the tall grass. Fear doesn't mean you run around in circles losing your shit. Fear is there to get your shit together and see what you can do. The more human minds evolved, the more advanced our ability to predict threats became, from the imminent physical senses to more abstract threats to the tribe over longer time periods.

Fear has a bad rep in current Western culture. We live quite spoiled lives compared to our evolutionary heritage, and many inborn fear mechanisms are unsuitable for the modern world. Much of it has to do with risks for an individual being much lower thanks to civilization: the men of a tribe won't beat you up if you approach a woman, you will not be exiled into the wilderness for various misbehaviors where your chances of survival are slim, strangers from outside your immediate neighborhood aren't automatically potential raiders because of the global society and its rules, and so on. A whole culture around "defeating fear!" has arisen and is propagated as a product by lifestyle coaches and motivational gurus online. It's a good thing, but it leads to a distorted view of reality in some.

But we are not talking about being afraid of taking financial risks and getting outside your comfort zone to get that dream job you always wanted. We are talking about the end of the human race: your loved ones, that kid on the street, you, and every potential life that could be lived by our descendants! Those who are aware of this risk should not be shamed for it. Fear is not a bad thing; fear is a natural response to a threat. The more intelligent we become, the better we can predict those threats. It is not about being "paralyzed by fear" but about mobilizing energies and resources to do something about it, not the least of which is spreading the word.

Personally, I am not "paralyzed by fear". I still think about it, but it's not something that keeps me up at night. As someone who has been aware of the threat for a long time, having lived through many years and experiences, I have made my peace with the idea of AI risk. It is easier to accept when you've already had your life, as in my case, being in my early 40s. The realization that AGI was coming wasn't some sudden epiphany; it's something I considered from a young age, as the road to AGI seemed long and uncertain. However, everything changed for me around 2015/2016, when neural networks started making rapid progress and experiments like OpenAI's Dota 2 became interesting test cases of what would happen in a microverse with superior AI. The world after GPT-2's release was especially a sign of things to come. I am relatively old now and have to deal with life's challenges, which occupy most of my energies.

What gives me hope are young intelligent people like this channel's owner, Connor (John) Leahy, and others who have devoted their lives to addressing AI risk, as I would likely have done had I grown up in an era where future trajectories were converging as clearly towards unwelcome outcomes as they are today.
@rey82rey82 5 months ago
Embrace maximum risk
@DaganOnAI 5 months ago
Thanks. You gave some very good techniques for handling anxiety about death in general and about AI x-risk. You touched exactly on what I struggle with as a video creator who makes videos about AI risks. When I found out about AI existential risk at the beginning of last year, I was amazed that only a few people outside the tech world were aware of it; that's why I went on to make "Don't Look Up - The Documentary: The Case For AI As An Existential Threat." I was obsessed with it. I felt like the 5-year-old you were: I was walking on the streets seeing people taking care of their children, making sure they were warm and safe in their everyday activities, all the while unaware that their children's lives were being threatened by some computers developed in San Francisco. I saw all these experts on podcasts warning us about it, but knew that most truck drivers and kindergarten nannies don't listen to these podcasts. It was inconceivable to me that no one had made this film already, but since no one had, I felt obliged to make it. But then, looking at the comments section, I saw a few comments that were too dark, and it made me change the ending of the video for the version I uploaded on X: I took out Eliezer's last section, which was too dark, and put in a call to action with links to different relevant organizations. My intention was to let people know, so that something could be done about it. To give us a chance, it was obvious to me that this awareness had to break out of the tech community and into the general public. I am actually optimistic about the AI x-risk (I don't think everybody dies, but that many would die), though not so much about the other implications of AI; but my optimism also comes from the fact that in the last year awareness has increased a lot. Awareness is a double-edged sword, though: some people take it too hard. Still, this is important, so how can we balance the need to create awareness with the anxiety it might create in some people? And isn't this anxiety in some ways needed in order to push people to act in the world to make it better? I'm not sure. In the ending of the doc I made about Moloch I tried to give a more optimistic outlook. I am now working on a short doc about the problem of AI making people's jobs obsolete. Although the video will be disturbing, I think the ending will give hope and show that the path is still open. It might dim the message a bit, but I just don't want to make people too depressed. I'm not sure yet, still thinking about it. It's a big question that we'll have to keep struggling with.
@TheInsideView 5 months ago
thanks for sharing your experience; I'm really grateful for all the great documentaries you've been producing.

Regarding whether we should change our messaging to something less anxiety-inducing, I think that depends on how you model the impact of your work.

I think people's mental health after hearing about AI risk for the first time mostly depends on 1) how vulnerable they are in general (i.e. how anxious they are in general, how likely they would be to become anxious if something stressful happened to them, etc.) and 2) how likely they are to look at all the arguments that support some pretty wild conclusions (e.g. superintelligence).

There's value in giving people hope and motivation. If you don't do that, they might just end up depressed, lying in bed doing nothing, and your outreach efforts might have been in vain. But the counterfactual is that these might have been the most vulnerable people, who were already looking at this kind of evidence, and you were not the first one? And if you were to change the ending of your videos to something more positive than what you actually think, wouldn't you just be delaying the realization? And wouldn't you also be painting a less accurate picture to the people who would be able to handle a wild future?

Overall, I think it depends on what the true risk is today, and whether you think other people are overestimating or underestimating it. I started my channel in 2021 because I thought people were not paying enough attention to short timelines, and that seems much more mainstream now. If you think the narrative is too "doomy" now, well, when you look on Twitter at the ratio of people who like Yann's takes compared to, say, more safety-inclined takes, I'm not sure we're creating an outrageous "doom" wave yet.

But I understand that if the limited feedback you have in your YouTube comments is from vulnerable people you're directly affecting, that hurts much more morally than just thinking "well, by portraying what I think is closer to the truth, I'm actually helping more".
@DaganOnAI 5 months ago
@@TheInsideView Thanks, you brought up some very valid points. For me, I feel that it's very important to sound the alarm and make people aware of the danger, but I also am not 100% sure about anything; I am just almost sure about some things. The world is so chaotic and divided that there might be some event that totally changes the trajectory of everything, including AI advancement. Just think about the numerous scenarios you played out in your head (at least I did) for the next few years before Covid hit and made all of those scenarios irrelevant. So I don't want to make someone too depressed while pondering the implications of something that might not happen. Having said that, right now I feel that AI's destructive potential is so overwhelming (for me, it's not just the x-risk; there are so many problems with this technology) that it's my duty to bring it to people's awareness. I also think about the merit of this awareness: as I see it, it should eventually build up to political influence that makes politicians act in the real world to either pause or regulate AI. But not every individual is instrumental in this "influence", and some will get depressed needlessly. I direct my videos not at people who are already aware, but rather at people who weren't, or who maybe had this awareness somewhere on the outskirts of their minds, and my videos might push the AI risk into their own private "Overton window", if you will. Of course, some people are already very aware of the problem, and my videos might deepen their awareness. Your last paragraph touched on something very important. While increasing awareness, which is our main goal, is a somewhat abstract thing to measure, a specific someone who gets depressed because of our content is very concrete, and can therefore cause me, and maybe you, to feel some guilt about our influence. But I guess it is too important a message to dim down and make easier to swallow; I have to factor in all of these things when deciding how to end my videos. When "Don't Look Up - The Documentary" came out on YouTube and I began to be worried about some of the comments, I talked with one of the main people in the risk field; I was actually considering deleting the video! He was the one who suggested taking the end segment out and adding a call to action as a solution for the version I uploaded to X. He was quite horrified by my idea to delete it, and told me: "Whatever harm you do, you are making so much more good in the world with this film - don't delete it!" I am much more at peace with the X version, I have to admit.
@jebber411 5 months ago
Glad you’re doing your thing.
@shawnwilliams3604 5 months ago
Prepare yourselves for the swarm, brothers and sisters.