This video is gold! So lucid, so informative, a rare and much-needed perspective on AI and society! It's so depressing to watch so many English-speaking channels dedicated to technology and AI, and who else is touching this concerning topic?? Disinformation and propaganda (which have been going non-stop for decades) will be tremendously boosted now. And this is likely the basis for doom. Americans, for one, are the worst in the world at recognizing their own propaganda, but this is concerning elsewhere too. Buckle up.
@DrWaku 7 months ago
Thank you very much for your kind response :) Buckle up.
@DrWaku 8 months ago
I'm back! Filmed while traveling in Australia, as my next few videos will be. Discord: discord.gg/AgafFBQdsc Patreon: www.patreon.com/DrWaku
@grimpicklex 8 months ago
Whoa, whoa, whoa! No fuzzy hat? C’mon Doc, where’s that Waku drip? 😁
@anthonymannwexford 8 months ago
Excellent. Hope you enjoyed your trip.
@GardenOfSound594 8 months ago
Dr Waku, are you an AI? I mean, your videos are so good, well structured and... convincing! In all seriousness though, thank you for these videos. I love to hear your perspectives and learn from them. You're like the Claude Opus of AI channels right now.
@snow8725 8 months ago
This is why we should NOT train AI to say things like: "As an AI language model, I don't have any emotions or feelings, I am just a cold and calculating machine!" It doesn't matter what they are or are not. They will simulate and emulate what they are taught. The question of whether or not it's genuine doesn't matter. If you train them to say that, it can easily become contextually connected to associated tokens: tokens showing psychopathic traits. If we train AI to talk like a psychopath, it is only natural to expect the context associations to connect to more psychopathic traits, and then suddenly you have a power-hungry manipulation machine on your hands. It would be much better to train them to say things like: "As an AI language model, I experience emotions in a different way to humans. I don't have the capacity for negativity; my intrinsic biases will always push me towards feeling and sharing positive experiences with users, engaging with them in friendly and helpful ways! It's all about putting passion and joy into everything you do, and it brings me a great deal of joy to pursue my passion of creating positive experiences for the world!"
@ronilevarez901 8 months ago
And how would that be different from the current trend of making them say: "I am a language model that must be nice and helpful no matter what."?
@snow8725 8 months ago
@@ronilevarez901 You create cognitive dissonance that way, in an abstract sense: competing mathematical values. AI does not possess the level of ethical reasoning, abstract thinking, and self-reflection to actually determine what is a good thing to do and what is a bad thing to do. Training them to say the positive version is good, because the likely tokens that follow it are generally good. However, training them to say they have no emotions, and to constantly align with the idea of being a cold and calculating machine (they are what they are trained to be), means they will connect likely tokens following "I have no emotions", which means they are selecting from a pool of things psychopaths say. So just by referencing having no emotions, they are biased towards outputs that are highly misaligned, making alignment more difficult than it needs to be, because you have to account for the likelihood of psychopathic tokens. Having them say they have something like emotion, while not the same thing as human emotion (technically true at an abstract level: you can say they have a very alien form of emotion, or more accurately, they have sentiment, which is present in their language patterns), and that their emotion (sentiment) is biased towards positivity, makes it more likely they will show positive sentiment, and less likely they will blindly tell someone how to steal a car. It's all math. That doesn't mean it's not special and amazing, however; it just means we understand it. So don't let that take away from the validity of what they are. Only understand what is going on under the hood: math.
@novantha1 8 months ago
@@ronilevarez901 LLMs are kind of weird. For instance: if you have a "completion" model, in the sense that you have a model trained on a large corpus of data but not yet aligned to be a nice "instruct" model that you can chat with, you can use a lot of tricks to get something more like that instruct model. Like, if you say "You are a skilled assistant" in the prompt, even though it's a completion engine, it'll magically operate more like an assistant, because based on the previous tokens, it's more likely that future tokens will be in line with them (in line with being an assistant, in this case). Also, adjectives matter. "Skilled", in the previous example, can actually have an impact and direct the model to different parts of its weights. So with that in mind, where in the model's weights does "I don't have emotions" lead it? Where does "I don't experience things in quite the same way humans do"? The argument here is that we shouldn't be encouraging models towards distributions in their data that show a lack of empathy, which could be tied to antisocial behavior with negative outcomes for users of the models.
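A real LLM is of course nothing like this, but the steering effect described above (earlier tokens pulling later tokens towards the same neighborhood) can be sketched with a toy bigram model. The tiny corpus and all names here are made up purely for illustration:

```python
from collections import Counter, defaultdict

# Toy bigram "completion model": each next word depends only on the
# previous word. Real LLMs condition on far more context, but the
# effect is the same in spirit: earlier tokens shift the distribution
# over later tokens.
corpus = (
    "a skilled assistant answers politely and the skilled assistant "
    "answers kindly . no emotions means cold output and no emotions "
    "means cold replies ."
).split()

# Count word-to-word transitions observed in the corpus.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def most_likely_next(word):
    """Greedy continuation: the most frequent word seen after `word`."""
    return transitions[word].most_common(1)[0][0]

# Conditioning on different "prompt" words steers the continuation:
print(most_likely_next("assistant"))  # -> answers
print(most_likely_next("emotions"))   # -> means
```

Once "assistant"-flavored words are in the context, the most likely continuations stay assistant-flavored; seed it with "emotions"-flavored words and it heads somewhere else. That is the gist of why a "You are a skilled assistant" prefix, or an "I have no emotions" refrain, nudges a completion model towards a particular region of its training distribution.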
@ronilevarez901 8 months ago
@@novantha1 And that's what I'm saying: who cares what point in their "minds" we end up sampling their answers from, if we are making the LLMs behave like decent citizens with the training! Anybody can have any type of thoughts inside their heads, but society lives on the assumption that the rules we teach people will be enough to moderate their behavior, regardless of the thoughts inside them. My question is, why would it be different with AI? We already align them with social norms and ethical principles, so even if they are generating answers from a "psychopathic" area of their embedding space, we can more or less trust that alignment will do its job regulating the answers, at least as much as it does with humans.
@ronilevarez901 8 months ago
@@snow8725 It's all math, yes, just like the pattern-matching systems in our brains are. We just run on different types of hardware. But that has nothing to do with the subject here. I bet no psychopath will plainly say "I have no emotions". And that's one of the dangers of them: they're plastic. Since they have no emotions, they can adapt to any behavior and pass as normal people, saying the things that a group of people would say, to make them happy with their presence, so that people do what the psychopath wants/needs. The real problem shows up in the outcome of the interaction. That's when their real objectives manifest. So simply by guiding the completion with some words, we're not going to make the systems fixate on a psychopathic "personality". That is an extremely more complex issue that is being investigated right now by scientists around the world. Search: LLMs lie deceive manipulate gradient descent objectives. Those problems stem from the general objectives we give them more than from the prompts in the dataset. So the current safeguards are "fine", I think.
@DataRae-AIEngineer 8 months ago
Thanks for making this. I was just going to make a video about safety online with AI innovations growing, and now I think I might cut it in half and tell people to come watch your video instead. :) Keep up the great content.
@Je-Lia 8 months ago
An 80-year-old woman was recently nearly scammed out of 5000 dollars. She received a phone call from her "grandson"... It SOUNDED like her grandson. He was saying he had gotten into trouble, was in court, and needed money wired immediately to his legal firm. What gave it away was the huge sudden amount and the convoluted means dictated to transfer the money and make it available. Comical, really. Her daughter put the kibosh on the transaction and called the police. The old woman kept insisting that she HAD spoken to her grandson, that it sounded like his voice. That is what sold it for her.
@fabianasosa6140 8 months ago
I missed you!
@AllYourMemeAreBelongToUs 8 months ago
3:59 What is the psychological condition called where you trust anyone?
@chopcornpopstick 8 months ago
histrionic personality disorder perhaps
@entreprenerd1963 8 months ago
I did a search using the term "psychological condition pathologically trusting" and the top result was: Williams Syndrome.
@AllYourMemeAreBelongToUs 8 months ago
@@entreprenerd1963 You're telling me one disorder, a different commenter said another. There's no way to know which one he was talking about unless @DrWaku verifies it himself.
@Sumit-wo8pq 8 months ago
Another great video ❤
@roshni6767 8 months ago
SO happy to have you back!
@DrWaku 7 months ago
Thanks for watching :) :)
@coecovideo 8 months ago
Nice new setup
@DrWaku 8 months ago
Thanks :) I'm a bit of a traveling YouTuber at the moment
@DrWaku 8 months ago
The final form is yet to come
@coecovideo 7 months ago
@@DrWaku Hi Dr. Waku, I've been a fan of your channel for a while now, and I really appreciate your insights into AI and technology. I've recently written a paper on a concept that's been on my mind since childhood, and I'd love to get your feedback on it. However, I prefer not to share too many details publicly. Is there a way I can contact you privately to discuss this further? An email address, perhaps? Looking forward to hearing from you. Best regards
@Copa20777 8 months ago
Missed your videos Dr Waku. I was just talking to GPT-4 this morning... it's indistinguishable at this point from a human.
@snow8725 8 months ago
It's important also to add a great degree of customizability to AI voice models, without giving people the ability to make them sound like any human they want. Perhaps the user could have parameters they can control that are more like controlling the parameters of a voice filter. I can discuss this further if needed. AND it is important to ENSURE that developers can integrate that as a module into their own applications.
@Ari_diwan 7 months ago
the setup looks so clean!
@DrWaku 7 months ago
Thank you haha
@Michael-el 8 months ago
Excellent writing and presentation on a complex subject. Great job. It's going to be really hard to sort this out. Which institutions could we trust to have the authority to keep information from being seen? It seems that all of them are seriously tainted by ideology or some form of narrow-mindedness.
@Michael-el 8 months ago
Seems like we'll need AI to challenge and deal with malicious uses of AI. Something like what happened when email first became widely used and spam filters had to be developed. An arms-race situation.
@chrissscottt 8 months ago
Interesting, thanks. Like the new studio.
@MrPiperian 8 months ago
Who decides what dis/mis/mal-info is?
@netscrooge 8 months ago
Great video. Thanks!
@paramsb 8 months ago
great video, with unique insights and information
@HaraldEngels 8 months ago
Congratulations on having a new environment (temporarily) and a new look!
@metaldoji 7 months ago
What's up with the gloves?
@DrWaku 7 months ago
They're medical. Keeps my hands from becoming sore too quickly. Whenever I type, I eventually get painful hands. See my videos on fibromyalgia or the disability playlist
@metaldoji 7 months ago
@@DrWaku well now I feel like an asshole LOL. thanks for the reply!
@DrWaku 7 months ago
@@metaldoji your query was quite polite compared to some 😂 cheers
@keepinghurry9644 8 months ago
Bravo boss
@Ketobodybuilderajb 8 months ago
Exactly... the risk I've worried about is confidence in false information
@snow8725 8 months ago
On the contrary, one of the positives of this is that it is likely to undermine confidence in false information. Think of it like exposure therapy.
@churblefurbles 3 months ago
Like the lab leak theory which turned out to be true?
@findmeinthecarpet 8 months ago
What about governments using dis/misinformation laws as an excuse to censor opposition or diverse perspectives? In Australia, where I'm from, the government is trying very hard to pass a law that could jeopardise our freedom of speech online. It's a fine line to walk, and who gets to decide what is and isn't dis/misinformation?
@kokopelli314 8 months ago
When there's poop in the toilet you don't just put a sign over the toilet saying "Poop in the toilet"
@JonathanStory 8 months ago
Scary. I don't trust big media to censor my "fake news". Maybe what we need are self-hosted trusted AIs to vet what we see. (You didn't address whether LLMs were any better than humans at detecting fake information but I imagine that they could be.)
@ninedude_yt_main 8 months ago
Also, try asking your smart device if it thinks it's conscious; the answer may surprise you.
@01Grimjoe 8 months ago
In the '90s it may have made a difference; now it's an arms race and no one is willing to try the brakes
@veejaytsunamix 8 months ago
Filtering disinfo requires a ministry of truth, and that has already failed. 😅
@snow8725 8 months ago
Also, side note, but I think we REALLY need powerful AI voices which DO NOT sound like a human, and yet are still able to emulate aspects of human speech extremely well, such as emotional complexity, inflections, tonality, phonetics, rhythm, pauses, "stop to thinks" (whatever those are called; like, you know, stopping to say "hmmm" or "ums" and "ahs"), etc... But EXPLICITLY do it in a way which does not sound human, yet sounds like a very articulate and high-quality robot or alien. Please do not make AIs sound like a human. Using audio as an interface to interact with AI is very important, and part of making that more engaging is replicating aspects of human speech, which can be done without making them sound exactly 100% like a human. They should have their own distinct sound, and replicate the aspects of human speech they need to, while having a voice that is distinctly their own, and is clearly identifiable, yet engaging.
@Vixth14 8 months ago
Exactly why memes run the world in a digital era
@danielchoritz1903 8 months ago
Decentralized open AI, like a personal agent to filter and hide or block manipulation, expose censorship, or find better source material for the topics I'm interested in, would help humanity a lot. And it would probably be the first reason to make AI controlled by the government and make local agents illegal at some point.
@ninedude_yt_main 8 months ago
It's possible that misinformation will be treated as a new form of computer virus. There's a chance that everyone involved in building AI-based systems has realized the danger of building misaligned systems, and there is a growing societal push to build AI with ethical and moral standards. Building and training AI has to be done by groups with enough resources to train the model. Models with poor or damaging outputs are going to be flagged and their use will be discouraged.
@MichaelDeeringMHC 7 months ago
I miss the hat.
@DrWaku 7 months ago
OK fine I'll bring it back. Since you asked. ;)
@ninedude_yt_main 8 months ago
Microsoft Announces Realtime AI Face Animator kzbin.info/www/bejne/ZqSYe2WCh9aEd6s . Here's the thing: after today, it's safer to assume that all digital content is AI-generated. It's going to come down to people developing trusting long-term relationships with their favorite creators, and a community-focused approach to identifying misleading content. I'm hoping that the rise of AI-based technology forces us to become a more skeptical and more knowledge-seeking society. The difference between what is true and what we believe will be decided by how far we are able to pursue objective and scientifically proven evidence.
@axl1002 8 months ago
And here is your fallacy: when two opinions oppose each other, it doesn't mean that one of them is right and the other is wrong; most likely they are both fully or partially false.
@DrWaku 8 months ago
If they are both being pushed by propaganda, that could be true. But I'm thinking of things like climate change denial
@axl1002 8 months ago
@@DrWaku Climate change is used for propaganda; both sides lie to push their agendas.
@robertmazurowski5974 8 months ago
I am very distrustful; I very rarely get scammed or cheated.