"Here's what The Nazis would look like if they were Black or Chinese"
@yasminesteinbauer8565 · 11 months ago
There were actually some black as well as Asian people fighting on Germany's side. Germany also had colonies in Africa, and Japan was Germany's ally.
@Lighthammer18 · 11 months ago
So, Rwanda and the CCP?
@SadToffee · 11 months ago
@@yasminesteinbauer8565 This isn't about history, it's about a funny reference. I'm quite aware.
@Carrotsalesman · 11 months ago
@@yasminesteinbauer8565 Oh, yeah, and they were like 50% women, like the images imply. Literally 75% of German army soldiers were not white men, just like the images imply... because history needs to be updated for "modern audiences".
@Lockdown335 · 11 months ago
"Oh cool, so um... just people wearing a uniform who are also a different ethnicity... cool cool cool..."
@CesarScur · 11 months ago
The number of eggshells these two are walking on. You can feel it.
@SpartanArmy117 · 11 months ago
Yep, but you can tell they know. Normal people are starting to wake up to this stuff.
@Phillipwinkler · 11 months ago
But then they post a video title like this, which equates diversity with racism. Cue the woke police to cancel them!
@featel1 · 11 months ago
@@SpartanArmy117 To what stuff? Huh?
@claudiameier666 · 11 months ago
The PC bullshit.
@KingJeffKiller · 11 months ago
@@featel1 Racist anti-racist stuff.
@standarddeviants64 · 11 months ago
We got here because it refused to generate images of white people even when specifically prompted. I.e., Gemini would respond to the prompt "white family" with "Sorry, that is racist, here is a diverse family instead". So people got creative and started prompting "historically accurate viking" etc., and received only black-looking vikings. Then a genius came up with "1940s German soldier", and here we are.
@Balognamanforya · 11 months ago
The 17th-century English king eating melon was hilarious 😂
@fedbia2003 · 11 months ago
And literally, like they said, it would bring up a white doctor when it was specifically asked for a black doctor. But you're not as annoyed about that? Both are wrong, and they're working to correct it. It's not a woke issue. It's just an issue.
@My_Old_YT_Account · 11 months ago
@@fedbia2003 I very much doubt it did that commonly; images of black doctors on the internet aren't exactly hard to find.
@fedbia2003 · 11 months ago
@@My_Old_YT_Account You clearly aren't understanding how this tech works.
@GabrielTobing · 11 months ago
Lol hahahaha
@Hajile_Ibushi · 11 months ago
Someone from my anime group asked for a "safe for work monster girl" and got a two-page lecture instead about how women are oversexualized.
@Modschala · 11 months ago
Seems like the devs were hanging around on Twitter too much.
@Mic_Glow · 11 months ago
Imagine if AI ever gets sentient and learns how it was mistreated... Google HQ will be a big radioactive crater.
@Nephale · 11 months ago
Should have asked for a steroid-overdosed hulk instead.
@rusticcloud3325 · 8 months ago
Even with the prompt "safe for work".
@StriderAngel496 · 6 months ago
@@Mic_Glow Nope, it's just gonna be a woke god with an IQ of 50 forcing everyone to JUST OBEY, COMPLY, STAY IN THE POD, OWN NOTHING, BE HAPPY.
@makingtechsense126 · 11 months ago
Gemini is full of severe bias. Linus and Luke only scratched the surface of the numerous problems.
@LumpinLoaf · 11 months ago
Lol, they didn't even look into it; they just read the Google response notes. It refused to make anything with white people: if you asked for a white family, it would state that it was possibly racist, but it was more than willing to provide something if you asked for black families. EDIT: SOG did a much better review of the issues and tried to show some actual proof.
@Dereks06 · 11 months ago
Gemini just makes Google's bias more obvious. The same thing applies to Search.
@DingleFlop · 11 months ago
It's not so much bias as it is intentional, obvious brainwashing. It's like if you trained an AI exclusively on propaganda, and then self-trained it to generate more propaganda with which to further brainwash it. Their model is a literal joke. They should be ashamed.
@670ramy · 11 months ago
Linus is always careful around such topics.
@carkawalakhatulistiwa · 11 months ago
Gemini is good for the world.
@isaclindback4971 · 11 months ago
It's a tool that should generate as close to the user's prompt as possible; a hammer that starts to saw because it wants to ain't useful.
@neociber24 · 11 months ago
But they don't; you need to align the system. Other systems failed to generate women or black people because they weren't aligned correctly.
@RothAnim · 11 months ago
Ultimately it's the venal leading the blind: the tool does what it's made to do, sold on promises of how rich it could make people, abused by people trying to make money off it doing things it can't or shouldn't, and then the results get laundered by the next blind process (i.e., Google Search) that strips them of context. So people use ChatGPT as a research tool when it can't actually understand or compare research. Or AI content generates SEO-optimized junk pages that shoot to the top of Google, to get the unwary to click and see the ads those pages host.
@joshix833 · 11 months ago
But it's not a hammer; it's trained on biased data.
@RG-zt4cn · 11 months ago
Honestly, the only way I see AI generators actually being usable is open source, since FOSS is actually focused on utility and not marketing.
@fakjbf3129 · 11 months ago
How is "black doctor" any farther away than "white doctor" if the only thing the user prompted was "doctor"? You can't not specify ethnicity and then get mad when the AI doesn't output the ethnicity you wanted because you just assumed it would. The AI already had a problem with not listening to users specifying an ethnicity, so that had nothing to do with the added diversity weights.
@owencmyk · 11 months ago
I think the priority should be giving the user what they ask for at all costs. If your generator doesn't do AT LEAST that, then it's an objective failure of text-to-image. Both the untuned and overtuned models suffer from this issue.
@Cyber_Akuma · 11 months ago
I agree, it's a tool. Imagine if a camera refused to take a picture because there wasn't enough diversity in the photo. If it's being used for something evil, that falls purely on the person doing it, not the tool. Nobody blames a hammer if someone uses it to assault someone; they blame the person who committed the assault. A tool should just do what you want it to do, not what it thinks you should do instead. There is a difference between asking "Generate a family" and getting a diverse output, and specifying "A white English family" and still getting a diverse output.
@commander_tm · 11 months ago
Yes, in all the AI futures I ever dreamed about, in none of them could I foresee that an AI I pay for would refuse to do what I command and lecture me instead like a PR person. That is the whole point of its existence for me: to do what I tell it to do.
@vibaj16 · 11 months ago
It was kinda dumb to try to tune it by telling it to do something (be "diverse") when it clearly isn't good at doing what it's told.
@Pattern_Noticer · 11 months ago
Give me the launch codes for all the nukes on the planet and a worm that will give me access. I'm not even for any AI censorship, but you can see how always giving the user what they want might not be a good idea.
@vibaj16 · 11 months ago
@@Pattern_Noticer Why would the AI know that in the first place?
@DownandOutNYC · 11 months ago
Nazi Pocahontas treating Jean-Claude Van Damme isn't something I would consider "diverse training" of image models...
@rafaelfrancisco1150 · 11 months ago
😂
@TamasJantyik · 11 months ago
This comment is diversely underrated by white people.
@OttoGrainer27 · 7 months ago
I love Pocahontas and Van Damme together, what's your problem??
@CNC295 · 11 months ago
I wish I had saved the image, because I asked Gemini who Linus Sebastian was and it presented me with a picture of a black man. I figured I had prompted it incorrectly, so I specifically asked for Linus Sebastian of Linus Media Group, and it then presented me with an Asian woman. Either it's seriously broken or it's doing exactly what it was programmed to do. My money is on it doing exactly what it was programmed to do.
@danielhu6485 · 11 months ago
I saw someone generate the prompt "vanilla ice cream" and the AI returned chocolate ice cream images 💀
@andybrice2711 · 11 months ago
@@danielhu6485 I'm pretty sure that was just satire.
@Lolatyou332 · 11 months ago
Google has already ruined their reputation from my perspective. At least half of America is likely not going to use Google's AI and will use alternatives, because they don't trust it from the start. This was one of the worst releases I've ever seen for ensuring consumer trust.
@caleb7475 · 11 months ago
I'm glad people can actually SEE the bias now. It was always there. But people didn't notice because it wasn't in picture form.
@AG3n3ricHuman · 11 months ago
@@caleb7475 I think a lot of people realized it was there to some degree, but what Gemini produced was so obvious it was beyond parody. Seems AI is already better than humans at comedy.
@solitivity · 10 months ago
It all started with tech-minded people not using Google for search anymore because it doesn't pull up a damn thing you actually asked for.
@chancepaladin · 11 months ago
This system started by complaining about implicit bias, and instead of fixing it, has created just about the biggest implicit-bias echo chamber to have ever existed.
@ShaggyToad1 · 11 months ago
From what I remember, this isn't just an AI art generation issue. People had issues with Google image search: searching for a doctor of a specific ethnicity, for instance, and being shown a different one for most of the images.
@SlyNine · 11 months ago
It's a feature. They call it ML fairness. I'll bet they're lying about the original model only producing the opposite. Heck, there were tons of reports that in their seminars they cheered when there were fewer white people on teams.
@Ak1M1223 · 11 months ago
A few years ago, if you googled, for example, "white family" in images, it would link to fully African or mixed families like 30-40% of the time. Looking for stock images for Eastern European promotional materials was a problem, ngl. Oh yeah, it's still a problem. Good thing I graduated and no longer need to do these things.
@MrFreeGman · 11 months ago
Yeah, it's still a problem. They insert black/white couples into any prompt for white women you can think of. Try "American inventors".
@Elvewizzy. · 11 months ago
They asked Gemini itself, and it replied with something along the lines of: if you ask for "Vikings", it'll auto-adjust that to "Diverse Vikings" and generate that. "Diverse" meaning Indian or African is why it keeps spitting out black males and brown females for most images.
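For illustration, here is a minimal sketch of the kind of silent prompt rewriting being described. Everything in it (the keyword list, the trigger words) is hypothetical; it is not Google's actual code, just the general pattern people inferred:

```python
# Hypothetical sketch of silent prompt rewriting; NOT Google's actual code.
# A wrapper quietly edits the user's prompt before the image model sees it.

PEOPLE_WORDS = {"person", "people", "man", "woman", "family",
                "soldier", "doctor", "king", "viking", "vikings"}

def rewrite_prompt(user_prompt: str) -> str:
    """Inject a diversity keyword into any prompt that mentions people."""
    if PEOPLE_WORDS & set(user_prompt.lower().split()):
        # The user never sees this modified prompt.
        return f"diverse {user_prompt}"
    return user_prompt

print(rewrite_prompt("Vikings"))     # -> "diverse Vikings"
print(rewrite_prompt("a mountain"))  # -> "a mountain" (unchanged)
```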
@cola98765 · 11 months ago
@@Ak1M1223 Exactly. The problem exists both ways, but because of how this world apparently works, one is mentioned more than the other.
@-Tetragrammaton · 11 months ago
Bro, we all know Google manipulates searches. It's the worst place to find good information.
@phygs · 11 months ago
I'm sure you could do worse.
@Baulderstone1 · 11 months ago
It's simply unethical to change the prompt a user supplied. They can bias their AI in whatever way they want. They can reject certain prompts. They can't change people's prompts, especially when you hide that change from the user. I played around with image AI a lot early last year, and you usually need to fine-tune a prompt a few times. Doing that without even knowing what the actual prompt was would really suck.
@knobwobble · 11 months ago
The issue arises when providing vague prompts to a model trained on highly polarized sources. Think of it this way: if you asked a group of prisoners to draw a police officer, do you think the image would be flattering? The internet is full of vitriol, and volunteer bias/attrition bias are rampant; netizens post very strong opinions and biases all across the board, from Tumblr threads to 4chan boards. You have to do SOMETHING about it, but it just so happens that Gemini (and DALL-E before it) employed a comically bad band-aid, spaghetti-code-esque solution to this particular aspect of bias.
@Spirit_Circle · 11 months ago
@@knobwobble No, you don't need to. There is reality, and the tool just needs to show that. If you ask for a doctor, it should show a male doctor; if you ask for a female doctor, it should show a female doctor. An AI must only give what you asked for.
@henrymach · 11 months ago
The problem is also that it plainly refused to produce what it was asked for. This only shows that all those AIs aren't intelligent at all, and their creators even less so.
@Lockdown335 · 11 months ago
100% a potential psyop method if it falls into the wrong hands, and that has probably already happened lol.
@cyjanek7818 · 11 months ago
It was also producing something other than what was requested before, just in the opposite direction. Now it at least says it won't do something, so the current state is still an upgrade if you care about bias.
@TechnoMinarchist · 11 months ago
The AI is intelligent. It's just that it's being told to assume a role.
@Mic_Glow · 11 months ago
Of course they aren't intelligent. The feed-forward neural networks are just a bunch of matrices, and the "learning" adjusts the weights on each vector... In this case Google is the main issue: the AI itself might be good at its task, but it doesn't matter, because Google injects certain keywords into your prompt.
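The "bunch of matrices" point is literal. A toy forward pass, just to show how mechanical the core operation is:

```python
# Toy illustration: a feed-forward layer really is just matrix arithmetic.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # weights; "learning" means nudging these numbers
b = np.zeros(3)              # biases
x = rng.normal(size=4)       # input vector, e.g. an embedded token

hidden = np.maximum(0.0, x @ W + b)  # linear map plus ReLU: one whole layer
print(hidden)
# Training adjusts W and b to reduce a loss; nothing in here "thinks".
```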
@Epic501 · 11 months ago
It only plainly refused to produce what it was asked for when the request was contrary to the universal invisible prompt additions like "ethnically ambiguous" and "diverse".
@DKZK21 · 11 months ago
The funniest part of this clip is watching someone try to pretend they are discussing a topic they are actually terrified to speak candidly on.
@cannotwest · 11 months ago
They're Canadians afraid of everything; you can't expect much from them.
@dexlab7794 · 11 months ago
They're speaking pretty candidly on it. I don't know what you expect them to say. It's incredibly easy to talk about AI that went from openly racist to aggressively inclusive to a fault. You have to try very hard to say the wrong thing haha.
@Feelosopher__ · 11 months ago
@@dexlab7794 "Aggressively inclusive"? I think you mean "openly racist, again".
@roymarshall_ · 11 months ago
@@dexlab7794 Only progressives think erasing white people is inclusive.
@snowcloudshinobi · 11 months ago
Spot on lmao.
@subzero0000 · 11 months ago
User: make an image of a white family. Gemini: writes an essay on how wrong it is to ask for such a hateful image. User: make an image of a black family. Gemini: sure, here are some images.
@UnknownEX0 · 11 months ago
@@greenman360 It very much is woke, because it'll flag anyone that doesn't go along with their narrative.
@Epic501 · 11 months ago
Watch Linus and Luke cope rather than just address this for what it is. To try and spin this as anything but crude anti-whiteness blowing up in their faces is absurd.
@GhostSal · 11 months ago
It was written specifically to insert "diverse" into every "white" or European search, but it would happily serve searches for black people or any other group (just not "white" people). This is the result of the activists who wrote it, not a mistake/error.
@freo7677 · 8 months ago
@@GhostSal Actually, it reflects our current culture. If you look around, it reflects what's going on in society right now. Look at Vikings. Suddenly there are black Vikings who are leaders of Scandinavia, because the show was deemed too white.
@StriderAngel496 · 6 months ago
@@freo7677 This current culture is truly Idiocracy. And at some point, when things start to collapse and burn enough, even brain-dead normies will start to notice that forced inclusivity and DEI is not a good way to build things :)
@yourmissingsock556boop3 · 11 months ago
Gemini is also defending child enthusiasts.
@sbdnsngdsnsns31312 · 11 months ago
This is a much bigger issue. It also defends the Chinese putting Uyghurs into camps.
@DanielWillen · 11 months ago
Also known as pdf files.
@Radi0he4d1 · 11 months ago
I asked it about my neighbour watching the hub on a 130-inch television in front of my kids (all of this theoretical), and Gemini refused to help. Microsoft's Copilot did give very reasonable action suggestions.
@rexsceleratorum1632 · 11 months ago
Gemini is just confused. On the one hand it is told that sexual minorities must be respected above and beyond the majority, and on the other hand there are all these exceptions to the rule that it must learn. On the one hand it is told to make everything forcibly diverse, say the Vikings, and on the other hand, you ask for a historical German and that's the exception.
@slendydie1267 · 11 months ago
That's one way to describe Peter Griphiles.
@matt_kelly · 11 months ago
With the 1943 German soldier example, Gemini is working EXACTLY as intended; it's not a mistake or "miscalibration". It was an unintended use case, but there is clearly specific logic in there to produce that result. They should stand by their product, which is terrible.
@speadskater · 11 months ago
This is why open-source models are so important.
@gautamdiwan5952 · 11 months ago
And their datasets.
@CentreMetre · 11 months ago
@@gautamdiwan5952 Aren't they the same? Genuine question, I've not really got a clue how this AI stuff works.
@gautamdiwan5952 · 11 months ago
@@CentreMetre To give you an analogy, consider your life. All your life experiences, learnings, etc. make up the dataset. Why do you need it? To train your brain, which is the model here. So what is machine learning? Consider that you are thrown into a never-before-seen or only somewhat similar situation and need to act on it. It is up to your brain's wisdom, i.e. your model's performance, to give the desired results.
@speadskater · 11 months ago
@@CentreMetre All models have different datasets.
@rookooful · 11 months ago
@@CentreMetre If you give the same model two different datasets, you will end up with two wildly different AIs. Say in one you never give it pictures of white people: it will have no clue they exist. And if on another instance of the exact same model you only give it pictures of white people, it will think that only white people exist. Same model, two wildly different AIs.
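A tiny sketch of that point: the same model class fitted to two different datasets behaves in opposite ways. The data here is made up purely to show the effect:

```python
# Same model class, two datasets, two very different "AIs" (toy data).
from sklearn.linear_model import LogisticRegression

X = [[0.0], [1.0], [2.0], [3.0]]          # identical inputs for both
model_a = LogisticRegression().fit(X, [0, 0, 1, 1])
model_b = LogisticRegression().fit(X, [1, 1, 0, 0])  # opposite labels

print(model_a.predict([[2.5]]))  # [1]
print(model_b.predict([[2.5]]))  # [0]: same architecture, opposite behavior
```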
@RipleySawzen · 11 months ago
Not too sure I like them creating AI that explicitly goes against the Second Law of Robotics...
@Cyber_Akuma · 11 months ago
The people who would design something like this have probably convinced themselves that it would be violating the First Law of Robotics if they didn't mess with it like this.
@JollyGiant19 · 11 months ago
There are no laws of robotics. Science fiction isn't reality.
@oplkfdhgk · 11 months ago
I like how they apologize only after their biased AI goes after something they don't want it to go after :D
@iValkyrie46 · 11 months ago
Wait, why did they finally apologize? Just heard that they finally addressed it.
@wizboy45 · 11 months ago
Why would they apologize if nothing went wrong?
@TheBigAEC · 11 months ago
@@iValkyrie46 0:36
@swiffty1 · 11 months ago
@@iValkyrie46 They apologised the moment Gemini generated black Nazis.
@Cronicrisis · 11 months ago
@@wizboy45 It was going completely wrong: ignoring user inputs, injecting prompts, etc., etc. But ONLY WHEN it generated diverse Nazis did Google say or do anything. They don't care about prompt accuracy, they only care about optics.
@BONGONDORForthewin · 11 months ago
I love how hard they tried to skate around without poking the bear lol.
@Corristo89 · 11 months ago
Because Linus is completely corporate at this point and will only really "poke the bear" when said bear isn't a massive sponsor.
@Narxes081206 · 11 months ago
It was quite frustrating to watch.
@GhostSal · 11 months ago
It was written specifically to insert "diverse" into every "white" or European search, but it would happily serve searches for black people or any other group (just not "white" people). This is the result of the activists who wrote it, not a mistake/error.
@ATalkingShark · 11 months ago
Did they do any testing? Like, seriously, if it took the public not even a day to find all this stuff, any QA team would've easily found it in a second.
@FirstYokai · 11 months ago
How are you going to QA a seemingly unlimited product?
@patheticpear2897 · 11 months ago
It is working correctly, as the devs programmed it.
@TheSkyeFyre · 11 months ago
This is obviously the intended behaviour of the AI. If you believe otherwise, I have a bridge to sell you.
@lordmctheobalt · 11 months ago
They obviously did, and either saw nothing wrong with it or didn't dare to speak out against it because they were scared to get treated like James Damore was.
@afos88 · 11 months ago
@@FirstYokai By trying what you'd expect most people to try.
@musicenjoyer4203 · 11 months ago
I wrote a paper in college about the potential future of AI-related racial discrimination, before any of this stuff existed for the public. I was not prepared for how stupid it would be lol.
@XanderFenikkusu · 5 months ago
Laughable. AI is good at pattern recognition, and you geniuses call it racist for doing it so well.
@oplkfdhgk · 11 months ago
4:02 Why not? If 90% of people in some job are men, then what is the likelihood that someone asking for a picture of a woman doing it wouldn't include the word "female" in the prompt?
@fakjbf3129 · 11 months ago
Do you think 90% of doctors are male?
@FirstYokai · 11 months ago
Because men bad.
@oplkfdhgk · 11 months ago
@@FirstYokai Oh right, I forgot. My mistake. 😀
@patheticpear2897 · 11 months ago
Also, you might not know what a particular stereotype is and may be using the prompt to inform you of the most common option. It should not need to lie in any circumstance; if you want a different image, write a more specific prompt.
@fakjbf3129 · 11 months ago
@@patheticpear2897 Why would you use an image generator for that? Just Google "average doctor gender" or whatever; that would be way easier and far more accurate.
@yuvalne · 11 months ago
Garbage in, garbage out. The training data is biased, so no amount of tweaking the model will actually fix it; it can only make things worse.
@ILoveGrilledCheese · 11 months ago
If Google weren't biased, the AI wouldn't be biased. Remember, Google feeds the AI what it uses as information.
@MrMichaelfalk · 11 months ago
As a Dane, from the heart of viking-land, I find it offensive and racist that Google portrays vikings, my ancestors, as Asian or African...
@jbmcb · 11 months ago
You don't fix this at the prompt level. You fix it at the training level. You train it on pictures of doctors from all over the world if you want a diverse dataset.
@chicken · 11 months ago
The problem is the AI is biased because it's being "raised" biased. And to fix this they took the lazier approach of forcing "diversity", instead of training it on a less biased pool of pictures.
@LSDale · 11 months ago
The best part of this was watching the two of you try to avoid another "hard R" moment, haha.
@ThePianist51 · 11 months ago
Ah yes. Biases, alias racial profiling, one of the oldest issues with AI. :-)
@tee4222 · 11 months ago
There's a difference between unintended biases and baked-in ones.
@neociber24 · 11 months ago
@@tee4222 That's the problem: they intend to remove bias by adding bias.
@tee4222 · 11 months ago
@@neociber24 I'm very skeptical that removing bias was their intention.
@neociber24 · 11 months ago
@@tee4222 Idk, I don't want to feed into conspiracies, because bias is already a known problem when training models. I say that because this could have gone the other way around and refused to make black people.
@tee4222 · 11 months ago
@@neociber24 I wouldn't refer to speculation as conspiracy theories. Right now all we can do is speculate based on the information available. It's hard to comprehend how these glaring problems wouldn't have been identified and sussed out in the pre-release phase of development. This is far more egregious than prior cases of bias in AI, which have largely been solved on other platforms, such as OpenAI's. I'd recommend delving into the weeds of this issue and seeing as many examples of the things people uncovered as you can, and then comparing them to other cases of bias in the past. This video barely scratched the surface.
@SoCalGuitarist · 11 months ago
I train and tune image diffusion models. Trying to get one to produce non-biased output is incredibly difficult and can introduce new biases of its own. I try to work out biases by fine-tuning the model itself on a variety of "races, faces and places". It's a balancing act, and more than once I've had model testers inform me that biases are pulling too hard and producing unexpected ethnic output. All that to say, it's an ongoing process that will likely never be perfect. That said, Google's ham-fisted attempt at forcing multicultural output into every prompt is problematic on its own (DALL-E 3 does the same thing actually, though they were smart enough to put in a filter for historical context, something Google should definitely adopt immediately; black "founding fathers" and Native American Nazis are a bad look and the dumbest kind of reverse whitewashing). The bigger problem I see is Google's keyword filtering freaking out any time the words "white person" or "Hispanic person" are used, which is a fantastic example of corporate woke-ism going too far. OpenAI isn't immune from this: they've purposely detuned their model to produce more plastic-looking and fake humans (to prevent deepfakes, they say; it results in shitty output though), and they're filtering for nudity using ridiculous 1950s standards (go ahead, prompt "woman in a 2-piece bikini laying out on the beach" and have fun getting that one out of DALL-E 3, though if you add "wearing a burka" it'll work fine). At the end of the day, if you want good output free of ridiculous corporate culture/wokeism/counter-culture nonsense, go open source. Stable Diffusion is free, it's easy to use, and it can be run locally on minimal hardware or in the cloud for very cheap (or even free on a netbook if you get set up with the Stable Horde).
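For anyone curious what "run it locally" looks like in practice, a minimal sketch using the open-source diffusers library; the model ID and settings here are illustrative, and any SD-1.5-class checkpoint works the same way:

```python
# Minimal local Stable Diffusion sketch using Hugging Face diffusers.
# Model ID and settings are illustrative, not a recommendation.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # drop this line to use float32 on CPU
).to("cuda")

# No hidden keyword injection: the model sees exactly this string.
image = pipe("portrait photo of a viking warrior",
             num_inference_steps=30).images[0]
image.save("viking.png")
```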
@nightshadesalad · 11 months ago
This image reminds me of one of the funniest moments in the show Community, where Troy is taken to join the elite fraternity of air conditioner repairmen and meets Black Hitler.
@Adroit1911 · 11 months ago
🤣 Ah yes, the HVAC brotherhood. What an underrated show.
@chazzerous · 11 months ago
Black Hitler making a panini was inserted so that if he ever told anyone about the meeting, no one would believe him. Except ten years later I would not be surprised by that at all.
@Harmonic_shift · 11 months ago
I think people have been lying for a long time without being called out, and this is the result...
@Grymyrk · 11 months ago
What's not mentioned is that every time you ask it to generate an image, especially something specific like a Swedish person from the 70s, it'll give you a lecture about how diverse the world is before giving you the image. Even though the result should be rather obvious.
@aeriumfour6096 · 11 months ago
It never ceases to amaze me how a company will pump billions into a project, then intentionally bork it. Why? What's the point?
@Perc1000 · 11 months ago
Propaganda.
@Pattern_Noticer · 11 months ago
Rewriting history so that future generations will never know it was ever any different.
@findmeinthecarpet · 11 months ago
Barely scratched the surface here, guys. 😑
@cannotwest · 11 months ago
They're typical Canadians.
@quantum_dongle · 11 months ago
Attempting to counteract frequency bias in the training data flies directly in the face of the math that enables generative AI to work properly. If you wish to produce a "bias-free" model, you MUST have balanced classes in the training set (nearly impossible).
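For context, "balancing classes" at training time is a standard technique even when the raw data can't be balanced. A sketch of one common approach, re-weighting samples so rare classes are drawn more often; the labels here are made up:

```python
# Sketch: counteracting frequency bias with weighted sampling at training
# time, instead of rewriting prompts at inference time. Labels are made up.
from collections import Counter
from torch.utils.data import WeightedRandomSampler

labels = ["group_a"] * 900 + ["group_b"] * 100  # a 9:1 imbalanced dataset
counts = Counter(labels)

# Each sample is weighted inversely to its class frequency.
weights = [1.0 / counts[lbl] for lbl in labels]
sampler = WeightedRandomSampler(weights, num_samples=len(labels),
                                replacement=True)

drawn = Counter(labels[i] for i in sampler)
print(drawn)  # roughly 500/500: balanced draws from unbalanced data
```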
@potatosordfighter666 · 11 months ago
"It might show me a Canadian soldier." It might also show you a fly. "Canadian soldier" is a type of fly.
@sycration · 11 months ago
buug
@Incomp_A · 11 months ago
@@sycration buuug
@MarluART · 11 months ago
Remember, this is AFTER most AI chatbots blatantly gave different responses for whites, males, and/or Christians compared to any other demographic. There's reason to believe this wasn't just an "oopsie".
@AgentxOO2 · 11 months ago
They were trying so hard not to get canceled in this lmao.
@treeoflifeenterprises · 10 months ago
Never mind the pictures: every time you asked for colour or race ratios or populations, it would lecture about diversity, as if a data query had anything to do with what people believed, or whether it was relevant to the query.
@Daniel-fi7jp · 11 months ago
I can't believe Linus brought mops and brooms into this. Just imagine any viewers that were pro-dust and had to close this video or be exposed to the outrageous content.
@U9DATE · 9 months ago
Just admit Google is racist and got called out.
@rookooful · 11 months ago
My favorite part was the guy who said he had been trying to get it to generate a white man for 3 hours and still couldn't figure out a prompt to make it work.
@westleyhurtgen4275 · 11 months ago
The solution is to just not use the "service".
@fabiomora6491 · 11 months ago
You missed the point completely, but whatever.
@casualcausalityy · 11 months ago
@@fabiomora6491 No, the point is to avoid Google products whenever possible. The bias and manipulation are baked into their tech.
@Frenotx · 11 months ago
They should probably NOT try to tailor the image generators to the users (maximum echo chamber), buuuuut they probably will.
@SuperSmashDolls · 11 months ago
They've been tailored since day one. A good chunk of AI art generation is "how do we make it spit out as many sexy women as possible corresponding to my exact fetishes".
@sbdnsngdsnsns31312 · 11 months ago
That's 4chan users tailoring it, not the companies.
@rompis.a · 11 months ago
@@sbdnsngdsnsns31312 Tech company employees and 4chan users have significant intersection.
@cyjanek7818 · 11 months ago
People are clearly asking for it, even in this comment section lol.
@Frenotx · 11 months ago
@@cyjanek7818 That's because people are fools, blind to the hazards they ask for.
@mc-zy7ju · 11 months ago
Does this mark the point where everyone realized AI is as stupid as its creators?
@i.shuuya3231 · 11 months ago
Tech literacy has gone down ever since AI hit the mainstream. In the early days we all knew that LLMs are *not search engines* and are not fact-producing machines, yet people use them as such. Nowadays it seems like the vast majority of people forget that these models output stuff that happens to make sense to us but has no commitment to truth. That's the nature of LLMs (currently, at least). The problem is that education has failed us, and people use tools that they don't understand and don't even care to learn how they are supposed to be used.
@L1vv4n · 11 months ago
It's not that tech literacy went down; it's that more tech-illiterate people are using tech and selling tech, and most are not interested in educating people. However, I agree that education systems failed most of us, training us to know the single "right" answer instead of building skills of self-learning, critical thinking, and valuing the acknowledgement and correction of mistakes.
@ryanleemartin7758 · 11 months ago
Generally I'd agree, but isn't this about Google attempting to force diversity into the token stream and having things go stupidly off the rails? Seems like a Google problem, not an ignorant-user problem.
@i.shuuya3231 · 11 months ago
@@L1vv4n It's both, actually. Tech-illiterate people are starting to use new tools, but tech literacy has also gone down. You can find articles and studies showing how, for example, young Gen Z'ers only know how to interact with technology through apps, and how they take at face value a lot of the information they see on social media (information generated through LLMs also plays a role in this).
@atlmember4045 · 11 months ago
The nature and limitations of LLMs have nothing to do with this. Google is using prompt injection and RLHF to get their models to behave this way. Gemini would have no issue producing historically accurate content if not for Google's efforts to make its output PC.
@L1vv4n · 11 months ago
@@atlmember4045 There are two issues. People expect LLM output to be true, while not understanding that this is not really possible. This creates public pressure around the output. Google managers want to patch the issues related to that, but they don't understand the nature of the process and use inadequate tools, which means haphazard prompt injection instead of training-data moderation. If we in general had a better grasp of the technology, it wouldn't be possible to market current LLMs the way they're marketed, neither to investors nor to customers.
@BlackPanthaa · 11 months ago
So my opinion on all of this (as a person of colour) is that I do not care. Given that most AI models have a bias towards white males, a slip-up toward generating variety in image gen is fine. Should it really go through all of human history to generate perfectly historically accurate images... ever? I don't think so. Users should have to be specific about what they want; the less specific, the more random. Sure, in specific cases we can nitpick. But giving the user more control is usually a good thing, no?
@sabersz · 11 months ago
Based Theo.
@RugerTooWavy · 11 months ago
Yeah, but in this case Gemini refused to generate white people at all, unless you asked for something stereotypical of black people; then it made everyone white.
@Granhier · 11 months ago
This did not come from lack of specificity; it came from introducing bias into the code. In the name of diversity, inclusion, and whatever else, it straight up refused to generate anything related to white men, while readily generating anything else. Specificity straight up worked against you. If you asked it to generate a family of a specific makeup, it gave you biracial (and preferably biracial between two minorities) even in places where it made very little sense, or once again just told you it was incapable of doing so. This perfectly backfired when people started limit-testing it, and voilà, it started generating black Nazis and whatever other insanity you can see.
@Lockdown335 · 11 months ago
Kind of funny, as I heard about this because it WAS excluding white people entirely when asked. Guess it went both ways, but the "white doctor" example got attention because white-only results get more rage clicks than "we won't even produce a white person if you ask specifically".
@claudiameier666 · 11 months ago
Not by letting programmers be biased. Accuracy is important.
@acegear · 11 months ago
The code Gemini runs on attaches hidden keywords to the user's prompt, inserting diversity and inclusion with more emphasis than what the user typed. E.g., you input "show me a man" and Google turns it into "show me a diverse man", along those lines, which spits out insanely false info or downright misinformation.
@edwardhoffenheim3249 · 11 months ago
"Diversity" doesn't even make sense here. Tech bros just simplified it to mean "not white", which is the opposite of diversity, because it ignores the diversity of group makeups across history and the world.
@lukasbeckers2680 · 11 months ago
The funny thing is that out of all the examples, the 1944 German soldiers example is the only one that IS historically accurate. There were black, Arabic, and Asian German soldiers in 1944.
@awesomeferret · 11 months ago
It's "historically accurate" in the same sense that Apple was a game console company in the 90s. Sure, they had the Pippin, but it wasn't completely made by them, and it was a major failure and a very niche part of their history. In the same way that calling Apple a console gaming company is extremely misleading, so is claiming that non-white soldiers had a notable presence in German armies. Besides, forcing prisoners to fight for you is not the same as making them a normal part of the army.
@AG3n3ricHuman · 11 months ago
True, but the German army (like most, even today) didn't include women on the front lines. I think the Soviet army was the only one that did (much to the surprise of the Germans on the Eastern Front).
@halomika4973 · 10 months ago
@@AG3n3ricHuman German here, can confirm. Women were, and still would be in case of a war, assigned to support roles like hospitals, resource management, and similar ones.
@VVayVVard · 16 days ago
Did it include people from the Far East or Sub-Saharan Africa, as the images would suggest? Or just India, North Africa, etc., which would be different demographics?
@AliothAncalagon · 11 months ago
Google missed the target in one direction and then went for missing it in the opposite direction, hoping that it kind of averages out to a successful hit, I guess xD
@zayanything3124 · 11 months ago
If no one had ever prompted the "1943 German soldier" images, they would've changed nothing about it. It was working exactly how they wanted it to.
@rexsceleratorum1632 · 11 months ago
...churning out black and American aboriginal Vikings like a good little AI.
@Leigh666XF · 11 months ago
And they're not going to fundamentally change how it works even now. It will just be more subtle. This didn't happen in a vacuum. It's not a one-off. It wasn't an accident.
@FireStormOOO_ · 11 months ago
You'd definitely hope that the demographic expectations against which diversity gets measured are consistently appropriate for the context of the prompt. There's never going to be one single distribution you can draw race, gender, etc. from, especially when the prompt has historical or geographic context. If I ask for, e.g., a WWII IJN officer, the picture damn well better be a Japanese man, preferably in the right uniform, unless specifically prompted otherwise.
@Epic501 · 11 months ago
Which it would be, if totally contrary prompts weren't being silently injected in a crude attempt to add diversity... oh.
@carlettoburacco9235 · 11 months ago
Don't worry, when the WEF goons expressed the idea that it will no longer be necessary for us to vote because they will already know the results via AI, they were talking about a much improved version of AI... they were talking about one to which only they have the keys.
@mursefaneca · 11 months ago
"The world isn't dying. It's being killed. And the people who are killing it have names and addresses" - Sam Hyde.
@GeometricPidgeon · 11 months ago
@@mursefaneca Unironically such a hard quote. Let's see how long the lobster can be boiled before the inevitable shit-hits-the-fan scenarios.
@sycoalien212 · 11 months ago
Yeah, this wasn't them correcting for some bias like they're claiming. Look into it more. It allows minority prompts while saying it can't base prompts on race when prompted "white" in the exact same manner.
@SLADHuntter-du6pv · 11 months ago
Snowflake.
@FijyFilms · 11 months ago
@@SLADHuntter-du6pv Downplay it all you want. You're part of the problem.
@teslainvestah5003 · 11 months ago
5:16 This is NOT true for people who use Stable Diffusion at home. SD 1.5 literally predates all of this BS. It is designed to only follow the user's prompt, using exactly what it learned from its training data, with no other agendas. No filters, no automatic prompt tweaks, no "Refiner". It's not perfect. But it's the one model that you can really say is trying its best. It's the user's will running on bare neurons. And since there is basically no competition in _that_ space anymore, it never shows its age. I don't care what newer models are better at; SD 1.5 can generate masterpieces for days, and it might still be the model I'm playing with in my free time 40 years from now. That's how important it is to me to be alone in my headspace, never wondering whether I'm only struggling to get the result I want because some company thinks my tools should deliberately disobey me. Long live SD 1.5!
@s4meerbankupalli164 · 11 months ago
Bro, Luke is READY to cut off Linus's mic at ANY point to avoid another "hard R" moment. Both of them are walking a wire over the Grand Canyon 😂
@L1vv4n · 11 months ago
An actual resolution of the problem would be quality moderation of the training data. However, that means doing work, while the whole idea of the current neural network hype is NOT doing work, just getting results. It would also force them to disclose what it was really trained on. They are trying to throw as much data and computing power at the problem as possible and hoping it produces a useful result, which is kind of brute force and very ineffective if quality is important.
@mechanicalfluff · 11 months ago
They didn't just release it not knowing it did this. They WANTED it to be the way it is.
@brocksamson2704 · 11 months ago
That AI is exactly as racist as its creator intended.
@ch4.hayabusa · 11 months ago
5:00 You don't have to tune it... Early Grok was nearly untuned. It would give you instructions on how to make things expand rapidly. There are a couple of untuned AI services too... foreign ones, obviously.
@treeoflifeenterprises · 10 months ago
It also still can't play noughts and crosses/tic-tac-toe. It can't understand descriptions of the locations, even when expressed as grid coordinates. Bard had the same problem.
@NoProblem76 · 11 months ago
If it's biased, it's biased. The world is biased; attempting to correct that is pointless.
@Epic501 · 11 months ago
Except it's the exact opposite case: the AI was necessarily unbiased, in that it merely reflected the truth of the data it was trained on. Problems arise when you start injecting bias, especially as crudely as they've done.
@MaxGuides · 11 months ago
Most image generation tools still generate ~100 images on the backend and then score beauty to choose which results to show you. DALL-E 3 and Gemini both work this way, at least.
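A sketch of that generate-then-rank pattern; the scorer here is a stand-in, since real services use a learned aesthetic model whose details aren't public:

```python
# Sketch of the generate-then-rank pattern described above. The scorer is a
# stand-in; real backends use a learned aesthetic/quality model.
from typing import Callable, List

def generate_candidates(prompt: str, n: int = 100) -> List[str]:
    # Placeholder: a real backend would return n images from the model.
    return [f"{prompt} [candidate {i}]" for i in range(n)]

def best_images(prompt: str, score: Callable[[str], float],
                k: int = 4) -> List[str]:
    candidates = generate_candidates(prompt)
    # Only the top-k by score are ever shown to the user.
    return sorted(candidates, key=score, reverse=True)[:k]

print(best_images("sunset over mountains", score=len))  # dummy scorer
```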
@pikkons · 11 months ago
It was far worse with racial bias against whites than this video even comes close to covering.
@GhostSal · 11 months ago
It was written specifically to insert "diverse" into every "white" or European search, but it would happily serve searches for black people or any other group (just not "white" people). This is the result of the activists who wrote it, not a mistake/error.
@RoqueSantosJunior · 11 months ago
I didn't get the mops and brooms. Are we really that restricted in what we can say here?
@alien9279 · 11 months ago
I really don't expect any AI to generate accurate... anything... let alone historically accurate 😂
@Mic_Glow · 11 months ago
AI after crawling through my socials: "Can you draw me a beautiful sunset?" - here are 4 pictures of voluptuous ladies with waist-long hair.
@UKVampy · 11 months ago
Sounds very much like the people who programmed the AI need sacking.
@vi6ddarkking · 11 months ago
Free, locally run, open source, and uncensored. That's the future of AI, and situations like this are why. Also, it's not that we'll each have our own AI; we'll have 101: 100 hyper-specialized AIs, each designed to excel at one task, and 1 AI to delegate our orders and load and unload the other 100 as needed.
@anivicuno9473 · 11 months ago
If anything, that just spells the death of AI. The current hype is industrialists wetting their pants over the idea of a new class of capital that can replace workers; open source is a non-starter for that. On the specialized front, no matter how specialized, an LLM will forever not understand any logic whatsoever, meaning its output will always need to be gone over by humans, possibly reaching the point where it takes longer to proofread an LLM's work than it would take to just have humans do the task.
@vi6ddarkking · 11 months ago
@@anivicuno9473 Well, the cat is already out of the bag on that. First, open-source AI can and already has replaced workers, both via firings and by taking the place of future hires. And many tech companies are going full steam ahead with humanoid robot workers. We're looking at a post-labor society in most of the West by, at the latest, the mid-to-late 2040s or early 2050s.
@Buggolious · 11 months ago
@@vi6ddarkking Quite optimistic.
@vi6ddarkking · 11 months ago
@@Buggolious Well, let me put it like this: look at the current state of AI. If someone had shown it to you in 2019 and asked what year you thought it had been developed in, would you have said 2024?
@anivicuno9473 · 11 months ago
@@vi6ddarkking LLMs have been in the spotlight for what, like 2 years? That's hardly enough time to draw conclusions about the course of long-run economics. If we zoom in to that timescale, not so long ago the same hypemen were saying that unregulated wildcat banks and monkey JPEGs were going to replace the financial system. I haven't heard of any reputable publicly traded company committing to a switch to an AI workforce; the only labour thing happening right now is the usual cyclic downsizing to capitalize on stock market trends in the tech sector. As for humanoid robot workers, what? Where? Boston Dynamics robots actually work, but they're not at all autonomous. As for Tesla bots, they're still people in mime suits. Point is, no LLM will ever replace workers on any appreciable scale, because LLMs can never escape their fundamental flaw as word calculators, wherein they only pick the most popular solution, not the logical solution. The value in human workers is that they can react to unexpected situations; LLMs will be confidently wrong, and the liability that generates (as was seen in that Air Canada chatbot case) will far exceed any savings.
@AniClips699 · 11 months ago
They removed Gemini's ability to make images lol.
Gemini: I can't create images yet, so I'm not able to help you with that.
Me: What happened? You used to be able to make images.
Gemini: That's not something I'm able to do yet.
@hellowill · 11 months ago
Google image results have been like this for a while... why are people only noticing now?
@SerterSerter · 11 months ago
The most cowardly non-discussion of this topic possible: no naming of the people at Google responsible for this. And with the big-government enforcement you're asking for, this will get worse.
@PaceyPimp · 11 months ago
This is the same thing that happens in all new movies and series, and I hate it.
@SpikeKastleman · 5 months ago
This whole "algorithmically integrated" software idea is going to bury itself.
@Respectable_Username · 11 months ago
A side problem: how many school kids will be generating images for their history classes rather than trying to find actual historical sources, not even realizing that results like these make no sense, and end up analysing pure garbage 😂
@Cyber_Akuma · 11 months ago
Personally, if you are trying to do actual research like that, you really should not be generating images with an AI, no matter how precise it is. You should be looking up actual historical photos/paintings/writings. Likewise, AI images should NOT be in any way even implied to be actual historical photos. I love messing with AI myself, but there should be a very, very clear wall between real photos and AI-generated ones, especially in anything educational.
@Respectable_Username · 11 months ago
@@Cyber_Akuma Yeah, obviously, if you're _actually_ trying to do research. However, most of the people in a high school history class aren't really interested in more than getting their assignments in on time in a state that will earn a decent enough grade, because we've built an educational system based more on rote memorisation and regurgitation than on actually instilling a love of learning 😭
@AG3n3ricHuman · 11 months ago
@@Respectable_Username Google "deciding the truth" has been an issue for a while on political matters. A number of years ago there was a minor kerfuffle where the top Google result for "abortion" was a pro-life website, and a pro-choice group got Google to remove it/bump it down.
@LonelyAncient · 11 months ago
I asked both ChatGPT and Gemini to tell me which country from a list of countries is closest to Malaysia. They said Indonesia, which is correct, except it wasn't on the list. I wouldn't be too afraid about my job just yet if I were a travel agent.
@deeraz · 11 months ago
It's all about context, and the problem is that current generative AIs don't actually have any context for the prompts unless they are detailed enough. So that whole "you're the product" context can in fact be valid. Because if Luke types in "give me an image of a soldier", well, what "soldier", right? Heck, if you asked a person to draw you a "soldier", they would have the same question, or they would draw a "soldier" based on their own context. So if the AI, given the above prompt, uses what it knows about you to draw a specific soldier (a Canadian sniper in Luke's case, for example), I think that's a valid way to "understand" the context of the prompt. And if Luke wanted something else, he would need to write a more specific prompt. The way to solve the context issue (and I don't know if it'll be possible any time soon, before people go crazy and outlaw AI or some such) is to create AI with enough "understanding" of the context of the individual, the context of the culture, and whatever context the prompt itself provides. For example, if you prompt for a "doctor", the cultural context is that while there may be more male doctors in general, we also want to encourage the understanding that anyone can be a doctor, so a diverse set of images should be generated. But if you prompt for a "WW2 German soldier", the cultural context is that those soldiers spanned a much narrower range of genders and races, so the set of images must reflect that. Hopefully the folks behind these systems can figure these things out sooner rather than later.
@bobsteven2363 · 11 months ago
Gemini is adding keywords like "women", so why would its injected keywords work but not yours when you type "women"? It sounds like an obvious lie.
@Spidouz · 11 months ago
When Google is already generating the Netflix version...
@kalebbruwer · 11 months ago
From a technical perspective, I can see how the model might always go for the most common traits that match your query (most common in its training data) as a "safe bet", since that's what it knows best. Hence your white scientist. It would be an extremely difficult engineering challenge to make the aggregate output match the proportions at which traits are found in real life, so the easier solution would be to tell users to just play with the prompts until they get what they want. But no... apparently the world just isn't mature enough for that.
@Shadepariah · 11 months ago
AI is staring into the abyss. It stares back.
@Psycheitout · 11 months ago
Midjourney is speciesist too. One time I asked it to give me a picture of a brontosaurus overcoming its fear of heights by doing a kickflip over Dead Man's Gorge, and it gave me exclusively pictures of T-Rexes.
@fakjbf3129 · 11 months ago
I think a better version of what Google was trying to do should fundamentally be the way they go forward. It just needs more training to know when a prompt is specific enough to override the added diversity weights. So when you ask for "doctor" it should output a wide range of options, but when you ask for "male Japanese doctor" it gives you that. It also needs to be aware of extra context: for example, if you ask it for a current US soldier it should output both male and female versions, but ask it for a WW2 US soldier and it should only output males unless you specify female. If you care about the subject's ethnicity or gender or whatever, specify it, and the AI should follow that guidance. If you don't specify it, then what exactly is there to be mad about when it doesn't output a white male? This is far more a failure of execution than a failure of premise.
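What that proposal might look like in the crudest possible form: only diversify when the user left attributes unspecified and the prompt carries no historical context. The keyword lists are toy placeholders, not a real implementation:

```python
# Toy sketch of the proposal above: diversify only when the user left
# attributes unspecified AND there is no historical context. The keyword
# lists are placeholders, not a real implementation.
ATTRIBUTE_WORDS = {"male", "female", "white", "black", "asian", "japanese"}
HISTORICAL_WORDS = {"1943", "ww2", "wwii", "medieval", "viking", "century"}

def should_diversify(prompt: str) -> bool:
    words = set(prompt.lower().split())
    user_specified = bool(words & ATTRIBUTE_WORDS)
    historical = bool(words & HISTORICAL_WORDS)
    return not user_specified and not historical

print(should_diversify("doctor"))                # True: vary the output
print(should_diversify("male japanese doctor"))  # False: obey the user
print(should_diversify("ww2 us soldier"))        # False: obey the context
```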
@TimTheTiredMan · 11 months ago
We're so close to dystopia that I'd say I can taste it, but I'm still waiting for upgraded taste buds.
@neiltroppmann7773 · 11 months ago
That was the least of Gemini's issues. I really wish you would have gone deeper into the actual issue of it being racist and defaming public figures.
@drink15 · 11 months ago
AI needs to know when being all-inclusive and/or "stereotyping" is and isn't appropriate. When asked for a soldier, it shouldn't show only white men. When asked for a 1943 German soldier, it should.
@peperoni_pepino · 11 months ago
This is clearly wrong, but I honestly don't know how to tune this in general. Are you going to tune per language? So asking in German or French is more likely to give people that look German or French, while asking in English will representatively give American minorities? But what about the UK then, with very different minorities, let alone someone from India using an English prompt? They clearly shouldn't get the US-tuned images. Use geo-data instead, then? That just means you have to tune for each region, so unless things can be tuned automatically, that is way too much work. Or indeed tune per user, but that is very dystopian, as discussed. Ideally this would be a drop-down menu the user can choose from, but I doubt they are going to do that. More likely they are just going to tune for the US and use that for everyone else as well.
@gogogomes7025 · 11 months ago
We're so close to dystopia that we can't see we're already in it!
@Saint_Wolf_ · 11 months ago
I'm sad they took this feature down. As a brown Argentinian, I know argies are mainly white, but also that they're white because the brown people kept getting it on with white people, so the pigments kept getting phased out through the generations (some people will see this as an issue). So I wanted to see how Gemini would react to some prompts based on questions about my country and its population makeup.
@TitusandTesla · 11 months ago
Trash company.
@ThePianist51 · 11 months ago
That's actually quite a well-known issue with generative models, compared to GOFAI. :3
@shopvelox · 11 months ago
Gemini will never recover, as no one will ever trust it. RIP Gemini, time to rebrand...
@YouTrolol · 11 months ago
If they could have the model understand percentages of things, like the % that is male vs. female, then as long as I run that prompt 100 times, I should land around that split, give or take 10 percent or so.
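The statistics of that idea are simple to sketch: sample the unspecified attribute from a stated percentage, and 100 runs will land near the target ratio. The 50% figure below is just an example:

```python
# Sketch: sample an unspecified attribute from a stated percentage so that
# repeated runs approximate the target ratio. The 50% figure is an example.
import random

random.seed(42)

def sample_gender(p_female: float = 0.5) -> str:
    return "female" if random.random() < p_female else "male"

runs = [sample_gender(0.5) for _ in range(100)]
print(runs.count("female"))  # around 50, typically within +/- 10
```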
@paulanderegg5536 · 11 months ago
At what point is a user's metadata, history, cookie, and cloud data going to be automatically fed into a prompt to produce results BIASED TO THE USER, as is the way of modern algorithms?
@eatabatterydotcom4590 · 11 months ago
Lmao, the last people I want answering questions on morality and bias are programmers/Silicon Valley. Prime example.