Algorithmic Bias and Fairness: Crash Course AI #18

172,727 views

CrashCourse

1 day ago

Comments: 154
@Jeremiah_ray 8 months ago
WGU Students 🙋🏽‍♂
@Kalaphant 6 months ago
0:50 I was not expecting to get a lesson about discrimination when I clicked on a CCAI video XD
@BK-en1uo 4 years ago
I guess you could make the whole second season of CC Ecology about all the stuff that lives under that hat.
@mathmeetsmusic 4 years ago
Love the nonbinary inclusion remark. PBS on the whole isn't great with gender inclusive language and I didn't expect crash course AI to be the pioneer, but I sure appreciate it.
@melonlord1414 4 years ago
Well, CC is produced by Complexly. The company was founded by John and Hank Green, two YouTubers with a very diverse community.
@nantukoprime 4 years ago
Did a short stint working on an algorithm that looked for potential pickpockets, trained on video of actual incidents that led to arrest. Was moved to another project after I kept bringing up the fact that the algorithm was biased as the data set was generally representative of a subset of pickpockets, the ones who get caught. My request for video of successful pickpockets that were not arrested to train the algorithm was not viewed favorably.
@saulgalloway2295 4 years ago
6 agreements and disagreements. 1. Nurses are 90% female. Programmers are 80% male. Of course you're going to have far more images on average of the dominant sex in those fields. But, sure. Get it to say THEY. 2. The only value understanding gender has is significant behavioral predictions. Algorithm doesn't care about your social Yugioh game to feel special. It's tackling reality. 3. Lack of data on the racial bit. For sure we need greater data samples there. 4. We're gonna ignore uncomfortable crime stats? Ok. 5. Yes. The kids who are shown to do well often are at a much lesser risk of becoming shitty. Reality sure is complicated. 6. Yes. You can't discriminate when it comes to loans and jobs. Even if there's a significant racial, sex, whatever difference. Things can't change for the better if you force them out, and skillful/valuable individuals that aren't part of the problem within these groups would suffer.
@KarlRamstedt 4 years ago
3:28 omg, that's the jealous girlfriend from the stock-photo meme.
@Karatop420 4 years ago
If you gave the algorithm more data for protected classes, wouldn't that just bias it towards them? It seems that any learning data would necessarily contain some kind of pre-selected bias to even make a choice.
@wolflink9000 4 years ago
@@jordangoodman4769 Uhh, no, I think his point is that maybe it's not bias, just statistical reality.
@stephenjames2951 4 years ago
fire rises what an idiotic statement.
@wolflink9000 4 years ago
@@stephenjames2951 more like what a correct statement
@Blaze6108 4 years ago
fire rises no that’s not how it works. AI doesn’t behave in a deterministic way, it can exhibit its own biases even if the training set is unbiased. In these cases you need to bias the training against the AI’s own bias to obtain a correct result. It’s like trying to get a very dumb dog to walk along a straight line, if it’s veering sharply to the left you don’t tell it to go straight, you tell it to go right so it returns to following the straight line.
@Karatop420 4 years ago
@@jordangoodman4769 I don't suggest that it is a lost cause. I was thinking of a different angle. My angle was that AI can't perform value judgements, only logical judgements. Whatever the moral environment might be, the AI program will be expected to mind it to be viewed as "unbiased." A logically unbiased AI may very well be viewed as having a moral bias if the results differ from the beliefs of whoever is judging. For instance, suppose an AI were to learn that some particular political issue was factually non-existent. All those championing that particular cause would certainly judge it as biased and seek modifications. Suppose further that the AI was correct. The AI would then be biased towards factual analysis, which is still a bias! People may or may not value such bias. I don't see how you can say there would be no bias, though. Think of the difference between equal opportunity vs. equal outcome. If you were to choose one as unbiased, those who value the other will disagree vehemently.
@1996squareenix 4 years ago
"sexual orientation is strongly correlated with certain characteristics of a social media profile photo" which characteristics? how do i algorithmically optimize the gayness of my profile??
@Y0UT0PIA 4 years ago
simple, just use an anime girl as your avatar :)
@Spacemuffin147 4 years ago
As a society, we live in a society.
@Julie-jl2kk 4 years ago
wut
@Acoolakim007 4 years ago
During the debate that followed ProPublica's accusation that the COMPAS algorithm discriminated against black people, Kleinberg, Mullainathan and Raghavan showed that there are inherent trade-offs between different notions of fairness. In the case of COMPAS, for example, the algorithm was "well-calibrated among groups", which means that, independent of skin colour, in a group of people classified as, say, 70% likely to recidivate, 70% actually did recidivate. However, ProPublica objected that the algorithm produced more false positive predictions for blacks (meaning blacks were more often wrongly labeled high risk) and more false negative predictions for whites (meaning whites were more often wrongly labeled low risk). In their paper, the authors showed that these notions of fairness, namely "well-calibrated among groups", "balance for the negative class" and "balance for the positive class", are mathematically incompatible and exclude each other. One can't have one and the others at the same time. So yes, AI systems will be biased, as insisted upon in the video. But it raises questions about what kind of fairness we want implemented and what we're willing to give up.
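The trade-off that comment describes can be checked with a toy calculation. The numbers below are invented for illustration, not the real COMPAS data: a score that is equally well-calibrated in two groups must produce different false-positive rates whenever the groups' base rates differ.

```python
# Made-up numbers illustrating the incompatibility described above: a risk
# score can be equally well-calibrated in two groups yet have very different
# false-positive rates when the groups' base rates differ.

def calibration_and_fpr(labels, flags):
    """labels: 1 = actually reoffended; flags: 1 = labeled high risk."""
    flagged = [l for l, f in zip(labels, flags) if f == 1]
    negatives = [f for l, f in zip(labels, flags) if l == 0]
    calibration = sum(flagged) / len(flagged)  # share of flagged who reoffend
    fpr = sum(negatives) / len(negatives)      # share of non-reoffenders flagged
    return calibration, fpr

# Group A: 50% base rate. 70 flagged: 49 true positives, 21 false positives.
labels_a = [1] * 50 + [0] * 50
flags_a  = [1] * 49 + [0] * 1 + [1] * 21 + [0] * 29

# Group B: 20% base rate. 10 flagged: 7 true positives, 3 false positives.
labels_b = [1] * 20 + [0] * 80
flags_b  = [1] * 7 + [0] * 13 + [1] * 3 + [0] * 77

print(calibration_and_fpr(labels_a, flags_a))  # calibration 0.7, FPR 0.42
print(calibration_and_fpr(labels_b, flags_b))  # calibration 0.7, FPR 0.0375
```

Both groups see the same 70% calibration, yet non-reoffenders in group A are flagged more than ten times as often, which is exactly the pattern ProPublica reported.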
@eyeborg3148 4 years ago
TapIntoTheEssence yes this comment gives the necessary context to the situation.
@nonsensespeaks 4 years ago
Non binary? - me a programmer confused
@nonsensespeaks 4 years ago
I know, I'm sorry, I don't mean to be offensive; a little dark humor isn't that bad... hopefully.
@kareemrussell4930 4 years ago
@@MyNontraditionalLife suck it up
@whaddup5417 4 years ago
Clearly he’s referring to those of the quantum race
@SP-df1nm 4 years ago
@@whaddup5417 That's what my thought was
@alexandrub8786 4 years ago
@@nonsensespeaks The Death of Stalin had more dark humor in its first word than your entire sentence. Yes, it's humor, but I wouldn't call it "humor negro" (dark humor).
@dmurphy8264 4 years ago
Many people are missing the point of the Google analogy. AI hiring systems will learn the associated characteristics of a nurse or programmer or what have you from similar datasets. That's not so much the problem; it's what happens next. It discriminates against people who don't meet the average characteristics. The AI system may throw out a resume for a nursing position that has the words "Boy Scout troop leader" because that's not something associated with the average nurse. It may throw out qualified programmer resumes from people who attended HBCUs, because most programmers haven't. If you don't quite get this, please look up the scrapped Amazon AI hiring program. It downgraded resumes from applicants who attended women's colleges.
@MfundoTenza 4 years ago
Yikes that's messed up.
@Randomfun4life 5 months ago
Been struggling to focus lately, and I couldn't stop staring at the insanely huge size of his beanie. I must say, though, once I was able to focus, I found the video very well put together. Thank you!
@HetareKing 4 years ago
The point of the Google image search example isn't to accuse Google of some grave injustice; it's just an easy-to-understand example of how the fact that a computer generated something doesn't mean the output isn't biased. The society it's getting its data from is biased in favour of female nurses, so it will return mostly pictures of female nurses even when the user is just looking for "nurse" without specifying gender. Once you understand that, it's easy to understand how that can become a problem when the situation is more complicated and the stakes are higher, which is the whole point of the episode. Let's say there are 10 male nurses in the world and 90 female nurses. Out of those 100 nurses, one man and two women have committed the same misdemeanour on the job. Given that, would it be fair to make decisions on whom to employ as a nurse based on the idea that 10% of men have committed this misdemeanour but only ~2% of women have? An AI trained on this data might. Worse yet, you don't even know it's doing this because its decision-making process is more or less a black box.
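The base-rate arithmetic in that example, spelled out (the counts are the commenter's hypothetical, not real data):

```python
# Hypothetical counts from the comment above: 10 male nurses with 1 incident,
# 90 female nurses with 2 incidents.
men, women = 10, 90
men_incidents, women_incidents = 1, 2

male_rate = men_incidents / men        # 0.10
female_rate = women_incidents / women  # ~0.022

print(f"male rate: {male_rate:.1%}, female rate: {female_rate:.1%}")
# With only 10 men in the sample, a single incident makes 'male' look 4.5
# times as risky -- far too little evidence for a model to generalize from.
```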
@GrayBlevins 10 months ago
Ai dumb
@Kalaphant 6 months ago
0:05 Actually, they're sets of moves on a Rubik's Cube! (R U R' F' R U R' U' R' F R2 U' R' is a Jb Perm, for example.) (And yes I have the moves memorized.)
@Argacyan 4 years ago
I'd imagine at least a couple of people will be upset to hear that the data put into an algorithm will bias the outcome, even though it's a very easy concept to grasp: like how the chemicals in a reaction narrow down what products you can possibly get, or how different fuels fed to a fire affect the heat it generates. I can only imagine disagreement there if getting biased results while claiming there were none was the intention to begin with...
@davidweb2728 4 years ago
Define bias, then define unfair. That there is your problem, son.
@TKOB 4 years ago
Very informative, great work.
@asergb 1 year ago
Microsoft's fault for putting a bot on Twitter. If they had put the bot somewhere else, it would probably have gone better lmao
@kajmeijer611 1 year ago
There's great talent in simplifying complicated things, and you've got it, man 🙌🏼
@Jibby24 4 years ago
bruh my name is Jibril
@frankie3778 4 years ago
Hideously relevant rn
@Setarko 4 years ago
Early squad here?
@senzubean1358 4 years ago
Here🙋‍♀️
@TaylorChildAkaWeapon 8 months ago
Such a great video, thanks for the crash course!
@redshipley 4 years ago
I don't get why people are upset when AI can't recognize their face. I'd be thrilled not to be recognized. They wouldn't be able to use recognition software on me.
@heatherswanson1664 4 years ago
When the AI isn't trained on enough people of your race, it thinks other people are you and they can unlock your phone :/
@Hamstray 4 years ago
"Algorithms are unambiguous specifications for performing calculation, data processing, automated reasoning, and other tasks." AI (neural networks) are not unambiguous and don't qualify as algorithms. In neural networks, biases may emerge spontaneously regardless of the training data.
@henlofrens 4 years ago
Neural networks are definitely algorithms; the problem is that they are often not transparent. There are increased efforts to improve the reverse engineering of NNs after training using rule extraction methods, so we get better visibility into how they weigh certain features (especially in CNNs, where the conv/pool cycles are a major component of this). Also note that not all NNs have a randomized element in them, but nowadays most do, especially in initialization. So while some may be 'spontaneously' biased, it's not a given rule for all NNs.
@Mr_Wallet 4 years ago
Neural networks are created and updated by algorithms, but the data that describes a specific neural network is virtually never pieced apart into a known algorithm, and such a network vanishes the moment you feed one more piece of training data into it. For continuously-updated networks, there is no computationally feasible method on the horizon for keeping extracted rules up-to-date.
@ViewtifulSam 4 years ago
That's a neat-for-explaining but not very precise definition of algorithm. Technically, at any given point an algorithm does exist, it's just not generally known. Although I agree it's not very useful to call such things "algorithms" in many contexts (and the system itself is indeed not "an algorithm"), I don't see the problem in this case.
@Kalaphant 6 months ago
7:05 Some of them are funny lol.
@Hambxne 4 years ago
Please do a complete course on climate science!!
@hlam2364 4 years ago
Algorithms like when YouTube gives a video an ❌ to demonetize it when it's talking about the truth.
@Dabunni6398 4 years ago
Why are his shirts always wrinkled?
@motorolaandroid5688 4 years ago
I see the book Weapons of Math Destruction.
@Kalaphant 6 months ago
I actually read about a way that a fill-in-the-blank tool was able to avoid assuming gender stereotypes, even though the training data had a lot of them.
@mikhailsemenchenko1383 4 years ago
Wow, this video is newborn
@erikeriks 4 years ago
Wow, this comment is newborn
@L3GITME 4 years ago
Mikhail Semenchenko what do you mean by this?
@tribal8684 4 years ago
Do a Crash Course on music production, please.
@ReidMerrill 4 years ago
1:30 Not bias. Most nurses are women.
@Kalaphant 6 months ago
4:10 XD
@Caperhere 4 years ago
Cambridge Analytica.
@ListerTunes 4 years ago
I'm reminded of the resume-screening AI that taught itself that the best candidates were named Trevor and played high school lacrosse. Biases in culture introduce biases into data, which just replicates the bias.
@milton5417 4 years ago
And went to Cranbrook. That’s a private school.
@ShuklaMathsAcademy 4 years ago
Very nice crash course.👍🏻
@wolflink9000 4 years ago
Prioritizing resources to areas where, statistically, issues have been more likely in the past makes 100% perfect sense.
@artiphology 4 years ago
"Do you pledge the axiom?" "Only in my reality class" (got distracted reading the babel)
@Enigmo1 4 years ago
“Non-binary people doing both of these things” What would google images be showing to present this?
@alexandrub8786 4 years ago
Objects. Considering that humans are either male or female (99.9%) or have some genetic defect (in between,sadly nothing like a trap).
@jamesstaggs4160 4 years ago
Yeah, when you complain about the AI making things "a little more difficult" or "frustrating", then you've really got nothing to complain about. So Google Images shows pictures of nurses as women and programmers as men. More nurses are women and more programmers are men. Nobody is keeping anyone from being a programmer if they're female or being a nurse if they're male. I'm sorry, that's just a non-issue. We don't need to try and ensure that every single vocation has a perfect balance of race and/or gender. All we need to do is make sure that nobody is barred from any career path based only on their gender or race. Thinking like this should just be called "too many straight white men over there", because that seems to be the only group anybody is interested in making sure there aren't too many of in a given area. This is just about never applied to any other group.
@dmurphy8264 4 years ago
"Nobody is keeping anyone from being a programmer if they're female or being a nurse if they're male." Except AI hiring systems are already doing this. Look into the scrapped Amazon AI hiring system. It automatically downgraded resumes from applicants who attended women's colleges. The video didn't explain this very well, but AI systems look for associations. You can't just tell them not to discriminate against protected classes; they will learn that the average nurse did not attend Boy Scouts and the average programmer did not go to an HBCU, and therefore those resumes should be scrapped.
@ValeriaGasik 8 months ago
Very informative! Thank you! :)
@Kalaphant 6 months ago
1:40 YES
@demoman8063 4 years ago
noice video
@basilwhite 4 years ago
Outstandingly clear and engaging.👍
@ibi5990 4 years ago
HOW IS HIS HAT THAT BIG?!?!
@PwnZonePanda 4 years ago
Omg, I'm like 3 min in and it's already dumb. So a Google search shows more pictures of women than men for nurses, and more men than women for programmers. What's the ratio of pictures featuring women to men for nursing, and men to women for programming? I'm guessing the same as Google Images shows.
@BeCurieUs 4 years ago
Congratulations, you just described bias in data, since the availability of the images is being taken as a proxy for the gender of the jobs when the jobs themselves are genderless.
@PiggySquisherCaleb 4 years ago
Douglas Murray goes over this nonsense in The Madness of Crowds. Highly recommend it. He shows how algorithms for google don't show this nonsense if you use an Asian or Eastern-European IP. In the US and other Western countries, you can't even image search for "straight couple" without having gay couples show up.
@hedonisticzen 4 years ago
@@BeCurieUs That's not a bias in the data, that's a skew in the data, and you don't know the difference. Data sets are meant to represent populations, not necessarily normal distributions. Go back to statistics.
@BeCurieUs 4 years ago
@@hedonisticzen Congratulations, you just described what AI modelers often call society bias, where non-normalized data is treated as normalized data and fed into a model, thus skewing results. I find your use of words here perplexing as data skews are one of the more prevalent types of erroneous modeling bias things that can happen. Perhaps one of us, indeed, needs to go back to statistics class.
@hedonisticzen 4 years ago
@@BeCurieUs Get your sociological quackery out of statistical analysis. More of the nursing population is female, so more pictures of female nurses is expected; perfectly reasonable and logical. The non-normalized skew just means certain statistical processes aren't appropriate to apply, not that the data set is in any way corrupt.
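One way to make the bias-vs-skew distinction in this thread concrete: a sample can faithfully mirror a skewed population and still need correction when a model trained on it is supposed to perform equally across groups. A standard remedy is inverse-frequency reweighting; here is a minimal, stdlib-only sketch (the 90/10 split is illustrative, not real data):

```python
# Inverse-frequency reweighting: give each group a weight proportional to
# 1 / (its frequency), normalized so the weights average to 1 across the
# sample. Each group then contributes equally to a weighted training loss.
from collections import Counter

samples = ["f"] * 90 + ["m"] * 10  # skewed training sample
counts = Counter(samples)
n, k = len(samples), len(counts)

weights = {g: n / (k * c) for g, c in counts.items()}
print(weights)  # f gets ~0.556, m gets 5.0; total weight per group is equal
```

This is the same idea as scikit-learn's `class_weight='balanced'` option; whether the skew *should* be corrected depends on the task, which is exactly what the two commenters above disagree about.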
@TaxPayingContributor 4 years ago
Gonna say it again: (Al Gore Rhythms)
@Master_Therion 4 years ago
Al Gore in a music band, now _that's_ an inconvenient truth. ;)
@TaxPayingContributor 4 years ago
@@Master_Therion disheveled neckbeard avoidant eyed frontman twisting a St. Vitus dance to the myopic earth victim/oppressor.
@sjoerdadlp 4 years ago
Most nurses are female, and most programmers are male. So if you Google a nurse, you see a typical nurse (who is female). The example between minute one and two is not about our biases; it's simply what is actually there.
@OscarORosas 4 years ago
#315 here
@gravity8197 4 years ago
Have you guys covered politics much? I know it can be touchy, but I'd like someone in a good position to do so. To explain the issues with things like the party system and gerrymandering, and what you can legally do to change it instead of letting things go until the levee breaks.
@dev_apostle 1 year ago
Hey, this guy's AI channel on YouTube is great! He always puts out great content.
@Mr_Wallet 4 years ago
OK, I was kind of dreading this one because I expected a bunch of woke drivel - but I gotta be honest, you folks pretty much nailed it. This was informative, and probably as even-handed as Crash Course has _ever been_ on such a sensitive topic. I am impressed.
@HolgerKraus23 4 years ago
That's why all big companies that use AI should establish AI ethics boards - let's keep our AI fair and unprejudiced! :)
@Dysputant 4 years ago
Yeah. Math is racist too. We need to unbias math.
@davidweb2728 4 years ago
Sounds good in principle, but then we would need a board to stop the ethics board from being biased, then another board to keep that board from being biased, then another...
@davidweb2728 4 years ago
@@tw2845 It was a joke, if you couldn't tell. My point was that it is not the algorithms that are biased but the metrics by which we measure bias that are the problem.
@mark_tilltill6664 4 years ago
AI has the power to destroy people's lives. It has no conscience.
@Byongcheol 4 years ago
2:25 Does anybody know the title of the article?
@rumasengupta8594 4 years ago
What do the terms capitalism and laissez-faire mean? Help, I do not understand.
@JRenardLeatherCo 4 years ago
Ruma Sen Gupta, well, those are two separate terms. Which one do you want first?
@codyuhi8010 4 years ago
Are there really deep learning models that implement a person's name as a factor to extrapolate their personality traits or compatibility for a job? Are there any studies that show that a person's given name has a significant correlation to their personality?
@ljnv 4 years ago
Goodbye, Crash Course. I loved your videos, but go woke, go broke.
@christopherwalsh3101 4 years ago
"bias is wrong" says the most stereotypical looking black guy they could find
@Antenox 4 years ago
What does his look have to do with anything?
@schparque 4 years ago
More Jabril please!
@dankley9969 4 years ago
Had to throw that "and non binary people" bit in there
@debzeb6899 4 years ago
Dankley, non-binary people exist. He's being inclusive. I work in data, and non-binary people are often excluded. For instance, many censuses around the world impute sex and gender to one of two responses. Leaving the two available responses blank was the only way non-binary folk could truthfully respond to the question, but imputation, based on the assumption there were only two answers I guess, perpetuated the myth that there are only two genders.
@Argacyan 4 years ago
Non-binary people are relevant for the video topic. If this video was an algorithm, and it wouldn't include stuff like this, the output would just be a syntax error message.
@dankley9969 4 years ago
@@debzeb6899 wait you mean non binary people exist?
@dankley9969 4 years ago
@@MyNontraditionalLife how many genders are there?
@dankley9969 4 years ago
@@MyNontraditionalLife what about in the animal world?
@L3GITME 4 years ago
These features are going to come to pass, the only hope is in salvation by faith through Jesus Christ, to obtain the Holy Spirit which will lead you to a relationship with God
@batteryjuicy4231 4 years ago
it's so weird to see Jabrils move his mouth!
@fuckthensa3908 4 years ago
Propaganda