Did a short stint working on an algorithm that looked for potential pickpockets, trained on video of actual incidents that led to arrests. Was moved to another project after I kept bringing up the fact that the algorithm was biased: the data set only represented a particular subset of pickpockets, the ones who get caught. My request for video of successful pickpockets who were never arrested, to balance the training data, was not viewed favorably.
@kajmeijer611 1 year ago
there's great talent in simplifying complicated things and you've got it man 🙌🏼
@TaylorChildAkaWeapon 11 months ago
Such a great video, thanks for the crash course!
@Karatop420 5 years ago
If you gave the algorithm more data for protected classes, wouldn't that just bias it towards them? It seems that any learning data would necessarily contain some kind of pre-selected bias to even make a choice.
@wolflink9000 5 years ago
@@jordangoodman4769 uuhhh no, i think his point is that maybe it's not bias, just statistical reality.
@stephenjames2951 5 years ago
fire rises what an idiotic statement.
@wolflink9000 5 years ago
@@stephenjames2951 more like what a correct statement
@Blaze6108 5 years ago
fire rises no that’s not how it works. AI doesn’t behave in a deterministic way, it can exhibit its own biases even if the training set is unbiased. In these cases you need to bias the training against the AI’s own bias to obtain a correct result. It’s like trying to get a very dumb dog to walk along a straight line, if it’s veering sharply to the left you don’t tell it to go straight, you tell it to go right so it returns to following the straight line.
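The "steer against the drift" idea above has a concrete analogue in practice: one blunt way to bias training against a model's own bias is to reweight or oversample the class it is under-serving. A minimal Python sketch (the function name and numbers are invented for illustration):

```python
# Minimal sketch: counteract a model that under-predicts label 1 by giving
# label-1 examples more weight in the training set (here, by duplication).
def oversample(dataset, boost_label, factor):
    """Return dataset with examples of boost_label weighted `factor` times."""
    boosted = [ex for ex in dataset if ex[1] == boost_label]
    return dataset + boosted * (factor - 1)

# 8 examples of class 0, only 2 of class 1: an imbalance a naive model
# would simply mirror back at us.
data = [("a", 0)] * 8 + [("b", 1)] * 2
balanced = oversample(data, boost_label=1, factor=4)

ones = sum(1 for _, label in balanced if label == 1)
print(ones, len(balanced))  # 8 16 -> the two classes now carry equal weight
```

Duplication is the crudest option; most training frameworks support per-example weights, which achieve the same correction without inflating the dataset.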
@Karatop420 5 years ago
@@jordangoodman4769 I don't suggest that it is a lost cause. I was thinking of a different angle: AI can't perform value judgements, only logical judgements. Whatever the moral environment might be, the AI program will be expected to mind it to be viewed as "unbiased." A logically unbiased AI may very well be viewed as having a moral bias, if the results differ from the beliefs of whoever is judging. For instance, suppose an AI were to learn that some particular political issue was factually non-existent. All those championing that particular cause would certainly judge it as biased and seek modifications. Suppose further that the AI was correct. The AI would then be biased towards factual analysis, which is still a bias! People may or may not value such bias, but I don't see how you can say there would be no bias. Think of the difference between equal opportunity vs. equal outcome: if you were to choose one as unbiased, those who value the other will disagree vehemently.
@Acoolakim007 5 years ago
During the debate that followed ProPublica's accusation that the COMPAS algorithm discriminated against black people, Kleinberg, Mullainathan and Raghavan showed that there are inherent trade-offs between different notions of fairness. In the case of COMPAS, for example, the algorithm was "well-calibrated among groups," meaning that, independent of skin colour, in a group of people given, say, a 70% risk of recidivism, about 70% actually did recidivate. However, ProPublica objected that the algorithm produced more false positive predictions for blacks (blacks were more often wrongly labeled high risk) and more false negative predictions for whites (whites were more often wrongly labeled low risk). In their paper, the authors showed that these notions of fairness, namely calibration within groups, balance for the negative class, and balance for the positive class, are mathematically incompatible when base rates differ: you can't have one and the others at the same time. So yes, AI systems will be biased, as insisted upon in the video. But it raises questions about what kind of fairness we want implemented and what we're willing to give up.
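The incompatibility described in the comment above can be checked with a toy calculation. The numbers below are made up for illustration (not the real COMPAS data): a score that is perfectly calibrated within each group still produces opposite error rates once the groups' base rates differ.

```python
# Toy illustration (invented numbers): a score that is perfectly calibrated
# within each group can still yield unequal false-positive and
# false-negative rates when the groups' base rates differ.

def error_rates(labels, preds):
    """False-positive rate and false-negative rate over 0/1 lists."""
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    return fp / labels.count(0), fn / labels.count(1)

# Group A has a 60% base rate of recidivism, group B 30%.
group_a = [1] * 60 + [0] * 40
group_b = [1] * 30 + [0] * 70

# A trivially calibrated score assigns everyone their group's base rate:
# among people scored 0.6, exactly 60% recidivate, and likewise for 0.3.
# Thresholding that score at 0.5 then flags all of A and none of B.
preds_a = [1 if 0.6 >= 0.5 else 0 for _ in group_a]
preds_b = [1 if 0.3 >= 0.5 else 0 for _ in group_b]

fpr_a, fnr_a = error_rates(group_a, preds_a)
fpr_b, fnr_b = error_rates(group_b, preds_b)
print(fpr_a, fnr_a)  # 1.0 0.0: every non-recidivist in A is wrongly flagged
print(fpr_b, fnr_b)  # 0.0 1.0: every recidivist in B is wrongly cleared
```

The example is deliberately extreme, but softer versions of the same arithmetic drive the real trade-off: calibration pins the scores to base rates, and unequal base rates then force unequal error rates.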
@eyeborg3148 5 years ago
TapIntoTheEssence yes this comment gives the necessary context to the situation.
@Spacemuffin147 5 years ago
As a society, we live in a society.
@Julie-jl2kk 4 years ago
wut
@TKOB 5 years ago
Very informative, great work.
@KarlRamstedt 5 years ago
3:28 omg, that's the jealous girlfriend from the stock-photo meme.
@1996squareenix 5 years ago
"sexual orientation is strongly correlated with certain characteristics of a social media profile photo" which characteristics? how do i algorithmically optimize the gayness of my profile??
@Y0UT0PIA 4 years ago
simple, just use an anime girl as your avatar :)
@saulgalloway2295 5 years ago
6 agreements and disagreements. 1. Nurses are 90% female. Programmers are 80% male. Of course you're going to have far more images on average of the dominant sex in those fields. But, sure. Get it to say THEY. 2. The only value understanding gender has is significant behavioral predictions. The algorithm doesn't care about your social Yu-Gi-Oh game to feel special. It's tackling reality. 3. Lack of data on the racial bit. For sure we need greater data samples there. 4. We're gonna ignore uncomfortable crime stats? Ok. 5. Yes. The kids who are shown to do well often are at a much lesser risk of becoming shitty. Reality sure is complicated. 6. Yes. You can't discriminate when it comes to loans and jobs, even if there's a significant racial, sex, or whatever difference. Things can't change for the better if you force them, and skillful/valuable individuals within these groups that aren't part of the problem would suffer.
@dmurphy8264 5 years ago
Many people are missing the point of the Google analogy. AI hiring systems will learn the associated characteristics of a nurse or programmer or what have you from similar datasets. That's not so much the problem; it's what happens next. The system discriminates against people who don't match the average characteristics. It may throw out a resume for a nursing position that has the words "Boy Scout troop leader" because that's not something associated with the average nurse. It may throw out qualified programmer resumes from people who attended HBCUs, because most programmers haven't. If you don't quite get this, please look up the scrapped Amazon AI hiring program. It downgraded resumes from applicants who attended women's colleges.
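That proxy effect is easy to reproduce in miniature. The sketch below (all resumes and tokens are invented for illustration) scores resume tokens naive-Bayes-style against historical hiring decisions that never mention gender, yet the token "womens_college" still picks up a negative weight simply because it is rare among past hires.

```python
import math
from collections import Counter

# Hypothetical toy data: past hires (label 1) skew away from resumes that
# mention a women's college, so a token-based model trained on this history
# learns to penalize the token even though gender is never a feature.
history = [
    (["cs_degree", "internship"], 1),
    (["cs_degree", "hackathon"], 1),
    (["cs_degree", "internship", "hackathon"], 1),
    (["cs_degree", "womens_college"], 0),
    (["cs_degree", "womens_college", "internship"], 0),
    (["hackathon"], 0),
]

def token_log_odds(data, token, smoothing=1.0):
    """Smoothed log odds that a resume containing `token` was hired."""
    hired, rejected = Counter(), Counter()
    n_hired = n_rejected = 0
    for tokens, label in data:
        if label:
            hired.update(set(tokens))
            n_hired += 1
        else:
            rejected.update(set(tokens))
            n_rejected += 1
    p_hired = (hired[token] + smoothing) / (n_hired + 2 * smoothing)
    p_rejected = (rejected[token] + smoothing) / (n_rejected + 2 * smoothing)
    return math.log(p_hired / p_rejected)

print(token_log_odds(history, "womens_college"))  # negative: penalized proxy
print(token_log_odds(history, "internship"))      # positive: rewarded token
```

Dropping the protected attribute from the features doesn't help, because correlated tokens carry the same signal; that is the core problem the comment is pointing at.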
@JordanLenz 5 years ago
Yikes that's messed up.
@ValeriaGasik 11 months ago
Very informative! Thank you! :)
@basilwhite 5 years ago
Outstandingly clear and engaging.👍
@jacobcase587 2 months ago
Great video!
@HetareKing 5 years ago
The point of the Google image search example isn't to accuse Google of some grave injustice; it's just an easy-to-understand example of how, just because a computer generated something, that doesn't mean the output isn't biased. The society it's getting its data from is biased in favour of female nurses, so it will return mostly pictures of female nurses even when the user is just looking for "nurse" without specifying gender. Once you understand that, it's easy to understand how that can become a problem when the situation is more complicated and the stakes are higher, which is the whole point of the episode. Let's say there are 10 male nurses in the world and 90 female nurses. Out of those 100 nurses, one man and two women have committed the same misdemeanour on the job. Given that, would it be fair to make decisions on whom to employ as a nurse based on the idea that 10% of men have committed this misdemeanour but only ~2% of women have? An AI trained with this data might. Worse yet, you don't even know it's doing this because its decision-making process is more or less a black box.
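The base-rate arithmetic in that comment is worth making explicit; a quick computation with the same hypothetical numbers:

```python
# Working through the hypothetical nurse numbers from the comment above.
male_nurses, female_nurses = 10, 90
male_offenders, female_offenders = 1, 2

male_rate = male_offenders / male_nurses        # 0.10
female_rate = female_offenders / female_nurses  # ~0.022

print(round(male_rate, 3), round(female_rate, 3))  # 0.1 0.022

# The gap looks large, but the male estimate rests on only 10 people:
# a single extra incident would double it. Tiny groups give very noisy
# rate estimates, which is exactly why per-group rates learned from
# sparse data make such shaky hiring rules.
```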
@ListerTunes 5 years ago
I'm reminded of the resume-screening AI that taught itself that the best candidates were named Trevor and played high school lacrosse. Biases in culture introduce biases into data, which just replicates the bias.
@milton5417 5 years ago
And went to Cranbrook. That’s a private school.
@Argacyan 5 years ago
I'd imagine at least a couple of people will be upset to hear that things like what data is put into their algorithm will bias the outcome. It's a very easy concept to grasp: the chemicals in a reaction narrow down what products you can possibly get, and different fuels fed to a fire change the heat it generates. I could only imagine disagreement there if getting biased results while claiming there were none was the intention to begin with...
@davidweb2728 5 years ago
Define bias, then define unfair. That there is your problem son.
@dev_apostle 2 years ago
hey this guy's ai channel on youtube is great! He always puts out great content
@ShuklaMathsAcademy 5 years ago
Very nice crash course.👍🏻
@Kalaphant 9 months ago
0:50 I was not expecting to get a lesson about discrimination when I clicked on a CCAI video XD
@wolflink9000 5 years ago
Prioritizing resources to areas where, statistically, issues have been more likely in the past makes 100% perfect sense.
@Hamstray 5 years ago
"Algorithms are unambiguous specifications for performing calculation, data processing, automated reasoning, and other tasks.", AI (neural networks) are not unambiguous and don't qualify as algorithms. In neural networks biases may emerge spontaneously regardless of the training data.
@henlofrens 5 years ago
Neural networks are definitely algorithms; the problem is that they are often not transparent. There are increased efforts to improve the reverse engineering of NNs after training, using rule extraction methods, so we get better visibility into how they weigh certain features (especially in CNNs, where the conv/pool cycles are a major component of this). Also note that not all NNs have a randomized element in them, though nowadays most do, especially in initialization. So some may be 'spontaneously' biased; it's not a given rule for all NNs.
@Mr_Wallet 5 years ago
Neural networks are created and updated by algorithms, but the data that describes a specific neural network is virtually never pieced apart into a known algorithm, and such a network vanishes the moment you feed one more piece of training data into it. For continuously-updated networks, there is no computationally feasible method on the horizon for keeping extracted rules up-to-date.
@ViewtifulSam 4 years ago
That's a neat-for-explaining but not very precise definition of algorithm. Technically, at any given point an algorithm does exist, it's just not generally known. Although I agree it's not very useful to call such things "algorithms" in many contexts (and the system itself is indeed not "an algorithm"), I don't see the problem in this case.
@nonsensespeaks 5 years ago
Non binary? - me a programmer confused
@nonsensespeaks 5 years ago
I know, I'm sorry, I don't mean to be offensive; a little dark humor isn't that bad... hopefully.
@kareemrussell4930 5 years ago
@@MyNontraditionalLife suck it up
@whaddup5417 5 years ago
Clearly he’s referring to those of the quantum race
@SP-df1nm 5 years ago
@@whaddup5417 what my thought was
@alexandrub8786 5 years ago
@@nonsensespeaks The Death of Stalin had more dark humor in its first word than your entire sentence. Yes, it's humor, but I wouldn't call it dark humor.
@Hambxne 5 years ago
Please do a complete course on climate science!!
@motorolaandroid5688 5 years ago
I see the book Weapons of Math Destruction.
@artiphology 5 years ago
"Do you pledge the axiom?" "Only in my reality class" (got distracted reading the babel)
@mikhailsemenchenko1383 5 years ago
Wow, this video is newborn
@erikeriks 5 years ago
Wow, this comment is newborn
@L3GITME 5 years ago
Mikhail Semenchenko what do you mean by this?
@frankie3778 4 years ago
Hideously relevant rn
@Kalaphant 9 months ago
I actually read about a way that a fill-in-the-blank tool was able to avoid assuming gender stereotypes, even though the training data had a lot of them
@Byongcheol 4 years ago
2:25 Does anybody know the title of the article?
@ReidMerrill 5 years ago
1:30 Not bias. Most nurses are women.
@redshipley 5 years ago
I don't get why people are upset when AI can't recognize their face. I'd be thrilled not to be recognized; they wouldn't be able to use recognition software on me.
@heatherswanson1664 5 years ago
When the AI isn't trained on enough people of your race so that it thinks other people are you and they can unlock your phone :/
@Randomfun4life 8 months ago
Been struggling to focus lately, and I couldn't stop staring at the insanely huge size of his beanie. I must say though, once I was able to focus, the video is very well put together. Thank you!
@Setarko 5 years ago
Early squad here?
@senzubean1358 5 years ago
Here🙋♀️
@PwnZonePanda 5 years ago
Omg, I'm like 3 min in and it's already dumb. So a Google search shows more pictures of women than men for nurses, and more men than women for programmers. What's the ratio of pictures featuring women to men for nursing, and men to women for programming? I'm guessing the same as Google Images shows.
@BeCurieUs 5 years ago
Congratulations, you just described bias in data: the availability of the images is being taken as a proxy for the gender of the jobs, when the jobs themselves are genderless.
@PiggySquisherCaleb 5 years ago
Douglas Murray goes over this nonsense in The Madness of Crowds. Highly recommend it. He shows how algorithms for google don't show this nonsense if you use an Asian or Eastern-European IP. In the US and other Western countries, you can't even image search for "straight couple" without having gay couples show up.
@hedonisticzen 5 years ago
@@BeCurieUs That's not a bias in the data, that's a skew in the data, and you don't know the difference. Data sets are meant to represent populations, not necessarily normal distributions. Go back to statistics.
@BeCurieUs 5 years ago
@@hedonisticzen Congratulations, you just described what AI modelers often call societal bias, where non-normalized data is treated as normalized data and fed into a model, thus skewing results. I find your use of words here perplexing, as data skews are one of the more prevalent sources of erroneous modeling bias. Perhaps one of us, indeed, needs to go back to statistics class.
@hedonisticzen 5 years ago
@@BeCurieUs Get your sociological quackery out of statistical analysis. More of the nursing population is female, so more pictures of female nurses is expected; perfectly reasonable and logical. The non-normalized skew just means certain statistical processes aren't appropriate to apply, not that the data set is in any way corrupt.
@Kalaphant 9 months ago
0:05 Actually, they're sets of moves on a Rubik's Cube! (R U R' F' R U R' U' R' F R2 U' R' is a Jb Perm, for example.) (And yes I have the moves memorized.)
@Jibby24 5 years ago
bruh my name is Jibril
@hlam2364 4 years ago
Algorithms like when YouTube gives a video an ❌ to demonetize it when it is talking about the truth.
@gravity8197 5 years ago
Have you guys covered politics much? I know it can be touchy, but I'd like someone in a good position to do so: to explain the issues with things like the party system and gerrymandering, and what you can legally do to change them instead of letting things go until the levee breaks.
@ibi5990 5 years ago
HOW IS HIS HAT THAT BIG?!?!
@Kalaphant 9 months ago
1:40 YES
@Dabunni6398 5 years ago
Why are his shirts always wrinkled?
@Caperhere 5 years ago
Cambridge Analytica.
@Enigmo1 5 years ago
“Non-binary people doing both of these things” What would google images be showing to present this?
@alexandrub8786 5 years ago
Objects. Considering that humans are either male or female (99.9%) or have some genetic defect (in between; sadly, nothing like a trap).
@asergb 1 year ago
Microsoft's fault for putting a bot up on Twitter. If they'd put the bot somewhere else, it would probably have gone better lmao
@BK-en1uo 5 years ago
I guess you could make the whole second season of CC Ecology about all the stuff that lives under that hat.
@sjoerdadlp 5 years ago
Most nurses are female, and most programmers are male. So if you Google "nurse," you see a typical nurse (who is female). The example between minute one and two is not about our biases; it's simply what is actually there.
@Kalaphant 9 months ago
7:05 Some of them are funny lol.
@codyuhi8010 5 years ago
Are there really deep learning models that implement a person's name as a factor to extrapolate their personality traits or compatibility for a job? Are there any studies that show that a person's given name has a significant correlation to their personality?
@Mr_Wallet 5 years ago
OK, I was kind of dreading this one because I expected a bunch of woke drivel - but I gotta be honest, you folks pretty much nailed it. This was informative, and probably as even-handed as Crash Course has _ever been_ on such a sensitive topic. I am impressed.
@demoman8063 5 years ago
noice video
@schparque 5 years ago
More Jabril please!
@jamesstaggs4160 5 years ago
Yeah, when you complain about the AI making things "a little more difficult" or "frustrating," then you've really got nothing to complain about. So Google Images shows pictures of nurses as women and programmers as men. More nurses are women and more programmers are men. Nobody is keeping anyone from being a programmer if they're female or being a nurse if they're male. I'm sorry, that's just a non-issue. We don't need to try to ensure that every single vocation has a perfect balance of race and/or gender. All we need to do is make sure that nobody is barred from any career path based only on their gender or race. Thinking like this should just be called "too many straight white men over there," because that seems to be the only group anybody is interested in making sure there aren't too many of in a given area. It just about never gets applied to any other group.
@dmurphy8264 5 years ago
"Nobody is keeping anyone from being a programmer if they're female or being a nurse if they're male." Except AI hiring systems are already doing this. Look into the scrapped Amazon AI hiring system: it automatically downgraded resumes from applicants who attended women's colleges. The video didn't explain this very well, but AI systems look for associations. You can't just tell them not to discriminate against protected classes; they will learn that the average nurse did not attend Boy Scouts and the average programmer did not go to an HBCU, and therefore those resumes should be scrapped.
@tribal8684 5 years ago
Do a crash course music production please.
@OscarORosas 5 years ago
#315 here
@rumasengupta8594 5 years ago
What do the terms capitalism and laissez-faire mean? Help, I do not understand
@JRenardLeatherCo 5 years ago
Ruma Sen Gupta, well, those are two separate terms. Which one do you want first?
@mathmeetsmusic 5 years ago
Love the nonbinary inclusion remark. PBS on the whole isn't great with gender inclusive language and I didn't expect crash course AI to be the pioneer, but I sure appreciate it.
@melonlord1414 4 years ago
Well, CC is produced by Complexly, the company founded by John and Hank Green, two YouTubers with a very diverse community.
@Kalaphant 9 months ago
4:10 XD
@HolgerKraus23 5 years ago
That's why all big companies that use AI should establish AI ethics boards - let's keep our AI fair and unprejudiced! :)
@Dysputant 5 years ago
Yeah. Math is racist too. We need to unbias math.
@davidweb2728 5 years ago
Sounds good in principle, but then we would need a board to stop the ethics board from being biased, then another board to keep that board from being biased, then another....
@davidweb2728 5 years ago
@@tw2845 It was a joke if you could not tell. My point was that it is not the algorithms that are biased but the metrics by which we measure bias that are the problem.
@TaxPayingContributor 5 years ago
Gonna say it again: (Al Gore Rhythms)
@Master_Therion 5 years ago
Al Gore in a music band, now _that's_ an inconvenient truth. ;)
@TaxPayingContributor 5 years ago
@@Master_Therion disheveled neckbeard avoidant eyed frontman twisting a St. Vitus dance to the myopic earth victim/oppressor.
@GrayBlevins 1 year ago
Ai dumb
@mark_tilltill6664 4 years ago
AI has the power to destroy people's lives. It has no conscience.
@christopherwalsh3101 5 years ago
"bias is wrong" says the most stereotypical looking black guy they could find
@Antenox 5 years ago
What does his look have to do with anything?
@dankley9969 5 years ago
Had to throw that "and non binary people" bit in there
@debzeb6899 5 years ago
Dankley, non-binary people exist. He's being inclusive. I work in data, and non-binary people are often excluded. For instance, many censuses around the world impute sex and gender to one of two responses. Leaving the two available responses blank was the only way non-binary folk could truthfully respond to the question, but imputation, based on the assumption that there were only two answers I guess, perpetuated the myth that there are only two genders.
@Argacyan 5 years ago
Non-binary people are relevant for the video topic. If this video was an algorithm, and it wouldn't include stuff like this, the output would just be a syntax error message.
@dankley9969 5 years ago
@@debzeb6899 wait you mean non binary people exist?
@dankley9969 5 years ago
@@MyNontraditionalLife how many genders are there?
@dankley9969 5 years ago
@@MyNontraditionalLife what about in the animal world?
@L3GITME 5 years ago
These features are going to come to pass, the only hope is in salvation by faith through Jesus Christ, to obtain the Holy Spirit which will lead you to a relationship with God
@ljnv 5 years ago
Goodbye, Crash Course. I loved your videos, but go woke, go broke.