It's not bias to accurately represent the bias of reality or to do exactly as you're programmed to do.
@dr.mariophd4296 · 6 years ago
Can I "call bullshit" on the part about sexist bias in machine translation at 04:29? In English (and French, and other languages) you need to express a subject for your phrase. If the machine cannot extrapolate the information from the context, what should it do? It seems to me logical that it would "guess" starting from the data. Are there more male or female doctors in English-speaking countries? I have no idea, but if the answer is "more male doctors", the behavior we see seems correct.
@sashamalone852 · 6 years ago
For a better look I recommend the paper cited in the slide, or the one cited below: Bolukbasi, Tolga, et al. "Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings." Advances in Neural Information Processing Systems, 2016.
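If you want to reproduce the paper's headline analogy yourself, a rough sketch with gensim's pretrained Google News vectors looks like this (the download is large, multi-word tokens in that model are underscore-joined, and the exact neighbors depend on which embedding set you load):

    # Analogy test in the style of Bolukbasi et al. (2016):
    # "man is to computer_programmer as woman is to ___?"
    import gensim.downloader as api

    model = api.load("word2vec-google-news-300")  # pretrained KeyedVectors
    results = model.most_similar(
        positive=["woman", "computer_programmer"],
        negative=["man"],
        topn=5,
    )
    for word, score in results:
        print(word, round(score, 3))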
@omgitsflying · 7 years ago
The subtitles are broken on this one; I get the subtitles from the previous episode. They help non-native English speakers. But it's an amazing course.
@UWiSchool · 7 years ago
Thank you! We've updated the captions.
@rabreu08 · 7 years ago
I think a nice study would be to compare the algorithm's bias with the real data. For example: see whether women named LATANYA actually have more criminal records than women named JILL.
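Sketching how that comparison could run: collect ad-serving counts per name group, test whether the serving rates differ, then separately check whether actual record rates differ by the same amount. All counts below are placeholders, not real data:

    # Do arrest-record ads appear at different rates for the two names?
    from scipy.stats import chi2_contingency

    # rows: name group; columns: [ad shown, ad not shown] -- placeholders
    ads_served = [[60, 40],   # searches for "Latanya ..."
                  [20, 80]]   # searches for "Jill ..."

    chi2, p, dof, expected = chi2_contingency(ads_served)
    print(f"chi2={chi2:.2f}, p={p:.4f}")
    # A small p means the serving rates differ by name group; the study
    # would then check whether real record rates differ by as much.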
@willschab9414 · 5 years ago
gotta hate CIS 415
@ezradlionel711 · 2 years ago
If you told me that 27% of nurses are male, as opposed to the 11% previously thought, it wouldn't change the fact that the majority of nurses are female. And using a Google image search as definitive proof of some kind of inherent algorithmic bias is just problematic: search algorithms in general suffer from position bias when trying to display millions of results. Humans are full of biases, but AI ethics seems to be all about the curation of data rather than any real ethical issues inherent in AI itself. This video is 5 years old and AI ethics is on the rise, particularly due to the prevalence of language models. Yet AI ethicists continue to gloss over the fact that AI can only regurgitate what it's been trained on. Unless you can literally clean up humanity or ban free speech, language models will continue to be a digital mirror, parroting whatever groupthink they've assimilated.
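To be concrete about position bias: clicks depend on rank as well as relevance, so raw click counts flatter whatever happens to be shown first. A toy inverse-propensity correction, with invented examination probabilities and logs:

    # (rank shown at, clicked?) logs for one result -- invented data
    logs = [(1, True), (1, True), (2, False), (3, True), (5, False)]

    # Probability a user even looks at each rank (illustrative numbers)
    examine = {1: 0.9, 2: 0.5, 3: 0.3, 4: 0.2, 5: 0.1}

    naive_ctr = sum(c for _, c in logs) / len(logs)

    # Self-normalized inverse-propensity estimate: clicks at hard-to-see
    # ranks count for more, impressions at rank 1 count for less.
    num = sum(c / examine[r] for r, c in logs)
    den = sum(1 / examine[r] for r, _ in logs)
    corrected = num / den

    print(f"naive CTR: {naive_ctr:.2f}, position-corrected: {corrected:.2f}")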
@MS-il3ht · 5 years ago
Yeah. But this kind of bias isn't that systematic...
@AlwaysTalkingAboutMyDog · 6 years ago
Who's here from an ethics course?
@EconaelGaming · 5 years ago
I think this whole episode is bullshit. E.g. at 3:14: there might just be a cluster of CEOs who account for most of the images online, and that cluster has a different gender distribution. Don't assume that every CEO has the same number of public photos! E.g. at 4:26: if you look at the actual data, nurses are predominantly female (and have been for centuries). Modern neural machine translation learns from examples, and there are more examples of female nurses than male nurses in texts because the texts reflect reality. Where is the bias here? Otherwise, I like the course a lot!
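Here is a quick simulation of the photo-cluster objection: even if women are, say, 28% of CEOs, the image pool can show a very different ratio when a few individuals account for most public photos. Every number below is invented:

    # 100 hypothetical CEOs: 28 women, 72 men (indices 28-99 are men)
    ceos = ["F"] * 28 + ["M"] * 72

    # Suppose 5 celebrity (male) CEOs have 500 photos each online,
    # while everyone else has 5.
    photos = [500 if (g == "M" and i < 33) else 5
              for i, g in enumerate(ceos)]

    pool = [g for g, n in zip(ceos, photos) for _ in range(n)]
    share_f = pool.count("F") / len(pool)
    print(f"women: 28% of CEOs, {share_f:.0%} of photos")  # ~5%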
@Scarecrow0041 · 4 years ago
I think you might be missing the point. For the CEOs, the publicly available data doesn't match reality, so an algorithm trained on that data would be biased; the difference in distribution is the exact problem. The nurse example is the same issue: what you've observed is the problem. That is historically the reality, which is why the results are biased. Since the sentence was genderless in the original language, the translation is not really accurate (because of the embedded historical bias). This could certainly cause problems, or at least miscommunications, if the translations were trusted.
@EconaelGaming · 4 years ago
@@Scarecrow0041 I think I was missing the point. Data which does not reflect reality creates a biased model.