AI is literally just pattern recognition. If you don't like the patterns it is recognizing, consider the origin of those patterns.
@jkgh374 2 years ago
That’s the point of the video…
@Troyboy23 2 years ago
@@jkgh374 can algorithms be racist?
@georgewitheridge4961 2 years ago
Can pattern recognition be racist?
@Troyboy23 2 years ago
@@georgewitheridge4961 I don’t think data is inherently racist. Do you?
@Antenox 2 years ago
@@Troyboy23 It is
@daftwod 2 years ago
Computers need to be told about the harm that observation of reality can do. They can't go around noticing things and expect to get away with it!
@thesuperiorman8342 A year ago
😂😂
@seawalkarrg 2 years ago
“I THINK they were probably a team of light skinned developers…” SHE SAYS
@XOPOIIIO 2 years ago
"Credit scoring algorithms favored financial behaviors that are more common among white people" - Maybe you should change your financial behavior then?
@dfurianz 2 years ago
Completely agree..
@WaleSoleye 2 years ago
Well, not many people can just afford to up and switch lifestyles. Many people's financial behaviour is based on the options available to them, and the options are not available to everyone, or are proven to be easier for some than others.
@dfurianz 2 years ago
@@WaleSoleye Well, some specific examples are needed here; otherwise what you said is just an excuse.
@lucaslouzada44 2 years ago
It said that the respective backgrounds were comparable, but the whole thing looks quite murky. This is one of those situations that needs clarification instead of being shoved into a racial-bias type of narrative. Let's be honest, though, in acknowledging that the purpose of the report was to ascertain the automatic reproduction of biased patterns through algorithms; that's pretty much settled, as computer technology is merely reproductive and wasn't created to address such problems…
@Munchausenification 2 years ago
So systems should favour specific behaviours? So you get a better algorithmic score if you buy a Volkswagen over a Citroën, and spending 20% rather than 15% of your income on food lowers your score? Where is the line?
@jansport0409 2 years ago
Oh great. Now The Economist sounds like The Guardian.
@AlexHarris1094 2 years ago
Tell me about it...
@bornamofid9254 2 years ago
Why is that Uber driver driving a BMW 😅
@chevalier5691 2 years ago
0:28 I truly wonder how she got the audacity to say dumb stuff like this confidently.
@XOPOIIIO 2 years ago
The worst part is they don't accept even the slightest possibility that they could be wrong. Going in the wrong direction so persistently is a sure way to keep any problem unsolvable. To solve any problem they have to recognize what the roots of the problem are, but such recognition is inconsistent with their persistent denialism.
@codefluence 2 years ago
I mean, there is consensus about that: the bias is in the data.
@Munchausenification 2 years ago
Our biases get transferred into technology. It could even be something as silly as pineapple on pizza. Most people have an idea of what pizza should look like, and the majority don't mind pineapple; I think it's 55% or something like that, while 10% hate it and 20-30% love it. Yet if you search pizza images you often have to look far down the list before a pineapple pizza image comes up, even though lots of people love it. It has to do with the number of images in the database and with what people click on when searching for images. Also, most of the time the people against something are louder, and our minds often focus on negativity. I hope that gives a clearer picture of what she is saying.
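The click-feedback loop the comment above describes can be sketched in a few lines. This is a deliberately crude toy (all item names and numbers are invented, not taken from any real search engine): if a ranking is driven by accumulated clicks and users mostly click whatever is already on top, the initial ordering locks itself in.

```python
# Toy model of a click-driven image ranking. Assumption (simplification):
# users only ever click the current top-ranked result, so early exposure
# compounds and a less-shown item never surfaces, regardless of how many
# people would actually like it.

clicks = {"margherita": 10, "pepperoni": 8, "pineapple": 1}  # initial exposure

for _ in range(1000):
    top = max(clicks, key=clicks.get)  # most-clicked item is shown first
    clicks[top] += 1                   # ...and therefore gets the next click

ranking = sorted(clicks, key=clicks.get, reverse=True)
print(ranking)              # ['margherita', 'pepperoni', 'pineapple']
print(clicks["pineapple"])  # still 1: the loop never gave it a chance
```

Real ranking systems are far more complex, but the rich-get-richer dynamic this sketch exaggerates is a standard explanation for why popular-looking results stay on top.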
@GeronimosStolenBones 2 years ago
The computer will scrub all narratives other than the one they want posted.
@edwardguo7995 2 years ago
So what about Asians? Is their situation better or worse than Africans?
@abhinavpy2748 2 years ago
Doesn't matter, China writes its own algorithms.
@Millsmills586 2 years ago
Better, because this algorithm was most likely written by either an "Asian" or a white person, especially if it was developed in the US.
@Millsmills586 2 years ago
@@abhinavpy2748 this is Uber, an American company
@luddicpath6756 2 years ago
05:24 Hey Economist! What are these "favored financial behaviors" that are more common among white people? Asking for a friend.
@Feynman981 2 years ago
It all depends on the sensors. Cameras are biased towards albedo; LiDAR is not.
@metros9911 2 years ago
Based, not biased.
@junyingo1396 2 years ago
based ai
@CugelTheClever458 2 years ago
Based
@goyasolidar 2 years ago
Algorithms don't create themselves, so consider the source.
@fishiestify 2 years ago
How to make computers less biased? Keep The Economist out of computers and the problem will be solved.
@mbm8690 2 years ago
Sorry, but how can a computer "see" someone's "colour"? Even by considering one's IP location there's no guarantee of anything.
@Millsmills586 2 years ago
It's the sampling that is lacking: if you don't feed your AI imagery of all types of humans, but only "white" people, the sampling is going to be skewed. Incorrect.
@KAMIOUKA 2 years ago
The AI that misclassified the couple as gorillas was 100% not fed with black people but instead with real gorillas, which means it was misprogrammed due to human error. If it was fed with actual black people, the couple just seemed to look more like gorillas than black people. No case of racism to see, because computers (spoiler) are indeed completely RATIONAL and do not have a bias at all!
@WaleSoleye 2 years ago
I think that's (part of) the idea of the video. Human error and acknowledging that these issues do exist can help the developers address them appropriately.
@XOPOIIIO 2 years ago
What caused the problem in this particular case is unclear; people are susceptible to sensations, but search engines mislabel photos all the time. It's just the usual kind of mistake nobody notices until it happens to provoke an emotional response.
@KAMIOUKA 2 years ago
@@WaleSoleye As for it being a human error: I highly doubt that, and only put that statement into my argument because I wanted to name both possibilities. These AIs just decide on statistical probability, and in my opinion the AI thought that the couple were presumably 60% gorilla and 40% human, so it picked gorillas. What to do against it, you ask? You can always code in a fault tolerance to add a moral to the program: if the probabilities of both are high, it always chooses the human, because for us it is morally more acceptable to classify a gorilla as a human. Probabilities can't always be correct. I understand the point of the video, though: our moral concept is much more versatile than any AI or PC could ever comprehend, and we have to understand that to fix things computers can't handle without our help.
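The "fault tolerance" proposed in the comment above is essentially an asymmetric decision rule on top of the classifier's probabilities. Here is a minimal sketch; the function name, labels, and the 60/40 numbers are hypothetical, taken only from the comment itself, and this is not how any real system is claimed to work.

```python
# Sketch of an asymmetric decision rule: plain argmax picks the most probable
# label, but when a sensitive label narrowly beats the "person" label, the
# guarded rule prefers "person", because that error is far less harmful.

def classify_with_guard(probs, sensitive="gorilla", safe="person", margin=0.3):
    """Return the most probable label, unless it is `sensitive` and `safe`
    scores within `margin` of it; in that case return `safe` instead."""
    top = max(probs, key=probs.get)
    if top == sensitive and probs.get(safe, 0.0) >= probs[top] - margin:
        return safe
    return top

# The commenter's hypothetical 60/40 split: argmax alone would say "gorilla",
# but the gap (0.2) is within the margin, so the guard returns "person".
print(classify_with_guard({"gorilla": 0.6, "person": 0.4}))    # person
print(classify_with_guard({"gorilla": 0.95, "person": 0.05}))  # gorilla
print(classify_with_guard({"cat": 0.8, "person": 0.1}))        # cat
```

For what it's worth, Google reportedly responded to the real incident by removing the offending label from its photo classifier entirely; the sketch only illustrates the threshold idea the commenter describes.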
@daftwod 2 years ago
Only things which are true are hurtful.
@T_J_ 2 years ago
@@daftwod Tell that to the innocent people about to be executed on death row.
@philipino99 2 years ago
Very one-sided reporting; it didn't even give those who developed some of these 'racist algorithms' a chance to have their say.
@roninecostar 2 years ago
5:26 Research found that the older credit scoring used by older mortgage lenders favoured particular spending habits that are more common among white people.
@marilynsolomon5279 2 years ago
Thanks. I was wondering about this
@roninecostar 2 years ago
Aka more financially able to take on a mortgage.
@doubleaa658 2 years ago
Yeah depends how they were made
@xiflix8956 2 years ago
you think everything is racist
@importantname 2 years ago
the maker and designer of the software decide who gets the advantages - and business is about making the greatest profit possible, not about levelling the playing field.
@conqueryourfuture6134 2 years ago
Money is a control tool of the elite, not something they need. There are certain cultures that avoid computer programming like the plague; others dominate the industry….
@bitsbard A year ago
If this subject piques your curiosity, then Jack Frostwell's "Game Theory and the Pursuit of Algorithmic Fairness" is a book you shouldn't miss. I was deeply captivated by it.
@stoicismcore 2 years ago
The computer does not lie. It is not biased.
@maxwaller2055 2 years ago
*pondering and wondering at 3:47 pm Pacific Standard Time on Thursday, 10 February 2022*
@Nipponson86 2 years ago
The reporter was so lazy that she didn't even bother to do thorough research on the subject...
@nayaman1023 2 years ago
It creates the next RACISM
@artman12 2 years ago
Should do this video: Is the Economist racist?
@georgewitheridge4961 2 years ago
Please do
@kennethadler7380 2 years ago
Yes
@juanDE1703 2 years ago
Propaganda
@marilynsolomon5279 2 years ago
Many of the comments so far are very disappointing. The report starts with a gentleman (and many similar others) whose ability to make a living depends on this technology, which is fundamental to most of us. It's only a 9-minute summary; I'm sure if it was longer it could have added much more depth, but blimey, it sounds like a lot of people want to shut down this interesting topic right now. If you found out a piece of tech affected you in any way, whatever was at fault, surely you'd want it fixed!
@boar6615 2 years ago
Most middle-class white people in first-world Western countries don't like it when anything disturbs their comfort, and this is how they react. They face no real issues at large, so every little "problem" is worth fighting for.
@Omar-kl3xp A year ago
People are pretty ignorant. Anyone who knows a little bit about AI will know that a poorly trained model can be very biased, and it has already been happening for a while; there are even documentaries about this. It's not just race but also gender: Amazon at one point found that its AI was only recommending men for hiring while rejecting women with the same experience and education.
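The mechanism behind "a poorly trained model can be very biased" is easy to demonstrate with a toy. The sketch below uses entirely synthetic data and made-up numbers; it only shows how a skewed training sample lets a model score well overall while failing completely on the under-represented group.

```python
# Toy illustration of sampling bias: a training set dominated by group "a"
# rewards a model that ignores group "b" entirely, and overall accuracy on
# the training distribution hides the failure.

from collections import Counter

# 95 training examples from group "a", only 5 from group "b";
# labels happen to differ by group in this synthetic setup.
train = [("a", 1)] * 95 + [("b", 0)] * 5

# A lazy "model" that just predicts the most common training label.
majority = Counter(label for _, label in train).most_common(1)[0][0]
predict = lambda group: majority

def accuracy(rows):
    return sum(predict(g) == y for g, y in rows) / len(rows)

test_set = [("a", 1)] * 50 + [("b", 0)] * 50  # balanced evaluation set
print(accuracy(test_set))                              # 0.5 overall
print(accuracy([r for r in test_set if r[0] == "a"]))  # 1.0 on the majority
print(accuracy([r for r in test_set if r[0] == "b"]))  # 0.0 on the minority
```

This is why per-group evaluation (not just a single accuracy number) is the standard way such failures are detected.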
@JM-gz1ej 2 years ago
Will it ever stop ?
@alibizzle2010 2 years ago
Funny how The Economist isn't making videos based on all its recent reactionary anti-woke articles.
@mannykhan7752 2 years ago
Great, 1s and 0s have so much bias. I'm never dealing with them again.
@parafaramaku285 2 years ago
6:36 Well, I guess we'll see...
@myofficetop 2 years ago
I don't see any big issue in AI having some difficulty recognizing black people, because AI needs some time to educate itself and fix the issue. The best you can do is report the issue and wait until they fix it. If you don't have patience, you can write your own program and use it.
@HelloPenguinYT 2 years ago
2.48M subs and only 38k views; that says something about the topic... human-made things are totally controlled by humans.
@paultopping7413 2 years ago
The world has gone mad……. Everybody seems to feel discriminated against these days (in some cases it is true, but it is also true that some people make it their mission to feel discriminated against). What we should be thankful for is that in most western 'democracies' you have the right to be paranoid!!
@mentoriii3475 2 years ago
What, now AI is racist as well?
@ohohoho8544 2 years ago
Hypocrisy in the mass media: that's the real name of this video.
@chaovo7629 2 years ago
The more I read The Economist, the more I feel they are always taking the moral high ground, being politically correct, subjective and biased. Technology itself is not biased; the programmers, firms and the data SAMPLE they use could be. And AI is not 100% correct. Rest assured that the comments are not buying what they say.
@Millsmills586 2 years ago
But why isn't it fair to call it out when something like this happens? Of course the sampling is garbage, but it still needs to be fixed. Articles like this need to happen so they fix issues like this sampling issue; obviously not enough people who aren't "white" are in this sample. They never said racist, they said biased. Machines are created by humans, and those humans can have bias.
@chaovo7629 2 years ago
@@Millsmills586 Yes, that's exactly the point: machines are created by humans, and humans can be biased. But The Economist and some of the media frame it as if the technology were evil, just criticizing without thinking about what lies behind it. For Uber's issue, they are using Google Images, which is generated automatically online; the whole issue tells us that some demographic groups, not just ethnic and gender but age and so on, are well under-represented. What we need to do is empower those disadvantaged groups through education and public education, NOT simply take the moral high ground and call the firms or the programmers racist. Out of the belief that the majority of humans are just, I think Uber is not doing it intentionally.
@GrayBlevins A year ago
Monkey doesn’t wear any pants
@kunited9 2 years ago
The Economist used to be a more serious institution; now it's encouraging policing and state regulation of software against the companies' interests?? This is insane.
@Deep_Side_Sleep 2 years ago
That's what policies are for? They are not an antiquated concept, you know.
@kunited9 2 years ago
@@Deep_Side_Sleep Sure, but praising policies without being specific is like wanting to pay a price without knowing the value. AI uses patterns to make choices; that is the definition of discrimination. You can't make water dry.
@goldenunicorn341 2 years ago
Great ! Topic !!! 🧐😎
@Subject82 A year ago
AI isn't racist. It's purely critical and objective thinking. If it comes to a conclusion, that's because it's looking at data.
@Omar-kl3xp A year ago
It is because the developers who trained those models used only pictures of white people. If they also used pictures of minorities just as much, it would not be as biased against minorities.
@Subject82 A year ago
@@Omar-kl3xp Nope, it will be biased because raw data is biased and racist.
@stevejurgens9836 2 years ago
Total garbage.
@alancient8463 2 years ago
Based A.I
@WENKTUBEWRC 2 years ago
The little gods of fools are colourful and racist; we are in the era of human foolishness, especially at the top of the fraudulent piracy? / There is a God who created all colours!
@HologramBoy 2 years ago
Glory to Ukraine
@WENKTUBEWRC 2 years ago
⚠☢☣
@WENKTUBEWRC 2 years ago
🕯🌍🌎🌏🕯
@traveler5973 2 years ago
Big lips matter
@alexsimonelis164 2 years ago
Wrong-headed.
@ghostrider_1701 2 years ago
Just nonsense. Do people with really healthy brains actually think about this? 😅