The Dangerous Math Used To Predict Criminals

  300,833 views

Vsauce2

1 day ago

The criminal justice system is overburdened and expensive. What if we could harness advances in social science and math to predict which criminals are most likely to re-offend? What if we had a better way to sentence criminals efficiently and appropriately, for both criminals and society as a whole?
That’s the idea behind risk assessment algorithms like COMPAS. And while the theory is excellent, we’ve hit a few stumbling blocks with accuracy and fairness. The data collection includes questions about an offender’s education, work history, family, friends, and attitudes toward society. We know that these elements correlate with anti-social behavior, so why can’t a complex algorithm using 137 different data points give us an accurate picture of who’s most dangerous?
The problem might be that it’s actually too complex -- which is why random groups of internet volunteers yield almost identical predictive results when given only a few simple pieces of information. Researchers have also concluded that a handful of basic questions are as predictive as the black box algorithm that made the Supreme Court shrug.
Is there a way to fine-tune these algorithms to be better than collective human judgment? Can math help to safeguard fairness in the sentencing process and improve outcomes in criminal justice? And if we did develop an accurate math-based model to predict recidivism, how ethical is it to blame current criminals for potential future crimes?
Can human behavior become an equation?
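The Dressel finding mentioned above (a handful of basic questions performing as well as the 137-item black box) can be sketched as a simple logistic model. The two features and all weights below are hypothetical, chosen only to illustrate the shape of such a predictor, not COMPAS's actual formula:

```python
import math

# Hypothetical two-feature recidivism score. The weights are made up
# for illustration; the point is how little machinery such a model needs.
W_AGE = -0.05      # older defendants reoffend less often
W_PRIORS = 0.25    # each prior conviction raises the predicted risk
BIAS = 0.5

def risk_score(age, priors):
    """Logistic model: squash a weighted sum into a 0-1 'risk'."""
    z = BIAS + W_AGE * age + W_PRIORS * priors
    return 1 / (1 + math.exp(-z))

# A young defendant with several priors scores higher than an
# older defendant with none -- the whole model in three numbers.
print(round(risk_score(age=22, priors=3), 2))  # → 0.54
print(round(risk_score(age=55, priors=0), 2))  # → 0.1
```

Whether two weighted answers should carry the same courtroom authority as a proprietary 137-question instrument is exactly the question the video raises.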
** ADDITIONAL READING **
Sample COMPAS Risk Assessment: www.documentcl...
COMPAS-R Updated Risk Assessment: www.equivant.c...
“The accuracy, fairness, and limits of predicting recidivism,” Julia Dressel: www.science.or...
“Understanding risk assessment instruments in criminal justice,” Brookings Institution: www.brookings....
“Machine Bias,” Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, ProPublica: www.propublica...
“The limits of human predictions of recidivism,” Lin, Jung, Goel, and Skeem: www.science.or...
“Even Imperfect Algorithms Can Improve the Criminal Justice System,” New York Times: www.nytimes.co...
Equivant’s response to criticism: www.equivant.c...
“A Popular Algorithm Is No Better at Predicting Crimes Than Random People,” Ed Yong: www.theatlanti...
“The Age of Secrecy and Unfairness in Recidivism Prediction,” Rudin, Wang, and Coker: hdsr.mitpress....
“Practitioner’s Guide to COMPAS Core”: s3.documentclo...
State v. Loomis summary: harvardlawrevi...
** LINKS **
Vsauce2:
TikTok: / vsaucetwo
Twitter: / vsaucetwo
Facebook: / vsaucetwo
Talk Vsauce2 in The Create Unknown Discord: / discord
Vsauce2 on Reddit: / vsauce2
Hosted and Produced by Kevin Lieber
Instagram: / kevlieber
Twitter: / kevinlieber
Podcast: / thecreateunknown
Research and Writing by Matthew Tabor
/ tabortcu
Editing by John Swan
/ @johnswanyt
Police Sketches by Art Melt
Twitter: / eeljammin
IG: / jamstamp0
Huge Thanks To Paula Lieber
www.etsy.com/s...
Vsauce's Curiosity Box: www.curiosityb...
#education #vsauce #crime

Comments: 1,000
@DemonixTB · 2 years ago
IBM internal presentation slide, circa 1979: "A COMPUTER CAN NEVER BE HELD ACCOUNTABLE, THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION" is the perfect response to all of this. No algorithm should ever decide the fate of who lives and who dies, whose life gets cut short by 30 years and whose by 3.
@feedbackzaloop · 2 years ago
Even more so, justice must not be based on probability, whether computer-calculated or man-accounted.
@Mikee512 · 2 years ago
Juries falsely convict a certain % of the time. Algorithms falsely convict a certain % of the time. Shouldn't you choose the method that falsely convicts less frequently? Or is there something fundamentally important about having people make the decision, even though they falsely convict more often? I don't know the answer, but it's not a cut-and-dried issue, IMO. **Whatever the case, I think any algorithms in use by the justice system (government) should be open-source and subject to public scrutiny. This seems like it should be a non-negotiable minimum.**
@feedbackzaloop · 2 years ago
@@Mikee512 An open-source judging algorithm is a disaster, not a non-negotiable minimum! We kind of already have one in the written criminal and civil codes, and look at all the loopholes people come up with to escape justice, absolutely legally. Now imagine how simple it would be to reverse engineer the algorithm, predict your own sentence, and based on that commit the crime with maximum profit.
@sillyproofs · 2 years ago
If we little people can see how nonsensical all this is, why can't the higher-ups? I thought they were the more educated ones...
@fedcab4360 · 2 years ago
@@sillyproofs LMAO
@KaoKacique · 2 years ago
That company made a BuzzFeed quiz and is selling it as if it were an advanced Minority Report AI
@bow_and_arrow · 2 years ago
FRRRRR
@joshyoung1440 · 1 year ago
@@bow_and_arrow for real real real real real
@joshyoung1440 · 1 year ago
@@bow_and_arrow oh sorry FOR REAL REAL REAL REAL REAL
@avakining · 9 months ago
Plus like… the whole point of Minority Report was that those algorithms don't work anyway
@Vee-Shan-CC · 2 years ago
FYI - Noom was found to be practicing very shady business behind the scenes. They have been overcharging customers and refusing to allow them to cancel their services. I believe they are currently under investigation. From what I've come to learn, they are actually bragging about their mishandling of services and suggesting other companies do the same. I'd do some digging to see what you can find before accepting their promotions again.
@moizkhokhar815 · 2 years ago
Yes, more people should read this comment
@ashlinberman4534 · 2 years ago
I think they made canceling subscriptions easier after complaints, but I couldn't find anything about the overcharging being resolved. They did get a class-action lawsuit over it, though, and all the reports seem to be from 2+ years ago, so that might be resolved as well. Don't hold me to either point, btw; this is just from basic research, so you might be able to find better evidence than what I said.
@Games_and_Music · 2 years ago
I thought that part of the video really displayed the criminal maths.
@thelistener1268 · 2 years ago
Thanks for the tip!
@that_rhobot · 2 years ago
I've seen accounts from people who tried Noom's mental health app saying it pretty much always just recommends dieting, regardless of what you're dealing with. Like, there were people battling anorexia who were being told they were eating too much.
@Cyberlisk · 2 years ago
We need a law that any algorithm that affects sentences or political decisions must be open source. For me as a computer scientist, that's just common sense, and not having that law contradicts every juridical principle in a democracy. Having a black-box algorithm influence decisions is literally the equivalent of using investigative results or testimonies without presenting them in court.
@mqb3gofjzkko7nzx38 · 2 years ago
@Lawrence Rogers We might as well have secret laws and secret tax codes too, so that those can't be easily gamed either.
@zafar0132 · 2 years ago
If they are using a bog-standard convolutional neural network, they might not be able to explain the decisions it makes. The US military used them in deciding what drone targets to attack in Pakistan and ended up bombing and killing ordinary people just going about their business. Using these technologies in certain areas with no oversight is just criminally negligent, in my opinion.
@joshyoung1440 · 1 year ago
This is great but I'm pretty sure the word is judicial
@Bemani-v9e · 1 year ago
Some would argue that's exactly why we don't live in a democracy.
@johnmcleodvii · 6 months ago
Any AI model needs to be traceable.
@cee8mee · 2 years ago
I think using an algorithm to look for possible suspects, or the location of evidence, or areas that might require higher security due to a history of criminal behavior is valid. But as soon as you start asking the subject philosophical questions, you've introduced a wild card that makes the algorithm meaningless. I think we can find areas in the justice system for algorithmic programs, but definitely not proprietary and hidden ones. Open source is a must for transparency.
@gewurzgurke4964 · 2 years ago
Any algorithm made for "justice" will reinforce the prejudice of those who make it. What law is, what crime is, and what crime prevention should look like are already deeply philosophical questions.
@quintessenceSL · 2 years ago
It's a bit more than that, as these same types of tests were/are used in "character profiles" for hiring (I actually had a manager stand behind me and give me answers after I failed the thing for the 5th time. ALL of my references stated I was a great employee. Who ya gonna believe?). It is akin to social credit scores and the like: essentially magic smoke to remove accountability from decision making (and quite possibly to subtly game an algorithm for a result not mentioned in the stated intent). And while claiming the mantle of "science", like many forensic tools, it hasn't been tested for falsifiability or even for its degree of improvement over existing methods. It's modern-day snake oil, with the salesmen now using computer science as their pitch. Run the test on the management of said companies. Let's see how accurate they really are.
@Cajek2 · 2 years ago
It's trying to measure how likely it is that you'll commit a crime under capitalism. Under capitalism it's a crime to be poor or hungry. And in that sense the algorithm is doing pretty well.
@andrasfogarasi5014 · 2 years ago
@@Cajek2 What the hell are you talking about? Even if we accept for a fact that the entirety of society is structured to enrich a ruling class, being poor wouldn't be a crime. The poor don't cause the rich to become less rich by virtue of existing. Instead, a poor person under such a system would be considered someone whose labour can be easily bought and is thus quite useful. Preventing the poor from working by imprisoning them would be akin to the rich shooting themselves in the foot. And no, prison labour is not profitable. The number of prisoners in the USA is 2.1 million. The value of prison labour per year is $11 billion. This comes out to each inmate producing $5,238 worth of goods and services per year. There is no prison in the developed world which can house a prisoner while spending only $5,238 per year on them. It's clear then that unless someone causes something like a net $10,000 worth of social damage per year, it does not make purely financial sense to imprison them. And if they do cause a net $10,000 worth of social damage per year, then I dare say, in my humble opinion, that they probably *should* go to prison.
@notsojharedtroll23 · 2 years ago
Just watch Psycho-Pass
@PhilmannDark · 2 years ago
I first read about this in the book "Weapons of Math Destruction". A major problem with all of these algorithms is that they can't measure the variables they actually want to observe (like what people think, how emotionally stable they are, or what their views, experiences, and skills are). So companies use second-hand variables which are often only weakly linked to the problem at hand. Laymen just see "a computer came up with the number after doing some very complex math", which they take to mean "must be correct, since neither math nor computers can be wrong", forgetting the old wisdom: "garbage in, garbage out".
@garronfish8227 · 7 months ago
I'm sure frequent criminals will work out how to answer the questions in the most favorable way. The system seems flawed.
@SupaKoopaTroopa64 · 2 years ago
Using AI to predict future crimes is an extremely dangerous idea. If you give an AI access to currently available crime data and optimize it to predict future crimes, what you are actually doing is asking it to predict who the criminal justice system (with all of its biases) will find guilty of a future crime. It gets even worse when you feed the AI data from crimes that it predicted. The AI can now learn from its past actions and further 'fine-tune' its predictions by looking at what traits are more likely to lead to a guilty conviction, and focus its predictions on people with those traits. This leads to a feedback loop where the AI discovers a bias in the justice system and exploits that bias to improve its "accuracy," leading to the generation of more crime data which further reinforces its biases. Don't even get me started on what could happen if we use an AI powerful enough to realize that it can 'influence' its own training data.
@diceblock · 2 years ago
That's alarming.
@buchelaruzit · 2 years ago
Exactly. And it very quickly starts sounding like eugenics.
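The feedback loop described in this thread can be made concrete with a toy simulation (every number below is invented): two neighborhoods with identical underlying crime, where the model allocates patrols, and therefore new recorded arrests, in proportion to the arrests already on record:

```python
# Toy feedback loop: neighborhoods A and B have identical true crime.
# Each year the city records 100 new arrests, but the model allocates
# patrols (and therefore arrests) by each area's share of PAST
# arrests. A starts higher purely by historical accident.
arrests = {"A": 60, "B": 40}

for year in range(10):
    total = sum(arrests.values())
    for hood in arrests:
        # new recorded arrests follow the model's patrol allocation,
        # i.e. the share of past arrests -- not true crime
        arrests[hood] += 100 * arrests[hood] // total

print(arrests)  # → {'A': 660, 'B': 440}
```

The gap between A and B grows from 20 to 220 even though the true rates never differed: the data the model generates "confirms" the bias it started with.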
@stevenboelke6661 · 2 years ago
There's no way this machine wasn't trained with data about actual convictions and suspect info. Therefore, the algorithm could at best only accurately replicate justice as it has been done, not as it should be.
@quarepercutisproximum9582 · 2 years ago
Dang, that's... a *really* good point. I hadn't thought of that. But who could say what it should be? How would the creator of the algorithm decide what qualities to select for? I'm not sure such a thing is possible while still working under the supposition that people lie for their own benefit.
@andershusmo5235 · 2 years ago
I was thinking the same thing. Algorithms aren't necessarily the objective oracles we commonly think of them as. An algorithm making predictions based on historical data is bound to replicate that data. An algorithm not based on historical data relies on speculation in some form or to some degree, and will reveal (or worse, hide) the biases and assumptions of whoever designed it. As Steven stated so well, an algorithm trained on the data we have will merely replicate justice as it has been done so far, not change it. Such an algorithm only serves to obfuscate issues in the justice system behind a veil of infallibility and unaccountability.
@pXnTilde · 2 years ago
Well, it probably wasn't trained at all. It's not a neural network. It's possible the coefficients were tuned to match historical decisions, and your point is very valid. However, if it's true that it's simply reflecting what has happened, then getting rid of it would return to... the exact thing it was doing.
@I.PittyTheFool · 2 years ago
No, the machine algorithms are used at the research level. Studies are done on past convictions to look for common denominators. Researchers use machine learning to look for these correlations. Once a stronger correlation is established, it can be considered for a risk assessment. Risk assessments are ultimately a set of items that show a stronger correlation.
@NotQuiteGuru · 2 years ago
You're correct in your initial assessment, but I think you're incorrect in your last. The algorithm does not predict or force "justice". It does NOT dictate a judge's sentence, or whether the person is guilty of a crime or not. It merely reports its best guess for the likelihood of recidivism. By your reasoning (if I'm correctly understanding your meaning, that is), it could _"at best only accurately"_ determine the chance of recidivism _"as it has been done."_ There is no recidivism _"as it should be."_ It is guessing possible futures based on historical data, plain and simple. It is STILL the responsibility of the judge to set a sentence... mind you, for someone who has already been convicted of the crime.
@TheVaryox · 2 years ago
Company: "Yeah, you should sentence him harder, and I won't tell you why I think that" Judges: "Eh, good enough" Man, if trade secrets get prioritized over a citizen's right to a fair trial, seriously, wtf. This is trial by crystal ball.
@I.PittyTheFool · 2 years ago
Research shows sentences are longer in the afternoon or if it's nice weather outside.
@jeffreykirkley6475 · 2 years ago
Honestly, why do we have trade secrets as a protected thing? If no one can know the truth about it, then why should we even agree to its use/consumption?
@alperakyuz9702 · 2 years ago
@@jeffreykirkley6475 Well, if you spent millions of dollars on developing an algorithm to gain an edge over the competition, would you publish the information freely so that your competition could imitate it for free?
@ipadair7345 · 2 years ago
@@alperakyuz9702 No, but the government (courts especially) shouldn't use an algorithm whose workings nobody except the company knows.
@legendgames128 · 2 years ago
@@ipadair7345 One which the company could use to suppress those who don't like them, perhaps. Or if they are working with the government and the media, we essentially get political opponents being sentenced. In this case, it merely predicted the rate of recidivism. In one used to actually punish criminals, it could be used to punish political opponents while still being guarded as a trade secret.
@Oxytail · 2 years ago
The fact that many of these questions seem like what you'd ask a person while trying to diagnose them with certain mental illnesses or neurodivergencies is disgusting, let alone the part where these questions are answered with no context or nuanced conversation on the subject. "Do you often feel sad?" The answer: "Yes" The algorithm's thoughts: "This person has nothing to live for and might commit a crime because they don't fear losing their life; their crime and answers indicate they'd be more likely to break the law again" The reality/nuance: "Yes, my mom died 4 months ago of cancer and I've felt down ever since. She helped me keep my life in check, and without her I completely forgot to get my car's documents renewed, since she always reminded me to do it, as I still lived with her and she received the mail" It's SO easy for any answer to mean the complete opposite if you don't allow someone to explain the reason for their emotion. Algorithms and AIs and machines in general should never be in charge of judging people, because they do not, and cannot, guess the nuance behind actions and feelings. It's ludicrous to me that this is even a thing.
@DanGRV · 2 years ago
Using that same question: "Do you often feel sad?" "No" "The subject displays shallow affect; more likely to have antisocial tendencies."
@HoSza1 · 2 years ago
First off, algorithms don't think anything; they're just not able to. AI included. It's the people who create the algorithms who are ultimately making the decisions. Second off, there may be a correlation between mental state and the chance of committing a crime, so why not test for it? What would *you* ask if your job was to decide whether a given suspect was about to commit crimes repeatedly or not?
@unliving_ball_of_gas · 2 years ago
@@HoSza1 What would I do? A nuanced, detailed, personal psychological assessment, and then decide. But even then, you can never understand 100% of someone's thoughts, even if you were given years to do it. So the question becomes: SHOULD we even try to determine recidivism, or should we just treat everyone equally regardless of their past, because everyone can change?
@HoSza1 · 2 years ago
@@unliving_ball_of_gas I agree that in an ideal world where resources are unlimited we could do that. Your other question is indeed more difficult to answer, but I think that investing energy in reducing the chance of recurring criminal tendencies would pay off in the long run.
@noahwilliams8996 · 2 years ago
Computers can be programmed to understand emotions. That was one of the things Turing proved about them.
@notoriouswhitemoth · 2 years ago
"determined by the strength of the item's relationship to person's offense recidivism" I was gonna say there was no way those coefficients weren't racist, and the results bear that out. It's almost like predictive algorithms are really good at perpetuating self-fulfilling prophecies.
@desfortune · 2 years ago
AI and the like just act on the data you provide. If you provide data that contains racist biases, the program will use them. AI is not intelligent; it does what you teach it to do. So as long as faulty humans insert faulty data, most of the time without realizing it, you're not gonna solve anything lol
@Codexionyx101 · 2 years ago
You'd think that if we were going to recreate Minority Report, we'd at least try to do a good job of it.
@orlandomoreno6168 · 2 years ago
This is more like Psycho-Pass
@I.PittyTheFool · 2 years ago
There is a lot of "Minority Report" in the sex offender world. For example, in Minnesota every such felon is given a risk assessment at the end of their jail sentence to determine if they need to be civilly committed to treatment. Sex offender assessments basically determine the probability of reoffending in the next five years. If you are labeled as a higher risk, you are often given extra treatment / civil commitment time.
@joaquinBolu · 2 years ago
This brings back memories of the Psycho-Pass anime, where an AI computer decided who was a threat to society even before they committed a crime. The whole society was ruled by this tech without questioning it, even the cops and law enforcers.
@feffy380 · 2 years ago
It wasn't even AI. It was the brains of other psychopaths in jars
@aicy5170 · 2 years ago
course?
@I.PittyTheFool · 2 years ago
Oh, by no means is this all "tech." I've done paper-and-pencil risk assessments that then get shared with courts / probation.
@imaperson1060 · 2 years ago
This is assuming that nobody lies and gives answers they know will lower their score.
@fetchstixRHD · 2 years ago
Quite possibly, that may be why the girl got a higher score than the guy. The guy probably knew better and thought ahead about how the questions might be taken, whereas the girl probably wasn't calculating at all.
@jmodified · 2 years ago
Hmmm, if I have no financial concerns, is it because I'm independently wealthy or because I know I can always rob a convenience store if I need cash? Probably best to answer "sometimes" on that one.
@felipegabriel9220 · 2 years ago
These algorithms sound literally like the Sibyl System in the Psycho-Pass anime, lol. Next step, we get a social credit score :D
@sirswagabadha4896 · 2 years ago
In a capitalist world, your credit score is pretty much already your social credit score. But of course, some countries go even further than that already...
@estebanrodriguez5409 · 10 months ago
@@sirswagabadha4896 I was about to answer the same thing
@awesomecoyote5534 · 2 years ago
The worst kinds of judgements are judgements made by someone who can't be held accountable if they are wrong. Judgements that determine how many years someone spends in prison should not be decided by an unaccountable AI.
@Klayhamn · 2 years ago
The humans who determine it aren't accountable either. In fact, the people who design or manage the systems of law and order rarely, if ever (and most likely never), are held accountable for the decisions they make. So, at least on that score, it makes no difference whether we use AI or not. What does matter is how good it is at predicting what it claims to predict.
@prajwal9544 · 2 years ago
But algorithms can be changed easily and made better. A biased judge is worse.
@soulsmanipulatedinc.1682 · 2 years ago
Should we even desire to hold someone accountable? Sorry, it's just that if we need to hold someone accountable for wrong judgment, I feel we've already failed. I mean, the option to hold someone accountable isn't a means to correct someone's judgment, but rather to control it. An algorithm always has perfectly controlled judgment, so... I don't see the problem here? I mean, yeah, this could be implemented horribly. However, the base idea could theoretically work.
@schmarcel4238 · 2 years ago
If it is a machine-learning algorithm, it can be punished for mistakes, and thus be held accountable. And it will then try not to make the same mistakes again.
@soulsmanipulatedinc.1682 · 2 years ago
@@schmarcel4238 I thought about that as well; however, that may cause the program to develop harmful biases we didn't intend.
@ElNerdoLoco · 2 years ago
I'd scrawl "I plead the 5th" over every question. I mean, you have the right not to be a character witness against yourself too, and how can you tell whether you're incriminating yourself with some of these questions? Hell, just participating while black seemed incriminating in one example.
@o0Donuts0o · 2 years ago
Not that I agree with software being used to predict potential future criminal activity, but isn't this software used after judgment is rendered, and only to determine the sentencing term?
@pXnTilde · 2 years ago
Seriously, this test was used during sentencing, which means there was absolutely no obligation whatsoever for him to complete it. Remember, too... _he is guilty of his crime._ The judge could easily have decided on the exact same sentence regardless of the algorithm. In fact, judges have often already decided the sentence before hearing the arguments at sentencing.
@chestercs111 · 2 years ago
This reminds me of the study James Fallon did on psychopaths. He would analyze brain scans of known psychopaths and found that all their brains showed similar results. Then, during brain-scan testing he did on himself and his family, he found that one of the brains matched that of a psychopath. He thought someone at work was playing a joke on him, but it turned out to be his own brain. This shows that it takes more than just how your brain is built to make you a psychopath; however, those who match the brain scans may be more susceptible to becoming one if certain conditions are met.
@andrasfogarasi5014 · 2 years ago
If you want to develop an effective method for predicting recidivism, here's the plan: Step 1: Make a law requiring all people to buy liability crime insurance. Under the terms of this type of insurance, whenever the client commits a crime, the insurance agency pays for the damages caused and the client is charged nothing. Step 2: Wait 2 months. Step 3: Base prison sentences on people's insurance rates. Insurance companies under this system have a financial incentive to create an effective system for predicting future criminal behaviour and to base their liability crime insurance rates on it. As such, the insurance rates become accurate predictors of future criminality. Of course, you could argue that this system will cause repeat offenders to have such incredibly high insurance rates that they have no reasonable way of ever paying them, thus making them unable to buy liability crime insurance. Fret not, for I have a solution. Execution. This will drop their rates to precisely $0. Thank you for listening to my very own dystopia concept presentation.
@michaellautermilch9185 · 2 years ago
You're just shifting who builds the models and asking insurance companies to be the ones building the black boxes. Yes, insurance companies do have people who build black-box algorithms too, but they will do basically the same thing. Actually, your plan has a massive flaw: insurance premiums don't only include measures of risk, but also multiple other business considerations. They want to sell more policies, after all! So now you would have the justice system being partially influenced by some massive insurance company's 5-year growth plan. Not a great idea.
@KenMathis1 · 2 years ago
The fundamental problem with this approach is that generalities can't be applied to an individual, and these automated approaches to crime prediction rely only on generalities. They are a codification of biases and stereotypes into law.
@mvmlego1212 · 2 years ago
Well said. Even if the predictions are statistically valid, they're not individually valid.
@luisheinle7071 · 2 years ago
@@mvmlego1212 Yes, it doesn't matter if they are statistically correct, because that says nothing about the individual.
@airiquelmeleroy · 2 years ago
Mathematically, the problem is preeeetty obvious. The number of people who have committed only 0 to 1, or maybe 2, crimes is astoundingly massive. Those who have committed 4 or more have usually committed MANY more than 4, often in the hundreds if we count the times they got away with it before being caught. This means that while one group (the people who have committed many, many crimes) have fairly similar profiles and data points, the other group is literally *everyone* else. So picture this: the algorithm determines that 90% of criminals wear blue pants, which are worn by, say, 10% of the population. The algorithm will then happily mark any blue-pants-wearing citizen a "potential criminal", despite there being thousands more innocent blue-pants wearers than total criminals overall. It will also render invisible any criminal who wears white pants, or worse, who chooses white pants precisely to avoid long sentences. The second problem: petty crimes tend to be committed by normal people, so almost any person who commits a crime is "likely" to commit another, since the algorithm will find the pattern "all these criminals are normal people, therefore any normal person could be a criminal!" Way to go, black box...
@TheEnmineer · 2 years ago
For real, it's a clear misunderstanding of the field of statistics. Though the interesting question is: how do we know which criminals who have committed fewer than 4 crimes will go on to commit more than 4? After all, this is supposed to be an algorithm to predict (not just detect) recidivism; pointing at something that's clearly already recidivism isn't what it's supposed to do.
@truthboom · 2 years ago
it needs neural network training
@ichigo_nyanko · 2 years ago
@@truthboom That will just reinforce biases already present in the justice system, like racism and sexism.
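The "blue pants" point in this thread is the classic base-rate problem, and the arithmetic is worth running explicitly (all numbers invented for illustration):

```python
population = 100_000
criminals = 1_000                 # 1% base rate
innocents = population - criminals

hit_rate = 0.90          # "90% of criminals wear blue pants"
false_alarm_rate = 0.10  # but so do 10% of innocent people

flagged_criminals = criminals * hit_rate           # 900
flagged_innocents = innocents * false_alarm_rate   # 9,900

# Of everyone the "blue pants" rule flags, what fraction are
# actually criminals? This is the classifier's precision.
precision = flagged_criminals / (flagged_criminals + flagged_innocents)
print(f"{precision:.1%}")  # → 8.3%
```

Even with a 90% hit rate, more than 9 out of 10 flagged people are innocent, simply because innocents vastly outnumber criminals.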
@themacocko6311
@themacocko6311 2 жыл бұрын
IDK if it works 100%. There is 0 right to punish anyone for acts that have not been committed.
@taodivinity1556
@taodivinity1556 2 жыл бұрын
Yet if a time where it really works 100% of the time ever comes to reality, the fact stands that if you ignore the future crime, somebody will suffer, so perhaps rather than a punishment, a pre-emptive rehabilitation might be the compromise.
@quarepercutisproximum9582
@quarepercutisproximum9582 2 жыл бұрын
Exactly my problem with it. Present punishment should not be allocated based on one potential future (whether "punishment" deserves a place of its own right- outside of rehab- is its own discussion). There will always be variables that may prevent someone from acting on an intention they have to do one thing or the other; to push any forceful action upon a party before they have done anything is a path to thoughtcrime, which is less than a step away from a total lack of real freedom
@truthboom
@truthboom 2 жыл бұрын
@@taodivinity1556 Future crimes happen because of past unjust like bullying or racism. If there's no unjustice there would be no crime in the future
@taodivinity1556
@taodivinity1556 2 жыл бұрын
@@truthboom So are you saying crime is born out of crime? Then how did the crimes of bullying and racism happen? Was there another crime before them? I think you're honestly oversimplifying the process; humans are way more complex than that. There is always a first one, one that happens for a reason, which may not come from external malice at all.
@taodivinity1556
@taodivinity1556 2 жыл бұрын
@NatSoc Kaiser Then change it, I don't know what else to tell you, haha. It isn't working to keep society safe.
@grapetoad6595
@grapetoad6595 2 жыл бұрын
The problem is the focus on punishment, i.e. we think you might commit a crime again, so you should be punished more for your potential future crime. If instead the system were built on attempts to rehabilitate, and decided who was most in need of support to avoid recidivism, this would be so much better. The algorithms are a problem, but what's worse is why they are able to cause a problem in the first place.
@fetchstixRHD
@fetchstixRHD 2 жыл бұрын
Agreed. There's a whole separate discussion on whether punishment should be appropriate, but regardless getting punished for something you haven't done (or attempted to do) is pretty unfair.
@michaellautermilch9185
@michaellautermilch9185 2 жыл бұрын
No this is backwards. Punishment needs to be proportional to the crime, not to the likelihood of rehabilitation. With your mindset, someone could be rehabilitated for virtually anything, regardless of their actions, if they posed a future risk.
@jeremyfarley3872
@jeremyfarley3872 10 ай бұрын
Then there's the difference between punishment and rehabilitation. They aren't the same thing. Are we sending someone to prison for ten years because we want to hurt them or because we want to teach them to be a productive member of society?
@DeJay7
@DeJay7 2 жыл бұрын
"Thanks for watching" No, thank you for making all of these videos, Kevin. I love every single one of your videos, everything you do is great.
@epiren
@epiren 2 жыл бұрын
I'm sad that you didn't cover retrophrenology, where you create bumps on people's heads until they acquire the personality traits you want. ;-)
@TomWonderful
@TomWonderful 2 жыл бұрын
GNU Terry Pratchett
@epiren
@epiren 2 жыл бұрын
@@TomWonderful I read it in a novel by Simon R. Green called "Tales From The Nightside"
@TomWonderful
@TomWonderful 2 жыл бұрын
@@epiren Oh cool. Pratchett did the same gag in 1993 with "Men At Arms."
@zncvmxbv4027
@zncvmxbv4027 2 жыл бұрын
It's basically a Myers-Briggs test. But the only way to do one of these correctly is to have multiple people who know you take it about you and compare their results to yours. After correlating the data you get a much more accurate picture.
@moizkhokhar815
@moizkhokhar815 2 жыл бұрын
Noom has been involved in some controversy recently, with a lot of complaints that their free trials are misleading and their subscriptions very hard to cancel. And some of their diets were apparently also triggering eating disorders.
@aloe-aurora
@aloe-aurora Жыл бұрын
These "risk assessments" have HUGE bias towards the neurodivergent. As someone with ADHD, I've faced similar lines of questioning in clinical assessments. ("Do you feel bored?", "Do you feel discouraged?", "Is it difficult to keep your mind on one thing for a long time?")... ...Not to mention I live in an expensive city and live with friends to afford rent. Apparently I'm high risk for repeat criminality 😅
@EnzoDraws
@EnzoDraws 2 жыл бұрын
Should've titled this video "The Immoral COMPAS"
@RialVestro
@RialVestro 2 жыл бұрын
I once got detention for being racist against myself, because I was speaking in an Irish accent on St. Patrick's Day and I'm actually part Irish. I also got a detention for being late to class when our teacher was having a parent-teacher meeting and had locked us out of the classroom; she apparently still took attendance during that time and marked the entire class absent. That teacher is known for doing stuff like this, because when I showed up for detention, the lady who runs the detention room took one look at who issued the detention slip and said I could leave. Another time I got a detention because I had left school early to go to work. I had already cleared the absence with the school ahead of time but got a detention anyway, though after I explained that to the principal, he threw the detention slip in the trash and told me to just ignore it if it happened again.
@o0Donuts0o
@o0Donuts0o 2 жыл бұрын
3 detentions. I predict 20 to life for you!
@truthboom
@truthboom 2 жыл бұрын
If the times you went to detention are recorded in some database, then you have to sue; otherwise it's meaningless.
@chankfreng
@chankfreng 2 жыл бұрын
If an algorithm told us that lighter sentencing leads to lower recidivism, would the courts treat those results the same way?
@buchelaruzit
@buchelaruzit 2 жыл бұрын
lol we all know the answer to that question
@Epic-so3ek
@Epic-so3ek Жыл бұрын
Not in the great US of A
@Nylak-Otter
@Nylak-Otter Жыл бұрын
My problem with this evaluation in my own case is that I test high for recidivism, and it's absolutely correct. But in practice the data wouldn't show it, since I'd be unlikely to be caught more than once. I have the same criminal habits that I've had for 20 years, and no one has caught me or bothered to call me out for it yet. If I were caught, I'd continue but be even more careful. The evaluation would be marked down as inaccurate.
@williamn1055
@williamn1055 2 жыл бұрын
Oh my god they made me take this test without saying what it was. I'm so glad I assumed it was a test against me and answered whatever sounded best
@studentofsmith
@studentofsmith 2 жыл бұрын
You mean people might try to game the system by lying? I'm shocked, I tell you, shocked!
@buchelaruzit
@buchelaruzit 2 жыл бұрын
yeah just looking at these questions tells you that it can and will be used against you whenever convenient
@GrimMeowning
@GrimMeowning 2 жыл бұрын
Or they could go the Scandinavian way, where prisoners are not punished (except for very serious crimes) but instead reintegrated into society: they learn new skills, work with psychologists, and rethink their actions and position in life. That has decreased recidivism to extremely low levels. Though as long as there are private prisons in the USA, I doubt it will be possible.
@Epic-so3ek
@Epic-so3ek Жыл бұрын
That system won't work for people with ASPD, and honestly a number of other people. Many people need to be kept incarcerated until they're no longer dangerous, or, for some people with ASPD, forever. A focus on rehabilitation, or at least on not intentionally torturing prisoners, would be a good start though.
@SgtSupaman
@SgtSupaman 2 жыл бұрын
Statistics and algorithms can absolutely help predict what people will do but cannot predict what a *person* will do. No one should be trying to predict a single person's actions for anything more than theoretical interest, especially not in any capacity that will affect that person's life.
@daaawnzoom
@daaawnzoom 2 жыл бұрын
6:30 Remember everyone, if you saw someone stealing food, no you didn't.
@j.matthewwalker1651
@j.matthewwalker1651 2 жыл бұрын
As odd as it sounds, polling Twitter and taking the average is a pretty good way to validate results. The "wisdom of the masses" concept has repeatedly demonstrated extremely accurate results, much more accurate than a small group of experts.
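The averaging effect behind that claim is easy to simulate. A minimal sketch, assuming each guesser is noisy but unbiased (all numbers invented):

```python
import random
import statistics

random.seed(0)
true_value = 100.0  # e.g. beans in a jar

def one_guess():
    # an individual guess: unbiased but very noisy
    return true_value + random.gauss(0, 30)

guesses = [one_guess() for _ in range(1000)]
typical_individual_error = statistics.mean(abs(g - true_value) for g in guesses)
crowd_error = abs(statistics.mean(guesses) - true_value)

print(round(typical_individual_error, 1))  # on the order of the noise scale
print(round(crowd_error, 1))               # far smaller: errors cancel in the mean
```

The catch, as the replies below note, is that this only works when errors are independent and unbiased; a shared prejudice does not cancel out in the average, it survives it.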
@SkigBiggler
@SkigBiggler 2 жыл бұрын
Twitter is not a good representation of people as a whole. Wisdom of the masses is also (as far as I am aware) typically only meaningfully applicable to situations where personal beliefs are unlikely to play a role in decision-making. No one is likely to hold a strong opinion about the contents of a jar of jelly beans; they are likely to hold one about a criminal.
@j.matthewwalker1651
@j.matthewwalker1651 2 жыл бұрын
@@SkigBiggler fair points, and obviously Twitter should not become the source for sentences, but as long as the data is presented in a way that reduces the likelihood of sensationalism it's still a good way to corroborate something like the algorithm. Specifically, anything that could link the subject to a trial in the media, and things like race and sexual orientation should be omitted.
@buchelaruzit
@buchelaruzit 2 жыл бұрын
You cannot ignore the bias element to it. Here it makes sense that the general opinion matches the AI's: where do you think the AI learned from? The "wisdom of the masses" also tended to rank black people higher.
@The_Privateer
@The_Privateer 2 жыл бұрын
YAY!! "Pre-crime." I'm sure that will work out well. No risk of dystopian tyranny here... move along.
@Eeeeehhh
@Eeeeehhh Жыл бұрын
This test feels scarily similar to an ADHD assessment, I always wonder how algorithms will discriminate against mentally/chronically ill people
@keanugump
@keanugump 2 жыл бұрын
Most of those questions sounded to me like "are you rich?", "are you a stereotypical white person?" or "are you in a vulnerable position in life?"
@andrasfogarasi5014
@andrasfogarasi5014 2 жыл бұрын
Yeah. Most of the questions on that survey could've been condensed into a single question: "What percentage of your income do you save?" A great predictor of recidivism. Financial strain causes criminality due to obvious reasons. And the simplest way to quantify financial strain is your savings rate. If someone makes $15,000 but saves 30% of it, that person is distinctly good at managing their finances. They may be poor, but they are certainly not the type to have to commit crimes over that. Now imagine someone who makes $100,000 a year and saves none of it. What exactly do you spend $100,000 on per year? Drugs? Alcohol? Gambling? Status symbols? An unemployed spouse and 3 children? Whatever it may be, this person is likely to have a stressful life and/or a terrible personality. I dare say they're probably more likely to commit a crime than our impoverished financial wizard. And while that crime is most likely going to be insurance fraud, it is still crime.
@orsettomorbido
@orsettomorbido 2 жыл бұрын
The problem is: We (as world) shouldn't use punitive "justice", but rehabilitative and restorative justice.
@ichigo_nyanko
@ichigo_nyanko 2 жыл бұрын
Absolutely, why should you punish someone for something they might do? It's innocent until proven guilty, and if you haven't even committed the crime yet it is literally impossible to prove you guilty.
@orsettomorbido
@orsettomorbido 2 жыл бұрын
@@ichigo_nyanko I'm not talking about predicting whether someone might commit a crime again. I'm talking about not punishing people, but helping them change the motivations that made them commit the crime. And helping the victims too, of course! Whether the person has already committed a crime or not, or whether they might commit another or not.
@michaellautermilch9185
@michaellautermilch9185 2 жыл бұрын
No, you're asking the justice system to do more than administer justice. This will lead to a totalitarian dystopia where the justice system gets to act like everybody's personal overseer. Punishment should be punitive (deserved) because rehabilitative punishment is allowed to go far beyond what the person deserves, if there's a chance it might "help them".
@adamplace1414
@adamplace1414 2 жыл бұрын
"Hey let's take the smartest known computer in the universe - the human brain - out of the equation in favor of some vague questions posed by people the defendants will never meet." "Sounds great!" I get we all have biases and there should be checks in place to offset them. But rules and algorithms are just poor substitutes for common sense in a lot of ways. I wonder if the ongoing labor shortage isn't in part due to so many employers relying on similar questionnaire based algorithms to disqualify worthwhile candidates.
@desfortune
@desfortune 2 жыл бұрын
The program does what you teach it to do. It's still the human developers at fault: if you train it using biased data, you end up with a biased program. Also no, the labor shortage is not because of employer questionnaires; it's because we are in a recession.
@adamplace1414
@adamplace1414 2 жыл бұрын
@@desfortune "...in *part* ..."
@Youssii
@Youssii 2 жыл бұрын
If an accurate algorithm said it was almost certain someone would commit a crime, would it even be fair to punish them for it? After all, it would seem predestined to happen…
@michaellautermilch9185
@michaellautermilch9185 2 жыл бұрын
Under a fair judicial system, no. Under a rehabilitative system, yes, you can punish anyone for just about any reason if it will "help them" in the long run.
@Lolstarwar
@Lolstarwar 2 жыл бұрын
I wanna read the poem
@jampersand0
@jampersand0 2 жыл бұрын
Never expected there to be what sounds like the Myers-Briggs equivalent of a recidivism assessment. Also, glad to contribute my art to the video ☆ Stoked you reached out to me.
@_BangDroid_
@_BangDroid_ 2 жыл бұрын
And Myers-Briggs is just glorified palm reading
@venkat2277
@venkat2277 2 жыл бұрын
0:40 Yes, I predicted that too; it makes a lot of sense. Think about it: the 40-year-old guy who committed armed robbery knows the consequences, probably regrets it, and will be very scared to repeat it, while the girl walked away as if nothing happened and faced no consequences, so she is much more likely to repeat it.
@michaellautermilch9185
@michaellautermilch9185 2 жыл бұрын
The girl should be appropriately punished by her parents, as all children occasionally need. If parents would parent, then the government wouldn't need to become Big Brother and act like everybody's parent.
@bonbondurjdr6553
@bonbondurjdr6553 2 жыл бұрын
I love those videos man, very thought-provoking! Keep up the great work!
@danielhernandezmota225
@danielhernandezmota225 2 жыл бұрын
One must be careful to include relevant and pertinent data when generating a model. In this case, the model must not have biased features, directly or indirectly; that can be tested alongside a team of experts who carefully evaluate the results. An additional procedure must also be performed to "open" the black box with model explainability: one can use SHAP values, Anchors, or even LIME to try to uncover what's inside. Finally, monitoring of the model is a must; measuring performance through detailed audits is imperative to determine whether the model is still functional or getting worse over time. Since population dynamics change over time, it is safe to assume the model will eventually stop working correctly.
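The explainability step can be illustrated without committing to any particular library. Here is a hedged sketch of a model-agnostic probe in the same spirit as the SHAP/LIME tools mentioned above: permutation importance, which measures how much a black-box model's accuracy drops when one feature's values are shuffled. The model and data below are invented stand-ins, not any real risk model.

```python
import random

random.seed(1)

def black_box(row):
    # stand-in for an opaque model; it secretly uses only feature 0
    return 1 if row[0] > 0.5 else 0

rows = [[random.random(), random.random()] for _ in range(1000)]
labels = [black_box(r) for r in rows]  # treat the model's own output as ground truth

def accuracy(data):
    return sum(black_box(r) == y for r, y in zip(data, labels)) / len(labels)

base = accuracy(rows)  # 1.0 by construction

def importance(i):
    # shuffle column i and measure the accuracy drop
    shuffled = [r[:] for r in rows]
    col = [r[i] for r in shuffled]
    random.shuffle(col)
    for r, v in zip(shuffled, col):
        r[i] = v
    return base - accuracy(shuffled)

print(importance(0) > importance(1))  # the probe exposes which feature the model uses
```

If an audited feature like skin color showed high importance, that alone would flag the model, which is exactly why opaque, unauditable scoring is a problem.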
@weslanstr
@weslanstr 2 жыл бұрын
My first problem of many with that software is that its mechanics are secret.
@thothheartmaat2833
@thothheartmaat2833 2 жыл бұрын
Compas: are you black? Black guy: uuuhhhh nooo? Compas: good cuz I was going to give you life..
@trickdeck
@trickdeck 2 жыл бұрын
I can't wait for the Sibyl System to be implemented.
@distortedjams
@distortedjams 2 жыл бұрын
I only chose the bike stealer because they weren't caught, while the other one was in prison and so couldn't commit more crimes.
@yinq5384
@yinq5384 2 жыл бұрын
The black box reminds me of Minority Report.
@Gerard1971
@Gerard1971 2 жыл бұрын
The duration of a sentence should be based on evidence about the crime that happened, not on what might happen in the future according to some black-box algorithm that is based on group statistics rather than the individual, and that nobody can independently verify. It should only be used to determine whether certain treatment needs to be given during rehabilitation to decrease recidivism. It is sometimes used to reduce sentences, when the risk of recidivism is deemed low, to free up space in prisons, but that is equivalent to giving someone a longer sentence because they have a higher risk of recidivism.
@quarepercutisproximum9582
@quarepercutisproximum9582 2 жыл бұрын
Exactly! Our system is based not on self-proclaimed rehabilitation, but on revenge/punishment. Therefore, we cannot morally "take revenge on" or "punish" that which has yet to actually be done.
@youkofoxy
@youkofoxy 2 жыл бұрын
They should have watched Minority Report or Psycho-Pass. One only needs to watch either of those to realise how easily such a system can ruin people's lives.
@louistennent
@louistennent 2 жыл бұрын
This is literally the plot of Captain America: The Winter Soldier. Except, of course, with massive aircraft with guns aimed at the high-risk people.
@PlaNkie1993
@PlaNkie1993 2 жыл бұрын
Didn't know the black box was actually real, that's pretty wild and concerning
@mykalkelley8315
@mykalkelley8315 2 жыл бұрын
It's symbolic
@AnnettesWish
@AnnettesWish 2 жыл бұрын
Hi! I'm a PhD student in behavior analysis and a Board Certified Behavior Analyst. I enjoyed this episode. Human behavior is predictable, but much more complicated than the statistics currently used by cognitive researchers. In fact, you don't need much group information about others (a molar, zoomed-out analysis is not very helpful for predicting behavior). Analysts can predict behavior through a more molecular (zoomed-in and highly individualized) discrete analysis. Very generally speaking, if I know some of your own history of behaviors and your current situation, I'm likely to predict your behavior in an array of situations. Behavior is lawful, like all other sciences. Therefore it goes beyond math and enters contextual sciences that consider not just the evolution of species (selectionism) but societal evolution (selectionism in the sense that behaviors that work are those that continue) and biology as well (continuously receiving feedback for selection from the environment). I doubt that a truly successful algorithm can be created without a PhD in behavior analysis collaborating with other PhDs in the aforementioned sciences. Math is needed, but not the statistical group data that we currently use; it's a bit more complicated than that. Indeed, AI is necessary. I recently shared a theory for this with my class: I'd like to teach AI to enter into derived learning through recombinative generalization principles to learn more about predicting behaviors. It's pretty cool. 😎
@Rayzan1000
@Rayzan1000 2 жыл бұрын
I think you misinterpret the "How often do you worry about financial survival" question. If you are often worried about your financial survival, then you probably have either a rather low or a fluctuating wage, making you more likely to commit a crime in order to pay your bills.
@sirswagabadha4896
@sirswagabadha4896 2 жыл бұрын
In that case, any psych undergrad could tell you how much the ambiguity of the question, absent any context, invalidates its results. There's a whole history of keeping people in prison for being poor; they could have chosen something much better.
@SeidCivic
@SeidCivic 2 жыл бұрын
Thus making the test/algorithm even more unreliable.
@Rayzan1000
@Rayzan1000 2 жыл бұрын
@@sirswagabadha4896 Well, most (if not all) questions can invalidate the result if taken out of context.
@kylejramstad
@kylejramstad 2 жыл бұрын
I love the "code" stock footage that shows the help of the command line command append.
@notme222
@notme222 2 жыл бұрын
Your question at the beginning isn't about who's more likely to commit a violent crime, or who's more likely to get a conviction in the next 8 years. It's "who's more likely to commit another crime?" And logic backs up the algorithm on that: the person with more years in front of them, who may believe they got away with their last crime, has a higher chance of doing something at some point. Nothing in that question was about setting parole. An algorithm that makes accurate predictions would still be wrong if the questions being answered aren't what the asker meant to ask.
@MrTJPAS
@MrTJPAS 2 жыл бұрын
The Watch Dogs games sure seem more and more prophetic as time passes, with the use of big data and algorithms moving from businesses improving their marketing into more personal and immediately important parts of people's lives, like, in this case, a calculation of one's likelihood to commit a crime, or to be the victim of one, being reduced to a simple equation.
@nourgaser6838
@nourgaser6838 2 жыл бұрын
This video, to me, relates directly to the MBTI and shows that we cannot predict or understand human behavior and personality. Psychology is not a natural science with concrete facts that can be derived mathematically. (Not that the MBTI or the COMPAS software relies on psychology or anything scientific anyway.)
@feedbackzaloop
@feedbackzaloop 2 жыл бұрын
For a 'not a natural science', psychologists learn way too much statistics. Like, nearly as much as physicists.
@vgamesx1
@vgamesx1 2 жыл бұрын
6:00 Right here is where I really noticed the biggest problem with these questions on my own. I do agree with this statement; however, that does NOT mean I think you should always put yourself first. But for someone whose main goal is to climb the corporate ladder or whatever, it would be a perfectly valid response too.
@meisstupid1831
@meisstupid1831 2 жыл бұрын
Okay, Kevin, this is the problem: crimes shouldn't have algorithms. Human judgment is basically the closest anyone can get to weighing a crime. Things might be related, but the relation isn't always true either; people are too hard to predict in criminology, or in basically anything. Math doesn't conclude crimes, it catches clues, as Kevin already proved in the last video. Such a misconception is like using a broken compass to find your way back. The real problem is that human nature is too complex; the best way to reduce crime rates is to find the root cause. It feels odd to judge people using math. It's a tool, but not for something as complex as us human beings.
@HHHjb_
@HHHjb_ 2 жыл бұрын
Ye
@feedbackzaloop
@feedbackzaloop 2 жыл бұрын
Funny you brought up that analogy, when one of the said algorithms is called COMPAS
@truthboom
@truthboom 2 жыл бұрын
Human nature isn't that complicated lol. People steal food if they have no food. Bosses lower wages because they are greedy and can get away with it.
@curious_one1156
@curious_one1156 2 жыл бұрын
Algorithms are only as good as the input parameters they use. If there is a clear correlation between input parameters and outputs, an algorithm should be used, for efficiency and to reduce human subjectivity. True, there may be empirical tasks in the justice system where they could be applied, but not here. The reason they have perhaps done this is that they have been criticised for human subjectivity in this task. The companies that get contracts are the ones that do "favours" for bureaucrats, so the company will remain as long as it does not get too much media attention.
@prnzssLuna
@prnzssLuna 2 жыл бұрын
Not gonna lie, this is genuinely terrifying. The other videos you've made so far mostly showed one-off mistakes that got rectified afterwards, but it doesn't look like anyone is willing to stop the use of unreliable software like this? Terrifying.
@oliveranderson50
@oliveranderson50 2 жыл бұрын
From what I've seen in the world around me, I'd say that the likelihood you'll reoffend is based much more on the world you reenter as a free person than the person you are when you enter it. You have to believe there will be a benefit to playing by the rules. People who find a job, house, car, dog, and family when they set out to play by the rules don't risk that for the fruit of crime. People who find closed doors and rejection for past crime have no reason not to reoffend.
@prim16
@prim16 2 жыл бұрын
This convinces me that COMPAS doesn't just need to be revised or "fixed"; it needs to be discontinued. AI may have a future in the world of law, but this has completely tarnished its reliability and ruined people's lives. Untested and inaccurate technology is being deployed too soon. If you were using machine learning to teach a bot to play chess, you wouldn't throw it up against Magnus Carlsen in its first dozen trials.
@I.PittyTheFool
@I.PittyTheFool 2 жыл бұрын
What would replace it? Gut hunches?
@jinolin9062
@jinolin9062 2 жыл бұрын
@@I.PittyTheFool Something that doesn't decide whether someone gets 13 or 30 years in prison based on philosophical questions?
@I.PittyTheFool
@I.PittyTheFool 2 жыл бұрын
@@jinolin9062 That's the county prosecutor and judge. I know of one case where a judge gave someone 15 years of probation and treatment (for a first conviction) while the prosecutor appealed to get the guy 15 years in prison. (Yes, the prosecutor can appeal your conviction to seek a harsher sentence.)
@I.PittyTheFool
@I.PittyTheFool 2 жыл бұрын
@@jinolin9062 I think another horrid thing is that judges can decide whether sentences for multiple convictions are served concurrently or consecutively. In other words, if you're convicted of a 3-year crime, a 5-year crime, and a 10-year crime, will you serve 10 years for all three or 18 for all three? The judge gets to pick!
@ichigo_nyanko
@ichigo_nyanko 2 жыл бұрын
@@I.PittyTheFool Nothing; standardised sentencing for the same crime, perhaps with increased sentences for repeat offenders. Why should you punish someone for something they might do? It's innocent until proven guilty, and if you haven't even committed the crime yet, it is literally impossible to prove you guilty.
@csolisr
@csolisr 2 жыл бұрын
One of the parameters in that COMPAS algorithm is basically the skin tone chart from that Family Guy skit, you know the one
@bbrandonh
@bbrandonh 2 жыл бұрын
Minority report moment
@theomni1012
@theomni1012 10 ай бұрын
It's always been interesting how history can predict the future, but it still varies wildly. For example, a kid raised by abusive parents: you could say they'll be an abusive parent when they grow up, because that's how they were raised. You could also say they'd grow up to be a very good parent, because they never want to treat their child the way they were treated.
@sydney9225
@sydney9225 2 жыл бұрын
Great video! love the way you summarize and explain topics. But that voice crack tho
@evil_bratwurst
@evil_bratwurst 2 жыл бұрын
when was the voice crack
@sydney9225
@sydney9225 2 жыл бұрын
@@evil_bratwurst 1:42
@evil_bratwurst
@evil_bratwurst 2 жыл бұрын
@@sydney9225 lmao
@bishoukun
@bishoukun 2 жыл бұрын
The algorithm: "Mental illness and learning differences are criminal indicators!"
@jamesmiller4487
@jamesmiller4487 2 жыл бұрын
Excellent and thought-provoking video; clearly algorithms are not, and maybe never will be, ready to judge humans. The problem is that human judgment is just as flawed, varying from person to person, day to day, and situation to situation. You could have made a video on the fallibility of human judges and their inept, biased sentencing, and been equally right and thought-provoking.
@danbance5799
@danbance5799 2 жыл бұрын
I've spent a lot of time developing statistical methods for identifying spam in email, and I assure you the same fundamentals apply to any sort of predictive methodology. Specifically:
1. Dumb beats smart. Every single time. Whatever clever algorithm you come up with, nothing has ever outperformed brute-force statistics on raw data.
2. You're only as good as your source data. If your source data is garbage, then every prediction you make will be as well. If your source data is biased, your results will be too.
3. Data changes over time. Spam evolves; it's gotten a lot harder to detect over the last several years. Society also evolves. Predicting outcomes based on data from the 1980s will be unreliable (see #2 above).
4. Every prediction has a margin of error. The best spam filters make mistakes. When applying the same methodology to something as unpredictable as behavior, the margin of error will be higher.
None of this, however, gets to the biggest problem here: we have a criminal justice system that's predicated on punishment, not rehabilitation. Beyond that, transparency is essential. If I were a judge or a juror, I would never rely on a black-box output, ever. Courts should never accept any piece of software or data that cannot be audited. Doesn't matter if it's COMPAS, a breathalyzer, or anything else.
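Point 1 can be made concrete. A minimal sketch of the kind of brute-force token statistics that works well for spam: a naive-Bayes-style log-likelihood ratio over raw word counts with add-one smoothing. The training texts are invented toy data.

```python
from collections import Counter
import math

# Tiny, hypothetical training sets; a real filter uses millions of messages.
spam_docs = ["win money now", "free money offer", "win a free prize"]
ham_docs = ["meeting moved to monday", "lunch offer for the team", "project update"]

def counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_c, ham_c = counts(spam_docs), counts(ham_docs)
spam_n, ham_n = sum(spam_c.values()), sum(ham_c.values())
vocab = set(spam_c) | set(ham_c)

def spam_score(text):
    # positive means spam-like, negative means ham-like
    score = 0.0
    for w in text.split():
        p_spam = (spam_c[w] + 1) / (spam_n + len(vocab))  # add-one smoothing
        p_ham = (ham_c[w] + 1) / (ham_n + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("free money") > 0)       # True: spam-like
print(spam_score("project meeting") > 0)  # False: ham-like
```

No clever features, just counting; and per points 2 and 3, its quality rises and falls entirely with the data it was counted on.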
@charlierogers5403
@charlierogers5403 2 жыл бұрын
And this is why algorithms are not good for everything! We shouldn't rely on them 100%.
@timojissink4715
@timojissink4715 2 жыл бұрын
Algorithms can be amazing, but they need the right unbiased human input.
@luc_666jr5
@luc_666jr5 2 жыл бұрын
Tell YouTube that please
@ProductBasement
@ProductBasement 2 жыл бұрын
Please note that the SCOTUS declined to hear Loomis v Wisconsin on June 26, 2017, after Gorsuch had taken the bench but before Kavanaugh, Barrett, or Jackson.
@requiem7204
@requiem7204 2 жыл бұрын
The BLACK box you say? Maybe it is accurate
@Johncornwell103
@Johncornwell103 2 жыл бұрын
The teenager would be most likely to commit another crime, due to her age. The teenage girl has, on average, 60-plus years of life left, compared to the man's roughly 30. So by sheer time remaining, the teenager is more likely to encounter life circumstances in which she might intentionally or unintentionally commit a crime. Not to mention that her longer life gives the government more time to pass laws changing what is and isn't a crime, i.e. more stringent drug laws, tax laws, traffic laws, etc.
@MasterChakra7
@MasterChakra7 2 жыл бұрын
And it's not just that she has more time: what if early crimes were a sign of potentially worse future crimes? The man could have done nothing wrong until the bank robbery, but the girl already had, way earlier. By virtue of being black, she was also more likely to live in a dangerous neighbourhood or to be in contact with and influenced by criminals, because of the now too-well-known 13/50.
@scoutgaming737
@scoutgaming737 2 жыл бұрын
It was about committing a crime within the next 2 years, not committing a crime at any point during your entire life.
@Faris_V5
@Faris_V5 2 жыл бұрын
Hmm. Maybe I didn't give Psycho-Pass enough credit as a series.
@raxcentalruthenta1456
@raxcentalruthenta1456 2 жыл бұрын
This is dystopian. Plain and simple.
@petitio_principii
@petitio_principii 2 жыл бұрын
There may even be a self-fulfilling-prophecy effect, in that incarcerating someone can itself increase the odds of recidivism, at least in prisons that aren't designed to reduce it, which I'd guess is most of them. I don't believe everyone is intrinsically or easily reformed, but certainly the focus of prisons in general is mostly punitive, locking people up to keep those outside safe. Both are perfectly reasonable goals, particularly the latter, but only incidentally would those goals make the person who completed their sentence a better, more pro-social individual by the end.
@themightyquinn1343
@themightyquinn1343 2 жыл бұрын
There is something extremely concerning to me about an algorithm or artificial intelligence that tells me whether or not I will commit a crime.
@もちの花
@もちの花 2 жыл бұрын
Feels like we're getting dangerously close to Psycho-Pass
@yanivray
@yanivray 2 жыл бұрын
I looked if there was a comment about that lol
@killuahsmathetricks389
@killuahsmathetricks389 2 жыл бұрын
As a mathematician who worked with the COMPAS algorithm, it is/was pretty scary to see. To make it short: the algorithm itself doesn't care and is (just mathematically!) correct, or at least it does what it is programmed to do. HOWEVER, there is a huge problem: the input data. The first data we got (I don't know if those were the data files actually used) were extremely skewed towards skin color. To overexaggerate: they basically had 1,000 people who had already committed a crime, 950 of them were black, AND the algorithm had "skin color" as a decision parameter. Aaaand, well, the algorithm "learned" to check for the "easiest" decision parameter, and in that input data, skin color decided 95% of the crimes. This is simply a horrible thing to do, because by every measurement you CANNOT "unskew" the input data: in some specific way, shape, or form (not skin color, but e.g. divorce of parents, being an orphan, etc.) the input data is always skewed in SOME way, and the algorithm will simply find that skew and more or less cut everything else off. To put it shortly: it was a super interesting project, I learned a lot, and so on, but it was also really scary to see that the "emotionlessness" we wanted can destroy a human's life by deciding to prolong a sentence where a human judge might have ruled differently.
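The failure mode described here, a learner latching onto whichever single feature best separates a skewed sample, can be shown with a tiny one-rule learner on invented data:

```python
from collections import defaultdict

# Invented, deliberately skewed sample: 95 "group A" records all labeled as
# reoffending, 5 "group B" records all labeled as not reoffending. The
# "employed" feature varies within both groups.
data = (
    [({"group": "A", "employed": e}, 1) for e in (0, 1, 0, 1, 0)] * 19
    + [({"group": "B", "employed": e}, 0) for e in (0, 1, 0, 1, 1)]
)

def one_rule_accuracy(feature):
    # accuracy of predicting the majority label within each feature value
    buckets = defaultdict(list)
    for row, label in data:
        buckets[row[feature]].append(label)
    correct = sum(max(ls.count(0), ls.count(1)) for ls in buckets.values())
    return correct / len(data)

# A one-rule learner picks the feature with the highest score; on this skewed
# sample the group attribute wins outright, so everything else is ignored.
for f in ("group", "employed"):
    print(f, one_rule_accuracy(f))
```

On this sample "group" scores a perfect 1.0 while "employed" scores 0.95, so the learner keys on group membership, exactly the "easiest decision parameter" problem the comment describes.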
@Josh-tl4wf 2 years ago
What was the official name of the project you worked on? As a data scientist I'm intrigued
@TyDreacon 2 years ago
One thing I'm surprised isn't brought up a lot is gaming the algorithm. When asking questions like, "do you find drug use harms other people other than the user?", it's pretty evident what the "correct" answer is. It's not like the algorithm can reach into your mind and see what you _actually_ believe. So you're pretty much free to answer per the algorithm's expectations to get the score you want.
@FreeDomSy-nk9ue 2 years ago
I love your videos; that was awesome and I really enjoyed it. I can't believe COMPAS isn't talked about as much as it should be.
@Xaelum 2 years ago
I feel like a system that somewhat accurately predicts crime could exist given enough time and development resources (probably not the ones currently being used), but we've had the tool backwards this whole time. What if, instead of convicting someone, we applied resources to HELP those with a higher chance of going back, so that they feel supported and avoid doing so? That way the moral dilemma would be almost gone, and you would be benefiting the people who feel its consequences the most.
@brandenjames2408 2 years ago
I'm skeptical of our ability to make a good AI for something like this anytime soon, but regardless, your second point is very good and made me realize I wasn't looking at all the options: this technology could still be useful even if flawed, if it's used for a less dire purpose like recommending help instead of punishment.
@kraistosama8445 2 years ago
As a professional experienced in creating risk prediction algorithms, a simple questionnaire seems to me too little information, and too unreliable, to be able to predict future behavior:
- Too little information: it cannot represent a person's future mentality, or what conditions led to the first crime and whether those conditions will repeat in the future.
- Unreliable: because on a questionnaire the person can simply lie in all their answers.
Even assuming we had a highly reliable and accurate algorithm (which I personally think is impossible), this type of algorithm by design uses past statistical information to predict future behavior, which is extremely unfair. Just because people with similar behavior have committed crimes again does not imply that a specific person will do the same. Likewise, people who grew up in neighborhoods with higher criminality, who belong to ethnic groups more often convicted of crimes, or who have different political/philosophical preferences will receive worse sentences. But most important of all, it's not morally ethical to use this kind of tool to dictate people's fate, especially when it can increase someone's sentence not based on the crime committed but on the possibility of recidivism; that's basically punishing a person before they commit the crime. These judicial decisions must always be made by people who understand the entire moral and ethical framework, as well as the impact of their decisions.
@pXnTilde 2 years ago
And these are all things that should be raised by the defense. Unfortunately, if the prosecution uses these tools in good faith (which is presumed to be true), then they are entitled to use them, procedurally speaking. Ban it through legislation; don't wait for a legal theory to be crafted and accepted by the courts that demonstrates how it violates rights.
@_BangDroid_ 2 years ago
You're right, it's not morally ethical. Why do you think SCOTUS doesn't care?
@ichigo_nyanko 2 years ago
Even more concerning, if you look at the questions, they are worded like "does this person seem like X," not "are you X" - implying it's not the person who committed the crime who fills in the survey; it's up to someone else to decide how they 'seem'. That is a recipe for disaster!
@spthibault 2 years ago
"...If we could, should we?" That is a gold-level philosophical question. An additional question that blurs the hard line of separation between subjects, imo, is this: should we be fielding this technology and subjecting the public (where lives are real) to it before it is perfected? Should we be making actual society unwilling and unknowing participants in that apparatus's development and operation, especially when their actual livelihoods are on the line?
@Mysterios1989 2 years ago
I am really glad that these kinds of tools are about to be banned in the EU (well, as soon as the AI directive passes, but there is a strong push for it). AI systems are great where they are meant to work, but they have too many flaws when used in fields like law.
@pkmntrainermark8881 2 years ago
I'm just gonna take a moment here to voice my appreciation for Kevin still making videos for us. Vsauce 1 and 3 never upload anything, so it's good to still have one around.
@quarepercutisproximum9582 2 years ago
You mean 2! lololol, but yeah, you're right
@maxwhite4732 2 years ago
This is the equivalent of asking a fortune teller to predict the future and using it as evidence in court.
@evil_bratwurst 2 years ago
Exactly! Nice pfp btw.
@light-master 1 year ago
Our societal laws are a collection of what society deems we are and aren't allowed to do. By definition they are a human judgment of human actions, and they are constantly changing based on how each new generation values and judges the actions of others. You cannot morally allow a computer to judge human actions any more than you can judge the actions of those who lived hundreds of years ago, who were governed by an entirely different set of laws.
@martinzg007 2 years ago
Vsauce
@resolecca 2 years ago
If those pictures in any way resemble the people he is referring to, then I know exactly why she was "determined" by the algorithm to be a bigger danger than that man. I'll give you one guess: it starts with the letter R, ends in M, and has six letters.
@resolecca 2 years ago
Welcome to your dystopian/1984/black mirror future
@spudd86 2 years ago
Seems like you could get the one about having difficulty keeping your mind on one thing tossed as discriminatory, since that is literally the main symptom of ADHD.
@LeetJose 2 years ago
This reminds me of an older book my class read in middle school (2002?) about a computer that could predict crime. I think I remember the book describing a person being led to the room with the device so it could be destroyed. I actually don't remember it too well, and I haven't been able to find it.