Finally, a good concise explanation of these concepts!
@PotatoMan1491 · 9 months ago
Prof is excellent in delivering essential pieces, I find this very informative
@pythoncodes4044 · a month ago
Amazing, enjoyed the lecture!
@tanmaythaker2905 · 10 months ago
Best explanation ever! Simple yet comprehensive.
@lukamandic5700 · a year ago
At Option 3 it is mentioned that cosine distance treats the zero values for unrated movies as negative. Actually, zero values do not have an impact on cosine distance. Cosine distance: 1 - (A · B) / (||A|| * ||B||), where (A · B) is the dot product of vectors A and B, and ||A|| and ||B|| are the Euclidean norms (magnitudes) of A and B. If a component of vector A or vector B is 0, it won't have an impact on the cosine similarity.
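To check this numerically, here's a quick sketch (plain Python, made-up rating vectors): a component that is zero in both vectors contributes nothing to the dot product or to either norm, so the distance is unchanged.

```python
import math

def cosine_distance(a, b):
    """1 - (A . B) / (||A|| * ||B||) for two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

a = [4, 5, 1]
b = [5, 5, 2]
# Appending components that are zero in both vectors adds nothing to
# the dot product or to either norm, so the distance does not change.
print(cosine_distance(a, b))
print(cosine_distance(a + [0, 0], b + [0, 0]))  # identical value
```

(A zero in only one of the two vectors is subtler: it still kills that term of the dot product, but the other vector's norm keeps its component.)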
@ahishchinnu1843 · 2 years ago
This is hands down the best video I have ever seen that instills the intuition for the concept. Absolutely brilliant!!!
@karthikrajeshwaran1997 · 7 months ago
Helped me totally grok! Thanks so much prof.
@kisome2423 · 11 months ago
Thank you! I learned a lot from you!
@jackshaak · 2 years ago
Thank you for putting this together in a fairly easy way to understand. You're a very good teacher.
@yakhoubsoumare6944 · 6 years ago
Sir, you are a genius!
@shubhamtalks9718 · 3 years ago
Finally, someone explained it properly.❤
@subratpati9166 · 7 years ago
Thanks a lot... finally found a tutorial that is easy to learn from!! Great work
@yuzhao5268 · 3 years ago
Great lecture. Wondering what some deep-learning-based approaches to this problem look like today.
@shauvikpujari9522 · 3 years ago
Thank you sir, your explanation is awesome
@sparshnagpal1509 · 4 years ago
Best Description.
@Shivamagrawal50 · 3 years ago
Hi Sir, as you said, Pearson's correlation removes an individual user's harshness on the rating scale when doing user-user collaborative filtering, but why use the same method for item-item collaborative filtering?
@josewesley2384 · 4 years ago
Thank you, it's the best explanation I have seen so far. The idea is very simple; how did I not think of it?
@gourabnath9789 · 4 years ago
Awesome explanation! Thank you
@SanjarAhmadov · 8 years ago
Very insightful. Thank you.
@manisthashrestha250 · 7 years ago
thank you! that was great :D
@shreyaraut9270 · 7 years ago
thank you... explanation is simple and understandable
@johng5295 · 4 years ago
Thanks a million. Awesome. Where have you been all these years?
@Skandawin78 · 5 years ago
I like this video, but I'm confused because we are trying to solve 2 problems here: 1. Find the missing ratings. 2. Find the items (in item-based CF) or users (in user-based CF) that are similar to each other. Do we use centering + cosine similarity and kNN to address problem 1, problem 2, or both? Do we use the ratings to identify similar users/items, or do we first find similar items/users and then fill in the missing ratings?
@edwinchong37 · 2 years ago
Better than my Prof.!!!
@dmitrizakharov9068 · 8 years ago
Hi, thanks for the great explanation. Is there a reason to use centered cosine distance over normalized cosine distance? Would dividing by the standard deviation of the row after subtracting the mean not better account for different levels of variance within user ratings? Is there a downside to normalizing instead of centering?
@rc_matic · 6 years ago
What you're describing is standardization, not normalization. In general, that would be an idea worth trying. However in this case, an issue with standardizing here MIGHT be the fact that a user's ratings are not normally distributed. Also, since standard deviation relies on the sample/population size, you could get vastly different results comparing a user who has seen 10 movies and a user who has seen 1000 movies.
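To make the distinction concrete, here's a toy sketch (my own made-up ratings, standard library only): centering subtracts the user's mean rating, while standardizing additionally divides by the user's standard deviation, which blows up for a user whose ratings are all identical.

```python
import statistics

ratings = [5, 3, 4, 4]  # one user's ratings (hypothetical)

mean = statistics.mean(ratings)
centered = [r - mean for r in ratings]  # what centered cosine / Pearson uses
print(centered)  # [1.0, -1.0, 0.0, 0.0] -- sums to zero

sd = statistics.stdev(ratings)  # sample standard deviation
standardized = [(r - mean) / sd for r in ratings]  # also rescaled by the user's spread
print(standardized)

# Caveat: a user who gives every movie the same rating has sd == 0,
# so standardizing divides by zero while centering still works.
```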
@abhinaygupta · 3 years ago
Great Explanation.
@HazemAzim · 2 years ago
Excellent .. Very well structured
@Robin-zc2iw · a year ago
Very helpful, thanks man
@rpcroft · 5 years ago
Does one need to restrict the similarity set to positive values? I calculated the entire item-item similarity matrix using Python and pandas, and I got predicted values outside the 1-to-5 range: for example, the inferred value for user 2 / film 2 was 20.07, and it was -1.27 for user 4 / film 1. Negative centered cosine similarities seem to throw the whole thing out of whack: in the calculation at 18:43 it is possible to have a small denominator that magnifies values, and also to get negative results.
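For what it's worth, a common fix is to keep only the neighbors with positive similarity: with non-negative weights the prediction is a convex combination of real ratings, so it cannot leave the 1–5 range. A hypothetical sketch (the similarity values are loosely based on the ones quoted in this thread; the neighbor ratings 2 and 3 are made up for illustration):

```python
def predict_rating(sims, ratings):
    """Weighted-average prediction over neighbors with positive similarity only."""
    pairs = [(s, r) for s, r in zip(sims, ratings) if s > 0]
    if not pairs:
        return None  # no usable neighbors; fall back to e.g. the item's mean
    return sum(s * r for s, r in pairs) / sum(s for s, _ in pairs)

# The -0.31 neighbor is dropped, so the prediction must land
# between the remaining neighbors' ratings, 2 and 3.
print(predict_rating([0.41, 0.59, -0.31], [2, 3, 5]))
```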
@rayane4975 · 2 years ago
great explanation!
@aramun7614 · 3 years ago
Very good explanation.
@connorphoenix1107 · 7 years ago
Thank you this is exactly what I was looking for
@junaidahmad-oe4oh · 7 years ago
Very Informative
@mohammadhegazy1285 · a year ago
Thank you very much, sir
@eti-inyeneibia6442 · 7 years ago
I couldn't get 1.00, 0.18, 0.41 with the cosine similarity formula. Please explain.
@RahulPatel-hr4qe · 4 years ago
Give me your email ID.
@megabites3649 · 4 years ago
@@RahulPatel-hr4qe Can you explain it to me? darkedarksecret@gmail.com
@philipjaykho114 · 4 years ago
@@RahulPatel-hr4qe Please help me get those answers too, sir. This is my email address: kphilipjay@gmail.com. Thanks a lot, sir.
@lishannavishka1783 · 3 years ago
I have a problem. I can't state it the proper way, but I will try to explain it. Suppose there are two points A and B, with A = [1, 0] and B = [0, 1]. The cosine similarity between A and B is cos 90° = 0, right? Likewise, in this example, how did you find the cosine similarity between A and B, sir? And how can we graph the points A and B?
@Stickman-shenanigans-d7k · 7 years ago
Wow. Now this is what I was looking for.
@K-mk6pc · 2 years ago
Sir never addressed user-user similarity metrics.
@ericarnaud5062 · 8 years ago
Amazing explanation
@qihanguan8105 · 3 years ago
Thank you! Super helpful!
@AhmedRaza-kp8io · 2 years ago
Centered cosine similarity???
@abdelrhmanwagdy3383 · 5 years ago
What if the ratings are all equal, so the centered vector's magnitude is zero? Then we can't divide by zero.
@hasanalikhan3543 · 2 years ago
How did they rate 5 movies overall? It's 6, not 5.
@Sudharsan-dd1qi · a year ago
Sir, I have a problem in collaborative filtering. Can you help solve it?
@whiteF0x9091 · 5 years ago
Excellent ! Thank you.
@mr68clubshorts · 7 years ago
Great work. I really don't know how to thank you.
@aydin3852 · 3 years ago
thank you, now I get it
@philipjaykho114 · 4 years ago
Hi Artificial Intelligence, I'm so confused because I tried to solve for the similarity values (those in green font), but it seems so hard for me. Please help me. Thank you so much, sir.
@celinexu6598 · 5 years ago
Tried the centered cosine; I have 2 questions here: 1. If we have a user like D (in the video) with all ratings recorded as 0, the cosine distance to any other member A, B, C will be NaN, because there is a 0 in the denominator. Any treatment? 2. If I have 1,000 customers, then I have to compute 499,500 values, one for each pair of customers. If I have 1 million customers, the computation is really large, and I'm not sure a normal server can handle it. Any suggestion or shortcut? Thank you!
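On question 1, one pragmatic treatment (my own assumption, not something from the lecture) is to define the similarity as 0 when either vector has zero norm, since an all-zero row carries no information. On question 2, the pair count really does grow as n(n-1)/2; in practice people avoid brute force with approximate techniques such as locality-sensitive hashing or clustering. A sketch:

```python
import math
from itertools import combinations

def safe_cosine(a, b):
    """Cosine similarity, returning 0.0 instead of NaN for zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms if norms > 0 else 0.0

users = {"A": [4, 0, 5], "B": [5, 1, 4], "D": [0, 0, 0]}  # made-up rows
for u, v in combinations(users, 2):
    print(u, v, round(safe_cosine(users[u], users[v]), 3))  # pairs with D give 0.0

# Number of pairwise comparisons for n customers: n * (n - 1) / 2
print(1000 * 999 // 2)  # 499500, the count mentioned above
```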
@mamatizm817 · 7 years ago
I need an explanation of the affinity propagation algorithm, with an example please.
@nishthavarshney9548 · 3 years ago
How do you calculate the cosine value?
@ibrahimiltifat7045 · 3 years ago
Thank you
@alvaropatricioiribarren3791 · 4 years ago
Thank you!
@akshayjagtap7834 · 3 years ago
Where can I find code for this?
@ramans11 · 8 years ago
very nice..
@almasazimkhan7170 · 5 years ago
In the formula (Option 2) for user-user collaborative filtering we have "Ryi", which is a rating. But what should we do if Ryi in the original dataset is 0? Should we ignore it?
@rhealisa9268 · 5 years ago
It doesn't matter. The formula uses the *sum* of the Ryi ratings, so an absent rating (in this case, treated as zero) doesn't make a difference.
@aishi99 · 7 years ago
very helpful. thanks
@foodie2841 · 6 years ago
Sir, I have a query: if I want to find the rating for movie 3 by user 4, what should I do?
@PumarkoHD · 7 years ago
When predicting the weighted avg at around 18:40, is it a coincidence that .41 and .59 add up to 1? Or should that always be the case?
@jrafaelvalle · 7 years ago
It's a coincidence. They represent the similarities between items 1 and 3 and items 1 and 6: sim(1,3) and sim(1,6). Consider k values different from 2, for example 1 or 3...
@PrateekAgrawal · 7 years ago
How many fans of "Acid Rock" here ;)
@maganaluis92 · 4 years ago
This is horrible
@mathsLaw · 3 years ago
How simple and how elegant! I guess this is the Stanford way. Thank you!
@abderahmanehm8117 · 3 years ago
Thank you
@prabhacar · 2 years ago
Your explanation of why item-based CF is better than user-based CF in the last 60 seconds was very intuitive and logical. Thanks for bringing so much clarity through easy-to-understand movie examples.
@albb762 · 7 years ago
Centered cosine similarity is a very bad algorithm for this problem, because a person who rated all the movies 5 stars will come out similar to a person who rated all the movies 1 star.
@aashudwivedi · 5 years ago
This example is the case where one person is a tough rater and the other one is an easy rater.
@himanshuladia9099 · 5 years ago
What if the ratings are all equal and therefore the centered vector's magnitude is zero? Then we can't divide by zero. @@aashudwivedi
@harshsandesara9470 · 2 years ago
I always wondered why we needed to center everything around 0 instead of just filling in the missing values with the average ratings. It's basically the same thing, with the calculations done in the original frame instead of a shifted frame, and the problem you mentioned most likely wouldn't arise then.
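A small numeric check on the all-5s vs. all-1s case from this thread (plain Python; treating the undefined 0/0 case as similarity 0 is my own convention): after mean-centering, both users collapse to the zero vector, so centered cosine does not actually call them similar — their similarity comes out undefined rather than 1.

```python
import math

def center(v):
    """Subtract the vector's own mean from each component."""
    m = sum(v) / len(v)
    return [x - m for x in v]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms if norms > 0 else 0.0  # 0/0 treated as "no information"

harsh = center([1, 1, 1])  # all 1-star ratings -> [0.0, 0.0, 0.0]
easy = center([5, 5, 5])   # all 5-star ratings -> [0.0, 0.0, 0.0]
print(cosine(harsh, easy))  # 0.0, not 1.0: they are not deemed similar
```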
@iiTE4L · 5 years ago
At 17:12, the 0.41 in green is wrong. I get 0.55 and I double-checked it. Everything else is correct.
@megabites3649 · 4 years ago
How do you solve for 1.00, -0.18, 0.41, 0.10, -0.31 and 0.59?
@darkeddarksecret6279 · 4 years ago
Good day sir, how do you compute the cosine similarity between the rows, like 1.00, -0.18, 0.41, 0.10, -0.31 and 0.59? Please reply ASAP; I need it for my thesis project. This explanation helps me a lot, sir.
@jakman85 · 8 years ago
Excellent explanation of neighborhood-based CF. Any way you could explain factor-analysis-based CF? Thanks!
@abhisheksahu6032 · 6 years ago
Can anyone explain how we pick the neighborhood size?
@Krimson5pride · 4 years ago
Great explanation. So you know both Baroque and acid rock? I will make sure this guy gets more exposure.
@curryfavours · 6 years ago
Very insightful comparison of item-item vs. user-user approaches towards the end.
@Vomvom56 · 3 years ago
You are just reading slides, not explaining well. Disappointed.
@marigam · 2 years ago
Thank you so much! I’ve heard it tried to be explained before but it was too difficult! This actually made sense! 💜
@dewinmoonl · 3 years ago
What an excellent video: clear, with examples, and it explains the reasoning. Stanford is no joke :)
@codelover847 · 4 years ago
In item-item, shouldn't we be computing cosine similarities between columns? Because Si,j denotes the similarity of items (i.e., movies).
@lucaecari2928 · a year ago
Thank you, very nicely explained, both the intuition behind the concepts and the theory.
@danielclark739 · 2 years ago
I get what you're saying that 0 is not a great assumption for a rating when a user did not watch a movie, because they might have liked it if they had watched it. However, it ignores the notion that the user had the opportunity to watch it and wasn't interested enough to do so, which is data that reflects their interest in a movie.
@Girisuable · 4 years ago
Awesome explanation. Very clear with the scenarios. Thank you so much.
@ramiroarizpe3135 · 4 years ago
My Satguru came to save me. Thanks, really useful information.
@luis-benavides · 6 years ago
Great way to explain it! Thank you.
@akshaysrivastava4304 · 10 months ago
great video, very informative, thanks professor!
@thasin9671 · 7 years ago
very very very very ... helpful.
@parthgupta6604 · 2 years ago
One of the best videos on collaborative filtering!