Very easy explanation! Thank you for making it simple.
@sujits3458 · 4 years ago
Very good explanation, I was able to understand the concept so easily, thank you :-)
@dharmendrathakur1487 · 3 years ago
Thank You very much Francisco...You made it so simple :-)
@pratikanand2200 · 5 years ago
Very, very good description. Could you upload a link to this PPT?
@kkhanh4y · 3 years ago
Don't rows usually represent documents and columns represent words? But here rows represent words and columns represent documents.
@fiacobelli · 3 years ago
You can use either orientation of the matrix to judge word similarity or document similarity. In the end it is the same; all that changes is which vector you use, depending on your end goal.
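A minimal sketch of the point above, using a hypothetical tiny term-document count matrix (the words and counts are made up for illustration): word similarity compares row vectors, document similarity compares column vectors of the same matrix.

```python
import numpy as np

# Hypothetical 4-word x 3-document count matrix
# (rows = words, columns = documents, as in the video).
A = np.array([
    [2, 0, 1],   # "cat"
    [1, 0, 2],   # "dog"
    [0, 3, 0],   # "stock"
    [0, 2, 1],   # "market"
], dtype=float)

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Word similarity: compare ROW vectors.
word_sim = cosine(A[0], A[1])        # "cat" vs "dog"

# Document similarity: compare COLUMN vectors of the same matrix.
doc_sim = cosine(A[:, 0], A[:, 1])   # document 0 vs document 1

print(word_sim)  # high: "cat" and "dog" co-occur in the same documents
print(doc_sim)   # zero: documents 0 and 1 share no words
```

Transposing the matrix just swaps which comparison uses rows and which uses columns, which is why either layout works.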
@kkhanh4y · 3 years ago
@@fiacobelli OK, thank you!
@pratikanand2200 · 5 years ago
Defining a word's meaning by the passages it appears in is not satisfying on its own, since those passages contain other words too; would the same average meaning then be assigned to every word a passage contains? But LSA handles this properly.
@2441139knakmg · 4 years ago
Best
@mohitpahuja6894 · 3 years ago
Wrong explanation. You haven't actually reduced dimensionality; what you have found is an approximation of the data. What you need to do is project the data onto the top two singular vectors to get lower-dimensional data.
@fiacobelli · 3 years ago
Correct, it is an approximation matrix. You can play with dimensionality as well, but this is the seminal concept behind LSA.
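The distinction in this exchange can be sketched with numpy's SVD on a toy matrix (the matrix here is random, purely for illustration): the rank-k reconstruction has the same shape as the original data, while true dimensionality reduction projects each document onto the top-k singular vectors, yielding k-dimensional vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))      # toy 6-word x 4-document matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2

# Rank-k APPROXIMATION: same shape as A, just lower rank.
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Dimensionality REDUCTION: project documents onto the top-k left
# singular vectors -> each document becomes a k-dimensional vector.
docs_k = U[:, :k].T @ A              # shape (k, num_documents)

print(A_k.shape)     # (6, 4): full-size approximation matrix
print(docs_k.shape)  # (2, 4): each document reduced to 2 dimensions
```

Both views use the same truncated SVD; LSA is usually presented via the approximation matrix, while downstream similarity computations typically work in the projected low-dimensional space.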