Thanks for this! It helped a lot with understanding how to make predictions using the regression (B) matrix.
@ayoubmarah4063 4 years ago
What does the R matrix stand for?
@RasmusBroJ 4 years ago
That's the matrix containing the inner-relation regression coefficients on the diagonal.
@HIZIBIJI444 6 years ago
Which software is this?
@RasmusBroJ 6 years ago
There's not much software in this presentation, but the most common packages used in chemometrics would be Unscrambler, Simca and PLS_Toolbox. There are many other nice packages as well.
@Novatures 9 years ago
I don't understand why T = X*P, since X = T*P' + E :(
@nanfengliu1027 9 years ago
+Novatures I guess it is because P is an orthogonal matrix, i.e., P*P' = P'*P = I. This is a constraint imposed when optimizing P (the projection matrix).
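The orthonormality argument can be checked numerically. Below is a small NumPy sketch (the data is random and purely illustrative, not from the video): with P holding orthonormal PCA loadings, the residual E is orthogonal to P, so X P = (T P' + E) P = T.

```python
import numpy as np

# Illustrative random data (not from the video); mean-centered as usual.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
X = X - X.mean(axis=0)

# PCA via SVD, keeping k = 2 components.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
T = U[:, :k] * S[:k]     # scores (20 x 2)
P = Vt[:k].T             # loadings (5 x 2); columns are orthonormal, so P'P = I
E = X - T @ P.T          # residual, giving X = T P' + E

# The residual lies outside the span of the retained loadings, so E P = 0,
# and therefore X P = (T P' + E) P = T (P'P) + E P = T.
assert np.allclose(P.T @ P, np.eye(k))
assert np.allclose(E @ P, 0)
assert np.allclose(X @ P, T)
```

One nuance: P*P' = I only holds when all components are kept (P square). With a truncated P, only P'P = I holds, but together with E P = 0 that is already enough for T = XP.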
@thelittlekid2358 4 years ago
If you check out the line in gray font at the upper right corner of the slide at 9:00, it says there's a simplification for weights (also called rotations) and loadings. For simplicity, the tutorial uses the same symbol P to denote both the loadings and the rotations. In reality (for example in scikit-learn.org/stable/modules/generated/sklearn.cross_decomposition.PLSRegression.html), it should be X = TP' + E and T = X W(P'W)^{-1}, in which W denotes the weights for X. The difference between a loading and a weighting is documented at wiki.eigenvector.com/index.php?title=Faq_difference_between_a_loading_and_a_weighting. I would agree with the tutorial on simplifying those symbols when introducing PLSR for the first time (especially right after PCA), since that makes PLSR easier to understand at a high level.
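To make the weights/loadings/rotations distinction concrete, here is a minimal PLS1 (NIPALS) sketch in plain NumPy (a toy implementation on random data, not sklearn's code): each weight vector w applies to the *deflated* X, so T ≠ XW in general, while the rotations W(P'W)^{-1} map the original centered X straight to the scores.

```python
import numpy as np

def pls1_nipals(X, y, n_comp):
    """Toy PLS1 (NIPALS) sketch: returns scores T, weights W, loadings P."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    n, p = X.shape
    T, W, P = np.zeros((n, n_comp)), np.zeros((p, n_comp)), np.zeros((p, n_comp))
    Xd = X.copy()                        # deflated copy of X
    for a in range(n_comp):
        w = Xd.T @ y
        w /= np.linalg.norm(w)           # weight vector (unit length)
        t = Xd @ w                       # score for this component
        p_a = Xd.T @ t / (t @ t)         # X-loading
        Xd = Xd - np.outer(t, p_a)       # deflate X
        y = y - t * (y @ t) / (t @ t)    # deflate y
        T[:, a], W[:, a], P[:, a] = t, w, p_a
    return T, W, P

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 6))
y = rng.normal(size=30)
T, W, P = pls1_nipals(X, y, n_comp=3)

# Rotations R = W (P'W)^{-1}: they take the original centered X to the scores,
# which the weights alone do not (they act on the deflated X at each step).
Xc = X - X.mean(axis=0)
rotations = W @ np.linalg.inv(P.T @ W)
assert np.allclose(Xc @ rotations, T)
```

This is the same identity sklearn exposes as `x_rotations_` on `PLSRegression`, which is why its `transform` uses the rotations rather than the raw weights.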