Here's a fun pet project I've been working on: udreamed.com/. It is a dream analytics app. Here is the KZbin channel, where we post new videos about three times per week: kzbin.info/door/iujxblFduQz8V4xHjMzyzQ Also available on iOS: apps.apple.com/us/app/udreamed/id1054428074 And Android: play.google.com/store/apps/details?id=com.unconsciouscognitioninc.unconsciouscognition&hl=en Check it out! Thanks!
@Gaskination11 жыл бұрын
@Vasanthi, KZbin is preventing me from replying directly to your comment on my video... So, I'll reply here. EFA is for assessing the measurement model (latent constructs) without any constraints. CFA is to determine the validity and reliability of that measurement model. This can also be done in PLS. Hypotheses about relationships between constructs cannot be tested in CFA, they must be tested in a causal (structural) model where you have regression lines between the constructs. I have videos for all of this, including how to do it in pls-graph and SmartPLS (and AMOS and SPSS).
@vasanthiperumal219911 жыл бұрын
Thank you, Sir.
@prof.dr.halukmergen623610 жыл бұрын
Dear Prof. Gaskin, your voice is missing from the SEM lessons. Please include your voice; they are hard to follow without it.
@Gaskination10 жыл бұрын
Haluk MERGEN Your volume must be down or you have accidentally muted the video. The video mute is on the bottom left of the video. My voice is in all of my videos.
@prof.dr.halukmergen623610 жыл бұрын
Sorry about that. It was my mistake.
@soehartosoeharto84715 жыл бұрын
Thank you, Sir.
@ChimeraeBF9 жыл бұрын
I love your enthusiasm in your videos! Thanks for helping me out with my SEM exam!
@Gaskination11 жыл бұрын
This is due to standardizing during imputation. The values receive a relative weighting. So, it is possible to even have negative values after imputing composites.
@branisima8 жыл бұрын
I've learned SO much from you :) Thank you so much for sharing your expertise James. May God bless you :)
@Gaskination11 жыл бұрын
I use Promax because it assumes factors are correlated (which they usually are). It also doesn't sugar coat anything. It gives very raw loadings. For Orthogonal, most like to use Varimax, which is a good solution if you are plagued with very extreme loadings. Varimax softens the loadings so that nothing is too high or too low. It is also good because it nearly always finds a factor for each item. However, it is a softer solution that might not play out in the CFA. Either will work though.
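For intuition about the Promax/Varimax trade-off described above: an orthogonal rotation like Varimax just multiplies the loading matrix by an orthogonal matrix chosen to simplify the columns, so it cannot change the communalities. Here is a minimal numpy sketch of Kaiser's varimax algorithm; the example loading matrix is made up, and SPSS of course does all of this internally:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal rotation of a loading matrix (varimax when gamma=1)."""
    L = np.asarray(loadings, dtype=float)
    n, k = L.shape
    R = np.eye(k)  # rotation matrix, stays orthogonal throughout
    crit = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        # SVD step of the classic Kaiser algorithm
        u, s, vt = np.linalg.svd(
            L.T @ (Lr**3 - (gamma / n) * Lr @ np.diag((Lr**2).sum(axis=0)))
        )
        R = u @ vt
        if s.sum() < crit * (1 + tol):
            break
        crit = s.sum()
    return L @ R, R

# Hypothetical unrotated loadings for 4 items on 2 factors
L = np.array([[0.8, 0.3], [0.7, 0.4], [0.2, 0.9], [0.3, 0.8]])
rotated, R = varimax(L)
```

Because the rotation is orthogonal, the row sums of squared loadings (the communalities) are identical before and after rotation; an oblique rotation like Promax relaxes exactly this constraint by letting the factors correlate.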
@Gaskination11 жыл бұрын
1. you could try combining the two constructs that are causing the problem. 2. you could identify the items within those constructs that are causing the problem (look at the pattern matrix for crossloadings), and then try removing one or two of those items.
@Gaskination11 жыл бұрын
a) Yes, one at a time if you cannot obtain a clean factor solution. Otherwise you may be fine just leaving them in. b) They are probably either very similarly worded, or they are highly kurtotic. You might be able to just remove one if either of these issues is a problem. If neither is an issue, then retain them. c) This will resolve itself as you address (a) and (b)
@Gaskination9 жыл бұрын
+Caloy Carlo KZbin won't let me reply directly, so I'll reply here. Make sure to re-reverse all your reverse coded items so they are positively scaled. If they are all from the same factor, then loading together is good. If they are from multiple factors, then it is a problem... Reverse coded questions nearly always cause problems in the EFA. I try to avoid them. Usually I end up having to remove them in the end.
@syannameer9729 жыл бұрын
+James Gaskin Thank you for your wonderful work here. I have a question about (1) conducting EFA to test the scale constructs for a questionnaire that I developed, and (2) conducting EFA to test the significant factors in my study. In both cases, do I include all the scales that measure the IVs and the DVs in the EFA, or just the IVs?
@Gaskination9 жыл бұрын
+Syan Nameer Include all reflective latent factors. It is crucial to test IVs with DVs so that we know we don't have an issue with tautology.
@Gaskination11 жыл бұрын
If you have skewness on a 1-5 scale, then it will most likely manifest itself as kurtosis. Also, it is only on a 5 point scale, so what does it actually mean if it is skewed? Not like when you have an infinite scale (such as income or age). If items are not loading on more than one factor, then you can force them apart in the EXTRACT option.
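To make that point concrete: on a bounded 5-point scale, responses piling up at one end produce both extreme skewness and heavy (excess) kurtosis at the same time. A quick scipy illustration with invented response counts:

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Hypothetical 5-point Likert responses piled up at the top of the scale
responses = np.array([5] * 80 + [4] * 10 + [3] * 5 + [2] * 3 + [1] * 2,
                     dtype=float)

print(skew(responses))      # strongly negative: left-skewed pile-up at 5
print(kurtosis(responses))  # large excess kurtosis from the same pile-up
```

The two statistics are driven by the same feature of the data here, which is why skew on a short bounded scale tends to "manifest itself as kurtosis," as described above.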
@Gaskination11 жыл бұрын
That's rough. You are probably correct about your 2nd order construct. As for the IV, the only thing you can really do is see which items are crossloading the most, and then remove them in order to try to create distance between the factors. You might look at this in a separate and smaller EFA with only the items from the related factors.
@Gaskination11 жыл бұрын
Doing the EFA separately is fine. But then you must make sure to include them all together in the CFA. We must at some point test the discriminant validity between the IVs and DVs to ensure we do not have tautological measures (meaning we are predicting Construct X with Construct X). I always include all items from the beginning, unless I have 60+ items, then I'll break them up a bit and then put them back together for a final EFA.
@2aeng11 жыл бұрын
Well, I have surveyed patients. The questionnaire had 11 items for doctors (measuring 3 types of social support) and 7 items for staff (measuring the same). The hypotheses are based on the 3 types of support received from the healthcare industry. The CFA/model fit using AMOS shows acceptable fit indices. Now, when I perform an EFA using SPSS, I need to see categories of the 3 types of support (for the entire healthcare service), but the matrix shows the factors dividing into separate components for doctors and staff.
@Gaskination11 жыл бұрын
You name them by looking at the variables that comprise them. See what the over-arching or common theme is between them. If I have three items for a particular factor:
1. I am hungry
2. I want to eat
3. Food sounds really good right now
I would name this factor something like "Hunger", because this is the underlying common theme.
@Gaskination11 жыл бұрын
During the EFA I use SPSS; during the CFA I use AMOS. Validities can be checked during both.
@Gaskination11 жыл бұрын
You need to make sure they are in the same column. So, if you asked question "are you hungry" to doctors and staff, then you need to make sure their responses show up in the same column.
@Gaskination11 жыл бұрын
So you're saying you are trying to arrive at a 3-factor model, but right now it is coming up with only 2 factors? And constraining it to 3 doesn't help? If this is the case, then you might try separating the data into just docs vs. other staff. Then run the EFA separately. If this doesn't work, then the EFA simply doesn't recognize that you have three types of support; it only recognizes 2. What can you do about this? You might try the CFA, where you can say which items go on which factors.
@Gaskination11 жыл бұрын
It imputes composites for all latent factors. You can just carry on with the highest level factors. No. A 2nd order construct is only formative if its first order factors are truly separate dimensions of the 2nd order construct. To be formative, they would also have to be modeled differently, so that the arrows point from the first order to the second order, not the other way around. But AMOS cannot handle this. You would have to use SmartPLS.
@Gaskination11 жыл бұрын
Because you are not using Maximum likelihood or Promax. If you are using another extraction method, it might show up as the components matrix.
@Gaskination11 жыл бұрын
You may fail to detect (extract) factors. Or you may extract too many factors. I tend to use the eigenvalues greater than 1 approach at first, and then I'll try fixed factors. Sometimes I'll try 1 more and 1 less than I expect, just to see if those are valid solutions as well. For EFA, we do not need to obtain the optimal solution, just a valid one (of which there may be many).
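The eigenvalues-greater-than-1 rule mentioned above can be checked directly from the correlation matrix of the items. A minimal sketch, using simulated (hypothetical) two-factor data rather than any real survey:

```python
import numpy as np

def kaiser_count(data):
    """Number of factors suggested by the eigenvalue > 1 rule."""
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    return int((np.linalg.eigvalsh(corr) > 1.0).sum())

# Hypothetical data: 6 items driven by 2 independent latent factors
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(2, 500))
X = np.column_stack(
    [f1 + 0.5 * rng.normal(size=500) for _ in range(3)]
    + [f2 + 0.5 * rng.normal(size=500) for _ in range(3)]
)
print(kaiser_count(X))  # suggests 2 factors for this simulated data
```

As the comment above says, this is only one heuristic: fixing the number of factors one above and one below this count is a cheap way to probe for other valid solutions.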
@2aeng11 жыл бұрын
Hi again James. I was wondering: if the CFA fit indices are acceptable, convergent validity is good, and reliability (Cronbach's alpha) is good, but discriminant validity is not achieved and nothing seems to fix it, does that mean the model cannot be used, or, as researchers suggest, that "the constructs should compulsorily be combined into one overall measure"?
@saimasadiarabee27179 жыл бұрын
Hi, your video is very helpful, and it totally makes sense. Thank you. It helped me so much with my exam preparation.
@Gaskination11 жыл бұрын
I always do an EFA, but if you are using a bunch of existing measures then you can probably get away with just the CFA.
@Gaskination11 жыл бұрын
I'm honestly not positive, but it appears to be similar to the pattern matrix, except that it does not distance the factors as well. i.e., the cross-loadings are much stronger. However, the extracted factors follow the same patterns.
@Gaskination11 жыл бұрын
Using categorical variables in an EFA will not work very well because the EFA is based on correlations which depend on the value movements of the variables. However, movements in categorical variables have no value relevance. For example, if my "industry" variable moves from 1 (service) to 3 (manufacturing) this move does not represent an "increase in industry". The move is not numerically meaningful. So, these do not work well in EFA.
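A quick demonstration of why nominal codes break correlation-based methods: relabeling the categories, which changes nothing about the data, changes the Pearson correlation. The data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: "industry" is a nominal label (1=service, 2=retail,
# 3=manufacturing), and the outcome happens to be high for group 2 only.
industry = rng.integers(1, 4, size=300)
outcome = np.where(industry == 2, 5.0, 1.0) + rng.normal(0, 0.1, 300)

r_original = np.corrcoef(industry, outcome)[0, 1]

# Swap which group gets which code (2 <-> 3): the respondents are unchanged,
# only the arbitrary labels moved, yet the correlation changes completely.
relabeled = industry.copy()
relabeled[industry == 2] = 3
relabeled[industry == 3] = 2
r_relabeled = np.corrcoef(relabeled, outcome)[0, 1]
```

Since an EFA solution is computed from exactly these correlations, any factor involving a nominal variable is an artifact of the arbitrary coding, which is the point being made above.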
@2aeng11 жыл бұрын
Hi James, my survey used separate questions for doctors and similar ones for hospital staff. Since my study pertains to the entire healthcare service, each of my constructs comprises both (doctors and staff). But when I perform an EFA in SPSS, it separates the doctor items from the ones relating to hospital staff, and it does not divide into the kind of factors that I need. Since my model involves 3 different kinds of support from healthcare, I need just 3 such constructs. Is there a way to combine doc + staff?
@2aeng11 жыл бұрын
Thanks James, you have been a great help. Working on your suggestion
@thisisdeepali11 жыл бұрын
Hi James. Nice video. Could you please explain why you said that, for a 1-5 scale, there is no need to check skewness? And what if the loadings in the pattern matrix are not consistent, meaning that items load in a zig-zag manner but are not loading on more than one factor?
@cracrul11 жыл бұрын
Hi teacher, you extracted the factors with the MLE method and Promax rotation. I am confused because some of the theory I know uses optimal scaling for reduction when you have categorical variables. My question is: how can I theoretically support the extraction method you applied? Thank you!
@johnfintch71425 жыл бұрын
Greetings Mr. Gaskin. Can you explain the method for performing correlation between single IVs (no sub-dimensions) and a DV (with dimensions)? Do we need to compute all the DV dimensions into a single variable and then perform correlation and regression? Guidance required, please.
@Gaskination11 жыл бұрын
The standard measure is an eigenvalue greater than 1.00.
@KousarSadeghzadeh4 жыл бұрын
Thank you very much for the helpful video. Could you please provide a reference for the cut-off point of 0.3 for the communalities?
@Gaskination4 жыл бұрын
hmmm... this was quite a while ago. I think I used Hair et al 2010 "Multivariate Data Analysis".
@cathrineb9011 жыл бұрын
Great video. I have a question regarding the pattern matrix. I am using principal component analysis as the extraction method and varimax as the rotation method. However, I do not get a pattern matrix. Does the rotated component matrix perform the same role as the pattern matrix, or are they different? I am asking because I need the pattern matrix to conduct a CFA.
@datsme88811 жыл бұрын
Hello Prof. Gaskin! Is there any criterion for selecting the number of factors in relation to the % of variance explained? I remember Andy Field's book mentions a 5% criterion for accepting a component. But I have done an EFA on a Likert-type scale and am getting only 2 components above the 5% criterion, which jointly explain only 26% of the variance: component 3 (4.7%), component 4 (3.5%), and components 5 to 11 (around 2% each). Now how many components should I include in the CFA? Please reply.
@Gaskination11 жыл бұрын
1. By composites, I mean you make weighted averages. I have a video about this called "imputing composite variables in Amos". 2. You only need to test for multicollinearity if the 2nd order factor is formative.
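Mechanically, an imputed composite is just a weighted average of the factor's indicator columns. A minimal sketch; the weights here are made-up stand-ins for the standardized loadings AMOS derives during imputation:

```python
import numpy as np

def weighted_composite(items, weights):
    """Weighted average of indicator columns (rows = respondents)."""
    items = np.asarray(items, dtype=float)
    w = np.asarray(weights, dtype=float)
    return items @ (w / w.sum())  # normalize so the weights sum to 1

# Two respondents, two indicators; hypothetical weights favoring item 2
scores = weighted_composite([[1, 2], [3, 4]], weights=[1, 3])
print(scores)  # [1.75 3.75]
```

Note that if the items are standardized before averaging (as during AMOS imputation), the composites are relative scores and can be negative, which matches the earlier comment about negative imputed values.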
@mahdiesfahani427511 жыл бұрын
Thanks so much for your prompt attention.
@dedylesmana3 жыл бұрын
I want to ask: sometimes the pattern matrix is irregular. For example, there are empty values, dimensions or factors don't match, and sometimes there are paired values across factors 1, 2, and so on. How do I handle it, sir?
@Gaskination3 жыл бұрын
Here is a video of me working through a messy EFA that has many of the issues you are mentioning: kzbin.info/www/bejne/pZbShaOOnrihmcU
@sushma0084 жыл бұрын
Thank you, sir. Please also provide a link to the data file you used.
@surfergirl05199 жыл бұрын
This is so fantastic!! I have learned more from you in this short video than I have in all the books that I have read. A quick question, please: what is the logic behind choosing Maximum Likelihood and Promax rotation from all the available choices?
@2aeng11 жыл бұрын
The survey included 11 items for doctors and 7 items for the hospital (relevant to specific types of support perceived from both). I want to know how I can get SPSS to classify the factor columns (for EFA purposes) on the basis of the 3 types of support instead of doctors vs. hospital. If there were an equal number of questions for doctors and staff, I would have averaged them.
@niens898 жыл бұрын
Hi James, are you also familiar with tetrachoric correlation/factor analysis? We are having difficulties doing a factor analysis that includes binary data. We thought we had to do a tetrachoric analysis because our IVs are binary (our DV consists of 2 or 3 categories, depending on how difficult the analyses will be). We can't find out which steps to take to finalize the tetrachoric analysis in SPSS 19.0 (we're working with syntaxes in SPSS that connect with hetcor in the statistical program R). Thank you in advance! Nienke and Michelle
@Gaskination8 жыл бұрын
I haven't worked with tetrachoric analysis, but I think my friend +aron linberg has. He has a youtube channel with some statistics videos: kzbin.info
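For reference, the classical tetrachoric estimate itself can be sketched: assume each binary item is a dichotomized standard normal variable, fix the two thresholds from the margins, and solve for the latent correlation that reproduces the observed (0,0) cell of the 2x2 table. A rough scipy sketch under those assumptions (the hetcor/R route described above remains the more complete solution):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

def tetrachoric(table):
    """Tetrachoric correlation from a 2x2 table of counts.

    Assumes both binary items are thresholded versions of an underlying
    bivariate normal; solves for the rho that reproduces the (0,0) cell.
    """
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p00 = t[0, 0] / n                 # observed P(X=0, Y=0)
    t1 = norm.ppf(t[0, :].sum() / n)  # threshold for X from its margin
    t2 = norm.ppf(t[:, 0].sum() / n)  # threshold for Y from its margin

    def cell_gap(rho):
        cdf = multivariate_normal.cdf([t1, t2], mean=[0, 0],
                                      cov=[[1, rho], [rho, 1]])
        return cdf - p00

    return brentq(cell_gap, -0.999, 0.999)
```

For example, a table like [[400, 200], [200, 400]] (balanced margins, heavy diagonal) yields a clearly positive latent correlation even though Pearson on the raw 0/1 codes would understate it.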
@mahaali26085 жыл бұрын
very good video, I recommed anyone wants to learn about SEM to view it.
@larissatorremante290310 жыл бұрын
Dear James, I have just conducted the EFA as explained in the video. I have a few problems: 1. The number of factors that were supposed to be extracted is 6; however, only 5 were extracted. Two variables (each with 3 items) load on the same factor. I assume this is due to the similar items I have in two constructs. However, one of the items does not have a significant loading at all (communality = 0.388). 2. If I constrain the number of factors to be extracted to 6, I receive a correct pattern matrix, with all items loading significantly on one factor. How am I supposed to handle that issue? 3. A different variable has a factor loading of 1.008 in the pattern matrix. Are values above 1 reasonable? Thank you very much for your help! BR
@Gaskination10 жыл бұрын
Constrain to six as expected. This is absolutely fine. As for the high factor loading, try a different extraction method, like ML or PCA, and this will usually resolve it. You could also use Varimax as a rotation method. This will dampen high loadings and bolster low loadings.
@Gaskination11 жыл бұрын
I still don't understand your question: "Is there a way to combine doc+staff?" If you mean you want to average their results, then just average them. Add the columns together and then divide by the number of columns.
@twanpeters11 жыл бұрын
And why did you use the Promax rotation method instead of the Varimax or Direct Oblimin?
@datsme88811 жыл бұрын
Thanks Prof. My sample size is 590 and there were total 68 items in the initial scale
@twanpeters11 жыл бұрын
Suppose you have this situation: Extract with EV greater than 1 = 3 extracted factors and a total variance explained of 55%. Extracted with 4 fixed factors = a total variance explained of 63%. 3 factors with an EV >1 and 1 factor with an EV of 0.75. Second option looks better and is better to explain, because of the definition of the items.
@장드보라11 жыл бұрын
Hello Prof. Gaskin. Thanks for all the useful videos. I have a question. Would you explain why you use promax rotation which is oblique. When do we have to use orthogonal rotation?
@safalbatra479111 жыл бұрын
2. The resulting imputed SPSS file gives both first-order and second-order imputed variables. Do we simply ignore the first-order constructs and carry on with our analysis of the second order? If that's the case, why does AMOS impute the first-order constructs in the first place? 3. Is a second-order construct always called formative, as often used by you? Thanks a ton for helping!! Regards
@ito-lutzmario58479 жыл бұрын
Thank you very much for the EFA guide. It's very helpful. However, I still have some issues with loading the constructs correctly. When I was ready to run the EFA with the normalized data (calculated with log10), a big issue appeared that is beyond my current knowledge to solve. Firstly, all the items cross-loaded on every factor. Secondly, they didn't load on the same constructs as I expected or as shown in the video. For example, variables from one construct loaded with ones from other constructs, or IVs loaded with DVs, things like that. I tried various approaches, from different extraction methods to dropping several items or manually fixing the number of factors I want, along with other rotation methods; the data still doesn't work the way I wanted. Although the KMO and p value were acceptable for conducting an EFA (over .624 or so) and the communalities were also high (all over 0.7), the items still didn't form the correct constructs as in the videos.

I searched a lot of papers but failed to find a good solution, so I'm quite stuck at this stage. I also tried with the original non-normal data, and the result is more or less as frustrating. Is there any manual method by which I can load the constructs myself one by one? It seems this automatic loading method doesn't recognize my constructs at all. The constructs were adapted from other researchers' studies rather than generated by myself. I'm really lost at the moment, and I can't continue with the rest of the videos if this problem cannot be solved. It would be great if you could give me some advice on this. I'd be very appreciative!!!
@Gaskination9 жыл бұрын
+Ito-Lutz Mario Most importantly, make sure you are only including reflective latent factors (kzbin.info/www/bejne/naiTqamsf9xgd68). After removing any items that don't belong (based on that video), try again. If it is still a mess, figure out why by looking at the item wording. Perhaps they are too similar conceptually. I like to do separate EFAs for pairs of constructs. Resolve the discriminant validity issues between these pairs, and then run the remaining items through an EFA with all other remaining items.
@ito-lutzmario58479 жыл бұрын
Thank you very much professor Gaskin for the great tips! :)
@Gaskination11 жыл бұрын
1. There is no easy way, but my validity master tool will create a composite reliability score, which is very similar to cronbach's alpha. The way to do it would be to create composites for the lower order factors and then use SPSS to do a cronbach's alpha test with those composites as the indicators. 2. SPSS is no good for this sort of latent structure. You should be using AMOS for this kind of a test.
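Once the composites (or items) sit in columns, the Cronbach's alpha computation itself is one formula. A minimal sketch of the standard calculation, independent of any particular SPSS output:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for indicator columns (rows = respondents)."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_var_sum = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```

Perfectly redundant indicators give alpha = 1, and uncorrelated indicators give alpha near 0, which matches the interpretation of alpha as the proportion of shared variance among the indicators.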
@ProfAnshuSharma9 жыл бұрын
+James Gaskin. Sir, if the nonredundant residuals value is 0.90 (> .05), what does it signify about my data? Can I do anything to correct it? Thanks
@Gaskination9 жыл бұрын
+Ms. Sharma if you mean 0.09, then that is fine. if you really mean 0.90, then that is awful. It means that most of the variance in your model is explained by error... My guess is that you mean 0.09, so you are fine as long as everything else works out.
@michalis8811 жыл бұрын
Hi James, thanks for this video, really straightforward and informative. Any idea what the structure matrix tells us though?
@safalbatra479111 жыл бұрын
Hi James, thanks a lot for replying. By composites for lower order, do you mean the average score of all items of each lower-order construct? Also, when I check multicollinearity for this second-order construct using the procedure you describe in another video, I essentially have to test with the composite scores, right? But since these composites are all from the same latent second-order construct, don't you think they will certainly be multicollinear? Thanks once again for your inputs.
@kemiajayi472311 жыл бұрын
Hi James! Thank you for your videos. Is there anyway to recover from this situation. I have 6 latent constructs - 3 independent and 3 dependent. Doing the EFA shows that one of my independent constructs and 2 of my dependents load on the same factor. The other dependent construct has high cross loadings on that factor too. I have considered that I might have a formative 2nd order dependent construct that includes all 3 DVs. But, what can I do about my 1 independent construct?
@cracrul11 жыл бұрын
Hi teacher, I understand that the MLE method is very strict; however, dealing with categorical variables violates the assumption of normality. How is this violation justified from theory? I would appreciate a book reference that supports it.
@vsiahtiri111 жыл бұрын
Hello again Dr. Gaskin; may I ask if you have any video on curvilinear regression?
@datsme88811 жыл бұрын
Thanks Prof. But after I did my CFA for a Likert-type scale development, I found the chi-square significant (no model fit), but other fit indices like chi-square/df, GFI, AGFI, RMSEA, and TLI support a model which has 4 factors, with each factor containing 3 items. Can you please advise whether the model is valid, or is there anything else I am supposed to do? Please reply.
@AmarpreetG11 жыл бұрын
Hi Dr. Gaskin, thank you for posting this informative video; I followed it step by step with ease. I have a few questions for you. In my analysis, the scores for KMO, communalities, cumulative % explained, pattern matrix loadings, and reliability exceed their respective threshold values, except for two items. In the reproduced correlation table, as opposed to 2-3%, there are 10 (47.0%) nonredundant residuals with absolute values greater than 0.05. However, there are only 2 (1%) nonredundant residuals with absolute values greater than 0.1. My question is: how important are the reproduced correlation residuals for my analysis? Can I ignore them altogether, or can I consider their values good at the 0.1 level (as the rest of my modeling tests the null hypothesis at the 0.1 level of significance)? Secondly, the second component has only two variables (as opposed to the minimum of three per component), although their loadings are pretty good. Is this something I need to worry about? I would highly appreciate it if you could reply at your earliest convenience. Thanks in advance.
@Gaskination11 жыл бұрын
1. If everything else looks fine, you can probably move forward. Just be aware that the issues won't disappear when you move on to the CFA. You will have several modification indices and will struggle a bit with model fit. 2. This will be fine. Sometimes latent factors with only two indicators tend to be unstable and end up with indicator loadings greater than 1.00. If this is the case, you can constrain the indicator regression weights to be equal to each other by naming them both the same thing (I use "a"). Then put the variance constraint on the latent factor (=1).
@jwachira10 жыл бұрын
James Gaskin Yes, I am encountering this problem: two factors with two indicators each have one indicator loading greater than 1. However, I am not clear on your explanation of how to resolve it. 1) Is the procedure you describe done with SPSS in this EFA step, or with AMOS in a later step (CFA or SEM)? 2) Can I show an indicator loading greater than 1 in my report? Thank you very much.
@2aeng11 жыл бұрын
Hi James, let me also add that I am studying 3 different aspects of support from the healthcare service.
@2aeng11 жыл бұрын
Sir, my survey respondents were patients who had to answer questions regarding their doctor and the hospital staff. And these entries do not appear in one column, even though the study considers the whole healthcare service.
@aminamin3411 жыл бұрын
Dear Professor, I'm surprised that you did the EFA considering the IVs and DV together. I usually do the EFA for the IVs separately, the MVs separately, and then the DV. My reason is that there is a cause-and-effect relationship between the IVs and DV, and high correlations between their items are expected, which will lead to cross-loading. Please advise.
@misaosuzuki87059 жыл бұрын
Dear Professor Gaskin, I have found your videos very helpful. I especially appreciated the Confirmatory Factor Analysis video (Part 4) and the associated Pattern Matrix plugin for building the model. It is exceptionally helpful. I do have a question about EFA, and I would appreciate your help. In the pattern matrix for the data I analysed, I have quite a few factor loadings that are negative. I know that negative factor loadings can be a result of negatively scaled items. However, in my questionnaire, the items are all positively scaled. I've asked around to see if anyone could help explain this situation, and I still don't know why. Please could you advise on what the situation is? Thank you.
@Gaskination9 жыл бұрын
+Misao Suzuki Sometimes this happens as a result of estimation error. You can try a different extraction method (such as Maximum Likelihood, or Principal Component Analysis, etc.) or a different rotation method (such as promax or varimax, etc.) to see if that changes things. If not, then what you are observing is items with inverse relationships with the other items in your pattern matrix.
@mahdiesfahani427511 жыл бұрын
Thanks so much for your interesting videos. If I want to test convergent validity and discriminant validity, which software is best: SPSS or AMOS?
@2aeng11 жыл бұрын
I have also tried setting the number of factors to 3, but with no success. The reason there is an uneven distribution of questions for docs and staff is that the items applicable in the case of docs would not be applicable for staff. My study tries to arrive at conclusions on the 3 types of support. I can't have factors loading separately for docs and staff, which is what is now happening with the EFA.
@prabinmaharjan39857 жыл бұрын
Hi James, thanks for the tutorial. I have one query. I have four independent variables (Acquisition, Assimilation, Transformation, and Exploitation). Acquisition has six measurement items (ACQ1 to ACQ6), Assimilation has five (ASC1 to ASC5), Transformation has six (TRC1 to TRC6), and Exploitation has seven (EXC1 to EXC7). These measurement items are measured on a 5-point Likert scale. I did an exploratory factor analysis to get only four factors, because I have only four IVs, so I used 4 as the number of factors to extract. However, when I check the output, factor 1 (which I named Acquisition) has measurement items from other variables as well (Acquisition and Assimilation), factor 2 has measurement items from more than two variables, and so on. In other words, the measurement items are not associated with the factors to which they belong; the indicators of the factors are mixed. I am confused. Is there some mistake in the data set? Do I need to do the EFA for all the variables (Acquisition, Assimilation, Transformation, and Exploitation) at once, or one by one?
@Gaskination7 жыл бұрын
If the factors are reflective, then the EFA is appropriate. If you are observing indicators loading MORE strongly on an unexpected factor, then that is a problem for discriminant validity. You might need to trim away some of the strongly crossloading items. This video may be helpful: kzbin.info/www/bejne/pZbShaOOnrihmcU
@blimeyifancyreading9 жыл бұрын
Dear James, thank you so much for the plugin. It worked like magic. However, I have a few issues running it. When I try to run it, it states "the variable, Engage3, is represented by a rectangle in the path diagram, but it is not an observed variable". I tried rearranging the model but couldn't get it to run. Thanks
@Gaskination9 жыл бұрын
Daisy G That error means that you have a variable in your model called Engage3, but there is no variable in the linked dataset named Engage3. Perhaps you have one with that label, but not with that name. You'll need to check the variable names in your dataset.
@blimeyifancyreading9 жыл бұрын
James Gaskin Hello James, ooo my mistake I edited the names to follow my study variables. I changed it back and tried running it. Different error pops up "The variable name Focused1(Absorbed) is invalid because it contains an invalid character. What does that mean? Thanks.
@Gaskination9 жыл бұрын
Daisy G You can't use parentheses in a variable name. Just change it to Focused1_Abs
@blimeyifancyreading9 жыл бұрын
James Gaskin Thank you so much. Can I have a space in it? My variable name is kinda long, i.e., "I have been engaged with the interactive displays at the visitor centre". Can I name it "Engage 3 Interactive", or should I name it "Engage3_Interactive"? My questionnaire was scanned through the software.
@Gaskination9 жыл бұрын
Daisy G I recommend you try and see what happens. The answer though is that you cannot have spaces in it for some versions of AMOS, but for others you can. SPSS won't allow spaces though.
@jwachira10 жыл бұрын
Hi Dr. Gaskin, I followed the video and ran an EFA with my data. I found that: 1) my model has 5 constructs, but after I ran it with the eigenvalue-greater-than-1 setting, SPSS gave me back 4 factors (factor 5 has an eigenvalue of 0.98). I then fixed the number of factors at 5, and it came out with 5 factors. Is it okay to do that? 2) Two factors in my model (e.g., A and B) each have one item (e.g., A1 and B1) with a loading greater than 1. Would that be a problem? Can I keep them, since the other values are fine? And how can I fix them if they are unacceptable? Thank you.
@Gaskination10 жыл бұрын
1. Yes. that is totally fine. 2. Should be fine, but if you want to fix it, run varimax rotation instead of promax or maximum likelihood.
@yogeshnaik959111 жыл бұрын
Hi James, thanks a bunch for such a resourceful video series. I have collected a data sample of 520 responses and was performing FA on 19 attributes. I have 3 questions: a) There were 5 attributes showing communalities of less than 0.3. Should I remove them from the FA? b) 2 attributes show a value of 0.999. Should they be removed too? c) The reproduced correlations show 18% nonredundant residuals (way higher than 5%). What would be your advice for improving this? Many thanks for your time.
@golnaz64118 жыл бұрын
Dr. Gaskin, I read that the Maximum Likelihood method should only be used if the data are normally distributed, but with Likert-scale data normality may not be accurately calculated, because the data are treated as continuous when assessing normality even though Likert scales are ordinal. May I know your suggestion: should I go on with maximum likelihood and just treat my data as continuous? Thanks.
@Gaskination8 жыл бұрын
Hair et al 2010 suggest that there really isn't much difference in ML vs PCA vs PAF, so it makes little difference which method you choose. I use ML because that is what AMOS uses during the CFA.
@Gaskination11 жыл бұрын
I still don't understand. Do you mean you want three factors to be extracted? Or do you mean you want to do an EFA for docs, then an EFA for the other two separately? Or do you mean something else. I'm sorry we're having such a difficult time communicating, but I really don't understand what you are trying to do.
@twanpeters11 жыл бұрын
There are so many EFA videos and they are all boring to listen to, except yours. Keep up the good work! :)
@caloycarlo913610 жыл бұрын
Thanks so much James, I found your KZbin lectures really helpful. I just want to ask, can we say that EFA is a necessary method (or an important preliminary method) before any statistical analysis?
@Gaskination10 жыл бұрын
Yes, when dealing with latent constructs, it is critical.
@caloycarlo913610 жыл бұрын
Thank you very much James.
@soehartosoeharto84715 жыл бұрын
Dear Prof. James Gaskin, where can I download the Word document that you use to explain things in your video? Thanks
@Gaskination5 жыл бұрын
Sorry, I didn't save this file after the video.
@soehartosoeharto84715 жыл бұрын
Next time I hope you can upload it to the StatWiki website. Thank you, Sir, for replying to my comment.
@hadiyasrebdoost43609 жыл бұрын
Dear James, for reliability, is it necessary to check reliability both before and after doing EFA, or is it enough to do it once after the EFA? I ask because I think we should check the reliability of the original questions against the factors assumed in the underlying theory.
@Gaskination9 жыл бұрын
+hadi yasrebdoost You can do both, but the one you report is the final set of items for each reflective factor.
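The reliability statistic being discussed here (Cronbach's alpha, as typically reported for each reflective factor) is simple enough to compute by hand. A minimal sketch in Python with simulated Likert data (everything below is hypothetical, not the commenter's dataset):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item reflective factor scored 1-5
rng = np.random.default_rng(0)
trait = rng.normal(size=300)
scale = np.clip(np.round(3 + trait[:, None]
                         + rng.normal(scale=0.8, size=(300, 5))), 1, 5)
print(round(cronbach_alpha(scale), 3))
```

Running this once per candidate item set makes it easy to report alpha for the *final* set of items, as the answer above recommends.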
@syedturabnaqvi26438 жыл бұрын
Dear James, thanks for a very helpful video. My reproduced correlation matrix shows that there are 27% non-redundant residuals with absolute values above 0.05. What do I need to do to bring that below 5%? And is it fine if I move ahead with CFA and finally SEM?
@Gaskination8 жыл бұрын
+Syed Turab Haider Naqvi If the number is that high, then there is probably some problem with the factors. Try to extract based on eigenvalues only to see if the number extracted is what you expect. If not, then try to work through removing some of the problematic items.
@safalbatra479111 жыл бұрын
Hello Prof. Gaskin, thanks for these wonderful videos. I have two questions which I am sure you can resolve. 1. I have completed the CFA on my data following your videos, and things seem to work fine. Your Excel sheet also says "Whoo, no validity concerns" :). I am confused about whether I need to go back and do an EFA. Do journals expect to see an EFA as well? Is it mandatory, or can we begin directly with the CFA? I had heard that if all constructs in the study come from existing scales, the EFA can be skipped.
@hadiyasrebdoost43609 жыл бұрын
Dear James, I have one essential question: is it important to check the normality of our data before running SEM? If it is, should we do a normality test for every question, or for the latent factors (for example, if we have a factor measured by three questions, should we test normality for each question or for all of them together)? If the data are not normal, what should we do? Should we use bootstrapping?
@Gaskination9 жыл бұрын
+hadi yasrebdoost Normality on individual items is important. If they are not normal, but are on Likert scales, then there is little you can do to transform them. You might use Bayesian estimation in AMOS if the normality is particularly bad. Usually these items will drop out during the EFA and CFA anyway, due to various effects of non-normality.
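Item-level normality screening like this is usually done with skewness and kurtosis. A minimal sketch in Python (SciPy) on simulated Likert items; the data are hypothetical, and the common |skew| < 2 / |excess kurtosis| < 7 cutoffs are a widely used rule of thumb, not something stated in the video:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical Likert items (columns) for 250 respondents, scored 1-5
items = np.clip(np.round(rng.normal(3.5, 1.0, size=(250, 4))), 1, 5)

for j in range(items.shape[1]):
    col = items[:, j]
    print(f"item{j + 1}: skew={stats.skew(col):.2f}, "
          f"excess kurtosis={stats.kurtosis(col):.2f}")
```

Items that blow past those thresholds are the same ones that, as noted above, tend to drop out during the EFA and CFA anyway.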
@wakaeiwada62210 жыл бұрын
Hello James, thanks so much for making this video. I ran EFA step by step following the clip. Everything looked fine except one item did not load on any factor and if I took it out, the factor loading would mess up. What should I do with this single item? Thank you again!
@Gaskination10 жыл бұрын
It depends on what you mean by "mess up".
@wakaeiwada62210 жыл бұрын
James Gaskin Sorry, I should have been more specific. When I took out the item which did not load on any factor, about 2-3 other items would cross-load. What should I do, since neither keeping nor removing this item looks good? Thanks!!!
@Gaskination10 жыл бұрын
wakaeiwada622 If the crossloadings are tolerable (>0.200 difference), then just accept them. They are not problematic. Otherwise, you might simply try a different extraction or rotation method. Each method will produce slightly different loadings.
@wakaeiwada62210 жыл бұрын
James Gaskin Thank you so much. I appreciate it!!
@ЮлияМузыченко-х5п Жыл бұрын
Hello Prof. Gaskin, thank you again for the videos! I am watching them for the second time (my second time doing SEM). I have a question: is it a necessary step to perform EFA if I use a validated scale from a published article with several decades of use? I've tried different rotations and extraction methods, and I can see that my Cronbach's alpha only gets worse if I delete 3 items (as dictated by the EFA cross-loadings) from a 10-item scale. So I was wondering if it's a good idea to compute the means of all the scales (the rest of them load perfectly well) and apply the AMOS process on the means directly?
@Gaskination Жыл бұрын
EFA is not necessary for validated scales.
@ЮлияМузыченко-х5п Жыл бұрын
@Gaskination Thank you a lot for the reply! Shall I skip the CFA and common method variance the way you do it, and just conduct Harman's single-factor test? And then move on to measurement model fit and the main analysis?
@Gaskination Жыл бұрын
@@ЮлияМузыченко-х5п Still do CFA. You can just skip EFA. But you still need to validate the factors in the context of your dataset.
@ЮлияМузыченко-х5п Жыл бұрын
@@Gaskination thank you!
@hadiyasrebdoost43609 жыл бұрын
Dear Professor, I have one more question. According to your response, we should test normality for every item. But if there are only directly measured items and no latent factors, so we can't do an EFA first, what is the solution for non-normality? Is this the same as the normality assumption in regression? Can we examine it with a Q-Q test?
@Gaskination9 жыл бұрын
+hadi yasrebdoost Yes, even if there are no latent factors, you should still do a normality check for anything that is not categorical. A Q-Q test is a good way to do this.
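A Q-Q check like the one suggested here can be done in SciPy with `probplot`, which compares a variable's quantiles against a normal distribution. A minimal sketch with simulated scores (all hypothetical); points hugging the fitted line, i.e., a fit correlation near 1.0, suggest approximate normality:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
scores = rng.normal(50, 10, size=200)  # hypothetical measured item

# Q-Q comparison against a normal distribution; r near 1.0 suggests normality
(osm, osr), (slope, intercept, r) = stats.probplot(scores, dist="norm")
print(f"Q-Q fit correlation r = {r:.3f}")

# For the visual version: stats.probplot(scores, dist="norm", plot=plt)
# after importing matplotlib.pyplot as plt, then plt.show()
```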
@fathimarushmamohammed8099 жыл бұрын
Hey James, I was reading up on EFA and wanted to conduct one for my research. The more I read, the more I feel like I have been guided in the wrong direction and have just done a PCA :( Why are EFA and PCA used interchangeably?
@Gaskination9 жыл бұрын
+fathima rushma mohammed PCA is an extraction method for EFA. So, if you're doing a PCA, then you're doing an EFA. There are different extraction methods: PCA, PAF, ML, etc.
@chios19768 жыл бұрын
Mr. Gaskin, I have a question about the minimum number of items under one factor. I have a Likert scale (1 to 5) and four factors. One of the factors has only two items; the others have three. Is that enough for CFA? What is your suggestion? And could you recommend some books about it? Thanks.
@Gaskination8 жыл бұрын
+sümer Aktan The ideal number is four (I can't remember the reference). Two is okay, but unstable. Three is fine.
@chios19768 жыл бұрын
Thanks Mr. Gaskin. If you remember the reference, could you please write to me? Thanks.
@caloycarlo91369 жыл бұрын
Hi James, what should I do if all my reverse-coded items group together?
@CarlaliS910 жыл бұрын
Hi, dear James. I don't know if this is a problem, but when I look at the Pattern Matrix, the indicators of my factors are mixed. My factors are Service Quality with 21 indicators, Satisfaction with 5 indicators, Loyalty with 4 indicators, Relational Link with 3 indicators, and finally Competitive Advantage with 2 indicators. After running the EFA, the Pattern Matrix shows the indicators mixed across factors; they are not grouped together like in your example (in your case, DecQual, then Useful, Joy, Playful, etc.). In my case, one factor contains indicators of both Loyalty and Satisfaction. I want the indicators to be associated with the factors to which they belong. How do you get the indicators to load on the factors you expected? Thanks :)
@Gaskination10 жыл бұрын
My guess is that your Service Quality factor is actually comprised of multiple dimensions, and therefore not entirely suited to an EFA with other factors. I would recommend running an unconstrained EFA on just the 21 items for SQ and see how that turns out. Then run a separate one for the other factors. This should work better, especially if you intend on modeling SQ as a 2nd order factor.
@CarlaliS910 жыл бұрын
James Gaskin Thanks :) So, should I take my factor loadings from the values in the Pattern Matrix or the Rotated Factor Matrix?
@Gaskination10 жыл бұрын
***** I'm not sure what you mean, but yes the loadings are found in the pattern matrix or rotated factor matrix.
@hadiyasrebdoost43609 жыл бұрын
Dear Professor, I have two more questions and I will be very happy if you answer me. First, if we have 5 items that describe one factor, is it possible to create a new variable from their average and test that variable's normality? For example, before doing an ANOVA, if we have 5 items that describe one factor, can we take the average of those items and test the normality of that composite? If not, what should we do? Second, if our items (even only one of them) are not normal, can we use bootstrapping, or should we only use Bayesian estimation? Thanks a lot
@Gaskination9 жыл бұрын
+hadi yasrebdoost Yes, it is fine. As for bootstrap, here is something: stats.stackexchange.com/questions/61787/can-bootstrap-be-used-to-replace-non-parametric-tests
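The linked thread discusses bootstrapping as a replacement for non-parametric tests. As a hedged illustration (Python/SciPy rather than AMOS, with simulated data), a bootstrap confidence interval for a mean requires no normality assumption:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical skewed (clearly non-normal) composite scores
scores = rng.exponential(scale=2.0, size=150)

# Bootstrap CI for the mean: resampling replaces the normality assumption
res = stats.bootstrap((scores,), np.mean, confidence_level=0.95,
                      n_resamples=5000, random_state=rng)
print(res.confidence_interval)
```

This is only a sketch of the general idea; AMOS has its own bootstrapping options (and Bayesian estimation) for the SEM context itself.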
@malenechristensen68918 жыл бұрын
Dear Professor Gaskin, thank you for your videos. Is it acceptable to have a pattern matrix in which a loading on one factor is higher than 1 (actually 1.019)? Best regards, Malene & Isak
@Gaskination8 жыл бұрын
+Malene Christensen This is called a Heywood case. You can fix this by changing the rotation or extraction methods. Varimax will always fix it.
@pallavisingh43529 жыл бұрын
Hi James, many thanks for the video; it's really useful. I would really appreciate your input on my confusion. I am using SmartPLS to run PLS-SEM for my PhD data. I have developed a new construct and items for it, and have adapted a few scales to the context of my study. I am confused about whether I should use exploratory factor analysis to establish the factors of all the scales before going to PLS-SEM, or whether I can run PLS right away, use the SmartPLS output to establish the measurement model, and then use the PLS-SEM output to confirm the structural model. Different people are recommending different things, so I would ask for your help with this, please. I would really appreciate your guidance.
@Gaskination9 жыл бұрын
You can definitely use PLS to assess the measurement model. There is even a pattern matrix in SmartPLS to show you the crossloadings. I think it was Gefen and Straub who wrote an article in CAIS to show how to assess measurement models in PLS.
@twanpeters11 жыл бұрын
What problems could arise when you choose "Fixed number of factors" and set it to the exact number of factors in your model, instead of using "Based on eigenvalues greater than 1"?
@cosmopolitan073110 жыл бұрын
Hello James. Thank you for your video. As you mentioned below, I have discovered that the TP, RS, and ES factors load on the same factor (values over 0.5). I followed your steps for conducting SEM and found I had to delete the factors due to validity issues (when running the ValidityMaster Excel sheet). What would my steps be to conduct a 2nd-order factor analysis?
@Gaskination10 жыл бұрын
I have two recommendations that may help to keep those factors: 1. conduct a separate EFA with just those three constructs. Try to increase discriminant validity in that smaller EFA. Then add the remaining items to the full EFA. 2. Perhaps these are part of a 2nd order factor as you indicated. If so, then still run a separate EFA for these three, but don't recombine it with the full EFA. Then, in the CFA, use the approach I show here: kzbin.info/www/bejne/fnO0gaSga5iMbdU
@safalbatra479111 жыл бұрын
And my second question is: I have two constructs called external (4 items) and internal (3 items) loading on a second-order construct called environment. How do I get Cronbach's alpha for environment? Should I just enter the seven items as if they were loading directly on environment? Similarly, if I have to regress a dependent variable on environment in SPSS, how do I compute environment? Should I just average the seven items? Please help me. Thanks a lot in advance.
@najmaimtiaz86506 жыл бұрын
This is really helpful to my research. I have one query. When we perform EFA, we need to add all the factors, i.e., dependent, independent, and mediator, as you do in this video. When the extraction is done, the IVs, DV, and mediator load in order of highest loading first. But when we create our CFA, do we need to specify the IV, DV, and mediator, or is there no need? Please answer this.
@Gaskination6 жыл бұрын
All first-order reflective latent factors should be included in the EFA simultaneously. The main point is to ensure there is discriminant validity between the factors. In the CFA, just follow the solution to the EFA. In a CFA, there is no indication of DV or IV. It is just all covaried.
@renamochi5742 Жыл бұрын
Hi, I have a question about the Total Variance Explained table: I have 4 factors, but only 2 are extracted. Thank you!
@Gaskination Жыл бұрын
Constrain it to a specific number: go to the extraction options and choose "Fixed number of factors" (exactly 4) instead of "Based on eigenvalues".
@Gaskination11 жыл бұрын
I would definitely go with the second option.
@frederikeengelhardt24299 жыл бұрын
Dear James, thank you for bringing light into the darkness of statistics :) Your videos help a great deal, but I still have a question. In your video you state that there is no right/wrong when it comes to deleting items which load too low or cross-load. Now, conducting an EFA (ML, Promax) on our own data, we're wondering how to proceed if there are quite a few items that load too low or cross-load. Can we delete several of them at once (the data set contains about 45 variables), or do we have to proceed one at a time? It would be great to hear back from you!
@TheAISChannel9 жыл бұрын
Frederike Engelhardt The general rule of thumb is to keep items with primary loadings (on their parent factor) of 0.500 or higher, although you could probably get away with going as low as 0.450. For cross loadings, you want to make sure they are at least 0.100 different from the primary loading. I prefer 0.200 in order to ensure discriminant validity later during the CFA. I recommend removing items one at a time, because each one will have different repercussions on the model.
@Gaskination9 жыл бұрын
Frederike Engelhardt Oops, TheAISChannel is me... I was signed in to a different account.
@frederikeengelhardt24299 жыл бұрын
James Gaskin Haha! Thank you so much for the help!!
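The loading rules of thumb given a few comments above (primary loading ≥ 0.500, at least a 0.100-0.200 gap to any cross-loading) can be checked mechanically once you have the pattern matrix. A minimal pandas sketch with a hypothetical pattern matrix (the item names, factor names, and values are invented for illustration):

```python
import pandas as pd

# Hypothetical pattern matrix (rows = items, columns = factors)
pattern = pd.DataFrame(
    {"F1": [0.78, 0.65, 0.46, 0.10, 0.33],
     "F2": [0.05, 0.12, 0.08, 0.81, 0.52]},
    index=["A1", "A2", "A3", "B1", "B2"])

abs_load = pattern.abs()
primary = abs_load.max(axis=1)                               # highest loading per item
secondary = abs_load.apply(lambda r: r.nlargest(2).iloc[-1], axis=1)

flags = pd.DataFrame({
    "low_primary": primary < 0.50,                # weak primary loading
    "crossloading": (primary - secondary) < 0.20, # gap below the stricter 0.200
})
print(flags)
```

In this invented example, A3 is flagged for a weak primary loading and B2 for a cross-loading gap under 0.200, the two cases the rule of thumb is meant to catch.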
@sweety09829 жыл бұрын
Sir, is it necessary to do an EFA to extract the factors before SEM, or can we construct a latent variable directly in AMOS with constraints, e.g., only keeping loadings above 0.5?
@Gaskination9 жыл бұрын
+Pinky Pawaskar I always do an EFA first because it reveals discriminant validity issues much more easily than the CFA.
@benjaminketuphel189710 жыл бұрын
I've been trying to do EFA in SPSS, but my variables are not loading as expected, i.e., according to pu1-pu4 or com1-com4, and the number of factors is supposed to be 8, but it displays 7. What is the problem there?
@Gaskination10 жыл бұрын
benjamin ketuphel Try constraining it to 8 to see if that helps. Do this in the extraction option in the EFA. It factors into seven because only seven factors are reaching the eigenvalue threshold of 1.00. If you lowered that threshold, it would achieve the same thing as constraining to eight factors.
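The eigenvalue-greater-than-1 rule described here can be previewed directly from the correlation matrix. A minimal NumPy sketch with simulated data (three planted factors; everything is hypothetical, not the commenter's dataset), showing how the count of eigenvalues above 1.00 determines the number of factors SPSS extracts by default:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical data: 9 items driven by 3 underlying factors (3 items each)
latent = rng.normal(size=(300, 3))
loadings = np.kron(np.eye(3), np.ones((3, 1))) * 0.8
items = latent @ loadings.T + rng.normal(scale=0.5, size=(300, 9))

corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

print(np.round(eigenvalues, 2))
print("factors with eigenvalue > 1:", int((eigenvalues > 1).sum()))
```

If an expected factor sits just under the 1.00 threshold (like the 0.98 case earlier in this thread), the count comes out one short, and constraining the extraction to the expected number is the fix.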
@abdullaalghalai721410 жыл бұрын
Thanks Dr. Gaskin. Excuse me, I have 4 factors for one variable; one of the factors has 8 items and the 3 other factors have 4 items each. When I ran the EFA, it did not produce the desired result, although I followed the same steps as in your video. Am I allowed to do an EFA on these factors separately? Thank you
@Gaskination10 жыл бұрын
Doing an EFA on the first-order factors of a second-order factor is problematic because we expect them to overlap considerably. Instead, I would do a Cronbach's alpha reliability test on each subconstruct, and then see if the higher-order construct also has reliability and convergent validity.
@abdullaalghalai721410 жыл бұрын
thank you so much
@Gaskination11 жыл бұрын
Varimax is too soft (it reduces high loadings and increases low loadings). Promax allows for correlated factors.
@prashantchopdar8 жыл бұрын
Dear Prof. James, do I have to do an EFA for formative measures as well, or is it not required?