Understanding Factor Loading and Heywood Cases

This session focuses on factor loadings: when to delete an item based on a low loading, and how to deal with a standardized loading greater than 1, commonly referred to as a Heywood case.

Factor Loadings

  • Factor loadings are the correlations between a construct and each of its indicators (i.e., correlation weights), which become the indicator loadings.
  • A factor loading is a coefficient generated when evaluating the measurement model in a confirmatory factor analysis.
  • A factor loading indicates how well a factor represents a variable.
  • Each factor loading is a measure of the importance of the variable in measuring its factor.
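As a quick numeric illustration (the loading value here is hypothetical, not from the source), the squared standardized loading gives the share of an indicator's variance that the factor explains:

```python
# The squared standardized loading is the proportion of an indicator's
# variance explained by the factor (the item's communality in a
# single-factor model). The 0.78 value is purely hypothetical.
loading = 0.78
variance_explained = loading ** 2
print(f"A loading of {loading} explains {variance_explained:.1%} of the indicator's variance")
```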

If I Have a Weak Factor Loading, Should I Drop It?

  • If you have a factor loading that is near or below the .70 threshold, it does not automatically mean you need to drop the indicator from the analysis. Complex or newly developed constructs will often have numerous indicators in an attempt to capture the construct comprehensively.
  • If you have numerous indicators loading strongly on the unobserved construct and an AVE value that still exceeds .50, then I would suggest keeping the indicator. The weaker indicator could very well be helping to capture a unique component of the construct.
  • That said, if the loading is nowhere near the threshold (< .60), the item contributes very little to understanding the unobservable construct.
  • With a factor loading lower than .60, you are explaining barely a third of the variance in the indicator (.60² = .36). If this is the case, you should strongly consider dropping the indicator.
  • This poor-performing indicator can create more unexplained variance in your model and ultimately hurt your ability to achieve convergent and discriminant validity.
  • A word of caution should be given about deleting indicators in the measurement analysis. If you collect data on some phenomenon and in the analysis decide to start dropping indicators from your constructs, then you really need to have a second data collection to verify that your revised scales (without the dropped items) are valid.
  • Having a single sample and dropping indicators sets you up for criticism that you are capitalizing on chance. If you cannot verify that changes you made in the scales are stable and not based on the unique aspects of that specific data collection, then criticism could ensue in regard to the validity of your results.
  • That is why pretesting a survey or scales is so important, especially with an indicator that is being adapted into a new context or even if a relatively new construct is being measured.
  • The pretest should be where indicators are dropped, and your final data collection should verify the structure and measurement of each construct established at the end of the pretest.
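The AVE check described above can be sketched in a few lines. The loadings below are hypothetical; AVE is computed as the mean of the squared standardized loadings:

```python
def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical construct with four strong indicators and one weak one (0.55).
loadings = [0.82, 0.79, 0.76, 0.74, 0.55]
print(f"AVE = {ave(loadings):.3f}")
# AVE remains above the .50 threshold here, so under the guideline above
# the weak indicator could reasonably be retained.
```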

Another Perspective!

  • Although a factor loading above 0.70 is desirable (Vinzi, Chin, Henseler, & Wang, 2010), researchers frequently obtain weaker outer loadings (< 0.70) in social science studies.
  • Rather than automatically eliminating such indicators, researchers should examine the effect of removing the item on composite reliability, content validity, and convergent validity.
  • Generally, items with outer loadings between 0.40 and 0.70 should be considered for removal only if deletion increases composite reliability or average variance extracted (AVE) above the recommended threshold (Hair et al., 2016).
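This decision rule can be illustrated with the standard composite-reliability formula, CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)); the loadings below are again hypothetical:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each standardized error variance is 1 - loading^2."""
    s = sum(loadings)
    errors = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + errors)

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

with_item = [0.82, 0.79, 0.76, 0.74, 0.55]   # weak 0.40-0.70 item included
without_item = with_item[:-1]                 # weak item dropped

# Dropping the weak item changes CR only slightly but raises AVE noticeably;
# under the rule above, removal is justified only if it lifts CR or AVE
# above the recommended threshold.
print(f"CR:  {composite_reliability(with_item):.3f} -> {composite_reliability(without_item):.3f}")
print(f"AVE: {ave(with_item):.3f} -> {ave(without_item):.3f}")
```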

What If I Have a Standardized Factor Loading Greater Than 1?

  • A standardized factor loading greater than 1 means that you are explaining more than 100% of the variance in an indicator.
  • In this instance, you will also see a negative number in your error term, which is often called a “Heywood case”.
  • A Heywood case is often the result of an outlier, multicollinearity between indicators, or a misspecified model.
  • You will see Heywood cases more often when a construct has only two indicators.
  • Possible solutions include removing a covariance between indicator error terms, deleting the problematic indicator, dropping outliers, adding another indicator to the unobserved variable, or switching from maximum likelihood estimation to GLS (generalized least squares; this can be done in the Analysis Properties window).
  • You can also move the constraint (Parameter -> Regression Weight) to another indicator.
  • If you are only concerned with the standardized factor loadings (and not the unstandardized ones), I have seen Heywood cases addressed by constraining the unobserved variable’s variance to “1” and then labeling all the paths from the unobservable construct to the indicators with the same term (like “A”).
  • Labeling all the paths to the indicators with the same name constrains them to be equal. The unstandardized estimates will therefore all be the same, but the standardized estimates will still reflect the differences between the indicators.
  • This is not an ideal method for addressing a Heywood case, but it is an option if all else fails.
  • Note: unstandardized loadings greater than 1 are perfectly acceptable.
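The link between a standardized loading above 1 and a negative error term follows directly from the standardized identity error variance = 1 − λ². A minimal sketch, with hypothetical loading values:

```python
def standardized_error_variance(loading):
    """In a fully standardized solution, an indicator's error variance is 1 - loading^2."""
    return 1 - loading ** 2

# A standardized loading above 1 necessarily implies a negative error
# variance -- the classic signature of a Heywood case.
for lam in (0.85, 1.04):
    ev = standardized_error_variance(lam)
    flag = "Heywood case!" if ev < 0 else "ok"
    print(f"loading {lam}: error variance {ev:.3f} ({flag})")
```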


Collier, J. E. (2020). Applied structural equation modeling using AMOS: Basic to advanced techniques. Routledge.
