Exploratory Factor Analysis (EFA) is a statistical technique used to identify the latent relational structure among a set of variables and reduce them to a smaller number of variables. This essentially means that the variance of a large number of variables can be described by a few summary variables, i.e., factors. Here is an overview of exploratory factor analysis:

As the name suggests, EFA is exploratory in nature – we don’t know the latent variables in advance, and the steps are repeated until we arrive at a lower number of factors. In this tutorial we’ll look at EFA using R. First, let’s get a basic idea of the dataset.

This dataset contains 90 responses for 14 different variables that customers consider while purchasing a car. The survey questions were framed using a 5-point Likert scale with 1 being very low and 5 being very high. The variables were the following:

- Price
- Safety
- Exterior looks
- Space and comfort
- Technology
- After sales service
- Resale value
- Fuel type
- Fuel efficiency
- Color
- Maintenance
- Test drive
- Product reviews
- Testimonials

Click here to download the coded dataset.

Now we’ll read the dataset present in CSV format into R and store it as a variable.

data <- read.csv(file.choose(), header = TRUE)

It’ll open a window to choose the CSV file, and the `header` option makes sure that the first row of the file is treated as the header. Enter the following to see the first several rows of the data frame and confirm that the data has been stored correctly.

head(data)
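Before moving on, it can help to confirm the shape of the data (90 rows by 14 columns). Here is a minimal sketch; the simulated data frame below is only a stand-in so the snippet runs on its own – with the real survey file, the `read.csv()` call above already produces `data`:

```r
# Stand-in for the survey data: 90 Likert-scale (1-5) responses
# across 14 variables, so this check runs without the CSV file.
set.seed(1)
data <- as.data.frame(matrix(sample(1:5, 90 * 14, replace = TRUE), ncol = 14))

dim(data)   # expect 90 rows and 14 columns
str(data)   # variable names and types at a glance
```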

Now we’ll install the required packages to carry out further analysis. These packages are `psych` and `GPArotation`. In the code given below, we are calling `install.packages()` for installation.

install.packages('psych')
install.packages('GPArotation')

Next we’ll find out the number of factors that we’ll select for factor analysis. This is evaluated via methods such as parallel analysis and the eigenvalue criterion.
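The eigenvalue (Kaiser) criterion retains factors whose eigenvalues of the correlation matrix exceed 1. A minimal sketch follows; the simulated data frame is only a stand-in for the survey data read in earlier:

```r
# Kaiser criterion sketch: count eigenvalues of the correlation
# matrix that exceed 1. The simulated frame below stands in for
# the real `data` so the snippet is self-contained.
set.seed(1)
data <- as.data.frame(matrix(sample(1:5, 90 * 14, replace = TRUE), ncol = 14))

ev <- eigen(cor(data))           # eigendecomposition of the correlation matrix
n_factors <- sum(ev$values > 1)  # number of eigenvalues greater than 1
print(n_factors)
```

The Kaiser criterion tends to over-extract, which is why parallel analysis (below) is generally preferred; comparing the two gives a plausible range.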

**Parallel Analysis**

We’ll be using the `psych` package’s `fa.parallel` function to execute the parallel analysis. Here we specify the data frame and the factor method (`minres` in our case). Run the following to find an acceptable number of factors and generate the scree plot:

library(psych)
parallel <- fa.parallel(data, fm = 'minres', fa = 'fa')

The console would show the maximum number of factors we can consider. Here is how it’d look:

*“Parallel analysis suggests that the number of factors = 5 and the number of components = NA“*

Given below is the scree plot generated from the above code:

The blue line shows eigenvalues of actual data and the two red lines (placed on top of each other) show simulated and resampled data. Here we look at the large drops in the actual data and spot the point where it levels off to the right. Also, we locate the point of inflection – the point where the gap between simulated data and actual data tends to be minimum.

Looking at this plot and the parallel analysis, anywhere between 2 and 5 factors would be a good choice.

Now that we’ve arrived at a probable number of factors, let’s start off with 3 as the number of factors. In order to perform factor analysis, we’ll use the `psych` package’s `fa()` function. Given below are the arguments we’ll supply:

- r – Raw data or a correlation or covariance matrix
- nfactors – Number of factors to extract
- rotate – Although there are various types of rotations, `Varimax` and `Oblimin` are the most popular
- fm – One of the factor extraction techniques like `Minimum Residual (OLS)`, `Maximum Likelihood`, `Principal Axis`, etc.

In this case, we will select oblique rotation (rotate = “oblimin”), as we believe the factors are correlated. Note that Varimax rotation is used under the assumption that the factors are completely uncorrelated. We will use `Ordinary Least Squares/Minres` factoring (fm = “minres”), as it is known to provide results similar to `Maximum Likelihood` without assuming a multivariate normal distribution, and it derives solutions through iterative eigendecomposition, like principal axis.

Run the following to start the analysis:

threefactor <- fa(data, nfactors = 3, rotate = "oblimin", fm = "minres")
print(threefactor)

Here is the output showing factors and loadings:

Now we need to consider loadings of more than 0.3, and variables should not load on more than one factor. Note that negative values are acceptable here. So let’s first establish the cut-off to improve visibility:

print(threefactor$loadings, cutoff = 0.3)

As you can see, two variables have become insignificant and two others have double loadings. Next, we’ll consider 4 factors:

fourfactor <- fa(data, nfactors = 4, rotate = "oblimin", fm = "minres")
print(fourfactor$loadings, cutoff = 0.3)

We can see that it results in only single loadings. This is known as **simple structure**.

Run the following to look at the factor mapping:

fa.diagram(fourfactor)

Now that we’ve achieved a simple structure it’s time for us to validate our model. Let’s look at the factor analysis output to proceed:

The root mean square of residuals (RMSR) is 0.05. This is acceptable, as this value should be close to 0. Next, we should check the RMSEA (root mean square error of approximation) index. Its value of 0.001 shows good model fit, as it’s below 0.05. Finally, the Tucker-Lewis Index (TLI) is 0.93 – an acceptable value considering it’s over 0.9.
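These indices are printed as part of the `fa()` output, but they can also be pulled directly from the fitted object. A minimal sketch, assuming `fourfactor` is the model fitted above (field names as exposed by recent versions of `psych`):

```r
# Pull the fit indices straight from the psych::fa object.
fourfactor$rms    # root mean square of residuals (RMSR)
fourfactor$RMSEA  # RMSEA together with its confidence interval
fourfactor$TLI    # Tucker-Lewis Index
```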

After establishing the adequacy of the factors, it’s time for us to name the factors. This is the theoretical side of the analysis where we form the factors depending on the variable loadings. In this case, here is how the factors can be created:
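Once each variable is assigned to a named factor, a simple composite score per respondent can make the factors usable downstream. The grouping below is purely hypothetical for illustration – your actual groupings come from the loading table:

```r
# Illustrative sketch only: suppose, hypothetically, that Price,
# Resale_value and Maintenance loaded together on one factor.
# The stand-in frame below replaces the real survey data so the
# snippet runs on its own.
set.seed(1)
data <- as.data.frame(matrix(sample(1:5, 90 * 3, replace = TRUE), ncol = 3))
names(data) <- c("Price", "Resale_value", "Maintenance")

# Composite score: row mean of the items belonging to the factor.
cost_of_ownership <- rowMeans(data[, c("Price", "Resale_value", "Maintenance")])
head(cost_of_ownership)
```

Alternatively, model-based factor scores are available directly from the fitted object via `fourfactor$scores`.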

In this tutorial, we discussed the basic idea of EFA and covered parallel analysis and scree plot interpretation. Then we moved to factor analysis to achieve a simple structure and validated it to ensure the model’s adequacy. Finally, we arrived at names for the factors from the variables. Now go ahead, try it out, and post your findings in the comment section.

In the next post, we’ll look at Confirmatory Factor Analysis.

## 38 Comments

Best tutorial on factor analysis in R on the internet…. what a weird place to find it.

Glad that you found it useful. But, why did you think that this is a weird place for such tutorial?

After so many attempts to find explanation of FA in R that actually makes sense. Thankyou!!!

Thank you. Nice tutorial

Great tutorial! Thanks a lot.

This is the best tutorial on web…..plz upload more.

Just Awesome!

Useful tutorial, simply explained so that newbie can understand easily.

Thank you!

great, clear explanation…thanks!

A newbie has understood this complicated concept, Thanks …

This was really helpful! Now I’m ready to do a confirmatory factor analysis. I’m unable to find the post on the blog. Have you written a CFA post?

Thanks a lot for the great post. Did you use any special command to get RMSEA and TLI?

You’re welcome 🙂 Special commands are not required for these values.

We have not yet planned for this, but I’ll try to fit this in our content calendar soon.

Thank you very much, it was excellent.

Great tutorial! Very useful! Thanks!

I used the data and instructions verbatim, alas, got much different results. My loadings are different after doing the first fa() call (with the same parameters). When I do the cut-off at 0.3 in the first iteration, only Exterior_looks drops out; Safety remains in with a loading of 0.311 on MR2. Otherwise I found the tutorial very instructive; I just wish I would get verbatim results with the same input data / same set of commands.

Brilliant. This helped me a lot.

thank you. very useful. understandable. but how can I get the factor analysis output? (which code?)

I’m not sure what exactly you mean; code is available in this tutorial.

Hi, Why the cut-off values are considered 0.3, Is there any specific reason? How do we know what cut-off should be considered? Could you please help me in understanding it.

There are no hard and fast rules. Most research papers suggest 0.4 or 0.3. Also, please note that with a significantly large sample size, you can take the cut-off value at 0.2 as well.

Thank you very much for your excellent tutorial. The only improvement I can recommend is to include references, for example, the citations that cite using a .2, .3., and .4 cut-off. (I would appreciate these original/primary sources are appreciated, also.) For goodness-of-fit I was able to find these references – Kline and Hooper and listed them below. Kline is available from Amazon (I have no relationship with Amazon, nor am paid by Amazon, nor own stock in Amazon) for a reasonable charge and Hooper is freely available. Thank you very much for doing such a great job!

Kline, R. B. (2004). Principles and practice of structural equation modeling (2nd ed.). New York: The Guilford Press.

Hooper, D., Coughlan, J., & Mullen, M. (2008). Structural equation modelling: Guidelines for determining model fit. Electronic Journal of Business Research Methods, 6(1), 53–59.

Thanks for your help, I understood a lot.

Great tutorial, worked right away! 🙂

Thank you very much, very clearly explained

Brilliant example. Thank you.

This was great!!!

Thank you very much for this great post, it’s one of the best available online!

this awesome,

please I need more information on something. once you have identified the factors, how can you know which variables from the original data set are responsible for those factors?

Very simple and useful explanation, great work 🙂 thank you so much

Thanks a lot, very helpful. Tried it with my data and cannot come up with a number of factors allowing single loadings only. The best possibility (with 6 factors) shows 1 double loading, RMSR = 0.05, RMSEA = 0.08 (CI: 0.077–0.082) and TLI = 0.597.

How should I proceed if I want to improve it? Thanks in advance ……

you’re the best !!!

Thank you for getting back to me. That sounds great! It is a fantastic article that helps me, much indeed Information. This was really helpful!

Great explanations.

Great job…!!!!

Thank you.

Awesome! Thanks a lot



Thank You very much Hasitha, You can check out our other blogs that are featured here– https://www.promptcloud.com/blog/