CP 6691 - Week 5

Correlational Research Designs


Interactive Table of Contents (Click on any Block)

Purpose of Correlational Research
How to Identify This Type of Design
Description of a Correlation Coefficient
Statistical Significance of Correlation Coefficients
A Quick Look at Bivariate Correlational Statistics
A Closer Look at Prediction Studies
Evaluating Correlational Research Studies
Evaluating Sample Study #15 (Burnout and Counselor ...)

Assignment for Week 6

Purpose of Correlational Research

Unlike the other research designs we will study in this course, correlational research can have either of two purposes. One purpose for doing correlational research is to determine the degree to which a relationship exists between two or more variables. Notice that I did NOT say cause-and-effect relationship. Correlational research designs are incapable of establishing cause-and-effect. What's the difference? Well, it's really quite simple. Variables can relate to one another without one causing the other to occur.

Here's a simple example: When you move a light switch to a certain position, the lights in the room come on; when you move the light switch to the opposite position, the lights go off. There is a relationship between moving the light switch and the lights going on and off. However, the light switch doesn't cause the lights to go on and off. Don't believe me? Can the lights in a room go off without moving the light switch? Can you move a light switch to "on" and have the lights not come on? The answer to both questions is, of course, yes. The light switch does NOT cause the lights to go on or off -- it controls the flow of electricity, and electricity is what makes the lights go on or off. So, there is a relationship between the light switch and the lights, but not a cause-and-effect relationship. But what if someone didn't know about electricity? They would see someone moving the light switch and the lights going on and off, and they would conclude that the light switch caused the lights to go on and off. And, of course, they would be wrong. So how can you tell if something that relates to something else actually causes it to happen? You can't without doing a special, controlled study. To sum this up simply, statisticians have a saying: "correlation does not imply causation," which means that just because two variables correlate with each other does not necessarily mean that one causes the other to occur.

The second purpose for correlational research is to develop prediction models to be able to predict the future value of a variable from the current value of one or more other variables. A common prediction model used in education is the use of college entrance exam scores to help predict a prospective student's success in college. Colleges and universities work hard to develop the best prediction models they can to ensure that the most potentially successful students are admitted. To increase the predictive power of their models, they use correlational research methods, some of which we'll discuss a little later in this lesson.


How to Identify This Type of Design

Correlational research studies are almost always given away by the purpose statement (or hypothesis or objective). Usually somewhere in these statements will appear the phrase "determine the relationship between ... ." That's the clue that this is a correlational design. You will also notice that correlational research studies do NOT form groups. They are like descriptive research designs in this respect. So, if you look in the Methodology section of the research report and find that no groups are formed, then you know it cannot be causal-comparative research. It must be either descriptive or correlational. The difference between these two designs can really only be found in the purpose statement.


Description of a Correlation Coefficient

Before getting into the different statistics, let's make sure you understand what a correlation coefficient is. A correlation coefficient can take on a value anywhere between -1 and +1 (including zero). The sign of the coefficient simply means whether the correlation is direct (positive) or inverse (negative). The strength of the coefficient is determined by its numerical value. That is, a correlation coefficient of -.75 and one of .75 are of equal strength. Note: positive correlation coefficients are usually shown without any sign in front of the number.

A direct (positive) correlation occurs when variable B increases as variable A increases, and variable B decreases as variable A decreases. An example of a direct correlation is the relationship between the height of a column of mercury in a glass tube (a thermometer) and the ambient air temperature.

An inverse (negative) correlation occurs when variables A and B act opposite to each other (when A increases, B decreases and vice versa). For example, consider the relationship between fatigue and rest. The more rest one gets, the less fatigued he/she is, and vice versa (usually).

"Usually" is an important word when we speak of correlations, because there are very, very few variables (if any) that correlate perfectly with each other. Consequently, the vast majority of correlations are less than 1 (or greater than -1), and are typically reported in decimal form. You could check virtually any elementary statistics text and find some rule-of-thumb regarding what constitutes strong and weak correlations. Here's a typical range of values that'll work for our needs:

0.7 to 1.0 -- strong correlation
0.4 to 0.69 -- moderate correlation
0.1 to 0.39 -- weak correlation
0.0 -- no correlation
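
Here's a minimal sketch in Python (assuming NumPy is available, with made-up data) of computing a Pearson r and classifying its strength using the rule-of-thumb ranges above:

    import numpy as np

    hours_studied = np.array([2, 4, 5, 7, 8, 10])       # hypothetical variable A
    exam_scores = np.array([58, 65, 70, 74, 80, 88])    # hypothetical variable B

    r = np.corrcoef(hours_studied, exam_scores)[0, 1]   # Pearson product moment r

    strength = abs(r)  # the sign only tells direction, so judge strength on |r|
    if strength >= 0.7:
        label = "strong"
    elif strength >= 0.4:
        label = "moderate"
    elif strength >= 0.1:
        label = "weak"
    else:
        label = "no correlation"

    print(f"r = {r:.2f} ({label})")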

Statistical Significance of Correlation Coefficients

We discussed the concept of statistical significance in Week 3. We learned that when the result of a statistical test is significant, it means that it would not occur by chance more often than a certain percentage of the time (5 times, or 1 time, out of 100). We can apply this concept to correlation coefficients too, but it sometimes confuses students, so let's take a moment to clarify it. The value of a correlation coefficient (from the previous section) has no bearing on whether or not it is statistically significant. That is, it's quite possible for a correlation coefficient of 0.1 to be statistically significant. This would mean that the two variables have very little relationship to one another, and that this result is most probably NOT a chance occurrence. On the other hand, a correlation coefficient of 0.95 might not be statistically significant. This would mean that although the relationship between the two variables in this study was very strong, it most likely occurred by chance in this particular sample and would not likely occur again in another sample from the same population. So, do NOT assume that large coefficients are automatically statistically significant or that small coefficients are not. There is no connection between the size of a correlation coefficient and whether or not it's statistically significant; whether it is significant depends largely on the size of the sample. Just concentrate on the "p" value associated with each correlation coefficient. If it is less than the alpha level set by the researcher, then the coefficient is statistically significant, regardless of its numerical value.
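
To see this concretely, here's a minimal sketch (assuming SciPy is available) that computes the two-tailed p value for a Pearson r using the standard t test for correlation coefficients. Notice how a weak coefficient with a large sample is significant, while a very strong coefficient with a tiny sample is not:

    from scipy import stats
    import math

    def p_value(r, n):
        """Two-tailed p for a Pearson r based on n paired observations."""
        t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)  # t statistic, df = n - 2
        return 2 * stats.t.sf(abs(t), df=n - 2)

    print(p_value(0.10, 500))  # weak r, large sample: p is about .025 (significant at .05)
    print(p_value(0.95, 4))    # strong r, tiny sample: p is about .05 (not clearly significant)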


A Quick Look at Bivariate Correlational Statistics

There are two categories of correlational statistics: bivariate (two variables) and multivariate (many variables). Correlation coefficients used to satisfy the first of the purposes mentioned above (to identify the strength and direction of a relationship between variables) are bivariate in nature. That means we can use these statistics to correlate only two variables at a time. The multivariate correlational statistics are used to support the second purpose (to develop prediction models). They're a lot more complicated, so we won't go into much detail about them. We'll deal with them in the next section.

For now, let's look at bivariate correlational statistics. In your text, Table 12.3 on page 273 displays ten different bivariate statistics. Can you tell which one(s) is(are) parametric? Check your answer if you're not sure. Take a moment to look at the different forms the variables can take, and notice that there is a different statistic for each combination of data types. The majority of the statistics in the table are nonparametric and are, by their nature, rather weak -- certainly weaker than their parametric counterparts. They are so weak, in fact, that their limited power needs to be focused on narrow ranges of conditions to be effective. For example, look at the rho and tau coefficients. Both are used when the variables being correlated are in the form of ranks, but tau is preferred when the sample size is less than 10, while rho is preferred when the sample size is greater than 10 and less than 30.
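
Here's a minimal sketch (assuming SciPy is available, with made-up rankings) of computing both rank-order coefficients mentioned above:

    from scipy import stats

    judge_1 = [1, 2, 3, 4, 5, 6, 7, 8]  # one judge's rankings of 8 contestants
    judge_2 = [2, 1, 4, 3, 5, 7, 6, 8]  # a second judge's rankings

    rho, p_rho = stats.spearmanr(judge_1, judge_2)
    tau, p_tau = stats.kendalltau(judge_1, judge_2)  # often preferred for samples under 10

    print(f"rho = {rho:.2f} (p = {p_rho:.3f})")
    print(f"tau = {tau:.2f} (p = {p_tau:.3f})")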

From our discussion of statistical tools in Week 3, recall that we discussed 4 primary data types: ratio and interval (collectively referred to as continuous), ranks, and nominal (or categorical). There appear to be two new types in this table: artificial and true dichotomies. Actually, both of these are special types of categorical data. Dichotomies have only two possibilities -- so, only two categories. A true dichotomy is naturally occurring (not researcher-made), like gender (male and female). An artificial dichotomy is arbitrarily determined (usually by some kind of cut-off point), like a pass-fail point on a test, or rich-poor (someone arbitrarily determines the dividing line), etc. Get the idea?

Why are there so many different coefficients? To fit nearly every data combination. Let's say, for instance, you wanted to measure the degree of correlation (or relationship) between students' academic achievement on a standardized 12th grade math test and their IQ scores as measured by the Stanford-Binet (testing the hypothesis that higher IQs are positively related to achievement test scores). What would be the most appropriate correlation coefficient to use? Your answer should go something like this: "Since academic achievement test scores and IQ test scores are both continuous, I would use either a Pearson r (product moment correlation) or an eta coefficient." Understand? Good (I hope you said yes!). Let's try another example. Say you wanted to see if there was a relationship between gender and academic performance in high school (again, measured by an academic achievement test). What would be the most appropriate statistic to use? Check Table 12.3 (pg. 273) in the text, then check your answer. Did you get it right? If not, talk with your classmates or with the instructor.
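
In case it helps to see one of these in action: a standard choice for correlating a true dichotomy with a continuous variable is the point-biserial coefficient. Here's a minimal sketch, assuming SciPy is available and using made-up data:

    from scipy import stats

    gender = [0, 1, 0, 1, 1, 0, 1, 0]          # true dichotomy, coded 0/1
    scores = [72, 81, 68, 85, 79, 70, 88, 74]  # continuous achievement scores

    r_pb, p = stats.pointbiserialr(gender, scores)
    print(f"point-biserial r = {r_pb:.2f}, p = {p:.3f}")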

Let's try another example. Say you wanted to determine the relationship between hair color (say blonde, brunette, and redhead) and academic achievement in math (to test the hypothesis that blondes are better at math than brunettes or redheads). What statistic would be most appropriate? Well, we know that academic achievement is a continuous variable. We should realize that hair color is a categorical variable (with only three categories in this example). Looking through the table, however, we can't find a correlation coefficient that works with this combination of data types. Now what?

Now we take advantage of a technique known as data reduction. Recall back to Week 3 when I listed the four data types in the following order:

  • Ratio --- (Together, these two
  • Interval --- were called continuous)
  • Ordinal --- (Called Ranks here)
  • Nominal --- (Called Categorical here)
What data reduction allows us to do is change continuous data into either rank data or categorical data. One problem, though -- we lose information when we reduce data. So, it's not something researchers do lightly. But there are times (like now) when it's necessary. Since we can't reduce the categorical (hair color) data any lower, we have to reduce the continuous (achievement) data. What should we reduce it to? Well, let's look at Table 12.3 again and see what other data types can be correlated with categorical data. The second coefficient from the bottom of the table, the Contingency Coefficient, correlates two categorical variables. So, we should reduce our continuous (achievement) data to categorical data. That's actually pretty easy. We can change the achievement (typically percentage or raw) scores to letter grades (A, B, C, D, F), which are categorical.
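
Here's a minimal sketch of that data-reduction step in Python (the cut-off points are hypothetical, just to illustrate):

    def to_letter_grade(score):
        """Collapse a continuous score into a categorical letter grade."""
        if score >= 90: return "A"
        if score >= 80: return "B"
        if score >= 70: return "C"
        if score >= 60: return "D"
        return "F"

    scores = [95, 82, 71, 67, 55, 88]              # continuous (percentage) scores
    grades = [to_letter_grade(s) for s in scores]  # categorical data
    print(grades)  # ['A', 'B', 'C', 'D', 'F', 'B']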

So, what we've seen so far is that even though there isn't a correlation coefficient for every possible data pair, it is possible, through data reduction, to create a data set for which there is an appropriate correlational statistic. In the next section we'll talk just a little about the multivariate correlational statistics used in prediction studies.


A Closer Look at Prediction Studies

The premise behind prediction studies in social science research is that the degree to which variable A correlates with variable B allows us to make a prediction, at some level of confidence, about the value of variable B based on the value of variable A. Notice that prediction is not the same as cause-and-effect. Just because one thing can predict the occurrence of something else does not necessarily mean that one causes the other.

What determines how well one variable can predict another is something called common (or explained or shared) variance. The amount of common variance existing between two variables is easily computed: simply square the correlation coefficient. What you get when you do this is sometimes startling. For example, let's say that students' college entrance exam scores on the SAT are correlated with their first semester grade point average (GPA) in college, and we find that the correlation coefficient has a value of 0.70. That's a strong correlation, which means these two variables strongly relate to one another. But how good a predictor of college GPA is the SAT test score? To determine this, we square 0.70 and get (0.70 x 0.70 =) 0.49. This means that 49 percent of the variance in college GPA scores can be explained by a person's scores on the SAT entrance exam. When you think about it, that's a much weaker predictor than the strong correlation might suggest -- more than half of the variance is left unexplained.
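
The common-variance arithmetic is simple enough to show in a couple of lines of Python:

    r = 0.70                 # correlation between SAT score and college GPA
    common_variance = r**2   # proportion of shared (explained) variance
    print(f"{common_variance:.0%} of the variance in GPA is explained")  # 49%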

So, what explains the other 51 percent of the variance in GPA scores? It is other variables. The hard part is identifying what those variables are. This is why there are no really good bivariate prediction models in the social sciences. Human subjects are just too complex to be predicted in such simple ways. This is the foundation of prediction studies -- the search for sets of variables that explain as much variance in one variable as possible. Here's a little terminology you might encounter in your study of correlational research designs (especially prediction studies). The independent variables are frequently called predictor variables, and the dependent variable is usually called the criterion variable.

In order to do a prediction study, a researcher must first do a relationship study on all the variables he/she believes are involved in the prediction model. So, the first part of the study is to identify relationships between the variables of interest. Then, the second part is to use those relationships (coefficients) to create the model. We won't go into the details of creating the model, because it's far too detailed for this course. It's enough to draw your attention to the multivariate correlation techniques in your text in Table 12.4 on page 274. Each of these techniques provides a means of developing a prediction model based on correlation coefficients computed from a set of variables. The coefficients themselves are computed using the bivariate techniques listed in Table 12.3 in your text.
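
Just to make the general idea concrete, here is a minimal sketch of a two-predictor model fit by ordinary least squares (simple multiple regression). This is not any particular procedure from the text, and the data are entirely made up:

    import numpy as np

    # Hypothetical predictor variables: SAT score and high school GPA.
    predictors = np.array([
        [1200, 3.1],
        [1350, 3.6],
        [1100, 2.9],
        [1450, 3.9],
        [1300, 3.4],
    ], dtype=float)
    criterion = np.array([2.8, 3.3, 2.6, 3.8, 3.1])  # first-semester college GPA

    # Add an intercept column and fit the model by least squares.
    X = np.column_stack([np.ones(len(predictors)), predictors])
    coefs, *_ = np.linalg.lstsq(X, criterion, rcond=None)

    new_student = np.array([1, 1250, 3.2])  # intercept term, SAT, high school GPA
    print(f"predicted first-semester GPA: {new_student @ coefs:.2f}")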


Evaluating Correlational Research Studies

What should you look for when evaluating a correlational research study? First of all, look for the common things we've been evaluating in all previous studies: sampling technique; existence of a valid research hypothesis, objective, or questions; and the validity and reliability of appropriate measures used to collect data. You should also pay attention to whether the researcher is using appropriate correlational statistical tests.

There is one additional thing to look for that doesn't fall into any of those categories. It relates to the alpha error level we discussed in Week 2. Recall that alpha error is the level of error the researcher is willing to accept or tolerate and still remain confident that the results of the study are true. Well, every time a researcher repeats the same statistical test on the same set of data, the alpha error increases. This is extremely important in correlational research studies, because computers are capable of correlating dozens, hundreds, even thousands of variables in a very short time.

For example, let's say a researcher wants to determine if significant relationships exist among five variables (A, B, C, D, and E). The intercorrelations of these five variables are usually displayed in the form of a matrix with the same number of rows and columns as the number of variables being correlated. In our example, that would be a "5 by 5" matrix that might look like this:
 

        A      B      C      D      E
A       -     .27    .78    .65*   .43
B      .27     -     .56    .88    .76
C      .78    .56     -     .02    .32*
D      .65*   .88    .02     -     .09*
E      .43    .76    .32*   .09*    -

* p < .05

Notice the "-" signs down the diagonal (called the main diagonal) of the matrix. These are usually ignored because they represent the perfect correlation of each variable with itself. Also note that the correlations above and to the right of the main diagonal (called the upper triangular matrix) are a mirror image of the correlations below and to the left of the main diagonal (called the lower triangular matrix). Often, only the upper or lower triangular matrix is shown in research reports to reduce the amount of confusion in the table.
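
If you're curious how such a matrix is produced, here's a minimal sketch (assuming NumPy, with randomly generated stand-in data) that computes a full 5 by 5 intercorrelation matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(size=(30, 5))           # 30 hypothetical subjects, 5 variables
    matrix = np.corrcoef(data, rowvar=False)  # 5 x 5 intercorrelation matrix
    print(np.round(matrix, 2))                # the main diagonal is all 1s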

The total number of correlation coefficients in this matrix (including those along the diagonal) is 5 x 5 = 25 intercorrelations. What we have here is the same statistical test (a correlation coefficient) repeated on the same data set (the same set of 5 variables) 25 times. Now recall our discussion of the alpha error level (.05). This is the amount of error we are willing to tolerate and still be confident that the results are true. Well, given this error level, we can expect some of these correlation coefficients to be statistically significant -- purely by chance. How many, and which ones, will they be? We cannot know precisely which coefficients will show up as falsely statistically significant, but we can estimate how many there will be: simply multiply the total number of intercorrelations by the alpha error level for the study, and round off the result. In our current example, we should expect 1 (25 x .05 = 1.25, which rounds to 1) of these 25 coefficients to be statistically significant by chance alone. A look at the correlation matrix above shows that 6 coefficients are statistically significant at the .05 level. This is much greater than the number expected to occur by chance given the number of variables being correlated. So, these correlations are more than just chance occurrences.

If there had been, say, only two statistically significant coefficients, there would be very little reason for excitement, since at least one of them could have occurred by chance. When evaluating correlational research studies, try to determine the number of potential chance correlations, as we did above, so you will have a better feel for the true significance of the results reported by the researcher.
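
The chance-correlation estimate is a one-line calculation; here's a sketch of it in Python for both our 5-variable example and the 9-variable sample study coming up next:

    alpha = 0.05  # alpha error level set by the researcher

    for n_vars in (5, 9):
        n_correlations = n_vars * n_vars  # counting the diagonal, as we did above
        print(f"{n_vars} variables: expect {round(n_correlations * alpha)} by chance")
    # 5 variables: expect 1 by chance (25 x .05 = 1.25)
    # 9 variables: expect 4 by chance (81 x .05 = 4.05)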


Evaluating Sample Study #15
(Burnout and Counselor-Practitioner Expectations of Supervision)

1. What kind of research design is this?

Since the purpose (last sentence of the introduction) of the study is "To examine the relationship between satisfaction with supervisor and counselor burnout," this is clearly a correlational research study.

2. What is the research hypothesis, objective, or question(s), or if none, so state.

The purpose statement identified above is actually a good research objective, because it is concise, is grounded in previous research (described earlier in the introduction), and defines the variables to be studied.

3. To what population would you feel comfortable generalizing results of this study?

A rather large non-random sample of subjects from the Oregon Personnel and Guidance Association was obtained. However, only 24 percent of the questionnaires were returned in usable condition. I have very little confidence that this 24 percent is representative of the original 500, to say nothing of any larger population. I, therefore, do not feel comfortable generalizing to any population, due to the very low return rate.

4. Identify the strengths and threats to validity in this study.

Strengths:
  • Reliability of the MBI and CSI was assessed. Validity was also addressed (on page 236).
  • If you concluded that both the MBI and CSI generated continuous scores, then you should have seen the use of the Pearson product moment correlation as an appropriate statistic in this study.
Threats:
  • Non-random selection of subjects.
  • Low return rate and no evidence of follow-up by the researchers to try to increase the response rate.
  • Questionnaires not pilot tested.
  • If you concluded that either the MBI or CSI generated non-continuous (categorical or rank) scores, then you should have seen the use of the Pearson product moment correlation as an inappropriate statistic to use in this study.

Finally, review Table 1 on page 237 of the study. Notice that a lower triangular matrix is displayed (the upper triangular portion would have duplicate information and is, therefore, not necessary). How many variables are being correlated? Can you identify 9 variables? -- 3 for Dissatisfaction, 3 for Ideal, and 3 for Actual. That means there are 81 possible intercorrelations (9 x 9). Using an alpha error level of .05 (which is reasonable because we weren't told what the alpha level was), the possible number of chance statistically significant correlations is 4 (81 x .05 = 4.05, which rounds to 4). How many coefficients actually turned out to be statistically significant? Counting all the correlations with asterisks by them shows 27. Since this is just the lower triangular matrix, we would have to double this number to get the total for the whole matrix. That would be 54 statistically significant correlations (actual). Comparing this number to the 4 expected by chance leads us to conclude that there is reason for excitement about the significant relationships found between supervision variables in this study.

5. Are there any ethical problems in this study?

There are no ethical problems in this study because nothing is done to the subjects; only their questionnaire responses are examined. The only ethical problems likely in correlational studies relate to data collection.

If you have any questions concerning this evaluation (if you found things I didn't discuss here, or if you don't understand something I've discussed here), talk with other members of the course to see if you can resolve the issues with them. If not, discuss your questions with the instructor in class or via email.


End of Week 5 Lesson

Assignment For Next Week
Gall: Chapter 13 (True and Quasi-Experimental Designs)
SB: Study 16
Guide:  Chapter 3
Extra Evaluation Practice: Try your hand at evaluating these scenario-based research studies (I call them mini-studies). For each problem, read the scenario (first page) and try to evaluate it using the 5 evaluation questions in the Typical Evaluation Quiz format we've been using above. Don't look at the answers on the second page until you have answered all the questions yourself. Then compare your answers with those provided in the problem. If you have questions or don't agree with or understand the answers provided, e-mail me and let's discuss it.
 

Scenario Pack 2 (MS Word documents): Scenario A, Scenario B, Scenario C, Scenario D, Scenario E, Scenario F, Scenario G, Scenario H.

Due Next Week
Prepare for Evaluation Quiz 2