Thanks to the work of popular social science authors like Steven Levitt of Freakonomics and Nate Silver of FiveThirtyEight, data analysis is a hot new trend in social science.  Unfortunately, not everyone can be a Silver or a Levitt.  
Objective, data-driven research can help to clarify much in the social sciences, but scientists who jump into these new methods with little statistical training or rigor do their disciplines a disservice. Objective study is important to social science, but so is traditional, subjective observation, and we must remember that many of the social sciences were founded long before the development of regression analysis.
Academics, and those of us who encounter their work, must be careful to receive statistical information with a healthy skepticism.
“Correlation does not imply causation!” Anyone who has taken a statistics class, or any data-driven course, has heard this phrase (often from a professor who is constantly peeved by people conflating the two concepts). Depending on the professor’s exasperation level, it is possible that she just finished reading a social science journal.
Granted, it’s relatively rare that an article will openly claim causation where there’s none to be found, but implicit claims often lurk. And even in instances where causation is not claimed, some researchers dive no further into a topic after determining correlation.  Correlation can tell us quite a lot, but we cannot pretend to understand an issue without determining the causes behind it.
Correlative relationships are powerful rhetorical tools, and everyone from self-styled Facebook pundits to Ph.D.s uses them to try to prove points. One familiar example is the oft-repeated claim that areas with high rates of gun ownership have lower crime rates than those with lower gun ownership rates. This is true. But is it a causal relationship? Doubtful.
Areas with high gun ownership tend to be rural areas that would see low crime regardless of the size of their weapons caches; there are, after all, few multinational drug cartels in central Kansas.
Just because the causal relationship is dubious doesn’t mean that it’s not great rhetoric. Correlative relationships provide fantastic material for argument, but many of these arguments demonstrate just why simple correlation shouldn’t be trusted as proof in academic research.
Causal relationships are established by revealing the mechanism linking correlated phenomena. Laboratory experiments are of limited value in the social sciences, so such relationships are explored by examining the effect of one variable on another in real-world contexts.
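The gun-ownership example above can be made concrete with a small simulation. The sketch below is a hypothetical toy model, not real crime data: a single lurking variable (call it "ruralness") raises gun ownership and independently lowers crime, while gun ownership itself has no causal effect on crime at all. The two outcomes nonetheless come out clearly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder: ruralness raises gun ownership and,
# separately, lowers crime. Gun ownership has NO causal effect
# on crime anywhere in this model.
ruralness = rng.normal(size=n)
gun_ownership = 0.8 * ruralness + rng.normal(size=n)
crime = -0.8 * ruralness + rng.normal(size=n)

# Yet the two variables are clearly negatively correlated.
r = np.corrcoef(gun_ownership, crime)[0, 1]
print(f"correlation(gun ownership, crime) = {r:.2f}")
```

A researcher who saw only the bottom two variables would find a robust negative correlation and might be tempted to read a causal story into it, even though none exists by construction.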
In many cases, researchers will accept statistical correlation as causation if there is a theoretical or cultural rationale for it—though they may sometimes do so to their own detriment. 
For instance, I recently read an academic paper detailing the relationship between a Paraguayan’s native language and her educational and economic achievement. The paper claimed that speaking Guaraní, the country’s most widely spoken language, has a measurable effect on achievement, not just a correlation with it.
This phenomenon is culturally plausible: the Guaraní language has a stigmatized reputation as backward and less valuable than Spanish, the dominant language of Paraguay’s economy.
However, the researchers failed to control for their subjects’ socioeconomic backgrounds. Socioeconomic background and language are no doubt strongly correlated, but both also show strong correlations with achievement. Without controlling for that variable, among others, it is impossible to know whether Paraguayans’ mother tongues truly influence their economic or educational success.
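The omitted-variable problem can be illustrated with a short regression sketch. The data below are simulated, not the paper's: socioeconomic status (SES) drives both home language and achievement, while language itself has zero causal effect by construction. A naive regression finds a sizable "language effect"; adding SES as a control makes it vanish.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical setup: lower SES makes speaking Guaraní more likely,
# and SES alone drives achievement. Language has zero true effect.
ses = rng.normal(size=n)
speaks_guarani = (ses + rng.normal(size=n) < 0).astype(float)
achievement = 1.0 * ses + rng.normal(size=n)

def ols_coefs(X, y):
    """Least-squares coefficients for y ~ intercept + X."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Regression without the SES control: a spurious negative "effect".
naive = ols_coefs(speaks_guarani.reshape(-1, 1), achievement)[1]

# Same regression with SES included: the language coefficient
# collapses toward zero.
controlled = ols_coefs(np.column_stack([speaks_guarani, ses]), achievement)[1]

print(f"language coefficient, no SES control:  {naive:+.2f}")
print(f"language coefficient, SES controlled:  {controlled:+.2f}")
```

The naive coefficient is not evidence of anything about language; it is simply SES leaking through a correlated proxy, which is exactly the risk the paper ran by omitting the control.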
Cases like this bolster the argument that nuanced social and economic issues are perhaps better examined through more qualitative analyses than with complex mathematical models. 
Western social and political thought is heavily influenced to this day by the ancient Greeks and Romans, whose great minds used little more than description and allegory to illustrate psychological and philosophical insights that remain relevant. Political and social thinkers still cite Plato’s allegory of the cave, for example, as an impressive illustration of the way that distorted or incomplete information can create a gap between perception and reality.
Émile Durkheim, who founded modern sociology and shaped the structure of many modern social sciences, made his contributions to human understanding before mathematical analysis of huge data troves was de rigueur. Today’s social scientists would do well to remember that some of the best work in their disciplines was completed without the use of sophisticated mathematical models.
Tim Groseclose, a professor at UCLA, observed that the social scientists who most effectively used quantitative methods often had backgrounds in economics. I agree with Groseclose, but would expand this category to include all scientists with rigorous statistical or mathematical training.
However, the researchers best at analyzing troves of data will not necessarily be those who produce the best results in social science. The social sciences need academics who interpret our world through logical analysis and thoughtful case studies, and number-crunchers who filter through huge swaths of data and conduct rigorous analysis. What the fields do not need is flawed statistical work that contributes little to humanity’s understanding of itself.
