Haidt: Quasi-Experimental Evidence
Evaluating causality in quasi-experiments tends to be extremely difficult, and so far we have only one such study that concerns social media directly.
Haidt devotes section 6 to retroactive quasi-experiments, a category of studies that treat some past development as a natural scientific experiment.
Haidt is correct in observing that these types of studies might allow us to examine environmental effects (or, as Haidt calls them, emergent network effects): much like second-hand smoke, the transfer of social life to social media may affect even those teens who use little or no social media themselves.
Randomness
Haidt proclaims: “These studies are sometimes called ‘quasi-experiments’ because the researchers take advantage of natural variation in the world as though it was random assignment.”
As an example of such supposed advantage, Haidt gives us the only social media study on his list of quasi-experiments:
For example, Braghieri, Levy, & Makarin (2022) took advantage of the fact that Facebook was originally offered only to students at a small number of colleges. As the company expanded to new colleges, did mental health change in the following year or two at those institutions, compared to colleges where students did not yet have access to Facebook?
What Haidt fails to mention is that the Facebook expansion was not even remotely random! In reality, as we will see in a follow-up article, it initially targeted elite institutions while the least competitive colleges were the last to get Facebook.
In fact, Haidt’s characterization of quasi-experiments is entirely wrong: the difference between experiments and quasi-experiments is precisely the absence of randomization in the latter (see What Is a Quasi-experiment?).
Causation
The absence of random assignment is what makes establishing causality so difficult:
The lack of random assignment is the major weakness of the quasi-experimental study design. […] Unfortunately, statistical association does not imply causality, especially if the study is poorly designed. Thus, in many quasi-experiments, one is most often left with the question: “Are there alternative explanations for the apparent causal association?” If these alternative explanations are credible, then the evidence of causation is less convincing.
[…]
An inability to sufficiently control for important confounding variables arises from the lack of randomization. A variable is a confounding variable if it is associated with the exposure of interest and is also associated with the outcome of interest; the confounding variable leads to a situation where a causal association between a given exposure and an outcome is observed as a result of the influence of the confounding variable.
With this in mind, readers should approach assertions of causality in the six retroactive quasi-experiments on Haidt’s list with great caution.
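To make the confounding concern concrete, here is a minimal simulation sketch in Python with entirely made-up numbers: a hypothetical confounder (think institutional selectivity) drives both who receives the exposure and the outcome, so a naive comparison of exposed versus unexposed groups shows an apparent “effect” even though the exposure has none, and adjusting for the confounder makes it disappear.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder, e.g. institutional selectivity (standardized).
selectivity = rng.normal(size=n)

# Exposure depends on the confounder (more selective colleges get access first),
# NOT on random assignment.
p_exposed = 1 / (1 + np.exp(-2 * selectivity))
exposed = rng.random(n) < p_exposed

# Outcome depends ONLY on the confounder; the exposure has zero causal effect.
outcome = 0.5 * selectivity + rng.normal(size=n)

# Naive comparison: it looks as though the exposure "affects" the outcome.
naive_diff = outcome[exposed].mean() - outcome[~exposed].mean()
print(f"Naive exposed-vs-unexposed difference: {naive_diff:.3f}")  # clearly nonzero

# Adjusting for the confounder (simple linear regression) recovers an exposure
# coefficient of approximately zero, the true causal effect in this simulation.
X = np.column_stack([np.ones(n), exposed.astype(float), selectivity])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"Adjusted exposure coefficient: {beta[1]:.3f}")
```

This toy example only adjusts for a confounder we constructed ourselves; in a real retroactive quasi-experiment the relevant confounders may be unknown or unmeasured, which is exactly why the causal claims deserve caution.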
Given the complexities involved in evaluating the methodologies of retroactive quasi-experimental studies, we will look separately at each of the three studies Haidt cited.
Social Media versus High-Speed Internet
Five of the six studies on Haidt’s list concern the spread of high-speed Internet, so attributing any apparent effects to social media in particular might be a mistake. Furthermore, even in the remaining Facebook study, if the expansion of Facebook coincided with the expansion of high-speed Internet, then blaming social media might likewise be a mistake.
Conclusion
Establishing causation in retroactive quasi-experiments is far more complicated than Haidt’s description implies, mainly because of the lack of randomization. We will look at some of these studies separately.