A thorough study by Dan Luu. From the introduction:
The summary of the summary is that most studies find very small effects, if any. However, the studies probably don’t cover contexts you’re actually interested in. If you want the gory details, here’s each study, with its abstract, and a short blurb about the study.
And from the conclusion:
Other than cherry picking studies to confirm a long-held position, the most common response I’ve heard to these sorts of studies is that the effect isn’t quantifiable by a controlled experiment. However, I’ve yet to hear a specific reason that doesn’t also apply to any other field that empirically measures human behavior. Compared to a lot of those fields, it’s easy to run controlled experiments or do empirical studies. It’s true that controlled studies only tell you something about a very limited set of circumstances, but the fix to that isn’t to dismiss them, but to fund more studies. It’s also true that it’s tough to determine causation from ex-post empirical studies, but the solution isn’t to ignore the data, but to do more sophisticated analysis. For example, econometric methods are often able to make a case for causation with data that’s messier than the data we’ve looked at here.
The next most common response is that their viewpoint is still valid because their specific language or use case isn’t covered. Maybe, but if the strongest statement you can make for your position is that there’s no empirical evidence against the position, that’s not much of a position.
Thanks for the effort you put into this epic study, Dan! Spotted via Lambda the Ultimate.