Labels: Functional Programming
Thank you to John Peterson, who is organising a symposium in memory of Paul Hudak. I was sorry to miss the session devoted to Paul at ICFP. Paul made huge contributions to FP, and though he was only a little older than me I was proud to have him as a mentor. I'm looking forward to the meeting. The symposium will take place Friday 29--Saturday 30 April 2016 at Yale.
Spotted via Katie Miller (@codemiller) and Manuel Chakravarty (@TechnicalGrace):

Brandy pauses before answering, needing some extra time to choose his words. “I’m going to get in so much trouble,” he says. The question, you see, touches on an eternally controversial topic: the future of computer programming languages.
Brandy is a software engineer at Facebook, and alongside a team of other Facebookers, he spent the last two years rebuilding the system that removes spam—malicious, offensive, or otherwise unwanted messages—from the world’s largest social network. That’s no small task—Facebook juggles messages from more than 1.5 billion people worldwide—and to tackle the problem, Brandy and team made an unusual choice: they used a programming language called Haskell.
If you consider that companies like Facebook, Google, and Amazon represent where the rest of the internet is going—as the internet grows, so many other online services will face the same problems it faces today—Facebook’s Haskell project can indeed point the way for the programming world as a whole. That doesn’t mean Haskell will be ubiquitous in the years to come. Because it’s so different from traditional programming languages, coders often have trouble learning to use it; undoubtedly, this will prevent widespread adoption. But Facebook’s work is a sign that other languages will move in Haskell’s general direction.
What about Haskell itself? In the long run, could it evolve to the point where it becomes the norm? Could coders evolve to the point where they embrace it in large numbers? “I don’t know,” Brandy says. “But I don’t think it would be a bad thing.”
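For readers wondering why a pure language suits this job, here is a minimal sketch of my own (not Facebook's actual code; their anti-abuse system, Sigma, is built on the open-source Haxl library): each spam rule is a pure function from a message to a verdict, so rules can be tested in isolation and composed without hidden state. The rule and its threshold below are hypothetical.

```haskell
import Data.List (isPrefixOf)

-- A toy verdict type; a real system would return richer actions.
data Verdict = Allow | Block deriving (Eq, Show)

type Rule = String -> Verdict

-- Hypothetical rule: block a message in which too many words look like links.
tooManyLinks :: Rule
tooManyLinks msg
  | length (filter ("http" `isPrefixOf`) (words msg)) > 2 = Block
  | otherwise                                             = Allow

-- Rules compose: a message is blocked if any rule blocks it.
anyBlocks :: [Rule] -> Rule
anyBlocks rules msg
  | any (\rule -> rule msg == Block) rules = Block
  | otherwise                              = Allow
```

Because each rule is a pure function, adding a new rule cannot break an old one, which is one plausible reason a team maintaining thousands of such rules would reach for Haskell.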
From The control group is out of control, by Scott Alexander (Slate Star Codex). Spotted by Conrad Hughes. Cheers, Conrad!

Allan Crossman calls parapsychology the control group for science.
That is, in let’s say a drug testing experiment, you give some people the drug and they recover. That doesn’t tell you much until you give some other people a placebo drug you know doesn’t work – but which they themselves believe in – and see how many of them recover. That number tells you how many people will recover whether the drug works or not. Unless people on your real drug do significantly better than people on the placebo drug, you haven’t found anything.
On the meta-level, you’re studying some phenomenon and you get some positive findings. That doesn’t tell you much until you take some other researchers who are studying a phenomenon you know doesn’t exist – but which they themselves believe in – and see how many of them get positive findings. That number tells you how many studies will discover positive results whether the phenomenon is real or not. Unless studies of the real phenomenon do significantly better than studies of the placebo phenomenon, you haven’t found anything.
Trying to set up placebo science would be a logistical nightmare. You’d have to find a phenomenon that definitely doesn’t exist, somehow convince a whole community of scientists across the world that it does, and fund them to study it for a couple of decades without them figuring out the gig.
Luckily we have a natural experiment in terms of parapsychology – the study of psychic phenomena – which most reasonable people don’t believe exists but which a community of practicing scientists does and publishes papers on all the time.
The results are pretty dismal. Parapsychologists are able to produce experimental evidence for psychic phenomena about as easily as normal scientists are able to produce such evidence for normal, non-psychic phenomena. This suggests the existence of a very large “placebo effect” in science – ie with enough energy focused on a subject, you can always produce “experimental evidence” for it that meets the usual scientific standards.
Bem, Tressoldi, Rabeyron, and Duggan (2014) ... is parapsychology’s way of saying “thanks but no thanks” to the idea of a more rigorous scientific paradigm making them quietly wither away.
You might remember Bem as the prestigious establishment psychologist who decided to try his hand at parapsychology and to his and everyone else’s surprise got positive results. Everyone had a lot of criticisms, some of which were very very good, and the study failed replication several times. Case closed, right?
Earlier this month Bem came back with a meta-analysis of ninety replications from tens of thousands of participants in thirty three laboratories in fourteen countries confirming his original finding, p < 1.2 × 10^-10, Bayes factor 7.4 × 10^9, funnel plot beautifully symmetrical [see figure above], p-hacking curve nice and right-skewed, Orwin fail-safe n of 559, et cetera, et cetera, et cetera. ... This is far better than the average meta-analysis. Bem has always been pretty careful and this is no exception.
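The arithmetic behind the “control group for science” argument is worth making concrete. A sketch with made-up numbers (the threshold and study count are my illustration, not figures from Alexander's post): at a significance threshold of 0.05, a field studying a nonexistent phenomenon still expects one spurious positive per twenty studies.

```haskell
-- Toy arithmetic for the "placebo science" baseline.
-- alpha is the significance threshold; nStudies is the number of
-- independent studies of a phenomenon that does not exist.

-- Expected number of spurious "positive" findings.
expectedFalsePositives :: Double -> Int -> Double
expectedFalsePositives alpha nStudies = alpha * fromIntegral nStudies

-- Chance that at least one study comes out positive anyway.
probAtLeastOne :: Double -> Int -> Double
probAtLeastOne alpha nStudies = 1 - (1 - alpha) ^ nStudies
```

With alpha = 0.05 and a hundred labs, that is roughly five spurious positives, and about a 99% chance that at least one lab can report a “significant” result, which is exactly the placebo baseline Alexander says parapsychology supplies for free.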