16.2.11

 

Scientific programming does not compute


An article in Nature explains why scientists need to pay more attention to basic principles of software engineering.
As a result, codes may be riddled with tiny errors that do not cause the program to break down, but may drastically change the scientific results that it spits out. One such error tripped up a structural-biology group led by Geoffrey Chang of the Scripps Research Institute in La Jolla, California. In 2006, the team realized that a computer program supplied by another lab had flipped a minus sign, which in turn reversed two columns of input data, causing protein crystal structures that the group had derived to be inverted. Chang says that the other lab provided the code with the best intentions, and "you just trust the code to do the right job". His group was forced to retract five papers published in Science, the Journal of Molecular Biology and Proceedings of the National Academy of Sciences, and now triple-checks everything, he says.
—Zeeya Merali, Nature 467:775-777 (2010)
Spotted by Shriram Krishnamurthi.

Comments:
Hardly surprising - I use a lot of software written by academics, and quality is...not always what one would like.

Unfortunately, science is a fire-and-forget business: when the publication is out the door, it's time to move on to the next exciting project, and there are no funds to do unexciting stuff like maintenance.
 
It's not simply a matter of academics vs. industry. It's a matter of scientists (non-CS) who know a little programming vs. people actually trained in CS (who, admittedly, still make mistakes). When computers are programmed to perform calculations we can no longer feasibly do by hand, as is the case with most scientific computing applications, validation of the code becomes as much an issue as validation of experimental methods, mathematical equations, statistical analysis, etc. It can't be done, though, without scientific publications also including access to source code, something that largely hasn't happened yet.
 
The nature of scientific codes means they generally have a smaller audience than commercial software. Hence, bugs can lie dormant for a long time before they are noticed.
 
IMHO, one of the major issues in scientific computing is testing. In software engineering, we write test cases and run our software. A test case is passed if the actual output and the expected output are the same.

We write scientific software because we cannot calculate the solution by hand. This makes it very hard to develop good test cases. Often we fall back on indirect test cases: is the total energy conserved, or something similar.
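A minimal sketch of such an indirect test might look like this in Python (illustrative only; the toy leapfrog integrator and the tolerance are arbitrary, not something from this thread):

    # A minimal "indirect" test: we cannot compute the exact trajectory by
    # hand, but we can check that a toy oscillator integrator conserves the
    # total energy to within a small tolerance.

    def leapfrog_oscillator(x0, v0, k=1.0, m=1.0, dt=1e-3, steps=10000):
        """Integrate a harmonic oscillator with the leapfrog (kick-drift-kick) scheme."""
        x, v = x0, v0
        for _ in range(steps):
            v += -(k / m) * x * (dt / 2)   # half kick
            x += v * dt                    # drift
            v += -(k / m) * x * (dt / 2)   # half kick
        return x, v

    def total_energy(x, v, k=1.0, m=1.0):
        return 0.5 * m * v * v + 0.5 * k * x * x

    def test_energy_is_conserved():
        x0, v0 = 1.0, 0.0
        e0 = total_energy(x0, v0)
        x, v = leapfrog_oscillator(x0, v0)
        # We do not know the "correct" final (x, v) without the simulation,
        # but a relative energy drift beyond the tolerance signals a bug.
        assert abs(total_energy(x, v) - e0) / e0 < 1e-4

    if __name__ == "__main__":
        test_energy_is_conserved()
        print("energy-conservation test passed")

The assertion never compares against a hand-computed trajectory; it only checks a property the correct answer must have, which is often the best we can do.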
 
Very true. I have found that code in science is often badly written and difficult to understand, and consequently one could expect it to be full of subtle bugs. Or not-so-subtle ones that are still difficult to find. If ~1% of events in some Monte-Carlo simulation go awry, it may remain undetected (a small sketch after this comment makes this concrete).

There is a paper, "The T-Experiments: Errors in Scientific Software" [1] by L. Hatton, that describes a study of bugs in scientific programs, with rather pessimistic results.

[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.105.3922&rep=rep1&type=pdf
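To make the ~1% figure concrete, here is a small illustrative Python sketch (a toy example, not taken from the thread or from Hatton's paper): a Monte-Carlo estimate of pi in which roughly 1% of the events are silently mishandled. The bias this introduces is comparable to the statistical error of a single run, so the output gives no obvious sign that anything is wrong.

    # Toy Monte-Carlo estimate of pi in which a small fraction of the events
    # is silently mishandled (the inside/outside decision is flipped).

    import math
    import random

    def estimate_pi(n, corrupt_fraction=0.0, seed=42):
        rng = random.Random(seed)
        hits = 0
        for _ in range(n):
            x, y = rng.random(), rng.random()
            inside = x * x + y * y <= 1.0
            if rng.random() < corrupt_fraction:   # the ~1% of events gone awry
                inside = not inside
            hits += inside
        p = hits / n
        stderr = 4.0 * math.sqrt(p * (1.0 - p) / n)   # one-sigma statistical error
        return 4.0 * p, stderr

    if __name__ == "__main__":
        n = 10000
        good, err_good = estimate_pi(n)
        bad, err_bad = estimate_pi(n, corrupt_fraction=0.01)
        print(f"correct run: {good:.4f} +/- {err_good:.4f}")
        print(f"buggy run:   {bad:.4f} +/- {err_bad:.4f}")
        # The two estimates typically differ by only a standard error or two,
        # so the corrupted result does not stand out as obviously wrong.

Only a much larger run, or an independent check against a known answer, would expose the corruption.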
 
Beware of generalizations by the uninformed. For example, the original error in Chang's lab was caused by sign flipping, but that's not really the reason you ended up with five papers that needed retraction. In fact, this summary is more down to sloppy journalism than anything else.

It's not particularly difficult to track down commentary on the Chang fiasco written by other structural biologists. But perhaps the complexity and subtlety of the actual issues don't make for such good copy.

If we in the structural biology community had waited for CS scientists to write code, there would be almost no structures at all.

If you take the time to look, you will realize that the software we use is actively maintained. His problem was that for this initial step he used a small program rather than one of the better-tested extant software suites written and used by other academics. However, his forcing of the structure against the data was his real error, and one that he kept repeating.

Nature probably likes using that example because three of Chang's screw ups were published in Science, a competitor journal. Nature doesn't have impeccable error-checking standards in the papers it accepts either.
 
I'm not surprised either. Although it's counter-intuitive, I can tell you that computer science academics are not immune to the problem.

Writing good code is a never-ending quest and is certainly not something that comes naturally, even to scientific minds.
 
Yep, I agree. The quality of the software leaves much to be desired.
 
 