AAAS Report on Standards of Peer Review

November 29th, 2006

Posted by: Roger Pielke, Jr.

The AAAS has released a report motivated by several recent fraudulent papers that have been published in Science. The report suggests tightening the review process for certain types of papers. Here is an excerpt from the report (available here in PDF):

Science (and Nature) have reached a special status. Publication in Science has a significance that goes beyond that of ‘normal’ publication. Consequently, the value to some authors of publishing in Science, including enhanced reputation, visibility, position or cash rewards, is sufficiently high that some may not adhere to the usual scientific standards in order to achieve publication. Thus, the cachet of publishing in Science can be an incentive not to follow the rules. This problem has a significant impact on all of science, since trust in the system is essential, and since Science and Nature are seen to speak for the best in science. Furthermore, false information in the literature leads to an enormous waste of time and money in an effort to correct and clarify the science.

Science as a journal tries to select papers that will have high impact on both science and society. Some papers will be highly visible and attract considerable attention. Many of these papers purport to be major breakthroughs and claim to change fields in a significant way. However, because the content is so new or startling, it is often more difficult to evaluate the quality or veracity of the work than would be the case for a more conventional paper.

Papers in this class, particularly those that will receive public attention, can influence public policy or contribute to personal or institutional financial gain and thus warrant special scrutiny. In the immediate future, examples will likely come from the areas of climate change, human health, and particular issues in commercial biomedicine and nanotechnology. Progress in science depends on breakthroughs and in taking risks, both in research and in publishing. Nevertheless, it is essential to develop a process by which papers that have the likelihood of attracting attention are examined particularly closely for errors, misrepresentation, deception, or outright fraud. This examination should include especially high standards for providing primary data, a clear understanding of all of the authors’ and coauthors’ contributions to the paper and a careful examination of data presented in the papers.

There is a major issue left unaddressed in the report, and it has to do with the following statement: "Science as a journal tries to select papers that will have high impact on both science and society." How does this selection process work? What are the criteria for "high impact"? What is the relationship between the political positions taken by the Science editorial staff and the selection of papers for peer review and publication? If greater transparency makes sense for authors, then does it make sense for editors as well?

5 Responses to “AAAS Report on Standards of Peer Review”

  1. Steve Hemphill Says:

    “Papers in this class, particularly those that will receive public attention, can influence public policy or contribute to personal or institutional financial gain and thus warrant special scrutiny.”

    One would wonder if this would apply to the Wegman 43.

    “No crisis: No funding” stands out as particularly applicable here.

  2. Dan Hughes Says:

    Within the climate change community, and among the organizations that publish papers on climate change (such as the AAAS), computer software Verification and Validation (V&V) and Software Quality Assurance (SQA) seem not to be issues of uppermost importance. I am an outsider and open to correction on this observation. I use the words Verification, Validation, and Quality Assurance in the technical sense used in the literature on these subjects; citations to that literature are given below.

    For purposes of the discussion here, brief technical definitions of Verification and Validation follow.

    Verification is the process of demonstrating that the mathematical equations used in the computer program are solved correctly.

    Validation is the process of demonstrating that the correct mathematical equations are used in the computer program.

    Generally speaking, Verification is a mathematical process and Validation is an engineering/scientific process.

    The ultimate objective of applying the processes to computer software, and its users, is to ensure that calculated results accurately reflect the intended application of the software. Absence of application of these processes to computer software should be a very strong indication that the calculated results are suspect.
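
    As a concrete, if toy-scale, illustration of Verification, here is a minimal sketch in Python: a second-order finite-difference solver for a one-dimensional Poisson problem with a known exact solution is run on a sequence of halved grids, and the observed order of accuracy is compared with the theoretical value. Everything here (the problem, the grids, the names) is a hypothetical example, not a procedure drawn from any climate code:

        # Verification sketch: solve -u'' = pi^2 sin(pi x) on (0, 1) with
        # u(0) = u(1) = 0, whose exact solution is sin(pi x), on successively
        # halved grids, and check the observed order of convergence.
        import numpy as np

        def solve_poisson(n):
            """Second-order central-difference solve on n interior points."""
            h = 1.0 / (n + 1)
            x = np.linspace(h, 1.0 - h, n)
            A = (np.diag(2.0 * np.ones(n))
                 - np.diag(np.ones(n - 1), 1)
                 - np.diag(np.ones(n - 1), -1)) / h**2   # matrix for -u''
            f = np.pi**2 * np.sin(np.pi * x)
            return x, np.linalg.solve(A, f)

        errors = []
        for n in (15, 31, 63):                # h = 1/16, 1/32, 1/64
            x, u = solve_poisson(n)
            errors.append(np.max(np.abs(u - np.sin(np.pi * x))))

        # A correctly coded second-order scheme should give p close to 2;
        # a persistent mismatch is strong evidence of a coding mistake.
        for e_coarse, e_fine in zip(errors, errors[1:]):
            print(f"observed order p = {np.log(e_coarse / e_fine) / np.log(2.0):.3f}")

    A Validation exercise would then ask the separate question of whether this equation is the right model of the physical system at all.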

    Calculations from programs that model inherently complex physical phenomena and processes occurring within complex geometries are generally considered suspect and are usually taken with a grain of salt. The modeling and calculation of climate change over the spatial and temporal scales of a planet present major challenges to every aspect of mathematical modeling and computer calculation. The number of important systems involved, along with the inherent complexity of phenomena and processes interacting on multiple time scales over extreme spatial and temporal extents, presents possibly unprecedented challenges. V&V and SQA are absolutely necessary in these situations and are standard operating procedure (SOP) for all major software development projects. They are also absolutely necessary, and should be SOP, for software whose calculated results are submitted for publication in archival journals. For calculated results that form the basis of policies affecting the health and safety of the public, the requirements for applying these processes are codified in law.

    Excellent starting references include:

    (1) Patrick Roache, “Verification and Validation in Computational Science and Engineering,” published by Hermosa Press. http://kumo.swcp.com/hermosa/index.html.

    (2) William L. Oberkampf, Timothy G. Trucano, and Charles Hirsch, “Verification, Validation, and Predictive Capability in Computational Engineering and Physics,” Sandia National Laboratories Report SAND 2003-3769, 2003. http://www.csar.uiuc.edu/F_viz/gallery/VnV/SAND2003-3769.pdf. See also, https://www.dmso.mil/public/library/projects/vva/found_04/oberkampf_et_al_vv_and_predictive_etc.pdf

    These publications contain extensive reference citations to the published literature on these subjects. A Google search will find many more resources. In particular, the group at Sandia has remained very active in these areas for several years now. Other National Laboratories, especially those associated with the ASCI program, also have groups actively working in these areas.

    With this very brief introduction, let’s next look at publications in the technical journals of professional societies.

    Climate change papers based on calculations with computer software continue to be an important source of publications in journals such as Science, Nature, several technical journals of the American Geophysical Union (AGU) and the American Meteorological Society (AMS), and many others. Some of these papers might at some time be used as audits, checkpoints, or benchmarks for other calculations, and others might become part of the basis for public policy decisions. The software used for these papers needs to be Verified and Validated for the applications to which it is applied. Papers for which the software has not been Verified should not be accepted for publication in an archival journal.

    The first crucial aspect of such papers should be the status of the Verification of the software. Several engineering societies and their journal editorial boards have recently put in place technical requirements on the Verification of software before a paper can be considered for publication. If the requirements have not been met, the paper will not be published; in some cases the paper will be rejected out of hand and not sent out for review. Papers whose basis is a single calculation on a single grid, with no investigation of convergence and other stopping criteria, are typically sent back to the authors.
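
    To make the grid-convergence requirement concrete, here is a minimal sketch, in Python, of the Grid Convergence Index (GCI) in the form popularized by Roache. The function name, safety factor, and numerical values are illustrative assumptions, not taken from any journal policy or climate code:

        # Sketch of a Grid Convergence Index (GCI) estimate for a scalar result
        # computed on two grids with refinement ratio r and assumed order p.
        def gci(f_fine, f_coarse, r=2.0, p=2.0, safety=1.25):
            """Estimated relative discretization error of the fine-grid result."""
            rel_diff = abs((f_coarse - f_fine) / f_fine)
            return safety * rel_diff / (r**p - 1.0)

        # Hypothetical example: a result of 1.072 on the coarse grid and 1.063
        # on a grid refined by a factor of 2, with a nominally second-order scheme.
        print(f"GCI = {gci(1.063, 1.072):.2%}")   # about 0.35% estimated error band

    A paper reporting only a single-grid value of the same quantity would offer no basis for even this simple estimate, which is precisely what these editorial policies object to.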

    Some of these professional organizations and associated journals include: the American Society of Mechanical Engineers (ASME) Journal of Heat Transfer and Journal of Fluids Engineering; the American Institute of Aeronautics and Astronautics (AIAA) Journal of Spacecraft and Rockets; and the International Journal for Numerical Methods in Fluids. Other professional societies and journals are sure to follow their lead.

    References for the editorial policies of these journals are as follows.

    The ASME Journal of Heat Transfer:
    Editorial Board, “Journal of Heat Transfer Editorial Policy Statement on Numerical Accuracy,” ASME Journal of Heat Transfer, Vol. 116, pp. 797-798, 1994.

    The ASME Journal of Fluids Engineering:
    A detailed discussion of the present status of the Journal of Fluids Engineering policy is available here: http://www.divisions.asme.org/fed/divisionadmin/cfdindex.html.

    The existing policy is given as part of the preceding discussion here: http://www.divisions.asme.org/fed/divisionadmin/Current%20JFE%20Publication%20Policy.pdf (the %20s in the URL represent spaces). The existing policy was first stated in: C. J. Freitas, "Editorial Policy Statement on the Control of Numerical Accuracy," ASME Journal of Fluids Engineering, Vol. 117, No. 1, p. 9, 1995.

    The Fluids Engineering policy can be accessed directly online and provides a good summary of the issues.

    The AIAA Journal of Spacecraft and Rockets:
    AIAA, Editorial Policy Statement on Numerical Accuracy and Experimental Uncertainty, AIAA Journal, Vol. 32, No. 1, p. 3, 1994.

    The International Journal for Numerical Methods in Fluids:
    P. M. Gresho and C. Taylor, "Editorial," International Journal for Numerical Methods in Fluids, Vol. 19, Issue 7, p. iii, October 1994. DOI: 10.1002/fld.1650190702

    I have been unsuccessful in locating editorial policies on these important issues for any of the science journals listed above. I consider this a major and extremely important failing. As discussed above, many engineering journals would reject out of hand any submitted paper that does not address these issues.

    Finally, it is my understanding that the calculated results from most large, complex AOLBGCM codes cannot be demonstrated to be independent of the discrete representations of the continuous equations. That is, the results are functions of the size of the discrete representations (or truncated series) of the spatial and temporal scales used in the calculations. I think this situation is without precedent in all of science and engineering; grid independence is the most fundamental concept taught in every numerical methods textbook I am familiar with. If this is the correct situation, the calculated results are not solutions to the continuous equations and at best represent some kind of approximate "solution"; "solution to the continuous equations" is not a phrase I would apply to these calculations. Focusing on the Global Average Temperature calculated by the codes gives a false indication of the robustness of the modeling and calculations, because this quantity is a meta-functional of the solution that maps everything calculated by the code into a single number, a process that easily hides an enormous number of potential problems.
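
    The point about solution meta-functionals can be illustrated with a toy example (invented numbers, hypothetical fields): two fields with very different local values can be made to share exactly the same global average, so agreement of the averages says little about agreement of the fields:

        # Toy illustration: two "temperature" fields with identical global
        # means but very different spatial structure.  Averaging maps each
        # field to a single number and hides the pointwise disagreement.
        import numpy as np

        rng = np.random.default_rng(0)
        field_a = 288.0 + rng.normal(0.0, 1.0, size=(64, 128))  # small local scatter
        field_b = 288.0 + rng.normal(0.0, 8.0, size=(64, 128))  # large local scatter
        field_b += field_a.mean() - field_b.mean()              # force equal means

        print(f"global means: {field_a.mean():.3f} vs {field_b.mean():.3f}")
        print(f"max pointwise difference: {np.max(np.abs(field_a - field_b)):.1f} K")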

    In the absence of established, formal V&V and SQA processes and procedures, calculated results are much less certain to contain true information and thus cannot be relied on to provide knowledge. Even when these processes are rigorously applied to computer software, mistakes and errors still survive, although to a much lesser degree than in their absence. For the very complex codes and applications associated with climate change, the chances of mistakes and errors are much larger than for less complex analyses. Consider situations that have occurred within the climate change community and that sometimes surface in the literature; data reduction software is an example.

    The potential for mistakes and numerical errors in software, in the absence of independent, formal V&V and SQA procedures, is among the reasons that engineering journals have implemented these editorial policies. Apparently the organizations have decided that a paper that has not been demonstrated to solve its equations correctly, and to be based on the correct equations for its intended applications, has a high potential of failing to represent physical reality, and that publishing such results will not contribute to advances in understanding and knowledge.

    Regulatory agencies that are responsible for decisions affecting the health and safety of the public will never (and I am aware that I should never say always or never) make policy decisions based on computer software that has not been Verified and Validated and that is not maintained and applied under SQA procedures. Observational data will always, of course, be more important than computer calculations alone.

    If the science community does not begin to implement formal procedures, and instead continues to base information on software that is not formally maintained and applied, regulatory agencies will ignore the calculations. And it is not just the large, complex AOLBGCM codes that will eventually be required to implement these processes and procedures. All software, and all software users, will have to be demonstrated to be Qualified for the analyses for which the software has been designed. Additionally, it is not just the 'main' routines in the large codes that will require this, but also the initial and boundary conditions, the pre- and post-processing routines (run-time options, grid generation, processing of the calculated results, etc.), the qualifications of the users, and the procedures for installing the software on a user's computer system. That is, all aspects of the codes, users, applications, and results.

    Can anyone point me to the editorial policies that address Verification, Validation, and software Q/A issues for the professional science organizations and the associated technical journals?

  3. Lab Lemming Says:

    Roger, a more transparent editorial process would reduce the ability of editors to use reviewers as scapegoats, and generally inhibit the finger-pointing and obfuscation tactics on which their profession relies.

    Like a three-handed magician, editors use flourishes and policies to make the credulous stare at the authors and reviewers, so that they miss the editorial shenanigans being performed with hand number three.

  4. Steve Hemphill Says:

    Hmmm.

    “This examination should include especially high standards for providing primary data, a clear understanding of all of the authors’ and coauthors’ contributions to the paper and a careful examination of data presented in the papers.”

    How could this be enforced, or at least how could the problem be illustrated with some high-profile existing examples?

  5. Patrick J Says:

    Not being a regular reader of Science, the first question that occurs to me is: what are these "several recent fraudulent papers that have been published in Science"?