Comments on: McIntyre on Climate Science Policy
http://cstpr.colorado.edu/prometheus/?p=3404

By: Kooiti Masuda (Fri, 18 Feb 2005 05:03:43 +0000)
http://cstpr.colorado.edu/prometheus/?p=3404&cpage=1#comment-921

Sorry for the ugly appearance of my previous post. When it was finally posted, a blank line in the input form was translated as a line break; my trouble was that the line break did not appear in the preview.

K. Masuda

By: Kooiti Masuda (Fri, 18 Feb 2005 04:58:20 +0000)
http://cstpr.colorado.edu/prometheus/?p=3404&cpage=1#comment-920

It is regrettable that affirmative references to McIntyre’s words invited responses like Garry Culhane’s. I think the main issue of this thread is the bigger problem discussed by Jim Kanuth.

Suppose that an analysis (perhaps Mann’s) is good. Even then, it does not appear good until independent confirmations are made. Suppose that someone tries to reproduce the analysis. If the attempt mistakenly produces a result different from the original, there is a chance that it will be published as an original research paper, because reviewers do not usually attempt the analysis themselves. If the attempt produces a result identical to the original, it can hardly be published. Thus we do not usually know how well a given study has been confirmed. This also creates a negative feedback that draws scientists away from such work in the culture of “publish or perish”.

The question of the quality of the analysis by Mann et al., or of that by McIntyre and McKitrick, is another matter that should be discussed elsewhere. I do not have an answer, partly for the reason mentioned above. By the way, my guess (just a guess) is that the analysis by Mann et al. is good for estimating the most likely mean temperature at any given point in time (and thus it seems to be a good scientific paper in paleoclimatology), but that it tends to underestimate temporal variability, as demonstrated by von Storch et al. (2004, Science 306, 679-682), and thus it should not be given too much weight in contexts such as evaluating the climate of the 20th century in a millennial perspective (a toy sketch of this attenuation effect follows this comment).

K. Masuda

in Yokohama (sometimes in Fujisawa), Japan.
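A minimal sketch of the attenuation effect Masuda describes, under toy assumptions: a synthetic “temperature” series with low-frequency variability, a single noisy pseudo-proxy, and ordinary least-squares calibration over a short modern window. None of this is Mann et al.’s actual method or data; it only illustrates why regression-based reconstructions tend to come out flatter than the series they estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "true temperature": a slow (low-frequency) signal plus weather noise.
n_years = 1000
t = np.arange(n_years)
true_temp = 0.5 * np.sin(2 * np.pi * t / 400) + 0.2 * rng.standard_normal(n_years)

# A single pseudo-proxy: linearly related to temperature, plus proxy noise.
proxy = true_temp + 0.5 * rng.standard_normal(n_years)

# Calibrate by OLS regression of temperature on the proxy over the last
# 100 "instrumental" years, then reconstruct the whole period.
cal = slice(n_years - 100, n_years)
slope, intercept = np.polyfit(proxy[cal], true_temp[cal], 1)
recon = intercept + slope * proxy

# A regression-based reconstruction recovers only about r^2 of the true
# variance, so it is systematically flatter than the truth.
print(f"variance of true series:    {true_temp.var():.3f}")
print(f"variance of reconstruction: {recon.var():.3f}")
```

Averaged over many noise realizations, the reconstructed variance comes out near r² times the true variance; that loss of amplitude, demonstrated in a far more realistic climate-model test bed, is the point of the von Storch et al. paper cited above.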

By: Garry Culhane (Thu, 17 Feb 2005 16:22:59 +0000)
http://cstpr.colorado.edu/prometheus/?p=3404&cpage=1#comment-919

You do not “do hockey sticks”? Alas, you do, and rather badly.

The term “hockey stick” has come to stand for the intense debate between those who are charmed or persuaded by a couple of obvious adventurers, and those who support, or at least accept, the work of a group of well-established climate scientists.

You can choose one side or the other, or you can say nothing at all, but to act as though there is a middle ground where you can roost in solitary neutrality is, unhappily, a very definite own goal.

You can make amends by actually reading the “record” that is very much available (except for the original attack by M&M, which seems to have been deleted and replaced by a brand-new version) and then telling us laymen what you think, or you can find someone who can work their way through the PCA and tell us whether William Connolley is correct (a toy PCA sketch follows this comment). Or you could assist us in some other way that does not include offering a podium for McIntyre to flaunt a newly found respectability (imagine a mining man flouncing across the stage in a tutu).

But if you will do it, and surely you can bend your efforts to help the unlettered rest of us, please do it right.

Garry Culhane
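For anyone who wants to “work their way through PCA,” here is a minimal, hypothetical sketch of the centering question at the heart of the M&M critique: conventional PCA subtracts each series’ full-period mean, while the disputed variant centers each series on only a short closing segment. Applied to trendless red noise, the short-centered variant tends to yield a first principal component with a bent “blade” at the end. This illustrates the statistical point only; it is not a reproduction of either side’s actual analysis, and all parameters below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def pc1(data, center_last=None):
    """PC1 time series of an (n_years x n_series) data matrix.

    center_last=None -> subtract each series' full-period mean (conventional).
    center_last=k    -> subtract the mean of only the last k years
                        (the disputed short-segment centering).
    """
    if center_last is None:
        centered = data - data.mean(axis=0)
    else:
        centered = data - data[-center_last:].mean(axis=0)
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, 0] * s[0]

def hockey_stick_index(series, blade=80):
    """Departure of the closing 'blade' from the long-term mean, in sigmas."""
    return abs(series[-blade:].mean() - series.mean()) / series.std()

def red_noise(n_years, n_series, phi=0.9):
    """Trendless AR(1) 'red noise' series with persistence phi."""
    out = np.zeros((n_years, n_series))
    shocks = rng.standard_normal((n_years, n_series))
    for i in range(1, n_years):
        out[i] = phi * out[i - 1] + shocks[i]
    return out

full_hsi, short_hsi = [], []
for _ in range(20):                        # 20 independent noise worlds
    data = red_noise(600, 70)
    full_hsi.append(hockey_stick_index(pc1(data)))
    short_hsi.append(hockey_stick_index(pc1(data, center_last=80)))

# Short-segment centering preferentially loads PC1 onto series whose
# closing years happen to drift away from their long-term level, so the
# index comes out larger even though the inputs contain no signal at all.
print(f"mean |HSI|, full centering:  {np.mean(full_hsi):.2f}")
print(f"mean |HSI|, short centering: {np.mean(short_hsi):.2f}")
```

Whether that statistical artifact materially changes the final reconstruction is exactly the part of the dispute that a toy script cannot settle.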

By: Jim Kanuth (Tue, 15 Feb 2005 01:32:37 +0000)
http://cstpr.colorado.edu/prometheus/?p=3404&cpage=1#comment-918

Tom Rees writes, “In science, nobody gets paid for checking someone else’s numbers,” which is accurate today, but wasn’t always the case.

Thirty years ago, a significant part of graduate education was duplicating published experimental results, both as educational development and as the routine fact-checking mechanism of American scientific practice. These days, any professor who put grad students to work duplicating already-published data would soon be drummed out for not attracting “extramural funding” or contributing to the students’ publication counts.

In the rush to commercialize academic research since the early 1980s, we have suffered the unintended consequence of losing one of our major fact-checking mechanisms.

Even in quantum physics, a Bell Labs researcher got away with falsifying results for years before someone read two particular papers in succession and realized that random noise in the detectors shouldn’t be identical in two different experiments.

I’m not sure what the answer to the dilemma is. There should be some way of checking anything that is going to make a difference to a particular audience (whether financial, policy, restrictions on freedom of action, whatever). Most major publishers have a rule requiring that raw data be provided, after publication, to anyone with a legitimate interest, under penalty of having the paper withdrawn, but it obviously has no teeth, as evidenced by the difficulty people have had getting at Mann et al.’s raw data years after first publication.

By: Tom Rees (Mon, 14 Feb 2005 16:11:33 +0000)
http://cstpr.colorado.edu/prometheus/?p=3404&cpage=1#comment-917

Statistical errors in published papers are common in all disciplines, I imagine. Certainly it’s true for medicine (see http://www.biomedcentral.com/content/pdf/1471-2288-4-13.pdf), and medicine is big business, of course. The real check comes when others produce their own studies. In science, nobody gets paid for checking someone else’s numbers; you have to go out and do your own work to get noticed. It’s only when two or more papers disagree that errors start to get picked up.
