Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?

"Scientists are snobs"?

In a previous post, I commented on an article in Nature by Robert Antonucci, in which he complained that far too many scientists do not spend enough time thinking and are only too willing to accept what is in the literature without checking it. This was followed by another article, by Keith Weaver, entitled "Scientists are snobs", in which he asserted that there is a further problem: scientists are only too willing to believe that the best work comes from the best institutions. This is also a serious issue, if true.
 
Specifically, he complained that:
(a) Scientists prefer to cite the big names, even when lesser-known names made the discovery and the big names merely used it later. Yes, this may well be through sloth and a failure to do a proper literature search, and in some ways it may seem not to matter. The problem is, it does matter when the original discoverer puts in a funding application. Too few citations, and the work is obviously not important – no funding! Meanwhile, the scientist who did nothing to advance the technique gets all the citations, the funding, the conference invitations, and the "honours". The problem is thus made worse by positive feedback.
(b)  An individual scientist gains more recognition if they work at a prestigious institution. The implication is that the more prestigious the institution, the better the scientists. There is some truth in the claim that scientists at more prestigious institutions are better, whatever that means, but if so, it is not because they are there; rather, the rich institutions pay more to attract the prestige scientists.
(c)  Even at conferences, scientists go to hear the "big names" and ignore the lesser names. This is harder to comment on because, having been to many conferences, I know there are some names I want to hear, and many of the "unknowns" can produce really tedious presentations. Choosing sessions tends to be about maximizing the chance of getting something from the conference. For me, the problem often comes down to choosing between the big name, who as often as not will present recycled material, and the little name, who may not have anything of substance. Conference abstracts sometimes help, but not always.
 
What do you think about this? In my opinion, leaving aside the "sour grapes" aspect, Weaver raises an important point. The value of an article has nothing to do with the prestige of where it came from. To think otherwise leaves one open to the logical fallacy of argumentum ad verecundiam (the appeal to authority). I wonder how many others fall into the trap Weaver notes? My guess is that everyone is guilty of (c) to some degree, but I do not regard that as a particularly bad sin. However, only citing big names is a sin. The lesser-known scientist needs citations and recognition far more than the big names do.
 
One might also notice that the greatest contributions to science have frequently come from almost anywhere. In 1905 the Swiss patent office was hardly the most prestigious source of advanced physics, yet contributions from there changed physics forever. What is important is not where the work came from, but what it says. Which gets back to where this post started: scientists should cover less ground and think more. Do you agree?
Posted by Ian Miller on Apr 29, 2013 12:34 AM Europe/London
