Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?

Archive for April, 2013
In a previous post, I commented on an article in Nature by Robert Antonucci, in which he complained that too many scientists do not spend enough time thinking, and are only too willing to accept what is in the literature without checking. This was followed by another article, "Scientists are snobs", in which Keith Weaver asserted that there is a further problem: scientists are only too willing to believe that the best work comes from the best institutions. This too is a serious issue, if true.
 
Specifically, he complained that:
(a) Scientists prefer to cite the big names, even when lesser-known names made the discovery and the big names merely used it later. This may well be through sloth, a failure to do a proper literature search, and in some ways it may seem not to matter. The problem is, it does matter when the original discoverer puts in a funding application. Too few citations, and the work is judged unimportant: no funding! Meanwhile the scientist who did nothing to advance the technique gets the citations, the funding, the conference invitations, and the "honours". The problem is thus made worse by positive feedback (a toy sketch after this list illustrates the effect).
(b)  An individual scientist gains more recognition if they work at a prestigious institution. The implication is that the more prestigious the institution, the better the scientists. There is some truth in the claim that scientists at more prestigious institutions are better, whatever that means, but if so, it is not because they are there; rather, the rich institutions pay more to attract the prestigious scientists.
(c)  Even at conferences, scientists go to hear the "big names" and ignore the lesser names. This is harder to comment on because, having been to many conferences, I know there are some names I want to hear, and many of the "unknowns" can produce really tedious presentations. Sessions tend to be chosen to maximize the chance of getting something from the conference. For me, the problem often comes down to choosing between the big name, who as often as not will present recycled material, and the little name, who may not have anything of substance. Conference abstracts sometimes help, but not always.
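The feedback loop in (a) is a classic rich-get-richer dynamic. Here is a toy Polya-urn-style simulation of it (my own illustration, not Weaver's; the starting counts and iteration number are arbitrary): each new paper cites an author with probability proportional to the citations that author already holds.

```python
# Toy "rich-get-richer" citation model (purely illustrative; parameters
# are arbitrary and not drawn from real citation data).
import random

random.seed(1)
citations = {"big name": 10, "original discoverer": 5}  # small head start

for _ in range(1000):  # each new paper cites in proportion to visibility
    total = sum(citations.values())
    r = random.uniform(0, total)
    pick = "big name" if r < citations["big name"] else "original discoverer"
    citations[pick] += 1

print(citations)  # the initial absolute gap typically widens, not closes
```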
 
What do you think about this? In my opinion, leaving aside the "sour grapes" aspect, Weaver raises an important point. The value of an article has nothing to do with the prestige of its origin. To think otherwise leaves one open to the logical fallacy of argumentum ad verecundiam, the appeal to authority. I wonder how many others fall into the traps Weaver notes? My guess is that everyone is guilty of (c) to some degree, but I do not regard that as a particularly bad sin. However, citing only the big names is a sin. The lesser-known scientist needs citations and recognition far more than the big names do.
 
One might also notice that the greatest contributions to science have frequently come from almost anywhere. In 1905 the Swiss patent office was hardly the most prestigious source of advanced physics, yet contributions from there changed physics forever. What is important is not where the work came from, but what it says. Which gets back to where this post started: scientists should cover less ground and think more. Do you agree?
Posted by Ian Miller on Apr 29, 2013 12:34 AM BST
Polywater might have been an obvious error for chemistry, but I still ask: what did we learn from it? My guess is, not much. What we eventually realized is that while fused silica does not dissolve in water at any appreciable rate, it does when it forms the surface of a very small capillary. Why? Is it due to the curvature of the surface, or is a micro-column of water somehow more active? A general theory here could be of great help to medicine, or to much of the research into nanotechnology, but such was the scorn heaped on polywater that a potential advance of great significance was thrown out like the baby with the bathwater.
 
In previous posts I mentioned the problem of whether cyclopropane can delocalize its ring electrons into adjacent unsaturation. The textbooks say it can, and this is justified because MO theory says it can. Do you believe that? Are you still convinced when you are told that the computational programs that "settled" this issue were the same ones that asserted that polywater had very significantly enhanced stability? The original MO treatment of cyclopropane was due to Walsh. His concept was that the methylene units were trigonal sp2 centres, with the third sp2 orbital of each carbon forming a three-orbital overlap at the centre of the ring system. This left a p orbital on each methylene to overlap side-on with the p orbitals of the other two methylene carbon atoms. Since only two electrons were in the three-centre bond, four electrons remained for the three p-orbital bonds, which meant two pairs for three bonds, one such bond being a "non-bond". These were obviously delocalized (assuming the model was correct in the first place), but the p orbitals were also properly aligned to overlap with adjacent p orbitals on unsaturated centres, so conjugation should follow. This was a perfectly good theory because it made predictions; however, it is also imperative that such predictions be tested by observation.
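To make the electron bookkeeping concrete, here is a toy Hückel-style diagonalization of three equivalent, mutually interacting orbitals (my sketch only; the alpha and beta values are arbitrary placeholders, and Walsh's actual treatment was more elaborate):

```python
# Toy Hückel-style sketch of Walsh's three-orbital ring set (illustrative
# only; alpha and beta are arbitrary placeholders, not fitted parameters).
import numpy as np

alpha, beta = 0.0, -1.0  # Coulomb and resonance integrals, arbitrary units

# Three equivalent orbitals, each interacting equally with the other two,
# e.g. the sp2 lobes pointing at the ring centre in Walsh's picture.
H = np.array([[alpha, beta,  beta],
              [beta,  alpha, beta],
              [beta,  beta,  alpha]])

print(np.linalg.eigvalsh(H))  # -> [-2.  1.  1.]
# One low-lying bonding level (alpha + 2*beta) holds the two electrons of
# the three-centre bond; the analogous p-type set must then accommodate
# four electrons, i.e. two pairs over three "bonds", hence the "non-bond".
```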
 
There is an obvious consequence of this theory. Perhaps the biggest reason cited for cyclopropane conjugation is that a cyclopropane ring adjacent to a carbenium ion centre provides additional stabilization of about 100 kJ/mol over otherwise comparable carbenium ions. Electron delocalization might be the reason for this, but if it is, then the p electrons of the cyclopropane ring must become localized, at least to some extent, in the orbitals that can overlap with the carbenium centre; therefore the "non-bond" must become localized, to the same extent, in the distal bond. With less electron density in the distal bond, it should lengthen. There have been alternative MO computations which instead drastically shorten the distal bond, e.g. to 143.6 pm, but significantly lengthen the vicinal bonds, e.g. to 159 pm (J. Am. Chem. Soc. 1982, 104, 2605-2612), although it is far from clear why this change of bond length happens. The predicted lengthening of the vicinal bonds presumably occurs because charge in them is delocalized towards the carbenium ion, but it is unclear to me why the "non-bond" shortens. As it happens, it is not important. A structural study has been carried out on such a carbenium ion: the distal bond is indeed considerably shortened, but the vicinal bonds are not correspondingly lengthened (J. Am. Chem. Soc. 1990, 112, 8912-8920). Accordingly, the computations are wrong. The polarization theory I mentioned in previous posts is in accord with this observation: the vicinal bonds remain unchanged because nothing much changes there, while the distal bond shortens because the positive field allows the electrons in the bond to align better with the internuclear axis.
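For concreteness, the numbers under discussion can be put side by side (the computed values are those of the 1982 JACS paper cited above; the roughly 151 pm reference for an unperturbed cyclopropane C-C bond is a textbook value I have added for comparison):

```python
# Side-by-side look at the computed bond lengths versus the observed trend.
cyclopropane_ref = 151.0  # pm, approximate C-C bond in free cyclopropane

computed = {"distal": 143.6, "vicinal": 159.0}  # pm, 1982 MO computation
observed_trend = {"distal": "shortened", "vicinal": "essentially unchanged"}

for bond, r in computed.items():
    change = 100.0 * (r - cyclopropane_ref) / cyclopropane_ref
    print(f"{bond}: computed {r} pm ({change:+.1f}% vs ring reference); "
          f"observed: {observed_trend[bond]}")
# The computation gets the distal shortening but wrongly predicts ~5%
# vicinal lengthening, which the 1990 structural study does not find.
```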
 
Now, the interesting point is that when the measurement was made, nobody questioned whether the Walsh MO theory might be wrong. Such is the power of established theory that even when observation delivers a result opposite to that predicted, and even when there is clear evidence (from polywater) that the computational methodology behind the prediction is just plain wrong, we do not want to revisit it. Why is this? A general lack of interest in why things happen? Simple sloth? Who knows? More to the point, who cares?
 
Posted by Ian Miller on Apr 22, 2013 5:49 AM BST
I believe that just because everybody thinks standard theory is quite adequate, that is no excuse to reject a non-standard theory. On the other hand, many will argue that there is no need to fill the literature with nonsense, so where do we draw the line? In my opinion, not in the right place, and part of the reason is that a certain rot in refereeing standards set in following the polywater debacle. Polywater was an embarrassment, and only too many referees did not, and do not, want to be associated with a rerun. That, however, is no reason to adopt the "Dr No" syndrome, namely that rejection guarantees the absence of a debacle. That policy would certainly have led to the rejection of Einstein's "On the Electrodynamics of Moving Bodies". He was describing the dynamics of bodies without electric charge! And as for common sense, he was abandoning the principles of Galilean relativity and of Newton's laws of motion, both of which were "obviously correct". (Actually, he was abandoning the concept of instantaneous action at a distance, which nobody really believed anyway.)
 
Anyway, back to polywater. This unfortunate saga began when Nikolai Fedyakin condensed water in, or repeatedly forced water through, quartz capillaries, following which Boris Deryagin improved the production techniques (although he never produced more than very small amounts) and determined a freezing point of −40 °C, a boiling point of ≈150 °C, and a density of 1.1-1.2. This was not water, but what else could it be? Everyone "knew" quartz was inert to water, and there was no explanation other than that the water had polymerized. Unfortunately, nobody thought to run an analysis for silicon. There followed the collection of considerable amounts of data, and in general these were correct (although the collection of an IR spectrum of sweat was probably not a highlight of science). Meanwhile a vast number of theoretical calculations emerged to "prove" the existence of polywater.
 
So what went wrong? Apart from the absence of an analysis, not much initially. The referees had to accept that the experimental work was done satisfactorily. The computational work was simply a case of "jump on the bandwagon and verify what was known". Unfortunately, what was "known" was wrong. Nevertheless, the question might be asked: should the referees have permitted the computational papers? What the papers gave was the assertion that a certain program was applied, and this is what came out. In general, the assumptions were never clearly stated, nor were the consequences of the assumptions being wrong. The major problem with the computations was that, being based on molecular orbital theory, the proposed systems were assumed to be delocalized, and the calculations duly showed they were. As Aristotle remarked, concluding what you assumed is not exactly a triumph.
 
The consequences of this unfortunate sequence of events were as follows:
(a)  Experimenters' careers were wrecked.
(b)  Computationalists' careers were unaffected. John Pople was relatively prominent in showing why there was considerable stability in water polymers, but that did not hinder his career (although his work on polywater did not feature strongly in his Nobel citation).
(c)  When the error was exposed, work ceased. Nobody was ever interested in trying to work out why water in a constrained space dissolved silica.
(d)  Little or no genuinely different theoretical work emerged in chemistry following polywater.
(e)  Most importantly, nobody ever stated what went wrong within the computations. In short, we learned nothing, or at least the general chemical community learned nothing.
 
The question that must be asked regarding (d) is: was this because there is no further scope for theory in chemistry and all we can do now is deploy computational programs, because the referees killed any attempts, or because chemists simply lost interest? Your views?
Posted by Ian Miller on Apr 15, 2013 1:59 AM BST
For me, the most important papers found during March related to the oxidation state of the Earth during accretion. In my ebook, Planetary Formation and Biogenesis, I argued that the availability of reduced organic material is critical for biogenesis, and that as far as carbonaceous and nitrogenous materials were concerned, the Earth's mantle was reducing. Part of the reason is that the isotope composition of Earth's materials is closest to that of enstatite chondrites, which are highly reducing, and that meteorites originating from bodies closer to the Sun than the asteroid belt have increasingly reduced compositions; thus phosphorus occurs as phosphides. A further reason is that water reacting with the ferrous ions in many olivines produces hydrogen, and this is the source of methane of geochemical origin. The great bulk of the outer Earth has reduced iron, e.g. in the ferrous state in olivines and pyroxenes, and the overall oxidation state of a closed system is constant. The Earth is gradually oxidizing because water reacts with ferrous iron to make ferric iron and hydrogen, and while hydrogen in the presence of carbon or nitrogen makes reduced compounds, it can also be lost to space. Geologists seem very keen on an oxidized mantle and argue that the gases initially produced by volcanoes were carbon dioxide and molecular nitrogen.
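As a minimal illustration of the chemistry invoked here, taking fayalite (the all-ferrous olivine end-member) as the reductant, the standard serpentinization-type stoichiometry (my addition, not from the post) shows how water oxidizes ferrous iron and liberates hydrogen:

```latex
% Oxidation of ferrous iron (here as fayalite, Fe2SiO4) by water:
\[
3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \;\longrightarrow\;
2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}
\]
```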
 
The first of the papers (Siebert et al., Science 339: 1194-1197) argued that the abundances of certain slightly siderophile elements such as V and Cr are better explained by initially oxidizing conditions, which were subsequently reduced to present values by transfer of oxygen to the core. The authors argue that reduced conditions lead to more Si in the core than is compatible with seismic measurements. For me, there were a number of difficulties with this argument, one being that too many components known to be present were left out of the calculations; another is that the effect of water seemed to be omitted. Water would oxidize silicon, thus reducing the amount available to the core, and make hydrogen. In the second paper, Vočadlo (Nature 495: 177-178) carried out a theoretical study using the conditions at the present boundary between the inner and outer core (330 GPa and temperatures up to 6000 K) and argued that Si is equally probable in the inner solid core and the outer liquid core, with iron oxide also present to account for oxygen. Perhaps, but the seismic properties and density of the core have yet to be matched with this proposal. It is also not exactly clear how the properties ascribed to components under these conditions were obtained (there will be no experimental data!) and, finally, these calculations too left out a number of components, including nickel.
 
Two papers were more helpful to my cause. Bali et al. (Nature 495: 220-222) showed that water and hydrogen can exist as two immiscible phases in the mantle, which explains why there can be very reducing conditions even while the upper mantle appears oxidized with respect to minor components like V and Cr. Meanwhile, Walter and Cottrell (Earth Planet. Sci. Lett. 365: 165-176) note that while multi-variable statistical modeling of siderophile element partitioning between core-forming metallic liquids and silicate melts forms the basis of physical models of core formation, the experimental data are too imprecise to discriminate between current models, and variations in the statistical regression of the partitioning data exert a fundamental control on the physical model outcomes. Such modeling also invariably depends on the assumption of a magma ocean.
 
To summarize, on balance I do not think these papers falsify my proposal, although some geologists may not agree with that assessment. On the other hand, in slightly better news for my proposal, NASA Science announced that Curiosity had drilled into a sedimentary rock in Gale Crater, at a place where water was assumed to have formed a small lake, and found within the rock nitrogen, hydrogen, oxygen, phosphorus and carbon: elements necessary for forming life. What I found important was the presence of nitrogen, because that almost assures us that there was originally reduced nitrogen, as my proposal requires. The nitrogen is most unlikely to have come from N2 in the current atmosphere, because the atmosphere contains so little of it; only a radically different atmosphere in earlier times could have delivered sufficient nitrogen to be fixed in the rock. The nature of the clay present is consistent with water of relatively low salinity weathering olivine. Also present was calcium sulphate, which is suggestive of neutral or mildly alkaline conditions at the time. Link:
http://science.nasa.gov/science-news/science-at-nasa/2013/12mar_graymars/
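To put a rough number on "so little of it", here is a back-of-the-envelope column estimate (my own, using standard textbook figures for present-day Mars rather than anything in the NASA report):

```python
# Back-of-envelope column abundance of N2 in the present Martian atmosphere.
# Assumed textbook values: mean surface pressure ~600 Pa, N2 fraction ~2%,
# surface gravity 3.71 m/s^2.
P_surface = 600.0  # Pa, mean Martian surface pressure
f_N2 = 0.02        # approximate N2 fraction of the atmosphere
g_mars = 3.71      # m/s^2, Martian surface gravity

column_mass = P_surface * f_N2 / g_mars  # kg of N2 per m^2 of surface
print(f"~{column_mass:.1f} kg of N2 above each square metre of Mars")
# -> of order a few kg/m^2, versus roughly 7800 kg/m^2 of N2 above each
#    square metre of Earth: hence today's atmosphere is a poor nitrogen
#    source, and an earlier, denser atmosphere is the plausible one.
```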
Posted by Ian Miller on Apr 8, 2013 3:17 AM BST