Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


Latest Posts

Recently we have seen on the American Chemical Society website a sign: “Publish Be Found or Perish”. This rings a bell with me because there is a similar discussion going on among book authors. Yes, you have to write something that is worth reading, whether it is a book or a scientific paper, but the whole exercise is a waste of effort unless someone reads the work, and by definition quality has nothing to do with attracting that first reading, because if you do not know what is in the paper or book, you do not know whether it has quality or not.
 
So how, as a scientist, do you get discovered? The short answer from me is: I do not know. With books, the best answer seems to be, “Get lucky!” The second best is, “Be persistent!”, which is, of course, what you have to do to maximize your chances of getting lucky. Each time you do something there is a certain chance it will be noticed, so the more you do, the more chances you have. Publishing scientific papers in top journals probably helps; if you have a sequence of papers in one journal that is well read in that topic, your name will eventually be recognized. Conference presentations probably help too, because by circulating, people put a face to your name.
 
Does anyone see a problem here? The people who end up being found are the academics with lots of students working for them and with good budgets for going to conferences. The problem is, those who are found that way are those who are known anyway. It becomes very difficult for the young scientist to be discovered other than through association with someone famous, and to be discovered by association means the newcomer has almost certainly adopted the approach of the mentor, otherwise his or her name would not be on the papers. This reinforces the workings of “normal science” as defined by Kuhn, but the question then is, is that the way we want science to work? Do we want uniform acceptance of the current paradigm, or do we want to see whether we are missing something? The ones more likely to be original are the young scientists, because they have less invested in the current paradigm, but they are also the least likely to be found.
Posted by Ian Miller on May 20, 2013 5:25 AM BST
I recently became involved in a discussion on how to write a scientific paper, and the first thing I had to concede was that on the whole we do not do this well, and sometimes it is written almost as if the author said to him/herself, "Nobody will read it anyway." In many cases it may not matter. Many papers are written to archive an observation, or a procedure, such as how to synthesize something. These involve putting things down in the order that they were done, and making sure all terms are defined. The writing style probably does not matter much, because the only people who will read this are those who wish to either use the observation or to follow the synthesis. The first group will accept the statement and the second will have to work through it, no matter what.
 
More difficult is when you have to interpret what you found. An obvious example is structure determination. One problem is that there will be several interpretations of any given observation. The usual approach is to eliminate them one by one, usually in a sequence of experiments, and if that is what you did, that is how you must report it. The major pitfalls are a failure to eliminate all possible alternatives, in which case the report is unconvincing, or alternatively the alternatives are eliminated but the eliminations are scattered and too difficult to keep in mind all at once, in which case the argument runs the danger of being confusing. A common problem is the presentation of data that supports your hypothesis; it may, but it may equally support something else.
 
Time for a confession! I once wrote a series of papers on the substitution patterns of red algal galactans. Prior to these, structural elucidation was very difficult. The earlier procedures involved breaking the molecules, or substituted derivatives of them, into fragments, further fragmenting a sequence of those fragments, and inferring the overall structure from the resultant pieces. Because there were so many different aspects that had to be kept in mind, in one paper I wrote with two others the sentences in places became so complicated that later even I had trouble working out what they meant. So I came up with an answer: represent everything mathematically. Rather than getting to the structure linearly, I carried out a number of different operations on the parent molecule, and from NMR spectral data each operation was consistent with a set of structures. The real structure was given when the intersection of all the sets contained one element. I then wrote papers representing structural units as matrices and data as ordered sets. The mathematical manipulation was unambiguous. The problem was that the rather restricted audience was not very happy with discrete mathematics, and eventually an editor told me to stop doing that or else the papers would be rejected. As it happened I did not much care, so I stopped publishing; I was self-employed, this activity was bringing in no income, and so the decision was not that difficult. The real shame was that the methodology was just becoming productive.
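For what it is worth, the intersection idea can be illustrated with a trivial sketch (illustrative only: the operation names and candidate labels below are hypothetical, not the original data, and the real work represented structural units as matrices rather than simple labels):

    # Each operation on the parent polysaccharide is consistent with a set of
    # candidate substitution patterns; the structure is established when the
    # intersection of all the sets contains exactly one element.
    operation_results = {
        "partial_hydrolysis":   {"G4S-A", "G4S-A6S", "G2S-A"},
        "nmr_anomeric_region":  {"G4S-A", "G4S-A6S"},
        "desulfation_then_nmr": {"G4S-A"},
    }

    candidates = set.intersection(*operation_results.values())
    if len(candidates) == 1:
        print("Structure determined:", candidates.pop())
    else:
        print("Still ambiguous:", candidates)

The point is exactly the one made above: the conclusion is forced by the intersection of the sets, not by a narrative argument.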
 
Nevertheless, this raises the question: what concessions should be made to the reader? My view at the time was that the statement of what I believed the structures to be should be put down in the simplest form possible. However, while how I deduced them should be as clear as possible, I thought it was not unreasonable to expect some effort from those who wished to question the structures. My view was that putting the arguments leading to the structures down mathematically was optimal, because every logical step is explicit and unambiguous. There is no question of acceptance on trust; to disagree, you must show that some step does not follow. However, many scientists did not agree with such an approach, preferring comfortable sentences, which will generally be read without being questioned. What do you think? Be unambiguous but have few readers, or be comfortable but with questionable ability to convince?
Posted by Ian Miller on May 13, 2013 3:18 AM BST
My ebook, "Planetary Formation and Biogenesis" was first published on Amazon 1 year ago, it argues that quite a lot of the standard theory needs rethinking, in particular that initial accretion is dependent on chemistry, not gravity, and while I have found a number of otherwise puzzling observations for the standard theory, as far as I can tell, nothing I have found contradicts my propositions. Readers may forgive me, but I find that rather satisfying. Part of the reason, of course, might be that the year has been relatively quiet regarding discoveries. That will change, because it is inevitable that sooner or later a large number of papers will come out regarding findings from Curiosity. That will be far more critical as far as my ideas go. A further possibility is that the theory is somewhat elastic, and hence difficult to falsify. That is true in some ways. There are a number of options for planets, but once one is chosen, there are very specific consequences. Unfortunately, some of those are as yet too difficult to test, which may also be why the theory has survived!
 
The most interesting evidence came from the Kepler satellite. It discovered (Science 340: 262) that Kepler 62 has five planets ranging from half to twice Earth's diameter, at 0.715, 0.427, 0.12, 0.0929 and 0.055 A.U., around a star of 0.69 times the sun's mass. It is estimated that the outer one is in the centre of the habitable zone, and the next one in possibly so. The question then is, are these truly rocky planets, or are they ice planets sent inwards according to one of the proposed mechanisms? If they are all rocky planets and were spaced according to my "expectation prediction" (which requires the star to have accreted at a rate proportional to its stellar mass squared; this is observed, but only loosely, so there should be a range around the expectation position), the typical planet equivalents should be: Mars-type 0.58, Earth-type 0.328, Venus-type (if there is one, and this is somewhat flexible) 0.22, Mercury-type 0.12. (This also assumes the secondary accretion rate, critical for exactly how the rocky planet evolves, was similarly scaled to our star, and observational evidence shows a possible order-of-magnitude deviation each way.) If the outer planet is the Earth-type, then the theory predicts that accretion was significantly faster, any Venus-type should be at 0.47 A.U., Mercury at 0.22 A.U., and there should be a Mars-type at about 1.14 A.U. Additional inner planets (Vulcans, which are predicted to be Mercury-like) would seem unlikely, as the temperatures would grow too hot over a shorter radial difference. If the 0.427 A.U. planet is the Earth-type, then accretion was slower and more material was available, in which case the Mars-type should be at about 0.68 A.U., the Venus-type at about 0.28 A.U., and the Mercury-type at 0.15 A.U. That agreement is not too bad, and the slower accretion rates could permit Vulcans. On the other hand, some of these bodies could be quite different without violating the theory, because if the accretion is slow, a variety of additional options might arise. Their densities should define their nature.
 
Slower stellar accretion rates permit planetary bodies to grow bigger, at which point they interact chaotically. It is generally considered that one major body (Theia) collided with Earth and formed the Moon. However, it is possible that modest-sized bodies collided and retained much of their structure, provided the collisions were not too violent. There is evidence this occurred (Science 340: 22-24). The Earth's deep mantle behaves as if there are two major piles of different composition, one below Africa and one mainly below the South Pacific; plumes rise from their edges and give rise to the volcanic islands. These piles are thousands of kilometers across, but their composition remains unknown. An important point is that these "piles" are denser than much of the remaining mantle. Within my proposition, this suggests that they accreted closer to the star than the bulk of the Earth (which increases the pyroxene content), differentiated, and eventually collided with Earth. The increased density arises through shedding aluminosilicates during collisions, including shedding them to Earth's crust. Is that right? That is unknown, but it is an interesting thought, at least for me.
Posted by Ian Miller on May 6, 2013 4:08 AM BST
In a previous post I commented on an article in Nature by Robert Antonucci, in which he complained that too many scientists do not spend enough time thinking and are only too willing to accept what is in the literature without checking. This was followed by another article, "Scientists are snobs" by Keith Weaver, which asserted that there is another problem: scientists are only too willing to believe that the best work comes from the best institutions. This is also a serious issue, if true.
 
Specifically, he complained that:
(a) Scientists prefer to cite the big names, even when smaller names made the discovery and the big names merely used it later. Yes, this may well be through sloth and not doing a proper literature search, and in some ways it may seem not to matter. The problem is, it does matter when the original discoverer puts in a funding application. Too few citations and the work is obviously not important, so no funding! Meanwhile the scientist who did nothing to advance the technique gets the citations, the funding, the conference invitations and the "honours". The problem is thus made worse by positive feedback.
(b)  An individual scientist gains more recognition if they work at a prestigious institution. The implication is, the more prestigious the institution, the better the scientists. There is some truth in this, in that some scientists at more prestigious institutions are better, whatever that means, but if so it is not because they are there; rather, the rich institutions pay more to attract the prestigious scientists.
(c)  Even at conferences, scientists go to hear the “big names” and ignore the lesser names. This is harder to comment on because, having been to many conferences, I know there are some names I want to hear, and many of the “unknowns” can produce really tedious presentations. Sessions tend to be chosen to maximize the chance of getting something from the conference. For me, the problem often ends up being a choice between the big name, who as often as not will present recycled material, and the little name, who may not have anything of substance. Conference abstracts sometimes help, but not always.
 
What do you think about this? In my opinion, leaving aside the “sour grapes” aspect, Weaver raises an important point. The value of an article has nothing to do with the prestige of where it came from; to think otherwise leaves one open to the logical fallacy of argumentum ad verecundiam. I wonder how many others fall into the traps Weaver notes? My guess is that everyone is guilty to some degree of (c), but I do not regard that as a particularly bad sin. However, only citing big names is a sin: the lesser-known scientist needs citations and recognition far more than the big names do.
 
One might also notice that the greatest contributions to science have frequently come from almost anywhere. In 1905 the Swiss patent office was hardly the most prestigious source of advanced physics, but contributions from there changed physics forever. What is important is not where it came from, but what it says. Which gets back to where this post started: scientists should cover less ground and think more. Do you agree?
Posted by Ian Miller on Apr 29, 2013 12:34 AM BST
Polywater might have been an obvious error for chemistry, but I still ask, what did we learn from it? My guess is, not much. What we eventually realized is that while fused silica does not dissolve in water at any appreciable rate, it does when it forms the surface of a very small capillary. Why? Is it due to the curvature of the surface, or is a micro-column of water somehow more active? A general theory here could be of great help to medicine, or to much of the research into nanotechnology, but such was the scorn thrown at polywater that a potential advance of great significance was thrown out like the baby with the bath water.
 
In previous posts I mentioned the problem of whether cyclopropane can delocalize its ring electrons into adjacent unsaturation. The textbooks say it can, and this is justified because MO theory says it can. Do you believe that? Are you still convinced when you are told that the computational programs that "settled" this issue were the same ones that asserted that polywater had very significantly enhanced stability? The original MO treatment of cyclopropane was due to Walsh. His concept was that the methylene units were trigonal sp2 centres, with the third orbital of each carbon forming a three-orbital overlap at the centre of the ring system. This left a p orbital on each methylene to overlap side-on, in part, with the p orbitals of the other two methylene carbon atoms. Since only two electrons were in the three-centre bond, there were four electrons for the three p-electron bonds, which led to two pairs for three bonds, one such bond being a "non-bond". These were obviously delocalized (assuming the model was correct in the first place), but the p orbitals were also properly aligned to overlap with adjacent p orbitals on unsaturated centres, so conjugation should follow. This was a perfectly good theory because it made predictions; however, it is also imperative that such predictions be tested by observation.
 
There is an obvious consequence to this theory. Perhaps the biggest reason cited for cyclopropane conjugation is that a cyclopropane ring adjacent to a carbenium ion centre provides an additional stabilization of about 100 kJ/mol over other comparable carbenium ions. Electron delocalization might well be the reason for this, but if it is, then the p electrons of the cyclopropane ring must become localized, at least to some extent, in the orbitals that can overlap with the carbenium centre, and therefore the "non-bond" must become localized, to the same extent, in the distal bond. With less electron density in the distal bond, that bond should lengthen. There have been alternative MO computations which drastically shorten the distal bond, e.g. to 143.6 pm, but significantly lengthen the vicinal bonds, e.g. to 159 pm (J. Am. Chem. Soc. 1982, 104, 2605-2612), although it is far from clear why this change of bond length happens. The predicted lengthening of the vicinal bonds presumably occurs because charge in them is delocalized towards the carbenium ion, but it is unclear to me why the "non-bond" shortens. As it happens, it is not important. A structural study has been carried out on such a carbenium ion, and the distal bond is indeed considerably shortened but the vicinal bonds are not correspondingly lengthened (J. Am. Chem. Soc. 1990, 112, 8912-8920). Accordingly, the computations are wrong. The polarization theory I mentioned in previous posts is in accord with this observation: the vicinal bonds remain unchanged because nothing much changes there, while the distal bond shortens because the positive field allows the electrons in that bond to align better with the internuclear axis.
 
Now, the interesting point about this is that when the measurement was made, nobody questioned whether the Walsh MO theory might be wrong. Such is the power of established theory that even when observation brings in a result opposite to that predicted, and even when there is clear evidence (from polywater) that the computational methodology that led to this result is just plain wrong, we do not want to revisit it. Why is this? A general lack of interest in why things happen? Simple sloth? Who knows? More to the point, who cares?
 
Posted by Ian Miller on Apr 22, 2013 5:49 AM BST
I believe that just because everybody thinks the standard theory is quite adequate, that is no excuse to reject a non-standard theory out of hand. On the other hand, many will argue that there is no need to fill the literature up with nonsense, so where do we draw the line? In my opinion, not in the right place, and part of the reason is that a certain rot in refereeing standards set in following the polywater debacle. Polywater was an embarrassment, and only too many referees did not, and do not, want to be associated with a rerun. That, however, is no reason to adopt the "Dr No" syndrome, namely that rejection guarantees the absence of a debacle. That policy would certainly have led to the rejection of Einstein's "On the Electrodynamics of Moving Bodies". He was describing the dynamics of bodies without electric charge! And as for common sense, he was abandoning the principles of Galilean relativity and of Newton's laws of motion, both of which were "obviously correct". (Actually, he was abandoning the concept of instant action at a distance, which nobody really believed.)
 
Anyway, back to polywater. This unfortunate saga began when Nikolai Fedyakin condensed water in, or repeatedly forced water through, quartz capillaries, following which Boris Deryagin improved the production techniques (although he never produced more than very small amounts) and determined a freezing point of about −40 °C, a boiling point of about 150 °C, and a density of 1.1-1.2 times that of ordinary water. This was not water, but what else could it be? Everyone “knew” quartz was inert to water, and there was no explanation other than that the water had polymerized. Unfortunately, nobody thought to do an analysis for silicon. There followed the collection of considerable amounts of data, and in general these were correct (although the collection of an IR spectrum of sweat was probably not a highlight of science). Meanwhile a vast number of theoretical calculations emerged to “prove” the existence of polywater.
 
So what went wrong? Apart from the absence of an analysis, not much initially. The referees had to accept that the experimental work was done satisfactorily. The computational work was simply a case of “jump on the bandwagon and verify what is known”. Unfortunately, what was “known” was wrong. Nevertheless, the question might be asked: should the referees have permitted the computational papers? What those papers gave was the assertion that a certain program was applied, and this is what came out. In general, the assumptions were never clearly stated, nor were the consequences of the assumptions being wrong. The major problem with the computations was that, being based on molecular orbital theory, the proposed systems were assumed to be delocalized, and the calculations duly showed that they were. As Aristotle remarked, concluding what you assumed is not exactly a triumph.
 
The consequences of this unfortunate sequence of events were as follows:
(a)  Experimenters’ careers were wrecked.
(b)  Computationalists’ careers were unaffected. John Pople was relatively prominent in showing why there was considerable stability in water polymers, but that did not hinder his career (although his work on polywater did not feature strongly in his Nobel citation).
(c)  When the error was exposed, work ceased. Nobody was ever interested in trying to work out why water in a constrained space dissolved silica.
(d)  Little or no genuinely different theoretical work emerged in chemistry following polywater.
(e)  Most importantly, nobody ever stated what went wrong within the computations. In short, we learned nothing, or at least the general chemical community learned nothing.
 
The question that must be asked regarding (d) is: was this because there is no further scope for theory in chemistry and all we can do now is deploy computational programs, because referees have killed any attempts, or because chemists have simply lost interest? Your views?
Posted by Ian Miller on Apr 15, 2013 1:59 AM BST
The most important papers for me during March were those relating to the oxidative state of the Earth during accretion. In my ebook, Planetary Formation and Biogenesis, I argued that the availability of reduced organic material is critical for biogenesis, and that as far as carbonaceous and nitrogenous materials were concerned, the Earth's mantle was reducing. Part of the reason is that the isotope composition of Earth's materials is closest to that of enstatite chondrites, which are highly reducing, and that meteorites originating from bodies closer to the star than the asteroid belt have increasingly reduced compositions; thus phosphorus occurs as phosphides. A further reason is that water acting on the ferrous ions in many olivines produces hydrogen, and this is the source of methane of geochemical origin. The great bulk of the outer Earth has reduced iron, e.g. in the ferrous state in olivines and pyroxenes, and the overall oxidation state of a closed system is constant. The Earth is gradually oxidizing because water reacts with ferrous iron to make ferric iron and hydrogen, and while hydrogen in the presence of carbon or nitrogen makes reduced compounds, it can also be lost to space. Geologists seem very keen on an oxidized mantle and argue that the gases initially produced by volcanoes were carbon dioxide and molecular nitrogen.
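To illustrate the hydrogen-producing chemistry referred to above (these are textbook reactions given as an example, not equations taken from the ebook), serpentinization of the iron-rich olivine end-member fayalite, followed by reduction of carbon dioxide by the hydrogen so produced, can be written:

    3 Fe2SiO4 + 2 H2O → 2 Fe3O4 + 3 SiO2 + 2 H2
    CO2 + 4 H2 → CH4 + 2 H2O

The first reaction converts ferrous iron to the mixed ferrous/ferric oxide magnetite and releases hydrogen; the second is one route by which that hydrogen can give methane of geochemical origin.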
 
The first of the papers (Siebert et al., Science 339: 1194-1197) argued that the abundances of certain slightly siderophile elements such as V and Cr are better explained by initially oxidizing conditions, which were subsequently reduced to present values by transfer of oxygen to the core. They argue that reduced conditions lead to more Si in the core than is compatible with seismic measurements. For me, there were a number of difficulties with this argument, one being that too many components known to be present were left out of the calculations, and another that the effect of water seemed to be omitted. Water would oxidize silicon, thus reducing the amount available to the core, and make hydrogen. In the second paper, Vočadlo (Nature 495: 177-178) carried out a theoretical study using the conditions at the present boundary between the inner and outer core (330 GPa and temperatures up to 6000 K) and argued that Si is equally probable in the inner solid core and the outer liquid core, and that iron oxide is also there to account for oxygen. Perhaps, but the seismic properties and density of the core have yet to be matched with this proposal. It is also not exactly clear how the properties ascribed to components under these conditions were obtained (there will be no experimental data!), and finally these calculations left out a number of components, including nickel.
 
Two papers were more helpful to my cause. Bali et al. (Nature 495: 220-222) showed that water and hydrogen can exist as two immiscible phases in the mantle, which explains why there can be very reducing conditions even while the upper mantle appears readily oxidized with respect to minor components like V and Cr. Meanwhile, Walter and Cottrell (Earth Planet. Sci. Lett. 365: 165-176) note that while multi-variable statistical modeling of siderophile element partitioning between core-forming metallic liquids and silicate melts forms the basis for physical models of core formation, the experimental data are too imprecise to discriminate between current models, and variations in the statistical regression of the partitioning data exert a fundamental control on the outcomes of the physical models. Such modeling also invariably depends on the assumption of a magma ocean.
 
To summarize these papers, on balance I do not think they falsify my proposal, though some geologists may not agree with that assessment. On the other hand, and as somewhat better news for my proposal, NASA Science announced that Curiosity has drilled into a sedimentary rock in Gale Crater, at a place where water is assumed to have formed a small lake, and found within the rock nitrogen, hydrogen, oxygen, phosphorus and carbon: elements necessary for forming life. What I found important was the presence of nitrogen, because that almost assures us that there was originally reduced nitrogen, as my proposal requires. The nitrogen is most unlikely to have come from N2 in the atmosphere, because the atmosphere contains so little of it; only a radically different atmosphere in earlier times could have delivered enough nitrogen to be fixed in the rock. The nature of the clay present is consistent with water of relatively low salinity weathering olivine. Also present was calcium sulphate, which is suggestive of neutral or mildly alkaline conditions at the time. Link:
http://science.nasa.gov/science-news/science-at-nasa/2013/12mar_graymars/
Posted by Ian Miller on Apr 8, 2013 3:17 AM BST
In a previous post, I argued that the issue of whether cyclopropane ring electrons are delocalizable was not exactly handled well. By itself, that may seem to be merely an irritant, but the question now is, how widespread is such an issue?
 
In a recent edition of Nature there was an article by Robert Antonucci, (Nature 495: 165 - 167) who argued that the scientific community was failing when trying to explain quasars. A quote: "In my opinion, the greatest limiting factor in understanding quasars is not a lack of intelligence, effort or creativity, nor is it a dearth of fantastic new facilities. It is a widespread lack of critical thought among many researchers. Theories are being published that have already been ruled out by observations. Observers cling to falsified theories when interpreting their data. Most of the AGN community is mesmerized by unphysical models that have no predictive power."
 
This is fairly stern stuff! It got worse. He accused scientists of continuing to use and refine overly simple versions of models that include disproved assumptions and that do not match observations without a lot of special pleading. Observers were not left out: “Some astronomers like to see what they believe.” Even worse was to come. In 1984 a temperature was measured and found to be in accord with the disk accretion model. Later, an amateur found that the calculation had a factor-of-ten error in Newton's gravitational constant! The correction, and the fact that the method now fails to account for the observation, is hardly cited, while the original paper has 100 citations. He complained that this scientific community was producing fewer and fewer theoretical papers, while there is a burgeoning effort to find more examples, leading to statistical analyses that bring further problems, such as claiming causal links in plots of dependent variables.
 
The question now is, is chemistry in any better condition? I do not think so. How much original theory, as opposed to opaque computations, have you seen lately? My guess is, not much. How many of you think there is no further theory to find? I think the problem lies in reductionism. Everyone seems to believe that all chemistry is a consequence of the Schrödinger equation, but that equation cannot be solved exactly for systems of chemical interest, therefore there is no point looking further. That, in my opinion, is simply false. I am not doubting that the Schrödinger equation is generally correct, but that does not mean that the only way to produce theoretical work is to solve it.
 
Final advice from Antonucci: "I urge my junior colleagues to spend 15 minutes every day thinking, palms down, eyes on the ceiling." Follow a Californian bumper sticker: "Don't just do something, sit there".
Posted by Ian Miller on Mar 24, 2013 10:45 PM GMT
Many, if not most, scientists would probably say that you cannot prove a theory to be true; all you can do is falsify a theory, while you believe a theory to be true because all the evidence supports it. This raises the problem: what happens when evidence that contradicts the theory is suppressed?
 
Thus, further to my previous posts, there were additional subtle cues indicating that cyclopropane does not demonstrate conjugation. For example, in a sequence of olefins, pronounced conjugative effects are demonstrated. On the other hand, while a cyclopropane ring adjacent either to positive charge or to potential positive charge (such as arises in UV transitions) gives effects similar to those of conjugative units, if you add a second cyclopropane ring to the first, so that there are two in a row, the second one has no noticeable effect. Yet if we put three cyclopropane rings around a positively charged centre, the effects are very close to being additive, which is not what happens with cross-conjugation.
 
Similarly, with the cyclopropylcarbinyl carbenium ion, you would expect the bond to the carbinyl centre either to make an angle of 120 degrees to the plane of the cyclopropyl ring (as required by the Walsh MO treatment) or to approach that angle as the ion forms, but it does not. Instead, the centre moves towards the cyclopropane ring, as if there were an attractive force pulling it. That, of course, is exactly what should happen with my polarization field. While the fact that cyclopropane stabilized adjacent charge was taken as proof of conjugation, the associated minor details that contradicted that proposition were ignored.
 
An observation can be used to prove a scientific statement, provided you can write it in the form: “If, and only if, theory X is true, then you will observe Y”. The observation of Y then proves theory X is true, as stated. Of course the theory may be incomplete, but it will be true as far as it goes. The problem is to justify the “only if” part of the statement, because how can you know that there is not an alternative that has not been thought of yet?
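Put symbolically (this is just a restatement of the sentence above, not an additional claim), the argument is

\[ (X \Leftrightarrow Y) \wedge Y \;\Rightarrow\; X, \qquad\text{whereas}\qquad (X \Rightarrow Y) \wedge Y \;\not\Rightarrow\; X. \]

Without the "only if" half, inferring X from the observation of Y is the fallacy of affirming the consequent.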
 
The reason I have been writing these posts on cyclopropane conjugation is not to justify my own youth. From a personal point of view, I could not care less whether anyone believes me, although I do feel that everyone should have the opportunity to consider the issue for themselves. If people want to believe the Earth is flat, well, I cannot do much about that. But people cannot form reasonable views on such matters if the “trivial details” that falsify a theory are suppressed. A review should be critical and complete, not merely fashionable. But suppose, you argue, the reviewer does not know about these details? That is why I think we need a new form of review, like a wiki, where everyone can contribute and a number of moderators bring order to what is produced. What do you think?
 
One final comment on this. One reason everyone said cyclopropane conjugates was that they expected it to, because molecular orbital theory, mainly the CNDO/2 version popular at the time, and also the more sophisticated versions of MO theory championed by John Pople, said it would. Remember, molecular orbital theory starts by assuming total electron delocalization, and special reasons are required to produce bond localization. As Aristotle would have said, to find delocalization when you assume it in the first place is not a great achievement. More on this issue later.
Posted by Ian Miller on Mar 18, 2013 1:50 AM GMT
How do you tell which of two theories is likely to be correct? The answer is that each gives a set of predictions, and you have to find an experiment where the two theories predict discernibly different effects. More formally, you cannot state that one theory applies and the other does not from data in the intersection of the two prediction sets. Thus one cannot decide whether cyclopropane conjugates with adjacent unsaturation from the fact that positive charge adjacent to a cyclopropane ring is stabilized, because both the electron delocalization theory and my polarization field theory predict that positive charge will be stabilized. Worse, calculations showed that, to within the uncertainties inherent in each calculation, the two gave essentially the same degree of stabilization: a little over 100 kJ/mol for the bare carbenium ion in a vacuum. On the other hand, the predicted effects were qualitatively opposite for negative charge. As noted in an earlier post, a case could be made that the required destabilization occurred, and there was certainly no evidence of significant stabilization, but it was difficult to call this definitive. Then I got lucky: key evidence was published.
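In set notation (again, just a restatement of the point above), if \(P_1\) and \(P_2\) are the sets of observations predicted by the two theories, then

\[ O \in P_1 \cap P_2 \ \text{cannot discriminate between them;} \qquad \text{a decisive test requires}\ O \in (P_1 \setminus P_2) \cup (P_2 \setminus P_1). \]

Here the stabilization of adjacent positive charge lies in the intersection, whereas the behaviour towards negative charge, and the UV shifts discussed below, do not.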
 
One further piece of evidence sometimes quoted in favour of cyclopropane conjugation was that a cyclopropane ring adjacent to a chromophore generally gives a bathochromic shift and an enhanced extinction coefficient. Now, to absorb electromagnetic radiation and reach the excited state, the system must undergo a change in electric moment, and the probability of a photon being absorbed rises with the size of that transition moment. Thus something like benzene must have an instantaneous dipole moment in the excited state. The net effect is probably most easily seen using the canonical structure representation, even if that is not strictly accurate. The net result is that for most transitions a positive charge can be adjacent to the cyclopropane ring in the excited state, hence the polarization field interpretation predicts a bathochromic shift and an enhanced electric moment, exactly as does the conjugation theory.
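For reference, the standard relation (a textbook result, not specific to this argument) is that the intensity of an absorption band scales as the square of the transition electric dipole moment:

\[ f \;\propto\; \left| \langle \psi_{\text{excited}} \,|\, \hat{\mu} \,|\, \psi_{\text{ground}} \rangle \right|^{2}, \]

where \(f\) is the oscillator strength and \(\hat{\mu}\) the electric dipole operator, which is why anything that increases the change in electric moment on excitation enhances the extinction coefficient.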
 
It was some time after this that, for me, a key observation was made: the change of electric moment was measured for the n → π* UV transition of formaldehyde. The important point was that the change of electric moment was directed from oxygen to carbon, hence the same transition in a carbonyl adjacent to a cyclopropane ring would lead to a change of dipole moment with the negative end directed towards the cyclopropane ring. That change of electric moment would interact with my proposed polarization field, which would lead to any strained system giving a hypsochromic shift to that transition compared with an alkyl substituent. This was important, because it was well known that conjugative effects give a pronounced bathochromic shift to all such transitions; for example, the transition in acrolein has a bathochromic shift of approximately 25 nm relative to a saturated aldehyde. I used my pseudocharge to calculate the magnitude of the hypsochromic shift for some strained systems, and got the shift for cyclopropane to within half a nanometre. (There was probably a certain amount of luck there, because observation of these transitions gives broad signals, and picking the maximum is a little subjective.) Of course I could also calculate proportional shifts for the π → π* transitions, which have bathochromic shifts. An interesting point here is that it was thought that a carbonyl adjacent to the bridgehead of bicyclobutyl had no n → π* transition. According to my calculations it would have one, but it would be buried underneath the π → π* transition, a consequence of the larger shifts due to the higher strain moving the two transitions in opposite directions and thus eliminating most of their separation.
 
So, a triumph? Well, actually, no. Two reviews on the issue of electron delocalization in cyclopropane came out around this period. The first (Bull. Soc. Chim. France 1967, 357-370) simply stated that the hypsochromic shifts occurred, but they were unimportant! The second (Angew. Chem. Int. Ed. Engl. 1979, 18, 809-886) got around this problem by simply ignoring it. It also ignored my work, and worse, it ignored all the references I had found to work that suggested there was no electron delocalization. That is not the science that I signed up for.
 
The problem with reviews is that once one is declared definitive, there is no forum in which to debate it. I later wrote a review that assembled over sixty different types of observation that falsified the delocalization theory, but I could not get it published. Accordingly, the textbooks and I continue to disagree on this matter.
 
Posted by Ian Miller on Mar 11, 2013 2:35 AM GMT