Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


Another month, and my alternative theories on planetary formation are still alive. Most of the information that I could find was not directly relevant, but nevertheless there were some interesting papers.
 
One piece of interesting information (Science 341: 260-263) is that analysis of the isotopes of H, C and O in the Martian atmosphere by the Curiosity rover, compared with carbonates in meteorites such as ALH 84001, indicates that the considerable enhancement of heavy isotopes largely occurred prior to 4 Gy BP, and while some atmospheric loss will have occurred since, the atmosphere has been more or less stable since then. This is important because there is strong evidence for many river flows, etc., on the Martian surface after this period, and such flows require a significantly denser atmosphere simply to maintain pressure, and a very much denser atmosphere if the fluid is water, since the temperature then has to exceed 273 K. If the atmosphere were gradually ablated to space, the heavy isotopes would be further enhanced, so it appears that did not happen after 4 Gy BP. If there were such an atmosphere, it had to go somewhere other than space. As I have argued, underground is the most likely sink, but only if nitrogen was not in the form N2. Nor could the atmosphere have been blasted away by a massive collision, the reason being that there are no craters big enough that formed after the fluvial activity.
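The link between gradual escape to space and heavy-isotope enhancement is commonly modelled as Rayleigh fractionation: light isotopes escape preferentially, so the remaining gas becomes enriched in heavy ones. A minimal sketch of that reasoning (my own illustration, not from the paper; the fractionation factor is an assumed, illustrative value):

```python
# Rayleigh fractionation: as an atmosphere is gradually ablated to space,
# light isotopes escape preferentially, enriching the remainder in heavy ones.
# R/R0 = f**(alpha - 1), where f is the fraction of gas remaining and
# alpha < 1 is an (assumed, illustrative) escape fractionation factor.

def heavy_isotope_enrichment(f_remaining, alpha=0.8):
    """Heavy/light isotope ratio relative to the initial ratio."""
    return f_remaining ** (alpha - 1.0)

# Losing 90% of the atmosphere (f = 0.1) with alpha = 0.8:
enrichment = heavy_isotope_enrichment(0.1)
```

The argument in the paper runs this logic in reverse: since no further enrichment is seen after 4 Gy BP, there cannot have been much further escape to space after that date.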
 
There was one interesting piece of modeling to obtain the higher temperatures required for water to flow (Icarus 226: 229–250). The Martian hydrological cycle was modeled, and provided there is > 250 mbar of CO2 in the atmosphere, the model gives two "stable" states: cold and dry, or warm and wet, the heat being maintained by an extreme greenhouse effect arising from cirrus ice crystals of size > 10 μm, even with the early "cool sun". One problem is where the CO2 came from: while it is generally considered that Earth's volcanoes give off CO2, most of that CO2 comes through subduction, and Mars did not have plate tectonics. Whether this model is right remains to be seen.
 
There was one paper that annoyed me (Nature 499: 328–331). The problem is that if Earth formed from collisions of protoplanetary embryos, the energy would have emulsified all silicates, and the highly siderophile elements (those that dissolve in liquid iron) should have been removed to the core nearly quantitatively. Problem: the bulk silicates still have these elements. Analyses of mantle-type rock show chalcogen ratios similar to Ivuna-type carbonaceous chondrites, but significantly different from ordinary and enstatite chondrites. The authors argue that the chalcogens arrived in a "late veneer", and that this contributed between 20 and 100% of the water on Earth. What has happened is that the authors carried out a series of analyses of rocks, and to make their results seem credible, Earth had to be selectively but massively bombarded with one sort of chondrite, but none of the more common ones. Why? The only reason they need this rather strange selection is that they assumed the model in which Earth formed through the collision of planetary embryos. If the Earth accreted by collecting much smaller objects, as I suggest, the problem of the chalcogens simply disappears. It is interesting that the formation of planets through the collision of embryos persists, despite reasonable evidence that the rocky planets formed in about 5 My or less, that the Moon formed after about 30 My due to a collision with something approaching embryo size, and that modeling shows formation through such embryo collisions takes about 100 My. The time required is far too long, and the evidence is that when there is such a collision, the net result is loss of mass, except possibly from the core.
 
A paper in Angew. Chem. Int. Ed. (DOI: 10.1002/anie.201303246) showed a convincing mechanism by which hydrogen cyanide can be converted to adenine. This is of particular interest to me because my suggested mechanism for the formation of ATP and nucleic acids is also photochemically assisted. If correct, life would have commenced in vesicles or micelles floating on water.
 
On a positive note (Nature 499: 55–58), the authors noted that while most stars form in clusters, some are in loose clusters with stellar densities of less than 100 stars per cubic parsec. One worry had been that stars born in loose clusters might be the only ones that can retain planets; however, the authors report transits of two sun-like stars in a dense cluster, which shows that planets can survive in such a cluster, and that the frequency of planet formation is independent of the cluster density. This makes extrasolar planets very much more probable.
Posted by Ian Miller on Jul 29, 2013 3:08 AM BST
I have another blog, to support my literary efforts, and one of the issues I have raised there is climate change. I originally raised this to show how hard it is to predict the future, yet in some ways this is a topic that is clearer than most, while in others I find it more confusing than most. It seems to me there are a number of issues that have not been put sufficiently clearly to the public, and the question here is: what should scientists do about it, individually or, more importantly, collectively? Is this something that scientific societies should try to form a collective view on?
 
One thing that is clear is that all observable evidence indicates that the planet is warming. Are so-called greenhouse gases contributing? Again, the answer is almost certainly yes. The physics is reasonably clear, even if presentations of it to the public often deviate somewhat from the truth. Are the models correct? My guess is no; at best they are indications. Are carbon dioxide levels increasing? Yes. Our atmosphere now has 400 ppm of carbon dioxide, up from 280 ppm at the beginning of the industrial revolution. I think, on balance, however, that most of the public are reasonably well informed on what the so-called greenhouse effect is.
 
I am not convinced, however, that some of the aspects have made an adequate impact. For me, the biggest problem is sea-level rise. There is considerable net melting of the Greenland ice sheet, and in every one of the last four interglacials there is evidence that the Greenland ice sheet melted and sea levels were 7 meters higher. That was when carbon dioxide levels were 280 ppm. Now, look at Google Earth and see how much land disappears if the sea is 7 meters higher. It swamps most port cities, and takes out a lot of agricultural land. Check Bangladesh: a very large part goes. Holland is also in bad shape. Worse, if the climate scientists are correct at their more pessimistic greenhouse estimates, the 400 ppm will take out a significant fraction of the Antarctic ice sheets, and that could lead to something like a 30 meter sea-level rise. Now, if such a sea-level rise occurs, where do all those people go?
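The 7 metre figure is roughly what you get from the volume of the Greenland ice sheet itself. A back-of-envelope check; the ice volume, ice density and ocean area below are commonly quoted approximate values, not figures from any one paper:

```python
# Back-of-envelope sea-level rise if the Greenland ice sheet melts entirely.
# All inputs are approximate, commonly quoted values.
ice_volume_km3 = 2.9e6     # Greenland ice sheet volume, km^3
ice_density = 0.917        # density of ice relative to liquid water
ocean_area_km2 = 3.61e8    # global ocean surface area, km^2

water_volume_km3 = ice_volume_km3 * ice_density
rise_m = (water_volume_km3 / ocean_area_km2) * 1000.0  # km -> m
```

With these inputs the rise comes out at a little over 7 m, consistent with the interglacial evidence mentioned above.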
 
One option is to do nothing, wait and see, and if the seas rise, tough luck. So now we have an ethical question: who pays? The people who caused the problem and benefited in the first place, or the Bangladeshis, Pacific Islanders, and other people living in low-lying countries? So, what are we doing? Apart from talking, not a lot that is effective. We have carbon trading schemes, which enrich the pseudobankers; we measure everything, because some scientists like to measure things; and we devote a lot of jet fuel to having conferences. However, if the levels of greenhouse gases are of concern, we burn ten billion tonnes of carbon a year, and d²[greenhouse gas]/dt² is positive for each of them. The second differential is positive! Yet it is the sum of the integrals that is important.
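The point about the second differential versus the sum of the integrals can be made numerically. A toy sketch, with invented figures rather than real emission data, showing that even a steadily rising annual rate understates the real problem, which is the accumulating total:

```python
# Toy illustration: what matters for atmospheric concentration is the
# cumulative total (the integral), not just the annual rate. Figures invented.
years = list(range(2000, 2011))
emissions_gt_c = [6.0 + 0.1 * (y - 2000) ** 1.5 for y in years]  # Gt C per year

cumulative = []
total = 0.0
for e in emissions_gt_c:
    total += e
    cumulative.append(total)

# The annual rate rises every year (positive second differential in the
# underlying curve), and the cumulative burden rises faster still.
rate_increasing = all(b > a for a, b in zip(emissions_gt_c, emissions_gt_c[1:]))
cumulative_increasing = all(b > a for a, b in zip(cumulative, cumulative[1:]))
```

Even if emissions merely held steady (second differential zero), the cumulative sum would keep climbing, which is why "slowing the growth of emissions" falls short of stabilizing concentrations.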
 
We are scientists, so we should be able to recommend something. What do we recommend? To the best of my knowledge, no scientific organization has recommended anything beyond the platitudinous "decrease greenhouse emissions". Yes, what to do is political, and everything that I can think of meets general objections. Whatever we do, many, perhaps most, will be adversely affected. The problem is, if we do nothing, a very large number of different people will be adversely affected. So what do you think scientists or scientific societies should do?
 
Posted by Ian Miller on Jul 23, 2013 12:38 AM BST
Our thinking on the Universe changed somewhat towards the end of the 1990s, when it was found that type 1A supernovae at extreme red shift are dimmer than expected. Type 1A supernovae start out as basically white dwarfs that have burnt their fuel to carbon-oxygen, but they have a companion that they can feed off. If they get above 1.38 solar masses, they reignite and explode, and because they do this at a defined mass from a defined starting position, their luminosity is considered to be standard. Observation has borne this out, at least with nearby 1A supernovae. If they are standard candles, the dimness means the expansion of the universe was faster in recent times than in distant times. Thus was born dark energy.
 
I always had a problem with this: what we see is the outer shell, which has a composition that will retain a considerable history of that of the neighbour, because once the explosion gets underway, that which is on the surface will stay there. That would mean the luminosity should depend on the metallicity of the star. However, when I expressed these feelings to an astrophysicist, I was assured there was no problem - metallicity had no effect.
 
Two things then happened. First, I saw a review of the problem from an astrophysicist who left an email address. Second, a publication appeared (Wang et al. Science 340: 170–173, 2013) showing that luminosity could vary significantly with metallicity, and hence I emailed the astrophysicist and asked what effect this would have. The reason is, of course, that metals in stars are formed in previous supernovae, so it follows that the earlier the stars, the fewer cycles of supernovae would have occurred, and hence the fewer metals the stars would have. If so, they should be dimmer, and if they are dimmer, and not standard, then perhaps there is no accelerating expansion or dark energy. Maybe that reasoning is wrong, but all I wanted to do was to find out.
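The logic here is just the inverse-square law: if a supernova's true luminosity is lower than the assumed standard value, the distance inferred from its measured flux is overestimated, which is the sense in which non-standard candles could mimic an accelerating expansion. A sketch; the 20% dimming figure is an arbitrary illustration, not a measured value:

```python
import math

# Inverse-square law: flux F = L / (4 pi d^2), so d = sqrt(L / (4 pi F)).
# If we assume luminosity L_assumed but the true value is lower, the
# inferred distance exceeds the true one by sqrt(L_assumed / L_true).

def inferred_over_true_distance(l_assumed, l_true):
    return math.sqrt(l_assumed / l_true)

# A supernova actually 20% dimmer than the assumed standard candle:
overestimate = inferred_over_true_distance(1.0, 0.8)
```

A star 20% dimmer than assumed would be placed nearly 12% too far away, in the same direction as the "dimmer than expected" observation that motivated dark energy.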
 
Now, the issue for me lay in the response. I was told unambiguously that the lack of metallicity had been taken into account, and there was no problem. This raises an issue for me. Either the lower luminosity resulting from lower metallicity was well known, or it was not. If not, how was it taken into account? You cannot account for an effect of which you are unaware, and if so, this response was a bluff. If it were known, then how does someone get a publication in a leading, well-peer-reviewed journal announcing it as a new discovery? If it were well known, surely the paper would be rejected, and surely the peer reviewers would know.
 
What disturbs me is that there must be a fundamental scientific dishonesty at play here. I do not have the expertise in that field to know where it lay, but I find it deeply concerning. If scientists are not honest in what they know and what they report, the whole purpose of science fails. Just because it is fashionable to believe something, that does not make it true. Worse than that, there are some issues, such as global warming, where scientists have to take the public with them. If scientists start bluffing when they do not know, then when caught out, as they will sooner or later, the trust goes. What do you think?
 
 
Posted by Ian Miller on Jul 15, 2013 12:16 AM BST
One of the most heated and prolonged debates in chemistry occurred over the so-called non-classical 2-norbornyl cation. Very specifically, during reactions, exo-2-norbornyl derivatives solvolysed about 60 times faster than the endo ones. The endo derivatives behaved more or less as you might expect if the mechanism were SN2, but the exo ones behaved as if they were SN1, and there was an additional surprise: the nucleophile was about as likely to end up on C6 as on C2. There were two explanations for this. Winstein suggested the presence of a non-classical ion; specifically, the electrons in the C1-C6 bond partly migrated to form a "half-bond" between C2 and C6. Thus was born the "non-classical carbonium ion". On the other hand, Brown produced a sequence of papers arguing that there was no need for such an entity, and that the issue could be adequately explained by classical structures, and as often as not by the use of proper reference materials.
 
That last comment refers to a problem that bedevils a lot of physical organic chemistry. You measure a rate of reaction and decide it is faster than expected. The problem is, what was expected? This can sometimes border on being "a matter of opinion", because the structure you are working with is somewhat different from standard reference points. This problem is even worse than you might think. I reviewed some of the data in my ebook Elements of Theory 1, and suggested that the most compelling evidence in favour of Brown's argument was that changing the substitution at C1 made very little difference to the rates of solvolysis at C2, from which Brown concluded there was no major change of electron density at C1, which there should be if the C1-C6 bond became a half-bond as Winstein's structure required. As it happened, Olah also produced evidence that falsified Brown's picture, and as I remarked at the end, each falsified the other, so something was missing.
 
In the latest edition of Science (vol. 341, p 62–64), Scholz et al. have produced an X-ray structure of the 2-norbornyl cation, made by reacting aluminium tribromide with exo-2-norbornyl bromide, and what we find is equal C1-C6 and C2-C6 distances, as required by the non-classical ion. These are also long, at about 180 pm. Case proved, right? Well, not necessarily. The first oddity is that the C1-C2 distance is 139 pm, about the same length as benzene bonds. Which gets back to Brown's "falsification" of the non-classical ion: while the C1-C6 bond is dramatically weakened, the C1-C2 bond is strengthened, and the electron density about C6 may be not much changed, despite the fact that the bond Brown thought he was testing was half-broken. Nobody picked up on that at the time.
 
What do I mean by "not necessarily"? It is reasonably obvious this is not the classical structure that Brown perceived. That is correct, but there are two other considerations. The first is that to get a structure, the structure must be in an energy well, which means it does not actually represent the activated state. To give an example, the cyclopropylcarbinyl system would presumably give, as an ion, Cyc-CH2+, would it not? The trouble is, the system rearranges, and the products are consistent with that cation, as well as with a cyclobutyl cation and an allylcarbinyl cation. The actual cation is probably something intermediate. So the rate acceleration may not be caused by the intermediate cation, but by whatever is happening on the reaction path. If this cation were the cause of the rate acceleration, it should also operate on the endo derivative. Yes, the mechanism is different, but why? A product available to both cannot be the reason. There has to be something that drives the exo derivative to form the cation. My explanation for that is actually the same as the one that drives formation of the cyclopropylcarbinyl cation.
 
The second consideration is the structure itself: the two bonds to C6 are equal, and C1-C2 is remarkably short. There is one further way this could arise. Let us suppose we follow Winstein and break the C1-C6 bond. What Winstein, and just about everybody else, thought is that we replace it with two half σ bonds, but suppose no such σ bonds are formed? Instead, rehybridize C1 and C6 so we have two p orbitals. With two p orbitals and a carbenium centre we have the essence of the cyclopropenium cation, without two of the framework σ bonds. That gives us a reason why the cation is so stable: under this interpretation it is actually aromatic, even if two of the bonds are only fractional π bonds.
 
Is that right? If it is, then there is a similar reason why ethylene forms edge-complexes with certain cations. Of course, it may not be correct, but as a hypothesis it seems to me to have value because it suggests further work.
Posted by Ian Miller on Jul 8, 2013 4:11 AM BST
I found a number of interesting papers during June, and I am reasonably pleased that there were no significant contradictions to my ebook Planetary Formation and Biogenesis. There were three major thrusts. The first involved fluid flow on Mars, which my work requires, although of course its existence is generally accepted. Observations from the Curiosity rover at Gale crater showed isolated outcrops of cemented pebbles and sand. The rounded pebbles in the conglomerate indicate substantial fluvial abrasion, which led the authors (Williams et al. Science 340: 1068–1072) to conclude that sediment was mobilized in water that probably exceeded the threshold conditions (depth 0.03–0.9 meter, velocity 0.2–0.75 m/s) required to transport the pebbles. They conclude that sustained liquid water must have flowed across the landscape. Unfortunately, no evidence was available as to why it flowed, or what the fluid was. (It would be water, but such water could either be heated, or it could contain something to depress its melting point.)
 
Another major issue is, when did rocky planets get their volatiles? Volatiles accreted from the stellar accretion disk are almost certainly lost to space due to the high-energy emissions of the young star, which persist for about 0.5 Gy. My mechanism of rocky planet formation requires many of the volatiles to accompany felsic crust formation, and I required that to happen following about 4 Gy BP, in part to ensure that the chemicals required for life were available continuously from about 4 Gy BP for another 1.5 Gy. Pujol et al. (Nature 498: 87-89) measured argon ratios from gas occluded in rock, and concluded that less than 10% of the felsic crust evolved between 170 My and 3.8 Gy BP, 80 ± 10% of the crust formed between 3.8 Gy and 2.5 Gy BP, and < 30% was generated from 2.5 Gy BP to today. These results effectively confirm that my requirements were met. Interestingly, Debaille et al. (Earth Planet. Sci. Lett. 373: 83-92) proposed that early-formed mantle heterogeneities persisted at least 1.8 Gy after Earth's formation. The best explanation is that there was a stagnant-lid regime as the crust built up. The major change in geodynamics noted at ~3 Gy BP then reflects the transition to plate tectonics.
 
In my ebook, I argued that there should be significant reduced species emitted geochemically early in a rocky planet's history, and I argued that the components in the Saturnian system would convert methanol and ammonia to methane and nitrogen. I also argued that the planetary systems formed relatively quickly, as opposed to the current theories that require at least 15 My to get a giant. I was pleased to note that Malamud and Prialnik (Icarus 225: 763-774) calculated how serpentinization would produce nitrogen and methane in Enceladus, and argued that for their calculations to be correct, because initiation requires heat from short-lived radioisotopes, the Saturnian moons had to be formed between 2.65 and 4.8 My after CAI formation. Just what I needed! Etiope et al. (Icarus 224: 276-285) showed that methane may be formed at relatively low temperatures by the Sabatier reaction catalysed by chromite minerals. More good reduced materials.
 
Finally, Pringle et al. (Earth Planet Sci. Lett. 373: 75-82) showed from consideration of silicon isotopes in meteorites from 4-Vesta that Vesta differentiated under more reducing conditions than previously considered. Again, just what I wanted.
 
Posted by Ian Miller on Jul 1, 2013 3:33 AM BST
My previous post outlined the issue of the quadruple bond in C2, and an interesting issue is, how do you visualize it? I think this is important because it permits the qualitative reasoning that may be more immediately useful to chemists, but there is something else: sometimes, by looking at a problem in a different way, you get a different perspective. Thus, using the valence bond/hybridization perspective, Shaik et al. (Nature Chem. DOI:10.1038/NCHEM.1263) consider the bonds in C2 as follows: a σ bond employing sp orbitals, accompanied by two π bonds arising from the pairing of two p electrons from each carbon atom, while the additional electron on each carbon atom occurs in outward-pointing hybrids (i.e. with axes along the axis of the acetylenic cylinder). It would usually be considered that the directionality would prevent bond formation. Computations, however, show this is not the case.
 
In accord with the theme of this blog, is there an alternative way of looking at this? For me, yes, and it reminds me of my first theoretical paper! (Confession: not the best-presented paper ever.) The problem then was that substituents adjacent to a cyclopropane ring experienced chemical effects different from those attached to an unstrained ring. These were being explained by partial charge delocalization from the cyclopropane ring, but my point was that the effects of strain were not properly considered. The changes in chemical effects were certainly explained in terms of an electric field at the substituent adjacent to a strained system that differed from that of a standard alkyl system, but the question was, was that due to charge delocalization, or to the strain energy? From Maxwell's electromagnetic theory, the work done moving electric charge behaves as if it is stored in an electric field derived from the accompanying polarization field. Charge was moved, but was the movement satisfactorily explained by constraining it to within the strained system?
 
To assess the strain energy, after a little mathematics and an assumption I subsequently did not find convincing, I came up with the strain energy being proportional to sin(θ/2)/√r, where θ/2 is the angle through which each bond deformed (θ being the change of bond angle) and r the new covalent radius. I was quite excited when I found out this was quite accurate; I was less so when I realized my "derivation" was simply too questionable. Accordingly, I simply placed it into the paper as an empirical proposition. However, if you take the bond energy scheme recently (then!) determined thermochemically by Cox, very good results were obtained for ethylene and acetylene. (That does not imply these systems could not delocalize electrons, but it did imply they did not if there was no adjacent unsaturation.)
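To make the proposition concrete, here is how the empirical expression evaluates for one assumed deformation. The bond angles and covalent radius below are illustrative assumptions of mine, not values from the paper, and since the proportionality constant is unspecified, only relative values of the factor mean anything:

```python
import math

# Empirical proposition: strain energy proportional to sin(theta/2) / sqrt(r),
# where theta is the change in bond angle and r the new covalent radius.
# The numbers below are illustrative assumptions only.

def strain_factor(theta_deg, r_pm):
    """Relative strain factor for a bond deformed through theta/2 degrees."""
    return math.sin(math.radians(theta_deg / 2.0)) / math.sqrt(r_pm)

# Cyclopropane-like deformation: tetrahedral 109.47 deg compressed to 60 deg,
# with an assumed covalent radius of 75 pm.
theta_cyclopropane = 109.47 - 60.0   # change in bond angle, degrees
factor = strain_factor(theta_cyclopropane, 75.0)
```

Comparing such factors across ring sizes, rather than reading any one number absolutely, is how the expression would be used against Cox's thermochemical scheme.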
 
Thus in this picture, acetylene is described as three bent sp3 bonds. As an aside, I could have predicted the strain in [1.1.1]propellane, and I would have been the first to comment on it. Why didn't I? Partly because I never thought about it, but mainly because this part of the paper was an aside, to get the strain energy I needed. The objective was to calculate fields on adjacent substituents, and leaving aside the methylene carbons, there are no substituents adjacent to the junctions of a propellane, so that molecule was outside the scope of the paper.
 
Returning to C2, the axial wave containing the single electron is also inherently sp3 in this picture. Why that is relevant is that the sp3 wave has a primary lobe with which everyone is familiar, but also a small lobe on the opposite side of the carbon atom. We get the fourth bond if these two small lobes constructively interfere. That argument says that a fourth bond is conceivable; what it does not show is whether the fourth bond has any net energy, and that requires calculation. It also requires a better definition of that small lobe. (Note that this picture is merely a different way of viewing the molecule. The concept of hybridization is simply one of combining component waves. When combining different waves of different energies, there are various ways it can be done, provided the energies are properly accounted for.)
 
Does the different description offer anything? I think yes. In 2009, Wu et al. (Angew. Chem. Int. Ed. 48: 1407–1410) described the bond in [1.1.1]propellane as an inverted bond, i.e. they seem to consider the orbitals to have inverted and become directed inwards instead of outwards; in my picture, however, it is a "normal" bond, derived from bicyclobutane, with all bonds more strained. That also explains why the "internal" bond in the propellane is relatively strong, and that in C2 so weak. Of course, computations show this too, and in some ways more convincingly; nevertheless the qualitative view might at least suggest some experiments worth doing. For example, if such an "sp3 orbital" were to invert, there would be a significant change in the electric moment of the molecule, which, again from Maxwell's theory, would be promoted by the absorption of a photon. Thus suitable molecules should show a significant change in their UV spectra. Besides C2, which in this picture should have relatively long-wavelength transitions, we might even consider something like 1,4-diazabicyclo[2.2.2]octane, even though it has two lone pairs with nowhere obvious to go. Tertiary amines usually have a weak UV absorption at about 215 nm, but for 1,4-diazabicyclo[2.2.2]octane I would expect the UV spectrum to show a significant spectral shift due to a new interaction between the nitrogen atoms. Would anyone care to run me a spectrum? I am curious to see if this reasoning is correct.
 
Of course, the more thoughtful out there might argue that while the small lobe of an sp3 orbital might interfere, there is a reason they may not be capable of inverting in standard quantum mechanics. Can you see it?
Posted by Ian Miller on Jun 24, 2013 4:07 AM BST
In some of my previous posts I have bemoaned the absence of public discussion between chemists on matters of theoretical importance to chemistry, and so, when one actually appears, I must first congratulate the participants and the journal. This specific issue relates to two recent discussions (Angew. Chem. Int. Ed. 52: 5922-5925; 5926-5928) on whether there is a quadruple bond in C2. Whether the molecule is important is a matter of opinion, but the point that I have tried to make previously in these posts is that simply publishing papers is not sufficient to lead to greater understanding. What I believe is needed is subsequent analysis, so that we better know what we know, as opposed to what we think. It therefore follows that to be useful, the discussion should be in a form comprehensible to the educated chemist who is not directly involved in the field, and it is with this in mind that I wish to consider whether the criticisms were worth making, and whether they were answered satisfactorily, in the sense that the general chemist would learn something. There are obviously other issues, but I shall leave them for further posts.
 
The first article was a criticism by Frenking and Hermann of a previous publication in which the existence of the quadruple bond was proposed. Their main points were:
(a)  The force constant of C2 is less than that of acetylene: the stretching frequency of C2 is 1855 cm-1 while that of acetylene is 1974 cm-1. Their argument was that these data are evidence that the bond in C2 is weaker than that of acetylene.
(b)  The claim for C2 to have a stronger bond lies in measurements of the dissociation energies of acetylene. When the first hydrogen is removed, the energy required is 133.5 kcal/mol, and the second 116.7 kcal/mol, a difference of 16.8 kcal/mol. This 16.8 kcal/mol is supposedly the additional energy arising from the formation of the quadruple bond; however, the criticism is that the framework is not constant, in that in the second dissociation the carbon-carbon bond length increases by 0.035 Å. They argue there is no reason to assume that a smaller C–H bond dissociation energy arises through strengthening of the C–C bond; there may be other reasons.
(c)  The remaining arguments were largely dependent on computational procedures, and they may or may not be correct; the outside observer can merely accept the points or not. However, there was one point made that irritated me. The criticism was that the original paper adopted incorrect reference states. In general physics, the end conclusion eliminates the frame of reference, and hence the results are independent of it. The reference points eliminated from the calculation are chosen for ease of calculation, and should not affect the conclusion.
(d)  In the footnotes they write, "A bonding model is not right or wrong, but it is more or less useful." Their argument is that the quadruple bond model is not useful because it does not agree with the properties of the molecule. Whether or not this criticism is correct, it is important because it focuses attention on the critical issues that lead to further understanding.
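Point (a) can be checked by converting the quoted stretching frequencies to force constants via the harmonic oscillator relation. A sketch that treats both C2 and the C–C stretch of acetylene as pseudo-diatomic oscillators of reduced mass 6 u; this is an approximation of mine, since the acetylene normal mode also involves the hydrogens:

```python
import math

# Harmonic oscillator: k = mu * (2 pi c nu)^2, with nu in cm^-1.
C_CM_S = 2.998e10      # speed of light, cm/s
AMU_KG = 1.6605e-27    # atomic mass unit, kg

def force_constant_n_per_m(wavenumber_cm, reduced_mass_amu):
    """Force constant in N/m for a diatomic of given stretch wavenumber."""
    omega = 2.0 * math.pi * C_CM_S * wavenumber_cm  # angular frequency, rad/s
    return reduced_mass_amu * AMU_KG * omega ** 2

# Pseudo-diatomic C-C oscillators, reduced mass 12*12/(12+12) = 6 u:
k_c2 = force_constant_n_per_m(1855.0, 6.0)
k_acetylene = force_constant_n_per_m(1974.0, 6.0)
```

On this crude estimate C2 comes out around 12 N/cm against roughly 14 N/cm for acetylene, which is the direction of Frenking and Hermann's argument, though as the response below notes, force constant and bond strength need not track each other exactly.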
 
The response by Danovich, Shaik, Rzepa and Hoffmann is of interest. They argue first that the rule that stronger bonds have stronger force constants may not be universal. Given that there is no firm relationship (at least that I know of) between bond strength and stretching force constant, that may be true, but equally it may not. As an outside observer, I think the F&H point has validity, although it is not conclusive. They also argue that computations show that the energy change on the C–C distance changing from 1.21 to 1.24 Å is negligible. If so, point (b) fails. However, we must ask, were the computations 100% guaranteed true? I am not convinced. On the other hand, the lowering of the energy is unambiguous and uncontested, so any argument thereafter really must be based on what this means. The responders argue that it means additional bonding, and to defeat that argument there has to be some alternative explanation for the energy lowering.
 
Does it matter? I think conceptually, yes, because it makes us think more about what a bond is. (More on this in subsequent posts.) Consider the energy argument above, and transfer it to dinitrogen. The triple bond of N2 is no simple extrapolation from single- and double-bonded nitrogen species. One likely reason is that, like the acetylide anion, the triple-bond configuration stabilizes the lone pair: extrapolating Coulson's "bent bond" model, the orbitals in the triple bond are bent away from the lone pair, thus exposing the lone pair electrons to a greater positive field.
 
The skeptical chemist should now ask, what is the exact electron configuration in C2? Are all electrons paired? Unfortunately, this was not specifically stated in the article; however, by observation the species is actually a singlet. To be a singlet, as opposed to a triplet diradical, within standard MO theory the two electrons must be in a common wave function. If they are, it is either bonding or antibonding, and since there is a net energy lowering, it must be bonding. So, within MO theory, the fourth bond exists because there is an energy lowering of 16 kcal/mol. Suppose we wish to go outside MO theory and have the two electrons in separable wave functions; then to get a singlet there has to be a phase relationship between the two waves, and an interaction that leads to the energy lowering, and if so, the question is, why is that not within the description of a bond? In fact, Shaik et al. (Nature Chem. DOI:10.1038/NCHEM.1263) show by a VB treatment that the reality is in line with that proposition. Thus I believe the omission of the singlet nature of the state was unfortunate, because it is this omitted observational evidence that settles the issue, at least for me.
 
Finally, a quote from Roald Hoffmann: Could it be that “this most rigorous theory,” the one that affords “deep insight,” in fact has failed (so far) to provide pragmatic chemists with a way of thinking about real chemistry—whether it is that of “synthetic” or of short-lived molecules—that is as useful as are Lewis structures, arrow-pushing, and molecular orbitals?
 
My guess is, so far, yes; but if more of these discussion-type articles were directed towards the general chemist, perhaps the answer would change.
Posted by Ian Miller on Jun 17, 2013 4:45 AM BST
A comment on a previous post suggested the process of science funding was faulty, so I thought I should comment on a situation occurring here in New Zealand. I have no idea how general it is, but I think it is serious, not because of what is happening, but rather because of what is not happening. If scientists wish to keep being funded from the public purse, I think they have to ensure the outward perception is one of dynamism and value, and that the money is advancing something.
 
About a year ago, the Prime Minister announced that the government would put up an additional sum (about 4% of the science budget, plus or minus quite a bit because of a certain vagueness in the announcement) for the express purpose of doing something new. He then asked the public to submit challenges for this money. So far, surprisingly good! For once, the public was involved! We can always quibble about the amount of money, but recall that right now we have something resembling an economic crisis throughout the world, particularly relating to government debt, so such quibbles border on the pathetic. We should be grateful for what comes!
 
The problem soon surfaced. A large number of challenges were submitted, and an expert committee was set up to sort through them. Eventually, ten were published as successful. My guess is that none of these were actually submitted by the public, because they all looked as though they came from a committee. Like motherhood and apple pie, you could hardly dispute that they were important, but on the other hand there was a total lack of originality, incisiveness, etc. What I suspect happened is that the best of what was received was put into a blender, and mush emerged. While it may be quite reasonable to blend in everyone's ideas, on closer analysis it ended up appearing to be “feel-good” money to be spread around existing science organizations so they could continue doing more or less what they were already doing. This image was not helped when I heard a representative say on a radio program that this work was important, and that the fact such programs already had funding was no reason not to spend more money on them.
 
That is all very well, but I think several negatives came from this. The first is that a number of citizens spent quite a bit of their own time putting together challenges and wading through the “bureaucratic-speak”, and I feel they deserve better than to be simply ignored afterwards. If nothing else, they deserved a response thanking them for their efforts and explaining why what was accepted was felt to be more important than what they submitted. Most people would accept that if someone else put in something clearly more important, it should win. The second is that it looks as if the original purpose has been subverted for the benefit of institutions. The third is that the winning challenges are so vague they cannot be measured, so there is no way the government can later say the exercise was a success. These reasons matter. Scientists have to accept that it is important to carry the public with them, and when the government gives money, it is important to give the government something to promote now and boast about later. As yet, no money has been allocated. What I think should happen is that when it is allocated, it is done with public fanfare, to give the impression that something good could arise from it. What should not happen is that the allocation gets buried in a pile of bureaucratic files. I do not know how general this problem is, but I do not see a lot of platform-building for science going on anywhere.
Posted by Ian Miller on Jun 10, 2013 12:38 AM BST
For May, once again there were few significant papers (at least that I found) that impinge on theories of planetary formation, and I shall restrict myself to the two closest. A commonly measured variable is isotope enhancement, and Halliday (Nature 497: 43-44) showed that lunar basalts have slightly higher levels of heavy iron isotopes than Earth, which is itself significantly enhanced in heavier isotopes compared with Mars or Vesta; however, there is no corresponding enhancement in lithium isotopes. What does that mean? Interpreting such results is a common problem, because what we are trying to do is get whatever we can from the very limited samples available to us. The temptation is to look at the current model and fit the data to it, and if they make sense within that model, then that is how the data are interpreted. We tend to assume that isotope enhancements arise only through vaporization/condensation, but there are other ways of enhancing heavier isotopes, such as chemical isotope effects. In short, such enhancements may simply reflect greater processing of a sample.
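The usual vaporization/condensation reading of such enhancements is Rayleigh fractionation: each increment of material lost preferentially carries the light isotope, so the residue grows isotopically heavy. A minimal sketch, with a purely illustrative fractionation factor and remaining fraction (not values from any of the papers discussed):

```python
# Rayleigh fractionation: how losing material can enrich a reservoir
# in heavy isotopes.  alpha and f_remaining below are illustrative
# numbers only, not measurements.

def rayleigh_enrichment(f_remaining, alpha):
    """Isotope ratio of the residue relative to the start, R/R0 = f**(alpha-1).

    alpha < 1 means the escaping phase preferentially carries the light
    isotope, so the residue becomes isotopically heavy as f shrinks.
    """
    return f_remaining ** (alpha - 1.0)

# If only 10% of a reservoir remains and alpha = 0.99, the residue is
# enriched by roughly 2% (about 23 per mil) in the heavy isotope:
enrichment = rayleigh_enrichment(0.10, 0.99)
```

The point in the post stands regardless of the numbers: chemical isotope effects during processing can mimic this signature, so enrichment alone does not prove atmospheric or vapor loss.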
 
Another interesting paper came from Hamano, Abe and Genda (Nature 497: 607 – 610). They classified rocky planets according to their distance from the star. A type 1 planet forms beyond a critical distance and solidifies within several million years; if the planet acquired water during formation, it retains it. A type 2 planet lies within the critical distance and can maintain a magma ocean for up to 100 My, because the steam atmosphere (assuming it acquired water) blankets the planet, and incoming stellar radiation exceeds the rate at which that atmosphere can radiate heat away to cool the surface (~300 W m-2). Hydrodynamic escape then desiccates type 2 planets. Venus is on the border of the critical distance, but is classified as type 2 because of its properties. The argument depends on there having been a magma ocean in the first place, and it applies only to water emitted at the very beginning. On Earth, volcanism has emitted volatiles continuously, and while most are secondary now, some remain primary; the point is, most volatiles have yet to be degassed at 100 My. On Mars, it appears to have taken up to 500 My before the bulk of the water was degassed, by which time their mechanism is irrelevant. Of course, what they tried to do was work out why Venus is like it is. My argument is that there are alternative interpretations of the data, and in the case of Venus, it never had much water on the surface.
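To put the ~300 W m-2 figure in perspective, the Stefan-Boltzmann law gives the temperature of a blackbody that emits exactly that flux. This back-of-the-envelope check is my own illustration, not a calculation from the paper:

```python
# Equivalent blackbody temperature for a given outgoing flux, via the
# Stefan-Boltzmann law F = sigma * T**4.  The ~300 W/m^2 radiation limit
# for a steam atmosphere is the figure quoted in the text.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_temperature(flux_w_m2):
    """Temperature (K) of a blackbody emitting the given flux."""
    return (flux_w_m2 / SIGMA) ** 0.25

# An atmosphere limited to radiating ~300 W m^-2 behaves like a ~270 K
# blackbody, so the blanketed surface below can stay molten for a long
# time while the net loss rate remains this small:
t_limit = blackbody_temperature(300.0)
```

The interesting feature is how modest that flux is; a magma ocean losing heat through such a blanket cools very slowly, which is why the 100 My timescale emerges.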
 
Meanwhile, for those interested in some of the issues relating to planetary formation and the origin of life, there is currently a forum operating on the web at https://astrobiologyfuture.org/forum . Amongst other things, people there are more prepared to acknowledge what we do not know, and more prepared to speculate, than in scientific papers.
Posted by Ian Miller on Jun 3, 2013 4:19 AM BST
In the May edition of “Chemistry World” there was an item regarding “leaps of faith” in quantum mechanics, and this item quoted a paper published in Proc. Nat. Acad. Sci. showing how the Schrödinger equation can be arrived at from the classical Hamilton-Jacobi equation. What puzzled me was, why was this published now? After all, in the chapter “Classical Mechanics” in “Fundamental Formulas of Physics” (Dover, 1962), essentially the same thing was published, and no claim to originality was made. That book was a summary of well-known physics, so the result was presumably well established by then.
 
So, why is quantum mechanics so weird? One possibility is that it is not weird at all, and requires no great leaps of faith. The only problem is that we do not understand it, which in turn might mean nothing more than that there is more to sort out. One difficulty was that we were deeply committed to Newtonian mechanics, so anything non-Newtonian was, perforce, weird. Within its set of assumptions, Newtonian mechanics is, in my view, completely correct, but as I noted in my ebook, Elements of Theory 1, there are two statements implied by Newtonian mechanics that are not correct. The first is that force acts instantaneously at a distance. By far Einstein’s greatest contribution to science was to propose that this is wrong: force is mediated at a finite velocity. Further, when you see something you cannot say, “It is there,” but rather, “It was there when the photons set off.” The second erroneous assumption is inherent in Newton’s first law. The first law is often regarded as somewhat redundant, because it is essentially the second law with zero applied force. However, there is one further part to it: the assertion that motion is continuous, or, in more detail, that what physicists call action is continuous. In my opinion, that is wrong, and it is where the problems in comprehension lie. Instead, I regard action as discrete, specifically in units of Planck’s quantum of action. That, as far as I can tell, is the only required difference between classical and quantum mechanics. The derivation of the Schrödinger equation follows immediately from the Hamilton-Jacobi equation if the quantum of action defines a period of the wave. The Uncertainty Principle and the Exclusion Principle also follow.
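The formal connection mentioned above can be sketched in a few lines. This is the standard textbook substitution, set down in my own notation rather than taken from the PNAS paper or the Dover chapter:

```latex
% Classical Hamilton-Jacobi equation for the action S(q,t):
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V = 0

% Substitute a wave whose phase counts action in units of \hbar:
\psi = \exp\!\left(\frac{iS}{\hbar}\right)

% The Schr\"odinger equation
i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi
% then reduces to Hamilton-Jacobi plus a single extra term:
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V = \frac{i\hbar}{2m}\nabla^2 S
% which vanishes in the classical limit \hbar \to 0.
```

On this reading, everything separating the two mechanics sits in that one extra term, which is consistent with the view that the essential new ingredient is the quantum of action itself.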
 
I think another problem in understanding what is going on follows from an obsession with another part of Hamiltonian mechanics, namely the canonical equations. You will often see that these partial differential equations enable us to represent momentum, p, and positional coordinate, q, as equivalent, from which we can make phase-space diagrams, etc. However, action is an integral of motion, and if the discreteness of action is the fundamental essence of quantum mechanics, then some care has to be taken with conclusions based on partial differentials. An example I gave in the ebook is this. ∫pdq has a simple meaning: a particle travelling along a coordinate with uniform momentum. Now consider ∫qdp: a particle at constant position with a continual change of momentum? Strictly speaking, both integrals give you action, except one is ridiculous. As for the first, ∫pdq, consider that if you integrate over a period you traverse a wavelength λ. If so, pλ = h, the quantum of action, and we have the de Broglie equation. Action can also be represented as ∫Edt, where E is the energy. If τ is the periodic time, then it follows again that Eτ = h, from which, bearing in mind the frequency ν is 1/τ, E = hν, as required. This does not require "leaps of faith", and is reasonably straightforward, but how many chemists get shown things like that in their courses on quantum mechanics? Oh no! What tends to happen is that massive equations get put up, or obscure formalism using the "Sledge Hammer" approach: "Trust me, I know what I am doing."
 
Feynman said that nobody understands quantum mechanics. What I think he meant was that nobody as yet completely understands quantum mechanics, but I think you can get a lot closer to it if you take the trouble to get a few things in order. Ask what is really fundamental, and watch what follows.
Posted by Ian Miller on May 27, 2013 12:58 AM BST