Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?

One theme I have raised more than once in these posts is that while scientists are very good at collecting information and at measuring things, there remains the problem of interpreting what the results mean. Scientific theory is based on either propositions or statements. A proposition takes one of two forms:
(1)  If theory P is true, then you will observe A
(2)  If and only if theory P is true, then you will observe A
Failure to observe A falsifies either proposition, but if you observe A, all you can say about (1) is that the theory remains in play. As Aristotle noted over two millennia ago, observing A can only prove P if (2) applies, and it is the "only" condition that is difficult to validate. A statement (and an equation is a statement) carries the implied proposition that it is true.
What brought this thought on was a paper (Science 345: 1590–1593) that has had quite some publicity, even in the public news media. It claims that at least some of the water we have is older than the solar system. What does that mean? First, it was deuterium/hydrogen ratios that were measured. We also note that the authors were astrophysicists, and I quote: "our emphasis is on the physical mechanism necessary for D/H enrichment: ionization." As stated, that is an "only" statement, and I consider the "only" condition unjustified. Before getting to that, recall that all hydrogen and deuterium were made in the Big Bang, and all oxygen atoms were made in supernovae. Water is made in space by oxygen and hydrogen reacting, usually on dust. Deuterium enrichment can arise because the O–D bond is stronger than the O–H bond, mainly because the latter has the larger zero-point energy, so any process that breaks an O–H bond, particularly if it only just does so, may increase the D/H ratio in what remains. Enrichment also arises through sublimation equilibria of ices in space: heavier molecules sublime slightly less readily, so under equilibrium conditions they become enriched in the ice. Under these conditions the D/H ratio of all the water taken together remains constant, and if the ice gets enriched in deuterium, the vapour becomes depleted.
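To put a rough number on the sublimation-equilibrium argument, a Rayleigh fractionation sketch shows how the residual ice becomes enriched as sublimation removes slightly D-depleted vapour. The fractionation factor and the protosolar D/H used below are illustrative assumptions of mine, not values from the paper.

```python
# Minimal sketch of Rayleigh fractionation of D/H during sublimation.
# alpha = (D/H in the vapour) / (D/H in the ice); alpha < 1 means the
# lighter isotopologue sublimes preferentially.  alpha = 0.9 and the
# protosolar D/H are illustrative assumptions, not measured values.
def residual_dh(r0, alpha, f):
    """D/H of the remaining ice after a fraction (1 - f) has sublimed."""
    return r0 * f ** (alpha - 1.0)

protosolar = 2.0e-5                            # approximate protosolar D/H
enriched = residual_dh(protosolar, 0.9, 0.01)  # 99 % of the ice sublimed
factor = enriched / protosolar                 # modest (~1.6-fold) enrichment
```

Even with 99% of the ice gone, equilibrium sublimation alone gives only a modest enrichment, which is one reason the large interstellar enhancements are usually attributed to low-temperature ion-molecule chemistry instead.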
What they note is that the highest D/H levels in water occur in interstellar ices, and that Earth's oceans show a significant deuterium enhancement over solar hydrogen levels, similar to comets from around Jupiter's orbit and a little less than interstellar water. They then model what they believe happened in the solar accretion disk and find that the disk physics/chemistry cannot produce the observed deuterium enhancement with respect to solar levels. They conclude that comets could comprise from 14% up to 100% accreted interstellar ice, and that from ~7% up to 30–50% of Earth's oceans originated as interstellar ices. Why the ranges? Largely because, while they have a ratio for interstellar ices, they also have a water signal from the disk of a protostar; in short, they believe the nature of the original water may vary from star to star. However, that is irrelevant to their claim that our water predates the formation of our solar system. They then conclude that, provided the formation of our solar nebula was typical, interstellar ices from the molecular cloud core should be available to all young planetary systems.
The last conclusion seems obvious. If there is water ice in the cloud, which would be expected as long as carbon does not consume all the oxygen, then the ice should persist at least in the outer parts of the accretion disk; indeed, my theory of the formation of the gas giants relies on this being so, so in one sense the paper supports my theory. On the other hand, provided there were water ices in the cloud, what could possibly happen to them before they reached the ice sublimation temperature, given that the disk is opaque, so that while the star is forming, ionizing radiation should be absorbed much closer to the star? Here they seem to have overlooked that there are three important hydrogen sources: interstellar ice, interstellar water vapour, and hydrogen gas. The last is about four orders of magnitude more abundant than anything else, and determines the initial deuterium level in the star. Nuclear burning then decreases the stellar deuterium level.
However, the conclusion that Earth's water reflects the deuterium content of the water as it was accreted is an "only" statement, and it is not true. There is a further possible mechanism: as water travels through hot rock, and current volcanism shows that it does, it may oxidize reduced species there and in many cases liberate hydrogen, which may then escape to space. Such reactions involve breaking the O–H bond and so are subject to the chemical (kinetic) isotope effect, with O–D bonds reacting significantly more slowly, and that in turn leads to deuterium enrichment. That is my explanation for the Venusian atmosphere, where there is a hundred-fold enrichment of deuterium (Science 216: 630–633). The reactions include water reacting with carbon or carbides as the original source of the carbon dioxide in the Venusian atmosphere. As I showed in Planetary Formation and Biogenesis by reviewing a number of papers, either the gases were emitted from the Earth, in which case they had to be accreted as solids, or they were delivered from space; but if the latter were the case, each rocky planet had to be struck by completely different types of bodies, and the Moon, quite remarkably, struck by only trivial amounts of any of the volatile-containing bodies. Note that most asteroidal bodies contain negligible volatiles.
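The same Rayleigh logic can be inverted to ask how much water must be lost to produce a hundred-fold enrichment of the kind seen on Venus. The kinetic fractionation factor used here (O–D bonds reacting at half the O–H rate) is purely an illustrative assumption:

```python
# How much water must be lost for a given D/H enrichment, assuming
# Rayleigh behaviour with kinetic fractionation factor alpha
# (alpha = rate of O-D bond breaking / rate of O-H bond breaking;
# the value 0.5 is an illustrative assumption, not a measured one).
def fraction_remaining(enrichment, alpha):
    # R/R0 = f**(alpha - 1)  =>  f = enrichment**(1 / (alpha - 1))
    return enrichment ** (1.0 / (alpha - 1.0))

f = fraction_remaining(100.0, 0.5)  # fraction of the original water left
water_lost = 1.0 - f                # 99.99 % of the original water gone
```

Even with a strong isotope effect, a hundred-fold enrichment requires losing essentially all of the original water, which is consistent with Venus having once had far more water than it now does.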
So what do I make of this? Of course water arrived from interstellar space, and this work at least supports my concept of ice accretion. On the other hand, the presence of ices in the disk is generally held to be the reason why the giants form, so in another sense this paper simply supports what was long assumed. I am not convinced it warranted the media attention it received.
Posted by Ian Miller on Oct 13, 2014 1:57 AM BST
There are a number of problems looming, one of which is the climatic effect of the so-called greenhouse gases. Science should be able to address such problems, but when a discovery is made the question arises: is this a solution to the designated problem, could it become a solution if some further problems are overcome, or is it simply an interesting observation, essentially irrelevant to solving any of our problems? With the difficulty of getting funding for science, "relevance" often becomes an issue. Accordingly, funding applications frequently make significant claims as to what the research might achieve, and there are advantages in carrying this over into the subsequent papers. Of course, some of these papers may truly herald an opportunity. So, what do you make of the following?
Ammonia is an important chemical for fertilizer, and is usually made through the Haber-Bosch process, which reacts nitrogen with hydrogen under pressure, the hydrogen being made by steam reforming of methane, in turn obtained from natural gas. The oxygen from the steam eventually ends up as carbon dioxide, so the process contributes to the greenhouse effect. However, a new process has been claimed (Science 345: 638–640) that involves electrolysis of air and steam in a pressurized molten hydroxide suspension of nano-sized Fe2O3 at 200–250 °C. It converts nitrogen to ammonia with an efficiency of apparently 35% of the applied current, the other 65% producing excess hydrogen, which would remain a marketable product. The chemistry is interesting. Iron/iron oxide is a catalyst for the Haber-Bosch process, but that process uses pressures considerably higher than would be found in this reaction. The comparison is probably irrelevant anyway: ball-milled standard iron oxide gave no reaction, so the nano-sizing is important. The question then is, is this a solution to a problem or merely an interesting side-issue? That leaves open the further questions of how likely it is that the reaction will scale up successfully and, if it does, run reliably.
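As a sanity check on what a 35% current efficiency means, Faraday's law gives the ammonia and by-product hydrogen per ampere (three electrons per NH3, two per H2). This is my own back-of-envelope arithmetic, not a figure from the paper:

```python
# Faraday-law arithmetic for the claimed 35 % current efficiency.
F = 96485.0      # Faraday constant, C per mol of electrons
eff = 0.35       # fraction of current forming NH3 (3 e- per NH3)

nh3_per_amp = eff / (3.0 * F)          # mol NH3 per second per ampere
h2_per_amp = (1.0 - eff) / (2.0 * F)   # mol H2 per second per ampere
```

On these numbers an ampere yields only about a microgram-scale flow of ammonia per second, alongside nearly three times as many moles of hydrogen.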
The first problem I could see is that the efficiency drops off at higher current: the efficiency of one synthesis was >30% at 20 mA but ~7% at 250 mA. The suggestion was that the conversion is limited by the available area of nano-Fe2O3, which may or may not be fixable during scale-up. From the chemical point of view, the nanoparticles were dispersed throughout the solution, but the electron transfer would presumably occur at the electrodes, which raises the question of exactly what the nanoparticles are doing. The electrodes were nickel, so they should not be a problem for scale-up, but the area might be. The production rates were of the order of 7 × 10^-9 mol NH3 per second per square centimetre. That would require a very large area to get 1 t/hr, which is hardly a rate to get excited about. The requirement for nano-sized Fe2O3 would also worry me, because Fe2O3 slowly dissolves in hot sodium hydroxide solution to make sodium ferrite. This was not mentioned in the article; on the other hand, they found conditions that stabilized production for six hours. (Actually, it may not be beyond the bounds of possibility that sodium ferrite is the catalyst, as nano-sized Fe2O3 might well be more reactive than the bulk oxide. That is yet another aspect that needs answering.) Is this possibly a commercial process? My guess is no, at this stage at least, but it does provide an interesting new opportunity for research. If they could get the current density up significantly, then perhaps there is something here.
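To see why the areal rate is the sticking point, here is the electrode area implied by 7 × 10^-9 mol NH3 s^-1 cm^-2 at a production target of one tonne per hour; the target figure is mine, chosen only for illustration:

```python
# Electrode area needed for 1 t/h of NH3 at the reported areal rate.
RATE = 7e-9          # mol NH3 per second per cm^2 (from the paper)
M_NH3 = 17.03        # g/mol, molar mass of ammonia

mol_per_s = 1e6 / M_NH3 / 3600.0   # 1 tonne per hour expressed as mol/s
area_m2 = mol_per_s / RATE / 1e4   # cm^2 -> m^2; roughly 230,000 m^2
```

Something like twenty-odd hectares of electrode for one tonne per hour makes the scale-up problem concrete: the current density, not the chemistry, is the obstacle.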
Would that help solve the greenhouse problem? In my view, since this electricity would be marginal production, no, not unless electricity generation stops relying on fossil fuels altogether. Nevertheless, the production of ammonia is required to address the food problem. If we really want to do something about global warming through ammonia usage, a good place to start would be to make nitrogen fertilizers more efficient: a very large amount of such nitrogen finds its way into N2O, presumably through the decomposition of ammonium nitrite. Accordingly, there is plenty of work remaining for further research. The question then is, how to fund it? Unfortunately, the scientist's first duty is to obtain funding, which encourages flag-waving in papers.
Posted by Ian Miller on Sep 29, 2014 3:17 AM BST
The question I am now posing involves how a scientific paper should be presented when the author faces a dilemma. On one hand, the author wants to show something that might lead to a specific application; on the other, the information might have more general use. The first point is obviously desirable if the proposed use makes sense, and even if it does not, it might still make sense when reporting to funding agencies. The second point involves the dissemination of knowledge: if the work is presented one way, it may not be seen by others for whom it would be more useful. The huge output of scientific papers means that nobody can read more than a tiny fraction, so everybody has to apply some form of very coarse screening, otherwise they would never get anything done.
These thoughts were started, for me, by a recent paper (Angew. Chem. Int. Ed. 53: 9755–9760) which claimed to give an interesting approach to biofuel production, but I feel the more interesting aspect was the underpinning chemistry it implies. The basic process involved three reactions starting from molecules such as furfural and hydroxymethylfurfural, which are acid degradation products of carbohydrates. Furfural is readily obtained from pentoses because it steam-distills out of a reaction in which carbohydrates are acid-hydrolysed at higher temperatures, but hydroxymethylfurfural does not do this, and instead degrades further. It can be isolated, but at a cost, and in only moderate yield. So, before we go much further, this paper has questionable direct applicability because it involves relatively expensive starting materials that represent only a part of the initial resource.
But it is what happens next that is of interest. The authors carry out an aldol condensation of the furfurals with acetone, thus getting C8, C9, C10, C16, etc. materials; furfural contributes the furan ring and the unsaturated ketone. These are then reacted at elevated temperatures and pressures with NbOPO4 in the presence of hydrogen and a Pd catalyst. The interesting part is that the NbOPO4 has the ability to pull out the oxygens, including the furan ring oxygen and the ketonic oxygen (although the latter may be a dehydration reaction as the carbon-carbon double bond becomes hydrogenated), with the result that we end up with linear hydrocarbons.
The niobium phosphate gives a 94% yield of hydrocarbons, whereas aluminium phosphate gives zero, while the palladium catalyses the hydrogenation of the double bonds. The phosphate is apparently not that important, as Nb2O5 gives the same yield of hydrocarbons. According to the authors, what happens is that bulk Nb–O–Nb groups break, permitting an Nb–O–C bond to form, and a nearby hydrogen atom can then transfer to the carbon atom.
The question then is, what use is this to biofuels? Superficially, not much, because the problem of getting the furans probably makes it uneconomic. Not only that, but while the C16 hydrocarbons would make excellent diesel, linear C8 hydrocarbons are not at all attractive as fuels: they lie in the petrol range and have an octane number approaching zero. What I would find more interesting, though, is how this catalytic system would perform with lignin, or lignin-derived smaller molecules. While lignin polymerization has essentially no pattern, many of the linkages occur through C–O–C bonds. If these could be hydrogenolysed, and the methoxyl groups removed, it might be a breakthrough in biofuel development. The question then is, why did these authors not try their reaction on lignocellulose to see what would happen? Perhaps they did, and perhaps there are more papers coming, but I do not feel that is constructive. We need the fewest papers consistent with getting all the information across, so as to reduce the deluge.
Posted by Ian Miller on Sep 14, 2014 10:43 PM BST
The question of how planets form continues to attract attention. Everyone agrees the starting position for accretion is the disk of gas falling into the forming star. The gas also contains "dust", ranging in size from a colloidal dispersion to pieces a few millimetres in diameter. It is possible some pieces are bigger, but we would not see them. The question then is, what happens next? The standard theory holds that, by some undefined mechanism, this dust accretes into planetesimals, which are about the size of asteroids; the resultant distribution of these, smooth and continuous with respect to distance from the star, then evolves through gravitationally driven collisions. The asteroid belt is, on this view, likely the remnant of that process. In my opinion that is wrong: the first stages were driven by chemistry, and the distribution of growing bodies is strongly enhanced in those temperature zones appropriate for the specific chemistry.
There was an interesting paper in Nature 511: 22–24 that surveyed problems with the standard theory of planetary formation and ended with the question, "Why is our system so different from so many others?" Unfortunately, no answer was provided. On my theory, the reason is simple: the admittedly limited evidence strongly suggests that our star cleaned out its accretion disk very quickly after formation, and this stopped accretion. Other systems kept going, which leads to more massive bodies and stronger gravitational interactions, and the result is, effectively, planetary billiards. Unfortunately, once gravitational interactions get big enough, the resultant system becomes totally unpredictable.
Another interesting problem involves rubble-pile asteroids. One major question is how rocky planets accrete; the standard theory seems to assume that moderate-sized objects somehow form, gravity brings them together, and as they gather more rubble they become bigger objects. Eventually they become big enough to heat up, partly through radioactivity and partly through the loss of potential energy as bodies pile on, and the heated body starts to melt together. Asteroids are often believed to be piles of such rubble. However, two papers have been published that make this proposition less likely. In this context, my theory requires rocky bodies accreted while the accretion disk is still present to be joined together chemically, in the case of the asteroids by cements similar to those used by the ancient Romans, which also come from certain volcanoes such as Vesuvius. Such asteroids can still be piles of rubble, but cemented together where the surfaces meet; effectively, they are very poorly compacted concretes. Non-cemented rubble piles would also exist if the pieces came together after the disk clean-out.
The first (Nature 512: 174–176) involved asteroid (29075) 1950 DA, which has a density of 1.7 ± 0.7. Since the solids are believed to be similar to enstatite chondrite, which has a density of 3.55, the asteroid appears to have about 50% empty space inside it. However, the rotational velocity is such that if it comprised loose rubble, the rubble should peel off. The authors argued that it must be held together by van der Waals forces from fine grains between the larger pieces. I have a problem with this: if the spaces were filled with fine grains, the density should be higher. Note too that van der Waals forces are very weak and of very short range; the interaction energy falls off as the inverse sixth power of distance. The second paper (Icarus 241: 358–372) analysed the size/frequency distribution of small asteroids and compared it with computed collision frequencies, finding that the assumption of rubble-pile asteroids gives a significantly worse fit with observation than the assumption of monolithic bodies; hence they conclude that the majority of main-belt asteroids are monolithic.
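The porosity figure is simple arithmetic on the two densities quoted above; note that the ±0.7 uncertainty on the bulk density means the implied porosity could lie anywhere from roughly 30% to 70%:

```python
# Macroporosity implied by bulk vs grain density for (29075) 1950 DA.
bulk = 1.7       # g/cm^3, measured bulk density (quoted uncertainty ±0.7)
grain = 3.55     # g/cm^3, enstatite-chondrite analogue

porosity = 1.0 - bulk / grain     # fraction of the volume that is void
low = 1.0 - (1.7 + 0.7) / grain   # porosity if bulk density is at the high end
high = 1.0 - (1.7 - 0.7) / grain  # porosity if bulk density is at the low end
```

The nominal value comes out at just over half the volume being void, which is what makes the "fine grains filling the gaps" explanation sit so awkwardly with the measured density.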
Finally, there is the question of global magma oceans. The standard theory has rocky planets finally accreting through massive collisions, which generate enormous amounts of heat, which in turn converts the rocky planet to magma. However, evidence has been presented (Earth Planet. Sci. Lett. 403: 225–235) that the geology of Mars is incompatible with this picture. My mechanism for planetary formation does not forbid a magma ocean, but unless there is a giant collision between two massive bodies there will not be one, and planets can form without one. In fact, they probably have to, because the energy of collision of massive bodies is generally such that size reduction occurs as material is shed to space.
Posted by Ian Miller on Aug 31, 2014 9:34 PM BST
An interesting problem is how scientists should present their information to the public. The issue is complicated because we must assume that some of the public will be educated enough to understand what is presented and, if there are flaws, to pick up on them. The problem then is that, as other members of the public see the fallout, science itself gets discredited. One piece of news I saw stated that, from analysis of the decay products of the heavy isotopes 182Hf and 129I, the gas and dust that formed the solar system had been isolated from interstellar space for 30 million years before collapse to form the solar system took place. The news item called this quite remarkable, because it only took about 1 My for the star to form once it got going (or so we think) and about 30 My for the rocky planets to finally form (this last is almost certainly wrong: Mars took about 3 My). What would be your reaction to seeing that?
My initial reaction, knowing something about the subject, was to say, "Hold on a minute. We date the early stages of solar system formation through the decay of 26Al, and that has a half-life of about 717,000 years." Run that half-life through 30 My and it becomes obvious that essentially no 26Al would be left. As it happens, with what we know to have been present initially, there is insufficient left to be useful for dating after about 3 My at best. So, how do we resolve this?
If we look at the actual paper (Science 345: 650–653), what the authors actually say is that certain radioactive nuclei were formed 100 My and 30 My before the sun started forming. They then produce one of those "pretty pictures" that implies just about everything important ended 30 My before star formation, and that is presumably what the writer of the public statement latched onto. This is not helped by the same picture being presented in an accompanying explanation (Science 345: 620–621) which states early on that the gas cloud was isolated for 30 My before stellar formation. However, at the end of the paper the authors concede that additional supernovae were required to put the 26Al into the gas just before star formation. The problem is, such supernovae would also put in more of the other isotopes as well.
Thus the statement that the dust formed 30 My before star formation is plain misleading. That does not mean the work is wrong; the authors have found something. The problem is that it has since been interpreted as something else by media that do not have the skill to analyse what is actually there. What the story should have said is that the material used to form the solar system was a mix of material from a sequence of supernovae. The basic gas, hydrogen and helium, was of course there from the big bang.
This article could be written off as unimportant. The problem is, this sort of reporting is more widespread. Think of climate change. Why is there such a heated debate? Surely we can find some critical results and agree what they mean. Unfortunately, this does not seem to happen. I think that learned societies have a responsibility to present critical fact-stating documents, where everything within them is analysed and its reliability stated. Most topics have only a very limited number of really critical papers; the problem is to get these summarized so that the conclusion is not misleading.
Posted by Ian Miller on Aug 17, 2014 10:50 PM BST
One of the more unusual recent publications involved a theoretical computation on a hypothetical carbonium ion (or at least a very short-lived molecule-ion), C(CH3)5+ (Angew. Chem. Int. Ed. 53: 7875–7878). The computations concluded that the structure was a trigonal bipyramid, with three methyl groups around a central carbon atom in sp2 configuration and two further methyl groups bonded to the p orbital of that central carbon; all methyl groups were in sp3 configuration. The important point about the computation is that the ion is argued to be sufficiently stable that it exists, albeit short-lived, as it has two computed decay modes. The question now is, is it right? The issue is important because it proposes a type of bonding that so far has not been recognized, or if it has, the recognition passed me by.
There is one important point to note. Computations indicate that the CH5+ ion does not adopt the same kind of structure. That ion can be considered as a distorted CH3+ system bonded to an H2 molecule, giving three equivalent hydrogen atoms and two further, different but mutually equivalent, atoms. This is supported by the infrared spectrum (Science 309: 1219–1222), which shows a fluxional molecule consistent with that structure and with full hydrogen scrambling. Why does the replacement of hydrogen atoms with methyl groups make such a difference? Then again, does it?
The CH5+ ion is conceptually simple, in that it is really a carbenium ion making an electrophilic attack on a two-electron bond. Now, if it will do that to the hydrogen molecule's bond, why does the same thing not happen with, say, the (CH3)3C+ ion, which could make an electrophilic attack on the C–C bond of ethane?
The next question is, does it matter? I think it does, because it calls into question a number of bonding issues. The first is, where is the formal positive charge? In my view, it starts on the central carbon atom. I have argued that the gas-phase stabilities of the usual carbenium ions are given quite satisfactorily by assuming the positive charge is first located at the formal ion centre and then polarizes the substituents (Aust. J. Chem. 26: 301–310). That makes the (CH3)3C+ ion considerably more stable than the CH3+ ion, and that ion would more readily polarize the bond in ethane. The issue then resolves itself into whether the formation of a two-electron C–C–C bond plus the polarization energy is lower in energy than the C–C bond in ethane plus its polarization energy. A further question is, is a two-electron C–C–C bond even possible? What we are asking, at least in conventional chemical thinking, is for the two methyl electrons separated over that distance to pair and adopt the appropriate phase relationship. What disturbs me is that I can think of no other example where a vacant p orbital binds two electrons in that way. My immediate thought is to ask, is there any equivalent in boron chemistry? I am not sufficiently familiar with it to say there is not, but I am certainly unaware of any. Therefore the question is, does B(CH3)5 exist? If the trigonal bipyramid structure for C(CH3)5+ is correct, one would think it should, because the troublesome ionic character that leads to rearrangement is missing. If, on the other hand, the ion really represents the (CH3)3C+ ion polarizing ethane, then there should be no B(CH3)5.
Posted by Ian Miller on Aug 11, 2014 12:27 AM BST
Some time ago I made a number of posts on biofuels, concentrating on what I saw as the pros and cons of individual technologies, but, whatever your own views on their usefulness, one of the more important things those posts lacked was perspective. Of course it is hard to give perspective on such a wide field in a 600-word post. Another odd thing about those posts was that I never got around to hydrothermal liquefaction, or hydrothermal hydrogenation, which in my opinion are likely to be the most useful technologies. Of course I am biased, because these are the areas in which I have actively worked and published, on and off, over the last 35 years. I got into them because, early in my career, while working for the main New Zealand government chemical research lab, I was given the job, and a useful travel budget, to survey the possibilities and to unravel which options, if any, were the more promising. I have now repeated the exercise (without the travel budget!) and put my conclusions into another ebook that I am publishing on July 31.
The important aspect of such a survey is that it must explain why it is important to develop biofuels, and to do that, numbers have to be put on the assertions. I feel that is the biggest problem with current work in this area. It is true, and I conclude this, that there is no single 'magic bullet': a very large number of resources will have to be used, and there is no harm in using resources that are available, even if doing so means doing something that is not general. But it is also important to end up with a limited range of fuels. There is no point in having 120 different fuels on the market when a given motor can only reasonably operate on one. Now, if you put numbers on resources, you very quickly find that if you want to eat, and you want to retain something of the natural land-based environment, you cannot replace oil from the land; there is simply insufficient reasonably useful area. Accordingly, I conclude that eventually we have to utilize the oceans. The problem here is that we have very little truly adequate technology to do this with, although we know that in principle we can grow the algae. The problems include getting past "in principle". There have been clear demonstrations of growing macroalgae in deep water, but the experiment the US Navy started in the 1970s was wrecked in a storm, and when the price of oil then collapsed, the project was stopped. That does not mean it cannot be restarted, but it will require more work to solve the obvious problems.
So why do I think hydrothermal liquefaction is such a desirable technology to chase? Largely because, with some reservations, it can process any biomass, provided one adjusts the methodology to suit the resource. It produces either drop-in fuels or fuels that need a little more processing; and once one gets to the liquid state, it is much easier to transport the "pre-fuel" to a refinery for upgrading. Can we totally replace oil? Probably not; we shall probably have to reduce wasted travel, but in principle we can come reasonably close. And while I most certainly do not claim to have all the answers, I am putting what I have out there.
Posted by Ian Miller on Jul 28, 2014 5:14 AM BST
My theory of planetary formation differs from the standard theory in three ways. The first two are that the standard theory has no mechanism to reach the initial position it assumes, namely an even distribution of planetesimals (asteroid-sized bodies) with respect to distance from the star (apart from a lowering of concentration due to the greater circumference as r increases), whereas my theory requires accretion to occur by chemical means (which includes physical chemistry), so that the distribution of accreting bodies is highly temperature dependent. The third major difference is that the standard theory has everything accreting through collisions of similar-sized bodies, which makes it very slow; my theory requires accretion to be continuous and proportional to the gravitational cross-section, so major bodies grow very quickly or not at all. The maths shows that once one body in a region is significantly larger than any of the others, it alone tends to grow, by sweeping up all the smaller objects.
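A toy calculation illustrates the runaway claim in the last sentence. If the growth rate scales with the gravitational cross-section, which in the focusing-dominated regime goes roughly as M^(4/3), two bodies that start only 10% apart in mass steadily diverge. The exponent, units, and time step here are illustrative assumptions, not part of the theory's detailed maths:

```python
# Toy model: dM/dt proportional to M**(4/3), a rough stand-in for
# sweep-up at the gravitational cross-section in the focusing-dominated
# regime.  Units, exponent, and step size are illustrative assumptions.
def grow(masses, steps=2000, dt=1e-3):
    for _ in range(steps):
        masses = [m + dt * m ** (4.0 / 3.0) for m in masses]
    return masses

m_big, m_small = grow([1.1, 1.0])   # start only 10 % apart in mass
ratio = m_big / m_small             # gap widens as growth proceeds
```

Because the relative growth rate increases with mass, the initially larger body pulls away monotonically; carried further, the model formally runs away in finite time, which is the qualitative point.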
In a previous post I used my theory of planetary formation to predict the properties of the two planets around Kapteyn's star. Over the last few weeks there have been further papers in accord with my theory and against the standard theory of planetary formation. In one (Science 344: 1150) the hafnium/tungsten chronometer showed that the iron meteorite parent bodies formed over an interval of 1 My, and within 0.1–0.3 My of the calcium-aluminium inclusions. However, there is also evidence that the latter formed over about 3 My (Science 338: 651), so the iron meteorite bodies were amongst the earliest bodies in the solar system to form, at least in the zone of the rocky planets. That is required by my theory, because iron meteorites had to form within the first My at least, and probably over an even shorter time.
My theory requires rocky planets, with the possible exception of Mercury, to accrete through water acting as an initial setting agent for silicaceous cements, which gives the initial body enough strength before gravity becomes strong enough. This requires the water to be here from the start, not delivered by cometary bombardment. Some recent papers argue that seismic evidence suggests a zone between 410 and 660 km deep containing 1–3% water (Science 344: 1265). That cannot get there by cometary impact and had to have been there initially, which is well in accord with my theory.
Growth of moons should follow the same rule in my mechanism, and a recent paper (Icarus 237: 377) gave interesting support: the moments of inertia of Callisto and Titan inferred from gravity data suggest incomplete differentiation of their interiors, which implies cold accretion. Simulations show the accretion rate plays only a minor role, while the fraction of mass brought by large impactors is crucial. The simulations show that a satellite exceeding 2,000 km in radius can accrete without significant melting only if its accretion is dominated by small impactors; if more than 10% of the satellite's mass is delivered by satellitesimals larger than 1 km, global melting for large bodies like Titan and Callisto cannot be avoided.
On the other hand, one paper suggests that an alteration to what I put in the ebook is required. When I wrote the book, all available evidence indicated that the isotope ratios of most elements in Moon and Earth samples were the same. This was a problem for the giant impactor scenario: since isotope ratios seem to depend on radial distance from the star, the impactor should have had a different isotope composition. My suggestion was to support a previous proposition, namely that the impactor formed at the same radial distance, specifically at one (or both) of the Lagrange points L4 or L5. If so, the Moon would have formed towards the end of Earth's accretion (because it needs Earth to have a significant gravitational field at those positions), and because the rocky planets were supposed to accrete by adding additional material as it headed starwards, there should have been a slight difference in oxygen isotope ratios. On this proposition, the impactor would not accrete many small iron-containing bodies either, because the lunar feed would lie outside the zone where iron melted.
So, overall, I remain reasonably happy. The very first forms of this theory were laid down in the mid 1990s, and of course I have kept track of evidence for and against ever since. Some of the evidence, as interpreted by its authors, naturally supports the standard theory, but so far nothing I have seen fundamentally contradicts what I started with, although of course there have been some minor adjustments. There is a slightly weird feeling when your theory contradicts what everyone else thinks, yet nothing falsifies it over twenty years, and some of what has deeply puzzled almost everyone has a natural explanation in it.
Posted by Ian Miller on Jul 14, 2014 4:01 AM BST
Throughout my career in chemistry, one of the more interesting debates has been the nature of the cationic exo-2-norbornyl system, and recently (Angew. Chem. Int. Ed. 2014, 53, 5888) a paper was published that included, after a discussion of the original debate, the quote: "To our surprise, the structure of C7H11+ obtained under our conditions is not that of 2NB+, but instead corresponds to a much more stable rearranged ion." Why was this surprising?
First, this is irrelevant to the original debate, which concerned the question: why does the solvolysis of exo-2-norbornyl X systems, where X is a leaving group, proceed much faster than that of the endo systems? The isolated 2-norbornyl cation should be the same for each, and is hence irrelevant. The reason for the difference in solvolysis rates is not the structure of the isolated ion, but rather the activation energy required to reach the transition state, in which the ion is not fully developed. If the ion fully develops as a free ion, then both starting materials lead to one ion with one energy and structure.
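The point about activation energies can be made quantitative with transition-state theory. As an illustration (the numbers here are hypothetical, not the measured exo/endo ratio), a rate ratio corresponds to a difference in free energies of activation via k₁/k₂ = exp(ΔΔG‡/RT):

```python
import math

R = 8.314   # gas constant, J mol^-1 K^-1
T = 298.15  # temperature, K

def rate_ratio(ddG_kJ):
    """Rate ratio implied by a difference in free energies of activation."""
    return math.exp(ddG_kJ * 1000 / (R * T))

def ddG_from_ratio(ratio):
    """Free-energy-of-activation difference (kJ/mol) implied by a rate ratio."""
    return R * T * math.log(ratio) / 1000

# A 1000-fold rate difference needs only about 17 kJ/mol difference
# between the two transition-state energies:
print(ddG_from_ratio(1000))  # ~17.1 kJ/mol
```

A quite modest difference in transition-state stabilization therefore suffices for a dramatic exo/endo rate difference, with no reference at all to the structure of the fully formed free ion.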
Another quote: "Although 2NB+ is well-known in the condensed phase, it is not generally recognized that it is not the C7H11+ global energy minimum. Computational studies have explored some C7H11+ isomers, but there has been no comprehensive study of the potential energy surface, and no studies of this system at higher levels of theory. [20, 28, 29]" The paper then went on to show, by measuring the infrared spectrum of the cations generated in the mass spectrometer, that the ion was the 1,3-dimethylcyclopentenyl carbenium ion. This was apparently a surprise to them.
First, the fact that the 1,3-dimethylcyclopentenyl ion lies at an energy minimum for this system has been known since the 1960s, as has the fact that certain cyclohexenyl carbenium ions contract to a cyclopentenyl system with a methyl group generated at an adventitious position. Then, in 1973, I published a paper explaining why such carbenium ion rearrangements take place, and giving a procedure for calculating the energies of the various species. As to why the norbornyl system rearranges to the cyclopentenyl system, note that the norbornyl skeleton is in effect a five-membered ring with a two-carbon bridge across the 1,3-positions. (Count from C1, and make what is usually C7 now C2.) The system is also highly strained, and forming the cyclopentenyl system relieves that strain. Lose the bridging bond, and the two "methyl" substituents are already in position following the required hydride shifts, which are known to be fast in this system.
To summarize, the fact that the system forms the 1,3-dimethylcyclopentenium cation should not be a surprise. More interesting is the reason this system sits in an energy well, not so much for the norbornyl system, where the strain energy makes it somewhat obvious, but rather for the corresponding cyclohexenyl system. The calculations I made do not need "the highest level of quantum computing". What I assumed was that before the ion formed, the bonds were standard; when the ion forms, the action in each bond must remain constant, because action is quantized. What happens to such a standard framework then follows from Maxwell's electromagnetic theory: the enhanced electric field polarizes all electric distributions in the space around it. If we assign a volume and a relative permittivity to each specific type of bond (in this case C–C and C–H), then the stabilization depends on the bond's location with respect to the formal charge, which, for a cation, is a carbon atom. An important point is that the assumed permittivities and volumes were consistent with effects observed with electromagnetic radiation. Perhaps not quite as "glamorous" or "sophisticated" as "the highest level of quantum computing", but Maxwell's electromagnetic theory is not exactly fringe science either.
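The flavour of such a polarization calculation can be conveyed with a back-of-envelope estimate (my own sketch, not the original procedure; the bond polarizability volume and distance below are assumed illustrative values). A polarizable bond in the field of a point charge is stabilized by U = -½αE², with E = e/(4πε₀r²):

```python
# Stabilization of a polarizable bond by the field of a unit point charge.
# Assumed illustrative values: polarizability volume ~0.65 A^3 for a C-H
# bond, located 2.5 A from the formal charge.
E2_COEF = 14.40  # e^2 / (4*pi*eps0), in eV * Angstrom

def stabilization_eV(alpha_A3, r_A):
    """U = -alpha' * e^2 / (8*pi*eps0 * r^4), with alpha' in A^3, r in A."""
    return -E2_COEF * alpha_A3 / (2 * r_A ** 4)

u = stabilization_eV(0.65, 2.5)
print(u * 96.485)  # in kJ/mol: roughly -12 kJ/mol for this one bond
```

The steep 1/r⁴ dependence is the key design point: stabilization is dominated by the bonds closest to the formal charge, which is why the geometric placement of the C–C and C–H bonds matters so much.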
Posted by Ian Miller on Jun 29, 2014 11:45 PM BST
One of the most intriguing recent announcements regarding exoplanets is that two planets have been found around the red dwarf Kapteyn's star, which happens to be rather close to us, at about 13 light years' distance. Even more intriguing is its proper motion: it was about 11 light years away some 11,000 years ago. The reason is that it orbits the galaxy in the opposite direction to us! Galaxies grow by accreting other galaxies, and ours has apparently swallowed a small one, part of which may now be known as the Omega Centauri cluster. Another interesting feature of this star is that it formed about two billion years after the big bang. Not surprisingly, it is rather short of heavy elements, as these have to be made in supernovae.
The planets were found using the Doppler method, which measures the small variations in the velocity of the star as it wobbles under the gravitational pull of its planets. This star has a mass of about 0.28 times that of the sun and a surface temperature of about 3,500 K, and such low stellar masses make the detection of planets somewhat easier, because small stars wobble more under the pull of a given planet. The two planets are (b), at 0.168 A.U. from the star and at least about 4.5 times Earth's mass, and (c), at 0.311 A.U. from the star and at least about 7 times Earth's mass. (The "at least" is because what is measured is msini; the tug we see is only the component in our direction, and the inclination of the orbital plane is unknown.) The reason this hit the news is that (b) is at a distance from the star where water could be liquid, so it is in the so-called habitable zone. With over 11 billion years for life to evolve, would it? And if it did, with an extra 6.5 billion years, why hasn't its technology led to space travel to us?
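The advantage of a low-mass star can be put in numbers with the standard radial-velocity semi-amplitude relation (valid for a circular orbit with planet mass much less than stellar mass): K ≈ 28.4 m/s × (msini/M_Jup) × (M*/M_sun)^(-2/3) × (P/yr)^(-1/3). A rough check for Kapteyn b, treating the quoted 4.5 Earth masses as msini (the quoted orbital distance and masses come from the text; the exact published amplitude will differ somewhat, e.g. through eccentricity):

```python
import math

M_EARTH_IN_JUP = 1 / 317.8  # Jupiter masses per Earth mass

def period_years(a_au, m_star_solar):
    """Kepler's third law: P = sqrt(a^3 / M*), in years, AU, solar masses."""
    return math.sqrt(a_au ** 3 / m_star_solar)

def rv_semi_amplitude(msini_earth, a_au, m_star_solar):
    """Radial-velocity semi-amplitude in m/s (circular orbit, m << M*)."""
    p = period_years(a_au, m_star_solar)
    msini_jup = msini_earth * M_EARTH_IN_JUP
    return 28.4 * msini_jup / (m_star_solar ** (2 / 3) * p ** (1 / 3))

k_dwarf = rv_semi_amplitude(4.5, 0.168, 0.28)  # Kapteyn b: ~2 m/s
k_sun   = rv_semi_amplitude(4.5, 0.168, 1.0)   # same planet, solar-mass star
print(k_dwarf, k_sun)
```

The same planet at the same distance tugs the red dwarf nearly twice as hard as it would the sun, which is exactly why low-mass stars are favoured targets for Doppler surveys.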
If you accept my theory of planetary formation, the answer is: life there is highly unlikely. In this theory, certain types of planet form at specific temperatures in the accretion disk. The temperature depends on the power generated at a point, which in turn depends on the gravitational potential and the rate of the starwards component of matter flowing through the point. The first, from Newton, is proportional to the stellar mass; the second, from observation, is very roughly proportional to the square of the stellar mass. Accordingly, the radial distance for equal temperatures should vary between accretion disks roughly as the cube of the stellar mass. This is an extremely rough approximation, not least because we have left heat radiation out of the calculation and assumed it to be proportionally the same for all disks. However, heat is radiated by dust, which depends on metallicity (which, to astronomers, means all elements heavier than helium), and this is an extremely low metallicity star. If we apply my approximate relationship, the Jupiter equivalent should be at 0.12 A.U. and the Saturn equivalent at 0.20 A.U., both plus or minus quite a lot.
Notwithstanding the inherent errors, I am reasonably confident there are no rocky planets there, because while my estimates have a large potential error, there is a huge difference between the melting point of ice and the melting point of iron (needed to produce iron lumps, as in meteorites). Further, the error is reasonably consistent, being out by a factor of 1.4 for the Jupiter equivalent and 1.55 for the Saturn equivalent, if that is indeed what they are; that is consistent with less heat loss due to the lower metallicity. In my theory, these two planets would be interpreted as the cores of a Jupiter equivalent (formed like a snowball, by ice sticking together in collisions near its melting point) and a Saturn equivalent (formed by melt fusion of methanol/ammonia/water near that eutectic temperature, the energy of the collision providing the heat and the melt then fusing the ice). The reason they did not develop into full gas giants would simply be a lack of material to grow that big. Such dust as was available would also be incorporated, and the resultant planets would be like a giant Ganymede and a giant Titan. Thus I would expect (b) to have little atmosphere, though it might be a waterworld on the face tidally locked to the star, and (c) to have a nitrogen atmosphere, and maybe methane. Why maybe? Because methane is photochemically degraded, and presumably has to be regenerated on Titan; on Kapteyn c, with 11 billion years of photochemistry, the methane may not have lasted. There would be no life on (b), nor for that matter in any Europa-like under-ice ocean, because of a general deficiency of nitrogen, and also a probable difficulty in forming phosphate esters.
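The scaling check above takes only a few lines of arithmetic. A sketch, taking the solar-system Jupiter and Saturn at roughly 5.2 and 9.6 AU and Kapteyn's star at 0.28 solar masses (as quoted in the text), and scaling the equal-temperature radius by the cube of the stellar mass:

```python
M_STAR = 0.28  # Kapteyn's star, in solar masses

def scaled_radius(r_solar_au, m_star_solar):
    """Equal-temperature radius scaled by stellar mass cubed (very rough)."""
    return r_solar_au * m_star_solar ** 3

r_jup = scaled_radius(5.2, M_STAR)  # ~0.11 AU (the text rounds to 0.12)
r_sat = scaled_radius(9.6, M_STAR)  # ~0.21 AU (the text rounds to 0.20)

# Ratio of the observed semi-major axes to these rough predictions:
print(0.168 / r_jup, 0.311 / r_sat)  # both off by a similar factor, ~1.5
```

That both ratios come out close to each other is the "reasonably consistent" error referred to in the text: a single systematic effect, such as reduced heat loss in a low-metallicity disk, could plausibly account for both at once.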
So, that is my prediction. Unfortunately, I guess I shall never know whether it is right.
Finally, a small commercial break! Four of my fictional ebooks are on special at Amazon from the solstice for a few days, including the one that was actually the cause of my developing this alternative theory of planetary formation. The fiction required an unusual discovery on Mars; I invented one, and an editor had the cheek to say it was unbelievable. Editors in publishing houses have a right to criticize grammar, but not science, so I ended up determined to do something about it. Details of the special are at
Posted by Ian Miller on Jun 16, 2014 12:55 AM BST