Is science sometimes in danger of getting tunnel vision? Recently published ebook author, Ian Miller, looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?

As I haven't updated for some time, there should be a lot of references summarized; however, many of them turned out to be only of partial interest. There were a few papers discussing the origin of chondrules. The formation of these has long been something of a mystery, and until recently the nearest thing to a conclusion was that they were formed in the plumes produced by collisions of planetesimals. Fu et al. (Science 346: 1089 – 1092) found that chondrules in one meteorite formed about 2 – 3 My after the first small bodies. This was supported by Palme et al. (Earth Planet. Sci. Lett. 411: 11 – 19), who found that chondrules behaved as if they were formed before any larger bodies, and were not formed by impacts. On the other hand, Johnson et al. (Nature 517: 339 – 341) argued that chondrules in CB chondrites probably formed in a plume produced by an impact at a relative velocity greater than 10 kilometers per second. If so, meteorites are a byproduct of planet formation rather than left-over building material. Of course there may have been more than one route that led to their formation, but as can be seen, there is no clear answer to this question yet.
The landing on the Jupiter-family comet 67P/Churyumov-Gerasimenko probably attracted as much recent public interest as anything in science, and some data are coming in. Examination of the comet's water (Altwegg et al. Science 347: 1261952) showed that the comet has a D/H ratio about three times that of Earth's water, and hence Jupiter-family comets could not be a significant source of Earth's water. The reason this is important is that the water has to come from somewhere, and comets were once considered the obvious source. However, the usual distant comets were found to have too much deuterium, so Jupiter-family comets became the favoured choice. The alternative is carbonaceous chondrites, but the problem with these is that they are rather rare, and lie in the outer part of the asteroid belt. Had they been the source, why were there once so many of them, and why are there now so few compared with the more common asteroids? And if there were that many, why did they not accrete into a small planet? In my theory, neither comets nor chondrites are significant sources of water on Earth.
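The scale of the discrepancy is easy to check. A minimal sketch, using the commonly quoted literature values rather than numbers from this post (VSMOW D/H for Earth's oceans, and the Rosetta measurement for 67P), so treat both as assumptions:

```python
# Compare the D/H ratio of comet 67P water with Earth's ocean water.
# Both values are the widely quoted literature figures (assumptions here).
D_H_EARTH = 1.56e-4   # VSMOW (Vienna Standard Mean Ocean Water)
D_H_67P   = 5.3e-4    # Rosetta/ROSINA value for Jupiter-family comet 67P

enrichment = D_H_67P / D_H_EARTH
print(f"67P water is {enrichment:.1f} x the terrestrial D/H ratio")
# An enrichment of ~3x is hard to reconcile with Jupiter-family comets
# having supplied the bulk of Earth's water.
```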
Meanwhile, spectral data from the surface of the comet were compatible with the presence of opaque minerals associated with non-volatile organic material with C-H and O-H bonds, but with very little contribution from N-H bonds (Capaccioni et al. 2015 Science 347: aaa0628-1 to 5). There may have been small amounts of ice, but no ice-rich patches, and the surface was generally dehydrated. Similarly, Hässig et al. (2015 Science 347: aaa0276-1 to 4) showed the outgassing comprised water, CO and CO2. The relevance here is the absence of significant amounts of nitrogen-containing compounds, as required by my theory of planetary formation, provided the body originated in the Jovian region. That is pleasant support, even though the supporters do not realize it.
Significant results are beginning to come in from Gale crater, on Mars. Of particular interest to me was that from Bridges et al. 2015 (J. Geophys. Res.: Planets: 10.1002/2014JE004757). The clays found at Gale crater were consistent with the basalt having reacted with a fluid of pH between 7.5 and 12. Further, the reactions did not occur in a setting where exchange with an overlying CO2 atmosphere was possible, because had it been so there would have been deposits of carbonates, and there were not. My theory involves an early methane atmosphere with ammonia in the local water, although of course Gale crater may not be a good example of early Mars, as impact craters could have their own localized geology. Nevertheless, all these facts are in accord with what I published, and nothing has been found that contradicts it, so I am tolerably happy.
On a more personal level, I have been asked by the Wellington Astronomical Society to give a talk on March 4, and to include some chemistry. Accordingly, I chose "Origin of life" as a topic, put out an abstract, and issued two challenges, which readers here are welcome to join in:
1.     Why did nature choose ribose for nucleic acids?
2.     How did homochirality arise?
Put your guesses or inspired knowledgeable comments at the end of this post. The answers are not that difficult, but they are subtle. I shall post my answers in due course. In the meantime, I am offering serious discounts on my ebook "Planetary Formation and Biogenesis" from Amazon (US and UK only) from March 6 for about six days, the discount abating over time, so get in early. (Sorry about that commercial intrusion.)
Posted by Ian Miller on Feb 16, 2015 2:21 AM GMT
Yes, this post will be controversial, but I am doing it for several reasons. The first is my wife was convinced there is, and she was equally convinced that I, as a scientist, would quietly argue the concept was ridiculous. However, as she was dying of metastatic cancer we had a discussion of this issue, and I believe the following theory gave her considerable comfort. Accordingly, I announced this at her recent funeral in case it helped anyone else, and I have received a number of requests to post the argument. I am doing two posts: this one with some mathematics, and one where I merely assert the argument for those who want a simpler account.
First, is there any evidence at all? The issue is complicated in that direct verification can only come from dying: if there is an afterlife, you find out. What we have to rely on is statements from people who very nearly died. There are numerous accounts from such people, and they claim to see a tunnel of light, with relations at the other end. There are two possible explanations:
(1)  What they see is true,
(2)   When the brain shuts down, it produces these illusions.
The problem with (2) is, why does the brain do it the same way for everyone? There was also an account recently of someone who died on an operating table, but was resuscitated, and he then gave an account of what the surgeons were doing as viewed from above. One can take this however one likes, but it is certainly weird.
What I told Claire arises from my interpretation of quantum mechanics, which is significantly different from most others', and I shall give a brief outline now. (If anyone is interested in going deeper, I have an ebook on the subject.) I start by considering the two-slit experiment, and the diffraction pattern that is obtained. Either there is a wave guiding the particles or there is not. Most physicists argue there is not; the particles just happen to give that distribution. You ask, why? They tend to say, "Shut up and compute!" For the fact is, computations based on what is a wave equation give remarkably good agreement with observation, but nobody can find evidence for the empty wave. For me, there must be something causing this behaviour. Accordingly, my first premise is:
The wave-like distributions found in quantal experiments are caused by a wave. (1)
This was first proposed in de Broglie's pilot wave theory, but modern quantum theory does not assert this.
As with general quantum mechanics, the wave is represented mathematically by
 ψ = Aexp(2πiS/h)    (2)
where A is the amplitude, S is the action, and h is Planck's quantum of action. Note that the exponent must be a number. As a consequence, it is generally held that the wave function is complex, but this is not entirely true. From Euler's relation
exp(πi) = -1         (3)
it follows that, momentarily, when S = h/2, or h, the wave becomes real.
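The reality condition is easy to verify numerically. A sketch only, in units where Planck's quantum of action h = 1:

```python
import cmath

def psi(S, A=1.0, h=1.0):
    """Guidance wave psi = A exp(2*pi*i*S/h), equation (2)."""
    return A * cmath.exp(2j * cmath.pi * S / h)

# At S = h/2 the exponent is pi*i, so by Euler's relation psi = -A (real);
# at S = h the exponent is 2*pi*i, so psi = +A (real again).
for S in (0.5, 1.0):
    value = psi(S)
    print(S, value, abs(value.imag) < 1e-12)
```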
My second premise is
The physics of the system are determined when the wave becomes real.   (4)
This is the first major difference between my interpretation and standard quantum mechanics. The concept that the system may behave differently when the wave function is real rather than imaginary has, as far as I know, not been investigated. This has a rather unexpected benefit too: the dynamics involve a number of discrete "realizations", and the function is NOT smooth and continuous in our domain. If you accept that, it immediately explains why stationary states of atoms are stable and why the electron does not radiate when it accelerates, as Maxwell's equations would otherwise require. The reason is, the position of realization does not change in a stationary state, and therefore the determination of the properties shows no acceleration. From that, it is very simple to derive both the Uncertainty Principle and the Exclusion Principle, and these are no longer independent propositions.
Now, if (1) and (4), it follows that the wave front must travel at the same velocity as the particle; if it did not, how could it affect the particle? The phase velocity of the wave is given by
v = E/p        (5)
Since p is the momentum of the particle, and if the phase velocity is the same as the particle velocity (for the particle, consider the expectation velocity), then the right hand side must be mv^2/mv = v. (Recall that the term v must equal the particle velocity.) That means the energy of the system must be twice the kinetic energy of the particle. This simply asserts that the wave transmits energy. Every other wave in physics transmits energy; only the textbook quantal matter wave transmits nothing: it does not exist, yet it defines probabilities. (As an aside, since energy is proportional to mass, and mass is proportional to the probability of finding the particle, in general this interpretation does not conflict directly with standard quantum mechanics.) There are obvious consequences of this that lie outside this post, but what I find strange is that nobody else seems to have considered this option. For this discussion, the most important consequence is that both particle and wave must maintain the same energy. The wave sets the particle energy because the wave is deterministic; the particle is not, and has to be guided by the wave. There is now a further major difference between this interpretation and the standard interpretation: waves are both linear and separable, as in standard wave physics. There is no need for a non-divisible wave for the total state of an assembly, because there is no renormalization due to probabilities.
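The arithmetic behind equation (5) can be made explicit. A sketch, assuming a free non-relativistic particle (the electron mass and speed chosen are purely illustrative):

```python
# If the phase velocity E/p equals the particle velocity v, then with
# p = m*v we need E = p*v = m*v**2, i.e. twice the kinetic energy m*v**2/2.
m = 9.109e-31   # illustrative particle mass (an electron), kg
v = 1.0e6       # illustrative speed, m/s

p = m * v                  # momentum of the particle
kinetic = 0.5 * m * v**2   # its kinetic energy
E = p * v                  # energy required if phase velocity = v

print(E / kinetic)         # the system carries twice the kinetic energy
```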
Now, what is consciousness? Strictly speaking, we do not know exactly, but examination of brains that are conscious shows considerable electrical activity. Furthermore, this activity is highly ordered. While writing this, my brain is not sending random pulses, but rather it is organising some reasonably complicated thoughts and setting out action. To do that, and overcome entropy, there is a serious expenditure of energy in the body. (The brain uses a remarkably high fraction of the body's energy.) I leave aside how this happens, but I require consciousness to be due to some matrix that remains undefined but evolves and is superimposed on the brain, and it orders the activity. Without such a superimposed entity, simple entropy considerations would lead to the decay of the order required for conscious thought. Such order must involve the movement of electrons, and since this is quantum controlled, the corresponding energy must be found in an associated wave. It therefore follows that when we are conscious and living "here", there is a matrix of waves with corresponding energy "there".
Accordingly, if this Guidance Wave interpretation of quantum mechanics is correct, then the condition for life after death is very simple. Death occurs because the body cannot supply the energy required to match the Guidance Waves that are organizing consciousness. But if at that point the energy within the Guidance Wave matrix can dissociate itself from the body and maintain itself "there" (recall that the principle of linearity means other waves do not affect it), then that wave package can continue, and since it represents the consciousness of a person, that consciousness continues. That does not mean there is life after death, but it does in principle appear to permit it.
Is the Guidance Wave interpretation correct? As far as I am aware, there is no observation that would falsify my alternative interpretation of quantum mechanics, while my Guidance Wave theory does make two experimental predictions that contradict standard quantum mechanics, and these could be tested in a reasonably sophisticated physics lab. It also greatly simplifies the calculation of some chemical bond properties.
Is there life after death? In my opinion, you only find out when you die, but interestingly, this interpretation gave Claire surprising comfort as her death approached. If it gives any comfort to anyone else, this post will be worth it to me.
Posted by Ian Miller on Feb 2, 2015 1:34 AM GMT
Back again. My wife died on the 16th of January, so my blogging will be a bit erratic for a while, but at the end of last year I had planned some, and one theme included the behavior of the scientific community in the dissemination and discovery of knowledge, so here is the first blog that was written before this unfortunate event.
In a recent essay in Angew. Chem. Int. Ed. 52: 118 – 122, van Gunsteren outlined what he considered the seven sins of academic behavior, which he ordered in increasing gravity of the offence. I found this to be quite interesting, and worth exploring further. The least "severe" sin, according to the author, was "Poor or incomplete description of the work". As the author says, reproducibility is a critical element of good science.
Why is this not such a sin? The author argues that with the growth of the complexity of equipment and the sophistication of computer-aided mathematical analysis, the publication of all the data required for others to reproduce the work has become more cumbersome. There is no doubt about that, and there is also little doubt that journal space does not lend itself to providing everything, but this in turn raises an interesting question: is the finding of the experiment so complicated that some simplified version cannot be provided? That leads me to a particular dislike I have for computational chemistry papers, in which a general reader such as me, who has an interest but is not involved, has no idea what key features led to the conclusion, because they are not listed. Of course the program details are too complicated, but if there is nothing of general interest, why is it published?
When I was doing my PhD, I tried a synthesis outlined in a report in Tetrahedron Letters, and I could not get it to go. The substrate was not the same, so maybe the reaction simply did not work on what I was trying, or at least that was the conclusion I reached, so I abandoned that route and instead tried one that was very much longer, but for which I could at least find enough details to know whether I was doing it properly. While it was time-consuming, it had the merit of working, although it did not give me quite the range of substitution patterns I would have preferred. (I also tried an alternative synthesis, and it worked to some extent, but only on the most electron-rich substrates. That gave me more reason to believe the first option would not work.) However, four years after I had finished my PhD I came across a full paper where the real details of that failed option were finally published, and one condition that I doubt a young student would recognize as important had been left out of the letter. Yet this condition was absolutely critical, and it would never have been applied accidentally, because it needed a special procedure quite outside the usual conditions of organic synthesis. I do not consider that a minor sin. I consider it likely to be due to an egotist who wanted to get as many papers in the field as possible before others worked the obvious possibilities. In my opinion, a synthesis procedure is useless unless sufficient detail is given for a tolerably competent chemist to carry it out and make the required product. So, for me, if this is a minor sin, some of the others must be pretty bad.
Van Gunsteren then proceeds to criticize the practice of dumping procedures in supplementary information. His criticism is largely based on this being less well reviewed than the main body of the paper. Personally, I do not find this to be terribly important. The fact is, in most papers there is a strict limit to what peer reviewers can be expected to find. They may find flaws, and I have occasionally found absolutely critical errors of procedure, but basically the whole point of peer review (at least in my opinion) is to ensure the paper is coherent and understandable, and makes a point worth making. The peer reviewer usually has no more chance of finding a basic flaw in a procedure than anyone else who does not try the procedure, and the peer reviewer cannot be expected to reproduce the work. It is the responsibility of the author alone to ensure that the details are correct, AND that all details are there. After all, the author alone actually knows what was done, unless, of course, it was a student who did it.
Posted by Ian Miller on Jan 19, 2015 7:04 PM GMT
This will be my last post for 2014, and as is customary at this time of the year, I thought I should survey what I thought were the highlights for me this year. I started the year with posts on how the ancients could have considered the heliocentric theory, largely to support the science fiction trilogy I had written. The key here was to get this into the plot as a central element, and that gave me the chance to try to explain what I believe science is all about. The good news is I have sold quite a few copies, so hopefully there are some more readers out there being introduced to the beauty of theoretical science at a level they can understand. (I doubt there is any way you could explain quantum field theory to the general public in a way that makes sense and honestly goes to the basis of the theory.) The key idea I tried to get across was to ask questions and devise ways to make critical observations that clearly separate the possibilities. (In this case, either the Earth moves or it does not, therefore one needs observations that could be carried out by an ancient Roman that will differ depending on whether it does or does not.)
The next milestone for me was in a sense a negative one. My ebook "Planetary Formation and Biogenesis" was published three years ago, and its basis is that planetary accretion started through chemical interactions (including physical-chemical ones) in the accretion disk, which has led to the various planetary systems being compositionally different. What is interesting for me is that in the following three years, no observation has been found that falsifies anything of substance in the theory. Unfortunately, while I made over seventy predictions based on the theory, most are too difficult to carry out. Notwithstanding that, there is one simple chemical experiment that could be done in the lab right now. I shall post on that in the new year, but it is based on an experiment carried out by Carl Sagan's group in the 1960s, which has seemingly been forgotten since, or at least its significance not appreciated.
The next milestone for me was to produce my ebook on biofuels. I have worked in this area on and off (depending on funding, which has been erratic to say the least) for decades. The conclusion I reached is that by simply looking at the size of the problem (the amount of oil consumed) this cannot be met by biofuels without using the oceans. There are ways to do this, at least in theory, but it would need a significant investment in science to achieve it. I am not holding my breath waiting for funding, but then again I have now reached an age where my personal research involvement is fading away at a serious rate.
What about chemistry as a whole? My impression is that it is very efficient at making new compounds and doing things with them, but I continue to think that we are not so good at sorting and analyzing what we know, probably because of the rate of production of new compounds. I have no doubt that people know a great deal about their specialist field, but in my opinion it is harder to understand the bigger picture. Maybe you do not agree? If so, I welcome your comments.
Meanwhile, I wish you all a very Merry Christmas, and all the best for 2015. I shall be back mid January.
Posted by Ian Miller on Dec 15, 2014 2:01 AM GMT
Chemists are fairly adept at finding out what molecules are present in a sample, but what happens when the sample is light years away? Astronomers have worked out how to do some spectroscopy, but it is not exactly easy. One of the interesting recent reports was the measurement of an exoplanet's atmosphere (Nature 513: 526 – 529). When starlight passes through the atmosphere, various absorption lines can be seen, as long as the atmosphere is basically transparent. While the star is a strong source of light (somewhat too strong, since the star is very much bigger than the planet and most of its light does not go through the planetary atmosphere), the path length through a giant planet's atmosphere is also somewhat longer than that of the average laboratory cell! In this case, the main signal detected was water, and it was noted that the level of heavier elements in the atmosphere relative to hydrogen was no greater than 700 times that of the star, as would be expected if the planet was a giant that accreted gas from an accretion disk. Not that there would be many other ways of making a gas giant.
Another study (Icarus 243: 39 – 47) considered the chemistry of cometary methanol during impacts. Impacts cause methanol to dissociate into CO and CH4; however, the energies are such that methanol should survive accretion onto the large icy bodies, including Callisto, Ganymede, Titan, Ceres and Pluto, although cometary impacts following accretion will produce dissociation. Asteroid impacts onto Ceres would dissociate methanol. However, Callisto could have produced up to 10^-2 bar of such gas during the late heavy bombardment, while Titan would have acquired 0.1 bar. Since it did not, the authors imply that the methanol concentration in the Saturnian system was much lower than that of comets, or alternatively, some unspecified conversion of CO to CH4 occurred. This supports my mechanism of planetary formation, in which comets were not the source of methanol or other carbonaceous material on the icy bodies. Titan would have contained methanol, but this would be converted to methane by geochemical processes. These authors also show that CH3OH and CH4 abundances in a persistently shadowed part of the Moon cannot be of cometary origin.
One of the more difficult questions is what the original Earth was like. The standard theory has it that the planet formed as a consequence of giant collisions that led to a magma ocean, but a recent publication (arXiv:1403.0806) throws up interesting constraints. The authors propose at least two giant impacts to generate a global magma ocean, based on the ratios of 3He to 22Ne. The depleted mantle has a ratio of at least 10, while a more primitive mantle has a ratio of 2.3 – 3. The solar ratio is 1.5. In-gassing of a gravitationally accreted nebular atmosphere can explain a ratio of about 3, but to get to 10 requires at least two episodes of atmospheric blow-off and magma ocean outgassing. The preservation of the low ratio in a primitive reservoir sampled by plumes suggests that later giant impacts, including the moon-forming impact, did not generate a whole-mantle magma ocean. Atmospheric loss episodes with giant impacts provide an explanation for Earth's subchondritic C/H, N/H and Cl/F elemental ratios, while preserving chondritic isotope ratios, but if so, a significant proportion of terrestrial water and other volatiles were accreted prior to the last giant impact, otherwise the fractionated elemental ratios would have been overprinted by the late veneer. What is most surprising here is that the collision that caused the Moon to form was insufficient, yet the carbon, nitrogen and halogen abundances were determined relative to hydrogen prior to the moon-forming event. That would require that the current volatiles were degassed from the Earth at a later date.
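The need for more than one blow-off episode can be illustrated with a toy calculation. The starting and final ratios are from the paper as summarized above, but the per-episode enhancement factor is my assumption, chosen purely for illustration:

```python
import math

R_SOLAR = 1.5        # 3He/22Ne of the nebular starting material
R_DEPLETED = 10.0    # ratio observed in the depleted mantle
F_PER_EPISODE = 2.7  # ASSUMED enhancement per outgassing episode (illustrative)

# Each magma-ocean outgassing episode preferentially strips Ne relative to He
# from the melt, multiplying the residual 3He/22Ne ratio by roughly
# F_PER_EPISODE. The number of episodes needed to reach the depleted-mantle
# ratio is then:
episodes = math.ceil(math.log(R_DEPLETED / R_SOLAR) / math.log(F_PER_EPISODE))
print(episodes)  # at least two episodes under these assumptions
```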
There were two big events in November. The first involved Philae landing on a comet, and apparently it has made a lot of measurements, and sent the data back to Earth. However, as yet we have no idea what was discovered. The fact it landed and ended up in the shade was bad news because the solar cells will not recharge the batteries adequately. For me the big disappointment was that the device that bored into the comet apparently struck something hard, and when the drill was withdrawn, apparently there was no sample. This is one of the difficulties with robots; whoever designs them has to know what the conditions would be. Why would there be no sample? One possibility is that the ice has clathrated or adsorbed gas in it, and the heat of the drill vaporized the gas, the pressure of which blew out the sample, however I guess we shall never know because "no sample" cannot be analysed.
The second big event came from the European Southern Observatory, whose ALMA array studied the star HL Tauri and found an accretion disk around it. The star is about 1 million years old, and the disk has rings in it, with dark gaps between them. The most obvious cause of such rings would be the formation of planets, although that does not mean there is a planet in every gap, because while a planet will clear out dust along its path, gravitational resonance will also clear out material. One problem is that we cannot see the planets. Why would we? We can see four giant planets around the star HR 8799, but these are newly-formed giants, and the gravitational energy of the gas falling onto each planet heats it to a yellow-white heat, hence they glow; they are all very much bigger than Jupiter. Similarly, there is a star, LkCa 15, that is 3 million years old, where we see a planet much bigger than Jupiter, and significantly further from the star. Planetary growth should be faster the closer to the star, at least for the same sort of planet, because the density of matter increases as it falls toward the star. Since we only see one giant, my theory requires there to be three other giants we cannot see, presumably because they are as yet of insufficient size to glow brightly enough for us to image them. So, if I am right, 1 My gets you giants of the size we have, and the longer the disk lasts, the bigger the giants get.
Posted by Ian Miller on Nov 30, 2014 8:58 PM GMT
It seems to me there are two purposes for theory: to enable the calculation of things of interest so that predictions can be made, or to lead to understanding so that even if calculations are not practical, at least educated guesses can be made to guide further action. At the risk of drawing flak from the computational chemists, I think the second purpose is of more importance to chemists. The problem is, chemistry is based on a partial differential equation that cannot in general be solved, if for no other reason than the equations relating to a three-body problem involving a central field cannot be solved exactly. That leaves the question, if you cannot solve that, what can you do? What chemists have done is to take solutions from what can be solved (the hydrogen atom) and base models on those. Thus we have orbitals that correspond to the excited state solutions of the hydrogen atom. The perceptive reader of my previous posts will realize I have argued that the actual orbitals do not exactly correspond, nevertheless the wave functions I argue for (essentially superpositions of waves with fewer nodes based on the principle that separation is possible provided all components have quantized action) are essentially the same in terms of angular distributions, so that issue is irrelevant to the present issue, which is what to do with these orbitals relating to dative bonds? Most chemists are familiar with one answer relating to dative bonds: models based on arrows, etc.
Recently, we have seen a debate about dative bonds in Angew. Chem. Int. Ed. (2014, 53, 370 – 374; 2014, 53, 6040 – 6046). There seem to be several points being made, but they tend to boil down to the use of arrows, what the dative bond is, and which model is worth following. This discussion attracted the heretic in me!
First, why models? One of the protagonists (Frenking) used this quote: Bonding models are not right or wrong but they are more or less useful. This raises the issue, what do we mean by "right or wrong", and when can a model that is known to be wrong continue to be used? In the first case a model can be seen to give useful outputs and can be used while there are no known examples of it being wrong, and, of course, there is nothing wrong with using a model that you know to be an approximation, as long as everyone accepts that it is an approximation. Another time when the model is strictly wrong but can still be used (in my opinion anyway) is when it is only wrong when a given external condition is imposed that gives a known effect, in which case it can be used when that effect is absent. The most obvious example is Newtonian mechanics. Newton assumed action at a distance was immediate. It is not, and when that is relevant we have to resort to Einstein's mechanics, but when motion is such that the effects of light speed can be considered as effectively instantaneous, you would be mad not to use Newtonian mechanics.
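The Newtonian example is easy to quantify. A sketch comparing Newtonian and relativistic momentum at an everyday speed (the mass and speed are arbitrary illustrative values):

```python
import math

C = 299_792_458.0           # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

m = 1000.0                  # a one-tonne car, kg
v = 30.0                    # ~108 km/h, m/s

p_newton = m * v            # Newtonian momentum
p_rel = gamma(v) * m * v    # relativistic momentum

# Fractional error from using Newton at highway speed: of order 1e-14,
# utterly negligible, which is why Newtonian mechanics remains useful.
print((p_rel - p_newton) / p_rel)
```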
However, back to the dative bond. What is it? Haaland (Angew. Chem. Int. Ed. 1989, 28, 992 – 1007) considered: "The basic characteristics of a dative bond, depicted with an arrow “→”, are its weakness, the substantially longer bond length compared to typical single bonds, and a rather small charge transfer." My personal view is this does not help much. What does "substantially longer bond length compared to typical single bonds" mean? It must be recalled that bond lengths vary, and the dative bond does not have a non-dative counterpart. Both parties to this discussion used the example of borazane (NH3→BH3). Right – what is the length of a non-dative nitrogen-boron single bond free of other complications, including lone-pair interactions? The next question, though, is: if we write it like that, what does the arrow mean? What I was taught as an undergraduate, and it seems reasonable enough, is that a two-electron bond forms using both electrons from the nitrogen lone pair. Part of this discussion then focused on what that means.
A lot of people seem to think that what happens is that the nitrogen transfers an electron to the boron atom, then the two electrons pair. The net result is that the molecule is a zwitterion, with N+ and B- charges on the relevant atoms, and a little subsequent polarization of the hydrogen atoms. That would seem to contradict Haaland, in that such a distribution would give a very strong dipole moment, but note what Frenking says: "Writing ammonia borane H3N-BH3 as a zwitterion yields a negative charge at boron and a positive charge at nitrogen, while the partial charges exhibit the opposite polarity." What exactly does that mean? From what I can make out from a cursory glance at the literature, borazane has a dipole moment of 5.2 D. Now, which way is that likely to go? I cannot see a sufficient electron transfer from boron to nitrogen to get that dipole moment, so it seems reasonable to me to assign the direction of flow from the lone pair of nitrogen towards boron, as the arrow indicates. Accordingly, I find this discussion just a little misleading. However, I also do not feel that the concept of the nitrogen transferring an electron to boron, and the two then pairing, is very helpful either.
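The size of charge transfer implied by a 5.2 D moment is easy to estimate. A sketch, assuming a simple point-charge model and a gas-phase N-B separation of roughly 1.66 Å (the bond length is an assumption pulled in for illustration, not a number from this post):

```python
DEBYE = 3.33564e-30        # C·m per debye
E_CHARGE = 1.602177e-19    # elementary charge, C

mu = 5.2 * DEBYE           # reported dipole moment of borazane
d = 1.66e-10               # assumed N-B separation, m

# Point-charge model: mu = q*d, so the effective charge separated is
q = mu / d
print(q / E_CHARGE)        # roughly 0.65 e shifted from N toward B
```

On this crude model, a full one-electron transfer would give a dipole of roughly 8 D, so the observed 5.2 D sits between the "small charge transfer" picture and the full zwitterion.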
So, how do I see the dative bond? In my picture, the nitrogen atom has a lone pair, and those electrons are described by a wave function whose only barrier is at infinity, while boron, if it hybridizes, can adopt an sp3 configuration with an empty wave function, which I shall describe as a hole. If the nitrogen atom approaches so that the lone-pair wave function is directed towards the hole on the boron atom, the boron atom now provides a barrier to the lone-pair wave function, which might be described as the vacant sp3 orbital "capturing" the lone pair: the boron atom provides a turning point and reduces the range over which the electrons can roam. As the positional uncertainty is lowered, the momentum uncertainty increases, the kinetic energy increases, and by the virial theorem the total energy is lowered. In that picture, the arrow is a good way of describing the bond. The lone-pair mechanics are now determined by both the nitrogen atom and the boron atom, and to maintain the sp3 hybridization the lone pair has to spend more time away from the nitrogen atom, hence the high dipole moment. Note that this is more valence-bond thinking than molecular-orbital thinking. As to why I put this here, apart from highlighting the debate, the thinking behind this last model, which is essentially that the dative bond forms as a consequence of the change to the boundary conditions applied to a lone pair, helps me; whether it helps anyone else is, I suppose, a more interesting question.
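The virial bookkeeping invoked above can be illustrated with a minimal variational sketch (my illustration of the energy accounting, not a dative-bond calculation): for a hydrogenic 1s trial function exp(-z*r) in atomic units, the expectation values are T = z^2/2 and V = -z, and at the variational minimum E = -T, so a change that raises the equilibrium kinetic energy lowers the total energy.

```python
# Variational sketch of the virial argument for a hydrogenic 1s electron
# (atomic units). For a scaled trial function exp(-z*r):
#   <T> = z**2 / 2,   <V> = -z,   E = <T> + <V>
# At the variational minimum the virial theorem E = -<T> holds, so any
# tightening of the wavefunction that raises <T> at equilibrium lowers E.
def energies(z):
    T = z**2 / 2.0   # kinetic energy expectation
    V = -z           # potential energy expectation
    return T, V, T + V

# Scan the scale parameter and locate the energy minimum.
zs = [0.5 + 0.01 * i for i in range(101)]        # z from 0.5 to 1.5
E_min_z = min(zs, key=lambda z: energies(z)[2])  # optimal exponent

T, V, E = energies(E_min_z)
print(E_min_z, T, V, E)   # ~1.0, 0.5, -1.0, -0.5: note E = -T at the minimum
```

The sketch only shows the virial relation itself; whether the boron "turning point" raises the lone pair's kinetic energy in this way is, of course, the model under discussion.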
Posted by Ian Miller on Nov 9, 2014 8:59 PM GMT
Recently, there have been two themes regarding the Moon's origin. Some unexpected but now well-known results from the samples returned by Apollo included:
1. the rocks were remarkably dry,
2. the isotopes of oxygen and some other elements were essentially the same as Earth's, whereas these ratios differ in other samples, such as chondrites, and on Mars,
3. there was considerable anorthosite, which is a feldspathic mineral, present. Earth is the only planet that we know of that has extensive feldspar. (Mars has a limited amount of plagioclase, but no known extensive granite. Venus may have two granitic cratons, Ishtar and Aphrodite Terra, but we have no means of knowing.)
This information was most easily accommodated by postulating a Mars-sized impactor, Theia, colliding with Earth and sending up massive amounts of silicate vapour, from which the Moon condensed. Various computations have shown this was possible: it explained the dryness, and, provided the bulk of the mass came from Earth, it explained the isotope ratios and the composition, so it became the established theory. Because the condensate came from Earth's surface, radioactivity levels would be low, which explains why the Moon has been essentially dead for a long time. The major activity has been considered to have involved massive impacts during the so-called late bombardment. There was always one problem: in detail, most impactor computations require much of the Moon to have come from Theia, in which case Theia, coming from somewhere else, should have a different isotope and mineral composition. Also, the relative velocity of Theia on impact should not significantly exceed the escape velocity, which means that, at a distance where Earth's gravitational field becomes insignificant, it should have had little excess energy. There is one largely overlooked option, from Belbruno and Gott, that I prefer: Theia accreted at one of the Lagrange points of the Earth-Sun system. If we assume the isotope composition of the accretion material depends on radial distance from the Sun, then the compositional similarity follows automatically, while there is no problem with the collision energy. If the theory outlined in my Planetary Formation and Biogenesis is correct, maximum rates of initial accretion for rocky planets occurred at Earth's distance from the star, and there would be enhanced accretion of calcium aluminosilicates (because, acting as cements, they caused the accretion), which would explain, indeed require, the enhanced anorthosite.
The standard picture is now starting to show signs of wear. First, the Moon did not die quite so rapidly: certain small volcanic areas on the Moon appear to have had eruptions within the last 100 My, and possibly within the last 50 My. Further, while the Procellarum region has been interpreted as an ancient impact basin of approximately 3,200 km diameter, gravity anomalies show that the region is essentially a massive lava outflow, consistent with the higher concentrations of the heat-producing elements uranium, thorium and potassium in the rock. These elements are readily concentrated if the body has melted, because they tend to be the last to crystallize out. But that requires fluid, such as a magma ocean. Even if the impact did not generate a vapour, a magma ocean remains very probable, and it also favours the formation of the aluminous crust, as plagioclase would float on basalt. (Interestingly, one review noted that plagioclase only floats on dry magma; where this came from is unclear, because basalts usually have densities at least about 0.7 g/cm3 greater than plagioclase.)
Whether the Moon condensed from vapours is unclear. There is a lack of fractionation among refractory elements, which is strong evidence that the Moon did not form by condensation of vapours. The Moon is depleted in volatile elements such as potassium, which is generally considered to indicate that there were vapours, but it turns out that the isotopes of potassium are the same as on Earth and other solar system bodies, which counts against vapour condensation; even more suggestively, lithium isotopes are also the same as on Earth. Thus it is generally concluded that the Moon carries little trace of the impacting body, even though models show the impactor should make a significant contribution to the putative proto-lunar disk. To summarize, the formation of the Moon requires a highly energetic origin; the Moon carries the elemental signature of Earth, yet is depleted in volatile elements and water. Of course, if Theia accreted at the Lagrange point, the resultant collision would still have been energetic, though perhaps not quite as energetic. There may have been sufficient energy to cause extensive dehydration and moderate loss of potassium, but essentially as a single event, which would not lead to significant isotope fractionation, as opposed to equilibration between vapour and liquid, which would.
Andrews-Hanna, J. C and 13 others. 2014. Structure and evolution of the lunar Procellarum region as revealed by GRAIL gravity data. Nature 514: 68 – 71.
Belbruno, E., Gott, J.R. 2005. Where did the Moon come from? Astron. J. 129: 1724–1745.
Braden, S. E., and 5 others, 2014. Evidence for basaltic volcanism on the Moon within the past 100 million years. Nature Geoscience, doi:10.1038/ngeo2252
Taylor, S. R. 2014. The Moon re-examined. Geochim Cosmochim. Acta 141: 670–676.
Posted by Ian Miller on Oct 26, 2014 10:00 PM GMT
One theme I have raised more than once in these posts is that while scientists are very good at collecting information and measuring things, there remains the problem of interpreting what the results mean. Scientific theory is based on either propositions or statements. A proposition takes one of two forms:
(1)  If theory P is true, then you will observe A
(2)  If and only if theory P is true, then you will observe A
Failure to observe A falsifies either proposition, but if you observe A, all you can say about (1) is that the theory is still in play. As Aristotle noted over two millennia ago, observing A can only prove P if (2) applies, and it is the "only" condition that is difficult to validate. A statement (and an equation is a statement) carries the implied proposition that it is true.
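The asymmetry between the two forms can be checked mechanically; here is a small, purely illustrative truth-table sketch:

```python
from itertools import product

# Enumerate truth assignments for theory P and observation A under the two
# proposition forms: "implies" is form (1), P -> A; "iff" is form (2), P <-> A.
implies = lambda p, a: (not p) or a
iff     = lambda p, a: p == a

# Which values of P remain possible once A is observed (A = True)?
consistent_implies = [p for p, a in product([True, False], repeat=2)
                      if a and implies(p, a)]
consistent_iff     = [p for p, a in product([True, False], repeat=2)
                      if a and iff(p, a)]

print(consistent_implies)  # [True, False]: observing A leaves P undecided
print(consistent_iff)      # [True]: only the biconditional lets A prove P
```

Failure to observe A (A = False) leaves only P = False consistent with either form, which is why falsification works for both.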
What brought this thought on was a paper (Science 345: 1590 – 1593) that has had quite some publicity, even in the public news media. It claims that at least some of the water we have is older than the solar system. What does that mean? First, it was deuterium/hydrogen ratios that were measured. We also note the authors were astrophysicists, and I quote: "our emphasis is on the physical mechanism necessary for D/H enrichment: ionization." As stated, that is an "only" statement, and I consider the "only" condition unjustified. Before getting to that, recall that all hydrogen and deuterium was made in the Big Bang, and all oxygen atoms were made in supernovae. Water is made in space by oxygen and hydrogen reacting, usually on dust. Deuterium enrichment can arise because the O – D bond is stronger than the O – H bond, mainly because the latter has the larger zero-point energy, so any process that breaks an O – H bond, particularly one with only just enough energy to do so, may increase the D/H ratio in what remains. Enrichment also arises through sublimation equilibria of ices in space: heavier molecules sublime slightly less readily, and under equilibrium conditions they become enriched. Under these conditions the D/H ratio of the water as a whole remains constant, and if the ice gets enriched in deuterium, the vapour becomes depleted.
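The zero-point-energy argument can be put in rough numbers with the harmonic-oscillator isotope shift (an illustrative estimate of mine, assuming a typical O-H stretch near 3700 cm^-1):

```python
import math

# Harmonic-oscillator estimate of why the O-D bond is stronger than O-H:
# the stretch frequency scales as 1/sqrt(reduced mass), so the heavier
# isotopologue sits lower in the well (smaller zero-point energy).
nu_OH = 3700.0                       # assumed O-H stretch, cm^-1
mu_OH = 16.0 * 1.0 / (16.0 + 1.0)    # reduced mass of O-H, amu
mu_OD = 16.0 * 2.0 / (16.0 + 2.0)    # reduced mass of O-D, amu

nu_OD = nu_OH * math.sqrt(mu_OH / mu_OD)   # shifted O-D stretch, cm^-1
zpe_gap_cm = 0.5 * (nu_OH - nu_OD)         # ZPE difference, cm^-1
zpe_gap_kJ = zpe_gap_cm * 11.9627e-3       # 1 cm^-1 ~ 0.0119627 kJ/mol

print(round(nu_OD), round(zpe_gap_kJ, 1))  # 2692 cm^-1, ~6.0 kJ/mol
```

A few kJ/mol is small against the bond energy but large enough to bias bond-breaking processes near threshold, which is the point of the enrichment argument.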
They note that the highest levels of D/H in water occur in interstellar ices, and that Earth's oceans show a significant deuterium enhancement over solar hydrogen levels, similar to that of Jupiter-family comets and a little less than that of interstellar water. They then model what they believe happened in the solar accretion disk and find that disk physics/chemistry alone cannot produce the observed enhancement over solar deuterium levels. They conclude that either ~14% or up to 100% of cometary ice is accreted interstellar ice, and that either ~7% or up to 30 – 50% of Earth's oceans originated as interstellar ices. Why the "either" options? Largely because while they have a ratio for interstellar ices, they also have a water signal from the disk of a protostar; in short, they believe the nature of the original water may vary from star to star. However, that is irrelevant to their claim that our water predates the formation of our solar system. They then conclude that, provided the formation of our solar nebula was typical, interstellar ices from the molecular cloud core should be available to all young planetary systems.
The last conclusion seems obvious. If there is water ice in the cloud, which would be expected as long as carbon does not consume all the oxygen, then the ice should persist at least into the outer parts of the accretion disk, and indeed my theory of the formation of the gas giants relies on this being so, so in one sense the paper supports my theory. On the other hand, given that there were water ices in the cloud, what could possibly happen to them before they reached the ice sublimation temperature? The disk is opaque, so while the star is forming, ionizing radiation should be absorbed much closer to the star. Here they seem to have overlooked that there are three important hydrogen reservoirs: interstellar ice, interstellar water vapour, and hydrogen gas. The last is about four orders of magnitude more abundant than anything else, and determines the initial deuterium level in the star; nuclear burning then decreases the stellar deuterium level.
However, the conclusion that Earth's water reflects the deuterium content of the water as it was accreted is an "only" statement, and it is not true. There is a further possible mechanism: as water travels through hot rock, and current volcanism shows that it does, it may oxidize reduced species and in many cases liberate hydrogen, which may then escape to space. Such reactions involve breaking the O-H bond and are therefore subject to the chemical isotope effect, with O-D bonds reacting significantly more slowly, which in turn leads to deuterium enrichment. That is my explanation for the Venusian atmosphere, where there is a hundred-fold enrichment of deuterium (Science 216: 630 – 633); the reactions include water reacting with carbon or carbides as the original source of the carbon dioxide in the Venusian atmosphere. As I showed in Planetary Formation and Biogenesis by reviewing a number of papers, either the gases were emitted from the Earth, in which case they had to be accreted as solids, or they were delivered from space; but if the latter were the case, each rocky planet had to be struck by completely different types of bodies, and the Moon, quite remarkably, struck by only trivial amounts of any of the volatile-containing bodies. Note that most asteroidal bodies contain negligible volatiles.
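How much hydrogen loss a hundred-fold enrichment implies can be sketched with standard Rayleigh fractionation (the fractionation factors below are purely illustrative assumptions, not measured Venus values):

```python
# Rayleigh fractionation sketch: if hydrogen escapes with a constant
# fractionation factor alpha = (D/H of escaping gas)/(D/H of reservoir),
# the reservoir's ratio evolves as R/R0 = f**(alpha - 1), where f is the
# fraction of hydrogen remaining.
def fraction_remaining(enrichment, alpha):
    """Fraction of the original hydrogen left once the reservoir's
    D/H ratio has risen by the given enrichment factor."""
    return enrichment ** (1.0 / (alpha - 1.0))

# Hundredfold enrichment (the Venus observation) for a few assumed alphas:
for alpha in (0.1, 0.5, 0.9):
    f = fraction_remaining(100.0, alpha)
    print(alpha, f)   # the weaker the fractionation, the more must be lost
```

For alpha = 0.5, for example, only one part in ten thousand of the original hydrogen survives, which is why a hundred-fold enrichment is taken to imply massive water loss.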
So what do I make of this? Of course water arrived from interstellar space, and this work at least supports my concept of ice accretion. On the other hand, the presence of ices in the disk is generally held to be the reason why the giants form, so in another sense this paper simply supports what was long assumed. I am not convinced it warranted the media attention it received.
Posted by Ian Miller on Oct 13, 2014 1:57 AM BST
There are a number of looming problems, one of which is the climatic effect of the so-called greenhouse gases. Science should be able to address such problems, but when a discovery is made the question arises: is this a solution to the designated problem, could it be a solution if some further problems can be overcome, or is it simply an interesting observation that is essentially irrelevant to solving any of our problems? With the difficulty of getting funding for science, "relevance" often becomes an issue. Accordingly, funding applications frequently make significant claims as to what the research might achieve, and there are advantages in carrying this over into the subsequent papers. Of course, some of these papers may truly herald an opportunity. So, what do you make of the following?
Ammonia is an important chemical for fertilizer, and is usually made through the Haber-Bosch process, in which nitrogen gas is reacted with hydrogen under pressure, the hydrogen being made by steam reforming of methane obtained from natural gas. The oxygen from the steam eventually ends up as carbon dioxide, so the process contributes to the greenhouse effect. However, a new process has been claimed (Science 345: 638 – 640) that involves electrolysis of air and steam in a pressurized molten hydroxide suspension of nano-sized Fe2O3, at temperatures of 200 – 250 °C. The process converts nitrogen to ammonia with a coulombic efficiency of apparently 35%, the other 65% of the current producing hydrogen, which would itself remain a marketable product. The chemistry is interesting. Iron/iron oxide is a catalyst for the Haber-Bosch process, but that process uses pressures considerably higher than would be found in this reaction. That comparison is probably irrelevant anyway: when standard iron oxide was ball-milled, the reaction did not go, so the nano-sizing is important. The question then is, is this a solution to a problem or merely an interesting side-issue? That leaves open the question of how likely it is that this reaction will scale up successfully, and if it does, run successfully.
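The coulombic arithmetic behind that 35% figure can be sketched as follows (my back-of-envelope numbers, assuming the usual three-electron reduction to ammonia and an illustrative one-hour run at the low-current condition):

```python
# Faradaic bookkeeping for the reported electrolysis (my arithmetic, not
# the paper's): N2 + 6 H2O + 6 e- -> 2 NH3 + 6 OH-, i.e. 3 electrons
# per NH3, with ~35% of the charge going to ammonia.
F = 96485.0          # Faraday constant, C/mol
efficiency = 0.35    # reported coulombic efficiency for NH3
current = 0.020      # 20 mA, the low-current case, in A
hours = 1.0          # assumed run time, for illustration only

charge = current * hours * 3600.0        # coulombs passed
n_NH3 = charge * efficiency / (3.0 * F)  # mol NH3 produced
print(n_NH3)                             # ~8.7e-5 mol NH3 per hour at 20 mA
```

That is of the order of a milligram or two of ammonia per hour, which is why the scale-up question dominates everything else here.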
The first problem I could see is that the efficiency drops off at higher current: the efficiency of one synthesis was >30% at 20 mA, but only ~7% at 250 mA. The suggestion was that the conversion is limited by the available area of nano-Fe2O3, which may or may not be fixable during scale-up. From the chemical point of view, the nanoparticles were dispersed throughout the solution, but electron transfer would presumably occur at the electrodes, which raises the question: exactly what are the nanoparticles doing? The electrodes were nickel, so they should not be a problem for scale-up, but the required area might be. The production rate was of the order of 7 x 10^-9 mol NH3 per second per square centimetre, which would require a very large electrode area to produce 1 t/hr; that is hardly a rate to get excited about. The requirement for nano-sized Fe2O3 would also worry me, because Fe2O3 slowly dissolves in hot sodium hydroxide solution to make sodium ferrite. This was not mentioned in the article, although the authors did find conditions that stabilized production for six hours. (Indeed, it may not be beyond the bounds of possibility that sodium ferrite is the actual catalyst, as nano-sized Fe2O3 might well be more reactive than the bulk oxide. That is another aspect that at least needs answering.) Is this possibly a commercial process? My guess is no, at this stage at least, but it does provide an interesting new opportunity for research. If the current density could be raised significantly, then perhaps there is something here.
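The electrode-area estimate behind that remark can be made explicit (my arithmetic, using the reported rate):

```python
# Back-of-envelope scale-up estimate (my arithmetic): the electrode area
# the reported rate implies for 1 tonne of NH3 per hour.
rate = 7e-9                  # mol NH3 / s / cm^2, as reported
M_NH3 = 17.03                # molar mass of NH3, g/mol
target = 1.0e6 / M_NH3       # mol NH3 in one tonne
mol_per_s = target / 3600.0  # required production rate, mol/s

area_cm2 = mol_per_s / rate  # electrode area needed, cm^2
area_m2 = area_cm2 / 1.0e4
print(round(area_m2))        # ~233,000 m^2 of electrode, roughly 23 hectares
```

Hundreds of thousands of square metres of electrode per production train puts the current rate several orders of magnitude short of industrial relevance, which is the point being made above.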
Would that help solve the greenhouse problem? In my view no, since this electricity would be marginal production, unless we find a way to make electricity that totally displaces fossil fuels. Nevertheless, the production of ammonia is required to address the food problem. However, if we really want to do something about global warming through ammonia usage, a good place to start would be to make nitrogen fertilizers more efficient. A very large amount of fertilizer nitrogen finds its way into N2O, presumably through the decomposition of ammonium nitrite. Accordingly, there is plenty of work remaining for further research. The question then is, how to fund it? Unfortunately, the scientist's first duty is to obtain funding, which encourages flag-waving in papers.
Posted by Ian Miller on Sep 29, 2014 3:17 AM BST
The question I am now posing concerns how scientific papers should be presented when the author faces a dilemma. On one hand, the author wants to present the work as leading to a specific application; on the other hand, the information may have more general use. The first point is obviously desirable if the proposed use makes sense, and even if it does not, it may still make sense when reporting to funding agencies. The second point involves the dissemination of knowledge: if the work is presented in one way, it may never be seen by others for whom it would be more useful. The huge output of scientific papers means that nobody can read more than a tiny fraction, so everybody has to apply some form of very coarse screening, otherwise they would never get anything done.
These thoughts were started, for me, by a recent paper (Angew. Chem. Int. Ed. 53: 9755 – 9760) that claimed to offer an interesting approach to biofuel production, though I feel the more interesting aspect is the implied underpinning chemistry. The basic process involves three reactions starting from molecules such as furfural and hydroxymethylfurfural, which are acid degradation products of carbohydrates. Furfural is readily obtained from pentoses because it steam-distils out of a reaction in which carbohydrates are acid-hydrolysed at higher temperatures, but hydroxymethylfurfural does not, and instead degrades further. It can be isolated, but at a cost, and in only moderate yield. So, before we go much further, this paper has questionable direct applicability, because it involves relatively expensive starting materials that represent only part of the initial resource.
But it is what happens next that is of interest. The authors carry out an aldol condensation of the furfurals with acetone, thus getting C8, C9, C10, C16, etc. materials; furfural contributes the furan ring, and the condensation gives the unsaturated ketone. These are then reacted at elevated temperature and pressure with NbOPO4 in the presence of hydrogen and a Pd catalyst. The interesting part is that the NbOPO4 has the ability to pull out the oxygens, including the furan ring oxygen and the ketonic oxygen (although the latter may be a dehydration once the carbon-carbon double bond has been hydrogenated), with the result that we end up with linear hydrocarbons.
The niobium phosphate gives a 94% yield of hydrocarbons, whereas aluminium phosphate gives none; the palladium merely catalyses the hydrogenation of the double bonds. Actually, the phosphate is not that important, as Nb2O5 gives the same yield of hydrocarbons. According to the authors, what happens is that bulk Nb – O – Nb groups break, permitting an Nb – O – C bond to form, whereupon a nearby hydrogen atom can transfer to the carbon atom.
The question then is, what use is this to biofuels? Superficially, not much, because the problem of obtaining the furans probably makes it uneconomic. Not only that, but while the C16 hydrocarbons would make excellent diesel, linear C8 hydrocarbons are not at all attractive as fuels: they lie in the petrol range and have an octane number approaching zero. What I would find more interesting, though, is how this catalytic system would perform with lignin, or lignin-derived smaller molecules. While lignin polymerization has essentially no regular pattern, many of the linkages occur through C – O – C bonds; if these could be broken with hydrogen, and the methoxyl groups removed, it might be a breakthrough in biofuel development. So why did these authors not try their reaction on lignocellulose to see what would happen? Perhaps they did, and perhaps there are more papers coming, but I do not feel that approach is constructive. We need the fewest papers consistent with getting all the information across, so as to reduce the deluge.
Posted by Ian Miller on Sep 14, 2014 10:43 PM BST