Is science sometimes in danger of getting tunnel vision? Recently published ebook author, Ian Miller, looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


By the title, I mean, what sort of body is it, and how did it form? Ceres is the largest body in the asteroid belt, and it is essentially spherical, from gravitational energy minimization. It lies at a distance at which the remaining bodies are mainly carbonaceous asteroids, which are made of rock with some water and organic material. It should be noted that the part of the asteroid belt closest to the star contains mainly silicaceous asteroids, so an interesting question is, how did these different bodies form? The issue is made more complicated because there are also some, such as Vesta, that appear to have an iron core. To get an iron core, the temperature of the body had to get above 1538 °C, yet the evidence from meteorites is that the carbonaceous bodies never got above ~200 °C. How did all this happen?
 
In my Planetary Formation and Biogenesis, I supported the hypothesis that Vesta and one or two other bodies really formed much closer to the star, were moved out by gravitational interactions, and had their orbits circularized where they are now. If that is right, that gets rid of that problem. Setting Vesta and similar asteroids aside, that still leaves the question of why there are two major classes of asteroids that are quite different. Within my interpretation, planetary formation starts basically through chemistry, and the bodies stick together initially through chemical (including physical chemical) interactions. As for these asteroids, I suggested they formed by different methods, and consequently should have different chemical compositions. In particular, the carbonaceous ones formed as or after the accretion disk cooled down. The concept was that at the higher temperatures, organic materials such as methanol, known to be in the disk, pyrolysed on silica particles and formed tarry material, and later this tarry material permitted bodies to stick together. One possible reason why the bodies are so small is that the tar would only be sticky over a modest range of temperatures. The net result of this is that, based on meteorite samples, the carbonaceous asteroids tend to be black, and have various small rock crystals distributed through them. Accordingly, if you break such a meteorite, the interior remains black, or at least dark coloured.
 
The puzzle, for the moment, is that the space vehicle Dawn has observed bright spots on Ceres' surface. These are quite white, in amongst the otherwise depressing grey-black. Their nature is not that difficult to explain: in my opinion, they most likely comprise exposed ice. The problem then is, how does ice get there? Carbonaceous chondrites have between 3 and 22% water in them, but this water level may be inflated because the rocks have been lying around on Earth for some time before being picked up. The density of Ceres relative to water is 2.17, which means that in the absence of an iron core, the composition is richer in silicates than anything else. (Granite and silica tend to have densities in the order of 2.5, while olivines/pyroxenes will be in the order of 3.3.) One possibility is that during differentiation the ice melted and accumulated, like lava, as deposits. But if that were the case, why would the impact not simply remelt it and mix it with the silicates? That would leave, at best, very dirty ice.
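As a rough check, the quoted densities let one estimate how much ice a body of Ceres' bulk density could hold. This is my own back-of-the-envelope sketch, not from any formal model; it assumes a simple two-component ice/rock mixture and an ice density of 0.93:

```python
# Two-component mixture: volumes add, so for ice mass fraction x,
#   1/rho_mix = x/rho_ice + (1 - x)/rho_rock
rho_ceres = 2.17  # bulk density of Ceres relative to water, as quoted above
rho_ice = 0.93    # water ice (assumed value)
rho_rock = 3.3    # olivine/pyroxene figure quoted above

# Solve for the ice mass fraction x:
x_ice = (1 / rho_ceres - 1 / rho_rock) * rho_ice * rho_rock / (rho_rock - rho_ice)
print(f"ice mass fraction ~ {x_ice:.2f}")  # roughly 0.20
```

On these assumed densities, Ceres could carry roughly 20% ice by mass if the rock is pyroxene-like; a lighter, granite-like rock density would push the allowed ice fraction lower still, consistent with silicates dominating.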
 
The white spots appear to be within craters, so it is possible that the impacts have melted water deeper below, and that subsequently water has flowed out and solidified. For the spots to be bright, the water must not be dirty, which suggests there were richer deposits of ice at some depth below the surface; after impact, the pressure of steam cleared a pipe through the rock, and later the residual water flowed to the surface. So, what are the options? As I see them, Ceres may be an abnormally large carbonaceous asteroid whose water has been mobilized by impact. The other possibility is that Ceres started life in the Jovian accretion zone and was thrown inwards, picking up more dust on the way. This assumes it started life a bit like Ganymede/Callisto (densities between 1.83 and 1.93), and gained more dust and silicates on its surface. My guess is it started life in the Jovian region, because that is the easiest way for it to get so big. If this is so, Ceres is not a typical body within the carbonaceous asteroid distribution, and Dawn will add no more information as to their formation. What remains to be seen is what information Dawn can gain.
 
Assuming all goes well, I shall add a photo of such spots and you can form your own opinion. The photo is, of course, due to NASA. What do you think?


Posted by Ian Miller on Jun 7, 2015 11:46 PM BST
In two previous posts, I have mentioned two of the seven sins of academics in the natural sciences mentioned in an article by van Gunsteren (Angew. Chem. Int. Ed. 52: 118–122). The third sin was insufficient connection between data and hypothesis, or over-interpretation of data. My personal view is this is not a sin at all, as long as you are honest about what you are doing. Perhaps the best-known example is that of Kepler. Strictly speaking, his data were not really sufficiently robust to justify his laws, but Kepler decided (correctly) that the planets should follow some sort of function, and the ellipse fitted the data better than anything else. Similarly, in one sense it was an act of faith for Newton to accept Kepler's laws as laws, but look what came from it. My view is that, as long as you are honest, there is no harm in drawing a conclusion from data that does not fully support it, as long as it is clear what you are doing, and as long as the conclusion is not put to a critical use. Thus, if considering whether something is safe, then even if the data do not prove safety, it does not hurt to hypothesise that it could be safe, as the hypothesis takes everyone forward, but only if it is clear that it is a hypothesis.
 
The next sin mentioned is the reporting of only favourable results. Here I am in total agreement. If some result does not support your hypothesis, you should investigate it thoroughly, and if it persists, you should not only report it, but also confess that the hypothesis is wrong as stated. To me, it is a sin, albeit a less serious one, to report the data and make no comment on it. Stating that it is unexpected, or stating it and ending the sentence with an exclamation mark, is not adequate. The reason is that in logic, ONE observation that cannot be explained by a theory is sufficient to falsify it.
 
Another sin mentioned was the neglect of errors found after publication. If the error is in the reporting of the data, such as a spectral peak listed in the wrong place, obviously this should be reported. However, I am less sure about corrections that do not make a significant difference and do not conclude the matter. In my opinion, it is almost as big a sin to put out a sequence of papers on the same subject, each with a conclusion that moves around a little from paper to paper. If the first conclusion is near enough, in my opinion there should be no corrections until the author is convinced the subject is sorted. There is far too much in the literature already, without salting and peppering it with minor variations, none of which significantly improve the issue.
 
The remaining sins listed were plagiarism and the direct fabrication of data. I agree these are bad sins, but do they actually happen? I have heard there are examples from students, but surely this is as much the fault of supervisors. I would hope that professional scientists would never even think of this. As far as I know, I have never run across an example of either of these. Have you?

 
I realize these opinions might be controversial, but so what? I hope it does stimulate discussion. I also think the list given in this article is incomplete, and I feel there are more sins that are equally bad (except possibly for the last two). More on them some other time.
Posted by Ian Miller on May 25, 2015 4:54 AM BST
In January of this year I started a series of posts based on an article in Angew. Chem. Int. Ed. 52: 118–122, where van Gunsteren mentioned the seven deadly sins of chemists. I commented on the first one (inadequate descriptions of methodology), inspired in part by an example that held up progress on my PhD, when an eminent chemist left out a very critical piece of the experimental methodology and I was not smart enough to pick it, but then I got distracted by a series of what I thought were important announcements, coupled with one or two things that were happening in my life.
 
The second sin was "Failure to perform obvious, cheap tests that could repudiate or confirm a model, theory or measurement." The defence, of course, is that the experimenter did not think of it, and I am far from thinking that one should blame an experimenter for failing to do the "obvious". The problem with "obvious" is that it is always so when pointed out in retrospect, but far from it at the time. Nevertheless, late in my career I have an example that is a nuisance, and in this case it is not even chemistry, but rather physics. My attempts at understanding the chemical bond, and, for that matter, some relationships I found relating to atomic orbitals (I. J. Miller, 1987. Aust. J. Phys. 40: 329–346), led me to an alternative interpretation of quantum mechanics. It is a little like de Broglie's pilot wave, except in this case I assume there are only physical consequences when the wave is real, which, for a travelling wave, from Euler, is once per period. (Twice for certain stationary states.) As with the Schrödinger equation, the wave here is fully deterministic. (For the Schrödinger equation, if you know ψ for any given set of conditions, you know ψ for any changed conditions, hence the determinism. The position of the particle is NOT deterministic. The momentum is, in as much as it is conserved, but not at a specific point in space.) Now, my interpretation of quantum mechanics has a serious disagreement with standard QM in terms of the delayed quantum eraser. Let me explain the experiment, details of which can be found at Phys. Rev. Lett. 84: 1–5.
 
But first, for those who do not know of it, the two-slit experiment. Suppose you fire electrons at two slits spaced appropriately. On the screen behind, eventually you get a diffraction pattern. Now, suppose on the other side, you shine light on the slits. As each electron emerges from a slit (an electron only goes through one slit), it scintillates, and you know through which slit the electron passed. However, now the diffraction pattern disappears, and the resultant pattern is of two strips; if the photomultiplier can assign the signal to a specific electron (requiring low intensity), then it is shown that a given strip is specific to a given slit. Standard quantum mechanics states that it is because you know the passage that there is no diffraction. By knowing the path, you have converted the experiment into a particle experiment, and all wave characteristics are lost. You can know particle properties or wave properties, but not both.
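The contrast between the two outcomes can be sketched numerically. This is a toy calculation of my own, not the actual experiment: the slits are idealized as two point sources of equal amplitude, in arbitrary units. With no which-path knowledge the amplitudes add coherently and fringes appear; with which-path knowledge the intensities add and the fringes vanish.

```python
import numpy as np

# Toy far-field model of two slits as point sources, each with amplitude 1/2.
wavelength = 1.0                      # arbitrary units (assumed)
d = 5.0                               # slit separation (assumed)
theta = np.linspace(-0.5, 0.5, 1001)  # viewing angle, radians
delta = 2 * np.pi * d * np.sin(theta) / wavelength  # path-difference phase

# Coherent sum (no which-path knowledge): |a + a*exp(i*delta)|^2 with a = 1/2
I_wave = 0.5 * (1 + np.cos(delta))    # = cos^2(delta/2): fringes from 0 to 1
# Incoherent sum (which-path known): intensities add, flat pattern
I_particle = np.full_like(theta, 0.5)

print(I_wave.max(), I_wave.min(), I_particle.max())
```

The coherent pattern oscillates between 0 and 1 while the incoherent one sits flat at 0.5; both deliver the same total intensity, which is the point — only the distribution across the screen changes.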
 
Now, this experiment starts the same way, but at the back of the slits there are two down converters, each of which turns a given photon into two photons of half energy. One of these, called the signal photon, goes to the photomultiplier, while the other, called an idler photon, sets off on a separate path from each down converter, so at this point there are two streams that define which slit the photon went through. Accordingly, by recording the signal photons paired to one of these streams, it is known which path the signal photon took, and there should be no diffraction pattern if standard quantum mechanics is correct on this issue. What was actually done was that each stream was directed at a beam splitter, so that half of each stream of idler photons went to a separate photomultiplier, and when the paired signal photons were studied, there was no diffraction pattern. If, on the other hand, the other half went to two further beam splitters such that the beams were mixed, and knowledge of which slit the parent photon went through was lost, the paired signal photons gave a diffraction pattern. Weirder still, the path lengths were such that what the idler photons did occurred after the signal photons had been recorded, i.e. the diffraction pattern either occurred or did not occur depending on a future event.
 
So where is the sin? Do you see what should have been done? The alternative explanation may seem a bit hard to swallow, but is it harder than believing the photons would give a diffraction pattern or not depending totally on what was going to happen in the future? Remember, the idler photons could have been sent to Alpha Centauri to do the critical mix/not mix, and the theory states clearly that the signal photons will, er, what? Rearrange the records eight years later if the physicist does something different at the other end?
 
What I would have liked to see was one stream of the idler photons heading to the mixing being blocked. The theory is that, in the down converter, it is possible that only one of the photons carried diffraction information, and that would go equally to signal or idler photons by chance. However, the next beam splitter could split idler photons not by chance but by whether they carried diffraction information, or appropriate polarization. The difference is that the separation is causal, and nothing to do with what the experimenter knows. If the partners of these two streams of idler photons heading to the mixing step carry the diffraction information, cutting out one of those streams will merely delete half of the information (because only half the signal photons are now counted) if the patterns arise deterministically (and recall that in terms of wave properties the Schrödinger equation is deterministic). If the experimenter's knowledge is critical, then the diffraction pattern will go, because the experimenter knows which path the photons have taken.
 
The point is, if physicists over the last decade have not commented on this, then maybe it is not that obvious. Maybe it is not a sin not to do the "obvious", because it is seldom obvious at the time. Hindsight is great, but if you did not see the sin before I told you, maybe you will be more generous when others appear to have sinned.
Posted by Ian Miller on Apr 21, 2015 4:37 AM BST
Ever wondered why planets rotate the way they do? All the outer ones appear to have prograde rotation, i.e. they rotate in the direction as if they were rolling along. However, Mercury and Venus are exceptions. Mercury has a very slow rotation that is explained by it being in a tidal resonance with the sun, so that is no mystery, but Venus rotates slowly, and the wrong way. Most people have viewed this in terms of the standard theory of planetary accretion, where the central body gets hit by a large number of planetesimals, or even larger bodies, from random directions, and the resultant spin is a result of preferential strikes. Earth may well have included this effect when it was struck by Theia to form the Moon; in this case the Moon's orbit also takes up angular momentum from the collision. Venus has no moon and it spins slowly, so, the theory went, it was just unlucky and got hit the wrong way at the end by something big. But if that were the case, why no satellite?
 
There was a recent paper in Science (346: 632–635) that put a different picture on this. If the planet has an atmosphere, atmospheric temperatures oscillate between night and day, which creates large-scale mass redistribution within the atmosphere, the so-called thermal tides. The retrograde motion occurs because the hottest part of the day is a few hours after midday, due to the thermal inertia of the ground. Because of this asymmetry in atmospheric mass redistribution, the stellar gravity exerts a non-zero torque on the atmosphere, and through frictional coupling, the spin of the planet is modified. This is why Venus has retrograde spin. Atmospheric modelling then showed that the resultant torques for a planet in the Venusian position with a 1 bar atmosphere are an order of magnitude stronger than for Venus, mainly because Venus's very thick atmosphere scatters or absorbs most of the sunlight before it reaches the surface. As a consequence, rocky planets in the habitable zone around lower mass stars may well have retrograde rotation.
 
During these posts, the reader may have noticed that I sometimes view computer models with scepticism. Here are two examples that illustrate why. The first is from Planet. Space Sci. 105: 133–147, where two models were made of atmospheric precipitation on Mars ca 3.8 Gy BP. The valley network analysis suggests an average of 1.5–10.6 mm/d liquid water precipitation, whereas the atmospheric model predicts about 0.001–1 mm/d of snowfall, depending on CO2 partial pressure (which varied from 20 mb to 3 bar in the models) and with global mean temperatures below freezing point. The authors suggest that this shows there was a cold early Mars with episodic snow-melt as a source of the run-off. I rather fancy it shows something is left out of the analysis, i.e. there is something we do not understand, because all the evidence to date makes a persistent 3 bar atmosphere most unlikely, and even then, it only works by a near miss at the extremes. The other came from Icarus 252: 161–174. Here, an extensive suite of terrestrial planet formation simulations showed that the rocky planets have overlapping stochastic feeding zones. Worse, the feeding zone of Theia, the body that formed the Moon, has to be significantly more stochastic than that of Earth, and the probability that the two would have the same isotopic composition is very small, yet the isotopic composition is essentially identical. The authors state there is no scenario for the Moon's origin consistent with its isotopic composition and a high probability event. Why not concede that the premises behind the model are wrong? And there, in my opinion, is the basic problem. Almost nobody goes back and checks initial assumptions once they have been accepted for a reasonable time. And if you do, as I have done for planetary formation, nobody cares. As it happens, each of these is properly accounted for in my Planetary Formation and Biogenesis.
 
There is a clear published model from Belbruno and Gott that would permit the Moon to have the same isotopic ratios as Earth; it assumes Theia accreted at one of the two Lagrange points, L4 or L5 (Astron. J. 129: 1724–1745). (Lagrange points are where the gravitational effects of two major bodies more or less cancel, and a third body can stay at L4 or L5 indefinitely, as long as it does not become big enough to be gravitationally significant. L4 and L5 are actually saddle points, and bodies that fall off the "saddle" experience net forces that pull them back, so they carry out motion about the point. Jupiter's Trojans are examples.) So why did these other authors not cite this model as a possible way out of their problem? One possible reason is they have never heard of it: the model is almost never cited, partly because within the standard model of stochastic accretion of ever larger bodies, nothing could accrete at the Lagrange points, as collisions would knock bodies off them. So now we have a problem. The standard model would not permit the conditions by which one model would explain the observations, but the observations also effectively falsify the standard model. So, what will happen? Because there is no way to have discussions on topics such as these, other than in blogs, the whole issue will be forgotten for some length of time. Progress is held up because the modern way of disseminating information carries so much information that the necessary linking does not always occur.
Posted by Ian Miller on Apr 5, 2015 11:33 PM BST
Two posts ago, I issued two challenges for readers to try their hand at developing theory, and so far I have received a disappointing response. Does nobody care about theory? Anyway, my second question was, why did nature choose ribose? Recall that ribose is not the easiest sugar to make, and in the Butlerov synthesis, under normal conditions essentially no ribose is made. However, that may be misleading, as there are other options. One that appeals is, providing pH 9 or more is reached, silicates dissolve slightly, and catalyse the condensation of glyceraldehyde and glycolaldehyde to form pentoses, and the furanose form is favoured (Lambert et al. 2010. Science 327: 984-986). This strongly favours ribose.
 
However, even if we can find a way to make ribose, it is inconceivable that we can do that without making other sugars, so why did nature choose ribose? One answer is that it is the most suitable, but that begs the question, why? It is certainly not that it alone can lead to duplexes once the strand is made, because it has been shown that strands based on xylopyranoside or arabinopyranoside, or even ribopyranoside, give better duplex binding, and xylose and arabinose are easier to make.
 
I think the answer lies in part in an essentially forgotten paper by Ponnamperuma et al. 1963 (Nature 199: 222–226). What Ponnamperuma et al. did was to take adenine, ribose and phosphate in aqueous solution, then shine hard UV light (wavelength about 250 nm) on it. Products included adenosine and adenosine phosphates, including adenosine tripolyphosphate. This was quite a stunning achievement, but it leaves open the question, why did it work? Before addressing that, however, we might see why this has been forgotten, apart from the issue of who reads literature that predates computer searching. There is a serious flaw in this being the cause of life, and that is that it is almost impossible to conceive of an atmosphere that will remain transparent to such short wavelength UV. For example, water gets photolysed to oxygen, thence to ozone, which screens out the hard UV. If there are reducing materials there, you get a haze like that on Titan, and again the hard UV gets screened out.
 
My recommended way of forming a theory is to ask questions, and in this case the question is, why does light make the phosphate ester? The adenine is clearly absorbing the photon, and one can see that the link between adenine and ribose may be photocatalysed, but what happens next? All bonds in the ribose are σ bonds, so the electronic excitation should not extend into it. The next question is, how can one make phosphate esters? This is slightly easier: if you heat a compound bearing a hydroxyl group with phosphate to about 200 °C, water is eliminated and we get the phosphate ester.
 
This suggests the answer to the problem should lie in radiationless decay of the excited state, where the energy is dissipated through a sequence of vibrational energy levels decaying to the ground state. We now see that a vibrationally excited hydroxyl could form an ester if it had the same kinetic energy as a hydroxyl at 200 °C. If that is the case, we now see why nature chose ribose: the furanose is more flexible, and the 5-hydroxyl on a furanose will behave a little like the end of a whip. Ribose is the only sugar that forms a reasonable fraction of itself in the furanose form in aqueous solution. Now, adenine cannot have been the primary absorber originally, but there is another option: given the appropriate reduced rocks, if the cell wall hydrocarbons contained dissolved porphyrins, or some similar material, the absorption could be through them.
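As a rough plausibility check on the idea that vibrational energy could mimic 200 °C chemistry, one can compare the thermal energy scale at 200 °C with a single O-H stretch quantum. The numbers here are my own, not from the post: 3600 cm⁻¹ is a textbook O-H stretch frequency, assumed for illustration.

```python
# Compare the characteristic thermal energy at 200 degrees C
# with one O-H stretch vibrational quantum.
kB = 8.617e-5            # Boltzmann constant, eV/K
T = 473.15               # 200 degrees C in kelvin
E_thermal = kB * T       # ~0.041 eV

cm1_to_eV = 1.2398e-4    # wavenumber-to-energy conversion factor
E_OH = 3600 * cm1_to_eV  # one O-H stretch quantum (~0.45 eV, assumed frequency)

# One vibrational quantum carries roughly an order of magnitude more energy
# than kT at 200 C, so a vibrationally hot hydroxyl is not short of energy.
print(f"{E_thermal:.3f} eV vs {E_OH:.3f} eV (ratio ~{E_OH / E_thermal:.0f})")
```

On these figures a single stretch quantum dwarfs the thermal energy scale, so energy availability is not the obstacle; whether the energy localizes in the right bond long enough to react is the real question, and that is where experiment would have to decide.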
 
Which brings us to an experiment that could be carried out. Make micelles or vesicles from hydrocarbon alcohols with phosphate esters as the surfactant, with dissolved porphyrin, and ensure the water within contains phosphate, adenine, and a mixture of ribose, xylose and arabinose. The prediction is that adenosine phosphates will be formed, but the xylose and arabinose will not participate in forming phosphate esters. If that is true, it is fairly clear why nature chose ribose: it is the only sugar that works.
 
Thus we have a clear possible explanation, and an experiment that would confirm or falsify it. The question now is, will anyone carry it out?
Posted by Ian Miller on Mar 23, 2015 12:17 AM GMT
So, my theory challenge, with three weeks to think about it, got no responses. Perhaps nobody is reading these posts. Perhaps nobody cares about theory. That would be ugly. Perhaps the problems were too hard. Really? Anyway, first, a review of where science is at the moment: www.ncbi.nlm.nih.gov/pmc/articles/PM2857173/  My argument is that none of this review answers the question, but it does give a very large number of references. Given that there was this much activity that failed, maybe this challenge was unnecessarily hard, but let me give you my proposal on how homochirality occurred.

The way to form theories is to ask questions, and in this case to ask why nature chose to be homochiral, given that it wasted half its resources. Why would not some other life form use both, and gain a competitive advantage? The obvious answer is that nature chose homochirality because it had to, i.e. if it had not become homochiral, there would be no life. Now, most of what life requires does not demand homochirality. Sources of chemicals could in principle be of any chirality, light is not chiral, and energy transport (ATP) depends on the tripolyphosphate; however, there is one part where chirality is critical: reproduction. Reproduction occurs when a strand of nucleic acid allows its complement to form as a second strand, whereupon it forms a duplex (double helix). When the duplex separates later, both single strands can grow further new strands, which in turn can form two new duplexes. Note that the helical nature is imposed by the chirality of C-4 on the ribose. The single strand does not have to form a helix, but the two strands, to be intertwined, must both form a helix with the same pitch.

The second strand does not grow by itself. What must happen is that the second strand forms when the complementary bases, with 5-phosphated ribose attached, form hydrogen bonds with their complementary base on the nucleic acid strand. Each is then loosely attached by the few hydrogen bonds, and either the required 3-hydroxyl is close to a 5'-phosphate or it is not. If it is, then the ester bond can form, given an impulse from somewhere to overcome the activation energy. If the ribose chirality is correct, esters can form; if it is not, the two sites never come close enough, no ester is possible, and the base eventually wanders off; sooner or later the correct chirality will appear and the duplex grows. Think of a nut and bolt: you cannot make this work if every now and again the thread changes from left-handed to right-handed pitch. If there is a wrong chirality on the first strand, no duplex can form either, and the impulse required to bring the groups together is now also the impulse required to unravel the duplex.

RNA strands can form loops held together by magnesium ions, and these emerging ribozymes can act as catalysts, which can hydrolyse exposed RNA strands. It may be that they preferentially solvolyse parts where the pitch changes. Some work is required to validate that piece of speculation; nevertheless, the duplex is at a lower energy than two single strands, so eventually we expect a double helix to form, especially if errors in the chain can be solvolysed.

Once you have a reproducing chiral molecule that can act as a catalyst, then it uses all the resources more effectively than any other option, and when it catalyses syntheses, it synthesises chiral entities. Thus it is RNA that is critical for homochirality; it is the only molecule that can arise naturally, sort itself out, then reproduce. Reproduction ensures that it prevails. Whether it chooses D for sugars and L for amino acids would be pure chance on this interpretation, and it would be predicted that half of alien life would choose the other.

Is that unnecessarily difficult?
Posted by Ian Miller on Mar 9, 2015 12:23 AM GMT
One of the themes I have persisted with in these posts is that in chemistry, thinking about theory is either dead or in terminal decline. Prove me wrong!
 
In my last post, I offered a challenge to readers, specifically, can you provide answers to:
1.     Why did nature choose ribose for nucleic acids?
2.     How did homochirality arise?
So far, I have no guesses/inspired answers. Come on! Assuming someone is actually reading these posts, either they don't care or it is too difficult. Now, surely you are not going to concede that I can work out more than you can? So, is anyone going to try?
 
My first answer next week.
Posted by Ian Miller on Mar 1, 2015 8:38 PM GMT
As I haven't updated for some time, there should be a lot of references summarized; however, many of them turned out to be only of partial interest. There were a few papers discussing the origin of chondrules. Until recently, while the formation of these had been something of a mystery, the nearest there was to a conclusion was that they formed in plumes produced by collisions of planetesimals. Fu et al. (Science 346: 1089–1092) found that chondrules in one meteorite formed at about 2–3 My after the first small bodies. This was supported by Palme et al. (Earth Planet. Sci. Lett. 411: 11–19), who found that chondrules behaved as if they were formed before any larger bodies, and were not formed by impacts. On the other hand, Johnson et al. (Science 517: 339–341) argued that chondrules in CB chondrites probably formed in a plume produced by an impact at a relative velocity of greater than 10 kilometers per second. If so, meteorites are a byproduct of planet formation rather than left-over building material. Of course there may have been more than one route that led to their formation, but as can be seen, there is no clear answer to this question yet.
 
The landing on the Jupiter-family comet 67P/Churyumov-Gerasimenko probably attracted as much recent public interest as anything in science, and some data are coming in. Examination of the comet's water (Altweg et al. Science 347: 126952-1 to 3) showed that the comet had a D/H ratio three times greater than that of Earth's water, and hence Jupiter-family comets could not be a significant source of Earth's water. The reason this is important is that the water has to come from somewhere, and comets were once considered the obvious source. However, the usual distant comets were found to have too much deuterium, so Jupiter-family comets became the choice. The alternative is carbonaceous chondrites, but the problem with these is that they are rather rare, and lie in the outer part of the asteroid belt. Had they been the source, why were there once so many of them, and so few of what are now the more common asteroids? And if there were that many, why did they not accrete into a small planet? In my theory, neither comets nor chondrites are significant sources of water on Earth.
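A simple mixing estimate shows why the factor of three matters. Apart from that factor, the numbers below are my own assumptions for illustration: VSMOW is the standard terrestrial D/H reference value, and the second, slightly D-poor source is hypothetical.

```python
# Two-source mixing for Earth's ocean D/H.
VSMOW = 1.56e-4        # terrestrial ocean D/H (standard reference value)
r_comet = 3.0 * VSMOW  # Jupiter-family comet water, per the Rosetta result above
r_other = 0.8 * VSMOW  # hypothetical slightly D-poor second source (assumed)

# Mass fraction f of cometary water such that the mixture matches VSMOW:
#   f * r_comet + (1 - f) * r_other = VSMOW
f = (VSMOW - r_other) / (r_comet - r_other)
print(f"maximum cometary contribution ~ {f:.0%}")  # ~9%
```

Even paired with a generously D-poor second source, 67P-like comets could supply only of order 10% of Earth's water on these figures, which is the sense in which they "could not be a significant source".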
 
Meanwhile, spectral data from the surface of the comet were compatible with the presence of opaque minerals associated with non-volatile organic material with C-H and O-H bonds, but with very little contribution from N-H bonds (Capaccioni et al. 2015. Science aaao628-1 to 5). There may have been small amounts of ice, but no ice-rich patches, and the surface was generally dehydrated. Similarly, Hassig et al. (2015 Science 347: aaa0276-1 to 4) showed the outgassing comprised water, CO and CO2. The relevance here is the absence of significant amounts of nitrogen-containing compounds, as required by my theory of planetary formation, as long as the body originated in the Jovian region. That is pleasant support, even though the supporters do not realize it.
 
Significant results are beginning to come in from Gale crater, on Mars. Of particular interest to me was that from Bridges et al. 2015 (J. Geophys. Res.: Planets, doi:10.1002/2014JE004757). The clays found at Gale crater were consistent with the basalt having reacted with a fluid of pH between 7.5 and 12, and further, the reactions did not occur in a setting where exchange with an overlying CO2 atmosphere was possible, because had it been so, there would have been deposits of carbonates, and there were none. My theory involves an early methane atmosphere with ammonia in the local water, although of course Gale crater may not be a good example of early Mars, as impact craters could have their own localized geology. Nevertheless, all these facts are in accord with what I published, and nothing has been found that contradicts it, so I am tolerably happy.
 
On a more personal level, the Wellington Astronomical Society has asked me to give a talk on March 4, and to include some chemistry. Accordingly, I chose "Origin of life" as a topic, and I have put out an abstract and issued two challenges, which readers here may as well join in on.
1.     Why did nature choose ribose for nucleic acids?
2.     How did homochirality arise?
Put your guesses or inspired knowledgeable comments at the end of this post. The answers are not that difficult, but they are subtle. I shall post my answers in due course. In the meantime, I am offering serious discounts on my ebook "Planetary Formation and Biogenesis" from Amazon (US and UK only) from March 6 for about six days, the discount abating over time, so get in early. (Sorry about that commercial intrusion.)
Posted by Ian Miller on Feb 16, 2015 2:21 AM GMT
Yes, this post will be controversial, but I am doing it for several reasons. The first is my wife was convinced there is, and she was equally convinced that I, as a scientist, would quietly argue the concept was ridiculous. However, as she was dying of metastatic cancer we had a discussion of this issue, and I believe the following theory gave her considerable comfort. Accordingly, I announced this at her recent funeral in case it helped anyone else, and I have received a number of requests to post the argument. I am doing two posts: this one with some mathematics, and one where I merely assert the argument for those who want a simpler account.
 
First, is there any evidence at all? The issue is complicated in that observational verification can only be obtained by dying: if there is an afterlife, you find out. What we have to rely on is statements from people who did not quite die, and there are numerous accounts from such people, who claim to have seen a tunnel of light, with relations at the other end. There are two possible explanations:
(1)  What they see is true,
(2)   When the brain shuts down, it produces these illusions.
The problem with (2) is, why does it do it the same way for all? There was also an account recently of someone who died on an operating table, but was resuscitated, and he then gave an account of what the surgeons were doing as viewed from above. One can take this however one likes, but it is certainly weird.
 
What I told Claire arises from my interpretation of quantum mechanics, which is significantly different from most others', and I shall give a brief outline now. (If anyone is interested in going deeper, I have an ebook on the subject: http://www.amazon.com/dp/B00GTB8LJ6.) I start by considering the two-slit experiment, and the diffraction pattern that is obtained. Either there is a wave guiding the particles or there is not. Most physicists argue there is not: the particles just happen to give that distribution. If you ask why, they tend to say, "Shut up and compute!" For the fact is, computations based on what is a wave equation give remarkably good agreement with observation, but nobody can find evidence for the empty wave. For me, there must be something causing this behaviour. Accordingly, my first premise is:
The wave-like distributions found in quantal experiments are caused by a wave. (1)
This was first proposed in de Broglie's pilot wave theory, but modern quantum theory does not assert this.
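For readers who have not seen it plotted, the two-slit distribution referred to above is easy to sketch numerically. This is only the textbook far-field interference pattern for two idealized point slits; the wavelength and slit spacing below are assumed for illustration and are not from the post.

```python
import numpy as np

# Illustrative parameters (assumed, not from the post)
wavelength = 500e-9      # 500 nm light
slit_separation = 10e-6  # 10 micrometre slit spacing

# Viewing angles across the screen, in radians
theta = np.linspace(-0.05, 0.05, 2001)

# Path difference between the two slits gives a phase difference
# delta = 2*pi*d*sin(theta)/lambda
delta = 2 * np.pi * slit_separation * np.sin(theta) / wavelength

# Superposing two equal-amplitude waves gives intensity I ∝ cos^2(delta/2)
intensity = np.cos(delta / 2) ** 2

print(intensity[1000])  # central maximum at theta = 0: 1.0
```

The particles arrive one at a time, yet their accumulated counts trace out this cos² envelope; the question in the text is what, physically, produces it.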
 
As with general quantum mechanics, the wave is represented mathematically by
 ψ = Aexp(2πiS/h)    (2)
where A is the amplitude, S is the action, and h is Planck's quantum of action. Note that the exponent must be a pure number, which is why the action is divided by the quantum of action. As a consequence, it is generally held that the wave function is complex, but this is not entirely true. From Euler's relation
exp(πi) = -1         (3)
it follows that, momentarily, when S = h/2, or h, the wave becomes real.
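The reality condition can be written out explicitly; this merely restates the step above in full:

```latex
\psi = A\,e^{2\pi i S/h}
     = A\left[\cos\!\left(\frac{2\pi S}{h}\right) + i\,\sin\!\left(\frac{2\pi S}{h}\right)\right]
```

The sine term vanishes whenever 2πS/h = nπ, i.e. whenever S = nh/2 for integer n; at S = h/2 the wave is −A and at S = h it is +A, both real.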
My second premise is
The physics of the system are determined when the wave becomes real.   (4)
This is the first major difference between my interpretation and standard quantum mechanics. The concept that the system may behave differently when the wave function is real rather than complex has, as far as I know, not been investigated. This has a rather unexpected benefit too: the dynamics involve a number of discrete "realizations", and the function is NOT smooth and continuous in our domain. If you accept that, it immediately follows why stationary states of atoms are stable and why the electron does not radiate when it accelerates, as Maxwell's laws would otherwise require. The reason is that the position of realization does not change in the stationary state, and therefore the determination of the properties shows no acceleration. From that, it is very simple to derive both the Uncertainty Principle and the Exclusion Principle, and these are no longer independent propositions.
 
Now, if (1) and (4), it follows that the wave front must travel at the same velocity as the particle; if it did not, how could it affect the particle? The phase velocity of the wave is given by
v = E/p        (5)
Since p is the momentum of the particle, and if the phase velocity is the same as the particle velocity (for the particle, consider the expectation velocity), then the right hand side must be mv²/mv = v (recall that v must equal the particle velocity). That means the energy of the system must be twice the kinetic energy of the particle. This simply asserts that the wave transmits energy. Actually, every other wave in physics transmits energy; just not the textbook quantal matter wave, which transmits nothing and does not exist, but merely defines probabilities. (As an aside, since energy is proportional to mass, and mass is proportional to the probability of finding the particle, in general this interpretation does not conflict directly with standard quantum mechanics.) There are obvious consequences of this that lie outside this post, but what I find strange is that nobody else seems to have considered this option. For this discussion, the most important consequence is that both particle and wave must maintain the same energy. The wave sets the particle energy because the wave is deterministic; the particle is not, and has to be guided by the wave. There is now a further major difference between this interpretation and the standard interpretation: waves are both linear and separable, as in standard wave physics. There is no need for a non-divisible wave for the total state of an assembly because there is no renormalization due to probabilities.
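Spelling out the arithmetic of the paragraph above (a restatement of the post's own reasoning, not new physics): with p = mv and the requirement that the phase velocity equal the particle velocity v,

```latex
v_{\mathrm{phase}} = \frac{E}{p} = v
\;\Longrightarrow\;
E = pv = (mv)\,v = mv^{2} = 2\left(\tfrac{1}{2}mv^{2}\right) = 2T
```

so the total energy E is twice the particle's kinetic energy T, the excess being carried by the wave.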
 
Now, what is consciousness? Strictly speaking, we do not know exactly, but examination of brains that are conscious shows considerable electrical activity. Furthermore, this activity is highly ordered. While writing this, my brain is not sending random pulses, but rather is organising some reasonably complicated thoughts and setting out actions. To do that, and to overcome entropy, there is a serious expenditure of energy in the body. (The brain uses a remarkably high fraction of the body's energy.) I leave aside how this happens, but I require consciousness to be due to some matrix, which remains undefined but evolves and is superimposed on the brain, and which orders the activity. Without such a superimposed entity, simple entropy considerations would lead to the decay of the order required for conscious thought. Such order must involve the movement of electrons, and since this is quantum controlled, the corresponding energy must be found in an associated wave. It therefore follows that when we are conscious and living "here", there is a matrix of waves with corresponding energy "there".
 
Accordingly, if this Guidance Wave interpretation of quantum mechanics is correct, then the condition for life after death is very simple. Death occurs because the body can no longer supply the energy required to match the Guidance Waves that organize consciousness. But if, at that point, the energy within the Guidance Wave matrix can dissociate itself from the body and maintain itself "there" (and recall that linearity means other waves do not affect it), then that wave package can continue, and since it represents the consciousness of a person, that consciousness continues. That does not prove there is life after death, but it does in principle appear to permit it.
 
Is the Guidance Wave interpretation correct? As far as I am aware, there is no observation that would falsify my alternative interpretation of quantum mechanics, while my Guidance Wave theory does make two experimental predictions that contradict standard quantum mechanics, and these could be tested in a reasonably sophisticated physics lab. It also greatly simplifies the calculation of some chemical bond properties.
 
Is there life after death? In my opinion, you only find out when you die, but interestingly, this interpretation gave Claire surprising comfort as her death approached. If it gives any comfort to anyone else, this post will be worth it to me.
Posted by Ian Miller on Feb 2, 2015 1:34 AM GMT
Back again. My wife died on the 16th of January, so my blogging will be a bit erratic for a while, but at the end of last year I had planned some posts, and one theme was the behavior of the scientific community in the dissemination and discovery of knowledge. Here is the first post, which was written before this unfortunate event.
 
In a recent essay in Angew. Chem. Int. Ed. 52: 118 – 122, van Gunsteren outlined what he considered the seven sins of academic behavior, which he ordered in increasing gravity of the offence. I found this quite interesting and worth exploring further. The least severe sin, according to the author, was "Poor or incomplete description of the work". As the author says, reproducibility is a critical element of good science.
 
Why is this ranked as the least of the sins? The author argues that with the growth in complexity of equipment and the sophistication of computer-aided mathematical analysis, publishing all the data required for others to reproduce the work has become more cumbersome. There is no doubt about that, and there is also little doubt that journal space does not lend itself to providing everything, but this in turn raises an interesting question: is the finding of the experiment so complicated that some simplified version cannot be provided? That leads me to a particular dislike I have for computational chemistry papers, in which a general reader such as me, who has an interest but is not involved, has no idea what key features led to the conclusion, because they are not listed. Of course the program details are too complicated to publish in full, but if there is nothing of general interest, why is the paper published?
 
When I was doing my PhD, I tried a synthesis outlined in a report in Tetrahedron Letters, and I could not get it to go. The substrate was not the same, so perhaps the reaction simply did not work on what I was trying; at least, that was the conclusion I reached, so I abandoned that route and instead tried one that was very much longer, but for which I could at least find enough detail to know whether I was doing it properly. While it was time-consuming, it had the merit of working, although it did not give me quite the range of substitution patterns I would have preferred. (I also tried an alternative synthesis, and it worked to some extent, but only on the most electron-rich substrates. That gave me more reason to believe the first option would not work.) However, four years after I had finished my PhD, I came across a full paper in which the real details of that failed option were finally published, and one condition that a young student would be unlikely to recognize as important had been left out of the letter. Yet this condition was absolutely critical, and it could never have been employed accidentally, because it needed a special procedure quite outside the usual conditions of organic synthesis. I do not consider that a minor sin. I consider it likely the work of an egotist who wanted to get as many papers in the field as possible before others worked the obvious possibilities. In my opinion, a synthetic procedure is useless unless sufficient detail is given for a tolerably competent chemist to carry it out and make the required product. So, for me, if this is a minor sin, some of the others must be pretty bad.
 
Van Gunsteren then proceeds to criticize the practice of dumping procedures in supplementary information, largely on the grounds that it is less well reviewed than the main body of the paper. Personally, I do not find this terribly important. The fact is, in most papers there is a strict limit to what peer reviewers can be expected to find. They may find absolutely critical errors of procedure, as I occasionally have, but basically the whole point of peer review (at least in my opinion) is to ensure the paper is coherent, understandable, and makes a point worth making. The peer reviewer usually has no more chance of finding a basic flaw in a procedure than anyone else who does not try the procedure, and the reviewer cannot be expected to reproduce the work. It is the responsibility of the author alone to ensure that the details are correct, AND that all the details are there. After all, the author alone actually knows what was done (unless, of course, it was a student who did it).
Posted by Ian Miller on Jan 19, 2015 7:04 PM GMT