Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


Latest Posts

When I started this post, it was to be about search engines, the problem being that a search is never specific enough, and just about anything can turn up in the results. To illustrate the problem, I Googled "Peer review criticisms", which returned 3,570,000 hits. Surely they cannot all be relevant. However, curiosity got the better of me, and after an admittedly ridiculously inadequate look at the literature, I decided to post about peer review itself. The question to be addressed is: is peer review helpful to unravelling the secrets of nature, or is it part of the problem? So here are some comments that I picked up.
 
However, before I start, a different question might be: does peer review do any good? One possibility is that it sends the half-baked back to the author for further baking. It may also be claimed to help emerging scientists present their ideas in a way that leads to a better understanding of what they are doing. If so, this could be very helpful; however, it only applies to papers sent back for revision. There is also the question of whether papers sent back for revision truly need it. That someone does not write in the style of the reviewer is beside the point. Peer review also eliminates the "crank" stuff, but herein lies the problem: what happens if an upcoming Einstein is labelled a crank? The current argument is, if it is that good, it will find a place eventually. Perhaps, but will it then be read? However, let us return to the literature.
 
First, a quote from a source that shall remain anonymous: "Peer review seldom detects fraud, or even mistakes. It is biased against women and against less famous institutions. Its benefits are statistically insignificant and its risks – academic log-rolling, suppression of unfashionable ideas, and the irresistible opportunity to put a spoke in a rival's wheel – are seldom examined." To that I would add that it is most certainly biased against individuals who do not have a university or major institutional address. The individual scientist does not exist, according to the reviewing system. If you do not have a suitable address, you must be a crank; no need to waste time reading the paper. Herein lies another real problem: papers can be rejected by the editor without peer review, or even without any evidence of having been read, at least past the title.
 
Now, from www.evolutionnews.org/2012/02/problems_with_p056241.html
This article made six points:
  1. Good science does not have to be published in the peer-reviewed literature. The examples cited include rejections, and I shall deal with those later.
  2. The peer-review system wrongly rejects scientifically valid papers, while it wrongly accepts scientifically flawed papers. Personally, I feel it is too much to expect a peer reviewer to detect fraud, and you cannot expect review to uncover faulty experimental procedures that are not specified. On that point, journals increasingly encourage methods to be given as a reference to somewhere else, which in turn is a reference to somewhere else again, and so on. However, the article also makes the valid point that peer-reviewing is both time-consuming and expensive, and often excludes people for no good reason.
  3. Scientific peer-reviewers are not perfectly objective. Again, the performance of the reviewers is questioned, but a very interesting point followed: journals have a very strong economic interest in preserving the current system, and scientists go along with it because it helps them maintain their position. To my mind, this is not a good reason to persist with it.
  4. The "peer-review card" is often played to silence scientific dissent. This, to my mind is a serious criticism, although more of the scientific elite than the reviewing system.
  5. Peer-review is often biased against non-majority viewpoints. Denyse O'Leary is quoted as saying: "The overwhelming flaw in the traditional peer review system is that it listed so heavily toward consensus that it showed little tolerance for genuinely new findings and interpretations."
  6. Not being recognized in peer-reviewed literature does not imply a lack of scientific merit. The simple fact is, in logic, the contrary position (that only what passes peer review has merit) is a fallacy of the ad verecundiam (appeal to authority) class.
In a later post I shall continue on this theme, but before I do, what are your thoughts on this matter?
Posted by Ian Miller on Aug 16, 2015 9:25 PM BST
Since my last post, I received a telephone questionnaire relating to what the RSC should be doing, which raises the question, what should it be doing? First, some things are obvious, particularly relating to practicing chemists, and in my opinion, the RSC does these rather well. There are obviously some additional things it could be doing, and any organization has room for improvement, but I suspect the average response to such a questionnaire will be to make minor adjustments to what is already there. Leaving aside the need to work for chemists, I think there are three major areas for the society to consider.
 
The first is to get a basic understanding of chemistry to the general public. In the recent RSC poll, 55% of the public believed it is important to know about chemistry, but I bet most who answered that way would admit they know very little. By basic understanding, I mean enough to understand the problems the world faces, enough to see them safely through their lives, and enough to sense whether someone making a public statement is speaking sensibly. In New Zealand recently, one family died of accidental carbon monoxide poisoning, and had they understood the problem, this would not have happened. I know you can never totally prevent such incidents, nor can you cover every eventuality, but I think the society could be more active in trying, perhaps by giving some basic information in an easily comprehended form on its website, or perhaps by making educational YouTube videos.
 
The second is to make a broader explanation of certain important environmental issues available to the general public. I have seen a lot of irrational and false comments about matters like climate change, and I feel the Society should make a bigger effort to show the public how to handle the chemical aspects, or, perhaps together with the Institute of Physics, present a proper overall picture, including a discussion of what we do not know for sure. The problem, as I see it, is that science tends to present very technical statements with proper scientific statements of uncertainty, but the public cannot understand them, and instead fall prey to "snake oil" merchants. I am not saying there cannot be dissent, but the public must realize that dissent requires logical analysis and evidence. I think the public is smarter than we give them credit for, BUT they lack specific information.
 
The third is that the Society should show how chemistry can assist the economy, and not just by helping big companies. In the RSC survey, many people believed that chemistry can help the economy, and the fact is that the wealthier countries now have a strong knowledge input into their economies. The Society could also help identify the problems smaller companies have and make basic chemical information available to them. Not all companies are associated with a university, and it is important that graduates, when they go out into the world, have the opportunity to stay on top of their profession.
 
There will be other things the Society could do, and many may be more significant, but these are my thoughts as to what could be done. What are yours? One of the better things the RSC does is to put up blogs like this one, and while this offers the chance to involve all the members in idea creation, that only works if the members participate, so why not throw in your thoughts?
 
Posted by Ian Miller on Jul 27, 2015 3:35 AM BST
Towards the end of last year, Nature published two articles that raised issues with the peer review system. In one, the claim was made that the reviewer of one paper had not even read it, because he commented on something that would usually be found in such a paper but in this case was not there. Naturally, this would be a cause for concern for the author. However, the article then went on to a somewhat deeper issue, namely that in 2013 the number of articles indexed in Elsevier's Scopus rose to 2.7 million. Not all of these would be peer reviewed, but you see the problem: there are just not enough experts to deal with this flood of material. What happens is that scientists are given papers that are further outside their specialty, and yes, the reviewer may be able to evaluate the methods and results sections, but, according to the claim, lack the expertise to evaluate the introduction and discussion.
 
The article then made the claim that reviewers should verify the authors are quoting the right literature to support their views. Now, I dispute this. The reason for citing literature is that the cited papers put forward views or results of significance to the argument that follows; the introduction of a paper is not a general review of everything vaguely associated with the topic. So the first question is: is the work novel? Unfortunately, the probability of recognizing plagiarism only becomes significant when the reviewer has a high level of expertise and experience. Recall the word "peer"? If the work is accepted as novel, then the next question is: are there any facts or assumptions seemingly pulled out of thin air that should be referenced to some other paper? If so, those papers should be referenced, but the reviewer should not be expected to do a full literature search. Surely the author has to take responsibility; the important thing is that the author does not claim that to which he is not entitled. The third question is: is there a reference that shows the work to be wrong? Again, an expert in the field should be able to show this, but how do we find enough such experts? Experts, by definition, are a small fraction of the total scientists.
 
The second Nature article illustrated what I believe is a far more insidious problem: the peer review loop. Authors are often asked to recommend peer reviewers, so they do, from within a small group of friends. In return, they know they will be recommended back. In a nice cosy circle, of course you recommend publication, with maybe the odd correction to show you did something. Even worse was the case where one author wrote his own reviews and sent them to colleagues, who in turn would submit them to the editor. This was caught out because everyone did it far too quickly. If you are going to cheat the system, obviously you should do it slowly, by which time the editor will finally be glad to have something on his table!
 
I believe recommendations from scientists should stop, but that then raises the question: how does the editor find reviewers? Personal knowledge is valuable, but with the great flood of papers coming in, can the editor know enough experts that those he chooses are not overwhelmed? Then there is a problem with multi-author papers. If you look at some of the papers reporting results from the NASA space probes, up to fifty authors may be cited. In other words, anyone who knows enough is already an author.
 
But this raises the questions: why would scientists want to game the system, and why is peer review required? The answer to the first appears to be to get more papers that go towards building up a reputation for promotion, awards, whatever. I suspect the scientific literature would become a lot more manageable if this practice were to stop. The answer to the second might come from something similar to the physicists' arXiv: authors can publish anything in e-form, but it is then open to public peer review, in which relevant comments from others can be attached. If the paper survives, it is worth keeping. If there is obvious plagiarism, the work is deleted and the plagiarist publicly identified. If some references have been left out, they can be added, although some check on relevance as opposed to self-citing would be required. If there is evidence the work is wrong, then that evidence can be submitted as a linked paper, and if the first paper is not adequately defended, everyone will disregard it. Let the scientific community do the peer review. This would not have worked while journals were printed on paper, but when they are essentially e-journals, why not? What do you think?
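To make the proposal concrete, here is a minimal sketch of how such an open e-archive's review loop might be modelled. The names and structure are entirely my own invention, purely for illustration:

```python
# A purely illustrative sketch of the open review loop proposed above:
# papers are posted freely, public comments and linked rebuttal papers
# accumulate, and a confirmed plagiarism finding withdraws the paper.
from dataclasses import dataclass, field

@dataclass
class Paper:
    title: str
    authors: list
    comments: list = field(default_factory=list)   # attached public reviews
    rebuttals: list = field(default_factory=list)  # linked contrary papers
    withdrawn: bool = False

class Archive:
    def __init__(self):
        self.papers = []

    def post(self, paper):
        # Anything can be published in e-form; the community judges later.
        self.papers.append(paper)

    def comment(self, paper, text):
        # Relevant comments from others are attached to the paper.
        paper.comments.append(text)

    def link_rebuttal(self, paper, rebuttal):
        # Evidence that the work is wrong is itself a linked, citable paper.
        paper.rebuttals.append(rebuttal)
        self.post(rebuttal)

    def flag_plagiarism(self, paper):
        # Confirmed plagiarism removes the work from view.
        paper.withdrawn = True
```

The point of the sketch is only that every step described above maps onto a simple, auditable public record; no editor-chosen reviewers are needed anywhere.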
 
Posted by Ian Miller on Jul 5, 2015 11:47 PM BST
Those funding research want the outcome of their funding to be useful, which means that the research either leads to something useful, or it does so indirectly by inspiring someone else. The problem with this attitude is that it inspires some writers to present their research in a way that looks a lot more relevant than it actually is. The best that can be expected is that it will inspire someone else to pick up the challenge. But how does that come about?
 
One interesting problem involving invention, or the development of new technology, is the ability to see things before others do: to see the possibilities, and also to see quickly what will become a dead end, well before it wastes too much of your time. So with that in mind, what to make of a recent paper on CO2 reduction (JACS 137: 5332)? The statement made in the paper is that in the presence of Li2[1,2-C6H4(BH3)2], CO2 can be reduced to methane or methanol if a suitable reducing agent is present. What appears to happen is that the CO2 forms some sort of complex with the two BH3- groups, the CO2 molecule becomes bent, and an aromatic system is formed. This would significantly lower the activation energy for reaction at the carbon atom, hence the catalytic activity. Unfortunately, the reducing agent is not one that obviously comes to mind for making bulk chemicals.
 
So, is this useful? The first thing to note is that we need to get rid of CO2, and in principle, this does it. Why "in principle"? Because, when we look at the table of yields, the yields quoted are those of the reducing agent's products. Thus triethylsilane was used as the reducing agent, and all the products noted were silicon derivatives. This certainly supports the concept that a reduction occurred, BUT it gives no clue as to the specific reactions, or how much of which product was obtained. Elsewhere, the paper adds that the products were either methane or methanol, and one diagram shows an excellent yield of an intermediate, and implies up to 89% yield of methanol. That sounds promising, until we start to consider the reducing agent. If you have to use reducing agents like triethylsilane, then the reagents will be far more expensive and harder to get than the products, particularly since several mole equivalents will be required.
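To see why several mole equivalents are unavoidable, consider a plausible overall stoichiometry for a silane-based reduction. The balancing is mine, not taken from the paper, but any route to methanol or methane must deliver three or four hydride equivalents per carbon:

$$\mathrm{CO_2} + 3\,\mathrm{Et_3SiH} \rightarrow \mathrm{CH_3O{-}SiEt_3} + \mathrm{(Et_3Si)_2O} \quad \text{(a methanol equivalent, after hydrolysis)}$$
$$\mathrm{CO_2} + 4\,\mathrm{Et_3SiH} \rightarrow \mathrm{CH_4} + 2\,\mathrm{(Et_3Si)_2O}$$

On that arithmetic, each tonne of methanol consumes several tonnes of triethylsilane, which is exactly why the economics look inverted.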
 
In my opinion, as it stands it is unlikely to be useful, as it uses difficult-to-get reagents to make commonly available chemicals. On the other hand, it seems to me that there is potential here to make use of the aromatic intermediate in some way yet to be discovered. This is the challenge of chemistry.
 
How would you respond to this challenge? My guess is that pressurized hydrogen with a hydride transfer agent that acted catalytically would be the best bet for methanol. However, some reactive anion that did not react with the boron groups might offer the chance to add to the carbon atom and make a carboxylic acid, which in turn might be of synthetic interest. Your opinions?
Posted by Ian Miller on Jun 21, 2015 11:06 PM BST
By the title, I mean: what sort of body is it, and how did it form? Ceres is the largest body in the asteroid belt, and it is essentially spherical, from gravitational energy minimization. It lies at a distance at which the remaining bodies are mainly carbonaceous asteroids, which are made of rock with some water and organic material. It should be noted that the part of the asteroid belt closest to the star contains mainly silicaceous asteroids, so an interesting question is: how did these different bodies form? The issue is made more complicated because there are also some, such as Vesta, that appear to have an iron core. To get an iron core, the temperature of the body had to get above 1538 °C, yet the evidence from meteorites is that the carbonaceous bodies never got above ~200 °C. How did all this happen?
 
In my Planetary Formation and Biogenesis, I supported the hypothesis that Vesta and one or two other bodies really formed much closer to the star, were moved out by gravitational interactions, and had their orbits circularized where they are now. If that is right, that gets rid of that problem. Setting Vesta and similar asteroids aside, that still leaves two major classes of asteroid that are quite different. Within my interpretation, planetary formation starts basically through chemistry, and the bodies stick together initially through chemical (including physical chemical) interactions. I suggested these two classes formed by different methods, and consequently should have different chemical compositions. In particular, the carbonaceous ones formed as, or after, the accretion disk cooled down. The concept was that at the higher temperatures, organic materials such as methanol, known to be in the disk, pyrolysed on silica particles and formed tarry material, and later this tarry material permitted bodies to stick together. One possible reason why the bodies are so small is that the tar would only be sticky over a modest range of temperatures. The net result, based on meteorite samples, is that the carbonaceous asteroids tend to be black, with various small rock-crystals distributed through them. Accordingly, if you break such a meteorite, the interior remains black, or at least dark coloured.
 
The puzzle, for the moment, is that the space vehicle Dawn has observed bright spots on Ceres' surface. These are quite white, in amongst the otherwise depressing grey-black. Their nature is not that difficult to explain: in my opinion, they most likely comprise exposed ice. The problem then is, how does ice get there? Carbonaceous chondrites have between 3 and 22% water in them, but this water level may be inflated because the rocks have been lying around on Earth for some time before being picked up. The density of Ceres relative to water is 2.17, which means that in the absence of an iron core, the composition is richer in silicates than anything else. (Granite and silica tend to have densities of the order of 2.5, while olivines/pyroxenes are of the order of 3.3.) One possibility is that during differentiation, the ice melted and accumulated, like lava, as deposits. But if that were the case, why would an impact not simply remelt it and mix it with the silicates? That would leave, at best, very dirty ice.
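A quick two-component estimate (my arithmetic, using a round 0.92 for ice and the 3.3 quoted above for olivine-type rock) shows what that bulk density implies:

$$\phi_{\mathrm{ice}} = \frac{\rho_{\mathrm{rock}} - \bar{\rho}}{\rho_{\mathrm{rock}} - \rho_{\mathrm{ice}}} = \frac{3.3 - 2.17}{3.3 - 0.92} \approx 0.47, \qquad w_{\mathrm{ice}} = \frac{\phi_{\mathrm{ice}}\,\rho_{\mathrm{ice}}}{\bar{\rho}} \approx \frac{0.47 \times 0.92}{2.17} \approx 0.20$$

So ice could fill nearly half the volume while contributing only about 20% of the mass: silicates dominate by mass, consistent with the upper end of the carbonaceous chondrite water range.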
 
The white spots appear to be within craters, so it is possible that the impacts melted ice deeper below, and water subsequently flowed out and solidified. To get the bright spots, that water must not be dirty, which suggests there were richer deposits of ice at some depth below the surface; after impact, the pressure of steam cleared a pipe through the rock, and later the residual water flowed to the surface. So, what are the options? As I see them, Ceres may be an abnormally large carbonaceous asteroid in which the water has been mobilized by impact. The other possibility is that Ceres started life in the Jovian accretion zone and was thrown inwards, picking up more dust on the way. This assumes it started life a bit like Ganymede/Callisto (densities between 1.83 and 1.93), and gained more dust and silicates on its surface. My guess is it started life in the Jovian region, because that is the easiest way for it to get so big. If this is so, Ceres is not a typical body within the carbonaceous asteroid distribution, and Dawn will add no more information as to their formation. What remains to be seen is what information Dawn can gain.
 
Assuming all goes well, I shall add a photo of such spots and you can form your own opinion. The photo is, of course, due to NASA. What do you think?


Posted by Ian Miller on Jun 7, 2015 11:46 PM BST
In two previous posts, I have mentioned two of the seven sins of academics in the natural sciences discussed in an article by van Gunsteren (Angew. Chem. Int. Ed. 52: 118 – 122). The third sin was insufficient connection between data and hypothesis, or over-interpretation of data. My personal view is that this is not a sin at all, as long as you are honest about what you are doing. Perhaps the best-known example is that of Kepler. Strictly speaking, his data were not really robust enough to justify his laws, but Kepler decided (correctly) that the planets should follow some sort of function, and the ellipse fitted the data better than anything else. Similarly, in one sense it was an act of faith for Newton to accept Kepler's laws as laws, but look what came from it. My view is that there is no harm in drawing a conclusion from data that do not fully support it, provided it is clear what you are doing and the conclusion is not put to a critical use. Thus, if considering whether something is safe, and the data do not prove safety, it does no harm to hypothesise that it could be safe, as the hypothesis takes everyone forward, but only if it is clear that it is a hypothesis.
 
The next sin mentioned is the reporting of only favourable results. Here I am in total agreement. If some result does not support your hypothesis, you should investigate it thoroughly, and if it persists, you should not only report it, but confess that the hypothesis is wrong as stated. To me, it is a sin, albeit a less serious one, to report the data and make no comment on it. Merely stating that it is unexpected, or ending the sentence with an exclamation mark, is not adequate. The reason is that, in logic, ONE observation that cannot be explained by a theory is sufficient to falsify it.
 
Another sin mentioned was the neglect of errors found after publication. If the error is in the reporting of the data, such as a spectral peak listed in the wrong place, obviously it should be reported. However, I am less sure about corrections that make no significant difference and do not conclude the matter. In my opinion, it is almost as big a sin to put out a sequence of papers on the same subject with a conclusion that moves around a little from paper to paper. If the first conclusion is near enough, in my opinion there should be no corrections until the author is convinced the subject is sorted. There is far too much in the literature already, without salting and peppering it with minor variations, none of which significantly advances the issue.
 
The remaining sins listed were plagiarism and the direct fabrication of data. I agree these are bad sins, but do they actually happen? I have heard there are examples from students, but surely this is as much the fault of supervisors. I would hope that professional scientists would never even think of this. As far as I know, I have never run across an example of either of these. Have you?

 
I realize these opinions might be controversial, but so what? I hope they stimulate discussion. I also think the list given in this article is incomplete, and I feel there are more sins that are equally bad (except possibly for the last two). More on them some other time.
Posted by Ian Miller on May 25, 2015 4:54 AM BST
In January of this year I started a series of posts based on an article in Angew. Chem. Int. Ed. 52: 118 – 122, where van Gunsteren listed the seven deadly sins of chemists. I commented on the first one (inadequate descriptions of methodology), inspired in part by an example that held up progress on my PhD, when an eminent chemist left out a very critical piece of the experimental methodology and I was not smart enough to spot it, but then I got distracted by a series of what I thought were important announcements, coupled with one or two things that were happening in my life.
 
The second sin was "Failure to perform obvious, cheap tests that could repudiate or confirm a model, theory or measurement." The defence, of course, is that the experimenter did not think of it, and I am far from thinking that one should blame an experimenter for failing to do the "obvious". The problem with "obvious" is that it is always so when pointed out in retrospect, but far from it at the time. Nevertheless, late in my career I have an example that is a nuisance, and in this case it is not even chemistry, but rather physics. My attempts at understanding the chemical bond, and, for that matter, some relationships I found relating to atomic orbitals (I. J. Miller, 1987, Aust. J. Phys. 40: 329 – 346), led me to an alternative interpretation of quantum mechanics. It is a little like de Broglie's pilot wave, except in this case I assume there are only physical consequences when the wave is real, which, for a travelling wave, from Euler, is once per period. (Twice for certain stationary states.) As with the Schrödinger equation, the wave here is fully deterministic. (For the Schrödinger equation, if you know ψ for any given set of conditions, you know ψ for any changed conditions, hence the determinism. The position of the particle is NOT deterministic. The momentum is, in as much as it is conserved, but not at a specific point in space.) Now, my interpretation of quantum mechanics has a serious disagreement with standard QM in terms of the delayed quantum eraser. Let me explain the experiment, details of which can be found at Phys. Rev. Lett. 84: 1 – 5.
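For readers unfamiliar with the Euler point, here is how I read the claim (an illustrative sketch of the argument, not a derivation). Write the travelling wave as

$$\psi = A e^{i\theta}, \qquad \theta = 2\pi\left(\frac{t}{T} - \frac{x}{\lambda}\right), \qquad e^{i\theta} = \cos\theta + i\sin\theta$$

The wave is real whenever sin θ = 0, and real with the same sign as A once per period, at θ = 2nπ; on this interpretation, that is the instant at which physical consequences arise.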
 
But first, for those who do not know of it, the two-slit experiment. Suppose you fire electrons at two slits spaced appropriately. On the screen behind, you eventually get a diffraction pattern. Now, suppose that on the far side of the slits you shine light across the electron paths. As an electron emerges from a slit (and an electron only goes through one slit), it scintillates, so you know through which slit the electron passed; however, now the diffraction pattern disappears, and the resultant pattern is of two strips. If the photomultiplier can assign the signal to a specific electron (requiring low intensity), it can be shown that a given strip is specific to a given slit. Standard quantum mechanics states that it is because you know the passage that there is no diffraction. By knowing the path, you have converted the experiment into a particle experiment, and all wave characteristics are lost. You can know particle properties or wave properties, but not both.
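For reference, the textbook form of the two outcomes (standard optics, not specific to this post; here d is the slit separation, L the distance to the screen, and λ the de Broglie wavelength):

$$I_{\text{no which-path}}(x) \propto \cos^2\left(\frac{\pi d x}{\lambda L}\right) \times \text{(single-slit envelope)}, \qquad I_{\text{which-path known}}(x) \propto I_1(x) + I_2(x)$$

The cross term carries the interference; once the path is known, it is gone, leaving just the two strips.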
 
Now, this experiment starts the same way, but with photons, and at the back of the slits there are two down converters, each of which turns a given photon into two photons of half the energy. One of these, called the signal photon, goes to the photomultiplier, while the other, called an idler photon, sets off on a separate path from each down converter, so at this point there are two streams that define which slit the photon went through. Accordingly, by recording the signal photons paired to one of these streams, it is known which path the signal photon took, and there should be no diffraction pattern if standard quantum mechanics is correct on this issue. What was actually done was that each stream was directed at a beam splitter, so half of each stream of idler photons went to a separate photomultiplier, and when the paired signal photons were studied, there was no diffraction pattern. If, on the other hand, the other halves went to two further beam splitters such that the beams were mixed, and knowledge of which slit the parent photon went through was lost, the paired signal photons gave a diffraction pattern. Weirder still, the path lengths were such that what the idler photons did occurred after the signal photons had been recorded, i.e. the diffraction pattern either occurred or did not occur depending on a future event.
 
So where is the sin? Do you see what should have been done? The alternative explanation may seem a bit hard to swallow, but is it harder than believing the photons would give a diffraction pattern or not depending entirely on what was going to happen in the future? Remember, the idler photons could have been sent to Alpha Centauri to do the critical mix/not-mix, and the theory states clearly that the signal photons will, er, what? Rearrange the records eight years later if the physicist does something different at the other end?
 
What I would have liked to see is one stream of the idler photons heading to the mixing being blocked. The theory is that, in the down converter, it would be possible that only one of the photons carried the diffraction information, and that would go equally to signal or idler photons by chance. However, the next beam splitter could split idler photons not by chance but by whether they carried diffraction information, or appropriate polarization. The difference is, the separation is causal, and nothing to do with what the experimenter knows. If the partners of these two streams of idler photons heading to the mixing step carry the diffraction information, cutting out one of those streams will merely delete half of the information (because only half the signal photons are now counted) if the patterns arise deterministically (and recall that, in terms of wave properties, the Schrödinger equation is deterministic). If the experimenter's knowledge is critical, then the diffraction pattern will go, because the experimenter knows which path the photons have taken.
 
The point is, if physicists over the last decade have not commented on this, then maybe it is not that obvious. Maybe it is not a sin not to do the "obvious", because it is seldom obvious at the time. Hindsight is great, but if you did not see the sin before I told you, maybe you will be more generous when others appear to have sinned.
Posted by Ian Miller on Apr 21, 2015 4:37 AM BST
Ever wondered why planets rotate the way they do? All the outer ones appear to have prograde rotation, i.e. they rotate in the direction as if they were rolling along. However, Mercury and Venus are exceptions. Mercury has a very slow rotation that is explained by it being in a tidal resonance with the sun, so that is no mystery, but Venus rotates slowly, and the wrong way. Most people have viewed this in terms of the standard theory of planetary accretion, where the central body gets hit by a large number of planetesimals, or even larger bodies, from random directions, and the resultant spin is a result of preferential strikes. Earth may well have included this effect when it was struck by Theia to form the Moon; in this case the Moon's orbit also takes up angular momentum from the collision. Venus has no moon and spins slowly, so, the theory went, it was just unlucky and got hit the wrong way at the end by something big. But if that were the case, why no satellite?
 
There was a recent paper in Science (346: 632 – 635) that puts a different picture on this. If the planet has an atmosphere, atmospheric temperatures oscillate between night and day, which creates large-scale mass redistribution within the atmosphere, the so-called thermal tides. The retrograde contribution arises because the hottest part of the day is a few hours after midday, due to the thermal inertia of the ground. Because of this asymmetry in atmospheric mass redistribution, the stellar gravity exerts a non-zero torque on the atmosphere, and through frictional coupling, the spin of the planet is modified. This is why Venus has retrograde spin. Atmospheric modelling then showed that the resultant torques for a planet in the Venusian position with a 1 bar atmosphere would be an order of magnitude stronger than for Venus itself, mainly because Venus's very thick atmosphere scatters or absorbs most of the sunlight before it reaches the surface. As a consequence, rocky planets in the habitable zone around lower mass stars may well have retrograde rotation.
 
During these posts, the reader may have noticed that I sometimes view computer models with scepticism. Here are two examples that illustrate why. The first is from Planet. Space Sci. 105: 133 – 147, where two models were made of atmospheric precipitation on Mars ca 3.8 Gy BP. The valley network analysis suggests an average of 1.5 – 10.6 mm/d of liquid water precipitation, whereas the atmospheric model predicts about 0.001 – 1 mm/d of snowfall, depending on CO2 partial pressure (which varied from 20 mb to 3 bar in the models), and with global mean temperatures below freezing point. The authors suggest that this shows there was a cold early Mars with episodic snow-melt as the source of the run-off. I rather fancy it shows something has been left out of the analysis, i.e. there is something we do not understand, because all the evidence to date makes a persistent 3 bar atmosphere most unlikely, and even then, it only works through a near miss at the extremes. The other example came from Icarus 252: 161 – 174. Here, an extensive suite of terrestrial planet formation simulations showed that the rocky planets have overlapping stochastic feeding zones. Worse, the feeding zone of Theia, the body that formed the Moon, has to be significantly more stochastic than that of Earth, so the probability that the two would have the same isotopic composition is very small, yet the isotopic compositions are essentially identical. The authors state there is no scenario for the Moon's origin consistent with its isotopic composition and a high probability event. Why not concede that the premises behind the model are wrong? And there, in my opinion, is the basic problem. Almost nobody goes back and checks initial assumptions once they have been accepted for a reasonable time. And if you do, as I have done for planetary formation, nobody cares. As it happens, each of these issues is properly accounted for in my Planetary Formation and Biogenesis.
 
There is a clear published model from Belbruno and Gott that would permit the Moon to have the same isotopic ratios as Earth, and it assumes Theia accreted at one of the two Lagrange points, L4 or L5 (Astron. J. 129: 1724 – 1745). (Lagrange points are where the gravitational effects of two major bodies and the centrifugal effect of the co-rotating frame more or less cancel, and a third body can stay at L4 or L5 indefinitely, as long as it does not become big enough to be gravitationally significant. L4 and L5 are actually saddle points, but bodies that fall off the "saddle" experience net forces that pull them back, so they carry out motion about the point. Jupiter's Trojans are examples.) So why did these other authors not cite this model as a possible way out of their problem? One possible reason is that they have never heard of it: the model is almost never cited, one of the reasons being that within the standard model of stochastic accretion of ever-larger bodies, nothing could accrete at the Lagrange points because collisions would knock it off. So now we have a problem. The standard model would not permit the conditions by which one model would explain the observations, but the observations also effectively falsify the standard model. So, what will happen? Because there is no way to have discussions on topics such as these, other than in blogs, the whole issue will be forgotten for some length of time. Progress is held up because the modern method of disseminating information contains so much information that the necessary linking does not always occur.
Posted by Ian Miller on Apr 5, 2015 11:33 PM BST
Two posts ago, I issued two challenges for readers to try their hand at developing theory, and so far I have received a disappointing response. Does nobody care about theory? Anyway, my second question was: why did nature choose ribose? Recall that ribose is not the easiest sugar to make, and in the Butlerov synthesis, under normal conditions, essentially no ribose is made. However, that may be misleading, as there are other options. One that appeals is that, provided pH 9 or more is reached, silicates dissolve slightly and catalyse the condensation of glyceraldehyde and glycolaldehyde to form pentoses, and the furanose form is favoured (Lambert et al. 2010. Science 327: 984 – 986). This strongly favours ribose.
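The condensation in question is in effect an aldol addition, and it is atom-economical; in overall stoichiometry (my summary of the chemistry, not taken from the paper):

$$\underset{\text{glyceraldehyde}}{\mathrm{C_3H_6O_3}} + \underset{\text{glycolaldehyde}}{\mathrm{C_2H_4O_2}} \xrightarrow{\text{silicate, pH} \ge 9} \underset{\text{pentose}}{\mathrm{C_5H_{10}O_5}}$$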
 
However, even if we can find a way to make ribose, it is inconceivable that we could do so without making other sugars, so why did nature choose ribose? One answer is that it is the most suitable, but that raises the question: why? It is certainly not that it alone can lead to duplexes once the strand is made, because it has been shown that duplexes based on xylopyranoside or arabinopyranoside, or even ribopyranoside, have better duplex binding, and xylose and arabinose are easier to make.
 
I think the answer lies in part in an essentially forgotten paper by Ponnamperuma et al. 1963 (Nature 199: 222 – 226). What Ponnamperuma et al. did was to take adenine, ribose and phosphate in aqueous solution, and then shine hard UV light (wavelength about 250 nm) on it. Products included adenosine and adenosine phosphates, including adenosine tripolyphosphate. This was quite a stunning achievement, but it leaves open the question: why did it work? Before addressing that, however, we might ask why this has been forgotten, apart from the issue of who reads literature that predates computer searching. There is a serious flaw in this being the cause of life, and that is that it is almost impossible to conceive of an atmosphere that would remain transparent to such short wavelength UV. For example, water gets photolysed to oxygen, thence to ozone, which screens out the hard UV. If there are reducing materials there, you get a haze like that on Titan, and again the hard UV gets screened out.
 
My recommended way of forming a theory is to ask questions, and in this case the question is: why does light make the phosphate ester? The adenine is clearly absorbing the photon, and one can see that the link between adenine and ribose may be photocatalysed, but what happens next? All bonds in the ribose are σ bonds, so the excited electronic state should not extend over them. The next question is: how else can one make phosphate esters? This is slightly easier: if you heat an alcohol with a phosphate to about 200 °C, water is eliminated and we get the phosphate ester.
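In outline, the thermal route is a simple condensation (a generic scheme, not specific to any one substrate):

$$\mathrm{R{-}OH} + \mathrm{HO{-}PO_3H^-} \xrightarrow{\;\sim 200\,^\circ\mathrm{C}\;} \mathrm{R{-}O{-}PO_3H^-} + \mathrm{H_2O}$$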
 
This suggests the answer to the problem should lie in radiationless decay of the excited state, where the energy is dissipated through a sequence of vibrational energy levels decaying to the ground state. We now see that a vibrationally excited hydroxyl could form an ester if it had the same kinetic energy as a hydroxyl at 200 °C. If that is the case, we now see why nature chose ribose: the furanose is more flexible, and the 5-hydroxyl on a furanose will behave a little like the end of a whip. Ribose is the only sugar that forms a reasonable fraction of itself in the furanose form in aqueous solution. Now, adenine cannot have been the primary absorber originally, but there is another option: given the appropriate reduced rocks, if the cell wall hydrocarbons contain dissolved porphyrins, or some similar material, the absorption could be through them.
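A rough energy audit (my arithmetic) shows the proposal is not energetically strained:

$$E_{250\,\mathrm{nm}} = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV\,nm}}{250\ \mathrm{nm}} \approx 5.0\ \mathrm{eV} \approx 480\ \mathrm{kJ\,mol^{-1}}, \qquad RT_{473\,\mathrm{K}} \approx 3.9\ \mathrm{kJ\,mol^{-1}}$$

A single hard-UV photon thus carries some two orders of magnitude more energy than the nominal thermal requirement at 200 °C; the question is only whether enough of the radiationless cascade localizes in the vibrational modes of the 5-hydroxyl.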
 
This brings us to an experiment that could be carried out. Make micelles or vesicles from hydrocarbon alcohols with phosphate esters as the surfactant, with dissolved porphyrin, and ensure the water within contains phosphate, adenine, and a mixture of ribose, xylose and arabinose. The prediction is that adenosine phosphates will be formed, but the xylose and arabinose will not participate in forming phosphate esters. If that is true, it is fairly clear why nature chose ribose: it is the only sugar that works.
 
Thus we have a clear possible explanation, and an experiment that would confirm or falsify it. The question now is, will anyone carry it out?
Posted by Ian Miller on Mar 23, 2015 12:17 AM GMT
So, my theory challenge, with three weeks to think about it, got no responses. Perhaps nobody is reading these posts. Perhaps nobody cares about theory. That would be ugly. Perhaps the problems were too hard. Really? Anyway, first, a review of where science is at the moment: www.ncbi.nlm.nih.gov/pmc/articles/PM2857173/  My argument is that nothing in this review answers the question, but it does give a very large number of references. Given that there was this much activity that failed, maybe this challenge was unnecessarily hard, but let me give you my proposal on how homochirality occurred.

The way to form theories is to ask questions, and in this case to ask why nature chose to be homochiral, given that doing so wasted half its resources. Why would not some other life form use both, and gain a competitive advantage? The obvious answer is that nature chose homochirality because it had to, i.e. if it did not become homochiral, there would be no life. Now, most of what life requires does not demand homochirality. Sources of chemicals could in principle be of any chirality, light is not chiral, and energy transport (ATP) depends on the tripolyphosphate. However, there is one part where chirality is critical: reproduction. Reproduction occurs when a strand of nucleic acid allows its complement to form as a second strand, with which it forms a duplex (double helix). When the duplex separates later, both single strands can grow further new strands, which in turn can form two new duplexes. Note that the helical nature is imposed by the chirality of C-4 on the ribose. The single strand does not have to form a helix, but the two strands, to be intertwined, must both form a helix with the same pitch.

The second strand does not grow by itself. What must happen is that the complementary bases, each with a 5'-phosphated ribose attached, form hydrogen bonds with their complementary base on the nucleic acid strand. Each is then loosely attached by a few hydrogen bonds, and either the required 3'-hydroxyl is close to a 5'-phosphate or it is not. If it is, then the ester bond can form, given an impulse from somewhere to overcome the activation energy. If the ribose chirality is correct, esters can form; if it is not, the two sites never come close enough, no ester is possible, and the base eventually wanders off, and sooner or later one of the correct chirality will appear and the duplex grows. Think of a nut and bolt: you cannot make this work if every now and again the thread changes from left-handed to right-handed pitch. If there is a wrong chirality on the first strand, no duplex can form either, and the impulse required to bring the groups together is now also the impulse required to unravel the duplex.

RNA strands can form loops held together by magnesium ions, and these emerging ribozymes can act as catalysts that hydrolyse exposed RNA strands. It may be that they preferentially solvolyse parts where the pitch changes. Some work is required to validate that piece of speculation; nevertheless, the duplex is at a lower energy than two single strands, so eventually we expect a double helix to form, especially if errors in the chain can be solvolysed.

Once you have a reproducing chiral molecule that can act as a catalyst, it uses all the resources more effectively than any other option, and when it catalyses syntheses, it synthesises chiral entities. Thus it is RNA that is critical for homochirality; it is the only molecule that can arise naturally, sort itself out, then reproduce. Reproduction ensures that it prevails. Whether it chooses D for sugars and L for amino acids would be pure chance on this interpretation, and it would be predicted that half of all alien life would choose the other.

Is that unnecessarily difficult?
Posted by Ian Miller on Mar 9, 2015 12:23 AM GMT