Is science sometimes in danger of getting tunnel vision? Recently published ebook author, Ian Miller, looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


Latest Posts

In a previous blog, I discussed the case of whether platinum, palladium or gold could form oxo compounds. The argument that they could was published, with mountains of data that allegedly supported the case. Eventually, these assignments were found to be wrong, and one critic blamed the referees for permitting the original publication. Throughout the discussion, however, nobody seemed to come to grips with the underlying problem: what comprises proof? One question that fascinates me is why undergraduate science courses almost invariably fail to address this question. Why do no such courses include even a brief course in logic as a prerequisite?
 
We sometimes see the argument, often attributed to Popper, that you cannot prove a statement in science; all you can do is falsify it. I think that is unnecessarily restrictive. One answer was given by Conan Doyle: when all possible explanations but one for an observation are eliminated, then that one, however unlikely it might seem, must be the truth. The reason why Popper's argument fails in such cases is that there was an observed effect, and therefore something must have caused it.
 
Set theory provides a formal means of answering the title question. Suppose I carry out some operation that addresses a scientific question and I obtain an observation; to simplify the discussion, assume I am trying to determine the structure of a molecule. There will be a set of structures consistent with that observation. Suppose I make another observation; there will be a further set of structures consistent with it. Suppose we keep making observations. If so, we generate a number of such sets, and because the molecule remains constant, the desired structure must be a member of the intersection of all such sets. The structure is proved when that intersection contains only one element. Of course, this raises the question of the suitability of the data: simply reproducing the same sort of observation many times merely produces much the same set many times.
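A minimal sketch of that procedure in code (the structure labels and observation names are hypothetical placeholders, purely for illustration):

```python
# Minimal sketch of the "intersection of candidate sets" idea described above.
# The structure labels and observations are hypothetical placeholders, not real data.

def candidates_consistent_with(observation):
    """Return the set of candidate structures consistent with one observation.
    In practice this is the hard, chemistry-specific step; here it is a stub."""
    lookup = {
        "obs_1": {"A", "B", "C"},
        "obs_2": {"B", "C", "D"},
        "obs_3": {"C", "E"},
    }
    return lookup[observation]

def surviving_structures(observations):
    """Intersect the candidate sets generated by every observation."""
    sets = [candidates_consistent_with(o) for o in observations]
    common = set.intersection(*sets)
    if not common:
        # An empty intersection signals that the true structure was never
        # included in the candidate sets, i.e. the sets were incomplete.
        raise ValueError("No candidate fits all observations; the sets are incomplete.")
    return common

result = surviving_structures(["obs_1", "obs_2", "obs_3"])
print(result)               # {'C'}
print(len(result) == 1)     # True: in this sense the structure is "proved"
```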
 
The problem, of course, is to ensure that the set of structures that might give rise to each observation is complete, because the logic fails when the true structure was never considered, and proof fails when the sets are incomplete. (The truth not being considered may show up as the intersection of all sets being the empty set.) We may now guess at a problem with the oxo compounds: there was an awful lot of data consistent with the argued structure, but it was not definitive. This is one place where it is possible that the referees failed, but then the question arises, is it reasonable to expect the referees to pick it? Referees have general expertise, but only the author really knows what was observed. Should authors have to outline their logic? I think so, but I know I am in a minority.
 
I should also declare an interest in that last comment. I published a series of papers devoted to determining the substitution patterns on red algal polysaccharides, and I followed that logic. Accordingly, the papers tended to have a large number of set relationships, and a number of matrices. Rather interestingly, eventually the editor told me to desist and write papers that looked like everybody else's. Since I am not paid to write papers, and they made no difference whatsoever to my well-being, I simply desisted.
Posted by Ian Miller on Sep 23, 2012 12:53 AM BST
Yes, this is definitely off the formal topic, but I am curious to see what comes up. Most people have heard of the C. P. Snow inspired debate relating to the arts versus science, with which, as an aside, I disagree. I have a number of scientist friends who actively participate in some form of "art", usually music, and most scientists at least read something other than scientific papers and news items. On the other hand, I am not so sure that non-scientists understand the concepts of science, which I believe is bad because some decisions are coming that will strongly affect our future, and they will depend on science to get a good outcome. I am not suggesting that everybody study science, but I think it would be helpful if they understood enough of the underlying methodology to be able to tell the difference between a reasoned argument and snake oil. The resolution of reasoned arguments can be left to experts, but the ordinary person has to be able to tell which statements are reasoned and which are not if democracy is to work. If you are going to demand the right to vote, you have the obligation to vote in a reasoned fashion.
 
In this context, you might note that Plato was strongly against democracy. In "The Republic" he posed this question: if you are in a boat at sea with very limited supplies, do you want a vote or do you want a navigator?
 
Anyway, on the basis that if you believe something you should do something about it, I have self-published a couple of futuristic novels involving the concept of science in literature to get people thinking, and to support these I have started a second blog (https://ianmillerblog.wordpress.com) and I am starting with the theme science in literature. I have also posted a quiz question, which readers here might like to try their luck with. The question: can you think of a famous story involving a cloaking device that underpins a plot involving abuse of power, pride, wishing for what you should not have, and the curse of chattering women? If so, let me know and award yourself an imaginary chocolate fish. The one I am thinking of is extremely well-known, although probably very few have actually read it, which is something of a pity.
 
The second question is, can chemists come up with an answer sooner than those associated with books?
Posted by Ian Miller on Sep 14, 2012 4:54 AM BST
My alternative theory survived August! I found no papers that falsified the major premises, although one paper would have led me to change slightly what I wrote. The good news is that the same paper far more strongly falsifies the standard position. This paper (Hirschmann et al., Earth Planet. Sci. Lett. 345–348: 38–48) demonstrated that molecular hydrogen is significantly more soluble in molten silicates under pressure than had previously been realized. The standard theory is that early silicates were oxidized, the logic being:
(a) Modern volcanoes emit oxidized gases.
(b) Modern and ancient volcanic silicates have a similar composition.
(c) Therefore, ancient volcanic gases were oxidized (CO2 and N2).
 
My argument is that (c) does not follow. The “oxidation state” is not a valid variable (it is conserved in a closed system), and the nature of the silicates is determined by the free energy, which depends on the local temperature and pressure and on the movement of matter between phases. In this context, the major silicates in volcanic rock are olivines and pyroxenes, with the iron present in the ferrous state. At much higher pressures (deeper), ferrous silicates disproportionate into ferric iron and metallic iron, so such cations in a rock do not indicate much except the local conditions when the rock crystallized.
 
The significance of this lies in the nature of the original atmosphere. Standard theory says “oxidized gases”; my argument was that there were some reduced gases, generated when carbonaceous material (see my previous blog, “Carbonaceous Mars”) reacted with water to produce CO and H2 (syngas), and when iron reacted with water to make ferrous or, if deep enough, ferric hydroxide plus hydrogen gas. The hydrogen is critical for making some of the molecules needed for biogenesis, and to some extent these would be more difficult to make if hydrogen escaped rapidly to space. Evidence that hydrogen dissolves in silicates means that it would be available for further synthesis for a longer time. Magma can take a Gy to move 1000 km upwards.
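As a sketch, the two sources of reduced gas referred to above can be written as overall stoichiometries (the iron reaction is shown for the ferrous case only; nothing here is meant to imply mechanism or exact conditions):

```latex
% Overall stoichiometries only; a sketch, not a statement about mechanism or conditions.
\begin{align*}
  \mathrm{C} + \mathrm{H_2O} &\rightarrow \mathrm{CO} + \mathrm{H_2}
      && \text{(carbonaceous material oxidized by water, giving syngas)}\\
  \mathrm{Fe} + 2\,\mathrm{H_2O} &\rightarrow \mathrm{Fe(OH)_2} + \mathrm{H_2}
      && \text{(iron oxidized by water, giving ferrous hydroxide and hydrogen)}
\end{align*}
```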
 
This does not mean that my argument must be right, but at least it makes it more plausible, in which case the production of the precursors to life is not an extraordinarily unlikely event at all, but is probable on any Earth-like planet (Earth-like being defined as being of comparable size and having massive granitic cratons). What is “comparable”? The range of sizes is unclear, and this makes Mars a fascinating exploration site. Will Curiosity find clues to biogenetic material? We do not know yet; if any remains on Mars it has to have been protected from the ionizing radiation, so it will have to be buried. Digging is a problem, because it has to be in the correct place and be deep enough. We await results.
Posted by Ian Miller on Sep 4, 2012 12:50 AM BST
While most scientists have mixed feelings about referees, particularly after having had a paper rejected, they also have mixed feelings about expressing views on refereeing. Once you get old enough, mixed feelings crystallize! This blog was inspired by p9, Chemistry World, August 2012. While "standard wisdom" asserted that platinum, palladium and gold did not form oxo compounds, between 2004 and 2007 Craig Hill published papers containing a large amount of data supporting the claim that they did. These papers were subsequently retracted in light of further evidence. There was no question that the original data were correct, but the author now admits the interpretation of their significance was incorrect. The original authors stated that this showed science was working. "Not very well," answered one critic, who stated the papers should have been stopped by the referees and was quite scathing about the quality of the refereeing. The issue is, is that opinion valid?
 
In my opinion, referees should never stop publication of a paper on the grounds that it is wrong unless they can show where, and give the author a chance to rebut their criticism. Science is in a bad way if papers can be rejected simply because referees do not believe them. If one learns nothing else from history, surely one should learn from the Almagest that authority has no place in science; only observations determine whether a theory is false. At stake is the future of science. Whatever science needs now, "priestly authority" is not it.
 
What I find to be of particular importance is this: if the evidence was not sufficient, or if it permitted alternative explanations, why did the critic not see this at some time during the following eight years? If nobody can tell that something is wrong over eight years, I think it is unfair to criticize the referee, who had a few days to view the paper and who, while having some general relevant experience, would not be an expert in those specific areas. Whoever was at fault here (if anyone was), it was not the referees.
 
What could have gone wrong? The most obvious possibility is that while a wealth of data was collected, it did not lead to a singular conclusion. I shall elaborate on this in a future post, because I have attempted to advocate a procedure for such structural analysis that is a little different from what many follow. The other problem is more serious: perhaps there is no place where doubts can be put forward and debated in a logical fashion. In these days of unlimited web space, this is correctable. It seems to me there should be such a forum, managed and reviewed before postings are accepted, but reviewed for one purpose only: to ensure that the posting makes a legitimate point and does so in accord with the rules of logical debate as laid down by Aristotle, under which attacks on the conclusion are valid, but attacks on the person and appeals to authority are not.
Posted by Ian Miller on Aug 28, 2012 9:24 PM BST
During World War II, Germany made a certain amount of synthetic fuel by hydrogenating coal (the Bergius process). If one can hydrogenate coal, why not biomass? If we do not have an immediate answer, why not, and what are we prepared to do about it?
 
In fact there appears to be no good reason why we cannot, because it has been done, at least at workshop scale. One process advocated during the previous energy crisis (Kaufman et al., Chemie Ing. Techn. 46 (1974) 14) involved slurrying finely divided biomass in oil, mixing it with nickel hydroxide, and heating it to 400–450 degrees C at a pressure of at least 5 MPa for about twenty minutes in the presence of hydrogen, which produces an oil that, as made, has physical properties similar to diesel.
 
The advantage of this process is that all the biomass is useful, as any carbonaceous material can be hydrogenated, and the products, essentially hydrocarbons, fit directly into the oil distribution system. The diesel and jet fuel cuts could probably be used directly, although some form of cracking would be required for petrol. Admittedly, a limited number of nitrogen heterocycles, where the nitrogen sits at a position where aromatic rings are fused together, are difficult to hydrogenate, but these should not be a problem for most biomass. An important point is that the lignin, which contains a high proportion of the energy of the biomass, should hydrogenate smoothly. The yields of useful material obtained from a 50 kg/day unit were impressive, from memory in the low forty per cent range by weight, assuming oven-dried starting material. This included a small amount of pitch-like material, which might be rehydrogenated or, alternatively, used as a bitumen substitute.
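As a rough back-of-envelope sketch of what those figures imply (the process conditions are those quoted above; the 42% yield is only my reading of the "low forty per cent" recollection, so treat it as indicative):

```python
# Rough mass balance for the biomass hydrogenation process described above.
# The process conditions are those quoted in the text; the exact yield figure
# is an assumption standing in for "the low forty per cent range by weight".

PROCESS_CONDITIONS = {
    "temperature_C": (400, 450),      # reaction temperature range
    "pressure_MPa": 5,                # minimum stated pressure
    "residence_time_min": 20,         # approximate residence time
    "catalyst": "nickel hydroxide",
    "atmosphere": "hydrogen",
}

def oil_output(dry_biomass_kg, mass_yield=0.42):
    """Oil produced from oven-dried biomass at an assumed ~42 wt% yield."""
    return dry_biomass_kg * mass_yield

# The 50 kg/day workshop unit mentioned above would then give roughly:
print(f"{oil_output(50):.0f} kg/day of diesel-like oil")   # ~21 kg/day
```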
 
So, why is this process almost never advocated? One reason might be that the production of hydrogen could be a problem. Another could be that it is unlikely that this type of process could be protected by patent, although the same is probably true for most oil refining technology. More likely reasons include that this work has been essentially forgotten, and that high pressure chemistry is not fashionable. That raises the question: should the future be determined by our reluctance to build on what had been developed previously, our reluctance to revisit the literature, or our fashion preferences?
Posted by Ian Miller on Aug 14, 2012 10:57 PM BST
July was a good month for my planetary formation theory. Of the eleven meteorites known to have originated from Mars, one of which is approximately 4 Gy old, ten had carbonaceous material embedded in their basalts (Steele et al., Science 337: 212). My theory requires that carbon on the rocky planets was accreted as solids (carbon, carbides or carbonaceous material) and that the atmospheres arose through this material being oxidised by water once the temperature of the rock exceeded about six hundred degrees Centigrade. This gives rise to a mixture of carbon dioxide and methane, the extreme pressures causing most of the carbon monoxide to react further. I argue that, because of the expected chemical isotope effect during this oxidation, the reaction makes a significant contribution to the deuterium enhancement on the rocky planets, especially Venus but to a lesser extent Mars. I also argue that the reason Venus has almost no water is partly that it accreted at a higher temperature, so it accreted less, and partly that, because it has more carbon, it used most of its water making its oppressive atmosphere, thus amplifying the chemical isotope effect.
 
In my ebook I made over 80 predictions, but I never had the nerve to predict that Martian basalts would contain carbonaceous material, although I did predict this for Mercury. There were two reasons. The first was that I expected the surface of Mars to be too oxidised for much carbon to remain. The second was that I was aware of the meteorites and knew that no carbonaceous material had been reported, and it never occurred to me that the reason was simply that nobody had looked!
 
While on the subject of primordial atmospheres, standard theory requires that reduced nitrogen arose from oxidation of atmospheric nitrogen, with the resulting nitrous and nitric acids dissolving in the acidic seawater and subsequently being reduced. Nitrites are reduced at about 70 degrees C over pyrites, nitrates at about 120 degrees, and it is usually argued that this would happen at black smokers. My argument was that nitrous acid is not good to have in the presence of the amines needed for life, as it would diazotize them, but I also learned (Heilman et al., JACS 134: 11573) that nitrosyl compounds act as a source of NO, which in turn is a powerful antibiotic, hardly the most desirable environment for bacteria trying to evolve. (If the atmosphere was primarily carbon dioxide, the ancient seas would be rich in ferrous ions from weathered basalts, and hence nitrosyl compounds should form.) The antibiotic properties of NO have been known for some time; it was just that (blush) I did not know it.
Posted by Ian Miller on Aug 5, 2012 12:35 AM BST
In the July edition of Chemistry World there was an item (p 20) on the SN2 reaction, involving ballistic experiments on the reaction between hydroxide (with or without additional water molecules) and methyl iodide in helium. The results appeared to be that hydroxide plus methyl iodide by themselves simply led to ballistic outcomes. If one molecule of water was incorporated, the iodide left on the trajectory consistent with the classic textbook SN2 result. If two molecules of water were incorporated, a longer-lived complex formed, with a lifetime long enough that the iodide could come out in any direction. That is unexceptional. However, the item ends with a statement: “the SN2 mechanism that undergrads are told is a fairy tale, up there with Santa Claus and the Easter bunny”. The commentator was surprised that any of the results supported the textbook version. So it appears that in volume 1 of my Elements of Theory, I have joined the Easter bunny. Oh dear!
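For reference, the classic textbook process the item refers to, written for the hydroxide/methyl iodide system together with the second-order rate law that gives SN2 its name (standard textbook material, nothing specific to these experiments):

```latex
% The textbook SN2 step (backside attack through a single transition state)
% and its second-order rate law.
\begin{align*}
  \mathrm{HO^-} + \mathrm{CH_3I}
      &\rightarrow [\mathrm{HO\cdots CH_3\cdots I}]^{\ddagger\,-}
      \rightarrow \mathrm{CH_3OH} + \mathrm{I^-}\\
  \text{rate} &= k\,[\mathrm{HO^-}]\,[\mathrm{CH_3I}]
\end{align*}
```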
 
What can my defence possibly be? First, I discussed the SN2 mechanism as part of an example of where science had not fired properly, in this case the so-called non-classical 2-norbornyl cation. In the 2-norbornyl system, endo leaving groups react second order to give exo products, which is the textbook SN2 reaction. However, exo leaving groups react significantly faster and also give exo products, which shows that the reaction is not simple SN2. There are clear reasons why the SN2 mechanism is unlikely to apply to these exo-substituted molecules, and the chapter pointed out that, despite the extreme amount of work carried out on the 2-norbornyl system, the reason for the acceleration with exo substituents remained unexplained. (The book has over seventy problems at the end; one is to find an explanation for the so-called non-classical ion, so for those who want an intellectual exercise, why not try your luck? As a clue, my answer relies on each side being partly correct, and each side correctly falsifying the other in some respects. Given that two Nobel prize-winners failed to reach a conclusion over ten years, I rate this as one of the more difficult problems that I set.)
 
So, at the risk of being hammered again as something worse than an Easter bunny, I wish to point out that the information from the given experiments is quite consistent with the textbooks as I know them. First, there is no evidence whatsoever that structural inversion did not occur (admittedly difficult to show with methyl iodide). The concept that there may be a small energy minimum on the reaction coordinate is expected if a transition state is stabilized so that it lies between two energy maxima, and if such a longer-lived intermediate can exist, the Uncertainty Principle requires that it has rotational uncertainty, let alone the classical rotational motion imparted by the collision. Finally, classical Debye-Hückel theory predicts stabilization of ionic intermediates by adjacent water molecules. There was nothing I could see in these experiments that is not in accord with the textbooks.
 
Actually, I agree with the commentator that the textbook discussion of the SN2 mechanism is an oversimplification; however, my criticisms lie outside the scope of these experiments. Perhaps that is the subject for another blog.
 
Posted by Ian Miller on Jul 26, 2012 12:59 AM BST
So far, most of my blogs on biofuels have focused on making sugars, with ethanol as the obvious end-point, although most of the arguments would apply equally to any fermentation, such as making butanol or acetone. I have done this not necessarily because I think this is the best option, but rather because I have been trying to cover the territory in an orderly fashion. There is one further way of making sugars: heat the polysaccharides sufficiently in the presence of a nucleophile. "Sufficiently" usually means generating significant pressure, since the appropriate temperatures are usually higher than 250 degrees C. If we use water, we get sugars directly, although we may also get degradation products. If we use something like an alcohol, potentially in the presence of catalysts, glycosides are formed; these are usually more stable but can be hydrolyzed subsequently, and if ethanol is used, the solution can be fermented directly. This process is relatively undeveloped; once upon a time, mercaptans were used this way to analyze polysaccharides, but there are now “better” methods. To the best of my knowledge, this possible route is invariably absent from proposals, which suggests that either the route is faulty, or there is a general lack of imagination. The question is, which?
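As a sketch of the overall stoichiometry being suggested, written per anhydroglucose unit of a glucan such as cellulose (other polysaccharides behave analogously; side reactions are ignored):

```latex
% Per anhydroglucose unit; overall stoichiometry only, ignoring side reactions.
\begin{align*}
  \mathrm{(C_6H_{10}O_5)} + \mathrm{C_2H_5OH}
      &\rightarrow \mathrm{C_8H_{16}O_6}
      && \text{(ethyl glucoside)}\\
  \mathrm{C_8H_{16}O_6} + \mathrm{H_2O}
      &\rightarrow \mathrm{C_6H_{12}O_6} + \mathrm{C_2H_5OH}
      && \text{(subsequent hydrolysis back to the sugar)}
\end{align*}
```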
 
One of my early efforts in this area was to use phenol, and good yields of the phenolic glycosides were obtained. Further, phenol is probably a better leaving group, so hydrolysis of the glycosides is straightforward. Unfortunately, as the perceptive may notice, there is a major drawback. No, it is not that phenol would kill the yeasts needed for fermentation; that is a difficulty, but it can be overcome. The real problem is that if the sugars start to decompose thermally, any formaldehyde so formed couples with phenol to form xanthene. This is not a particularly useful material, and it consumes two molecules of phenol for each molecule of formaldehyde. We might note that lignin is a good source of phenols, and for hardwoods there would be no xanthene difficulty; however, in this route for making biofuels the phenol glycosides decompose thermally before lignin degradation gets underway at a sufficient rate. As with so many ideas, as presented the use of phenols is a failure.
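The side reaction in question, written as an overall equation consistent with the two-phenols-per-formaldehyde consumption noted above:

```latex
% Overall formation of xanthene from phenol and formaldehyde (stoichiometry only).
\[
  2\,\mathrm{C_6H_5OH} + \mathrm{CH_2O} \rightarrow \mathrm{C_{13}H_{10}O} + 2\,\mathrm{H_2O}
\]
```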
 
However, in accord with the theme of this blog, there is more than one way to look at a problem. Should more effort be made to employ ethanol or methanol? Is there an option that is being overlooked? Is the whole concept a bad idea that should be put to a merciful conclusion? What do readers think? Theory involves a lot more than simply computing; applied theory should in principle be able to make headway on practical problems such as this, so the exercise could be useful.
 
Posted by Ian Miller on Jul 20, 2012 3:32 AM BST
In my opinion, it is not to list data and work done (although that is valuable), nor to make people comfortable, but rather to make statements that summarize knowledge. Thus stating that the force due to gravity is inversely proportional to the square of the distance between two bodies summarizes all the data on that topic. Of course, not everything is so well studied, but that does not mean we can avoid some obvious errors. One review (R. Brasser, Space Sci. Rev. DOI 10.1007/s11214-012-9904-2), which aims to explain the small size of Mars, came to mind. The standard theory involves a distribution (without turning points) of planetesimals, formed by a mechanism that is not understood, colliding gravitationally to form Mars-sized bodies called embryos, which then collide to form planets. This mechanism leads to various scenarios, depending on the assumed initial conditions, but whenever you get four rocky planets, Mercury and Mars always come out bigger than they actually are. So why are they so small?
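In symbols, the gravity example at the start of the paragraph above is just Newton's law of gravitation:

```latex
% The inverse-square law referred to above.
\[
  F = \frac{G\,m_1 m_2}{r^2}
\]
```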
 
The review shows that if the rocky planets formed from an annulus of planetesimals between 0.7 and 1 A.U. (1 A.U. is the Earth–Sun distance), then you get what we see, with the exception of Mercury. Why this annulus? One proposition is that Jupiter and Saturn migrated in, then migrated back out again, and while doing so cleared out a lot of planetesimals. (These have to be moved to permit the movement of the giants while conserving angular momentum and energy.) Therefore it was argued that the small size of Mars supports this theory. Further, it is argued that Mars must be simply an embryo, and hence formed early. Support for this comes from isotope measurements, specifically the ratio between 182Hf and 182W, which fixes the time of differentiation. (182W dissolves readily in iron, while 182Hf prefers to stay in silicates and, of course, decays to tungsten, so the levels indicate when the iron separated from the silicates.)
 
The review then argues that planetary water was brought in by the embryos. Venus is drier because its embryos formed in hotter regions, but if so, Mars should be much wetter per unit mass than Earth. There are three reasons for this: Mars formed in a cooler region; Earth has a large iron core, and from a chemical perspective iron is unlikely to bring water with it; and the enormous heat generated in embryo collisions should drive off a significant amount of water. (The collision of Theia with Earth formed an essentially anhydrous Moon and, according to modeling, a similar mass of silicates at about ten thousand degrees C, much of which was lost to space.)
 
For me, there was a glaring problem: having made a prediction, the review overlooked the fact that the prediction was just plain wrong. All the evidence is that while Mars definitely had large amounts of water flowing at some stage, its total water is only a few per cent of that of Earth, per unit mass.
 
So, what have we got? A theory that invokes a very specific migration of two planets to explain the small size of another, while ignoring the small size of the remaining one, and which makes only one prediction, a prediction that is not met. Things that are uncomfortable are ignored. Chemists wouldn't do that, of course, would they? Watch for a future post.
Posted by Ian Miller on Jul 14, 2012 5:06 AM BST
As far as I am aware, there were no papers published in June that were critical to my theory of planetary formation, so my ebook propositions last another month, but there were papers of interest, one of which is the basis of this blog. It is often argued that scientists do not communicate with the public very well. Part of the reason might be that sometimes we do not have a clear message. We may have very clear data, but there may be more than one way to interpret it, and we tend to see what we want to see.
 
There was a recent paper in Nature that measured the reflectance of Shackleton Crater, a polar lunar crater. The data established three points:
(a) The floor of the crater, which receives no sunlight, was brighter than the usual lunar material;
(b) The walls of the crater, which receive sunlight, were brighter than the usual lunar material;
(c) Standard lunar regolith plus 22% water would give the same reflectance as the floor of the crater.
 
There were two interpretations. One, a comment in Nature, argued that because of the sunlight striking them, the crater walls must be anhydrous, and therefore the floor was likely to comprise eroded wall material. A second, on the NASA website, based on the principle that there is no easy means of mass transport on the Moon, argued that the floor is most likely to have frosts, which would be stable for millions of years. In this context, gravitational collapse would be expected to produce much brighter areas on the rim of the crater, but much less so in the centre, and this was not observed. There is one further possibility: the impactor consisted of an abnormally bright material, and we are viewing the residue.
 
The problem, of course, is that finding water would be highly desirable from NASA’s point of view, because it would then be easier to get further funding. Absence of water is more desirable from certain theorists’ points of view, because the standard theory of lunar formation involves the Moon condensing from molten silicates formed through a collision of a massive body, Theia, with Earth. In short, it is only too easy to interpret the data in terms of what you hope, rather than what you know.
 
What we know is (a – c) above; what we need is, at a minimum, some spectral data. The news media picked up on this story, but usually only one half of it, which results in conflicting stories in the public domain, and that does not help the credibility of science. At the risk of being repetitive, I think we need a better means of analyzing data and presenting theories.
Posted by Ian Miller on Jul 6, 2012 6:01 AM BST