Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


Latest Posts

So far, most of my blogs on biofuels have focused on making sugars, with ethanol as the obvious end-point, although most of the arguments would apply equally to any fermentation, such as making butanol or acetone. I have done this not necessarily because I think this is the best option, but rather because I have been trying to cover the territory in an orderly fashion. There is one further way of making sugars: heat the polysaccharides sufficiently in the presence of a nucleophile. "Sufficient" usually means generating significant pressure, since the appropriate temperatures are usually higher than 250 degrees C. If we use water, we get sugars directly, although we may also get degradation products. If we use something like an alcohol, potentially in the presence of catalysts, glycosides are formed; these are usually more stable but can be hydrolyzed subsequently, and if ethanol is used, the solution can be fermented directly. This process is relatively undefined; once upon a time, mercaptans were used this way to analyze polysaccharides, but there are now “better” methods. To the best of my knowledge, this possible route is invariably absent from proposals. This suggests that either the route is faulty, or there is a general lack of imagination. The question is, which?
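To make the end-point concrete, here is the overall stoichiometry for the ethanol case, as a minimal sketch; the balanced equations are standard carbohydrate chemistry, while the conditions and catalysts are precisely the undefined part.

```latex
% Ethanolysis of cellulose to ethyl glucosides, hydrolysis, then fermentation
\begin{align*}
\mathrm{(C_6H_{10}O_5)_n + n\,C_2H_5OH} &\rightarrow \mathrm{n\,C_8H_{16}O_6} && \text{(ethyl glucosides)}\\
\mathrm{C_8H_{16}O_6 + H_2O} &\rightarrow \mathrm{C_6H_{12}O_6 + C_2H_5OH} && \text{(hydrolysis)}\\
\mathrm{C_6H_{12}O_6} &\rightarrow \mathrm{2\,C_2H_5OH + 2\,CO_2} && \text{(fermentation)}
\end{align*}
```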
 
One of my early efforts in the area of biofuel was to use phenol, and good yields of the phenolic glycosides were obtained. Further, phenol is probably a better leaving group, so hydrolysis of the glycosides is straightforward. Unfortunately, as the perceptive may notice, there is a major drawback. No, it is not that phenol would kill the yeasts that are needed for fermentation; that is a difficulty, but it can be overcome. The real problem is that if the sugars start to decompose thermally, any formaldehyde so formed couples two phenols to form xanthene. This is not a particularly useful material, and it consumes two molecules of phenol for each molecule of formaldehyde. We might note that lignin is a good source of phenols, and for hardwoods there would be no xanthene difficulty; however, in this route for making biofuels the phenol glycosides decompose thermally before lignin degradation gets underway at a sufficient rate. As with so many ideas, as presented the use of phenols is a failure.
 
However, in accord with the theme of this blog, there is more than one way to look at a problem. Should more effort be made to employ ethanol or methanol? Is there an option that is being overlooked? Is the whole concept a bad idea that should be brought to a merciful conclusion? What do readers think? Theory involves a lot more than simply computing; applied theory should in principle be able to make a lot of headway on practical problems such as this, so the exercise could be useful.
 
Posted by Ian Miller on Jul 20, 2012 3:32 AM BST
In my opinion, the purpose of a review is not to list data and work done (although that is valuable), nor to make people comfortable, but rather to make statements that summarize knowledge. Thus stating that the force due to gravity varies inversely with the square of the distance between bodies summarizes all data on that topic. Of course, not everything is so well studied, but that does not mean we can excuse some obvious errors. One review (R. Brasser, Space Sci. Rev. DOI 10.1007/s11214-012-9904-2), which aims to explain the small size of Mars, came to mind. The standard theory involves a distribution (without turning points) of planetesimals, formed by a mechanism that is not understood, colliding gravitationally to form Mars-sized bodies called embryos, which then collide to form planets. This mechanism leads to various scenarios, depending on assumed initial conditions, but whenever you get four rocky planets, Mercury and Mars always come out bigger than they actually are. So why are they so small?
 
The review shows that if the rocky planets formed from an annulus of planetesimals lying between 0.7 and 1 A.U. (1 A.U. is the Earth-Sun distance), then you get what we see, with the exception of Mercury. Why this annulus? One proposition is that Jupiter and Saturn migrated in, then migrated back out again, and while doing so cleared out a lot of planetesimals. (These have to be moved to permit the movement of the giants while conserving angular momentum and energy.) Therefore it was argued that the small size of Mars supported this theory. Further, it is argued that Mars must be a simple embryo, and hence formed early. Support for this comes from isotope measurements, specifically the ratio between 182W and 182Hf, which fixes the time of differentiation. (Tungsten dissolves readily in iron while hafnium prefers to stay in silicates, and 182Hf, of course, decays to 182W, so the levels indicate when the iron separated from the silicates.)
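For readers not familiar with the chronometer, the relevant decay is the short-lived one below; the half-life is the standard literature value, not something taken from the review.

```latex
% Hafnium-tungsten chronometer: 182Hf decays (via 182Ta) to 182W
\[
{}^{182}\mathrm{Hf} \;\xrightarrow{\beta^-}\; {}^{182}\mathrm{Ta} \;\xrightarrow{\beta^-}\; {}^{182}\mathrm{W},
\qquad t_{1/2} \approx 8.9\ \mathrm{Myr}
\]
```

Because the parent is effectively extinct after a few tens of millions of years, the tungsten isotope composition of the silicates is frozen in very early, which is what makes it useful for dating core formation.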
 
The review then argues that planetary water was brought in by the embryos. Venus is drier because its embryos formed in hotter regions, but if so, Mars should be much wetter per unit mass than Earth. There are three reasons for this: Mars formed in a cooler region; Earth has a large iron core, and from a chemical perspective iron is unlikely to bring water with it; and finally, the enormous heat generated in embryo collisions should drive off a significant amount of water. (The collision of Theia with Earth formed an essentially anhydrous Moon, and, according to modeling, a similar mass of silicates at about ten thousand degrees C, much of which was lost to space.)
 
For me, there was a glaring problem: having made a prediction, the review overlooked the fact that it was just plain wrong. All the evidence is that while Mars definitely had large amounts of water flowing, the total water is only a few per cent of that of Earth, per unit mass.
 
So, what have we got? A theory that invokes a very specific migration of two planets to explain the small size of another, while ignoring the small size of the remaining one, and which makes only one prediction, and that prediction is not met. Things that are uncomfortable are ignored. Chemists wouldn't do that, of course, would they? Watch for a future post.
Posted by Ian Miller on Jul 14, 2012 5:06 AM BST
As far as I am aware, no papers published in June were critical for my theory of planetary formation, so my ebook propositions last another month, but there were papers of interest, one of which is the basis of this blog. It is often argued that scientists do not communicate with the public very well. Part of the reason might be that sometimes we do not have a clear message. We may have very clear data, but there may be more than one way to interpret it, and we tend to see what we want to see.
 
There was a recent paper in Nature that measured the reflectance of Shackleton Crater, a polar lunar crater. The data established three points:
(a) The floor of the crater, which receives no sunlight, was brighter than the usual lunar material,
(b)  The walls of the crater, which receive sunlight, were brighter than the usual lunar material.
(c)  Standard lunar regolith plus 22% water would give the same reflectance as the floor of the crater.
 
There were two interpretations. One, a comment in Nature, argued that because of the sunlight striking them, the crater walls must be anhydrous, and therefore the floor was likely to comprise eroded wall material. A second, on the NASA website, based on the principle that there is no easy means of mass transport on the Moon, argued that the floor is most likely to have frosts, which would be stable for millions of years. In this context, gravitational collapse would be expected to produce much brighter areas on the rim of the crater, but much less so in the centre, and this was not observed. There is one further possibility: the impactor consisted of an abnormally bright material, and we are viewing the residue.
 
The problem, of course, is that finding water would be highly desirable from NASA’s point of view, because it would then be easier to get further funding. Absence of water is more desirable from certain theorists’ points of view, because the standard theory of lunar formation involves the Moon condensing from molten silicates formed through a collision of  a massive body, Theia, with Earth. In short, it is only too easy to interpret the data in terms of what you hope, rather than what you know.
 
What we know is (a – c) above; what we need is, at a minimum, some spectral data. The news media picked up on this story, but usually only one half of it, which results in conflicting stories in the public domain, and that does not help the credibility of science. At the risk of being repetitive, I think we need a better means of analyzing data and presenting theories.
Posted by Ian Miller on Jul 6, 2012 6:01 AM BST
I recently attended a talk where computations on some quite complicated molecules gave excellent agreement with observation. What concerned me was that the biggest single term was usually the energy required to provide compliance with the Exclusion Principle. I emailed the speaker for an explanation of how this could arise; so far I have not received a response, so if anyone reading this wishes to explain, I would be very interested.
 
In my interpretation of quantum mechanics, compliance with the Exclusion Principle should involve zero energy change. I assume the wave is real, and a stationary state arises when the wave function is single-valued, i.e. as the wave completes any number of periods, it has the same value, which means that neither the value nor what it represents moves. Accordingly, following any number of periods, the particle generating such a wave shows zero expectation change of position at that value of the wave function, hence it has shown zero net acceleration between quanta of action. This is the only condition consistent with Maxwell’s electrodynamics that permits the electron not to radiate energy.
 
If so, the Exclusion Principle arises as follows. The wave can only be single-valued when the action is quantized. For a single electron, such quantization requires a two-cycle period; for the 1s orbital there are no nodes, and two cycles are required for a crest and a trough. For higher wave functions the same applies, although the argument is more complicated. Now, a fundamental property of waves is that two equivalent waves travelling in opposing directions form a stationary wave with half the wavelength. Adding a third electron, and its corresponding wave, cannot under any circumstances permit a stationary wave, in which case Maxwell’s electrodynamics ensures that the state is totally unstable, and must either radiate or absorb electromagnetic energy. The Exclusion Principle follows, with, at this stage, no reference to magnetism. One important point, of course, is that the waves must have the correct phase relationship, and it is this, together with the corresponding wave component due to magnetic interactions, that gives the required quantum number relationship.
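For reference, the textbook identity behind that property of waves is the superposition of two equal waves travelling in opposite directions; the nodes of the resulting stationary wave are spaced half a wavelength apart.

```latex
% Superposition of two counter-propagating waves of equal amplitude
\[
\sin(kx - \omega t) + \sin(kx + \omega t) = 2\sin(kx)\cos(\omega t)
\]
```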
 
What this claims is that the Exclusion Principle follows if the wave is real, if action is quantized, and if Maxwell’s equations apply. Whatever else, I shall back Maxwell’s equations almost beyond any other piece of physics. If so, the Exclusion Principle is NOT a piece of independent physics. The requirement to comply with the Exclusion Principle in forming a bond is that there must be a correct phase relationship between the waves of two unpaired electrons. But it is a property of waves that a phase shift does not involve a change of energy, in which case the energy term arising from the need to comply with the Exclusion Principle must be zero.
 
Either the above is wrong or the computations are wrong. Agreement with observation does not imply truth; the most successful theory ever in terms of getting correct answers over the longest period of time was the theory of Claudius Ptolemy, and that was just plain wrong. So you see why I am unimpressed by these computations.
Posted by Ian Miller on Jun 22, 2012 3:00 AM BST
One problem I am trying to allude to in these blogs is that we have to avoid wasted effort. We have only so much money to devote to developing new technology, and once something is completed, it should not be left in a form where the knowledge decays. As another example, there is a means of making ethanol through simple pyrolysis, and it is infuriating because, while a considerable amount of money was invested in developing it, I have no idea how well it would work in practice. The problem is that it was developed towards the end of the last energy crisis, the scientists will have retired or will be dead, and there is insufficient information available. Results were written up in scientific papers; however, the practical experience is lost, as are the answers to questions seemingly omitted from the papers. The omissions were not important at the original scale of operation, but they could become critical on scale-up.
 
The concept is that when cellulose is pyrolysed, the first product formed is mainly levoglucosan, or 1,6-anhydroglucose. This can be readily prepared on the laboratory scale through simple vacuum pyrolysis, and the conditions for doing this were extensively studied by Fred Shafizadeh. The key problem is to get the levoglucosan out of the biomass before it can further react, so Shafizadeh heated small volumes of finely divided cellulose under vacuum. That works well, but it is not that easy to scale up.
 
Claims for the solution to that problem came from the old Soviet Union. Two separate schools heated biomass chips (presumably reasonably finely divided) in fast-moving steam at about 350 degrees C and about 6 bar. There are then two options. The steam can be cooled and depressurized, in which case the sugars condense out, then the steam can be repressurized and reheated; this is the most energy efficient. Alternatively, the water can be condensed as well, to give a dilute solution of sugars. There can also be a process that lies in between: some of the steam is condensed to give a stronger solution of sugars. Such processes, provided they are properly engineered to recycle heat, should be reasonably energy efficient, if they work according to plan. From what I can gather, this system was engineered up to process several kg/hr continuously, but details are hard to come by. Basically, if someone wanted to repeat this, they would have to start from scratch as far as engineering is concerned. There is also no real information available relating to the lignin fragmentation products that presumably came over with the sugars. This could be important because phenolics should cause enzymes to cease functioning.
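To see why the first option is claimed to be the most energy efficient, here is a back-of-envelope sketch of the heat duties per kilogram of circulating steam. Every number in it is either a round textbook property of water or a guess labelled as such; none of it comes from the original Soviet work, which is exactly the information that has been lost.

```python
# Illustrative comparison of the two steam-handling options described above.
# Values are approximate textbook figures or outright guesses (marked).

LATENT_HEAT = 2.26   # MJ/kg, latent heat of vaporization of water (approximate)
CP_STEAM = 0.0021    # MJ/(kg K), specific heat of steam (approximate)
DELTA_T = 200.0      # K, guessed temperature swing for cooling and reheating

# Option 1: cool/depressurize just enough to drop the sugars out, then
# repressurize and reheat -- only sensible heat needs to be re-supplied.
sensible_duty = CP_STEAM * DELTA_T

# Option 2: condense the steam fully to a dilute sugar solution -- the latent
# heat must be re-supplied to raise fresh steam, unless it can be recovered.
condensing_duty = LATENT_HEAT + sensible_duty

print(f"Option 1 (reheat only):   ~{sensible_duty:.2f} MJ per kg of steam")
print(f"Option 2 (full condense): ~{condensing_duty:.2f} MJ per kg of steam")
print(f"Ratio:                    ~{condensing_duty / sensible_duty:.0f}x")
```

The numbers are crude, but they make the point: unless the latent heat is recycled, full condensation carries several times the heat penalty of simply reheating the steam.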
 
Is this a genuine possible answer to the problem of making biofuels? I have no idea. It passes the test of “looking reasonable”, and it should be a net energy producer because the lignin alone should power the system, but there are two problem areas: can the system be engineered to run reliably, and are there too many impurities in the product to produce useful fuel? What is a real shame is that the original scientists and engineers would have been able to answer those questions, at least at the level they operated on, but the information is lost. That sort of tragedy should not be repeated.
Posted by Ian Miller on Jun 14, 2012 10:27 PM BST
Results from Dawn at Vesta were published in Science, but they were not especially informative, at least for theories of planetary accretion. There were data on the surface composition, which confirmed the proposition that certain meteorites came from Vesta and that Vesta differentiated. Somewhat confusingly, the density was determined to be 3.456 g/cm³, which is not that much different from orthopyroxene, and this makes it difficult to see how the large core could be iron. It was argued that the outer layers must be very porous, suggestive of volatiles having been emitted. In this context, Vesta has strange stripes similar to those of Phobos, which also appears to have been struck on the end by an impact almost large enough to disrupt it. These results do not really add much to how Vesta formed, largely because they mainly confirm what was already “known”, and also because Vesta is very atypical for an asteroid; it may have formed elsewhere and migrated.
 
Soon Dawn will investigate Ceres. What do I expect there? While I made over 80 predictions in the ebook, I left out Ceres. If we assume, from its spherical shape, it has differentiated, my theory of its formation would suggest that the surface will probably be largely ice covered. The ice may be rich in organic material, or it may have a lot of blackish carbonaceous material, or it may be reasonably white, the variation depending on how much heat was generated in its interior for how long, which is unknowable.
 
Two theories were espoused that largely contradict my work. One argued that carbonaceous chondrites differ from ordinary chondrites because they formed earlier (Icarus, 220, 162). I argue that they are different because they formed at a different temperature in a different place. In this context, my criticism of the “earlier” theory is: why did these bodies not continue accreting? The second paper (Icarus, 220, 144) proposed that the great water flows on Mars at about 3.8 Gy BP were caused by the Argyre impactor heating the surface, giving a greenhouse atmosphere based on 6.5 bar of water. Such water would not lie on the surface, and the erosion features would be due to water flowing as the heating collapsed. It is unclear to me why the water did not snow out as opposed to raining out, particularly since the Martian winter is twice as long as ours, and the south polar region is dark for that time. There is reasonably clear evidence that fluid flow on Mars occurred intermittently over at least a 200 My period, and I for one cannot see how one impactor could manage that.
 
Three papers were in accord with my theory, although two would be in accord with most theories. The UV spectrum of Enceladus showed ammonia hydrate (Icarus, 220, 29). This is hardly surprising, though, and while required for my theory, it is expected from a number of others. The second proposed that the lunar basalts arose from very deep melting, in the presence of volatiles, and further, that these are more localized to the nearside. This is consistent with the Theia impactor not being fully devolatilized, although again that is not specific to my theory. Finally, a paper in Nature (485, 490) proposed that there was a great geological discontinuity on Earth at about 2.5 Gy BP, and that prior to this volcanic gases were more reduced. Standard theory has argued the gases were always oxidized; my theory argues that all volatiles on the rocky planets were initially reduced, because they had to be accreted as solids and became volatile by reaction with water (which is why Venus has more gas than Earth but essentially no water: Venus accreted less water because it was hotter, and it used up almost all of it generating the atmosphere, and in subsequently oxidizing it).
 
Posted by Ian Miller on Jun 2, 2012 5:13 AM BST
Pyrolysis of biomass is one of the oldest technologies, originally used to make charcoal for iron smelting, but recently it has been advocated as a means of making liquid fuels. I find this difficult to come to grips with, and I question, from a theoretical point of view, whether this process is worth persisting with or whether we should move on to something else. Are we wasting valuable money pursuing a lost cause? (Gasification as an objective is excluded from this discussion.)
 
Saleh et al. (Energy and Fuels, 23, 3767 (2009)) have given an outline of how to optimize pyrolysis of biomass. The conditions required to optimize liquid products are a very high heating rate, finely ground feed, and a temperature of 500 degrees C. The yield of liquids can be as high as 44%, although these may have up to 40% water in them. The rest is char and gas. The problem then is, what to do with the liquids? The major constituents are often formic and acetic acids, 1-hydroxy-2-propanone, furfural and a range of phenols. There are also at least a hundred minor components, some of which reach a few per cent. Saleh et al. suggest that this oil can be converted to synthesis gas for Fischer-Tropsch synthesis, but this seems somewhat wasteful – if synthesis gas were required, why not gasify the biomass as a whole? Similarly, the oils can be upgraded through hydrogenation technology (at the cost of losing the small carbon fragments), but this becomes yet a further step, and if you are going to do that, why not hydrogenate the biomass in the first place? Some suggest using it as low-grade heating oil, but if you merely want to heat something, it might be more efficient to burn the original biomass. One thing that should not be done is to store it for a significant time, because most pyrolysis oils appear to be somewhat reactive, and over time they darken and become much more viscous.
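Taking the upper-end figures above at face value, a quick mass balance shows how little upgradable organic material the liquids actually contain; the percentages are as quoted, the rest is simple arithmetic.

```python
# Rough mass balance for fast pyrolysis using the figures quoted above.

biomass = 100.0         # kg of dry feed, as a basis
liquid_yield = 0.44     # upper end of the liquid yield
water_in_liquid = 0.40  # upper end of the water content of that liquid

crude_oil = biomass * liquid_yield              # total bio-oil, water included
organic_oil = crude_oil * (1.0 - water_in_liquid)
char_and_gas = biomass - crude_oil

print(f"Crude bio-oil:    {crude_oil:.0f} kg")
print(f"Organic fraction: {organic_oil:.0f} kg (~{organic_oil / biomass:.0%} of the feed)")
print(f"Char and gas:     {char_and_gas:.0f} kg")
```

So, at best, roughly a quarter of the feed ends up as organic liquid, and that liquid is the reactive mixture just described.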
 
The composition is highly dependent on how you do the pyrolysis, and on the raw material. The reason for this is that while the initial pyrolysis steps are quite simple, the products react further. Lignins are based on the free-radical condensation of phenols substituted at the 4-position with CH=CH-CH2-OH groups; the phenol may have 0, 1 or 2 methoxyl groups adjacent to the phenolic group, depending on the nature of the plant. During pyrolysis, lignins tend to break either at the 4-position or one carbon further along, so the products tend to be methoxylated phenols, 4-methyl methoxylated phenols, and aldehydes such as vanillin, together with a variety of other 'bits and pieces' from the alkyl group. As far as I am aware there is no preferred procedure that makes more useful products. Pieces not attached to phenols are effectively lost.
 
Cellulose first decomposes almost entirely to levoglucosan (1,6-anhydroglucose), which is volatile at the decomposition temperature. However, it is also highly reactive, and if it cannot be removed from the solid in a very short time, it reacts with something else and forms a cascade of materials, most of which would generally be described as 'tars'. The same can be said for a number of other products of biomass pyrolysis, which essentially requires the material to be very much size-reduced (which is expensive) and the heat transfer rate to be very high.
 
To summarize, this technology converts about a third to a half of the biomass to a liquid with a wide range of reactive components that have little direct use. Superficially, pursuing this technology would seem to be something of a last resort, yet there is evidence that a lot of money is being spent attempting to make it work. There appear to be more papers on this topic in the energy-related journals that I read than on any other. Why? Someone is just plain wrong; the question is, who?
Posted by Ian Miller on May 22, 2012 4:19 AM BST
There are three predominant interpretations of quantum mechanics: in the Copenhagen Interpretation the event (say, the path of a particle following diffraction) is determined probabilistically by the act of observation, the Multiverse interpretation has all probabilities eventuate somewhere, while the Pilot Wave has the event decided causally. One major failing of the Pilot Wave is that, in its current form, it is almost impossible to apply to chemistry.
 
My interest in this problem started in my honours year, with a lecture on the hydrogen molecule. I stopped the lecture to point out that the Hamiltonian operator as presented led to the system becoming increasingly stable as the internuclear distance D diminished. The operator had no causal reason to diminish electron probability between the nuclei, and the lecturer could only agree that something was wrong. Shortly after, it occurred to me that the answer must lie in wave interference, so I tried a wave approach. Ten minutes later I had an analytical answer: 1/3 the energy of the hydrogen atom. (Look it up – it's not bad!)
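For anyone who does look it up, the comparison is roughly this (the figures are standard reference values, not taken from that lecture):

```latex
% One third of the hydrogen-atom binding energy vs. the H-H bond energy
\[
\tfrac{1}{3} \times 13.6\ \mathrm{eV} \approx 4.5\ \mathrm{eV}
\qquad \text{cf.} \qquad
D(\mathrm{H{-}H}) \approx 4.5\ \mathrm{eV}\ (436\ \mathrm{kJ\,mol^{-1}})
\]
```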
 
Either the wave determines the motion of the particle or it does not. If it does not, what is it doing? And why do all quantum computations end up in agreement with the predictions that follow if it does? With that thought, I decided that for me the wave must determine the particle motion, but how? Conceptually, this is the Pilot Wave, but in detail, what I have come up with is somewhat different.
 
For the wave to act on the particle, they must interact. Now, assume the particle has a velocity v. What is the wave doing? With a little algebra, it is easy to show that the phase velocity (the velocity of, say, a wave crest) is given by E/p, where p is the momentum of the particle (by derivation). The problem then is, what is E? Some textbooks woodenly quote E = mc². This has the rather remarkable property that the phase velocity always exceeds the speed of light, and a stationary particle gives off waves at infinite velocity. Some say that this does not violate relativity because the wave carries no information. That is absolute nonsense: it defines the velocity of the particle. Worse than that, it defines an absolute velocity, which requires a fixed background as a reference, and that also violates the most fundamental principle of relativity. Heisenberg objected, and put E equal to the kinetic energy, which has the rather odd property of requiring the wave to travel at half the particle velocity, which makes it difficult to see how it can affect the particle.
 
Which brings me to my version. I put E = mv², twice the kinetic energy, which means the wave must contain energy, and hence is real. The wave now travels at exactly the same velocity as the particle, and hence can affect it. Now, the wave function only works if it is in the form exp(2πiS/h). (A sine wave does not give you quantum mechanics.) My guess is that this puts the wave in an additional dimension, which is almost required for energy conservation, in which case quantum mechanics is the first actual evidence of the additional dimensions proposed by string theorists.
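For the record, the algebra behind the three choices of E is elementary. With the particle momentum written p = mv (in the first case the relativistic factors cancel anyway), the phase velocity E/p in each case is:

```latex
% Phase velocity E/p for the three choices of E discussed above
\[
v_{\mathrm{phase}} = \frac{E}{p} =
\begin{cases}
\dfrac{mc^2}{mv} = \dfrac{c^2}{v} > c, & E = mc^2,\\[2ex]
\dfrac{\tfrac{1}{2}mv^2}{mv} = \dfrac{v}{2}, & E = \tfrac{1}{2}mv^2,\\[2ex]
\dfrac{mv^2}{mv} = v, & E = mv^2.
\end{cases}
\]
```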
 
Of course, you won't believe that. So, how do you explain how the wave function makes the particles do what they do?
Posted by Ian Miller on May 9, 2012 12:01 AM BST
Just when I thought my sequence of blogs on making ethanol as a biofuel was complete, I found another option that involved two major projects in the US (Range Fuels and Enerkem) and, in my opinion, should never have been considered.  Both first made synthesis gas by gasifying biomass, a technology that is reasonably well-developed. As an example, in the mid 1970s Union Carbide operated a 200 t/day plant at South Charleston, West Virginia, although it is unclear how many continuous hours were operated. The Purox gas typically contained about 26% hydrogen, 40% CO and 23% CO2, and some source of additional hydrogen (and a significant additional cost) appears to be required.
 
So, assume you have syngas and want to make fuels. What next? Most analyses would select one of three obvious routes that are deployed commercially: Fischer-Tropsch (FT), methanol, or short alcohols, which is an abbreviated FT route. However, from what I can make out, these two ventures elected to make ethanol the hard way. They both started off by making methanol (and Enerkem apparently has four plants operating that are technical successes), and I love the next step, for personal reasons noted below. They propose to react methanol with restricted amounts of CO to make acetic acid/methyl acetate. All the acetic acid is then converted to methyl acetate in a further step. The methyl acetate is then hydrogenated to make a methanol/ethanol mix, from which the methanol is recycled and ethanol produced. After huge investments, neither venture could convince the market this is economical. While the jury is still out, it appears that these ventures will not produce ethanol.
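Written out, the proposed route, as far as I can make out from the public descriptions, is three further catalytic steps after the methanol plant:

```latex
% Methanol to ethanol via carbonylation, esterification and hydrogenolysis
\begin{align*}
\mathrm{CH_3OH + CO} &\rightarrow \mathrm{CH_3COOH} && \text{(carbonylation)}\\
\mathrm{CH_3COOH + CH_3OH} &\rightarrow \mathrm{CH_3COOCH_3 + H_2O} && \text{(esterification)}\\
\mathrm{CH_3COOCH_3 + 2\,H_2} &\rightarrow \mathrm{C_2H_5OH + CH_3OH} && \text{(hydrogenolysis)}\\
\text{net:}\quad \mathrm{CH_3OH + CO + 2\,H_2} &\rightarrow \mathrm{C_2H_5OH + H_2O}
\end{align*}
```

Each step needs its own reactor, separations and recycles, which is exactly the "too many steps" problem raised below.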
 
My question is, why was this alternative route to ethanol not rejected on theoretical grounds prior to the loss of so much money? Maybe I am perverse, but shouldn't a simple analysis have suggested "too many steps to compete with the single-step process"? In my opinion, hundreds of millions of dollars could have been saved by an hour of sound theoretical analysis. The reason I wrote my first ebook, and started this blog, is to try to convince people that theory is not confined to abstractions or marathon computations. Thinking first can be productive, and it is cheap!
 
Now, to round off, a little personal note to explain a comment above. When I was a first-year student, I undertook extra reading, and not the usual sort. Accordingly, when I got the question in a test, "Convert an alcohol to an acid with one extra carbon atom", I could not resist writing down "R-OH + CO -> product", and listed a catalyst, a temperature and a pressure. Yes, I know what they wanted, but I was strong on alternatives even then. This got zero marks, so I protested. "You can't do anything like that," they explained. Now that was not satisfactory. "According to Paul Karrer," I replied, "IG Farbenindustrie used this to make thousands of tons," then I added, "He is a reputable chemist, isn't he?" (He was one of the 1937 Nobel Prize winners in chemistry.) Rather grumpily, they had to agree he was, but they did not give me any marks because you could not do that in the lab, and when I looked as if I might question that, they added, "at least in glassware". That was not specified in the question, but I did not care. I had made a point.
Posted by Ian Miller on May 2, 2012 3:31 AM BST
You announce an alternative theory and further papers come forward. There is a natural tension: are you falsified already? At the end of each month I shall report on progress of which I am aware, good, bad or indifferent. For April, the most relevant papers I came across were as follows.
 
Smith et al. (Science 336: 214-217): Mercury has a relatively high moment of inertia (0.353 ± 0.014). Since a uniform sphere has a moment of inertia of 0.4, this indicates that Mercury has a relatively thin crust. What was postulated was a silicate crust ca. 300 km thick, 100 km of iron sulphide, then an iron-rich liquid core, possibly over a solid inner core, all of which was claimed to be consistent with Mercury having considerably more reduced components, similar to those of enstatite chondrites. How does that sit? I made a prediction that Mercury would have more reduced components, and specified phosphides, nitrides and carbides, which are found in enstatite chondrites. A higher sulphide content for Mercury had previously been proposed, and I never considered a separate layer of iron sulphide, because iron sulphide is usually considered to dissolve in an iron core.
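As a sanity check on what that number means: a uniform sphere has C/MR² of exactly 0.4, and concentrating mass in a dense core pulls the value down. The little two-layer model below uses my own round densities and the shell thickness postulated above, purely for illustration; it is not the model of Smith et al.

```python
# Toy two-layer model of Mercury's normalized moment of inertia C/MR^2.
# Densities and layer thickness are my own illustrative assumptions.

def moi_factor(core_fraction, rho_core, rho_mantle):
    """C/MR^2 for a sphere with a core of fractional radius core_fraction."""
    x = core_fraction
    mass_term = rho_mantle + (rho_core - rho_mantle) * x**3
    inertia_term = rho_mantle + (rho_core - rho_mantle) * x**5
    return 0.4 * inertia_term / mass_term

R = 2440.0        # km, Mercury's radius
shell = 400.0     # km, assumed outer shell (silicate crust plus FeS layer)
x = (R - shell) / R

print("Uniform sphere:        0.400")
print(f"Two-layer toy Mercury: {moi_factor(x, rho_core=7000.0, rho_mantle=3300.0):.3f}")
print("Observed:              0.353 +/- 0.014")
```

With those guesses the factor comes out close to the observed value, which at least shows the sense in which 0.353 points to a thin outer shell over a large dense core.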
 
Kleine et al. (Geochim. Cosmochim. Acta 84: 186-203): a number of angrites (a small but diverse group of refractory mafic to ultramafic meteorites from a body that had differentiated) were shown to have originated from one body, and that body apparently differentiated twice, both events occurring within 2 My of the formation of calcium-aluminium inclusions. Actual accretion of the parent body must have occurred within 1.5 My. This is harder to judge. I require the rocky planets other than Mercury to accrete between 1 and 2 My, with Mercury essentially complete after 1 My, although there would be numerous later crater-forming strikes. Since we do not know what the parent body was, standard theory might argue that it was one of the planetesimals, although standard theory has no mechanism by which planetesimals form; they are the assumed starting point.
 
Fastook et al. (Icarus 219: 25-40): evidence was found for subglacial meltwater channels in the south circumpolar Dorsa Argentea (Mars) dating from the Noachian/Hesperian. The authors state that these data require a local temperature in the range of -50 to -75 °C, that the data are consistent with basal melting but not "top-down" melting, and that this contradicts the concept of a warm and wet early Mars. My explanations for the Martian fluvial systems assumed temperatures never averaged much above -60 °C, and were probably generally about -80 °C, which equally contradicts the early "warm and wet" Mars. I also provided a mechanism for basal melting to explain the great chaotic flows, and the same mechanism would apply here, except that the fluid could escape, hence the channels.
 
Yaoling Niu (RSC Advances 2: 3587-3591): Zr-Hf and Nb-Ta are effectively elemental twins in their standard valence states (+4 and +5 respectively), and standard theory argues they should not separate during geological processes. The assumption also is that the Earth has always been basically "oxidized", as shown by the distribution of vanadium and chromium in various ancient and modern magmas. As Niu shows, perhaps they should behave similarly, but seemingly they do not. My proposed mechanism for Earth's formation requires the original material to be reduced, more like enstatite chondrites. These pairs do not have equivalent redox potentials, and hence would behave differently under certain reduced conditions.
 
Readers will have to form their own opinion as to the relative success of the theory, but I feel that so far it is very much still alive.
Posted by Ian Miller on Apr 26, 2012 3:26 AM BST