Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


Perhaps one of the more interesting questions is, where did Earth's volatiles come from? The generally accepted theory is that Earth formed by the catastrophic collisions of planetary embryos (Mars-sized bodies), which effectively turned Earth into a giant ball of magma, at which time the iron settled to the core through having a greater density, and took various siderophile elements with it. At this stage, the Earth would have been reasonably anhydrous. Subsequently, Earth was bombarded with chondritic material from the asteroid belt that was dislodged by Jupiter's gravitational field (including, in some models, Jupiter migrating inwards and then out again), and it is from here that Earth gets its volatiles and its siderophile elements. This bombardment is often called "the late veneer".

In my opinion, there are several reasons why this did not happen, which is where these papers become relevant. What are the reasons? First, while there was obviously a bombardment, only carbonaceous chondrites could deliver the volatiles that way, and if there were sufficient mass of them to supply Earth, there should also have been a huge mass of silicates delivered by the more common bodies. There is also the problem of atmospheric composition. Mars is closest to the asteroid belt, yet it is hit relatively infrequently for its cross-section, and it is hit by moderately wet bodies almost totally deficient in nitrogen. Earth is hit by a large number of bodies carrying everything, yet the Moon is seemingly not hit by wet or carbonaceous bodies. Venus, meanwhile, is hit by more bodies that are very rich in nitrogen, but relatively dry. What does the sorting?
 
The first paper (Nature 501: 208-210) notes that if we assume the standard model by which core segregation took place, the iron would have removed about 97% of the Earth's sulphur and transferred it to the core. If so, the Earth's mantle should exhibit a fractionated 34S/32S ratio according to the relevant metal-silicate partition coefficients, together with fractionated siderophile metal abundances. However, it is usually thought that Earth's mantle is both homogeneous and chondritic for this sulphur ratio, consistent with the acquisition of sulphur (and other siderophile elements) from chondrites (the late veneer). An analysis of mantle material from mid-ocean ridge basalts displayed heterogeneous 34S/32S ratios that are compatible with binary mixing between a low 34S/32S ambient mantle and a high 34S/32S recycled component. The depleted end-member cannot reach a chondritic value, even if the most optimistic estimate of surface sulphur is added. Accordingly, these results imply that the mantle sulphur is at least partially a product of the original accretion, and that not all of it was deposited by the late veneer.
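To see the shape of that mixing argument, here is a minimal sketch of two-component isotope mixing; the end-member values and the equal-sulphur-content weighting are illustrative assumptions of mine, not numbers from the paper.

```python
# Two-component ("binary") mixing of sulphur isotope signatures, written in the
# usual delta-34S notation (per mil). Assuming the two components carry similar
# sulphur concentrations, the mixture's delta is just the mass-weighted average.
def delta34s_mix(f_recycled, delta_ambient, delta_recycled):
    """delta-34S of a mixture containing mass fraction f_recycled of the recycled component."""
    return (1.0 - f_recycled) * delta_ambient + f_recycled * delta_recycled

# Illustrative end-members only (not values from the paper):
ambient, recycled = -1.5, +3.0   # per mil
for f in (0.0, 0.25, 0.5, 1.0):
    print(f"f_recycled = {f:.2f}  ->  delta34S = {delta34s_mix(f, ambient, recycled):+.2f} per mil")
```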
 
In the second (Geochim. Cosmochim. Acta 121: 67-83), samples from Earth, the Moon, Mars, eucrites, carbonaceous chondrites and ordinary chondrites show variation in Si isotopes. Earth and the Moon show the heaviest isotopes, and have the same composition, while enstatite chondrites have the lightest. The authors constructed a model of Si partitioning based on continuous planetary formation that takes into account the variation of temperature, pressure and oxygen fugacity during Earth's accretion. If the isotopic difference results solely from Si fractionation during core formation, their model requires at least ~12% by weight Si in the core, which exceeds estimates based on core density or geochemical mass-balance calculations. This suggests one of two explanations: (1) Earth's starting material had heavier silicon, or (2) there is some further, unknown process that leads to fractionation. They suggest vaporization following the Moon-forming event, but would that not leave the Moon with lighter, or at least different, material?
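For readers who want the arithmetic behind that kind of conclusion, here is a minimal isotopic mass-balance sketch (a simple lever rule). The delta values and fractionation are placeholders, not the paper's figures, and converting the resulting Si fraction into weight per cent of the core would further require the core and mantle masses.

```python
# Lever-rule mass balance: if core formation leaves the silicate Earth heavier in
# delta-30Si than bulk Earth by an amount set by the metal-silicate fractionation,
# the fraction of Earth's total silicon residing in the core follows directly.
def si_fraction_in_core(delta_mantle, delta_bulk, delta_mantle_minus_core):
    """Fraction of total Si that must sit in the core to explain the mantle-bulk offset."""
    return (delta_mantle - delta_bulk) / delta_mantle_minus_core

# Placeholder numbers, for illustration only:
f = si_fraction_in_core(delta_mantle=-0.29, delta_bulk=-0.38, delta_mantle_minus_core=0.8)
print(f"Fraction of Earth's Si required in the core: {f:.1%}")   # ~11% with these inputs
```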
 
One paper (Earth Planet. Sci. Lett. 2013: 88-97) pleased me. My interpretation of the data relating to atmospheric formation is that the gaseous elements originally accreted as solids, and were liberated by water as the planet evolved. These authors showed that early degassing of H2, obtained from reactions of water, explains the "high oxygen fugacity" of the Earth's mantle. The loss of only 1/3 of an "ocean" of water from Earth would shift the oxidation state of the upper mantle from a very low oxidation state, equivalent to that of the Moon, to its present value, and if so, no further processes are required. Hydrogen is an important component of basalts at high pressure and, perforce, low oxygen fugacity. Of particular interest, this process may have been rapid. The early Earth had to lose over five times as much heat as is lost now, and one proposal (501: 501-504) is that heat-pipe volcanism, such as found on Io, would manage this, in which case the evolution of water and volatiles may also have been very rapid.
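A representative way of writing the redox budget behind that idea, purely schematically (the real mantle species are ferrous and ferric iron in silicate minerals, not simple oxides, so the stoichiometry below is my illustration rather than the paper's):

```latex
% Schematic only: water oxidises reduced (ferrous) iron, and the hydrogen degasses
\mathrm{2\,FeO + H_2O \;\longrightarrow\; Fe_2O_3 + H_2\uparrow}
```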
 
Finally, in one paper (Icarus 226: 1489-1498), near-infrared spectra show the presence of hydrated, poorly crystalline silica with a high silica content on the western rim of Hellas. The surfaces are sporadically exposed over a 650 km section within a limited elevation range. The high abundances and the lack of associated aqueous-phase material indicate that high water-to-rock ratios were present, but that the higher temperatures that would lead to quartz were not. This latter point is of interest because it is often considered that the water flows in Martian craters were due to internal heat generated by the impact, such heat being retained for considerable periods of time. To weather basalt to silica, there would have to be continuous water over a long time, and if the water was hot and on the surface it would rapidly evaporate, while if it was buried, it would stay superheated, and presumably some quartz would result. This suggests extensive flows of cold water.
Posted by Ian Miller on Sep 30, 2013 3:30 AM BST
In a previous post, I questioned whether gold showed relativistic effects in its valence electrons. I also mentioned a paper of mine that proposes that the wave functions of the heavier elements do not correspond exactly to the excited states of hydrogen, but rather are composite functions, some of which have reduced numbers of nodes, and I said that I would provide a figure from the paper once I sorted out the permission issue. That is now sorted, and the following figure comes from my paper.
[Figure omitted: the "screening constant" plotted against the number of radial nodes, from the paper cited below.]
The full paper can be found at http://www.publish.csiro.au/nid/78/paper/PH870329.htm and I thank CSIRO for the permission to republish the figure. The lines show the theoretical function, the numbers in brackets are explained in the paper, and the squares show the "screening constant" required to get the observed energies. The horizontal axis shows the number of radial nodes; the vertical axis, the "screening constant".
 
The contents of that paper are incompatible with what we use in quantum chemistry because the wave functions do not correspond to the excited states of hydrogen. The theoretical function is obtained by assuming a composite wave in which the quantal system is subdivisible provided discrete quanta of action are associated with any component. The periodic time may involve four "revolutions" to generate the quantum (which is why you see quantum numbers with the quarter quantum). What you may note is that for ℓ = 1, gold is not particularly impressive (and there was a shortage of clear data), but for ℓ = 0 and ℓ = 2 the agreement is not too bad at all, and not particularly worse than that for copper.
 
So, what does this mean? At the time, the relationships were simply put there as propositions, and I did not try to explain their origin. There were two reasons for this. The first was that I thought it better to simply provide the observations and not clutter it up with theory that many would find unacceptable. It is not desirable to make too many uncomfortable points in one paper. I did not even mention "composite waves" clearly. Why not? Because I felt that was against the state vector formalism, and I did not wish to have arguments on that. (That view may not be correct, because you can have "Schrödinger cat states", e.g. as described by Haroche, 2013, Angew. Chem. Int. Ed. 52: 10159 -10178). However, the second reason was perhaps more important. I was developing my own interpretation of quantum mechanics, and I was not there yet.
 
Anyway, I have got about as far as I think is necessary to start thinking about trying to convince others, and yes, it is an alternative. For the motion of a single particle I agree the Schrödinger equation applies (but for ensembles, while a wave equation applies, it is a variation as seen in the graph above.) I also agree the wave function is of the form
ψ = A exp(2πiS/h)
So, what is the difference? Well, everyone believes the wave function is complex, and here I beg to differ. It is, but not entirely. If you recall Euler's relation for complex numbers, you will recall that exp(iπ) = -1, i.e. it is real. That means that twice per period, for the very brief instants when that happens (at S = h/2 and at S = h), ψ is real and equal in magnitude to the wave amplitude. No need to multiply by complex conjugates then (which by itself is an interesting concept – where did this conjugate come from? Simple squaring does not eliminate the complex nature!). I then assume the wave only affects the particle when the wave is real, when it forces the particle to behave as the wave requires. To this extent, the interpretation is a little like the pilot wave.
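A quick numerical illustration of that claim (just the arithmetic of the exponential, nothing more):

```python
# Numerical check: psi = A*exp(2*pi*i*S/h) is purely real twice per period of S,
# at S = h/2 (Euler: exp(i*pi) = -1, so psi = -A) and at S = h (psi = +A).
import cmath, math

h, A = 1.0, 2.0   # units where the quantum of action is 1; arbitrary amplitude
for S in (0.25 * h, 0.5 * h, 0.75 * h, 1.0 * h):
    psi = A * cmath.exp(2j * math.pi * S / h)
    print(f"S = {S:.2f} h  ->  psi = {psi.real:+.3f} {psi.imag:+.3f}i")
```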
 
If you accept that, and if you accept the interpretation of what the wave function means, then the reason why an electron does not radiate energy and fall into the nucleus becomes apparent, and the Uncertainty Principle and the Exclusion Principle then follow with no further assumptions. I am currently completing a draft of this that I shall self-publish. Why self-publish? That will be the subject of a later blog.
 
Posted by Ian Miller on Sep 23, 2013 3:30 AM BST
In the latest Chemistry World, Derek Lowe stated that keeping up with the literature is impossible, and he argued for filtering and prioritizing. I agree with his first statement, but I do not think his second option, while it is necessary right now, is optimal. That leaves open the question, what can be done about it? I think this is important, because the major chemical societies around the world are the only organizations that could conceivably help, and surely this should be of prime importance to them. So, what are the problems?
 
Where to put the information is not a problem because we now seem to have almost unlimited digital storage capacity. Similarly, organizing it is not a problem provided the information is correctly input, in an appropriate format with proper tags. So far, easy! Paying for it? This is more tricky, but it should not necessarily be too costly in terms of cash.
 
The most obvious problem is manpower, but this can also be overcome if all chemists play their part. For example, consider chemical data. The chemist writes a paper, but it would take little extra effort to put the data into some pre-agreed format for entry into the appropriate data base. Some of this is already done with "Supplementary information", but that tends to be attached to papers, which means someone wishing to find the information has to subscribe to the journal. Is there any good reason why data like melting points and spectra cannot be provided free? As an aside, this sort of suggestion would be greatly helped if we could all agree on the formatting requirements, and what tags would be required.
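Purely to illustrate the sort of pre-agreed, tagged format I have in mind, here is a hypothetical record; every field name, and the schema itself, is an invention for the purpose of the example rather than any existing standard.

```python
# A hypothetical tagged record for routine characterisation data. All field names
# are invented for illustration; a real scheme would need community agreement on
# tags, units and spectrum formats. The values are indicative only.
import json

record = {
    "name": "4-nitroanisole",
    "smiles": "COc1ccc(cc1)[N+](=O)[O-]",
    "melting_point": {"value": 54, "unit": "degC"},
    "spectra": [
        {"type": "1H NMR", "solvent": "CDCl3", "frequency_MHz": 400,
         "peaks_ppm": [8.2, 6.9, 3.9]},
    ],
    "source": {"doi": "10.xxxx/placeholder", "from_supplementary": True},
}

print(json.dumps(record, indent=2))
```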
 
This does not solve everything, because there are a lot of other problems too, such as "how to make something". One thing that has always struck me is the enormous wastage of effort in things like biofuels, where very similar work tended to be repeated with every crisis. Yes, I know, intellectual property rights tend to get in the way, but surely we can get around this. As an example of this problem, I recall when I was involved in a joint venture with the old ICI empire. For one of the potential products to make, I suggested a polyamide based on a particular diamine that we could, according to me, make. ICINZ took this up and sent it off to the UK, where it was obviously viewed with something approaching indifference, but they let it out to a university to devise a way to make said polyamide. After a year we got back the report: they could not make the diamine, and in any case my suggested polymer would be useless. I suggested that they rethink that last thought, and got a rude blast back: what did I know anyway? So I gave them the polymer's properties. How did I know that, they asked. "Simple," I replied, and showed them the data in an ICI patent, at which point I asked them whether they had simply fabricated the whole thing, or had they really made this diamine? There was one of those embarrassed silences! The institution could not even remember its own work!
 
In principle, how to make something is recorded in scientific papers, but again the problem is how to find the data, bearing in mind no institute can afford more than a fraction of the available journals. Even worse is the problem of finding something related. "How do you get from one functional group to another in this sort of molecule, with these other groups that may interfere?" is a very common problem that in principle could be solved by computer searching, but we need an agreed format for the data, and an agreement that every chemist will do their part to place what they believe to be the best examples of their own synthetic work in it. Could we get that cooperation? Will the learned societies help?
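As a sketch of the kind of query such a shared resource could support, the following uses the open-source RDKit toolkit (assuming it is installed) for the substructure matching; the reaction records, and the database itself, are invented for illustration only.

```python
# Sketch of a "find me a precedent" query over a hypothetical shared database of
# reaction records, using the open-source RDKit toolkit for substructure matching.
from rdkit import Chem

reactions = [
    {"substrate": "OCc1ccccc1Br", "product": "O=Cc1ccccc1Br",
     "note": "benzylic alcohol -> aldehyde; aryl bromide untouched"},
    {"substrate": "OCCCC=C", "product": "O=CCCC=C",
     "note": "primary alcohol -> aldehyde; alkene untouched"},
]

# Query: oxidise a primary alcohol in the presence of an aryl halide.
alcohol = Chem.MolFromSmarts("[CH2][OX2H]")
aryl_halide = Chem.MolFromSmarts("c[Br,Cl,I]")

for rec in reactions:
    mol = Chem.MolFromSmiles(rec["substrate"])
    if mol.HasSubstructMatch(alcohol) and mol.HasSubstructMatch(aryl_halide):
        print("Precedent:", rec["note"])
```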
 
Posted by Ian Miller on Sep 16, 2013 8:07 PM BST
One concern I have as a scientist, and one I have alluded to previously, lies in the question of computations. The problem is, we have now entered an age where computers permit modeling of a complexity unknown to previous generations. Accordingly, we can tackle problems that were never possible before, and that should be good. The problem for me is, the reports of the computations tell almost nothing about how they were done, and they are so opaque that one might even question whether the people making them fully understand the underlying code. The reason is, of course, that the code is never written by one person, but rather by a team. The code is then validated by using the computations for a sequence of known examples, and during this time, certain constants of integration that are required by the process are fixed. My problem with this follows a comment that I understand was attributed to Fermi: give me five constants and I will fit any data to an elephant. Since there is a constant associated with every integration, it is only too easy to get agreement with observation.
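The worry can be made concrete with a toy example: with as many adjustable constants as data points, "agreement" is guaranteed whatever the data mean.

```python
# Toy version of the "enough constants will fit anything" worry: six noisy points
# are matched essentially exactly by a polynomial with six free coefficients,
# regardless of whether the "model" means anything at all.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 6)
y = rng.normal(size=6)               # pure noise standing in for "observations"

coeffs = np.polyfit(x, y, deg=5)     # six adjustable constants for six points
residual = np.max(np.abs(np.polyval(coeffs, x) - y))
print(f"maximum residual of the 'fit': {residual:.2e}")   # effectively zero
```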
 
An example that particularly irritated me was a paper that tried "evolved" programs on molecules from which they evolved (Moran et al. 2006, J. Am. Chem. Soc. 128: 9342-9343). What they did was to apply a number of readily available and popular molecular orbital programs to compounds that had been the strong point of molecular orbital theory, such as benzene and other arenes. What they found was that these programs "predicted" benzene to be non-planar, with quite erroneous spectral signals. That such problems occur is, I suppose, inevitable, but what I found of concern is that nowhere, as far as I know, was the reason for the deviations identified, nor how such a propensity to error can be corrected, nor what such corrections, once made, would do to the subsequent computations that allegedly gave outputs agreeing well with observation. If the values of various constants are changed, presumably the previous agreement would disappear.
 
There are several reasons why I get a little grumpy over this. One example is this question of planetary formation. Computations up to about 1995 indicated that Earth would take about 100 My to accrete from planetary embryos; however, because of the problem of Moon formation, subsequent computations have reduced this to about 30 My, and assertions are made that computations reduce the formation of gas giants to a few My. My question is, what changed? There is no question that someone can make a mistake and subsequently correct it, but surely it should be announced what the correction was. An even worse problem, from my point of view, was what followed from my PhD project, which addressed the question: do cyclopropane electrons delocalize into adjacent unsaturation? Computations said yes, which is hardly surprising because molecular orbital theory starts by assuming delocalization, and subsequently tries to show why bonds should be localized. If it is going to make a mistake, it will favour delocalization. The trouble was, my results, which involved varying substituents at another ring carbon and looking at Hammett relationships, said they do not.
 
Subsequent computational theory said that cyclopropane conjugates with adjacent unsaturation BUT does not transmit it, while giving no clues as to how it came to this conclusion, apart from the desire to be in agreement with the growing list of observations. Now, if theory says that conjugation involves a common wave function over the region, then the energy at all parts of that wave must be equal. (The electrons can redistribute themselves to accommodate this, but a stationary solution to the Schrödinger equation can have only one frequency.) Now, if A has a common energy with B, and B has a common energy with C, why does A not have a common energy with C? Nobody has ever answered that satisfactorily. What further irritates me is that the statement that persists in current textbooks came from the same computational programs that "proved" the existence of polywater. That was hardly a highlight, so why are we so convinced the other results are valid? So, what would I like to see? In computations, the underpinning physics, the assumptions made, and how the constants of integration were set should be clearly stated. I am quite happy to concede that computers will not make mistakes in addition, etc., but that does not mean that the instructions given to the computer cannot be questioned.
Posted by Ian Miller on Sep 9, 2013 4:31 AM BST
Once again there were very few papers that came to my attention in August relating to my ebook on planetary formation. One of the few significant ones (Geochim. Cosmochim. Acta 120: 1-18) involved the determination of magnesium isotopes in lunar rocks, and these turned out to be identical with those of Earth and of chondrites, which led to the conclusion that there was no significant magnesium isotopic separation throughout the accretion disk, nor during the Moon-forming event. There is a difference in magnesium isotope ratios between low- and high-titanium basalts, but this is attributed to the actual crystallization processes of the basalts. This result is important because much is sometimes made of variations in iron isotopes, and in those of some other elements. The conclusion from this work is that, apart from the volatile elements, isotope variation is probably due more to subsequent processing than to planetary formation, and the disk was probably homogeneous.
 
Another point was that a planet has been found around the star GJ 504, at a distance of 43.5 A.U. from the star. Commentators have argued that such a planet is very difficult to accommodate within the standard theory. The problem is that if planets form by the collision of planetesimals and, as these get bigger, by collisions between embryos, then the probability of collision, at least initially, is proportional to the square of the concentration of particles, and that concentration falls off with radial distance from the star to some power between 1 and 2, usually taken as 1.5. Now, standard theory argues that in our solar system it was only around the Jupiter-Saturn distance that bodies could form reasonably quickly, and in the Nice theory, the most favoured computational route, Uranus and Neptune formed closer in and had to migrate out through gravitational exchanges between them, Jupiter, Saturn, and the non-accreted planetesimals. For GJ 504, the number density of planetesimals would be such that collisions would be about 60 times slower, so how did bodies there form in time to make a planet four times the mass of Jupiter, given that, in standard theory in our system, the growth of Jupiter and Saturn was only just fast enough to get a giant?
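A back-of-envelope version of that scaling argument is below; the exponent and the reference distance are assumptions, and different choices move the numerical factor around considerably, so treat it as an order-of-magnitude sketch rather than a reproduction of the ~60x figure.

```python
# Back-of-envelope scaling: assume the planetesimal number density falls off as
# r**-p (p ~ 1.5) and that the pairwise collision rate goes as the density squared,
# so the rate scales as r**-(2p). The reference distance is an assumption.
def slowdown(r_au, r_ref_au, p=1.5):
    """Factor by which collisions are slower at r_au than at r_ref_au."""
    return (r_au / r_ref_au) ** (2.0 * p)

for r_ref in (5.2, 9.5):   # roughly Jupiter's and Saturn's distances, AU
    print(f"relative to {r_ref} AU: collisions at 43.5 AU are ~{slowdown(43.5, r_ref):.0f}x slower")
```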
 
In my opinion, the relative size compared with Jupiter is a red herring, because that also depends on when the gas disk is cleaned out by a stellar outflow. The reason is that, in my model, bodies do not grow largely by collisions of equally sized objects; rather, they grow by melt accretion of ices at a given temperature, and the rate of growth depends only on the initial concentration of solids in the disk and, of course, on the gas inflow rate, because that, together with the initial gas temperature and the position of the star within a cluster, determines the temperature, and the temperature determines the position of the planet. If the GJ 504 system formed under exactly the same conditions as our own, this planet lies about midway between where we might expect Neptune and Uranus to lie, and which one it represents can only be determined by finding inner planets. In previous computations, the planet should not form; in my theory, it is larger than would normally be expected, but it is not unexpected, and there should be further planets within that orbit. Why is only one outer planet detected so far? The detection is by direct observation of a very young planet that is still glowing red hot through gravitational energy release. The inner ones will be just as young, but the closer to the star, the harder it is to separate their light from that of the star, and, of course, some may appear very close to the star by being at certain orbital phases.
 
Posted by Ian Miller on Sep 1, 2013 8:58 PM BST
Nullius in verba (take nobody's word) is the motto of the Royal Society, and it should be the motto of every scientist. The problem is, it is not. An alternative way of expressing this comes from Aristotle: the fallacy ad verecundiam. Just because someone says so, that does not mean it is right. We have to ask questions of both our logic and of nature, and I am far from convinced we do this often enough. What initiated this was an article in the August Chemistry World where it was claimed that the “unexpected” properties of elements such as mercury and gold were due to relativistic effects experienced by the valence electrons.
 
If we assume the valence electrons occupy orbitals corresponding to the excited states of hydrogen (i.e. simple solutions of the Schrödinger equation), the energy E is given by E = Z²E0/n². Here, E0 is the hydrogen ground-state energy given by the Schrödinger equation, n gives the number of quanta of action associated with the state, and Z is a term that, at one level, is an empirical correction. Thus without this correction, the 6s electron in gold would have an energy 1/36 that of hydrogen, and that is just plain wrong. The usual explanation is that since the wave function goes right to the nucleus, there is a probability that the electron is near the nucleus, in which case it experiences much greater electric fields. For mercury and gold, these are argued to be sufficient to lead to relativistic mass enhancement (or spacetime dilation, however you wish to present the effects), and these alter the energy sufficiently that gold has the colour it has, and both mercury and gold have properties unexpected from simple extrapolation from earlier elements in their respective columns of the periodic table. The questions are: is this correct, or are there alternative interpretations for the properties of these elements? Are we in danger of simply hanging our hat on a convenient peg without asking whether it is the right one? I must confess that I dislike the relativistic interpretation, and here are my reasons.
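To put numbers on that, here is the simple arithmetic, taking the 6s binding energy as roughly gold's first ionisation energy (~9.23 eV), which is the usual rough identification rather than anything from the article:

```python
# The 6s electron of gold, treated hydrogenically. With Z = 1 the binding energy
# would be E0/36; the observed binding (taken, roughly, as the first ionisation
# energy) instead requires an effective Z of about 5.
import math

E0 = 13.606       # hydrogen ground-state binding energy, eV
IE_gold = 9.226   # first ionisation energy of gold, eV (approximate)
n = 6

print(f"Hydrogen-like 6s binding with Z = 1: {E0 / n**2:.3f} eV")
print(f"Effective Z needed to reproduce gold's 6s binding: {n * math.sqrt(IE_gold / E0):.2f}")
```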
 
The first involves wave-particle duality. Either the motion complies with wave properties or it does not, and the two-slit experiment is fairly good evidence that it does. Now a wave consistent with the Schrödinger equation can have only one frequency, hence only one overall energy. If a wave had two frequencies, it would self-interfere, or at the very least would not comply with the Schrödinger equation, and hence you could not claim to be using standard quantum mechanics. Relativistic effects must be consistent with the expectation energy of the particle, and should be negligible for any valence electron. 
 
The second relates to how the relativistic effects are calculated. This involves taking small regions of space and assigning relativistic velocities to them. That means we are assigning specific momentum enhancements to specific regions of space, and surely that violates the Uncertainty Principle. The Uncertainty Principle states that the uncertainty of the position multiplied by the uncertainty of the momentum is greater than or equal to the quantum of action. In fact it may be worse than that, because when we have stationary states with nh quanta, we do not know that that is not the total uncertainty. More on this in a later blog.
 
On a more personal note, I am annoyed because I have published an alternative explanation [Aust. J. Phys. 40: 329-346 (1987)] that proposes that the wave functions of the heavier elements do not correspond exactly to the excited states of hydrogen, but rather are composite functions, some of which have reduced numbers of nodes. (The question "how does an electron cross a nodal surface?" disappears, because the nodes disappear.) The concept is too complicated to explain fully here, however I would suggest two reasons why it may be relevant.
 
The first is that if we consider the energies of the ground states of atoms in a column of elements, my theory predicts the energies quite well at each end of the column, but for elements nearer the centre there are more discrepancies, and they alternate in sign depending on whether n is odd or even. The series copper, silver and gold probably shows the same effect, but more strongly. The "probably" is because we need a fourth member to be sure. However, the principle remains: taking two points and extrapolating to a third is invalid unless you can prove the points should lie on a known line. If there are alternating differences, then the method is invalid. Further, within this theory, gold is the element that agrees with the theory best. That does not prove the absence of relativistic effects, but at least it casts suspicion on them.
 
The second depends on calculations of the excited states. For gold, the theory predicts the outcomes rather well, especially for the d states, which are involved in the colour problem. Note that copper is also coloured. (I shall post a figure from the paper later. I thought I had better get agreement on copyright before I start posting it, and as yet I have had no response. The whole paper should be available as a free download, though.) The function is not exact, and for gold the p states are more the villains; it is obvious that something is not quite right or, as I believe, has been left out. However, the point I would make is that the theoretical function depends only on quantum numbers; it has no empirically fitted parameters and rests only on the nodal structure of the waves. The only interaction included is the electron-nucleus electric field, so some discrepancies might be anticipated. Now, obviously you should not take my word either, but when somebody else produces an alternative explanation, in my opinion we should at least acknowledge its presence rather than simply ignore it.
Posted by Ian Miller on Aug 26, 2013 3:58 AM BST
Some time ago I had posts on biofuels covering a number of processes, but for certain reasons (I had been leading a research program for a company on this topic, and I thought I should lay off until I saw where that was going) I omitted the one I believe is closest to optimal. The process I had eventually landed on is hydrothermal liquefaction, for the following reasons.
 
The first problem with biomass is that it is dispersed, and it does not travel easily. How would you process forestry wastes? The shapes are ugly, and if you chip onsite, you are shipping a lot of air. If you are processing algae, either you waste a lot of energy drying it, or you ship a lot of water. There is no way around this problem initially, so you must try to make the initial travel distance as short as possible. Now, if you use a process such as Fischer-Tropsch, you need such a large amount of biomass that you must harvest over a huge area, and then your transport costs rise very quickly, as does the amount of fuel you burn shipping it. Accordingly, there are significant diseconomies of scale. The problem is that as you decrease the throughput, you lose processing economies of scale. What liquefaction does is reduce the volume considerably, and liquids are very much easier to transport; but to get that advantage, you have to process relatively smaller volumes. Transport by barge is always the cheapest option, which gives marine algae an added attraction.
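A toy calculation of that trade-off is below; every number in it (yield per square kilometre, haulage cost, processing-scale exponent) is an invented placeholder chosen only to show the shape of the curve, not an estimate for any real feedstock.

```python
# Toy model of the scale trade-off: per-tonne haulage cost grows as the square
# root of throughput (the harvest radius grows with the area needed), while
# per-tonne processing cost falls with scale. All numbers are placeholders.
import math

def cost_per_tonne(throughput_tpa, yield_tpa_per_km2=50.0, haul_cost_per_t_km=1.0,
                   base_process_cost=120.0, scale_exponent=0.3):
    radius_km = math.sqrt(throughput_tpa / (math.pi * yield_tpa_per_km2))
    transport = haul_cost_per_t_km * (2.0 / 3.0) * radius_km      # mean haul ~ 2R/3 over a disc
    processing = base_process_cost * (throughput_tpa / 1e5) ** (-scale_exponent)
    return transport + processing

for q in (1e4, 1e5, 1e6, 1e7):
    print(f"{q:>12,.0f} t/a  ->  ~{cost_per_tonne(q):6.1f} $/t")
```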
 
A second advantage of liquefaction is that you can introduce just about any feedstock, in any mix, although there are disadvantages in having too much variation. Liquefaction produces a number of useful chemicals, but they vary depending on the feedstock, and to be useful they have to be isolated and purified, and accordingly, the more different feedstocks included, the harder this problem. Ultimately, there will be the issue of “how to sell such chemicals” because the fuels market is enormously larger than that for chemicals, but initially the objective is to find ways to maximize income while the technology is made more efficient. No technology is introduced in its final form.
 
Processing frequently requires something else, and liquefaction has an advantage here too. If you were to hydrogenate, you would have to make hydrogen, and that in turn is an unnecessary expense unless location gives you an advantage, e.g. hydrogen is being made nearby for some other purpose. In principle, liquefaction requires only water, although some catalysts are often helpful. Such catalysts can be surprisingly cheap; nevertheless they still need to be recovered, and this raises the more questionable issue relating to liquefaction: the workup. If carried out properly, the waste water volumes can be reasonably small, at least in theory, but that theory has yet to be properly tested. One advantage is that water can be recycled through the process, in which case a range of chemical impurities is recycled with it and condenses further. There will be a stream of unusable phenolics, and these will have to be hydrotreated somewhere else.
 
The advantages are reasonably clear. There are some hydrocarbons produced that can be used as drop-in fuels following distillation. The petrol range is usually almost entirely aromatic, with high octane numbers. The diesel range from lipids has a very high cetane number. There are a number of useful chemicals made, and the technology should operate tolerably cheaply on a moderate scale, whereupon it makes liquids that can be cheaply transported elsewhere. In principle, the technology is probably the most cost-effective.
 
The disadvantages are also reasonably clear. The biggest is that the technology has not been demonstrated at a reasonable scale, so the advantages are somewhat theoretical. The costs may escalate with the workup, and the chemicals obtained, while potentially very useful, e.g. for polymers, are often somewhat different from the main ones currently used, so their large-scale use requires market acceptance of materials with different properties.
 
Given the above, what should be done?  As with some of the other options, in my opinion there is insufficient information to decide, so someone needs to build a bigger plant to see whether it lives up to expectations. Another point is that unlike oil processing, it is unlikely that any given technology will be the best in all circumstances. We may have to face a future in which there are many different options in play.
Posted by Ian Miller on Aug 19, 2013 5:03 AM BST
I devoted the last post to the question, could we provide biofuels? By that I mean, is the land available? I cited a paper which showed fairly conclusively that growing corn to make fuel is not really the answer, because, based on that paper, to supply total US fuel consumption you would need to multiply the total area of existing ground under cultivation in the US by a factor of 17. And you still have to eat. Of course, the US could still function reasonably well while consuming significantly less liquid fuel, but the point remains that we still need liquid fuels. The authors of this paper could also have got this wrong and made an error in their calculations, but such errors go either way, and as areas get larger, the errors are more likely to be unfavourable than favourable, because the transport costs of servicing such large areas have to be taken into account. On the other hand, the area required for obtaining fuels from microalgae is less than five per cent of the current cropping area. Again, that is probably an underestimate, although, as I argued, a large amount of microalgae could be obtained from sewage treatment plants, and they are already in place.
 
One problem with growing algae, however, is you need water, and in some places water availability is a problem (although not usually for sewage treatment). Water itself is hardly a scarce resource, as anyone who has flown over the Pacific gradually realizes. The argument that it is salty is beside the point as far as algae go, because there are numerous algae that grow quite nicely in seawater. One of what I consider to be the least well-recognized biofuel projects from the 1970s energy crisis was carried out by the US Navy. What they did was to grow Macrocystis on rafts in deep seawater. The basic problem with seawater far from shore is that it is surprisingly deficient in a number of nutrients, and this was overcome by raising water from the ocean floor. Macrocystis is one of the fastest growing plants; in fact, under a microscope you can watch cell division proceeding regularly. You can also mow it, so frequent replanting is not necessary. The US Navy showed this was quite practical, at least in moderately deep water. (You would not want to raise nutrients from the bottom of the Kermadec Trench, for example, but there is plenty of ocean that does not go to great depths.)
 
The experiment itself eventually failed and the rafts were lost in a storm, in part possibly because they were firmly anchored and the water-raising pipe could not stand the bending forces. That, however, is no reason to write it off. I know of no new technology that was implemented without improvements on the first efforts at the pilot/demonstration level. The fact is, problems can only be solved once they are recognized, and while storms at sea are reasonably widely appreciated, that does not mean that the first engineering effort to deal with them is going to be the full and final one. Thus the deep pipe does not have to be rigid, and it can be raised free of obstructions. Similarly, the rafts, while some form of anchoring is desirable, do not have to be rigidly anchored. So, why did the US Navy give up? The reasons are not entirely clear to me, but I rather suspect that the fact that oil prices had dropped to the lowest levels ever in real terms may have had something to do with it.
Posted by Ian Miller on Aug 12, 2013 4:55 AM BST
In previous posts I have discussed the possibility of biofuels, and the issue of greenhouse gases. One approach to the problem of greenhouse gases, or at least the excess of carbon dioxide, is to make biofuels. The carbon in the fuels comes from the atmosphere, so at least we slow down the production of greenhouse gases, and additionally we address, at least partially, the problem of transport fuels. Sooner or later we shall run out of oil, so even putting aside the greenhouse problem, we need a substitute. The problem then is, how to do it?
 
The first objections we see come from what I believe is faulty analysis and faulty logic. Who has not seen the argument: "Biofuels are useless! All you have to do is look at the energy balances and land requirements for corn." This argument is of the "straw man" type; you choose a really bad example and generalize. An alternative was published recently in Biomass and Bioenergy 56: 600-606. These authors provided an analysis of the land area required to provide 50% of the US transport fuels. Corn came in at a massive 846% of current US cropping area, i.e. to get the fuels, the total US cropping area would need to be multiplied by a factor greater than 8. Some might regard that as impractical! However, microalgae came in at between 1.1 and 2.5% of US cropping area. That is still a lot of area, but it does seem to be more manageable.
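Scaling those 50%-of-fuel figures up to total US transport fuel demand, assuming the area simply scales linearly with demand, gives the numbers quoted in the preceding post on this page (a factor of about 17 for corn; under five per cent for microalgae):

```python
# Doubling the paper's 50%-of-transport-fuel figures to cover 100% of demand,
# assuming the land area scales linearly with the fuel required (a simplification).
corn_pct_at_50 = 846.0          # % of current US cropping area (corn, from the paper)
algae_pct_at_50 = (1.1, 2.5)    # % range for microalgae

print(f"Corn, all transport fuel: ~{2 * corn_pct_at_50:.0f}% of cropping area "
      f"(~{2 * corn_pct_at_50 / 100:.0f}x the current area)")
print(f"Microalgae, all transport fuel: {2 * algae_pct_at_50[0]:.1f}-{2 * algae_pct_at_50[1]:.1f}%")
```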
 
There is also the question of how to grow things: fuel needed, fertilizer needed, pesticides needed, etc. Corn comes out very poorly here; in fact, some have argued that you put more energy, in the form of useful work, into growing it than you get out. (The second law bites again!) Now, I must show my bias and confess to having participated in a project to obtain chemicals and fuels from microalgae grown in sewage treatment water. It grows remarkably easily: no fertilizer requirements, no need to plant it or look after it; it really does grow itself, although there may be a case for seeding the growing stream to get a higher yield of desirable algae. Further, the algae remove much of the nitrogen and phosphate that would otherwise be an environmental nuisance, although that is not exactly a free ride, because when processing is finished the phosphates in particular remain. However, good engineering can presumably end up with a process stream that can be used for fertilizer.
 
One issue is that microalgae in a nutrient-rich environment, and particularly in a nitrogen-rich environment, tend to reproduce as rapidly as possible. If starved of nitrogen, they instead tend to use the photochemical energy to build up reserves of lipids. It is possible, at least with some species, to reach 75% lipid content, while rapidly growing microalgae may have only 5% extractable lipids.
 
That leaves the choice of process. My choice, biased as I am, is hydrothermal liquefaction. Why? Well, first, harvesting microalgae is not that easy, and a lot of energy can be wasted drying it. With hydrothermal liquefaction, you need an excess of water, so "all you have to do" is concentrate the algae to a paste. The quotation marks are to indicate that even that is easier said than done. As an aside, simple extraction of the wet algae with an organic solvent is not a good idea: you can get some really horrible emulsions. Another advantage of hydrothermal liquefaction is that, if done properly, you get fuel not only from the lipids but also from the phospholipids and some other fatty acid species that are otherwise difficult to extract. Finally, you end up with a string of interesting chemicals, and in principle those chemicals, which are rich in nitrogen heterocycles, would in the long run be worth far more than the fuel content.
 
The fuel is interesting as well. Under appropriate conditions, the lipid acids mainly either decarboxylate or decarbonylate, to form linear alkanes or alkenes one carbon atom short. A small amount of the obvious diketone is formed as well. The polyunsaturated acids fragment and, coupled with some deaminated amino acid fragments, make toluene, xylenes and, interestingly enough, ethylbenzene and styrene. Green polystyrene is plausible.
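As a representative example (my own illustration, not a specific result quoted from any feedstock), decarboxylation of palmitic acid, a common C16 lipid acid, gives the C15 alkane, while decarbonylation gives the C15 terminal alkene:

```latex
% Decarboxylation of palmitic acid (C16) to pentadecane (C15):
\mathrm{CH_3(CH_2)_{14}COOH \;\longrightarrow\; CH_3(CH_2)_{13}CH_3 + CO_2}
% Decarbonylation to pentadec-1-ene (C15):
\mathrm{CH_3(CH_2)_{14}COOH \;\longrightarrow\; CH_3(CH_2)_{12}CH{=}CH_2 + CO + H_2O}
```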
 
As you may gather, I am reasonably enthusiastic about this concept, because it simultaneously addresses a number of problems: greenhouse gases, "green" chemicals, liquid fuels, and sewage treatment, with perhaps phosphate recovery thrown in. There are a number of other variations on this theme; the point I am trying to make is that there are things we can do. I believe the answer to the question is yes. Certainly there is more work to be done, but no technology is invented mature.
 
Posted by Ian Miller on Aug 5, 2013 5:17 AM BST
Another month, and my alternative theories on planetary formation are still alive. Most of the information that I could find was not directly relevant, but nevertheless there were some interesting papers.
 
One piece of interesting information (Science 341: 260-263) is that analysis of the isotopes of H, C and O in the Martian atmosphere by the Curiosity rover, and comparison with carbonates in meteorites such as ALH 84001, indicates that the considerable enhancement of heavy isotopes largely occurred prior to 4 Gy BP, and while some atmospheric loss will have occurred since, the atmosphere has been more or less stable since then. This is important because there is strong evidence that there were many river flows, etc., on the Martian surface following this period, and such flows require a significantly denser atmosphere simply to maintain pressure, and a very much denser atmosphere if the fluid is water and the temperature has to be greater than 273 K. If the atmosphere were gradually ablated to space, there would be heavy isotope enhancement, so it appears that did not happen after 4 Gy BP. If there were such an atmosphere, it had to go somewhere other than space. As I have argued, underground is the most likely, but only if the nitrogen was not in the form of N2. It would also not have been lost to a massive collision blasting the atmosphere away, the reason being that there are no sufficiently large craters formed after the fluvial activity.
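The "gradual escape enriches the heavy isotope" step is essentially Rayleigh distillation; a minimal sketch follows, with the fractionation factor chosen arbitrarily for illustration rather than derived from the Curiosity data.

```python
# Rayleigh-distillation sketch of why gradual escape to space enriches the heavy
# isotope: if a fraction f of the atmosphere remains and the light isotope escapes
# slightly preferentially (alpha < 1), the remaining ratio rises as R/R0 = f**(alpha - 1).
def enrichment(f_remaining, alpha=0.98):
    return f_remaining ** (alpha - 1.0)

for f in (0.5, 0.1, 0.01):
    print(f"fraction remaining {f:>4}:  R/R0 = {enrichment(f):.3f}")
```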
 
There was one interesting piece of modeling to obtain the higher temperatures required for water to flow (Icarus 226: 229-250). The Martian hydrological cycle was modeled and, provided there is > 250 mbar of CO2 in the atmosphere, the model gives two "stable" states: cold and dry, or warm and wet, the heat in the latter being maintained by an extreme greenhouse effect arising from cirrus ice crystals of size > 10 μm, even with the early "cool sun". One problem is where the CO2 came from, because while it is generally considered that Earth's volcanoes give off CO2, most of that CO2 is recycled through subduction, and Mars did not have plate tectonics. Whether this model is right remains to be seen.
 
There was one paper that annoyed me (Nature 499: 328-331). The problem is that if Earth formed from collisions of protoplanetary embryos, the energy would have emulsified all silicates, and the highly siderophile elements (those that dissolve in liquid iron) should have been removed to the core nearly quantitatively. Problem: the bulk silicates have these elements. An analysis of mantle-type rocks found chalcogen ratios similar to those of Ivuna-type carbonaceous chondrites, but significantly different from those of ordinary and enstatite chondrites. The authors argue that the chalcogens arrived in a "late veneer", and that this contributed between 20 and 100% of the water on Earth. What has happened is that the authors carried out a series of analyses of rocks and, to make their results seem credible, Earth had to have been selectively but massively bombarded with one sort of chondrite, and essentially none of the more common ones. Why? The only reason they need this rather strange selection is that they assumed the model in which Earth formed through the collision of planetary embryos. If the Earth accreted by collecting much smaller objects, as I suggest, the problem of the chalcogens simply disappears. It is interesting that the formation of planets through the collision of embryos persists, despite the fact that there is reasonable evidence that the rocky planets formed in about 5 My or less, that the Moon formed after about 30 My through a collision with something approaching embryo size, and that modeling shows formation through such embryo collisions takes about 100 My. The time required is far too long, and the evidence is that when there is such a collision, the net result is loss of mass, except possibly from the core.
 
A paper in Angew. Chem. Int. Ed. (DOI: 10.1002/anie.201303246) showed a convincing photochemically assisted mechanism by which hydrogen cyanide can be converted to adenine. This is of particular interest to me because my suggested mechanism for the formation of ATP and nucleic acids is also photochemically assisted. If correct, life would have commenced in vesicles or micelles floating on water.
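Part of what makes HCN routes attractive is the bare stoichiometry: adenine, C5H5N5, is formally just a pentamer of HCN (the mechanism of assembly is, of course, the hard part):

```latex
% Adenine is formally a pentamer of hydrogen cyanide
5\,\mathrm{HCN} \;\longrightarrow\; \mathrm{C_5H_5N_5}\ \text{(adenine)}
```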
 
On a positive note (Nature 499: 55-58), the authors noted that while most stars form in clusters, some are in loose clusters with stellar densities of less than 100 stars per cubic parsec. One concern might have been that stars born in loose clusters are the only ones that can retain planets; however, the authors report transits of two Sun-like stars in a dense cluster, which shows that planets can survive in such a cluster, and that the frequency of planet formation is independent of the cluster density. This makes extrasolar planets very much more probable.
Posted by Ian Miller on Jul 29, 2013 3:08 AM BST