Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a fresh perspective?


There was a recent comment on one of my posts regarding the formation of rocky planets, so I thought I should outline how I think the rocky planets formed, and why. The standard theory involves only physical forces: dust accreted to planetesimals, these collided, eventually forming embryos (Mars-sized bodies), and these in turn collided to form planets. First, why do I think that is wrong? For me, it is difficult to see how the planetesimals form by simple collision of dust, and it is even harder to see how they stay together. One route might be through melting due to radioactivity, but in that case one would need very recently formed supernova debris to get sufficient radioactivity. Then, as the objects get bigger, collisions involve greater relative velocities, which means much greater kinetic energy in impacts, and because everything is further apart, collisions become less probable and everything takes too long. The models of Moon formation generally lead to the conclusion that such massive impacts cause a massive loss of material.
The difference between the standard theory and mine is that I think chemistry is involved. There are two stages for rocky planets. The first is during the accretion of the star, when temperatures near the star are raised significantly. Once temperatures exceed 1200 °C, some silicates become semi-molten and sticky, and this leads to the accretion of lumps. By 1538 °C iron melts, and hence lumps of iron bodies form, while around 1500-1600 °C calcium aluminosilicates form separate molten phases, although at about 1300 °C a calcium silicate forms a separate phase. (The separation of phases is enhanced by polymerization.) Material at 1 AU, say, reaches about 1550-1600 °C, while near Mars it reaches something like 1300 °C. Of particular relevance are the calcium aluminosilicates, as these form a range of materials that act as hydraulic cements. Also, the closer the material gets to the star, the hotter and more concentrated it gets, so bigger lumps of material form. One possibility is that Mercury essentially formed from one such accreted lump that scavenged up local lumps. Another important feature is that within this temperature range significant other chemistry occurred, e.g. the formation of carbides, carbon, nitrides, cyanides, cyanamides, silicides, phosphides, etc.
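The temperature thresholds above can be turned into a rough sketch. This is an illustration only: the power-law form and the anchor values are my assumptions, fitted to the ~1550-1600 °C at 1 AU and ~1300 °C near Mars figures in the text, not a model from the ebook.

```python
import math

# Assumed anchor points (from the text's quoted ranges, midpoints taken):
T1 = 1575.0      # °C at 1 AU
T_MARS = 1300.0  # °C at ~1.52 AU
# Fit a power law T(r) = T1 * r**(-q) through the two anchors.
q = math.log(T1 / T_MARS) / math.log(1.52)

def disk_temperature(r_au):
    """Approximate peak disk temperature (°C) at r_au, under the power-law assumption."""
    return T1 * r_au ** (-q)

# Threshold chemistries mentioned in the text.
thresholds = [
    (1538, "iron melts"),
    (1300, "calcium silicate separates"),
    (1200, "silicates become semi-molten and sticky"),
]

for r in (0.7, 1.0, 1.52):
    T = disk_temperature(r)
    reached = [name for tmin, name in thresholds if T >= tmin]
    print(f"r = {r} AU: T ~ {T:.0f} C; reached: {reached}")
```

Under these assumptions the exponent comes out near 0.46, and the iron-melting threshold is crossed somewhere inside Earth's orbit, consistent with the picture sketched above.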
When the disk cooled down, collisions between bodies formed dust, while some bodies came together. Dust would form preferentially from the more brittle solids, which would tend to be the aluminosilicates, and when such dust accreted onto other bodies, water from the now-cool disk would set the cement and make a solid body that would grow simply by accreting more dust and small bodies. Because there is a gradual movement of dust and gas towards the star, there would be a steady supply of such feed, and the bodies would grow at a rate proportional to their cross-section. Eventually, the bodies would be big enough to gravitationally attract other larger bodies; however, the important point is that provided initiation is difficult, runaway growth of one body in a zone would predominate. Earth grows to be the biggest because it is in the zone most suitable for forming and setting cement, and because the iron bodies are eminently suitable for forming dust. The atmosphere and biochemical precursors form because the water of accretion reacts within the planet to form a range of chemicals from the nitrides, phosphides, carbides, etc. What is relevant here is high-pressure organic chemistry, which again is somewhat under-studied.
Am I right? The more detailed account, including a major literature review, took just under a quarter of a million words in the ebook, and the last chapter contains over 80 predictions, most of which are very difficult to do. Nevertheless, an encouraging sign is that the debris of a minor rocky planet around a white dwarf (what remains of an F or A type star) shows the presence of considerable amounts of water. Such water is (in my opinion) best explained by the water being involved in the initial accretion of the body, because it is extremely unlikely that such an amount of water could arrive on a minor rocky planet by collision of chondrites because the gravity of the minor planet is unlikely to be enough to hold such water. Thus this is strongly supportive of my mechanism, and it is rather difficult to see how this arose through the standard theory.
Posted by Ian Miller on Oct 21, 2013 1:57 AM BST
Leaving aside the provision of employment for modelers, I am far from convinced that the climate change models are of any use at all. As an example, we often hear the proposition that to fix climate change we should find a way to remove carbon dioxide from the atmosphere, or from the gaseous effluent of power stations. This sounds simple. It is reasonably straightforward to absorb carbon dioxide: bubble the gas through a suitable base. Of course, the problem then comes down to: how do you get a suitable base? Calcium oxide is fine, except that you had to break down a carbonate at quite high temperatures to get it. Amines offer an easier route, but to collect a power station's output, regenerate your amine, and keep the carbon dioxide under control will require up to a third of the power from your power station. Not attractive. The next problem is what to do with the carbon dioxide. Yes, some can be sunk into wells, preferably wet basaltic ones as this will fix the CO2, and a small amount could be used as a chemical feedstock, say to make polycarbonates, but how many power stations do you think will be accounted for by that?
The problem for climate change is that we currently burn about 9 Gt of carbon per annum, which means we have to fix or use something like 33 Gt of CO2 per annum just to break even, and breaking even is unlikely to fix the carbon problem. CO2 is not a very strong greenhouse gas, but it does stay in the atmosphere for a considerable time. One point that nobody seems to make in public is that even if we stopped emitting CO2 right now, the additional carbon we have already put into the atmosphere will remain for long enough to do a lot more damage. Everybody behaves as if we were in a rapid equilibrium, and that is not so. The Greenland ice sheet is the last relic of the last ice age. If we have created enough net warming to melt so much ice per annum, the melting will keep going until the ice retreats to a more resilient position, at which point our climate will change significantly because we shall have a much different albedo over a large area. We cannot "fix" climate change by simply stopping the rate of increase of carbon burning; we have to actively reduce the total integrated amount, not simply worry about the rate of increased production. I suggest that to fix the climate problem, assuming we see it as a problem, we would be better to put more effort into something with a stronger response than fixing CO2.
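The 9 Gt carbon versus 33 Gt CO2 figures above are just a molar-mass conversion, which is easy to check:

```python
# Sanity check on the figures in the text: burning carbon turns each mole of C
# (12 g/mol) into a mole of CO2 (44 g/mol), so mass scales by 44/12.
M_C, M_CO2 = 12.011, 44.009   # molar masses, g/mol
carbon_burned_gt = 9.0        # Gt C per annum (figure quoted in the text)

co2_emitted_gt = carbon_burned_gt * M_CO2 / M_C
print(f"{carbon_burned_gt} Gt C -> {co2_emitted_gt:.1f} Gt CO2 per annum")
```

This reproduces the ~33 Gt CO2 per annum quoted above.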
In the previous post, I attempted (unsuccessfully!) to irritate some people over how climate change research money is spent. When money becomes available, what happens? What I believe happens is that we see numerous proposals for funding to make more accurate measurements of something. My argument is: just supposing we do get more accurate data on, say, the methane output of some swamp, what good does that do? It provides employment for those measuring the output of the swamp, but then what? Certainly it will add more to the literature, but the scientific literature is hardly short of material. Enough such measurements will perhaps help models account for what has happened, but the one thing I am less confident about is whether such models will be able to answer the question, "Exactly what will happen if we do X?" For example, suppose we decided to raise the albedo of the planet by reflecting more light to space, and did this in a region that would lower the temperature of cold fronts coming into Greenland, with the aim of increasing snow deposition over Greenland: how much light would we need to reflect, and where should we reflect it? My argument is that until models can give an approximate answer to that sort of question, they are useless. And unless we do something like geo-engineering, we are doomed to have to accommodate the change, because nobody has suggested any alternative that has the capacity to solve the problem. We can wave our hands and "feel virtuous" for claiming that we are doing something, but unless the sum of the somethings solves the problem, it is a complete waste of effort. Worse than that, such acts consume resources that could be better used to accommodate what will come. The only value of a model is to inform us which actions will be sufficient, and so far they cannot do that.
Posted by Ian Miller on Oct 14, 2013 10:13 PM BST
Currently, NASA is asking for public assistance for their astrobiology program, or they were up until the current government shutdown, and in particular, asking for suggestions as to where their program should be going. I think this is an extremely enlightened view, and I hope they receive plenty of good suggestions and take some of them up. This is a little different from the average way science gets funded, in which academic scientists put in applications for funds to pursue what they think is original. This is supposed to permit the uncovering of "great new advances", and in some areas, perhaps it does, but I rather suspect the most common outcome is to support what Rutherford dismissively called, "stamp collecting". You get a lot of publications, a lot of data, but there is no coherent approach towards answering "big questions". That, I think, is a strength of the NASA approach, and I hope other organizations take this up. For example, if we wish to address climate change, what questions do we really want to have answered? What we tend to get is, "Fund me to set up more data gathering," from those too uninspired to come up with something more incisive. We do not need more data to set the parameters so that current models better represent what we see; we need better models that will represent what will happen if we do or do not do X.
So what are the good questions for NASA to address? Obviously there are a very large number of them, but regarding biogenesis, I think some are very important. Perhaps the most important one pursued so far is how the planets got their water, because if we want life on other planets, they have to have water. The water on the rocky planets is often thought to come from chondrites, as a "late veneer" on the planet. Now, as I argued in my ebook, Planetary Formation and Biogenesis, this explanation has serious problems. The first is that only a special class of chondrites contains volatiles; the bulk of the bodies from the asteroid belt do not. Further, the isotope ratios of the heavier elements are different from Earth's, and the ratios of the different volatiles do not correspond to anything we see here or on the other planets, so why is such an explanation persisted with? The short answer is that, for most, there is no alternative.
My alternative is simple: the planets started accreting through chemical processes. Only solids could be accreted in reasonable amounts this close to the star, unless the body got big enough to hold gases from the accretion disk gravitationally. Water can be held as metal and silicon hydroxyl compounds, the water subsequently being liberated. This, as far as I know, is the only mechanism by which the various planets can have different atmospheric compositions: different amounts of the various components were formed at different temperatures in the disk.
If that is correct, we would have a means of predicting whether alien planets could conceivably contain life. Accordingly, one way to pursue this would be to try to understand the high temperature chemistry of the dusts and volatiles expected to be in the accretion disk. That would involve a lot of work for which chemists alone would be suitable. Now, my question is, how many chemists have shown any interest in this NASA program? Do we always want to complain about insufficient research funds, or are we prepared to go out and do something to collect more?
Posted by Ian Miller on Oct 7, 2013 1:10 AM BST
Perhaps one of the more interesting questions is: where did Earth's volatiles come from? The generally accepted theory is that Earth formed by the catastrophic collisions of planetary embryos (Mars-sized bodies), which effectively turned Earth into a giant ball of magma, at which time the iron settled to the core through having a greater density, and took various siderophile elements with it. At this stage, the Earth would have been reasonably anhydrous. Subsequently, Earth was bombarded with chondritic material from the asteroid belt that was dislodged by Jupiter's gravitational field (including, in some models, Jupiter migrating inwards then out again), and it is from here that Earth gets its volatiles and its siderophile elements. This bombardment is often called "the late veneer". In my opinion, there are several reasons why this did not happen, which is where these papers become relevant. What are the reasons? First, while there was obviously a bombardment, to deliver the volatiles that way only carbonaceous chondrites will suffice, and if there were sufficient mass of those to supply Earth, there should also be a huge mass of silicates from the more normal bodies. There is also the problem of atmospheric composition. Mars, although the closest, is hit relatively infrequently compared with its cross-section, and hit by moderately wet bodies almost totally deficient in nitrogen. Earth is hit by a large number of bodies with everything, but the Moon is seemingly not hit by wet or carbonaceous bodies. Venus, meanwhile, is hit by more bodies that are very rich in nitrogen, but relatively dry. What does the sorting?
The first paper (Nature 501: 208-210) notes that if we assume the standard model by which core segregation took place, the iron would have removed about 97% of the Earth's sulphur and transferred it to the core. If so, the Earth's mantle should exhibit a fractionated 34S/32S ratio according to the relevant metal-silicate partition coefficients, together with fractionated siderophile metal abundances. However, it is usually thought that Earth's mantle is both homogeneous and chondritic for this sulphur ratio, consistent with the acquisition of sulphur (and other siderophile elements) from chondrites (the late veneer). An analysis of mantle material from mid-ocean ridge basalts displayed heterogeneous 34S/32S ratios that are compatible with binary mixing between a low 34S/32S ambient mantle ratio and a high 34S/32S recycled component. The depleted end-member cannot reach a chondritic value, even if the most optimistic surface sulphur is added. Accordingly, these results imply that the mantle sulphur is at least partially determined by original accretion, and not all sulphur was deposited by the late veneer.
In the second (Geochim. Cosmochim. Acta 121: 67-83), samples from Earth, the Moon, Mars, eucrites, carbonaceous chondrites and ordinary chondrites show variation in Si isotopes. Earth and the Moon show the heaviest isotopes, and have the same composition, while enstatite chondrites have the lightest. The authors constructed a model of Si partitioning based on continuous planetary formation that takes into account T, P and oxygen fugacity variation during Earth's accretion. If the isotopic difference results solely from Si fractionation during core formation, their model requires at least ~12% by weight Si in the core, which exceeds estimates based on core density or geochemical mass balance calculations. This suggests one of two explanations: (1) Earth's material started with heavier silicon, or (2) there is a further unknown process that leads to fractionation. They suggest vaporization following the Moon-forming event, but would this not lead to lighter or different Moon material?
One paper (Earth Planet. Sci. Lett. 2013: 88-97) pleased me. My interpretation of the data relating to atmospheric formation is that the gaseous elements originally accreted as solids, and were liberated by water as the planet evolved. These authors showed that early degassing of H2 obtained from reactions of water explains the "high oxygen fugacity" of the Earth's mantle. A loss of only a third of an "ocean" of water from Earth would shift the oxidation state of the upper mantle away from the very low oxidation state equivalent to the Moon's, and if so, no further processes are required. Hydrogen is an important component of basalts at high pressure and, perforce, low oxygen fugacity. Of particular interest, this process may have been rapid. The early Earth had to lose over five times as much heat as it loses now, and one proposal (Nature 501: 501-504) is that heat-pipe volcanism such as that found on Io would manage this, in which case the evolution of water and volatiles may also have been very rapid.
Finally, in (Icarus 226: 1489-1498), near-infrared spectra show the presence of hydrated, poorly crystalline silica with a high silica content on the western rim of Hellas. The surfaces are sporadically exposed over a 650 km section within a limited elevation range. The high abundances and the lack of associated aqueous-phase material indicate that high water-to-rock ratios were present, but the higher temperatures that would lead to quartz were not. This latter point is of interest because it is often considered that the water flows in craters on Mars were due to internal heating from impact, such heat being retained for considerable periods of time. To weather basalt to silica, there would have to be continuous water for a long time, and if the water was hot and on the surface it would rapidly evaporate, while if it was buried, it would stay super-heated, and presumably some quartz would result. This suggests extensive flows of cold water.
Posted by Ian Miller on Sep 30, 2013 3:30 AM BST
In a previous post, I questioned whether gold showed relativistic effects in its valence electrons. I also mentioned a paper of mine that proposes that the wave functions of the heavier elements do not correspond exactly to the excited states of hydrogen, but rather are composite functions, some of which have reduced numbers of nodes, and I said that I would provide a figure from the paper once I sorted out the permission issue. That is now sorted, and the following figure comes from my paper.

The full paper can be found at  and I thank CSIRO for the permission to republish the figure. The lines show the theoretical function, the numbers in brackets are explained in the paper and the squares show the "screening constant" required to get the observed energies. The horizontal axis shows the number of radial nodes, the vertical axis, the "screening constant".
The contents of that paper are incompatible with what we use in quantum chemistry, because the wave functions do not correspond to the excited states of hydrogen. The theoretical function is obtained by assuming a composite wave in which the quantal system is subdivisible provided discrete quanta of action are associated with any component. The periodic time may involve four "revolutions" to generate the quantum (which is why you see quantum numbers with the quarter quantum). What you may note is that for ℓ = 1 gold is not particularly impressive (and there was a shortage of clear data), but for ℓ = 0 and ℓ = 2 the agreement is not too bad at all, and not particularly worse than that for copper.
So, what does this mean? At the time, the relationships were simply put there as propositions, and I did not try to explain their origin. There were two reasons for this. The first was that I thought it better to simply provide the observations and not clutter it up with theory that many would find unacceptable. It is not desirable to make too many uncomfortable points in one paper. I did not even mention "composite waves" clearly. Why not? Because I felt that was against the state vector formalism, and I did not wish to have arguments on that. (That view may not be correct, because you can have "Schrödinger cat states", e.g. as described by Haroche, 2013, Angew. Chem. Int. Ed. 52: 10159 -10178). However, the second reason was perhaps more important. I was developing my own interpretation of quantum mechanics, and I was not there yet.
Anyway, I have got about as far as I think is necessary to start thinking about trying to convince others, and yes, it is an alternative. For the motion of a single particle I agree the Schrödinger equation applies (but for ensembles, while a wave equation applies, it is a variation as seen in the graph above.) I also agree the wave function is of the form
ψ = A exp(2πiS/h)
So, what is the difference? Well, everyone believes the wave function is complex, and here I beg to differ. It is, but not entirely. If you recall Euler's relation, you will recall that exp(iπ) = -1, i.e. it is real. That means that twice per period, for the very brief instant that S = h, ψ is real and equals the wave amplitude. There is then no need to multiply by complex conjugates (which by itself is an interesting concept: where did this conjugate come from? Simple squaring does not eliminate the complex nature!). I then assume the wave only affects the particle when the wave is real, when it forces the particle to behave as the wave requires. To this extent, the interpretation is a little like the pilot wave.
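The claim that ψ = A exp(2πiS/h) is periodically real is easy to check numerically. A minimal sketch (the function name `psi` is mine, purely for illustration): whenever the action S is a half-integer multiple of h, the phase is a multiple of π and ψ is real, alternating between +A and -A.

```python
import cmath
import math

h = 6.62607015e-34   # Planck constant, J*s

def psi(A, S):
    """Wave function of the form given in the text: psi = A*exp(2*pi*i*S/h)."""
    return A * cmath.exp(2j * math.pi * S / h)

A = 1.0
for n in range(4):
    S = n * h / 2                 # S = 0, h/2, h, 3h/2, ...
    value = psi(A, S)
    # Imaginary part is zero (to rounding): psi is purely real at these instants.
    print(f"S = {n}/2 h: psi = {value.real:+.3f} {value.imag:+.3e}i")
```

At S = h/2 the value is -A (the exp(iπ) = -1 case), and at S = h it is +A, so ψ passes through a real value twice per period, as stated above.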
If you accept that, and if you accept the interpretation of what the wave function means, then the reason why an electron does not radiate energy and fall into the nucleus becomes apparent, and the Uncertainty Principle and the Exclusion Principle then follow with no further assumptions. I am currently completing a draft of this that I shall self-publish. Why self-publish? That will be the subject of a later blog.
Posted by Ian Miller on Sep 23, 2013 3:30 AM BST
In the latest Chemistry World, Derek Lowe stated that keeping up with the literature is impossible, and he argued for filtering and prioritizing. I agree with his first statement, but I do not think his second option, while it is necessary right now, is optimal. That leaves open the question, what can be done about it? I think this is important, because the major chemical societies around the world are the only organizations that could conceivably help, and surely this should be of prime importance to them. So, what are the problems?
Where to put the information is not a problem because we now seem to have almost unlimited digital storage capacity. Similarly, organizing it is not a problem provided the information is correctly input, in an appropriate format with proper tags. So far, easy! Paying for it? This is more tricky, but it should not necessarily be too costly in terms of cash.
The most obvious problem is manpower, but this can also be overcome if all chemists play their part. For example, consider chemical data. The chemist writes a paper, but it would take little extra effort to put the data into some pre-agreed format for entry into the appropriate database. Some of this is already done with "Supplementary information", but that tends to be attached to papers, which means someone wishing to find the information has to subscribe to the journal. Is there any good reason why data like melting points and spectra cannot be provided free? As an aside, this sort of suggestion would be greatly helped if we could all agree on the formatting requirements, and on what tags would be required.
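To make the idea concrete, here is one hypothetical shape such a pre-agreed record might take. Every field name and tag below is invented purely for illustration (it is not an existing standard), but the point stands: a simple structured record round-trips through a common serialization, so any indexer could consume it.

```python
import json

# A sketch of a machine-readable data record of the kind discussed above.
# All field names, the tag vocabulary, and the DOI are hypothetical.
record = {
    "compound": "benzamide",
    "identifiers": {"formula": "C7H7NO"},
    "melting_point": {"value": 130.0, "units": "degC"},
    "spectra": [
        {"type": "IR", "format": "JCAMP-DX", "tags": ["KBr disc"]},
    ],
    "source": {"doi": "10.0000/hypothetical.example"},
}

serialized = json.dumps(record, indent=2)
parsed = json.loads(serialized)   # round-trips cleanly
print(serialized)
```

The hard part, as the text says, is not the technology but agreeing on the field names and tags, and getting every chemist to supply them.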
This does not solve everything, because there are a lot of other problems too, such as "how to make something". One thing that has always struck me is the enormous wastage of effort in things like biofuels, where very similar work tended to be repeated every crisis. Yes, I know, intellectual property rights tend to get in the way, but surely we can get around this. As an example of this problem, I recall when I was involved in a joint venture with the old ICI empire. For one of the potential products to make, I suggested a polyamide based on a particular diamine that we could, according to me, make. ICINZ took this up, sent it off to the UK, where it was obviously viewed with something approaching indifference, but they let it out to a University for them to devise a way to make said polyamide. After a year, we got back the report, they could not make the diamine, and in any case, my suggested polymer would be useless. I suggested that they rethink that last thought, and got a rude blast back, "What did I know anyway?" So, I gave them the polymer's properties. "How did I know that?" they asked. "Simple," I replied, and showed them the data in an ICI patent, at which point I asked them whether they had simply fabricated the whole thing, or had they really made this diamine? There was one of those embarrassed silences! The institution could not even remember its own work!
In principle, how to make something is clearly set out in scientific papers, but again the problem is how to find the data, bearing in mind that no institute can afford more than a fraction of the available journals. Even worse is the problem of finding something related. "How do you get from one functional group to another in this sort of molecule, with these other groups that may interfere?" is a very common problem that could in principle be solved by computer searching, but we need an agreed format for the data, and an agreement that every chemist will do their part and place what they believe to be the best examples of their own synthetic work in it. Could we get that cooperation? Will the learned societies help?
Posted by Ian Miller on Sep 16, 2013 8:07 PM BST
One concern I have as a scientist, and one I have alluded to previously, lies in the question of computations. We have now entered an age where computers permit modeling of a complexity unknown to previous generations. Accordingly, we can tackle problems that were never possible before, and that should be good. The problem for me is that reports of the computations tell almost nothing about how they were done, and they are so opaque that one might even question whether the people making them fully understand the underlying code. The reason is, of course, that the code is never written by one person, but rather by a team. The code is then validated by running the computations for a sequence of known examples, and during this time certain constants of integration required by the process are fixed. My problem with this follows a comment that I understand was attributed to Fermi: give me five constants and I will fit any data to an elephant. Since there is a constant associated with every integration, it is only too easy to get agreement with observation.
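The "elephant" worry is easy to demonstrate: with as many free constants as data points, a perfect fit is guaranteed regardless of what the data mean. A minimal sketch, with deliberately patternless "data":

```python
import numpy as np

# Five arbitrary data points.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, -2.7, 8.4, 0.5, 5.9])   # no underlying law at all

# A degree-4 polynomial has five free coefficients: one per data point,
# so it interpolates the "data" exactly, whatever generated them.
coeffs = np.polyfit(x, y, deg=4)
fitted = np.polyval(coeffs, x)

max_residual = float(np.max(np.abs(fitted - y)))
print("max residual:", max_residual)   # effectively zero
```

Agreement with the calibration set therefore says nothing by itself; what matters is how the constants were fixed and whether the model predicts anything outside that set.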
An example that particularly irritated me was a paper that tried "evolved" programs on the molecules from which they evolved (Moran et al. 2006. J. Am. Chem. Soc. 128: 9342-9343). What they did was apply a number of readily available and popular molecular orbital programs to compounds that had been the strong point of molecular orbital theory, such as benzene and other arenes. What they found was that these programs "predicted" benzene to be non-planar, with quite erroneous spectral signals. That such problems occur is, I suppose, inevitable, but what I found concerning is that nowhere, as far as I know, was the reason for the deviations identified, nor how such propensity to error can be corrected, nor what such corrections would do to the subsequent computations that allegedly gave outputs agreeing well with observation. If the values of various constants are changed, presumably the previous agreement would disappear.
There are several reasons why I get a little grumpy over this. One example is the question of planetary formation. Computations up to about 1995 indicated that Earth would take about 100 My to accrete from planetary embryos; however, because of the problem of Moon formation, subsequent computations have reduced this to about 30 My, and assertions are made that computations reduce the formation of gas giants to a few My. My question is: what changed? There is no question that someone can make a mistake and subsequently correct it, but surely it should be announced what the correction was. An even worse problem, from my point of view, followed from my PhD project, which involved the question: do cyclopropane electrons delocalize into adjacent unsaturation? Computations said yes, which is hardly surprising because molecular orbital theory starts by assuming delocalization, and subsequently tries to show why bonds should be localized. If it is going to make a mistake, it will favour delocalization. The trouble was, my results, which involved varying substituents at another ring carbon and looking at Hammett relationships, said it does not.
Subsequent computational theory said that cyclopropane conjugates with adjacent unsaturation, BUT it does not transmit it, while giving no clues as to how it came to this conclusion, apart from the desire to be in agreement with the growing list of observations. Now, if theory says that conjugation involves a common wave function over the region, then the energy at all parts of that wave must be equal. (The electrons can redistribute themselves to accommodate this, but a stationary solution to the Schrödinger equation can have only one frequency.) Now, if A has a common energy with B, and B has a common energy with C, why does A not have a common energy with C? Nobody has ever answered that satisfactorily. What further irritates me is that the statement that persists in current textbooks employed the same computational programs that "proved" the existence of polywater. That was hardly a highlight, so why are we so convinced the other results are valid? So, what would I like to see? In computations, the underpinning physics, the assumptions made, and how the constants of integration were set should be clearly stated. I am quite happy to concede that computers will not make mistakes in addition, etc, but that does not mean that the instructions for the computer cannot be questioned.
Posted by Ian Miller on Sep 9, 2013 4:31 AM BST
Once again there were very few papers that came to my attention in August relating to my ebook on planetary formation. One of the few significant ones (Geochim. Cosmochim. Acta 120: 1-18) involved the determination of magnesium isotopes in lunar rocks, and these turned out to be identical with those of Earth and of chondrites, which led to the conclusion that there was no significant magnesium isotopic separation throughout the accretion disk, nor during the Moon-forming event. There is a difference in magnesium isotope ratios between magnesium found in low- and high-titanium basalts, but this is attributed to the actual crystallization processes of the basalts. This result is important because much is sometimes made of variations in iron isotopes, and in those of some other elements. The conclusion from this work is that, apart from the volatile elements, isotope variation is probably due more to subsequent processing than to planetary formation, and the disk was probably homogeneous.
Another point was that a planet has been found around the star GJ 504, at a distance of 43.5 AU from the star. Commentators have argued that such a planet is very difficult to accommodate within the standard theory. The problem is that if planets form by collision of planetesimals, and, as these get bigger, by collisions between embryos, the probability of collision, at least initially, is proportional to the square of the concentration of particles, and the concentration of particles depends on the radial distance from the star to some power between 1 and 2, usually taken as 1.5. Now, standard theory argues that in our solar system it was only around the Jupiter-Saturn distance that bodies could form reasonably quickly, and in the Nice model, the most favoured computational route, Uranus and Neptune formed closer in and had to migrate out through gravitational exchanges between them, Jupiter, Saturn, and the non-accreted planetesimals. For GJ 504, the number density of planetesimals would make collisions about 60 times slower, so how did they form in time to make a planet four times the size of Jupiter, given that, on standard theory in our system, the growth of Jupiter and Saturn was only just fast enough to get a giant?
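The "about 60 times slower" figure can be roughly reproduced under simple assumptions (mine, for illustration, not necessarily those of the commentators): if the planetesimal number density falls as r^-1.5 and the encounter velocity as r^-0.5 (Keplerian), the per-body collision rate, which scales with density times velocity, falls as r^-2.

```python
# Rough scaling check of the collision-rate slowdown at GJ 504's planet.
# Assumptions: number density n ~ r**-1.5, encounter velocity v ~ r**-0.5,
# so per-body collision rate (~ n*v) scales as r**-2.
r_gj504 = 43.5   # AU, distance of the GJ 504 planet (from the text)
r_ref = 5.2      # AU, Jupiter's distance, taken as the reference zone

slowdown = (r_gj504 / r_ref) ** 2.0
print(f"collision rate at {r_gj504} AU is ~{slowdown:.0f}x slower than at {r_ref} AU")
```

With Jupiter's 5.2 AU as the reference this gives a factor of about 70, the same order as the quoted figure; a slightly larger reference distance in the Jupiter-Saturn zone brings it to ~60.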
In my opinion, the relative size compared with Jupiter is a red herring, because size also depends on when the gas disk is cleaned out by a stellar outflow. The reason is that in my model, bodies do not grow largely by collision of equally sized objects; rather, they grow by melt accretion of ices at a given temperature, and the rate of growth depends only on the initial concentration of solids in the disk and, of course, the gas inflow rate, because that, together with the initial gas temperature and the position of the star within a cluster, determines the temperature, and the temperature determines the position of the planet. If GJ 504 formed under exactly the same conditions as our system, this planet lies about midway between where we might expect Neptune and Uranus to lie, and which one it represents can only be determined by finding inner planets. On previous computations, the planet should not form; in my theory, it is larger than would normally be expected, but it is not unexpected, and there should be further planets within that orbit. Why is only one outer planet detected so far? The detection is by direct observation of a very young planet that is still glowing red hot through gravitational energy release. The inner ones will be just as young, but the closer a planet is to the star, the harder it is to separate its light from that of the star, and, of course, some may appear very close to the star by being at certain orbital phases.
Posted by Ian Miller on Sep 1, 2013 8:58 PM BST
Nullius in verba (take nobody's word) is the motto of the Royal Society, and it should be the motto of every scientist. The problem is, it is not. An alternative way of expressing this comes from Aristotle: the fallacy ad verecundiam. Just because someone says so, that does not mean it is right. We have to ask questions of both our logic and of nature, and I am far from convinced we do this often enough. What initiated this was an article in the August Chemistry World where it was claimed that the “unexpected” properties of elements such as mercury and gold were due to relativistic effects experienced by the valence electrons.
If we assume the valence electrons occupy orbitals corresponding to the excited states of hydrogen (i.e. simple solutions of the Schrödinger equation), the energy E is given by E = Z²E₀/n². Here, E₀ is the ground-state energy of hydrogen given by the Schrödinger equation, n gives the quanta of action associated with the state, and Z is a term that at one level is an empirical correction. Without this correction, the 6s electron in gold would have an energy 1/36 that of hydrogen, and that is just plain wrong. The usual explanation is that since the wave function goes right to the nucleus, there is a probability that the electron is near the nucleus, in which case it experiences greater electric fields. For mercury and gold, these are argued to be sufficient to lead to relativistic mass enhancement (or spacetime dilation, however you wish to present the effects), and these alter the energy sufficiently that gold has the colour it has, and both mercury and gold have properties unexpected from simple extrapolation from earlier elements in their respective columns of the periodic table. The questions are: is this correct, or are there alternative interpretations for the properties of these elements? Are we in danger of simply hanging our hat on a convenient peg without asking whether it is the right one? I must confess that I dislike the relativistic interpretation, and here are my reasons.
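To make the 1/36 figure concrete, here is a minimal sketch of the hydrogen-like formula (my own illustration; the function name and the treatment of Z as a bare effective-charge parameter are mine):

```python
# Hydrogen-like orbital energies: E = Z**2 * E0 / n**2, where E0 is the
# hydrogen ground-state energy. With no effective-charge correction
# (Z = 1), a 6s electron sits at 1/36 the hydrogen ground-state energy,
# which is plainly wrong for gold.

E0_EV = -13.6  # hydrogen ground-state energy, in electron volts

def orbital_energy(n, Z=1.0):
    """Energy of a hydrogen-like state with principal quantum number n."""
    return Z**2 * E0_EV / n**2

ratio = orbital_energy(6) / orbital_energy(1)  # = 1/36
```

The whole argument over gold and mercury is, in effect, an argument over what physics should be packed into that single empirical Z.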
The first involves wave-particle duality. Either the motion complies with wave properties or it does not, and the two-slit experiment is fairly good evidence that it does. Now a wave consistent with the Schrödinger equation can have only one frequency, hence only one overall energy. If a wave had two frequencies, it would self-interfere, or at the very least would not comply with the Schrödinger equation, and hence you could not claim to be using standard quantum mechanics. Relativistic effects must be consistent with the expectation energy of the particle, and should be negligible for any valence electron. 
The second relates to how the relativistic effects are calculated. This involves taking small regions of space and assigning relativistic velocities to them. That means we are assigning specific momentum enhancements to specific regions of space, and surely that violates the Uncertainty Principle. The Uncertainty Principle argues that the uncertainty in position multiplied by the uncertainty in momentum is greater than or equal to the quantum of action. In fact it may be worse than that, because when we have stationary states with nh quanta, we do not know that nh is not the total uncertainty. More on this in a later blog.
On a more personal note, I am annoyed because I have published an alternative explanation [Aust. J. Phys. 40: 329-346 (1987)] that proposes that the wave functions of the heavier elements do not correspond exactly to the excited states of hydrogen, but rather are composite functions, some of which have reduced numbers of nodes. (The question, “how does an electron cross a nodal surface?” disappears, because the nodes disappear.) The concept is too complicated to explain fully here; however, I would suggest two reasons why it may be relevant.
The first is, if we consider the energies of the ground states of atoms in a column of elements, my theory predicts the energies quite well at each end of a row, but for elements nearer the centre, there are more discrepancies, and they alternate in sign, depending on whether n is odd or even. The series copper, silver and gold probably show the same effect, but more strongly. The “probably” is because we need a fourth member to be sure. However, the principle remains: taking two points and extrapolating to a third is invalid unless you can prove the points should lie on a known line. If there are alternating differences, then the method is invalid. Further, within this theory, gold is the element that agrees with theory the best. That does not prove the absence of relativistic effects, but at least it casts suspicion.
The second depends on calculations of the excited states. For gold, the theory predicts the outcomes rather well, especially for the d states, which are the ones involved in the colour problem. Note that copper is also coloured. (I shall post a figure from the paper later; I thought I had better get agreement on copyright before I start posting it, and as yet I have had no response. The whole paper should be available as a free download, though.) The function is not exact, and for gold the p states are the bigger villains, so it is obvious that something is not quite right or, as I believe, has been left out. However, the point I would make is that the theoretical function depends only on quantum numbers and the nodal structure of the waves; it has no empirical adjustment procedures. The only interaction included is the electron-nucleus electric field, so some discrepancies might be anticipated. Now, obviously you should not take my word either, but when somebody else produces an alternative explanation, in my opinion we should at least acknowledge its presence rather than simply ignore it.
Posted by Ian Miller on Aug 26, 2013 3:58 AM BST
Some time ago I had posts on biofuels covering a number of processes, but for certain reasons (I had been leading a research program for a company on this topic, and I thought I should hold off until I saw where that was going) I omitted the one I believe is closest to optimal. The process I eventually landed on is hydrothermal liquefaction, for the following reasons.
The first problem with biomass is that it is dispersed, and it does not travel easily. How would you process forestry wastes? The shapes are awkward, and if you chip onsite, you are shipping a lot of air. If you are processing algae, either you waste a lot of energy drying it, or you ship a lot of water. There is no way around this problem initially, so you must make the initial travel distance as short as possible. Now, if you use a process such as Fischer-Tropsch, you need such a large amount of biomass that you must harvest over a huge area, and your transport costs rise very fast, as does the amount of fuel you burn shipping the biomass. Accordingly, there are significant diseconomies of scale. The problem is, as you decrease the throughput, you lose processing economies of scale. What liquefaction does is reduce the volume considerably, and liquids are very much easier to transport. But to get that advantage, you have to process relatively smaller volumes. Transport costs are always lowest by barge, which gives marine algae an added attraction.
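The tension between transport diseconomies and processing economies can be sketched in a toy model (my own illustration, with made-up coefficients): if harvest area grows linearly with plant throughput, the mean haul distance grows roughly as the square root of throughput, so per-tonne transport cost rises with scale while per-tonne processing cost falls.

```python
import math

# Toy cost model (illustrative only; coefficients are invented):
# per-tonne transport cost rises as sqrt(throughput) because haul
# distance grows with the square root of the harvest area, while
# per-tonne processing cost falls with scale (a classic scale economy).

def cost_per_tonne(q, transport_coeff=1.0, processing_coeff=50.0,
                   scale_exponent=0.6):
    """Per-tonne cost at throughput q (arbitrary units)."""
    transport = transport_coeff * math.sqrt(q)
    processing = processing_coeff * q ** (scale_exponent - 1)
    return transport + processing

# The total has a minimum at an intermediate throughput: too small and
# processing dominates; too large and transport dominates.
costs = {q: round(cost_per_tonne(q), 1) for q in (1, 10, 100, 1000)}
```

With these invented numbers, the cost curve bottoms out at a moderate scale, which is exactly the argument for liquefying locally and shipping liquids rather than hauling raw biomass to one giant plant.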
A second advantage of liquefaction is that you can introduce just about any feedstock, in any mix, although there are disadvantages in having too much variation. Liquefaction produces a number of useful chemicals, but they vary depending on the feedstock, and to be useful they have to be isolated and purified; accordingly, the more different feedstocks included, the harder this problem becomes. Ultimately, there will be the issue of how to sell such chemicals, because the fuels market is enormously larger than that for chemicals, but initially the objective is to find ways to maximize income while the technology is made more efficient. No technology is introduced in its final form.
Processing frequently requires additional inputs, and liquefaction has an advantage here too. If you were to hydrogenate, you would have to make hydrogen, and that is an unnecessary expense unless location gives you an advantage, e.g. hydrogen is being made nearby for some other purpose. In principle, liquefaction requires only water, although some catalysts are often helpful. Such catalysts can be surprisingly cheap; nevertheless they still need to be recovered, and this raises the most questionable issue relating to liquefaction: the workup. If carried out properly, the waste water volumes can be reasonably small, at least in theory, but that theory has yet to be properly tested. One advantage is that water can be recycled through the process, in which case a range of chemical impurities get recycled, where they condense further. There will be a stream of unusable phenolics, and these will have to be hydrotreated somewhere else.
The advantages are reasonably clear. Some of the hydrocarbons produced can be used as drop-in fuels following distillation. The petrol range is usually almost entirely aromatic, with high octane numbers. The diesel range from lipids has a very high cetane number. A number of useful chemicals are made, and the technology should operate tolerably cheaply on a moderate scale, producing liquids that can be cheaply transported elsewhere. In principle, the technology is probably the most cost-effective.
The disadvantages are also reasonably clear. The biggest is that the technology has not been demonstrated at a reasonable scale, so the advantages are somewhat theoretical. The costs may escalate in the workup, and the chemicals obtained, while potentially very useful, e.g. for polymers, are often somewhat different from the main ones currently used, so their large-scale use requires market acceptance of materials with different properties.
Given the above, what should be done?  As with some of the other options, in my opinion there is insufficient information to decide, so someone needs to build a bigger plant to see whether it lives up to expectations. Another point is that unlike oil processing, it is unlikely that any given technology will be the best in all circumstances. We may have to face a future in which there are many different options in play.
Posted by Ian Miller on Aug 19, 2013 5:03 AM BST