Is science sometimes in danger of getting tunnel vision? Recently published ebook author, Ian Miller, looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a fresh perspective?

One concern I have as a scientist, and one I have alluded to previously, lies in the question of computations. We have now entered an age where computers permit modeling of a complexity unknown to previous generations. Accordingly, we can tackle problems that were never possible before, and that should be good. The problem for me is that the reports of the computations tell almost nothing about how they were done, and they are so opaque that one might even question whether the people making them fully understand the underlying code. The reason is, of course, that the code is never written by one person, but rather by a team. The code is then validated by running the computations on a sequence of known examples, and during this time, certain constants of integration that are required by the process are fixed. My problem with this follows a comment that I understand was attributed to Fermi: give me five constants and I will fit any data to an elephant. Since there is a constant associated with every integration, it is only too easy to get agreement with observation.
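The elephant quip can be illustrated directly: with enough free constants, a model reproduces almost any data set exactly, which is why agreement with observation alone proves little. A minimal sketch (the five "observations" below are invented numbers, chosen only to make the point):

```python
import numpy as np

# Five invented "observations" -- any five points would serve.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.3, -1.7, 0.9, 4.1, -0.5])

# A model with five free constants (a quartic polynomial) fits them exactly.
coeffs = np.polyfit(x, y, deg=4)
fitted = np.polyval(coeffs, x)

print(np.max(np.abs(fitted - y)))  # essentially zero: a "perfect" fit
```

The fit is perfect by construction, yet the model has no predictive content whatever, which is exactly the worry about unreported constants of integration.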
An example that particularly irritated me was a paper that tested "evolved" programs on the very molecules from which they were evolved (Moran et al. 2006. J. Am. Chem. Soc. 128: 9342-9343). The authors applied a number of readily available and popular molecular orbital programs to compounds that had been the strong point of molecular orbital theory, such as benzene and other arenes. What they found was that these programs "predicted" benzene to be non-planar, with quite erroneous spectral signals. That such problems occur is, I suppose, inevitable, but what I found of concern is that nowhere that I know of was the reason for the deviations identified, nor how such a propensity to error can be corrected, nor, once such corrections are made, what they do to the subsequent computations that allegedly gave outputs agreeing well with observation. If the values of various constants are changed, presumably the previous agreement would disappear.
There are several reasons why I get a little grumpy over this. One example is the question of planetary formation. Computations up to about 1995 indicated that Earth would take about 100 My to accrete from planetary embryos; however, because of the problem of Moon formation, subsequent computations have reduced this to about 30 My, and assertions are made that computations reduce the formation of gas giants to a few My. My question is, what changed? There is no question that someone can make a mistake and subsequently correct it, but surely the correction should be announced. An even worse problem, from my point of view, followed from my PhD project, which addressed the question: do cyclopropane electrons delocalize into adjacent unsaturation? Computations said yes, which is hardly surprising, because molecular orbital theory starts by assuming delocalization and subsequently tries to show why bonds should be localized. If it is going to make a mistake, it will favour delocalization. The trouble was, my results, which involved varying substituents at another ring carbon and looking at Hammett relationships, said it does not.
Subsequent computational theory said that cyclopropane conjugates with adjacent unsaturation BUT does not transmit it, while giving no clues as to how it came to this conclusion, apart from the desire to be in agreement with the growing list of observations. Now, if theory says that conjugation involves a common wave function over the region, then the energy at all parts of that wave must be equal. (The electrons can redistribute themselves to accommodate this, but a stationary solution to the Schrödinger equation can have only one frequency.) So if A has a common energy with B, and B has a common energy with C, why does A not have a common energy with C? Nobody has ever answered that satisfactorily. What further irritates me is that the statement that persists in current textbooks was based on the same computational programs that "proved" the existence of polywater. That was hardly a highlight, so why are we so convinced the other results are valid? So, what would I like to see? In computations, the underpinning physics, the assumptions made, and how the constants of integration were set should be clearly stated. I am quite happy to concede that computers will not make mistakes in addition, etc., but that does not mean that the instructions given to the computer cannot be questioned.
Posted by Ian Miller on Sep 9, 2013 4:31 AM BST
Once again there were very few papers that came to my attention in August relating to my ebook on planetary formation. One of the few significant ones (Geochim. Cosmochim. Acta 120: 1-18) involved the determination of magnesium isotopes in lunar rocks, and these turned out to be identical with those of Earth and of chondrites, which led to the conclusion that there was no significant magnesium isotopic separation throughout the accretion disk, nor during the Moon-forming event. There is a difference in magnesium isotope ratios between magnesium found in low- and high-titanium basalts, but this is attributed to the actual crystallization processes of the basalts. This result is important because much is sometimes made of variations in iron isotopes, and of variations for some other elements. The conclusion from this work is that, apart from volatile elements, isotope variation is probably due more to subsequent processing than to planetary formation, and the disk was probably homogeneous.
Another point was that a planet has been found around the star GJ 504, at a distance of 43.5 AU from the star. Commentators have argued that such a planet is very difficult to accommodate within the standard theory. The problem is this: if planets form by collision of planetesimals and, as these get bigger, by collisions between embryos, the probability of collision, at least initially, is proportional to the square of the concentration of particles, and that concentration falls off with radial distance from the star to some power between 1 and 2, usually taken as 1.5. Standard theory argues that in our solar system it was only around the Jupiter-Saturn distance that bodies could form reasonably quickly, and in the Nice model, the most favoured computational route, Uranus and Neptune formed closer in and had to migrate out through gravitational exchanges between them, Jupiter, Saturn, and the non-accreted planetesimals. For GJ 504, the number density of planetesimals would make collisions about 60 times slower, so how did they form in time to produce a planet four times the size of Jupiter, given that, in standard theory, the growth of Jupiter and Saturn in our system was only just fast enough to make a giant?
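The scaling argument can be made explicit. If the collision rate goes as the square of the particle concentration, and concentration falls off as r to the power -p with p between 1 and 2, the relative slow-down at 43.5 AU follows directly. A rough sketch (the 7 AU inner reference distance is my illustrative choice for the Jupiter-Saturn region, and the exact factor depends strongly on p and on that choice):

```python
# Relative planetesimal collision rate at two radial distances, assuming
# rate ~ concentration^2 and concentration ~ r^(-p), with 1 <= p <= 2.
def slowdown(r_inner_au, r_outer_au, p=1.5):
    """How many times slower collisions are at r_outer than at r_inner."""
    return (r_outer_au / r_inner_au) ** (2 * p)

# Jupiter-Saturn region (illustrative 7 AU) versus GJ 504 b at 43.5 AU:
for p in (1.0, 1.5, 2.0):
    print(p, slowdown(7.0, 43.5, p))
```

Whatever exponent is chosen in that range, collisions come out tens to hundreds of times slower, which is the heart of the timing problem for this planet.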
In my opinion, the relative size compared with Jupiter is a red herring, because size also depends on when the gas disk is cleaned out by a stellar outflow. The reason is that, in my model, bodies do not grow largely by collision of equally sized objects; rather, they grow by melt accretion of ices at a given temperature, and the rate of growth depends on the initial concentration of solids in the disk and, of course, on the gas inflow rate, because that, together with the initial gas temperature and the position of the star within a cluster, determines the temperature, and the temperature determines the position of the planet. If GJ 504 formed under exactly the same conditions as Earth, this planet lies about midway between where we might expect Neptune and Uranus to lie, and which one it represents can only be determined by finding inner planets. In previous computations, the planet should not form; in my theory, it is larger than would normally be expected, but it is not unexpected, and there should be further planets within that orbit. Why is only one outer planet detected so far? The detection is by direct observation of a very young planet that is still glowing red hot through gravitational energy release. The inner ones will be just as young, but the closer to the star, the harder it is to separate their light from that of the star, and, of course, some may appear very close to the star when at certain orbital phases.
Posted by Ian Miller on Sep 1, 2013 8:58 PM BST
Nullius in verba (take nobody's word) is the motto of the Royal Society, and it should be the motto of every scientist. The problem is, it is not. An alternative way of expressing this is through the fallacy ad verecundiam: just because someone says so, that does not mean it is right. We have to ask questions of both our logic and of nature, and I am far from convinced we do this often enough. What initiated this was an article in the August Chemistry World where it was claimed that the "unexpected" properties of elements such as mercury and gold were due to relativistic effects experienced by the valence electrons.
If we assume the valence electrons occupy orbitals corresponding to the excited states of hydrogen (i.e. simple solutions of the Schrödinger equation), the energy E is given by E = Z²E₀/n². Here, E₀ is the hydrogen ground-state energy given by the Schrödinger equation, n gives the quanta of action associated with the state, and Z is a term that at one level is an empirical correction. Thus without this correction, the 6s electron in gold would have an energy 1/36 that of hydrogen, and that is just plain wrong. The usual explanation is that since the wave function goes right to the nucleus, there is a probability that the electron is near the nucleus, in which case it experiences greater electric fields. For mercury and gold, these are argued to be sufficient to lead to relativistic mass enhancement (or spacetime dilation, however you wish to present the effects), and these alter the energy sufficiently that gold has the colour it has, and both mercury and gold have properties unexpected from simple extrapolation from earlier elements in their respective columns of the periodic table. The questions are: is this correct, or are there alternative interpretations for the properties of these elements? Are we in danger of simply hanging our hat on a convenient peg without asking whether it is the right one? I must confess that I dislike the relativistic interpretation, and here are my reasons.
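The hydrogen-like scaling E = Z²E₀/n² can be checked numerically. With Z = 1 the 6s level comes out at 1/36 of the hydrogen value, hopelessly short of gold's observed first ionization energy of about 9.2 eV, so the effective Z required as the "empirical correction" is large:

```python
import math

E0 = -13.6057   # hydrogen ground-state energy, eV

# Naive hydrogen-like energy E = Z^2 * E0 / n^2
def energy(Z, n):
    return Z**2 * E0 / n**2

naive = energy(Z=1, n=6)    # ~ -0.378 eV: 1/36 of the hydrogen value
gold_ie = 9.2256            # observed first ionization energy of gold, eV

# Effective Z needed to reproduce the observed binding energy:
Z_eff = math.sqrt(gold_ie * 6**2 / -E0)
print(naive, Z_eff)         # Z_eff ~ 4.9, nowhere near 1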
The first involves wave-particle duality. Either the motion complies with wave properties or it does not, and the two-slit experiment is fairly good evidence that it does. Now a wave consistent with the Schrödinger equation can have only one frequency, hence only one overall energy. If a wave had two frequencies, it would self-interfere, or at the very least would not comply with the Schrödinger equation, and hence you could not claim to be using standard quantum mechanics. Relativistic effects must be consistent with the expectation energy of the particle, and should be negligible for any valence electron. 
The second relates to how the relativistic effects are calculated. This involves taking small regions of space and assigning relativistic velocities to them. That means we are assigning specific momentum enhancements to specific regions of space, and surely that violates the Uncertainty Principle, which states that the uncertainty in position multiplied by the uncertainty in momentum is greater than or equal to the quantum of action. In fact it may be worse than that, because when we have stationary states with nh quanta, we do not know that that is not the total uncertainty. More on this in a later blog.
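The scale of the difficulty can be estimated. If a velocity is assigned to a region of, say, one hundredth of a Bohr radius near the nucleus, the uncertainty principle (here in its standard Δx·Δp ≥ ħ/2 form) already forces a momentum spread on an electron confined to it; the size of the region is my choice, purely for illustration:

```python
# Minimum momentum spread for an electron confined to a small region,
# using the dx * dp >= hbar/2 form of the uncertainty principle.
hbar = 1.0546e-34   # J s
m_e  = 9.1094e-31   # electron mass, kg
c    = 2.9979e8     # speed of light, m/s
a0   = 5.2918e-11   # Bohr radius, m

dx = a0 / 100       # an illustrative "small region" near the nucleus
dp = hbar / (2 * dx)
v  = dp / m_e       # naive velocity equivalent of that momentum spread

print(v / c)        # ~0.37: already a sizeable fraction of c
```

In other words, the momentum uncertainty alone is of the same order as the relativistic velocities being assigned, which is why I question the assignment.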
On a more personal note, I am annoyed because I have published an alternative explanation [Aust. J. Phys. 40: 329-346 (1987)] that proposes that the wave functions of the heavier elements do not correspond exactly to the excited states of hydrogen, but rather are composite functions, some of which have reduced numbers of nodes. (The question, "how does an electron cross a nodal surface?", disappears, because the nodes disappear.) The concept is too complicated to explain fully here; however, I would suggest two reasons why it may be relevant.
The first is that, if we consider the energies of the ground states of atoms in a column of elements, my theory predicts the energies quite well at each end of the column, but for elements nearer the centre there are more discrepancies, and they alternate in sign depending on whether n is odd or even. The series copper, silver and gold probably shows the same effect, but more strongly. The "probably" is because we need a fourth member to be sure. However, the principle remains: taking two points and extrapolating to a third is invalid unless you can prove the points should lie on a known line. If there are alternating differences, then the method is invalid. Further, within this theory, gold is the element that agrees with theory the best. That does not prove the absence of relativistic effects, but at least it casts suspicion.
The second depends on calculations of the excited states. For gold, the theory predicts the outcomes rather well, especially for the d states, which are involved in the colour problem. Note that copper is also coloured. (I shall post a figure from the paper later; I thought I had better get agreement on copyright before I start posting it, and as yet I have had no response. The whole paper should be available as a free download, though.) The function is not exact, and for gold the p states are more the villains; it is obvious that something is not quite right or, as I believe, has been left out. However, the point I would make is that the theoretical function depends only on quantum numbers; it has no empirical validation procedures and depends only on the nodal structure of the waves. The only interaction included is the electron-nucleus electric field, so some discrepancies might be anticipated. Now, obviously you should not take my word either, but when somebody produces an alternative explanation, in my opinion we should at least acknowledge its presence rather than simply ignore it.
Posted by Ian Miller on Aug 26, 2013 3:58 AM BST
Some time ago I had posts on biofuels, and I covered a number of processes, but for certain reasons (I had been leading a research program for a company on this topic, and I thought I should lay off until I saw where that was going) I omitted the one I believe is closest to optimal. The process I had eventually landed on is hydrothermal liquefaction, for the following reasons.
The first problem with biomass is that it is dispersed, and it does not travel easily. How would you process forestry wastes? The shapes are ugly, and if you chip onsite, you are shipping a lot of air. If you are processing algae, either you waste a lot of energy drying it, or you ship a lot of water. There is no way around this problem initially, so you must try to make the initial travel distance as short as possible. Now, if you use a process such as Fischer-Tropsch, you need such a large amount of biomass that you must harvest over a huge area, and then your transport costs rise very fast, as does the amount of fuel you burn shipping it. Accordingly, there are significant diseconomies of scale. The problem is that as you decrease the throughput, you lose processing economies of scale. What liquefaction does is reduce the volume considerably, and liquids are very much easier to transport. But to get that advantage, you have to process relatively smaller volumes. Transport costs are always lowest by barge, which gives marine algae an added attraction.
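The trade-off can be sketched quantitatively: processing cost per tonne falls with plant size, but the average haul distance grows with the square root of the collection area, so transport cost per tonne rises. All the numbers below are hypothetical placeholders, chosen only to show the shape of the curve, not real costings:

```python
import math

def cost_per_tonne(throughput, yield_per_km2=1000.0,
                   processing_base=100.0, scale_exp=0.6,
                   transport_rate=2.0):
    """Hypothetical total cost per tonne of biomass processed.

    Processing cost per tonne falls as throughput**(scale_exp - 1)
    (economy of scale); transport cost per tonne rises with the mean
    haul distance, which grows as sqrt(collection area).
    """
    area = throughput / yield_per_km2       # km^2 needed to supply the plant
    radius = math.sqrt(area / math.pi)      # km, nominal collection radius
    processing = processing_base * throughput ** (scale_exp - 1)
    transport = transport_rate * radius
    return processing + transport

# Cost per tonne is not monotonic: it falls, then rises again.
costs = {t: cost_per_tonne(t) for t in (1e3, 1e4, 1e5, 1e6)}
print(costs)
```

The minimum sits at a moderate scale, which is the argument for liquefying locally and shipping liquids rather than hauling raw biomass to one giant plant.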
A second advantage of liquefaction is that you can introduce just about any feedstock, in any mix, although there are disadvantages in having too much variation. Liquefaction produces a number of useful chemicals, but they vary depending on the feedstock, and to be useful they have to be isolated and purified; accordingly, the more different feedstocks included, the harder this problem becomes. Ultimately, there will be the issue of how to sell such chemicals, because the fuels market is enormously larger than that for chemicals, but initially the objective is to find ways to maximize income while the technology is made more efficient. No technology is introduced in its final form.
Processing frequently requires additional inputs, and liquefaction has an advantage here too. If you were to hydrogenate, you would have to make hydrogen, and that in turn is an unnecessary expense unless location gives you an advantage, e.g. hydrogen is being made somewhere nearby for some other purpose. In principle, liquefaction only requires water, although some catalysts are often helpful. Such catalysts can be surprisingly cheap; nevertheless, they still need to be recovered, and this raises the more questionable issue relating to liquefaction: the workup. If carried out properly, the waste water volumes can be reasonably small, at least in theory, but that theory has yet to be properly tested. One advantage is that water can be recycled through the process, in which case a range of chemical impurities gets recycled, where they condense further. There will be a stream of unusable phenolics, and these will have to be hydrotreated somewhere else.
The advantages are reasonably clear. Some of the hydrocarbons produced can be used as drop-in fuels following distillation. The petrol range is usually almost entirely aromatic, with high octane numbers; the diesel range from lipids has a very high cetane number. A number of useful chemicals are made, and the technology should operate tolerably cheaply on a moderate scale, at which point it makes liquids that can be cheaply transported elsewhere. In principle, the technology is probably the most cost-effective option.
The disadvantages are also reasonably clear. The biggest is that the technology has not been demonstrated at a reasonable scale, so the advantages are somewhat theoretical. The costs may escalate with the workup, and the chemicals obtained, while potentially very useful, e.g. for polymers, are often somewhat different from the main ones currently used, so their large-scale use requires market acceptance of materials with different properties.
Given the above, what should be done?  As with some of the other options, in my opinion there is insufficient information to decide, so someone needs to build a bigger plant to see whether it lives up to expectations. Another point is that unlike oil processing, it is unlikely that any given technology will be the best in all circumstances. We may have to face a future in which there are many different options in play.
Posted by Ian Miller on Aug 19, 2013 5:03 AM BST
I devoted the last post to the question: could we provide biofuels? By that I mean, is the land available? I cited a paper which showed fairly conclusively that growing corn to make fuel is not really the answer, because based on that paper, to supply total US fuel consumption you would need to multiply the total area of existing ground under cultivation in the US by a factor of 17. And you still have to eat. Of course, the US could still function reasonably well while consuming significantly less liquid fuel, but the point remains that we still need liquid fuels. The authors of this paper could have got this wrong and made an error in their calculations, but such errors go either way, and as areas get larger, the errors are more likely to be unfavourable than favourable, because the transport costs of servicing such large areas have to be taken into account. On the other hand, the area required for obtaining fuels from microalgae is less than five per cent of the current cropping area. Again, that is probably an underestimate, although, as I argued, a large amount of microalgae could be obtained from sewage treatment plants, which are already in place.
One problem with growing algae, however, is that you need water, and in some places water availability is a problem (although not usually for sewage treatment). Water itself is hardly a scarce resource, as anyone who has flown over the Pacific gradually realizes. The argument that it is salty is beside the point as far as algae go, because there are numerous algae that grow quite nicely in seawater. One of what I consider the least well-recognized biofuel projects from the 1970s energy crisis was carried out by the US Navy, which grew Macrocystis on rafts in deep seawater. The basic problem with seawater far from shore is that it is surprisingly deficient in a number of nutrients, and this was overcome by raising water from the ocean floor. Macrocystis is one of the fastest growing plants; in fact, under a microscope you can watch cell division proceeding regularly. You can also mow it, so frequent replanting is not necessary. The US Navy showed this was quite practical, at least in moderately deep water. (You would not want to raise nutrients from the bottom of the Kermadec Trench, for example, but there is plenty of ocean that does not go to great depths.)
The experiment itself eventually failed when the rafts were lost in a storm, possibly in part because they were firmly anchored and the water-raising pipe could not withstand the bending forces. That, however, is no reason to write the idea off. I know of no new technology that was implemented without improvements on the first efforts at the pilot/demonstration level. The fact is, problems can only be solved once they are recognized, and while storms at sea are reasonably widely appreciated, that does not mean that the first engineering effort to deal with them will be the full and final one. Thus the deep pipe does not have to be rigid, and it can be raised free of obstructions. Similarly, the rafts, while some form of anchoring is desirable, do not have to be rigidly anchored. So why did the US Navy give up? The reasons are not entirely clear to me, but I rather suspect that the fact that oil prices had dropped to their lowest levels ever in real terms may have had something to do with it.
Posted by Ian Miller on Aug 12, 2013 4:55 AM BST
In previous posts I have discussed the possibility of biofuels, and the issue of greenhouse gases. One approach to the problem of greenhouse gases, or at least the excess of carbon dioxide, is to make biofuels. The carbon in the fuels comes from the atmosphere, so at least we slow down the production of greenhouse gases, and additionally we address, at least partially, the problem of transport fuels. Sooner or later we shall run out of oil, so even putting aside the greenhouse problem, we need a substitute. The problem then is, how to do it?
The first objections we see come from what I believe is faulty analysis and faulty logic. Who has not seen the argument: "Biofuels are useless! All you have to do is look at the energy balances and land requirements for corn." This argument is of the "straw man" type: you choose a really bad example and generalize. An alternative was published recently in Biomass and Bioenergy 56: 600-606. The authors provided an analysis of the land area required to provide 50% of US transport fuels. Corn came in at a massive 846% of current US cropping area, i.e. to get the fuels, the total US cropping area would need to be multiplied by a factor greater than 8. Some might regard that as impractical! However, microalgae came in at between 1.1 and 2.5% of US cropping area. That is still a lot of area, but it does seem more manageable.
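The arithmetic scales simply. If supplying 50% of US transport fuels from corn needs 846% of current cropping area, then supplying all of it needs roughly double that, which is the factor of about 17 quoted in the previous post; the microalgae figures scale the same way:

```python
# Land area as a multiple of current US cropping area, scaled linearly
# from the paper's figures for supplying 50% of US transport fuels.
def area_multiple(pct_for_half, fuel_fraction=1.0):
    return (pct_for_half / 100.0) * (fuel_fraction / 0.5)

corn       = area_multiple(846)   # ~16.9x current cropping area
algae_low  = area_multiple(1.1)   # ~0.022x
algae_high = area_multiple(2.5)   # ~0.05x
print(corn, algae_low, algae_high)
```

Linear scaling is itself an assumption, and as argued above probably an optimistic one at these areas, since transport distances grow with the area harvested.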
There is also the question of how to grow things: fuel needed, fertilizer needed, pesticides needed, etc. Corn comes out very poorly here; in fact, some have argued that you put more energy, in the form of useful work, into growing it than you get out. (The second law bites again!) Now, I must show my bias and confess to having participated in a project to obtain chemicals and fuels from microalgae grown in sewage treatment water. It grows remarkably easily: no fertilizer requirements, no need to plant it or look after it; it really does grow itself, although there may be a case for seeding the growing stream to get a higher yield of desirable algae. Further, the algae remove much of the nitrogen and phosphate that would otherwise be an environmental nuisance, although that is not exactly a free run, because when processing is finished, the phosphates in particular remain. However, good engineering can presumably end up with a process stream that can be used for fertilizer.
One issue is that microalgae in a nutrient-rich environment, and particularly in a nitrogen-rich environment, tend to reproduce as rapidly as possible. If starved of nitrogen, they tend instead to use the photochemical energy to build up reserves of lipids. It is possible, at least with some species, to reach 75% lipid content, while rapidly growing microalgae may have only 5% extractable lipids.
That leaves the choice of process. My choice, biased that I am, uses hydrothermal liquefaction. Why? Well, first, harvesting microalgae is not that easy, and a lot of energy can be wasted drying it. With hydrothermal liquefaction, you need an excess of water, so "all you have to do" is to concentrate the algae to a paste. The quotation marks are to indicate that even that is easier said than done. As an aside, simple extraction of the wet algae with an organic solvent is not a good idea: you can get some really horrible emulsions. Another advantage of hydrothermal liquefaction is, if done properly, not only do you get fuel from the lipids, but also from the phospholipids, and some other fatty acid species that are otherwise difficult to extract. Finally, you end up with a string of interesting chemicals, and in principle, the chemicals, which are rich in nitrogen heterocycles, would in the long run be worth far more than the fuel content.
The fuel is interesting as well. If done under appropriate conditions, the lipid acids mainly either decarboxylate or decarbonylate to form linear alkanes or alkenes one carbon atom short. There is a small amount of the obvious diketone formed as well. The polyunsaturated acids fragment and, coupled with some deaminated amino acid fragments, make toluene, xylenes and, interestingly enough, ethylbenzene and styrene. Green polystyrene is plausible.
As you may gather, I am reasonably enthusiastic about this concept, because it simultaneously addresses a number of problems: greenhouse gases, "green" chemicals, liquid fuels, and sewage treatment, with perhaps phosphate recovery thrown in. There are a number of other variations on this theme; the point I am trying to make is that there are things we can do. I believe the answer to the question is yes. Certainly there are more things to do, but no technology is invented mature.
Posted by Ian Miller on Aug 5, 2013 5:17 AM BST
Another month, and my alternative theories on planetary formation are still alive. Most of the information that I could find was not directly relevant, but nevertheless there were some interesting papers.
One piece of interesting information (Science 341: 260-263) is that analysis of the isotopes of H, C and O in the Martian atmosphere by the Curiosity rover, and comparison with carbonates in meteorites such as ALH 84001, indicates that the considerable enhancement of heavy isotopes largely occurred prior to 4 Gy BP, and while some atmospheric loss will have occurred, the atmosphere has been more or less stable since then. This is important because there is strong evidence for many river flows, etc. on the Martian surface following this period, and such flows require a significantly denser atmosphere simply to maintain pressure, and a very much denser atmosphere if the fluid is water and the temperature has to be greater than 273 K. If the atmosphere were gradually ablated to space, there would be heavy isotope enhancement, so it appears that did not happen after 4 Gy BP. If there were such an atmosphere, it had to go somewhere other than space. As I have argued, underground is the most likely, but only if the nitrogen was not in the form N2. Nor would the atmosphere have been lost to a massive collision blasting it away, the reason being that there are no craters big enough formed after the fluvial activity.
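The link between escape to space and heavy-isotope enrichment is conventionally treated as Rayleigh fractionation: if the lighter isotope escapes slightly faster, the isotope ratio of the residual atmosphere climbs as it is depleted. A textbook-style sketch (the fractionation factor 0.98 is an illustrative value, not taken from the paper):

```python
# Rayleigh fractionation: R/R0 = f**(alpha - 1), where f is the fraction
# of the atmosphere remaining and alpha < 1 means the lighter isotope
# escapes preferentially.
def enrichment(f_remaining, alpha=0.98):
    return f_remaining ** (alpha - 1)

print(enrichment(0.1))   # losing 90% of the atmosphere: ~4.7% enrichment
```

This is why a roughly unchanged isotope ratio since 4 Gy BP argues against substantial post-4 Gy BP loss to space, and for the atmosphere having gone somewhere else.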
There was one interesting piece of modeling to obtain the higher temperatures required for water to flow (Icarus 226: 229-250). The Martian hydrological cycle was modeled, and provided there is > 250 mbar of CO2 in the atmosphere, the model gives two "stable" states: cold and dry, or warm and wet, the heat being maintained by an extreme greenhouse effect arising from cirrus ice crystals of size > 10 μm, even with the early "cool sun". One problem is where the CO2 came from, because while it is generally considered that Earth's volcanoes give off CO2, most of that CO2 comes through subduction, and Mars did not have plate tectonics. Whether this model is right remains to be seen.
There was one paper that annoyed me (Nature 499: 328-331). The problem is that if Earth formed from collisions of protoplanetary embryos, the energy would have emulsified all silicates, and the highly siderophile elements (those that dissolve in liquid iron) should have been removed to the core nearly quantitatively. Problem: the bulk silicates have these elements. An analysis of mantle-type rocks shows chalcogen ratios similar to Ivuna-type carbonaceous chondrites, but significantly different from ordinary and enstatite chondrites. The authors argue that the chalcogens arrived in a "late veneer", and that this contributed between 20 and 100% of the water on Earth. What has happened is that the authors carried out a series of analyses of rocks, and to make their results seem credible, Earth had to be selectively but massively bombarded with one sort of chondrite, but none of the more common ones. Why? The only reason they need this rather strange selection is that they assumed the model in which Earth formed through the collision of planetary embryos. If Earth accreted by collecting much smaller objects, as I suggest, the problem of the chalcogens simply disappears. It is interesting that the formation of planets through the collision of embryos persists, despite reasonable evidence that the rocky planets formed in about 5 My or less, that the Moon formed after about 30 My through a collision with something approaching embryo size, and that modeling shows formation through such embryo collisions takes about 100 My. The time required is far too long, and the evidence is that when there is such a collision, the net result is loss of mass, except possibly from the core.
A paper in Angew. Chem Int Ed. (DOI: 10.1002/anie.201303246) showed a convincing mechanism by which hydrogen cyanide can be converted to adenine. This is of particular interest to me because my suggested mechanism for the formation of ATP and nucleic acids is also photochemically assisted. If correct, life would have commenced in vesicles or micelles floating on water.
On a positive note (Nature 499: 55-58), the authors noted that while most stars form in clusters, some are in loose clusters with star densities of less than 100 per cubic parsec. One worry had been that stars born in loose clusters might be the only ones that can retain planets; however, the authors report transits around two sun-like stars in a dense cluster, which shows that planets can survive in such a cluster and that the frequency of planet formation is independent of the cluster density. This makes extrasolar planets very much more probable.
Posted by Ian Miller on Jul 29, 2013 3:08 AM BST
I have another blog, to support my literary efforts, and one of the issues I have raised there is climate change. I originally raised it to show how hard it is to predict the future, yet in some ways this is a topic that is clearer than most, while in others I find it more confusing than most. It seems to me there are a number of issues that have not been made sufficiently clear to the public, and the question here is: what should scientists do about it, individually or, more importantly, collectively? Is this something that scientific societies should try to form a collective view on?
One thing that is clear is that all observable evidence indicates the planet is warming. Are so-called greenhouse gases contributing? Again, the answer is almost certainly yes. The physics is reasonably clear, even if presentations of it to the public often deviate somewhat from the truth. Are the models correct? My guess is no; at best they are indications. Are carbon dioxide levels increasing? Yes. Our atmosphere now holds 400 ppm of carbon dioxide, up from 280 ppm at the beginning of the industrial revolution. On balance, however, I think most of the public are reasonably well informed about what the so-called greenhouse effect is.
I am not convinced, however, that some aspects have made an adequate impact. For me, the biggest problem is sea-level rise. There is considerable net melting of the Greenland ice sheet, and in every one of the last four interglacials there is evidence that the Greenland ice sheet melted and sea levels were 7 meters higher; and that was when carbon dioxide levels were 280 ppm. Now open Google Earth and see how much land disappears if the sea is 7 meters higher. It swamps most port cities and takes out a lot of agricultural land. A very large part of Bangladesh goes; Holland is also in bad shape. Worse, if the climate scientists' more pessimistic greenhouse estimates are correct, 400 ppm will take out a significant fraction of the Antarctic ice sheets, and that could lead to something like a 30 meter sea-level rise. If such a rise occurs, where do all those people go?
One option is to do nothing, wait and see, and if the seas rise, tough luck. But now we have an ethical question: who pays? The people who caused the problem and benefited in the first place, or the Bangladeshis, Pacific Islanders, and others living in low-lying countries? So what are we doing? Apart from talking, not a lot that is effective. We have carbon trading schemes, which enrich the pseudobankers; we measure everything, because some scientists like to measure things; and we devote a lot of jet fuel to holding conferences. Meanwhile, we burn about ten billion tonnes of carbon a year, and for each greenhouse gas the second derivative of its concentration with respect to time, d²[greenhouse gas]/dt², is positive: the levels are not merely rising, the rate of rise is itself increasing. Yet it is the accumulated total, the integral, that matters.
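The distinction between a positive second differential and the accumulated total can be put numerically. A minimal Python sketch follows; the concentration figures are invented purely to illustrate an accelerating trend, and are not real measurements:

```python
# Invented CO2-like annual concentrations in ppm, chosen only to show an
# accelerating trend; these are NOT real measurements.
ppm = [370.0, 372.0, 374.2, 376.6, 379.2, 382.0, 385.0]

# First differences: the annual rise in concentration.
d1 = [b - a for a, b in zip(ppm, ppm[1:])]

# Second differences: the change in the annual rise.
d2 = [b - a for a, b in zip(d1, d1[1:])]

# Every second difference is positive: the rise itself grows each year.
# But the quantity that matters for warming is the accumulated total.
print(d1)
print(d2)
print(ppm[-1] - ppm[0])  # total accumulation over the period: 15.0 ppm
```

The point of the sketch is that cutting the second difference to zero still leaves the first difference, and hence the integral, growing; only the accumulated total tracks what is in the atmosphere.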
We are scientists, so we should be able to recommend something. What do we recommend? To the best of my knowledge, no scientific organization has recommended anything beyond the platitudinous "decrease greenhouse emissions". Yes, what to do is political, and everything I can think of meets general objections; whatever we do, many or most people will be adversely affected. The problem is, if we do nothing, a very large number of different people will be adversely affected. So what do you think scientists or scientific societies should do?
Posted by Ian Miller on Jul 23, 2013 12:38 AM BST
Our thinking on the Universe changed somewhat towards the end of the 1990s, when it was found that Type Ia supernovae at extreme red shift are dimmer than expected. A Type Ia supernova starts out as, essentially, a white dwarf that has burnt its fuel down to carbon-oxygen, but has a companion it can feed off. If it accretes beyond 1.38 solar masses it reignites and explodes, and because it does this at a defined mass from a defined starting position, its luminosity is considered to be standard. Observation has borne this out, at least for nearby Type Ia supernovae. If they are standard candles, the dimness meant that the expansion of the universe has been faster in recent times than in distant times. Thus was born dark energy.
I always had a problem with this: what we see is the outer shell, whose composition will retain a considerable imprint of the companion's, because once the explosion gets under way, whatever is on the surface stays there. That would mean the luminosity should depend on the metallicity of the star. However, when I expressed these misgivings to an astrophysicist, I was assured there was no problem: metallicity had no effect.
Two things then happened. First, I saw a review of the problem by an astrophysicist who left an email address. Second, a paper appeared (Wang et al. 2013. Science 340: 170–173) showing that the luminosity could vary significantly with metallicity, so I emailed the astrophysicist and asked what effect this would have. The reason is, of course, that metals in stars are formed in previous supernovae, so the earlier the star, the fewer cycles of supernovae would have preceded it, and hence the fewer metals it would contain. If so, the earliest supernovae should be dimmer, and if they are dimmer, and not standard, then perhaps there is no accelerating expansion and no dark energy. Maybe that reasoning is wrong, but all I wanted was to find out.
Now, the issue for me lay in the response. I was told unambiguously that the lack of metallicity had been taken into account, and there was no problem. This raises a dilemma. Either the lower luminosity resulting from lower metallicity was well known, or it was not. If it was not, how was it taken into account? You cannot account for an effect of which you are unaware, in which case the response was a bluff. If it was well known, then how does someone get a publication in a leading, well-peer-reviewed journal by announcing it as a new discovery? Surely the paper would have been rejected; surely the peer reviewers would have known.
What disturbs me is that there must be some fundamental scientific dishonesty at play here. I do not have the expertise in that field to know where it lies, but I find it deeply concerning. If scientists are not honest about what they know and what they report, the whole purpose of science fails. Just because it is fashionable to believe something does not make it true. Worse, there are some issues, such as global warming, where scientists have to take the public with them. If scientists start bluffing when they do not know, then when they are caught out, as they will be sooner or later, the trust goes. What do you think?
Posted by Ian Miller on Jul 15, 2013 12:16 AM BST
One of the most heated and prolonged debates in chemistry occurred over the so-called non-classical 2-norbornyl cation. Specifically, exo-2-norbornyl derivatives solvolysed about 60 times faster than the endo ones. The endo derivatives behaved more or less as you might expect for an SN2 mechanism, while the exo ones behaved as if SN1, but with an additional surprise: the nucleophile was about as likely to end up on C6 as on C2. There were two explanations. Winstein proposed a non-classical ion, in which the electrons of the C1-C6 bond partly migrated to form a "half-bond" between C2 and C6. Thus was born the "non-classical carbonium ion". Brown, on the other hand, produced a sequence of papers arguing that no such entity was needed, and that the observations could be adequately explained by classical structures and, as often as not, by the use of proper reference materials.
That last comment refers to a problem that bedevils a lot of physical organic chemistry. You measure a rate of reaction and decide it is faster than expected; but what was expected? That can border on being a matter of opinion, because the structure you are working with differs somewhat from the standard reference points. The problem is even worse than you might think. I reviewed some of the data in my ebook Elements of Theory 1, and suggested that the most compelling evidence for Brown's argument was that changing the substitution at C1 made very little difference to the rate of solvolysis at C2, from which Brown concluded there was no major change of electron density at C1, as there should be if the C1-C6 bond became a half-bond, as Winstein's structure required. As it happened, Olah also produced evidence that falsified Brown's picture, and, as I remarked at the end, each falsified the other, so something was missing.
In the latest issue of Science (vol. 341, pp 62–64), Scholz et al. have produced an X-ray structure of the 2-norbornyl cation, made by reacting aluminium tribromide with exo-2-norbornyl bromide, and what we find is equal C1-C6 and C2-C6 distances, as required by the non-classical ion. These bonds are also long, at about 180 pm. Case proved, right? Well, not necessarily. The first oddity is that the C1-C2 distance is 139 pm, about the same as a benzene bond. Which brings us back to Brown's "falsification" of the non-classical ion: while the C1-C6 bond is dramatically weakened, the C1-C2 bond is strengthened, so the electron density about C6 may not be much changed, even though the bond Brown thought he was testing was half-broken. Nobody picked that up at the time.
What do I mean by "not necessarily"? It is reasonably obvious that this is not the classical structure Brown perceived. True, but there are two other considerations. The first is that to obtain a crystal structure, the species must sit in an energy well, which means it does not necessarily represent the activated state. To give an example, the cyclopropylcarbinyl system would presumably give the ion Cyc-CH2+, would it not? The trouble is, the system rearranges, and the observations are consistent with that ion as well as with a cyclobutyl cation and an allylcarbinyl cation; the actual cation is probably something intermediate. So the rate acceleration may not be caused by the intermediate cation, but by whatever happens along the reaction path. If this cation were the cause of the rate acceleration, it should operate on the endo derivative as well. Yes, the mechanism is different, but why? A product available to both cannot be the reason; there has to be something that drives the exo derivative, specifically, to form the cation. My explanation for that is actually the same as the one that drives formation of the cyclopropylcarbinyl cation.
The second consideration is the structure itself: the two bonds to C6 are equal, and C1-C2 is remarkably short. There is a further way this could arise. Suppose we follow Winstein and break the C1-C6 bond. What Winstein, and just about everybody else, assumed is that it is replaced by two half σ bonds; but suppose no such σ bonds form. Instead, rehybridize C1 and C6 so that each carries a p orbital. With two p orbitals and a carbenium centre we have the essence of the cyclopropenium cation, without two of the framework σ bonds. That gives a reason why the cation is so stable: on this interpretation it is actually aromatic, even if two of the bonds are only fractional π bonds.
Is that right? If it is, there is a similar reason why ethylene forms edge-complexes with certain cations. Of course it may not be correct, but as a hypothesis it seems to me to have value, because it suggests further work.
Posted by Ian Miller on Jul 8, 2013 4:11 AM BST