Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a fresh perspective?

Perhaps my challenge, "What is the most illogical thing you can associate with standard quantum mechanics?" was not to everybody's taste. Sorry, but I cannot help feeling that ignoring problems is not the way to enlightenment. Also, part of what follows is critical to my question: where do you expect violations of the Woodward-Hoffmann rules?
 
Consider the following:
(1)  You cannot know something about a phenomenon until you observe it.
(2)  Prior to observation, a state is a superposition of all possible wave functions.
What I consider illogical is the assertion that the superposition of states is a reality, and that when an observation is made there is a physical event known as the "collapse of the wave function". Such a superposition can never be observed because, by definition, when it is observed it collapses to one state. The electron either is, or is not, there. So why does this assertion persist? (I have no idea on this one. Anyone, please help.)
 
The most well-known example is the Schrodinger cat paradox, in which a box contains a cat and a radioactive source coupled to a device that releases hydrogen cyanide. If no particle is emitted, the cat is alive; if one is emitted, the cat is dead. Before we observe, we do not know whether a particle has been emitted, and both states may be equally probable. That is described as a superposition of states, and according to the paradox, a consequence is that the cat is also in a superposition of states, neither dead nor alive.
 
The problem involves the conclusion that the square of the amplitude of a wave function gives the probability of making an observation. If the objective is to compute the probability of something happening, and the states are taken to represent probabilities, then the states have to be superimposed. The probability of something that exists, such as the cat, must be 1; the cat is either dead or alive, so the probabilities have to be summed. The same happens with coin tossing: the coin will land either heads or tails with equal probability, hence both states must be considered to make a prediction.
 
Herein lies a logic issue: the fallacy of affirming the consequent. While it is true that "if we have a wave function, then from the square of its amplitude we can obtain a probability", it does not follow that if we can ascribe a probability, then we have a wave function. What is the wave function of a stationary coin, or a stationary cat? It most certainly does not follow that if we have two probabilities, the state of the object is defined by two superimposed wave functions. A wave function may characterize a state, but a state does not have to have a wave function.
 
A wave function must undulate, and to be a solution of the Schrodinger equation it must have a phase defined by exp(2πiS/h), where i is the imaginary unit and S is the action, defined as the time integral of the Lagrange function, which can be considered as the difference between the kinetic and potential energies. (The difference is required so that motion complies with Newton's second law.) As can be seen, a complete period occurs whenever S = h, which is extremely small. For the spinning coin, any wave function returns to its initial value so quickly that the phase is irrelevant, and classical mechanics applies. In this light, the Schrodinger cat paradox as presented says nothing about cats, because there is no quantum issue relating to a cat.
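 
To get a feel for the scale, here is a minimal sketch in Python. The coin's mass, radius and spin rate are my own illustrative numbers, not anything from the argument above; the point is only the order of magnitude of S/h.

    import math

    h = 6.626e-34            # Planck's constant, J s

    # Hypothetical coin: 5 g, 1 cm radius, spinning at 10 revolutions per second
    m, r, revs = 0.005, 0.01, 10.0
    I = 0.5 * m * r**2       # moment of inertia of a disc about its axis
    omega = 2.0 * math.pi * revs
    T = 0.5 * I * omega**2   # kinetic energy; for free rotation the Lagrangian is just T

    t = 1e-3                 # one millisecond
    S = T * t                # action accumulated in that millisecond
    print(f"S/h = {S / h:.1e} complete periods per millisecond")
    # ~7e26: any wave function returns to its initial value absurdly often,
    # so, as argued above, classical mechanics applies to the coin.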
 
What about the particle? When Aristotle invented logic, he allowed for such situations. To paraphrase: suppose one morning there is a ship in harbour, fully laden. Where will it be tomorrow? It could be somewhere out to sea, or it could be where it is now. What it will not be is in some indeterminate state until someone observes it, because it is impossible for something of that size to "localize" without some additional physical effect. Aristotle permitted three outcomes: here, not here, and cannot tell, the last in this case because the captain has yet to make up his mind whether to sail. Surely "cannot tell" is the reality in some of these quantum paradoxes?
 
As a final comment, suppose you consider that with multiple discrete probabilities there have to be multiple wave functions. The probabilities are additive and must total 1. The wave amplitudes are also additive, and here we reach a problem, because the sum of the squares does not equal the square of the sum. Mathematically, we can renormalize, but what does that mean physically? In terms of Aristotle's third option, renormalization is simply a procedure for computing probabilities. However, it is often asserted in quantum mechanics that all this is real and that something physically happens when the wave function "collapses".
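 
The arithmetic is easily made explicit. A minimal numerical sketch, treating the two amplitudes as plain real numbers (an illustration only, not a full wave treatment):

    import math

    p1 = p2 = 0.5                           # two equally probable outcomes; probabilities sum to 1
    a1, a2 = math.sqrt(p1), math.sqrt(p2)   # amplitudes whose squares give those probabilities

    print(a1**2 + a2**2)     # 1.0 -> the sum of the squares is a proper probability
    print((a1 + a2)**2)      # 2.0 -> the square of the sum is not

    # Renormalization: divide the summed amplitude by sqrt(2) (valid for
    # orthogonal states) so the square is 1 again. The division is a purely
    # computational step; what it means physically is exactly the question above.
    print(((a1 + a2) / math.sqrt(2))**2)    # 1.0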
 
Now, you may well say this is unnecessary speculation, and who cares? Quantum mechanics always gives the correct answer to any given problem. Perhaps, but can you now see why there can be exceptions to the Woodward-Hoffmann rules, and where they will be? The key clue is above.
Posted by Ian Miller on Aug 23, 2011 11:53 PM BST
First, the simple answer to my challenge, which was whether there are exceptions to the Woodward-Hoffmann rules. The exception I wish to discuss involves the cyclization of pentadienyl carbenium ions to cyclopentenyl carbenium ions [Campbell et al., J. Am. Chem. Soc. 91: 6404-6410 (1969)]. Part of the reason for the challenge was to illustrate a further point: quite a bit of critical information is out there, but if it is not quickly recognized as important, it gets lost. It can nevertheless be found, because there is usually someone out there who knows about it. Such information could be recovered more readily if there were a web facility for raising such questions. To succeed, it would have to be easy to use, and abuse would have to be prevented, so that scientists in general would cooperate.
 
A further point I was trying to make was to emphasise logic. This was a problem at the end of my ebook Elements of Theory, and these problems had the structure that the reader had to try to come to an answer, then analyse my given answer, parts of which were not necessarily correct, the idea being to promote critical thinking. Part of the answer I gave included what I thought was an innocent enough deception, namely that the Woodward-Hoffmann rules were violated because of extra substitution. That, of course, has a logical flaw: without the substitution you cannot tell whether the rules are violated or not. The reason for mentioning this here is that the same statement is made in the abstract of the example that Chris Satterley found.
 
So, for a further little challenge, what is the most illogical thing you can associate with standard quantum mechanics? My answer soon.
 
The cyclization of pentadienyl carbenium ions could be followed because there was a methyl group at each end, and the stereochemistry was known. The deviations from the W-H rules were explained in terms of the steric clashes that occurred as the ends came together: H-H, H-Me and Me-Me. The H-H clash produced more or less complete agreement with the W-H rules, but the Me-Me clash led to an almost 50-50 product distribution between the two possible cyclopentenyl ions. There is little doubt that a steric clash occurred as the carbenium ion approached the geometry needed for cyclization; however, at least one further thought should be considered. The pentadienyl carbenium ion should be planar, so the methyl-methyl clash does not favour any particular rotation. A 50-50 mix of products suggests that when the ends come together, random vibrations let the methyl groups slide one way or the other, and both ways cyclize. This is the important bit: if the W-H rules were absolute, the prohibited route would simply not react; the material would remain in the pentadienyl form, reopen, come together again, and react only if and when it accidentally reached the correct configuration. The steric clash would then make both reactions slower. As it happened, in the actual experiments all cyclizations were too fast for that aspect to be observed.
 
However, while additional substitution was present in the ring-opening reactions of benzocyclobutene, the strain issue is not quite relevant there, because by the time the substituents generate a serious clash, the system has proceeded far enough along the reaction coordinate that the extra strain simply drives the ring opening faster.
 
Which brings me to the second little challenge: is there any chemical theory that offers a rational explanation for these exceptions to the W-H rules? I must confess that back in the late 1960s I simply assumed that this exception merely meant the W-H rules were preferences which, with a little discouragement, could be over-ruled. I am now convinced that that was just lazy; when nature provides exceptions like this, it is trying to tell us something.
Posted by Ian Miller on Aug 16, 2011 6:16 AM BST
My thanks for the two responses to my challenge, and also for the comment on last week's blog. I am going to leave the challenge open for another week, and add a further comment on biofuels.
 
There is little or no difference between a plan to do something and a theory on what should be done, and for producing biofuels there is no shortage of alternative theories. The question is: should some basic analysis be done on the various options before we send vast sums of money after them? One argument is that until you do the research, you cannot analyse the situation well enough to come to a conclusion. I would agree that you cannot make close decisions, and as noted in my previous blog, there will probably be a number of good solutions implemented. But surely we can avoid some obvious mistakes? Unfortunately, if the analysis is weak, it can itself be a mistake to "avoid obvious mistakes" that are not in fact mistakes. Notwithstanding that, I cannot help feeling that much of the money devoted to biofuels research is not well spent. There is an adage: you can be fast, you can be cheap, you can be efficient; choose no more than two. Available capital is scarce relative to the size of the problem, and we want the solution to work. Accordingly, we should be more reluctant to jump onto the latest fashionable exercise.
 
The most obvious biofuel is ethanol, and the production of ethanol by fermentation of sugar is a technology that has been operating for about 6,000 years, so we have learned something about it. Currently, fuel ethanol is produced in Brazil from sugar cane and in the US from corn, but growing crops for ethanol is not considered a likely solution, if for no other reason than that with an expanding population, food becomes the priority. Of course, waste food can always be converted to fuel, but the scale of the fuel requirement means the bulk of it must come from somewhere that does not compete with food production.
 
A more likely answer is to produce ethanol from cellulose, by first hydrolysing the cellulose to glucose and then fermenting that. Because "biotech" has been very fashionable, this has received a lot of funding, but I believe any reasonable analysis will show the route is suboptimal. Most plant biomass contains polyphenols, such as tannins and lignins. The question is, why? The usual answer is that lignin binds cellulose and lets trees grow tall. I would argue that is a side-benefit, and that the original reason for polyphenols being incorporated into plant material is to make it more difficult to digest. Plants that get eaten less reproduce more often, and the trait is selected for. In other words, lignocellulose has been optimized by evolution to avoid being digested by enzymes. Yes, nothing is perfect, and we can find enzymes that will hydrolyse woody biomass, but the hydrolysis is slow. Ethanol manufacture would seemingly commence with immense tanks of wood-chip slurry that take days to process. If all the cellulose is converted to glucose, the lignin, which is about a third of the mass, is left as a finely divided and very wet sludge that is difficult to filter and will retain a reasonable fraction of the glucose. In practice, much of the crystalline cellulose will not hydrolyse at all, but will merely add to the sludge.
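 
To put rough numbers on the yields at stake, here is a hedged mass-balance sketch in Python. The 50% cellulose fraction is my assumption of a typical figure; the one-third lignin comes from the paragraph above; the stoichiometry is the standard hydrolysis and fermentation chemistry.

    wood = 1000.0                        # kg of dry wood chips
    cellulose = 0.50 * wood              # assumed cellulose fraction
    lignin = wood / 3.0                  # "about a third of the mass"

    glucose = cellulose * 180.0 / 162.0  # C6H10O5 + H2O -> C6H12O6 (hydrolysis)
    ethanol = glucose * 92.0 / 180.0     # C6H12O6 -> 2 C2H5OH + 2 CO2 (fermentation)

    print(f"theoretical ethanol: {ethanol:.0f} kg per tonne of dry wood")   # ~284 kg
    print(f"lignin left behind:  {lignin:.0f} kg, as a wet sludge")
    # Real yields are lower still: crystalline cellulose resists hydrolysis,
    # and some glucose stays trapped in the sludge, as argued above.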
 
Is this the correct way to go about the problem? It seems to me that theory suggests it is not, yet in recent times more money appears to have been sent chasing this option than any other single option. Perhaps someone knows something I do not, and I am wrong in this assessment, but if so, whatever that something is, it is not widely known.
 
Posted by Ian Miller on Aug 4, 2011 1:02 AM BST
So far there has been no response to my challenge, and in particular nobody has posted a failure of the Woodward-Hoffmann rules. Why not? One answer would be that there are none, so there is nothing to post. Then there is the possibility that there are examples, but nobody can find them. More depressingly, perhaps nobody cares, and finally, most depressingly of all, at least for me, perhaps nobody is reading these blogs.
 
Meanwhile, I have had a topic request: what about transport fuels? Conceptually, a plan is the same as a theory, and in this case there are plenty of theories as to what we should do, so the question is, how do we choose which to follow? To slightly amend a quote from General Wesley Clark: there are two types of plans, those that won't work and those that might; we have to select from those that might work and make them work. Accordingly, the art of selection involves logical analysis of the various plans/theories. I must start with a caveat: I worked in this area during the previous oil crisis and am working in it now, and while I have considerable experience, it is in one area. The reader must accept that I may show bias in favour of my own conclusions.
 
My first bias regards methodology. A particular source of irritation is the request, when proposing a process, for an energy balance. In one sense, from time symmetry, I know energy is conserved, so the energy always balances. If the question means, how much do you get out usefully, the answer is that the second law says, less than you put in. However, even if you account for this (usually with varying degrees of optimism!), it is of no use in selecting a process. When you enter certain intellectual areas, there are rules to follow. Just as in chemistry it is pointless to derive a theory while ignoring atoms, in economics the only relevant means of comparing processes is money, because money alone is intended to balance all the different factors. It may not be perfect (environmentalists will tell you that no proper cost is placed on environmental protection) but it is all we have. To illustrate why, suppose you wanted to go somewhere and someone offered you a megajoule: would you prefer a container of gasoline, or a mass of water at 35 degrees Centigrade?
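 
To put numbers on that choice, a back-of-envelope sketch in Python; the energy density of gasoline, the ambient temperature and the heat capacity of water are typical values I have assumed:

    E = 1e6                      # one megajoule, in joules

    # Gasoline at roughly 34 MJ per litre
    print(f"gasoline: {E / 34e6 * 1000:.0f} mL")             # ~29 mL

    # Water at 35 C against surroundings at 15 C: mass holding 1 MJ of heat
    c, dT = 4186.0, 20.0         # J/(kg K), temperature excess
    print(f"water: {E / (c * dT):.0f} kg at 35 C")           # ~12 kg

    # Carnot bound on work extractable from that warm water (308 K vs 288 K)
    w_max = E * (1.0 - 288.0 / 308.0)
    print(f"max work from the water: {w_max / 1000:.0f} kJ") # ~65 kJ, far less in practice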
 
When forming a theory to address a commercial issue, the first question is, what is the size of the market? According to Wikipedia, world oil production is 87.5 million barrels a day (13.9 billion litres per day). To put that into perspective, I once saw a proposal to build a small demonstration plant that would convert 35 t/day of cellulosic material into biofuel. If successful, it might have produced up to 11,000 litres per day. Yes, it is a small plant (designed for a specific source of raw material) and you can imagine increasing its size, but whatever factor you multiply by, the plant is going to have to be repeated an enormous number of times; the arithmetic is sketched below. It is most unlikely that anybody can come up with a single raw material to supply this volume. This brings a first conclusion: there is no magic bullet. If we are to solve this problem, contributions will have to come from a number of different sources. Accordingly, we are not looking for a unique plan.
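 
The arithmetic, using the figures quoted above (the plant count is the only derived number):

    world_demand = 13.9e9      # litres of oil per day
    plant_output = 11_000      # litres of biofuel per day from the 35 t/day plant
    print(f"plants needed: {world_demand / plant_output:,.0f}")
    # ~1.3 million such plants: hence no single raw material, and no magic bullet.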
 
An immediate second conclusion is that there is no overnight solution. We have built up a system over a century and a half on the assumption that oil will be cheap and widely available, and that has led to fixed infrastructure that cannot be replaced overnight. Fuels have evolved from simple distillates to high-performance fuels such as JP-10, which appears to have tetrahydrodicyclopentadiene as its major component. The very large modern oil refineries are extremely complicated systems built on experience accumulated over that period. Different processes have to start smaller. Furthermore, it will take at least a decade, and probably longer, even to get a new process to a successful demonstration plant.
 
So, what processes might work? It may come as no surprise that several quite promising processes were developed, at least partially, during the previous oil crisis, then abandoned once the price of oil fell. Over the next few weeks, interspersed with other subjects, I shall give my views on what should be done. In the meantime, as a possible further challenge, how many of those earlier proposed processes are you aware of? My guess is that most of the information gained then is lost, other than possibly residing in the heads of retired scientists and engineers. This raises another very important point of economics: there is an enormous amount of work to be done to solve this fuel problem, and the economic system appears to be creaking under government debts incurred in times of unparalleled prosperity, so can we afford to waste what, in principle, we already have?
Posted by Ian Miller on Jul 23, 2011 3:10 AM BST
While I have been advocating efforts to find alternative theories, I do not wish to give the impression that I think most theory is wrong. There has to be a reason to seek an alternative theory; without grounds it is simply a waste of time, as illustrated by the periodic attempts to defy the second law of thermodynamics and build a perpetual motion machine. Theories may come and go, but I have great faith in the lasting value of the second law.
 
Most other theories are far less robust, but the question then becomes, what are reasonable grounds for seeking new theories, or at least revising current ones? One obvious answer is a discrepancy between theory and observation. That is fine, except it raises a problem: how does the potential theoretician find the discrepancy? The experimentalist who finds it may well report it, but the experimentalist wants to get published and not buy a fight with referees, so if a discrepancy is reported it is rarely highlighted; it tends to be lost somewhere in the discussion, perhaps embedded as casually as possible two-thirds of the way through a densely written paragraph. Worse, many discrepancies, when first found, are ambiguous in interpretation: because they were not sought, the experiment was not designed to demonstrate what nature is trying to tell us, but rather to test some other hypothesis. Accordingly, the potential baby is lost in a sea of bathwater.
 
The reader of this blog should not simply take my word for that; so far it is simply an assertion, and an illustration is required. My ebook Elements of Theory ends with 73 problems, so my challenge to you is: try one. (If you have read the book, thank you, but this challenge is not for you.)
 
Woodward and Hoffmann stated that there are no exceptions to their rules. One (somewhat simplified) reason why this should be correct is as follows. The signs of the wave functions correlate with the signs of the amplitudes of the waves, and the square of the amplitude, within the Copenhagen Interpretation, gives the probability of an event occurring. If plus overlaps with plus, there is reinforcement; if plus overlaps with minus, there is cancellation, and the square of zero is zero. With zero probability, that event cannot occur. Accordingly, at a first level of analysis, only permitted products can form. In practice we do not expect perfect wave interference, so very minor contributions of the wrong products are possible.
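 
As a toy illustration of that amplitude arithmetic (illustrative numbers only, not actual overlap integrals):

    plus, minus = 0.7, -0.7     # illustrative overlap amplitudes
    print((plus + plus)**2)     # 1.96 -> reinforcement: the allowed product forms
    print((plus + minus)**2)    # 0.0  -> cancellation: the forbidden product cannot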
 
Given that, here is the challenge. First, have any exceptions been found? This is important, because if so, it would show that something is wrong with theory. However, it does not follow that anybody finding such exceptions would recognize their significance, which means that finding them in the literature, if they exist, could be a real challenge. It may be that the only real way to find them is to ask as many people as possible, to dredge their memories and experience, so to speak. The second part of the challenge is, is there any theoretical reason why there could be an exception?
 
In a future blog, I shall give my answer to these questions, but before that I am particularly interested in other chemists' opinions, and in particular, any observations of which I am unaware.
Posted by Ian Miller on Jul 15, 2011 2:20 AM BST
Chris Satterley raised a number of good points in his comment on my last blog, and I shall try to respond to some of them in the future; however, the point I wish to discuss here is: is there a demand for new theory from synthetic chemistry?
 
I believe there is. When I commenced my PhD, there were a good number of "old" reactions available to a synthetic chemist, and the mechanisms of these were "understood"; the quotation marks are there because, while they were understood in general, I suspect there are still features that require looking into. Since then, however, there has been a bewildering number of new reactions, and these appear to be discovered at quite an alarming rate (unless you are a synthetic chemist reading about something that unblocks a problem!). I believe the main difficulty in rationalizing these reactions is that, without the salient aspects being identified and ordered by the synthetic chemists, nobody has sufficient information. The problem, in my opinion, is that the information is so dispersed, and more importantly, scattered across a number of specialties, that nobody can get at more than a minute fraction of it.
 
Let me provide an example of what I mean. A rather impressive synthetic method recently appeared in JACS 133: 9724-9726, in which indium bromide, or better, indium iodide, was used as a catalyst to condense chiral propargylic alcohols into polycyclic products with high yield and stereoselectivity. Now, why pick on indium iodide? Would that be one of your picks, if you had not read the paper? One of the authors was E. J. Corey, and I am ready to bet that he did not go through the store picking catalysts at random. When he wrote "we speculated that . ." I believe his reasoning was a lot better than that. The reason he gave was that the indium salt might, by virtue of its vacant 5s and 5p orbitals, coordinate with the acetylenic unit through its pi(x) and pi(y) orbitals while also coordinating with the propargylic oxygen. Indium was selected because its s and p orbital energies are closer than those of, say, aluminium.
 
I suspect there is more to it than that. There is no doubt whatsoever that Professor Corey has an incredible knowledge of organic synthesis, and it would be interesting to know why he focussed on indium. There is little doubt that he recognized that some form of Lewis acid would be desirable, and I would expect that some other possible salts, including indium chloride, were rejected on solubility grounds.
 
To me, the reasoning he gives leaves a number of puzzling thoughts. Superficially, he appears to be specifying empty sp2 orbitals, although maybe he is not. If he is, and if we consider the electrons to have wave characteristics to their motion, there appear to be some strange refractive issues, although since we do not know the configuration he was considering, that may not apply. Even putting that aside, if Professor Corey's reasoning is correct, surely thallium could be better (if closeness of orbital energies is relevant), or possibly gallium (if the Lagrangian density in the orbital is relevant); some other elements were mentioned, including silver and gold, but these two did not appear to be. (There is an implication that some further possible catalysts were tried and were unsuccessful, but these failures were not identified. That is unfortunate, because the failures are also important from a theoretical point of view.) Of course, the paper describes a synthetic method, and what I am discussing was presumably outside Professor Corey's interest (no criticism is implied). The only point I am trying to make addresses Chris' question: there are theoretical aspects involved in such synthetic methods which, if unravelled explicitly, might permit more general progress in synthetic chemistry, and would certainly help chemists who carry out syntheses to make materials, rather than to develop synthetic methods as such.
Posted by Ian Miller on Jul 2, 2011 5:00 AM BST
The decision to do something is always preceded by a theory, which usually involves a proposition of the "if … then" form, i.e. if I do A and the conditions G apply, then the outcome P will follow. The premise of this blog is that there are usually alternatives B, C, D . . .  What we want is for our politicians to make the best choice from the set of alternatives, but if scientists want decisions to be made based on evidence, then they must provide the necessary information to the decision makers and the public in a form they can understand. I think you are a little deluded if you think that happens often enough.
 
Consider the issue of carbon dioxide storage. On one side, the coal industry states: if all the carbon dioxide made by burning coal is buried permanently, then there is no adverse environmental effect from that carbon dioxide, therefore we can continue burning coal. Based on what is available to the general public, is this sound policy or a case of "Out of sight, out of mind"?
 
Compound propositions such as that are usually bad; the proper procedure is to separate the steps, then draw conclusions by procedure rather than lurch to them. Suppose we rewrite the first part as: if all the carbon dioxide made by burning coal is buried permanently, then there is no adverse environmental effect from that carbon dioxide entering the atmosphere. I think that is justifiable, but there is a subtle difference.
 
One problem involves the word "permanent". Where do we store it? If it is deep enough, say approaching 1 km, the pressure will convert the gas into a supercritical fluid. The ideal situation is containment by non-permeable rock, but how easy is it really to find that? Something that traps heavy oil does not necessarily trap supercritical carbon dioxide. Can we guarantee it will not travel through porous rock, or, if an old oil structure is used, through an abandoned well whose cap fails? What about faults or breaks in the rock structure? I would question anyone who states that we know where all the faults are. As evidence, Christchurch in New Zealand is undergoing an extremely prolonged sequence of earthquakes, due to a set of faults that were completely unknown two years ago. They are being found now because they are active; inactive faults can go undetected for a very long time. Suppose the carbon dioxide forms a clathrate with water, and suppose the clathrate can move to a region of lower pressure. Could the resultant decomposition of the clathrate widen a gap, and thus accelerate leakage?
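 
A rough check of the supercritical claim, using the standard critical constants of carbon dioxide and my own assumed hydrostatic and geothermal gradients:

    P_crit, T_crit = 7.38e6, 31.1     # Pa and deg C, critical point of CO2
    rho, g = 1000.0, 9.81             # water density kg/m^3, gravity m/s^2
    grad, T_surface = 0.030, 15.0     # assumed 30 C/km geothermal gradient, deg C/m

    depth_P = (P_crit - 1.013e5) / (rho * g)
    depth_T = (T_crit - T_surface) / grad
    print(f"critical pressure reached at about {depth_P:.0f} m")     # ~740 m
    print(f"critical temperature reached at about {depth_T:.0f} m")  # ~540 m
    # Both thresholds are passed well before 1 km under these assumptions,
    # consistent with the statement above.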
 
The ultimate in containment would be for the carbon dioxide to react with rock to form carbonates. Olivine is one of the better weathering rocks, and peridotite (an olivine/pyroxene blend) in Oman apparently weathers very quickly. The problem, of course, is that basalt is not one of the easiest rocks in which to find storage space, it is somewhat unyielding, and the world's emissions are not centred on Oman.
 
Does this process make sense other than as a special case? Old oil fields are often cited as disposal points, but they are often far from coal-burning plants, and very frequently a number of test wells will have been drilled and sealed with unknown reliability. The oil is usually under sandstone, which usually comprises the remains of weathered rock. The collection, transport, compression and injection of the carbon dioxide require considerable energy (I have seen an estimate that almost 30% more power stations would be required), and the construction of the necessary pressurized piping also generates carbon dioxide.
 
The issue here is: if the public decides to reduce carbon dioxide emissions to the atmosphere, is this concept a sensible part of the solution, or is it simply "smoke and mirrors"? Are scientists making all the relevant information available to the public on this matter? If a member of the public wants to find out what is critical, can he or she? You may protest that my analysis above is superficial, but if I cannot readily find what is needed to reach a proper conclusion, how can the public? If you argue that the public does not matter, then you must argue against democracy. In my opinion, if science is to make a proper impression on our future, scientists have to lift their game.
Posted by Ian Miller on Jun 21, 2011 11:05 PM BST
If your goals include getting rich, getting promoted, or winning prizes, time spent developing theories appears to be time wasted. Nevertheless, if you are really interested in science, there are two good reasons to do so. The first is that you do not need expensive equipment (although you certainly need access to a good library) and you can do it in your spare time; Einstein did his most productive work as a patent clerk. The second is that while all scientists experience emotional highs (success!) and lows (oops, another failure!), for the experimentalist these usually come at the end of the experiment, whereas for the theoretician the highs can come at almost any time later, and from the most surprising sources. Further, after a time you do not suffer lows: if you are found to be wrong after a reasonable length of time, at least you can persuade yourself that you persuaded someone to do something they would not otherwise have done, hence you have advanced science a little, even if not for the best of reasons. I hope the reader will forgive me if I illustrate how unexpected emotional rewards can come, with something that happened to me. It arose from what I regard as a most unexpected experimental result, one that goes to the heart of quantum mechanics.
 
Up until last week, I believe most physical scientists would have stated that the wave function is not a physical entity, but rather a mathematical construct whose square represents a probability distribution associated with the outcome of a measurement. After all, how else does renormalization make sense? You cannot renormalize a bucket of water! That was before Lundeen et al. (Nature 474: 188-191) published a remarkable achievement: they measured the wave function of photons, determining the real part through a rotation of the polarization and the imaginary part through the ellipticity. Effectively, the authors say, you could construct a "wave function meter", and they propose that you could measure the wave function associated with electron motion in atoms and molecules.
 
I have two reasons to be excited. The first is that for both the real and imaginary parts of the wave to be measurable, surely it has to be something, and not simply a construct. As the cover of Nature said, "Direct measurement prompts the question, what is it?" The relevance to me is that my ebook has over 70 problems as exercises in the development of theory, and in the more difficult ones there is a sequence that develops yet a further interpretation of quantum mechanics (there are at least six others), in which the reader is offered the chance to obtain quantum mechanics from one deeper principle, including deriving the Schrodinger equation, the Uncertainty Principle and the Exclusion Principle, and also to see why the Complementarity Principle could in principle be got around, which is what these authors did. In the solutions I give, the wave is a physical entity, the square of the amplitude of which represents the energy associated with the particle (the square of the amplitude of any other wave represents the energy associated with the vibration), although it probably vibrates in additional dimensions, which takes it towards the concepts of string theory. This theory may or may not be correct, but as far as I am aware it is the only one in which the wave function is "something", with a clearly physical and determinable variable associated with it.
 
The second reason is that I am an advocate for atomic orbitals of multi-electron atoms that differ from the usual wave functions, which correspond to excited states of hydrogen [Aust. J. Phys. 40: 329-346 (1987)], in that the ground-state orbitals do not have radial nodes (thus solving the problem of how electrons cross them!) and the excited states have the nodes required for excitation added to them. I doubt that the methodology outlined by Lundeen et al. will really work on atomic orbitals, but if one thing is clear in science these days, it is that if there is no sound reason why something cannot be done, sooner or later it will be. There is real excitement in the realization that something you proposed could be proven true one day. (Yes, it could be proven false, but that is just one of those chances you have to take.)
 
The point I want to make is that for young scientists starting their careers, while theory may not help your social standing, especially in the short term, it offers the possibility of quite unique feelings. And if you think there is nothing left to theorize about: if something as fundamental as the standard interpretation of the quantal wave function can be overturned by an experiment, so can a lot of other "tablets of stone". You may or may not be right, but you will stay interested.
Posted by Ian Miller on Jun 14, 2011 3:25 AM BST

The logic behind climate change policy seems to be: greenhouse gases trap heat; the planet is warming because of the heat trapped by these greenhouse gases; therefore, if we reduce our greenhouse emissions, we can maintain our current lifestyle, more or less. There is little doubt that greenhouse gases trap heat, and little doubt that the planet is warming, but are these really the issues? In my opinion, the real questions are: what are we going to do about it, and is current science going about answering that question in the right way?
 
In my first blog I mentioned that the Greenland ice sheet had melted in the previous four interglacials, with a corresponding rise of sea level of about seven metres. What I did not mention was that this sea-level rise began approximately 10,000 years after the demise of the Canadian ice sheet. If this cycle is a repeat of the last one, we should expect sea levels to start rising about now, and they seem to be doing that.
 
So, what can we do about future rising sea levels? What science should be doing is providing evidence that falsifies the above conclusions, or, failing that, recommending that we move our cities or work on alternative options. What is science actually doing? The main efforts seem to be in modeling that gives uncertain predictions, and in furiously gathering data, measuring various emissions, many of which we cannot do much about anyway. These are followed by calls to reduce emissions, an approach that reminds me of the self-flagellating penitents in Bergman films set in mediaeval times. The message seems to be that if we maximize the punishment for our errant ways, somehow our sins will be forgiven. In my opinion, all self-flagellation achieves is a sore back, which appears to be slightly more than the appeals to reduce carbon emissions are currently achieving. Carbon emissions apparently increased by 5% last year, and with the savaging of nuclear power following what appears to be a certain degree of incompetence, they are almost certain to increase further. (Why a nuclear power station had to be shut down when it was still working is unexplained. Why it was completely shut down is even more incomprehensible, given that it needed electricity to operate. Why not leave one reactor going, just in case, and use its own power? They knew they were in tsunami territory, and they knew their emergency generators were downhill.) So what we see is the attitude that provided we wave a slogan (reduce emissions), everything will be all right, even if we do not actually achieve what the slogan requires.
 
The obvious conclusion is that only geoengineering can permit us to defend the coastline in approximately its current position. There is, of course, no guarantee that it can, but surely the scientific method suggests we should investigate the possibility, even if only at a theoretical level. However, geoengineering is usually rejected as unnecessary.
 
Another reason for rejecting geoengineering is that "we don't know what the unintended consequences will be". That is almost certainly correct, but given that a sea-level rise of several tens of metres is quite possible with carbon dioxide levels at 450 ppm, surely we should make the effort to try to understand? However, we are not, seemingly because there is far too little funding of the necessary research. Why not? I have a theory on that too, but before I try it out, has anybody else any ideas?
Posted by Ian Miller on Jun 7, 2011 3:14 AM BST
Chris Satterley raised a number of interesting points, and I shall comment on one of them here. He stated that global-warming modelers are frequently asked, "What are your assumptions?" My issue is: if there is doubt about an assumption, will anybody do anything about it? I would like to explain, using an example from my own past, why I am a little skeptical. (This is most certainly not a whinge; the only reason I raise it is that I know the details.)
 
Early in my career I was interested in strained molecules, and consequently in bond bending. Some time ago I went to a conference and, in a slot where there was nothing particularly relevant to my direct interests, sat in on a session on molecular mechanics. In the presentations, bond bending was modeled on the assumption that it is simple harmonic: the restoring force is proportional to the deformation, hence the energy of the deformation is proportional to the square of the deformation in radians.
 
So, what else could it be? Consider a C-H bond in methane in its equilibrium position. If a plane is drawn through the carbon atom normal to that bond, all the other bonds are on the distal side of the plane, and the charge distribution is more or less symmetrical. To my mind, that indicates that the repulsive force should lie along the line of the bond, and to the extent that the deformations do not remove the symmetry, the dynamics would be similar to those of a pendulum, in which case the deformation energy would depend on the sine of the angle of deformation. If anyone is interested, the calculated overtones were quite respectable (at least in my opinion) for a very few limited molecules (Aust. J. Chem. 22: 2575-2580).
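 
A sketch comparing the two models is below. The pendulum-like case is written as a restoring force proportional to sin(theta), hence a deformation energy proportional to (1 - cos theta); that is my reading of the analogy, and k is an arbitrary common force constant.

    import math

    k = 1.0
    for deg in (5, 15, 30, 60):
        th = math.radians(deg)
        harmonic = 0.5 * k * th**2           # simple harmonic: energy ~ theta^2
        pendulum = k * (1.0 - math.cos(th))  # pendulum-like: force ~ sin(theta)
        print(f"{deg:>3} deg: harmonic {harmonic:.4f}  pendulum {pendulum:.4f}  "
              f"ratio {pendulum / harmonic:.3f}")
    # The models agree at small deformations and diverge as the angle grows,
    # which is where the choice of assumption would matter for overtones.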
 
The point of all this is that when I raised this with one of the main speakers, (a) he had not heard of the alternative, but more importantly, (b) he was not going to do anything about it. Why not? My view is that it comes down to funding. The main function of a project leader is to get the project funded, and while the nature of this problem varies from country to country, usually some form of performance review is required. I doubt anybody has the nerve to write in a funding application that they intend to rework the last ten years' output to determine whether a primary assumption was wrong, when during those ten years they were funded on the basis of their "remarkably good" results. (The fact that there are so many validation constants in the programs is beside the point.) Of course there should be some form of evaluation of scientists' performance, but I am far from convinced that the current methodology is good for science. The procedure is almost designed to lock in previous incorrect assumptions, and while I am sure that was never the intention, it is one of the unfortunate unwanted consequences. Yes, pointing out a problem is easy and solving it is not; nevertheless, pointing it out may be a start.
 
This may seem harmless, and some may think that even if the underpinning relationships are wrong, if your models reproduce observation, does it matter? To me, the answer is yes. The problem arises when the model is taken into new territory. The most successful theory, at least in terms of the time over which good results were consistently predicted, was the model of Claudius Ptolemy. It always predicted where the planets would be, when the eclipses would come, and so on. However, because it is wrong, if NASA used it for manned flights there would be a lot of dead astronauts. In this bond-bending example, maybe it does not matter if we do not know exactly why certain polymer solutions have certain properties, but suppose we want to devise new biocatalysts, in effect synthetic enzymes. Would it not be desirable to know what we are doing?
Posted by Ian Miller on May 23, 2011 3:28 AM BST