Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


So far, no response to my challenge, and in particular, nobody has posted a failure of the Woodward Hoffmann rules. Why not? One answer would be, there are none, so there is nothing to post. Then there is the possibility that there are examples, but nobody can find them. More depressingly, nobody cares, and finally, most depressingly of all, at least for me, perhaps nobody is reading my blogs.
 
Meanwhile, I have had a topic request! What about transport fuels? Conceptually, a plan is the same as a theory, and in this case there are plenty of theories as to what we should do, so the question now is, how do we choose which to follow? To slightly amend a quote from General Wesley Clark, there are two types of plans: those that won't work, and those that might. We have to select from those that might work and make them work. Accordingly, the art of selection involves the logical analysis of the various plans/theories. I must start with a caveat: I have been working in this area during both the previous oil crisis and now, and while I have considerable experience, it is in one area. The reader must accept that I may show bias in favour of my own conclusions.
 
My first bias regards methodology. A particular source of irritation is the request, when proposing a process, for an energy balance. In one sense, from time symmetry, I know energy is conserved so the energy is always balanced. If they mean, how much do you get out usefully, the answer is, the second law says, less than you put in. However, even if you account for this (usually, with varying degrees of optimism!) this is of no use in selecting a process. When you enter certain intellectual areas, there are rules to follow. Just as in chemistry it is totally pointless to try to derive a theory while ignoring atoms, in economics the only relevant means of comparing processes is based on money, because this alone is intended to balance all the different factors. It may not be perfect, e.g. environmentalists will tell you that no proper cost is placed on environmental protection, but it is all we have. To illustrate why, suppose you wanted to go somewhere and someone offered you a megajoule; would you prefer a container of gasoline, or a mass of water at 35 degrees Centigrade?
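To put rough numbers on that comparison, here is a quick sketch; the energy density, temperatures and specific heat below are assumed typical figures, not taken from any particular source.

    # Comparing "one megajoule" as gasoline versus as warm water (illustrative figures).
    E = 1.0e6                       # energy on offer, J

    gasoline_MJ_per_L = 34.0        # assumed volumetric energy density of gasoline
    litres_gasoline = E / (gasoline_MJ_per_L * 1e6)

    c_p = 4186.0                    # J/(kg K), specific heat of water
    T_hot, T_amb = 308.15, 298.15   # 35 C water, 25 C ambient (assumed), in kelvin
    mass_water = E / (c_p * (T_hot - T_amb))

    carnot = 1.0 - T_amb / T_hot    # upper bound on the fraction recoverable as work
    print(f"1 MJ of gasoline:   about {litres_gasoline * 1000:.0f} mL of fuel")
    print(f"1 MJ as 35 C water: about {mass_water:.0f} kg of water")
    print(f"Work recoverable from that water (Carnot limit): about {E * carnot / 1000:.0f} kJ")

The megajoule is the same in both cases; the usefulness is not, which is why a bare energy balance tells you so little.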
 
When forming a theory to address a commercial issue, the first question is, what is the size of the market? According to Wikipedia, world oil production is 87.5 million barrels a day (13.9 billion litres per day). To put that into perspective, I once saw a proposal to build a small demonstration plant that would convert 35 t/day of cellulosic material into biofuel. If successful, it might have produced up to 11,000 litres per day. Yes, it is a small plant (designed for a specific source of raw material) and you can imagine increasing its size, but whatever factor you multiply by, this is going to have to be repeated an enormous number of times. It is most unlikely that anybody can come up with a single raw material to supply this volume. This brings a first conclusion: there is no magic bullet. If we are to solve this problem, contributions will have to come from a number of different sources. Accordingly, we are not looking for a unique plan.
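A quick calculation shows the scale of the problem; the 159 litres-per-barrel conversion is the standard figure, and the rest uses the numbers quoted above.

    # Scale of the transport fuel problem, using the figures quoted above.
    barrels_per_day = 87.5e6
    litres_per_barrel = 159.0           # standard oil barrel
    world_litres_per_day = barrels_per_day * litres_per_barrel   # ~13.9 billion L/day

    plant_litres_per_day = 11_000.0     # the demonstration plant mentioned above
    plants_needed = world_litres_per_day / plant_litres_per_day

    print(f"World oil use: ~{world_litres_per_day / 1e9:.1f} billion litres/day")
    print(f"Plants of that size needed to replace it all: ~{plants_needed:,.0f}")
    # Even a plant scaled up 100-fold would have to be repeated over ten thousand times.
    print(f"At 100x that size: ~{plants_needed / 100:,.0f} plants")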
 
An immediate second conclusion is, there is no overnight solution. We have built up a system over one and a half centuries based on the assumption that oil will be cheap and widely available, and that has led to fixed infrastructure that cannot be replaced overnight. Fuels have evolved from simple distillation cuts to high-performance fuels such as JP-10, which appears to have tetrahydrodicyclopentadiene as a major component. The very large modern oil refineries are extremely complicated systems that use experience learned over that period. Different processes have to start smaller. Furthermore, it will take at least a decade, and probably longer, even to get a new process to a successful demonstration plant.
 
So, what processes might work? It may come as no surprise to hear that several quite promising processes were developed, at least partially, during the previous oil crisis, then abandoned once the price of oil fell. Over the next few weeks, interspersed with the other subjects, I shall give my views on what should be done. In the meantime, as a possible further challenge, how many of those earlier proposed processes are you aware of? My guess is, most of the information gained then is lost, other than possibly residing in the heads of retired scientists and engineers. This raises another very important point in terms of economic theory: there is an enormous amount of work to be done to solve this fuel problem, the economic system appears to be creaking under government debts incurred in times of unparalleled prosperity, so can we afford to waste what in principle we already have?
Posted by Ian Miller on Jul 23, 2011 3:10 AM BST
While I have been advocating efforts to find alternative theories, I do not wish to give the impression that I think most theory is wrong. There has to be a reason for anyone to seek an alternative theory, because without any grounds it is simply a waste of time, as illustrated by the periodic attempts by various people to defy the second law of thermodynamics and build a perpetual motion machine. Theories may come and go, but I have great faith in the lasting values of the second law.
 
Most other theories are far less robust, but the question then becomes, what are reasonable grounds for seeking new theories, or at least revising current ones? One obvious answer is a discrepancy between theory and observation. That is fine, except it raises a problem: how does the potential theoretician find the discrepancy? The experimentalist who finds it may well report it, but the experimentalist wants to get published and not pick a fight with referees, so if it is reported it is very rarely highlighted; it tends to be lost somewhere in the discussion, maybe even embedded as casually as possible two-thirds of the way through a rather densely written paragraph. Worse, many discrepancies, when first found, tend to be ambiguous in interpretation because, since they were not sought, the experiment was not designed specifically to demonstrate what nature is trying to tell us, but rather to test some other hypothesis. Accordingly, the potential baby is lost in a sea of bathwater.
 
The reader of this blog should not simply take my word for that, which so far is simply an assertion. An illustration is required. My ebook, Elements of Theory, ends with 73 problems, so my challenge to you is: try one. (If you have read the book, thank you, but this challenge is not for you.)
 
Woodward and Hoffmann have stated that there are no exceptions to their rules. One reason (somewhat simplified) why this should be correct is as follows. The signs of the wave functions correlate with the signs of the amplitude of the wave, and the square of the amplitude, within the Copenhagen Interpretation, indicates the probability of an event occurring. If plus overlaps with plus, there is reinforcement, but if plus overlaps with minus, there is cancellation, and the square of zero is zero. With zero probability, that event cannot occur. Accordingly, at a first level analysis, only permitted products can form. In practice, we do not expect perfect wave interference, so very minor contributions of the wrong products are possible.
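As a purely schematic illustration of that argument (a toy calculation, not the Woodward-Hoffmann treatment itself), treat each overlapping lobe as contributing an amplitude of unit magnitude:

    # Toy illustration of allowed versus forbidden overlap (schematic only).
    def probability(a1, a2):
        """Square of the summed amplitudes, taken as a relative probability."""
        return (a1 + a2) ** 2

    allowed = probability(+1.0, +1.0)     # like signs: reinforcement
    forbidden = probability(+1.0, -1.0)   # opposite signs: cancellation

    print(f"like-sign overlap     -> relative probability {allowed}")    # 4.0
    print(f"opposite-sign overlap -> relative probability {forbidden}")  # 0.0

    # In practice interference is not perfect, so a small residual amplitude
    # (here a 5% mismatch, an arbitrary figure) gives a tiny but non-zero probability.
    print(f"imperfect cancellation -> {probability(+1.0, -0.95):.4f}")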
 
Given that, here is the challenge. First, have any exceptions been found? This is important, because if so, it would show that something is wrong with theory. However, it does not follow that anybody finding such exceptions would recognize their significance, which means that finding them in the literature, if they exist, could be a real challenge. It may be that the only real way to find them is to ask as many people as possible, to dredge their memories and experience, so to speak. The second part of the challenge is, is there any theoretical reason why there could be an exception?
 
In a future blog, I shall give my answer to these questions, but before that I am particularly interested in other chemists' opinions, and in particular, any observations of which I am unaware.
Posted by Ian Miller on Jul 15, 2011 2:20 AM BST
Chris Satterley raised a number of good points in his comment to my last blog, and I shall try to respond to some in the future; however, the point I wish to discuss here is: is there a demand for new theory from synthetic chemistry?
 
I believe there is. When I commenced my PhD, there were a good number of "old" reactions available that a synthetic chemist might use, and the mechanisms of these were "understood"; the quotation marks are there because, while they were understood in general, I suspect there are still features that require looking into. Since then, however, there have been a bewildering number of new reactions, and these appear to be discovered at quite an alarming rate (unless you are a synthetic chemist who reads about something that unblocks a problem!). I believe that the main difficulty in rationalizing these reactions is that, without the salient aspects being identified and ordered by the synthetic chemists, nobody has sufficient information. The problem, in my opinion, is that because the information is so dispersed and, more importantly, scattered across a number of specialties, nobody can get at more than a minute fraction of it.
 
Let me provide an example of what I mean. What I regard as a rather impressive synthetic method recently appeared in JACS 133: 9724-9726, in which indium bromide, or better, indium iodide, was used as a catalyst to condense chiral propargylic alcohols into polycyclic products with high yield and stereoselectivity. Now, the question is, why pick on indium iodide? Would that be one of your picks if you hadn't read that paper? One of the authors was E. J. Corey, and I am ready to take a bet that he did not go through the store picking catalysts at random. When he wrote "we speculated that …" I believe his reasoning would be a lot better than that. The reason he gave was that the indium salt might, by virtue of its vacant 5s and 5p orbitals, coordinate with the acetylenic unit through its pi(x) and pi(y) orbitals while also coordinating with the propargylic oxygen. The reason indium was selected was that its s and p orbital energies are closer together than those of, say, aluminium.
 
 I suspect there is more to it than that. There is no doubt whatsoever that Professor Corey has an incredible knowledge of organic synthesis, and I believe it would also be interesting to know why he focussed on indium. There is little doubt that he recognized that some form of Lewis acid would be desirable, and I would expect that some other possible salts, including indium chloride, might be rejected on solubility grounds.
 
To me, the reasoning he gives leaves a number of puzzling thoughts. Superficially, he appears to be specifying empty sp2 orbitals, although maybe he is not. However, if so, and if we consider the electrons to have wave characteristics to their motion, there appear to be some strange refractive issues, although since we do not know the configuration he was considering, that may not apply. But even if we put that aside, if Professor Corey's reasoning is correct, surely thallium could possibly be better (if closeness of orbital energies is relevant) or possibly gallium (if Lagrangian density in the orbital is relevant) but while some other elements were mentioned, including silver and gold, these two did not appear to be mentioned. (There is an implication that some further possible catalysts were tried, but were unsuccessful, and these failures were not identified. This is unfortunate, because the failures are also important from a theoretical point of view.) Of course the paper is one describing a synthetic method, and what I am discussing was presumably outside Professor Corey's interest (and no criticism is implied for that). The only point I am trying to make is to address Chris' point: there are theoretical aspects involved in such synthetic methods which, if unravelled explicitly, might permit more general progress to be made in synthetic chemistry, and would certainly help other chemists who have to carry out syntheses to make materials for reasons other than simply specifying synthetic methods.
Posted by Ian Miller on Jul 2, 2011 5:00 AM BST
The decision to do something is always preceded by a theory, which usually involves a proposition of the "if … then" form, i.e. if I do A and the conditions G apply, then the outcome P will follow. The premise of this blog is that there are usually alternatives B, C, D . . .  What we want is for our politicians to make the best choice from the set of alternatives, but if scientists want decisions to be made based on evidence, then they must provide the necessary information to the decision makers and the public in a form they can understand. I think you are a little deluded if you think that happens often enough.
 
Consider the issue of carbon dioxide storage. On one side, the coal industry states: if all the carbon dioxide made by burning coal is buried permanently, then there is no adverse environmental effect from that carbon dioxide, therefore we can continue burning coal. Based on what is available to the general public, is this sound policy or a case of "Out of sight, out of mind"?
 
Compound propositions such as that are usually bad; the proper procedure is to separate the steps and then draw conclusions by procedure, rather than lurch to them. Suppose we rewrite the first part as: if all the carbon dioxide made by burning coal is buried permanently, then there is no adverse environmental effect from that carbon dioxide entering the atmosphere. I think that is justifiable, but there is a subtle difference.
 
One problem involves the word "permanent". Where do we store it? If it is deep enough, say, approaching 1 km, the pressure will convert the gas into a supercritical fluid. The ideal situation is containment by impermeable rock, but how easy is it really to find that? Something that traps heavy oil does not necessarily trap supercritical carbon dioxide. Can we guarantee it will not travel through porous rock, or, if an old oil structure is used, through an abandoned well whose cap fails? What about faults or breaks in rock structure? I would question anyone who states we know where all the faults are. As evidence, Christchurch in New Zealand is now undergoing an extremely prolonged sequence of earthquakes, due to a sequence of faults that were completely unknown two years ago. They are now being found because they are active; inactive faults can go undetected for a very long time. Suppose the carbon dioxide forms a clathrate with water, and suppose the clathrate can move to a region of lower pressure? Could the resultant decomposition of the clathrate widen a gap, and thus accelerate leakage?
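As a rough indication of why a depth approaching 1 km is the figure usually quoted, here is a back-of-the-envelope sketch; the hydrostatic and geothermal gradients are typical assumed values, not data for any particular site.

    # Why ~1 km depth is quoted for supercritical CO2 storage (rough sketch;
    # the gradients below are typical assumed values, not site data).
    rho_brine = 1000.0       # kg/m^3, approximate density of pore water
    g = 9.81                 # m/s^2
    depth = 1000.0           # m

    surface_T = 15.0         # degrees C, assumed mean surface temperature
    geotherm = 25.0          # degrees C per km, typical geothermal gradient

    pressure_MPa = rho_brine * g * depth / 1e6          # hydrostatic pressure
    temperature_C = surface_T + geotherm * depth / 1000.0

    # Critical point of CO2: 7.38 MPa, 31.1 degrees C
    print(f"At {depth:.0f} m: ~{pressure_MPa:.1f} MPa, ~{temperature_C:.0f} C")
    print(f"Supercritical at this depth: {pressure_MPa > 7.38 and temperature_C > 31.1}")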
 
The ultimate in containment would be where it reacts with rocks to form carbonates. Olivine is one of the better weathering rocks, while peridotite (an olivine/pyroxene blend) in Oman apparently weathers very quickly. The problem, of course, is that basalt is not one of the easiest rocks in which to find storage spaces, it is somewhat unyielding, and the world's emissions are not centred on Oman.
 
Does this process make sense other than as a special case? Old oil fields are often cited as disposal points, but these are often far from coal-burning plants, and very frequently a number of test wells will have been drilled and sealed with unknown reliability. The oil is usually under sandstone, which comprises the remains of weathered rock. The collection, transport, compression and injection of the carbon dioxide requires considerable energy (I have seen an estimate that almost 30% more power stations would be required), and the construction of the necessary pressurized piping also generates carbon dioxide.
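For what it is worth, a figure of that order is easy to rationalize with a simple energy-penalty estimate; the penalty fractions below are assumptions chosen for illustration, not the basis of the estimate I saw.

    # If capture, compression and injection consume a fraction f of a station's
    # output, the stations needed for the same net supply scale as 1/(1 - f).
    for f in (0.15, 0.20, 0.25):
        extra = 1.0 / (1.0 - f) - 1.0
        print(f"energy penalty {f:.0%} -> ~{extra:.0%} more power stations")
    # A penalty in the low-to-mid twenty percent range corresponds to roughly
    # 30% more plants for the same net electricity delivered.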
 
The issue here is, if the public decide to reduce carbon dioxide emissions to the atmosphere, is this concept a sensible part of the solution, or is it simply "smoke and mirrors"? Are scientists making all the relevant information available to the public on this matter? If a member of the public wants to find out what is critical, can he or she? You may protest that my analysis above is superficial, but if I cannot readily find what is needed to reach a proper conclusion, how can the public? If you argue the public does not matter, then you must argue against democracy. In my opinion, if science is to make a proper impression on our future, scientists have to lift their game.
Posted by Ian Miller on Jun 21, 2011 11:05 PM BST
If your goals include getting rich, getting promoted, winning prizes, etc., time spent developing theories appears to be time wasted. Nevertheless, if you are really interested in science, there are two good reasons to do so. The first is, you do not need expensive equipment, although you certainly need access to a good library, and you can do it in your spare time. Einstein did his most productive work as a patent clerk. The second is, while all scientists experience emotional highs (success!) and lows (oops, another failure!), for the experimentalist these usually come at the end of the experiment. For the theoretician the highs can come at almost any time later, and from most surprising sources. Further, after a time you do not suffer lows; if you are found to be wrong after a reasonable length of time, at least you can persuade yourself that you persuaded someone to do something they would not have otherwise done, hence you have advanced science a little, even if not for the best of reasons. I hope the reader will forgive me if I illustrate how unexpectedly such emotional rewards can come, using something that has happened to me. This arose from what I regard as a most unexpected experimental result, which goes to the heart of quantum mechanics.
 
Up until last week, I believe most physical scientists would have stated that the wave function is not a physical entity, but rather a mathematical construct whose square represents a probability distribution associated with the outcome of a determination. After all, how else does renormalization make sense? You cannot renormalize a bucket of water! That was before Lundeen et al. (Nature, 474, 188-191) published a remarkable achievement: they measured the wave function of photons, determining the real part through a rotation of the polarization and the imaginary part through the ellipticity. Effectively, the authors say, you could construct a "wave function meter", and they propose that you could measure the wave function associated with electron motion in atoms and molecules.
 
I have two reasons to be excited. The first is, to make a measurement of both the real and imaginary parts of the wave, surely it has to be something, and not simply a construct. As the cover of Nature said, "Direct measurement prompts the question, what is it?" The relevance to me is that in my ebook, I have over 70 problems for exercises in the development of theory, and in the more difficult ones there is a sequence that develops yet a further interpretation of quantum mechanics (there are at least six others) in which the reader is offered the chance to obtain quantum mechanics from one deeper principle, including obtaining the Schrodinger equation, the Uncertainty Principle and the Exclusion Principle, and also to note why the Complementarity Principle could in principle be got around, which is what these authors did. In the solutions I give, the wave is a physical entity, the square of the amplitude of which represents the energy associated with the particle (the square of the amplitude of all other waves represents the energy associated with the vibration), although it probably vibrates in additional dimensions, thus taking it into the concepts of string theory. This theory may or may not be correct, but as far as I am aware, it is the only one for which the wave function is "something" with a clearly physical and determinable variable associated with it.
 
The second reason is that I am an advocate for atomic orbitals of multi-electron atoms that differ from the usual wave functions that correspond to excited states of hydrogen [Aust. J. Phys. 40: 329-346 (1987)], in that the ground state orbitals do not have radial nodes (thus solving the problem of how electrons cross them!) and the resultant excited states have the nodes required for excitation added to them. I doubt that the methodology outlined by Lundeen et al. will really work on atomic orbitals, but if one thing is clear in science these days it is that if there is no sound reason why something cannot be done, sooner or later it will be. There is real excitement in the realization that something you proposed could be proven true one day. (Yes, it could be proven false, but that is just one of those chances you have to take.)
 
The point that I want to make is that for young scientists starting their career, while it may not help your social standing, especially in the short term, there is the possibility of experiencing quite unique feelings. And if you think there is nothing left to theorize about, if something as fundamental as the standard interpretation of the quantal wave function can be overturned by an experiment, so can a lot of other "tablets of stone". You may or may not be right, but you will stay interested.
Posted by Ian Miller on Jun 14, 2011 3:25 AM BST

The logic behind climate change seems to be: greenhouse gases trap heat, the planet is warming because of the heat trapped by these greenhouse gases, therefore if we reduce our greenhouse emissions we can maintain our current lifestyle, more or less. There is little doubt that greenhouse gases trap heat, and there is little doubt that the planet is warming, but are these really the issues? In my opinion, the real questions are, what are we going to do about it, and is current science going about the answering of that question in the right way?

In my first blog I mentioned that the Greenland ice sheet had melted in the previous four interglacials, with a corresponding rise of sea levels of about seven metres. What I did not mention was that this sea level rise began approximately 10,000 years after the demise of the Canadian ice sheet. If this cycle is a repeat of the last one, we would expect to see sea levels start rising about now, and they seem to be doing that.

So, what can we do about the future rising sea levels? What science should be doing is to provide evidence that falsifies the above conclusions, or failing that, recommend that we move our cities or work on alternative options. What is science doing? The main efforts seem to be in modeling that gives uncertain predictions, and we are gathering data furiously, measuring various emissions, many of which we cannot do much about anyway. These are followed by calls to reduce emissions, an approach that reminds me of the self-flagellating penitents in Bergman movies set in mediaeval times. The message seems to be that if we maximize the punishment for our errant ways, somehow our sins will be forgiven. In my opinion, all self-flagellation achieves is a sore back, which appears to be slightly more than the appeals to reduce carbon emissions are currently achieving. Carbon emissions apparently increased by 5% last year, and with the savaging of nuclear power following what appears to be a certain degree of incompetence, carbon emissions are almost certain to increase. (Why a nuclear power station had to be shut down when it was still working is unexplained. Why it was completely shut down is even more incomprehensible, given that it needed electricity to operate it. Why not leave one reactor going, just in case, and use its own power? They knew they were in tsunami territory, and they knew their emergency generators were downhill.) So what we see is the attitude that, provided we wave a slogan (reduce emissions), everything will be all right, even if we do not actually achieve what the slogan requires.

The obvious conclusion is that only geoengineering can permit us to defend the coastline in approximately its current position. There is, of course, no guarantee that it can, but surely the scientific method suggests we should investigate the possibility, even if only at a theoretical level. However, what we find is that geoengineering is usually rejected as being unnecessary.

Another reason for rejecting geoengineering is that "we don't know what the unintended consequences will be". That is almost certainly correct, but given that a rise in sea levels of several tens of metres is quite possible with carbon dioxide levels at 450 ppm, surely we should make the effort to try to understand? However, we are not, seemingly because there is far too little funding of the necessary research. Why not? I have a theory on that too, but before I try that out, has anybody else any ideas?

Posted by Ian Miller on Jun 7, 2011 3:14 AM BST
Chris Satterley raised a number of interesting points, and I shall comment on one of those here. He stated that global warming modelers are frequently asked, "What are your assumptions?" My issue is, if there is doubt about an assumption, will anybody do anything about it? I would like to explain, using an example from my own past, why I am a little skeptical. (This is most certainly not a whinge. The only reason I raise it is that I know the details.)
 
Early in my career, I was interested in strained molecules, and consequently I became interested in bond bending. Some time ago I went to a conference and in a session where there was nothing particularly relevant to my direct interests, I sat in on a session on molecular mechanics. In the presentations, bond bending was modeled on the assumption that it was simple harmonic, in which restoring force is proportional to deformation, hence the energy of the deformation is proportional to the square of the deformation in radians.
 
So, what else could it be? Consider a C-H bond in methane, in its equilibrium position. If a plane is drawn through the carbon atom normal to that bond, all the other bonds are on the distal side of the plane, and the charge distribution is more or less symmetrical. To my mind, that indicated that the repulsive force should be along the line of the bond, and to the extent that the deformations did not remove the symmetry, the dynamics would be similar to those of a pendulum, in which case the deformation energy would depend on the sine of the angle of deformation. If anyone is interested, the calculated overtones were quite respectable (at least in my opinion) for a limited set of molecules (Aust. J. Chem. 22: 2575-2580).
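To see how much the two assumptions can differ, here is a schematic comparison; the pendulum-like form below (restoring force proportional to the sine of the deformation angle, hence energy proportional to 1 - cos theta) is illustrative only and not necessarily the exact form used in the paper cited above.

    import math

    # Schematic comparison of two bond-bending potentials (illustrative forms only).
    # Harmonic:       V = 0.5 * k * theta^2
    # Pendulum-like:  restoring force ~ k * sin(theta), so V = k * (1 - cos(theta))
    k = 1.0  # arbitrary force constant, the same for both forms

    for deg in (5, 15, 30, 45, 60):
        theta = math.radians(deg)
        v_harmonic = 0.5 * k * theta ** 2
        v_pendulum = k * (1.0 - math.cos(theta))
        print(f"{deg:3d} deg: harmonic {v_harmonic:.4f}  pendulum-like {v_pendulum:.4f}  "
              f"ratio {v_pendulum / v_harmonic:.3f}")
    # The two coincide at small deformations but separate noticeably beyond about
    # 30 degrees, which is where overtone and strained-ring data can discriminate.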
 
The point of all this is, when I raised this with one of the main speakers, (a) he hadn't heard of this alternative, but more importantly, (b) he was not going to do anything about it. Why not? My view is that it comes down to funding. The main function of a project leader is to get the project funded. While the nature of this problem varies from country to country, usually some form of performance review is required. I doubt anybody has the nerve to write in a funding application that they intend to go back over the last ten years' work to determine whether a primary assumption was wrong, when during that ten years they have been funded on the basis of their "remarkably good" results. (The fact that there are so many validation constants in the programs is beside the point.) Of course, there should be some form of evaluation of scientists' performances, but I am far from convinced that the current methodology is good for science. The problem is, this procedure is almost designed to lock in any previous incorrect assumptions, and while I am sure that was never the intention, it is one of the unfortunate unwanted consequences. Yes, pointing out a problem is easy and solving it is not; nevertheless, pointing it out may be a start.
 
This may seem harmless, and some may think, even if the underpinning relationships are wrong, if your models reproduce observation, does it matter? To me, the answer is, yes. The problem arises when the model is taken into new territory. The most successful theory, at least in terms of time over which good results were always predicted, was the model of Claudius Ptolemy. It always predicted where the planets would be, when the eclipses would be, etc. However, because it is wrong, if NASA used it for manned flights there would be a lot of dead astronauts. In this bond bending example, maybe it doesn't matter if we don't know exactly why certain polymer solutions have certain properties, but suppose we want to devise new biocatalysts – effectively, synthetic enzymes. Would it not be desirable that we know what we are doing?
Posted by Ian Miller on May 23, 2011 3:28 AM BST
Recently I published an ebook (http://www.amazon.com/dp/B004XMQH7Q) that looks at theory in the physical sciences, and in this blog I would like to expand on some of those thoughts. One of the assertions I made was that in general, most scientists have no formal training in the forming and analysis of theories. There will be some who have done a course in philosophy, but even most doctors of philosophy have never done any formal courses in philosophy. Maybe I am wrong; if so, let me know. Another point that I made is that since the announcement of the Woodward Hoffmann rules, there has been no significant development in theory in chemistry, despite the fact that more chemists have worked after that time than before it. By significant, I also mean that it has had a major influence on chemistry at large. I expect to hear some disagreement on that point; however, while there may be exceptions, I believe it stands as essentially valid. I also offered a speculative explanation, through the observation that the chemical theory we use is essentially BP, i.e. before polywater. After that debacle, I believe we have gone "theory-shy". (We have refined computational procedures, but that is not the same thing.)
 
In my opinion, the methodology for forming and analyzing theories was laid down by Aristotle, but when Galileo proved Aristotle's cosmology was just plain wrong, out went the baby with the bath water. What I find fascinating about this is that most of the criticism comes from people who have not actually read what Aristotle wrote. Aristotle's cosmology was wrong, but ironically it came from his ignoring his own advice (possibly because Physica was probably one of the first books he wrote, and he had not finalized his logic). Aristotle was probably the first to emphasize that any theory must be fully in accord with observation, and given that, and assuming that Aristotle made one key error by failing to apply his own methodology (and that this caused all the rest), the reader might wish to contemplate what that error was. (My answer in due course.)
 
Why is this important? Consider a current issue: global warming. From the following references:
de Vernal, A., Hillaire-Marcel, C. 2008. Science 320: 1622-1625.
Kopp, R. E. et al. 2009. Nature 462: 863-867.
Tripati, A. K. et al. 2009. Science 326: 1394-1397.
The Aristotelian would state:
(a) In each of the last four interglacials, the sea levels rose approximately 7 metres higher than now.
(b) At each maximum, the carbon dioxide levels were approximately 280 ppm.
He would also note:
(c) Carbon dioxide levels are now about 380 ppm.
 
Now, either carbon dioxide levels are relevant to the sea level or they are not. If they are not, then it is pointless defending against sea level rises by trying to hold carbon dioxide levels to 450 ppm, the current IPCC target. If they are, then sea levels will eventually rise even if levels are held to 280 ppm, so again it is pointless to attempt to hold them to 450 ppm. This leads to a clear conclusion: attempting to hold carbon dioxide levels to 450 ppm, or even 350 ppm as advocated by James Hansen, will not by itself prevent sea levels rising, unless they were never going to rise this cycle anyway.

Thus looking at the problem in a slightly different way leads to an entirely different conclusion from that obtained from modeling, where the answer is somewhat critically dependent on the assumptions. That does not make the conclusion correct, but it does give guidance on what to do next.
Posted by Ian Miller on May 17, 2011 4:34 AM BST