Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


Two recent announcements in the local news caught my eye. The first was that the E.U. intended to impose a carbon tax on airlines, the tax being proportional to the distance flown, the argument being that the further you fly, the greater the amount of carbon dioxide put into the upper atmosphere, and that, of course, is bad, at least according to those wishing to impose the tax. Estimates of the additional cost for a single flight to Europe from here run to about $800, and, of course, Europe will keep all the money. It is also noticeable that there is no suggestion that the taxpayers pay their own government and thus contribute to their own country's efforts against climate change, so the suspicion here is that this has little to do with climate change and is more an excuse to raise cash.
Be that as it may, the second article was a proposal to spend a lot of cash finding ways to send large amounts of material from the surface into the stratosphere to seed clouds and so raise the albedo of the planet, thereby reflecting more light back to space and cooling the planet.
It seems to me this raises two interesting questions. The first: given that airliners produce water, which at minus 50 degrees C tends to form ice, which is white and hence a good reflector of light, there is no evidence that airliners produce a net detrimental effect. For all we know, their total effect may be beneficial. Even if they did raise the stratospheric temperature by a couple of degrees, would that be bad? Carbon dioxide at that altitude should be a net radiator, and in this context the thermosphere of Earth is about 1400 degrees C, while that of Venus is about 300 degrees C. Yes, the carbon dioxide will eventually sink to lower altitudes, but even then there is no evidence that the airlines' flying produces a net detrimental effect.
The second question is, can we put something in the fuels that will maintain the albedo enhancement for longer? The problem with ice is that it does not take long to sublime, so the effect does not last. Suppose, however, we put in an alkyl aluminium compound, or an alkyl zinc. The oxides melt at about 2070 and 1970 degrees C respectively, so they will not slag; the oxides are white; and you get heat when they burn. Yes, a little more care is needed in fuel handling to avoid spontaneous combustion, but the fuel systems on aircraft would have to be redesigned anyway, because you would only want this fuel to be burnt once cruising altitude was reached. However, when dissolved in hydrocarbon solution these materials are safer, as shown by a YouTube video in which diethyl zinc failed to combust spontaneously.
Of course there is the obvious objection: you have to do quite some redesigning of fuel systems and handling. My answer is, if you want to save a planet, you have to do more than raise tax!
I suppose the last question is, suppose this worked, there was a massive reduction in heat retention, and the climate problem was solved; would the E.U. give massive tax rebates or other payments as a reward for saving the world? (Note that every rhetorical question deserves a rhetorical answer!)
Posted by Ian Miller on Oct 2, 2011 10:21 PM BST
In my ebook, I maintain that choosing what to do is in effect choosing between alternative applied theories. In a previous blog, I commented on why I think fermenting lignocellulose first to glucose and thence to ethanol to make biofuels is not a good idea, the main reason being that the first fermentation is too expensive. What nobody commented on was that this could change if ethanol were not the main product, and it is of interest to view the recent Mascoma IPO, where a wide range of other income sources is proposed.
There is a broader question: should we pursue ethanol at all? Given sugars, the technology is mature, but what about feedstocks? There are many objections to the use of crops to make ethanol on the basis that with a growing population food is the priority. So, is ethanol a bad biofuel?
"Bad" depends on your "point of view". The farmer wants the best price for his crop, the hungry cannot pay, so who does what? There is a tendency to say, "somebody else should pay", which, in my view, is not helpful. There are also "red herrings" in the analyses, such as  "carbon efficiency", "blending efficiency" and "energy efficiency". Carbon efficiency is the worst of these: the argument is that glucose has six carbon atoms, and two disappear off as CO2. That is totally irrelevant: there is no shortage of carbon atoms. Blending efficiency is a red herring because Brazil has shown that provided proper management is undertaken, there are no inherent technical problems. The argument that ethanol has less energy density than hydrocarbons is true but somewhat misleading.  The issue is not energy, but useful work, for we have to power a transport fleet. Whether you use more (because of lower energy) is irrelevant if the issue is, can you run your motor?  Also, the work on ignition is delta PV, which increases if the pressure is increased. In a spark ignition motor, that requires a higher octane number, and ethanol has a significantly higher octane number than standard fuel, and with blending it is efficient at raising the overall octane. Accordingly we could get more efficiency by raising the motor compression, which raises a question that is seemingly always ignored: what properties will the future transport fuel have? If we do not address that, much of our planning is going to be wrong.
The real question is, how do we power transport once oil becomes scarce? Fermentation of sugars to ethanol has advantages. The first is that it is reasonably efficient on smaller scales, which means it can use wastes, which tend to be produced in smaller local amounts. There is also one other feedstock: synthesis gas. Certain anaerobes, including Clostridium, appear able to convert this gas stream to ethanol, and the microbes seem tolerant of a wide range of hydrogen/carbon monoxide mixes. That means in principle we can get fuel from waste streams, such as the gas effluent of steel mills, which otherwise has no use.
In my opinion, there appears to be no better use for small amounts of low-quality synthesis gas than to make ethanol by fermentation, and no other technology that is convenient for low volumes of sugar waste. Either that ethanol can be used by the chemical industry, or we need to maintain spark ignition motors in the long term. Long-term planning for transport should take that issue into account; however, planning for transport fuels currently appears to operate on a "market rules" basis. That will have all the aspects of Darwinian evolution, and while the market enthusiasts might argue that evolution guarantees the fittest (actually, it does not – evolution involves survival of the adequate to reproduce), evolution also involves numerous extinctions. Do we wish to nearly extinguish individual transport? If not, some form of planning might seem desirable.
Posted by Ian Miller on Sep 20, 2011 11:54 PM BST
According to MO theory there can be no exceptions to the WH rules; nevertheless, there are exceptions. On the other hand, in my opinion Pauling's valence bond theory, which invokes canonical structures, predicts where exceptions will occur, and why. In my example of the pentadienyl carbenium ion, the concept of canonical structures puts positive charge evenly on C1, C3 and C5. If we substitute the ends with alkyl groups, which stabilize carbenium ions, then positive charge is preferentially located at C1 and C5. An empty orbital represents positive charge localized on a given atom, and according to molecular orbital theory, the effect of an empty orbital should be the same as that of an occupied one. As far as I can make out, this concept originated with Mulliken (Phys. Rev. 41: 49-71, 1932), but essentially as an assertion.
Why do the WH rules work? The usual argument is that a +signed function must overlap with another +signed function, and from that observation, the rotational characteristics of the WH rules follow. (Actually, the same rules follow if a bond forms only when plus interferes with minus, but the diagrams are messier. This is actually the rule for forming antisymmetric wave functions, which at least in some formalisms is a requirement of the Exclusion Principle, but since the same outcome always arises, the issue is irrelevant here.) This gets to the point where we have to ask: what does the sign mean?
In the general theory of wave interference, the sign refers to the phase of the wave. When amplitudes have the same sign, they reinforce. The important point is that there must be a phase relationship between the ends. Now, the phase of the wave is proportional to the action, and it changes because the action (the time integral of the Lagrange function) evolves in time. However, no matter how long zero is integrated with respect to time, you still get zero, and the Lagrange function of an entity with zero mass and zero charge, which is what an empty orbital has, is zero. The solution to the Schrodinger equation when E, V and m each equal zero is zero everywhere at all times. Zero can be added any number of times, but it makes no difference.
If so, the canonical structure with positive charge on an end carbon atom gives zero directional effect. Therefore, the strength of the preference (because there is always some of the canonical structure with the required phase relationship) is reduced whenever there is a carbenium ion involved in the transition state, and the carbenium site is substituted. The orientation of the substituent is significant too because the bigger the steric clash on the complying path, the easier it is for the canonical structure that permits non-compliance to become more significant because it forces rotation to start before significant orbital interactions.
Now, I believe this alternative interpretation is important for two reasons. The first is that it gives a specific reason why there should be exceptions to the Woodward Hoffmann rules, and it predicts where they will be found. Thus if nothing else, it will guide further experimental work. The alternative theory is either right or wrong, and there is one obvious way to find out. The second reason is more important: I believe that if this alternative interpretation is found to be correct, it forces chemists to revisit the concept of canonical structures, which I believe gives far more fertile ground for understanding chemistry than current MO theory, at least for the average bench chemist. Further, I suspect there is no aspect of organic chemistry (and probably none of other chemistry, except I am not familiar enough with that to be sure) that does not comply with the concept of canonical structures, if these are properly used. So, there is a further challenge: find some aspect of chemistry where canonical structures, properly used, fail.
Posted by Ian Miller on Sep 5, 2011 3:58 AM BST
Perhaps my challenge, "what is the most illogical thing you can associate with standard quantum mechanics?" was not to everybody's taste. Sorry, but I cannot help feeling that ignoring problems is not the way to enlightenment. Also, part of what follows is critical to my question: where do you expect violations of the Woodward Hoffmann rules?
Consider the following:
(1)  You cannot know something about a phenomenon until you observe it.
(2)  Prior to observation, a state is a superposition of all possible wave functions.
What I consider illogical is the assertion that the superposition of states is a reality and that when an observation is made, there is a physical event known as the "collapse of the wave function". Such a superposition of states can never be observed, because by definition, when it is observed, it collapses to one state. The electron either is, or is not, there. So, why does this assertion persist? (I have no idea on this one. Anyone, please help.)
The best-known example is the Schrodinger cat paradox, in which a box contains a cat and a radioactive source coupled to a device that emits hydrogen cyanide. If no particle is emitted, the cat is alive; if one is emitted, the cat is dead. Before we observe, we do not know whether the particle has been emitted or not, and both states may be equally probable. That is described as a superposition of states, but according to the paradox, a consequence is that the cat is also in a superposition of states, neither dead nor alive.
The problem involves the conclusion that the square of the amplitude of a wave function gives the probability of making an observation. If the objective is to compute the probability of something happening, and you consider the states to represent probabilities, such states have to be superimposed. The probability of something that exists, such as the cat, must be 1. In this case, the cat is either dead or alive, and the probabilities have to be summed. The same thing happens with coin tossing: the coin will be either heads or tails, and there is equal probability, hence two states must be considered to make a prediction.
Herein lies a logic issue: the fallacy of the consequent. Thus while we have, "if we have a wave function, from the square of its amplitude we can obtain a probability" it does not follow that if we can ascribe a probability, then we have a wave function. What is the wave function for a stationary coin, or a stationary cat? It most certainly does not follow that if we have two probabilities, then the state of the object is defined by two superimposed wave functions. A wave function may characterize a state, but a state does not have to have a wave function.
A wave function must undulate, and to be a solution of the Schrodinger equation, it must have a phase defined by exp(2πiS/h), where i is the imaginary unit and S is the action, defined as the time integral of the Lagrange function, which can be considered as the difference between the kinetic and potential energies. (The requirement for the difference arises from the requirement that motion comply with Newton's second law.) As can be seen, a complete period occurs whenever S = h, which is extremely small. For the spinning coin, the wave function returns to its initial phase so quickly that the period is irrelevant, and classical mechanics applies. In this light, the Schrodinger cat paradox as presented says nothing about cats, because there is no quantum issue relating to a cat.
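The scale of that claim is easy to check with back-of-envelope numbers (the coin's mass, speed and duration below are my own illustrative assumptions):

```python
H = 6.62607015e-34  # Planck constant, J*s

def phase_cycles(action_js):
    """Number of complete periods the phase exp(2*pi*i*S/h) passes
    through for an accumulated action S, in J*s: simply S/h."""
    return action_js / H

# A 5 g coin spinning at roughly 1 m/s for one second:
# kinetic energy ~ 0.5 * 0.005 * 1**2 = 2.5e-3 J, so S ~ 2.5e-3 J*s
coin_cycles = phase_cycles(2.5e-3)  # of order 10**30 periods
```

With any remotely macroscopic action, the phase cycles some 10^30 times per second, which is the sense in which the period is "irrelevant" and classical mechanics takes over.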
What about the particle? When Aristotle invented logic, he allowed for such situations. To paraphrase, suppose one morning there is a ship in harbour fully laden. The question is, where will it be tomorrow? It could be somewhere out to sea, or it could be where it is now. What it won't be is in some indeterminate state until someone observes it because it is impossible for something of that size to "localize" without some additional physical effect. Aristotle permitted three outcomes: here, not here, and cannot tell, in this case because the captain has yet to make up his mind whether to sail. Surely, "cannot tell" is the reality in some of these quantum paradoxes?
As a final comment, suppose you consider that with multiple discrete probabilities there have to be multiple wave functions? The probabilities are additive and must total 1. The wave amplitudes are additive, and here we reach a problem, because the sum of the squares does not equal the square of the sum. Mathematically, we can renormalize, but what does that mean physically? In terms of Aristotle's third option, it is simply a procedure to compute probabilities. However, it is often asserted in quantum mechanics that all this is real and something physically happens when the wave function "collapses".
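The arithmetic behind "the sum of the squares does not equal the square of the sum", and what renormalization does to fix it, can be made concrete in a few lines (the two equal amplitudes are just the simplest possible case):

```python
import math

amps = [1.0, 1.0]                          # two equal, in-phase amplitudes

sum_of_squares = sum(a**2 for a in amps)   # 2.0
square_of_sum = sum(amps) ** 2             # 4.0 -- not the same thing

# Renormalize so the total probability is exactly 1:
norm = math.sqrt(sum_of_squares)
probs = [(a / norm) ** 2 for a in amps]    # each 0.5, summing to 1.0
```

Mathematically this is routine; the question raised above is what, if anything, the rescaling of the amplitudes corresponds to physically.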
Now, you may well say this is unnecessary speculation, and who cares? Quantum mechanics always gives the correct answer to any given problem. Perhaps, but can you now see why there can be exceptions to the Woodward Hoffmann rules, and where they will be? The key clue is above.
Posted by Ian Miller on Aug 23, 2011 11:53 PM BST
First, the simple answer to my challenge, which was whether there were exceptions to the Woodward Hoffmann rules. The exception I wish to discuss involves the cyclization of pentadienyl carbenium ions to cyclopentenyl carbenium ions [Campbell et al., J. Am. Chem. Soc. 91: 6404-6410 (1969)]. Part of the reason for the challenge was to illustrate a further point, namely that quite a bit of critical information is out there, but if it is not quickly recognized as important, it gets lost. However, it can be found, because there is usually someone out there who knows about it. Such information could be recovered more readily if there were a web facility for raising such questions. To succeed, it would have to be easy to use, with abuse prevented, so that scientists in general would cooperate.
A further point I was trying to make was to emphasise logic. This came from a problem at the end of my ebook Elements of Theory; these problems were structured so that the reader had to try to come to an answer, then analyse my given answer, parts of which were not necessarily correct, the idea being to promote critical thinking. Part of the answer I gave included what I thought was an innocent enough deception, namely that the Woodward Hoffmann rules were violated because of extra substitution. That, of course, has a logical flaw: without the substitution you cannot tell whether the rules are violated or not. The reason for mentioning this here is that the same statement is made in the abstract of the example that Chris Satterley found.
So, for a further little challenge, what is the most illogical thing you can associate with standard quantum mechanics? My answer soon.
The cyclization of pentadienyl carbenium ions could be followed because there was a methyl group at each end, and the stereochemistry was known. The deviations from the W-H rules were explained in terms of the steric clashes that occurred as the ends came together: H-H, H-Me and Me-Me. The H-H clash produced more or less complete agreement with the W-H rules, but the Me-Me clash led to an almost 50-50 product distribution between the two possible cyclopentenyl ions. There is little doubt that a steric clash occurred as the carbenium ion approached the geometry needed for cyclization; however, at least one further thought should be considered. The pentadienyl carbenium ion should be planar, so the methyl-methyl clash does not give a preference to any particular rotation. A 50-50 mix of products suggests that when the ends come together, random vibrations lead the methyl groups to slide one way or the other, and both ways cyclize. This is the important bit: if the W-H rules were absolute, the prohibited route would not react; the material would remain in the pentadienyl form, reopen, come together again, and react only if and when it accidentally got into the correct configuration. The steric clash would then result in both reactions being slower. As it happened, in the actual experiments all cyclizations were too fast for that aspect to be observed.
However, while additional substitution was present in the ring opening reactions of benzocyclobutene, the strain issue is not quite relevant because by the time the substituents generate a good clash, the system has proceeded far enough along the reaction coordinate that extra strain would simply drive the ring opening faster.
Which brings me to the second little challenge: is there any chemical theory that offers a rational explanation for these exceptions to the W-H rules? I must confess that back in the late 1960s I simply assumed that this exception merely meant the W-H rules were preferences which, with a little discouragement, could be overruled. However, I am now convinced that this is just lazy, and when nature provides exceptions like this, it is trying to tell us something.
Posted by Ian Miller on Aug 16, 2011 6:16 AM BST
For the two responses to my challenge; thank you, and also for the comment on last week's blog. I am going to leave the challenge open for another week, and add a further comment on biofuels.
There is little or no difference between a plan to do something and a theory on what should be done, and for producing biofuels there is no shortage of alternative theories as to what should be done. The question now is, should some basic analysis be done on the various options before we send vast sums of money after them? One argument is, until you do the research you cannot analyse the situation well enough to come to a conclusion. I would agree that you cannot make close decisions, and as noted in my previous blog, there will probably be a number of good solutions to this problem implemented. But surely we can avoid some obvious mistakes? Unfortunately, if the analysis is weak, it can be a mistake to "avoid obvious mistakes" when in fact they are not mistakes. Notwithstanding that, I can't help feeling that much of the money devoted to biofuels research is not well-spent. There is an adage, you can be fast, you can be cheap, you can be efficient; choose no more than two. I believe available capital is scarce in terms of the size of the problem and we want the solution to work. Accordingly, we should be more reluctant to jump onto the latest fashionable exercise.
The most obvious biofuel is ethanol, and the production of ethanol by fermentation of sugar is a technology that has been operating for about 6,000 years, so we have learned something about it. Currently, fuel ethanol is produced in Brazil from its sugar cane and in the US from corn, but growing crops for ethanol is not considered a likely solution, if for no other reason than that, with an expanding population, food becomes the priority. Of course, waste food can always be converted to fuel, but the size of the fuel requirement means the bulk of the fuel must come from somewhere that does not compete with food production.
A more likely answer is to produce ethanol from cellulose by first fermenting the cellulose to make glucose. Because "biotech" has been very fashionable, this has received a lot of funding, but I believe any reasonable analysis will show this route is suboptimal. Most plant biomass contains polyphenols, such as tannins and lignins. The question is, why? The usual answer is that lignin binds cellulose and lets trees grow tall. I would argue that is a side-benefit, and the original reason for polyphenols being incorporated into plant material is to make it more difficult to digest. Plants that get eaten less reproduce more frequently and evolve to support that trait. In other words, lignocellulose has been optimized by evolution to avoid being digested by enzymes. Yes, nothing is perfect, and we can find enzymes that will hydrolyse woody biomass, but the hydrolysis is slow. Ethanol manufacturing would seemingly commence with immense tanks of wood chip slurry that take days to ferment. If all the cellulose is converted to glucose, then the lignin, which is about a third of the mass, will be left as a finely divided and very wet sludge that is difficult to filter and which will retain a reasonable fraction of the glucose. In practice, much of the crystalline cellulose will not hydrolyse, but will merely expand the sludge.
Is this the correct way to go about the problem? It seems to me that theory suggests it is not, yet it appears that in recent times more money has been sent chasing this option than any other single option. Perhaps someone knows something I do not and I am wrong in this assessment, but if so, whatever it is, it is not widely known.
Posted by Ian Miller on Aug 4, 2011 1:02 AM BST
So far, no response to my challenge, and in particular, nobody has posted a failure of the Woodward Hoffmann rules. Why not? One answer would be, there are none, so there is nothing to post. Then there is the possibility that there are examples, but nobody can find them. More depressingly, nobody cares, and finally, most depressingly of all, at least for me, perhaps nobody is reading my blogs.
Meanwhile, I have had a topic request! What about transport fuels? Conceptually, a plan is the same as a theory, and in this case there are plenty of theories as to what we should do, so the question now is, how do we choose which to follow? To slightly amend a quote from General Wesley Clark, there are two types of plans: those that won't work, and those that might. We have to select from those that might work and make them work. Accordingly, the art of selection involves the logical analysis of the various plans/theories. I must start with a caveat: I have been working in this area during both the previous oil crisis and now, and while I have considerable experience, it is in one area. The reader must accept that I may show bias in favour of my own conclusions.
My first bias regards methodology. A particular source of irritation is the request, when proposing a process, for an energy balance. In one sense, from time symmetry, I know energy is conserved so the energy is always balanced. If they mean, how much do you get out usefully, the answer is, the second law says, less than you put in. However, even if you account for this (usually, with varying degrees of optimism!) this is of no use in selecting a process. When you enter certain intellectual areas, there are rules to follow. Just as in chemistry it is totally pointless to try to derive a theory while ignoring atoms, in economics the only relevant means of comparing processes is based on money, because this alone is intended to balance all the different factors. It may not be perfect, e.g. environmentalists will tell you that no proper cost is placed on environmental protection, but it is all we have. To illustrate why, suppose you wanted to go somewhere and someone offered you a megajoule; would you prefer a container of gasoline, or a mass of water at 35 degrees Centigrade?
When forming a theory to address a commercial issue, the first question is, what is the size of the market? According to Wikipedia, world oil production is 87.5 million barrels a day (13.9 billion litres per day). To put that into perspective, I once saw a proposal to build a small demonstration plant that would convert 35 t/day of cellulosic material into biofuel. If successful, it might have produced up to 11,000 litres per day. Yes, it is a small plant (designed for a specific source of raw material) and you can imagine increasing its size, but whatever factor you multiply by, this is going to have to be repeated an enormous number of times. It is most unlikely that anybody can come up with a single raw material to supply this volume. This brings a first conclusion: there is no magic bullet. If we are to solve this problem, contributions will have to come from a number of different sources. Accordingly, we are not looking for a unique plan.
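The scale argument above is worth checking with its own numbers (the only figure added here is the standard 159 litres per oil barrel):

```python
LITRES_PER_BARREL = 159.0  # standard oil barrel

# World oil production and the demonstration plant output quoted above:
world_oil_lpd = 87.5e6 * LITRES_PER_BARREL   # ~1.39e10 L/day, matching 13.9 billion
plant_output_lpd = 11_000.0                  # L/day from the 35 t/day plant

plants_needed = world_oil_lpd / plant_output_lpd   # ~1.3 million such plants
```

Even allowing generous scale-up factors, replacing oil would take an enormous number of installations, which is exactly the "no magic bullet" conclusion.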
An immediate second conclusion is that there is no overnight solution. We have built up a system over one and a half centuries based on the assumption that oil will be cheap and widely available, and that has led to fixed infrastructure that cannot be replaced overnight. Fuels have evolved from simple distillation products to high performance fuels such as JP-10, which appears to have tetrahydrodicyclopentadiene as a major component. The very large modern oil refineries are extremely complicated systems that use experience learned over that period. Different processes have to start smaller. Furthermore, it will take at least a decade, and probably longer, even to get a new process to a successful demonstration plant.
So, what processes might work? It may come as no surprise to hear that several quite promising processes were developed, at least partially, during the previous oil crisis, then abandoned once the price of oil fell. Over the next few weeks, interspersed with the other subjects, I shall give my views on what should be done. In the meantime, as a possible further challenge, how many of those earlier proposed processes are you aware of? My guess is, most of the information gained then is lost, other than possibly residing in the heads of retired scientists and engineers. This raises another very important point in terms of economic theory: there is an enormous amount of work to be done to solve this fuel problem, the economic system appears to be creaking under government debts incurred in times of unparalleled prosperity, so can we afford to waste what in principle we already have?
Posted by Ian Miller on Jul 23, 2011 3:10 AM BST
While I have been advocating efforts to find alternative theories, I do not wish to give the impression that I think most theory is wrong. There has to be a reason for anyone to seek an alternative theory, because without any grounds it is simply a waste of time, as illustrated by the periodic attempts by various people to defy the second law of thermodynamics and build a perpetual motion machine. Theories may come and go, but I have great faith in the lasting values of the second law.
Most other theories are far less robust, but the question then becomes, what are reasonable grounds for seeking new theories, or at least revising current ones? One obvious answer is a discrepancy between theory and observation. That is fine, except it raises a problem: how does the potential theoretician find the discrepancy? The experimentalist who finds it may well report it, but the experimentalist wants to get published and not buy a fight with referees, so if it is reported, it is very rarely highlighted; it tends to be lost somewhere in the discussion, maybe even embedded as casually as possible two-thirds of the way through a rather densely written paragraph. Worse, many discrepancies, when first found, tend to be ambiguous in interpretation, because since they were not sought, the experiment was not designed specifically to demonstrate what nature is trying to tell us, but rather to test some other hypothesis. Accordingly, the potential baby is lost in a sea of bathwater.
The reader of this blog should not simply take my word for that, which so far is simply an assertion; an illustration is required. My ebook, Elements of Theory, ends with 73 problems, so my challenge to you is, try one. (If you have read the book, thank you, but this challenge is not for you.)
Woodward and Hoffmann have stated that there are no exceptions to their rules. One reason (somewhat simplified) why this should be correct is as follows. The signs of the wave functions correlate with the signs of the amplitude of the wave, and the square of the amplitude, within the Copenhagen Interpretation, indicates the probability of an event occurring. If plus overlaps with plus, there is reinforcement, but if plus overlaps with minus, there is cancellation, and the square of zero is zero. With zero probability, that event cannot occur. Accordingly, at a first level analysis, only permitted products can form. In practice, we do not expect perfect wave interference, so very minor contributions of the wrong products are possible.
Given that, here is the challenge. First, have any exceptions been found? This is important, because if so, it would show that something is wrong with theory. However, it does not follow that anybody finding such exceptions would recognize their significance, which means that finding them in the literature, if they exist, could be a real challenge. It may be that the only real way to find them is to ask as many people as possible, to dredge their memories and experience, so to speak. The second part of the challenge is, is there any theoretical reason why there could be an exception?
In a future blog, I shall give my answer to these questions, but before that I am particularly interested in other chemists' opinions, and in particular, any observations of which I am unaware.
Posted by Ian Miller on Jul 15, 2011 2:20 AM BST
Chris Satterley raised a number of good points in his comment on my last blog, and I shall try to respond to some of them in future posts. However, the point I wish to discuss here is: is there a demand for new theory from synthetic chemistry?
I believe there is. When I commenced my PhD, there were a good number of "old" reactions available that a synthetic chemist might use, and the mechanisms of these were "understood"; the quotation marks are there because, while they were understood in general, I suspect there are still features that require looking into. Since then, however, there has been a bewildering number of new reactions, and these appear to be discovered at quite an alarming rate (unless you are a synthetic chemist reading about something that unblocks a problem!). I believe the main difficulty in rationalizing these reactions is that, without the salient aspects being identified and ordered by the synthetic chemists, nobody has sufficient information. The problem, in my opinion, is that because the information is so dispersed and, more importantly, scattered across a number of specialties, nobody can get at more than a minute fraction of it.
Let me provide an example of what I mean. What I regard as a rather impressive synthetic method recently appeared in JACS 133: 9724-9726, in which indium bromide or, better, indium iodide was used as a catalyst to condense chiral propargylic alcohols into polycyclic products with high yield and stereoselectivity. Now, the question is, why pick indium iodide? Would that be one of your picks, if you had not read that paper? One of the authors was E. J. Corey, and I am ready to take a bet that he did not go through the store picking catalysts at random. When he wrote "we speculated that …" I believe his reasoning would be a lot better than that. What he wrote as a reason was that the indium salt might, by virtue of its vacant 5s and 5p orbitals, coordinate with the acetylenic unit through its pi(x) and pi(y) orbitals while also coordinating with the propargylic oxygen. Indium was selected because its s and p orbital energies are closer together than those of, say, aluminium.
I suspect there is more to it than that. There is no doubt whatsoever that Professor Corey has an incredible knowledge of organic synthesis, and it would be interesting to know why he focussed on indium. There is little doubt that he recognized that some form of Lewis acid would be desirable, and I would expect that some other possible salts, including indium chloride, might be rejected on solubility grounds.
To me, the reasoning he gives leaves a number of puzzling thoughts. Superficially, he appears to be specifying empty sp2 orbitals, although maybe he is not. If so, and if we consider the electrons to have wave characteristics to their motion, there appear to be some strange refractive issues, although since we do not know the configuration he was considering, that may not apply. Even putting that aside, if Professor Corey's reasoning is correct, surely thallium could be better (if closeness of orbital energies is relevant), or possibly gallium (if Lagrangian density in the orbital is relevant); yet while some other elements were mentioned, including silver and gold, these two did not appear to be. (There is an implication that some further possible catalysts were tried and were unsuccessful, but these failures were not identified. That is unfortunate, because the failures are also important from a theoretical point of view.) Of course the paper describes a synthetic method, and what I am discussing was presumably outside Professor Corey's interest (and no criticism is implied for that). The only point I am trying to make is to address Chris' point: there are theoretical aspects involved in such synthetic methods which, if unravelled explicitly, might permit more general progress in synthetic chemistry, and would certainly help other chemists who have to carry out syntheses to make materials for reasons other than simply developing synthetic methods.
Posted by Ian Miller on Jul 2, 2011 5:00 AM BST
The decision to do something is always preceded by a theory, which usually involves a proposition of the "if … then" form, i.e. if I do A and the conditions G apply, then the outcome P will follow. The premise of this blog is that there are usually alternatives B, C, D … What we want is for our politicians to make the best choice from the set of alternatives, but if scientists want decisions to be made on evidence, then they must provide the necessary information to the decision makers and the public in a form they can understand. I think you are a little deluded if you think that happens often enough.
Consider the issue of carbon dioxide storage. On one side, the coal industry states: if all the carbon dioxide made by burning coal is buried permanently, then there is no adverse environmental effect from that carbon dioxide, therefore we can continue burning coal. Based on what is available to the general public, is this sound policy or a case of "Out of sight, out of mind"?
Compound propositions such as that are usually bad; the proper procedure is to separate the steps and draw each conclusion in turn, rather than lurch straight to the end. Suppose we rewrite the first part as: if all the carbon dioxide made by burning coal is buried permanently, then there is no adverse environmental effect from that carbon dioxide entering the atmosphere. I think that is justifiable, but note the subtle difference.
One problem involves the word "permanent". Where do we store it? If it is deep enough, say approaching 1 km, the pressure will convert the gas into a supercritical fluid. The ideal situation is containment beneath impermeable rock, but how easy is it, really, to find that? Something that traps heavy oil does not necessarily trap supercritical carbon dioxide. Can we guarantee it will not travel through porous rock, or, if an old oil structure is used, through an abandoned well whose cap fails? What about faults or breaks in rock structure? I would question anyone who states we know where all the faults are. As evidence, Christchurch in New Zealand is now undergoing an extremely prolonged sequence of earthquakes, due to a sequence of faults that were completely unknown two years ago. They are now being found because they are active; inactive faults can go undetected for a very long time. Suppose the carbon dioxide forms a clathrate in water, and suppose the clathrate can move to a region of lower pressure? Could the resultant decomposition of the clathrate widen a gap, and thus accelerate leakage?
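A back-of-the-envelope calculation shows where the figure of roughly 1 km comes from. Carbon dioxide becomes supercritical above about 31 °C and 7.4 MPa, and a simple water column reaches that pressure at a depth of around 750 m. The sketch below assumes a plain hydrostatic gradient, which is an idealization; real formation pressure gradients vary, and the temperature at depth must also exceed the critical temperature:

```python
# Depth at which hydrostatic pressure passes the critical pressure of CO2.
# Assumes the pressure gradient of a plain water column (illustrative only).

RHO_WATER = 1000.0    # kg/m^3, density of water
G = 9.81              # m/s^2, gravitational acceleration
P_ATM = 101_325.0     # Pa, surface (atmospheric) pressure
P_CRIT_CO2 = 7.38e6   # Pa, critical pressure of CO2 (~73.8 bar)

def pressure_at_depth(depth_m):
    """Hydrostatic pressure (Pa) at a given depth in a water column."""
    return P_ATM + RHO_WATER * G * depth_m

def depth_for_pressure(p_pa):
    """Depth (m) at which the water column reaches pressure p_pa."""
    return (p_pa - P_ATM) / (RHO_WATER * G)

depth = depth_for_pressure(P_CRIT_CO2)
print(f"CO2 passes its critical pressure at about {depth:.0f} m")  # ~742 m
```

So "approaching 1 km" comfortably exceeds the critical pressure with some margin for formations whose pressure gradient is below hydrostatic.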
The ultimate in containment would be for the carbon dioxide to react with rocks to form carbonates. Olivine is one of the better-weathering minerals, while peridotite (an olivine/pyroxene blend) in Oman apparently weathers very quickly. The problem, of course, is that basalt is not one of the easiest rocks in which to find storage space, it is somewhat unyielding, and the world's emissions are not centred on Oman.
Does this process make sense other than as a special case? Old oil fields are often cited as disposal points, but these are often far from coal-burning plants, and very frequently a number of test wells will have been drilled and sealed with unknown reliability. The oil is usually under sandstone, which usually comprises the remains of already-weathered rock, so there is little left for the carbon dioxide to react with. The collection, transport, compression and injection of the carbon dioxide require considerable energy (I have seen an estimate that almost 30% more power stations would be required), and the construction of the necessary pressurized piping also generates carbon dioxide.
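An estimate like "almost 30% more power stations" follows from a simple energy balance: if capture, compression and injection consume some fraction of each station's gross output (the parasitic load), then delivering the same net power requires 1/(1 − load) stations' worth of capacity. The parasitic-load figure in the sketch below is an assumption chosen to reproduce a ~30% result, not a number from the estimate I saw:

```python
# Back-of-envelope: extra generating capacity needed when carbon capture
# consumes a fraction of each station's gross output. The parasitic-load
# figure used here is an illustrative assumption.

def capacity_multiplier(parasitic_load):
    """Stations needed per station's worth of net delivered power."""
    if not 0.0 <= parasitic_load < 1.0:
        raise ValueError("parasitic load must be in [0, 1)")
    return 1.0 / (1.0 - parasitic_load)

# A parasitic load of about 23% of gross output implies roughly 30% more
# stations for the same net supply, since 1 / (1 - 0.23) ~ 1.30.
extra = capacity_multiplier(0.23) - 1.0
print(f"extra capacity needed: {extra:.0%}")
```

The non-linearity is worth noting: each extra percentage point of parasitic load costs more than the last, because the additional stations also carry the capture burden.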
The issue here is: if the public decides to reduce carbon dioxide emissions to the atmosphere, is this concept a sensible part of the solution, or is it simply "smoke and mirrors"? Are scientists making all the relevant information available to the public on this matter? If a member of the public wants to find out what is critical, can he or she? You may protest that my analysis above is superficial, but if I cannot readily find what is needed to reach a proper conclusion, how can the public? If you argue that the public does not matter, then you are arguing against democracy. In my opinion, if science is to make a proper impression on our future, scientists have to lift their game.
Posted by Ian Miller on Jun 21, 2011 11:05 PM BST