Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


A question for the reader to contemplate before I offer my answer: what is the primary objective of an investment in a technology development?
 
It is now my intention to run a small series of blogs on investment in scientific development, and the example will be biofuels. The reader might like to participate by deciding which technology, if any, they would invest in if they had the money. An important consideration is this: if nobody invests, and if no alternative such as fusion power or fuel cells is developed, people had better get used to walking.
 
Returning to the question, the answer is clear: to make money. That should be the only primary objective. Now I guess a number of readers will object to that, so I shall explain.
 
Businesses in a competitive market are in a Darwinian environment. It should also be made clear that a common summary of Darwinian evolution, namely "survival of the fittest", is just plain wrong. It should be: survival of the adequate to continually occupy a niche. As an example, I might point to the red algal genus Bangia. In terms of how cells may be arranged in multicellular forms, this alga is one-dimensional, i.e. the cells are in a single line, and the plant sits at the top of the intertidal splash zone. There are a number of other algae that can occupy a similar zone, such as the two-dimensional Porphyra, so Bangia is not even the fittest in this rather limited niche, but it must be adequate, because a fossil that appears indistinguishable from modern Bangia has been found that appears to be 1.3 Gy old.
 
To be adequate, life forms must feed and reproduce. Businesses simply feed; if their income does not exceed their outgoings, they do not last long.
 
For a biofuels company to succeed, income depends on the fuels being sold at the required price, and the market determines price. As oil demand exceeds supply, prices will rise, but the question is, by how much? One question that must be faced is price/demand elasticity, i.e. how much demand is affected by increased price. At first sight, judging by certain governments' taxation policies, not much, but unfortunately there is no time symmetry in economics: what works today may not work tomorrow. As the price of fuel in general rises, as opposed to the price of discretionary petrol, the price of everything else with a fuel content rises with it. Wages could keep up, if you desire hyperinflation, but if money is to have any meaning, you have to assume wages, if anything, will decline unless there are serious productivity improvements. In this context, I have seen figures suggesting the pound sterling has inflated by a factor of over 620 in the last ninety years, so maybe more significant inflation is on the horizon. "Quantitative easing" is certainly little different from "printing money".
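That 620-fold figure is easy to translate into an annual rate; the ninety-year and 620 numbers are taken from the paragraph above, and the rest is just compound interest in reverse:

```python
# Compound annual inflation rate implied by a 620-fold price rise over 90 years,
# i.e. solve (1 + r)^90 = 620 for r.
factor = 620.0
years = 90
annual_rate = factor ** (1 / years) - 1
print(f"Implied annual inflation: {annual_rate:.1%}")  # about 7.4% a year
```

So the historical rate is a fairly ordinary-looking 7% or so per year; the alarming part is what ninety years of compounding does to it.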
 
Notwithstanding that, the rich will probably buy fuel at any price. What the social consequences of that would be is anyone's guess, but the implication is that if we want a future that bears any resemblance to what we have now, a significant volume of fuel is required, which means that someone has to invest in a technology for which there is a significant resource, and the technology has to be reasonably cheap to implement. That, however, does not exclude niche supplies. The fact is, the market cannot be saturated with biofuel in the foreseeable future.
 
So, before I give you my guesses, where would you, the reader, place your money?
Posted by Ian Miller on Nov 13, 2011 8:18 PM GMT
This may seem an odd topic to discuss under alternative theories, but that depends on what you think a theory is. I argue that whoever sets up a system has a theory that it will work. They do not know, which makes it a theory, and as a theory it should be testable. That such politically based theories seldom seem to be subjected to evidence-based tests is a clear failure, but we could also argue that when scientists, whose business it is to test theories, remain silent, they too fail.
 
I am far from convinced that current systems work properly, although that comment comes with a very big caveat, namely my experience is in New Zealand (I was on a funding panel for approximately 10 years, and I also applied for funding from time to time, so I know "both sides") and what happens here may not be even approximately typical of what happens elsewhere. In this context, comments or a discussion would be welcome.
 
The most obvious problem I found on a funding panel was that there was one set of applications that were obviously excellent, and they were funded; one set that should be put out of its misery, and was; but unfortunately also a substantial set in the middle that ranged from highly to reasonably desirable, through which a line had to be drawn simply because there were not enough funds. Where to draw that line was very difficult, particularly since most if not all of these applications lay outside the specific experience of the panelists.
 
External referees were of surprisingly little help. Only too often a referee would be an acquaintance of the applicant (almost inevitable in narrow fields, even when referees were selected internationally), or alternatively the referee became obsessively critical. Rather surprisingly, very few referees did a really competent job, and in its own way that made the problem more difficult: how do you compare a rating of 5 (out of 10) from a clearly competent report with an 8 from a fairly light airbrush? Some applicants hurt themselves (and I fell into this category more than once) by not giving much away, because they did not feel that all referees had adequate integrity. I had seen such an example when, somewhat unsportingly, I tracked referees' subsequent activity in respect of applications. Of course there is a corollary: it is unfair to dump something on a referee's desk and expect him to exclude the contents from his own future work when he may very well have had similar ideas. In short, I do not feel such use of referees is valid, other than possibly for purely academic applications.
 
The next problem arose from the politics. Those who provide the funds face a barrage of issues, and they end up breaking the first rule of strategy: they impose too many objectives by trying to make the money do too many things. In my opinion, a grant of funds should be directed primarily at one objective. Of course there can be a number of other benefits, but what tends to happen when politicians impose too many objectives is that a nightmare of bureaucracy results, and very complicated applications turn up with all sorts of statements that in many cases are little better than arm-waving.
 
So, what should be done? In my opinion, funding should be based on past performance ratings, with emphasis on the recent past. The advantage would be that much less time and money would be wasted on bureaucracy, and the researchers could spend more time doing research. The young scientist should start in an established group, and at some stage be "given a chance". Then, perhaps, another. The more successful can then get on with it, with little more to worry about than maintaining a success rate. Public funds might be preferentially given in certain areas, say, but only to those with sufficient expertise. The downside of this proposal, of course, is how to assess past performance. Nevertheless, such difficulties will always be with us, and I rather suspect that "where to draw the line" is usually based on past performance anyway, except that the assessment is based more on "public reputation", because considered assessment has not been done.
 
What do you think? I would be interested in responses. As a personal disclaimer, I have no current personal interest in seeking such funding.
Posted by Ian Miller on Oct 29, 2011 12:30 AM BST
This is a little off the topic of chemistry, but it is a very rare example of what Kuhn described as a crisis in theory. The crisis is that muon neutrinos appear to have exceeded light speed (c) by more than the claimed limits of experimental error. The most obvious way this could happen is experimental error; however, nobody seems to be able to locate any. This is potentially the only such example of the last 40 years, and it will be interesting to see whether the process Kuhn described eventuates. According to Kuhn, there will be various stages: denial, grudging acceptance, a flurry of explanations, then something will settle into the new paradigm, in some cases not necessarily based on logic but often on the reputation of the proposer.
 
From a personal point of view, there is a further lesson. Part 2 of my ebook contained outlines of what I think it would be helpful for chemists to know about theory in physics, and vice versa, and there follow 72 problems. Not everything I thought of made it. One question I contemplated was along the lines of: if Einsteinian mechanics were to break down, where would it, and why? (The concept of relativity was actually laid down by Galileo.) In the end I discarded the question as being too unlikely. So, what are some of the possibilities?
 
(1) Einstein's assumption of no preferred frame of reference, i.e. that every observation point is equivalent, is wrong. I am uncomfortable with this because I cannot see why it would show up only in this experiment, and only now.
 
(2) An astronomer I know suggested the neutrinos were taking a short cut, say by some motion through another dimension. The concept involves curved space: e.g. a two-dimensional "flatlander" on the surface of the Earth could in principle send a neutrino through the Earth. Again, I don't see what is so special about this experiment, so why now?
 
(3) My prejudice is for a yet-to-be-discovered force. Einstein's relativity, according to Feynman, can be obtained from the mass enhancement equation, effectively by using the same equations that lead to the mass enhancement equation but working in the other direction. Mass is in practice measured through resistance to acceleration. The electromagnetic force is mediated at light speed, so any accelerating force has to "catch" the particle; as light speed is approached this becomes increasingly less effective, the acceleration drops towards zero, and the "mass" is effectively infinite. This would break down if there were an as yet undiscovered force that was mediated at a speed d instead of c, where d > c. Relativity would still work, but for this force c must be replaced with d in the various equations. The reason for this view lies in the question: how did such a neutrino even get that close to c? What accelerated it? A minor undiscovered source of experimental error does not solve the problem, because the neutrino must still approach c; only a significant clanger in the experimental method does.
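The mass-enhancement argument can be put in rough numbers. A minimal sketch using the Lorentz factor, which is the standard expression of that enhancement (the sample velocities are arbitrary, chosen only to show the divergence):

```python
import math

# Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2): the "mass enhancement"
# referred to above. As v approaches c, gamma diverges, so a force
# mediated at speed c becomes progressively less able to accelerate
# the particle any further.
def gamma(v, c=1.0):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for v in (0.9, 0.99, 0.999, 0.999999):
    print(f"v = {v}c  ->  gamma = {gamma(v):.1f}")
```

The effective inertia grows without bound near c, which is why the question of what accelerated the neutrino so close to c in the first place is the interesting one.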
 
Chemists, of course, are hardly likely to make any impact on this problem, but this gives us a privileged position: we can watch while others scratch their heads. It should be interesting to watch what unfolds, because whatever else, this is a result that cannot be ignored, unless it is so wrong it should be!
Posted by Ian Miller on Oct 14, 2011 11:44 PM BST
Two recent announcements in the local news caught my eye. The first was that the E.U. intended to impose a carbon tax on airlines, the tax being proportional to the distance flown, the argument being that the further you fly, the greater the amount of carbon dioxide put into the upper atmosphere, and that, of course, is bad, at least according to those wishing to impose the tax. Estimates of the additional cost for a single flight to Europe from here amount to about $800, and, of course, Europe will keep all the money. It is also noticeable that there is no suggestion that the taxpayers pay their own government and thus contribute to their own countries' efforts against climate change, so the suspicion here is that this has little to do with climate change and more to do with raising cash.
 
Be that as it may, the second article was a proposal to spend a lot of cash to find ways of sending lots of material from the surface into the stratosphere to initiate clouds, etc, and thus raise the albedo of the planet, that way reflecting more light back to space and thus cool the planet.
 
It seems to me this raises two interesting questions. The first is that, given aircraft exhausts produce water, which at minus 50 C tends to form ice (white, and hence a good reflector of light), there is no evidence that airlines produce a net detrimental effect. For all we know, their total effect may be beneficial. Even if they did raise the stratospheric temperature by a couple of degrees, would that be bad? Carbon dioxide at that altitude should be a net radiator; in this context, the thermosphere of Earth is about 1400 degrees C, while that of Venus is about 300 degrees C. Yes, the carbon dioxide will eventually sink to lower altitudes, but even then there is no evidence of a net detrimental effect, at least while the airlines keep flying.
 
The second question is: can we put something in the fuels that will maintain a longer albedo enhancement? The problem with ice is that it does not take long to sublime, so the effect does not last. Suppose, however, we put in an alkyl aluminium compound, or an alkyl zinc. The oxides melt at about 2070 and 1970 degrees C respectively, so they will not slag; the oxides are white; and you get heat when they burn. Yes, a little more care is needed in fuel handling to avoid spontaneous combustion, but the fuel systems on aircraft would have to be redesigned anyway, because you would only want this fuel burnt once cruising altitude was reached. However, when dissolved in hydrocarbon solution these materials are safer, as shown by a YouTube video in which an attempt to demonstrate the spontaneous combustion of diethyl zinc in solution failed.
 
Of course there is the obvious objection: you have to do quite some redesigning of fuel systems and handling. My answer is, if you want to save a planet, you have to do more than raise tax!
 
I suppose the last question is: suppose this worked, there was a massive reduction in heat retention, and the climate problem was solved; would the E.U. hand out massive tax rebates or other payments as a reward for saving the world? (Note that every rhetorical question deserves a rhetorical answer!)
Posted by Ian Miller on Oct 2, 2011 10:21 PM BST
In my ebook, I maintain that choosing what to do is in effect choosing between alternative applied theories, and in a previous blog I commented on why I think the fermentation of lignocellulose first to glucose and thence to ethanol to make biofuels is not a good idea, the main reason being that the first fermentation is too expensive. What nobody commented on was that this could change if ethanol were not the main product, and it is of interest to view the recent Mascoma IPO, where a wide range of other income sources is proposed.
 
There is a broader question: should we pursue ethanol at all? Given sugars, the technology is mature, but what about feedstocks? There are many objections to the use of crops to make ethanol on the basis that with a growing population food is the priority. So, is ethanol a bad biofuel?
 
"Bad" depends on your point of view. The farmer wants the best price for his crop, the hungry cannot pay, so who does what? There is a tendency to say "somebody else should pay", which, in my view, is not helpful. There are also red herrings in the analyses, such as "carbon efficiency", "blending efficiency" and "energy efficiency". Carbon efficiency is the worst of these: the argument is that glucose has six carbon atoms, and two disappear off as CO2. That is totally irrelevant: there is no shortage of carbon atoms. Blending efficiency is a red herring because Brazil has shown that, provided proper management is undertaken, there are no inherent technical problems.
 
The argument that ethanol has a lower energy density than hydrocarbons is true but somewhat misleading. The issue is not energy but useful work, for we have to power a transport fleet. Whether you use more fuel (because of the lower energy density) is irrelevant if the issue is, can you run your motor? Also, the work from ignition is delta PV, which increases if the pressure is increased. In a spark ignition motor, that requires a higher octane number, and ethanol has a significantly higher octane number than standard fuel; with blending it is efficient at raising the overall octane. Accordingly, we could get more efficiency by raising the motor compression, which raises a question that is seemingly always ignored: what properties will the future transport fuel have? If we do not address that, much of our planning is going to be wrong.
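The carbon-efficiency argument above is simple stoichiometry, and worth quantifying. A short check using standard molar masses for the fermentation C6H12O6 -> 2 C2H5OH + 2 CO2:

```python
# Fermentation stoichiometry: C6H12O6 -> 2 C2H5OH + 2 CO2.
# Molar masses in g/mol (standard values).
glucose, ethanol, co2 = 180.16, 46.07, 44.01

mass_yield = 2 * ethanol / glucose   # g ethanol per g glucose, theoretical
carbon_retained = 4 / 6              # 4 of glucose's 6 carbons end up in ethanol

print(f"Theoretical mass yield: {mass_yield:.3f} g ethanol / g glucose")
print(f"Carbon retained in fuel: {carbon_retained:.0%}")
```

So a third of the carbon does indeed leave as CO2 and about half the mass is lost, but as argued above, carbon atoms are not the scarce resource; the question is what useful work the product delivers.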
 
The real question is, how do we power transport once oil becomes scarce? Fermentation of sugars to ethanol has advantages. The first is that it is reasonably efficient on smaller scales. That means it can use wastes, which tend to be produced in smaller, local amounts. There is also one other feedstock: synthesis gas. Certain anaerobes, including Clostridium species, appear to be able to convert this gas stream to ethanol, and the microbes seem tolerant of a wide range of hydrogen/carbon monoxide mixes. That means in principle we can get fuel from waste streams, such as the gas effluent of steel mills, which otherwise has no use.
 
In my opinion, there appears to be no use for small amounts of low-quality synthesis gas other than fermentation to ethanol, and no other technology that is convenient for low volumes of sugar waste. Either that ethanol can be used by the chemical industry, or we need to maintain spark ignition motors in the long term. Long-term planning for transport should take that issue into account; however, in my opinion, planning for transport fuels appears to be operating on a "market rules" basis. That will have all the aspects of Darwinian evolution, and while the market enthusiasts might argue that evolution guarantees the fittest (actually, it does not: evolution involves survival of the adequate to reproduce), evolution also involves numerous extinctions. Do we wish to nearly extinguish individual transport? If not, some form of planning might seem desirable.
Posted by Ian Miller on Sep 20, 2011 11:54 PM BST
According to MO theory there can be no exceptions to the Woodward-Hoffmann (WH) rules; nevertheless there are exceptions. On the other hand, in my opinion Pauling's valence bond theory, which invokes canonical structures, predicts where exceptions should occur, and why. In my example of the pentadienyl carbenium ion, the concept of canonical structures puts positive charge evenly on C1, C3 and C5. If we substitute the ends with alkyl groups, which stabilize carbenium ions, then positive charge is preferentially located at C1 and C5. An empty orbital represents positive charge localized on a given atom, and according to molecular orbital theory the effect of an empty orbital should be the same as that of an occupied one. As far as I can make out, this concept originated with Mulliken (Phys. Rev. 41: 49-71, 1932), but essentially as an assertion.
 
Why do the WH rules work? The usual argument is that a +signed function must overlap with another +signed function, and from that observation the rotational characteristics of the WH rules follow. (Actually, the same rules follow if a bond forms only when plus interferes with minus, but the diagrams are messier. This is actually the rule for forming antisymmetric wave functions, which at least in some formalisms is a requirement of the Exclusion Principle, but since the same outcome always arises, the issue is irrelevant here.) This gets to the point where we have to ask: what does the sign mean?
 
In the general theory of wave interference it refers to the phase of the wave. When amplitudes have the same sign, they reinforce. The important point is that there must be a phase relationship between the ends. Now, the phase of the wave is proportional to the action, and it changes because the action (the time integral of the Lagrange function) evolves in time. However, no matter how long zero is integrated with respect to time, you still get zero, and the Lagrange function of an entity with zero mass and zero charge, which is what an empty orbital has, is zero. The solution to the Schrodinger equation when E, V and m each equal zero is zero everywhere for all time. Zero can be added any number of times, but it makes no difference.
 
If so, the canonical structure with positive charge on an end carbon atom gives zero directional effect. Therefore the strength of the preference (because there is always some contribution from the canonical structure with the required phase relationship) is reduced whenever a carbenium ion is involved in the transition state and the carbenium site is substituted. The orientation of the substituent is significant too, because the bigger the steric clash on the complying path, the easier it is for the canonical structure that permits non-compliance to become more significant, since it forces rotation to start before there is significant orbital interaction.
 
Now, I believe this alternative interpretation is important for two reasons. The first is that it gives a specific reason why there should be exceptions to the Woodward-Hoffmann rules, and it predicts where they will be found. Thus, if nothing else, it will guide further experimental work. The alternative theory is either right or wrong, and there is one obvious way to find out. The second reason is more important: I believe that if this alternative interpretation is found to be correct, it forces chemists to revisit the concept of canonical structures, which I believe gives far more fertile ground for understanding chemistry than current MO theory, at least for the average bench chemist. Further, I suspect there is no aspect of organic chemistry (and probably none of other chemistry, except I am not familiar enough with that to be sure) that does not comply with the concept of canonical structures, if these are properly used. So, there is a further challenge: find some aspect of chemistry where canonical structures, properly used, fail.
Posted by Ian Miller on Sep 5, 2011 3:58 AM BST
Perhaps my challenge, "what is the most illogical thing you can associate with standard quantum mechanics?", was not to everybody's taste. Sorry, but I cannot help feeling that ignoring problems is not the way to enlightenment. Also, part of what follows is critical to my question: where do you expect violations of the Woodward Hoffmann rules?
 
Consider the following:
(1)  You cannot know something about a phenomenon until you observe it.
(2)  Prior to observation, a state is a superposition of all possible wave functions.
What I consider illogical is the assertion that the superposition of states is a reality, and that when an observation is made there is a physical event known as the "collapse of the wave function". Such a superposition of states can never be observed, because by definition, when it is observed, it collapses to one state. The electron either is, or is not, there. So why does this assertion persist? (I have no idea on this one. Anyone, please help.)
 
The best-known example is the Schrodinger cat paradox, in which a box contains a cat and a radioactive source coupled to a device that emits hydrogen cyanide when a particle is detected. If no particle is emitted, the cat is alive; if one is, the cat is dead. Before we observe, we do not know whether the particle has been emitted or not, and both states may be equally probable. That is described as a superposition of states, but according to the paradox, a consequence is that the cat is also in a superposition of states, neither dead nor alive.
 
The problem involves the conclusion that the square of the amplitude of a wave function gives the probability of making an observation. If the objective is to compute the probability of something happening, and you consider the states to represent probabilities, such states have to be superimposed. The probability of something that exists, such as the cat, must be 1. In this case, the cat is either dead or alive, and the probabilities have to be summed. The same thing happens with coin tossing: the coin will be either heads or tails, and there is equal probability, hence two states must be considered to make a prediction.
 
Herein lies a logic issue: the fallacy of the consequent. Thus while we have, "if we have a wave function, from the square of its amplitude we can obtain a probability" it does not follow that if we can ascribe a probability, then we have a wave function. What is the wave function for a stationary coin, or a stationary cat? It most certainly does not follow that if we have two probabilities, then the state of the object is defined by two superimposed wave functions. A wave function may characterize a state, but a state does not have to have a wave function.
 
A wave function must undulate, and to be a solution of the Schrodinger equation it must have a phase defined by exp(2pi.i.S/h), where i is the imaginary unit and S is the action, defined as the time integral of the Lagrange function, which can be considered as the difference between the kinetic and potential energies. (The requirement for the difference arises from the requirement that motion comply with Newton's second law.) As can be seen, a complete period occurs whenever S = h, which is extremely small. For the spinning coin, any wave function's phase returns to its initial value so quickly that it is irrelevant, and classical mechanics applies. In this light, the Schrodinger cat paradox as presented says nothing about cats, because there is no quantum issue relating to a cat.
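To put a rough number on the spinning coin: the phase completes one period each time the action grows by h, so the period is of order h divided by the Lagrange function. The coin's mass, radius and spin rate below are illustrative guesses, not figures from the text; the point is only the order of magnitude:

```python
import math

# Order-of-magnitude estimate: how fast does the quantum phase of a
# spinning coin evolve? One full period each time the action S grows
# by h, i.e. every dt ~ h / L, where for a freely spinning coin the
# Lagrange function L is roughly its rotational kinetic energy.
h = 6.626e-34              # Planck's constant, J s
m, r = 0.005, 0.01         # assumed: a 5 g coin of 1 cm radius
omega = 2 * math.pi * 10   # assumed: ~10 revolutions per second

I = 0.5 * m * r ** 2       # moment of inertia of a uniform disc
L = 0.5 * I * omega ** 2   # kinetic energy ~ Lagrange function, ~5e-4 J
dt = h / L                 # time for one full phase period
print(f"Phase period: ~{dt:.0e} s")
```

A period of around 10^-30 s is so far below anything observable that, exactly as argued above, classical mechanics takes over for any macroscopic object.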
 
What about the particle? When Aristotle invented logic, he allowed for such situations. To paraphrase, suppose one morning there is a ship in harbour fully laden. The question is, where will it be tomorrow? It could be somewhere out to sea, or it could be where it is now. What it won't be is in some indeterminate state until someone observes it because it is impossible for something of that size to "localize" without some additional physical effect. Aristotle permitted three outcomes: here, not here, and cannot tell, in this case because the captain has yet to make up his mind whether to sail. Surely, "cannot tell" is the reality in some of these quantum paradoxes?
 
As a final comment, suppose you consider that with multiple discrete probabilities there have to be multiple wave functions? The probabilities are additive and must total 1. The wave amplitudes are additive, and here we reach a problem, because the sum of the squares does not equal the square of the sum. Mathematically, we can renormalize, but what does that mean physically? In terms of Aristotle's third option, it is simply a procedure to compute probabilities. However, it is often asserted in quantum mechanics that all this is real and something physically happens when the wave function "collapses".
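The sum-of-squares point is easy to illustrate numerically; this toy calculation assumes nothing beyond the two equal probabilities discussed above:

```python
import math

# Two equally probable outcomes. Classically the probabilities add:
# 0.5 + 0.5 = 1. If instead the wave AMPLITUDES are added and then
# squared, the total comes out as 2, not 1, and must be renormalized
# by hand, which is exactly the procedure questioned above.
a = b = math.sqrt(0.5)            # each amplitude alone gives probability 0.5
p_classical = a ** 2 + b ** 2     # sum of the squares
p_from_sum = (a + b) ** 2         # square of the sum
renormalized = ((a + b) / math.sqrt(p_from_sum)) ** 2
print(round(p_classical, 6), round(p_from_sum, 6), round(renormalized, 6))
```

The arithmetic is trivial, but it makes the gap plain: renormalization restores a total of 1 as a computational step, which is quite different from claiming that something physical happened.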
 
Now, you may well say this is unnecessary speculation, and who cares? Quantum mechanics always gives the correct answer to any given problem. Perhaps, but can you now see why there can be exceptions to the Woodward Hoffmann rules, and where they will be? The key clue is above.
Posted by Ian Miller on Aug 23, 2011 11:53 PM BST
First, the simple answer to my challenge, which was whether there were exceptions to the Woodward Hoffmann rules. The exception I wish to discuss involves the cyclization of pentadienyl carbenium ions to cyclopentenyl carbenium ions [Campbell et al., J. Am. Chem. Soc. 91: 6404-6410 (1969)]. Part of the reason for the challenge was to illustrate a further point, namely that quite a bit of critical information is out there, but if it is not recognized quickly as being important, it gets lost. However, it can be found, because there is usually someone out there who knows about it. Such information could be recovered more readily if there were a web facility for raising such questions. To succeed, it must be easy to use, with abuse prevented, so that scientists in general will cooperate.
 
A further point I was trying to make was to emphasise logic. This was a problem at the end of my ebook Elements of Theory; the problems there had the structure that the reader had to try to come to an answer, then analyse my given answer, parts of which were not necessarily correct, the idea being to promote critical thinking. Part of the answer I gave included what I thought was an innocent enough deception, namely that the Woodward Hoffmann rules were violated because of extra substitution. That, of course, has a logical flaw: without the substitution you cannot tell whether the rules are violated or not. The reason for mentioning this here is that in the abstract to the example that Chris Satterley found, the same statement is made.
 
So, for a further little challenge, what is the most illogical thing you can associate with standard quantum mechanics? My answer soon.
 
The cyclization of pentadienyl carbenium ions could be followed because there was a methyl group at each end, and the stereochemistry was known. The deviations from the W-H rules were explained in terms of the steric clashes that occurred  as the ends came together: H-H, H-Me and Me-Me. The H-H clash produced more or less complete agreement with the W-H rules, but the Me-Me clash led to an almost 50-50 product distribution between the two possible cyclopentenyl ions. There is little doubt that a steric clash occurred as the carbenium ion approached the geometry needed for cyclization to take place, however at least one further thought should be considered. The pentadienyl carbenium ion should be planar, so the methyl-methyl clash does not give a preference to any particular rotation. A 50-50 mix of products suggests that when the ends come together, random vibrations lead the methyl groups to slide one way or the other, and both ways cyclize. This is the important bit: if the W-H rules were absolute, what would happen is that the prohibited route would not react, the material would remain in the pentadienyl form, reopen, then come together again, and would react only if/when it accidentally got into the correct configuration. The steric clash would result in both reactions being slower. As it happened, in the actual experiments all cyclizations were too fast to know about that aspect.
 
However, while additional substitution was present in the ring opening reactions of benzocyclobutene, the strain issue is not quite relevant because by the time the substituents generate a good clash, the system has proceeded far enough along the reaction coordinate that extra strain would simply drive the ring opening faster.
 
Which brings me to the second little challenge: is there any chemical theory that offers a rational explanation for these exceptions to the W-H rules? I must confess that back in the late 1960s I simply assumed that this exception merely meant the W-H rules were preferences which, with a little discouragement, could be overruled; however, I am now convinced that that was just lazy, and when nature provides exceptions like this, it is trying to tell us something.
Posted by Ian Miller on Aug 16, 2011 6:16 AM BST
For the two responses to my challenge, thank you, and thank you also for the comment on last week's blog. I am going to leave the challenge open for another week, and add a further comment on biofuels.
 
There is little or no difference between a plan to do something and a theory on what should be done, and for producing biofuels there is no shortage of alternative theories as to what should be done. The question now is, should some basic analysis be done on the various options before we send vast sums of money after them? One argument is, until you do the research you cannot analyse the situation well enough to come to a conclusion. I would agree that you cannot make close decisions, and as noted in my previous blog, there will probably be a number of good solutions to this problem implemented. But surely we can avoid some obvious mistakes? Unfortunately, if the analysis is weak, it can be a mistake to "avoid obvious mistakes" when in fact they are not mistakes. Notwithstanding that, I can't help feeling that much of the money devoted to biofuels research is not well-spent. There is an adage, you can be fast, you can be cheap, you can be efficient; choose no more than two. I believe available capital is scarce in terms of the size of the problem and we want the solution to work. Accordingly, we should be more reluctant to jump onto the latest fashionable exercise.
 
The most obvious biofuel is ethanol, and the production of ethanol by fermentation of sugar is a technology that has been operating for about 6,000 years, so we have learned something about it. Currently, fuel ethanol is produced in Brazil from sugar cane and in the US from corn, but growing food crops for ethanol is not considered a likely long-term solution, if for no other reason than that, with an expanding population, food becomes the priority. Of course, food waste can always be converted to fuel, but the size of the fuel requirement means the bulk of it must come from somewhere that does not compete with food production.
 
A more likely answer is to produce ethanol from cellulose, by first hydrolysing the cellulose to glucose and then fermenting the glucose. Because "biotech" has been very fashionable, this route has received a lot of funding, but I believe any reasonable analysis will show it is suboptimal. Most plant biomass contains polyphenols, such as tannins and lignins. The question is, why? The usual answer is that lignin binds cellulose and lets trees grow tall. I would argue that is a side benefit, and the original reason for polyphenols being incorporated into plant material is to make it more difficult to digest: plants that are harder to digest get eaten less, and so reproduce more successfully, reinforcing the trait. In other words, lignocellulose has been optimized by evolution to resist enzymatic digestion. Yes, nothing is perfect and we can find enzymes that will hydrolyse woody biomass, but the hydrolysis is slow. Ethanol manufacture along these lines will seemingly commence with immense tanks of wood-chip slurry that take days to process. If all the cellulose is converted to glucose, then the lignin, which is about a third of the mass, will be left as a finely divided and very wet sludge that is difficult to filter and which will retain a reasonable fraction of the glucose. In practice, much of the crystalline cellulose will not hydrolyse at all, but will merely expand the sludge.
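To put rough numbers on the sludge problem, here is an illustrative mass balance for one tonne of dry woody biomass. This is a sketch: the composition split and the 90% conversion efficiencies are assumed round numbers for illustration, not measured values; only the stoichiometry (hydrolysis gains mass as water is added, and fermentation C6H12O6 → 2 C2H5OH + 2 CO2 gives at most ~51% ethanol by mass from glucose) is fixed.

```python
# Illustrative mass balance: 1 tonne of dry lignocellulose to ethanol.
biomass_kg = 1000.0

# Assumed composition (round numbers for illustration only).
cellulose_frac = 0.45
lignin_frac = 0.33          # "about a third", as noted above

# Hydrolysis: (C6H10O5)n + n H2O -> n C6H12O6; mass gain 180/162.
hydrolysis_eff = 0.90       # assumed fraction of cellulose actually converted
glucose_kg = biomass_kg * cellulose_frac * hydrolysis_eff * (180.0 / 162.0)

# Fermentation: C6H12O6 -> 2 C2H5OH + 2 CO2; mass yield 92/180 (~51%).
fermentation_eff = 0.90     # assumed
ethanol_kg = glucose_kg * fermentation_eff * (92.0 / 180.0)

# Everything not hydrolysed (lignin, hemicellulose residues,
# unconverted crystalline cellulose) ends up in the wet sludge.
sludge_kg = biomass_kg - biomass_kg * cellulose_frac * hydrolysis_eff

print(f"Glucose produced:  {glucose_kg:.0f} kg")
print(f"Ethanol produced:  {ethanol_kg:.0f} kg")
print(f"Residual solids:   {sludge_kg:.0f} kg")
```

Even with these optimistic assumptions, a tonne of feedstock yields only about 200 kg of ethanol while leaving well over half the input mass as wet residue to be handled.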
 
Is this the correct way to go about the problem? It seems to me that theory suggests it is not, yet it appears that in recent times more money has been sent chasing this option than any other single option. Perhaps someone knows something I do not and my assessment is wrong, but if so, whatever that something is, it is not widely known.
 
Posted by Ian Miller on Aug 4, 2011 1:02 AM BST
So far, no response to my challenge, and in particular, nobody has posted a failure of the Woodward-Hoffmann rules. Why not? One answer would be that there are none, so there is nothing to post. Another possibility is that there are examples, but nobody can find them. More depressingly, perhaps nobody cares; and finally, most depressingly of all, at least for me, perhaps nobody is reading my blogs.
 
Meanwhile, I have had a topic request! What about transport fuels? Conceptually, a plan is the same as a theory, and in this case there are plenty of theories as to what we should do, so the question is, how do we choose which to follow? To slightly amend a quote from General Wesley Clark: there are two types of plans, those that won't work and those that might; we have to select from those that might work and make them work. Accordingly, the art of selection involves logical analysis of the various plans/theories. I must start with a caveat: I worked in this area during the previous oil crisis and am working in it now, and while I have considerable experience, it is in one area. The reader must accept that I may show bias in favour of my own conclusions.
 
My first bias regards methodology. A particular source of irritation, when proposing a process, is the request for an energy balance. In one sense, from time symmetry, I know energy is conserved, so the energy always balances. If what is meant is "how much do you get out usefully?", the second law answers: less than you put in. But even if you account for this (usually with varying degrees of optimism!), it is of no use in selecting a process. When you enter certain intellectual areas, there are rules to follow. Just as in chemistry it is pointless to try to derive a theory while ignoring atoms, in economics the only relevant means of comparing processes is money, because money alone is intended to balance all the different factors. It may not be perfect, e.g. environmentalists will tell you that no proper cost is placed on environmental protection, but it is all we have. To illustrate why energy alone is not enough: suppose you wanted to go somewhere and someone offered you a megajoule; would you prefer a container of gasoline, or a mass of water at 35 degrees Celsius?
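The gasoline-versus-warm-water comparison can be made concrete. Both parcels below hold exactly one megajoule; the point is how differently useful they are. This is a sketch: the energy density of gasoline (~34 MJ/L) and the specific heat of water are standard textbook figures, and the 20 °C ambient temperature is an assumption of mine, not from the post.

```python
# One megajoule, delivered in two very different forms.
MJ = 1.0e6  # joules

# Gasoline: ~34.2 MJ per litre (typical volumetric energy density).
gasoline_litres = MJ / 34.2e6   # a few tablespoons of fuel

# Warm water: sensible heat above an assumed 20 C ambient.
c_water = 4186.0                # J/(kg*K), specific heat of liquid water
delta_T = 35.0 - 20.0           # only 15 K above ambient
water_kg = MJ / (c_water * delta_T)

print(f"1 MJ as gasoline:   {gasoline_litres * 1000:.0f} mL")
print(f"1 MJ as 35 C water: {water_kg:.1f} kg")
```

Roughly 29 mL of fuel versus some 16 kg of barely warm water: the same energy balance, but only one of them will move a vehicle, which is exactly why an energy balance alone cannot rank processes.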
 
When forming a theory to address a commercial issue, the first question is, what is the size of the market? According to Wikipedia, world oil production is 87.5 million barrels a day (13.9 billion litres per day). To put that into perspective, I once saw a proposal to build a small demonstration plant that would convert 35 t/day of cellulosic material into biofuel. If successful, it might have produced up to 11,000 litres per day. Yes, it is a small plant (designed for a specific source of raw material) and you can imagine increasing its size, but whatever factor you multiply by, this is going to have to be repeated an enormous number of times. It is most unlikely that anybody can come up with a single raw material to supply this volume. This brings a first conclusion: there is no magic bullet. If we are to solve this problem, contributions will have to come from a number of different sources. Accordingly, we are not looking for a unique plan.
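The scale argument above is easy to check with back-of-the-envelope arithmetic, using the production figure and the demonstration-plant output quoted in the paragraph (a sketch; the barrel-to-litre conversion is the standard US oil barrel of ~159 L):

```python
# Back-of-the-envelope check of the scale argument.
LITRES_PER_BARREL = 158.987          # US oil barrel

oil_barrels_per_day = 87.5e6         # world production (figure quoted above)
oil_litres_per_day = oil_barrels_per_day * LITRES_PER_BARREL

plant_output_litres_per_day = 11_000  # the demonstration plant quoted above
plants_needed = oil_litres_per_day / plant_output_litres_per_day

print(f"World oil production: {oil_litres_per_day / 1e9:.1f} billion L/day")
print(f"Plants of that size needed to match it: {plants_needed:,.0f}")
```

The conversion reproduces the 13.9 billion L/day figure, and matching it would take on the order of 1.3 million plants of that size, which is the point: no single raw material or single plant design is going to supply this volume.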
 
An immediate second conclusion is that there is no overnight solution. We have built up a system over a century and a half on the assumption that oil will be cheap and widely available, and that has led to fixed infrastructure that cannot be replaced overnight. Fuels have evolved from simple distillates to high-performance fuels such as JP-10, whose major component appears to be tetrahydrodicyclopentadiene. The very large modern oil refineries are extremely complicated systems built on experience accumulated over that period. New processes have to start smaller. Furthermore, it will take at least a decade, and probably longer, even to get a new process to a successful demonstration plant.
 
So, what processes might work? It may come as no surprise to hear that several quite promising processes were developed, at least partially, during the previous oil crisis, then abandoned once the price of oil fell. Over the next few weeks, interspersed with the other subjects, I shall give my views on what should be done. In the meantime, as a possible further challenge: how many of those earlier proposed processes are you aware of? My guess is that most of the information gained then is now lost, other than what resides in the heads of retired scientists and engineers. This raises another very important point of economic theory: there is an enormous amount of work to be done to solve this fuel problem, and the economic system appears to be creaking under government debts incurred in times of unparalleled prosperity, so can we afford to waste what in principle we already have?
Posted by Ian Miller on Jul 23, 2011 3:10 AM BST