Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


Latest Posts

I recently attended a talk where computations on some quite complicated molecules gave excellent agreement with observation. What concerned me was that the biggest single term was usually the energy required to provide compliance with the Exclusion Principle. I emailed the speaker for an explanation of how this could arise; so far I have not received a response, so if anyone reading this wishes to explain, I would be very interested.
 
In my interpretation of quantum mechanics, compliance with the Exclusion Principle should involve zero energy change. I assume the wave is real, and that a stationary state arises when the wave function is single-valued, i.e. as the wave completes any number of periods it returns to the same value, which means that value, and whatever it represents, does not move. Accordingly, following any number of periods, the particle generating such a wave shows zero expectation change of position at that value of the wave function, hence it has shown zero net acceleration between quanta of action. This is the only condition consistent with Maxwell’s electrodynamics that permits the electron not to radiate energy.
 
If so, the Exclusion Principle arises as follows. The wave can only be single-valued when the action is quantized. For a single electron, such quantization requires a two-cycle period; for the 1s orbital there are no nodes, and two cycles are required to give a crest and a trough. For higher wave functions the same applies, although the argument is more complicated. Now, a fundamental property of waves is that two equivalent waves travelling in opposing directions form a stationary wave with half the wavelength. Adding a third electron, and its corresponding wave, cannot under any circumstances produce a stationary wave, in which case Maxwell’s electrodynamics ensures that state is totally unstable: it must either radiate or absorb electromagnetic energy. The Exclusion Principle follows, with, at this stage, no reference to magnetism. One important point, of course, is that the waves must have the correct phase relationship, and it is from this, together with the corresponding wave component due to magnetic interactions, that the required quantum number relationship arises.
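For readers who want the superposition step spelled out, it is just the standard trigonometric identity for two equal waves travelling in opposite directions (a textbook result, not anything specific to my interpretation):

$$\sin(kx - \omega t) + \sin(kx + \omega t) = 2\sin(kx)\cos(\omega t)$$

The right-hand side is a stationary pattern: its nodes at $kx = n\pi$ do not move, and they are spaced half a wavelength apart.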
 
What this claims is that the Exclusion Principle follows if the wave is real, if action is quantized, and if Maxwell’s equations apply. Whatever else, I shall back Maxwell’s equations almost beyond any other piece of physics. If so, the Exclusion Principle is NOT a piece of independent physics. The requirement to comply with the Exclusion Principle in forming a bond is that there must be a correct phase relationship between the waves of two unpaired electrons. But it is a property of waves that a phase shift does not involve a change of energy, in which case the energy term arising from the need to comply with the Exclusion Principle must be zero.
 
Either the above is wrong or the computations are wrong. Agreement with observation does not imply truth; the most successful theory ever in terms of getting correct answers over the longest period of time was the theory of Claudius Ptolemy, and that was just plain wrong. So you see why I am unimpressed by these computations.
Posted by Ian Miller on Jun 22, 2012 3:00 AM BST
One problem I am trying to allude to in these blogs is that we have to avoid wasting effort. We have only so much money to devote to developing new technology, and once something is completed, it should not be left in a form where the knowledge decays. As another example, there is a means of making ethanol through simple pyrolysis, and it is infuriating that while a considerable amount of money was invested in developing it, I have no idea how well it would work in practice. The problem is that it was developed towards the end of the last energy crisis, the scientists will have retired or will be dead, and there is insufficient information available. Results were written up in scientific papers; however, the practical experience is lost, as are the answers to questions seemingly omitted from the papers. The omissions were not important at the scale of operations, but they could become critical on scale-up.
 
The concept is that when cellulose is pyrolysed, the first product formed is mainly levoglucosan, or 1,6-anhydroglucose. This can be readily prepared on the laboratory scale through simple vacuum pyrolysis, and the conditions for doing this were extensively studied by Fred Shafizadeh. The key problem is to get the levoglucosan out of the biomass before it can further react, so Shafizadeh heated small volumes of finely divided cellulose under vacuum. That works well, but it is not that easy to scale up.
 
Claims for the solution to that problem came from the old Soviet Union. Two separate schools heated biomass chips (presumably reasonably finely divided) in fast-moving steam at about 350 degrees C and about 6 bar. There are then two options. The steam can be cooled and depressurized, in which case the sugars condense out, and the steam is then repressurized and reheated; this is the most energy efficient. Alternatively, the water can be condensed as well, to give a dilute solution of sugars. There can also be a process that lies in between: some of the steam is condensed to give a stronger solution of sugars. Such processes, provided they are properly engineered to recycle heat, should be reasonably energy efficient, if they work according to plan. From what I can gather, this system was engineered up to several kg/hr of continuous operation, but details are hard to come by. Basically, if someone wanted to repeat this, they would have to start from scratch as far as engineering is concerned. There is also no real information available relating to the lignin fragmentation products that presumably came over with the sugars. This could be important because phenolics should cause enzymes to cease functioning.
 
Is this a genuine possible answer to the problem of making biofuels? I have no idea. It passes the test of “looking reasonable”, it should be a net energy producer because the lignin alone should power the system, but there are two problem areas: can the system be engineered to run reliably, and are there too many impurities in the product to produce useful fuel? What is a real shame is that the original scientists and engineers would be able to answer those questions, at least at the level they operated on, but the information is lost. That sort of tragedy should not be repeated. 
Posted by Ian Miller on Jun 14, 2012 10:27 PM BST
Results from Dawn at Vesta were published in Science, but they were not especially informative, at least for theories of planetary accretion. There were data on the surface composition, which confirmed the proposition that certain meteorites came from Vesta and that Vesta differentiated. Somewhat confusingly, the density was determined to be 3.456 g/cm3, which is not that much different from orthopyroxene, and this makes it difficult to see how the large core could be iron. It was argued that the outer layers must be very porous, suggestive of volatiles having been emitted. In this context, Vesta has strange stripes similar to those of Phobos, which also appears to have been struck on the end by an impact almost large enough to disrupt it. These results do not really add much to how Vesta formed, largely because they mainly confirm what was already “known”, and also because Vesta is very atypical for an asteroid, so it may have formed elsewhere and migrated.
 
Soon Dawn will investigate Ceres. What do I expect there? While I made over 80 predictions in the ebook, I left out Ceres. If we assume, from its spherical shape, it has differentiated, my theory of its formation would suggest that the surface will probably be largely ice covered. The ice may be rich in organic material, or it may have a lot of blackish carbonaceous material, or it may be reasonably white, the variation depending on how much heat was generated in its interior for how long, which is unknowable.
 
Two theories were espoused that largely contradict my work. One argued that carbonaceous chondrites differ from ordinary chondrites because they formed earlier (Icarus, 220, 162). I argue that they are different because they formed at a different temperature in a different place. In this context, my criticism of the “earlier” theory is: why did these bodies not continue accreting? The second paper (Icarus, 220, 144) proposed that the great water flows on Mars at about 3.8 Gy BP were caused by the Argyre impactor heating the surface, giving a greenhouse atmosphere based on 6.5 bar water. Such water would not lie on the surface, and the erosion features would be due to water flowing as the heating collapsed. It is unclear to me why the water did not snow out rather than rain out, particularly since the Martian winter is twice as long as ours, and the south polar region is dark for this time. There is reasonably clear evidence that fluid flow on Mars occurred intermittently over at least a 200 My period, and I for one cannot see how one impactor could manage that.
 
Three papers were in accord with my theory, although two would be in accord with most theories. The UV spectrum of Enceladus showed ammonia hydrate (Icarus, 220, 29). This is hardly surprising, though, and while required for my theory, it is expected from a number of others. The second proposed that the lunar basalts arose from very deep melting, in the presence of volatiles, and further, that these are more localized to the nearside. This is consistent with the Theia impactor not being fully devolatilized, although again that is not specific to my theory. Finally, a paper in Nature (485, 490) proposed that there was a great geological discontinuity on Earth at about 2.5 Gy BP, and that prior to this volcanic gases were more reduced. Standard theory has argued that such gases were always oxidized; my theory argues that all volatiles on the rocky planets were initially reduced, because they had to be accreted as solids and became volatile by reaction with water (which is why Venus has more gas than Earth but essentially no water: Venus accreted less water because it was hotter, and it used up almost all of it in generating the atmosphere and subsequently oxidising it).
 
Posted by Ian Miller on Jun 2, 2012 5:13 AM BST
Pyrolysis of biomass is one of the oldest technologies, originally used to make charcoal for iron smelting; more recently, however, it has been advocated as a means of making liquid fuels. I find this difficult to come to grips with, and I question, from a theoretical point of view, whether this process is worth persisting with or whether we should move on to something else. Are we wasting valuable money pursuing a lost cause? (Gasification as an objective is excluded from this discussion.)
 
Saleh et al. (Energy and Fuels, 23, 3767 (2009)) have given an outline of how to optimize pyrolysis of biomass. The conditions required to optimize liquid products are a very high heating rate, finely ground feed and a temperature of about 500 degrees C. The yield of liquids can be as high as 44%, although these may have up to 40% water in them. The rest is char and gas. The problem then is, what to do with the liquids? The major constituents are often formic and acetic acids, 1-hydroxy-2-propanone, furfural and a range of phenols. There are also at least a hundred minor components, some of which reach a few per cent. Saleh et al. suggest that this oil can be converted to synthesis gas for Fischer-Tropsch synthesis, but this seems somewhat wasteful: if synthesis gas were required, why not gasify the biomass as a whole? Similarly, the oils can be upgraded through hydrogenation technology (at the cost of losing the small carbon fragments), but this becomes yet a further step, and if you are going to do that, why not hydrogenate the biomass in the first place? Some suggest using it as low-grade heating oil, but if you merely want to heat something, it might be more efficient to burn the original biomass. One thing that should not be done is to store it for a significant time, because most pyrolysis oils appear to be somewhat reactive, and over time they darken and become much more viscous.
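To put the quoted yields in perspective, here is the arithmetic as a minimal Python sketch; the 44% and 40% figures are the ones cited above, and the result is only indicative since actual yields depend heavily on feed and reactor.

```python
# Rough arithmetic on the figures quoted above: net organic liquid per
# tonne of dry biomass fed, assuming 44% total liquid yield of which up
# to 40% is water. Illustrative only; real yields vary with feed and reactor.

feed_dry_t = 1.0           # tonne of dry biomass
liquid_yield = 0.44        # total "bio-oil" fraction
water_in_liquid = 0.40     # upper estimate of the water content of that liquid

bio_oil_t = feed_dry_t * liquid_yield
organics_t = bio_oil_t * (1.0 - water_in_liquid)

print(f"Total liquid: {bio_oil_t:.2f} t; organic fraction: {organics_t:.2f} t")
# -> roughly 0.26 t of organic liquid per tonne of dry feed;
#    the balance is char, gas and water.
```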
 
The composition is highly dependent on how the pyrolysis is done, and on the raw material. The reason is that while the initial pyrolysis steps are quite simple, the products react further. Lignins are based on the free-radical condensation of phenols substituted at the 4-position with CH=CH-CH2-OH groups; the phenol may have 0, 1 or 2 methoxyl groups adjacent to the phenolic group, depending on the nature of the plant. During pyrolysis, lignins tend to break either at the 4-position or one carbon further along, so the products tend to be methoxylated phenols, 4-methyl methoxylated phenols, and aldehydes such as vanillin, together with a variety of other 'bits and pieces' from the alkyl group. As far as I am aware, there is no preferred procedure that makes more useful products. Pieces not attached to phenols are effectively lost.
 
Cellulose first decomposes almost entirely to levoglucosan (1,6-anhydroglucose), which is volatile at the decomposition temperature. However, it is also highly reactive, and if it cannot be removed from the solid in a very short time, it reacts with something else and forms a cascade of materials, most of which would generally be described as 'tars'. The same can be said for a number of other products of biomass pyrolysis, which essentially requires the material to be very much size-reduced (which is expensive) and the heat transfer rate to be very high.
 
To summarize, this technology converts about a third to a half of the biomass into a liquid with a wide range of reactive components that have little direct use. Superficially, pursuing this technology would seem to be something of a last resort, yet there is evidence that a lot of money is being spent attempting to make it work. There appear to be more papers on this topic in the energy-related journals that I read than on any other. Why? Someone is just plain wrong; the question is, who?
Posted by Ian Miller on May 22, 2012 4:19 AM BST
There are three predominant interpretations of quantum mechanics: in the Copenhagen Interpretation the event (say, the path of a particle following diffraction) is determined probabilistically by the act of observation; the Multiverse interpretation has all probabilities eventuate somewhere; while the Pilot Wave has the event decided causally. One major failing of the Pilot Wave is that, in its current form, it is almost impossible to apply it to chemistry.
 
My interest in this problem started in my honours year, with a lecture on the hydrogen molecule. I stopped the lecture to point out that the Hamiltonian operator as presented led to the system becoming increasingly stable as the internuclear distance D diminished. The operator had no causal reason to diminish electron probability between the nuclei, and the lecturer could only agree that something was wrong. Shortly after, it occurred to me that the answer must lie in wave interference, so I tried a wave approach. Ten minutes later I had an analytical answer: 1/3 the energy of the hydrogen atom. (Look it up – it's not bad!)
 
Either the wave determines the motion of the particle or it does not. If it does not, what is it doing? And why do all quantum computations just happen to end up in agreement with the predictions made by assuming that it does? With that thought, I decided that, for me, the wave must determine the particle motion, but how? Conceptually, this is the Pilot Wave, but in detail, what I have come up with is somewhat different.
 
For the wave to act on the particle, they must interact. Now, assume the particle has a velocity v. What is the wave doing? With a little algebra, it is easy to show that the phase velocity (the velocity of, say, a wave crest) is given by E/p, where p is the momentum of the particle (by derivation). The problem then is, what is E? Some textbooks woodenly quote E = mc^2. This has the rather remarkable property that the phase velocity always exceeds the speed of light, and the stationary particle gives off waves at infinite velocity. Some say that this does not violate relativity because the wave carries no information. That is absolute nonsense: it defines the velocity of the particle. Worse than that, it defines an absolute velocity, which requires a fixed background as a reference, and that also violates the most fundamental principle of relativity. Heisenberg objected, and put E equal to the kinetic energy, which has the rather odd property of requiring the wave to travel at half the particle velocity, which makes it difficult to see how it can affect the particle.
 
Which gets to my version. I put E = mv^2, twice the kinetic energy, which means the wave must contain energy, and hence is real. The wave now travels at exactly the same velocity as the particle, and hence can affect it. Now, the wave function only works if it is in the form exp(2πiS/h). (A sine wave does not give you quantum mechanics.) My guess is that this puts the wave in an additional dimension, which is almost required for energy conservation, in which case quantum mechanics is the first actual evidence of the additional dimensions proposed by string theorists.
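For anyone who wants to check the arithmetic behind the three choices of E, here it is spelled out, using the non-relativistic momentum p = mv throughout (just the algebra implied above, nothing new):

$$v_{\text{phase}} = \frac{E}{p}, \qquad p = mv$$

$$E = mc^2 \;\Rightarrow\; v_{\text{phase}} = \frac{c^2}{v} > c; \qquad E = \tfrac{1}{2}mv^2 \;\Rightarrow\; v_{\text{phase}} = \frac{v}{2}; \qquad E = mv^2 \;\Rightarrow\; v_{\text{phase}} = v.$$

The first choice diverges as v goes to zero, the second gives a wave travelling at half the particle velocity, and only the third keeps wave and particle together.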
 
Of course, you won't believe that. So, how do you explain how the wave function makes the particles do what they do?
Posted by Ian Miller on May 9, 2012 12:01 AM BST
Just when I thought my sequence of blogs on making ethanol as a biofuel was complete, I found another option that involved two major projects in the US (Range Fuels and Enerkem) and, in my opinion, should never have been considered.  Both first made synthesis gas by gasifying biomass, a technology that is reasonably well-developed. As an example, in the mid 1970s Union Carbide operated a 200 t/day plant at South Charleston, West Virginia, although it is unclear how many continuous hours were operated. The Purox gas typically contained about 26% hydrogen, 40% CO and 23% CO2, and some source of additional hydrogen (and a significant additional cost) appears to be required.
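As a rough check on why extra hydrogen is needed, here is a minimal Python sketch using the Purox analysis quoted above and the usual stoichiometric-number criterion for methanol synthesis; the target value is the textbook figure and the calculation is back-of-envelope only.

```python
# Back-of-envelope check on the Purox gas analysis quoted above
# (mole fractions assumed: 26% H2, 40% CO, 23% CO2).
# Methanol synthesis:  CO + 2 H2 -> CH3OH   and   CO2 + 3 H2 -> CH3OH + H2O,
# so the stoichiometric number SN = (H2 - CO2) / (CO + CO2) should be about 2.

h2, co, co2 = 0.26, 0.40, 0.23

print(f"H2/CO ratio:           {h2 / co:.2f}   (about 2 wanted)")
print(f"Stoichiometric number: {(h2 - co2) / (co + co2):.2f}   (about 2 wanted)")
# Both come out far below 2, which is why additional hydrogen
# (and additional cost) appears unavoidable with this gas.
```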
 
So, assume you have syngas and want to make fuels. What next? Most analyses would select one of three obvious routes that are deployed commercially: Fischer-Tropsch (FT), methanol, and short alcohols, which is an abbreviated FT route. However, from what I can make out, these two ventures elected to make ethanol the hard way. They both started off by making methanol (and Enerkem apparently has four plants operating that are technical successes), and I love the next step, for personal reasons noted below. They proposed to react methanol with restricted amounts of CO to make acetic acid/methyl acetate. All acetic acid is then converted to methyl acetate in a further step. The methyl acetate is then hydrogenated to make a methanol/ethanol mix, from which the methanol is recycled and ethanol produced. After huge investments, neither venture could convince the market this is economical. While the jury is still out, it appears that these ventures will not produce ethanol.
 
My question is, why was this alternative route to ethanol not rejected on theoretical grounds before so much money was lost? Maybe I am perverse, but shouldn't a simple analysis have suggested "too many steps to compete with the single-step process"? In my opinion, hundreds of millions of dollars could have been saved by an hour of sound theoretical analysis. The reason I wrote my first ebook, and started this blog, is to try to convince people that theory is not confined to abstractions or marathon computations. Thinking first can be productive, and it is cheap!
 
Now, to round off, a little personal note to explain a comment above. When I was a first-year student, I undertook extra reading, and not the usual sort. Accordingly, when I got the question in a test, "Convert an alcohol to an acid with one extra carbon atom", I could not resist writing down "R-OH + CO -> product", and listed a catalyst, a temperature and a pressure. Yes, I know what they wanted, but I was strong on alternatives even then. This got zero marks, so I protested. "You can't do anything like that," they explained. Now that was not satisfactory. "According to Paul Karrer," I replied, "IG Farbenindustrie used this to make thousands of tons," then I added, "He is a reputable chemist, isn't he?" (He was one of the 1937 Nobel Prize winners in chemistry.) Rather grumpily, they had to agree he was, but they did not give me any marks because you could not do that in the lab, and when I looked as if I might question that, they added, "at least in glassware". That was not specified in the question, but I did not care. I had made a point.
Posted by Ian Miller on May 2, 2012 3:31 AM BST
You announce an alternative theory and further papers come forward. There is a natural tension: has your theory been falsified already? At the end of each month I shall report on progress of which I am aware, good, bad or indifferent. For April, the most relevant papers I came across were as follows.
 
Smith et al. (Science 336: 214-217): Mercury has a relatively high moment of inertia factor (0.353 ± 0.014). Since a uniform sphere has a factor of 0.4, this indicates that Mercury has a relatively thin crust. What was postulated was a silicate crust ca 300 km thick, 100 km of iron sulphide, then an iron-rich liquid core, possibly over a solid inner core, all of which was claimed to be consistent with Mercury having considerably more reduced components, similar to those of enstatite chondrites. How does that sit? I made a prediction that Mercury would have more reduced components, and specified phosphides, nitrides and carbides, which are found in enstatite chondrites. A higher sulphide content for Mercury had previously been proposed, and I never considered a separate layer of iron sulphide, because iron sulphide is usually considered to dissolve in an iron core.
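To show what the moment-of-inertia factor responds to, here is a minimal two-layer sketch in Python; the densities and core radius are invented round numbers purely for illustration, not the values derived from the mission data.

```python
# Moment-of-inertia factor I/(M R^2) for an idealized two-layer sphere.
# A uniform sphere gives 0.4; concentrating mass toward the centre lowers it.
# Densities (g/cm^3) and the core radius fraction are made-up round numbers.

def moi_factor(core_radius_frac, rho_core, rho_mantle):
    rc = core_radius_frac                                            # fraction of R
    mass = rho_core * rc**3 + rho_mantle * (1.0 - rc**3)             # / (4*pi/3 * R^3)
    inertia = 0.4 * (rho_core * rc**5 + rho_mantle * (1.0 - rc**5))  # / (4*pi/3 * R^5)
    return inertia / mass

print(f"Uniform sphere:   {moi_factor(0.0, 5.0, 5.0):.3f}")   # -> 0.400
print(f"Large dense core: {moi_factor(0.8, 7.0, 3.3):.3f}")   # -> ~0.35
```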
 
Kleine et al. (Geochim. Cosmochim. Acta 84: 186-203): a number of angrites (a small but diverse group of refractory mafic to ultramafic meteorites from a body that had differentiated) were shown to have originated from one body, and that body apparently differentiated twice, both events within 2 My of the formation of calcium-aluminium inclusions. Actual accretion of the parent body must have occurred within 1.5 My. This is harder to judge. I require the rocky planets other than Mercury to accrete between 1 and 2 My, with Mercury essentially complete after 1 My, although there would be numerous later crater-forming strikes. Since we do not know what the parent body was, standard theory might argue that it was one of the planetesimals, although standard theory has no mechanism by which planetesimals form; they are the assumed starting point.
 
Fastook et al. (Icarus 219: 25-40): evidence was found for subglacial meltwater channels in the south circumpolar Dorsa Argentea (Mars) dating from the Noachian/Hesperian. The authors state that these data require a local temperature in the range of -50 to -75 degrees C, that the data are consistent with basal melting but not "top-down" melting, and that this contradicts the concept of a warm and wet early Mars. My explanations for the Martian fluvial systems assumed temperatures never averaged much above -60 degrees C, and were probably generally about -80 degrees C, which equally contradicts the early "warm and wet" Mars. I also provided a mechanism for basal melting to explain the great chaotic flows, and the same mechanism would apply here, except that the fluid could escape, hence the channels.
 
Yaoling Niu (RSC Advances 2: 3587-3591): Zr-Hf and Nb-Ta are effectively elemental twins in their standard valence states (+4 and +5 respectively), and standard theory argues they should not separate during geological processes. The assumption also is that the Earth has always been basically "oxidized", as shown by the distribution of vanadium and chromium in various ancient and modern magmas. As Niu shows, perhaps they should behave similarly, but seemingly they do not. My proposed mechanism for Earth's formation requires the original material to be reduced, more like enstatite chondrites. These pairs do not have equivalent redox potentials, and hence would behave differently under certain reduced conditions.
 
Readers will have to form their own opinion as to the relative success of the theory, but I feel that so far it is very much still alive.
Posted by Ian Miller on Apr 26, 2012 3:26 AM BST
In previous blogs, I tried to outline some of the pros and cons of ethanol as a biofuel, with the fermentable sugars largely being provided either from crops or from lignocellulose by enzymatic hydrolysis. The difficulty with using lignocellulose is that lignin has evolved to protect cellulose from such enzymatic attack, with the result that the processing plant is very large, and there are massive volumes of water and wet spent biomass to process. Further, there is almost as much hemicellulose as cellulose, and because of the variety of linkages in hemicellulose, a set of enzymes is required, with a significant additional cost involved in supplying them. The alternative is acid hydrolysis, and for some reason this appears to have been discarded as an option. Is that premature?
 
The basic problem with acid hydrolysis is that employing dilute acid results in an unacceptably slow reaction unless heated, but glucose (the desired product) reacts with hot acid to produce hydroxymethyl furfural (HMF), which further reacts to produce levulinic acid plus dark polymeric material. The net result is that in the Madison process, which involves heating wood chips with dilute sulphuric acid, the recovered yield was about 35% of theoretical, which is clearly not good enough. Interestingly, however, there are two options that are seemingly not currently considered.
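For concreteness, the stoichiometry behind "35% of theoretical" can be put in a few lines of Python; the molecular weights are standard, and the 35% figure is the one quoted above.

```python
# Theoretical glucose yield from cellulose hydrolysis, and what a recovery
# of ~35% of theoretical (the Madison figure quoted above) corresponds to.

MW_ANHYDROGLUCOSE = 162.14   # cellulose repeat unit, C6H10O5
MW_GLUCOSE = 180.16          # C6H12O6 (one water added per unit on hydrolysis)

theoretical = MW_GLUCOSE / MW_ANHYDROGLUCOSE      # t glucose per t cellulose
madison = 0.35 * theoretical

print(f"Theoretical yield:     {theoretical:.2f} t glucose per t cellulose")
print(f"At 35% of theoretical: {madison:.2f} t glucose per t cellulose")
```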
 
The first is flash hydrolysis. As shown by Chen and Grethlein (Biomass 23: 319-326, 1990), if the biomass is heated for a few seconds at over 200 degrees C, conditions they argue can reasonably be reached in a specially designed cyclonic reactor, then yields of up to 87% are claimed. The concept is, of course, that you make the glucose and quench it before the conversion to HMF can get underway. Actually, that in turn may not matter that much, because if the HMF can be recovered, it can also be converted into useful chemicals/fuel. These conditions will also hydrolyse hemicellulose, and the pentoses, which do not ferment to ethanol so easily, should be recoverable as furfural, which is valuable in its own right. One problem might involve size reduction; the acid has to get at the cellulose to hydrolyse it, seconds do not permit much diffusion, so the interior of chips may not be reached. Size reduction is possible, but the work done doing it may be too costly.
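The "make it and quench it" argument can be illustrated with a deliberately oversimplified consecutive-reaction sketch in Python; the rate constants are invented purely for illustration, and the real kinetics are far more complicated.

```python
# Oversimplified picture of flash hydrolysis as consecutive first-order steps,
# cellulose -> glucose -> HMF/tars, to show why the glucose has to be quenched
# quickly. The rate constants are hypothetical illustration values only.

import math

K1, K2 = 2.0, 1.0   # invented rate constants, 1/s

def glucose_fraction(t):
    """Fraction of the initial cellulose present as glucose at time t (A -> B -> C)."""
    return K1 / (K2 - K1) * (math.exp(-K1 * t) - math.exp(-K2 * t))

for t in (0.2, 0.7, 3.0, 10.0):
    print(f"t = {t:5.1f} s   glucose fraction = {glucose_fraction(t):.2f}")
# The glucose transient peaks at short times and then decays away to HMF/tars,
# hence the emphasis on fast heating, fine particles and rapid quenching.
```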
 
The second option is hydrolysis with chilled 40% hydrochloric acid. Provided the concentration of acid is high enough, the cellulose simply dissolves. The cellulose converts smoothly to glucose, and further reaction is apparently trivial. The hydrochloric acid is removed by vacuum distillation and recycled. You may be skeptical; however, this is one of the very few processes ever deployed at scale: the Germans made ethanol this way during World War II. As far as can be determined, it worked well even on reasonably sized chips, and there was only one problem: corrosion. However, while biomass processing may not have advanced much since then, materials of construction have become far more advanced.
 
Would either of these processes solve any problem? I do not know, but what concerns me is that there are no readily available data that would permit these processes either to be eliminated from consideration or to be provisionally pursued so that the currently unanswered questions can be addressed. Sending public funds after random guesses seems wrong to me. Why not analyse the problem and publish the findings, so that funding can be deployed on a more rational basis? A very large amount of money must be invested to compensate for declining oil supplies. Given the current financial constraints, surely it is better not to waste it, and any progress that can be made from theory will save a lot of wastage. Theory is cheap – why not use it?
Posted by Ian Miller on Apr 19, 2012 3:17 AM BST
Why is our solar system different from most of the others we see? How common are planets like Earth that have life on them? Is there life under the ice of Europa? Why will alien life have similar systems to ours? How did we get homochirality, and more to the point, why? I am uploading an ebook with the above title to Amazon that provides answers, and once they put it up, it will be a free download for five days, which is why I am mentioning it here. (The answers to these questions are: accretion disks last between 1 and 10 My after stellar accretion, and our system is required to be one of the 50% where the disk only lasted ca 1 My; some significant fraction of that 50% around stars of similar size to Sol, because life is a consequence of planetary formation; life under Europa is impossible for several reasons, an absence of nitrogen and no mechanism to make phosphate esters being two of them; RNA is the only feasible polymer that can self-reproduce and form abiogenically in dilute aqueous solution; homochirality is a requirement for life and it selects itself, but why is too complicated for such a quick comment.)
 
Why am I putting up an alternative theory as an ebook, instead of through peer-reviewed scientific papers? There are several reasons, including:
(a) The theory is largely chemical, which explains why the various planets and minor bodies have different compositions, but only a few physics journals will publish theories on planetary formation.
(b) Such journals require computer modelling. I can't do that, and in any case the growth of a planet with respect to time requires four terms, none of which have clearly defined values.
(c) Scientific papers really should assert one major point. No single point here is convincing on its own, but my argument is that the complete set is.
(d) I want to get it read. So far I have published what I believe are four very significant advances (if correct) in peer-reviewed papers, and I doubt anybody reading this could name one of them.
 
But surely, you say, standard theory is adequate? Then consider this. In standard theory you need a relatively massive amount of solids to form planetesimals (with no known mechanism to form them), which then, in our system, take about 15 My to get to the point where Jupiter can start massive gas accretion. Because the solids become more dilute with distance, the problem gets much worse further out. (Originally Safronov required 10^11 yr to form Neptune. Subsequent models have greatly reduced this, but it is not entirely clear to me how, or, putting it another way, why was Safronov so wrong?) However, there is a planet, LkCa 15b, that is about three times Jupiter's distance from a star slightly smaller than Sol; it is about 5 times Jupiter's mass, and LkCa 15 is a ca 2 My old star. (Which is why I believe our system removed its accretion disk early.) If you are interested, the download at http://www.amazon.com/dp/B007T0QE6I is free from April 12-16, which is why I am putting it on this post. I know there should be no commercial point to these posts, but free seems to me to be different.
Posted by Ian Miller on Apr 12, 2012 12:58 AM BST
In my previous two postings, I praised David MacKay's approach to energy analysis in his book, "Sustainable Energy — without the hot air", because he used numbers to put everything into perspective. The problem with numbers is that while they can put the problem into perspective when they are used correctly, the opposite happens when they are not. As the old computer saying goes, "garbage in, garbage out". In general, numbers showing the dependency of one variable (that being measured) on the change of another effectively represent solutions to partial differential equations. In the lab, this is not a problem; most chemists will have done this, assiduously keeping everything except what we wish to relate constant (or at least trying to). Unfortunately, for living systems, ecological systems, economic systems, and a number of observational systems, such a separation of the variables can't be done, so elements of significant unreliability creep in.
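The point can be stated in one line; this is just the standard total-differential identity, included here as a reminder rather than anything new:

$$dY = \left(\frac{\partial Y}{\partial x}\right)_{z} dx + \left(\frac{\partial Y}{\partial z}\right)_{x} dz$$

A measured slope dY/dx only equals the partial derivative (∂Y/∂x)_z when z is genuinely held constant; in field or economic data z drifts, and the measured slope silently carries the second term along with it.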
 
There then arises a consequential problem: you have the numbers, but what do they mean? You believe you have shown something, so there is a temptation to take this to an extreme. Worse, in most analyses of complicated topics there is little option but to accept somebody else's numbers for some aspect. Do they really mean what you think they mean, or, equally importantly but more difficult to unravel, what the supplier of the numbers thought they meant? In my opinion we need a forum where misinterpretations can be discussed, and if the objections are valid, the conclusions corrected. I do not believe that any single person will have a broad enough and deep enough knowledge of some multi-disciplined problems to get everything right. We need the expertise out there to correct the flaws.
 
An example. In an earlier post I quoted from MacKay as follows: the calculated power available per unit area for pond algae is 4 W/m2 (if fed with CO2), which is an order of magnitude higher than for most land-based biomass; but he then adds that the productivity drops 100-fold without adding CO2, and he notes that to use the sea, country-sized areas would be required. Does anything about that strike you as odd?
 
My first reaction, perhaps afflicted by living in New Zealand, is that even if country-sized areas of sea are required, so what? Once you fly over the Pacific, it becomes obvious that whatever else we are deficient in, surface area of sea is not one of them. Certainly there are other problems using it, and possibly these will be overwhelming, but let us do some research and find out, and not simply write the possibility off by assumption.
 
My second reaction was to look in disbelief at the two orders of magnitude loss of productivity from not pumping carbon dioxide into the water. The limiting feature of carbon dioxide on photosynthesis will be its solubility in water. However, there are many more factors involved in algal growth. For example, most people have heard of algal blooms. Given the second law of thermodynamics, do we really believe there were suddenly self-assembled massive increases of carbon dioxide that led to them? Or do we accept that the masses of algae in some lakes are actually there because agricultural run-off has delivered the additional nutrients to make this possible? Algal growth is usually limited by nutrients, not by carbon dioxide availability. As an example, one of my first projects in the commercial area was to look at resources for making agar. In the Manukau Harbour, Gracilaria chilensis grows at a rate of a few t/ha/y, but we found one pond where, due to the construction of a road and the dumping of gravel, productivity went up to about 110 t/ha/y. What was different? Gravel for the plants to attach a holdfast to, and the pond being close to the effluent from a sewage treatment plant. Such plants usually discharge high levels of nitrogen compounds and phosphate, which are ideal fertilizers. There was no increase in carbon dioxide levels at this site. Farmers, after millennia of learning, get far better yields than are found in the wild; why should this not happen in the sea? Why do we think we don't have to put any work into making improvements?
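To connect these areal yields back to MacKay's W/m2 numbers, here is a rough Python conversion; the energy content per tonne and the assumption that the tonnages are dry matter are both illustrative guesses (seaweed yields are often quoted wet, which would lower the figures considerably).

```python
# Rough conversion from areal biomass productivity (t/ha/y) to MacKay-style
# power per unit area (W/m^2). The heating value is a typical dry-biomass
# figure and the tonnages are treated as dry matter; both are assumptions
# for illustration, not measured values for Gracilaria.

SECONDS_PER_YEAR = 3.156e7
ENERGY_PER_DRY_TONNE_J = 17e9      # ~17 GJ per dry tonne (assumed)
M2_PER_HA = 1e4

def watts_per_m2(tonnes_per_ha_per_year):
    joules_per_m2_per_year = tonnes_per_ha_per_year * ENERGY_PER_DRY_TONNE_J / M2_PER_HA
    return joules_per_m2_per_year / SECONDS_PER_YEAR

for y in (4, 110):
    print(f"{y:4d} t/ha/y  ->  {watts_per_m2(y):5.2f} W/m^2")
# A few t/ha/y corresponds to a few tenths of a W/m^2; ~110 t/ha/y would be
# several W/m^2, comparable to the CO2-fed pond figure, if it were dry matter.
```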
 
Yes, I think numbers are important, but only if the appropriate ones are used correctly and their limitations accepted.
Posted by Ian Miller on Mar 30, 2012 3:32 AM BST