Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?



A recent opinion piece in Chemistry World focused on the practicality of turning ideas into useful technologies. One of the arguments seemed to be that curiosity-driven science was giving the world a false sense of what could be achieved, and worse, was taking funding away from where it could be more usefully spent. As usual, there are several ways of viewing this. First, consider why scientists make some of these outrageous claims. In my view, the answer is simple. It is not because the scientists have lost track of thermodynamics, as implied in the article (although I guess some might), and it is not because they are snake-oil merchants. My guess is that the biggest reason is dressing up work to satisfy the providers of funding. Let me confess to one example from my own past.
 
My very first excursion into "the origin of life" issue came in the 1970s. I was supposed to be working on energy research, but funding was extremely tight, and energy research needs expensive equipment that we did not have, so there was scope to do experiments that did not cost much. Gerald Smith and I had seen that the theory of the initial atmospheres required the atmosphere to be carbon dioxide, which is thermodynamically very bad for biogenesis in terms of energy. Carbon dioxide is what life gets rid of at the bottom of the energy chain, and it is only returned to life by photosynthesis. So, if the geologists were correct, how did the carbon-based precursors of life form from such an unpromising start?
 
Our idea was that the carbon dioxide could still be reduced through photochemistry. Water and carbon dioxide attack olivine, and somewhat more slowly pyroxenes, to dissolve magnesium and ferrous ions, and the concept was that Fe(II) and light would reduce CO2 to formic acid and thence to formaldehyde, whereupon the magnesium carbonate could help catalyse Butlerov-type reactions. So, we did some photochemistry, and persuaded ourselves that we were reducing CO2. It was then that a thought struck me. The Fe(II) must end up as Fe(III), and what would Fe(III) do to organic materials? The answer was reasonably obvious: try some and find out. So we irradiated some dilute sugar solution with Fe(III), and the carbohydrates simply fell to pieces, with an action spectrum corresponding to the spectrum of the iron complex. Many other potential biochemical precursors suffered the same fate. So, we wrote up the results, but then came the question, how were we going to justify this work? Well, since energy was the desired activity, we wrote a little comment at the bottom of the paper about the potential of photochemical fuel cells.
 
Did we think this was realistic? No, we did not. Did we think there was any theoretical possibility? Yes; while outrageously unlikely, it remained possible. Did it satisfy the keepers of returns to funding sources? Yes, because they never read past the keywords. You may say there was a little duplicity there, but this work cost very little and it did not distract us from doing anything else. We used equipment that otherwise would have been doing nothing, and the only real costs were trivial amounts of chemicals and the time spent writing the paper, because that was a real cost. Was the result meaningful? I leave that to you to decide, but for me it was, because it set me off realizing that the standard theory of atmospheric formation cannot be right. The carbon source for life could not initially have been carbon dioxide, because in getting to reduced carbon from the most available source in the oceans, a much worse agent from the point of view of biogenesis was formed. Had we been able to show how CO2 could be the carbon source for biogenesis, that would have been interesting, but just because you fail in the primary objective, it does not follow that the time was wasted. The record of the effects of a failed idea is just as valuable.
Posted by Ian Miller on Nov 10, 2013 10:51 PM GMT
The first round of results came in from Curiosity at Gale crater, and I found the results to be both comforting and disappointing. The composition of the rocks, with one exception, and the composition of the dust were very similar to what had been found elsewhere on Mars. We now know the results are more general, but they are not exactly exciting. Dust was heated to 835 °C and a range of volatiles came off, and there was, once again, evidence of some carbonaceous matter, but the products obtained (SO2, CO2, O2, HCN, H2S, methyl chloride, dichloromethane, chloroform, acetone, acetonitrile, benzene, toluene and a number of others) were almost certainly pyrolysis products.
 
An interesting paper (Nature Geosci. doi:10.1038/ngeo1930) found that when ices similar to those in comets were subjected to high velocity impacts, several amino acids were produced. However, some were amino acids such as α-aminoisobutyric acid and isovaline, which are not used in proteins, and the question is, why not? One reason may be that our amino acid resource did not come from such comets.
 
A circumstellar disk was identified around a white dwarf, and the disk was considered to have arisen from a rocky minor planet (Science 342: 218–220). There was an excess of oxygen present compared with the metals and silicates, and a lack of carbon, which is consistent with the parent body having comprised 26% water by mass. This was interpreted as confirming that water-bearing planetesimals exist around A- and F-type stars that end their lives as white dwarfs. Of particular interest was the lack of carbon. What sort of body could the disk have come from? I have seen suggestions that it would be a body like Ceres, in which case my proposed mechanism for the formation of minor planets would not be correct (because of the lack of carbon), but another option might be something that accreted in the Jovian zone, where I argue carbon is not accreted significantly.
 
Finally, Curiosity made a specific search for methane in the Martian atmosphere and set an upper limit of 1.3 ppbv, which suggests that the methane previously seen on Mars did not come from methanogenic microbial activity, but rather from extraplanetary or geological sources. The latter fits nicely with my proposed mechanism for the formation of Mars.
Posted by Ian Miller on Nov 4, 2013 1:35 AM GMT
The prize appears to have been given for work that leads to the modelling of how enzymes work. If I follow the information I have seen correctly, the modelling involves three different levels. The innermost site of reactivity is given a quantum mechanical evaluation of the reaction site and its reactivity. Outside this, where the protein strands fold and interact, the situation is simplified and treated with (comparatively) simple classical physics, while beyond that there is a further simplification in which the surroundings are treated simply as a dielectric medium.
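As a rough schematic of that three-level decomposition, here is a sketch in Python. Every function body is a placeholder standing in for a real quantum or force-field calculation; this is my own illustration of the layering, not the laureates' actual methods.

# Schematic three-level multiscale energy: a quantum mechanical core,
# a classical shell, and a dielectric continuum outside.
# All terms here are stand-ins, not a real QM/MM implementation.

def qm_energy(core_atoms):
    # stand-in for a quantum mechanical treatment of the reactive site
    return sum(a["e_qm"] for a in core_atoms)

def mm_energy(shell_atoms):
    # stand-in for a classical force-field treatment of the folded protein
    return sum(a["e_mm"] for a in shell_atoms)

def dielectric_energy(charge, radius, eps=78.5):
    # Born-style term: the outer region treated as a dielectric medium
    # (atomic units; water's relative permittivity assumed)
    return -0.5 * charge**2 * (1.0 - 1.0/eps) / radius

def total_energy(core, shell, charge, radius):
    # the essence of the scheme is the sum of the three levels; real codes
    # also need careful coupling terms at each boundary
    return qm_energy(core) + mm_energy(shell) + dielectric_energy(charge, radius)

core = [{"e_qm": -1.2}, {"e_qm": -0.8}]   # invented numbers, for illustration
shell = [{"e_mm": -0.1}] * 50
print(total_energy(core, shell, charge=1.0, radius=10.0))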
 
All of that seems eminently sensible, and there is little doubt that, even with such simplifications, some serious work has been done. However, one thing concerns me: until this award, I was totally unaware of it. Yes, this might indicate a lack of effort on my part, but in my defence, there is an enormous amount of information available, and for matters outside my immediate research interests I have to rely on more general articles. Which gets me to the point: assuming this work has been successful, it is obviously important, so why has more not been made of it? Again, perhaps this illustrates a fault on my part, but again I feel there is more need to promote important work.
 
I guess the final point I would like to make is: could someone highlight the principles that this modelling work has uncovered? The general chemist has little interest in wading through computations of the various options open to so complex a molecule as an enzyme, but if some general principles have been uncovered, could they not be better publicized? After all, they may have more general applicability.
Posted by Ian Miller on Oct 28, 2013 5:22 AM GMT
There was a recent comment on one of my posts regarding the formation of rocky planets, so I thought I should outline how I think the rocky planets formed, and why. The standard theory involves only physical forces: dust accreted to planetesimals, these collided to form embryos (Mars-sized bodies), and the embryos in turn collided to form planets. First, why do I think that is wrong? For me, it is difficult to see how the planetesimals form by simple collision of dust, and it is even harder to see how they stay together. One route might be through melting due to radioactivity, but in that case one would need very recently formed supernova debris to get sufficient radioactivity. Then, as the objects get bigger, collisions have greater relative velocities, which means much greater kinetic energy in impacts, and because everything is further apart, collisions become less probable and everything takes too long. The models of Moon formation generally lead to the conclusion that such massive impacts cause a massive loss of material.
 
The difference between the standard theory and mine is that I think chemistry is involved. There are two stages for rocky planets. The first is during the accretion of the star, when temperatures near the star are raised significantly. Once temperatures exceed about 1200 °C, some silicates become semi-molten and sticky, and this leads to the accretion of lumps. At 1538 °C iron melts, and hence lumps of iron form, while around 1500–1600 °C calcium aluminosilicates form separate molten phases, although at about 1300 °C a calcium silicate forms a separate phase. (The separation of phases is enhanced by polymerization.) Material at 1 AU, say, reaches about 1550–1600 °C, while near Mars it reaches something like 1300 °C. Of particular relevance are the calcium aluminosilicates, as these form a range of materials that act as hydraulic cements. Also, the closer the material gets to the star, the hotter and more concentrated it gets, so bigger lumps of material form. One possibility is that Mercury essentially formed from one such accreted lump that scavenged up local lumps. Another important feature is that within this temperature range significant other chemistry occurred, e.g. the formation of carbides, carbon, nitrides, cyanides, cyanamides, silicides, phosphides, etc.
 
When the disk cooled down, collisions between bodies formed dust, while some bodies would come together. Dust would form preferentially from the more brittle solids, which would tend to be the aluminosilicates, and when such dust accreted onto other bodies, water from the now cool disk would set the cement and make a solid body that would grow simply by accreting more dust and small bodies. Because there is a gradual movement of dust and gas towards the star, there would be a steady supply of such feed, and the bodies would grow at a rate proportional to their cross-section. Eventually, the bodies would be big enough to attract other larger bodies gravitationally; however, the important point is that, provided initiation is difficult, runaway growth of one body in a zone would predominate (see the sketch below). Earth grows to be the biggest because it is in the zone most suitable for forming and setting cement, and because the iron bodies are eminently suitable for forming dust. The atmosphere and biochemical precursors form because the water of accretion reacts within the planet to form a range of chemicals from the nitrides, phosphides, carbides, etc. What is relevant here is high-pressure organic chemistry, which again is somewhat under-studied.
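As a toy illustration of why the first body to initiate dominates its zone, here is a minimal sketch. It assumes, as above, a growth rate proportional to cross-section, i.e. dm/dt = k·m^(2/3); the rate constant, masses and times are invented purely for illustration.

def grow(m0, t_start, k=1.0, t_end=10.0, dt=1e-3):
    # forward-Euler integration of dm/dt = k * m**(2/3),
    # i.e. accretion proportional to cross-sectional area (~ m^(2/3))
    m, t = m0, t_start
    while t < t_end:
        m += k * m**(2.0/3.0) * dt
        t += dt
    return m

early = grow(m0=1.0, t_start=0.0)   # body whose cement sets first
late  = grow(m0=1.0, t_start=2.0)   # identical body, delayed initiation
print(early, late)                  # the early starter stays well ahead

The early starter remains permanently ahead in mass; if initiation is rare enough that only one body in a zone gets started, that body effectively takes all the feed.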
 
Am I right? The more detailed account, including a major literature review, took just under a quarter of a million words in the ebook, and the last chapter contains over 80 predictions, most of which would be very difficult to test. Nevertheless, an encouraging sign is that the debris of a minor rocky planet around a white dwarf (what remains of an F- or A-type star) shows the presence of considerable amounts of water. Such water is, in my opinion, best explained by water being involved in the initial accretion of the body, because it is extremely unlikely that so much water could arrive on a minor rocky planet by collision of chondrites: the gravity of the minor planet is unlikely to be enough to hold it. This is therefore strongly supportive of my mechanism, and it is rather difficult to see how it could arise through the standard theory.
Posted by Ian Miller on Oct 21, 2013 1:57 AM BST
Leaving aside the provision of employment for modellers, I am far from convinced that the climate change models are of any use at all. As an example, we often hear the proposition that to fix climate change we should find a way to get carbon dioxide out of the atmosphere, or out of the gaseous effluent of power stations. This sounds simple. It is reasonably straightforward to absorb carbon dioxide: bubble the gas through a suitable base. Of course, the problem then comes down to how you get a suitable base. Calcium oxide is fine, except that you broke down a carbonate at quite high temperatures to get it. Amines offer an easier route, but to collect a power station's output, regenerate your amine, and keep the carbon dioxide under control will require up to a third of the power from your power station. Not attractive. The next problem is what to do with the carbon dioxide. Yes, some can be sunk into wells, preferably wet basaltic ones, as this will fix the CO2, and a small amount could be used as a chemical feedstock, say to make polycarbonates, but how many power stations do you think will be accounted for by that?
 
The problem for climate change is that we currently burn about 9 Gt of carbon per annum, which means we have to fix or use something like 33 Gt of CO2 per annum just to break even, and breaking even is unlikely to fix the carbon problem. CO2 is not a very strong greenhouse gas, but it does stay in the atmosphere for a considerable time. One point that nobody seems to make in public is that even if we stopped emitting CO2 right now, the additional carbon we have already put into the atmosphere will remain for long enough to do a lot more damage. Everybody seems to behave as if we are in a rapid equilibrium, and that is not so. The Greenland ice sheet is the last relic of the last ice age. If we have created enough net warming to melt so much of it per annum, that melting will keep going until the ice retreats to a more resilient position, at which point our climate will change significantly because we will have a much different albedo over a large area. We cannot "fix" climate change by simply stopping the rate of increase of carbon burning; we have to actively reduce the total integrated amount, not simply worry about the rate of increased production. I suggest that to fix the climate problem, assuming we see it as a problem, we would be better to put more effort into something with a stronger response than fixing CO2.
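The step from 9 Gt of carbon to 33 Gt of CO2 is simply molar-mass bookkeeping; a trivial check in Python:

C_BURNED_GT = 9.0            # Gt of carbon burned per annum, as stated above
M_CO2, M_C = 44.01, 12.011   # molar masses, g/mol

co2_gt = C_BURNED_GT * M_CO2 / M_C
print(f"{co2_gt:.0f} Gt of CO2 per annum")   # ~33 Gt, as stated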
 
In the previous post, I attempted (unsuccessfully!) to irritate some people about how climate change research money is spent. When money becomes available, what happens? What I believe happens is that we see numerous proposals for funding to make more accurate measurements of something. My argument is: supposing we do get more accurate data on, say, the methane output of some swamp, what good does that do? It provides employment for those measuring the output of the swamp, but then what? Certainly it will add more to the literature, but the scientific literature is hardly short of material. Enough such measurements will, perhaps, help models account for what has happened, but the one thing I am less confident about is whether such models will be able to answer the question, "Exactly what will happen if we do X?" For example, suppose we decided to raise the albedo of the planet by reflecting more light to space, and did this in a region that would lower the temperature of cold fronts coming into Greenland, with the aim of increasing snow deposition over Greenland: how much light would we need to reflect, and where should we reflect it? My argument is that until models can give an approximate answer to that sort of question, they are useless. And unless we do something like geo-engineering, we are doomed to have to accommodate the change, because nobody has suggested any alternative that has the capacity to solve the problem. We can wave our hands and "feel virtuous" for claiming that we are doing something, but unless the sum of the somethings solves the problem, it is a complete waste of effort. Worse than that, such acts consume resources that could be better used to accommodate what will come. The only value of a model is to inform us which actions will be sufficient, and so far they cannot do that.
Posted by Ian Miller on Oct 14, 2013 10:13 PM BST
Currently, NASA is asking for public assistance with its astrobiology program, or it was up until the current government shutdown, and in particular it is asking for suggestions as to where the program should be going. I think this is an extremely enlightened approach, and I hope they receive plenty of good suggestions and take some of them up. This is a little different from the average way science gets funded, in which academic scientists put in applications for funds to pursue what they think is original. This is supposed to permit the uncovering of "great new advances", and in some areas perhaps it does, but I rather suspect the most common outcome is to support what Rutherford dismissively called "stamp collecting". You get a lot of publications, a lot of data, but there is no coherent approach towards answering the big questions. That, I think, is a strength of the NASA approach, and I hope other organizations take it up. For example, if we wish to address climate change, what questions do we really want answered? What we tend to get is "fund me to set up more data gathering" from those too uninspired to come up with something more incisive. We do not need more data to tune the parameters so that current models better represent what we see; we need better models that will represent what will happen if we do or do not do X.
 
So what are the good questions for NASA to address? Obviously there are a very large number, but regarding biogenesis I think some are very important. Perhaps the most important one pursued so far is how the planets got their water, because if we want life on other planets, they have to have water. The water on the rocky planets is often thought to have come from chondrites, as a "late veneer" on the planet. As I argued in my ebook, Planetary Formation and Biogenesis, this explanation has serious problems. First, only a special class of chondrites contains volatiles; the bulk of the bodies from the asteroid belt do not. Further, the isotopes of the heavier elements differ from Earth's, and the ratios of the various volatiles do not correspond to anything we see here or on the other planets. So why is such an explanation persisted with? The short answer is that, for most, there is no alternative.
 
My alternative is simple: the planets started accreting through chemical processes. Only solids could be accreted in reasonable amounts this close to the star, unless the body got big enough to hold gases from the accretion disk gravitationally. Water can be held in metal and silicon hydroxyl compounds, and subsequently liberated. This, as far as I know, is the only mechanism by which the various planets can have different atmospheric compositions: different amounts of the various components were formed at different temperatures in the disk.
 
If that is correct, we would have a means of predicting whether alien planets could conceivably contain life. Accordingly, one way to pursue this would be to try to understand the high-temperature chemistry of the dusts and volatiles expected in the accretion disk. That would involve a lot of work for which only chemists would be suitable. Now, my question is: how many chemists have shown any interest in this NASA program? Do we always want to complain about insufficient research funds, or are we prepared to go out and do something to collect more?
Posted by Ian Miller on Oct 7, 2013 1:10 AM BST
Perhaps one of the more interesting questions is where Earth's volatiles came from. The generally accepted theory is that Earth formed through catastrophic collisions of planetary embryos (Mars-sized bodies), which effectively turned Earth into a giant ball of magma, at which time the iron settled to the core through having a greater density, taking various siderophile elements with it. At this stage, the Earth would have been reasonably anhydrous. Subsequently, Earth was bombarded with chondritic material from the asteroid belt that was dislodged by Jupiter's gravitational field (including, in some models, Jupiter migrating inwards then out again), and it is from this that Earth gets its volatiles and its siderophile elements. This bombardment is often called "the late veneer". In my opinion, there are several reasons why this did not happen, which is where these papers become relevant. What are the reasons? First, while there was obviously a bombardment, to deliver the volatiles that way only carbonaceous chondrites will suffice, and if there were sufficient mass of those to supply Earth, there should also be a huge mass of silicates from the more normal bodies. There is also the problem of atmospheric composition. While Mars is the closest, it is hit relatively infrequently compared with its cross-section, and hit by moderately wet bodies almost totally deficient in nitrogen. Earth is hit by a large number of bodies carrying everything, yet the Moon is seemingly not hit by wet or carbonaceous bodies. Venus, meanwhile, is hit by more bodies that are very rich in nitrogen, but relatively dry. What does the sorting?
 
The first paper (Nature 501: 208–210) notes that if we assume the standard model of core segregation, the iron would have removed about 97% of the Earth's sulphur and transferred it to the core. If so, the Earth's mantle should exhibit a fractionated 34S/32S ratio according to the relevant metal-silicate partition coefficients, together with fractionated siderophile metal abundances. However, it is usually thought that Earth's mantle is both homogeneous and chondritic for this sulphur ratio, consistent with the acquisition of sulphur (and other siderophile elements) from chondrites (the late veneer). An analysis of mantle material from mid-ocean ridge basalts displayed heterogeneous 34S/32S ratios that are compatible with binary mixing between a low 34S/32S ambient mantle and a high 34S/32S recycled component. The depleted end-member cannot reach a chondritic value, even if the most optimistic amount of surface sulphur is added. Accordingly, these results imply that mantle sulphur is at least partially determined by original accretion, and not all sulphur was deposited by the late veneer.
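For readers unfamiliar with the term, "binary mixing" here simply means a weighted average of two isotopic end-members. A minimal sketch in Python; the δ34S values below are invented for illustration, and linear mixing of delta values assumes the two components carry comparable sulphur concentrations:

def mix_delta(delta_ambient, delta_recycled, f_recycled):
    # weighted average of the ambient-mantle and recycled end-members
    return (1.0 - f_recycled) * delta_ambient + f_recycled * delta_recycled

for f in (0.0, 0.25, 0.5, 1.0):
    print(f, mix_delta(-1.5, +3.0, f))   # sweeps between the two end-members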
 
In the second (Geochim. Cosmochim. Acta 121: 67–83), samples from Earth, Moon, Mars, eucrites, carbonaceous chondrites and ordinary chondrites show variation in Si isotopes. Earth and Moon show the heaviest isotopes, and have the same composition, while enstatite chondrites have the lightest. The authors constructed a model of Si partitioning based on continuous planetary formation that takes into account the variation of T, P and oxygen fugacity during Earth's accretion. If the isotopic difference results solely from Si fractionation during core formation, their model requires at least ~12% by weight Si in the core, which exceeds estimates based on core density or geochemical mass-balance calculations. This suggests one of two explanations: (1) Earth's material started with heavier silicon, or (2) there is a further unknown process that leads to fractionation. They suggest vaporization following the Moon-forming event, but would that not lead to lighter or different Moon material?
 
One paper (Earth Planet. Sci. Lett. 2013: 88–97) pleased me. My interpretation of the data relating to atmospheric formation is that the gaseous elements originally accreted as solids, and were liberated by water as the planet evolved. These authors showed that early degassing of H2, obtained from reactions of water, explains the "high oxygen fugacity" of the Earth's mantle. A loss of only a third of an "ocean" of water from Earth would shift the oxidation state of the upper mantle up from the very low oxidation state equivalent to that of the Moon, and if so, no further processes are required. Hydrogen is an important component of basalts at high pressure and, perforce, low oxygen fugacity. Of particular interest, this process may have been rapid. The early Earth had to lose over five times as much heat as is lost now, and one proposal (501: 501–504) is that heat-pipe volcanism, such as is found on Io, would manage this, in which case the evolution of water and volatiles may also have been very rapid.
 
Finally, in Icarus 226: 1489–1498, near-infrared spectra show the presence of hydrated, poorly crystalline silica with a high silica content on the western rim of Hellas. The surfaces are sporadically exposed over a 650 km section within a limited elevation range. The high abundances and the lack of associated aqueous-phase material indicate that high water-to-rock ratios were present, but not the higher temperatures that would lead to quartz. This latter point is of interest because it is often suggested that the water flows in Martian craters were due to internal heat generated by impact, such heat being retained for considerable periods of time. To weather basalt to silica, there would have to be continuous water over a long time, and if the water was hot and on the surface it would rapidly evaporate, while if it was buried it would stay superheated, and presumably some quartz would result. This suggests extensive flows of cold water.
Posted by Ian Miller on Sep 30, 2013 3:30 AM BST
In a previous post, I questioned whether gold showed relativistic effects in its valence electrons. I also mentioned a paper of mine that proposes that the wave functions of the heavier elements do not correspond exactly to the excited states of hydrogen, but rather are composite functions, some of which have reduced numbers of nodes, and I said that I would provide a figure from the paper once I sorted out the permission issue. That is now sorted, and the following figure comes from my paper.
[Figure: theoretical function (lines) and observed "screening constants" (squares) plotted against the number of radial nodes.]
The full paper can be found at http://www.publish.csiro.au/nid/78/paper/PH870329.htm and I thank CSIRO for permission to republish the figure. The lines show the theoretical function, the numbers in brackets are explained in the paper, and the squares show the "screening constant" required to get the observed energies. The horizontal axis shows the number of radial nodes; the vertical axis, the "screening constant".
 
The contents of that paper are incompatible with what we use in quantum chemistry, because the wave functions do not correspond to the excited states of hydrogen. The theoretical function is obtained by assuming a composite wave in which the quantal system is subdivisible, provided discrete quanta of action are associated with any component. The periodic time may involve four "revolutions" to generate the quantum (which is why you see quantum numbers with the quarter quantum). What you may note is that for ℓ = 1 gold is not particularly impressive (and there was a shortage of clear data), but for ℓ = 0 and ℓ = 2 the agreement is not too bad at all, and not particularly worse than that for copper.
 
So, what does this mean? At the time, the relationships were simply put there as propositions, and I did not try to explain their origin. There were two reasons for this. The first was that I thought it better simply to provide the observations and not clutter them up with theory that many would find unacceptable; it is not desirable to make too many uncomfortable points in one paper. I did not even mention "composite waves" clearly. Why not? Because I felt that was against the state vector formalism, and I did not wish to have arguments about that. (That view may not be correct, because you can have "Schrödinger cat states", e.g. as described by Haroche, 2013, Angew. Chem. Int. Ed. 52: 10159–10178.) However, the second reason was perhaps more important: I was developing my own interpretation of quantum mechanics, and I was not there yet.
 
Anyway, I have got about as far as I think is necessary to start thinking about trying to convince others, and yes, it is an alternative. For the motion of a single particle I agree the Schrödinger equation applies (but for ensembles, while a wave equation applies, it is a variation, as seen in the graph above). I also agree the wave function is of the form
ψ = A exp(2πiS/h)
So, what is the difference? Well, everyone believes the wave function is complex, and here I beg to differ. It is, but not entirely. If you recall Euler's relation, you will recall that exp(iπ) = -1, i.e. it is real. That means that twice per period, at the brief instants when S is a multiple of h/2, ψ is real and equals the wave amplitude. No need to multiply by complex conjugates then (which is by itself an interesting concept: where did this conjugate come from? Simple squaring does not eliminate the complex nature!). I then assume the wave only affects the particle when the wave is real, when it forces the particle to behave as the wave requires. To this extent, the interpretation is a little like the pilot wave.
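A trivial numerical check of that claim, in Python, working in units where h = 1: ψ returns to a purely real value twice per period, whenever S is a multiple of h/2.

import numpy as np

h, A = 1.0, 1.0                       # units where Planck's constant h = 1
S = np.linspace(0.0, 2.0, 9)          # action in units of h: two periods

psi = A * np.exp(2j * np.pi * S / h)  # psi = A exp(2*pi*i*S/h)

for s, p in zip(S, psi):
    tag = "real" if abs(p.imag) < 1e-12 else "complex"
    print(f"S = {s:4.2f} h   psi = {p:.3f}   ({tag})")
# psi is real (+A or -A) exactly at S = 0, h/2, h, 3h/2, 2h: twice per period.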
 
If you accept that, and if you accept the interpretation of what the wave function means, then the reason why an electron does not radiate energy and fall into the nucleus becomes apparent, and the Uncertainty Principle and the Exclusion Principle then follow with no further assumptions. I am currently completing a draft of this that I shall self-publish. Why self-publish? That will be the subject of a later blog.
 
Posted by Ian Miller on Sep 23, 2013 3:30 AM BST
In the latest Chemistry World, Derek Lowe stated that keeping up with the literature is impossible, and he argued for filtering and prioritizing. I agree with his first statement, but I do not think his second option, necessary though it is right now, is optimal. That leaves open the question: what can be done about it? I think this is important, because the major chemical societies around the world are the only organizations that could conceivably help, and surely this should be of prime importance to them. So, what are the problems?
 
Where to put the information is not a problem, because we now seem to have almost unlimited digital storage capacity. Similarly, organizing it is not a problem provided the information is correctly input, in an appropriate format with proper tags. So far, easy! Paying for it? That is trickier, but it should not necessarily be too costly in terms of cash.
 
The most obvious problem is manpower, but this can also be overcome if all chemists play their part. For example, consider chemical data. The chemist writes a paper, but it would take little extra effort to put the data into some pre-agreed format for entry into the appropriate database. Some of this is already done with "supplementary information", but that tends to be attached to papers, which means someone wishing to find the information has to subscribe to the journal. Is there any good reason why data like melting points and spectra cannot be provided free? As an aside, this sort of suggestion would be greatly helped if we could all agree on the formatting requirements and on what tags would be required; a sketch of what such a record might look like follows.
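Purely as illustration (the field names, tags and values below are my invention, not any existing standard), a deposited record might look like this in Python:

import json

record = {
    "compound": "example-amine-42",        # hypothetical entry
    "identifier": "InChI=...",             # agreed unique structure tag
    "melting_point": {"value": 147.0, "units": "degC"},
    "spectra": [
        {"type": "13C NMR", "solvent": "CDCl3",
         "shifts_ppm": [170.1, 128.4, 52.3]},   # illustrative numbers only
    ],
    "source": {"doi": "10.xxxx/placeholder"},   # link back to the paper
}

print(json.dumps(record, indent=2))   # ready for deposit in an open database

The point is not this particular layout, but that once a format and tag set are agreed, depositing the data costs the author only a few minutes.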
 
This does not solve everything, because there are a lot of other problems too, such as "how to make something". One thing that has always struck me is the enormous wastage of effort in things like biofuels, where very similar work tended to be repeated every crisis. Yes, I know, intellectual property rights tend to get in the way, but surely we can get around this. As an example of this problem, I recall when I was involved in a joint venture with the old ICI empire. For one of the potential products, I suggested a polyamide based on a particular diamine that, according to me, we could make. ICINZ took this up and sent it off to the UK, where it was obviously viewed with something approaching indifference, but they let it out to a university to devise a way to make said polyamide. After a year, we got back the report: they could not make the diamine, and in any case my suggested polymer would be useless. I suggested that they rethink that last thought, and got a rude blast back: what did I know anyway? So I gave them the polymer's properties. How did I know that, they asked. "Simple," I replied, and showed them the data in an ICI patent, at which point I asked them whether they had simply fabricated the whole thing, or had they really made this diamine? There was one of those embarrassed silences! The institution could not even remember its own work!
 
In principle, how to make something is clearly set out in scientific papers, but again the problem is how to find the data, bearing in mind that no institute can afford more than a fraction of the available journals. Even worse is the problem of finding something related. "How do you get from one functional group to another in this sort of molecule, with these other groups that may interfere?" is a very common problem that could in principle be solved by computer searching, but we need an agreed format for the data, and an agreement that every chemist will do their part to place what they believe to be the best examples of their own synthetic work in it. Could we get that cooperation? Will the learned societies help?
 
Posted by Ian Miller on Sep 16, 2013 8:07 PM BST
One concern I have as a scientist, and one I have alluded to previously, lies in the question of computations. We have now entered an age where computers permit modelling of a complexity unknown to previous generations. Accordingly, we can tackle problems that were never possible before, and that should be good. The problem for me is that reports of the computations tell almost nothing about how they were done, and they are so opaque that one might even question whether the people making them fully understand the underlying code. The reason is, of course, that the code is never written by one person, but rather by a team. The code is then validated by running the computations on a sequence of known examples, and during this time certain constants of integration that are required by the process are fixed. My problem with this follows a comment that I understand was attributed to Fermi: give me five constants and I will fit any data to an elephant. Since there is a constant associated with every integration, it is only too easy to get agreement with observation, as the sketch below illustrates.
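To see how easily free constants buy "agreement", here is a minimal sketch in Python: five adjustable constants (the coefficients of a quartic) reproduce five arbitrary "observations" essentially exactly.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 5)
y = rng.normal(size=5)               # five arbitrary "observations"

coeffs = np.polyfit(x, y, deg=4)     # a quartic: five free constants
residuals = y - np.polyval(coeffs, x)
print(np.max(np.abs(residuals)))     # ~1e-15: a perfect-looking "validation"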
 
An example that particularly irritated me was a paper that tried "evolved" programs on the molecules from which they evolved (Moran et al. 2006. J. Am. Chem. Soc. 128: 9342-9343). What they did was to apply a number of readily available and popular molecular orbital programs to compounds that had been the strong point of molecular orbital theory, such as benzene and other arenes. What they found was that these programs "predicted" benzene to be non-planar, with quite erroneous spectral signals. That such problems occur is, I suppose, inevitable, but what I found concerning is that nowhere, as far as I know, was the reason for the deviations identified, nor how such propensity to error can be corrected, nor what such corrections, once made, would do to the subsequent computations that allegedly gave outputs agreeing well with observation. If the values of various constants are changed, presumably the previous agreement would disappear.
 
There are several reasons why I get a little grumpy over this. One example is the question of planetary formation. Computations up to about 1995 indicated that Earth would take about 100 My to accrete from planetary embryos; however, because of the problem of Moon formation, subsequent computations have reduced this to about 30 My, and assertions are made that computations reduce the formation of gas giants to a few My. My question is, what changed? There is no question that someone can make a mistake and subsequently correct it, but surely it should be announced what the correction was. An even worse problem, from my point of view, followed from my PhD project, which addressed the question: do cyclopropane electrons delocalize into adjacent unsaturation? Computations said yes, which is hardly surprising, because molecular orbital theory starts by assuming delocalization and subsequently tries to show why bonds should be localized; if it is going to make a mistake, it will favour delocalization. The trouble was, my results, which involved varying substituents at another ring carbon and looking at Hammett relationships, said it does not.
 
Subsequent computational theory said that cyclopropane conjugates with adjacent unsaturation BUT does not transmit it, while giving no clues as to how it came to this conclusion, apart from the desire to be in agreement with the growing list of observations. Now, if theory says that conjugation involves a common wave function over the region, then the energy at all parts of that wave must be equal. (The electrons can redistribute themselves to accommodate this, but a stationary solution to the Schrödinger equation can have only one frequency.) Now, if A has a common energy with B, and B has a common energy with C, why does A not have a common energy with C? Nobody has ever answered that satisfactorily. What further irritates me is that the statement that persists in current textbooks employed the same computational programs that "proved" the existence of polywater. That was hardly a highlight, so why are we so convinced the other results are valid? So, what would I like to see? In computations, the underpinning physics, the assumptions made, and how the constants of integration were set should be clearly stated. I am quite happy to concede that computers will not make mistakes in addition, etc., but that does not mean that the instructions given to the computer cannot be questioned.
Posted by Ian Miller on Sep 9, 2013 4:31 AM BST