Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


In a previous post (http://my.rsc.org/blogs/84/1702) I made the case that the covalent bonds of the Group 1 metals were characteristic of the element, i.e. the energy of any A – B bond was the arithmetic mean of the A – A and the B – B bond energies. I also asked, "What do you think? Are you interested?" So far, no comments. Does this mean that nobody can see a glaring problem, or does it really mean that chemists as a whole have little interest in the nature of the chemical bond?
First, the glaring problem. How can the energies be the arithmetic mean? Thus from de Broglie, we know
pλ = h
We have also established that the covalent radius is characteristic of the atom, which means that λ on the bond axis is constant. We also know that, on average, there are no net forces on the nuclei, otherwise they would accelerate in the direction of the force. (Zero-point motion is superimposed on such an equilibrium distance, but the forces average to zero.) With no net forces, the average wavelength as determined on the other axes should also be constant. You may protest (correctly) that the wave may have only one wavelength, but that is only true if the wave is not separable. For example, one might argue that the medium changes on the bond axis because wave interference changes the particle density there.

Thus the constant covalent radius implies a constant wavelength for the valence electron in different molecules. But since the total energy will involve a term in (p1 + p2)^2 minus the original energies, and since the square of a sum does not equal the sum of the squares, and since the path length must change between, say, Li2 and LiCs, the bond energies should not be the linear sum of the components if the waves are delocalized over the whole molecule. For a simple two-electron wave function that arises from pairing, no new nodes are placed in the wave function (other than in the antibond or excited states), so the path length must change significantly. To me, this strongly suggests that molecular orbital theory is not soundly based. Yes, the right answers can be obtained by adjusting the parameters/constants within the calculations, but that does not prove the theory is correct. Instead, there should be some algebraic reason why such additivity arises naturally.
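To spell that out in symbols (p1 and p2 being the momenta associated with the two bonding electrons; the notation is mine):

```latex
(p_1 + p_2)^2 = p_1^2 + 2 p_1 p_2 + p_2^2 \;\neq\; p_1^2 + p_2^2
```

It is the cross term 2p1p2 that spoils naive additivity whenever the two-electron wave is genuinely delocalized over the whole molecule.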

Is there any? The answer to that, in my opinion, rests on the reason why the energy levels are stable at all. Under Maxwell's electromagnetic theory, an accelerating electron should emit electromagnetic radiation, and it always does, except in the stationary states of atoms and molecules. From the Schrödinger equation, such stable states occur only when the action is exactly quantized. If the action about each atom must be quantized in σ-bonded molecules for the molecule to be stable, then we get the additivity of the energy of such simple molecules, provided the covalent radius is constant. Thus we have a physical reason, independent of calculations, for the observation. The importance of this is that it gives a new relationship to aid calculations, and it also shows why the functional group actually occurs. Is such a potentially new physical relationship of sufficient interest to be worth further investigation?
Any comments? Please!
Posted by Ian Miller on Mar 20, 2016 10:54 PM GMT
In the last post, I presented data for the covalent bonds of the A – B compounds of the Group 1 elements that showed, to a reasonable degree, that the atoms each had a characteristic covalent energy, in the same way there is a covalent radius, and that the bond energy of the A – B bond is the sum of the A and B contributions. This goes against all the standard textbook writings. In an earlier post I stated that I had previously submitted a paper that would lead to a method for readily calculating these bond energies, but the paper was rejected by the editors of some journals on the grounds that these are not very important molecules, or that nobody would be interested, or both. This annoyed me at the time, but it seems to me they had a point. These blog posts have received absolutely no comments. Either nobody cares, or nobody is reading the posts. Either way, it is hardly encouraging.
Now, the next point that could have been made is that when we get to more common problems, the bond energies are not additive in that way. Or are they? One problem I see is the actual data are not really suitable for reaching a conclusion.
Let's consider the P – P bond energy, which is needed for considering the bond additivity of any phosphorus compound. I made a quick calculation of the P – P bond energy in diphosphine, on the assumption that the P – H bond energy was the same as in phosphine, and I got 242 kJ/mol. If you look up some bond energy tables, you find the energy quoted as 201 kJ/mol. How did they get that? If you work from the heat of atomization of phosphorus, the bond energy comes out at 221 kJ/mol, but if we assume that refers to the P4 form, the molecule has the tetrahedrane structure, which will be strained (although the strain will also stabilize lone pairs). Further, the standard state is a solid, so in principle energy should be added to get it into the gas phase before atomizing to make the comparison. It is therefore reasonable to assume that the real bond energy is stronger than that calculation indicates.
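For anyone who wants to retrace the arithmetic, here is a minimal sketch. The two atomization enthalpies are illustrative placeholders (chosen so the output reproduces my 242 kJ/mol figure), not critically evaluated data:

```python
# Sketch: P - P bond energy in diphosphine (P2H4), assuming the P - H bond
# energy is transferable from phosphine (PH3). Both atomization enthalpies
# (kJ/mol) are illustrative placeholders; substitute evaluated data.
H_ATOM_PH3 = 966.0    # assumed: PH3 -> P + 3H
H_ATOM_P2H4 = 1530.0  # assumed: P2H4 -> 2P + 4H

e_PH = H_ATOM_PH3 / 3.0          # one P - H bond, ~322 kJ/mol
e_PP = H_ATOM_P2H4 - 4.0 * e_PH  # the remainder is assigned to P - P

print(f"E(P-H) = {e_PH:.0f} kJ/mol")  # 322
print(f"E(P-P) = {e_PP:.0f} kJ/mol")  # 242 with these inputs
```

Note how directly the answer tracks the inputs: shift either enthalpy by a few kJ/mol and the P – P value shifts by the same amount.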
The problem is obvious: to make any sense of this, we need more accurate data. We also need the data to involve energies of atomization, and not rely on the more easily obtained bond dissociation energies. But as far as I can see, the chemical community has given up trying to establish these data. Does it matter? I think it does. For me, a problem with modern chemical theory, which is essentially extremely complicated computation, is that it offers little assistance with the issues that matter to the chemist, because no principles are enunciated, merely results and comments on various computational programs. The principles are needed, even if the calculations are not completely accurate, so that chemists can draw conclusions and use them to formulate new plans of action. How many really think they understand why synthetic reactions work the way they do? Do we care about this very fundamental component of our discipline? And, for that matter, does anyone care whether I write this blog?
Posted by Ian Miller on Feb 29, 2016 2:14 AM GMT
In my last post, I presented evidence that the covalent radius of a Group 1 metal is constant in the dimeric compounds. I also asked whether anyone was interested. So far, no responses, and I suspect the post received something of a yawn, if that, from some because, after all, everyone "knows" there is a constant covalent radius. There is, of course, a problem. Had I included hydrides, the relation would not have worked. Ha, you say, but the hydrides are ionic. Well, the constant covalent radius of hydrogen simply does not work for a lot of other compounds either. Try methane, ammonia and water. There are various alternative explanations, but let us for the moment accept that hydrogen does not comply with this covalent radius proposition.
 
If the covalent radius of an atom is constant, then there should be a characteristic wavelength for each given atom when chemically bound, which in turn suggests, from the de Broglie relation, that the bonding electrons will contribute a constant momentum to the bond. While that is a little questionable, if true it would mean the bond energy of an A – B molecule is the arithmetic mean of the corresponding A – A and B – B bond energies. Now, one can argue over the reasoning behind that, but much better is to examine the data and see what nature wants to tell us.
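In symbols, writing εX for the characteristic energy contribution of atom X and E_{XY} for the X – Y bond energy (the notation is mine), the proposition is:

```latex
p = \frac{h}{\lambda}, \qquad
E_{AB} = \varepsilon_A + \varepsilon_B
       = \tfrac{1}{2}\left(E_{AA} + E_{BB}\right),
\quad \text{since } E_{AA} = 2\varepsilon_A .
```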
 
Pauling, in The Nature of the Chemical Bond, stated clearly that this is not correct. However, if we pause for thought, we find the arithmetic-mean proposition depends on there being no interactions beyond those arising from the bonding electrons forming the covalent bond. Thus atoms with a lone pair would be excluded because the A – A bonds are too weak, such weakness usually being attributed to lone pair interactions. Think of peroxides. Then, bonds involving hydrogen would be excluded because the covalent radius relationship does not hold. Bonds involving hybridization may produce other problems. This is where the Group 1 metals come into their own: they have no additional complicating features. Far from "not being very interesting", as one editor complained to me, I believe they are essential to starting an analysis of covalent bond theory. So, what have we got?
 
The energies of the A – A bonds are somewhat difficult to nail down. Values are published, but often there is more than one value, and the values lie outside their mutual error bars. With that reservation, a selection of energies (in kJ/mol) is as follows: Li2 102.3; Na2 72.04, 73.6; K2 57.3; Rb2 47.8; Cs2 44.8
 
The observed bond energies for the A – B molecules are taken from a review (Fedorov, D. A., Derevianko, A., Varganov, S. A., J. Chem. Phys. 140: 184315 (2014)). Below, the calculated value, based on the average of the A – A molecules, is given first, then in brackets the observed energy, then the difference δ, expressed as what must be added to the calculated value to get the observed value.
             Mean      Obs        δ
Li – Na      88.0    (85.0)    -3.0
Li – K       79.8    (73.6)    -6.2
Li – Rb      75.1    (70.9)    -4.2
Li – Cs      73.6    (70.3)    -3.3
Na – K       65.5    (63.1)    -2.4
Na – Rb      60.7    (60.2)    -0.5
Na – Cs      59.2    (59.3)    +0.1
K – Rb       52.6    (50.5)    -2.1
K – Cs       51.1    (48.7)    -2.4
Rb – Cs      46.3    (45.9)    -0.4
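Anyone who wants to check those numbers can do so in a few lines; a minimal sketch (the variable names are mine), using the A – A energies above with 73.6 taken for Na2:

```python
# Check of the additivity table: mean of the A - A bond energies (kJ/mol)
# against the observed A - B energies quoted above from Fedorov et al.
AA = {"Li": 102.3, "Na": 73.6, "K": 57.3, "Rb": 47.8, "Cs": 44.8}
obs = {("Li", "Na"): 85.0, ("Li", "K"): 73.6, ("Li", "Rb"): 70.9,
       ("Li", "Cs"): 70.3, ("Na", "K"): 63.1, ("Na", "Rb"): 60.2,
       ("Na", "Cs"): 59.3, ("K", "Rb"): 50.5, ("K", "Cs"): 48.7,
       ("Rb", "Cs"): 45.9}

for (a, b), e_obs in obs.items():
    mean = (AA[a] + AA[b]) / 2.0  # additivity prediction
    delta = e_obs - mean          # what must be added to the mean
    print(f"{a}-{b}: mean {mean:5.1f}  obs {e_obs:5.1f}  delta {delta:+5.1f}")
```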
 
The question now is, does this show that the bond energies are the arithmetic means of the A – A and B – B molecules? Similarly to my last post, there are three options:
(1) The bond energies are the sum of the atomic contributions, and the discrepancies are observational error, including in the A – A molecules.
(2) The bond energies are the sum of the atomic contributions, and the discrepancies are partly observational error, including in the A – A molecules, and partly some very small additional effect.
(3) The bond energies are not the sum of the atomic contributions, and any agreement is accidental.
What do you think? Are you interested?

 
Posted by Ian Miller on Feb 8, 2016 2:03 AM GMT
Time to start the New Year, and to wish the RSC a great 175th anniversary. Over those 175 years chemistry has changed greatly, both as a discipline and in its effects on our lives. Nevertheless, at heart, chemistry is about molecules, and molecules are groups of atoms held together by what we call chemical bonds. The recent Chemistry World has an article that shows how our understanding of bonding has evolved, but I am not convinced it shows how hard this was. Don't forget, as physics stood then, the concept of an electron moving around a nucleus violated one of the greatest triumphs of 19th century physics, namely Maxwell's electromagnetic theory. Would you have been prepared to propose something so radical? Or has the quest to discover died out? It is a lot easier to grasp something when everyone tells you it is correct, but what about when nobody knows? I may annoy a number of people, but I believe we still do not properly understand even the simplest of chemical bonds, and I thought one contribution I could make to the year would be to illustrate a problem over a small number of posts. Amongst other things, I hope to show you how hard it is to form new theories.
 
What I am going to do is to focus on the dimeric molecules of the Group 1 metals. These are of interest to me because they are the simplest molecules, with the fewest complicating issues. Or so I thought. Some of what I am going to put in the following posts was submitted to two journals and rejected by both on what I consider spurious grounds. The first said these molecules were not very interesting; the second said nobody would be interested in what I was proposing (which involved a hitherto unrecognized quantum effect). There were no adverse comments about the physics! I wonder: were these editors right? Perhaps nobody is interested? Perhaps the urge to overturn the wrong has gone? If it looks OK, then do not disturb! Welcome back, Claudius Ptolemy! Of course, just because nobody is arguing, that may be because it is correct. Maxwell's electromagnetic theory is correct, right? Apart from that pesky issue about electrons around atoms, that is. But you know why that is, don't you? Do you really? Maybe in detail things are not quite what you think. Care to try out some problems?
 
So here goes thought number one. Atoms have a characteristic covalent radius, so the bond distance of an A – B molecule is the arithmetic mean of the A – A and B – B bond distances. Do you agree with that statement?
 
Let's test it. The literature contains the necessary A – A bond distances, although of varying degrees of accuracy, and recently a review of the bond properties of the A – B molecules of the Group 1 metals has been published (Fedorov, D. A., Derevianko, A., Varganov, S. A., J. Chem. Phys. 140: 184315 (2014)). So, let's check. First, the covalent radii, i.e. half the A – A bond distances. The following values, in pm, are from various literature sources:
Li 133.6; Na 153.9; K 193.5; Rb 210.5; Cs 230
Now, let us look at the A – B molecules. In the following, the column labelled "Mean" is the sum of the relevant covalent radii (equivalently, the arithmetic mean of the relevant A – A bond distances), the column labelled "Obs" gives the measured values from Fedorov et al., and δ is what must be added to the calculated value to get the observed value.
 
             Mean       Obs        δ
Li – Na     287.5    (288.9)    +1.4
Li – K      328.1    (332.3)    +4.2
Li – Rb     344.1    (346.6)    +2.5
Li – Cs     363.6    (366.8)    +3.2
Na – K      347.4    (349.9)    +2.5
Na – Rb     364.4    (364.3)    -0.1
Na – Cs     383.9    (385.0)    +1.1
K – Rb      404.0    (406.9)    +2.9
K – Cs      423.5    (428.4)    +4.9
Rb – Cs     440.5    (437.1)    -3.4
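The same check as for the bond energies can be run here; a minimal sketch using the radii and observed distances quoted above:

```python
# Check of the distance table: sum of covalent radii (pm) against the
# observed A - B distances quoted above from Fedorov et al.
radius = {"Li": 133.6, "Na": 153.9, "K": 193.5, "Rb": 210.5, "Cs": 230.0}
obs = {("Li", "Na"): 288.9, ("Li", "K"): 332.3, ("Li", "Rb"): 346.6,
       ("Li", "Cs"): 366.8, ("Na", "K"): 349.9, ("Na", "Rb"): 364.3,
       ("Na", "Cs"): 385.0, ("K", "Rb"): 406.9, ("K", "Cs"): 428.4,
       ("Rb", "Cs"): 437.1}

for (a, b), d_obs in obs.items():
    calc = radius[a] + radius[b]  # additivity prediction
    print(f"{a}-{b}: calc {calc:5.1f}  obs {d_obs:5.1f}  delta {d_obs - calc:+5.1f}")
```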
 
What do you make of that? There are three options:
(1) The bond distance is the sum of the covalent radii, and the discrepancies are observational error, including in the A – A molecules.
(2) The bond distance is the sum of the covalent radii, and the discrepancies are partly observational error, including in the A – A molecules, and partly some very small additional effect.
(3) The bond distance is not the sum of the covalent radii, as shown by the lack of agreement.
What do you opt for? Can you discern any trends? This probably seems fairly obvious to you, but soon it will be less so. The question is, is anyone interested? Were the journal editors right in that nobody cares about the nature of the chemical bond? Will anyone respond?

 
Posted by Ian Miller on Jan 24, 2016 9:13 PM GMT
The November Chemistry World had an article on homochirality, with the question, "How did it evolve?" Clearly a problem, because the article did not really offer a solution. The problem is, the biogenetic chemicals should have been formed in both D and L forms equally. So why do we have D sugars and L amino acids? First, as the article points out, for all we know, throughout the Universe there are as many worlds that made this choice as made the opposite one. There is no reason to believe that D sugars are somehow superior, and certain red algae have polysaccharides based on alternating D and L galactose, so there is nothing that prevents the opposite form. So, how did homochirality evolve? The article offers a good survey of the guesses as to how an initial preference would feed on itself, but the problem then is, why was there an initial preference? In most cases, any means of obtaining a preference would appear to be too small to make any significant difference.
 
In my ebook, Planetary Formation and Biogenesis, I suggest there are two better questions. The first is, why did homochirality evolve? The second, and more important, is, why choose ribose, and having done that, why the furanose form? I think the answer to that second question is important. It is possible to make duplexes out of a number of pyranose pentoses, including ribose, and all of them have a slightly stronger association energy than the ribofuranose. My suggestion is that the furanose form does something the pyranose form does not, in which case the reason for choosing ribose is clear, even though ribose is one of the least likely sugars to be formed from a synthesis that would offer a mixture: it alone has a reasonable amount of the furanose form in solution. So the question then is, why prefer the furanose?
 
The first step towards RNA in biogenesis is to join a purine or pyrimidine to ribose. This is a simple condensation reaction, but it does not work very well for purines, and not at all for pyrimidines, in aqueous solution. The condensation reaction is thermal, so there has to be a means of heating the reactants more strongly, or alternatively of providing more vibrational energy at the reactive site. The formation of the phosphate ester at C-5 is also a condensation reaction. We know that both reactions go photochemically for adenine, ribose and phosphate, but direct absorption is unlikely because adenine only absorbs photons at about 250 nm or less, so I suggested there could be a different mechanism: absorption of visible light by something like a porphyrin, followed by thermal energy transfer. If so, the furanose is preferred because it is the only form that will get to a phosphate ester: it alone is flexible enough to transfer the vibrational energy to C-5.
 
If so, then the origin of homochirality is reasonably obvious. The RNA form condenses photochemically until the RNA polymers get long enough to act as ribozymes. Once they do that, they can depolymerize as well as catalyse polymerization. For a while, anything might be formed, but once a homochiral polymer strand forms, it can form a helix that will act as a template for a double helix. Once it does that, if the duplex separates, we have two templates. Reproduction needs the duplex, and the duplex will not form if the strands have mixed chirality. Once reproduction starts, whatever structure was selected will predominate. If you need homochirality to reproduce, and if, once you get reproduction, that form will predominate, then surely homochirality is inevitable.
 
This will be my last post here for 2015, so may I wish readers a very merry Christmas, and a successful 2016.
Posted by Ian Miller on Dec 13, 2015 10:36 PM GMT
On the international scene, it is often difficult for nations to make decisions when more than one of them is involved, but occasionally an issue comes up where it is difficult to even know how to make the decision. Climate change is one of those issues. Leaving aside some recidivists, the mechanism of greenhouse forcing is now reasonably clearly known, and accepted by the scientific community, and, judging by the recent marches, by a reasonable fraction of the public. Less well accepted is what is essentially hysteresis, which means that what happens depends on what has happened before. Almost certainly, we are not currently in a climatic equilibrium (if we ever were). Another point that many seem to have trouble with is that if there is a net heating, or positive power input, it does not follow that temperatures will increase at selected points. The obvious example is that heat going into the polar regions and melting ice does not raise the temperature. But even more significant, if some areas are getting hotter, and the poles stay the same, we have a greater temperature difference, which permits a stronger heat engine (storms) to develop. Stronger cold winds flowing from the poles will cool some regions, even if, overall, the planet is heating.
 
Our current problem is that, with 400 ppm of CO2 in the atmosphere, the additional heat in the oceans is carrying warmer water to the ice sheets, thus melting glacial ice in Greenland and the Antarctic. Suppose we stopped burning fossil fuels tomorrow: the rate of melting would continue unabated for quite some time, first because the additional heat in the equatorial oceans still needs time to reach the ice. Further, the oceans will continue to absorb heat because the atmosphere will still have its 400 ppm of CO2, together with other gases such as CH4, N2O, and a number of industrially made gases. If the ice sheets melt, there will be a serious rise in sea levels. Countries like Bangladesh will lose half their land, and some Pacific islands will be uninhabitable. So, what should we do?
 
The current political thinking seems to be: nothing, besides reducing CO2 emissions. However, reducing emissions merely slows the development of the problem; it does not remove it, because of what is already there. Worse, India has announced it will build a lot of new coal-fired power stations, on the basis that it should have its turn to burn coal. There is an even worse problem: the acidification of seawater due to the CO2 it has absorbed is bringing it close to the level where aragonite does not precipitate out. A very large number of shellfish, at least in their juvenile stages, depend on aragonite to make their protective shells. Accordingly, we have two problems: how to stop global warming, and how to stop ocean acidification. Each can be addressed by geoengineering, although ocean acidification has the fewer options.
 

There was an article in Science (vol 347, p 1293) that raised the question: what would happen if some country decided to burn a lot of sulphur, which would help form clouds and increase the albedo? The reason the country might decide to do this could be that it had had a series of bad harvests, and it blamed climate change. The problem, of course, is that some other country might then have its harvests fail (and in this case, ocean acidification would hardly improve). Anyone who does something will hardly know what their actions will do elsewhere, and even if they can guess, who is responsible for what happens? What is needed is more information, but how do we get that information? How do you carry out an experiment that will provide data on a global scale without the possibility of influencing the globe? And who will support the experiment? Who will regulate what is done, and on what basis? One unfortunate aspect is that politicians will put themselves in the deciding role, they will not understand the problem, and they will act solely in the interests of their own country. Not an attractive prospect for our grandchildren.
Posted by Ian Miller on Nov 29, 2015 9:23 PM GMT
One thing that brings joy to someone who engages in theoretical work is to find observational evidence supporting a theory that contradicted the "standard theory" everyone accepted when it was presented, which in this case was in 2011, in my ebook Planetary Formation and Biogenesis. The fact that nobody else takes any notice is irrelevant; the feeling that your theory alone actually meets the conditions imposed by nature is great.
 
The relevant part involves the formation of the rocky planets. The standard theory is that these formed from the collision of planetesimals (bodies up to 50 km in size, formed by some totally unknown process), and the volatiles came from a subsequent bombardment of carbonaceous chondrites, or something like them. The review I gave of this process (the ebook has over 600 references) lists a number of reasons why this should be wrong, mainly in the form of a whole lot of other things that should have accompanied the water but clearly did not, at least not in the right ratios; yet the theory was held onto because it was perceived that there was no alternative. When the rocky planets accreted, it was too hot for water to accrete at those pressures by any reasonable physical process.
 
My answer was that Earth formed by chemical processes. Very specifically, in the early stages of the accretion disk there were temperatures at which calcium aluminosilicates could phase-separate out of melt-fused rocks; when the disk cooled, collisions made dust, the dust adhered to rock and collected water vapour from the nebula, and the water set the cements, giving what were effectively concretes. These were strong enough to survive the milder collisions, and they would rapidly accrete small material, effectively growing more by monarchic growth than by the usually assumed oligarchic growth. Accordingly, the water that set the cements would be primordial, and this would be the source of Earth's water.
 
The good feelings I am sharing come from a recent paper by Hallis et al. (Science 350: 795 – 797) that reports the deuterium/hydrogen ratios in some primordial rock samples originating in the deep mantle. These lavas, found in Baffin Island and Iceland, have 3He/4He ratios similar to primordial gas (and up to 60 times higher than atmospheric helium) and have Pb and Nd isotopic ratios consistent with primordial ages (4.45 – 4.55 Gy). They also contain water, and the deuterium levels of the water indicate that the water almost certainly had to be primordial, from the accretion disk itself and not from chondrites. You can see why I am happy.

 
Posted by Ian Miller on Nov 16, 2015 10:12 PM GMT
In the latest "Chemistry World" there is an article arguing there is a controversy relating to the nature of the bonding in molecules such as the perchlorate anion, which now appears to be describable as having a chlorine atom with a positive charge of three and four oxygen atoms with a charge of minus one each. The bonding is therefore four equal single bonds. Presumably sulphate has the same issues, and according to Wikipedia, computational chemists put a charge of 2.45 on the sulfur atom. Crystal structures apparently indicate the four bonds are equal. Why go to these extremes? The problem is that chlorine has seven outer electrons, but six of them are usually regarded as residing in three pairs, and hence should be inert. Accordingly, chlorine has a valence of 1. Many chlorine compounds comply, but perchloric acid can be regarded as the adduct of water on Cl2O7, i.e. all the outer electrons are involved. In principle, upon electron pairing, that gives 14 electrons in the outer valence shell. How can that be? The sulphur in sulphate has six outer electrons, four of which are paired. To get the required valence of six, again all electrons have to be unpaired, if electron pairing is relevant.
 
The traditional method was to invoke 3d orbitals. These are empty, so they may be available for hybridization, BUT, according to the article, "quantum chemists have shown that it is energetically unfeasible to use d orbitals for extra bonds". It was asserted that this undermines a quantum mechanical account of Lewis bonding. My immediate problem with this assertion is: how do we know? The 3d orbital energies are obviously higher than 3p for chlorine, but how much higher, and does the energy difference remain if the orbitals are used for bonding? I am not arguing the statement is wrong, merely that I would like to know why everyone thinks it is right. The output of computations is insufficient, because computations, according to Pople's Nobel lecture, are heavily dependent on validation, and we are a little short of the data required to validate this statement. We can go further. The 2p orbitals are clearly at a higher energy than the 2s orbitals when we excite to them, yet boron almost never forms a monovalent B – X molecule, other than in highly energetic experiments; not only does it use all three electrons, but it tries to go further and achieve a tetrahedral configuration. So, if boron can do this, why cannot sulphur do it with 3d orbitals?
 
The article suggests that the answer might come from putting a large negative charge on the oxygen atoms and a strong positive charge on the chlorine. The perchlorate anion is then an anion with nearly a full negative charge on each of the four oxygen atoms and nearly three positive charges on the chlorine atom. The question then is, why does this positive charge not attract, and polarize towards itself, the negative charge? If it does, we are back to the original problem.
 
What we need are data, and there are some. Consider only sulphate. We can form stable esters, such as dimethyl sulphate. If we do, the structure is consistent with two S = O and two S – O bonds. The S – O bond length is 156.7 pm, the S = O bond length 141.7 pm (J. Mol. Struct. 73, 99 – 104), while the infrared spectrum (Spectrochim. Acta 28A, 1889 – 1898) gives symmetric and asymmetric stretches for two pairs: the double bonds at 1389 and 1199 cm-1, the single bonds at 829 and 757 cm-1. The infrared spectra of sulphates as a whole typically have medium to strong signals around 645 cm-1 and very strong signals at 1110 cm-1, yet the S – O bonds in the anions all have the same length, so what does that mean? Obviously, even this common molecule still needs further work. I don't know the answer, but I would very much prefer it if the theoreticians would publish the reasons, and the assumptions used, when they publish a statement saying the central atom has an extremely high positive charge. Their model might work for the sulphate anion, but it does not appear to work for dimethyl sulphate, so the problem of how to explain hypervalency remains.
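One crude way to put a number on that question: interpolate a bond order between the two dimethyl sulphate lengths. A back-of-envelope sketch; the linear length/order relation is my assumption, and the anion S – O length used (about 149 pm) is a typical literature figure, not taken from the papers cited above:

```python
# Back-of-envelope: S - O bond order by linear interpolation between the
# single (156.7 pm) and double (141.7 pm) bond lengths of dimethyl
# sulphate. Linearity is an assumption; the anion length is a typical
# literature value, inserted for illustration only.
D_SINGLE, D_DOUBLE = 156.7, 141.7
d_anion = 149.0  # assumed S - O length in the sulphate anion (pm)

order = 1.0 + (D_SINGLE - d_anion) / (D_SINGLE - D_DOUBLE)
print(f"implied S-O bond order in the anion: ~{order:.2f}")  # ~1.5
```

On that crude reading, the four equal anion bonds come out at about order 1.5 each, the resonance picture rather than four pure single bonds, which only sharpens the question of what the high positive charge on sulphur is supposed to mean.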

 
Posted by Ian Miller on Oct 26, 2015 1:53 AM GMT
One interesting paper from the not too distant past involved the reduction of carbon dioxide to either methanol or methane (J. Am. Chem. Soc., 2015, 137, 5332) using lithium o-phenylbisborate as a catalyst. What the catalyst is claimed to have done is to bend the CO2 molecule (highly plausible) and thus form an aromatic ring. It is this last part that I find hard to stomach, because it brings us back to the question, what causes "aromaticity"? Now, I should issue a warning here: I have published what I think causes aromaticity, so I am not exactly unbiased.
 
So, where is my problem? The authors seem to have argued that a six-membered ring is formed (correct) and there will be 6 π electrons in it, therefore the system will show aromaticity. I suppose if you construct molecular orbitals and then place the electrons in them, there is a case for this. However, my argument about aromaticity is that there have to be 2n + 1 double bonds that alternate with single bonds, which is not quite the same thing. The reason for aromaticity in this case lies in the phase of the waves. As with thinking about the Woodward-Hoffmann rules, run the phases around the ring, then keep going. What you find is that with aromaticity the second round cancels the first, which cancels the double bond amplitude, and since the charge has to go somewhere, it goes to the single bonds (the other major canonical structure). But that has the same problem, and as such, a classical structure like cyclohexatriene cannot persist. Cyclobutadiene, however, finds that the second cycle reinforces the displacement of the first cycle, and so it is locked into the classical structure. Now, the reason I find this reduction paper of interest is that in principle it offers an alternative test. My model predicts no aromaticity, because the double bonds in carbon dioxide are orthogonal: the double bond orbitals cannot overlap with each other, and therefore cannot form an extended wave with one polarization.
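For anyone trying to follow the "second round" argument, here is a toy rendering of it: treat each double bond as advancing the phase by π, run around the ring twice, and ask whether the second circuit reinforces or cancels the first. This is purely a sketch of my own model, under that phase-per-double-bond assumption:

```python
import math

def second_circuit(n_double_bonds):
    """Toy phase argument: each double bond is taken to advance the phase
    by pi. After one circuit the amplitude carries a factor cos(phase):
    negative means the second circuit cancels the first (the classical
    structure self-cancels, i.e. aromaticity); positive means it
    reinforces (a locked classical structure)."""
    phase = n_double_bonds * math.pi
    return "cancels" if math.cos(phase) < 0 else "reinforces"

print("benzene (3 double bonds):", second_circuit(3))         # cancels
print("cyclobutadiene (2 double bonds):", second_circuit(2))  # reinforces
```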
 
Does it matter? I think so. I think it is important that chemists try to understand what is going on. Oddly enough, when I started my career in physical organic chemistry, by and large chemists thought they understood tolerably well most of the reactions of which they were aware. Now there are so many additional reactions, but I am far from convinced the understanding has increased.
 
One final point. The paper ends with a statement that "further studies" are required to adapt the transformation for "practical applications". Methanol and methane will not be among them. This catalyst merely bends the CO2; the actual reductant is either triethylsilane or pinacolborane, and these are rather more expensive and harder to get than methane and methanol. That hardly seems likely to be "useful", at least from what was demonstrated in this paper.
Posted by Ian Miller on Sep 28, 2015 5:06 AM BST
My last post related to peer review and listed some of the problems with it. The question then arises, why do we want it? I think here that the answer depends on the nature of the paper.
 
Think of the paper that posts data, and as an example, data on a new molecule. It is highly desirable that these data are valid, because while in principle any scientific report should be reproducible, in practice do we want to reproduce everything? Something like 90 million molecules have been reported, many of which took a great effort to make. Obviously, it is highly desirable that each molecule is reported accurately, and that enough is reported about it that the work does not have to be repeated. Peer review gives an assessment that adequate methods were used and that all reasonable data were collected. Furthermore, I know from experience of having done some reviewing that some scientists get so absorbed in their work that they do not realize the average reader may not be able to unravel what they have done the way they have written it. So, yes, peer review that sends the paper back for revision should improve the paper.
 
However, the problem for me starts when a referee rejects a paper "because it is not very interesting". What that usually means is that it did not interest him. One example from my past: I wrote a paper (with one other co-author) on the 13C NMR shifts of acetylated methylated agars. Now, this may not seem very exciting, but as most chemists who use 13C NMR know, substitution changes the chemical shift of nearby atoms. What I showed, using a range of seaweed polysaccharides, was that because the structures of the sugar units are reasonably rigid, and because the linking oxygen atoms largely insulate one unit from the effects on the other (except sometimes immediately about the linking sites), the shifts due to substitution are regular, and you can use such shifts to determine substitution patterns, especially if a number of different operations are carried out varying the substitution on the "mobile" sites. (A mobile site is something like a sulphate ester, which can be removed, or a hydroxyl, which can be substituted with something like a methyl group, or an ester.)
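To make the scheme concrete, here is a minimal sketch of how such substitution increments get used. The numbers are rounded from the figures discussed in the next paragraph (methyl ether: about 10 ppm at the α carbon, about 2 ppm of opposite sign at β; sulphate ester roughly two-thirds of those; acetyl: small at α, about 4 ppm at β); the signs and the base shift are illustrative only:

```python
# Sketch: additive 13C shift increments (ppm) for substitution at a sugar
# hydroxyl. Values rounded from the text; signs and base shift are
# illustrative. Real work needs increments fitted to model compounds.
INCREMENTS = {
    "OMe":     {"alpha": +10.0, "beta": -2.0},
    "sulfate": {"alpha":  +6.7, "beta": -1.3},  # ~2/3 of the OMe values
    "acetyl":  {"alpha":  +1.0, "beta": -4.0},  # small at alpha, big at beta
}

def predicted_shift(base_shift, substituent, position):
    """Shift of a carbon after substitution: the shift in the
    unsubstituted unit plus the increment for that substituent at the
    alpha or beta position."""
    return base_shift + INCREMENTS[substituent][position]

# e.g. a ring carbon at 70.0 ppm, beta to a newly introduced acetyl group:
print(predicted_shift(70.0, "acetyl", "beta"))  # ~66 ppm
```

Comparing a set of such predictions against the observed spectrum, over several chemical operations on the mobile sites, is what pins down the substitution pattern.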
 
Now, what causes a change of chemical shift? I think most chemists would answer that in terms of electron induction effects, wherein a substituent that is a strong electron withdrawer pulls electrons closer to the carbon atom to which it is attached, and the effect is attenuated so that two carbon atoms away (the γ site) there is only a tiny effect. Thus forming a methyl ether changes the chemical shift of the α carbon by about 10 ppm and the β carbon by about 2 ppm, usually of opposite sign, while a sulphate ester gives a similar pattern but usually about two-thirds the change in shift. (Note: the change of sign makes electron movement hard to swallow!) Now, what was significant about the acetylations was that the acetyl group makes a relatively small change in shift at the α carbon and a significantly bigger shift at the β carbon (about 4 ppm). Why? My argument is that the change in chemical shift has nothing to do with electron induction at all, but rather with the magnetization field induced by the applied field. The magnetic potential is a through-space effect, not a through-bond effect, and since the magnetic potential is a vector, its orientation is also important. I argued that the reason the acetyl group makes such a big change to the β carbon shift is that the acetyl group rotates about the linkage position, and its distance to the β carbon is actually quite small. Is that interesting? A means of determining substitution patterns on some polysaccharides, and evidence for the mechanism of chemical shifts? I thought so, but I seem to be in a minority. Now, would it hurt to publish it, given the electronic nature of publishing? Yes, one option would be to submit to another journal, but here I really could not be bothered. Remember, the number of publications has been irrelevant to my career; I have literally been publishing to be helpful, but when someone says they are not interested, then I also lose interest.
 
My question is, is this the way science should operate? In these electronic days, I believe there should be only two reasons to reject a paper: (a) it is wrong, and the referee should be able to show where, and (b) it adds nothing. By all means send back for clarification, but rejection should be an absolutely last resort. What do you think?
Posted by Ian Miller on Aug 31, 2015 2:58 AM BST