Is science sometimes in danger of getting tunnel vision? Recently published ebook author, Ian Miller, looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


In the recent Chemistry World, we read the heading "Technetium carbide refuted; proof that the compound cannot exist after all". The article goes on to report that a team of computational chemists carried out calculations showing that the carbide cannot exist, and that what experiment had actually found was a new phase of the metal.
 
Sorry, but that is just plain wrong. Not the calculations, necessarily, which may or may not be correct. The point is, you cannot prove anything by theory. One of the most successful theories of all time, in my opinion, was Newtonian mechanics, yet when it was used to calculate the orbit of Uranus, observation failed to match the calculations. Either the theory was wrong or it was not, but whatever the case, nobody argued that the theory was right and observation wrong. The only way out was to postulate a new planet, and so Neptune was discovered. That was a triumph for theory: it put observable facts into the theory to predict something new, and there it was. However, when Newtonian mechanics was used to calculate the orbit of Mercury, observation again failed to match the calculations. Once more, either the theory was wrong or it was not, but nobody argued that the theory was right and observation wrong, and worse, no new planet could fix this problem. In the end, it was found that Newtonian mechanics is merely a good approximation to Einsteinian relativity.
 
As far as I am concerned, I have no idea whether technetium carbide exists. I know manganese carbide exists, and I know it is not especially resistant to certain reagents, so it is easy to make it and then lose it. Because of technetium's nuclear instability, I doubt anyone really worries too much about technetium carbide, since it is extremely unlikely to be of significant use, but that does not matter. Further, the failure to make something in a synthesis does not prove the compound does not exist, but merely that that was not the way to make it. There are an awful lot of unstable compounds that can be made, if you know how to go about it. As an aside, from my experience with manganese carbide, you may be better off not starting with the metal, as then that new metallic phase is far less likely to form.
 
As for the calculations, the best theory can do is make predictions. Only Nature can tell us whether they are correct. For me, calculations help us know we understand nature, but you cannot use calculations to prove something. All you can say is, if my theory is correct, then this is what you should expect.
Posted by Ian Miller on May 23, 2016 5:08 AM BST
By now it is apparent that either chemists are not reading these posts, or they are not interested in evidence suggesting bond strengths are additive, or why, sometimes, they are not.
I must change my approach. In the most recent Chemistry World there is a comment on climate change. What we see is that two proposals have been made to reduce carbon dioxide levels. One is to introduce crushed silicates into soils. Of course, they have to be the right silicates. One that has been proposed is peridotite. Most certainly, the earth is not short of this; it makes up most of the mantle, but of course the mantle is somewhat difficult to access, and we would have to deal with outcrops that have reached the surface.
The problem with this proposal is the rate of reaction. Some rocks weather tolerably quickly, but overall the process is slow. It can be accelerated by at least a million times by injecting carbon dioxide into a suitable fractured rock layer, but that requires a lot of energy. This sort of proposal depends on sufficient quantities of the right silicates being available, and on the processing not generating, either directly or indirectly, more carbon dioxide than it removes. One problem is the source of the rock: if you can find it on the surface, obviously it is not reacting very quickly.
A more straightforward method suggested was to greatly increase afforestation. One point noted briefly in the article was that such forests might generate unintended consequences. Does not the logic of this comment grate a little?
First, huge amounts of forest have already been cut down. Allowing them to regrow would merely return the system to where it was before. A particularly good area to let re-develop would be the tropical rain forests. Huge areas of Brazil have had their forests removed, and the land is not that useful for anything else, so it tends to lie barren or be eroded. Replanting the forest, or simply stopping cutting it down and letting it regrow and spread would be a start.
One scheme that I think is worth further consideration is ocean fertilization, to let algae grow. There are two forms of algae: microalgae and macroalgae. Microalgae grow readily with modest fertilization, usually with iron-containing materials, because ocean waters away from the coast are remarkably deficient in certain cations. This proposal has been examined, and rejected because it was argued that only a minor part of the algae sank to the depths and thus would be taken out of circulation. That, to my mind, is ridiculous. What happened to the rest? Some, at least, would be eaten by fish, and if we helped the fish population regenerate, would that be such a bad idea? Similarly, in the 1970s the US Navy showed that macroalgae could be grown in deep water on rafts, fertilized by sucking up water from the depths using wave power. The experiment ran into trouble during a major storm, and the consequent drop in oil prices killed it, but it might still be worthwhile. Some algae are the fastest-growing plants on the planet, and as I have argued in my ebook "Biofuels", it is reasonably straightforward to make biofuel from them, which would replace fossil oil.
But for me, the biggest problem with the logic of "unintended consequences" is we are going to see some really major unintended consequences. There is a possibility of a sea level rise of up to 60 meters, as a consequence of our fossil fuel consumption. London sits between 5 and 30 meters above sea level. Is not drowning London an adverse consequence? Check with Google Earth, and if you live somewhere near the coast, your descendants may not be living where you live.
My view is that the Society should be making as many efforts as it can to persuade various governments to invest more money into geoengineering research, and to coordinate it, because geoengineering is the only approach that can actually reduce the carbon dioxide levels in the atmosphere. What do you think?
Posted by Ian Miller on Apr 18, 2016 12:50 AM BST
In a previous post (http://my.rsc.org/blogs/84/1702) I made the case that the covalent bonds of the Group 1 metals were characteristic of the element, i.e. the energy of any A – B bond was the arithmetic mean of the A – A and the B – B bond energies. I also asked, "What do you think? Are you interested?" So far, no comments. Does this mean that nobody can see a glaring problem, or does it really mean that chemists as a whole have little interest in the nature of the chemical bond?
First, the glaring problem. How can the energies be the arithmetic mean? Thus from de Broglie, we know
pλ = h
We have also established that the covalent radius is characteristic of the atom, which means that λ on the bond axis is constant. We also know that, on average, there are no net forces on the nucleus, otherwise it would accelerate in the direction of the force. (Zero-point motion is superimposed on such an equilibrium distance, but the forces average to zero.) With no net forces, the average wavelength as determined on the other axes should also be constant. You may protest (correctly) that the wave may have only one wavelength, but that is only true if the wave is not separable. For example, one might argue the medium changes on the bond axis due to the change in particle density caused by wave interference.

Thus the constant covalent radius implies a constant wavelength for the valence electron in different molecules. But since the total energy will involve a term (p1 + p2)^2 minus the original energies, and since the square of a sum does not equal the sum of the squares, and since the path length must change between, say, Li2 and LiCs, the bond energies should not be the linear sum of the components if the waves are delocalized over the whole molecule. For a simple two-electron wave function that arises from pairing, no new nodes are placed in the wave function (other than in the antibond or excited states), so the path length must change significantly. To me, this strongly suggests that molecular orbital theory is not soundly based. Yes, it can get the right answers by adjusting the parameters/constants within the calculations, but that does not prove the theory is correct. Instead, there should be some algebraic reason why such additivity arises naturally.

Is there any? The answer to that, in my opinion, rests on the reason why the energy levels are stable anyway. Under Maxwell's electromagnetic theory, an accelerating electron should emit electromagnetic radiation, and this occurs always, except for the stationary states of atoms and molecules. From the Schrödinger equation, such stable states occur only when the action is exactly quantized. If the action about each atom must be quantized for σ-bonded molecules for the molecule to be stable, then we get the additivity of the energy of such simple molecules if the covalent radius is constant. Thus we have a physical reason, independent of calculations, for the observation. The importance of this is that it gives a new relationship to aid calculations, which also shows why the functional group actually occurs. Is such a potentially new physical relationship of sufficient interest to be worth further investigation?
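To make the "square of a sum" point concrete, here is a toy Python calculation, in arbitrary units and with entirely hypothetical momentum values of my own choosing (nothing here is real data):

```python
# Toy illustration, arbitrary units: suppose bond energy scales as the
# square of a characteristic momentum (E ~ p^2). Hypothetical momenta:
p_A, p_B = 3.0, 1.0

E_AA = p_A ** 2          # 9.0, energy of the A - A bond
E_BB = p_B ** 2          # 1.0, energy of the B - B bond

# What strict additivity (the arithmetic mean) predicts for A - B:
additive_mean = (E_AA + E_BB) / 2        # 5.0

# What a delocalized combined momentum term (p_A + p_B) gives instead:
delocalized = ((p_A + p_B) / 2) ** 2     # 4.0

print(additive_mean, delocalized)
```

The two numbers differ, which is the point: if the waves were delocalized over the whole molecule, simple additivity should fail.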
Any comments? Please!
Posted by Ian Miller on Mar 20, 2016 10:54 PM GMT
In the last post, I presented data for the covalent bonds of the A – B compounds of the Group 1 elements that showed, to a reasonable degree, that the atoms each had a characteristic covalent energy, in the same way there is a characteristic covalent radius, and that the bond energy of the A – B bond is the sum of the A and B contributions. This goes against all the standard textbook writings. In an earlier post I stated that I had previously submitted a paper that would lead to a method for readily calculating these bond energies, but the paper was rejected by the editors of some journals on the grounds that either these are not very important molecules, or alternatively (or both) nobody would be interested. This annoyed me at the time, but it seems to me they had a point. These blog posts have received absolutely no comments. Either nobody cares, or nobody is reading the posts. Either way, it is hardly encouraging.
Now, the next point that could have been made is that when we get to more common problems, the bond energies are not additive in that way. Or are they? One problem I see is that the actual data are not really suitable for reaching a conclusion.
Let's consider the P – P bond energy, which is needed for considering the bond additivity of any phosphorus compounds. I made a quick calculation of the P – P bond energy in diphosphine, on the assumption that the P – H bond energy was the same as in phosphine, and I got 242 kJ/mol. If you look up some bond energy tables, you find the energy quoted as 201 kJ/mol. How did they get that? If you work from the heat of atomization of phosphorus, the bond energy comes out at 221 kJ/mol, but if we assume that refers to the P4 form, the molecule has the tetrahedrane structure, which will be strained (although the strain will also stabilize the lone pairs), and of course the standard state is a solid, so in principle energy should be added to get it into the gas phase before atomizing to make the comparison. It is therefore reasonable to assume that the real bond energy is stronger than that calculation indicates.
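The bookkeeping behind my diphosphine estimate can be sketched as follows. Be warned: the two atomization energies below are placeholder figures chosen purely to show the arithmetic (the post does not list the inputs I used), not measured data.

```python
# Placeholder atomization energies in kJ/mol, chosen only to illustrate
# the method; substitute measured values before drawing any conclusions.
atomization_PH3 = 966.0     # PH3 -> P + 3 H (hypothetical figure)
atomization_P2H4 = 1530.0   # P2H4 -> 2 P + 4 H (hypothetical figure)

# Assume the P - H bond energy in diphosphine equals that in phosphine:
E_PH = atomization_PH3 / 3          # 322.0 kJ/mol per P - H bond

# Whatever is left over in diphosphine is assigned to the P - P bond:
E_PP = atomization_P2H4 - 4 * E_PH  # 242.0 kJ/mol

print(E_PH, E_PP)
```

The whole estimate stands or falls on the assumption in the middle line, which is exactly why better atomization data matter.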
The problem is obvious: to make any sense of this, we need more accurate data. We also need the data to involve energies of atomization, rather than relying on the more easily obtained bond dissociation energies. But as far as I can see, the chemical community has given up trying to establish these data. Does it matter? I think it does. For me, a problem with modern chemical theory, which is essentially extremely complicated computation, is that it offers little assistance with the issues that matter to the chemist, because no principles are enunciated, merely results and comments on various computational programs. The principles are needed, even if the calculations are not completely accurate, so that chemists can draw conclusions and use them to formulate new plans of action. How many really think they understand why many synthetic reactions work the way they do? Do we care about this very fundamental component of our discipline? And, for that matter, does anyone care whether I write this blog?
Posted by Ian Miller on Feb 29, 2016 2:14 AM GMT
In my last post, I presented evidence that the covalent radius of a Group 1 metal is constant in the dimeric compounds. I also asked whether anyone was interested. So far, no responses, and I suspect the post received something of a yawn, if that, from some because, after all, everyone "knows" there is a constant covalent radius. There is, of course, a problem. Had I included the hydrides, the relation would not have worked. Ha, you say, but the hydrides are ionic. Well, the constant covalent radius of hydrogen simply does not work for a lot of other compounds either. Try methane, ammonia and water. There are various alternative explanations/reasons, but let us for the moment accept that hydrogen does not comply with this covalent radius proposition.
 
If the covalent radius of an atom is constant, then there should be a characteristic wavelength for each given atom when chemically bound, which in turn suggests from the de Broglie relation that the bonding electrons will provide a constant momentum value to the bond. While that is a little questionable, if true it would mean the bond energy of an A – B molecule is the arithmetic mean of the corresponding A – A and B – B molecules. Now, one can argue over the reasoning behind that, but much better is to examine the data and see what nature wants to tell us.
 
Pauling, in The Nature of the Chemical Bond, stated clearly that that is not correct. However, if we pause for thought, we find the arithmetic mean proposition depends on there being no additional interactions beyond those arising from the bonding electrons forming the covalent bond. Thus atoms with a lone pair would be excluded, because the A – A bonds are too weak, such weakness usually being attributed to lone pair interactions. Think of peroxides. Then, bonds involving hydrogen would be excluded because the covalent radius relationship does not hold. Bonds involving hybridization may produce other problems. This is where the Group 1 metals come into their own: they do not have any additional complicating features. Far from "not being very interesting", as one editor complained to me, I believe they are essential to starting an analysis of covalent bond theory. So, what have we got?
 
The energies of the A – A bonds are somewhat difficult to nail down. Values are published, but often there is more than one value, and the values lie outside their mutual error bars. With that reservation, a selection of energies (in kJ/mol) is as follows: Li2 102.3; Na2 72.04, 73.6; K2 57.3; Rb2 47.8; Cs2 44.8
 
The observed bond energies for the A – B molecules are taken from a review (Fedorov, D. A., Derevianko, A., Varganov, S. A. J. Chem. Phys. 140: 184315 (2014)). Below, the calculated value, based on the average of the A – A molecules, is given first, then in brackets the observed energy, then the difference δ, expressed as what has to be added to the calculated value to get the observed value.
                  Mean     Obs          δ
Li – Na        88.0   (85.0)     -3.0
Li – K         79.8    (73.6)     -6.2
Li – Rb       75.1    (70.9)     -4.2
Li – Cs       73.6     (70.3)    -3.3
Na – K       65.5     (63.1)    -2.4
Na – Rb      60.7    (60.2)    -0.5
Na – Cs      59.2     (59.3)     0.1
K – Rb        52.6    (50.5)    -2.1
K – Cs        51.1    (48.7)    -2.4
Rb – Cs      46.3    (45.9)    -0.4
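For anyone who wants to check the arithmetic, here is a short Python sketch recomputing the means and residuals from the A – A energies listed above (I take the 73.6 value for Na2); the printed residuals can be compared directly with the table:

```python
# A - A bond energies (kJ/mol) from the values listed above (73.6 for Na2).
E_AA = {"Li": 102.3, "Na": 73.6, "K": 57.3, "Rb": 47.8, "Cs": 44.8}

# Observed A - B bond energies (kJ/mol) from Fedorov et al. (2014).
E_obs = {
    ("Li", "Na"): 85.0, ("Li", "K"): 73.6, ("Li", "Rb"): 70.9,
    ("Li", "Cs"): 70.3, ("Na", "K"): 63.1, ("Na", "Rb"): 60.2,
    ("Na", "Cs"): 59.3, ("K", "Rb"): 50.5, ("K", "Cs"): 48.7,
    ("Rb", "Cs"): 45.9,
}

for (a, b), obs in E_obs.items():
    mean = (E_AA[a] + E_AA[b]) / 2        # additivity prediction
    delta = obs - mean                    # what must be added to reach obs
    print(f"{a} - {b}: mean {mean:.1f}, obs {obs:.1f}, delta {delta:+.1f}")
```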
 
The question now is: does this show that the A – B bond energies are the arithmetic means of the A – A and B – B bond energies? As in my last post, there are three options:
(1) The bond energies are the sum of the atomic contributions, and the discrepancies are observational error, including in the A – A molecules.
(2) The bond energies are the sum of the atomic contributions, and the discrepancies are partly observational error, including in the A – A molecules, and partly some very small additional effect.
(3) The bond energies are not the sum of the atomic contributions, and any agreement is accidental.
What do you think? Are you interested?

 
Posted by Ian Miller on Feb 8, 2016 2:03 AM GMT
Time to start the New Year, and wish the RSC a great 175th anniversary. If we think of the last 175 years, chemistry has made serious changes, both to itself and to our lives. Nevertheless, at heart, chemistry is about molecules, and molecules are groups of atoms held together by what we call chemical bonds. The recent Chemistry World has an article that shows how our understanding of bonding has evolved, but I am not convinced it shows how hard this was. Don't forget, as it stood then, the concept of an electron moving around a nucleus violated one of the greatest triumphs of 19th century physics, namely Maxwell's electromagnetic theory. Would you have been prepared to propose something as radical? Or has the quest to discover died out? It is a lot easier to grasp something when everyone tells you it is correct, but what about when nobody knows? I may annoy a number of people, but I believe we still do not properly understand even the simplest of chemical bonds, and I thought one contribution I could make to the year would be to illustrate a problem over a small number of posts. Amongst other things, I hope to show you how hard it is to form new theories.
 
What I am going to do is to focus on the dimeric molecules of the Group 1 metals. These are of interest to me because they are the simplest molecules, with the fewest complicating issues. Or so I thought. Some of what I am going to put in the following posts was submitted to two journals and rejected by both on what I consider spurious grounds. The first said these molecules were not very interesting; the second said nobody would be interested in what I was proposing (which involved a hitherto unrecognized quantum effect). There were no adverse comments about the physics! I wonder: were these editors right? Perhaps nobody is interested? Perhaps the urge to overturn the wrong has gone? If it looks OK, then do not disturb! Welcome back, Claudius Ptolemy! Of course, just because nobody is arguing, that may be because it is correct. Maxwell's electromagnetic theory is correct, right? Apart from that pesky issue about electrons around atoms, that is. But you know why that is, don't you? Do you really? Maybe in detail things are not quite what you think. Care to try out some problems?
 
So here goes thought number one. Atoms have a characteristic covalent radius, so the bond distance of an A – B molecule is the arithmetic mean of the A – A and B – B bond distances. Do you agree with that statement?
 
Let's test it. The literature contains the necessary A – A bond distances, although of varying degrees of accuracy. However, a review of the bond properties of the A – B molecules of the Group 1 metals has recently been published (Fedorov, D. A., Derevianko, A., Varganov, S. A. J. Chem. Phys. 140: 184315 (2014)). So, let's check. First, the covalent radii, i.e. half the A – A bond distances. The following values are from various literature sources, in pm:
Li2  133.6;  Na2 153.9;  K2 193.5;  Rb2 210.5; Cs2 230
Now, let us look at the A – B molecules. In the following, the column labelled "Mean" is the arithmetic mean of the relevant A – A molecules, the column labelled "Obs" is the measured value from Fedorov et al., and δ is what must be added to the calculated value to get the observed value.
 
                   Mean       Obs          δ   
Li – Na       287.5    (288.9)       1.4
Li – K         327.1    (332.3)       5.2
Li – Rb       344.1    (346.6)       2.5
Li – Cs       363.6     (366.8)      3.2
Na – K       347.4     (349.9)      2.5
Na – Rb     364.4     (364.3)     -0.1
Na – Cs      383.9     (385.0)      1.1
K – Rb       404       (406.9)       2.9
K – Cs       423.5     (428.4)       5.9
Rb – Cs     440.5     (437.1)      -3.4
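Here is a short Python sketch recomputing the predictions, treating the figures listed above as covalent radii (half the A – A bond distance), so that the predicted A – B distance is simply the sum of the two radii:

```python
# Covalent radii in pm (half the A - A bond distances listed above).
r = {"Li": 133.6, "Na": 153.9, "K": 193.5, "Rb": 210.5, "Cs": 230.0}

# Observed A - B bond distances (pm) from Fedorov et al. (2014).
d_obs = {
    ("Li", "Na"): 288.9, ("Li", "K"): 332.3, ("Li", "Rb"): 346.6,
    ("Li", "Cs"): 366.8, ("Na", "K"): 349.9, ("Na", "Rb"): 364.3,
    ("Na", "Cs"): 385.0, ("K", "Rb"): 406.9, ("K", "Cs"): 428.4,
    ("Rb", "Cs"): 437.1,
}

for (a, b), obs in d_obs.items():
    calc = r[a] + r[b]                    # sum of the two covalent radii
    print(f"{a} - {b}: calc {calc:.1f}, obs {obs:.1f}, delta {obs - calc:+.1f}")
```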
 
What do you make of that? There are three options:
(1) The bond distance is the sum of the covalent radii, and the discrepancies are observational error, including in the A – A molecules.
(2) The bond distance is the sum of the covalent radii, and the discrepancies are partly observational error, including in the A – A molecules, and partly some very small additional effect.
(3) The bond distance is not the sum of the covalent radii, as shown by the lack of agreement.
What do you opt for? Can you discern any trends? This probably seems fairly obvious to you, but soon it will be less so. The question is, is anyone interested? Were the journal editors right in that nobody cares about the nature of the chemical bond? Will anyone respond?

 
Posted by Ian Miller on Jan 24, 2016 9:13 PM GMT
The November Chemistry World had an article on homochirality, with the question, "How did it evolve?" Clearly a problem, because the article did not really offer a solution. The problem is, the biogenetic chemicals should have been formed in both D and L forms equally. So why do we have D sugars and L amino acids? First, as the article points out, for all we know there are as many worlds throughout the Universe that have made this choice as have chosen the other option. There is no reason to believe that D sugars are somehow superior, and certain red algae have polysaccharides based on alternating D and L galactose, so there is nothing that prevents the opposite form. So, how did homochirality evolve? The article offers a good survey of the guesses as to how an initial preference would feed on itself, but the problem then is: why was there an initial preference? In most cases, any means of obtaining a preference would appear to be too small to make any significant difference.
 
In my ebook, Planetary Formation and Biogenesis, I suggest there are two better questions. The first is, why did homochirality evolve? The second, and more important, is, why choose ribose, and having done that, why the furanose form? I think the answer to the last one is important. It is possible to make duplexes out of a number of pyranose pentoses, including ribose, and all of them have a slightly stronger association energy than the ribofuranose. My suggestion is that the furanose form does something the pyranose form does not do, in which case the reason for choosing ribose is clear, even though ribose is one of the least likely sugars to be formed from a synthesis that would offer a mixture: it alone has a reasonable amount of furanose form in solution. So the question then is, why prefer furanose?
 
The first step towards RNA in biogenesis is to join a purine or pyrimidine to ribose. This is a simple condensation reaction, but it does not work very well for purines, and not at all for pyrimidines, in aqueous solution. The condensation reaction is thermal, so there has to be a means of heating the reactants more strongly, or alternatively of providing more vibrational energy at the reactive site. The formation of the phosphate ester at C-5 is also a condensation reaction. We know that both reactions go photochemically for adenine, ribose and phosphate, but direct photochemistry is unlikely because adenine only absorbs photons at about 250 nm or shorter, so I suggested there could be a different mechanism: absorption of visible light by something like a porphyrin, followed by thermal energy transfer. If so, the reason for the furanose is that it is the only form that will get to a phosphate ester, because it alone is flexible enough to transfer the vibrational energy to C-5.
 
If so, then the origin of homochirality is reasonably obvious. The RNA form condenses photochemically, until the RNA polymers get long enough to act as ribozymes. Once they do that, they can depolymerize as well as catalyse polymerization. For a while, anything might be formed, but once a homochiral polymer strand is formed, it can form a helix that will act as a template for a double helix. Once it does that, if the duplex separates, we have two templates. It needs the duplex to reproduce, and the duplex will not form if the strands have mixed chirality. Once reproduction starts, whatever structure was selected will predominate. If you need homochirality to reproduce, and if, once you get reproduction that form will predominate, then surely homochirality is inevitable.
 
This will be my last post here for 2015, so may I wish readers a very merry Christmas, and a successful 2016.
Posted by Ian Miller on Dec 13, 2015 10:36 PM GMT
On the international scene, it is often difficult for nations to make decisions when more than one of them is involved, but occasionally an issue comes up where it is difficult to even know how to make the decision. Climate change is one of those issues. Leaving aside some recidivists, the mechanism of greenhouse forcing is now reasonably clearly known, and accepted by the scientific community, and, judging by the recent marches, by a reasonable fraction of the public. Less well accepted is what is essentially hysteresis, which means that what happens depends on what has happened before. Almost certainly, we are not currently in a climatic equilibrium (if we ever were). Another point that many seem to have trouble with is that if there is a net heating, or positive power input, it does not follow that temperatures will increase at selected points. The obvious example is that heat going into the polar regions and melting ice does not raise the temperature. But even more significant, if some areas are getting hotter, and the poles stay the same, we have a greater temperature difference, which permits a stronger heat engine (storms) to develop. Stronger cold winds flowing from the poles will cool some regions, even if, overall, the planet is heating.
 
Our current problem is that, with 400 ppm of CO2 in the atmosphere, the additional heat in the oceans is transferring warmer water to the ice sheets, thus melting glacial ice in Greenland and the Antarctic. Suppose we stopped burning fossil fuels tomorrow: the rate of melting would continue unabated for quite some time, first because the additional heat in the oceans at the equator still needs time to reach the ice. Further, the oceans will continue to absorb heat because the atmosphere will continue to have its 400 ppm of CO2, together with other gases such as CH4, N2O, and a number of industrially made gases. If the ice sheets melt, there will be a serious rise in sea levels. Countries like Bangladesh will lose half their land, and some Pacific islands will be uninhabitable. So, what should we do?
 
The current political thinking seems to be: nothing, besides reducing CO2 emissions. However, reducing emissions merely slows the development of the problem; it does not reduce it, because of what is already there. Worse, India has announced it will build a lot of new coal-fired power stations, on the basis that it should have its turn to burn coal. There is an even worse problem: the acidification of seawater due to the CO2 it has absorbed is bringing it close to the level where aragonite does not precipitate out. A very large number of shellfish, at least in their juvenile stages, depend on aragonite to make their protective shells. Accordingly, we have two problems: how to stop global warming, and how to stop ocean acidification. Each can be addressed by geoengineering, although ocean acidification has the fewer options.
 

There was an article in Science (vol 347, p1293) that raised the question: what would happen if some country decided to burn a lot of sulphur, which would help form clouds and reduce the albedo? The reason the country might decide to do this could be that it had had a series of bad harvests, and it blamed climate change. The problem, of course, is that some other country might then have its harvests fail (and in this case, ocean acidification would hardly improve). The deeper problem is that anyone who does something will hardly know what their actions will do elsewhere, and even if they can guess, who is responsible for what happens? What is needed is more information, but how do we get that information? How do you carry out an experiment that will provide data on a global scale without the possibility of influencing the globe? And who will support the experiment? Who will regulate what is done, and on what basis? One unfortunate aspect is that politicians will put themselves in the deciding role; they will not understand the problem, and they will act solely in the interests of their own country. Not an attractive prospect for our grandchildren.
Posted by Ian Miller on Nov 29, 2015 9:23 PM GMT
One thing that brings joy to someone who engages in theoretical work is to find observational evidence that supports a theory that contradicted the "standard theory" everyone accepted when the theory was presented, which in this case was in 2011, in my ebook Planetary Formation and Biogenesis. The fact that nobody else takes any notice is irrelevant; the feeling that your theory alone actually meets the conditions imposed by nature is great.
 
The relevant part involves the formation of the rocky planets. The standard theory is that these formed from the collision of planetesimals (bodies up to 50 km in size, which were formed by some totally unknown process), with the volatiles coming from a subsequent bombardment of carbonaceous chondrites, or something like them. The review I gave of this process (the ebook has over 600 references) shows a number of reasons why this should be wrong, mainly that a whole lot of other things should have accompanied the water and clearly did not arrive in the right ratios, but the theory was held onto because it was perceived that there was no alternative. When the rocky planets accreted, it was too hot for water to accrete at those pressures by any reasonable physical process.
 
My answer was that Earth formed by chemical processes. Very specifically, in the early stages of the accretion disk, there were temperatures at which calcium aluminosilicates could phase-separate out of melt-fused rocks. When the disk cooled, collisions made dust, the dust adhered to rock and collected water vapour from the nebula, and this set the cements into what were effectively concretes. These were strong enough to survive the milder collisions, and they would rapidly accrete small material, effectively growing more by monarchic growth than by the usually assumed oligarchic growth. Accordingly, the water that set the cements would be primordial, and this would be the source of Earth's water.
 
The good feelings I am sharing come from a recent paper by Hallis et al. (Science 350: 795 – 797) that reports the deuterium/hydrogen ratios in some primordial rock samples originating in the deep mantle. These lavas, found in Baffin Island and Iceland, have 3He/4He ratios similar to primordial gas (and up to 60 times higher than atmospheric helium) and have Pb and Nd isotopic ratios consistent with primordial ages (4.45 – 4.55 Gy). They also contain water, and the deuterium levels of the water indicate that the water almost certainly had to be primordial, from the accretion disk itself and not from chondrites. You can see why I am happy.

 
Posted by Ian Miller on Nov 16, 2015 10:12 PM GMT
In the latest "Chemistry World" there is an article arguing that there is a controversy over the nature of the bonding in molecules such as the perchlorate anion, which now appears to be describable as a chlorine atom bearing a charge of plus three and four oxygen atoms bearing a charge of minus one each, the bonding therefore being four equal single bonds. Presumably sulphate has the same issues; according to Wikipedia, computational chemists put a charge of 2.45 on the sulphur atom, and crystal structures apparently indicate the four bonds are equal. Why go to these extremes? The problem is that chlorine has seven outer electrons, but six of them are usually regarded as residing in three pairs, and hence should be inert. Accordingly, chlorine should have a valence of 1, and many chlorine compounds comply, but perchloric acid can by definition be considered the adduct of water on Cl2O7, i.e. all seven outer electrons are involved in bonding. In principle, upon electron pairing, that gives 14 electrons in the outer valence shell. How can that be? The sulphur in sulphate has six outer electrons, four of which are paired. To get the required valence of six, again all electrons have to be unpaired if electron pairing is relevant.
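The charges quoted above are just Lewis formal-charge bookkeeping, and the two rival pictures of perchlorate can be checked with it directly. A minimal sketch (the electron counts below are those of the two textbook Lewis structures, not a claim about the true electron distribution):

```python
def formal_charge(valence_electrons, lone_pair_electrons, bonding_electrons):
    """Standard Lewis formal charge: V - N - B/2."""
    return valence_electrons - lone_pair_electrons - bonding_electrons // 2

# Perchlorate, ClO4-, drawn with four equal Cl-O single bonds:
# Cl: 7 valence electrons, no lone pairs, 4 single bonds (8 bonding electrons)
cl_single = formal_charge(7, 0, 8)    # +3
# each O: 6 valence electrons, 3 lone pairs (6 electrons), 1 single bond
o_single = formal_charge(6, 6, 2)     # -1

# The older hypervalent picture: three Cl=O double bonds plus one Cl-O single bond.
# Cl: no lone pairs, 3 double bonds + 1 single bond = 14 bonding electrons
cl_hyper = formal_charge(7, 0, 14)    # 0
o_double = formal_charge(6, 4, 4)     # 0 on each doubly bonded O

print(cl_single, o_single)   # 3 -1
print(cl_hyper, o_double)    # 0 0
# Both pictures sum to the anion's net charge of -1:
# single-bond model: +3 + 4*(-1) = -1
# hypervalent model: 0 + 3*(0) + (-1 on the single-bonded O) = -1
```

Both structures are consistent with the overall charge; the controversy is over which set of formal charges better reflects the computed electron density.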
 
The traditional method was to invoke the 3d orbitals. These are empty, so they may be available for hybridization, BUT, according to the article, "quantum chemists have shown that it is energetically unfeasible to use d orbitals for extra bonds". It was asserted that this undermines a quantum mechanical account of Lewis bonding. My immediate problem with this assertion is, how do we know? The 3d orbital energies are obviously higher than the 3p for chlorine, but how much higher, and does the energy difference remain if the orbitals are used for bonding? I am not arguing the statement is wrong, merely that I would like to know why everyone thinks it is right. The output of computations is insufficient, because computations, according to Pople's Nobel lecture, are heavily dependent on validation, and we are a little short of what is required to validate this statement. We can go further. The 2p orbitals are clearly at a higher energy than the 2s orbitals when we excite to them, yet boron almost never forms a monovalent B – X molecule, other than in highly energetic experiments; not only does it use all three electrons, but it strives for a tetrahedral configuration. So if boron can promote electrons to 2p, why cannot sulphur do the same with 3d orbitals?
 
The article suggests that the answer might come from putting a large negative charge on the oxygen atoms and a strong positive charge on the chlorine. The perchlorate anion would then have nearly a full negative charge on each of the four oxygen atoms and nearly three positive charges on the chlorine atom. The question then is, why does this positive charge not attract and polarize the negative charge towards itself? If it does, we are back to the original problem.
 
What we need are data, and there are some. Consider only sulphate. We can form stable esters, such as dimethyl sulphate, and if we do, the structure is consistent with two S=O and two S – O bonds: the S – O bond length is 156.7 pm and the S = O bond length 141.7 pm (J. Mol. Struct. 73, 99 – 104), while the infrared spectrum (Spectrochim. Acta 28A, 1889 – 1898) gives symmetric and asymmetric stretches for two pairs of bonds: the double bonds at 1389 and 1199 cm-1, the single bonds at 829 and 757 cm-1. The infrared spectra of sulphates as a whole typically have medium to strong signals around 645 cm-1 and very strong signals at 1110 cm-1, yet the S – O bonds in the anions all have the same length, so what does that mean? Obviously, even this common molecule still needs further work. I don't know the answer, but I would very much prefer it if the theoreticians would publish the reasons, and the assumptions used, when they publish a statement saying the central atom carries an extremely high positive charge. Their model might work for the sulphate anion, but it does not appear to work for dimethyl sulphate, so the problem of how to explain hypervalency remains.
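One quick arithmetic check on the ester bond lengths quoted above is worth doing. If the four equivalent bonds in the free sulphate anion were resonance averages of two single and two double bonds, their length should sit near the mean of the ester's two values. (The ~149 pm figure for the anion below is a commonly quoted literature value, not taken from this post.)

```python
# Bond lengths in dimethyl sulphate, in pm (J. Mol. Struct. 73, 99-104)
s_o_single = 156.7   # S-O (ester single bond)
s_o_double = 141.7   # S=O (double bond)

# Mean length expected for a resonance-averaged bond of order ~1.5
mean_length = (s_o_single + s_o_double) / 2
print(f"mean of S-O and S=O: {mean_length:.1f} pm")  # 149.2 pm
# This is close to the ~149 pm usually quoted for S-O in the sulphate
# anion, i.e. the equal bond lengths are at least as consistent with
# resonance-averaged double-bond character as with four pure single bonds.
```

The point is only that the crystallographic equivalence of the four bonds does not by itself distinguish the two bonding pictures.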

 
Posted by Ian Miller on Oct 26, 2015 1:53 AM GMT