Is science sometimes in danger of getting tunnel vision? Recently published ebook author, Ian Miller, looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


One issue that has puzzled me is what role, if any, theory plays in modern chemistry, other than keeping a number of people writing papers. Of course, some people are carrying out computations, but does any of their work influence other chemists in any way? Are they busy talking to themselves? The reason this struck me is that the latest "Chemistry World" has an article "Do hydrogen bonds have covalent character?" Immediately below is the explanation, "Scientists wrangle over disagreement between charge transfer measurements." My immediate reaction was, what exactly is meant by "covalent character" and "charge transfer"? I know what I think a covalent bond is: a bond formed by two electrons from two atoms pairing to form a wave function component with half the periodic time of the waves on the original free atoms. I also accept the dative covalent bond, such as that in the BH3NH3 molecule, where two electrons come from the same atom, and where the resultant bond has a strength and length as if the two electrons originated from separate atoms. That is clearly not what is meant for the hydrogen bond, but the saviour is that word "character". What does that imply?
What puzzles me here is that on reading the article, there are no charge transfer measurements. What we have, instead, are various calculations based on models, and the argument is whether the model involves transfer of electrons. However, as far as I can make out, there is no observational evidence at all. In the BH3NH3 molecule, obviously the two electrons for the bond start from the nitrogen atom, but the resultant dipole moment does not indicate that a whole electron is transferred, although we could describe it as transferred and then partly donated back to form the bond. In that molecule we have a dipole moment of over 6 Debye units. What is the change of dipole moment on forming the hydrogen bond? If we want to argue for charge transfer, we should at least know that.
From my point of view, the hydrogen bond is essentially very weak, and is at least an order of magnitude less strong than similar covalent bonds. This would suggest that if there were charge transfer, it is relatively minor. Why would such a small effect not be simply due to polarization? With the molecule BH3NH3 it is generally accepted that the lone pair on the ammonia enters the orbital structure of the boron system, with both being tetrahedral in structure, more or less. The dipole moment is about 6 Debye units, which does not correspond to one electron fully transferring to the boron system. There is clear charge transfer and the bond is effectively covalent.
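The point about the dipole moment can be made with simple point-charge arithmetic. The sketch below is illustrative only: the B – N bond length used (about 1.66 Å) is an assumed round figure, not a value from the article, and a point-charge model is of course a crude picture of a real bond.

```python
# Rough point-charge estimate: what fraction of an electron transferred
# across the B-N bond would reproduce the ~6 D dipole of BH3NH3?
# The bond length is an assumed illustrative value, not a measured one.
E_ANGSTROM_TO_DEBYE = 4.803  # one elementary charge over 1 angstrom, in debye

bond_length = 1.66       # angstrom, assumed B-N dative bond length
observed_dipole = 6.0    # debye, as quoted in the post

full_transfer_dipole = E_ANGSTROM_TO_DEBYE * bond_length
fraction_transferred = observed_dipole / full_transfer_dipole

print(f"Full one-electron transfer would give {full_transfer_dipole:.1f} D")
print(f"Observed 6 D corresponds to roughly {fraction_transferred:.2f} e")
```

On these assumptions, a full one-electron transfer would give roughly 8 D, so the observed 6 D corresponds to only about three-quarters of an electron, consistent with the statement above that the moment does not indicate complete transfer.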
Now, if we look at ammonia, do we expect the lone pair on the nitrogen to transfer itself to the hydrogen atom of another ammonia molecule to form this hydrogen bond? If it corresponded to the boron example, then we would expect a change of at least several Debye units, but as far as I know, there is no such change of dipole moment that is not explicable in terms of its being a condensed system. The article states there are experimental data to support charge transfer, but what are they?
Back to my original problem with computational chemistry: what role, if any, does theory play in modern chemistry? In this article we see a statement such as that the NBO method falls foul of "basis set superposition error". What exactly does that mean, and how many chemists appreciate exactly what it means? We have a disagreement in which one side is accused of focusing on energies, while the other focuses on charge density shifts. At least energies are measurable. What bothers me is that arguments about whether different people use the same terminology differently are a bit like arguing about how many angels can dance on the head of a pin. What we need from theory is a reasonably clear statement of what it means, a clear statement of what assumptions are made, and what part validation plays in the computations.
Posted by Ian Miller on Apr 24, 2017 12:54 AM BST
An interesting thing happened for planetary science recently: two papers (Nature, vol. 541: Dauphas, pp. 521 – 524; Fischer-Gödde and Kleine, pp. 525 – 527) showed that much of how we think planets accreted is wrong. The papers showed that the Earth/Moon system has isotope distributions across a number of elements exactly the same as those found in enstatite chondrites, and that this distribution applied over most of the accretion. The timing was based on the premise that different elements would be extracted into the core at different rates, and some not at all. Further, the isotope distributions of these elements are known to vary with distance from the star; thus Earth is different from Mars, which in turn is clearly different from the asteroid belt. Exactly why they have this radial variation is an interesting question in itself, but for the moment, it is an established fact. If we assume this variation in isotope distribution follows a continuous function, then the variations we know about have sufficient magnitude that we can say that Earth accreted from material confined to a narrow zone.
Enstatite chondrites are highly reduced: their iron content tends to be present as the metal or as a sulphide rather than as an oxide, and they may even contain small amounts of silicon as a silicide. They are also extremely dry, and it is assumed that they formed in a very hot part of the accretion disk, because they contain less forsterite and because very high temperatures are needed to form silicides.
In my mind, the significance of these papers is two-fold. The first is that the standard explanation that Earth's water and biogenetic material came from carbonaceous chondrites must be wrong. The ruthenium isotope analysis falsifies the theory that so much water arrived from such chondrites; if it had, the ruthenium on our surface would be different. The second is that the standard theory of planetary formation, in which dust accreted to planetesimals, these collided to form embryos, which in turn formed oligarchs or protoplanets (Mars-sized objects), and these collided to form planets, must be wrong. The reason is that if they did collide like that, they would do a lot of bouncing around and everything would get well mixed. Standard computer simulations argue that Earth would have formed from a distribution of matter from further out than Mars to inside Mercury's orbit. The fact that the isotope ratios are so equivalent to enstatite chondrites shows the material that formed Earth came from a relatively narrow zone that at some stage had been very strongly heated. That, of course, is why Earth has such a large iron core, and Mars does not. At Mars, much of the iron remained as the oxide.
In my mind, this work shows that such oligarchic growth is wrong and that the alternative, monarchic growth, which has been largely abandoned, is in fact correct. But that raises the question, why are the planets where they are, and why are there such large gaps? My answer is simple: the initial accretion was chemically based, and certain temperature zones favoured specific reactions. It was only in these zones that accretion occurred at a sufficient rate to form large bodies. That, in turn, is why the various planets have different compositions, and why Earth has so much water and is the biggest rocky planet: it was in a zone that was favourable to the formation of a cement, and water from the disk gases set it. If anyone is interested, my ebook "Planetary Formation and Biogenesis" explains this in more detail, and a review of over 600 references explains why. As far as I am aware, the theory outlined there is the only one that requires the results of those papers. So, every now and again, something good happens! It feels good to know you could actually be correct where others are not.
So, will these two papers cause a change of thinking? In my opinion, they may not change anything, because scientists not directly involved probably do not care, and scientists deeply involved are not going to change their beliefs. Why do I think that? Well, there was a more convincing paper back in 2002 (Drake and Righter, Nature 416: 39 – 44) that came to exactly the same conclusions. Instead of ruthenium isotopes, it used osmium isotopes, but you see the point. I doubt these two papers will be the straw that broke the camel's back, but I could be wrong. However, experience in this field shows that scientists prefer to ignore evidence that falsifies their cherished beliefs rather than change their minds. As a further example, neither of these papers cited the Drake and Righter paper. They did not want to admit they were confirming a previous conclusion, which is perhaps indicative that they really do not wish to change people's minds, let alone acknowledge previous work that is directly relevant.
Posted by Ian Miller on Feb 5, 2017 9:41 PM GMT
The recent edition of Chemistry World had an article reporting that the American Chemical Society is planning to start up ChemRxiv, a preprint server. In this context, I note that chemists are strangely resistant to change, and the article was hardly overly enthusiastic about the concept. You probably know that the physics community has preprints on the arXiv server, and by and large, physicists submit papers there for community peer review before submitting to a final journal. I started writing an article for this blog a little while ago on this topic and never finished it, but this is part of what I wrote. This, to me, makes a lot of sense, and a number of years ago some prominent chemists started up the Chemistry Preprint Server. The response from chemists was strangely disappointing. Very few submitted anything, and of those submissions, some were distinctly of low quality. Nevertheless, it seemed a good idea at the time. However, the main part of the chemistry community then proceeded to ignore it. Now I feel this should have been a great place for two things: getting an important paper into a form that the average community would accept, and secondly getting enough support for it that a journal would publish it. However, that preprint server did not work because very few were interested.
According to the Chemistry World article, the previous preprint server died because the quality of the publications was too low, and because the American Chemical Society would not publish anything that had been published before. That may be unfair on the ACS; I doubt it was unique in that. My belief is it died because most of the chemists of stature refused to take part, although in fairness at the time I also refused to put my experimental work on it because I was afraid that it would not be accepted later by the journal where most of my experimental work was then being published.
What are the purposes of a preprint server? One might be to get information out there earlier, but my feeling is that is not really that important in most cases. What I used the earlier server for was to archive material that was otherwise difficult to publish, and possibly get some idea what I could do to improve its acceptability.  Now, you might say, reading that, that I was a contributor to the "low quality" chemistry. Before jumping to such conclusions, consider these.
I once submitted a logic analysis to a chemical journal; the purpose was to show that the published data did not support the concept that the cyclopropane ring could delocalize electrons into adjacent unsaturated substituents. The standard position was that it did, largely because it stabilized adjacent positive charge and gave bathochromic shifts to many UV absorptions, and extending the wave function would do that, as in the allyl cation. However, there is another way, namely a polarization field, i.e. the problem is one of electromagnetic theory rather than quantum mechanics. I wrote a logic analysis that showed not only did this alternative give correct predictions, but there were over 60 different types of experiment that falsified, or at least cast serious doubt on, the then accepted "delocalization" explanation. Note that demonstrating that cyclopropane stabilizes adjacent positive charge 250 times still only adds one datum to a logic analysis. As a further aside, one previous review argued that quantum computations verified the presence of extra stability through delocalization. You may be amused to note that such computations used the same MO programs that "proved" the stability of polywater.
The first journal rejected it because they argued "it was not what the average chemist wanted to see" (or something like that). There were not enough diagrams (I assumed that a triangle headed towards a substituent X should have been general enough for many cases). One journal rejected the proposal "because this issue is settled". Who cares about the data and the logic? Others simply said they did not publish logic analyses. Fair enough in one way, but how could it get published? I will concede immediately that it could have been made easier to follow, but what I hoped was that chemists would tell me what it should have looked like to convince them. No such luck.
Similarly with a paper where I showed a pencil and paper route to getting a reasonably accurate estimate of the covalent bonds of the group 1 elements. That was rejected by editors because "nobody would be interested". They may or may not have been right, but the interesting point is it argued there was a hitherto unrecognized quantum effect applying.
Now some will say that was rubbish. Either it was right or it was wrong. If it were right, should it not be published somewhere? If it were wrong, should not someone be able to show where? And that is where a preprint server should be useful. It gives the scientific community the opportunity to comment and act as public referees, and takes away the ability to block something by editors who do not have to give excuses.
Posted by Ian Miller on Nov 3, 2016 9:44 PM GMT
The RSC should be complimented for its "Future of the Chemical Sciences" initiative, but this raises the question, is anything seriously missing from the output so far? I think the answer is, unfortunately, yes.
First, consider the four plausible scenarios. There is no doubt that chemistry will be needed to address many of the problems facing the world. That suggests there will always be a job for chemists, although not necessarily for every type of chemist, so that is well identified, if somewhat obvious. Of course, some of the other options contradict this one.
The second scenario is "do it yourself" chemistry, where chemical processes are stocked away in computers. The future chemist goes to a "black box", keys in what he wants, the computer tells him what feedstocks it needs, the chemist gets them, fills up some hoppers or whatever, presses "go", and waits for the product to come out, neatly bottled. That might be plausible eventually, but it does not really require much in the way of chemistry, and it suggests there is not much future for synthetic chemists. This, to me, is an invitation to disaster, because like it or not, you should understand what you are doing. How many tragedies would there have to be before people were stopped from making something highly dangerous?
The third option, "no chemists" speaks for itself, and it is interesting that it focuses on "no new fundamental research". The free market chemistry option simply focuses on funding, and on all funding coming from the private sector. Neither of these options offers much hope for chemistry making our future better.
What do we note about these scenarios? What I see is a fixation on solving immediate problems, with the top problem being jobs for emerging students, and the implication is that current education, etc., is unsuited for purpose. How did we come to this? As far as I know, the major problems we face now have arisen largely because people with little knowledge of science have been at the helm, and the free market has let them at it. Exactly how will the free market solve the problems of climate change or pollution when it is the financial interests of certain players to keep doing what is counterproductive? The only thing stopping them now is regulations, and the voice of those who know. If nobody knows, where then?
What I notice is missing is the concept of trying to understand our discipline. There is no doubt that experienced chemists in a small section of chemistry know all about that small section very well, although whether it is understanding, or more the ability to recall what happened last time something like this was tried, is another matter. I have this feeling that chemists have now become very keen on "how to do something", but seem to have lost the spirit of discovery, wherein we might ask, why does x, y, z happen? Am I right? Who stands up and confesses they want to understand the basis for what they are doing? I do, but how many others are there? I recall earlier in the year I put up a blog post that raised the issue that maybe the energies of the A – B molecules of the group 1 elements were additive for their component atoms. That verges on heresy, and despite the evidence it may well be wrong, but nobody said a thing. So, what I would like to see is at least a little encouragement for understanding. After all, chemistry might be the first science to have all its fundamentals settled. We know all of chemistry is determined by quantum mechanics and electromagnetic theory. Why don't we want to use these a little more deeply?
Finally, here is a little example problem. In my novel, "A Face on Cydonia" the story needed small amounts of powerful explosives, so I "invented" a molecule: tetranitrotetrahedrane. My argument was that apart from its propensity to turn itself into carbon dioxide and nitrogen gas with serious vigour, it would be one of the more stable tetrahedranes. Can you guess why that might be? Or why I think it might be? Then, do you want little black box synthesizers in the hands of some of the terrorists who seem to want to go around blowing stuff up?
Posted by Ian Miller on Oct 2, 2016 10:46 PM BST
Far away, on the other side of this planet (in New Zealand, to be precise) we have watched the Brexit issue with a sort of stunned puzzlement. Why on Earth did this happen? And more to the point, now it has happened, why is it unclear what to do next? Even when trying an experiment, my view is you think out what you expect to happen, what you have to do to make it happen, then you try. It does not always work out correctly, because sometimes the molecules decide to be perverse, but I can't recall ever doing an experiment where I did not know what I wanted from it.
With Brexit, it seems the opposite has happened. Britain did it, and the question now is, now what? All of which raises an interesting question for the scientific community: you are trained to ask questions relating to procedure, so why did nobody, and no organization, stand up and demand to know from those advocating exiting what the consequences of exiting would be? Why is it that a vote was organized, but nobody knew the pros and cons of each position? Why is an informed vote something to be avoided?
All of which raises the question, where now for UK science? Presumably it will be out of major EU projects, although in principle there is no reason why that aspect of cooperation with the EU cannot continue. Perhaps the biggest single problem is immigration. Restricting immigration from the East seems to have been a major reason why the exit vote won, but when Switzerland as an associate began to impose immigration restrictions, it had that status suspended from Horizon 2020. The EU has to remain consistent, and since immigration was a major cause of the vote, the UK government would effectively be ignoring the voters if it went back on that. Chemists may feel that such ejection is no bad thing because chemistry mainly avoids big projects; it is physics that has the LHC, and ESA. I would hope that that sort of attitude is avoided, and that chemists, and the RSC, start lobbying for science as a whole, not an insulated part.
Posted by Ian Miller on Jul 11, 2016 4:13 AM BST
In the recent Chemistry World, we read the heading "Technetium carbide refuted; proof that the compound cannot exist after all". The article then goes on to show that a team of computational chemists made calculations and showed that the carbide cannot exist, and what experiment had shown was there is a new phase of the metal.
 
Sorry, but that is just plain wrong. Not the calculations, necessarily, which may or may not be correct. The point is, you cannot prove anything by theory. One of the most successful theories of all time, in my opinion, was Newtonian mechanics, and when used to calculate the orbit of Uranus, observation failed to match the calculations. Either the theory was wrong or it was not, but whatever, nobody argued that the theory was right and observation wrong. The only way out was to postulate a new planet and Neptune was discovered. That was a triumph for theory. It put observable facts into the theory to predict something new, and there it was. However, when Newtonian mechanics was used to calculate the orbit of Mercury, observation failed to match the calculations. Again, either the theory was wrong or it was not, but whatever, nobody argued that the theory was right and observation wrong, and worse, no new planet could fix this problem. In the end, it was found that Newtonian mechanics is merely a good approximation to Einsteinian relativity.
 
As far as I am concerned, I have no idea whether technetium carbide exists. I know manganese carbide exists, and I know it is not especially resistant to certain reagents, so it is easy to make it and then lose it. Because of the nuclear instability, I doubt anyone really worries too much about technetium carbide, because it is extremely unlikely to be of significant use, but that does not matter. Further, the failure to make something in a synthesis does not prove the compound does not exist, but merely that that is not the way to make it. There are an awful lot of unstable compounds that can be made, if you know how to go about it. As an aside, from my experience with manganese carbide, you may be better off not starting with the metal, as then that new metallic phase is far less likely to form.
 
As for the calculations, the best theory can do is make predictions. Only Nature can tell us whether they are correct. For me, calculations help us know we understand nature, but you cannot use calculations to prove something. All you can say is, if my theory is correct, then this is what you should expect.
Posted by Ian Miller on May 23, 2016 5:08 AM BST
By now it is apparent that either chemists are not reading these posts, or they are not interested in evidence suggesting bond strengths are additive, or why, sometimes, they are not.
I must change my approach. In the most recent Chemistry World there is a comment on climate change. What we see is that two proposals have been made to reduce carbon dioxide levels. One is to introduce crushed silicates into soils. Of course, they have to be the right silicates. One that has been proposed is peridotite. Most certainly, the Earth is not short of this; it makes up most of the mantle, but of course the mantle is somewhat difficult to access, and we would have to deal with outcrops that have reached the surface.
The problem with this proposal is the rate of reaction. Some rocks weather tolerably quickly but overall the process is slow. It can be accelerated by at least a million times by injecting carbon dioxide into a suitable fractured rock layer, but that requires a lot of energy. This sort of proposal depends on sufficient of the right silicates being available, and the energy demands on the processing not generating, either directly or indirectly, more carbon dioxide than is removed. One problem is the source of the rock; if you can find it on the surface, obviously it is not reacting very quickly.
A more straightforward method suggested was to greatly increase afforestation. One point noted briefly in the article was that such forests might generate unintended consequences. Does not the logic of this comment grate a little?
First, huge amounts of forest have already been cut down. Allowing them to regrow would merely return the system to where it was before. A particularly good area to let re-develop would be the tropical rain forests. Huge areas of Brazil have had their forests removed, and the land is not that useful for anything else, so it tends to lie barren or be eroded. Replanting the forest, or simply stopping cutting it down and letting it regrow and spread would be a start.
One scheme that I think is worth further consideration is ocean fertilization, to let algae grow. There are two forms of algae: microalgae and macroalgae. Microalgae grow simply with modest fertilization, usually with iron-containing materials, because the ocean waters away from the coast are remarkably deficient in certain cations. This proposal has been examined, and rejected because it was argued that only a minor part of the algae sank to the depths and thus would be taken out of circulation. That, to my mind, is ridiculous. What happened to the rest? Some, at least, would be eaten by fish, and if we helped the fish population regenerate, would that be such a bad idea? Similarly, in the 1970s the US Navy showed that macroalgae could be grown in deep water on rafts, fertilized by sucking up water from the depths using wave power. The experiment ran into trouble during a major storm, and the consequent drop in oil prices killed it, but it might still be worthwhile. Some algae are among the fastest-growing plants on the planet, and as I have argued in my ebook "Biofuels", it is reasonably straightforward to make biofuel from them, which would replace fossil oil.
But for me, the biggest problem with the logic of "unintended consequences" is we are going to see some really major unintended consequences. There is a possibility of a sea level rise of up to 60 meters, as a consequence of our fossil fuel consumption. London sits between 5 and 30 meters above sea level. Is not drowning London an adverse consequence? Check with Google Earth, and if you live somewhere near the coast, your descendants may not be living where you live.
My view is the Society should be making as many efforts as it can to persuade various governments to invest more money into geoengineering research, and to coordinate it, because geoengineering alone can reduce the carbon dioxide levels in the atmosphere. What do you think?
Posted by Ian Miller on Apr 18, 2016 12:50 AM BST
In a previous post (http://my.rsc.org/blogs/84/1702) I made the case that the covalent bonds of the Group 1 metals were characteristic of the element, i.e. the energy of any A – B bond was the arithmetic mean of the A – A and the B – B bond energies. I also asked "What do you think? Are you interested?" So far, no comments. Does this mean that nobody can see a glaring problem, or does it really mean that chemists as a whole have little interest in the nature of the chemical bond?
First, the glaring problem. How can the energies be the arithmetic mean? Thus from de Broglie, we know
pλ = h
We have also established that the covalent radius is characteristic of the atom, which means that λ on the bond axis is constant. We also know that on average there are no net forces on the nuclei, otherwise they would accelerate in the direction of the force. (Zero point motion is superimposed on such an equilibrium distance, but the forces average to zero.) With no net forces, the average wavelength as determined on the other axes should also be constant. You may protest (correctly) that the wave may have only one wavelength, but that is only true if the wave is not separable. For example, one might argue the medium changes on the bond axis owing to the change in particle density due to wave interference.

Thus the constant covalent radius implies a constant wavelength for the valence electron in different molecules. But since the total energy will involve a term in (p1 + p2)^2 minus the original energies, and since the square of a sum does not equal the sum of the squares, and since the path length must change between, say, Li2 and LiCs, the bond energies should not be the linear sum of the components if the waves are delocalized over the whole molecule. For a simple two-electron wave function that arises from pairing, no new nodes are placed in the wave function (other than in the antibond or excited states), so the path length must change significantly. To me, this strongly suggests that molecular orbital theory is not soundly based. Yes, one can get the right answers by adjusting the parameters/constants within the calculations, but that does not prove the theory is correct. Instead there should be some algebraic reason why such additivity arises naturally.
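The square-of-a-sum point is worth writing out explicitly; this is just the algebra behind the sentence above, not new physics:

```latex
(p_1 + p_2)^2 = p_1^2 + 2 p_1 p_2 + p_2^2
```

The cross term 2 p_1 p_2 depends on both atoms at once, so if the bond energy came from a delocalized wave carrying the combined momentum, the A – B energy could not in general be the simple sum of characteristic A and B contributions.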

Is there any? The answer to that, in my opinion, rests on the reason why the energy levels are stable anyway. Under Maxwell's electromagnetic theory, an accelerating electron should emit electromagnetic radiation, and this occurs always, except for the stationary states of atoms and molecules. From the Schrödinger equation, such stable states occur only when the action is exactly quantized. If the action about each atom must be quantized for σ-bonded molecules for the molecule to be stable, then we get the additivity of the energy of such simple molecules if the covalent radius is constant. Thus we have a physical reason, independent of calculations, for the observation. The importance of this is that it gives a new relationship to aid calculations, which also shows why the functional group actually occurs. Is such a potentially new physical relationship of sufficient interest to be worth further investigation?
Any comments? Please!
Posted by Ian Miller on Mar 20, 2016 10:54 PM GMT
In the last post, I presented data for the covalent bonds of the A – B compounds of the Group 1 elements that showed to a reasonable degree that the atoms each had a characteristic covalent energy, in the same way as there is a covalent radius, and that the bond energy of the A – B bond is the sum of the A and B contributions. This goes against all the standard textbook writings. In an earlier post I stated that I had previously submitted a paper that would lead to a method for readily calculating these bond energies, but the paper was rejected by the editors of some journals on the grounds that either these are not very important molecules, or alternatively (or both) nobody would be interested. This annoyed me at the time, but it seems to me they had a point. These blog posts have received absolutely no comment. Either nobody cares, or nobody is reading the posts. Either way, it is hardly encouraging.
Now, the next point that could have been made is that when we get to more common problems, the bond energies are not additive in that way. Or are they? One problem I see is the actual data are not really suitable for reaching a conclusion.
Let's consider the P – P bond energy, which is needed for considering the bond additivity of any phosphorus compound. I made a quick calculation of the P – P bond energy in diphosphine, on the assumption that the P – H bond energy was the same as in phosphine, and I got 242 kJ/mol. If you look up some bond energy tables, you find the energy quoted as 201 kJ/mol. How did they get that? If you consider the heat of atomization of phosphorus, the bond energy is 221 kJ/mol, but if we assume that refers to the P4 form, it would be in the tetrahedrane structure, which will be strained (although the strain will also stabilize lone pairs), and of course the standard state will be a solid, so in principle energy should be added to get it into the gas phase before atomizing to make the comparison. It is therefore reasonable to assume that the real bond energy will be stronger than that calculation indicates.
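The diphosphine back-calculation is simple enough to sketch. The atomization enthalpies used below are illustrative round figures chosen to reproduce the numbers quoted above, not critically evaluated data; the method, not the inputs, is the point.

```python
# Back out the P-P bond energy in diphosphine (P2H4), assuming its four
# P-H bonds have the same energy as the three in phosphine (PH3).
# Both atomization enthalpies are illustrative assumed values (kJ/mol).
atomization_PH3 = 966.0    # PH3 -> P + 3H, assumed
atomization_P2H4 = 1530.0  # P2H4 -> 2P + 4H, assumed

E_PH = atomization_PH3 / 3           # one P-H bond
E_PP = atomization_P2H4 - 4 * E_PH   # whatever is left over is the P-P bond

print(f"E(P-H) = {E_PH:.0f} kJ/mol")
print(f"E(P-P) = {E_PP:.0f} kJ/mol")
```

With these inputs the leftover P – P energy comes out at 242 kJ/mol, and the exercise makes the real difficulty plain: the answer is only as good as the atomization data fed in.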
The problem is obvious: to make any sense of this, we need more accurate data. We also need data based on energies of atomization, not on the more easily obtained bond dissociation energies. But as far as I can see, the chemical community has given up trying to establish such data. Does it matter? I think it does. For me, a problem with modern chemical theory, which is now essentially extremely complicated computation, is that it offers little assistance with the issues that matter to the chemist, because no principles are enunciated; there are merely results and comments on various computational programs. Principles are needed, even if the calculations are not completely accurate, so that chemists can draw conclusions and use them to formulate new plans of action. How many chemists really think they understand why their synthetic reactions work the way they do? Do we care about this very fundamental component of our discipline? And, for that matter, does anyone care whether I write this blog?
Posted by Ian Miller on Feb 29, 2016 2:14 AM GMT
In my last post, I presented evidence that the covalent radius of a Group 1 metal is constant in the dimeric compounds. I also asked whether anyone was interested. So far, no responses, and I suspect the post received something of a yawn, if that, because, after all, everyone "knows" there is a constant covalent radius. There is, of course, a problem. Had I included the hydrides, the relation would not have worked. Ha, you say, but the hydrides are ionic. Well, the constant covalent radius of hydrogen simply does not work for a lot of other compounds either: try methane, ammonia and water. There are various alternative explanations, but let us for the moment accept that hydrogen does not comply with this covalent radius proposition.
 
If the covalent radius of an atom is constant, then there should be a characteristic wavelength for each given atom when chemically bound, which in turn suggests, from the de Broglie relation, that the bonding electrons provide a constant momentum contribution to the bond. While that is a little questionable, if true it would mean the bond energy of an A – B molecule is the arithmetic mean of the bond energies of the corresponding A – A and B – B molecules. Now, one can argue over the reasoning behind that, but it is much better to examine the data and see what nature wants to tell us.
 
Pauling, in The Nature of the Chemical Bond, stated clearly that this is not correct. However, if we pause for thought, we find the arithmetic mean proposition depends on there being no interactions other than those arising from the bonding electrons that form the covalent bond. Thus atoms with lone pairs would be excluded, because their A – A bonds are too weak, the weakness usually being attributed to lone pair interactions; think of peroxides. Bonds involving hydrogen would be excluded because the covalent radius relationship does not hold, and bonds involving hybridization may introduce problems of their own. This is where the Group 1 metals come into their own: they have no additional complicating features. Far from "not being very interesting", as one editor complained to me, I believe they are essential to starting any analysis of covalent bond theory. So, what have we got?
 
The energies of the A – A bonds are somewhat difficult to nail down. Values are published, but often there is more than one value, and the values lie outside their mutual error bars. With that reservation, a selection of energies (in kJ/mol) is as follows: Li2 102.3; Na2 72.04, 73.6; K2 57.3; Rb2 47.8; Cs2 44.8
 
The observed bond energies for A – B molecules are taken from a review (Fedorov, D. A., Derevianko, A., Varganov, S. A., J. Chem. Phys. 140: 184315 (2014)). Below, the calculated value, based on the average of the A – A molecules, is given first; then, in brackets, the observed energy; then the difference δ, expressed as what has to be added to the calculated value to get the observed value.
                 Mean      Obs        δ
Li – Na         88.0     (85.0)    -3.0
Li – K          79.8     (73.6)    -6.2
Li – Rb         75.1     (70.9)    -4.2
Li – Cs         73.6     (70.3)    -3.3
Na – K          65.5     (63.1)    -2.4
Na – Rb         60.7     (60.2)    -0.5
Na – Cs         59.2     (59.3)     0.1
K – Rb          52.6     (50.5)    -2.1
K – Cs          51.1     (48.7)    -2.4
Rb – Cs         46.3     (45.9)    -0.4
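As a check on the arithmetic, the table can be regenerated from the A – A energies listed earlier and the observed A – B values. One assumption is made explicit in the code: where two values were quoted for Na2, the 73.6 kJ/mol figure is the one consistent with the tabulated means.

```python
# A - A bond energies (kJ/mol) from the list above; for Na2 the
# 73.6 value is used, as that reproduces the tabulated means.
AA = {"Li": 102.3, "Na": 73.6, "K": 57.3, "Rb": 47.8, "Cs": 44.8}

# Observed A - B bond energies (kJ/mol) from Fedorov et al. (2014),
# as quoted in the table above.
obs = {
    ("Li", "Na"): 85.0, ("Li", "K"): 73.6, ("Li", "Rb"): 70.9,
    ("Li", "Cs"): 70.3, ("Na", "K"): 63.1, ("Na", "Rb"): 60.2,
    ("Na", "Cs"): 59.3, ("K", "Rb"): 50.5, ("K", "Cs"): 48.7,
    ("Rb", "Cs"): 45.9,
}

deltas = []
for (a, b), e_obs in obs.items():
    mean = (AA[a] + AA[b]) / 2.0  # arithmetic mean prediction
    delta = e_obs - mean          # what must be added to the mean
    deltas.append(delta)
    print(f"{a} - {b}: mean {mean:5.1f}, obs {e_obs:5.1f}, delta {delta:+5.2f}")

print(f"mean delta = {sum(deltas) / len(deltas):+.2f} kJ/mol")
```

Worth noting when weighing the options below: nine of the ten deviations are negative, and a purely random observational error would not be expected to show such a systematic sign.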
 
The question now is, does this show that the bond energies are the arithmetic means of those of the A – A and B – B molecules? As in my last post, there are three options:
(1) The bond energies are the sum of the atomic contributions, and the discrepancies are observational error, including in the A – A molecules.
(2) The bond energies are the sum of the atomic contributions, and the discrepancies are partly observational error, including in the A – A molecules, and partly some very small additional effect.
(3) The bond energies are not the sum of the atomic contributions, and any agreement is accidental.
What do you think? Are you interested?

 
Posted by Ian Miller on Feb 8, 2016 2:03 AM GMT