Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


In my last post, I started by explaining why I had embarked on finding a new quantum mechanical interpretation. The actual answer to why I got started was of course personal and not that important, but there was a better reason why I kept going: currently, theoretical chemistry depends on some hideously difficult computations that arise when one considers it in terms of particle-particle interactions. That means you have to compute the probable locations of all the electrons, the position of each depending on what the others are doing. The fundamental underlying equation cannot be solved analytically, so various procedures are used to approach the solutions, and, as Pople noted in his Nobel lecture, certain constants are "validated" by comparison with observation. This is not quite as ab initio as some may think.

My current Guidance Wave solution to this problem approaches the issue from a different direction: it states that certain properties can be defined solely by considering waves. This has three consequences: because of linearity it is far simpler; it automatically leads to localised bonds in defined circumstances through wave interference properties, and hence predicts the functional group; and finally, it requires a previously unrecognized quantum effect. That last point is critical; if correct, it means that many current computations cannot get the correct answer for the right reason.
So, is my approach correct? That is for you to think about. Which raises its own interesting question: most readers of this blog will either have PhDs or be intent on getting one. So, how often do you think philosophically? Only too many adopt "Shut up and calculate something", or their thoughts relate to how to do something, or how to make something. This is the artisan or guild way of thinking; you become an expert at doing something, but you are not concerned with why it works. But, you argue, if the mathematics give the right answer, you must have the right theory. Not so. Consider planetary motion. If you can manage something like Fourier transforms, it is not that difficult to end up with the epicycle theory of Claudius Ptolemy. Your calculations give the right answers, but I would argue your physics would be wrong.

The major differences between this guidance wave approach and the standard approach are:
  1. Like the pilot wave, I assume a wave causes diffraction in the two-slit experiment.
  2. To do that, it has to arrive at the slits at the same time as the particle.
  3. If so, the wave transmits energy proportional to the particle energy. This is similar to the quantum potential of Bohm, except now it has a precise value.
  4. Since the phase of the wave is given by exp(2πiS/h), S the action, from Euler the wave becomes real at the antinode.
  5. The square of the amplitude is proportional to the energy transmitted by the wave. (That is what waves do generally.)
  6. Therefore, for the stationary state, the energy at the antinode is equal to the energy of the particle.
  7. The nodal structure of the wave is that given by I. J. Miller 1987, Aust. J. Phys. 40: 329–346. That represents the energy of the atomic orbitals solely in terms of quantum numbers.
  8. The charge distribution of each atomic orbital is represented in terms of Cartesian components that are separable.
  9. The waves interfere linearly, which is what waves usually do.
  10.  New interactions are introduced, which means new wave components. The bond energy comes from these new interactions (because of linearity).
  11. The position of the antinode is determined by the constancy of action (because it is quantized), which is why there is a covalent radius.
  12. The orbital of hydrogen is different, therefore there is partial wave reflection at the antinode and "overlap" is less complete. Therefore the intensity of the new interactions is less than simply additive in charge, so to maintain constant action, the bond has to shorten.
  13. The zone between the nuclei has a wave component similar to the particle in the box. That means that the nodal structure determined for the atomic orbitals (7 above) has to change. Again, it is dependent only on quantum numbers.
  14. Zero point energies are not calculated. Either observed zero point energies or estimated zero point energies were used.
Anyone can say that. The question then is performance. In the following, a selection of calculated data (pm for bond lengths, kJ/mol for bond energies) is given, with the best observational data I could find in brackets:
Bond lengths: H2 37.4 (37.1); Li2 134.6 (133.6); Cs2 234.5 (230); Si–Si 232.8 (232–236); C–H sp3 108.9 (109.1); C–C sp3 151.4 (151–154); P2 188.6 (189.3); C–C sp 121.4 (120.3); N2 109.9 (109.76)
Covalent radii: P 111.1 (110–111); Sb 140.0 (138–143); S 103.1 (104–105); Te 139.0 (138–141); Cl 95.7 (99); I 135.4 (133.6); sp Si 111.9 (111)
Bond energies: H2 435.6–438.1, depending on zero point energies from two different sources (436); D2 445.6 (443.5); Li2 105.4 (102.3); 41.6 (44.8); P–H 141.9 (142); Sb–H 247.1 (257); S–H 366.8 (365); Te–H 267.0 (265); Cl–H 432.4 (432); I–H 310 (298); C–C sp3 361.6 (358.3–360.5); C–H sp3 411.4 (411); Sn–H 224.7 (219); O–H sp3 462.7 (463); F–H sp3 570.6 (570.4); P2 491.8 (489.5); Sb2 298.1 (299.2); C–C sp 831.9 (835); N2 945 (945.3)
The bond energies for hybridized elements follow the analysis of Dewar and Schmeising. As can be seen, given their simplicity, I argue the calculations show something useful. 
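For a quick feel for the level of agreement, here is a minimal sketch (Python, using only the numbers quoted above, taking the midpoint where a range is given) that computes percentage errors for a few entries:

```python
# Percentage errors for a few calculated vs observed values quoted above.
entries = {
    "H2 bond length (pm)":      (37.4, 37.1),
    "N2 bond length (pm)":      (109.9, 109.76),
    "H2 bond energy (kJ/mol)":  ((435.6 + 438.1) / 2, 436.0),  # midpoint of the quoted range
    "Sb2 bond energy (kJ/mol)": (298.1, 299.2),
    "N2 bond energy (kJ/mol)":  (945.0, 945.3),
}

for name, (calc, obs) in entries.items():
    err = 100.0 * (calc - obs) / obs
    print(f"{name}: calculated {calc:.1f}, observed {obs:.2f}, error {err:+.2f}%")
```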

So this is somewhat different from the standard approach, but the calculations are sufficiently straightforward that a hand-held calculator should be all you need. The bond energies so calculated are not in exact agreement with observation, in part because the atomic orbital energies are not exactly given by the quantum number relationship, and since there are further small regularities in these differences, there is seemingly something more yet to be understood. However, most of these differences cancel out in the bond energy calculation, although there are three atoms, boron, sodium and bismuth, where this does not happen. A further reason the agreement may not be as good as required could be errors in the observational data. It is usually unwise to blame the data, but the single bond energy agreement for antimony is poor, yet when that value is used for one of the bonds in Sb2, with two π bonds added in, the triple bond strength is in very good agreement.

So, does this interest you? I guess I shall see in due course.
Posted by Ian Miller on Nov 4, 2018 8:36 PM GMT
In my last post, I mentioned that in my alternative interpretation of quantum mechanics, I rejected the Born interpretation as being fundamental, and instead argued that because the wave must keep up with the particle, it has to transmit energy. That means the square of the amplitude of the wave should equal that energy, as it does in general wave physics. There is a further difference of definition. I use a term, the wave displacement, for the value of ψψ* at a coordinate, whereupon the amplitude becomes the maximum value of that displacement. This does not mean that Born is wrong, because if we think of energy density at a point, that should be roughly proportional to probability, but there are differences. The wave has nodes, and there is a lot of arm-waving in answer to the question: in a stationary state such as an orbital, how does an electron cross a nodal surface, when the probability of it being on that surface is zero, and you cannot go from plus to minus, i.e. crest to trough, without going through zero? The problem goes away if the square of the amplitude reflects an energy rather than a probability.

There is one further difference. The phase of the wave is given by exp(2πiS/h). That is generally agreed in all textbooks, although it is usually written differently, using ħ instead. Now, what is action? Basically, it is the time integral of the Lagrange function, which in turn is an energy (usually the difference between the kinetic and potential energy). What is important here is that it is something that increases with time, which is why the phase proceeds as an oscillation. The stationary wave, as in the particle in a box, is really two running waves proceeding in equal and opposite directions. The reason that is relevant is that from Euler, who developed the mathematics of complex numbers, exp(iπ) = −1. Thus when S = nh the value of the phase is 1; when S = nh/2, n odd, the value of the phase is −1. All other values of S lie somewhere between. In the textbooks, you will see that ψ is always complex. That is not exactly true: at the antinodes it becomes real, with value ±A, A the amplitude.
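As a quick numerical check of the Euler argument, here is a minimal sketch (Python, with h set to 1 purely for convenience) showing that exp(2πiS/h) is +1 at S = nh, −1 at S = nh/2 with n odd, and genuinely complex elsewhere:

```python
import cmath

h = 1.0  # Planck's constant set to 1 for illustration

def phase(S):
    """Phase factor exp(2*pi*i*S/h)."""
    return cmath.exp(2j * cmath.pi * S / h)

for S in (1.0, 2.0, 0.5, 1.5, 0.25, 0.7):
    p = phase(S)
    print(f"S = {S:4}h -> phase = {p.real:+.3f} {p.imag:+.3f}i")
```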
So, what does that mean? Let us assume, for the moment, that the wave only means something physically when it is real. You may well say that is just another assumption, but I argue it is not an additional one to what we already have, because if so, it is easy to derive the Uncertainty Principle, which, as an aside, now becomes a physical principle and has nothing to do with the observer. Physicists would generally accept that. As for what it means for general quantum mechanics, apart from requiring the Uncertainty Principle, which was already required, not much, because you cannot know the phase of a quantal matter running wave.

However, for a stationary wave, such as the particle in a box, or the wave for a stationary state, as in a free atom or molecule, the antinode defines the amplitude of the wave, and the square of the amplitude of the wave is proportional to the energy of the system. If so, the problem of the chemical bond reduces to finding the location of the antinode and applying the electric field coupling at that point. That is relatively easy, because when the nature of the motion is independent of time, the action can be represented as ∫p dq, p the momentum, q the generalized coordinate, and from our discussion on Euler, the action must be quantized over a period, which gives the de Broglie relation pλ = h. So, it is comforting to know that, coming from a somewhat different route that is not conventional, we arrive at a totally conventional relationship. The important point here is that the quantization of action gives the covalent radius, and that radius gives the bond energy through calculating the potential energy at that point. This means that a good approximation to the bond length and energy of the hydrogen molecule is essentially no more complicated than mental arithmetic, or a hand-held calculator, solely from thinking of the electric field. Errors are about 0.3%. Better agreement can be found by including some of the more minor contributions.
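Written out, the step from quantized action to the de Broglie relation is a one-liner; a minimal sketch of the argument for a particle of constant momentum over one period (one wavelength):

```latex
S \;=\; \oint p\,\mathrm{d}q \;=\; p\lambda ,
\qquad\text{and quantizing the action over the period,}\qquad
p\lambda \;=\; h \;\;\Longrightarrow\;\; \lambda \;=\; \frac{h}{p}.
```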

For molecules in general, there is a little more to it than that, because the nature of the waves between the nuclei changes for atoms other than hydrogen, but the computation is still within a hand-held calculator's ability. There are still deviations from observed values, but the results should be close enough to be useful; as an example, the dissociation energy of the Sb2 molecule is calculated to be 298.1 kJ/mol (obs 299.2). In fairness, that is one of the better results, and bismuth, boron and sodium are relatively poorly behaved, but that is in part because their atomic ionization potentials do not behave well either. If interested, details can be obtained in an ebook: https://www.amazon.com/dp/B07GCDYDRR
 
Posted by Ian Miller on Oct 7, 2018 10:02 PM BST
In my last post, I announced that I had self-published an ebook that used my alternative interpretation of quantum mechanics to calculate properties of the chemical bond, and obvious questions include why do it that way, and why not use standard quantum mechanics? The answers are, of course, linked, and go way back to when I was an undergraduate.

The first question I felt required answering was why the two-slit experiment gives a diffraction pattern. In standard quantum mechanics the answer is that the equations give the probability pattern, so shut up and calculate. Do not ask why they give a diffraction pattern, even when the particles go through the slits one at a time (provided you send enough through). The equations certainly seem to predict what happens nicely; while there is a rather limited set of situations where you can actually solve them, even without solving them they give a good account of what we see. Nevertheless, they do not answer why it happens. In logic, there seem to be three possibilities: there is a particle; there is a wave; or there is a particle and a wave, and the wave guides the particle. This third option is the concept used by de Broglie and Bohm in their pilot wave interpretation. I agree with that concept, so why do I think mine is different?

I am defining "particle" as an entity with mass that is constrained to a limited volume of space. My view was that only a particle going through one slit would give the pattern that you got when you closed a slit, while the idea of a particle going through both slits would mean the electron was not a particle within the given definition. Therefore there should be a wave and a particle. As to why you cannot detect this guidance wave, there are two reasons. The first is it is mainly complex, although, from Euler it is real at the antinode, however there is a more interesting reason.

If you do a little mathematics, you can find that the phase velocity of the wave is E/p, E the energy, p the momentum. The momentum is easily defined, but what is the energy? Heisenberg put the energy as the kinetic energy, which gives the somewhat odd result that the wave proceeds at half the velocity of the particle. Somehow, that does not look right. To get around that, others put E = mc^2. That means the wave is superluminal, and moves at infinite speed when the particle is stationary. That, of course, raises the frame of reference issue: stationary with respect to what? There is a huge difference between infinite and finite. The phase velocity of the wave should not be infinite for some observers but finite for others. Added to which, I do not think something that is fundamental should ever have an infinite value.

My opinion is that the simplest answer is to require the wave to be at the slits at the same time as the particle, so it can guide the particle. It cannot do that if it is long gone, or yet to arrive. But if that is the case, then E is twice the kinetic energy of the particle. If so, then the wave does what every other wave does: it transmits energy, and the energy within the wave equals the energy of the particle (assuming the particle actually contains the kinetic energy and that is not also in the wave; either way, the square of the amplitude of the wave is proportional to the energy of the particle). Accordingly, you cannot detect the wave, because to detect something you have to interact with it, and that usually involves changing its energy. If you change the energy of the wave, you also change that of the particle, which means you have also interacted with the particle. That is the reason why it is so difficult to detect the wave, at least in this interpretation.
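To make the bookkeeping concrete, here is a minimal sketch (Python, nonrelativistic, purely illustrative numbers) comparing the phase velocity E/p for the three choices of E mentioned in the last two paragraphs:

```python
# Phase velocity v_phase = E/p for three choices of E, nonrelativistic particle.
m = 9.109e-31   # electron mass, kg
v = 1.0e6       # illustrative particle speed, m/s
c = 2.998e8     # speed of light, m/s

p = m * v                # momentum
ke = 0.5 * m * v**2      # kinetic energy

choices = {
    "E = kinetic energy (Heisenberg)":            ke,        # gives v/2
    "E = mc^2":                                   m * c**2,  # gives c^2/v, superluminal
    "E = 2 x kinetic energy (this interpretation)": 2 * ke,  # gives v
}

for label, E in choices.items():
    print(f"{label}: v_phase = {E / p:.3e} m/s  (particle v = {v:.3e} m/s)")
```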

So, why is this interesting? It means the square of the amplitude of the wave gives the energy of the wave, and that amplitude is located at the wave antinode. For many cases this makes no real difference, but for molecules it is important. The reason I am arguing for this different interpretation is that, if it is valid, it should in principle greatly simplify chemical theory. Can you see how, before the next post? Test your ability at generating theory by assuming the wave description above is correct. Of course you still have to test it later, but you will find it difficult to get anywhere without a provisional assumption.
Posted by Ian Miller on Sep 17, 2018 3:24 AM BST
A long time ago I gave a computer game to my son, and it had characters that aged. If you aged too far, all you were good for was sitting around the campfire telling stories. Maybe I have got to that age, but the 50th anniversary of the Soviet-led invasion of Czechoslovakia has me looking backwards. As some may have realized, especially after my post on an alternative interpretation of quantum mechanics, I sometimes do not fit in with what everyone expects. So it was then. I was doing a post-doc at the University of Southampton under Professor Cookson, and while most people took holidays doing popular touristy things, I did a road trip behind the Iron Curtain. I am putting together a series of posts on that, the first one being at https://wordpress.com/post/ianmillerblog.wordpress.com/867
There will be at least two more, each Thursday. For those who are at post-doc level, or who can recall what they were like, you might want to check them out and see what you might have done. Not a lot of chemistry there; the nearest was comparing Czech and Polish beer, and a search for hydraulic oil. Nevertheless, it was a different summer vacation to anything you will have had.
 
Posted by Ian Miller on Aug 27, 2018 3:58 AM BST
My last post here related to the use of quantum mechanics in chemistry, and it was intended as a prelude to a post about the ebook I had written and was editing. As you may see from the dates, this has taken somewhat longer than I expected. The book outlines a methodology by which, ignoring minor effects, the chemical bond length and energy for covalent bonds involving only s and p electrons can be calculated, often with less than 1% error, solely by means of wave properties, the quantization of action, and the electric field coupling at the wave antinode. The only inputs are quantum numbers, the Exclusion Principle, and the number of electrons, hence simple analytical functions are obtained. The procedure uses atomic orbitals that do not correspond to the excited states of hydrogen, which leads to a previously unrecognised quantum effect, and then counts the number of interactions. For bonds between different-sized atoms, especially hydrides, a wave reflection procedure is proposed that has the consequence that the less the sharing, the shorter the bond. The effects of lone pair interactions and delocalization are presented. A new hybridisation effect is proposed that, in the absence of lone pair back-donation, leads to bond lengthening and weakening when n = 3 and 5.
The basis of this is what I call a guidance wave. The concept is very similar to the de Broglie/Bohm pilot wave, but it has some significant differences. The wave function ψ is, in all interpretations of quantum mechanics of which I am aware, given by ψ = A exp(2πiS/h), where S is the action, and an important point is that action evolves. That means that, from Euler, the wave function becomes real at the antinode. I then make the assumption that the wave front has to travel at the same velocity as the particle, the reason being that in the two-slit experiment the diffraction does not depend on the distance to the slits, and the particle should get there at the same time. That means the square of the amplitude is proportional to the particle energy, and that is why you can calculate the bond properties from the position of the antinode (because the particle can only have one energy). It remains to be seen whether anyone has any interest in this, and the results are not totally accurate; nevertheless, a molecule like Sb2 has a bond energy within a few kJ/mol of the calculated value. At the risk of self-promotion, "The Covalent Bond from Guidance Waves" is at https://www.amazon.com/dp/B07GCDYDRR
Posted by Ian Miller on Aug 12, 2018 3:53 AM BST
The February edition of Chemistry World had an article on the prospects for life throughout our solar system, and this was of interest because I intend to give a paper at an International Conference on Astrobiology in Rotorua in June. In my opinion, many of the statements in this article were overly optimistic, which raises the question: when would chemical signatures indicate even the possibility of life? The problem is, a chemical signal only indicates one thing when the set of possible causes leading to that signal has one element.
The article stated that there were three essential needs for life: an abundance of chemical building blocks (although these were unspecified), liquid water, and an energy source. The article seems to think that heat is adequate for an energy source, but I disagree. I think photons are critical. The reason comes from the thought that one key requirement for life is that it can reproduce. To do that, it needs a functional group that can link the information-carrying mers into a polymer, and that requires two bonds. Such links also need to be able to be hydrolysed, but not too readily. The reason for this is that initially we are going to get random polymerization, and if the consequences are effectively locked away for ever, we run out of raw materials before something sensible appears. Finally the link needs a variable solubilizing ability because to reproduce, there has to be a way to pull the strands apart so they can act as scaffold for new duplexes. (Without a duplex you have no means of transferring information to the new entity.) The only trifunctional linking group that I see as satisfactory is phosphate, which links through ester formation. Further, it is only marginally satisfactory, because divalent cations usually precipitate phosphate. Our modern life forms might be able to use very dilute phosphate solutions, but the initial life forms would not.
The only way I know of that has been shown to lead to adenosine monophosphate (as well as ATP) was powered by light. Accordingly, anything under permanent ice will not get such light. The issue here is not whether life could live there; it is whether it could evolve there. That alone, in my opinion, rules out the ice moons. Equally, if they do have liquid seas, we would expect some weathering of the dust, and the extraction of calcium and magnesium into the waters. That would remove most phosphate from the waters.
A further issue with reproduction is the necessity of having prodigious amounts of reduced nitrogen material. The Saturnian moons avoid this difficulty, as they seem to have, or seem likely to have, ammonia in their oceans, if they have oceans. Enceladus has had ammonia detected in its geyser effluent. Europa has an extremely tenuous atmosphere. The most common species are oxygen and hydrogen, which are products from the photolysis of water. Also present are oxygen atoms, hydroxyl radicals, sodium, and, at up to five orders of magnitude less common than oxygen, carbon dioxide and sulphur dioxide. These species are believed to be formed by photolysis of surface ice, or of ice fragments ejected by sputtering due to high-energy particle impacts. Despite measurements over five orders of magnitude in concentration in barely detectable pressures, no nitrogen species have been detected. This, at least, is in accord with what is outlined in my ebook "Planetary Formation and Biogenesis": Saturnian moons potentially have nitrogen because they were formed by the coalescence of dust/ice, where the ice had methanol and ammonia within it. By the time the dust got to the Jovian system, the ammonia and methanol had boiled away in the higher disk temperatures.
Accordingly, in my opinion, there will be no life in the outer solar system. So what about Mars? That is a more complicated story.
Posted by Ian Miller on Mar 4, 2018 8:44 PM GMT
The usual approach to the chemical bond is to "solve the Schrödinger equation", and this is done by attempting to follow the dynamics of the electrons. As we all know, that is impossible; the equation as usually presented requires you to know the potential field in which every particle moves, and since each electron is in motion, the problem becomes insoluble. Even classical gravity has no analytical solution for the three-body problem. We all know the answer: various assumptions and approximations are made, and, as Pople noted in his Nobel lecture, validation against very similar molecules allows you to assign values to the various difficult terms, and you can get quite accurate answers for molecules of the same kind.

However, you can only be sure of that if there are suitable examples against which to validate. So, quite accurate answers are obtained, but the question remains: is the output of any value in increasing chemists' understanding of what is going on? In other words, can they say why A behaves differently to a seemingly similar B?

There is a second issue. Because of validation and the requirement to obtain results equivalent to those observed, can we be sure they are obtained the right way? As an example, in 2006 some American chemists decided to test some programs that were considered tolerably advanced and available to general chemists on some quite basic compounds. The results were quite disappointing, even to the extent of showing that benzene was non-planar. (Moran, D. and five others. 2006. J. Amer. Chem. Soc. 128: 9342-9343.)
There is a third issue, and this seems to have passed without comment amongst chemists. In the state vector formalism of quantum mechanics, it is often stated that you cannot factorise the overall wave function. That is the basis of the Schrödinger cat paradox. The whole cat is in a superposition of states that differ on whether or not the nucleus has decayed. If you can factorise the state, the paradox disappears. You may still have to open the box to see what has happened to the cat, but the cat, being a macroscopic being, has behaved classically and was either dead or alive before you opened it. This, of course, is an interpretive issue. The possible classical states are "cat alive" (which has amplitude A) and "cat dead" (which has amplitude B). According to the state vector formalism, the actual state has amplitude (A + B), hence the claim that the cat is in a superposition of states. The interesting thing about this is that it is impossible to prove it wrong, because any attempt to observe the state collapses it to either A or B, and the "or" is the exclusive form. Is that science, or another example of the mysticism that we accuse the ancients of believing, and laugh at them for? Why won't the future laugh at us? In my opinion, the argument that this procedure aids calculation is also misleading; classically you would calculate the probability that the nucleus had decayed, and the probability that the rest of the device worked, and you could lay bets on whether the cat was alive or dead.
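The classical bet-laying calculation referred to at the end of the paragraph really is that simple; a minimal sketch with purely illustrative probabilities:

```python
# Classical treatment of the cat: multiply independent probabilities.
p_decay = 0.5          # probability the nucleus has decayed in the interval (illustrative)
p_device_works = 0.99  # probability the detector/poison mechanism works (illustrative)

p_cat_dead = p_decay * p_device_works
p_cat_alive = 1.0 - p_cat_dead

print(f"P(cat dead)  = {p_cat_dead:.3f}")
print(f"P(cat alive) = {p_cat_alive:.3f}")
```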
Accordingly, I am happy with factorizing the wave function. Indeed, every time you talk about a p orbital interacting with . . . you have factorized the atomic state, and in my opinion chemistry would be incomprehensible unless we did this sort of thing. However, I believe we can go further. Let us take the hydrogen atom, and accept that a given state has action equal to nh associated with it. We can factorise that (Schiller, R. 1962. Phys. Rev. 125: 1100–1108) such that
            nh = [(n_r + ½) + (l + ½)]h   (1)
Here, while the quantum numbers count the action, they also count the number of radial and angular nodes respectively. What is interesting is the half quanta; why are they there? In my opinion, they have separate functions from the other quanta. For example, consider the ground state of hydrogen. We can rewrite (1) as
            h = [(½) + (½)]h   (2)
What does (2) actually say? First, there are no nodes. Second, the state actually complies with the Uncertainty Principle. Suppose instead we put the RHS of (2) simply equal to 1. If we assign that to angular motion solely, we have the Bohr theory, and we know that is wrong. If we assign it to radial motion solely, we have the motion of the electron lying on a line through the nucleus, which is actually a classical possibility. While that turns up in most textbooks, again I consider it to be wrong because it has zero angular uncertainty. You know the angular momentum (zero) and you know (or could know if you determined it) the orientation of the line. (The same reasoning shows why Bohr was wrong, although of course at the time he had no idea of the Uncertainty Principle.)
 There is another good point about (2): it asserts the period involves two "cycles". That is a requirement for a wave, which must have a crest and a trough. If you have no nodes separating them, you need two cycles. Now, I wonder how many people reading this (if any??) can see what happens next?
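A minimal check of (1) against the usual hydrogen node counts (radial nodes n_r = n − l − 1, angular nodes l), showing that the bracket always sums to the principal quantum number n:

```python
# Verify nh = [(n_r + 1/2) + (l + 1/2)]h for hydrogen states,
# where n_r = n - l - 1 counts radial nodes and l counts angular nodes.
for n in range(1, 5):
    for l in range(0, n):
        n_r = n - l - 1                  # radial nodes
        total = (n_r + 0.5) + (l + 0.5)  # action in units of h
        print(f"n={n}, l={l}: radial nodes={n_r}, angular nodes={l}, "
              f"(n_r+1/2)+(l+1/2) = {total}")
        assert total == n
```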
Which gets me to a final question, at least for this post: how many chemists are actually happy with what theory offers them? Comments would be appreciated.

 
Posted by Ian Miller on Oct 22, 2017 9:45 PM BST
Following the alternative interpretations theme, I shall write a series of posts about the chemical bond. As to why: I hope to suggest that there is somewhat more to the chemical bond than we currently consider. I suspect the chemical bond is something almost all chemists "know", but most would have trouble articulating what it is. We can calculate its properties, or at least we believe we can, but do we understand what it is? I think part of the problem here is that not very many people actually think about what quantum mechanics implies.

In the August Chemistry World it was stated that to understand molecules, all you have to do is to solve the Schrödinger equation for all the particles that are present. However, supposing this were possible, would you actually understand what is going on? How many chemists can claim to understand quantum mechanics, at least to some degree? We know there is something called "wave particle duality" but what does that mean? There are a number of interpretations of quantum mechanics, but to my mind the first question is, is there actually a wave? There are only two answers to such a discrete question: yes or no. De Broglie and Bohm said yes, and developed what they call the pilot wave theory. I agree with them, but I have made a couple of alterations, so I call my modification the guidance wave. The standard theory would answer no. There is no wave, and everything is calculated on the basis of a mathematical formalism.

Each of these answers raises its own problems. The problem with there being a wave piloting or guiding the particle is that there is no physical evidence for the wave. There is absolutely no evidence so far that can be attributed solely to the wave, because all we ever detect is the particle. The "empty wave" cannot be detected, and there have been efforts to find it. Of course, just because you cannot find something does not mean it is not there; it merely means it is not detectable with whatever tool you are using, or it is not where you are looking. For my guidance wave, the problem is somewhat worse in some ways, although better in others. My guidance wave transmits energy, which is what waves do. This arises because the phase velocity of a wave equals E/p, where E is the energy and p the momentum. The problem is, while the momentum is unambiguous (the momentum of the particle), what is the energy? Bohm had a quantum potential, but the problem with this is that it is not assignable, because his relationship for it did not lead to a definable value. I have argued that to make the two-slit experiment work, the phase velocity should equal the particle velocity, so that both arrive at the slits at the same time, and that is one of the two differences between my guidance wave and the pilot wave. The problem with that is that it puts the energy of the system at twice the particle kinetic energy. The question then is, why can we not detect the energy in the wave? My answer probably requires another dimension. The wave function is known to be complex; if you try to make it real, e.g. represent it as a sine wave, quantum mechanics does not work.

However, the "non-real" wave has its problems. If there is actually nothing there, how does the wave make the two-slit experiment work? The answer that the "particle" goes through both slits is demonstrably wrong, although there has been a lot of arm-waving to preserve this option. For example, if you shine light on electrons in the two slit experiment, it is clear the electron only goes through one slit. What we then see is claims that this procedure "collapsed the wave function", and herein lies a problem with such physics: if it is mysterious enough, there is always an escape clause. However, weak measurements have shown that photons go though only one slit, and the diffraction pattern still arises, exactly according to Bohm's calculations (Kocsis, S. and 6 others. 2011. Observing the Average Trajectories of Single Photons in a Two-Slit Interferometer Science 332: 1170 – 1173.) There is another issue. If the wave has zero energy, the energy of the particle is known, and following Heisenberg, the phase velocity of the wave is half that of the particle. That implies everything happens, then the wave catches up and sorts things out. That seems to me to be bizarre in the extreme.

So, you may ask, what has all this to do with the chemical bond? Well, my guidance wave approach actually leads to a dramatic simplification, because if the wave transmits energy that equals the particle energy, then the stationary state can be reduced to a wave problem. As an example of what I mean, think of the sound coming from a church organ pipe. In principle you could calculate it from the turbulent motion of all the air particles, and you could derive equations to account statistically for all the motion. Alternatively, you could argue that there will be sound, and it must form a standing wave in the pipe, so the sound frequency is defined by the dimensions of the pipe. That is somewhat easier, and also, in my opinion, it conveys more information.
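As a toy version of the organ-pipe point, here is a minimal sketch using the standard idealised open-pipe formula f = n·v/(2L) (end corrections ignored; the pipe length is an arbitrary illustrative value):

```python
# Fundamental and first overtones of an idealised open organ pipe.
speed_of_sound = 343.0  # m/s at room temperature
pipe_length = 1.0       # m, illustrative

for n in (1, 2, 3):  # harmonic number
    f = n * speed_of_sound / (2 * pipe_length)
    print(f"harmonic {n}: {f:.1f} Hz")
```

The pipe's dimensions alone fix the answer; no statistical treatment of the air particles is needed, which is the sense in which the wave description carries the useful information.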

All of which is all very well, but where does it take us? I hope to offer some food for thought in the posts that will follow.
Posted by Ian Miller on Aug 28, 2017 12:19 AM BST
In a recent Chemistry World there was an item on chemistry in India, and one of the things that struck me was that Indian chemists seemed to be criticized because they published a very low proportion of the papers in journals such as JACS and Angewandte Chemie. The implication was, only the "best" stuff gets published there, hence the Indian chemists were not good enough. The question I want to raise is, do you think that reasoning is valid?
One answer might be that these journals (but not exclusively) publish the leading material, i.e. they lead the way chemistry will take in the future. When I started my career, these high profile journals were a "must read" because they were where papers that editors felt were likely to be of general or practical interest to the widest number of chemists were published.
But these days, those sorts of papers do not turn up. There may be new reactions, but they are starting to involve difficult-to-obtain reagents, and chemical theory has descended into the production of computational output. These prestige journals have moved on to new academic fields, which are becoming increasingly specialized, which increasingly need expensive equipment, and which also need a school that has been going for some time, so that the background experience is well embedded. There are exceptions, but they do not last: graphene was quite novel, but not for long. There are still publications involving graphene, but chemists working there have to have experience in the area to make headway. More importantly, unless the chemist is actually working in the area, (s)he will never touch something like graphene. I am certainly not criticizing this approach by the journals. Rather, I am suggesting the nature of chemical research is changing, and I feel that in countries where the funding is not there to the same extent, chemists may well feel they would be more productive not trying to keep up with the Joneses.
Another issue is that, by implication, it is claimed that work published in the elite journals is more important. Who says? Obviously the group who publish there, and the editorial board will, but is this so? There may well be work that is more immediately important, but to a modest-sized subset of chemists working in a specific area. The chemist should then publish in the journal that that subset will read.
My view is that chemistry has expanded into so many sub-fields that no chemist can keep up with everything. When I started research, organic chemists tended not to be especially interested in inorganic or physical chemistry, not because they were not important, but simply because they did not have the time. Now it has got much worse. I doubt there is much we can do about that, but I think it is wrong to argue that some chemistry that can only be done in very richly funded Universities is "better" or more important than a lot of other work that gets published in specialized journals. What do you think?
Posted by Ian Miller on Jul 3, 2017 3:23 AM BST
Some time ago now I published an ebook, "Planetary Formation and Biogenesis", which started with a review including over 600 references, following which I analyzed their conclusions and tried to put them together to make a coherent whole. This ended up with a series of conclusions and predictions about what we might find elsewhere. It was in light of this that I saw the article in the May edition of "Chemistry World". That article put up reasons to back various thoughts as to where life started, but I found it interesting that people formed their views based on their chemical experience, and they tended to carry out experiments to support that hypothesis. That, of course, is fair enough, but it still misses what I believe to be the key point: what is the most critical problem to overcome to get life started, and how hard is it to do?

The hardest thing, in my opinion, is not to make polymers. I know that driving condensation reactions forwards in water is difficult, but as Deamer pointed out in the article, if you can get a lipid equivalent, it is by no means impossible. No, in my opinion, the hardest thing to do is to make phosphate esters. Exactly how do you make a phosphate ester? As Stanley Miller once remarked, you don't start with phosphoryl chloride in the ocean. The simplest way is to heat a phosphate and an alcohol to about 200 degrees C.  Of course, water will hydrolyse phosphate esters at 200 degrees C, so unless you drive off the water, which is difficult to do in an ocean, high temperature is not your friend because the concentration of water in the ocean always exceeds the concentration of phosphate or alcohol. You simply cannot do that around black smokers.

The next problem is, why did nature choose ribose? Ribose is not the only sugar that permits the formation of a duplex when suitably phosphated and bound to a nucleobase; almost all other pentoses do it. So the question remains, why ribose? The phosphate ester is an important solubilizing agent for a number of biochemicals necessary for life, but it invariably occurs bound to a ribose, which in turn is usually bound to adenine. The question then is, is this a clue? If so, why has it gone largely unnoticed? My conclusion was that ribose alone can form a phosphate ester on a primary alcohol group in solution, because only ribose naturally occurs at reasonable concentrations in the furanose form.

It was not always unnoticed. There is a clearly plausible route, substantiated by experiment (Ponnamperuma, C., Sagan, C., Mariner, R., 1963. Synthesis of adenosine triphosphate under possible primitive earth conditions. Nature 199: 222-226.) that shows the way. What was shown here was that if you have a mixture of adenine, ribose and phosphate, and shine UV light that can be absorbed by the adenine, you make adenosine, and then phosphate esters, mainly at the 5 position of the furanose form, so you can end up with ATP, a chemical still used by life today. Why is that work neglected? Could it be that nobody these days goes back and reads the literature from 1963?
Why does this synthesis work? My explanation is this. You do not have to get to 200 degrees to form a phosphate ester. What you have to do is provide an impact between the alcohol group and phosphate equivalent to that expected at 200 degrees. If we think about the experiment described above, there is no way an excited electronic state of adenine can be delocalized into the ribose, so why is the light necessary?

My conclusion was that the excited state of the adenine can decay so that quite a considerable amount of vibrational energy is generated. That will help form the adenosine, but after that the vibrational energy will spread through the sugar. Now we see the advantage of the furanose: it is relatively floppy, and it will vibrate well, and even better, the vibrational waves will focus at C-5. That is how the phosphate ester is formed, and why ribose is critical. The pyranose forms are simply too rigid to focus the mechanical vibrations. Once you get adenosine phosphate, in the above experiment the process continued to make polyphosphates, but if  some adenosine was also close by, it would start to form the polymer chain. Now, if that is true, then life must have started on the surface, either of the sea or on land. My view is the sea is more probable, because on land it is difficult to see where further biochemicals can come from.
Posted by Ian Miller on May 28, 2017 11:45 PM BST