Is science sometimes in danger of getting tunnel vision? Recently published ebook author, Ian Miller, looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?

One interesting paper from the not too distant past involved the reduction of carbon dioxide to either methanol or methane (J. Am. Chem. Soc., 2015, 137, 5332) using lithium o-phenylbisborate as a catalyst. What the catalyst is claimed to have done is to bend the CO2 molecule (highly plausible) and thus form an aromatic ring. It is this last part that I find hard to stomach, because it brings us back to the question, what causes "aromaticity"? Now, I should issue a warning here: I have published what I think causes aromaticity, so I am not exactly unbiased.
 
So, where is my problem? The authors seem to have argued that a six-membered ring is formed (correct) and that there will be 6 π electrons in it, therefore the system will show aromaticity. I suppose if you construct molecular orbitals and then place the electrons in them, there is a case for this. However, my argument is that aromaticity requires 2n + 1 double bonds that alternate with single bonds, which is not quite the same thing. The reason for aromaticity, in this view, lies in the phase of the waves. As when thinking about the Woodward-Hoffmann rules, run the phases around the ring, then keep going. What you find with aromaticity is that the second circuit cancels the first, which cancels the double-bond amplitude, and since the charge has to go somewhere, it goes to the single bonds (the other major canonical structure). But that has the same problem, and as such the classical cyclohexatriene structure cannot exist. Cyclobutadiene, however, finds that the second circuit reinforces the displacement of the first, and so it is locked into the classical structure. Now, the reason I find this reduction paper of interest is that in principle it offers a way to distinguish the alternatives. My model predicts no aromaticity here because the two double bonds in carbon dioxide are orthogonal: the π orbitals cannot overlap with each other and therefore cannot form an extended wave with one polarization.
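For what it is worth, the conventional six-π-electron bookkeeping the authors presumably rely on can be reproduced with simple Hückel theory. The sketch below is my own illustration, not anything from the paper, with α set to 0 and β to -1 in arbitrary units; it shows benzene gaining delocalization energy relative to three localized double bonds, while cyclobutadiene gains none:

```python
import numpy as np

def huckel_pi_energy(n_atoms, n_electrons):
    """Total pi-electron energy of a cyclic polyene in simple Huckel
    theory, in units of |beta| (alpha = 0, beta = -1)."""
    # Huckel Hamiltonian for a ring: beta between neighbouring carbons only
    H = np.zeros((n_atoms, n_atoms))
    for i in range(n_atoms):
        H[i, (i + 1) % n_atoms] = H[(i + 1) % n_atoms, i] = -1.0
    levels = np.sort(np.linalg.eigvalsh(H))
    # Fill the lowest orbitals, two electrons apiece
    energy, remaining = 0.0, n_electrons
    for e in levels:
        occ = min(2, remaining)
        energy += occ * e
        remaining -= occ
        if remaining == 0:
            break
    return energy

ethylene = -2.0  # one localized double bond: 2 electrons at -1

benzene_gain = huckel_pi_energy(6, 6) - 3 * ethylene         # -2: stabilized
cyclobutadiene_gain = huckel_pi_energy(4, 4) - 2 * ethylene  # 0: no stabilization
```

On this bookkeeping, any ring holding 4n + 2 π electrons in a common π system comes out stabilized; the orthogonality of the two CO2 π bonds is precisely what this simple matrix picture does not ask about, which is why the cases can disagree.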
 
Does it matter? I think so. I think it is important that chemists try to understand what is going on. Oddly enough, when I started my career in physical organic chemistry, by and large chemists thought they understood tolerably well most of the reactions of which they were aware. Now there are so many additional reactions, but I am far from convinced the understanding has increased.
 
One final point. The paper ends with a statement that "further studies" are required to adapt the transformation for "practical applications". Methanol and methane will not be among them. This catalyst merely bends the CO2; the actual reductant is either triethylsilane or pinacolborane, both considerably more expensive and harder to obtain than methane or methanol. That hardly seems likely to be "useful", at least from what was demonstrated in this paper.
Posted by Ian Miller on Sep 28, 2015 5:06 AM BST
My last post related to peer review and listed some of the problems with it. The question then arises, why do we want it? I think here that the answer depends on the nature of the paper.
 
Think of the paper that posts data, and as an example, data on a new molecule. It is highly desirable that this data is valid, because while in principle any scientific report should be reproducible, in practice, do we want to reproduce everything? There are something like 90 million molecules that have been reported, many of which have taken a great effort to make. Obviously, it would be highly desirable to ensure that each molecule is reported accurately, and that enough is reported about it so that the work does not have to be repeated. Peer review gives an assessment that adequate methods were used, and that all reasonable data were collected. Furthermore, I know from experience of having done some reviewing that some scientists get so absorbed in their work that they do not realize that the average reader may not be able to unravel what they have done the way they have written it. So, yes, peer review that sends the paper back for revision should improve the paper.
 
However, the problem for me starts when a referee rejects a paper "because it is not very interesting". What that usually means is that it did not interest him. One example from my past: I wrote a paper (with one co-author) on the 13C NMR shifts of acetylated methylated agars. This may not seem very exciting, but as most chemists who use 13C NMR know, substitution changes the chemical shift of nearby atoms. Using a range of seaweed polysaccharides, I showed that because the structures of the sugar units are reasonably rigid, and because the linking oxygen atoms largely insulate one unit from effects on the other, except sometimes immediately about the linking sites, the shifts due to substitution are regular, and you can use such shifts to determine substitution patterns, especially if a number of different operations are carried out varying the substitution on the "mobile" sites. (A mobile site is something like a sulphate ester, which can be removed, or a hydroxyl, which can be substituted with something like a methyl group, or an ester.)
 
Now, what causes a change of chemical shift? I think most chemists would answer that in terms of electron induction effects, wherein a substituent that is a strong electron withdrawer pulls electrons closer to the carbon atom to which it is attached, and the effect is attenuated so that two carbon atoms away (the γ site) there is only a tiny effect. Thus forming a methyl ether will change the chemical shift of the α carbon by about 10 ppm, and the β carbon by about 2 ppm, usually of opposite sign, while a sulphate ester gives similar patterns, but usually about two-thirds the change in shifts. (Note that the change of sign makes electron movement hard to swallow!) What was significant about the acetylations was that the acetyl group makes a relatively small change in shift to the α carbon and a significantly bigger shift to the β carbon (about 4 ppm). Why? My argument is that the change in chemical shift has nothing to do with electron induction at all, but rather with the magnetization field induced by the applied field. The magnetic potential is a through-space effect, not a through-bond effect, and since the magnetic potential is a vector, its orientation is also important. I argued that the reason the acetyl group makes such a big change to the β carbon shift is that the acetyl group rotates about the linkage position, and the distance to the β carbon is actually quite small. Is that interesting? A means of determining substitution patterns on some polysaccharides, and evidence for the mechanism of chemical shifts? I thought so, but I seem to be in a minority. Would it hurt to publish it, given the electronic nature of publishing? Yes, one option would be to submit to another journal, but here I really could not be bothered. Remember, the number of publications has been irrelevant to my career; I have literally been publishing to be helpful, but when someone says they are not interested, then I also lose interest.
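The additive regularity I describe can be made concrete with a toy shift predictor. The increment values below are rounded from the ppm figures quoted above, and the base shift, names, and dictionary layout are purely illustrative assumptions, not values from any paper:

```python
# Hypothetical alpha/beta increments (ppm), rounded from the figures in
# the text; a real table would be built from measured model compounds.
INCREMENTS = {
    "OMe":      {"alpha": +10.0, "beta": -2.0},  # methyl ether
    "sulphate": {"alpha": +6.7,  "beta": -1.3},  # roughly two-thirds of OMe
    "OAc":      {"alpha": +1.0,  "beta": +4.0},  # acetyl: small alpha, large beta
}

def predict_shift(base_shift_ppm, substituent, position):
    """Additively predict the 13C shift of the alpha or beta carbon
    after substitution at a 'mobile' site."""
    return base_shift_ppm + INCREMENTS[substituent][position]

# e.g. a ring carbon at 70.0 ppm after methylation of the attached hydroxyl:
shift = predict_shift(70.0, "OMe", "alpha")  # 80.0 ppm
```

The point of the sketch is only that, once the increments are regular, working backwards from observed shifts to a substitution pattern is simple arithmetic.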
 
My question is, is this the way science should operate? In these electronic days, I believe there should be only two reasons to reject a paper: (a) it is wrong, and the referee should be able to show where, and (b) it adds nothing. By all means send back for clarification, but rejection should be an absolutely last resort. What do you think?
Posted by Ian Miller on Aug 31, 2015 2:58 AM BST
When I started this post, it was to be about search engines, the problem being that a search is never specific enough, and just about anything can be mixed up in the answer. To illustrate the problem, I thought I would Google "Peer review criticisms", and by doing so, I got 3,570,000 hits. Surely they can't all be relevant. However, curiosity got the better of me, and after an admittedly ridiculously inadequate look at the literature, I decided to post about peer review itself. The question to be addressed is, is peer review helpful to unraveling the secrets of nature, or is it part of the problem? So here are some comments that I picked up.
 
However, before I start, a different question might be, does peer review do any good? One possibility is that it sends the half-baked back to the author for further baking. It may also be claimed to help emerging scientists present their ideas in a way that leads to a better understanding of what they are doing. If so, this could be very helpful; however, it only applies to papers sent back for revision, and there is the question of whether papers sent back for revision truly need it. That someone does not write in the style of the reviewer is beside the point. Peer review also eliminates the "crank" stuff, but herein lies the problem: what happens if an upcoming Einstein is labeled a crank? The current argument is, if it is that good, it will find a place eventually. Perhaps, but will it then be read? However, let us return to the literature.
 
First, a quote from a source that shall remain anonymous: "Peer review seldom detects fraud, or even mistakes. It is biased against women and against less famous institutions. Its benefits are statistically insignificant and its risks – academic log-rolling, suppression of unfashionable ideas, and the irresistible opportunity to put a spoke in a rival's wheel – are seldom examined." To that I would add, it is most certainly biased against individuals that do not have a University or major institutional address. The individual scientist does not exist, according to the reviewing system. If you do not have a suitable address, you must be a crank. No need to waste time reading the paper. Herein lies another real problem: papers can be rejected by the Editor without peer review, or even without any evidence of having been read, at least past the title.
 
Now, from www.evolutionnews.org/2012/02/problems_with_p056241.html
This article made six points:
  1. Good science does not have to be published in the peer-reviewed literature. The examples cited include rejections, and I shall deal with those later.
  2. The peer-review system wrongly rejects scientifically valid papers, while it wrongly accepts scientifically flawed papers. Personally, I feel it is too much to expect a peer reviewer to find fraud, and you cannot expect peer review to uncover faulty experimental procedures that are not specified. In this respect, journals are increasingly encouraging methods to be given as a reference to somewhere else, which in turn is a reference to somewhere else again, and so on. However, the article also makes the valid point that peer-reviewing is both time-consuming and expensive, and often excludes people for no good reason.
  3. Scientific peer-reviewers are not perfectly objective. Again, the performance of the reviewers is questioned, but a very interesting point followed: journals have a very strong economic interest in preserving the current system, and scientists go along with it because it helps them maintain their position. In my mind, this is not a good reason to persist with it.
  4. The "peer-review card" is often played to silence scientific dissent. This, to my mind is a serious criticism, although more of the scientific elite than the reviewing system.
  5. Peer-review is often biased against non-majority viewpoints. Denyse O'Leary is quoted as saying: "The overwhelming flaw in the traditional peer review system is that it listed so heavily toward consensus that it showed little tolerance for genuinely new findings and interpretations."
  6. Not being recognized in peer-reviewed literature does not imply a lack of scientific merit. The simple fact is, in logic, the contrary position is a fallacy in the ad verecundiam class.
In a later post I shall continue on this theme, but before I do, what are your thoughts on this matter?
Posted by Ian Miller on Aug 16, 2015 9:25 PM BST
Since my last post, I received a telephone questionnaire relating to what the RSC should be doing, which raises the question, what should it be doing? First, some things are obvious, particularly relating to practicing chemists, and in my opinion, the RSC does these rather well. There are obviously some additional things it could be doing, and any organization has room for improvement, but I suspect the average response to such a questionnaire will be to make minor adjustments to what is already there. Leaving aside the need to work for chemists, I think there are three major areas for the society to consider.
 
The first is to get a basic understanding of chemistry across to the general public. In the recent RSC poll, 55% of the public believed it is important to know about chemistry, but I bet most who answered that way would admit they know very little. By basic understanding, I mean enough to understand the problems the world faces, enough to see them safely through their lives, and enough to sense whether someone making a public statement is speaking sensibly. In New Zealand recently, one family died of accidental carbon monoxide poisoning; had they understood the problem, this would not have happened. I know you can never totally prevent such incidents, nor can you cover every eventuality, but I think the Society could be more active in trying, perhaps by giving some basic information in an easily comprehended form on its website, or perhaps by making educational YouTube videos.
 
The second is to make a broader explanation of certain important environmental issues available to the general public. I have seen a lot of irrational and false comments about matters like climate change, and I feel the Society should make a bigger effort to show the public how to handle the chemical aspects, or perhaps, with the Institute of Physics, to present a proper overall picture, including a discussion of what we do not know for sure. The problem, as I see it, is that science tends to present very technical statements with proper scientific statements of uncertainty, but the public cannot understand them, and instead fall prey to "snake oil" merchants. I am not saying there cannot be dissent, but the public must realize that dissent requires logical analysis and evidence. I think the public is smarter than we give them credit for, BUT they lack specific information.
 
The third is that the Society should show how chemistry can assist the economy, and not just by helping big companies. In the RSC survey, many people believed that chemistry can help the economy, and the fact is that the wealthier countries now have a strong knowledge input into their economies. The Society can also highlight problems smaller companies have and make basic chemical information available. Not all companies are associated with a university, and it is important that graduates, when they go into the world, have the opportunity to stay on top of their profession.
 
There will be other things the Society could do, and many may be more significant, but they are my thoughts as to what could be done. What are yours? One of the better things the RSC does is to put up blogs like this, and while this offers the chance to involve all the members in idea creation, that only works if the members participate, so why not throw in your thoughts?
 
Posted by Ian Miller on Jul 27, 2015 3:35 AM BST
Towards the end of last year, Nature published two articles that raised issues with the peer review system. In one, the claim was made that the reviewer of one paper had not even read it, because he commented on something that might usually be found in such a paper but in this case was not. Naturally, this would be a cause for concern for the author. However, the article then went on to a somewhat deeper issue, namely that in 2013 the number of articles indexed in Elsevier's Scopus rose to 2.7 million. Not all of these would be peer reviewed, but you see the problem: there are just not enough experts to deal with this flood of material. What happens is that scientists are given papers that are further outside their specialty, and yes, the reviewer may be able to evaluate the methods and results sections but, according to the claim, lack the expertise to evaluate the introduction and discussion.
 
The article then made the claim that reviewers should verify that the authors are quoting the right literature to support their views. Now, I dispute this. The reason for citing literature is that the papers cited put forward views or results that are of significance to the argument that will follow; the introduction of a paper is not a general review of everything vaguely associated with the topic. So the first question is, is the work novel? The probability of recognizing plagiarism unfortunately increases very rapidly only as the expertise and experience of the reviewer become more significant. Recall the word "peer"? If the work is accepted as novel, the next question is, are there any facts or assumptions seemingly pulled out of thin air that should be referenced to some other paper? If so, those papers should be referenced, but the reviewer should not be expected to do a full literature search; surely the author has to take responsibility. The important thing is that the author does not claim that to which he is not entitled. The third question is, is there a reference that shows the work to be wrong? Again, an expert in the field should be able to show this, but how do we find enough such experts? Experts, by definition, are a small fraction of the total scientists.
 
The second Nature article illustrated what I believe is a far more insidious problem: the peer review loop. Authors are often asked to recommend peer reviewers, so they do, from within a small group of friends. In return, they know they will be recommended back. In a nice cozy circle, of course you recommend publication, with maybe the odd correction to show you did something. Even worse was the case where one author wrote his own reviews, and sent them to colleagues who in turn would submit them to the editor. This was caught out because everyone did it far too quickly. If you are going to cheat the system, obviously you should do it slowly, when the editor will finally be glad to have something on his table!
 
I believe recommendations from scientists should stop, but that then raises the question, how does the editor find reviewers? Personal knowledge is valuable, but with the great flood of papers coming in, can the editor know enough experts that those he chooses are not overwhelmed? Then there is a problem with multi-author papers. If you look at some of the papers regarding results from NASA space probes, there may be up to fifty authors cited. In other words, anyone who knows enough is already an author.
 
But this raises the questions, why would scientists want to game the system, and why is peer review required? The answer to the first appears to be, to get more papers that go towards building up a reputation for promotion, awards, whatever. I suspect the scientific literature would become a lot more manageable if this practice were to stop. The answer to the second might come from something similar to the physicists' arXiv: the authors can publish anything in e-form, but it is then open to public peer review, in which relevant comments from others can be attached. If the paper survives, it is worth keeping. If there is obvious plagiarism, then the work is deleted and the plagiarist publicly identified. If some references have been left out, they can be added, although some check on relevance as opposed to self-citing would be required. If there is evidence the work is wrong, then that work can be submitted as a linked paper, and if the first paper is not adequately defended, then everyone will disregard it. Let the scientific community do the peer review. This would not have worked while journals were printed on paper, but when they are essentially e-journals, why not? What do you think?
 
Posted by Ian Miller on Jul 5, 2015 11:47 PM BST
Those funding research want the outcome of their funding to be useful, which means that the research either leads to something useful, or it does so indirectly by inspiring someone else. The problem with this attitude is that it inspires some writers to present their research in a way that looks a lot more relevant than it actually is. The best that can be expected is that it will inspire someone else to pick up the challenge. But how does that come about?
 
One interesting problem involving invention, or the development of new technology, is the ability to see things before others do: to see the possibilities, and also to quickly see what will become a dead end well before it wastes too much of your time. So with that in mind, what to make of a recent paper on CO2 reduction (JACS 137: 5332)? The statement made in the paper is that in the presence of Li2[1,2-C6H4(BH3)2], CO2 can be reduced to methane or methanol if a suitable reducing agent is present. What appears to happen is that the CO2 forms some sort of complex with the two BH3- groups, the CO2 molecule becomes bent, and an aromatic system is formed. This would significantly lower the activation energy for reaction at the carbon atom, hence the catalytic activity. Unfortunately, the reducing agent is not one that obviously comes to mind for making bulk chemicals.
 
So, is this useful? The first thing to note is that we need to get rid of CO2, and in principle, this does it. Why "in principle"? Because, when we look at the table of yields, the yields quoted are those of the reducing agent's products. Thus triethylsilane was used as the reducing agent, and all the products noted were silicon derivatives. This certainly supports the concept that a reduction occurred, BUT it gives no clues for the specific reactions, that is, how much of what product was obtained. Elsewhere, it adds that the products were either methane or methanol, and one diagram shows excellent yield of an intermediate, and implies up to 89% yield of methanol. That sounds promising, until we start to consider the reducing agent. If you have to use reducing agents like triethylsilane, then the reagents will be far more expensive and difficult to get than the products, particularly since several mole equivalents will be required.
 
In my opinion, as it stands it is unlikely to be useful, as it uses difficult-to-get reagents to make commonly available chemicals. On the other hand, it seems to me that there is potential here to make use of the aromatic intermediate in some way yet to be discovered. This is the challenge of chemistry.
 
How would you respond to this challenge? My guess is that pressurized hydrogen with a hydride transfer agent that acted catalytically would be the best bet for methanol. However, some reactive anion that did not react with the boron groups might offer the chance to add to the carbon atom and make a carboxylic acid, which in turn might be of synthetic interest. Your opinions?
Posted by Ian Miller on Jun 21, 2015 11:06 PM BST
By the title, I mean, what sort of body is it, and how did it form? Ceres is the largest body in the asteroid belt, and it is essentially spherical, from gravitational energy minimization. It lies at a distance at which the remaining bodies are mainly carbonaceous asteroids, made of rock with some water and organic material. It should be noted that the part of the asteroid belt closest to the star contains mainly silicaceous asteroids, so an interesting question is, how did these different bodies form? The issue is made more complicated because there are also some, such as Vesta, that appear to have an iron core. To get an iron core, the temperature of the body had to get above 1538 °C, yet the evidence from meteorites is that the carbonaceous bodies never got above ~200 °C. How did all this happen?
 
In my Planetary Formation and Biogenesis, I supported the hypothesis that Vesta and one or two other bodies really formed much closer to the star, and were moved out by gravitational interactions, their orbits becoming circularized where they are now. If that is right, that gets rid of that problem. Setting aside Vesta and similar asteroids, why are there two major classes of asteroids that are quite different? Within my interpretation, planetary formation starts basically through chemistry, and the bodies stick together initially through chemical (including physical chemical) interactions. As for these asteroids, I suggested they formed by different methods, and consequently should have different chemical compositions. In particular, the carbonaceous ones formed as or after the accretion disk cooled down. The concept was that at higher temperatures, organic materials such as methanol, known to be in the disk, pyrolysed on silica particles and formed tarry material, and later this tarry material permitted bodies to stick together. One possible reason why the bodies are so small is that the tar would only be sticky over a modest range of temperatures. The net result, based on meteorite samples, is that the carbonaceous asteroids tend to be black, with various small rock crystals distributed through them. Accordingly, if you break the meteorite, the interior remains black, or at least dark coloured.
 
The puzzle, for the moment, is that the space vehicle Dawn has observed bright spots on Ceres' surface. These are quite white, in amongst the otherwise depressing grey-black. The nature of them is not that difficult to explain: in my opinion, they are most likely to comprise exposed ice. The problem then is, how does ice get there? Carbonaceous chondrites have between 3 and 22% water in them, but this water level may be inflated because the rocks have been lying around on Earth for some time before being picked up. The density of Ceres relative to water is 2.17, which means that in the absence of an iron core, the composition is richer in silicates than anything else. (Granite and silica tend to have densities of the order of 2.5, while olivines/pyroxenes will be of the order of 3.3.) One possibility is that on differentiation, the ice melted and accumulated, like lava, as deposits. But if that were the case, why would not the impact simply remelt it and mix it with the silicates? That would leave at best very dirty ice.
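The bulk-density argument can be made quantitative with a simple two-component mixing model. The sketch below uses assumed round densities (0.92 for ice, 3.3 for olivine/pyroxene rock, relative to water) and ignores porosity and any iron core:

```python
def ice_fractions(rho_body, rho_ice=0.92, rho_rock=3.3):
    """Volume and mass fraction of ice in an ice/rock body of bulk
    density rho_body (all densities relative to water)."""
    # rho_body = v_ice*rho_ice + (1 - v_ice)*rho_rock, solved for v_ice
    v_ice = (rho_rock - rho_body) / (rho_rock - rho_ice)
    # Convert the volume fraction to a mass fraction
    m_ice = v_ice * rho_ice / rho_body
    return v_ice, m_ice

v, m = ice_fractions(2.17)  # Ceres: roughly 47% ice by volume, 20% by mass
```

Even on these crude numbers, Ceres could carry around 20% water by mass, near the top of the 3-22% range measured in carbonaceous chondrites, which is consistent with it not being a typical member of that population.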
 
The white spots appear to be within craters, so it is possible that the impacts melted water deeper below, and the water subsequently flowed out and solidified. That requires the water not to be dirty, to give the bright spots, and that suggests there were richer deposits of ice at some depth below the surface; after impact, the pressure of steam cleared a pipe through the rock, and later the residual water flowed to the surface. So, what are the options? As I see them, Ceres may be an abnormally large carbonaceous asteroid whose water has been mobilized by impact. The other possibility is that Ceres started life in the Jovian accretion zone, and was thrown inwards, during which it picked up more dust. This assumes it started life a bit like Ganymede/Callisto (densities between 1.83 and 1.93), and gained more dust and silicates on its surface. My guess is it started life in the Jovian region because that is the easiest way for it to get so big. If this is so, Ceres is not a typical body within the carbonaceous asteroid distribution, and Dawn will add no more information as to their formation. What remains to be seen is what information Dawn can gain.
 
Assuming all goes well, I shall add a photo of such spots and you can form your own opinion. The photo is, of course, due to NASA. What do you think?


Posted by Ian Miller on Jun 7, 2015 11:46 PM BST
In two previous posts, I have mentioned two of the seven sins of academics in the natural sciences discussed in an article by van Gunsteren (Angew. Chem. Int Ed. 52: 118-122). The third sin was insufficient connection between data and hypothesis, or over-interpretation of data. My personal view is that this is not a sin at all, as long as you are honest about what you are doing. Perhaps the best-known example is that of Kepler. Strictly speaking, his data were not really sufficiently robust to justify his laws, but Kepler decided (correctly) that the planets should follow some sort of function, and the ellipse fitted the data better than anything else. Similarly, in one sense it was an act of faith for Newton to accept Kepler's laws as laws, but look what came from it. My view is that, as long as you are honest, there is no harm in drawing a conclusion from data that does not fully support it, as long as it is clear what you are doing, and as long as the conclusion is not put to a critical use. Thus if considering whether something is safe, then if the data do not prove safety, it does not hurt to hypothesise that it could be safe, as the hypothesis takes everyone forward, but only if it is clear that it is a hypothesis.
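Kepler's leap can be illustrated with his third law: his observations were rough by modern standards, yet the regularity he committed to is striking. The sketch below uses modern rounded values (semi-major axis in AU, period in years) purely as an illustration of how well T²/a³ holds:

```python
# Modern rounded orbital elements; Kepler worked from far rougher data.
# Format: planet -> (semi-major axis in AU, orbital period in years)
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}

# Kepler's third law: T^2 / a^3 should be the same constant for all planets
ratios = {name: T**2 / a**3 for name, (a, T) in planets.items()}
# Every ratio comes out within about 1% of 1.0
```

The point is not the code but the epistemology: committing to the law before the data strictly demanded it is exactly the "over-interpretation" that, done honestly, moved the field forward.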
 
The next sin mentioned is the reporting of only favourable results. Here I am in total agreement. If some result does not support your hypothesis, you should investigate it thoroughly, and if it persists, you should not only report it but confess that the hypothesis is wrong as stated. To me, it is a sin, albeit a less serious one, to report the data and make no comment on it. The statement that it is unexpected, or stating it and ending the sentence with an exclamation mark, is not adequate. The reason is that, in logic, ONE observation that cannot be explained by the theory is sufficient to falsify it.
 
Another sin mentioned was the neglect of errors found after publication. If the error is in the reporting of the data, such as a spectral peak listed in the wrong place, obviously this should be reported. However, I am less sure about reporting errors that make no significant difference and do not conclude the matter. In my opinion, it is almost as big a sin to put out a sequence of papers on the same subject with a conclusion that moves around a little from paper to paper. If the first conclusion is near enough, in my opinion there should be no corrections until the author is convinced the subject is sorted. There is far too much in the literature already, without salting and peppering it with minor variations, none of which significantly improve the issue.
 
The remaining sins listed were plagiarism and the direct fabrication of data. I agree these are bad sins, but do they actually happen? I have heard there are examples from students, but surely this is as much the fault of supervisors. I would hope that professional scientists would never even think of this. As far as I know, I have never run across an example of either of these. Have you?

 
I realize these opinions might be controversial, but so what? I hope it does stimulate discussion. I also think the list given in this article is incomplete, and I feel there are more sins that are equally bad (except possibly for the last two). More on them some other time.
Posted by Ian Miller on May 25, 2015 4:54 AM BST
In January of this year I started a series of posts based on an article in Angew. Chem. Int Ed. 52: 118-122, where van Gunsteren mentioned the seven deadly sins of chemists. I commented on the first one (inadequate descriptions of methodology), inspired in part by an example that held up progress on my PhD, when an eminent chemist left out a very critical piece of the experimental methodology and I was not smart enough to pick it, but then I got distracted by a series of what I thought were important announcements, coupled with one or two things that were happening in my life.
 
The second sin was "Failure to perform obvious, cheap tests that could repudiate or confirm a model, theory or measurement." The defence, of course, is that the experimenter did not think of it, and I am far from thinking one should blame an experimenter for failing to do the "obvious". The problem with "obvious" is that it is always so when pointed out in retrospect, but far from it at the time. Nevertheless, late in my career I have an example that is a nuisance, and in this case it is not even chemistry, but rather physics. My attempts at understanding the chemical bond, and, for that matter, some relationships I found relating to atomic orbitals (I. J. Miller, 1987, Aust. J. Phys. 40: 329 – 346), led me to an alternative interpretation of quantum mechanics. It is a little like de Broglie's pilot wave, except that in this case I assume there are only physical consequences when the wave is real, which, for a travelling wave, from Euler's relation, is once per period (twice for certain stationary states). As with the Schrödinger equation, the wave here is fully deterministic. (For the Schrödinger equation, if you know ψ for any given set of conditions, you know ψ for any changed conditions, hence the determinism. The position of the particle is NOT deterministic; the momentum is, in as much as it is conserved, but not at a specific point in space.) Now, my interpretation of quantum mechanics has a serious disagreement with standard quantum mechanics over the delayed quantum eraser. Let me explain the experiment, details of which can be found in Phys. Rev. Lett. 84: 1 – 5.
 
But first, for those who do not know it, the two-slit experiment. Fire electrons at two slits spaced appropriately and, on the screen behind, a diffraction pattern eventually builds up. Now suppose you shine light on the far side of the slits. As an electron emerges from a slit (and an electron goes through only one slit) it scatters the light, so you know which slit the electron passed through. However, the diffraction pattern now disappears, and the resultant pattern is two strips; if the photomultiplier can assign each signal to a specific electron (which requires low intensity), it is found that a given strip is specific to a given slit. Standard quantum mechanics states that it is because you know the passage that there is no diffraction: by knowing the path, you have converted the experiment into a particle experiment, and all wave characteristics are lost. You can know particle properties or wave properties, but not both.
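The fringe/no-fringe distinction can be captured in a few lines. The following is a minimal numerical sketch; all the numbers are illustrative assumptions on my part, not taken from any particular experiment. With coherent superposition, the screen intensity goes as cos² of the phase difference; with which-path information, the two contributions add as probabilities and the fringes wash out.

```python
import numpy as np

# Minimal two-slit sketch: two idealized point slits a distance d apart,
# screen at distance L, de Broglie wavelength lam. All values are
# illustrative assumptions, not from any specific experiment.
lam = 50e-12                          # ~50 pm electron wavelength (m)
d = 2e-6                              # slit separation (m)
L = 1.0                               # slit-to-screen distance (m)
x = np.linspace(-2e-3, 2e-3, 2001)    # positions across the screen (m)

# Coherent case: path difference ~ d*x/L at small angles, so the two
# amplitudes superpose to give I proportional to cos^2(pi*d*x/(lam*L)).
fringes = np.cos(np.pi * d * x / (lam * L)) ** 2

# Which-path case: the two contributions add as probabilities, not
# amplitudes, so no fringes (flat here, since the slits are idealized
# as points with equal illumination).
no_fringes = np.full_like(x, 0.5)
```

The only difference between the two cases is whether amplitudes or probabilities are summed, which is the formal content of "you can know particle properties or wave properties, but not both".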
 
Now, this experiment starts the same way, except with photons, but behind the slits there are two down-converters, each of which turns a given photon into two photons of half the energy. One, called the signal photon, goes to the photomultiplier, while the other, called an idler photon, sets off on a separate path from each down-converter, so at this point there are two streams that define which slit the parent photon went through. Accordingly, by recording the signal photons paired with one of these streams, it is known which path the signal photon took, and there should be no diffraction pattern if standard quantum mechanics is correct on this issue. What was actually done was that each stream was directed at a beam splitter; half of each stream of idler photons went to a separate photomultiplier, and when the paired signal photons were studied, there was no diffraction pattern. The other half of each stream went to two further beam splitters such that the beams were mixed and knowledge of which slit the parent photon went through was lost; the paired signal photons then gave a diffraction pattern. Weirder still, the path lengths were such that whatever the idler photons did occurred after the signal photons had been recorded, i.e. the diffraction pattern either occurred or did not occur depending on a future event.
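One quantitative detail of the Phys. Rev. Lett. result is worth keeping in mind: the two "erased" coincidence subsets show fringes that are out of phase with each other, so their sum carries no fringes, and the full, unsorted signal record shows no pattern either way. Here is a minimal sketch of that bookkeeping; the cos²/sin² densities are the standard idealized forms, my assumption rather than the paper's fitted data.

```python
import numpy as np

# Idealized coincidence bookkeeping for the delayed-choice eraser.
# x is screen position in units where one fringe spans 2*pi.
x = np.linspace(0, 4 * np.pi, 1001)

# Signal photons paired with the two "erased" idler outputs show
# complementary fringe patterns, out of phase by pi (idealized forms):
erased_1 = np.cos(x / 2) ** 2
erased_2 = np.sin(x / 2) ** 2

# Signal photons paired with the which-path idler detectors show no fringes:
which_path = np.full_like(x, 0.5)

# The unsorted signal record is the average over all idler outcomes;
# the complementary fringes cancel, leaving no pattern at all.
total = 0.5 * (erased_1 + erased_2)
```

The patterns are only visible after sorting the already-recorded signal data by idler outcome, which is what makes the experiment so easy to describe misleadingly.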
 
So where is the sin? Do you see what should have been done? The alternative explanation may seem a bit hard to swallow, but is it harder than believing the photons would give a diffraction pattern or not depending entirely on what was going to happen in the future? Remember, the idler photons could have been sent to Alpha Centauri for the critical mix/no-mix step, and the theory states clearly that the signal photons will, er, what? Rearrange the records eight years later if the physicist does something different at the other end?
 
What I would have liked to see was for one of the streams of idler photons heading to the mixing step to be blocked. The theory is that, in the down-converter, it is possible that only one of the two photons carries the diffraction information, and whether it goes to the signal or the idler photon is a matter of chance. However, the next beam splitter could split idler photons not by chance but by whether they carry the diffraction information, or the appropriate polarization. The difference is that the separation is causal, and nothing to do with what the experimenter knows. If the partners of the two streams of idler photons heading to the mixing step carry the diffraction information, cutting out one of those streams will merely delete half of the information (because only half the signal photons are now counted), provided the patterns arise deterministically (and recall that, in terms of wave properties, the Schrödinger equation is deterministic). If the experimenter's knowledge is critical, then the diffraction pattern will go, because the experimenter knows which path the photons have taken.
 
The point is, if physicists over the last decade have not commented on this, then maybe it is not that obvious. Maybe it is not a sin not to do the "obvious", because it is seldom obvious at the time. Hindsight is great, but if you did not see the sin before I told you, maybe you will be more generous when others appear to have sinned.
Posted by Ian Miller on Apr 21, 2015 4:37 AM BST
Ever wondered why planets rotate the way they do? All the outer ones have prograde rotation, i.e. they rotate as if they were rolling along their orbits. However, Mercury and Venus are exceptions. Mercury has a very slow rotation that is explained by a tidal resonance with the sun, so that is no mystery, but Venus rotates slowly, and the wrong way. Most people have viewed this in terms of the standard theory of planetary accretion, in which the central body is hit by a large number of planetesimals, or even larger bodies, from random directions, and the resultant spin is a result of preferential strikes. Earth may well have included this effect when it was struck by Theia to form the Moon; in that case the Moon's orbit also takes up angular momentum from the collision. Venus has no moon and it spins slowly, so, the theory went, it was just unlucky and got hit the wrong way at the end by something big. But if that were the case, why no satellite?
 
A recent paper in Science (346: 632 – 635) puts a different picture on this. If the planet has an atmosphere, atmospheric temperatures oscillate between night and day, which creates large-scale mass redistribution within the atmosphere, the so-called thermal tides. The asymmetry arises because the hottest part of the day is a few hours after midday, due to the thermal inertia of the ground. Because of this asymmetry in atmospheric mass redistribution, the stellar gravity exerts a non-zero torque on the atmosphere, and through frictional coupling the spin of the planet is modified; this is why Venus has retrograde spin. Atmospheric modelling then showed that the resultant torques for a planet in the Venusian position with a 1 bar atmosphere are an order of magnitude stronger than for Venus itself, mainly because Venus's very thick atmosphere scatters or absorbs most of the sunlight before it reaches the surface. As a consequence, rocky planets in the habitable zone around lower mass stars may well have retrograde rotation.
 
During these posts the reader may have noticed that I sometimes view computer models with scepticism. Here are two examples that illustrate why. The first is from Planet. Space Sci. 105: 133 – 147, where two models were made of atmospheric precipitation on Mars ca 3.8 Gy BP. The valley network analysis suggests an average of 1.5 – 10.6 mm/d of liquid water precipitation, whereas the atmospheric model predicts about 0.001 – 1 mm/d of snowfall, depending on CO2 partial pressure (which varied from 20 mb to 3 bar in the models) and with global mean temperatures below freezing. The authors suggest this shows there was a cold early Mars with episodic snow-melt as the source of the run-off. I rather fancy it shows something has been left out of the analysis, i.e. there is something we do not understand, because all the evidence to date makes a persistent 3 bar atmosphere most unlikely, and even then the two ranges only meet by a near miss at the extremes.
 
The other example came from Icarus 252: 161 – 174, where an extensive suite of terrestrial planet formation simulations showed that the rocky planets have overlapping stochastic feeding zones. Worse, the feeding zone of Theia, the body that formed the Moon, has to be significantly more stochastic than Earth's, so the probability that the two would have the same isotopic composition is very small, yet the measured isotopic compositions are essentially identical. The authors state there is no scenario for the Moon's origin that is consistent with its isotopic composition and is also a high probability event. Why not concede that the premises behind the model are wrong? And there, in my opinion, is the basic problem. Almost nobody goes back and checks initial assumptions once they have been accepted for a reasonable time. And if you do, as I have done for planetary formation, nobody cares. As it happens, each of these issues is properly accounted for in my Planetary Formation and Biogenesis.
 
There is a clear published model from Belbruno and Gott (Astron. J. 129: 1724 – 1745) that would permit the Moon to have the same isotopic ratios as Earth; it assumes Theia accreted at one of the two Lagrange points, L4 or L5. (Lagrange points are where the gravitational effects of two major bodies more or less cancel, and a third body can stay at L4 or L5 indefinitely, as long as it does not become big enough to be gravitationally significant itself. L4 and L5 are actually saddle points, but bodies that fall off the "saddle" experience net forces that pull them back, so they carry out motion about the point. Jupiter's Trojans are examples.) So why did these other authors not cite this model as a possible way out of their problem? One possible reason is that they have never heard of it: the model is almost never cited, partly because within the standard model of stochastic accretion of ever larger bodies nothing could accrete at the Lagrange points, since collisions would knock it off. So now we have a problem. The standard model would not permit the conditions under which the Belbruno and Gott model would explain the observations, but the observations also effectively falsify the standard model. So, what will happen? Because there is no way to have discussions on topics such as these, other than in blogs, the whole issue will be forgotten for some length of time. Progress is held up because the modern literature contains so much information that the relevant pieces are not always linked together.
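The geometry of L4 and L5 is simple: each forms an equilateral triangle with the two major bodies. A minimal sketch of that geometry, treating the primary as sitting at the barycentre, which is a good approximation when the secondary is much lighter:

```python
import math

def triangular_points(r):
    """(x, y) of L4 and L5 for a secondary orbiting at radius r.

    Primary at the origin, secondary at (r, 0). L4 leads the secondary
    by 60 degrees, L5 trails by 60 degrees, each an equilateral-triangle
    vertex with the two bodies. Assumes the secondary's mass is small,
    so the primary approximately coincides with the barycentre.
    """
    l4 = (r * math.cos(math.pi / 3), r * math.sin(math.pi / 3))
    l5 = (r * math.cos(math.pi / 3), -r * math.sin(math.pi / 3))
    return l4, l5

# Sun-Jupiter system, radius in AU (approximate); the Trojan swarms
# librate about these two points.
l4, l5 = triangular_points(5.2)
```

By construction each point is the same distance (here 5.2 AU) from both the Sun and Jupiter, which is why a small body parked there can keep station indefinitely.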
Posted by Ian Miller on Apr 5, 2015 11:33 PM BST