Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?


The November Chemistry World had an article on homochirality, asking, "How did it evolve?" Clearly a problem, because the article did not really offer a solution. The problem is that the biogenetic chemicals should have been formed in the D and L forms equally. So why do we have D sugars and L amino acids? First, as the article points out, for all we know the Universe contains as many worlds that made this choice as worlds that made the opposite one. There is no reason to believe that D sugars are somehow superior, and certain red algae have polysaccharides based on alternating D and L galactose, so there is nothing that prevents the opposite form. So, how did homochirality evolve? The article offers a good survey of the guesses as to how an initial preference would feed on itself, but the problem then is, why was there an initial preference? In most cases, any means of obtaining a preference would appear to be too small to make any significant difference.
 
In my ebook, Planetary Formation and Biogenesis, I suggest there are two better questions. The first is, why did homochirality evolve? The second, and more important, is, why choose ribose, and having done that, why the furanose form? I think the answer to the last one is important. It is possible to make duplexes out of a number of pyranose pentoses, including ribose, and all of them have a slightly stronger association energy than the ribofuranose. My suggestion is that the furanose form does something the pyranose form does not, in which case the reason for choosing ribose is clear, even though ribose is one of the least likely sugars to be formed from a synthesis that would offer a mixture: it alone has a reasonable amount of the furanose form in solution. So the question then is, why prefer furanose?
 
The first step towards RNA in biogenesis is to join a purine or pyrimidine to ribose. This is a simple condensation reaction, but it does not work very well for purines, and not at all for pyrimidines, in aqueous solution. The condensation reaction is thermal, so there has to be a means of heating the reactants more strongly, or alternatively, providing more vibrational energy at the reactive site. The formation of the phosphate ester at C-5 is also a condensation reaction. We know that both reactions go photochemically for adenine, ribose and phosphate, and since direct photochemistry is unlikely because adenine only absorbs photons at about 250 nm or less, I suggested there could be a different mechanism: absorption of visible light by something like a porphyrin and subsequent thermal energy transfer. If so, the reason for preferring the furanose is that it alone is flexible enough to transfer the vibrational energy to C-5, and hence it is the only form that will get to a phosphate ester.
 
If so, then the origin of homochirality is reasonably obvious. The RNA form condenses photochemically, until the RNA polymers get long enough to act as ribozymes. Once they do that, they can depolymerize as well as catalyse polymerization. For a while, anything might be formed, but once a homochiral polymer strand is formed, it can form a helix that will act as a template for a double helix. Once it does that, if the duplex separates, we have two templates. Reproduction needs the duplex, and the duplex will not form if the strands have mixed chirality. Once reproduction starts, whatever structure was selected will predominate. If you need homochirality to reproduce, and if, once reproduction starts, that form will predominate, then surely homochirality is inevitable.
 
This will be my last post here for 2015, so may I wish readers a very merry Christmas, and a successful 2016.
Posted by Ian Miller on Dec 13, 2015 10:36 PM GMT
On the international scene, it is often difficult for nations to make decisions when more than one of them is involved, but occasionally an issue comes up where it is difficult to even know how to make the decision. Climate change is one of those issues. Leaving aside some recidivists, the mechanism of greenhouse forcing is now reasonably clearly known, and accepted by the scientific community, and, judging by the recent marches, by a reasonable fraction of the public. Less well accepted is what is essentially hysteresis, which means that what happens depends on what has happened before. Almost certainly, we are not currently in a climatic equilibrium (if we ever were). Another point that many seem to have trouble with is that if there is a net heating, or positive power input, it does not follow that temperatures will increase at selected points. The obvious example is that heat going into the polar regions and melting ice does not raise the temperature. But even more significant, if some areas are getting hotter, and the poles stay the same, we have a greater temperature difference, which permits a stronger heat engine (storms) to develop. Stronger cold winds flowing from the poles will cool some regions, even if, overall, the planet is heating.
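The heat-engine point can be made quantitative with the Carnot limit, which caps the fraction of a heat flow that any engine, storms included, can turn into work. A minimal sketch (the temperatures here are purely illustrative, not measured values):

```python
# Carnot limit: the maximum efficiency of a heat engine running between a
# hot and a cold reservoir is 1 - T_cold / T_hot (temperatures in kelvin).

def carnot_efficiency(t_hot: float, t_cold: float) -> float:
    """Upper bound on the fraction of heat flow convertible to work."""
    return 1.0 - t_cold / t_hot

# Illustrative numbers only: tropics ~300 K, poles ~250 K.
before = carnot_efficiency(300.0, 250.0)

# Tropics warm by 3 K while the poles, buffered by melting ice, stay put.
after = carnot_efficiency(303.0, 250.0)

print(f"before: {before:.3f}, after: {after:.3f}")  # before: 0.167, after: 0.175
```

A bigger temperature difference raises the ceiling on the engine, so stronger storms become possible even though the polar temperature itself has not changed.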
 
Our current problem is that with 400 ppm of CO2 in the atmosphere, the additional heat in the oceans is transferring warmer water to the ice sheets, thus melting glacial ice in Greenland and the Antarctic. Suppose we stopped burning fossil fuels tomorrow: the melting would continue unabated for quite some time, first because the additional heat in the equatorial oceans still needs time to reach the ice, and further because the oceans will continue to absorb heat while the atmosphere retains its 400 ppm of CO2, together with other gases such as CH4, N2O, and a number of industrially made gases. If the ice sheets melt, there will be a serious rise in sea levels. Countries like Bangladesh will lose half their land, and some Pacific islands will be uninhabitable. So, what should we do?
 
The current political thinking seems to be to do nothing besides reduce CO2 emissions. However, reducing emissions merely slows the development of the problem; it does not reverse it, because of what is already there. Worse, India has announced it will build a lot of new coal-fired power stations, on the basis that it should have its turn to burn coal. There is an even worse problem: the acidification of seawater due to the CO2 it has absorbed is bringing it close to the level where aragonite does not precipitate out. A very large number of shellfish, at least in their juvenile stages, depend on aragonite to make their protective shells. Accordingly, we have two problems: how to stop global warming, and how to stop ocean acidification. Each can be addressed by geoengineering, although ocean acidification has fewer options.
 

There was an article in Science (vol 347, p 1293) that raised the question of what would happen if some country decided to burn a lot of sulphur, which would help form clouds and increase the albedo. The country might decide to do this because it had had a series of bad harvests, and it blamed climate change. The problem, of course, is that some other country might then have its harvests fail (and in this case, ocean acidification would hardly improve). The deeper problem is that anyone who does something will hardly know what their actions will do elsewhere, and even if they can guess, who is responsible for what happens? What is needed is more information, but how do we get it? How do you carry out an experiment that will provide data on a global scale without the possibility of influencing the globe? And who will support the experiment? Who will regulate what is done, and on what basis? One unfortunate aspect is that politicians will put themselves in the deciding role, yet they will not understand the problem, and they will act solely in the interests of their own countries. Not an attractive prospect for our grandchildren.
Posted by Ian Miller on Nov 29, 2015 9:23 PM GMT
One thing that brings joy to someone engaged in theoretical work is to find observational evidence supporting a theory that contradicted the "standard theory" everyone accepted when it was presented, which in this case was in 2011, in my ebook Planetary Formation and Biogenesis. The fact that nobody else takes any notice is irrelevant; the feeling that your theory alone actually meets the conditions imposed by nature is great.
 
The relevant part involves the formation of the rocky planets. The standard theory is that these formed from the collision of planetesimals (bodies up to 50 km in size, formed by some totally unknown process), with the volatiles coming from a subsequent bombardment by carbonaceous chondrites, or something like them. The review I gave of this process (the ebook has over 600 references) lists a number of reasons why this should be wrong, mainly that a whole lot of other things should have accompanied the water, and clearly did not arrive in the right ratios, but the theory was held onto because it was perceived that there was no alternative. When the rocky planets accreted, it was too hot for water to accrete at those pressures by any reasonable physical process.
 
My answer was that Earth formed by chemical processes. Very specifically, in the early stages of the accretion disk there were temperatures at which calcium aluminosilicates could phase-separate out of melt-fused rocks. When the disk cooled, collisions made dust, and the dust adhered to rock and collected water vapour from the nebula, which set the cements into what were effectively concretes. These were strong enough to survive the milder collisions, and they would rapidly accrete small material, effectively growing more by monarchic growth than by the usually assumed oligarchic growth. Accordingly, the water that set the cements would be primordial, and this would be the source of Earth's water.
 
The good feelings I am sharing come from a recent paper by Hallis et al. (Science 350: 795 – 797) that reports the deuterium/hydrogen ratios in some primordial rock samples originating in the deep mantle. These lavas, found in Baffin Island and Iceland, have 3He/4He ratios similar to primordial gas (and up to 60 times higher than atmospheric helium) and have Pb and Nd isotopic ratios consistent with primordial ages (4.45 – 4.55 Gy). They also contain water, and the deuterium levels of the water indicate that the water almost certainly had to be primordial, from the accretion disk itself and not from chondrites. You can see why I am happy.

 
Posted by Ian Miller on Nov 16, 2015 10:12 PM GMT
In the latest "Chemistry World" there is an article arguing that there is a controversy over the nature of the bonding in molecules such as the perchlorate anion, which now appears to be describable as having a chlorine atom with a positive charge of three and four oxygen atoms with a charge of minus one each. The bonding is therefore four equal single bonds. Presumably sulphate has the same issues, and according to Wikipedia, computational chemists put a charge of 2.45 on the sulfur atom. Crystal structures apparently indicate the four bonds are equal. Why go to these extremes? The problem is that chlorine has seven outer electrons, but six of them are usually regarded as residing in three pairs, and hence should be inert. Accordingly, chlorine should have a valence of 1. Many chlorine compounds do, but perchlorate, by definition, can be considered as the adduct of water on Cl2O7, i.e. all the outer electrons are involved. In principle, upon electron pairing, that gives 14 electrons in the outer valence shell. How can that be? The sulphur in sulphate has six outer electrons, four of which are paired. To get the required valence of six, again all electrons have to be unpaired if electron pairing is relevant.
 
The traditional method was to invoke 3d orbitals. These are empty, so they may be available for hybridization, BUT, according to the article, "quantum chemists have shown that it is energetically unfeasible to use d orbitals for extra bonds". It was asserted that this undermines a quantum mechanical account of Lewis bonding. My immediate problem with this assertion is, how do we know? The 3d orbital energies are obviously higher than 3p for chlorine, but how much higher, and does the energy difference remain if the orbitals are used for bonding? I am not arguing that the statement is wrong, merely that I would like to know why everyone thinks it is right. The output of computations is insufficient, because computations, according to Pople's Nobel lecture, are heavily dependent on validation, and we are a little short of the validation required for this statement. We can go further. The 2p orbitals are clearly at a higher energy than the 2s orbitals when we excite to them, yet boron almost never forms a B – X molecule, other than in highly energetic experiments; not only does it use all three electrons, but it tries harder still to achieve a tetrahedral configuration. So, if boron can do this, why cannot sulphur do it with 3d orbitals?
 
The article suggests that the answer might come from putting a large negative charge on the oxygen atoms and a strong positive charge on the chlorine. The perchlorate anion is therefore an anion with nearly a full negative charge on each of the four oxygen atoms, and nearly three positive charges on the chlorine atom. The question then is, why does this positive charge not attract, and polarize towards itself, the negative charge? If it does, we are back to the original problem.
 
What we need are data, and there are some. Consider only sulphate. We can form stable esters, such as dimethyl sulphate. If we do, the structure is consistent with two S=O and two S-O bonds. The S-O bond length is 156.7 pm, the S=O bond length 141.7 pm (J. Mol. Str. 73, 99 – 104), while the infrared spectrum (Spectrochim. Acta 28A, 1889 – 1898) gives the symmetric and asymmetric stretches of two pairs: the double bonds at 1389 and 1199 cm-1, the single bonds at 829 and 757 cm-1. The infrared spectra of sulphates as a whole typically have medium to strong signals around 645 cm-1, and very strong signals at 1110 cm-1, yet the S-O bonds in the anions all have the same length, so what does that mean? Obviously, even this common molecule still needs further work. I don't know the answer, but I would very much prefer it if the theoreticians would publish the reasons, and the assumptions used, when they publish a statement saying the central atom has an extremely high positive charge. Their model might work for the sulphate anion, but it does not appear to work for dimethyl sulphate, so the problem of how to explain hypervalency remains.
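To keep the numbers just quoted straight, here is a small sketch that simply tabulates them and checks the qualitative expectation that the shorter double bonds stretch at higher wavenumbers; it introduces no data beyond the figures cited above:

```python
# Figures quoted above for dimethyl sulphate: bond lengths (pm) from
# J. Mol. Str. 73, 99-104; stretching frequencies (cm^-1) from
# Spectrochim. Acta 28A, 1889-1898.
bonds = {
    "S=O": {"length_pm": 141.7, "stretches_cm1": (1389, 1199)},
    "S-O": {"length_pm": 156.7, "stretches_cm1": (829, 757)},
}

# Qualitative consistency check: the shorter (double) bond should stretch
# at higher wavenumber than the longer (single) bond.
assert bonds["S=O"]["length_pm"] < bonds["S-O"]["length_pm"]
assert min(bonds["S=O"]["stretches_cm1"]) > max(bonds["S-O"]["stretches_cm1"])

# The strong ~1110 cm^-1 band quoted for sulphate anions falls between the
# ester's single-bond and double-bond pairs, which is at least consistent
# with four equivalent bonds of intermediate order in the anion (the
# ~645 cm^-1 band sits below both pairs and is commonly assigned to a
# bending mode rather than a stretch).
anion_stretch = 1110
assert max(bonds["S-O"]["stretches_cm1"]) < anion_stretch < min(bonds["S=O"]["stretches_cm1"])
```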

 
Posted by Ian Miller on Oct 26, 2015 1:53 AM GMT
One interesting paper from the not too distant past involved the reduction of carbon dioxide to either methanol or methane (J. Am. Chem. Soc., 2015, 137, 5332) using lithium o-phenylbisborate as a catalyst. What the catalyst is claimed to have done is to bend the CO2 molecule (highly plausible) and thus form an aromatic ring. It is this last part that I find hard to stomach, because it brings us back to the question, what causes "aromaticity"? Now, I should issue a warning here: I have published what I think causes aromaticity, so I am not exactly unbiased.
 
So, where is my problem? The authors seem to have argued that a six-membered ring is formed (correct) and there will be 6 π electrons in it, therefore the system will show aromaticity. I suppose if you construct molecular orbitals and then place the electrons in them, there is a case for this. However, my argument about aromaticity is that there have to be 2n + 1 double bonds that alternate with single bonds, which is not quite the same thing. The reason for aromaticity in this case lies in the phase of the waves. Similarly to thinking about the Woodward-Hoffmann rules, run the phases around the ring, then keep going. What you find is that with aromaticity, the second cycle cancels the first, which cancels the double bond amplitude, and since the charge has to go somewhere, it goes to the single bonds (the other major canonical structure). But that has the same problem, and as such, classical structures such as cyclohexatriene cannot exist. Cyclobutadiene, however, finds that the second cycle reinforces the displacement of the first cycle, and so it is locked into the classical structure. Now, the reason I find this reduction paper of interest is that in principle it offers a way to distinguish the alternatives. My model predicts no aromaticity, because the double bonds in carbon dioxide are orthogonal: the double bond orbitals cannot overlap with each other and therefore cannot form an extended wave with one polarization.
 
Does it matter? I think so. I think it is important that chemists try to understand what is going on. Oddly enough, when I started my career in physical organic chemistry, by and large chemists thought they understood tolerably well most of the reactions of which they were aware. Now there are many more known reactions, but I am far from convinced the understanding has increased.
 
One final point. The paper ends with a statement that "further studies" are required to adapt the transformation for "practical applications". Methanol and methane will not be the payoff. This catalyst merely bends the CO2; the actual reductant is either triethylsilane or pinacolborane, and these are considerably more expensive and harder to get than methane or methanol. That hardly seems likely to be "useful", at least from what was demonstrated in this paper.
Posted by Ian Miller on Sep 28, 2015 5:06 AM BST
My last post related to peer review and listed some of the problems with it. The question then arises, why do we want it? I think here that the answer depends on the nature of the paper.
 
Think of the paper that posts data, say data on a new molecule. It is highly desirable that these data are valid, because while in principle any scientific report should be reproducible, in practice, do we want to reproduce everything? Something like 90 million molecules have been reported, many of which took great effort to make. Obviously, it would be highly desirable to ensure that each molecule is reported accurately, and that enough is reported about it that the work does not have to be repeated. Peer review gives an assessment that adequate methods were used, and that all reasonable data were collected. Furthermore, I know from experience of having done some reviewing that some scientists get so absorbed in their work that they do not realize that the average reader may not be able to unravel what they have done the way they have written it. So, yes, peer review that sends the paper back for revision should improve the paper.
 
However, the problem for me starts when a referee rejects a paper "because it is not very interesting". What that usually means is that it did not interest him. One example from my past: I wrote a paper (with one co-author) on the 13C NMR shifts of acetylated methylated agars. This may not seem very exciting, but as most chemists who use 13C NMR know, substitution changes the chemical shift of nearby atoms. What I showed, using a range of seaweed polysaccharides, was that because the structures of the sugar units are reasonably rigid, and because the linking oxygen atoms largely insulate one unit from the effects on the other (except sometimes immediately about the linking sites), the shifts due to substitution are regular, and you can use such shifts to determine substitution patterns, especially if a number of different operations are carried out that vary the substitution on the "mobile" sites. (A mobile site is something like a sulphate ester, which can be removed, or a hydroxyl, which can be substituted with something like a methyl group, or an ester.)
 
Now, what causes a change of chemical shift? I think most chemists would answer in terms of electron induction effects, wherein a substituent that is a strong electron withdrawer pulls electrons closer to the carbon atom to which it is attached, the effect being attenuated so that two carbon atoms away (the γ site) there is only a tiny effect. Thus forming a methyl ether changes the chemical shift of the α carbon by about 10 ppm, and of the β carbon by about 2 ppm, usually of opposite sign, while a sulphate ester gives similar patterns, but usually about two-thirds the change in shifts. (Note: the change of sign makes electron movement hard to swallow!) Now, what was significant about the acetylations was that the acetyl group makes a relatively small change in shift at the α carbon and a significantly bigger shift at the β carbon (about 4 ppm). Why? My argument is that the change in chemical shift has nothing to do with electron induction at all, but rather with the magnetization field induced by the applied field. The magnetic potential is a through-space effect, not a through-bond effect, and since the magnetic potential is a vector, its orientation is also important. I argued that the reason the acetyl group makes such a big change to the β carbon shift is that the acetyl group rotates about the linkage position, and the distance to the β carbon is actually quite small. Is that interesting? A means of determining substitution patterns on some polysaccharides, and evidence on the mechanism of chemical shifts? I thought so, but I seem to be in a minority. Now, would it hurt to publish it, given the electronic nature of publishing? Yes, one option would be to submit to another journal, but here I really could not be bothered. Remember, the number of publications has been irrelevant to my career; I have literally been publishing to be helpful, but when someone says they are not interested, then I also lose interest.
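The "regular shifts" argument is essentially an additive-increment scheme, and can be sketched in code. The increment values below are only the rough figures mentioned in this post (methyl ether: about +10 ppm at the α carbon, about 2 ppm of opposite sign at β; sulphate ester about two-thirds of that; acetyl: small at α, about 4 ppm at β), so treat this as an illustration of the idea, not a validated parameter set:

```python
# Approximate 13C substitution increments (ppm), per substituent, at the
# carbon bearing the substituent (alpha) and its neighbour (beta).
# Magnitudes and signs are the rough figures quoted in the text only.
INCREMENTS = {
    "OMe":      {"alpha": 10.0, "beta": -2.0},
    "sulphate": {"alpha": 6.7,  "beta": -1.3},  # ~two-thirds of the ether values
    "acetyl":   {"alpha": 2.0,  "beta": 4.0},   # small at alpha, larger at beta
}

def predicted_shift(base_ppm, substituents):
    """Additive-increment prediction for one carbon.

    substituents: iterable of (name, position) pairs, with position in
    {"alpha", "beta"} relative to the carbon of interest.
    """
    return base_ppm + sum(INCREMENTS[name][pos] for name, pos in substituents)

# Hypothetical example: a sugar-ring carbon at 70 ppm that carries a methyl
# ether and is beta to an acetyl group.
print(predicted_shift(70.0, [("OMe", "alpha"), ("acetyl", "beta")]))  # 84.0
```

Comparing such predictions against an observed spectrum, site by site, is how a substitution pattern is read off; the surprise in the data above is that acetyl breaks the usual α > β pattern, which is what motivates the through-space argument.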
 
My question is, is this the way science should operate? In these electronic days, I believe there should be only two reasons to reject a paper: (a) it is wrong, and the referee should be able to show where, and (b) it adds nothing. By all means send back for clarification, but rejection should be an absolutely last resort. What do you think?
Posted by Ian Miller on Aug 31, 2015 2:58 AM BST
When I started this post, it was to be about search engines, the problem being that a search is never specific enough, and just about anything can be mixed up in the answer. To illustrate the problem, I thought I would Google "Peer review criticisms", and by doing so, I got 3,570,000 hits. Surely they can't all be relevant. However, curiosity got the better of me, and after an admittedly ridiculously inadequate look at the literature, I decided to post about peer review itself. The question to be addressed is, is peer review helpful to unraveling the secrets of nature, or is it part of the problem? So here are some comments that I picked up.
 
However, before I start, a different question might be, does peer review do any good? One possibility is that it sends the half-baked back to the author for further baking. It may also be claimed to help emerging scientists present their ideas in a way that leads to better understanding of what they are doing. If so, this could be very helpful; however, it only applies to papers sent back for revision. There is also the question of whether papers sent back for revision truly need it. That someone does not write in the style of the reviewer is beside the point. Peer review also eliminates the "crank" stuff, but herein lies the problem: what happens if an upcoming Einstein is labeled a crank? The current argument is, if it is that good, it will find a place eventually. Perhaps, but will it then be read? However, let us return to the literature.
 
First, a quote from a source that shall remain anonymous: "Peer review seldom detects fraud, or even mistakes. It is biased against women and against less famous institutions. Its benefits are statistically insignificant and its risks – academic log-rolling, suppression of unfashionable ideas, and the irresistible opportunity to put a spoke in a rival's wheel – are seldom examined." To that I would add, it is most certainly biased against individuals that do not have a University or major institutional address. The individual scientist does not exist, according to the reviewing system. If you do not have a suitable address, you must be a crank. No need to waste time reading the paper. Herein lies another real problem: papers can be rejected by the Editor without peer review, or even without any evidence of having been read, at least past the title.
 
Now, from www.evolutionnews.org/2012/02/problems_with_p056241.html
This article made six points:
  1. Good science does not have to be published in the peer-reviewed literature. The examples cited include rejections, and I shall deal with those later.
  2. The peer-review system wrongly rejects scientifically valid papers, while it wrongly accepts scientifically flawed papers. Personally, I feel that it is too much to expect a peer reviewer to find fraud, and you cannot expect one to uncover faulty experimental procedures that are not specified. In this respect, journals are increasingly encouraging methods to be explained as a reference to somewhere else, which in turn is a reference to somewhere else again, and so on. However, the article also makes the valid point that peer-reviewing is both time-consuming and expensive, and often excludes people for no good reason.
  3. Scientific peer-reviewers are not perfectly objective. Again, the performance of the reviewers is questioned, but a very interesting point followed: journals have a very strong economic interest in preserving the current system, and scientists go along with it because it helps them maintain their position. In my mind, this is not a good reason to persist with it.
  4. The "peer-review card" is often played to silence scientific dissent. This, to my mind is a serious criticism, although more of the scientific elite than the reviewing system.
  5. Peer-review is often biased against non-majority viewpoints. Denyse O'Leary is quoted as saying: "The overwhelming flaw in the traditional peer review system is that it listed so heavily toward consensus that it showed little tolerance for genuinely new findings and interpretations."
  6. Not being recognized in peer-reviewed literature does not imply a lack of scientific merit. The simple fact is, in logic, the contrary position is a fallacy in the ad verecundiam class.
In a later post I shall continue on this theme, but before I do, what are your thoughts on this matter?
Posted by Ian Miller on Aug 16, 2015 9:25 PM BST
Since my last post, I received a telephone questionnaire relating to what the RSC should be doing, which raises the question, what should it be doing? First, some things are obvious, particularly relating to practicing chemists, and in my opinion, the RSC does these rather well. There are obviously some additional things it could be doing, and any organization has room for improvement, but I suspect the average response to such a questionnaire will be to make minor adjustments to what is already there. Leaving aside the need to work for chemists, I think there are three major areas for the society to consider.
 
The first is to get a basic understanding of chemistry across to the general public. In the recent RSC poll, 55% of the public believed it is important to know about chemistry, but I bet most who answered that way would admit they know very little. By basic understanding, I mean enough to understand the problems the world faces, enough to see them safely through their lives, and enough to sense whether someone making a public statement is speaking sensibly. In New Zealand recently, a family died of accidental carbon monoxide poisoning; had they understood the problem, this would not have happened. I know you can never totally prevent such incidents, nor can you cover every eventuality, but I think the society could be more active in trying, perhaps by giving some basic information in an easily comprehended form on its website, or perhaps by making educational YouTube videos.
 
The second is to make a broader explanation of certain important environmental issues available to the general public. I have seen a lot of irrational and false comments about matters like climate change, and I feel the Society should make a bigger effort to show the public how to handle the chemical aspects, or perhaps, together with the Institute of Physics, present a proper overall picture, including a discussion of what we do not know for sure. The problem, as I see it, is that science tends to present very technical statements with proper scientific statements of uncertainty, but the public cannot understand them, and instead falls prey to "snake oil" merchants. I am not saying there cannot be dissent, but the public must realize that dissent requires logical analysis and evidence. I think the public is smarter than we give them credit for, BUT they lack specific information.
 
The third is that I think the Society should show how chemistry can assist the economy, and not just by helping big companies. In the RSC survey, many people believed that chemistry can help the economy, and the fact is that the wealthier countries now have a strong knowledge input into their economies. The Society can also highlight problems smaller companies have and make basic chemical information available. Not all companies are associated with a university, and it is important that graduates, when they go out into the world, have the opportunity to stay on top of their profession.
 
There will be other things the Society could do, and many may be more significant, but they are my thoughts as to what could be done. What are yours? One of the better things the RSC does is to put up blogs like this, and while this offers the chance to involve all the members in idea creation, that only works if the members participate, so why not throw in your thoughts?
 
Posted by Ian Miller on Jul 27, 2015 3:35 AM BST
Towards the end of last year, Nature published two articles that raised issues with the peer review system. In one, the claim was made that the reviewer of one paper had not even read it, because he commented on something that might usually be found in such a paper but in this case was not. Naturally, this would be a cause for concern for the author. However, the article then went on to a somewhat deeper issue, namely that in 2013, the number of articles indexed in Elsevier's Scopus rose to 2.7 million. Not all of these would be peer reviewed, but you see the problem. There are just not enough experts to deal with this flood of material. What happens is that scientists are given papers that lie further outside their specialty, and yes, the reviewer may be able to evaluate the methods and results sections, but, according to the claim, will lack the expertise to evaluate the introduction and discussion.
 
The article then claimed that reviewers should verify that the authors are quoting the right literature to support their views. Now, I dispute this. The reason for citing literature is that the cited papers put forward views or results that are significant to the argument that follows; the introduction of a paper is not a general review of everything vaguely associated with the topic. So the first question is, is the work novel? The probability of recognizing plagiarism unfortunately increases only as the level of expertise and experience of the reviewer becomes more significant. Recall the word "peer"? If the work is accepted as novel, then the next question is, are there any facts or assumptions seemingly pulled out of thin air that should be referenced to some other paper? If so, those papers should be referenced, but the reviewer should not be expected to do a full literature search. Surely the author has to take responsibility. The important thing is that the author does not claim that to which he is not entitled. The third question is, is there a reference that shows the work to be wrong? Again, an expert in the field should be able to show this, but how do we find enough such experts? Experts, by definition, are a small fraction of the total number of scientists.
 
The second Nature article illustrated what I believe is a far more insidious problem: the peer review loop. Authors are often asked to recommend reviewers, so they do, from within a small group of friends. In return, they know they will be recommended back. In a nice cozy circle, of course you recommend publication, with maybe the odd correction to show you did something. Even worse was the case where one author wrote his own reviews and sent them to colleagues, who in turn submitted them to the editor. The scheme was caught because the reviews came back far too quickly. If you are going to cheat the system, obviously you should do it slowly, by which time the editor will simply be glad to have something on his table!
 
I believe recommendations from authors should stop, but that raises the question, how does the editor find reviewers? Personal knowledge is valuable, but with the great flood of papers coming in, can the editor know enough experts that those he chooses are not overwhelmed? Then there is the problem of multi-author papers. Some of the papers reporting results from NASA space probes list up to fifty authors; in other words, anyone who knows enough is already an author.
 
But this raises two questions: why would scientists want to game the system, and why is peer review required at all? The answer to the first appears to be to get more papers, which go towards building a reputation for promotion, awards, whatever. I suspect the scientific literature would become a lot more manageable if this practice were to stop. The answer to the second might come from something similar to the physicists' arXiv: authors can publish anything in e-form, but it is then open to public peer review, in which relevant comments from others can be attached. If the paper survives, it is worth keeping. If there is obvious plagiarism, the work is deleted and the plagiarist publicly identified. If some references have been left out, they can be added, although some check on relevance as opposed to self-citation would be required. If there is evidence the work is wrong, that evidence can be submitted as a linked paper, and if the first paper is not adequately defended, everyone will disregard it. Let the scientific community do the peer review. This would not have worked while journals were printed on paper, but now that they are essentially e-journals, why not? What do you think?
 
Posted by Ian Miller on Jul 5, 2015 11:47 PM BST
Those funding research want the outcome of their funding to be useful, which means the research either leads directly to something useful or does so indirectly by inspiring someone else. The problem with this attitude is that it tempts some writers to present their research as far more relevant than it actually is. Often the best that can be expected is that the work will inspire someone else to pick up the challenge. But how does that come about?
 
One interesting aspect of invention, or of the development of new technology, is the ability to see things before others do: to see the possibilities, and also to see quickly what will become a dead end, well before it wastes too much of your time. With that in mind, what should we make of a recent paper on CO2 reduction (JACS 137: 5332)? The claim made in the paper is that in the presence of Li2[2,2-C6H4(BH3)2], CO2 can be reduced to methane or methanol if a suitable reducing agent is present. What appears to happen is that the CO2 forms some sort of complex with the two BH3- groups, the CO2 molecule becomes bent, and an aromatic system is formed. This would significantly lower the activation energy for reaction at the carbon atom, hence the catalytic activity. Unfortunately, the reducing agent is not one that obviously comes to mind for making bulk chemicals.
 
So, is this useful? The first thing to note is that we need to get rid of CO2, and in principle this does it. Why "in principle"? Because the yields quoted in the table are those of the reducing agent's products. Triethylsilane was used as the reducing agent, and all the products noted were silicon derivatives. This certainly supports the concept that a reduction occurred, BUT it gives no direct measure of how much of which carbon product was obtained. Elsewhere, the paper adds that the products were either methane or methanol, and one diagram shows an excellent yield of an intermediate and implies up to 89% yield of methanol. That sounds promising, until we consider the reducing agent. If you have to use reducing agents like triethylsilane, the reagents will be far more expensive and harder to obtain than the products, particularly since several mole equivalents will be required.
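As an aside, the "several mole equivalents" point can be made concrete. The stoichiometry below is a sketch based on generic CO2 hydrosilylation chemistry, not the balanced equations reported in the paper itself:

```latex
% Hedged sketch: generic CO2 hydrosilylation stoichiometry,
% not taken from JACS 137: 5332.
% Reduction to the methoxysilane (methanol) stage consumes three Si-H equivalents:
\mathrm{CO_2 + 3\,Et_3SiH \;\longrightarrow\; CH_3OSiEt_3 + (Et_3Si)_2O}
% Full reduction to methane consumes four:
\mathrm{CO_2 + 4\,Et_3SiH \;\longrightarrow\; CH_4 + 2\,(Et_3Si)_2O}
% Methanol itself would only be liberated on aqueous workup:
\mathrm{CH_3OSiEt_3 + H_2O \;\longrightarrow\; CH_3OH + Et_3SiOH}
```

On this accounting, every silicon atom ends up in a siloxane or silanol by-product, which is consistent with the yields being reported as silicon derivatives, and it is exactly why the silane cost would dominate any bulk application.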
 
In my opinion, as it stands this is unlikely to be useful, as it uses difficult-to-get reagents to make commonly available chemicals. On the other hand, it seems to me there is potential here to make use of the aromatic intermediate in some way yet to be discovered. That is the challenge of chemistry.
 
How would you respond to this challenge? My guess is that pressurized hydrogen with a hydride transfer agent that acted catalytically would be the best bet for methanol. However, some reactive anion that did not react with the boron groups might offer the chance to add to the carbon atom and make a carboxylic acid, which in turn might be of synthetic interest. Your opinions?
Posted by Ian Miller on Jun 21, 2015 11:06 PM BST