Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a different perspective?

Papers That Are "Not Very Interesting"

My last post related to peer review and listed some of the problems with it. The question then arises, why do we want it? I think here that the answer depends on the nature of the paper.
Consider a paper that reports data, for example, data on a new molecule. It is highly desirable that these data are valid because, while in principle any scientific report should be reproducible, in practice, do we want to reproduce everything? Something like 90 million molecules have been reported, many of which took great effort to make. Obviously, it is highly desirable that each molecule is reported accurately, and that enough is reported about it that the work does not have to be repeated. Peer review gives an assessment that adequate methods were used and that all reasonable data were collected. Furthermore, I know from my experience of doing some reviewing that some scientists get so absorbed in their work that they do not realize the average reader may not be able to unravel what they have done the way they have written it. So, yes, peer review that sends the paper back for revision should improve the paper.
However, the problem for me starts when a referee rejects a paper "because it is not very interesting". What that usually means is that it did not interest him. One example from my past: I wrote a paper (with one co-author) on the 13C NMR shifts of acetylated methylated agars. This may not seem very exciting, but as most chemists who use 13C NMR know, substitution changes the chemical shift of nearby atoms. What I showed, using a range of seaweed polysaccharides, is that because the structures of the sugar units are reasonably rigid, and because the linking oxygen atoms largely insulate one unit from effects on the other (except sometimes immediately around the linking sites), the shifts due to substitution are regular, and you can use such shifts to determine substitution patterns, especially if a number of different operations are carried out that vary the substitution on the "mobile" sites. (A mobile site is something like a sulphate ester, which can be removed, or a hydroxyl, which can be substituted with something like a methyl group, or an ester.)
Now, what causes a change of chemical shift? I think most chemists would answer in terms of electron induction effects, wherein a substituent that is a strong electron withdrawer pulls electrons closer to the carbon atom to which it is attached, and the effect is attenuated so that two carbon atoms away (the γ site) there is only a tiny effect. Thus forming a methyl ether changes the chemical shift of the α carbon by about 10 ppm and of the β carbon by about 2 ppm, usually of opposite sign, while a sulphate ester gives a similar pattern, but usually about two-thirds the change in shifts. (Note, the change of sign makes electron movement hard to swallow!) What was significant about the acetylations was that the acetyl group makes a relatively small change in shift at the α carbon and a significantly bigger shift at the β carbon (about 4 ppm). Why? My argument is that the change in chemical shift has nothing to do with electron induction at all, but rather with the magnetization induced by the applied field. The magnetic potential is a through-space effect, not a through-bond effect, and since the magnetic potential is a vector, its orientation is also important. I argued that the reason the acetyl group makes such a big change to the β carbon shift is that the acetyl group rotates about the linkage position, and the distance to the β carbon is actually quite small. Is that interesting? A means of determining substitution patterns on some polysaccharides, and evidence for the mechanism of chemical shifts? I thought so, but I seem to be in a minority. Would it hurt to publish it, given the electronic nature of publishing? One option would be to submit to another journal, but here I really could not be bothered. Remember, the number of publications has been irrelevant to my career; I have literally been publishing to be helpful, but when someone said they were not interested, I also lost interest.
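The additive, regular nature of these substitution shifts is what makes them useful for assigning substitution patterns. As a rough illustration only, here is a minimal Python sketch of the idea: a lookup table of shift increments by position relative to the substitution site, summed onto a base shift. The increment values are simply the approximate figures quoted in this post (not a calibrated table), and the function and names are hypothetical, not from the original paper.

```python
# Rough sketch of additive 13C shift-increment prediction.
# Increments (ppm) by position relative to the substitution site:
# (alpha, beta, gamma). Values are the approximate figures from the
# text: methyl ether ~10 ppm at alpha, ~2 ppm (opposite sign) at beta;
# sulphate ester roughly two-thirds of those; acetyl small at alpha,
# ~4 ppm at beta. Gamma effects are tiny.
INCREMENTS = {
    "methyl_ether": (10.0, -2.0, 0.1),
    "sulphate":     (6.7, -1.3, 0.1),   # ~two-thirds the ether values
    "acetyl":       (1.0, 4.0, 0.1),    # small alpha, larger beta effect
}

POS_INDEX = {"alpha": 0, "beta": 1, "gamma": 2}

def predicted_shift(base_shift, substitutions):
    """Predict a carbon's 13C shift (ppm) from its unsubstituted value.

    substitutions: list of (group, position) pairs, where position is
    the carbon's relation ('alpha', 'beta', 'gamma') to each substituent.
    """
    return base_shift + sum(
        INCREMENTS[group][POS_INDEX[pos]] for group, pos in substitutions
    )

# Example: a carbon at 70.0 ppm that is alpha to a methyl ether and
# beta to a sulphate ester.
print(round(predicted_shift(70.0, [("methyl_ether", "alpha"),
                                   ("sulphate", "beta")]), 1))
```

In practice one would compare predictions like this against the observed spectrum for each candidate substitution pattern; the point of the paper was that the increments are regular enough for this kind of bookkeeping to work.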
My question is, is this the way science should operate? In these electronic days, I believe there should be only two reasons to reject a paper: (a) it is wrong, and the referee should be able to show where, and (b) it adds nothing. By all means send a paper back for clarification, but rejection should be an absolute last resort. What do you think?
Posted by Ian Miller on Aug 31, 2015 2:58 AM Europe/London
