Is science sometimes in danger of getting tunnel vision? Recently published ebook author Ian Miller looks at other possible theories arising from data that we think we understand. Can looking at problems in a different light give scientists a fresh perspective?


My ebook, "Planetary Formation and Biogenesis", was first published on Amazon a year ago. It argues that quite a lot of the standard theory needs rethinking, in particular that initial accretion depends on chemistry, not gravity, and while I have found a number of observations that are otherwise puzzling for the standard theory, as far as I can tell nothing I have found contradicts my propositions. Readers may forgive me, but I find that rather satisfying. Part of the reason, of course, might be that the year has been relatively quiet regarding discoveries. That will change, because it is inevitable that sooner or later a large number of papers will come out regarding findings from Curiosity. Those will be far more critical as far as my ideas go. A further possibility is that the theory is somewhat elastic, and hence difficult to falsify. That is true in some ways. There are a number of options for planets, but once one is chosen, there are very specific consequences. Unfortunately, some of those are as yet too difficult to test, which may also be why the theory has survived!
The most interesting evidence came from the Kepler satellite. It discovered (Science 340: 262) that Kepler 62 has five planets that range from half to twice Earth's diameter. These are at 0.055, 0.0929, 0.12, 0.427 and 0.715 A.U., around a star of 0.69 times the sun's mass. It is estimated that the outer one is in the centre of the habitable zone, and the next inner one possibly within it. The problem then is, are these truly rocky planets, or ice planets sent inwards according to one of the proposed mechanisms? If they are all rocky planets, and were spaced according to my "expectation prediction" (which requires the star to have accreted at a rate proportional to its stellar mass squared; this is observed, but only loosely, so there should be a range around the expectation position), the typical planet equivalents should be: Mars-type 0.58, Earth-type 0.328, Venus-type (if there is one, and this is somewhat flexible) 0.22, Mercury-type 0.12. (This also assumes the secondary accretion rate, critical for exactly how the rocky planet evolves, was similarly scaled to our star; observational evidence shows a possible order of magnitude deviation each way.) If the outer planet is the Earth-type, then the theory predicts that accretion was significantly faster, any Venus-type should be at 0.47 A.U., Mercury at 0.22 A.U., and there should be a Mars-type at about 1.14 A.U. Additional inner planets (Vulcans, which are predicted to be Mercury-like) would seem unlikely, as the temperatures would grow too hot over a shorter radial difference. If the 0.427 A.U. planet is the Earth-type, then accretion was slower and more material was available, in which case the Mars-type should be at about 0.68 A.U., the Venus-type at about 0.28 A.U., and the Mercury-type at 0.15 A.U. This agreement is not too bad, and the slower accretion rates could permit Vulcans.
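As a rough illustration, here is my own back-of-envelope sketch comparing the two scenarios with the Kepler 62 semimajor axes (taking the second planet to lie at 0.0929 A.U.). The pairing of each predicted planet type with its nearest observed planet is a naive heuristic of mine, not part of the theory, and it can double-assign a planet:

```python
# Semimajor axes in A.U.; observed values for Kepler 62, innermost first.
# The nearest-neighbour matching below is a crude illustrative heuristic.
observed = [0.055, 0.0929, 0.12, 0.427, 0.715]

scenarios = {
    "faster accretion (outer planet is Earth-type)":
        {"Mercury": 0.22, "Venus": 0.47, "Earth": 0.715, "Mars": 1.14},
    "slower accretion (0.427 A.U. planet is Earth-type)":
        {"Mercury": 0.15, "Venus": 0.28, "Earth": 0.427, "Mars": 0.68},
}

for name, predictions in scenarios.items():
    print(name)
    for ptype, a_pred in predictions.items():
        a_obs = min(observed, key=lambda a: abs(a - a_pred))
        deviation = abs(a_obs - a_pred) / a_pred
        print(f"  {ptype}-type: predicted {a_pred} A.U., "
              f"nearest observed {a_obs} A.U. ({deviation:.0%} off)")
```

A proper test would of course also use the planets' densities, not just their positions.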
On the other hand, some of these bodies could be quite different, without violating the theory because if the accretion is slow, a variety of additional options might arise. Their densities should define their nature.
Slower stellar accretion rates permit planetary bodies to grow bigger, at which point they interact chaotically. It is generally considered that one major body (Theia) collided with Earth and formed the Moon. However, it is possible that modest-sized bodies collided and retained much of their structure, provided the collisions were not too violent. There is evidence this occurred (Science 340: 22-24). The Earth's deep mantle behaves as if there are two major piles of different composition, one below Africa and one mainly below the South Pacific. Plumes rise from the edges of these and give rise to the volcanic islands. These piles are thousands of kilometers across, but their composition remains unknown. An important point is that these "piles" are denser than much of the remaining mantle. Within my proposition, this suggests that they accreted closer to the star than the bulk of the Earth, which increases the pyroxene content; they differentiated, then eventually collided with Earth. The increased density arises through shedding aluminosilicates during collisions, including shedding them to Earth's crust. Is that right? That is unknown, but it is an interesting thought, at least for me.
Posted by Ian Miller on May 6, 2013 4:08 AM BST
In a previous post, I commented on an article in Nature by Robert Antonucci, in which he complained that only too many scientists do not spend enough time thinking, and are only too willing to accept what is in the literature without checking. This was followed by another article, by Keith Weaver, entitled "Scientists are snobs", in which he asserted that there is another problem: scientists are only too willing to believe that the best work comes from the best institutions. This is also a serious issue, if true.
Specifically, he complained that:
(a) Scientists prefer to cite the big names, even when smaller names made the discovery and the big names merely used it later. Yes, this may well be through sloth, and not doing a proper literature search, and in some ways it may seem not to matter. The problem is, it does matter when the original discoverer puts in a funding application. Too few citations, and the work is obviously not important: no funding! Meanwhile the scientists who did nothing to advance the technique get all the citations, and the funding, and the conference invitations, and the "honours". The problem is thus made worse by positive feedback.
(b) An individual scientist gains more recognition if they work in a prestigious institution. The implication is, the more prestigious the institution, the better the scientists. It is true that some scientists at more prestigious institutions are better, whatever that means, but if so, it is not because they are there; rather, the rich institutions pay more to attract the prestige scientists.
(c) Even at conferences, scientists go to hear the "big names" and ignore the lesser names. This is harder to comment on because, having been to many conferences, I know there are some names I want to hear, and many of the "unknowns" can produce really tedious presentations. Sessions tend to be chosen to maximize the chance of getting something from the conference. For me, the problem often ends up as choosing between the big name, who as often as not will present recycled material, and the little name, who may not have anything of substance. Conference abstracts sometimes help, but not always.
What do you think about this? In my opinion, leaving aside the "sour grapes" aspect, Weaver raises an important point. The value of an article has nothing to do with the prestige of where it came from. To think otherwise leaves one open to the logical fallacy of argumentum ad verecundiam. I wonder how many others fall into the traps Weaver notes? My guess is that everyone is guilty to some degree of (c), but I do not regard that as a particularly bad sin. However, citing only the big names is a sin. The lesser-known scientist needs citations and recognition far more than the big names do.
One might also notice that the greatest contributions to science have frequently come from almost anywhere. In 1905 the Swiss patent office was hardly the most prestigious source of advanced physics, but contributions from there changed physics forever. What is important is not where it came from, but what it says. Which gets back to where this post started: scientists should cover less ground and think more. Do you agree?
Posted by Ian Miller on Apr 29, 2013 12:34 AM BST
Polywater might have been an obvious error for chemistry, but I still question: what did we learn from it? My guess is, not much. What we eventually realized is that while fused silica does not dissolve in water at any appreciable rate, it does if it is on the surface of a very small capillary. Why? Is it due to the curvature of the surface, or is a micro-column of water somehow more active? A general theory here could be of great help to medicine, or to much of nanotechnology research, but such was the scorn thrown at polywater that a potential advance of great significance was discarded like the baby with the bathwater.
In previous posts I mentioned the problem of whether cyclopropane could delocalize its ring electrons into adjacent unsaturation. The textbooks say it can, and this is justified because MO theory says it can. Do you believe that? Are you still convinced when you are told that the computational programs that "settled" this issue were the same ones that asserted that polywater had very significantly enhanced stability? The original MO treatment of cyclopropane was due to Walsh. His concept was that the methylene units were trigonal sp2 centres, with the third orbital of each carbon forming a three-orbital overlap at the centre of the ring system. This left a p orbital on each methylene to overlap with the two p orbitals from the other methylene carbon atoms in partial side-on overlap. Since only two electrons were in the three-centre bond, there were four electrons for the three p-electron bonds, which led to two pairs for three bonds, one such bond being a "non-bond". These were obviously delocalized (assuming the model was correct in the first place), but the p orbitals were also properly aligned to overlap with adjacent p orbitals on unsaturated centres, so conjugation should follow. This was a perfectly good theory because it made predictions; however, it is also imperative that such predictions be tested by observation.
There is an obvious consequence of this theory. Perhaps the biggest reason cited for cyclopropane conjugation is that a cyclopropane ring adjacent to a carbenium ion centre has an additional stabilization of about 100 kJ/mol over other comparable carbenium ions. Of course electron delocalization might be the reason for this, but if it is, then the p electrons of the cyclopropane ring must become localized, at least to some extent, in the orbitals that can overlap with the carbenium centre, and therefore the "non-bond" must become localized, to the same extent, in the distal bond. With less electron density in the distal bond, it should lengthen. There have been alternative MO computations, which drastically shorten the distal bond, e.g. to 143.6 pm, but significantly lengthen the vicinal bonds, e.g. to 159 pm (J. Am. Chem. Soc. 1982, 104, 2605-2612), although it is far from clear why this change of bond length happens. The predicted lengthening of the vicinal bonds presumably occurs because charge in them is delocalized towards the carbenium ion, but it is unclear to me why the "non-bond" shortens. As it happens, that is not important. A structural study has been carried out on such a carbenium ion, and the distal bond is indeed considerably shortened, but the vicinal bonds are not correspondingly lengthened (J. Am. Chem. Soc. 1990, 112, 8912-8920). Accordingly, the computations are wrong. The polarization theory I mentioned in previous posts is in accord with this observation: the vicinal bonds remain unchanged because nothing much changes, while the distal bond shortens because the positive field allows the electrons in the bond to align better with the internuclear axis.
Now, the interesting point about this is that when the measurement was made, nobody questioned whether the Walsh MO theory might be wrong. Such is the power of established theory that even when observation brings in a result opposite to that predicted, and even when there is clear evidence (from polywater) that the computational methodology that led to this result is just plain wrong, we do not want to revisit it. Why is this? A general lack of interest in why things happen? Simple sloth? Who knows? More to the point, who cares?
Posted by Ian Miller on Apr 22, 2013 5:49 AM BST
I believe that just because everybody thinks standard theory is quite adequate, that is no excuse to reject a non-standard theory. On the other hand, many will argue that there is no need to fill the literature up with nonsense, so where do we draw the line? In my opinion, not in the right place, and part of the reason is that a certain rot in refereeing standards set in following the polywater debacle. Polywater was an embarrassment, and only too many referees did, and do, not want to be associated with a rerun. That, however, is no reason to adopt the "Dr No" syndrome, namely that rejection guarantees the absence of a debacle. That policy would certainly have led to the rejection of Einstein's "On the Electrodynamics of Moving Bodies". He was describing the dynamics of bodies without electric charge! And as for common sense, he was abandoning the principles of Galilean relativity and of Newton's laws of motion, both of which were "obviously correct". (Actually, he was abandoning the concept of instant action at a distance, which nobody really believed.)
Anyway, back to polywater. This unfortunate saga began when Nikolai Fedyakin condensed water in, or repeatedly forced water through, quartz capillaries, following which Boris Deryagin improved the production techniques (although he never produced more than very small amounts) and determined a freezing point of −40 °C, a boiling point of ≈150 °C, and a density of 1.1-1.2. This was not water, but what else could it be? Everyone "knew" quartz was inert to water, and there was no other explanation than that the water had polymerized. Unfortunately, nobody thought to do an analysis for silicon. There followed the collection of considerable amounts of data, and in general these were correct (although the collection of an IR spectrum of sweat was probably not a highlight of science). Meanwhile a vast number of theoretical calculations emerged to "prove" the existence of polywater.
So what went wrong? Apart from the absence of an analysis, not much initially. The referees had to accept that the experimental work was done satisfactorily. The computational work was simply a case of “jump on the bandwagon and verify what was known”. Unfortunately, those data were wrong. Nevertheless, the question might be asked, should the referees have permitted the computational papers? What the papers gave was the assertion that a certain program was applied, and this is what came out. In general, the assumptions were never clearly stated, nor were the consequences of the assumptions being wrong. The major problem with the computations was that, being based on molecular orbital theory, the proposed systems were assumed to be delocalized, and the calculations showed they were. As Aristotle remarked, concluding what you assumed is not exactly a triumph.
The consequences of this unfortunate sequence of events were as follows:
(a) Experimenters' careers were wrecked.
(b) Computationalists' careers were unaffected. John Pople was relatively prominent in showing why there was considerable stability in water polymers, but that did not hinder his career (although his work on polywater did not feature strongly in his Nobel citation).
(c) When the error was exposed, work ceased. Nobody was ever interested in trying to work out why water in a constrained space dissolved silica.
(d)  Little or no genuinely different theoretical work emerged in chemistry following polywater.
(e)  Most importantly, nobody ever stated what went wrong within the computations. In short, we learned nothing, or at least the general chemical community learned nothing.
The question that must be asked regarding (d) is: was this because there is no further scope for theory in chemistry and all we can do now is deploy computational programs, because referees have killed any attempts, or because chemists have simply lost interest? Your views?
Posted by Ian Miller on Apr 15, 2013 1:59 AM BST
For me, the most important papers I found during March were those relating to the oxidation state of the Earth during accretion. In my ebook, Planetary Formation and Biogenesis, I argued that the availability of reduced organic material is critical for biogenesis, and that as far as carbonaceous and nitrogenous materials were concerned, the Earth's mantle was reducing. Part of the reason is that the isotope composition of Earth's materials is closest to that of enstatite chondrites, which are highly reducing, and that meteorites originating from bodies closer to the star than the asteroid belt have increasingly reduced compositions; thus phosphorus occurs as phosphides. A further reason is that water reacting with the ferrous ions in many olivines produces hydrogen, and is the source of methane of geochemical origin. The great bulk of the outer Earth has reduced iron, e.g. in the ferrous state in olivines and pyroxenes, and the overall oxidation state of a closed system is constant. The Earth is gradually oxidizing because water reacts with ferrous iron to make ferric iron and hydrogen, and while hydrogen in the presence of carbon or nitrogen makes reduced compounds, it can also be lost to space. Geologists seem very keen on an oxidized mantle and argue that the gases initially produced by volcanoes were carbon dioxide and molecular nitrogen.
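The sort of chemistry I have in mind here can be illustrated by the standard serpentinization reaction of the fayalite component of olivine (textbook geochemistry, offered as an example rather than a claim specific to my ebook):

3 Fe2SiO4 + 2 H2O → 2 Fe3O4 + 3 SiO2 + 2 H2

Ferrous iron is oxidized to the mixed ferrous/ferric state of magnetite, and the hydrogen released can then reduce carbon species (e.g. CO2 + 4 H2 → CH4 + 2 H2O, a route to methane of geochemical origin) or be lost to space.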
The first of the papers (Siebert et al., Science 339: 1194-1197) argued that the abundances of certain slightly siderophile elements such as V and Cr are better explained through initially oxidizing conditions, which were subsequently reduced to present values by transfer of oxygen to the core. They argue that reduced conditions lead to more Si in the core than is compatible with sonic measurements. For me, there were a number of difficulties with this argument, one being that too many components known to be present were left out of the calculations; another that the effect of water seemed to be omitted. Water would oxidize silicon, thus reducing the amount available to the core, and make hydrogen. In the second paper, Vočadlo (Nature 495: 177-178) carried out a theoretical study using the conditions at the present boundary between the inner and outer core (330 billion pascals and a temperature up to 6000 K) and argued that Si is equally probable in the inner solid core and outer liquid core, with iron oxide also present to account for oxygen. Perhaps, but the seismic properties and density of the core have yet to be matched with this proposal. It is also not exactly clear how the properties ascribed to components under these conditions were obtained (there are no experimental data!), and finally, these calculations left out a number of components, including nickel.
Two papers were more helpful to my cause. Bali et al. (Nature 495: 220-222) showed that water and hydrogen can exist as two immiscible phases in the mantle, which explains why there can be very reducing conditions while the upper mantle appears to be readily oxidized in relation to minor components like V and Cr. Meanwhile, Walter and Cottrell (Earth Planet. Sci. Lett. 365: 165-176) note that while multi-variable statistical modeling of siderophile element partitioning between core-forming metallic liquids and silicate melts forms the basis of physical models of core formation, the experimental data are too imprecise to discriminate between current models, and variations in the statistical regression of partitioning data exert a fundamental control on physical model outcomes. Such modeling also invariably depends on the assumption of a magma ocean.
To summarize these papers, on balance I do not think they falsify my proposal, although some geologists may not agree with that assessment. In other slightly good news for my proposal, NASA Science announced that Curiosity has drilled into a sedimentary rock in Gale Crater, at a place where water is assumed to have formed a small lake, and found within the rock nitrogen, hydrogen, oxygen, phosphorus and carbon: elements necessary for forming life. What I found important was the presence of nitrogen, because that almost assures us that there was originally reduced nitrogen, as my proposal requires. The nitrogen is most unlikely to have come from N2 in the atmosphere, because the atmosphere contains so little of it; only a radically different atmosphere in earlier times would deliver sufficient nitrogen to be fixed in the rock. The nature of the clay present is consistent with water of relatively low salinity weathering olivine. Also present was calcium sulphate, which is suggestive of neutral or mildly alkaline conditions at the time.
Posted by Ian Miller on Apr 8, 2013 3:17 AM BST
In a previous post, I argued that the issue of whether cyclopropane ring electrons are delocalizable was not exactly handled well. By itself, that may seem to be merely an irritant, but the question now is, how widespread is such an issue?
In a recent edition of Nature there was an article by Robert Antonucci, (Nature 495: 165 - 167) who argued that the scientific community was failing when trying to explain quasars. A quote: "In my opinion, the greatest limiting factor in understanding quasars is not a lack of intelligence, effort or creativity, nor is it a dearth of fantastic new facilities. It is a widespread lack of critical thought among many researchers. Theories are being published that have already been ruled out by observations. Observers cling to falsified theories when interpreting their data. Most of the AGN community is mesmerized by unphysical models that have no predictive power."
This is fairly stern stuff! It got worse. He accused scientists of continuing to use and refine overly simple versions of models that include disproved assumptions and that do not match observations without a lot of special pleading. Observers were not left out: "Some astronomers like to see what they believe." Even worse was to come. In 1984 a temperature was measured and found to be in accord with the disk accretion model. Later, an amateur found that the calculation had a missing factor of ten in Newton's gravitational constant! The correction, and the fact that the method now fails to account for observation, is hardly cited, while the original paper has 100 citations. He complained that this scientific community was producing fewer and fewer theoretical papers, while there is a burgeoning effort to find more examples, leading to statistical analyses that bring further problems, such as claiming causal links in plots of dependent variables.
The question now is, is chemistry in a better condition? I do not think so. How much original theory, as opposed to opaque computation, have you seen lately? My guess is, not much. How many of you think there is no further theory to find? I think the problem lies in reductionism. Everyone seems to believe that all chemistry is a consequence of the Schrödinger equation, but that equation cannot be solved for systems of real chemical interest, so apparently there is no point in looking further. That, in my opinion, is simply false. I do not doubt that the Schrödinger equation is generally correct, but that does not mean the only way to produce theoretical work is to solve it.
Final advice from Antonucci: "I urge my junior colleagues to spend 15 minutes every day thinking, palms down, eyes on the ceiling." Follow a Californian bumper sticker: "Don't just do something, sit there".
Posted by Ian Miller on Mar 24, 2013 10:45 PM GMT
Many, perhaps most, scientists would probably say no, you cannot; all you can do is falsify a theory, while you believe a theory to be true because all the evidence supports it. This raises a problem: what happens when the evidence that contradicts the theory is suppressed?
Thus, further to my previous posts, there were further subtle cues as to why cyclopropane does not demonstrate conjugation. For example, in a sequence of olefins, pronounced conjugative effects are demonstrated. On the other hand, while a cyclopropane ring adjacent to positive charge, or to potential positive charge such as arises in UV transitions, gives effects similar to those of conjugative units, if you add a second cyclopropane ring to the first, so that there are two in a row, the second one has no noticeable effect. Yet if we put three cyclopropane rings around a positively charged centre, the effects are very close to additive, which does not happen with cross-conjugation.
Similarly, with the cyclopropylcarbinyl carbenium ion, you would expect the bond to the carbinyl centre either to make an angle of 120 degrees to the plane of the cyclopropyl ring (as required by the Walsh MO treatment) or to approach that angle as the ion forms, but it does not. Instead, the centre moves towards the cyclopropane ring, as if there were an attractive force pulling it. That, of course, is exactly what should happen with my polarization field. While the fact that cyclopropane stabilized adjacent charge was taken as proof of conjugation, the associated minor details that contradicted that proposition were ignored.
An observation can be used to prove a scientific statement, provided you can write it in the form: "If, and only if, theory X is true, then you will observe Y". The observation of Y then proves theory X is true, as stated. Of course the theory may be incomplete, but it will be true as far as it goes. The problem is to justify the "only if" part of the statement, because how can you know there is not an alternative that has not been thought of yet?
The reason I have been writing these blogs on cyclopropane conjugation is not to justify my own youth. From a personal point of view, I could not care less whether anyone believes me, although I do feel that everyone should have the opportunity to consider the issue for themselves. If people want to believe the Earth is flat, well, I cannot do much about that. But people cannot form reasonable views on such matters if the "trivial details" that falsify a theory are suppressed. A review should be critical and complete, not merely fashionable. But suppose, you argue, the reviewer does not know about these details? That is why I think we need a new form of review, like a wiki, where everyone can contribute and a number of moderators bring order to what is produced. What do you think?
One final comment on this. One reason why everyone said cyclopropane conjugates was that they expected it to: molecular orbital theory, mainly the CNDO/2 version popular at the time, and also a more sophisticated version of MO theory championed by John Pople, said it would. Remember, molecular orbital theory starts by assuming total electron delocalization, and special reasons are required to produce bond localization. As Aristotle would have said, to find delocalization when you assume it in the first place is not a great achievement. More on this issue later.
Posted by Ian Miller on Mar 18, 2013 1:50 AM GMT
How do you tell which of two theories is likely to be correct? The answer is that each gives a set of predictions, and you have to find an experiment where the two theories predict discernibly different effects. More formally, you cannot state that one theory applies and the other does not from data in the intersection of the two sets of predictions. Thus one could not decide whether cyclopropane conjugates with adjacent unsaturation from the fact that positive charge adjacent to a cyclopropane ring is stabilized, because both the electron delocalization theory and my polarization field theory predicted that positive charge would be stabilized. Worse, calculations showed that, to within the uncertainties inherent in each calculation, the two gave essentially the same degree of stabilization: a little over 100 kJ/mol for the bare carbenium ion in a vacuum. On the other hand, the predicted effects were qualitatively opposite for negative charge. As noted in an earlier post, a case could be made that the required destabilization occurred, and there was certainly no evidence of significant stabilization, but it was difficult to call this definitive. Then I got lucky: key evidence was published.
One further piece of evidence sometimes quoted in favour of cyclopropane conjugating was that a cyclopropane ring adjacent to a chromophore generally gave a bathochromic shift and an enhanced extinction coefficient. Now, to absorb electromagnetic radiation and reach the excited state, the system must undergo a change in electric moment, and the probability of a photon being absorbed is proportional to that change in electric moment. Thus something like benzene must have an instantaneous dipole moment in the excited state. The net effect is probably most easily seen using the canonical structure representation, even if it is not strictly accurate. The net result is that for most transitions, a positive charge can be adjacent to the cyclopropane ring in the excited state, hence the polarization field interpretation predicts a bathochromic shift and an enhanced electric moment, exactly as does the conjugation theory.
It was some time after this that, for me, a key observation was made: the change of electric moment was measured for the n → π* UV transition of formaldehyde. The important point was that the change of electric moment was from oxygen to carbon, hence the same transition in a carbonyl adjacent to a cyclopropane ring would lead to a change of dipole moment with the negative end directed towards the cyclopropane ring. That change of electric moment would interact with my proposed polarization field, which would lead to any strained compound giving a hypsochromic shift to that transition compared with alkyl. This was important, because it was well known that conjugative effects give a pronounced bathochromic shift to all such transitions. For example, the corresponding transition in acrolein has a bathochromic shift of approximately 25 nm relative to a saturated aldehyde. I used my pseudocharge to calculate the magnitude of the hypsochromic shift for some strained systems, and got the shift for cyclopropane to within half a nanometer. (There was probably a certain amount of luck there, because these transitions give broad signals, and picking the maximum is a little subjective.) Of course I could also calculate proportional shifts for the π → π* transitions, which show bathochromic shifts. An interesting point here is that it was thought that a carbonyl adjacent to the bridgehead of bicyclobutyl had no n → π* transition. According to my calculation, it would have one, but it would be buried underneath the π → π* transition, a consequence of the larger shifts due to the higher strain moving the two transitions in opposite directions and thus eliminating most of their separation.
So, a triumph? Well, actually, no. Two reviews on the issue of electron delocalization in cyclopropane came out around this period. The first (Bull. Soc. Chim. France 1967, 357-370) simply stated that the hypsochromic shifts occurred, but that they were unimportant! The second (Angew. Chem. Int. Ed. Engl. 1979, 18, 809-886) got around the problem by simply ignoring it. It also ignored my work and, worse, all the references I had found to work suggesting there was no electron delocalization. That is not the science that I signed up for.
The problem with reviews is that once one is declared definitive, there is no forum in which to debate it. I later wrote a review that found over sixty different types of observation that falsified the delocalization theory, but I could not get it published. Accordingly, the textbooks and I disagree on this matter.
Posted by Ian Miller on Mar 11, 2013 2:35 AM GMT
Since I found nothing during February relevant to my theory of planetary formation, I thought I should outline why I think we need an alternative. The following is a very condensed look at the giant planets, and my ebook (Planetary Formation and Biogenesis) has much more detail.
The standard theory for a giant planet is that solids come together by some unknown mechanism and form planetesimals, and these, through gravity, form larger bodies, and finally planets. It is usually assumed for giants that this takes less than a million years (My); then, over a period of time that depends on the assumptions, these collide to form larger planets until they reach about 10 times the mass of the Earth, at which point they start accreting gas as well. (Actually, they will accrete gas by the time they reach the size of Mars, but such early atmospheres contribute little to the mass.) They then take about 10 – 15 My to reach a size where runaway gas accretion starts. So, what are the problems? I consider some of these to be:
(1) After 60 years, there is still no firm idea how the planetesimals form, so the distribution assumed for them is simply unverified;
(2) Simulations agree that planetesimal collisions take about 30 My to build Earth, so how does Neptune form fast enough, when matter density out there is much lower and orbital velocities are much slower? Collision probability depends on the square of particle density, and initial particle density is proportional to r^-q, where q is usually taken as 1.5, although that too is an assumption, and it could be 2. Further, once the average body contains n initial particles, the particle density falls to 1/n of its initial value;
(3) If material comes together by collision, then to get things to go fast enough, relative velocities have to increase as particle size increases, so why do the bodies not simply smash to pieces, assuming they form at all?
(4) The star LkCa 15 is approximately 2 My old and slightly smaller than the sun, yet it apparently has a planet of nearly 6 times Jupiter's mass at about 16 A.U., roughly three times Jupiter's distance from our star.
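To see how severe problem (2) is, here is a rough back-of-the-envelope sketch. The scaling law and the Earth anchor of ~30 My are from the text; my added assumptions, for illustration only, are that the collision timescale goes as 1/(n·v), with Keplerian velocity v ∝ r^-1/2, so t ∝ r^(q + 1/2). Real simulations include gravitational focusing, disk thickness and so on, so treat this purely as an order-of-magnitude illustration.

```python
# Rough scaling of planetesimal accretion time with heliocentric distance.
# Assumptions (mine, for illustration): t ∝ 1/(n·v), particle density
# n ∝ r^-q, Keplerian velocity v ∝ r^-1/2, hence t ∝ r^(q + 1/2),
# anchored to ~30 My for Earth at 1 A.U. (figure from the text).

def accretion_timescale_my(r_au, q=1.5, t_earth_my=30.0):
    """Scale Earth's ~30 My accretion time to distance r_au (in A.U.)."""
    return t_earth_my * r_au ** (q + 0.5)

for q in (1.5, 2.0):
    t = accretion_timescale_my(30.0, q=q)  # Neptune at ~30 A.U.
    print(f"q = {q}: ~{t:,.0f} My at 30 A.U.")
```

With q = 1.5 this gives ~27,000 My at Neptune's orbit, and q = 2 makes it far worse: either way, vastly longer than the disk lifetime, which is the point of objection (2).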
In my opinion, (4) is critical. Accretion disks last between 1 and 10 My after primary accretion, so the LkCa 15 system is a very young one. How, then, did its gas giant get so big? Obviously, everything has to happen a lot faster than standard theory allows. What are the possibilities? To start with, standard theory ignores chemistry, so what happens if we include it?
My concept is that the initial cores grow like snowballs. In the outer disk, water and silicates condense to form amorphous particles that adsorb other gases (Icarus 63: 317-332) and retain them past their melting points. As the particles fall inwards, the temperature rises, and at some point occluded volatiles that have passed their melting points are emitted. If, however, the melting point is not reached, the volatile is retained, more or less as a solid, and fills the pores. Now suppose two particles collide. If they are sufficiently far below a melting point, they bounce off each other; but if a volatile can melt, the energy of collision is absorbed in melting it. In other words, kinetic energy is converted to heat and the collision is, for the moment, inelastic. The liquid trapped in pores between the particles cannot escape, but it can merge, and when it cools it solidifies: we have pressure-induced melt-welding of the particles, much as a snowball grows under pressure. If so, we then look at the ices. Separated into subsets with similar melting properties, in order of decreasing melting point (temperatures in degrees K), these are: {water (273)}, {methanol/ammonia/water eutectic and CO2 (164-195)}, {CH4 and Ar (84-90)}, {CO and N2 (63-68)}, and {neon (25)}.
There are, therefore, zones where ice can accrete into larger bodies, depending on the temperatures in the disk. The surfaces of disks usually have temperature proportional to r^-0.75, but the interior should retain heat better. If we set the index at -0.825 and assume Jupiter is at the optimal place for a water-based core, then we predict, for the solar system: Saturn (water/ammonia/methanol) at 7.8 – 9.6 A.U. (actual, 9.5); Uranus (methane/argon) at 20 – 21.7 A.U. (actual, 19.7); Neptune (CO and N2) at 28 – 31 A.U. (actual, 30); and possibly a planet based on neon at about 95 A.U. The satellites are based on the same compositions, so we predict the Jovian system to be based only on water (the rest having volatilized); the Saturnian system to have ammonia and methanol (which can undergo chemistry to produce nitrogen and methane, which explains why Titan has an atmosphere and Ganymede does not); and the Uranian system to be the slowest starter, because methane and argon are relatively minor components, though it will grow faster than Neptune once total accretion gets under way, because matter density is greater. Neptune will initially grow faster than Uranus, because nitrogen and carbon monoxide are common, but more slowly once gravity becomes the driving force. As far as I know, this is the only theory that requires Neptune to start out bigger than Uranus, and always to be denser. It also predicts that planets initially grow in proportion to their cross-sectional area, because they grow in a flow of ice particles continually renewed by the stream of gas heading starwards. By not relying on collisions between equally sized objects, the rate of formation increases dramatically. Note that it also predicts no life under the ice at Europa, because the Europan sea will be deficient in both nitrogen and carbon. There are thin atmospheres around the major Jovian moons (thinner than the gases in a light bulb!), but on Europa there appear to be no nitrogenous species down to 7 orders of magnitude less than the major species. What do you think?
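The zone prediction above is easy to reproduce. The melting-point subsets and the index come from the text; my one added assumption is that Jupiter's actual distance (5.2 A.U.) anchors the water (273 K) zone. Then T ∝ r^-0.825 inverts to r = 5.2 × (273/T_melt)^(1/0.825):

```python
# Sketch of the ice-zone prediction: disk temperature T ∝ r^-0.825,
# anchored (my assumption) at Jupiter, 5.2 A.U., for the water zone (273 K).
INDEX = 0.825      # assumed interior temperature index from the text
R_JUPITER = 5.2    # A.U., anchor for the water-ice zone
T_WATER = 273.0    # K, melting point of water

def zone_radius(melt_k):
    """Radius (A.U.) where the disk temperature equals the melting point."""
    # T ∝ r^-INDEX  =>  r = R_J * (T_water / T_melt)^(1/INDEX)
    return R_JUPITER * (T_WATER / melt_k) ** (1.0 / INDEX)

subsets = {
    "Saturn (MeOH/NH3/H2O eutectic, CO2)": (164.0, 195.0),
    "Uranus (CH4, Ar)": (84.0, 90.0),
    "Neptune (CO, N2)": (63.0, 68.0),
    "neon-based planet?": (25.0, 25.0),
}

for name, (t_low, t_high) in subsets.items():
    # the higher melting point condenses closer in, so it sets the inner edge
    print(f"{name}: {zone_radius(t_high):.1f} - {zone_radius(t_low):.1f} A.U.")
```

Running this reproduces the ranges quoted above: 7.8 – 9.6 A.U. for the Saturn subset, 20 – 21.7 A.U. for Uranus, 28 – 31 A.U. for Neptune, and about 94 A.U. for a hypothetical neon planet.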
Posted by Ian Miller on Mar 4, 2013 1:35 AM GMT
I was feeling remarkably happy when my thesis was written, because I felt I had made an important advance. Then, disaster! Maerker and Roberts published a paper (J.A.C.S. 1966, 88, 1742-1759) asserting that the cyclopropyl ring also stabilized adjacent negative charge. If this were correct, the cyclopropane ring did conjugate with adjacent charge, and my polarization explanation, and with it my PhD thesis, were just plain wrong. The reason is, of course, that a polarization field will stabilize one charge but must destabilize the other, because the force between like charges is repulsive. My first response was deep despair; my second was, perhaps I had better read this paper carefully.
There are three major complications. The first is that if the lack of stability is indicated through rearrangement, that only means something else is more stable. Thus a Grignard reagent made from cyclopropylmethyl chloride leads to a ring-opened, rearranged "carbanion". As it happens, the cation made from cyclopropylmethyl chloride, cyclopropylmethyl alcohol, or a tosylate is also unstable and promptly rearranges. (This is the famous bicyclobutonium "non-classical" carbenium ion.) Rearrangements of the carbenium ion are inhibited by bulky substituents, and the stabilizing effect of the cyclopropyl ring is easily shown by considering the rates of formation of the ion, or the energy of the species by mass spectrometry. However, at the time neither of these techniques was available for the anion. The second is solvation. Carbanions tend to be generated in solvents such as ether or petrol, which almost forces ion pairing, whereas the carbenium ions tend to be made in acids more acidic than concentrated sulphuric acid, which have very high dielectric constants and strong solvating properties. Thus the cyclopropylcarbinyl carbenium ion made in solution is never stabilized to anywhere near the extent found by mass spectrometry. The third is that the polarization field is not a simple field. Four orbitals move towards a substituent, but at the corner of the cyclopropane ring there is a weak positive polarization field, due to the movement of the three orbitals about that atom, which, being very close, overrides the stronger effect of the more distant movement. Further, while the cyclopropyl anion receives some localized stabilization together with the expected destabilization, the associated cation in the ion pair is strongly stabilized, and overall the "anion" appears to be slightly stabilized. This effect is most strongly seen in calcium carbide, which, of course, cannot be stabilized by conjugation without violating the Exclusion Principle.
Further, according to the polarization interpretation, the "bare anion" formed on a carbon atom adjacent to the cyclopropyl ring should be destabilized, but by less than half as much as the cation is stabilized (because the charge is more distant, and by applying the virial theorem). In solution, solvation becomes an issue, as does the location of the cation.
So what was the evidence Roberts found relating to conjugative stabilization of the anion? Some of the evidence, in my opinion, falsified the conjugative explanation, because the anion refused to form when it should have. Such failures included: treating with butyl lithium (which meant that the protons were less acidic than those of butane), refluxing for 46 hr with phenylpotassium in heptane, treating with pentylpotassium (which reacts smoothly with ethylbenzene), and stirring at 80° in heptane with potassium and sodium monoxide. My theory might be still alive!
However, evidence for conjugation was claimed when the phenylmethylcyclopropylcarbinyl anion formed with potassium as the counterion, although it rearranged to the corresponding allylcarbinyl anion whenever there was any tendency towards covalent character. Roberts argued (almost certainly correctly) that when something like lithium was the counterion, the lithium would get close to the anionic centre and partial covalent binding would occur; potassium was big enough that the bulky substituents forced it away. However, it was here that we differed in our interpretation. Roberts claimed that the extra stability with potassium was due to the fact that the "pure anion" was formed and the cyclopropyl ring provided conjugative stabilization. My interpretation was that the potassium formed the "pure anion", which permitted delocalization, which in turn permitted the anionic charge to be delocalized onto the benzene ring, out of range of the cyclopropane ring. Any attempt at localizing the negative charge on the carbinyl carbon led to repulsive interactions from the cyclopropane ring, and hence rearrangement.
There were two problems with that explanation. The first is, is it convincing? You, the reader, can judge. The second was that there was no way to publish it. The problem with a scientific paper was that once something was asserted, that explanation stood; falsification with independent evidence was required, not a simple counter-assertion. Nevertheless, there is another lesson here. Just because somebody asserts that something has happened, that does not make it so. Read the evidence carefully!
Posted by Ian Miller on Feb 24, 2013 10:56 PM GMT