Direct evidence for competition in science comes from the finding that 60 per cent of scientists in a survey reported that at least once in their careers another scientist had preceded them in the publication of their findings. Scientists are most likely to be involved in such multiple discoveries if they are highly prolific and working in an area with many competitors. For this reason, scientists have to take strategic decisions about which areas to work in, since it is a disadvantage to be the ‘second’ discoverer. The psychologist B. F. Skinner advocated one strategy: ‘a first principle not formally recognized by scientific methodologists: when you run into something interesting, drop everything else and study it.’ The difficulty is to know what is interesting and when to do the dropping.
The Nobel laureate Barbara McClintock made exactly that sort of decision when, at the age of forty-two and already a scientist of distinction, she made the observation on maize that led eventually to her discovery of what became known as transposition of genes. She came across patches of cells in maize with different colours:
Something had to have occurred at an early mitosis (cell division), to give such a pattern. It was so striking that I dropped everything, without knowing – but I felt sure that I would be able to find out what it was that one cell gained and the other cell lost, because that was what it looked like … I do not know why, but I knew I would find the answer.
Six years later, in 1950, her talk at a symposium was met with silence and incomprehension. Her ideas were premature. It was very hard to incorporate into current knowledge her idea that pieces of chromosomal DNA moved around – that they were transposed. Stability of the position of genes on a chromosome was fundamental to genetic thinking. It required the work of others on different systems for her work to become acceptable and to be recognized as of fundamental importance. Only in the late 1960s did scientists discover transposition in bacteria. Because of bacteria’s very rapid life cycle – minutes, not a year as in maize – the system was much more amenable to analysis and could be used to demonstrate the validity of McClintock’s theory.
Stories similar to that of McClintock are not all that rare. Two classic examples are Wegener’s ideas on continental drift and Lord Kelvin’s ideas on the age of the earth. The former was right; the latter was wrong. Briefly, Alfred Wegener, a relatively unknown German meteorologist, put forward the idea, quite astonishing in the 1920s, that the continents of Africa and South America were once joined together but had, over millions of years, drifted apart. There was enormous opposition to his ideas – even vitriolic hostility. Among the reasons why his arguments were rejected were that they required a major rethink of many geological concepts and that there did not seem to be any mechanism that could account for the movement of the continents. Only in the 1960s did physicists provide both new evidence, based on measurements of the earth’s magnetic field, and a mechanism for movement of the continents which made the proposition acceptable. In a way, the case of Lord Kelvin shows the other side of the coin, for Kelvin was already a very famous physicist and his authority at the end of the nineteenth century was enormous. He opposed suggestions that the age of the earth was much greater than previously thought. He would not accept an age of the order of thousands of millions of years – the time-scale proposed by geologists – because, he argued, it contradicted the available data on the cooling of the earth. What he did not know, and what became established only later, was that the natural phenomenon of radioactivity heats the earth, and this rendered his analysis, and his objections, untenable. But it took a long time to overcome his opposition.
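Kelvin’s style of argument can be sketched with a simple conduction model (the figures below are representative values of the kind he used, not necessarily his exact ones). If the earth began uniformly molten at temperature T_0 and has cooled from the surface by conduction alone, the temperature gradient G measured at the surface after a time t is

G = \frac{T_0}{\sqrt{\pi \kappa t}} \qquad\Longrightarrow\qquad t = \frac{T_0^2}{\pi \kappa G^2},

where \kappa is the thermal diffusivity of rock. Taking T_0 \approx 3{,}900 °C, a measured gradient G \approx 0.037 °C per metre, and \kappa \approx 1.2 \times 10^{-6}\ \mathrm{m^2\,s^{-1}} gives t of the order of 10^8 years – roughly Kelvin’s figure, and far short of the geologists’ thousands of millions. Radioactive heating, unknown to Kelvin, destroys the model’s central assumption of a body that does nothing but cool.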
It should now come as no surprise that psychological studies of the Apollo moon scientists found that those judged the most creative were also the most resistant to changing their ideas. All agreed that the notion of the objective, emotionally disinterested scientist is naïve. The image of the disinterested, dispassionate scientist is no less false than that of the mad scientist who is willing to destroy the world for knowledge.
New results that confound current expectations are always treated with suspicion: in fact, it is this critical doubt that determines the way in which a scientific paper is read. The most important features are the title and the summary, for they decide whether one needs to know more. If the conclusions are not surprising, one may not read the results with any great care. If they are novel, however, they will be carefully scrutinized. But if one has reason to doubt the validity of the results one will examine the section on methods in detail.
What, then, determines the acceptance of new ideas? According to Max Planck, ‘A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents generally die, and a new generation grows up that is familiar with it.’ While there are many anecdotes about how, with increasing age, scientists generally become opponents of new ideas, this claim should be treated with caution and not be used to demonstrate the conservatism of science. Scientists do not like to give up ideas to which they have devoted their lives: there is no pleasure in having been wrong. And the resistance to new ideas is not necessarily age-related, for a new theory may simply be wrong. Scientists have to choose the best place for the investment of resources, and so, quite rightly, they will not give up their current theories, even if they involve discrepancies, unless they have something better with which to replace them.
An area of controversy is the claim that there is no rational basis for the objective assessment of rival theories which claim to be able to account for the same set of phenomena: since the rival theories may use concepts that are quite different, they cannot be meaningfully compared – they are incommensurable. The historian Thomas Kuhn claims, for example, that the concepts of Newtonian and Einsteinian mechanics are so different that they cannot be expressed in the same terms. However, this is disputed by most physicists, who seem to have no difficulty in comparing them, teaching them and showing how Newtonian mechanics can be thought of as a special case of the Einsteinian theory. And where there are conflicting theories in modern science, there are almost invariably ways of devising experiments that would, in principle, distinguish between them.
An area of controversy is the claim that there is no rational basis for the objective assessment of rival theories that purport to account for the same set of phenomena: since the rival theories may use concepts that are quite different, they cannot be meaningfully compared – they are incommensurable. The historian of science Thomas Kuhn claims, for example, that the concepts of Newtonian and Einsteinian mechanics are so different that they cannot be expressed in the same terms. However, this is disputed by most physicists, who seem to have no difficulty in comparing the two theories, teaching them and showing how Newtonian mechanics can be thought of as a special case of the Einsteinian theory. And where there are conflicting theories in modern science, there are almost invariably ways of devising experiments that would, in principle, distinguish between them.
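The contrast can be made concrete. In the older ‘relativistic mass’ language used in the passage above, a body of rest mass m_0 moving at velocity v has mass

m(v) = \frac{m_0}{\sqrt{1 - v^2/c^2}},

where c is the speed of light. For everyday velocities, v \ll c, the correction factor differs from 1 by roughly v^2/2c^2 and is utterly negligible – even at a tenth of the speed of light the increase is only about half of one per cent – which is why Newtonian mechanics, with its velocity-independent mass, survives as the limiting ‘special case’ of the Einsteinian theory mentioned above.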
The idea of incommensurability forms an important part of Kuhn’s image of how science works, which he originally set out in his highly influential book The Structure of Scientific Revolutions (1962). Kuhn characterizes as ‘normal science’ those periods when scientists are working within a shared set of ideas which define the field. He terms the dominant conceptual framework the ‘paradigm’. ‘Paradigm’ is a contentious concept, poorly defined, but it nevertheless captures an important component of science. For example, there is a big difference between working within the paradigm of Newtonian mechanics and working within the Einsteinian paradigm. In Newtonian mechanics, mass and velocity are independent entities; but in Einstein’s theory a body’s mass changes with its velocity, and space and time are relative rather than absolute. Or, to take two biological examples, there was with Darwin a paradigm-shift away from the constancy of species to an evolutionary paradigm in which species change, and, more recently, the revolution in molecular biology changed the paradigm from metabolism to information. Before the role of DNA was understood, most attention was focused on where the energy for making proteins came from; modern molecular biology introduced the idea that this was not the important issue and that the problem was rather what information determined the sequence of amino acids in the protein. DNA, as we have seen (Chapter 1), contains the necessary information.
Kuhn has further claimed that paradigm-changes come about through revolutions in science which result from the increasing strains put upon the existing paradigm. These strains arise because of the difficulties being experienced with the ideas constrained by the current paradigm. Since rival paradigms are regarded by Kuhn as incommensurable, and so cannot be compared, there is thus no rational basis for the change from one to another: rather, one has to explain the revolutions in terms of the social structure of the scientific community.
That is, there is a social process by which the community is persuaded to adopt the new paradigm, since, as we have seen, scientists do not like to give up their hard-won ideas. One may recall Planck’s remark that some scientists never do this and that the new ideas become established only because their opponents die. This may be rather a cynical view. Is it not much more likely that the community will adopt the new view – however painful, as with Wegener’s ideas about continental drift – when new evidence shows that the new theory provides a more satisfactory explanation? Nevertheless, Kuhn is correct in emphasizing the importance of social process in science, but in acknowledging this we approach the abyss of relativism (see Chapter 6).
There are indeed examples which show just the opposite of the process claimed by Kuhn. In these cases anomalies – that is, observed facts which are difficult to explain in terms of a current set of ideas – are recognized only after a new theory has been generally accepted. Before this, peculiar or uncomfortable evidence may simply have been ignored. When the new theory appears, however, these anomalies acquire a compelling explanation and are used to support the new concepts. For example, the creationist view in the middle of the nineteenth century held that species were fixed and that all animals had been made perfectly adapted to their environment. But this was clearly not true of some animals: some ducks with webbed feet did not swim, and blind animals that lived in caves nevertheless had eyes. Only with Darwin’s theory of evolution by natural selection were these anomalies recognized and explained, and then used to support the theory.
Karl Popper has argued that scientific theories can never be verified, only falsified, and that falsification is the true aim of the scientific endeavour (see Chapter 6). Bold conjecture is to be followed by attempts at falsification. But is this how scientists work? Scientists may pay lip service to the idea of prediction and falsification, but they do not always use it: the process is much more complex. There are, in fact, a number of excellent examples to show this neglect of ‘falsifying’ evidence. Galileo’s comment on Copernicus’s theory expresses this aspect forcefully. Copernicus’s theory about the movement of planets had difficulties with the phases of Venus, and these difficulties were resolved only with Galileo’s telescope, more than fifty years later. Galileo considered it praiseworthy in Copernicus that he had not permitted one unexplained puzzle to worry him. And if Copernicus had indeed known the explanation, ‘How much less would this sublime intellect be celebrated among the learned! For, as I said before, we may see that with reason as his guide he resolutely continued to affirm what sensible experience seemed to contradict.’
This neglect of falsification is a stance taken by scientists again and again. Robert Boyle, a giant of English experimental science in the seventeenth century, is an example. Two smooth bodies, such as marble discs, stick to one another when pressed together. Boyle thought that they were held together by air pressure and so predicted that in a vacuum they would come apart. His first experiments did not work, but, rather than give up his hypothesis, he attributed the failure to the vacuum in the apparatus being insufficient. With an improved apparatus he tried again and again until, as described in his experiment number 50, he succeeded:
When the engine was filled and ready to work we shook it so strongly that those that were wont to manage it, concluded that it would not bear to be so much shaken by the operation. Then beginning to pump out the air, we observed the marble to continue joined, until it was so far drawn out, that we began to be diffident whether they would separate; but at the 16th suck … the shaking of the engine being almost, if not quite, over, the marbles spontaneously fell asunder, wanting that pressure of air that formerly had held them together.
His conjecture had been shown to be right.
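The magnitude of the force Boyle was invoking is easy to estimate (the disc size here is illustrative, not Boyle’s own). Air presses on the exposed face of each disc with atmospheric pressure P \approx 1.0 \times 10^5\ \mathrm{N\,m^{-2}}, so for discs five centimetres in diameter the holding force is

F = P \times A \approx (1.0 \times 10^5\ \mathrm{N\,m^{-2}}) \times \pi (0.025\ \mathrm{m})^2 \approx 200\ \mathrm{N},

about the weight of a twenty-kilogram mass – ample to keep well-polished marbles together, and a force that should vanish, just as Boyle predicted, once the surrounding air is pumped away.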
Consider now the famous disagreement around 1910 between Robert A. Millikan in Chicago and Felix Ehrenhaft in Vienna, which has been studied in detail by the physicist and historian Gerald Holton. Their disagreement concerned the value of the smallest electrical charge found in nature – the charge on the electron. Millikan, in his first major paper, pointed out that this value ranks with the velocity of light as a fundamental physical constant. The value of the charge of the electron could be deduced from Faraday’s work on electrolysis, but Millikan wished to measure it directly – particularly since Ehrenhaft had reported finding charges of only a fraction of that expected to be carried by the electron.
Millikan’s experimental approach was to study the behaviour of oil drops that could be charged such that when a small droplet was moving upwards in an electric field against gravitational pull ‘with the smallest speed that it could take on, I could be certain that just one isolated electron was sitting on its back. The whole apparatus then represented a device for catching and essentially seeing an individual electron riding on a drop of oil.’ Thus Millikan’s technique involved observing single tiny oil droplets in what was effectively a very sensitive balance. In 1910 Millikan put forward a value for e, the charge on the electron, of 4.65 × 10⁻¹⁰ e.s.u. While Ehrenhaft’s average value was similar, he also found much smaller values, and in his results the value of the charge seemed to vary continuously.
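The logic of the measurement can be sketched in simplified form (Millikan’s actual analysis used timed rises and falls of the droplet and corrections to Stokes’ law of air resistance). With the field off, the droplet’s steady rate of fall gives its size and hence its weight mg; with the field E switched on and adjusted so that the droplet hangs stationary, the electrical force on its charge q just balances gravity:

qE = mg \qquad\Longrightarrow\qquad q = \frac{mg}{E}.

Millikan’s central claim was that the charges measured in this way always came in whole-number multiples of a single unit, q = ne with n = 1, 2, 3, \ldots; Ehrenhaft’s continuously varying fractional charges, had they been real, would have destroyed this quantization.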
Holton has examined Millikan’s papers and notebooks in detail. In the notebooks used for a 1910 publication, each of the thirty-eight observations is given a more or less personal rating from ‘three stars’ to none, and the sets of observations are given a weighting from 1 to 7. Millikan was effectively saying that he knew a good run when he saw one. Some observations were discarded altogether because he was unhappy with the experiments. But he goes on to say, ‘I would have discarded them had they not agreed with the results of other observations.’ In effect he is saying that he has assumed a particular value for the correct results, and that the fundamental charge is a constant. Having examined Millikan’s notebooks for the years 1911 and 1912, Holton writes, ‘it is clear what Ehrenhaft would have said had he obtained such data or had access to this notebook. Instead of neglecting the second observation, and many others like it in these two notebooks that shared the same fate, he would very likely have used all of these.’ The notebooks contain many exclamations such as ‘Very low. Something wrong.’ ‘This is almost exactly right and the best one I ever had!!!’ ‘Agreement poor.’
In the end Millikan’s view prevailed and he was awarded the Nobel Prize. He rejected data that did not fit his basic idea, and he would perhaps have justified doing so in terms of the quality of the experiments that produced the data. This is a judgement which all scientists make, and it is a crucial feature in distinguishing the good, even great, scientist from the less so: the remarkable ability not only to have the right ideas but also to judge which information to accept and which to reject.
Experimental skills themselves should not be underestimated. Humphry Davy, a great experimentalist in the nineteenth century, recognized how much knowledge was involved in doing an experiment on electricity: ‘To describe more minutely all the precautions observed would be tedious to those persons who are accustomed to experiments with voltaic apparatus, and unintelligible to others.’ And attempts to reproduce some of the experiments of Michael Faraday, an even greater experimentalist, have revealed how much skill is required – and even then it was often difficult actually to see what Faraday recorded that he saw. Indeed, like so many others, Faraday showed considerable determination to continue when he obtained negative results. Even today in molecular biology there are those with ‘green fingers’. The ability to get experiments to work is more than just following a rigid set of instructions. If repeating the work of others can be tricky, initiating a new investigation requires even more skill.
It must be admitted that Millikan may have taken his judgement beyond reasonable boundaries; nevertheless, as Holton argues, the graveyard of science is littered with those who did not practise a suspension of disbelief – who did not hold in abeyance final judgements concerning the validity of apparent falsifications of promising hypotheses. At least one of the reasons for suspending disbelief is that experiments are sometimes wrong.
One must keep in mind Crick’s remark that a theory that fits all the facts is bound to be wrong, as some of the facts will be wrong.
There is a relevant story about Charles II, who once invited fellows of the Royal Society to explain to him why a dead fish weighs more than a living one. The fellows responded with ingenious explanations, until the King pointed out that what he had told them was simply not true.
There are several examples of Holton’s principle. The first illustrates a very important point: falsification can itself be false – there is no guarantee that an experimental falsification will not turn out to be flawed. The theory of the physicists Weinberg and Salam on the unification of two of the fundamental forces of nature – the electromagnetic force and the weak nuclear force – was tested by experiments carried out in enormous machines (particle accelerators) designed to accelerate particles to very high energies. The initial experiments showed that the theory was wrong. Only later experiments showed that the initial experiments were themselves wrong, and the theory was confirmed.
The second example illustrates this point even more clearly, as, unlike with the Weinberg–Salam theory, the experiments were done by the scientist himself.
In 1960 Denis Burkitt, a doctor who had been working in Africa, gave a talk in a London medical school in which he described a tumour, now known as Burkitt’s lymphoma, which was the commonest children’s tumour in tropical Africa. Not only was this the first description of the disease, but Burkitt also showed that its occurrence depended on rainfall and temperature. Anthony Epstein, a virologist present at Burkitt’s lecture, concluded that the cause had to be a virus, even though evidence that cancer could be caused by viruses was at that time regarded with deep suspicion, and the possibility that human tumours could have a viral origin was considered almost absurd.