But it seems to me that our present theories, even the successful ones, are not yet constructed so completely in accord with sound principles, and that in this day and generation criticism is a most necessary and useful enterprise for the physicist.
Physical science stands today in a highly anomalous position. On the one hand, no branch of knowledge has ever occupied a higher place in general public esteem. The spectacular way in which the abstract ideas of the theoretical scientist and the discoveries of his colleagues in the laboratories have been applied to the fashioning of ingenious devices that have drastically changed the whole world picture has made a profound impression on the man in the street, and the word “scientific” has acquired an unparalleled prestige. To some degree, at least, these sentiments are shared by the rank and file of the professional scientists, and the confident words “We know…” continually echo and reecho through the halls of learning.
But this unbounded confidence is completely lacking among the “insiders” of the profession, the relatively small group of theorists who bear the burden of keeping scientific theory equal to the growing demands upon it: demands which continually become more urgent as the pace of discovery quickens in the laboratories and in the observatories. These men make no secret of their dissatisfaction with present-day theory and their grave concern over the future of physical science. P.A.M. Dirac, for instance, tells us flatly that after many years of intensive research the efforts of the world’s physicists to find a satisfactory theory have been a failure.2 Coming as it does from one who has had a prominent part in the development of those modern theories which he characterizes as failures, this statement stands in marked contrast to the complacent attitude of the scientific profession in general with regard to the adequacy of the underpinnings of their theoretical structure.
Nor is Dirac alone in his opinion. Practically all of the most prominent leaders in theoretical science have expressed somewhat the same thought, explicitly or tacitly, at one time or another during recent years. Erwin Schrödinger, another of the developers of current theory, fervently hoped for "some upheaval of old beliefs which in the end will lead to something better than the mess of formulas which now surrounds our subject."3 P.W. Bridgman aimed many a sharp barb at the most cherished doctrines of modern science. "Is this honestly… a very impressive performance?" he asks, referring to wave mechanics. "Is it not exactly the sort of compromise that we should have predicted in advance would be the only possible one if it should prove that we were incapable of inventing any vitally new way of thinking about small-scale things?"4 And he sums up his impressions of the General Theory of Relativity in these words: "…It seems to me that the arguments that have led up to the theory and the whole state of mind of most physicists with regard to it may some day become one of the puzzles of history."5 Even Werner Heisenberg, whose attitude toward present-day theory is particularly sympathetic because of his close personal identification with some of the outstanding features of the system of thought currently in vogue, reveals his true appraisal of the existing situation when he admits, "It is obvious that at the present state of our knowledge it would be hopeless to try to find the correct theory of elementary particles."6 Such comments and admissions by some of the principal architects of currently accepted theory are particularly significant, but there is no lack of confirmation from other sources. Some theorists are beginning to doubt whether an adequate theory can ever be constructed. C.N. Yang of Princeton, for example, was quoted in a recent news release as expressing some doubts about the ability of the human brain in general, and his in particular, to accomplish this task.7 Truly, as Philip M. Morse characterized the existing situation, "It is an unhappy time for theory."8
The question Why? naturally suggests itself. Why do the acknowledged leaders in the field take such a pessimistic view of the theoretical structure that is regarded so highly by the rank and file of the scientific profession: a structure that enjoys an acceptance so complete that even the most extravagant claims in its behalf are received without demur? The answer to this question is not at all difficult to find; on the contrary, it is almost immediately evident that these leading theorists are appraising currently popular theory much less favorably because they are applying more rigid standards in judging the validity of current claims to knowledge. The scientist who attempts to clarify or improve the structure of theory cannot afford to follow the general practice of accepting today’s best guess as the equivalent of fact; he has to do his best to make certain of the solidity of his foundation before he attempts to build anything upon it. All too often that foundation crumbles in spite of all of the care that is taken to check it thoroughly; anything less than this maximum care would simply invite disaster.
To those who look upon present-day scientific “knowledge” from this critical viewpoint it is obvious that much of it is not knowledge at all. Such a statement may seem incredible on first consideration, since it is generally understood that physical science is an “exact” science and that it draws a clear line of demarcation between factual and non-factual material. In theory this is true. Speculation and hypothesis play an important part in scientific research, but the products of such activity are not supposed to be considered in any way authoritative unless and until they are verified by experiment or observation. The most distinctive feature of science is its acceptance of the established facts as the ultimate authority. As it happens, however, scientists are not only scientists, they are human beings, and in this latter capacity they are subject to the ordinary weaknesses of the human race, including a strong bias in favor of familiar and commonly accepted ideas, a totally unscientific reliance on presumably authoritative pronouncements, and a distinct reluctance to admit ignorance. All of these add up to a marked tendency to regard general acceptance as equivalent to proof, a tendency that has the effect of diluting the firmly established factual material of science with a large admixture of matter of an unproved and uncertain character.
It is generally conceded that physical science is faced with a difficult and formidable task in readjusting its basic concepts to overcome the obstacles that now stand in the way of further progress. Here are some of the recent comments on the subject: from J.R. Oppenheimer, "It is clear that we are in for one of the very difficult, probably very heroic, and at least thoroughly unpredictable revolutions in physical understanding and physical theory"9; from Freeman J. Dyson, "For the last ten years it has been clear to most physicists that a basic conceptual innovation will be needed in order to come to grips with the properties of elementary particles"10; from David Bohm, "Moreover, physics is now faced with a crisis in which it is generally admitted that further changes will have to take place, which will probably be as revolutionary compared to relativity and the quantum theory as these theories are compared to classical physics"12; from Norwood R. Hanson, "The whole [quantum] theory may topple; in places the foundations seem far from secure"13; from Ernest Hutten, "Most physicists feel that the time is ripe, again, for a radical change in our ideas, and for a new theory."14
But what happens if the hopes of Schrödinger, Hutten, et al., materialize and the revolutionary new theory which they anticipate so eagerly actually does appear? As matters now stand, such a theory will be summarily rejected, as it will inevitably conflict with many of the ideas and concepts that we are not permitted to question because they are part of the basic dogma of present-day science, even though they may owe that standing merely to general acceptance rather than to any factual support. The fate of these new ideas is all the more certain because the task of appraising them is normally left to a small group of individuals who, although they may be willing to concede the necessity for radical changes in principle, are strongly opposed to any change in the general lines of thought to which they are now committed. The average scientist does not normally feel that he can take the time to examine basic scientific concepts thoroughly. As Bridgman points out, many of the old ideas to which he subscribes have not been thought through carefully but are held "in the comfortable belief that some one must have examined them at some time."15
The objective of this memorandum is to bring out that under present conditions the scientific profession cannot afford to rely on this indefinite "some one" to put its theoretical house in order; the situation is too acute for that. These pages will emphasize the astounding degree to which general acceptance has been substituted for proof in current scientific practice, and the almost incredible number of non-factual items which are masquerading as established facts. Obviously the new and better theory which is so greatly desired cannot be erected on any such dubious foundation, but if the debris is to be cleared away it will be necessary for the individual scientist to take a hand in the matter and arrive at his own conclusions, rather than to assume that some one will do it for him. It is not difficult for anyone to see how much of the scientific “knowledge” of the present day is merely pseudo-knowledge, once an effort is made to look the situation squarely in the face. Let us begin by pointing out that
A substantial part of what now passes for knowledge in scientific circles actually consists of extrapolations from observed facts rather than true factual material. As Bridgman once observed, many of these are "perfectly hair-raising" extrapolations. A good example is the almost universal belief that we now “know” the nature of the processes which furnish the energy supply for the stars. Even in a day when “hair-raising” extrapolations are somewhat commonplace, this one sets some kind of a record. In view of the gigantic extrapolation that is required to pass from the relatively insignificant temperatures and pressures obtainable on earth to the immensely greater magnitudes which we believe (also through extrapolation) exist in the stellar interiors, even the thought that the answers might be correct calls for the exercise of no small degree of faith in the validity of our processes; any contention that the extrapolated results constitute actual knowledge is simply ridiculous.
To make matters worse, this is not merely an extrapolation. It also involves the assumption that the isotope of hydrogen which is stable under terrestrial conditions will become unstable under stellar conditions: an assumption that has no factual support. It is popularly believed that the hypothetical hydrogen-to-helium conversion process attributed to the stars is simply another “atomic bomb” type of reaction; we often hear the statement that in using atomic power we are drawing on energy from the same source utilized by the stars. The truth is, however, that all of our known atomic energy-producing processes depend on the existence of unstable isotopes that will ultimately disintegrate of their own accord with the production of just as much energy if we let them follow their own course. All that we actually accomplish is to increase the rate of disintegration. The hypothetical conversion of the H¹ isotope to helium is not another process of this same kind; it is a process that does not take place spontaneously on earth or anywhere else that we know of.
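The scale of the hypothesis can at least be made concrete. The following sketch works through the mass-defect arithmetic behind the hypothetical four-hydrogen-to-helium conversion; the atomic masses are standard published values assumed for illustration, not figures drawn from this memorandum.

```python
# Mass-defect arithmetic for the hypothetical 4 H -> He conversion.
# Standard published atomic masses are assumed here for illustration;
# none of these figures come from the memorandum itself.
U_TO_KG = 1.66053906660e-27   # atomic mass unit, kg
C = 2.99792458e8              # speed of light, m/s
J_TO_MEV = 1.0 / 1.602176634e-13

m_h = 1.007825    # atomic mass of H-1, in u
m_he = 4.002602   # atomic mass of He-4, in u

delta_m = 4 * m_h - m_he                            # mass lost in the conversion, u
energy_mev = delta_m * U_TO_KG * C**2 * J_TO_MEV    # E = (delta m) c^2

print(f"mass defect: {delta_m:.6f} u (~0.7% of the input mass)")
print(f"energy per helium atom formed: {energy_mev:.1f} MeV")
```

About 0.7 percent of the mass entering the reaction would be released as energy, roughly 26.7 MeV per helium atom formed; it is this large yield that makes the hypothesis so attractive as a stellar energy source, whatever its factual standing.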
Many atomic reactions which do not occur naturally, including this hydrogen-to-helium reaction, can be forced to take place under appropriate conditions. Small-scale experiments in the laboratories have indicated that some of these reactions are exothermic (or perhaps we should say exoergic, to be more general). From this the conclusion has been drawn that if the temperature is raised high enough, the kinetic energy of the atoms themselves will be sufficient to “ignite” a fusion reaction that will be self-sustaining. This sounds very plausible, to be sure, especially since we are accustomed to thinking of atomic reactions in terms of an analogy with combustion, but if we examine the conclusion carefully it is apparent that it involves the assumption that an exothermic process is a naturally occurring process, and we know that this is not always true. If it were, we would not need atomic power—we could meet our power requirements by desalting sea water. The currently popular theory of stellar energy generation is therefore not only a “hair-raising” extrapolation; it is also based on a very questionable assumption.
So far as the objective of this present memorandum is concerned, it is sufficient to show that the accepted ideas as to the nature of the stellar energy process are not necessarily true and that treating them as established facts is totally unjustified. It may be mentioned in passing, however, that there are actually several items of evidence which indicate that these ideas are not only open to question but are completely erroneous. For instance, it is a general rule in laboratory experiments involving high-energy impacts that the degree of fragmentation becomes greater as the incident energy increases. If this rule holds good at stellar temperatures and pressures, and we have no reason to think otherwise, it favors the existence of hydrogen atoms rather than helium atoms. Even the combustion analogy suggests this same conclusion. Extremely high temperatures do not favor exothermic chemical combinations; on the contrary, they dissociate the combinations that already exist. Furthermore, the hypothesis of a self-sustaining hydrogen reaction ignited by high temperature in a body consisting primarily of hydrogen introduces a problem of control for which there seems to be no answer. The steady and relatively slow generation of energy which we actually observe in most stars requires some kind of a definite limitation on the energy supply or on the process itself which is wholly incompatible with current concepts. Other lines of reasoning based on such evidence as the ratio of hydrogen to helium in the cosmic rays and many observed facts from the astronomical field lead to similar conclusions.
If a physical theory that has been generally accepted on the strength of faulty or inadequate evidence is actually valid in spite of these deficiencies, as we can expect will frequently be true, no harm has been done. On the other hand, if such a theory is not valid, the bad effects are not necessarily confined to the area directly affected; they may be multiplied manifold by the use of the original erroneous conclusion as a base for the erection of additional theories. A mistaken idea as to the source of the energy of the stars is not too serious in itself but, as it happens, the whole structure of stellar evolutionary theory has been based on this assumption as to the nature of the stellar energy generation process, and the net result has been a serious distortion of the entire astronomical picture.
The astronomer does his work under rather severe handicaps. He cannot experiment; all that he can do is to observe. Furthermore, the range of conditions within his field of observation is so much greater than the range existing on earth that much of what he observes can be related to familiar phenomena only by long extrapolations, and these, as has been brought out, are always subject to serious question as to their validity. Then also, the astronomer is, for the most part, limited to what is essentially an instantaneous picture, even though it encompasses a wide expanse of time as well as space. He sees astronomical objects in what are apparently various stages of evolution, but he cannot determine the direction of evolution from direct observation; he must draw his conclusions on this point from collateral evidence of some kind. In addition to these inherent and unavoidable handicaps of his profession the astronomer has voluntarily accepted another: a blind and unquestioning faith in the conclusions of the physicists with respect to the stellar energy process.
Some quite definite evolutionary direction signs are available in the astronomical field itself, but rather than take issue with the physicists, the astronomers have chosen to ignore these signs and to base their ideas as to direction solely on the hypothetical energy generation process. If this process is actually operative then it necessarily follows that the hot, massive stars, which are radiating enormous amounts of energy, must be comparatively young, as their supply of hydrogen could not maintain this tremendous energy output for more than a relatively short time. But this raises some very difficult questions. As Bart J. Bok states, "It is no small matter to accept as proven the conclusion that some of our most conspicuous supergiants, like Rigel, were formed so very recently on the cosmic scale of time measurement,"16 and Cecilia Payne-Gaposchkin tells us that the results of age calculations based on this hydrogen conversion process are "staggering."17 However, the astronomers have, almost without exception, shrugged off the contradictory evidence from their own observations and have accepted this product of a “hair-raising” extrapolation by the physicists as an incontestable fact. Explanations of stellar evolution almost invariably begin with some such statement as this, which introduces Payne-Gaposchkin’s discussion of the subject: "The problem of the ages of stars is closely interwoven with the problem of stellar nutrition… A star can shine only so long as hydrogen is available"18; or this from Struve: "…Our new insight into the age of the stars stems from our new knowledge of how these bodies produce their energy."19
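The arithmetic that yields these "staggering" ages is simple enough to sketch. The figures below for a Rigel-like supergiant are round illustrative assumptions of mine, not values from the memorandum; the calculation merely bounds how long a star could shine by converting hydrogen to helium at roughly 0.7 percent mass-to-energy efficiency.

```python
# Back-of-envelope upper bound on how long a luminous star could
# shine by converting hydrogen to helium.  The stellar figures are
# round illustrative assumptions, not values from the memorandum.
M_SUN_KG = 1.989e30      # solar mass, kg
L_SUN_W = 3.828e26       # solar luminosity, W
C = 2.99792458e8         # speed of light, m/s
EFFICIENCY = 0.007       # ~0.7% of the converted mass released as energy
SECONDS_PER_YEAR = 3.156e7

def max_lifetime_years(mass_suns: float, lum_suns: float) -> float:
    """Years needed to radiate away the energy from converting the
    star's entire mass of hydrogen, at constant luminosity."""
    energy_j = EFFICIENCY * mass_suns * M_SUN_KG * C**2
    return energy_j / (lum_suns * L_SUN_W) / SECONDS_PER_YEAR

# A Rigel-like supergiant: roughly 20 solar masses, 1e5 solar luminosities
print(f"supergiant upper bound: {max_lifetime_years(20, 1e5):.1e} years")
print(f"sun-like star, for comparison: {max_lifetime_years(1, 1):.1e} years")
```

On these assumptions the supergiant's outer bound is a few tens of millions of years against roughly a hundred billion for a sun-like star, which is why accepting the energy hypothesis forces the astronomers to regard their most conspicuous stars as very recent arrivals.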
Here is eloquent testimony to the serious consequences of the lack of adequate discrimination between factual and non-factual material in present-day scientific practice. The atomic physicists themselves are not deceived. They know that the concept of a self-sustaining process of conversion of the stable hydrogen isotope to helium is only a hypothesis, not a fact, and if we read the fine print, we will find them admitting that this is the case. For example, R.E. Marshak, in an article entitled "The Energy of the Stars," makes this statement: "So we can safely assume that the stars produce energy by the combinations of light elements through the collisions of their swiftly-moving nuclei,"20 and Louis Ridenour sums up Marshak’s entire article in these words: "…As Robert E. Marshak explained in the previous article, there is excellent reason to believe that the energy source of most stars…is rather a complicated chain of nuclear transformations whose end result is to form one atom of helium out of four atoms of hydrogen."21
But unfortunately this concept of the energy generation process is not usually presented in its true light as something "we can safely assume" or something "there is excellent reason to believe." Because of the overwhelming confidence of the physicists in the validity of their hypothesis, a confidence based more on the prestige of the profession than on the merits of the hypothesis itself, there is a general tendency to talk in terms of positive knowledge, and by the time this hypothesis reaches the astronomers it has become an article of faith against which factual evidence is powerless. E.J. Opik tells us: "This knowledge [of the conversion of hydrogen to helium] is so well founded that it furnishes a reliable basis for the calculation of time rates of stellar evolution."22 The high estate to which the “assumption” and “belief” of the physicist have now risen is all the more remarkable since Opik admits on the very next page that this reliable basis is clearly unreliable. "The energy source of the giants remains a puzzle," he says, "and hence some uneasiness may be felt about the application of the theory to the white dwarfs." Otto Struve’s confidence in the energy generation process, as reflected in the statement previously quoted, is equally remarkable in that it is apparently undisturbed by the fact that this “new insight” of which he speaks forces him to characterize factual knowledge from his own field as being in "apparent defiance of the modern theory of stellar evolution."23
The damaging effect of these unjustified claims to positive knowledge in basic physics is by no means confined to relatively distant fields such as astronomy. Even physics itself is highly compartmentalized in present-day practice, and the individual physicists are reluctant to cross boundary lines. Leprince-Ringuet describes the situation in these words: "A physicist bears the stamp imposed by the rigid and precise demands of his discipline. At meetings he rarely comments on anything that lies outside of his own well-defined specialty."24 If the findings of each minuscule division of science are to be accepted without question by the rest of the scientific profession, this merely underscores the necessity of a more accurate definition of the true nature of the conclusions reached in research, and of some measures that will discourage the tendency to say “We know…” when the expression should be “We think…” Perhaps we have reached the point where we need some interdisciplinary agency to restrain the enthusiasm with which the various specialists overstate the case for the currently popular theories in their respective fields.
In addition to the type of extrapolation which has been discussed, there are also what we may call extrapolations of the negative. Here we find through observation that certain things do not happen in regions directly accessible (the earth, primarily). We then generalize this observation by extrapolating it to the regions that are not accessible, and we say that such things never happen. There is nothing inherently wrong about such an extrapolation, or any other type of extrapolation; on the contrary, reasoning from the known to the unknown is sound practice. But it must not be forgotten that an extrapolation of an observed fact is something totally different from the fact itself, and conclusions reached by extrapolation cannot be more than tentative until they are confirmed in some way.
A good example of this type of extrapolation is provided by the question of isotopic stability. Under terrestrial conditions the isotope Fe56 is stable. In the absence of any evidence to the contrary, we assume that stability is an inherent property and that Fe56 is always stable. Similarly we assume that the neutron is always (with one curious exception) unstable because it is unstable in the region where we can observe it. These are natural and logical assumptions under the circumstances, but they are only assumptions, not established facts. Consequently they cannot be used to refute any theory which contends that isotopic stability is determined by the environment and that under appropriate conditions the neutron may be stable and Fe56 unstable. If such a theory is to be attacked, it must be challenged on other grounds. The mere fact that it conflicts with an extrapolation of terrestrial experience is irrelevant, since that extrapolation has no factual standing, regardless of the unanimity with which it is currently accepted.
Here again we find the extrapolations serving as the basis for additional conclusions and again it should be emphasized that such conclusions are not facts and they cannot be used in lieu of positive knowledge. Much of the current thinking about the cosmic rays, for instance, follows along lines dictated by the belief that those particles which are short-lived in the terrestrial environment are likewise short-lived in interstellar and intergalactic space. The available factual information neither confirms nor denies this assumption. Certainly the burden of proof rests upon anyone who suggests that isotopic stabilities in free space differ from those which we find in the terrestrial environment, but if any evidence can be produced in support of such a suggestion there is nothing in the currently available data from experiment or observation that can refute such evidence. The mere fact that practically everyone believes that the stability or lack of stability is inherent does not make this true. Scientific questions cannot be settled by public opinion polls. However numerous its supporters may be, this is still nothing more than a conclusion based on an unsupported extrapolation of the observed facts, and as such it cannot qualify as knowledge. In this connection it is interesting to note that current ideas regarding stability are not consistent. The same physicist who reacts violently to the suggestion that stability may be a function of the environment and that his conclusions as to the atom-building and similar processes in the extra-terrestrial environments may therefore be wide of the mark, does not hesitate to advance exactly the same hypothesis when he finds this necessary in order to fit the theory that the normally unstable neutron is a constituent of stable atoms.
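To see how much hangs on this extrapolation, consider a hedged illustration using the muon, a particle chosen here purely for the arithmetic (the memorandum names no particular particle): if its laboratory mean lifetime of about 2.2 microseconds also governs it in free space, then even at extreme energies essentially none survive an interstellar flight.

```python
import math

# If the muon's laboratory mean lifetime (~2.197 microseconds) also
# holds in free space, how many muons survive a long flight?
# A hypothetical illustration of the extrapolation at issue; the
# particle and figures are assumed, not taken from the memorandum.
TAU = 2.197e-6        # lab-frame mean lifetime, seconds
C = 2.99792458e8      # speed of light, m/s

def surviving_fraction(distance_m: float, gamma: float) -> float:
    """Fraction surviving a flight of distance_m at Lorentz factor
    gamma, taking the speed as essentially c (valid for gamma >> 1)."""
    proper_time = distance_m / (gamma * C)   # elapsed time in the muon's own frame
    return math.exp(-proper_time / TAU)

mean_path_m = 1000 * C * TAU   # mean flight path at gamma = 1000
print(f"mean path at gamma=1000: {mean_path_m / 1000:.0f} km")
print(f"fraction surviving one light-year: {surviving_fraction(9.46e15, 1000):.2e}")
```

The mean flight path works out to only a few hundred kilometers, and the fraction surviving a single light-year underflows to zero; a conclusion that such particles cannot be of distant origin therefore rests entirely on extrapolating the terrestrial lifetime to regions where it has never been measured.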
Somewhat analogous to the practice of extrapolation, but of a more questionable character, is the practice of exaggeration; that is, claiming more than the observations or measurements actually substantiate. A classic example is Einstein’s theory that mass is a function of velocity. Throughout scientific literature this theory is described as having been “proved” by the results of experiment and by the successful use of the predictions of the theory in the design of the particle accelerators. Yet at the same time that a host of scientific authorities are proclaiming this theory as firmly established and incontestable experimental fact, practically every elementary physics textbook admits that it is actually nothing more than an arbitrary selection from among several possible alternative explanations of the observed facts. The experiments simply show that if a particle is subjected to an unchanging electric or magnetic force, the resulting acceleration decreases at high velocities and approaches a limit of zero at the velocity of light. The further conclusion that the decrease in acceleration is due to an increase in mass is a pure assumption that has no factual foundation whatever.
As one textbook author explains the situation: "There seems to be no reason to believe that there is any change in the charge, and we therefore conclude that the mass increases." Another says: "This decrease is interpreted as an increase of mass with speed, charge being constant." Obviously an interpretation of the observed facts is not a fact in itself, and it is rather strange that the theorists have been so eager to accept this particular interpretation that they have not even taken the time to examine the full range of possible alternative interpretations. As these quotations from the textbooks indicate, it has been taken for granted that either the charge or the mass must be variable, but actually it is the acceleration that has been measured, and the acceleration is a relation of force to mass, not of charge to mass. The accepted interpretations of the observed facts therefore contain the additional assumption that the effective force exerted by a charge is constant irrespective of the velocity of the object to which it is applied. The possibility that this assumption is invalid cannot logically be excluded from consideration; on the contrary, there are some distinct advantages in maintaining both charge and mass as constant magnitudes. When we get down to bedrock it is clear that the theory of an increase in mass is not something that has been proved by experiment, as is so widely claimed; it is a pure assumption that goes beyond the scope of the experiment, and is only one of several possible alternatives. Any theory which leads to the observed decrease in acceleration at high velocities is just as consistent with the observed facts as Einstein’s theory that the mass increases.
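The logical situation can be put in symbols. The deflection experiments determine only the combination entering the relativistic equation of motion (standard relativistic dynamics, stated here purely for illustration of the ambiguity):

```latex
F \;=\; \frac{d}{dt}\bigl(\gamma m v\bigr),
\qquad
\gamma \;=\; \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
```

so that the measured acceleration under a constant transverse force is \(a = F/(\gamma m)\), and under a longitudinal force \(a = F/(\gamma^{3} m)\). Reading the factor \(\gamma\) into the mass gives the conventional "increase of mass"; attaching it instead to the effective force exerted by the field on a moving charge, with \(m\) and \(q\) held constant, reproduces exactly the same measured accelerations, which is the alternative interpretation the text describes.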
One of the disturbing features of current scientific practice is the increasing tendency to rationalize failure to solve difficult problems by setting up postulates that solutions to these problems are impossible. As Alfred Lande puts it: "In short, if you cannot clarify a problematic situation, declare it to be ’fundamental,’ then proclaim a corresponding ’principle.’"25 Herbert Dingle quotes a physicist who even goes so far as to contend that "we are at last firmly grounded on the principle of uncertainty."26 And these are no minor matters on the periphery of physical theory; they are matters of vital importance to the foundations of the theoretical structure. Quoting R.B. Braithwaite: "Such propositions [principles of impotence] play a very large part at the present time in the fundamental theories of physics."27
It should hardly be necessary to point out that from their very nature these principles of impotence are incapable of proof and are never entitled to the status of established facts. It is very doubtful whether they should even be recognized as legitimate scientific devices, to say nothing of being given any authoritative standing. Some of them should certainly be barred. The underlying concept upon which all scientific research is based, and without which the application of time and effort to this task would be wholly unjustified, is the conviction that the physical universe is essentially reasonable and operates according to fixed principles. Thus far we have never encountered any actual evidence to the contrary; as scientific knowledge has expanded one after another of the phenomena that were inexplicable to our early ancestors has been found to follow fixed and unchanging laws. But now there is a growing use of a practice which is not only questionable by nature, in that it is an easy way of avoiding the difficult task of solving complex problems, but also leads to conclusions which are in direct opposition to the philosophical premise which is our only justification for undertaking scientific research in the first place.
Conclusions such as this from Heisenberg: "…The idea of an objective real world whose smallest parts exist objectively in the same sense as stones or trees exist, independently of whether or not we observe them…is impossible,"28 or this from Bridgman: "…The world is not intrinsically reasonable or understandable; it acquires these properties in ever-increasing degree as we ascend from the realm of the very little to the realm of everyday things,"29 or this from Herbert Dingle: "The ’real’ world is not only unknown and unknowable but inconceivable—that is to say, contradictory or absurd,"30 are completely at odds with the underlying philosophy of scientific research. If we had some definite and positive evidence that they were true, we should have to accept them, however unpalatable they may be, but accepting them purely on the strength of principles of impotence, which means that they have no factual support at all, is totally illogical.
The bald truth is that these statements are simply efforts to avoid admitting failure. The theorists have failed in their attempt to discover the exact physical and mathematical properties of the component parts of the atom, and rather than admit that their abilities are unequal to the task, as C.N. Yang suggested in the interview previously mentioned, they prefer to postulate that these properties do not exist and that even the atom itself, as Heisenberg says, has no immediate and direct properties at all.31 The ironic part of it is that it is now beginning to become evident that the task at which the theorists have failed is not an impossible task, as they would like to have us believe; it is a meaningless task. As will be brought out later in this memorandum, a growing mass of evidence indicates that the atomic constituents whose properties have been so difficult to define simply do not exist at all: a possibility that should have been given serious consideration long ago. When the most strenuous efforts over a long period of years by the best minds in the scientific profession fail to clarify the properties of the hypothetical constituents of the atom, and finally lead to the conclusion that these entities have no definite properties and do not even “exist objectively,” mere common sense certainly calls for a thorough examination of the obvious possibility that they do not exist at all. But this natural and logical explanation of the difficulties that have been experienced has been completely ignored while the theorists have gone on an uncontrolled excursion into a weird land of fantasy completely divorced from physical reality.
In this instance the great prestige of physical science, and physics in particular, has operated to its detriment by permitting it to transcend the limits of logic and common sense unchecked by those restraints which are applied to all less glorified branches of knowledge. As James R. Newman very aptly remarks: In this century the professional philosophers have let the physicists get away with murder. It is a safe bet that no other group of scientists could have passed off and gained acceptance for such an extraordinary principle as complementarity, nor succeeded in elevating indeterminacy to a universal law.32
Since a principle of impotence is inherently incapable of proof, it is obvious that claims of proof of such propositions are fallacious. Ordinarily the fallacy is not difficult to locate. For example, the First Postulate of Relativity, the denial of the existence of absolute motion, is a principle of impotence. It is commonly presented in the guise of positive knowledge, not because it is contended that this postulate itself has been proved, but because the Relativity Theory is claimed to have been proved, and this is a part of that theory. The fallacy lies in the fact that the Relativity Theory has not been, and cannot be, proved as a whole, since it includes four independent postulates. Two of these, the constant velocity of light and the equivalence of gravitational and inertial mass, are supported by sufficient factual evidence to justify the contention that they have been proved. Actually these are not theories at all; they are experimental facts which most physicists were willing to concede even before they were incorporated into the Relativity Theory. But there is no logical justification for extending the evidence in favor of these two postulates to the First Postulate, which has no necessary connection with the other two, beyond the fact that it was wrapped up in the same package by the originator of that theory.
The First Postulate is simply one of the possible ways of getting around the contradiction introduced by the experimental discovery of the constant velocity of light, and it has won general acceptance by default, because no one has seen fit to develop a case in favor of any of the various alternatives. Within the framework of accepted theories of space, time, and motion, the constant velocity of light is definitely incompatible with the concept of absolute velocity, and Einstein chose to sacrifice absolute velocity. It is equally feasible to retain absolute velocity and to modify present space-time concepts instead, particularly since there are no great demands on this postulate; the only purpose that it serves is to evade the contradiction that would otherwise exist. It is rather surprising that this possibility has not been explored, considering the tremendous amount of time and effort that has been devoted to research in theoretical physics. There is no obvious advantage in such a substitution of postulates, since it would not change any of the essential aspects of Relativity, but the mere fact that there is no definite purpose in plain sight does not ordinarily deter basic research. Exploration of the various alternatives would at least clarify the status of the so-called “paradoxes,” as these are consequences of the First Postulate, and replacement of this postulate by one of the alternatives would either eliminate the paradoxes or substitute a new set for the ones now existing.
The truth of this statement is self-evident and it may seem superfluous to mention it, yet some of the most important items of present-day physical “knowledge” fail to qualify as such under this rule. For example, we know that positively charged particles repel each other with a force which becomes very great at short distances. On the basis of what we know, therefore, it is impossible for a number of protons to remain together in a unit such as the atomic nucleus. Likewise we know that the neutron is unstable in the local environment, and on the basis of what we know the existence of neutrons in a stable atom is impossible. This does not mean that the nuclear theory is necessarily wrong; there may be factors in the situation of which we are ignorant. But it does mean that the nuclear atom is not something that we know; it is a hypothesis, and one of a very dubious character. Yet the pressure for conformity to accepted ideas is so strong that the existence of the hypothesis is considered proof of the existence of some unknown force: a “nuclear” force opposing the known force that would otherwise disrupt the hypothetical structure. The logical justification for this line of thought is certainly hard to detect.
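The magnitude of the repulsion in question is easily checked. The following sketch, not taken from the memorandum, applies Coulomb's law at an assumed separation of about two femtometers (the order of nuclear dimensions); the constants and the chosen separation are standard textbook values, not figures from the text.

```python
# Rough arithmetic illustrating the electrostatic repulsion between two
# protons at an assumed separation of ~2 femtometers, via Coulomb's law
# F = k * q1 * q2 / r**2. All values are standard constants, not from
# the memorandum.
K = 8.99e9            # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19  # elementary charge, C
R = 2.0e-15           # assumed separation, m (order of nuclear dimensions)

force = K * E_CHARGE**2 / R**2
print(f"Repulsive force: {force:.1f} N")
```

The result, on the order of tens of newtons acting on a single particle whose mass is about 1.7 × 10⁻²⁷ kg, shows why the text describes the force as becoming “very great at short distances.”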
It may be assumed, however, that these conclusions have been influenced to a considerable degree by the impression that the existence of an atomic nucleus was proved by the early experiments of Rutherford, who showed that fast-moving particles pass through thin sheets of solid matter without interference except in cases where they make direct hits on what is now identified as the nucleus. As a result of these and similar experiments it has been concluded that almost all of the mass of the atom is concentrated in this relatively small region. But here again we encounter the same curious failure to explore alternatives that has been noted earlier in this discussion. The experimental results are consistent with the conclusion that almost all of the mass is concentrated in a relatively small volume, to be sure, but they are equally consistent with the alternative conclusion that all of the mass is concentrated at this point; in other words, that this is the atom, not the nucleus of the atom. It seems to have been completely overlooked that while there is ample evidence of the existence of something massive at the center of the region in which the atom is located, there is nothing at all to confirm the existence of the hypothetical outer parts. The idea that the atoms are in contact in the solid state is pure guesswork; indeed the fact that the inter-atomic distance can be reduced very substantially by application of pressure, to less than half in some cases, strongly suggests that the spacing in the solid is simply the result of a force equilibrium. The conventional diagram of the NaCl crystal found in most elementary chemistry textbooks may be a fairly accurate representation of the physical facts.
The other mainstay of the present-day theory of the atom is the fact that electrons can be obtained from atoms, particularly as products of atomic disintegrations. This is the overriding point that has convinced the scientific world that electrons are constituents of atoms, and has made it easy for any additional finding which is consistent with the nuclear hypothesis to be accepted as proof of its validity. The argument may be summarized as follows:

1. Electrons are observed to emerge from atoms, particularly in atomic disintegrations.
2. The electron is a particle capable of independent existence.
3. Therefore the electron must have existed in the atom, as a constituent, before the disintegration took place.
At first glance this argument may seem sound, and certainly it has been accepted without serious question, but its true status can be brought out clearly by stating the analogous argument concerning the photon:

1. Photons are observed to emerge from atoms, particularly in atomic disintegrations.
2. The photon is an entity capable of independent existence.
3. Therefore the photon must have existed in the atom, as a constituent, before the disintegration took place.
Here we find that on the basis of exactly the same evidence, current practice arrives at diametrically opposite conclusions. Because preconceived ideas concerning the electron suggest that it could be an atomic constituent, the evidence from the disintegrations is accepted as proof that it is, whereas similar preconceived ideas concerning the photon suggest that it could not be an atomic constituent, and exactly the same evidence is therefore taken to mean that the photon was created in the process. Actually, of course, the physical evidence does not distinguish between these alternatives, nor does it preclude the possibility that some other explanation may be correct. What the evidence shows is that the electron either existed in the atom as a constituent before the disintegration or was created in the disintegration process; it does not tell us which.
The clear and positive picture of the atom set forth in the physics textbooks has long since been abandoned by the “front-line” theorists, as by this time it is evident that the concept of such a structure is untenable. What started out in the original nuclear atom conceived by Rutherford and developed theoretically by Bohr as the identifiable and physically observable entity which we know as the electron has now become a mysterious something which no one seems to be quite able to define. Heisenberg tries. The indivisible elementary particle of modern physics, he tells us, is not a material particle in space and time but, in a way, only a symbol on whose introduction the laws of nature assume an especially simple form.33
It is nothing short of ludicrous to find our elementary textbooks explaining the present-day “knowledge” of the structure of the atom in positive terms and in great detail, while at the same time Heisenberg and the Copenhagen school, who represent the “official” viewpoint of present-day theoretical physics, tell us that the atom of modern physics can only be symbolized by a partial differential equation in an abstract multidimensional space.34 The statements commonly found in the textbooks, such as this one: There is so much physical and chemical evidence for the correctness of the modern atomic picture that there can be no reasonable doubt of its validity,35 become nothing but absurdities when the “modern atomic picture” explained in detail by the textbook authors is flatly repudiated by the leading theorists in the physical field.
The truth is that the “physical and chemical evidence” which is given so much weight by these authors is not actual evidence of the nuclear atom; it is evidence which is consistent with that hypothesis, but which is equally consistent with other hypotheses, and therefore is not a proof of any of them. Many physical and chemical phenomena, for example, are functions of the number of “valence electrons” which the atom presumably possesses, according to the nuclear theory, and these relations constitute a major part of the “evidence” to which the textbook authors refer. But if these relationships are examined critically it will be found that the electron, as such, plays no part in them. They are relations which involve only numbers, and whether these are numbers of electrons or numbers of something else is completely immaterial. We find, for instance, that Moseley’s Law relates the x-ray frequencies of potassium to the number 19, and this is taken as a confirmation of the hypothesis that the atom of potassium contains 19 electrons. But there is not the slightest evidence that electrons have anything to do with this situation. The relation is solely with the number 19 and any theory which leads to the existence of 19 units of any kind in the potassium atom is perfectly consistent with the experimental knowledge.
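The point about Moseley's Law can be seen directly from its standard form. The sketch below, which is not from the memorandum, uses the conventional K-alpha approximation, nu = (3/4) · Rc · (Z − 1)², where Rc is the Rydberg frequency; the function name is illustrative. The formula contains nothing but the integer Z, 19 in the case of potassium, which is exactly the text's contention: any theory supplying 19 units of some kind fits it equally well.

```python
# Moseley's law for K-alpha x-rays: nu = (3/4) * R*c * (Z - 1)**2.
# Note that the only input specific to the element is the integer Z;
# the formula makes no reference to electrons as such.
RYDBERG_FREQ = 3.29e15  # R*c, the Rydberg frequency in Hz (standard value)

def k_alpha_frequency(z):
    """K-alpha x-ray frequency predicted by Moseley's law for atomic number z."""
    return 0.75 * RYDBERG_FREQ * (z - 1) ** 2

nu_potassium = k_alpha_frequency(19)  # potassium: Z = 19
print(f"Predicted K-alpha frequency for Z=19: {nu_potassium:.2e} Hz")
```

The predicted value, roughly 8 × 10¹⁷ Hz, agrees well with the measured potassium K-alpha line, yet the calculation invoked only the number 19.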
Much the same kind of statement can be made about almost any experimental result that is now expressed in terms of the nuclear theory. The experimenters who are busily engaged in measuring “nuclear cross-sections,” for example, will probably be horrified at the suggestion that there is no such thing as a nucleus. But these investigators are measuring cross-sections, not nuclear cross-sections. The identification with the nucleus is theoretical, not experimental, and if the theory has to be changed this does not affect the experimental results—it simply changes the language in which they are expressed. We just substitute “atomic” for “nuclear” and everything else goes on just the same. The prevailing impression that the huge mass of experimental information of this kind accumulated in recent years constitutes evidence of the validity of the nuclear theory is entirely erroneous.
Summarizing the foregoing discussion, it is not necessary to assume the existence of an atomic nucleus to explain the results of Rutherford’s experiments, and Occam’s Principle, one of the sound common-sense rules of science, tells us that we should not make unnecessary assumptions. Furthermore, the fact that electrons can be obtained from matter, as a result of disintegrations or other processes, is not a proof that the electron is a constituent of matter, nor does it even prove that the electron existed prior to the event that caused it to appear. This demolishes the two primary arguments in favor of the nuclear theory, the arguments which have been relied upon to offset the two definite and positive conflicts between this theory and known facts. At this point, therefore, it would appear that the nuclear theory is not only unproved, but is actually disproved. The question then arises, must the entire electrical theory of matter be discarded? But when we remember that the electrical theory antedated the nuclear atom by about one hundred years it is evident that this theory is not tied to the nuclear hypothesis; it simply goes back to where it was before Rutherford. However, this does not necessarily mean that it stands on solid ground, so let us take a look at the facts.
The electrical theory of matter originated from the observation that certain substances on being dissolved separate into two parts, or ions, one bearing a positive charge, the other a negative. Later, various phenomena such as the thermionic emission of electrons, the photoelectric effect, etc., which involve ejection of electrons from matter, were discovered. Since the experimentally observed electron is a negatively charged and highly mobile particle, it was concluded that the ionic charges were due to an excess or deficiency of electrons relative to the number which the atoms presumably should contain in the neutral condition. This is plausible, and in many respects a rather attractive theory, but for present purposes these aspects are irrelevant. What we want to know is, does this theory constitute positive knowledge, as the textbooks contend? The answer to this question is definitely no; close scrutiny shows that here again we have nothing more than a very questionable hypothesis.
Looking first at the matter of ionization in solution, it is clear that this is the same kind of a situation that exists in the case of the electrons which appear among the atomic disintegration products. The theory that the electric charges existed prior to the time that the substance dissolved is only one of the possible explanations of the observed facts. It is not even the best of the readily available explanations, because if we accept it, we find it necessary to conclude that there are two different ionization mechanisms. The general chemistry textbook on the author’s desk admits that ions are formed not only by the so-called “ionic” compounds but also by compounds that are definitely not ionic, and goes on to say: If ions are not present in an electrolyte before it is dissolved, they must be formed from the molecules of the compound as it dissolves.36 But as long as we have to assume that some ionic charges are created in the process of ionization, it is clearly within the bounds of possibility that all ionic charges are thus created: an explanation that has a considerable advantage from the standpoint of simplicity. The existence of charged particles in solution is therefore far from being proof of the existence of electric charges in undissolved matter. Similarly, everything that is known about the photoelectric effect, the thermionic emission of electrons, and other phenomena of this kind is entirely consistent with the hypothesis that the charged particles are created in the process. Obviously we cannot regard any theory as having been verified by experiment if the experimental results are equally consistent with some different hypothesis, and hence none of these results qualifies as a proof of the electrical theory of matter.
At the time the nuclear atom was originally conceived, the idea that electrons might be created in some physical process seemed so remote that it was probably never even given consideration, but today it is commonplace. Such creation is currently being observed in a great variety of processes, ranging all the way from the production of a single electron-positron pair by an energetic photon to the production of a shower of millions of particles by a cosmic ray primary. We now find that the electron with which we deal experimentally is something altogether different from the atomic building block envisioned by Rutherford; it is a tangible but evanescent particle that can be produced or destroyed with relative ease.
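The creation of an electron-positron pair mentioned above has a well-known energy threshold. The arithmetic below is a standard calculation, not something drawn from the memorandum: a photon can materialize into a pair only if its energy at least equals the combined rest energy of the two particles, 2·mₑ·c².

```python
# Minimum photon energy for electron-positron pair production:
# E_min = 2 * m_e * c**2, expressed in MeV. Standard constants only.
M_ELECTRON = 9.109e-31  # electron rest mass, kg
C = 2.998e8             # speed of light, m/s
J_PER_MEV = 1.602e-13   # joules per MeV

threshold_mev = 2 * M_ELECTRON * C**2 / J_PER_MEV
print(f"Pair-production threshold: {threshold_mev:.3f} MeV")
```

A gamma ray of roughly 1.02 MeV or more thus suffices to bring two charged particles into existence where none existed before, which is the sense in which the text calls the experimental electron an evanescent particle.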
If we take a broad view of the entire picture of the electron, including both theory and experiment, it is apparent that there is a curious dichotomy in the usual concepts of this particle. The experimental electron is a definite and well-defined thing, notwithstanding its impermanence. We can produce it at will by specific processes. We can measure its mass, its charge, and its velocity. We can control its movement and we have processes by which we can record the path that it takes in response to these controls. Indeed, we even have such precise control over the electron movement that we can utilize it as a powerful means of producing magnified images of objects that are too small for optical magnification. In short, the experimental electron is a well-behaved and perfectly normal physical entity. On the other hand, the theoretical electron, which, according to currently accepted theory, is one of the constituents of the atom, is a very strange phenomenon. It presumably moves in an orbit around the atomic nucleus, but we cannot locate either the orbit or the position of the electron in the orbit; the best we can do, the experts say, is to compute a probability that it might be found at a certain location. Unlike the experimental electron, this theoretical electron does not follow ordinary physical laws, but has some unique and unprecedented behavior characteristics of its own, including a strange and totally unexplained ability to jump from one orbit to another with no apparent cause. Furthermore, as already mentioned, the leading theorists of the present day tell us that it cannot be accommodated within the three-dimensional framework of physical space; it must be regarded merely as a symbol rather than as an objectively real particle.
Under the circumstances it becomes pertinent to inquire whether there is actually any adequate justification for identifying the theoretical electron of the nuclear atom with the experimental electron. Once this question is asked, it clearly must be answered in the negative. Certainly we cannot logically contend that two entities that differ widely in practically every respect are actually identical. What has happened is that the theorists originally started out with the assumption that one of the constituents of the atom is a particle identical with the experimental electron, but as more and more information was accumulated it became clear that the properties of the experimental electron were incompatible with the requirements of the hypothetical atomic constituent. The assumed properties of the atomic electron have therefore been progressively modified to meet these requirements until they now bear no resemblance to those of the experimental electron. It is high time, therefore, that the pretense of identity should be abandoned and that the “electron” constituent of the nuclear atom should be recognized for what it actually is: a purely hypothetical concept which has no relation nor resemblance to any particle that has been observed experimentally.
Naturally the physicist is reluctant to take this step, in spite of the absurdity of claiming identity for two particles which have practically nothing in common, since the modern theory of the atom rests on the assumption that it is a composite structure built up by a combination of some of the smaller particles which have been observed experimentally. Once it becomes necessary to admit that the atomic electron is a purely hypothetical creation, unrelated to anything that has ever been observed, an abstract thing, no longer intuitable in terms of the familiar aspects of everyday experience, as H. Margenau describes it,37 this basic concept of the atomic structure is destroyed and the whole theory crumbles. After having been so confident for so long, the physicists find it very distasteful to face the realities of this situation and to admit that the concept of an atom constructed from known particles of sub-atomic size is no longer tenable. But this is exactly the kind of catharsis that we need in order to clear the way for a new physical theory adequate to meet present-day demands. After all, how can we expect a theory based on the assumption that matter is composed of permanent “indivisible elementary particles,” similar in all respects, other than size, to the atoms of Democritus, to give us the right answers in an age when it is admitted that we do not really know how to define an elementary particle38 but we do know that no particle is permanent; that all particles, large or small, material and non-material, are subject to exchange of identities in a bewildering variety of transformation reactions?
Actually we must go still a step further. Not only is it now clear that the constituent parts of the atom are not known particles such as electrons, it is also becoming apparent, as mentioned earlier in the discussion, that the atom has no constituent parts at all. Of course, the immediate reaction to this statement will be that it is preposterous, since we can readily break the atom into separate parts. But let us examine this situation a little more closely. Suppose we have a certain object moving with a high velocity, and we then detach the velocity by transferring it to something else. Must we then conclude that the original object consisted of two separate parts and that we have broken it into its two constituents? It is very doubtful whether anyone would ever support such a conclusion as this, but if we compare this situation with the break-up of the atom, it is evident that the only basis on which we can claim that we have done something different with the atom is by contending that the parts which are detached from the atom are inherently of a different character than the motion which was detached from our hypothetical object.
Can such a contention be justified? We have found that both the proton and the electron can be transformed into radiation simply by contact with their respective antiparticles. All matter seems to be radiation, says Morse,39 and so far as we know, radiation is nothing more than a vibratory motion. Can we say that the proton is inherently different from motion when we can transform it into motion? Are we not forced to the conclusion that the atom could very well be an integral entity endowed with specific amounts of various kinds of motion (or something equivalent to motion) and that what we call breaking it up into parts amounts to nothing more than detaching portions of this motion (or the equivalent thereof)?
And is it not true that the trend of discovery in the sub-atomic field is driving us slowly but inexorably in this direction, toward just such a conclusion as the foregoing? Already it is evident that there must be some common denominator, not only for the particles, but for radiation as well. Particles are materialized from radiation and are “annihilated” back to radiation again, protons become neutrons and vice versa, mesons are created from kinetic energy and ultimately decay into electrons and neutrinos. The atomic reactors transform mass into energy, while at the same time the particle accelerators are busily engaged in converting energy back into mass. This interchangeability between the “elementary” particles, between the particles and radiation, and between mass and energy, has already administered the coup de grace to the popular theory that the universe is constructed from different kinds of elementary “building blocks” even though the inertia of customary habits of thought and a strong attachment to familiar ideas are delaying general recognition of this fact. It is already evident that the raw material of atom building is more analogous to modeling clay than to building blocks.
The question of the structure of matter in general, and of the atom in particular, is a perfect example of the kind of thing with which this memorandum is concerned. Here we have a theory, the concept of the nuclear atom, which is accepted as positive knowledge on the strength of a number of supposedly incontestable items of experimental evidence. Yet when we examine each of these items carefully and critically, we find that all of them dissolve into thin air; there is no positive knowledge to be found anywhere. Even the supposed “discovery” of the atomic nucleus which originated the whole line of thought turns out to be fictitious. On critical analysis we see no reason to attribute any significance to this discovery other than that the atom itself is smaller than had been thought. Now, as a fitting climax, we see that it is quite unlikely that there is any such thing as a “part” of an atom. It is becoming increasingly evident that there are no “elementary particles” and that both the atoms and the sub-atomic particles belong to essentially the same class, a class that should be called “primary” rather than “elementary,” in that these are the entities which are formed directly from the basic substance of the universe: the permissible forms, we might say, into which the basic clay can be shaped. The difference between particles and atoms is one of degree only; we may appropriately look upon the particles as incomplete atoms. This entire theory is interesting today chiefly because it demonstrates how a false, almost ludicrous, hypothesis may explain a large body of facts, and how widely, if it is not subjected to the most rigorous experimental scrutiny, it may be accepted.40 The foregoing comments were actually made about the phlogiston theory, but in the light of the points brought out in the preceding paragraphs it can be safely predicted that future physics textbooks will say essentially the same thing about the theory of the nuclear atom.
It is too much to expect that the raising of issues such as those covered in this memorandum will be greeted with any degree of enthusiasm. Old beliefs, like shoes, are comfortable and they are given up only with great reluctance. But if the insistent demands for “radical changes” and “basic conceptual innovations” which emanate from our foremost theorists are to be met, many of these old beliefs must be sacrificed, no matter how distressing the parting may be. As Sir George Thomson expresses it: There is some new item wanted to make these new pieces [mesons, etc.] fall into place in the puzzle… when the idea comes it may very probably involve a recasting of fundamental ideas and the abandonment of something that we now take completely for granted.41
This memorandum is not intended to specify which of the old beliefs should be abandoned, or what should replace them. Rather it is addressed to the prerequisite task of pointing out which of the old beliefs are vulnerable; that is, which of them have no sound factual basis and therefore could be erroneous. Of course, it has been necessary in some instances to go a step farther and show that some currently popular ideas, such as the nuclear atom and the conversion of hydrogen to helium in the stars, are almost certainly wrong, but the primary thesis of the discussion is that general acceptance is no guarantee of the validity of a theory; all too often this acceptance is based merely on the lack of anything better, rather than on the merits of the theory itself. The principal point which it is intended to emphasize is that the new and improved basic theory that is so fervently desired must conflict with some of these ideas that owe their present standing to general acceptance rather than to factual proof, and it may, indeed it probably will, conflict with many of them. It therefore follows that such conflicts, as long as they are confined to items of the categories herein enumerated, do not constitute valid arguments against any new theory that may be proposed, and they should not be allowed to block consideration of new theories. An open mind toward conceptual innovations is particularly important under conditions such as those that now exist: …It must be recognized, says John A. Wheeler, that the present situation calls for a certain daring in considering and testing new ideas.42