CHAPTER 18

The Retreat From Reality

In the eight chapters from 9 to 17 (excluding Chapter 12) we have described the general features of electricity—both current electricity and electric charges—as they emerge from a development of the consequences of the postulates of the theory of the universe of motion. This development arrives at a picture of the place of electricity in the physical universe that is totally different from the one that we get from conventional physical theory. However, the new view agrees with the electrical observations and measurements, and is entirely consistent with empirical knowledge in related areas, whereas conventional theory is deficient in both respects. Thus there is ample justification for concluding that the currently accepted theories dealing with electricity are, to a significant degree, wrong.

This finding that an entire subdivision of accepted physical theory is not valid is difficult for most scientists to accept, particularly in view of the remarkable progress that has been made in the application of existing theory to practical problems. But neither a long period of acceptance nor a record of usefulness is sufficient to verify a theory. The history of science is full of theories that enjoyed general acceptance for long periods of time, and contributed significantly to the advance of knowledge, yet eventually had to be discarded because of fatal defects. Present-day electrical theory is not unique in this respect; it is just another addition to the long list of temporary solutions to physical problems.

The question then arises, How is it possible for errors of this magnitude to make their way into the accepted structure of physical theory? It is not difficult to find the answer. Actually, there are so many factors tending to facilitate acceptance of erroneous theories, and to resist parting with them after they are once accepted, that it has been something of an achievement to keep the error content of physical theory as low as it is. The fundamental problem is that physical science deals with so many entities and phenomena whose basic nature is not understood. For example, present-day physics has no understanding of the nature of the electric charge. We are simply told that we must not ask; that the existence of charges has to be accepted as one of the given features of nature. This frees theory construction from the constraints that would normally apply. In the absence of an adequate understanding, it is possible to construct and secure acceptance of theories in which charges are assigned functions that are clearly seen to be incompatible with the place of electric charge in the pattern of physical activity, once that place is specifically defined.

None of the other basic entities of the physical universe—about six or eight of them, the exact number depending on the way in which the structure of fundamental theory is erected—is much, if any, better known than the electric charge. The nature of time, for instance, is even more of a mystery. But these entities are the foundation stones of physics, and in order to construct a physical theory it is necessary to make some assumptions about each of them. This means that present-day physical theory is based on some thirty or forty assumptions about entities that are almost totally unknown.

Obviously, the probability that all of these assumptions about the unknown are valid is near zero. Thus it is practically certain, simply from a consideration of the nature of its foundations, that the accepted structure of theory contains some serious errors.

In addition to the effects of the lack of understanding of the fundamental entities of the physical universe, there are some further reasons for the continued existence of errors in conventional physical theory that have their origin in the attitudes of scientists toward their subject matter. There is a general tendency, for instance, to regard a theory as firmly established if, according to the prevailing scientific opinion, it is the best theory of the subject that is currently available. As expressed by Henry Margenau, the modern scientist does not speak of a theory as true or false, but as “correct or incorrect relative to a given state of scientific knowledge.”64

One of the results of this policy is that conclusions as to the validity of theories along the outer boundaries of scientific knowledge are customarily reached without any consideration of the cumulative effect of the weak links in the chains of deductions leading to the premises of these theories. For example, we frequently encounter statements similar to the following:

The laws of modern physics virtually demand that black holes exist.65

No one who accepts general relativity has found any way to escape the prediction that black holes must exist in our galaxy.66

These statements tacitly assume that the reader accepts the “laws of modern physics” and the assertions of general relativity as incontestable, and that all that is necessary to confirm a conclusion—even a preposterous conclusion such as the existence of black holes—is to verify the logical validity of the deductions from these presumably established premises. The truth is, however, that the black hole hypothesis stands at the end of a long line of successive conclusions, included in which are more than two dozen pure assumptions. When this line of theoretical development is examined as a whole, rather than merely looking at the last step on a long road, it can be seen that arrival at the black hole conclusion is a clear indication that the line of thought has taken a wrong turn somewhere, and has diverged from physical reality. It will therefore be appropriate, in the present connection, to undertake an examination of this line of theoretical development, which originated with some speculations as to the nature of electricity.

The age of electricity began with a series of experimental discoveries: first, static electricity, positive* and negative*, then current electricity, and later the identification of the electron as the carrier of the electric current. Two major issues confronted the early theorists: (1) Are static and current electricity different entities, or merely two different forms of the same thing? and (2) Is the electron only a charge, or is it a charged particle? Unfortunately, the consensus reached on question (1) by the scientific community was wrong. The theory of electricity thus took a wrong direction almost from the start. There was spirited opposition to this erroneous conclusion in the early days of electrical research, but Rowland’s experiment, in which he demonstrated that a moving charge has the magnetic properties of an electric current, silenced most of the critics of the “one electricity” hypothesis.

The issue as to the existence of a carrier of electric charge—a “bare” electron—has not been settled in this manner. Rather, there has been a sort of a compromise. It is now generally conceded that the charge is not a completely independent entity. As expressed by Richard Feynman, “there is still ‘something’ there when the charge is removed.”67 But the wrong decision on question (1) prevents recognition of the functions of the uncharged electron, leaving it as a vague “something” not credited with any physical properties, or any effect on the activities in which the electron participates. The results of this lack of recognition of the physical status of the uncharged electron, which we have now identified as a unit of electric quantity, were described in the preceding pages, and do not need to be repeated. What we will now undertake to do is to trace the path of a more serious retreat from reality that affects a large segment of present-day physical theory, and accounts for a major part of the difference between current theory and the conclusions derived from the postulates that define the universe of motion.

This theoretical development that we propose to examine originated as a result of the discovery of radioactivity and the identification of the three kinds of emanations from the radioactive substances as positively* charged alpha particles (helium atoms), negatively* charged electrons, and electromagnetic radiation. It was taken for granted that when certain particles are ejected from an atom during radioactivity, these particles must have existed in the atom prior to the radioactive disintegration. This conclusion does not seem so obvious today, when the photon of radiation (which no one suggests as a constituent of the undisturbed atom) is recognized as a particle, and a whole assortment of strange particles is observed to be emitted from atoms during high energy disintegrations. At any rate, it is clearly nothing more than an assumption.

An extension of this assumption led to the conclusion that the atom is a composite structure in which the emitted particles are the constituent parts. Some early suggestions as to the arrangement of the parts gained little support, but a discovery, in Rutherford’s laboratory, that the mass of the atom is concentrated in a very small volume in the center of the space that it presumably occupies, led to the construction of the Rutherford atom-model, the prototype of the atom of modern physics. In this model the atom is viewed as a miniature analog of the solar system, in which negatively* charged electrons are in orbit around a positively* charged “nucleus.”

The objective of this present discussion is to identify the path that the development of theory on the basis of this atom-model has taken, and to demonstrate the fact that currently accepted theory along the outer boundaries of scientific knowledge, such as the theory that leads to the existence of black holes, rests on an almost incredible succession of pure assumptions, each of which has a finite probability—in some cases a very strong probability—of being wrong. As an aid in emphasizing the overabundance of these assumptions, we will number those that we identify as being definitely in the direct line of the theoretical development that leads eventually to the concepts of the black hole and the singularity.

In the construction of his model, Rutherford accepted the then prevailing concepts of the properties of electricity, including the two assumptions previously mentioned, and retained the assumption that the atom is constructed of separable parts. The first of the assumptions that he added will therefore be given the number 4. These new assumptions are:

  4. The atom is constructed of positively* and negatively* charged components.
  5. The positive* component, containing most of the mass, is located in a small nucleus.
  6. Negatively* charged electrons are in orbit around the nucleus.
  7. The force of attraction between unlike charges applied to motion of the electrons results in a stable orbital equilibrium.

This model met with immediate favor in scientific circles, but it was faced with two serious problems. The first was that the known behavior of unlike charges does not permit their coexistence at the very short distances in the atom. Even at substantially greater distances they neutralize each other. Strangely enough, little attention was paid to this very important point. It was tacitly assumed (8) that the observed behavior of charges does not apply in this case, and that the hypothetical charges inside the atom are stable. There is no evidence whatever to support this assumption, but neither is there any evidence to contradict it, as the inside of the atom is unobservable. Here, as in many other areas of present-day physical theory, we are being asked to accept absence of disproof as the equivalent of proof.

Another of the problems encountered by the new theory involved the stability of the assumed electronic orbits. Here there was a direct conflict with empirical knowledge. From experiment it is found that charged objects moving in circular orbits (and therefore accelerated) lose energy and spiral in toward the center of the circle. On this basis the assumed electronic orbits would be unstable. This conflict was taken more seriously than the other, and remained a source of theoretical difficulty until Bohr “solved” the problem with another assumption, postulating, entirely ad hoc, that the constituents of the atom do not follow normal physical laws. He assumed (9) that the hypothetical electronic orbits are quantized, and can take only certain specific values, thus eliminating the spiraling effect.

At this point, further impetus was given to the development of the atom-model by the discovery of a positively* charged particle of mass one on the atomic weight scale. This particle, called the proton, was promptly assumed (10) to be the bare nucleus of the hydrogen atom. This led to the further assumption (11) that the nuclei of other atoms were made up of a varying number of protons. But here, again, there was a conflict with observation. According to the observed behavior of charged particles, the protons in the hypothetical nucleus would repel each other, and the nucleus would disintegrate. Again an ad hoc assumption was devised to rescue the atom-model. It was assumed (12) that an inward-directed “nuclear force” (of unknown origin) operates against the outward force of repulsion, and holds the protons in contact.

This assumed proton-electron composition quickly encountered difficulties, one of the most immediate being that in order to account for the various atoms and isotopes it had to be assumed that some of the electrons are located in the nucleus—admittedly a rather improbable hypothesis. The theorists were therefore much relieved when a neutral particle, the neutron, was discovered. This enabled changing the assumed atomic composition to identify the nucleus as a combination of protons and neutrons (assumption 13). But the observed neutron is unstable, with an average life of only about 15 minutes. It therefore does not qualify as a possible constituent of a stable atom. So once more an ad hoc assumption was called upon. It was assumed (14) that the ordinarily unstable neutron becomes stable when it enters the atomic structure (where, fortunately for the hypothesis, it is undetectable if it exists).

As a result of the critical study to which the Bohr atom-model was subjected in the next few decades, this model, in its original form, was found untenable. Various “interpretations” of the model have therefore been offered as remedies for the defects in this original version. Each of these adds some further assumptions to those included in Bohr’s formulation, but none of these additions can be considered definitely in the main line of the theoretical development that we are following, and they will not be taken into account in the present connection. It should be noted, however, that all 14 of the assumptions that we have identified in the foregoing paragraphs enter into the theoretical framework of each modification of the atom-model. Thus all 14 are included in the premises of the “atom of modern physics,” regardless of the particular interpretation that is accepted.

It should also be noted that four of these 14 assumptions (numbers 8, 9, 12, and 14) have a status that is quite different from that of the others. These are ad hoc assumptions, untestable assumptions that are made purely for the purpose of evading conflicts with observation or firmly established theory. Assumption 12, which asserts the existence of a “nuclear force,” is a good example. There is no independent evidence that this assumed force actually exists. The only reason for assuming its existence is that the nuclear atom cannot survive without it. As one current physics textbook explains, “A very strong attractive force is needed to hold the nucleons in the nucleus.”68 What the physicists are doing here is giving us an untestable excuse for the failure of the nuclear theory to pass the test of agreement with experience. Such evasive tactics are not new. In Aristotle’s physical system, which was the orthodox view of the universe for nearly two thousand years, it was assumed that the planets were attached to transparent spheres that rotated around the earth. But according to the laws of motion, as they were understood at that time, this motion could not be maintained except by continual application of a force. So Aristotle employed the same device that his modern successors are using: the ad hoc assumption. He postulated the existence of angels who pushed the planets along in their respective orbits. The “nuclear force” of modern physics is the exact equivalent of Aristotle’s “angels” in all but language.

With the benefit of the additional knowledge that has been accumulated in the meantime, we of the present era have no difficulty in arriving at an adverse judgment on Aristotle’s assumption. But we need to recognize that this is an illustration of a general proposition. The probability that an untestable assumption about a physical entity or phenomenon is a true representation of physical reality is always low. This is an unavoidable consequence of the great diversity of physical existence. When one of these untestable assumptions is used in the ad hoc manner—that is, to evade a discrepancy or conflict—the probability that the assumption is valid is much lower.

All of these points are relevant to the question as to whether the present-day nuclear atom-model is a representation of physical reality. We have identified 14 assumptions that are directly involved in the main line of theoretical development leading to this model. These assumptions are sequential; that is, each adds to the assumptions previously made. It follows that unless every one of them is valid, the atom-model in its present form is untenable. The issue thus reduces to the question: What is the probability that all of these 14 assumptions are physically correct?

Here we need to consider the status of assumptions in the structure of scientific theory. The construction of physical theory is a process of applying reasoning to premises derived initially from experience. Where the application involves going from the general to the particular, the process is deductive reasoning, which is a relatively straightforward operation. To go from the particular to the general requires the application of inductive reasoning. This is a two-step process. First, a hypothesis is formulated by any one of a number of means. Then the hypothesis is tested by developing its consequences and comparing them with empirical knowledge. Positive verification is difficult because of the great complexity of physical existence. It should be noted, in this connection, that agreement of the hypothesis with the observation that it was designed to fit does not constitute a verification. The hypothesis, or its consequences, must be shown to agree with other factual knowledge.

Because of the verification difficulties, it has been found necessary to make use, at least temporarily, of many hypotheses whose verification is incomplete. However, a prominent feature of “modern physics” is the extent to which the structure of theory rests on hypotheses that are entirely untested, and, in many cases, untestable. Hypotheses that are accepted and utilized without verification are assumptions. The use of assumptions is a legitimate feature of theory or model construction. But in view of the substantial uncertainty as to their validity that always exists, the standard scientific practice is to avoid pyramiding them. One or two steps into the unknown are considered to be in order, but some consolidation of the exposed positions is normally regarded as essential before a further unsupported advance is undertaken.

The reason for this can easily be seen if we consider the way in which the probability of validity is affected. Because of the complexity of physical existence mentioned earlier, the probability that an untestable assumption is valid is inherently low. In each case, there are many possibilities to be conceived and taken into account. If each assumption of this kind has an even chance (50 percent) of being valid, there is some justification for using one such assumption in a theory, at least tentatively. If a second untestable assumption is introduced, the probability that both are valid becomes one in four, and the use of these assumptions as a basis for further extension of theory is a highly questionable practice. If a third such assumption is added, the probability of validity is only one in eight, which explains why pyramiding assumptions is regarded as unsound.
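The compounding described above is simple arithmetic, and can be verified directly. The sketch below uses the text's illustrative figure of an even (50 percent) chance per assumption; both the figure and the independence of the assumptions are the text's premises, not measured quantities:

```python
# Probability that an entire chain of independent, untestable assumptions
# is valid, if each assumption has the same individual probability p.

def chain_probability(p: float, n: int) -> float:
    """Probability that all n independent assumptions are valid."""
    return p ** n

# With an even chance (50 percent) for each assumption:
print(chain_probability(0.5, 1))  # 0.5   -- one step into the unknown
print(chain_probability(0.5, 2))  # 0.25  -- one chance in four
print(chain_probability(0.5, 3))  # 0.125 -- one chance in eight
```

Each added assumption halves the composite probability, which is why pyramiding unverified assumptions is regarded as unsound practice.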

A consideration of the points brought out in the foregoing paragraphs puts the status of the nuclear theory into the proper perspective. The 14 steps in the dark that we have identified in the path of development of the currently accepted atom-model are totally unprecedented in physical science. The following comment by Abraham Pais is appropriate:

Despite much progress, Einstein’s earlier complaint remains valid to this day. “The theories which have gradually been associated with what has been observed have led to an unbearable accumulation of individual assumptions.”69

Of course, it is possible for an assumption to be upgraded to the status of established knowledge by discovery of confirmatory evidence. This is what happened to the assumption as to the existence of atoms. But none of the 14 numbered assumptions identified in the preceding discussion has been similarly raised to a factual status. Indeed, some of them have lost ground over the years. For example, as noted earlier, the assumption that emission of certain particles from an atom during a decay process indicates that these particles existed in the atom before decay, assumption (3), has been seriously weakened by the large increase in the number of new particles that are being emitted from atoms during high energy processes. The present uncritical acceptance of the nuclear atom-model is not a result of more empirical support, but of increasing familiarity, together with the absence (until now) of plausible alternatives. A comment by N. R. Hanson on the quantum theory, one of the derivatives of the nuclear atom model, is equally applicable to the model itself. This theory, he says, is “conceptually imperfect” and “riddled with inconsistencies.” Nevertheless, it is accepted in current practice because “it is the only extant theory capable of dealing seriously with microphenomena.”70

The existence, or non-existence, of alternatives has no bearing, however, on the question we are now examining, the question as to whether the nuclear atom-model is a true representation of physical reality. Neither general acceptance nor long years of freedom from competition has any bearing on the validity of the model. Its probability of being correct depends on the probability that the 14 assumptions on which it rests are all individually valid. Even if no ad hoc assumptions were involved, this composite probability, the product of the individual probabilities, would be low because of the cumulative effect. This line of theoretical development is the kind of product that Einstein called “an unbearable accumulation of individual assumptions.” Even if we assume the relatively high value of 90 percent for the probability of the validity of each individual assumption, the probability that the final result, the atom-model, is correct would be less than one in four. When the very low probability of the four purely ad hoc assumptions is taken into account, it is evident that the probability of the nuclear atom-model, “the atom of modern physics,” being a correct representation of physical reality is close to zero.
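The "less than one in four" figure quoted above is easily checked, again granting the text's premise that the 14 assumptions are independent and allowing each the generous 90 percent chance of validity:

```python
# Composite probability of the nuclear atom-model, granting each of the
# 14 sequential assumptions a generous 90 percent chance of validity.

p_individual = 0.9
n_assumptions = 14

p_composite = p_individual ** n_assumptions
print(round(p_composite, 3))  # 0.229 -- less than one chance in four
```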

This conclusion derived from an examination of the foundations of the currently accepted model will no doubt be resisted—and probably resented—by those who are accustomed to the confident assertions in the scientific literature. But it is exactly what many of those who played leading roles in the development of the long list of assumptions leading to the present version of the nuclear theory have been telling us. These scientists know that the construction of the model in terms of electrons moving in orbits around a positively* charged nucleus does not mean that such entities actually exist in an atom, or behave in the manner specified in the theory. Erwin Schrödinger, for instance, emphasized that the model is “only a mental help, a tool of thought,”71 and asserted that if the question, “Do the electrons actually exist in these orbits within the atom?” is asked, it “is to be answered with a decisive No.”72 Werner Heisenberg, another of the architects of the modern version of Bohr’s atom-model, tells us that the physicists’ atom does not even “exist objectively in the same sense as stones or trees exist.”73 It is, “in a way, only a symbol,”9 he says.

These statements, applying specifically to the nuclear theory of the atom, that have been made by individuals who know the true status of the assumptions that entered into the construction of that theory, agree with the conclusions that we have reached on the basis of probability considerations. Thus the confident statements that appear throughout the scientific literature, asserting that the nature of the atomic structure is now “known,” are wholly unwarranted. A hypothesis that is “only a mental help” is not a representation of reality. A theoretical line of development that culminates in nothing more than a “symbol” or a “tool of thought” is not an exploration of the real world; it is an excursion into the land of fantasy.

The finding that the nuclear atom-model rests on false premises does not necessarily invalidate the currently accepted mathematical relationships derived from it, or suggested by it. This may appear contradictory, as it implies that a wrong theory may lead to correct answers. However, the truth is that the conceptual and mathematical aspects of physical theories are, to a large extent, independent. As Feynman puts it, “Every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics.”74 Such a physicist recognizes, that is, that many different conceptual explanations can agree with the same mathematical relations. A major reason for this is that the mathematical relations are usually identified first, and an explanation in the form of a theory is developed later as an interpretation of the mathematics. As noted earlier, many such explanations are almost always possible in each case. In the course of the investigation on which this present work is based, this has been found to be true even where the architects of present-day theory contend that “there is no other way.”

Since the practical applications of a theory are primarily mathematical, or quantitative, one might be led to ask, Why do we want an explanation? Why not just use the mathematics without any concern as to their meaning? The answer is that while the established mathematical relations may serve the specific purposes for which they were developed, they cannot be safely extrapolated beyond the ranges of conditions over which they have been tested, and they make no contribution toward an understanding of relations in other areas. On the contrary, they lead to wrong conclusions, and constitute roadblocks in the way of identifying the correct principles and relations in related areas.

This is what has happened as a result of the assumptions that were made in the course of developing the nuclear atom-model. Once it was assumed that the atom is composed primarily of oppositely charged particles, and some valid mathematical relations were developed and expressed in terms of this concept, the prevailing tendency to accept mathematical agreement as proof of validity, together with the absence (until now) of any serious competition, elevated this product of multiple assumptions to the level of an accepted fact. “Today we know that the atom consists of a positively charged nucleus composed of protons and neutrons surrounded by negatively charged electrons.” This positive statement, or its equivalent, can be found in almost every physics textbook. But any proposition that rests on assumptions is hypothesis, not knowledge. Classifying a model that rests upon more than a dozen independent assumptions, mostly untestable, and including several of the inherently dubious “ad hoc” variety, as “knowledge” is a travesty on science.

When the true status of the nuclear atom-model is thus identified, it should be no surprise to find that the development of the theory of the universe of motion reveals that the atom actually has a totally different structure. We now find that it is not composed of individual particles, and in its normal state it contains no electric charges. This new view of atomic structure was derived by deduction from the postulates that define the universe of motion, and it therefore participates in the verification of the Reciprocal System of theory as a whole. However, in view of the crucial position of the nuclear theory in conventional physics it is advisable to make it clear that this currently accepted theory is almost certainly wrong, on the basis of current physical knowledge, even without the additional evidence supplied by the present investigation, and that some of the physicists who were most active in the construction of the modern versions of the nuclear model concede that it is not a true representation of physical reality. This is the primary purpose of the present chapter.

In line with this objective, the most significant of the errors introduced into electric and magnetic theory by acceptance of this erroneous model of atomic structure have been identified in the preceding pages. But this is not the whole story. This product of “an unbearable accumulation of individual assumptions” has had an even more detrimental effect on astronomy. The errors that it has introduced into astronomical thought will be discussed in detail in Volume III, but it will be appropriate at this time to point out why astronomy has been particularly vulnerable to an erroneous assumption of this nature.

The magnitudes of the basic physical properties extend through a much wider range in the astronomical field than in the terrestrial environment. A question of great significance, therefore, in the study of astronomical phenomena, is whether the physical laws and principles that apply under terrestrial conditions are also applicable under the extreme conditions to which many astronomical objects are subjected. Most scientists are convinced, largely on philosophical, rather than scientific, grounds, that the same physical laws do apply throughout the universe. The results obtained by development of the consequences of the postulates that define the universe of motion agree with this philosophical assumption. However, there is a general tendency to interpret this principle of universality of physical law as meaning that the laws that have been established as applicable to terrestrial conditions are applicable throughout the universe. This is something entirely different, and our findings do not support it.

The error in this interpretation of the principle stems from the fact that most physical laws are valid, in the form in which they are usually expressed, only within certain limits. Many of the currently accepted laws applicable to solids, for example, do not apply at temperatures above the melting points of the various material substances. The prevailing interpretation of the uniformity principle carries with it the unstated assumption that there are no such limits applicable to the currently accepted laws and principles other than those that are recognized in present-day practice. In view of the very narrow range of conditions through which these laws and principles have been tested, this assumption is clearly unjustified, and our findings now show that it is definitely incorrect. We find that while it is true that the same laws and principles are applicable throughout the universe, most of the basic laws are subject to certain modifications at critical magnitudes, which often exceed the limiting magnitudes experienced on earth, and are therefore unknown to present-day science. Unless a law is so stated that it provides for the existence and effects of these critical magnitudes, it is not applicable to the universe as a whole, however accurate it may be within the narrow terrestrial range of conditions.

One property of matter that is subject to an unrecognized critical magnitude of this nature is density. In the absence of thermal motion, each type of material substance in the terrestrial environment has a density somewhere in the range from 0.075 (hydrogen) to 22.5 (osmium and iridium), relative to liquid water at 4° C as 1.00. The average density of the earth is 5.5. Gases and liquids at lower densities can be compressed to this density range by application of sufficient pressure. Additional pressure then accomplishes some further increase in density, but the increase is relatively small, and has a decreasing trend as the pressure rises. Even at the pressures of several million atmospheres reached in shock wave experiments, the density was only increased by a factor of about two. Thus the maximum density to which the contents of the earth could be raised by application of pressure is not more than about 15.
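The arithmetic behind this ceiling can be checked in a few lines. The sketch below uses only the figures quoted above; applying the factor-of-two shock compression to the earth's mean density is an illustrative assumption, and the result falls in the same range as the ~15 ceiling stated in the text (which allows for the denser components of the earth's interior).

```python
# Pressure-compression bound on terrestrial matter, using the figures
# quoted in the text. Illustrative arithmetic only; the factor-of-two
# limit comes from the shock wave experiments mentioned above.

density_hydrogen = 0.075       # lightest element, relative to water = 1.00
density_osmium = 22.5          # heaviest elements (osmium and iridium)
earth_mean_density = 5.5       # average density of the earth

shock_compression_factor = 2   # maximum increase seen at ~10^6 atmospheres

# Doubling the earth's mean density gives a figure of the same order
# as the ceiling of about 15 stated in the text:
max_density = earth_mean_density * shock_compression_factor
print(max_density)  # 11.0
```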

The density of most of the stars of the white dwarf class is between 100,000 and 1,000,000. There is no known way of getting from a density of 15 to a density of 100,000. And present-day physics has no general theory from which an answer to this problem can be deduced. So the physicists, already far from the solid ground of reality with their hypotheses based on an atom-model that is “only a symbol,” plunge still farther into the realm of the imagination by adding more assumptions to the sequence of 14 included in the nuclear atom-model. It is first assumed (15) that at some extremely high pressure the hypothetical nuclear structure collapses, and its constituents are compressed into one mass, eliminating the vacant space in the original structure, and increasing the density to the white dwarf range.
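The size of the gap asserted here is easy to quantify from the two figures in the text. A minimal sketch, using the ~15 pressure ceiling from the preceding paragraph and the lower bound of the white dwarf range:

```python
# Gap between the pressure-compression ceiling and white dwarf densities,
# using the values quoted in the text (relative to water = 1.00).

max_pressure_density = 15           # ceiling from the preceding paragraph
white_dwarf_low = 100_000           # lower end of the white dwarf range
white_dwarf_high = 1_000_000        # upper end of the white dwarf range

gap = white_dwarf_low / max_pressure_density
print(round(gap))  # 6667 -- even the least dense white dwarfs are thousands
                   # of times denser than pressure alone can account for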

How the pressure that is required to produce the “collapse” is generated has never been explained. The astronomers generally assume that this pressure is produced at the time when, according to another assumption (16), the star exhausts its fuel supply.

With its fuel gone it [the star] can no longer generate the pressure needed to maintain itself against the crushing force of gravity.75

But fluid pressure is effective in all directions; down as well as up. If the “crushing force of gravity” is exerted against a gas rather than directly against the central atoms of the star, it is transmitted undiminished to those atoms. It follows that the pressure against the atoms is not altered by a change of physical state due to a decrease in temperature, except to the extent that the dimensions of the star may be altered. When it is realized that the contents of ordinary stars, those of the main sequence, are already in a condensed state (a point discussed in detail in Volume III), it is evident that the change in dimensions is too small to be significant in this connection. The origin of the hypothetical “crushing pressure” thus remains unexplained.
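The point about pressure being independent of temperature, at fixed dimensions, can be illustrated with a standard order-of-magnitude estimate of central pressure, P ~ GM²/R⁴. This formula is a conventional dimensional argument, not from the text, and is offered only as a sketch: it contains mass and radius but no temperature, so cooling a star of unchanged dimensions leaves this estimate unchanged.

```python
# Rough central-pressure estimate P ~ G * M^2 / R^4 (a conventional
# dimensional argument, not taken from the text). Note that temperature
# does not appear: at fixed M and R, cooling does not change the estimate.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg
R_sun = 6.957e8     # solar radius, m

P_central = G * M_sun**2 / R_sun**4
print(f"{P_central:.1e} Pa")  # 1.1e+15 Pa
```

The estimate depends only on M and R, which is the substance of the paragraph above: any change in the pressure on the central atoms must come from a change in the star's dimensions, not from a change of temperature or physical state as such.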

Having assumed the fuel supply exhausted, and the star cooled down, in order to produce the collapse, the theorists find it necessary to reheat the star, since the white dwarfs are relatively hot, as their name implies, rather than cold. So again they call upon their imaginations and come up with a new assumption to take care of the problem. They assume (17) that when the atomic structure collapses, the matter of the star enters a new state. It becomes degenerate matter, and acquires a new set of properties, among which is the ability to generate a new supply of energy to account for the observed temperature of the white dwarf stars.

Even with the wide latitude for further assumptions that is available in this purely imaginary situation, the white dwarf hypothesis could not be extended sufficiently to encompass all of the observed types of extremely dense stars. To meet this problem it was assumed (18) that the collapse which produces the white dwarf is limited to relatively small stars, so that the white dwarfs do not exceed a limiting mass of about two solar masses. Larger stars are assumed (19) to explode rather than merely collapse, and it is further assumed (20) that the pressure generated by such an explosion is sufficient to compress the residual matter to such a degree that the hypothetical constituents of the atoms are all converted into neutrons, producing a neutron star (currently identified with the observed objects known as pulsars). There is no evidence to support this assumption. The existence of a process that accomplishes such a conversion under pressure is itself an assumption (21), and the concept of a neutron star requires the further assumption (22) that neutrons can exist as stable independent particles under the assumed conditions.

Although this is the currently orthodox explanation of the origin of the pulsars, it is viewed rather dubiously even by some of the astronomers. Martin Harwit, for instance, concedes that “we have no theories that satisfactorily explain just how a massive star collapses to become a neutron star.”76

The neutron star, too, is assumed to have a limiting mass. It is assumed (23) that the compression due to the more powerful explosion of the larger star reduces the volume of the residual aggregate enough to enable its self-gravitation to continue the compression. It is then further assumed (24) that the reduction of the size of the aggregate eventually reaches the point where the gravitational force is so great that radiation cannot escape. What then exists is a black hole.

While it is not generally recognized as such, the “self-gravitation” concept, in application to atoms, is another assumption (25). Observations show only that gravitation operates between atoms or independent particles. The hypothesis that it is also applicable within atoms is derived from Einstein’s general theory of relativity, but since there is no proof of this theory (the points that have thus far been adduced in its favor are merely evidence) this derivation does not alter the fact that the hypothesis of gravitation within atoms rests on an assumption.

Most astronomers who accept the existence of black holes apparently prefer to look upon these objects as the limiting state of physical existence, but others recognize that if self-gravitation is a reality, and if it is once initiated, there is nothing to stop it at an intermediate stage such as the black hole. These individuals therefore assume (26) that the contraction process continues until the material aggregate is reduced to a mere point, a singularity.

This line of thought that we have followed from the physicists’ concept of the nature of electricity to the nuclear model of atomic structure, and from there to the singularity, is a good example of the way in which unrestrained application of imagination and assumption in theory construction leads to ever-increasing levels of absurdity—in this case, from atomic “collapse” to degenerate matter to neutron star to black hole to singularity. Such a demonstration that extension of a line of thought leads to an absurdity, the reductio ad absurdum, as it is called, is a recognized logical method of disproving the validity of the premises of that line of thought. The physicist who tells us that “the laws of modern physics virtually demand that black holes exist” is, in effect, telling us that there is something wrong with the laws of modern physics. In the preceding pages we have shown just what is wrong: too much of the foundation of conventional physical theory rests on untestable assumptions and “models.”

The physical theory derived by development of the consequences of the postulates that define the universe of motion differs very radically from current thought in some areas, such as astronomy, electricity, and magnetism. Many scientists find it hard to believe that the investigators who constructed the currently accepted theories could have made so many mistakes. It should be emphasized, therefore, that the profusion of conflicts between present-day ideas and our findings does not indicate that the previous investigators have made a multitude of errors. What has happened is that they have made a few serious errors that have had a multitude of consequences.

The astronomical theories based on the nuclear atom-model that have been mentioned in this chapter provide a good example of how one basic error distorts the pattern of thinking over a wide area. In this case, an erroneous theory of the structure of the atom leads to an erroneous theory of extremely high density, which then results in the construction of erroneous theories of all of the astronomical objects composed of ultra-dense matter; not only the white dwarfs, but also quasars, pulsars, x-ray emitters, and compact galactic cores. Once the pyramiding of assumptions begins, such spurious results are inevitable.
