Copyright © 1959 by Dewey B. Larson. All rights reserved.
The objective of this work is to show that two simple postulates incorporating four assumptions as to the physical nature of the universe and three as to its mathematical behavior are sufficient to account for all physical phenomena. In the first few pages these postulates are developed and explained. The remainder of the work is a demonstration that a logical and mathematical development of the consequences of the postulates necessarily leads to a theoretical universe identical both qualitatively and quantitatively with the actual physical universe. The following is a detailed description of the contents.
This book is a somewhat unusual expedient which is being used as a means of meeting an unusual situation. For the past twenty-five or thirty years I have been engaged in the study and analysis of basic physical processes and by virtue of the immense number of hours that have been devoted to the task, together with what I like to consider as a sound plan of procedure and a little more than a fair share of good fortune, I have arrived at some very significant results. As it happens, my findings indicate the necessity for a drastic change in the accepted concept of the fundamental relationship which underlies the whole structure of physical theory: the relation between space and time. Since there is practically no major sector of physical activity which is not affected in some manner by this change, the work as a whole involves a very radical departure from current scientific thought.
In theory, new scientific ideas are always welcome and most of the vast amount of effort now being devoted to fundamental research is aimed at the discovery of new facts and relations. In actual practice, however, the welcome is reserved for those discoveries which are in essence extensions or minor revisions of the existing body of scientific thought, and an altogether different reception awaits a discovery which challenges any of the fundamentals of currently accepted doctrine. Max Planck once said that new scientific truths never succeed in convincing their opponents and must wait for a new generation of scientists to grow up before they can triumph. This may be somewhat of an overstatement, but at least there is no question but that the general reaction to any innovation in fundamental theory is decidedly antagonistic.
The full force of this situation naturally falls on anything as heretical as this present work, and it introduces some serious complications into the problem of publication. The usual practice of publishing the results piecemeal in the scientific journals is out of the question in this case, since the findings in any one of the subsidiary areas are not acceptable without some explanation of the process whereby they were obtained, whereas the fundamental theory underlying this process is so far off the beaten track that it is futile to try to present it without the massive support that can only come from demonstrating its validity in a great many of these subsidiary areas. Anything short of a book-length presentation is therefore precluded, but the normal book publishing procedure involves running the gauntlet of some of the very individuals who are the least inclined to spend the time and effort necessary to understand new concepts that conflict with established doctrines: those who are regarded as authorities in their fields. The general attitude was expressed very succinctly by one prominent American scientist to whom I communicated some of my early findings. He found it entirely unnecessary to consider my arguments or to look at the facts and figures which I had assembled in support of those arguments; he merely enumerated the existing theories with which my results were in conflict and then laid down the dictum, “There is no chance that they are wrong.” This is a very human reaction on the part of one who has spent a lifetime working with and teaching these theories, but it does present a formidable obstacle to any new ideas on basic subjects: a fact which has been widely recognized and has been the subject of comment by many writers on scientific methods.
In view of the factors which are delaying the release of my findings in a normal manner it has seemed to me that it would be advisable to publish a preliminary report which would cover enough of the major results to show the nature and scope of the work, without including all of the great mass of numerical data which I have assembled to corroborate the qualitative findings, and to make this report available for study and analysis pending completion of arrangements for publication of the work as a whole.
This preliminary edition includes the first six sections of the complete work, in which the general physical principles are derived from the postulates on which the work is based, and the nature of the mathematical development is indicated by calculations of the inter-atomic distances of the elements and representative compounds. The next 18 sections, which are devoted primarily to similar detailed calculations of the physical properties of matter in different states and under different conditions, are omitted, but their contents are described in Appendix B. Except for two short omissions of the same kind of material the full text of the remaining 14 sections is included. In this portion of the work the principles and relationships developed in the earlier sections are applied to the major physical problems of the present day: the nature of electrical and magnetic phenomena, radioactivity, atom building, cosmic rays, and the varied problems of astronomy and cosmology. The general objective of this presentation is to show that the two fundamental postulates which are developed and explained in the first few pages define a theoretical universe which is identical with the observed physical universe both qualitatively and quantitatively. Aside from a few instances where references to other theories assist in clarifying the new theoretical picture, no comparisons have been made with the results obtained by other methods. The theoretical universe herein developed is shown to be in agreement with observation and measurement of the actual physical universe, within the scope and accuracy of the observations, and this establishes the validity of the new theoretical structure and the postulates from which it was derived. Whether or not other conflicting theories are also valid within their own limited fields has no bearing on this situation; no previous theory even approaches the point of being universally applicable.
Omission of more than half of the text from this preliminary edition interferes with the continuity of the development to some extent and also launches the new theory without the impressive mathematical support which I have accumulated by calculating a vast number of values of physical properties directly from theoretical foundations by means of the derived relationships. I believe, however, that the earlier release of the findings which has been made possible by this expedient justifies overlooking the disadvantages.
DEWEY B. LARSON
It is generally recognized that present-day physical theory is no longer adequate to meet the growing demands upon it. Those theoretical concepts which only a few years ago were hailed as the keys to the innermost mysteries of nature are now totally unable to cope with the flood of new discoveries emanating from our laboratories and it has become obvious that some very different approach to the problem is essential. As one observer, Ernest Hutten, sums up the situation in a recently published book, “Most physicists feel that the time is ripe, again, for a radical change in our ideas, and for a new theory.”
In retrospect it is clear that this is not a new development but a recurring crisis; each time a major advance is made in the observational field existing physical theory finds itself unable to account for the newly discovered facts and a drastic revision of the theory is necessary. But each successive revision no more than takes shape before a new crisis is upon us; a new set of facts is discovered which the revised theory does not anticipate and cannot explain. A certain amount of modification and revision is no doubt inevitable in the early stages of any theory but the same pattern of helplessness in the face of new experimental advances has been repeated so often that it becomes pertinent to inquire whether modern theory is actually proceeding in the right direction. The continually renewed demand for a “radical change in our ideas” strongly suggests that something more than a minor reconstruction is required and that we should back up and take a fresh start along a different path.
When we review the evolution of modern physical theory to appraise the direction in which we are now moving as a preliminary to charting a different course, it is apparent that the outstanding general trend in the theoretical development has been the gradual loosening of the ties between fundamental theory and the facts of everyday life. Beginning with Einstein’s introduction of the concept of physical quantities whose actual magnitude varies with the position of the observer, the divergence has increased at an ever accelerating rate until the latest theoretical developments have passed completely beyond the bounds of objective reality and have placed the basic processes of nature in what Bridgman calls “a shadowy domain which he (the physicist) cannot even mention without logical inconsistency.”
In the scientific field we are inclined to think that we have come a long way since the days when all unexplained phenomena were attributed to demons and ogres, but the current tendency to meet all difficult problems by assuming the existence in the unobservable region of the universe of phenomena and relationships totally unlike those which are found in the known world is essentially the same pattern that was followed by primitive man: a resort to the supernatural whenever explanation becomes difficult. Aside from the personification of forces, which is no longer in vogue, it is hard to detect any material difference between the mysterious unobservable forces of modern science and the demons of old.
Of course we must admit that when we are dealing with the unknown any assumption may be valid, no matter how fantastic it may seem when judged by the standards of our everyday experience, and if the bizarre theories of modern physics were adequate to meet the demands upon them they would certainly be acceptable regardless of the doubts with which their basic assumptions might have been regarded initially. But when these theories are not adequate and when insistent demands for revision are heard from all directions, it is in order to suggest the possibility that an undue readiness to part company with reality and to build the foundations of theory on unsupported assumptions may be the root of the present difficulty. In this connection it should be recognized that although we cannot arbitrarily reject any fundamental assumption that is proposed, since we must concede that the true relationships in the areas beyond the frontiers of knowledge are unknown, there is in each case one possible assumption which is initially so far superior to all others, so much more likely to represent the true situation, that we are never justified in turning to anything else unless and until we have established beyond a reasonable doubt that the consequences of this assumption are not in accord with the facts. This greatly superior hypothesis is, of course, the assumption that the relationships which are found to apply in the regions accessible to observation also apply in the unknown regions.
It will no doubt be contended that in physical science the extrapolated relationships have always been examined and their inapplicability has in each case been demonstrated before any other assumptions were made. Someone is sure to point out that relativity theory was formulated only after Newton’s Laws failed at high velocities; that non-Euclidean geometry was developed only when Euclidean geometry came to a dead end; that the concept of atomic events as happenings which do not take place in space or time was devised as a last resort only after all attempts to explain these phenomena by means of the laws of the world of objective reality had proved fruitless, and so on. But this present investigation has disclosed that the applicability of a theory based wholly on extrapolated relations was never tested in any of these instances, and that the supposed failures were not actually due to deficiencies in the laws being examined but to the fact that in their application these laws were always coupled with arbitrary and erroneous assumptions as to the relation between space and time: a fatal handicap for any theoretical structure.
Newton looked upon space and time as two independent entities. Modern theory recognizes that they are not independent, and regards them as components of a four-dimensional structure in which there are three dimensions of space and one dimension of time. But if we examine the bases of these two hypotheses it is apparent that they are both purely arbitrary assumptions, and in view of the points brought out in the preceding paragraphs neither of them should ever have been given any consideration until after the consequences of extrapolating the relation applicable in the known region had been thoroughly explored. In this known region the relation between space and time is recognized as motion. Motion is measured as velocity, and in velocity time and space have a reciprocal relationship; that is, more space is the equivalent of less time and vice versa. The most conservative assumption that we can possibly make concerning the general relation of space and time, the hypothesis that is by far the most probable representation of the underlying truth, is that this relationship which holds good in the known phenomenon also holds good in general. This hypothesis of a general reciprocal relation between space and time has therefore been adopted as the first of the assumptions of the new theory that will be developed in this work and it may be regarded as the cornerstone of the entire theoretical structure.
When we thus begin with a solid foundation based on extrapolation of an observed relationship rather than starting with a purely arbitrary hypothesis, we will find that the necessity for postulating that “things are different” in other parts of the universe disappears, and we will be able to take the position that the portion of the universe which is accessible to our observation is a reasonably representative sample of the whole and that the physical behavior of all other sectors of the universe can be deduced from the relationships which we find in the observable regions. This means, of course, that a large part of the existing structure of physical theory must be discarded, regardless of the great skill and ingenuity that have gone into its construction, as even the highest degree of competence cannot derive the right answers from the wrong basic assumptions. Starting with an untruth in physical theory is no different from making a false statement in everyday life; in either case an ever widening structure of fabrication is required in order to evade the contradictions which develop as a consequence of the original deviation from the truth.
Having arrived at a logical hypothesis as to the general relationship between space and time, let us now apply the same extrapolation process to the formation of the additional assumptions that will be necessary for a complete description of space-time. It is clear that when the reciprocal assumption is made it must immediately be followed by another. If space and time are reciprocally related they must have the same dimensions. We have very little specific knowledge of time, either as to dimensions or otherwise, but we do know from observation that there are at least three dimensions of space, and the simplest assumption which is consistent with the reciprocal hypothesis and the observed properties of space is that both space and time are three-dimensional. Here again the assumption is merely an extrapolation from the known to the unknown. We observe space to be three-dimensional where we are in direct contact with it, and by extension we assume that this is a general characteristic valid throughout the universe. The reciprocal hypothesis then requires the further extension of this assumption to time, a step which may also be regarded as simply a generalization of the geometrical properties of the more readily observed component of space-time.
The third assumption of the new theory is that space and time exist in discrete units. This, too, is an extrapolation from known facts into the region that is unknown. In the early days of science it was generally believed that all of the primary physical phenomena were continuous and infinitely divisible, but as knowledge has grown during the succeeding centuries one after another of these phenomena has been found to exist only in units. The atomic structure of matter was the first to be demonstrated. Later the unit of electricity was isolated and still more recently the work of Planck made it clear that radiant energy follows the same pattern. There is also strong evidence for the existence of basic units in other phenomena, such as magnetism for instance. Since experience shows that as our knowledge widens more and more physical phenomena are proved to exist only in discrete units, it is merely a reasonable extrapolation to assume that if all of the facts were known this would also be found to be true with respect to the basic entities, space and time.
These three assumptions constitute the definition of space-time which will be used in this work. For maximum economy of hypotheses it will be further postulated that space-time as thus defined is the sole constituent of the physical universe. We may then express the assumptions as to the physical nature of the universe as follows:
The physical universe is composed entirely of one component, space-time, existing in three dimensions, in discrete units, and in two reciprocal forms, space and time.
In developing the consequences of this First Postulate it will be necessary to use some mathematical processes and we must therefore make some assumptions as to the mathematical behavior of the universe. Until comparatively recently the validity of the relationships which will be assumed in this work was generally considered axiomatic, but other systems have been devised in the meantime and although we are unable to discover any physical reality corresponding to these unorthodox systems they do put us in the position where we must postulate the validity of the processes which we propose to utilize.
The first of this second group of assumptions will be that the physical universe conforms to the relationships of what may be called ordinary mathematics, for want of a better term. This means that two plus two equal four, the product ab equals the product ba, multiplication is the inverse of division, and so on. It is to be understood that probability mathematics are specifically included. Next the validity of Euclidean geometry will be assumed and finally it will be postulated that all primary physical magnitudes are absolute.
Here again the assumptions are merely generalizations of the relationships that are found to be valid in the regions which are accessible to observation. It is true that there are some experimental data which are currently accepted as being in conflict with the third assumption but these are not direct observations; they are merely inferences based on certain interpretations of the observed facts. It will be shown later in the discussion that these interpretations are not necessarily valid and that there are other equally acceptable interpretations of the same observations which are entirely consistent with the assumption of absolute magnitudes.
Combining these assumptions we have the Second Fundamental Postulate:
The physical universe conforms to the relations of ordinary mathematics, its magnitudes are absolute and its geometry is Euclidean.
If these two Fundamental Postulates are valid then a great many consequences necessarily follow. The objective of this presentation is to develop these consequences and to show that they describe a universe which is identical both qualitatively and quantitatively with the observed physical universe wherever comparisons can be made. It will be demonstrated that just because of the validity of these Postulates and without the intervention of any other factor, radiation, matter, electrical and magnetic phenomena, and the other major features of the observed physical universe must exist, matter must exist in the form of a series of elements, these elements must combine in certain ways and no others, the elements and their compounds must have certain properties such as volume, specific heat, etc., these properties must conform to certain specific sets of numerical values, and so on.
In beginning an examination of the consequences of the two Fundamental Postulates we note first that they involve a progression of space-time which is similar to the progression of time as it is ordinarily visualized. Let us consider a location A somewhere in space-time. During the next unit of time this location progresses to A + 1 in time and since one unit of time, on the basis of the First Fundamental Postulate, is equivalent to one unit of space the location also progresses to A + 1 in space. When n units of time have elapsed the location has progressed to A + n both in space and in time.
It should be emphasized that this statement does not refer to some object that might happen to occupy the location A; it refers to the location itself. If the hypothetical object has no independent motion of its own it will also be found at location A + n after n units of time, but this does not involve any motion of the object. It remains stationary at the same location in space-time but the location itself moves.
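The progression just described reduces to very simple arithmetic, which can be set down as a short sketch. Representing a space-time location by a single coordinate, and the function name itself, are illustrative assumptions introduced here, not part of the text.

```python
# Minimal sketch of the space-time progression: a location A advances
# by one unit of time per unit of time, and, since one unit of time is
# taken as equivalent to one unit of space, by one unit of space as well.

def progress(a: float, n: int) -> tuple[float, float]:
    """After n units of time, location A has moved to A + n
    both in space and in time; returns (space, time) coordinates."""
    return a + n, a + n

space_loc, time_loc = progress(0.0, 5)
assert space_loc == time_loc == 5.0  # A + n in both components
```

Note that this models the motion of the location itself; an object with no independent motion simply rides along with it.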
We thus arrive at a concept of the physical universe as being characterized by a continuous process of expansion. Although this idea of the fundamental nature of space-time is new and unfamiliar it should not be difficult to visualize since it is merely an extension of the universally recognized progression of time, and it is also entirely in harmony with the large-scale picture of the universe which has been reached through astronomical observations. As will be brought out in the subsequent discussion, the expansion of the universe deduced by the astronomers from the motions of the distant galaxies is a direct consequence of the progression of space-time itself.
Now let us consider some further implications of the postulate that space and time are reciprocally related. As already noted, this means that each individual unit of space is equivalent to an individual unit of time, but if this were the full extent of the relationship there would be no physical phenomena at all, since each unit would be exactly equivalent to all other units and the entire universe would be one vast domain of perfect uniformity in which nothing could ever happen. It is apparent that no physical phenomena can exist except as a result of a divergence from this one to one correspondence: a displacement of space-time from the unit ratio. The space-time ratio of unity therefore constitutes the initial level of all physical activity, the datum from which all phenomena extend.
This is a principle of great significance. In the subsequent development we will find that throughout the physical world relationships are simplified and seemingly contradictory facts are brought into harmony when we take unity as our datum rather than zero. We may, in fact, regard unity rather than the mathematical zero as the true physical zero.
The space-time displacements which are necessary for the existence of physical phenomena originate because the reciprocal postulate involves something more than the equivalence of the individual units. If this were the extent of the relationship we would postulate that space and time are equivalent, not that they are reciprocal. The reciprocal postulate includes the further requirement that under certain conditions associations of n units of one component must exist and that under those conditions the n units of this kind are equivalent to 1/n units of the other component.
We are then led to inquire how it can be possible for n units of space or time to act as an association when each of the individual units in this association is required to progress uniformly with a unit of the opposite kind as an integral part of the general space-time progression. A detailed consideration of this point discloses that it requires the existence of a difference between space (or time) as a constituent of space-time and space (or time) as a separate entity. The only such difference permitted by the Fundamental Postulates is a difference in direction; hence we arrive at the conclusion that space-time is scalar and that direction is a property of space and time individually.
In the early stages of this investigation the scalar nature of space-time was embodied in an additional postulate. Further study indicated that it was a necessary consequence of the previous assumptions, as indicated in the preceding paragraph, and it was therefore eliminated from the list of postulates. However, if there is any question as to the logic involved in deriving this conclusion from the First Postulate the additional postulate can be restored and the number of basic physical assumptions will be increased from four to five. This comment is being made to clarify the point that the status of this principle has no bearing on the validity of the subsequent development of theory. The scalar nature of space-time is a part of the system; the only question at issue is whether or not it needs to be expressed as an additional postulate.
From the foregoing it is apparent that where n units of one component replace a single unit in association with one unit of the other kind in a linear progression, the direction of the multiple component must reverse at each end of the single unit of the opposite variety. Since space-time is scalar the reversal of direction is meaningless from the space-time standpoint and the uniform progression, one unit of space per unit of time, continues just as if there were no reversals. From the standpoint of space and time individually the progression has involved n units of one kind but only one of the other, the latter being traversed repeatedly in opposite directions. It is not necessary to assume any special mechanism for the reversal of direction. In order to meet the requirements of the First Postulate the multiple units must exist, and they can only exist by means of the directional reversals. It follows that these reversals are required by the Postulate itself.
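The reversal pattern described above can be illustrated with a small sketch. Treating direction as +1 or -1, and the particular function name, are assumptions made here purely for display.

```python
# Illustrative sketch: n units of one component traverse a single unit
# of the opposite component by reversing direction at each end. The
# single unit is covered repeatedly in alternating directions, so the
# net displacement never leaves that one unit.

def traversal_directions(n: int) -> list[int]:
    """Direction (+1 or -1) of each of the n traversals of the
    single opposite-type unit."""
    return [1 if step % 2 == 0 else -1 for step in range(n)]

directions = traversal_directions(3)  # e.g. three units of time, one of space
net = sum(directions)                 # net displacement stays within one unit
assert directions == [1, -1, 1] and net == 1
```

From the scalar space-time standpoint each traversal is simply one more unit of uniform progression; only space and time individually register the reversals.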
Because of the periodic reversal of direction the multiple unit of space or time replaces the normal unidirectional space-time progression with a progression which merely oscillates back and forth over the same path. But when the translatory motion in this dimension is eliminated there is nothing to prevent the oscillating unit from progressing in another dimension, and it therefore moves outward at the normal unit velocity in a direction perpendicular to the direction of vibration. When viewed from the standpoint of a reference system which remains stationary and does not participate in the space-time progression the resultant path of the oscillating progression takes the form of a sine curve.
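The resultant path described above can be sketched numerically: an oscillation confined to one dimension, carried outward at unit velocity in a perpendicular dimension. The unit choices and names below are illustrative assumptions for display, not quantities fixed by the text.

```python
import math

# Sketch of the oscillating progression as seen from a stationary
# reference system: outward translation at unit velocity along x,
# back-and-forth oscillation along y, giving a sine-curve path.

def path_point(t: float, frequency: float = 1.0) -> tuple[float, float]:
    x = t                                        # outward at unit velocity
    y = math.sin(2.0 * math.pi * frequency * t)  # oscillation over one unit
    return x, y

# Sample the path over two cycles of the oscillation.
points = [path_point(i / 10.0) for i in range(21)]
```

Plotted against the stationary frame, the sampled points trace the sine curve the text describes; from the scalar space-time standpoint the progression remains a uniform one unit of space per unit of time.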
It is now possible to make some identifications. The oscillating system which has been described will be identified as a photon. The process of emission and movement of these photons will be identified as radiation and the space-time ratio of the oscillation will be identified as the frequency of the radiation.
Since space-time is scalar the actual direction in which any photon will be emitted is indeterminate and where a large number of photons originate at the same location the probability principles whose validity was assumed as a part of the Second Fundamental Postulate require that they be distributed equally in all directions. We find then that the theoretical universe which we are developing from the Fundamental Postulates includes radiation consisting of photons traveling outward in all directions from various points of emission at a constant velocity of one unit of space per unit of time; that is, at unit velocity.
At this point it is in order to call attention to the fact that even in this early stage of the development simple explanations are already emerging for items with which previous theories have experienced great difficulty. The dual nature of radiation which causes it to travel as a wave but to act as a particle in emission and absorption has been a controversial issue for decades, yet the foregoing explanation shows that the reasons for this behavior are actually very simple. The photon acts as a particle in emission or absorption because it is a single independent entity; it travels as a wave because the resultant of its own inherent motion and that of the space-time progression has the form of a wave.
Furthermore, it is clear that this wave motion requires no medium; no troublesome hypothetical ether needs to be brought into the picture. Nor is there any need to make the unwelcome and disturbing postulate of action at a distance. The photon, having no independent translatory motion, remains at the same space-time location permanently but it is carried along by the progression of space-time itself. It acts only upon objects which do not participate in the progression and are therefore encountered in the path of motion. The nature of these objects will be discussed shortly.
A simple explanation is also provided for the observed fact that the velocity of radiation remains constant regardless of the reference system. Let us consider two photons, A and B, originating at the same point and traveling in opposite directions. Each moves one unit of space in one unit of time. When the first unit of motion is complete the photons are separated by two units of space, and in the Newtonian system the relative velocity is obtained by dividing the increase in separation, two units, by the elapsed time, one unit. The result is a relative velocity of two units. But experiments indicate that if this velocity were measured it would be found to be unit velocity, not two units. The Newtonian system therefore fails at these high velocities.
Einstein met this situation by adopting a hypothesis previously advanced by Fitzgerald and Lorentz in which it is assumed that distance is not an absolute magnitude but varies with the velocity of the reference system in such a manner as to keep the relative velocity of radiation constant. In the case under consideration the velocity equation s/t = v, which produces the incorrect result 2/1 = 2 in the Newtonian system, now becomes s/1 = 1. Here it is assumed that the distance, s, automatically takes whatever value is required in order to arrive at the observed constant value of the velocity, the latter being accepted as being fixed by a law of nature. The highly artificial character of this solution of the problem aroused strong opposition when it was first proposed but it has won general acceptance by default, no reasonable alternative having heretofore appeared to challenge it.
In the theoretical universe being developed from the Fundamental Postulates physical magnitudes are absolute, and the variability which relativity theory introduces into the measurement of distance cannot be accepted. In this system, however, there is no necessity for any ad hoc assumption of this kind to force agreement with the observed facts since the constant relative velocity of radiation is a natural and unavoidable consequence of the Postulates.
The controlling factor in this situation is the three-dimensional nature of time. In the particular example under consideration each photon moves one unit of space in one unit of time (the normal unit velocity of the space-time progression). Both Newton and Einstein accepted the unit of time applicable to photon B as the same unit of time which is applicable to photon A. But the Postulates of this work specify that each unit of space is equivalent to a unit of time, and since the motion involves two different units of space the equivalent units of time are also two separate and distinct units. Therefore when the photons increase their separation by two units of space they also increase their separation by two units of time; that is, it takes two units of time to move the photons apart two units in space. The relative velocity is then 2/2 = 1, which is completely in agreement with the observed facts.
This unit velocity relative to a photon moving in the opposite direction is identical with the velocity relative to a stationary object, and the same result is obtained for any intermediate velocity of the reference system. We therefore arrive at the general principle that the velocity of radiation in free space is independent of the reference system. Basically this is a necessary consequence of the status of unity as the true physical zero.
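The arithmetic of the two-photon example can be sketched in a few lines of Python (the variable names are mine, introduced only for illustration):

```python
# Two photons leave a common origin in opposite directions,
# each moving one unit of space in one unit of time.
space_units_apart = 2  # separation after the first unit of motion

# Newtonian counting: a single shared unit of time.
newtonian_velocity = space_units_apart / 1  # gives 2, contrary to observation

# Counting of this work: each unit of space carries its own
# equivalent unit of time, so two units of space mean two units of time.
time_units_apart = space_units_apart
relative_velocity = space_units_apart / time_units_apart  # gives 1

print(newtonian_velocity, relative_velocity)
```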
Continuing the development of the consequences of the Fundamental Postulates, we will next consider the effect of rotation of the photons. Rotation differs from translation only in direction and this difference has no meaning from a space-time standpoint since space-time is scalar. Rotation at unit velocity is therefore indistinguishable from the normal space-time progression; that is, from the physical standpoint it is the equivalent of no rotation at all. In order to produce any physical effects there must be a rotational displacement: a deviation from unity. The magnitude of any rotational motion of the photons must therefore be greater than that of the space-time progression.
Another necessary characteristic of the rotational motion of the photons is that its direction must be opposite to that of the space-time progression, because any added displacement in the positive direction would result in a directional reversal and would produce a vibration rather than a rotation, as previously explained. This means that when the photon acquires a rotation it travels back along the line of the space-time progression, and since this retrograde motion is greater than that of the progression these rotating units are reversing the pattern of free space-time and are moving inward toward each other, either in space or in time, depending on the direction of the displacement.
We are now in a position to make some further identifications. The rotating photons, with some reservations to be discussed later, will be identified as atoms, collectively the atoms constitute matter, and the inward motion resulting from the rotational velocity will be identified as gravitation.
Here again we find that the Fundamental Postulates give us a very simple explanation of a hitherto mysterious phenomenon. Atoms of matter appear to exert attractive forces on each other merely because they are in constant motion toward each other. There is no action at a distance, no medium, no propagation of a force, no curved space; simply an inherent motion of the rotating units in the direction opposite to the ever-present expansion of the universe.
The next point which we will wish to explore is the nature of the atomic rotation. In approaching this subject let us for convenience visualize the photon in a vertical position. Since the photon is one-dimensional a simple rotation around this vertical line as an axis is impossible; such a rotation is indistinguishable from no rotation at all. The photon can, however, rotate around both of the horizontal axes passing through its midpoint. The basic rotation of the atom is therefore two-dimensional.
On examination of this two-dimensional rotation it will be seen that it is possible to have two coexisting rotating systems of this kind. If there is only one unit of two-dimensional rotational displacement the original rotation may take place around either horizontal axis and this entire rotating unit then acquires a rotation around the other. Inasmuch as the displacement units are independent there is no requirement that the second unit follow the same pattern; on the contrary, if the original rotation of the first displacement unit is around axis a the probability principles indicate that the original rotation of the second unit will be around axis b. The two rotations can take place simultaneously without interference and the atoms of matter therefore normally include two separate rotating systems.
Although simple rotation around the vertical axis in the same space-time direction as the two-dimensional rotation is impossible, now that the photon is rotating around the horizontal axes the entire system can be rotated around the vertical axis in the opposite space-time direction. This reverse rotation is not required for stability and is absent from certain types of atoms, which we will discuss shortly.
Since the axes may lie in any direction, the vertical orientation of the photon having been used merely for convenience in the preliminary discussion, some new terminology will be necessary for identification purposes. We will therefore call the one-dimensional rotation electric rotation and the corresponding axis the electric axis. Similarly we will refer to the two-dimensional rotation as magnetic rotation around the magnetic axes. If the displacements in the two magnetic dimensions are unequal the rotation is distributed in the form of a spheroid and in this case the rotation which is effective in two dimensions of the spheroid will be called the principal magnetic rotation and the other will be the subordinate magnetic rotation. When it is desired to distinguish between the larger and the smaller magnetic rotations the terms primary and secondary will be used. Designation of these rotations as electric and magnetic does not indicate the presence of any electric or magnetic forces in the structures now being described. This terminology is merely being used to set the stage for the introduction of electric and magnetic phenomena in a later phase of the discussion.
Each of these three rotations may assume any one of a number of possible displacement values. This means that many different rotational combinations can exist and since the physical behavior of the atoms depends upon the magnitude of these rotational displacements the various rotational combinations can be distinguished by differences in their physical behavior, by differences in their properties, we may say.
We may now identify these rotational combinations as the chemical elements, each rotating unit of a particular kind constituting an atom of that element. For convenience in referring to the various combinations of rotational displacement a notation in the form 2-2-3 will be used, the three figures representing the displacements in the principal magnetic, the subordinate magnetic, and the electric rotational dimensions respectively.
Because of the scalar nature of space-time the rotation cannot have the same space-time direction as the primary oscillation. The net rotational displacement must therefore oppose the space or time displacement of the oscillation. The rotating units may be linear space displacements rotating with net displacement in time, or linear time displacements rotating with net displacement in space. The latter combinations, however, do not constitute matter and consideration of this type of rotating unit will be deferred until later. For the present the discussion will be confined to those combinations in which the net rotational displacement is in time and unless otherwise specified the displacement figures will refer to time displacement. If space displacement is present the applicable figure will be enclosed in parentheses.
Looking first at those combinations which have zero electric displacement, a single unit of magnetic time displacement results in the combination 1-0-0. This single displacement unit merely serves to neutralize the oscillating displacement in the opposite space-time direction and the resultant is the rotational base, a unit with a net displacement of zero; that is, the physical equivalent of nothing. One additional unit of magnetic time displacement results in the combination 1-1-0. In order to avoid interference it is necessary that the two rotating systems of the atom have the same velocities. Each added unit of displacement therefore increases the rotation of both systems in one dimension rather than one system in both dimensions. For reasons which will be developed later, the effect of a displacement less than 2 is negative in direction and the combination 1-1-0 still does not have the properties which we will recognize as those of matter. We cannot go directly to 2-0-0 because the probability principles operate to keep the eccentricity at a minimum and the successive increments of displacement therefore go alternately to the principal and subordinate rotations. The first magnetic rotational combination which qualifies as matter requires one more unit of magnetic displacement, bringing the system up to 2-1-0. This combination we identify as the element helium. Additional units of magnetic displacement result in a series of elements which we identify as the inert gases. The complete series is as follows:
The number of possible combinations of rotations is greatly increased when electric displacement is added to these magnetic combinations, but the combinations which can actually exist as elements are limited by the probability principles. Where the two-dimensional magnetic displacement is n the equivalent number of one-dimensional electric displacement units is n² in each dimension. The magnetic displacement is therefore numerically less than the equivalent electric displacement and is correspondingly more probable. Any increment of displacement consequently adds to the magnetic rotation if possible rather than to the electric rotation. This means that the role of the electric displacement is confined to filling in the intervals between successive additions of magnetic displacement.
At this point it is necessary to develop some further facts concerning the characteristics of the space-time progression. In the undisplaced condition all progression is by units. We have first one unit, then another similar unit, yet another, and so on, the total up to any specific point being n units. There is no term with the value n; this value appears only as the total.
The progression of displacements follows a different mathematical pattern because in this case only one of the space-time components progresses, the other remaining fixed at the unit value. The progression of 1/n, for instance, is 1/1, 1/2, 1/3, and so on. The progression of the reciprocals of 1/n is 1, 2, 3… n. Here the quantity n is the final term, not the total. Similarly when we find that the electric equivalent of a magnetic displacement is 2n², this does not refer to the total from zero to n; it is the equivalent of the nth term alone. To obtain the total electric equivalent of the magnetic displacement we must sum up the individual 2n² terms.
From the foregoing explanation it can be seen that if all rotational displacement were in time the series of elements would start at the lowest possible magnetic combination, helium, and the electric displacement would increase step by step until it reached a total of 2n² units, at which point the relative probabilities would result in a conversion of these 2n² units of electric displacement into one additional unit of magnetic displacement, whereupon the building up of the electric displacement would be resumed. This behavior is modified, however, by the fact that electric displacement in matter, unlike magnetic displacement, may be in space rather than in time.
As previously brought out, the net rotational displacement of any rotational combination must be in time in order to give rise to those properties which are characteristic of matter. It necessarily follows that the magnetic displacement, which is the major component of the total, must also be in time. But as long as the larger component is in time the system as a whole can meet the requirement of a net time displacement even if the smaller component, the electric displacement, is in space. It is possible, therefore, to increase net time displacement a given amount either by direct addition of the required number of units of electric displacement in time or by adding magnetic displacement in time and then adjusting to the desired intermediate level by adding the appropriate number of units of the oppositely directed electric displacement in space.
Which of these alternatives will actually prevail is again a matter of probability and from probability considerations we deduce that the net displacement will be increased by successive additions of electric displacement in time until n² units have been added. At this point the probabilities are nearly equal and as the net displacement increases still further the alternate arrangement becomes more probable. In the latter half of each group, therefore, the increase in net displacement is normally attained by adding one unit of magnetic displacement and then reducing to the required net total by adding electric displacement in space (negative displacement), eliminating successive units of the latter to move up the atomic series.
By reason of this availability of electric displacement in space as a component of the atomic rotation, an element with a net displacement less than that of helium becomes possible. This element, 2-1-(1), which we identify as hydrogen, is produced by adding one unit of electric displacement in space to helium and thereby in effect subtracting one electric time displacement unit from the equivalent of four units (above the 1-0-0 datum) which helium possesses. Hydrogen is the first in the ascending series of elements and we may therefore give it the atomic number 1. The atomic number of any other element is equal to its net equivalent electric time displacement less two units.
One electric time displacement unit added to hydrogen eliminates the electric displacement in space and brings us back to helium, atomic number 2, with displacement 2-1-0. This displacement is one unit above the initial level of 1-0-0 in each magnetic dimension and any further increase in the magnetic displacement requires the addition of a second unit in one of the dimensions. With n = 2 the electric equivalent of a magnetic unit is 8, and we therefore have eight elements in the next group. In accordance with the probability principles the first four elements of the group are built on a helium type magnetic rotation with successive additions of electric displacement in time. The fourth element, carbon, can also exist with a neon type magnetic rotation and four units of electric displacement in space. Beyond carbon the higher magnetic displacement is normal and the successive steps involve reduction of the electric space displacement, the final result being neon, 2-2-0, when all space displacement has been eliminated. The following elements are included in this group:
Another similar group with one additional unit of magnetic displacement follows.
On completion of the 3-2 magnetic combination at element 18, argon, the magnetic rotational displacement has reached a level of two units above the rotational datum in both magnetic dimensions. In order to increase the rotation in either dimension by an additional unit, a total of 2 × 3² or 18 units of electric displacement are required. This results in a group of 18 elements, which as before is followed by a similar group differing only in that the magnetic displacement is one unit greater.
[Table: Displacement, Element, Atomic No. (two-column listing of the elements in these groups)]
The effective magnetic displacement now steps up to 4 in one dimension and consequently there are 2 × 4² or 32 members in each of the next two groups. Only half of the elements in the second of these groups have actually been identified thus far, but theoretical considerations indicate that this group can be completed under favorable conditions. The general situation with respect to atomic stability and the limitations to which the rotational displacement is subject will be discussed in a subsequent section. The known members of the 32-element groups are as follows:
[Table: Displacement, Element, Atomic No. (two-column listing of the known members of the 32-element groups)]
For convenience in the subsequent discussion these groups of elements will be identified by the magnetic n value, with the first and second groups in each pair being designated A and B respectively. Thus the sodium group, which is the second of the 8-element groups (n = 2), will be called Group 2B.
The next objective will be to evaluate some of the properties of the different elements as they are defined by the principles derived from the Fundamental Postulates. On beginning this task, one of the first things we will encounter is the fact that in many instances the variations in these properties will take place entirely within a single unit of space. For example, two atoms may be separated by t units of time. Since space and time are reciprocally related this separation in time is equivalent to a separation of 1/t units of space. If the time t increases the equivalent space decreases and the two atoms in effect move closer together. Such motion, however, differs in many respects from motion which involves actual units of space in association with actual units of time.
In view of the important differences in various physical phenomena which similarly depend on whether the magnitudes involved are above or below the unit levels we may conveniently regard these unit levels as dividing the universe into several general regions. At one extreme there is a region in which space remains at the minimum value, unity, and all variability is in time. This we will call the time region. Next we have a region in which one or more units of space are associated with a greater number of units of time. This region where the space-time ratio or velocity is below unity will be called the time-space region. A similar region on the other side of the neutral axis where the space-time ratio is greater than unity is the space-time region, and at the other extreme we have the space region where time remains at unity and all variability is in space.
We have found that the normal space-time progression involves n units of space for each n units of time, which means that the velocity of the progression is always n/n or unity. In the time region space cannot progress but time does progress in the usual manner and since time and space are reciprocally related the progression of time t results in a progression of equivalent space 1/t. The velocity of the progression in this region is equivalent space 1/t divided by time t, or 1/t².
In the time-space region the velocity corresponding to unit space and time t is 1/t. From the foregoing we find that in the time region it is 1/t². The time region velocity and all quantities derived therefrom, which means all of the physical phenomena of the region, are therefore second power expressions of the corresponding time-space region quantities. This is an important principle that must be taken into account in any relationship involving both regions. The intra-region relations may be equivalent; that is, the expression a = bc is the mathematical equivalent of the expression a² = b²c². But if we measure the quantity a² in the units applicable to a (the time-space region units), it is essential that the equation be written in the correct regional terms: a² = bc. This principle is one of major significance because our measuring processes normally give us time-space region values.
Looking next at the direction of the motion, we note that in the time-space region the progression tends to move objects apart in space. Where motion is unimpeded the separation increases by n units of space in n units of time. In the time region the progression which increases time has the effect of decreasing equivalent space, since the space equivalent of time n is 1/n. This means that in the time region the space-time progression tends to move objects to positions which in effect are closer together.
If we appraise this situation in the usual manner, taking the mathematical zero as our datum, it appears inconsistent. We find the progression of space-time acting in a certain direction in one region and in the opposite direction in another region; a seemingly contradictory behavior. But the apparent conflict is only the result of using the wrong datum. It has already been pointed out that the true zero level of the physical universe is unity, not the mathematical zero. If we take unity as our datum the inconsistency disappears. We now find that space-time always progresses in the same direction: away from unity.
If two objects are initially separated by more than unit space they will move away from each other (outward from unity) under the influence of the space-time progression. If they are initially separated by the equivalent of less than unit space; that is, by t units of time, the space-time progression will take place in the same natural direction—away from unity—but in this case the result will be to move the objects toward each other, since outward from unity in this instance is toward zero.
The rotational motion of the atoms of matter necessarily opposes the space-time progression, for reasons previously explained, and it always acts in the direction toward unity. The resulting translational motion (gravitation) therefore causes the atoms to approach each other in the time-space region, but in the time region where unity lies in the opposite direction the gravitational motion increases the separation between the atoms.
Although it is quite apparent from the discussion thus far that both the space-time progression and the opposing motion due to the atomic rotation are always in existence, even if the resultant is no motion at all, it is convenient for many purposes to consider this resultant as having been brought about by a conflict of two forces tending to cause motion in opposite directions. We define force as that which will cause motion if not prevented from doing so by other forces, and we define the magnitude of the force as the product of mass and acceleration.
This introduces a new concept, that of mass, and in order to fit the force system into its proper position in the theoretical universe which we are developing from the Fundamental Postulates we must identify mass with the corresponding quantity in the velocity system; that is, we must reduce it to space-time terms. For this purpose we identify mass as the reciprocal of three-dimensional velocity. The correlation in this case is not as obvious as it has been in most of the identifications previously made, but this relation is inherent in the concept of force as it has been derived in the preceding paragraph and its validity will be demonstrated in the course of the subsequent discussion. In terms of space and time, mass may now be expressed as t³/s³. Force, which was defined as the product of mass and acceleration, becomes t³/s³ × s/t² = t/s². Acceleration and force are therefore analogous quantities, their space-time expressions having the same form with the space and time terms interchanged.
Before going on to a further consideration of force it will be desirable to point out that the space-time expression for energy or work, which is the product of force and distance, is t/s² × s = t/s. This is the reciprocal of velocity s/t. Energy, therefore, is the reciprocal of velocity. When one-dimensional motion is not restrained by opposing motion (force) it manifests itself as velocity; when it is so restrained it manifests itself as potential energy. Kinetic energy is merely a measure of the energy equivalent of the velocity of a mass and it reduces to the same space-time terms as potential energy, since
½mv² = ½ t³/s³ × s²/t² = ½ t/s
On the basis explained in the foregoing paragraphs we may treat gravitation as a force rather than a velocity. The gravitational force resulting from the rotation of an atom is equal to the mass corresponding to that rotation multiplied by unit acceleration. In this connection it will be desirable to state a general principle which we will call the Principle of Equivalence:
If a quantity a is expressed in terms of quantities x, y, z, etc., by means of the relationships derived from the Fundamental Postulates, and the quantities x, y, z, etc., are each given unit value, then the value of quantity a is also unity.
This is merely an expression of the obvious results of performing mathematical operations with all terms equal to unity, but it will not always be obvious in application to physical situations and we will find the principle useful in the subsequent development. In particular, it enables us to recognize the natural unit in cases where the usual measurement unit is arbitrary and the natural unit is not clearly identified physically. In the present instance it is evident that one unit of mass exerts one unit of force under unit conditions; that is, under such conditions that all of the factors x, y, z, etc., which enter into the determination of the gravitational force have unit value. This requirement of unit conditions is a very important point in all applications of the Principle of Equivalence. We cannot merely deduce from the general force expression F = ma that one unit of mass exerts one unit of gravitational force, as this general expression does not take into account all of the factors which affect the gravitational situation. In order to utilize the general equation for this specific purpose we must identify the special features that are involved and introduce them into the mathematical expression in such a manner that the resultant force is unity when each factor is likewise unity.
The first of these factors which should be considered is a consequence of the essential nature of force. As has been explained, force is merely a concept by means of which we visualize the resultant of oppositely directed motions as a conflict of tendencies to cause motion rather than as a conflict of the motions themselves. This method of approach facilitates mathematical treatment of the subject, and is unquestionably a great convenience, but whenever a physical situation is represented by a derived concept of this kind there is always a hazard that the correspondence may not be complete and that conclusions reached through the medium of the derived concept may therefore be in error. A serious error of this kind has been introduced into the currently accepted theories concerning masses moving at high velocities.
The basic error in this case is the assumption that a force applied to the acceleration of a mass remains constant irrespective of the velocity of the mass. If we look at this assumption only from the standpoint of the force concept it appears entirely logical. Force is a tendency to cause motion and it seems quite reasonable that this tendency could remain constant. When we look at the situation in its true light as a combination of motions, rather than through the medium of an artificial representation by means of the force concept, it is immediately apparent that there is no such thing as a constant force. The space-time progression, for instance, tends to cause objects to acquire unit velocity, and hence we say that it exerts unit force. But it is obvious that a tendency to impart unit velocity to an object which is already at a high velocity is not equivalent to a tendency to impart unit velocity to a body at rest. In the limiting condition, when the mass already has unit velocity, the force of the space-time progression (the tendency to cause unit velocity) has no effect at all, and its magnitude is zero.
It is evident that the full effect of any force is only attained when the force is exerted on a body at rest, and that the effective force component in application to an object in motion is a function of the difference in velocities. Ordinary terrestrial velocities are so low that the corresponding reduction in effective force is negligible and at these velocities forces can be considered constant. Experiments indicate, however, that acceleration decreases rapidly at very high velocities and approaches a limit of zero as the velocity of the mass approaches unity. Relativity theory explains the experimental results by the assumption that mass increases with velocity and becomes infinite at unit velocity (the velocity of light). In the theoretical universe being developed from the Fundamental Postulates this explanation is not acceptable as mass is constant, but the same results are produced by the fact that force is a function of the difference in velocities and drops to zero when the velocity of the mass reaches unity. In mathematical terms, the limiting zero value of a in the expression a = F/m (which is the fact determined by experiment) is not due to an infinite value of m but to a zero value of F.
Inasmuch as the gravitational equation will not normally be used in application at high velocities we will take this velocity situation into account for the present by limiting the application of the equation to low velocities, rather than introducing the necessary terms to make it generally applicable. There are two other factors, however, which will affect the normal application of the equation. Although the gravitational force of each unit of mass has an absolute value we will observe this force only in conjunction with the gravitational force of another mass and to use the Principle of Equivalence we must specify that this be unit force. Likewise we must specify that the two interacting masses be separated by unit distance, since we will find that the gravitational force is also a function of the distance. With these two additions we may then say that unit mass exerts unit force against unit force at unit distance.
It follows that m units of mass exert m units of force on unit force at unit distance, and we may further conclude that m units of mass will exert mm’a units of force on m’a units of force at unit distance. It should be noted, however, that m’a is merely a ratio; it is m’a units of force divided by one unit of force and it has no physical dimensions. Therefore when we multiply the original expression ma or t/s2 by m’a we merely change the numerical value; we do not change the dimensions.
Since force is merely an aspect of motion it would seem on first consideration that no variation with distance should exist, as our usual concept of a velocity v in a direction AB is a magnitude which is not affected in any way by the distance between A and B. In the case of gravitation, however, the rotational velocity opposes the space-time progression, which has no fixed direction. It is true that a space-time unit which once starts in a given direction will continue in that direction indefinitely unless acted upon by an outside agency, simply through lack of any mechanism of its own which can cause a change. Radiation, for instance, which remains in the same space-time unit in which it originates, continues on unidirectionally as long as it remains undisturbed.
The rotating atoms, on the other hand, are not moving with the units of space-time; their motion is oppositely directed and hence they are continually passing from one space-time unit to another. As we have seen, the direction of the space-time progression with reference to a fixed system of coordinates is indeterminate. Each time the atom enters a new unit of space-time its direction of motion with reference to a stationary coordinate system therefore alters to oppose the direction of the space-time progression applicable to this particular unit. The probability principles require this motion to be distributed equally in all directions in the long run; hence the acceleration toward any specific area at a distance s from the rotating atom depends on the relationship of that area to the total area of the spherical surface of radius s. Since we have found that unit mass exerts unit force at unit distance, the force at distance s is inversely proportional to the ratio of areas; that is, inversely proportional to s². Again we must take note of the fact that we are dealing with a pure ratio, s² units of area divided by 1² units of area, and the introduction of this distance factor does not alter the dimensions of the original force equation F = ma.
The complete expression for gravitational force in the time-space region is then
F units of force = (m units of mass × 1 unit of acceleration × m′a) / s²
where m′a and s² are pure numbers (ratios). With this understanding as to the nature of the magnitudes involved, we may simplify the equation for the purposes of numerical calculation by eliminating the terms which always have unit value.
F = mm′/s²
The derivation of this equation assumes that the various quantities are expressed in natural units. In order to use it in terms of conventional units we must therefore ascertain the relationship between each of the conventional units and the corresponding natural unit. This again involves a process of identification. For each of the fundamental quantities we must select some physical magnitude which we can identify in terms of natural units. The ratio between the values found for this particular quantity in the two systems is the conversion coefficient which is required for converting values from one system to the other. Since this ratio between the two systems is a constant for any specific property it can be derived from any quantity for which the value can be obtained in both systems. As a practical matter, however, it is desirable whenever possible to ascertain the conventional measurement corresponding to unit value in the natural system, since in most cases this unit quantity is readily identified and has been accurately measured in the conventional systems.
For example, the velocity of light in a vacuum obviously corresponds to unit velocity on the basis of the derivation of theory in the foregoing pages. This velocity has been measured very accurately and we therefore start our correlation with the natural unit of velocity equal to 2.9979×10¹⁰ cm/sec.
Another well-established value is that of unit frequency, which has been determined from a study of the characteristics of radiation. It is known as Rydberg’s fundamental frequency and has the value 3.2880×10¹⁵ cycles per second. In this measurement the cycle per second has been taken as the unit on the assumption that frequency is a function of time only. From the explanation previously given it is apparent that frequency is a velocity, a ratio of space to time, and consequently the natural unit of frequency is one unit of space divided by one unit of time. This is the equivalent of one half-cycle per unit of time rather than one full cycle, as a full cycle involves one unit in each direction. For our purposes the measured value of the Rydberg frequency should therefore be expressed as 6.576×10¹⁵ half-cycles per second.
Expressing the frequency, which is actually a velocity, in terms of reciprocal time in this manner is equivalent to using the natural unit of space in combination with the cgs unit of time as the cgs unit of frequency. In other words, omitting consideration of the space term in selecting the unit of measurement has the same effect as giving it unit value. The natural unit of time in cgs terms is therefore the reciprocal of the Rydberg frequency or 0.1521×10⁻¹⁵ seconds.
We may now multiply this figure by the natural unit of velocity, 2.9979×10¹⁰ cm/sec, to obtain the natural unit of space, 0.4559×10⁻⁵ cm.
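The chain of identifications above can be reproduced arithmetically; a sketch using the measured values quoted in the text:

```python
# Measured values quoted in the text
RYDBERG = 6.576e15        # half-cycles per second
C = 2.9979e10             # cm/sec, identified as unit velocity

unit_time = 1 / RYDBERG     # natural unit of time, sec
unit_space = C * unit_time  # natural unit of space, cm

print(f"{unit_time:.4e} sec")   # ~0.1521e-15 sec
print(f"{unit_space:.4e} cm")   # ~0.4559e-5 cm
```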
Here we have the explanation of our distorted view of the space-time relations: the reason why space seems so much more real and understandable to us than time. Because the retrograde motion of gravitating matter neutralizes the progression of space in our sector of the universe while the progression of time continues unchecked, we are dealing with relatively large time magnitudes and relatively small space magnitudes.
The common units of space and time are not directly comparable as they were set up independently without any idea that there is a definite relationship between the two phenomena, but their practical utility depends on their being of the same order of magnitude with respect to human sensations. Just because they are designed to be useful the centimeter and the second or any similar pair of practical units of space and time are approximately equal from the human standpoint; that is, they are about equally distant from the threshold of sensation. But the second, the unit of time which to us is of the same order of magnitude as the centimeter, is actually 3×10¹⁰ times as large. No wonder time seems elusive and mysterious to us when it goes by so fast that we experience in one second the time equivalent of 186,000 miles. This enormous difference in magnitudes is obviously one of the principal reasons why we fail to credit time with the properties that we distinguish so readily in space.
We have here a difference comparable to looking at a forest first from a distance of a few yards and then from an airplane several miles up above it. From the close-up viewpoint we are able to distinguish the details: the kind of trees, their sizes, spacing, etc. Furthermore, it is quite apparent that the forest is three-dimensional. On the other hand we learn nothing at all about the extent or shape of the wooded area. From the plane the latter information can be readily ascertained but we can obtain no information regarding those details which were so easily observed from the close-up vantage point. At this distance we are not even able to recognize more than one dimension.
From our position in space-time where only a relatively small amount of space is within our field of view we are able to observe such features as the multiple dimensions, but the space progression is difficult to detect and we catch a glimpse of it only with the aid of our largest telescopes. Our view of time is so extended that we can recognize the large scale feature, the progression, but we cannot identify any of the details that we see in space.
The natural unit of mass (the reciprocal of three-dimensional velocity) is equal to the cube of unit time divided by the cube of unit space, which gives us 3.7115×10⁻³² sec³/cm³. However, the relationship between mass and the two basic quantities, space and time, has not heretofore been recognized and mass has been taken as another fundamental quantity for which an arbitrary unit has been established. The ratio of the centimeter-second unit of mass to this arbitrary unit can be obtained from measurements of the force of gravity and is known as the gravitational constant. To obtain the natural unit of mass in conventional terms we divide 3.7115×10⁻³² by the appropriate gravitational constant. In the cgs system this constant has the value 6.670×10⁻⁸ and unit mass becomes 0.5565×10⁻²⁴ grams. This is approximately one-third of the mass of the smallest unit of matter, the hydrogen atom. The exact relation will be developed later.
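The division described can be checked directly; a sketch using the natural units of time and space quoted in the text:

```python
G_CGS = 6.670e-8            # gravitational constant, cgs, as quoted
unit_time = 0.1521e-15      # sec, natural unit from the text
unit_space = 0.4559e-5      # cm, natural unit from the text

mass_ts = (unit_time / unit_space) ** 3   # t^3/s^3, in sec^3/cm^3
unit_mass = mass_ts / G_CGS               # grams

print(f"{mass_ts:.4e} sec^3/cm^3")   # ~3.7115e-32
print(f"{unit_mass:.4e} g")          # ~0.5565e-24
```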
From the basic conversion ratios similar relations can be computed for the derived units. Among those which we will find useful are the following:
The natural unit of acceleration: unit velocity divided by unit time.
2.9979×10¹⁰ cm/sec / 0.1521×10⁻¹⁵ sec = 1.97×10²⁶ cm/sec²
The natural unit of force: unit time divided by the square of unit space and by the gravitational constant.
0.1521×10⁻¹⁵ sec / ((0.4559×10⁻⁵ cm)² × 6.670×10⁻⁸) = 109.7 dynes
The natural unit of energy: unit time divided by unit space and by the gravitational constant.
0.1521×10⁻¹⁵ sec / (0.4559×10⁻⁵ cm × 6.670×10⁻⁸) = 5.0×10⁻⁴ ergs
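The three derived-unit computations above can be verified together; a sketch using the conversion values already established in the text:

```python
G = 6.670e-8                 # cgs gravitational constant
t_u = 0.1521e-15             # natural unit of time, sec
s_u = 0.4559e-5              # natural unit of space, cm
c = 2.9979e10                # natural unit of velocity, cm/sec

accel = c / t_u              # cm/sec^2: unit velocity / unit time
force = t_u / (s_u**2 * G)   # dynes: unit time / (unit space^2 * G)
energy = t_u / (s_u * G)     # ergs: unit time / (unit space * G)

print(f"{accel:.3e}")   # ~1.97e26
print(f"{force:.1f}")   # ~109.7
print(f"{energy:.2e}")  # ~5.0e-4
```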
Within the time region the force which the atoms of matter exert upon each other because of their rotational velocities acts in the same natural direction as the gravitational force in the time-space region; that is, toward unity. But this direction, toward unity, which is inward in the time-space region and therefore makes the inter-atomic force which we term gravitation a force of attraction, is outward in the time region, and the corresponding inter-atomic force in this region is a force of repulsion, even though it is merely gravitation in a different environment.
This reversal of direction at the unit level makes possible the establishment of an equilibrium in which the atoms of matter can maintain the same relative positions in space indefinitely. Such an equilibrium cannot be established in the time-space region because in this region the effect of a change in the distance between the atoms is to accentuate any unbalance of forces. Here the rotational force (gravitation) is directed inward and the space-time force outward. If the rotational force exceeds the force of the space-time progression an inward motion takes place, making the effective rotational force still greater. Conversely, if the rotational force is the smaller the resulting motion is outward, which further weakens the already inadequate inward force. In either case there can be no establishment of equilibrium.
In the time region, however, the effect of a change in relative position opposes the unbalanced force which caused the change. If the rotational force is the greater an outward motion takes place, weakening this rotational force and ultimately reducing it to an equality with the space-time force. Similarly if the rotational force is the smaller the oppositely directed space-time force causes an inward motion. This strengthens the rotational force and again produces an equilibrium. The separation between any two atoms under these equilibrium conditions is the inter-atomic distance.
In order to calculate these inter-atomic distances it will first be necessary to determine the magnitudes of the corresponding inter-atomic forces. Since the inter-atomic rotational force in the time region is merely a different aspect of gravitation we may utilize the gravitational equation for its evaluation, providing that we replace the space-time region terms with the appropriate time region terms. We have already noted that velocity in the time region is 1/t². Energy, the one-dimensional equivalent of mass, which will take the place of mass in the time region expression of the gravitational equation because of the directional characteristics of the rotations in this region, is the reciprocal of this expression, or t². Acceleration is velocity divided by time, 1/t³. The time region equivalent of the equation F = ma is therefore F = Ea = t² × 1/t³ = 1/t in each dimension.
As previously explained, the value 1/t applies only to the last of the t units of time, whereas in calculating the effective rotational force we will want the total. To obtain the latter we integrate 1/t from unity to t, the initial point of the integration being taken at unity rather than at zero because unit velocity is the natural datum, the true physical zero.
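The integration described can be written out explicitly, taking the lower limit at unity as stated:

```latex
F_{\text{total}} \;=\; \int_{1}^{t} \frac{dt}{t} \;=\; \ln t \,-\, \ln 1 \;=\; \ln t
```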
The force computed in this manner is the inherent rotational force of the individual atom; that is, the force which it exerts against a single unit of force. The force between two interacting atoms is then:
Fr = ln tA ln tB    (4)
The equivalent distance s′ between the two atoms will be measured in the time-space region as s², because of the inter-regional relationship previously discussed. The force at distance s′ is therefore proportional to (s²)² or s⁴ rather than to s². On this basis the force at equivalent distance s′ is:
Fr = (ln tA ln tB) / s⁴    (5)
To evaluate the inter-atomic distance from this force equation we take advantage of the fact that at the equilibrium point the force of the space-time progression and the component of the rotational force in the direction opposite to that of the progression are necessarily equal. Since time is three-dimensional the rotational force in the time region is distributed three-dimensionally. The space-time progression, however, is one-dimensional and only that portion of the rotational force in the dimension of the progression is effective in the force equilibrium. It is therefore necessary to introduce a factor into the equilibrium equation representing the ratio of effective to total rotation. In determining this ratio we note that the first effective unit of rotation (the first displacement unit) is equal to the space-time progression, since space-time progresses at a unit rate. This one displacement unit (two total units of rotation) therefore constitutes the time region maximum if the units are disposed linearly. If these units are distributed three-dimensionally there can be two units of rotation in each dimension, raising the allowable total to 2³ or 8. Only one of these 8 units, the one displacement unit in the direction of the progression, is effective in opposition to the space-time force.
The same situation prevails in each dimension of the two-dimensional magnetic rotation except that in this case there are two effective units per dimension, one for each of the two rotational systems of the atom, and the ratio of effective to total rotational units in each dimension is 1 to 4. It may be somewhat confusing to speak of distributing the displacements in each space-time dimension in a three-dimensional manner, but it should be remembered that the three time region dimensions are dimensions of time, not of space-time, and the total time displacement of a rotation in any one space-time dimension may be disposed three-dimensionally in the time region without in any way affecting the other space-time dimensions. We will encounter this same situation again later in connection with other physical properties. On this basis one unit of rotation out of every 4 × 4 × 8 = 128 is effective against the space-time force. This ratio is further modified by the initial one unit negative level of the rotation due to the oppositely directed motion of the basic oscillation, as the portion of the rotational force required to overcome the negative initial level is not available to oppose the force of the progression. This initial unit is distributed over three dimensions and the one-third unit in the dimension parallel to the space-time progression is again distributed over the three dimensions of the time region. The resultant is 1/9 unit in each magnetic dimension, a total of 2/9 units. The electric rotation does not affect the initial level since it is merely a secondary rotation of the existing magnetic rotational structure. Other phenomena resulting from the rotational forces are similarly affected by the presence of the oppositely directed basic oscillation and we will encounter initial levels of one kind or another in a great many of the physical properties which we will examine.
Because of this negative initial level another 2/9 unit of displacement must be added to each of the 128 units in order to obtain one full unit in opposition to the space-time progression. This increases the ratio of total to effective units to 156.44 to 1. The one-dimensional rotational force applicable to each atom is therefore divided by 156.44 in setting up the equilibrium equation. For the two-dimensional magnetic rotation this factor becomes (156.44)² and for two interacting magnetic rotations it increases to (156.44)⁴. Applying this factor to the square of the one-dimensional rotational force, equation 5, we obtain the effective magnetic rotational force.
Fm = (1/((156.44)⁴ s⁴)) ln²tA ln²tB    (6)
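The 156.44 ratio follows from the arithmetic described in the text; a sketch of the computation:

```python
from fractions import Fraction

units_per_effective = 4 * 4 * 8          # 128 total rotational units
initial_level = Fraction(2, 9)           # extra 2/9 unit per effective unit
ratio = units_per_effective * (1 + initial_level)

print(float(ratio))   # 156.444..., quoted in the text as 156.44
```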
The distance factor does not apply to the space-time force as this force is omnipresent and unlike the rotational force is not altered as the objects to which it is applied change their relative positions. At the point of equilibrium, therefore, the rotational force is equal to the unit space-time force. Substituting unity for Fm in equation 6 and solving for the equilibrium distance, we obtain:
s₀ = (1/156.44) ln^½ tA ln^½ tB    (7)
The inter-atomic distances for those elements which have no electric rotation, the inert gas series, may be calculated directly from this equation. In the elements, however, tA = tB in most cases and it will therefore be convenient to express the equation in the simplified form:
s₀ = (1/156.44) ln t    (8)
In cgs units this is:
s₀ = 2.914×10⁻⁸ ln t cm    (9)
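Equation 9 can be evaluated for any assumed specific rotation; a sketch, using an illustrative 4-4 rotation rather than values from the tables:

```python
import math

def inert_gas_distance(t1, t2):
    """Equation 9, with the effective rotation (t1^2 * t2)**(1/3)
    from the text applied when the two magnetic rotations differ."""
    t_eff = (t1**2 * t2) ** (1 / 3)
    return 2.914e-8 * math.log(t_eff)  # cm

# illustrative rotation values, not taken from the missing table:
print(f"{inert_gas_distance(4, 4):.3e} cm")   # ~4.04e-8 cm
```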
As brought out in the discussion of the general characteristics of the atomic rotation, the two magnetic displacements may be unequal and in this case the velocity distribution takes the form of a spheroid with the principal rotation effective in two dimensions and the subordinate rotation in one. The average effective rotation under these conditions is (t₁² t₂)^⅓ and this expression gives the equivalent value of t for use in the rotational force equations. The inter-atomic distances for the inert gases are as follows:
Atomic No. | Element | Magnetic Rotation | Inter-atomic Distance
Helium, which also belongs to the inert gas series, has some special characteristics due to its low rotational displacement and will be discussed in connection with other elements affected by the same factors. The reason for the appearance of the 4½ value in the xenon rotation will also be explained later.
Turning now to the elements which have electric as well as magnetic displacement, we note again that the electric rotation is one-dimensional and opposes the magnetic rotation. We may therefore obtain an expression for the effect of the electric rotational force on the magnetically rotating photon by inverting the one-dimensional force term of equation 4.
Fe = 1 / (ln t′A ln t′B)    (10)
Because of the fact that the electric rotation is not an independent motion of the basic photon but a rotation of the magnetic velocities in the reverse direction, combining the electric rotational force from equation 10 with the magnetic rotational force of equation 6 modifies the rotational terms (the functions of t) only and leaves the remainder of equation 6 unchanged.
F = (1/(156.44)⁴) (ln²tA ln²tB) / (s⁴ ln t′A ln t′B)    (11)
Here again the effective rotational (outward) and space-time (inward) forces are necessarily equal at the equilibrium point. Since the space-time force is unity we substitute this unit value for F in equation 11 and solve for s₀, the equilibrium distance.
s₀ = (1/156.44) (ln^½ tA ln^½ tB) / (ln^¼ t′A ln^¼ t′B)    (12)
Again simplifying for application to the elements, where atom A is generally identical with atom B,
s₀ = (1/156.44) ln t / ln^½ t′    (13)
In cgs units this becomes
s₀ = 2.914×10⁻⁸ ln t / ln^½ t′ cm    (14)
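Equation 14 is a direct computation once the two specific rotations are fixed; a sketch with illustrative rotation values, not values drawn from the element tables:

```python
import math

def interatomic_distance(t, t_prime):
    """Equation 14: s0 = 2.914e-8 * ln t / ln^(1/2) t', in cm,
    for the case where the two interacting atoms are alike."""
    return 2.914e-8 * math.log(t) / math.sqrt(math.log(t_prime))

# illustrative magnetic rotation 3 and electric rotation 2:
print(f"{interatomic_distance(3, 2):.3e} cm")   # ~3.845e-8 cm
```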
Inter-atomic distances for most of the elements in the first half of each rotational group can be calculated directly from equation 14, using the rotational values corresponding to the displacements previously determined for each individual element. Many elements, however, have different rotations which take the place of these normal values under certain conditions. The occurrence of these alternate rotations is largely dependent upon the position of the element within the rotational group and in preparation for the ensuing discussion of this factor it will be advisable for convenient reference to set up a classification according to position.
Within each of the rotational groups the most probable electric displacement for the elements in the first half of the group is in time, while for those in the latter half of the group it is in space. We will distinguish between these two divisions of the group by applying the term electropositive to those elements with probable electric time displacement and the term electronegative to those with probable electric space displacement. It should be understood, however, that this distinction is being drawn on the basis of the most probable situation in the electric dimension considered independently. Because of the conditions prevailing elsewhere in the environment of the atom an electronegative element often acts in an electropositive capacity, but this does not affect the classification as herein described.
There are also important differences between the behavior of the first four members of each series of positive or negative elements and that of the elements with higher rotational displacements. We will therefore divide each series into a lower division and an upper division so that those elements with similar general characteristics can be treated together. The classification will be based on the magnitude of the displacement, the lower division in each case including the elements with displacements from 1 to 4 and the upper division comprising those with displacements of 4 or over. The elements with displacement 4 belong to both divisions as they are capable of acting either as the highest members of the lower divisions or as the lowest members of the upper divisions. It should be recognized that in the electronegative series the members of the lower divisions have the higher net time displacement (higher atomic number).
For convenience these divisions within each rotational group will be numbered in the order of increasing atomic number as follows:

Division I: lower electropositive
Division II: upper electropositive
Division III: upper electronegative
Division IV: lower electronegative
Another item which needs to be explained before resuming the calculation of inter-atomic distances is the relation between displacement and rotation. It has been shown that the elements constitute a continuous series in which the successive members differ by the equivalent of one unit of electric time displacement. The gravitational force in the time-space region is a function of the total three-dimensional rotation; that is, of the net total rotational time displacement. In the time region, however, the displacements in the electric dimension are opposite in time direction from those in the magnetic dimensions and the inter-atomic force is a function of the rotations in the different dimensions separately, as indicated in the force equations which have been developed.
In the simplest rotational combinations the basic rotating unit is a unit vibrational displacement, and unit rotational displacement applied to this unit vibration results in one unit of rotation. As long as the rotation is based entirely on this unit linear displacement, which for brevity we will call vibration one, the specific rotation, the quantity which enters into the force equations, is equal to the rotational displacement plus one unit (the physical zero value). When the rotation of this single vibrational unit reaches the time region maximum the rotational motion must be extended to vibration two (two units of linear displacement) if further additions of rotational displacement are to be made.
As previously indicated, the maximum time region rotation of a single displacement unit is two linear units or eight units distributed three-dimensionally. The change to vibration two therefore may take place after the first unit of displacement, if the rotational units are disposed linearly, and must take place before the addition of the eighth displacement unit. After the change to vibration two there are two units of vibrational displacement to be rotated and hence each added unit of rotational displacement corresponds to only one-half unit of specific rotation. As in the other time region phenomena which have been discussed, the higher displacement is in addition to and not in lieu of the lower displacement. The succession of rotation values is therefore either 1, 2, 2½, 3, 3½, etc., or 1, 2,… 7, 8, 8½, 9, etc. The lower value is very commonly found where it first becomes possible; that is, displacement 2 normally corresponds to rotation 2½ rather than 3. The next element may take the intermediate value 3½ but beyond this point the higher value normally prevails.
The independent rotation of the different vibrational displacements may be visualized by means of a mechanical analogy. Let us consider an object with a translatory motion in some specific direction. If we rotate this object around the line of motion as an axis it is clear that this rotation will not interfere with the translatory motion. Furthermore, if the object is jointed so that it is capable of rotating by parts it is entirely possible for one or more parts to be rotating and the remainder not rotating, while the ensemble moves forward in translation unaffected by the nature of the rotation.
The references which have been made to the rotation of one or two units of vibrational displacement do not imply that these are necessarily the full vibrational displacements of the photons. Each unit of frequency is independent from the standpoint of motion in the opposite space-time component and the first unit of vibrational space displacement can be rotated in time irrespective of the number of additional units of displacement present, just as a single one of the individual parts described in the preceding paragraph can be rotated around the line of motion without regard to the total number of parts which make up the object as a whole. Similarly the second unit of vibrational displacement can be rotated if a second unit exists, no matter how many more of these displacement units may be included in the vibration as a whole. The change to a higher vibration level may affect either the electric or the magnetic rotation or both, since these rotations are not only independent of the total magnitude of the basic vibrational displacement but are also independent of each other, except to the extent that probability considerations are effective.
The general pattern of the magnetic rotational values is the same as that of the electric values but the upper limit for rotation on a vibration one basis is 4 rather than 8, for reasons previously discussed. Rotation 4½ therefore follows rotation 4 in the regular progression. It is possible to reach rotation 5 in one dimension, however, without bringing the magnetic rotation as a whole up to the 5 level, and a 5-4 rotation therefore occurs in some elements either instead of or in combination with the 4½-4 rotation.
Combinations of this kind frequently occur where there are alternate rotations of nearly equal probability. In the inter-atomic distance tables the magnetic combinations are indicated by notations such as 5-4, 4½-4, and the electric combinations by two-figure designations (6-10, etc.). In each case the effective value of t is taken as the geometric mean of the values applicable to the two components.
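The geometric-mean rule for these combinations is a one-line computation; a sketch (the function name is illustrative):

```python
import math

def effective_rotation(a, b):
    """Effective t for a combination such as 4.5-4: the geometric
    mean of the two component rotation values, per the text."""
    return math.sqrt(a * b)

print(effective_rotation(4.5, 4))   # ~4.243 for a 4.5-4 combination
```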
We may now resume the calculation of inter-atomic distances. Table II lists the values computed for the elements in Division I of all groups except Group 2A, together with the corresponding experimental values. Data for the elements of Group 2A will be presented later as the members of the lower rotational groups have some special characteristics which will need further consideration.
The values calculated for the inter-atomic distance in Table II and the other similar tabulations included herein are those which would prevail in the absence of compression and thermal expansion. Some of the experimental data have been extrapolated to this zero base by the investigators but others are the actual observed values at atmospheric pressure and at different temperatures, depending on the properties of the substances involved, and the latter are not exactly comparable to the calculated figures. In general, however, the expansion and compression up to the temperature and pressure of observation are small and a comparison of the values in the last two columns gives a reasonably good idea of the extent of agreement between the theoretical figures and the experimental results. The effects of compression and thermal expansion will be examined in detail later.
Most of the elements of Division II also assume equilibrium positions on the basis of the same type of force relations as those of Division I but an alternate type of equilibrium can exist in this division and some of the Division II elements form structures of this second variety, either in addition to or in preference to the regular electropositive structures. The factor which makes this alternate type of equilibrium possible is the ability of the atom to reorient itself with reference to the space-time zero point. We have already seen that eight units of time or space displacement, when distributed three-dimensionally, constitute a full space-time unit. Applying this to rotation, addition of eight units of displacement completes a full cycle and returns to the starting point. These eight rotational displacement units or any multiple of eight therefore constitute the equivalent of no displacement at all. Any intermediate displacement can be described in either of two ways: as x units in the positive direction from zero, or as 8-x units in the negative direction from the equivalent of zero.
It is possible on this basis to change a positive (time) displacement x to a negative (space) displacement 8-x merely by reorientation with reference to the space-time zero point. This does not involve any alteration in the rotation of the atom. It is simply a change in the relationship between the atomic positions and the space-time units in which they are situated. Consequently an atom may establish an equilibrium on the 8-x basis with one adjoining atom and yet maintain the normal electropositive equilibrium with other atoms in different directions.
For reasons which will be developed later, an equilibrium between two negative 8-x displacements is not possible, and the 8-x displacement, where it occurs in Division II, is in equilibrium with a positive displacement x. Let us consider the nature of this equilibrium. In the inter-atomic relationships previously examined all displacement has been in time. As long as this situation prevails there are no limitations on the combinations. The principles which have been established are valid for unequal as well as equal rotations and any atom with rotational time displacement x can establish equilibrium with any other atom having time displacement y, the resulting effective displacement being the geometric mean (xy)^½. However, if one displacement is in time and the other in space the rotations are no longer concurrent. Being in opposite space-time directions they are additive and the resulting effective displacement is the sum, x + y.
This expression x + y represents x units of space (or time) in association with y units of time (or space). But a ratio of units of space to units of time is a velocity and any velocity, other than a velocity of unity (zero displacement), is obviously incompatible with the establishment of equilibrium. It therefore becomes apparent that there is a rigid limitation on inter-atomic combinations of this character: the displacements x and y must be the same or equal. A displacement x is equal to a displacement -x and a time displacement x can therefore establish equilibrium with a space displacement x. A time displacement x is the same as a space displacement 8n - x if the atom is located in the appropriate position in the space-time unit, and hence a combination of these displacements also meets the equilibrium requirements. Except for a different type of zero point shift which will be discussed in a subsequent section all of the equilibrium relationships between space and time displacements follow one or another of these patterns.
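The two combination rules just stated can be set side by side; a sketch (the function name and example values are illustrative):

```python
def combined_displacement(x, y, same_direction=True):
    """Resultant effective displacement per the text: two concurrent
    time displacements combine as the geometric mean; a time and a
    space displacement (opposite space-time directions) add."""
    return (x * y) ** 0.5 if same_direction else x + y

print(combined_displacement(4, 4))          # 4.0 (both in time)
print(combined_displacement(3, 5, False))   # 8 (time x with space 8-x)
```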
The cohesive force which is exerted between the atoms of a molecule is commonly termed the chemical bond, and the variation in the magnitude of the inter-atomic force under different conditions is ascribed to the existence of several kinds of bonds. From the explanation in the preceding paragraphs it is apparent that the different “bonds” are merely the products of different rotational orientations, but the use of this term has an element of convenience, particularly in view of its general acceptance, and it will therefore be adopted in this work with the understanding that as herein used it refers to the net resultant of a particular orientation of the interacting rotational forces.
The regular bond of the electropositive elements on which the structures listed in Table II are based is a direct combination of two positive displacements. In the Division I elements the two displacements are equal and if we call their magnitude x the resultant relative displacement which, according to the principles previously stated, is the geometric mean of the two displacements, is also x. This we will call the positive bond.
When we turn to Division II, the upper electropositive division, we find that the positive bond is still very common, but since these elements are closer to the midpoint of the group negative rotations are more probable than in Division I and many of these elements form the alternate type of combination in which a positive electric displacement x is in equilibrium with a negative (space) displacement 8-x. As we have found, the resultant displacement is the sum of the two or 8. This is the neutral value, a full rotational cycle which returns to the equivalent of zero, and we will therefore call this the neutral bond. The rotation corresponding to displacement 8 is 10, as the complete space-time unit includes not only an initial time unit at one end but also an initial space unit at the other.
If the magnetic rotation extends to vibration two the specific electric rotation corresponding to displacement 8 may be reduced by one-half, in which case it becomes 5 instead of 10. This is the prevailing rotation in the Division II elements of Group 4A.
Inter-atomic distances based on the neutral bond can be calculated from equation 14 by substituting the relative electric rotation 10 or 5 for the normal rotation values used in evaluating the positive bond distances. The change in bond type does not affect the magnetic rotation since the magnetic displacement is always in time. Values obtained for the inter-atomic distances of both positive and neutral bond structures of Division II elements are listed in Table III.
In the elements which have been discussed thus far the most probable bond in any one individual dimension is applicable to all dimensions and the force system of the atom is isotropic. It follows that any aggregate of atoms of these elements has a structure in which the constituents are arranged in one of the geometrical patterns possible for equal forces: an isometric crystal. All of the electropositive elements crystallize in isometric forms and except for a few which apparently have quite complex structures each of the crystals of these elements belongs to one or another of three types, the face-centered cube, the body-centered cube, or the hexagonal close-packed structure.
We now turn to the other major subdivision of matter, the electronegative elements, those whose normal electric displacement is in space. Here the force system is not necessarily isotropic since the most probable bond in one or two dimensions may be the negative bond, a direct combination of two electric space displacements, but it is not possible to have negative bonds in three dimensions and wherever such bonds exist the atomic forces are anisotropic. The controlling factor in this situation is the necessity for a net rotational displacement in time in order to establish equilibrium in space. Negative orientation in three dimensions is obviously incompatible with this requirement but if the negative displacement is restricted to one dimension we have fixed atomic positions in two dimensions with a fixed average position in the third because of the net electric time displacement of the atom as a whole. This results in a crystalline structure which is difficult to distinguish from one with fixed positions in all dimensions. Such crystals are not usually isometric, however, as the inter-atomic distance in the odd dimension is generally different from that in the other two. Where the distances in all dimensions do happen to coincide we will find on further investigation that the space symmetry is not an indication of force symmetry.
If the negative displacement is very small, as in the lower Division IV elements, it is possible to have negative orientation in two dimensions as long as the positive displacement in the third dimension exceeds the sum of these two negative components so that the net resultant is still positive. Here the relative positions of the atoms are fixed in one dimension only, but the average positions in the other two dimensions are constant by reason of the net three-dimensional time displacement. The aggregate consequently retains most of the external characteristics of a crystal but when the internal structure is examined the atoms appear to be distributed at random rather than in the orderly arrangement of the crystal. In reality there is just as much order as in the crystalline structure but part of the order is in time rather than in space and there are no fixed equilibrium positions in space. This phase of matter we identify as the glassy or vitreous form to distinguish it from the crystalline form.
The term “state” is frequently used in this connection instead of “form” but the physical state of matter has an altogether different meaning based on another kind of differentiation and it seems advisable to confine the use of this term to the one application. Both glasses and crystals are in the solid state, the essential characteristics of which will be discussed in detail later.
In beginning a consideration of the structures of the individual electronegative elements it will be desirable to start with Division III. The possible equilibrium states in this division are analogous to those of Division II. The negative bond is comparable to the positive bond of the electropositive divisions. As in Division II there is an alternate bond in which the negative displacement x combines with the inverse orientation 8-x. While this is just the reverse of the Division II neutral bond in which the normal displacement is positive and the inverse is negative, the net result is exactly the same and this Division III combination will also be considered a neutral bond.
Where two or more alternate structures are possible the actual form which the crystal will take is a matter of probability. Low displacements are, of course, more probable than high displacements. Electric displacement in time is likewise more probable than electric displacement in space since the former conforms to the space-time direction of the rotational motion as a whole. In Division I both of these factors operate in the same direction. The positive bond, which is based entirely on time displacement and hence is inherently more probable than the neutral bond, also has the lower displacement. All structures in this division therefore take the positive bond. In Division II the margin of probability is narrow. Here the positive displacement is higher than the inverse 8-x displacement and this operates against the greater inherent probability of the time displacement. As a result both types of structures are encountered in this division, together with a combination of the two.
In Division III the greater probability of the time displacement is sufficient to keep the borderline elements of Groups 3A and 3B on the Division II basis, the electropositive preference extending as far as copper and silver. The higher Division III elements of Group 4A are beyond the range of positive bond structures and all of these elements crystallize on the basis of the neutral bond. Decreasing negative displacement in each group increases the probability of the negative bond and these bonds make their appearance in the vicinity of displacement 7. In Groups 3A and 3B the remaining elements of this division have the characteristic asymmetric electronegative structures utilizing both negative and neutral bonds. The negative bond is rare in Group 4A and in this group the neutral bond structures continue to predominate throughout Division III. Inter-atomic distances for the Division III elements are listed in Table IV.
As we pass from Division III to Division IV the magnitude of the inverse displacement 8-x increases and neutral bond structures become correspondingly less probable, although still very much in evidence. Negative bonds in crystal structures also become increasingly rare, not because of any decrease in probability but because they are likely to exist in two dimensions if they occur at all in these Division IV structures and this means a glassy or vitreous aggregate rather than a crystal. There is, however, a different type of combination which makes its appearance here where the inherently more probable bonds are excluded for one reason or another. Thus far we have examined the positive and negative bonds, in which one normal displacement is in equilibrium with another normal displacement, and the neutral bond, in which the normal displacement x combines with the inverse displacement 8-x. Now we complete the picture with a bond in which two of the 8-x inverse displacements are combined.
This bond was not encountered previously as it has a very low probability because of its high effective displacement and where more probable structures can be formed the existence of a crystal based on a bond of low probability is precluded. In Division IV the positive bond is impossible and the low probability of the 8-x combination competes only with the neutral bond, the probability of which is likewise low for the displacement values in this division. It was mentioned in the discussion of the Division II structures that a bond based on a direct combination of two 8-x displacements in that division is not possible. The 8-x displacement in Divisions I and II is negative and like the negative bond an 8-x combination would be confined to a subordinate role in one or two dimensions of an asymmetric structure. Such a crystal cannot compete with the high probability of the symmetrical electropositive structures and therefore does not exist. In the electronegative divisions, however, the 8-x displacement is positive and there are no limitations upon it other than those arising from the high effective displacement.
The effective displacement of this secondary positive bond, as we will call it, is even greater than might be expected from the magnitude of the quantity 8-x as the change of zero points for two oppositely directed motions is also oppositely directed and the new zero points are 16 units apart. The resultant displacement is 16 - 2x and the corresponding rotation is 18 - 2x. The numerical values of the latter expression range from 10 to 16 and because of the low probability of such high rotations the secondary positive bond is limited to one or one and one-half dimensions in spite of its positive direction. In the upper elements of Division IV the other dimensions take the neutral bond but the probability of this bond decreases as the value of 8-x approaches its upper limit and in some of the lower elements of the division there is no electric rotational force at all in one or more dimensions; that is, the specific rotation is unity. For example, the crystal of iodine has one dimension based on the secondary positive bond with rotation 16, the inter-atomic distance being 2.68 Å. A second dimension has unit rotation and an inter-atomic distance of 4.46 Å. The third dimension combines these two forces with a resultant distance of 3.46 Å, midway between the other two values. A similar combination bond with unit rotation as one of the components was shown for two of the rare earth elements in Table III.
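The arithmetic of this paragraph is easy to check. Taking "midway between the other two values" as the geometric mean, which is how displacement combinations are averaged elsewhere in the text (my reading, not an explicit statement), the iodine figures come out as quoted:

```python
import math

def secondary_positive_rotation(x):
    """Resultant displacement 16 - 2x gives a corresponding rotation 18 - 2x."""
    return 18 - 2 * x

# For displacements x = 1 through 4 the rotation runs from 16 down to 10.
rotations = [secondary_positive_rotation(x) for x in range(1, 5)]

# Iodine: combining the secondary positive dimension (2.68 A) with the
# unit-rotation dimension (4.46 A) as a geometric mean gives about 3.46 A.
d_combined = math.sqrt(2.68 * 4.46)
```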
A special type of structure occurs only in those electronegative elements which have a rotational displacement of four units. This rotation is on the borderline between Divisions III and IV where the neutral bond and the secondary positive bond are about equally probable. Under similar conditions other elements crystallize in hexagonal or tetragonal structures, utilizing the different bonds in different directions. For these displacement 4 elements, however, the two bonds have the same relative rotation: 10. The inter-atomic distance in these crystals is therefore the same in all dimensions and the crystal is isometric, even though the rotational forces are quite different in character. The molecular arrangement in this crystal pattern, the diamond structure, indicates the true nature of the force equilibrium. Outwardly this crystal cannot be distinguished from the isotropic cubic crystals but the analogous body-centered cubic structure has an atom at each corner of the cube as well as one in the center, whereas the diamond structure leaves alternate corners open to accommodate the abnormal projection of forces in the secondary positive dimensions. Inter-atomic distances for the Division IV elements are listed in Table V.
Up to this point no consideration has been given to the elements of atomic number below 10 as the rotational forces of these elements are subject to certain special influences which make it advisable to discuss them separately. One of these causes of deviation from the normal behavior is the small size of the rotational group. In the larger groups the four divisions are distinct and except for some overlapping each has its own characteristic force combinations. In an 8-element group, however, the second series of four elements which would normally constitute Division II is actually in the Division IV position. As a result these four elements have to a certain extent the properties of both divisions.
A second influence which affects the crystal structures of the lower group elements is the inactivation of the rotational forces in certain dimensions. It was previously noted that a magnetic rotation of two units produces no effects in the positive direction. The reason for this is revealed by equation 3 which tells us that the rotational force in the time region is ln t. The value of this expression for t = 2 is 0.693, which is less than the space-time force 1.00. The net effective force of rotation 2 is therefore below the minimum value for action in the positive direction. In order to produce an active force the rotation must be high enough to make ln t greater than unity. This is accomplished at rotation 3. In some crystals the effective forces of the higher rotational combinations are also reduced to two dimensions, but this is a geometrical effect resulting from the nature of the force equilibrium and it appears only in the more complex structures, whereas the inactivity of the rotation 2 force is an inherent property.
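The threshold described here is just a property of the natural logarithm; a two-line check (Python used for illustration):

```python
import math

# Time-region rotational force is ln t (equation 3); it acts in the
# positive direction only when it exceeds the unit space-time force 1.00.
force_t2 = math.log(2)   # 0.693 < 1: rotation 2 is inactive
force_t3 = math.log(3)   # 1.099 > 1: rotation 3 is the first active value
```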
The normal magnetic rotation of the 1B group, which includes only the two elements hydrogen and helium, and the 2A group of eight elements beginning with lithium is 3-2. Where the rotation value 2 applies to the subordinate rotation one dimension is inactive; where it applies to the principal rotation two dimensions are inactive. This reduces the force exerted by each atom to 2/3 of the normal amount for one inactive dimension and to 1/3 for two inactive dimensions. Inter-atomic distance is proportional to the square root of the product of the two forces involved, which means that the reduction in distance is also 1/3 per inactive dimension.
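The stated 1/3 reduction in distance follows directly from the square-root rule; a short numerical check (variable and function names are mine):

```python
import math

def force_factor(inactive_dims):
    """Force per atom: 2/3 of normal for one inactive dimension,
    1/3 of normal for two inactive dimensions."""
    return (3 - inactive_dims) / 3

# Distance goes as the square root of the product of the two atomic
# forces, so two like atoms with one inactive dimension each give
# sqrt(2/3 * 2/3) = 2/3 of the normal distance: a reduction of 1/3.
reduction = 1 - math.sqrt(force_factor(1) * force_factor(1))
```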
Since the electric rotation is not a simple motion but a reverse rotation of the magnetic rotational system, the limitations to which the basic rotation is subject are not applicable. The electric rotational displacement merely modifies the magnetic rotation and the low value of the rotation 2 integral makes itself apparent by an inter-atomic distance which is greater than that which would prevail if there were no electric displacement at all (unit rotation). Table VI gives the force constants and the inter-atomic distances for the elements of the lower groups.
In the preceding pages the mathematical relations governing the inter-atomic force systems were developed on a comprehensive basis but the subsequent examination of specific cases dealt only with the forces exerted between like atoms. We are now ready to begin a study of the forces between unlike atoms. The general principles developed in the discussion of the structures of the elements will, of course, apply to this situation as well, but the existence of differences between the components of the system will introduce some new factors into the calculations.
Looking first at combinations of electropositive elements, let us consider an element with electric rotation t1 in equilibrium with an element having electric rotation t2. Here the two forces are identical in character and concurrent, the kind of force combination that we have called the positive bond. The resultant, according to the principles previously set forth, is (t1t2)^½, the geometric mean of the two constituent rotations. If the two elements have different magnetic rotations the resultant in each magnetic dimension is also the geometric mean of the individual rotations, since the magnetic rotations are always in time and they combine in the same manner as the electric time displacements; that is, the magnetic equivalent of the positive bond.
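As a one-line formula (a trivial sketch; the function name is mine):

```python
import math

def resultant_rotation(t1, t2):
    """Positive-bond resultant for unlike atoms: the geometric mean of
    the two electric (or magnetic) rotations."""
    return math.sqrt(t1 * t2)

# Like atoms reduce to the earlier case: the geometric mean of x with x is x.
```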
Since the properties of matter are determined by the nature and magnitude of the inter-atomic forces, the properties of a combination of this kind are in general intermediate between the properties of the components. This type of association between elements is called a mixture if the combination is irregular and incomplete or an alloy if it is uniform and fully effective.
There is no inherent limitation on the composition of mixtures or alloys. Any of the electropositive elements can enter into such combinations and they can mix in any proportions, except to the extent that geometrical considerations intervene. Many of the electronegative elements, particularly those of Division III, follow the same pattern to some degree by reorientation on an 8-x basis. When we turn to combinations involving other than positive bonds the situation is considerably different. These other bonds are based on the establishment of equilibrium through the balancing of opposing forces at neutral space-time values and this requires that the components have certain definite relations with respect to each other. Combinations of this kind therefore take place in definite proportions, each atom of one component being associated with a specific number of atoms of the other component or components. Such a combination is called a chemical compound.
In addition to the constant proportions of their components, compounds also differ from mixtures or alloys in that their properties are not necessarily intermediate between those of the components but may be of an altogether different character, since the resultant of a force equilibrium of this kind may differ widely from any of the force arrangements of the individual elements.
The simplest type of bond in chemical compounds is an equilibrium between electropositive and electronegative rotations of the same magnitude. Here the basis of the equilibrium is a space-time ratio of unity, the time displacement of one component being equal to the space displacement of the other. As indicated in the discussion of the structures of the elements, the resultant of a combination of space and time displacements is the sum of the two, in this case 2x. The corresponding rotation is 2(x + 1), as it includes two initial units, one of space and one of time.
Because of the fundamental character of this bond and the important part which it plays in the world of chemical compounds we will call it the normal bond. Inter-atomic distances for the normal bond structures can be calculated in the same manner as before, applying the mean magnetic rotation and the effective electric rotation 2(x + 1) to equation 14. If the active dimensions are not the same in both components the full rotational force of the more active component is effective in its excess dimensions. For example, the value of ln t for 3-3 magnetic rotation is 1.099 in three dimensions or 0.7324 in two dimensions. If this two-dimensional rotation is combined with a three-dimensional magnetic rotation x, the resultant is (0.7324x)^½, the geometric mean of the individual values, in two dimensions, and x in the third. The average value for all three dimensions is (0.7324x²)^⅓.
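The 0.7324 figure and the three-dimensional average can both be verified numerically (a quick check in Python; names are mine):

```python
import math

ln3 = math.log(3)        # 1.0986: ln t for 3-3 rotation, three dimensions
two_dim = ln3 * 2 / 3    # 0.7324: the same force limited to two dimensions

def average_rotation(x):
    """Geometric mean over the three dimensions: (0.7324*x)^(1/2) in each
    of the two shared dimensions, x alone in the third."""
    per_dim = [math.sqrt(two_dim * x), math.sqrt(two_dim * x), x]
    return math.prod(per_dim) ** (1 / 3)

# This collapses to the closed form (0.7324 * x**2) ** (1/3).
```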
When this bond unites one electropositive atom with each electronegative atom the resulting structure is usually a simple cube with the atoms of each element occupying alternate corners of the cube. This is called the Sodium Chloride structure, after the most familiar member of the family of compounds crystallizing in this form. Table VII gives the inter-atomic distances for the common NaCl type crystals.
From this tabulation it can be seen that the special rotational characteristics which certain of the elements possess in the elemental aggregates carry over into their compounds. The elements of the lower groups have inactive force dimensions in these crystals just as in the structures previously examined. The second element in each group also shows the same preference for vibration two rotation that we encountered in studying the structures of the elements. As in the latter this preference extends to some of the following elements and in such series of compounds as CaO, ScN, TiC, one component keeps the vibration two status throughout the series and the resulting effective rotations are 5½, 7, 8½, rather than 6, 8, 10.
Except for certain types of crystals which are essentially interchangeable, the structures of the elements are determined almost entirely by the nature of the bonds. In compounds there is another equally active factor: the relative proportions of the components. Where two atoms of one kind form a normal bond compound with one atom of another, the unequal proportions make the NaCl arrangement impossible and instead we find the Calcium Fluoride structure. Inter-atomic distances for a number of common CaF2 type crystals are listed in Table VIII.
In spite of the difference in structure the inter-atomic distances in the CaF2 crystals are normally identical with the NaCl distances for the same positive component unless there is a secondary combination between the atoms of the two-atom component. Thus Na2S has the same inter-atomic distance as NaCl, CaF2 the same as CaO, and so on. The equilibrium distance is independent of the nature of the negative component because the atom is essentially a time structure and the magnitude of the inter-atomic forces is primarily a function of the net time displacement. Where space displacement enters into the situation it does so only as a modifier or neutralizer of the time displacement. The number of negative atoms that are required in combination with each positive atom to accomplish this result is immaterial from a force standpoint. For this reason the question of combining power or valence does not need to be taken into account in the present discussion and will be given separate consideration later.
(See Appendix B for description of material omitted from this edition.)
In our original consideration of the rotational characteristics of the elements we noted that the rotational datum, the condition of zero net displacement, has rotational displacements 1-0-0, whereas hydrogen, the first of the rotational combinations which is recognized as an element, has displacements 2-1-(1). Obviously there are a number of other possible combinations intermediate between these two, but hydrogen is the lowest combination with an effective displacement in both magnetic dimensions and hence those below hydrogen differ from the rotational combinations which we identify as the chemical elements in this important respect, a difference which affects their properties to such an extent as to exclude them from the classification matter which we apply to the elements and to their mixtures and compounds. These sub-material combinations were passed over in the earlier discussion to enable taking up the more familiar chemical elements first, but we are now ready to identify them and to examine the properties by which they make their presence known.
As in our previous study of the elements, it will be convenient to begin with those combinations which have no electric displacement. Helium, the lowest element of this type, has rotational displacement 2-1-0. If we eliminate one unit of magnetic displacement we arrive at a combination with displacement 1-1-0. This sub-material relative of the inert gases we will identify as the neutron.
In this connection it will be noted that there is a significant difference in nomenclature between the elements and the sub-material combinations. We know the elements primarily as aggregates and hence the names which have been applied to them refer specifically to the aggregates. The name helium, for instance, normally applies to a helium aggregate. If we wish to talk about an individual particle we use the term “helium atom.” The sub-material combinations, on the other hand, are known only as individual particles and not as aggregates. The name neutron is therefore applied to the single particle and there is no need to use the term “atom” or its equivalent.
By eliminating one more unit of magnetic displacement we obtain the combination 1-0-0, which we have already recognized as the rotational datum, the rotational equivalent of nothing at all, the combination with a net total displacement of zero which forms the starting point for all rotational activity.
As in the various series of material elements we may now add electric displacement to each of these sub-material magnetic combinations. An addition of one unit of electric time displacement to the neutron produces the combination 1-1-1. Here we have two units of displacement, one magnetic and one electric. The question of the identity of this particle then arises.
One of the principal means of identifying the sub-material particles is by their masses. In the elements each unit of net electric time displacement above an initial level of two (one unit per magnetic dimension) constitutes one natural unit of rotational mass, or two units on the atomic weight scale. The limitation of the effective magnetic displacement to one dimension in the neutron group results in a decrease to one-half unit of mass (one unit of atomic weight) per displacement unit and also reduces the initial level to one-half unit. In this group, therefore, each unit of net electric time displacement above an initial level of one constitutes one-half natural unit of rotational mass, or one unit of atomic weight.
On this basis the atomic weight of the neutron is one unit and that of the 1-1-1 combination is two units. With net displacement two and rotational atomic weight two, this 1-1-1 particle is difficult to distinguish from hydrogen and is easily converted to that element by a process which will be discussed later. Because of these properties this particle has not yet been discovered experimentally and is as yet unnamed.
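Reading the mass rule of the preceding paragraph literally (my formulation of it, hedged accordingly), the neutron-group weights come out as stated:

```python
def submaterial_atomic_weight(electric_units):
    """Neutron group: initial level of one unit of atomic weight, plus one
    unit per unit of net electric time displacement (half the two-unit
    scale that applies to the full elements)."""
    return 1 + electric_units

# neutron, 1-1-0: no electric displacement -> atomic weight 1
# unnamed 1-1-1 combination: one electric unit -> atomic weight 2
```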
In the other direction, a unit of electric rotational space displacement added to the neutron results in the combination 1-1-(1), which we identify as the neutrino. This particle has a net displacement of zero and therefore has no rotational mass.
Similarly we may add a unit of electric time displacement to the rotational base 1-0-0, obtaining the combination 1-0-1. Since this is a single unit on a zero base it is essentially nothing but a rotating unit of time. As a particle it is known as the positron or positive electron. In like manner we may add one unit of electric space displacement to the rotational base and obtain the combination 1-0-(1), which is merely a rotating unit of space. This we identify as the electron, the negative analogue of the positron. Neither the positron nor the electron has any effective displacement in the magnetic dimensions and the primary rotational mass of both particles is therefore zero.
The electron and the positron are symmetrical with respect to space-time and the probability principles consequently indicate that in the universe as a whole they should originate in approximately equal numbers. In the material sector of the universe, however, the positrons are readily absorbed into the structure of the elements, whereas there is only a limited field for the utilization of space displacement (electrons) in these combinations. As a result free positrons are rare and short-lived, while there is a large excess of free electrons present at all times.
The same situation prevails with respect to the neutron group. The neutron itself and the positive member of the group, the unnamed 1-1-1 combination, are quickly absorbed or converted to hydrogen, but the 1-1-(1) structure, the neutrino, is subject to the same limitations as the electron. We may therefore deduce that there is also a substantial excess of free neutrinos present at all times. A definite experimental verification of this point is still lacking but we will see later in the discussion that some of the effects of this concentration of neutrinos can be identified.
The following tabulation shows the relationship between the lowest group of elements and the various sub-material particles:

    2-1-0     helium
    2-1-(1)   hydrogen
    1-1-1     (unnamed particle)
    1-1-0     neutron
    1-1-(1)   neutrino
    1-0-1     positron
    1-0-0     rotational base
    1-0-(1)   electron
The most familiar of the sub-material particles is the electron. Since there is an excess of electrons present in our material universe at all times, because of the inability of the atoms of matter to utilize more than a fraction of those available, the electrons play an important part in many physical phenomena and our next objective will be an examination of the various relationships involved in these phenomena.
Due to their net time displacement the atoms of matter are able to move freely in space. Since motion is a relation between space and time, the relation of space to the time displacement of the atoms constitutes motion. The electron, on the other hand, is essentially a unit of space and its relationship to space in general is the relation of space to space, which is not motion. As long as it remains in its normal state, therefore, the electron cannot move through open space but under the proper conditions it can move through matter, which is a time structure.
Motion of the electron through matter requires two free dimensions (that is, dimensions with displacement in time) since the electron rotation takes place in one dimension and the translatory movement in another. In the electropositive elements all three dimensions are free, as the rotational displacements of these elements are entirely in time. The electronegative elements of Division III also provide the necessary cross-section in time because they confine their rotational displacement in space to one dimension, but the elements of Division IV have space displacement in two dimensions in some of their modifications and this prevents motion of the electrons. These substances which lack the required two-dimensional cross-section for electron motion will be identified as insulators or dielectrics, whereas the substances which permit the movement will be identified as conductors. The electron motion itself will be identified as an electric current.
Inasmuch as each electron is essentially a unit of space, the movement of these electrons in conductors constitutes motion of space through matter. The magnitude of the motion is measured by the number of electrons per unit of time; that is, units of space per unit of time. But this is the definition of velocity; hence the electric current is a velocity. From a mathematical standpoint it is immaterial whether a mass is moving through space or space is moving through the mass.
In view of this identification of the electric current with velocity it follows that the passage of current through matter modifies the velocities previously existing; the resultant net velocity in each case being the algebraic sum of the original velocity and the velocity due to the current. As brought out in the preceding pages, the atoms of matter have both translational and rotational velocities and the electric current modifies both; hence it has two separate effects. Consideration of the rotational effect will be deferred until later, and we will now examine the effect on the translational or thermal velocity.
Since the thermal motion in a solid or liquid conductor is vibratory it has no directional limitations and the current increases the velocity in all cases. This means that the passage of current imparts heat to the conductor. Heat energy is the kinetic energy of the moving atoms: the product of the mass and the square of the velocity. The heat energy produced by the current flow is therefore the resultant of two factors: the magnitude of the velocity (current) and the amount of mass involved. This amount of mass, however, is not fixed as it is in the movement of mass through space which constitutes the thermal motion of the atoms. In the latter phenomenon the mass is constant while the space depends on the duration of the movement. In the current flow the space (number of electrons) is fixed whereas the mass depends on the duration of the movement. If the flow is only momentary each electron may move through only a small fraction of the total amount of mass in the circuit, whereas if it continues for a longer period the entire circuit may be traversed repeatedly. The total mass affected by the flow of current is therefore the product of the mass per unit time by the time of flow. In the movement of mass through space we have the analogous situation of the total space being the product of the space per unit time (velocity) by the time of movement.
It is apparent from the foregoing that the mass per unit time is an important factor in the flow of electric current. If the current is constant the amount of mass traversed per unit time depends on the characteristics of the conductor through which the current is moving. We will identify the value of this quantity at unit current flow as the resistance of the conductor. The product of resistance and time, Rt, gives us the mass and multiplying the mass by the square of the velocity (current) we obtain energy RtI². Except for the difference in terminology this expression for the thermal energy of an electric current (the heat developed by the current flow) is identical with the expression for the kinetic energy of moving matter, ½mv².
With this understanding as to the nature of the quantities involved we may now evaluate the natural units of the electrical system in terms of conventional units as a basis for further mathematical treatment. For this purpose we must again select some measured quantity which we can identify in terms of the natural unit, so that we can derive a conversion ratio from the relation of the numerical values of this quantity as they appear in the two systems. The most convenient value of this kind is that of the natural unit of quantity, Q, the electrical equivalent of space, the currently accepted figure being
Here, however, we encounter a numerical discrepancy which is rather small but still greater than we like to see in a basic figure from which all of the other values in the electric and magnetic systems will be derived. Another possible method of evaluating the natural unit of quantity is to multiply the Faraday constant, the nature of which will be examined later, by the mass equivalent of unit atomic weight. Using the previously calculated value of the latter quantity we arrive at 4.8069×10⁻¹⁰ e.s.u. as the natural unit.
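As a hedged numerical sketch (not part of the original text), the alternative evaluation just described can be reproduced in a few lines of modern programming notation. Both input values below are assumptions supplied for illustration: the Faraday constant in electrostatic units (about 2.8926×10¹⁴ e.s.u. per gram-equivalent) and a mass equivalent of unit atomic weight of about 1.6618×10⁻²⁴ g, chosen to be close to the figures in use at the time.

```python
# Illustrative check: natural unit of quantity Q from the Faraday constant.
# Both constants are assumed round values, not figures quoted in the text.
faraday_esu = 2.8926e14      # e.s.u. per gram-equivalent (assumed)
mass_unit_g = 1.6618e-24     # grams per unit atomic weight (assumed)

Q_natural = faraday_esu * mass_unit_g   # natural unit of quantity, e.s.u.
print(f"Q = {Q_natural:.4e} e.s.u.")    # ~4.807e-10 e.s.u.
```

The product lands at about 4.807×10⁻¹⁰ e.s.u., the value the text adopts in the following paragraphs.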
Some such discrepancies are to be expected in view of the uncertainty as to the actual degree of precision in the physical measurements, and in setting up the system of conversion constants to relate the natural and conventional units it is necessary to pass judgment on the relative accuracy of the different determinations and to select the values which appear to be the most firmly established. It would seem that we are quite safe in accepting the values of the natural units of time, space, and mass as calculated in the earlier pages; first, because the measurements from which they have been derived are direct determinations that have been carried out with a high degree of precision, and second, because calculations based on these values of the natural units lead to values for the mass of the hydrogen atom and the molar gas volume which agree exactly with the experimental determinations.
In calculating the mass equivalent of unit atomic weight, however, we arrive at a figure which differs somewhat from the accepted value, and when this in turn is applied to the Faraday constant the same discrepancy is carried forward into the value of Q. The explanation of this conflict apparently lies in the fact that the accepted value of Q is not a direct determination but has been calculated from spectroscopic data. As we will find in the more detailed study of radiation in the subsequent pages, the spectral pattern is affected by a great variety of conditions in the atom and its environment and the spectroscopic value could easily include some unrecognized “fine structure” effect. When we turn to the direct determinations of the electronic charge we find that the result obtained in the most accurate work of this nature, Millikan’s oil drop experiments, was 4.807×10⁻¹⁰ e.s.u., which agrees exactly with the value calculated from the Faraday constant and the natural units as previously established. We will therefore accept this value as correct. From it we may now compute the natural unit of current, I, which is equal to the natural unit of velocity, or one unit of quantity (space) per unit of time.
I = Q/t = 4.807×10⁻¹⁰ e.s.u. / 0.1521×10⁻¹⁵ sec
  = 3.161×10⁶ e.s.u./sec
  = 1.054×10⁻³ amp
The electrical energy unit, the watt-hour, is the equivalent of 3.6×10¹⁰ ergs. The natural unit of energy, 5.0×10⁻⁴ ergs, can therefore be expressed as 1.389×10⁻¹⁴ watt-hours. Dividing this natural unit of energy by the natural unit of time we obtain the natural unit of power, a quantity which is expressed electrically as I²R.
I²R = I²Rt/t = 5.0×10⁻⁴ ergs / 0.1521×10⁻¹⁵ sec
  = 3.289×10¹² ergs/sec
  = 3.289×10⁵ watts
The natural unit of power divided by the natural unit of current gives us the natural unit of electromotive force, designated as IR or E.
IR = I²R/I = 3.289×10⁵ watts / 1.054×10⁻³ amperes = 3.119×10⁸ volts
Another division by unit current brings us to the natural unit of resistance, R.
R = IR/I = 3.119×10⁸ volts / 1.054×10⁻³ amp = 2.958×10¹¹ ohms
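The chain of unit conversions above lends itself to a quick numerical re-check. The following sketch, an editorial illustration rather than part of the original text, uses only the values quoted in the preceding paragraphs plus the conventional conversion factors (one ampere ≈ 2.998×10⁹ e.s.u./sec, one watt = 10⁷ ergs/sec).

```python
# Re-deriving the natural electrical units from the values given in the text.
Q = 4.807e-10          # natural unit of quantity, e.s.u.
t = 0.1521e-15         # natural unit of time, sec
E_nat = 5.0e-4         # natural unit of energy, ergs

I_esu = Q / t                    # natural unit of current, e.s.u./sec
I_amp = I_esu / 2.998e9          # the same, in amperes
P_watt = (E_nat / t) / 1.0e7     # natural unit of power, watts
emf_volt = P_watt / I_amp        # natural unit of emf, volts
R_ohm = emf_volt / I_amp         # natural unit of resistance, ohms
```

Running the calculation reproduces the quoted figures to within rounding: about 3.161×10⁶ e.s.u./sec, 1.054×10⁻³ amp, 3.289×10⁵ watts, 3.119×10⁸ volts, and 2.958×10¹¹ ohms.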
(See Appendix B for description of material omitted from this edition.)
At this point it will be helpful to review the status of the various electrical quantities from the standpoint of their relationship to the basic entities, space and time. Electrical quantity, Q, has been identified as space, s, and current, I, has similarly been identified as velocity, s/t. Resistance, R, is defined in such a manner as to make it equal to mass per unit time, t³/s³ × 1/t = t²/s³. Electrical energy is interchangeable with energy in other forms and like energy in general is t/s, the reciprocal of velocity.
Energy per unit time is power, hence power is t/s × 1/t = 1/s. Power divided by current is electromotive force, which makes this quantity equal to 1/s × t/s = t/s². This is the general expression for force, and the electromotive force, F, IR, or emf, therefore has the same basic characteristics as other forces, gravitational, mechanical, etc. In many respects it is analogous to gas pressure, which is also a force phenomenon; that is, force per unit area. The magnitude of the emf, or potential, at any point may be increased in the same manner that gas pressure is increased, either by the introduction of more electrons of the same average velocity or by imparting a greater velocity to the electrons already present. If this location is connected by means of a conductor with a region which does not participate in the increase in potential, the force difference which is created will cause a flow of current from the high potential region to the region of lower potential. This flow will persist until the potentials are equalized. Ordinarily we deal with currents which are produced by some agency that creates a continuing potential difference, and the current flows in a circuit starting and terminating at the generating agency. It is not essential, however, that such a circuit exist; a current will flow between any two points of different potential if the necessary conductor is available.
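These identifications can be checked mechanically. The sketch below is an illustrative editorial device, not part of the original text: each quantity is written as an exponent pair (a, b) standing for tᵃsᵇ, and the combinations stated in the two preceding paragraphs reduce to exponent arithmetic.

```python
# Space-time dimensional bookkeeping for the electrical quantities.
# A pair (a, b) represents t^a * s^b; division subtracts exponents.
def div(x, y):
    return (x[0] - y[0], x[1] - y[1])

quantity = (0, 1)     # Q = s
time     = (1, 0)     # t
energy   = (1, -1)    # t/s
mass     = (3, -3)    # t^3/s^3

current    = div(quantity, time)   # units of space per unit of time
resistance = div(mass, time)       # mass per unit time
power      = div(energy, time)     # energy per unit time
emf        = div(power, current)   # power divided by current

assert current == (-1, 1)       # s/t
assert resistance == (2, -3)    # t^2/s^3
assert power == (0, -1)         # 1/s
assert emf == (1, -2)           # t/s^2
```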
When the potential difference which caused the flow of current is eliminated the directional movement of the electrons which constitutes the current ceases. The electrons remaining in any particular volume, however, continue to react with the moving atoms of matter and since the electrons as well as the atoms are free to move the eventual result is an equilibrium wherein the thermal motion is divided between motion of mass through space and motion of space (electrons) through matter. The nature of the equilibrium; that is, the division of motion between the electrons and the mass, is determined by the average resistance in all directions. At a given temperature the atoms of a low resistance conductor such as copper impart less velocity to the electrons than the atoms of a conductor such as iron which has a greater resistance. Under the same conditions the various conductors therefore have different electron velocities and where the density of electrons is the same the electron pressure or potential depends upon the characteristics of the conductor. It should be noted, however, that the resistance of a conductor to the thermal motion of the electrons is not necessarily identical with the resistance which this conductor offers to the flow of electric current, inasmuch as there is no requirement that the directions of these motions be coincident. The possibility of a difference in flow direction is obvious in the case of anisotropic substances but even where the conductor is isotropic the directions of movement relative to the line of action of the inter-atomic forces may not coincide.
If we place two conductors with different electron potentials, copper and zinc for example, in contact the higher potential of the electrons in the zinc will cause a flow from the zinc to the copper until the density of electrons in the copper becomes great enough to equalize the potential. We then have an equilibrium of potential between a smaller number of high velocity electrons in the zinc and a greater number of low velocity electrons in the copper. This difference in potential which becomes apparent when two dissimilar conductors are placed in contact is known as a contact potential.
The effect of temperature on any aggregate of electrons is more complex than the corresponding effect on an aggregate composed of material atoms. The latter exists in free space and consequently the reaction to addition or removal of thermal energy is determined by the properties of the material aggregate itself without any modification by the environment. The behavior of the electron aggregate, on the contrary, is determined not only by its own properties but also by the properties of the conductor in which it is located. As a result the electronic effects show a range of variation both in magnitude and direction which is totally foreign to the analogous phenomena involving material atoms.
If a conductor is heated, the primary effect on the electrons within the conductor is a decrease in potential, the reaction of the electrons (units of space) being the inverse of the reaction of the material atoms (units of time) to the same addition of energy. Simultaneously, however, the heating of the conductor causes an increase in resistance and a corresponding increase in the potential per electron, as explained in the preceding paragraphs. The net result depends on the relative magnitude of the two effects. In a low resistance conductor such as copper or silver, the increase in potential due to the heating of the conductor is smaller than the direct effect of the temperature on the electrons and there is a net loss in potential as the temperature rises. When a conductor of this type is heated at one end only the cold end acquires a higher (more negative) potential: a phenomenon known as the positive Thomson effect. In a conductor such as iron or mercury which has a higher resistance the increase in potential due to the change in resistance may be greater than the direct effect on the electrons, in which case the hot end of the conductor acquires the higher potential: the negative Thomson effect. The effective resistance for this purpose is, of course, the resistance to thermal motion of the electrons and it does not necessarily coincide with the resistance to directional flow, as previously pointed out, but there is a general qualitative correspondence between the two, as would be inferred from the examples cited.
Now let us construct a circuit of two different conductors as in Figure 41 and cause a current to flow in this circuit. At junction A where the electrons flow from zinc to copper they leave the zinc with the relatively high potential which represents the equilibrium condition in the zinc conductor. In the copper conductor the equilibrium potential is lower, and the electrons therefore reduce their potential in the process of attaining equilibrium. This reduction in electrical potential corresponds to an increase in thermal energy and consequently the electrons absorb heat from the surroundings. The flow of electrons thus results in a cooling effect at junction A. Where the electrons return to the zinc conductor at junction B the reverse process takes place and heat is given up to the environment. This phenomenon is known as the Peltier effect.
The inverse of the Peltier effect is the Seebeck effect or thermoelectric effect. Here heat is applied to junction A. This lowers the potential per electron and since there are more electrons in the copper than in the zinc the effective potential of the copper drops below that of the zinc, causing a current flow from zinc to copper. If both junction A and junction B are at the same temperature the flow is only momentary until the necessary potential equilibrium is established but if one junction alone is heated a continuous current is produced and heat is transferred from the hot junction to the cold junction through the agency of the current.
In view of the free motion of the electrons in conductors and the establishment of thermal equilibrium between the electrons and matter it is obvious that the thermal energy will similarly tend to equalize in all parts of any system which is inter-connected electrically. It follows that the electron movement constitutes a means of heat transfer whenever a conductor is available. This type of heat transmission is called conduction.
It should be noted particularly that the motion of the electrons through matter is an integral part of the total thermal motion, not something separate. A mass m reaches a certain temperature T when the thermal velocity attains a specific average value v. It is immaterial from this standpoint whether the velocity consists entirely of motion of the mass through space or partly of such motion and partly of motion of space (electrons) through the mass. In either case the total velocity corresponding to the temperature T remains the same. In previous discussions of the theory that metallic conduction of heat is due to the movement of electrons the objection has been raised that there is no indication of any increase in the specific heat due to the thermal energy of the electron movement. The answer lies in the foregoing explanation that the thermal motion of the electrons is not an addition to the thermal motion of the atoms; it is an integral part of the atomic motion and hence has no effect on the specific heat.
Since the conduction of heat is accomplished through the same agency as the conduction of electric current—the movement of electrons—it follows that the resistance to the flow of heat is the same as the resistance to the flow of current. There is, however, a difference in the mechanism of conduction which introduces an additional factor into the heat flow. The force causing the flow of current is a directional force imposed from the outside and unaffected by the conditions within the conductor. The application of heat to one end of a conductor does not introduce any such directional force; it merely changes the average velocity of the electrons in the heated zone and the directional force gradient is a secondary effect. The greater the velocity gradient in the electron aggregate the more rapidly the velocity will be transferred, hence the effective force causing the heat flow is proportional to the temperature. The actual rate of heat transfer is the result of a combination of these two factors. As the temperature increases the resistance also increases, reducing the rate of heat flow. The same rise in temperature increases the rate of flow by reason of the greater thermal gradient. In the neighborhood of room temperature these two influences are nearly equal and the thermal conductivity, C, therefore has only a relatively small temperature variation. The general relation in this temperature range is expressed in approximate terms by the well-known Wiedemann-Franz Law:
C = kT/R    (120)
For greater accuracy it is necessary to take the initial level into account in both the thermal and electrical factors. Expressing the resistance in terms of temperature and introducing the initial level in both cases, we have the revised equation:
C = k(T − I_T)/(T − I_E)    (121)
At the higher temperatures where T is much larger than I_T or I_E, a variation in T has relatively little effect on C and equation 121 gives substantially the same results as the Wiedemann-Franz Law (equation 120). As the temperature decreases the difference between I_T and I_E becomes increasingly effective and since I_E is normally larger than I_T the value of C rises, slowly at first and then more rapidly. When T − I_E approaches zero, however, the resistance diverges from the linear relation and follows a probability curve, as we have seen in our examination of the resistance relations. Beyond this point, therefore, further change in the denominator of the conductivity equation is relatively slow and the decrease in the numerator becomes the controlling factor. The thermal conductivity thus passes through a maximum and then drops gradually to zero at zero temperature. Here there is no resistance to the electron flow but nevertheless there is no heat conductivity because the electrons have no thermal motion.
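As an editorial illustration (not from the text), the behavior of equation 121 can be exhibited numerically. The constants below are hypothetical round values chosen only to show the trend described above: C stays nearly constant when T is much larger than the initial levels, and rises as T falls toward I_E.

```python
# Illustrative evaluation of C = k(T - I_T)/(T - I_E).
# k, I_T and I_E are hypothetical values (with I_E > I_T, as the text states
# is the normal case), chosen only to exhibit the qualitative behavior.
k, I_T, I_E = 1.0, 2.0, 10.0

def conductivity(T):
    return k * (T - I_T) / (T - I_E)

high = [conductivity(T) for T in (500.0, 1000.0)]   # Wiedemann-Franz regime
low  = [conductivity(T) for T in (40.0, 20.0, 12.0)]

# At high temperature C is close to the constant k...
assert all(abs(c - k) < 0.02 for c in high)
# ...while C rises as T approaches I_E from above.
assert low[0] < low[1] < low[2]
```

The divergence as T − I_E approaches zero is where, per the text, the linear resistance relation fails and the probability curve takes over, producing the observed maximum instead of an unbounded rise.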
The general shape of the experimental thermal conductivity curves agrees with the theoretical curve as described in the foregoing paragraph. In view of the uncertainties in the measurements at low temperatures, however, it has not appeared worth while to set up any detailed comparisons for this low temperature range, and since ample supporting data for the Wiedemann-Franz relation at the higher temperatures are available elsewhere, no heat conductivity comparisons are included in this presentation.
Thus far we have considered three general types of motion: unidirectional linear motion, unidirectional rotational motion, and vibratory linear motion. To complete the coverage in this respect we now turn to the fourth of the basic types: vibratory rotational motion. Such a rotational vibration, which is identical with the basic linear vibration except in direction, will be identified as a charge. Since the primary rotations of the atoms and other rotational combinations are either one-dimensional or two-dimensional it follows that the corresponding rotational vibrations are also one-dimensional and two-dimensional, respectively. A one-dimensional rotational vibration will be identified as an electric charge and a two-dimensional motion of the same character as a magnetic charge.
The rotational vibration has the same general characteristics as the linear atomic vibration (thermal motion) previously discussed, including the fact that it is opposite in direction to the rotation with which it is associated. The electric rotational displacement of an electropositive element, for instance, is in time, and such an element therefore takes a vibrational space displacement. This introduces an awkward question of terminology. From a logical standpoint the vibrational space displacement should be called a negative charge since its direction is opposite to that of the positive rotation. On this basis the term “positive” would always refer to a time displacement and the term “negative” to a space displacement. Adoption of such a system of nomenclature would have some very definite advantages, but so far as this presentation is concerned it does not seem advisable to run the risk of adding further confusion to explanations which are already somewhat handicapped by the unavoidable use of unfamiliar terminology to express relationships not previously recognized. For present purposes, therefore, current usage will be followed and a vibrational space displacement will be called a positive charge and a vibrational time displacement a negative charge. This means that the significance of the terms “positive” and “negative” with reference to rotation is reversed in application to charge. An electropositive element (net time displacement) takes a positive charge (space displacement) whereas an electronegative element, which can assume either a positive or negative orientation, may take either charge. Normally, however, the negative charge is restricted to the most negative elements of this class: those of Division IV.
Electric charges do not participate in the basic motions of the material and sub-material combinations, but they are easily produced in almost any kind of matter or sub-material particle and can be detached from these units with equal ease. In a low temperature environment such as that on the surface of the earth the electric charge plays the part of a temporary appendage to the relatively permanent rotating system of motions.
The simplest type of charged particle is produced by imparting one unit of one-dimensional rotational vibration to the electron which, as we have found, has only one unbalanced unit of one-dimensional rotational displacement. Since the rotational displacement is in space the electron takes a vibrational displacement in time: a negative charge. The production of a charged electron in a conductor involves merely the transfer of sufficient energy to the uncharged electron to bring the existing kinetic energy of translation, which is also a time displacement (opposite to the thermal space displacement of matter), up to the total energy equivalent of a unit charge. If the electron is to be projected into space an additional amount of energy is required to break away from the solid or liquid surface and overcome the surrounding gas pressure. The necessary energy can be supplied in a number of different ways, each of which therefore constitutes a method of producing charged electrons.
A convenient and widely used method furnishes the additional energy by means of an electrical potential. Here the translational energy of the uncharged electron is increased by the electromotive force until it meets the requirements of a charged electron. In many cases the increment of energy is minimized by projecting the newly charged electrons into a vacuum rather than requiring them to overcome gas pressure. The cathode rays used in x-ray production are streams of charged electrons propagated into a vacuum. The use of a vacuum is also a feature of the thermionic production of electrons in which the necessary energy is imparted to the uncharged electrons by means of heat. In the photoelectric effect the energy is absorbed from radiation.
Existence of the electron as a free charged unit is usually of brief duration. Within a short time after it has been produced by one transfer of energy and ejected from matter into space it again encounters matter and enters into another energy transfer by means of which the charge is converted back into thermal energy or radiation and the electron reverts to the uncharged condition. In the immediate neighborhood of an agency which is producing charged electrons both the creation of charges from kinetic energy and the reverse process which transforms the charge back into kinetic energy are going on simultaneously and one of the principal reasons for the use of a vacuum in electron production is to minimize the loss of charge which occurs in this manner.
The ability of the charged electron to move through space is a result of the neutralization of the rotational space displacement by the time displacement of the charge. In the uncharged condition the electron was a rotating unit of space and could move only through time, which meant that we could observe it only by means of its effects on matter. Now that the single unit of rotational space displacement has been balanced by a single unit of vibrational time displacement (charge) the particle is neutral from the space-time standpoint and it can move freely through either space or time, although it is still blocked by insulators, which are space-time combinations that do not have the necessary open dimensions in either space or time. Furthermore, the electric charge enables us to control the motion of the charged electron through space and unlike its mysterious and elusive uncharged counterpart this electron becomes a tangible entity which can be subjected to observation and can be manipulated as a tool to produce physical effects of various kinds.
It is not feasible to isolate and examine the individual charged electrons in matter as we do in space, but we can recognize the presence of these particles by evidence of freely moving charges within the material aggregate. Aside from the special characteristics due to the electric charges, these charged electrons in matter have the same properties as the uncharged units. They travel readily through good conductors, less readily through poor conductors, are restrained by insulators, move in response to potential differences, and so on. In their various activities within aggregates of matter these charged electrons are known as static electricity.
On undertaking an examination of the basic mathematical relationships in static electric phenomena it will first be necessary to define and evaluate the units in which we will express the various electrical magnitudes. Unlike the common mechanical systems of units, which are clearly defined and mutually consistent, differing only in the arbitrary sizes assigned to the basic units, the electrical and magnetic measurement systems now in use present an extraordinary picture of confusion in which different units are applied to separate manifestations of the same quantity, the same units are used in application to different quantities, and the various systems cannot even agree on the dimensions of the quantities involved.
Failure to recognize the difference between the charged and uncharged electron is one of the items of this nature that has introduced some confusion in the electrical system. No distinction has been made between charge and quantity and the numerical value of the natural unit of electric charge is therefore 4.8069×10⁻¹⁰ e.s.u., the same expression that was used as the natural unit of quantity. Ordinarily the two different usages are entirely separate and under these circumstances no error is introduced into the calculations by utilizing the same expression for both quantities, but a clear distinction is necessary in any case where both units enter into the same calculation as they are not equivalents and cannot be treated as such.
As an analogy we might assume that we are undertaking to set up a system of units in which to express the properties of water. Let us further assume that we fail to recognize that there is any difference between the properties of weight and volume and consequently express both in cubic centimeters. Such a system is equivalent to using a weight unit of one gram and as long as we deal separately with weight and volume, each in its own context, the fact that the expression “cubic centimeter” has two entirely different meanings will not result in any difficulties. If we have occasion to deal with both quantities simultaneously, however, it is essential to recognize the difference in dimensions. Dividing cubic centimeters (weight) by cubic centimeters (volume) does not result in a pure number as the calculations seem to indicate; the quotient still has the dimensions weight/volume.
Similarly we may use the identical electrical charge and quantity units in a normal manner as long as they are employed independently and in the right context, which is the normal situation, but whenever the two enter into the same mathematical expression or are employed individually in the wrong context it is necessary to take into account both the dimensional difference and the numerical ratio of the two quantities. Charge has the dimensions t/s, but is numerically equal to t since s = 1 in the local environment. Quantity is space, s. The ratio of s to t is unit velocity, 3×10¹⁰ cm/sec, which for this purpose is reduced to 10¹⁰ cm/sec by the fact that the charge of the electron, a one-unit displacement applicable to a three-dimensional particle, is the equivalent of three purely one-dimensional quantity units.
One place in which both charge and quantity units are involved simultaneously is in the calculation of the natural unit of capacitance. This quantity is normally expressed in farads, which are equal to coulombs (e) per volt. The volt is one joule per coulomb (q). In computing the natural unit the first requirement is to put the coulomb values on the same basis and we therefore multiply coulombs (e) by 10¹⁰ to change them to coulombs (q). The natural unit of electric charge, now in coulombs (q), is then divided by the natural unit of potential in volts. This gives us the value 0.5137×10⁻¹⁷ farads for the natural unit of capacitance. The expression coulombs (e) per volt equals capacitance can be broken down into space-time terms as follows: t/s × s²/t = s. Capacitance is therefore one-dimensional space and its magnitude in centimeters can be calculated from geometrical measurements where conditions are favorable. From such measurements the centimeter has been found to be equal to 1.1126×10⁻¹² farads. We may then divide 0.5137×10⁻¹⁷ by 1.1126×10⁻¹² to obtain the value of the natural unit of capacitance in centimeters. The result is 0.462×10⁻⁵ cm. Within the limits of accuracy of the measurements this agrees with the value of the natural unit of space as previously determined and serves as a confirmation of the values computed for the units in both the electric current and electrostatic systems, since both were involved in the capacitance calculations.
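The capacitance calculation can likewise be re-checked numerically. The sketch below is an editorial illustration; its inputs are the values quoted in the text (the natural units of charge and potential, the 10¹⁰ conversion factor between the two kinds of coulombs, and the geometrical equivalence of 1.1126×10⁻¹² farads per centimeter), plus the conventional 2.998×10⁹ e.s.u. per coulomb.

```python
# Re-checking the natural unit of capacitance from the values in the text.
charge_esu = 4.807e-10                           # natural unit of charge, e.s.u.
charge_coul_q = charge_esu / 2.998e9 * 1.0e10    # coulombs (e) -> coulombs (q)
potential_v = 3.119e8                            # natural unit of emf, volts
farad_per_cm = 1.1126e-12                        # geometrical equivalence

capacitance_f = charge_coul_q / potential_v      # natural unit, farads
capacitance_cm = capacitance_f / farad_per_cm    # natural unit, centimeters
```

The result comes out at roughly 0.514×10⁻¹⁷ farads, or 0.462×10⁻⁵ cm, matching the figures in the text to within rounding.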
The electron has an effective rotational displacement in one dimension. In this dimension only it has the gravitational characteristics of an atom; that is, it is rotating in the direction opposite to that of the space-time progression. This progression in the macroscopic material universe (the time-space region) is outward toward infinite space. On the opposite side of the neutral axis (the space-time region) it is outward toward infinite time. Since the net rotational displacements are in the opposite directions, these displacements are in time (positive rotation) in the time-space region and the direction of motion is inward toward zero space. Gravitational force in this region is therefore a force of attraction. The direction of a positive rotational vibration (positive charge) is opposite to that of the positive rotation and it is outward toward infinite space. Two such positive charges are therefore moving apart, and in terms of force we may say that they repel each other. The same situation prevails where the rotations and the charges are all negative. Here the space and time relations are reversed but gravitation always opposes the space-time progression and consequently it is still a force of attraction. The oppositely directed force between the charges is a force of repulsion.
It should be recognized that the several space-time regions as previously defined are not localized in space and phenomena of two or more regions, such as positive and negative charges, may therefore exist in close proximity. If one of two interacting charges is positive and the other negative, the space-time direction of the motion is outward in both cases but because of the space-time inversion outward in each region is inward with respect to the other, hence the motions of unlike charges are directed toward each other. We therefore arrive at the general principle that the vibrational motions which we identify as charges create forces of repulsion if they are alike and forces of attraction if they are unlike.
Except for the dimensional difference, the force between electric charges is identical with the gravitational force. As in the latter, the motion is numerically equal to the displacement because the associated magnitude of the opposite kind is unity. Three-dimensional displacement t³ results in mass, t³/s³, which from a numerical standpoint reduces to t³ again since s = 1. Similarly one-dimensional displacement (electrical quantity) t or Q results in energy, t/s, which also reduces to the numerical value t because of the unit value of s. The gravitational force relations can therefore be adapted to the electrical system by replacing acceleration with reciprocal space, which gives us
t/s² = t/s × 1/s,
or in the usual terms,
Fe = E/s
From the facts brought out in the discussion of the analogous gravitational relation, equation 2, it is obvious that the expression for the force between two charges takes the same form as the gravitational equation. We may therefore state this relation as:
Fe = ee′/s²
As indicated in the general discussion of the sub-material particles, the electron has no primary rotational mass. When it acquires a charge, however, a gravitational force is produced by the effect of the rotational vibration on the basic linear frequency. We have already seen how the existence of this frequency in the atom modifies the secondary mass per effective rotational mass unit from 1/128 unit to 1/137.48 unit. In the electron the original 1/128 mass unit, which is merely the three-dimensional distribution of the rotational mass unit, does not exist and only the difference between 1/128 and 1/137.48 appears as mass. As in the atoms the secondary effect increases this mass by the factor 2/137.48. The total electronic mass is then:
me = (1/128 - 1/137.48) (1 + 2/137.48) = 0.0005458    (124)
This figure, 0.0005458 natural units, may be expressed as
or as 1/1823.28 amu.
Two additional electrostatic quantities should be mentioned briefly, as it will be of interest to compare their dimensions with those of the corresponding magnetic quantities which will be considered later. The electric field intensity is the potential gradient and is expressed in volts per meter or similar terms. The space-time dimensions are t/s² × 1/s = t/s³. The flux density is expressed in terms of flux (charge) per unit area, which is t/s × 1/s² = t/s³. These two quantities are therefore essentially the same thing seen from two different viewpoints, and the justification for the use of two different units is rather questionable.
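The equivalence of field intensity and flux density can be checked mechanically. The following is a minimal sketch, in which each quantity in the text's space-time notation is represented by its pair of t and s exponents:

```python
# Each quantity is written as a pair (t_exponent, s_exponent) in the space-time
# notation of the text, so t/s^3 is (1, -3).  A minimal bookkeeping sketch.

def mul(*quantities):
    """Multiply quantities by adding their t and s exponents."""
    t = sum(q[0] for q in quantities)
    s = sum(q[1] for q in quantities)
    return (t, s)

potential   = (1, -2)   # electric potential, t/s^2
recip_space = (0, -1)   # 1/s
charge      = (1, -1)   # electric charge (flux), t/s
recip_area  = (0, -2)   # 1/s^2

field_intensity = mul(potential, recip_space)  # potential gradient
flux_density    = mul(charge, recip_area)      # flux per unit area

# Both reduce to t/s^3, which is the point made in the text.
assert field_intensity == flux_density == (1, -3)
print(field_intensity)
```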
Electric charges are not by any means confined solely to electrons. They may also be imparted to any other particles with electric rotational displacement or its equivalent, including material atoms as well as sub-material particles. The process of producing charges in matter is known as ionization and the charged atoms or molecules are called ions. Like the electrons, matter can be ionized by any of a number of different agencies including radiation, thermal collisions, electron impacts, etc. Essentially the ionization process is simply a transfer of energy and any kind of energy will serve the purpose if it can be delivered to the right place in the necessary concentration. There is, however, one other requirement to be taken into consideration. Mechanical principles indicate that translational velocity can be converted into rotational velocity only through the agency of a force couple. The atom or sub-material particle to be ionized must therefore be associated with some unit of the opposite space-time character so that a couple is in existence at the time the translational impact occurs.
An important consequence of this requirement is that in gases and liquids ions are normally produced in pairs. In solids any atoms that acquire charges retain their fixed positions and the only mobile units produced are the charged electrons, but in fluids all of the constituent units are free to move and each ionization therefore produces both positive and negative mobile particles. The effective couple in the simple gases is normally between the atom (or molecule) and an uncharged electron. Ionization under these conditions results in the production of a positively charged atom and a negatively charged electron. Any atom can take a positive charge (space displacement), as the net rotational displacement of all elements is in time, but as a practical matter positive ions are rarely formed in a low temperature environment by the lower Division IV elements, which are strongly electronegative. In gases these elements can produce negative ions, normally as members of ion pairs since positrons are not usually available for the force couple. The electron, being a unit of space displacement, can take only a negative charge.
One of the sources from which the ionization energy can be obtained is the thermal energy of the ionizable matter itself. Because of the high temperature required for this process thermal ionization of gases is of minor importance in the terrestrial environment, but at the high temperatures prevailing in the sun and other stars thermally ionized atoms, including positively charged ions of Division IV elements, are plentiful. The ionized condition is, in fact, normal at these temperatures, and we may consider that at each location there is a general ionization level, determined by the temperature. At the surface of the earth the electric ionization level is zero and any atom or sub-material particle which acquires a charge while in the gaseous state is in an unstable condition. It therefore eliminates the charge at the first opportunity. In some other region where the prevailing temperature corresponds to an ionization level of two units, for example, the doubly ionized state is the stable condition and any units which are above or below this degree of ionization tend to eliminate or acquire charges to the extent necessary to reach this stable level.
Since the rotational vibration which we call ionization is basically a motion in opposition to the rotation of the atom, the ionization cannot exceed the net effective rotational displacement (the atomic number). In a region of high ionization level the heavier elements therefore have a considerably greater content of space displacement in the form of ionization than those of smaller mass. This point has an important bearing on the life cycle of the elements and will be discussed in detail later.
Although thermal ionization of gases in the local environment is negligible, it does occur freely in liquids. At first glance it may seem contradictory to explain the relatively small amount of thermal ionization in gases as a result of insufficient temperature when it is widespread in liquids where the temperature is still lower. It should be recognized, however, that the determining factor is not the temperature as ordinarily measured but the temperature on the basis of the appropriate regional scale. A gas temperature of 2000° K, for example, is very low in a region where unit temperature is 3.62×10¹² degrees. A liquid temperature of 400° K, on the other hand, is a very high level in a region where unit temperature is only 510 degrees.
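The comparison can be put in numerical form; a minimal sketch, using only the regional unit temperatures quoted above:

```python
# Regional unit temperatures quoted in the text.
gas_unit_temp = 3.62e12     # degrees, gas region
liquid_unit_temp = 510.0    # degrees, liquid region

# Fraction of the regional unit temperature in each case.
gas_fraction = 2000.0 / gas_unit_temp       # a 2000 degree gas
liquid_fraction = 400.0 / liquid_unit_temp  # a 400 degree liquid

# The gas stands at a vanishingly small fraction of its unit temperature,
# while the liquid stands at a substantial fraction of its own.
print(f"gas: {gas_fraction:.2e}, liquid: {liquid_fraction:.2f}")
assert gas_fraction < 1e-9 and liquid_fraction > 0.5
```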
The process of ionization in the liquid is essentially the same as in the gas. A compound such as HNO₃ consists of two components, in this case a hydrogen atom and an NO₃ radical group, held together by forces arising from the atomic rotations and the space-time progression. The effective rotation of the hydrogen atom is positive (displacement in time) while the effective rotation of the NO₃ group is negative (displacement in space). The two components are therefore able to take positive (displacement in space) and negative (displacement in time) charges respectively, and the combination constitutes the kind of positive-negative force couple which is a prerequisite for ionization. The thermal motion of the liquid itself supplies the necessary ionizing energy.
Because positive rotation and positive charge are opposite in direction, the charge acquired by the hydrogen atom neutralizes the single active positive valence and enables the atom to lead an independent existence as a neutral one-atom molecule. Similarly the charged NO₃ group becomes independent. By means of this process the HNO₃ molecule has now been split into an H⁺ molecule and an NO₃⁻ molecule.
The proportion of the total number of molecules which will be ionized in any particular aggregate is a probability function, the value of which depends upon a number of factors, including the strength of the chemical bond, the nature of the other substances present in the liquid, the temperature, etc. Where the bond is particularly strong, as in the organic compounds, the molecules often do not ionize or dissociate at all within the range of temperature through which the substance is liquid. Substances such as the metals in which the atoms are joined by positive bonds likewise cannot be ionized in the liquid state since there is no positive-negative couple on which the translatory impact can act.
The presence or absence of ions in the liquid is an important factor in many physical and chemical processes and for this reason chemical compounds are often classified on the basis of their behavior in this respect as polar or non-polar, electrolytes or non-electrolytes, etc. This distinction is not as fundamental as it might appear since the difference in behavior is merely a question of whether the bond strength is greater or less than the value necessary to prevent ionization. The position of the organic compounds in general as non-electrolytes is primarily due to the extra strength of the two-dimensional bonds which are characteristic of these compounds. It is worthy of note in this connection that organic compounds such as the acids which have one atom or group less strongly attached than normal are frequently subject to an appreciable degree of ionization.
Ionization of a liquid is not a process which is completed when the substance is first exposed to the appropriate conditions; it is a dynamic equilibrium similar to the vapor-liquid equilibrium. The electric force of attraction between unlike ions is always present and if an ion of the opposite kind is encountered at a time when the thermal energy is below the ionizing level, recombination will occur. This elimination of ions is offset by the ionization of additional molecules whose energy reaches the ionizing level. If conditions are stable an equilibrium is ultimately reached at a point where the rate of formation of new ions is equal to the rate of recombination.
Any change of conditions which affects either the ion formation or the recombination will, of course, alter the point of equilibrium. An important change of this kind is initiated if two conducting surfaces or electrodes are placed in contact with the ionized liquid or electrolyte and a potential difference is maintained between the two by some outside agency. For an explanation of the action which takes place we turn to the Principle of Inversion. According to this principle the charge of the negative ion, a rotational vibration with a displacement in time, is equivalent to and in equilibrium with a similar but opposite motion of the space unit in which the ion is located. This inverse motion is therefore a unit rotational space displacement. As long as it is associated with the ion it is a unit vibration, the inverse of the atomic vibration, but if it is detached from the ion for any reason the vibrational characteristics disappear as a single displacement unit is not vibratory when independent. Under these circumstances the motion manifests itself as an uncharged electron: a single unit of electric rotational displacement.
The external energy source acts as an electron pump, withdrawing electrons from the anode and forcing them into the cathode. Since the electrolyte is not a conductor of electrons, the electrical equivalent of a vacuum is produced at the anode and the equivalent of a pressure at the cathode. The ion, which is an atom or molecule of matter, cannot enter an electrode, but this prohibition does not apply to the coexisting space motion, and since the inverse motion of a negative ion is equivalent to an electron it is subject to the differential forces at the electrodes. When a negative ion comes in contact with the anode, therefore, the space motion leaves the ion and enters the anode as a free particle, an electron. The negative ion, deprived of its rotational vibration (charge), reverts to the neutral state. This leaves an excess of positive charge in the immediate vicinity of the anode and causes a movement of negative ions in this direction.
A similar process is simultaneously taking place at the cathode. Here the electric potential is the equivalent of a pressure and this tends to force the electrons out of the cathode whenever the opportunity arises. The electron cannot enter the liquid, which is not a conductor, but it can enter a positive ion if the latter makes contact with the cathode, since the ion is an atom of matter. When this occurs the electron and the oppositely directed rotational vibration (charge) of the space unit associated with the ion destroy each other and the ion becomes a neutral atom. As in the analogous situation at the anode, this leaves an excess charge of the opposite kind, in this case negative, and a movement of positive ions toward the cathode results.
Inasmuch as the electrons in the external conductor appear at the anode, flow through the conductor, and disappear again at the cathode, it would seem on first consideration that they must be carried from the cathode to the anode through the liquid. Examination of the movement within the liquid, however, indicates that this is incorrect, as all of the movement is from the body of the liquid to the electrodes; there is no movement from cathode to anode. The positive and negative charges created by the ionization process move to the cathode and anode respectively, where they are destroyed in the manner described. This lowers the ion concentration below the equilibrium value and further ionization then occurs. If the ionizable matter is finally exhausted and there are no more ions available the electrolytic action ceases, regardless of the potential difference between the two electrodes.
When the cycle which originated with the ionization of the molecule of the electrolyte is completed by the neutralization of the positive and negative ions a complete electronic balance has been accomplished and no net change has occurred from the electronic standpoint. The result of the process has been a chemical change. By means of energy supplied from the external source of potential the cohesive forces within the molecule have been overcome and the positive and negative components have been physically separated. Because of this separation it is not possible to reconstitute the original compound when the charges are neutralized and the ions deprived of their charges therefore combine with others of the same kind, where this is chemically possible. Metals plate out on the cathode and hydrogen gas escapes from the liquid. In some instances a gas is similarly formed at the anode, as in the electrolysis of water, but negative radicals such as SO₄, for example, cannot combine and they normally attack the metal of the anode.
It is also possible to utilize the same process to accomplish the inverse objective; that is, to obtain electrical energy from a chemical change. Let us assume, for instance, that copper and zinc electrodes are inserted into a suitable electrolyte and connected externally by a metallic conductor. Because of the difference in contact potential, electrons flow from the zinc electrode through the conductor to the copper electrode to achieve an electronic equilibrium. This leaves the zinc anode in the condition of an electronic vacuum relative to the electrolyte and similarly creates an electronic pressure at the cathode. The situation is then identical with that existing when the potential difference is created by an outside agency and the same kind of motion takes place.
In order to arrive at the condition of space-time equilibrium required for the independent existence of ions it is necessary for each component of the molecule to acquire a charge equal in magnitude but opposite in direction to the net effective rotational displacement. The number of units in the ionic charge is therefore equal to the effective valence of the atom or group which carries it. This enables us to compute the relation between the quantity of electricity and the mass involved in electrolytic action. Each ion of valence n carries n units of charge. Since a unit of charge is transformed into a unit of electrical quantity by the process at the anode the total number of ions of each sign corresponding to a current of quantity Q is Q/n. If n is one and the ion is monatomic the number of atoms is equal to the number of units of electric quantity. We may evaluate this one to one relationship in cgs units by dividing the unit of electric quantity by the unit of atomic mass.
4.8069×10⁻¹⁰ e.s.u. / 1.66124×10⁻²⁴ g = 2.8936×10¹⁴ e.s.u./g-equiv.    (125)
This is the Faraday constant. It tells us that each 2.8936×10¹⁴ e.s.u. will remove from solution one gram of a univalent substance of unit atomic weight, or m/n grams of a substance of atomic weight m and valence n. The determination of this constant is a much simpler and more direct process than the experimental evaluation of the electronic charge, and since it is advisable to base the conversion constants on the most firmly established value, the Faraday constant is listed in Appendix A as the basis of the conversion ratios in the electrical system. As indicated by equation 125, the currently accepted experimental value of this constant, 2.8936×10¹⁴ e.s.u./g-equiv., is equivalent to Millikan’s value of the electronic charge, on which the previous calculations were based.
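The division may be carried out explicitly; a minimal sketch of the computation, together with a hypothetical helper illustrating the m/n rule (the function name is ours, not the text's):

```python
# Natural unit of electric quantity and unit of atomic mass, as given above.
unit_charge_esu = 4.8069e-10     # e.s.u.
unit_atomic_mass = 1.66124e-24   # grams

# Equation 125: the Faraday constant in e.s.u. per gram-equivalent.
faraday = unit_charge_esu / unit_atomic_mass
print(f"{faraday:.4e} e.s.u./g-equiv")   # close to the quoted 2.8936e14

def grams_removed(quantity_esu, atomic_weight, valence):
    """Hypothetical helper: grams of a substance of atomic weight m and
    valence n removed from solution by a given quantity of electricity."""
    return quantity_esu / faraday * atomic_weight / valence

# One faraday of electricity deposits m/n grams.
assert abs(grams_removed(faraday, 63.5, 2) - 31.75) < 1e-9
```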
In all of the discussion in the foregoing pages the direction of movement of the electric current has been indicated in terms of electron flow and reference to the direction of “flow of current” has been avoided because of the unfortunate convention which pictures the flow in the wrong direction. Inasmuch as some rather basic changes in the concept of the nature of electric currents will be necessary in view of the findings of this work, it would seem that this is an opportune time to discard this erroneous and confusing flow convention. No change will be necessary in the designations positive and negative as they apply to battery terminals, etc. It will, in fact, be quite desirable to retain the present nomenclature as long as we continue to regard the electronic charge as negative. All that is required is an understanding that current flow is from negative to positive, or from more negative to less negative in wires connecting terminals of the same sign. On this basis the terms “positive” and “negative” in application to electric currents will refer to the electronic “pressure” or potential, the higher potential being the more negative.
Corresponding to the electric charge, which is a one-dimensional rotational vibration in opposition to the one-dimensional electric rotation, is a two-dimensional equivalent, the magnetic charge, which is a two-dimensional rotational vibration in opposition to the two-dimensional magnetic rotation.
In undertaking a general examination of the magnetic phenomena originating from such charges, the first requirement is a clarification of the dimensional characteristics of the various magnitudes involved. A certain amount of confusion has been introduced into currently accepted theoretical concepts by a lack of recognition of the fact that the gravitational equation F = mm′/s² and its electromagnetic analogues are merely special expressions of the general force equation F = ma, with the corresponding dimensional variations, but this situation has not been particularly serious in the phenomena heretofore examined. When we turn to magnetic effects, however, still further complications of the same kind are introduced by another hitherto unrecognized fact: the two-dimensional nature of the magnetic motion. It will therefore be advisable to establish the dimensions and units of the various magnetic quantities before discussing the characteristics of the magnetic forces, even though this may to some extent reverse the logical order of presentation.
We may begin with the charge itself, which is a two-dimensional rotational vibration and is therefore the two-dimensional equivalent of mass, just as the electric charge is the one-dimensional equivalent. From a space-time standpoint, therefore, magnetic charge is t²/s². The corresponding magnetic force equation is t²/s² × 1/t = t/s², which we may state as
F = M/t    (126)
As in the case of the analogous one-dimensional and three-dimensional forces, the magnetic force between two charges can be expressed by a modification of equation 126 in which t is omitted because it has unit value, and the dimensionless ratios M′ and s² are inserted.
F = MM′/s²
If unit charge is defined in terms of this equation without the introduction of any coefficient, the unit is related to the electrostatic unit derived from equation 124 by the factor s/t; that is, t²/s² × s/t = t/s. The physical expression of the natural unit of magnetic charge is not clearly indicated by the information currently available either from experiment or from those theoretical deductions that can be made at this rather early stage of the development of magnetic theory from the Fundamental Postulates of this work, and we will not attempt a direct evaluation of this unit at the present time. It is not used in the derivation of the units applicable to other magnetic quantities.
Magnetic potential, like electric potential, is charge divided by distance and it therefore has the dimensions t²/s³. The potential gradient, potential divided by distance, is the magnetic field intensity, t²/s³ × 1/s = t²/s⁴.
Another concept is that of magnetic flux, which is defined as the product of area and field intensity. Multiplying the space-time components of these quantities, t²/s⁴ × s², we find that we are back to t²/s², the expression for magnetic charge. The flux is therefore dimensionally equivalent to the charge, and unit flux is customarily used instead of unit charge in establishing the related units. It is expressed in volt-seconds or in maxwells, the latter unit being equivalent to 10⁻⁸ volt-sec. The natural unit of magnetic flux is the product of the natural unit of electric potential, 3.1190×10⁸ volts, and the natural unit of time, 0.1521×10⁻¹⁵ sec, and amounts to 0.4743×10⁻⁷ volt-sec or 4.743 maxwells. The justification for deriving the basic magnetic units from an electric quantity, the volt, can be seen by expressing this derivation in space-time terms: t/s² × t = t²/s².
We may now divide the natural unit of magnetic flux by the natural unit of space, 0.4559×10⁻⁵ cm, obtaining the natural unit of magnetic potential, 1.040×10⁶ maxwells/cm, or gilberts. Another division by unit space gives us the natural unit of magnetic field intensity, 2.282×10¹¹ gilberts/cm, or oersteds.
The natural unit of magnetic flux density or magnetic induction is the natural unit of magnetic flux divided by the natural unit of area, which arrives at exactly the same numerical result, 2.282×10¹¹, but in this case the unit is called the gauss. Here again, as in the electrostatic system, we find that the field intensity and the flux density are merely two different aspects of the same thing, and the proportionality constant connecting the two is dimensionless. A considerable amount of simplification and clarification of theoretical relationships could be accomplished in both the electric and magnetic systems by eliminating one set of units in each case.
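The chain of derivations in the last few paragraphs can be followed numerically; a minimal sketch, using the rounded natural units quoted in the text (small discrepancies in the last digit are rounding effects of these inputs):

```python
# Natural units quoted in the text.
volt_unit = 3.1190e8       # natural unit of electric potential, volts
time_unit = 0.1521e-15     # natural unit of time, seconds
space_unit = 0.4559e-5     # natural unit of space, cm

flux = volt_unit * time_unit / 1e-8     # maxwells (1 maxwell = 1e-8 volt-sec)
potential = flux / space_unit           # gilberts (maxwells/cm)
field = potential / space_unit          # oersteds (gilberts/cm)
induction = flux / space_unit**2        # gauss (maxwells/cm^2)

# flux ~ 4.743 maxwells, potential ~ 1.040e6 gilberts, and field and
# induction both ~ 2.282e11 oersteds/gauss, as derived in the text.
assert abs(flux - 4.743) < 0.002
assert abs(potential - 1.040e6) < 1e3
assert abs(field - 2.282e11) < 1e8
assert abs(field - induction) / field < 1e-12
```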
Another basic magnetic quantity is inductance, L, which is the term applied to the production of an electromotive force in a conductor by variations in an electric current. The mathematical expression is:
F = -L dI/dt
The inductance in space-time terms is then:
L = t/s² × t × t/s = t³/s³
These are the dimensions of mass, hence inductance is equivalent to inertia. Because of the dimensional confusion in the magnetic system the inductance is commonly regarded as being dimensionally equivalent to length and the centimeter is actually used as a unit. The true nature of the quantity is illustrated by a comparison of the inductive force equation with the general force equation F = ma.
F = ma = m dv/dt = m d²s/dt²
F = L dI/dt = L d²Q/dt²
The equations are identical. As we have previously found, I is a velocity and Q is space. It follows that m and L are equivalent. The qualitative effects likewise lead to the same conclusion. Just as inertia resists any change in velocity, inductance resists any change in the electric current. It has been possible to express the inductance in centimeters without getting into any difficulties only because the values of the other quantities involved are constant when we are dealing with electrons only. This is the same situation as in the illustration previously used, where we set up a hypothetical system that expressed the mass of water in cubic centimeters without introducing any serious numerical inconsistencies as long as it is used only in application to water under normal pressure and temperature conditions.
Recognition of the equivalence of self-induction and inertia also clarifies the energy picture. An equivalent mass L moving with a velocity I must have a kinetic energy ½LI2 and we find experimentally that when a current I flowing in an inductance L is destroyed an amount of energy ½LI2 does make its appearance. The explanation on the basis of existing theory is that this energy is “stored in the electromagnetic field,” but the dimensional clarification shows that it is actually the kinetic energy of the moving electrons.
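The parallel can be exhibited directly; a minimal numerical sketch in which the inductance L plays the part of the mass m and the current I the part of the velocity v (the values are illustrative only):

```python
def kinetic_energy(m, v):
    """Mechanical kinetic energy, (1/2) m v^2."""
    return 0.5 * m * v**2

def inductive_energy(L, I):
    """Energy of a current I flowing in an inductance L, (1/2) L I^2."""
    return 0.5 * L * I**2

# The two expressions are formally identical: the same numbers give the
# same result whichever interpretation is placed upon them.
assert kinetic_energy(2.0, 3.0) == inductive_energy(2.0, 3.0) == 9.0
print(inductive_energy(0.5, 2.0))  # 1.0
```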
The vibrational motion which constitutes a magnetic charge can have either a space displacement or a time displacement, but since all magnetic rotation in the material universe is in time and the charges which we recognize as magnetic are oppositely directed, all of the magnetic charges as herein defined have displacement in space. In spite of this uniformity in the inherent rotational direction, however, magnetic charges display directional effects because of the geometry of the two-dimensional motion.
To illustrate this point let us consider the positions of the axes of rotation. The axis of a one-dimensional rotation can be represented as a stationary line. This axis therefore occupies a fixed position irrespective of the location of the reference point and no question concerning motion of the axis arises. If the rotation is three-dimensional the position of axis A is no longer fixed, but the locus of all positions of this axis is a sphere and although the position relative to any specific reference point is continually changing, the time average of the changes is the same for all reference points and the motion of the axis therefore has the same aspect from all directions. Where the rotation is two-dimensional, however, the locus of the positions of axis A is a circle and the relative direction of the motion of this axis depends on the reference point. From one direction the motion is clockwise, whereas from the other direction it is counterclockwise. One of these directions is that of the space-time progression, consequently the other must be the opposite. This motion of the axis modifies the direction of the magnetic motion itself relative to the reference point. In the direction in which the resultant motion opposes that due to the space-time progression the magnetic charge has the normal properties of a positive (space displacement) charge; in the opposite direction it acts as a negative charge. Between the two extremes the magnitude and direction of the charge is determined by the geometry of the charged body. The effect is zero at the midpoint, anywhere in the plane of the circle of rotation.
Since the positive and negative magnetic charges are merely directional effects of the same motion they cannot be separated and there is no magnetic equivalent of the isolated electric charge. If any magnetically charged aggregate is broken up into smaller units, each individual fragment regardless of size still has positive and negative poles, or centers of the respective directional effects. For the same reasons given in the discussion of electric charges, like magnetic poles repel each other whereas there is a force of attraction between unlike poles.
Magnetic charges, like their electrical counterparts, are normally unstable in the local environment and are consequently short-lived. There are, however, some materials which have the ability to retain the charges on a more stable basis and magnetized materials of this kind constitute permanent magnets.
While the magnetic charge is a vibrational motion, the essential feature of magnetic motion in general is not the vibrational character but the two effective dimensions. Any basic motion effective in two dimensions only is magnetic and exerts magnetic forces on other objects with magnetic motion. It is therefore possible to develop a magnetic force by means other than a magnetic charge. We may, for instance, cause motion in two dimensions by giving a translational motion to a one-dimensional rotating unit of time or space. This unit, a positron or electron, whether charged or uncharged, then has an effective velocity component in each of two dimensions but not in the third dimension, and it consequently exerts a magnetic force. The magnitude of the translational motion of an electron is represented by the velocity, v, and that of the rotational motion by the rotational velocity s/t which reduces to s, or Q in the electric system of notation, because of the unit value of t. The two-dimensional motion is then Qv, and we may multiply this expression by the magnetic field intensity, H, to obtain the magnetic force.
F = HQv    (130)
In space-time terms this is
F = t/s² = t²/s⁴ × s × s/t
Equation 130 is normally written F = Hev, on the assumption that it is the electronic charge that participates in the production of the magnetic force. As has been explained, it is actually the rotational velocity of the charged electron that enters into this relation, not the charge, and the uncharged electron produces exactly the same result. We cannot observe the uncharged electron individually but we observe such units collectively as an electric current. Here again the magnitude of the two-dimensional motion is represented by the product of the translational velocity and the rotational velocity. The translational velocity is the current I. As in equation 130 the rotational velocity reduces to the quantity, Q, but in this case the quantity is not a fixed magnitude as it depends on the number of electrons involved and it is therefore proportional to the length, l, of the conductor. The two-dimensional velocity is then Il and we may again multiply by the magnetic field intensity, H, to obtain the force.
F = HIl = t²/s⁴ × s/t × s = t/s²
The relative ease with which magnetic effects can be produced and controlled by means of electric currents has made electromagnetism the most familiar type of magnetic phenomenon in present day engineering practice. The general principles and relationships which are involved are well-established and since the revisions of the basic concepts of electricity and magnetism which have resulted from the development of the consequences of the Fundamental Postulates of this work do not alter those subsidiary relations to any significant degree, it will not be necessary to cover these items in detail in this presentation. In view of the drastic changes that have been made in the assignment of electrical and magnetic dimensions, however, it may be advisable to demonstrate the dimensional consistency of the major relations as they are derived from the principles deduced in the foregoing pages by reducing the other two principal force relationships to space-time terms.
EMF of Current (Rate of change of flux)
F = V = dφ/dt = t²/s² × 1/t = t/s²
Magnetic Force of Current (Ampere’s Law)
F = I ds M/r² = s/t × s × t²/s² × 1/s² = t/s²
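These reductions can be checked mechanically by writing each quantity as a pair of t and s exponents; a minimal sketch covering the force relations just given:

```python
# Each quantity is a pair (t_exponent, s_exponent) in the space-time notation
# of the text; multiplying quantities adds their exponents.

def mul(*quantities):
    t = sum(q[0] for q in quantities)
    s = sum(q[1] for q in quantities)
    return (t, s)

H = (2, -4)         # magnetic field intensity, t^2/s^4
Q = (0, 1)          # electric quantity, s
v = (-1, 1)         # translational velocity, s/t
I = (-1, 1)         # current, s/t
length = (0, 1)     # conductor length l, s
M = (2, -2)         # magnetic charge (flux), t^2/s^2
recip_r2 = (0, -2)  # 1/r^2
recip_t = (-1, 0)   # 1/t

force = (1, -2)     # t/s^2

assert mul(H, Q, v) == force                 # F = HQv (equation 130)
assert mul(H, I, length) == force            # F = HIl
assert mul(M, recip_t) == force              # F = V = d(flux)/dt
assert mul(I, length, M, recip_r2) == force  # F = I ds M/r^2 (Ampere's law)
print("all force relations reduce to t/s^2")
```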
The direction of the electromagnetic force obviously depends on the direction of the current that produces it. The basic rotational motion of the electrons is in the gravitational direction; that is, it opposes the space-time progression and tends to move the rotating units together. When two currents also have the same translatory direction the magnetic forces are coincident and there is a force of attraction between the two. Parallel wires carrying currents in the same direction therefore attract each other. If the currents are moving in opposite directions this relation is reversed and a force of repulsion exists.
An alternate method of imparting another dimension of motion to the electrons is to move the entire mass in which they are present. Observation of the magnetic effect is inconvenient if this motion is translational, but by rotating the mass the required velocities can be attained under conditions permitting accurate observation. The magnetic effect thus produced is gyromagnetism. It is quite probable that the magnetism of the earth is primarily a gyromagnetic effect.
When an intermittent force such as that of an electric or magnetic charge is applied to a unit that is free to move, the latter, under favorable conditions, will be set into motion of the same nature, giving it an induced charge. This process is similar to the production of mechanical vibrations by sound waves: a water glass that starts to vibrate when a low note is played on the piano might be said to have acquired an induced oscillation. The charge likewise exerts an intermittent force and therefore tends to cause vibrational motion.
At this point it should be emphasized that according to the Principle of Inversion the negative charge of the electron and the positive charge of the material atom are equivalent and interchangeable. Any motion of an atom is equivalent to a similar but oppositely directed motion of the space unit in which it is located or to an equal motion of any other space unit. The same force that causes a positive vibration of the atom will therefore cause a negative vibration of the electron. Conversely it is immaterial whether the inducing charge is a positive atomic charge or a negative electronic charge. The result in either case is the induction of charges of both kinds, negative charges in electrons and positive charges in material atoms. Theoretically a negative charge on an atom (a negative ion) should only be able to induce similar negative atomic charges, since there are no free positrons in the local environment, but experimental verification of this point is not available at present.
In a conductor the charged electrons are free to move to accommodate themselves to the existing electric potential. The charged atoms cannot move if the conductor is solid, but they can accomplish essentially the same result by transferring the charge in the same manner in which they received it; that is, by induction. The charges therefore distribute themselves in accordance with the potential, and they can be drawn off through conductors, separated, or otherwise manipulated.
Magnetic charges can likewise be induced in any substance that is free to move in the magnetic dimensions, but the availability of such substances is limited. There is no common sub-material particle comparable to the electron which can take a magnetic charge (as defined herein) and, as we will find in a later section, most elements have an oppositely directed vibrational motion which prevents them from acquiring such a magnetic charge. In a suitable material, however, a charge may be induced by bringing a magnetically charged body (a magnet) into close proximity.
Similar results can also be produced by electromagnetic means. In an electromagnet the electron motion itself does not have the intermittent characteristics needed for inducing charges, but the magnetic effect of any individual electron on any individual atom is dependent on the direction of their relative velocity, and since that direction changes as the electron goes past the atom, the resulting force has an oscillating character.
The magnetic induction process can also be applied in the inverse manner by moving a conductor in the field of a magnetic charge or electromagnet, in which case an emf is induced in the conductor. The motion of the conductor itself has no magnetic effect since the material atoms already have three-dimensional motion, but there are free electrons within the conductor which are given a second dimension of motion in the process, and this produces a magnetic effect in the manner which has been described.
The lowering of the superconductive transition point by application of a magnetic field as mentioned in the discussion of superconductivity is another inverse magnetic effect. Since the flow of current creates a magnetic force, it follows that magnetic forces modify the flow of current and hence alter the relation between the thermal and electric temperature scales.
One of the noteworthy differences between the gravitational force and the forces due to electric or magnetic charges is that the gravitational force cannot be screened off or reduced by the contents of the intervening space, whereas the electric and magnetic forces are subject to very substantial modification by matter intervening between the point of origin of the force and its point of application. While this is outwardly a very striking difference, so outstanding, in fact, that it is often cited as strong evidence to indicate that gravitation must have an entirely different origin than electric or magnetic forces, it is in reality only a result of the three-dimensional character of the gravitational forces. In all cases the presence of matter between the point of origin of one of these forces and any location in space alters the net resultant force acting at that location. The gravitational forces, always attractive, are additive, and the intervening substance merely increases the total by the amount of its own gravitational force. Electric and magnetic forces, on the other hand, are anisotropic and the directional effect of the intervening forces of the same kind may operate to reduce the original force rather than add to it. Induced charges, particularly, are capable of distorting the original force pattern out of all recognition, even to the extent of complete neutralization.
Since each material substance affects the electric and magnetic forces in a manner determined by its rotational characteristics and its ability to respond to induction, it is possible in each case to derive a numerical value which represents the magnitude of the effect produced by the particular substance. This value, a dimensionless ratio, is the dielectric constant, εr, in the electric system and the permeability, µ, in the magnetic system. For free space the value of each of these quantities is unity.
We may now generalize the electrical force expression, equation 123, so that it is applicable to the situation in which the charges e and e’ are separated by a medium of dielectric constant εr.
F = ee′/εr s²
Equation 127, the corresponding magnetic expression, may be similarly modified to cover the general situation in which the intervening medium has a permeability µ.
F = MM′/µ s²
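As a minimal numerical sketch (the function names are mine, not the text's), the two generalized expressions can be written directly, with εr = µ = 1 recovering the free-space forms of equations 123 and 127:

```python
# Generalized force expressions with an intervening medium:
#   electric:  F = e e′ / (εr s²)     magnetic:  F = M M′ / (µ s²)
# εr (dielectric constant) and µ (permeability) are dimensionless
# ratios, each equal to unity for free space.

def electric_force(e1, e2, s, eps_r=1.0):
    return e1 * e2 / (eps_r * s ** 2)

def magnetic_force(m1, m2, s, mu=1.0):
    return m1 * m2 / (mu * s ** 2)

# Free space recovers the unmodified inverse-square expressions:
assert electric_force(2.0, 3.0, 1.0) == 6.0
assert magnetic_force(2.0, 3.0, 1.0) == 6.0

# A medium with εr = 25.07 (the value quoted later for ethyl alcohol
# at 20° C) reduces the electric force by that same factor:
assert abs(electric_force(2.0, 3.0, 1.0, eps_r=25.07) * 25.07 - 6.0) < 1e-9
```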
The study of the numerical values of the dielectric constant and the permeability is one of the more recent phases of this investigation, and it is not far enough advanced to permit including the results in the present work, except in one particular area. Some of the preliminary findings may, however, be of interest even though they are necessarily tentative. It is indicated that there are two separate effects to be considered. Inasmuch as the three-dimensional gravitational rotation has a component parallel to any one-dimensional or two-dimensional motion, there is an interaction between mass and charge similar to that between charge and charge, but of a much smaller order of magnitude. Each substance has an inherent dielectric constant and permeability due to this interaction. In addition to possessing these inherent characteristics some, but not all, substances are susceptible to induction and add an inductive component, which is normally much larger than the gravitational component.
The gravitational component is independent of the temperature, and where temperature effects are observed they are due to secondary causes such as the change in density. The inductive component, on the other hand, is dependent on the existence of free rotational displacements which can be given charges, or opposing vibratory space displacements. The thermal motion, which is a space displacement, has the effect of reducing the net time displacement available as a base for the charge and hence it reduces the inductive capacity. The general trend of the inductive components of the dielectric constant and the permeability is therefore downward with increasing temperature.
The hydrocarbons are typical of substances with no inductive component in the dielectric constant. In these compounds the observed value of εr decreases slightly as the temperature rises, but this is merely a density effect, as the quantity (εr − 1)/d remains practically constant through the range of temperatures where the experimental results are the most accurate. Octane, for instance, goes from 1.357 at −50° C to 1.362 at +50° C. There is some decrease in the experimental values at still higher temperatures but it is questionable whether this decrease is real.
Some of the substances with inductive components have relatively high dielectric constants even at room temperature. For example, the observed value for ethyl alcohol at 20° C is 25.07. In many other substances the dielectric constant at room temperature is quite low, somewhere in the neighborhood of twice the constant gravitational component, but it rises to much higher values at lower temperatures.
The direction of the dielectric and permeability effects is determined by the direction of the atomic rotation in the free dimensions: those which do not participate in the electric or magnetic vibrational motion. In the electric system the free dimensions are magnetic, and since the magnetic rotation is entirely in time in the material universe the dielectric effect is always positive; that is, the dielectric constant of a material medium is always greater than unity. In the magnetic system the free dimension is electric and the permeability effect may be either positive or negative, depending on the direction of the electric rotation. In the electropositive elements (Divisions I and II) the electric rotation is positive, like the magnetic rotation, and the permeability is therefore greater than unity. The electronegative elements (Divisions III and IV) have negative electric rotation and the permeability effect is in the opposite direction, resulting in permeability values less than unity. As in the case of valence, it is possible for some atoms to reorient themselves in such a way as to reverse the normal force directions, but these reorientations are relatively rare in the magnetic system and the great majority of the elements and their compounds follow the normal pattern as outlined.
In dealing with the magnetic system it will be convenient to use the magnetic susceptibility, which is defined as (µ − 1)/4π, rather than the permeability itself. The susceptibility of free space is zero, and the positive and negative permeability effects are represented by positive and negative values of the susceptibility.
The electronegative elements are not normally subject to magnetic induction since they have no positive electric rotation to give the necessary positive direction to a charge, except in those few instances where reorientation has taken place. The susceptibility of these elements and the compounds of similar characteristics is therefore limited to the gravitational component, and since the latter is negative because of the negative direction of the electric rotation, these substances have relatively small negative susceptibilities which are independent of the temperature. The term diamagnetic is used to designate such properties.
The electropositive elements and the compounds of similar magnetic behavior are generally classified as paramagnetic, if the positive susceptibility is small, or as ferromagnetic, if it is large. The theoretical development in this work leads to somewhat different conclusions but in view of the incomplete status of the magnetic investigation these conclusions should be regarded as tentative for the present. From this new theoretical viewpoint it would appear that there should be a class of paramagnetic substances corresponding to the diamagnetic group with relatively small positive susceptibilities independent of temperature. When we examine the experimental susceptibilities of the electropositive elements this is just what we find. The majority of these elements have susceptibilities of approximately the same magnitude as the diamagnetic values, ranging from near zero to slightly over 1.00. Furthermore, these small positive susceptibilities show little or no temperature variation. The value reported for magnesium at 700° C, for example, is identical with that reported at 20° C. Where any temperature variation does exist it is usually an increase in the susceptibility at the higher temperatures, which is directly opposite to the behavior of inductive paramagnetism and suggests that secondary causes may be responsible.
The remaining electropositive elements have much higher susceptibilities which are strongly temperature-dependent, and while there is a very large difference between the susceptibilities of the ferromagnetic elements and the other elements of this group, it would seem that they all belong in the class of inductive paramagnetics. The existence of an inductive effect depends on the availability of free rotations which can act as a base for the charge. As mentioned earlier, no such free rotation exists in most elements because there is an oppositely directed rotational vibration, to be discussed later, which inhibits the magnetic vibration. This opposing motion, however, is a single entity and consequently it is dimensionally symmetrical; that is, it has the same displacement in both magnetic dimensions and is limited to the first space-time unit in the electric dimension whenever the magnetic rotation is confined to this one unit. Since each element of the atomic rotation is independent, any unsymmetrical portion of the atomic rotation can take a magnetic vibration even though the associated symmetrical rotational units are vibrating in a different manner. An unsymmetrical rotation in one dimension is inadequate for a magnetic motion but if such rotations are available in two dimensions induction of a magnetic charge is possible.
The effect of this requirement of two free dimensions is to limit magnetic induction to those elements which (1) have two vibrational units rotating in the electric dimension, and (2) have unequal primary and secondary magnetic displacement. The normal electric rotation does not enter the second space-time unit below Group 3A and the elements of the lower groups are therefore unable to respond to magnetic induction as long as they are in their normal states. Groups 3B and 4B are also excluded in their entirety because their magnetic displacements are 3-3 and 4-4 respectively and there is no unsymmetrical magnetic rotation. This leaves only those elements of Groups 3A and 4A which have electric rotation in the second space-time unit: the iron-cobalt-nickel group and the rare earths. It is not immediately apparent why the inductive capacity of the 3A elements should be so much greater than that of the 4A group but this question will have to be left for later treatment, along with the question of magnetic induction in chemical compounds.
The diamagnetic susceptibility has been studied in considerable detail and it has been found that this property is merely the reciprocal of the effective magnetic rotational displacement. There are, of course, two possible values of this displacement for most elements but the applicable value is generally indicated by the environment; that is, association with elements of low displacement generally means that the lower value will prevail and vice versa. Carbon, for instance, takes its secondary displacement, one, in association with hydrogen, but changes to the primary displacement, two, in association with elements of the higher groups.
This same quantity, the reciprocal of the magnetic displacement, plays an important part in the refraction of light. In the discussion of the refraction phenomenon in a subsequent section of this work the effective displacement, the total value plus or minus the initial level, will be evaluated for a large number of substances and tabulated as the refraction constant, kr. The diamagnetic susceptibility is identical with the refraction constant except for certain differences in the initial levels, and since the available refraction data are much more complete and far more accurate than the available magnetic susceptibility measurements, it will be advisable to calculate the susceptibilities from the corresponding refraction constants rather than to attempt direct calculation.
In Table CVI the column headed kr lists the refraction constants as computed in connection with the refractive index calculations. The next two columns show the derivation of the initial level adjustment. Normally the magnetic initial level is the same as the refractive initial level in the interior groups of the molecule but is 1/9 unit higher in the end groups. Under normal conditions, therefore, the sum of the individual differences in initial level, dI, is m’/9, where m’ is the number of rotational mass units in the end groups of the molecule, and the average difference for the molecule as a whole is m’/9m. In the normal paraffins, for example, there are 18 rotational mass units in the two CH3 groups at the ends of the chain. The value of dI for these compounds is therefore 18/9 = 2.0. Branching adds more ends to the molecule and consequently increases dI. The 2-methyl paraffins add one CH3 end group, raising dI to 3.0, the 2, 3-dimethyl compounds add one more, bringing this value up to 4.0, and so on. Some modifications of this general pattern are encountered where there is a very close association between the CH3 groups and the remainder of the molecule. In 2-methyl propane, for instance, the CHCH3 combination acts as an interior group and the value of dI for this compound is the same as that of the corresponding normal paraffin: butane. The C(CH3)2 combination likewise acts as an interior group in 2, 2-dimethyl propane, and as a unit with only one end group in the higher 2, 2-dimethyl paraffins.
Compound                    kr     dI     dI/m   Calc.   Observed
2, 2-di Me propane         .823   2.000   .048   .871    .874
2, 2-di Me butane          .816   3.000   .060   .876    .873  .883  .885
2, 2-di Me pentane         .814   3.000   .052   .866    .866  .869
2, 3-di Me butane          .809   4.000   .080   .889    .883  .885
2, 3-di Me pentane         .809   4.000   .069   .878    .873  .875
2, 3-di Me hexane          .808   4.000   .061   .869    .865
2, 2, 3-tri Me butane      .809   4.000   .069   .878    .878  .894
2, 2, 3-tri Me pentane     .808   4.000   .061   .869    .872  .874
2, 2, 4-tri Me pentane     .813   3.000   .045   .858    .859
1, 2-di Me cyclopentane    .786   1.889   .034   .820    .828
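The end-group arithmetic described above reduces to a few lines. This is only an illustrative sketch; the helper name is mine, and the CH3 unit count of 9 follows from the text's statement that the two CH3 end groups of a normal paraffin contribute 18 rotational mass units:

```python
# Initial-level adjustment: dI = m′/9, where m′ is the number of
# rotational mass units in the molecule's end groups (CH3 = 9 units,
# since two CH3 ends give 18/9 = 2.0 for the normal paraffins).

CH3_UNITS = 9

def d_i(end_groups):
    """Sum of initial-level differences for the given number of
    CH3-equivalent end groups."""
    return end_groups * CH3_UNITS / 9

assert d_i(2) == 2.0  # normal paraffins: two chain ends
assert d_i(3) == 3.0  # 2-methyl paraffins: one added CH3 end
assert d_i(4) == 4.0  # 2,3-dimethyl paraffins: two added CH3 ends
```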
The behavior of the substituted chain compounds is similar, but there is a greater range of variability because of the presence of components other than carbon and hydrogen. The alcohols, a typical family of this kind, have a CH3 group at one end of the molecule and a CH2OH group at the other. The value of dI for the longer chains is therefore 26/9 = 2.889. In the lower alcohols, however, the CH2 portion of the CH2OH unit reverts to the status of an interior group and dI drops to 2.0. The methyl alcohol molecule goes a step farther and acts as if it had only one end. A similar pattern can be seen in the lower acids and acid esters. Since we have found that the effective units of these compounds in some of the phenomena previously studied are double formula molecules, it appears probable that the magnetic behavior of methyl alcohol and other compounds with similar characteristics can also be attributed to the size of the effective molecule.
The organic rings act as double chains and the susceptibility pattern of the cyclic compounds is identical with that of the straight chains. The initial levels of the aromatic rings are one step lower; that is, the end groups of the double chain have the same initial levels as in refraction, and the levels of the interior groups are 1/9 unit below the refraction values. Methyl substitutions enter the ring and follow the ring pattern. Longer branches act as attached chains and the initial levels of the highly branched compounds therefore rise toward the levels prevailing in the straight chain structures.
In the earlier discussion of the characteristics of the various sub-material particles it was pointed out that there must be an excess of neutrinos present in the material universe at all times because these combinations cannot be utilized in the formation of matter in anything like the quantities in which they are produced. Obviously the presence of any such large concentration of particles of a particular type can be expected to have some kind of a significant effect on the physical system. We have already examined a wide variety of electrical phenomena resulting from the analogous excess of electrons. The neutrino, however, is more elusive and there is very little direct experimental information available concerning this particle and its behavior. We will therefore have to rely mainly upon theoretical deductions to trace the course of events until we come to effects upon matter which can be observed and measured.
We can logically conclude that in some environments the neutrinos exist in the uncharged condition, just as we find that the electron normally has no charge in the terrestrial environment. In this condition the neutrino has a net displacement of zero, and it is therefore able to move freely in either space or time. Furthermore, it is not affected by gravitation or by electrical or magnetic forces, since it has neither mass nor charge. It therefore has no motion with respect to space-time, which means that from the viewpoint of a stationary reference system the neutrinos produced at any given point move outward in all directions at unit velocity in the same manner as radiation. Each material aggregate in the universe is therefore exposed to a constant flux of neutrinos which may be regarded as a special kind of radiation.
While the neutrino is neutral with respect to space-time because the displacements of its separate motions add up to zero, it actually has effective displacements in both the electric and magnetic dimensions. It is therefore capable of taking either a magnetic or an electric charge. For reasons stated in the earlier pages the magnetic motion takes precedence over the electric motion where either is possible and the charge of a charged neutrino is therefore magnetic. The direction of the charge is, of course, opposite to that of the magnetic rotation, and since the latter is in time the charge is a space displacement. Inasmuch as this charge is the only significant feature of the structure, the charged neutrino is essentially nothing but a mobile unit of space.
As a space displacement the charged neutrino is subject to the same limitations as the analogous uncharged electron; it can move freely through the time displacements of matter but it is barred from passage through open space. Any neutrino which acquires a charge while passing through matter is therefore trapped and is unable to escape from the material aggregate unless the charge can be eliminated. At first the proportion of neutrinos captured in passing through a newly formed aggregate is probably small but as the number of charged particles within the aggregate builds up, increasing what we may call the magnetic temperature, the tendency toward capture becomes greater. Most of the neutrinos resulting from cosmic ray decay processes are also charged and join the captured particles. Being rotational in character the magnetic motion is not radiated away in the manner of the thermal motion and the increase of the neutrino population is therefore a cumulative process. There will inevitably be some differences in the rate of build-up due to local conditions, but in general the older a material aggregate becomes the higher its magnetic temperature rises. The ultimate result of this process will be discussed later.
As in the analogous thermal motion, the motion of the neutrinos (space) relative to matter is equivalent to motion of matter relative to space (the Principle of Inversion). The material aggregate is therefore in equilibrium with the neutrinos from the standpoint of magnetic temperature. At some stage of the rise in this magnetic temperature the first magnetic ionization level is reached. The situation at this point is the same as that existing at the first electric ionization level. The magnetic energy corresponding to the prevailing magnetic temperature is now equal to the energy required for one unit of magnetic vibration of an atom, and the latter is therefore set into vibrational motion. The motion of the atom relative to space (the neutrino) is the inverse of the motion of the neutrino with respect to time (the atomic displacement) and the atom therefore acquires a vibrational displacement in time. This is a magnetic charge similar to the charges discussed in connection with the general subject of magnetism, but opposite in space-time direction; that is, it is a time displacement rather than a space displacement: a difference which has a profound effect on the participation of the charges in physical phenomena. The ordinary magnetic charge is a foreign element in the material system: a magnetic space displacement in a structure based upon magnetic time displacements. Magnetism therefore plays a detached part of relatively small importance in the local system. The oppositely directed magnetic charges resulting from the magnetic ionization process, on the contrary, are fully compatible with the basic structure of the atoms of matter and are able to join with the magnetic rotational displacement as integral parts of the basic rotational system of the atom. A charge of this kind is not inherently stable, as the direction of the charge must oppose that of the rotation to achieve stability, but the motion we are now discussing is a forced charge. The charges of the neutrinos are stable, and the coexisting atoms are forced to acquire the equivalent charges necessary for equilibrium.
In view of the very significant difference in behavior between the two oppositely directed magnetic vibrations we will not use the term “magnetic charge” in application to the vibrational time displacement which we are now discussing, but will call this a gravitational charge. The motion which constitutes such a charge is identical with the magnetic rotation of the atoms except for the fact that it reverses direction and is therefore effective only during half of the vibration period. Each unit of gravitational charge is therefore equivalent to half of a natural unit of electric rotational displacement. For convenience this half unit has been taken as the unit of atomic weight or atomic mass and the atomic mass of a gravitationally charged atom is therefore equal to 2Z + G, where Z is the atomic number and G is the number of units of gravitational charge.
Inasmuch as the gravitational charge is variable the atoms of an element do not all have the same total primary mass but cover a range of values depending on the size of the factor G. The different states which each element can assume by reason of the variable gravitational charge will be identified as isotopes of the element and the mass on the 2Z + G basis is the isotopic mass. As the elements occur naturally on the earth the various isotopes of each element are almost always in the same proportions and each element therefore has an average isotopic mass which is recognized as the atomic weight of that element. From the foregoing discussion it is evident that the atomic weight thus determined reflects local conditions and does not necessarily have the same value in a different environment.
Like the electric and magnetic charges, the charge of the neutrino with which the gravitational charge is in equilibrium is beyond the unit level and in the time-space region. The atomic rotation, as we have found, is in the time region. The relation between the vibrational and rotational motions is therefore between mv and mr2. Furthermore, the time region motion is subject to the factor 156.44, which represents the ratio of the total to the effective motion. Denoting the magnetic ionization level as I, we then have the equilibrium relation:
mv = I mr² / 156.44        (137)
In this equation mr is expressed in the full-sized mass units (two units of atomic mass) and mv in the half-size vibrational units.
The value of mv derived from equation 137 is the theoretical number of units of vibrational mass which will normally be acquired by an atom of rotational mass m, if raised to the magnetic ionization level I. It is quite obvious from the available information that the magnetic ionization level on the surface of the earth is unity and a calculation for the element lead on this unit basis, to illustrate the application of the equation, results in mv = 43. Adding the 164 units of rotational mass corresponding to atomic number 82 we arrive at a theoretical atomic mass of 207. The experimental value is 207.2.
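The lead calculation just described can be reproduced arithmetically. In this sketch (the function name is mine) mr is taken in full-size units, so mr = Z = 82 for lead:

```python
# Equation 137: m_v = I · m_r² / 156.44, with m_r in full-size units
# (each full unit = two units of atomic mass, so m_r = Z).

def vibrational_mass(m_r, ionization=1):
    return ionization * m_r ** 2 / 156.44

Z = 82                             # lead
m_v = round(vibrational_mass(Z))   # 82² / 156.44 = 42.98 → 43
atomic_mass = 2 * Z + m_v          # rotational mass 164, plus 43

assert m_v == 43
assert atomic_mass == 207          # observed atomic weight: 207.2
```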
This close agreement is not quite as significant as it appears. Actually there are stable isotopes of lead with isotopic masses ranging from 204 to 208. The value obtained from equation 137 is not necessarily the atomic mass nor the isotopic mass of the most stable isotope; it is the center of a zone of isotopic stability. Because of the individual characteristics of the elements the actual median of the stable isotopes and the average atomic mass may be offset to some extent from this theoretical center of stability, but the deviation is generally small. The variation of the atomic weight increment from the theoretical value of mv exceeds four units in only three of the first 92 elements, and sixty percent of these elements deviate only one unit or not at all.
This situation is shown in detail in Table CVII. The second column in the tabulation gives the values of mv calculated from equation 137. Column 3 is the theoretical equilibrium mass, 2Z + mv, taken to the nearest unit since the gravitational charge does not exist in fractional units. Column 4 is the observed atomic weight, also expressed in terms of the nearest integer, except where the excess is almost exactly one-half unit. Column 5 is the difference between the calculated equilibrium mass and the observed atomic weight. The trans-uranium elements are omitted since these elements cannot have (terrestrial) atomic weights in the sense in which the term is used in application to the stable elements.
The width of the zone of stability is quite variable, ranging from zero for technetium and promethium to a little over ten percent of the rotational mass. The reasons for the individual properties in this respect have not yet been determined. One of the interesting and probably significant points in this connection is that the odd-numbered elements generally have much narrower stability limits than the even-numbered elements. Isotopes which are outside this zone of stability undergo modifications which have the result of moving the atom into the stable zone. The nature of these processes will be examined later.
It has previously been established that the maximum limit for magnetic rotational displacement is four units. The elements of rotational group 4B have magnetic rotational displacements 4-4 and it is possible to build this group up to 4-4-31, which corresponds to atomic number 117, without exceeding the maximum possible magnetic displacement. The next step does bring the magnetic rotation in one dimension up to the point where it exceeds the limit, and element 118 is therefore unstable and will disintegrate promptly if it is ever formed. All combinations above 118 (rotational atomic mass 236) are similarly unstable, whereas all elements and sub-material combinations from 117 down are stable at a zero level of magnetic ionization.
At a higher ionization level the vibrational mass is added to the rotational mass and the stability limit is reached at a lower atomic number. As indicated by Table CVII, the equilibrium mass of uranium, element 92, is 238 at the unit ionization level. This exceeds the 236 limit and uranium, together with all elements above it in the atomic series, is unstable in such an environment. Here we also encounter a probability effect similar to those resulting from the distribution of molecular velocities in many of the phenomena previously examined. If all of the magnetic vibrational motion conformed exactly to the magnetic temperature equivalent of the unit ionization level, the elements below uranium would all be stable from the standpoint of the overall limit and would be subject to decay only to the extent that individual isotopes might be outside the isotopic stability zone. Actually the magnetic temperature at the earth’s surface is somewhere in between the first and second ionization levels, and because of the probability distribution the magnetic temperature of some of the individual atoms occasionally rises high enough to reach the second ionization level. This increases the vibrational mass and moves the stability limit farther down the atomic series. The lowest element which theoretically could be affected by this situation is gold, element 79, for which the total mass at two units of ionization is 238, but the probability of the second ionization decreases as we move down the atomic series from uranium to gold, and while the first few elements below uranium are very unstable, the activity is negligible beyond bismuth, element 83.
As the magnetic ionization level rises the stability limit drops still lower in terms of atomic number. It should be noted, however, that the rate of decrease slows down rapidly. The first stage of ionization reduces the stability limit from 118 to 92, a difference of 26 in atomic number. The second ionization causes a decrease of 13 units, the third only 8, and so on. The significance of the higher ionization levels and the nature of the action initiated when the ionization limit is reached will be discussed later.
The ejection of space or time displacement by an atom which becomes unstable for one of the reasons that have been outlined will be identified as radioactivity or radioactive decay, and the adjective radioactive will be applied to any element or isotope of an element which is in the unstable condition. As has been brought out, there are two distinct kinds of instability. Those elements whose mass exceeds 236, either in rotational mass alone or in rotational mass plus the vibrational mass added by magnetic ionization, are beyond the over-all stability limit and must reduce their respective masses below 236. In a fixed environment this cannot be accomplished by modification of the vibrational mass alone, since the ratio of vibrational to rotational mass is determined by the prevailing magnetic ionization level. The radioactivity resulting from this cause therefore involves the actual ejection of mass and the transformation of the element into an element of lower atomic number. The most common process is the emission of a helium atom, or alpha particle, which gives it the name alpha decay.
The second type of instability is due to a ratio of vibrational to rotational mass which is outside the stable zone. In this case ejection of mass is not necessary; the required adjustment of the ratio can be accomplished by addition or emission of electric rotational displacement, which converts vibrational mass into rotational mass or vice versa and thereby transforms the unstable isotope into another isotope within or closer to the zone of stability. The most common process of this kind is the emission of a beta particle, an electron or positron, and the term beta decay is applied.
In this work the alpha and beta designations will be used in a more general sense. All processes which result from instability due to exceeding the 236 mass limit (that is, all processes which involve the ejection of primary mass) will be classified as alpha radioactivity and all processes which modify only the ratio of vibrational mass to rotational mass will be classed as beta radioactivity. If it is necessary to identify the individual process such terms as β+ decay, etc., will be employed.
On first consideration it might appear that the observed characteristics of radioactivity are incompatible with the origin of this phenomenon as deduced from the Fundamental Postulates and outlined in the foregoing discussion. This derivation clearly requires radioactivity to be an explosive type of action, initiated as soon as an aggregate reaches the limit of stability and continuing as a single event until the atomic transformation is complete. The observed radioactivity, on the other hand, apparently consists of a series of independent events occurring at random within an aggregate and in many instances extending over a very long interval of time. The explanation of this seeming inconsistency is simple, but it will be more convenient to introduce it at a later stage of the discussion, and for the present we will turn to a consideration of the details of the basic radioactive processes.
In analyzing these processes, which are few in number and relatively simple, the essential requirement is to distinguish clearly between the rotational and vibrational mass. For convenience we will adopt a notation in the form 6-1, where the first number represents the rotational mass and the second the vibrational mass. The example cited is the isotope Li7. A negative mass will be indicated by parentheses as in the expression 2-(1), which is the isotope H1. This system is similar to the notation used for the rotational displacements, but there should be no confusion since one is a two-number expression while the other is a three-number expression.
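For readers who wish to verify the arithmetic, the two-number notation can be modeled as an ordered pair; the following sketch (the tuple form and the helper name are illustrative conveniences, not part of the original notation) confirms that the two members of each pair sum to the isotopic mass:

```python
# Illustrative model of the two-number mass notation: an isotope is
# represented as (rotational mass, vibrational mass); a negative second
# member corresponds to the parenthesized form such as 2-(1).

def total_mass(isotope):
    """The isotopic mass is the sum of rotational and vibrational mass."""
    rotational, vibrational = isotope
    return rotational + vibrational

Li7 = (6, 1)    # written 6-1 in the text
H1  = (2, -1)   # written 2-(1)

print(total_mass(Li7))  # 7
print(total_mass(H1))   # 1
```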
The neutron mass has the same single unit value (one-half unit on the natural scale) which characterizes the vibrational mass, and like the latter it is purely magnetic. It is therefore interchangeable with the vibrational mass. The mass symbol for the neutron is 0-1. The relationship between the neutron and the rotational vibration of an atom is the magnetic equivalent of the relation of the uncharged electron to the electric charge of an ion, as discussed in connection with the subject of electrolysis.
The first of the basic transition processes which we will consider is the direct addition or subtraction of pure rotational mass. Since each unit of rotational displacement is equal to two units of atomic mass, the effect of this process is to increase or decrease the rotational mass by 2n units. The rotational combination with n = 1 is the H2 isotope, which is unstable under terrestrial conditions, and the ejected particle is normally the first stable combination, in which n = 2. Emission of this particle, the He4 isotope, 4-0, results in a change such as
O16 → C12 + He4
16-0 → 12-0 + 4-0
In any location where the magnetic ionization level is zero and the H2 isotope is consequently stable, the emission of H2 undoubtedly takes precedence since the smaller unit has the greater probability, and in such an environment a forced disintegration of the O16 isotope proceeds in this manner:
O16 → N14 + H2
16-0 → 14-0 + 2-0
Since rotational vibration exists only in conjunction with rotation, units of vibrational mass cannot be added or subtracted directly except by a change of the magnetic ionization level, but the equivalence of the neutron mass and the vibrational mass makes it possible to accomplish this objective by adding or withdrawing neutrons. Thus we may start with the mass 2 hydrogen isotope, the deuteron, and by adding a neutron obtain the mass 3 isotope.
H2 + n1 → H3
2-0 + 0-1 → 2-1
Similarly the ejection of a neutron leaves the mass 1 isotope as the residual product.
H2 - n1 → H1
2-0 - 0-1 → 2-(1)
Inasmuch as the rotational vibration is a displacement of the same kind and direction as the magnetic rotational displacement itself, the only factor which permits it to exist as an independent vibrational entity rather than becoming merely a component of the total rotation is the lack of motion in the electric dimension. Addition of displacement in the electric dimension therefore has the effect of converting vibrational mass to rotational mass. One unit of electric time displacement is required for each rotational displacement unit, the equivalent of two units of atomic mass. Addition of one unit of electric time displacement thus results in the conversion of two units of atomic mass from the vibrational to the rotational basis. This can take place either by the addition of a positron or by ejection of the inverse particle, the electron, as in the reactions
H3 + e+ → He3
2-1 + e+ → 4-(1)
H3 - e- → He3
2-1 - e- → 4-(1)
Elimination of one unit of electric time displacement by addition of an electron or removal of a positron reverses this process, increasing the vibrational mass by two units and decreasing the rotational mass accordingly.
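The basic processes described above lend themselves to a simple mechanical check. In the following sketch each process is written as an operation on a (rotational, vibrational) pair; the function names are illustrative, but the arithmetic is exactly that given in the text:

```python
# Hedged sketch of the basic growth and decay processes acting on
# (rotational, vibrational) mass pairs. Function names are illustrative.

def alpha_emission(m):
    r, v = m
    return (r - 4, v)          # eject He4 = 4-0

def add_neutron(m):
    r, v = m
    return (r, v + 1)          # neutron = 0-1 adds vibrational mass

def remove_neutron(m):
    r, v = m
    return (r, v - 1)

def emit_electron(m):          # also achieved by adding a positron:
    r, v = m
    return (r + 2, v - 2)      # 2 units vibrational -> rotational

def emit_positron(m):          # also achieved by adding an electron:
    r, v = m
    return (r - 2, v + 2)      # 2 units rotational -> vibrational

# Worked examples from the text:
assert alpha_emission((16, 0)) == (12, 0)   # O16 -> C12 + He4
assert add_neutron((2, 0)) == (2, 1)        # H2 + n1 -> H3
assert remove_neutron((2, 0)) == (2, -1)    # H2 - n1 -> H1
assert emit_electron((2, 1)) == (4, -1)     # H3 - e- -> He3
```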
These are the basic growth and decay processes. The actual course of events in any particular case depends on the situation; it may involve only one such process, it may consist of several successive events of the same kind, or different basic processes may combine to bring about the required result. In natural beta radioactivity a single beta emission is normally sufficient as the unstable isotopes are seldom very far outside the zone of beta stability and alpha stability is not involved. In natural alpha radioactivity, on the other hand, the amount of mass which must be ejected usually amounts to the equivalent of several alpha particles. The loss of this rotational mass by successive alpha emissions necessitates beta emissions to restore the equilibrium between rotational and vibrational mass. As an example we may trace the various steps involved in the radioactive decay of uranium.
U238 → Th234 + He4
184-54 → 180-54 + 4-0
This puts the vibrational mass outside the zone of stability and two successive beta emissions follow promptly, bringing the atom back to another isotope of uranium.
Th234 → Pa234 + e-
180-54 → 182-52 + e-
Pa234 → U234 + e-
182-52 → 184-50 + e-
Two successive alpha emissions now take place, with a considerable length of time between stages, since both U234 and the intermediate product Th230 are relatively stable. These events bring us to radium, the best known of all the radioactive elements.
U234 → Th230 + He4
184-50 → 180-50 + 4-0
Th230 → Ra226 + He4
180-50 → 176-50 + 4-0
After another somewhat shorter time interval a rapid succession of decay events begins. Half-life periods in this zone range from days down to seconds. Three more alpha emissions start this sequence.
Ra226 → Rn222 + He4
176-50 → 172-50 + 4-0
Rn222 → Po218 + He4
172-50 → 168-50 + 4-0
Po218 → Pb214 + He4
168-50 → 164-50 + 4-0
By this time the vibrational mass of 50 units is well above the zone of stability, the center of which is theoretically 43 units at this point. The next emission is therefore an e- particle.
Pb214 → Bi214 + e-
164-50 → 166-48 + e-
This isotope is still above the stable zone and another beta emission is in order, but a further alpha emission is also imminent, and the next step may take either direction.
Bi214 → Po214 + e-
166-48 → 168-46 + e-
or Bi214 → Tl210 + He4
166-48 → 162-48 + 4-0
In either case this emission is followed by one of the alternate kind and the net result of the two successive events is the same regardless of which step is taken first.
Po214 → Pb210 + He4
168-46 → 164-46 + 4-0
or Tl210 → Pb210 + e-
162-48 → 164-46 + e-
After some delay due to the 22-year half-life of Pb210, successive emissions of two electrons and one alpha particle occur.
Pb210 → Bi210 + e-
164-46 → 166-44 + e-
Bi210 → Po210 + e-
166-44 → 168-42 + e-
Po210 → Pb206 + He4
168-42 → 164-42 + 4-0
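The entire sequence from U238 can be verified mechanically. In the sketch below (the names and the list form are illustrative) each alpha step removes the 4-0 particle and each beta step converts two units of vibrational mass to rotational mass; the branch at Bi214 is written in one of its two equivalent orders:

```python
# Bookkeeping check of the uranium decay chain described above.
def step(m, kind):
    r, v = m
    return (r - 4, v) if kind == "alpha" else (r + 2, v - 2)

chain = ["alpha", "beta", "beta", "alpha", "alpha", "alpha", "alpha",
         "alpha", "beta", "beta", "alpha", "beta", "beta", "alpha"]

m = (184, 54)                  # U238
for kind in chain:
    m = step(m, kind)

assert m == (164, 42)          # Pb206, within both stability limits
assert sum(m) == 206           # eight alpha steps remove 32 mass units
```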
The lead isotope Pb206 is within the stability limits both with respect to total mass (alpha) and with respect to the vibration-rotation ratio (beta) and the radioactivity therefore ends at this point.
The unstable isotopes which are responsible for natural radioactivity in the local environment originate in two ways: by past or present inflow of matter from regions where the magnetic ionization level is zero, and by atomic transformations initiated by high energy particles such as those in the cosmic rays. In those regions where the formation of matter takes place on a major scale all of the 117 possible elements originate in the proportions established by probability considerations. As long as the magnetic ionization level is zero these elements are all stable and there is no spontaneous alpha radioactivity. If this matter is then transferred to a region of higher magnetic ionization, such as the earth in its present condition, the stability limit in terms of atomic number drops because of the addition of vibrational mass originating from the magnetic vibrational motion, and radioactivity is initiated.
Whether the earth acquired the unit magnetic ionization level at the same time that it assumed its present status as a planet or reached this level at some earlier or later date is not definitely indicated by the information now available. There is some evidence which suggests that this change took place in a considerably earlier era, but in any event the situation with respect to the radioactive elements is essentially the same. They originated in a region of zero magnetic ionization and either remained in that region while the magnetic ionization increased, or in some manner, the nature of which is immaterial for present purposes, were transferred to their present location, where they have become radioactive for the reasons stated.
The other source of natural radioactivity is atomic rearrangement resulting from interaction of the material atoms with particles of other types, principally the cosmic rays and their derivatives. In such reactions stable isotopes of one kind or another are converted into related unstable isotopes and the latter then become sources of radioactivity, mostly of the beta type. The observed reactions of this kind can be duplicated experimentally, together with a great variety of similar transformations which presumably also occur naturally but have been observed only under the more favorable experimental conditions. We may therefore combine our consideration of natural beta radioactivity, the so-called artificial radioactivity, and the other experimentally induced transformations into an examination of atomic transformations in general.
In essence these transformations, regardless of the number and type of particles involved, are no different from the simple addition and decay reactions previously discussed, and the most convenient method of describing these more complex events is to treat them as successive processes in which the reacting particles first join in an addition reaction and then subsequently eject one or more particles from the combination. According to some of the theories currently in vogue this is the way in which the transformation actually takes place. This seems rather improbable, at least as a general rule, but for present purposes it is immaterial whether or not the symbolic representation conforms to physical reality and we will leave this question in abeyance. The formation of the isotope P30 from aluminum, the reaction which led to the discovery of artificial radioactivity, may be represented as
Al27 + He4 → P30 + n1
26-1 + 4-0 → 30-1 → 30-0 + 0-1
Here the rotational motions of two separate particles combine and the total motion is then redistributed in a different pattern. The two phases of the reaction are independent; that is, any combination which adds up to 30-1 can produce P30 + n1, and conversely there are many ways in which the 30-1 resultant of the combination Al27 + He4 can be broken down. The final product may therefore be some such combination as Si30 + H1 rather than P30 + n1. It is even possible that the decay process may restore the original mass distribution Al27 + He4, although energy considerations normally favor a change of some kind.
The usual method of conducting these transformation experiments is to accelerate a small material or sub-material unit to a very high velocity and cause it to impinge on a target. In general the degree of fragmentation of the target atoms depends upon the relative stability of these atoms and the kinetic energy of the incident particles. For example, if we use hydrogen atoms against an aluminum target at a relatively low energy level we will get results similar to those produced in the helium-aluminum reactions previously described. Typical equations are
Al27 + H1 → Mg24 + He4
26-1 + 2-(1) → 28-0 → 24-0 + 4-0
Al27 + H1 → Si27 + n1
26-1 + 2-(1) → 28-0 → 28-(1) + 0-1
Greater energies cause further fragmentation and result in such re-arrangements as
Al27 + H1 → Na24 + 3H1 + n1
26-1 + 2-(1) → 28-0 → 22-2 + 6-(3) + 0-1
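Since the two phases of such a reaction are independent, the bookkeeping reduces to confirming that the reactants and the products each combine to the same (rotational, vibrational) pair. A brief sketch, with illustrative names:

```python
# Illustrative check of the two-phase bookkeeping: reactants combine
# into a single (rotational, vibrational) pair, which must equal the
# corresponding sum over the products.

def combine(*parts):
    return (sum(r for r, v in parts), sum(v for r, v in parts))

Al27, H1, He4, n1 = (26, 1), (2, -1), (4, 0), (0, 1)

assert combine(Al27, H1) == (28, 0)              # intermediate 28-0
assert combine((24, 0), He4) == (28, 0)          # Mg24 + He4
assert combine((28, -1), n1) == (28, 0)          # Si27 + n1
assert combine((22, 2), (6, -3), n1) == (28, 0)  # Na24 + 3H1 + n1
```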
This general principle that the degree of fragmentation is a function of the energy of the incident particles has an important bearing on the relative probabilities of various reactions at very high temperatures and will have further consideration later.
In the extreme situation where the target atom is heavy and inherently unstable the fragments may be relatively large and the process is known as fission. The difference between this fission process and the transformation reactions previously described is merely a matter of degree, and the same relationships apply.
Although it is possible in some instances to transform one stable isotope into another, the more general rule is that if the original reactants are stable the major product is unstable and therefore radioactive. The P30 isotope, for instance, is below the stability zone; that is, it is deficient in vibrational mass. It therefore decays by positron emission to form a stable silicon isotope.
P30 → Si30 + e+
30-0 → 28-2 + e+
In the fission reactions of the heavy elements the products often have substantial amounts of excess vibrational mass, and in these cases successive emissions result in decay chains in which the unstable atoms move step by step toward stability. One of the relatively long chains of this kind that has been identified is the following:
Xe140 → Cs140 → Ba140 → La140 → Ce140
108-32 (19) → 110-30 (19) → 112-28 (20) → 114-26 (21) → 116-24 (22)
The figures in parentheses refer to the vibrational mass corresponding to the center of stability as calculated for each element from equation 137. The original fission product Xe140 has 13 excess vibrational units and is thus far outside the stability zone. Emission of electrons converts successive 2-unit increments of vibrational mass to rotational mass, and on reaching Ce140 the excess has been reduced to two units. This is within the stability margin and the radioactivity therefore ceases at this point.
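The progress of this chain toward stability can be tabulated directly. In the sketch below the stability-center values quoted in parentheses above are simply taken as given from the text:

```python
# Xe140 beta-decay chain: each electron emission moves 2 units of
# vibrational mass to rotational mass. Tuples are
# (name, rotational, vibrational, stability center from equation 137).
chain = [("Xe140", 108, 32, 19), ("Cs140", 110, 30, 19),
         ("Ba140", 112, 28, 20), ("La140", 114, 26, 21),
         ("Ce140", 116, 24, 22)]

for name, rot, vib, center in chain:
    assert rot + vib == 140    # total primary mass is conserved
    print(name, "excess vibrational units:", vib - center)
# The excess falls 13, 11, 8, 5, 2; at Ce140 it is within the margin.
```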
The foregoing description of the atomic transformation processes has been confined to the essential element of the transformation, the redistribution of the primary mass, and the collateral effects have either been ignored or left for later treatment. In the latter category are the mass-energy relationships, which will be considered shortly. The electric charges carried by some of the reaction products are not particularly significant as they are merely an alternate means of absorbing some of the reaction energy which would otherwise go into translatory motion. Even this effect is only a temporary one as the charges are soon converted into kinetic energy. Absorption of energy by neutrinos is likewise a collateral and transient phenomenon which has no direct bearing on the primary process. Unlike the cosmic ray neutrinos, which are actually produced in the decay processes, the neutrinos which carry off part of the excess energy resulting from atomic transformations are pre-existing particles within the material aggregate. When translational energy is liberated at any particular point it can be acquired by any unit which is present; not only time units, atoms or sub-material particles, but also space units, electrons or neutrinos if rotating, photons if not rotating.
The atomic transformations which have been discussed thus far are primarily exchange reactions, in which some of the motion of one of the participants is transferred to the other, or fragmentation reactions, in which one or both of the participants are broken up into smaller units. Another class of transformations of prime importance in the general mechanism of the universe is the addition reaction which was mentioned briefly in the discussion of the basic processes by which the atomic rotational systems are modified.
Direct combination of two multi-unit atoms is not impossible, but it is difficult to accomplish. Because of the inverse gravitational action in the time region there is a strong force of repulsion between the two structures when they approach each other. Furthermore, each atom is a combination of motions in different dimensions and even if the two atoms have sufficient relative velocity to overcome the repulsion and make effective contact they cannot join unless the displacements in the different dimensions reach the proper conditions for combination simultaneously. The product of a reaction involving n units of this kind therefore normally consists of n or more particles, and this type of reaction is not available as an atom building process, except to the extent that the mass of the larger component can be increased without reducing the total number of particles, as in the reaction
C13 + He4 → O16 + n1
12-1 + 4-0 → 16-1 → 16-0 + 0-1
Where the hydrogen atom is employed as the incident particle the situation is much more favorable for combination, since hydrogen has only one net unit of displacement and only one dimension of combination is involved. We therefore encounter many reactions such as
Al27 + H1 → Si28
26-1 + 2-(1) → 28-0
The 1-1-1 particle which is equivalent to hydrogen is still better adapted to participation in these addition reactions and it is possible that some of the transformations attributed to hydrogen are actually the work of this anonymous and rather elusive particle. The atom builder par excellence, however, is the neutral member of the 1-1 family, the neutron. This particle is essentially nothing more than a unit of magnetic rotational time displacement, and as such it adds readily to any material or sub-material combination. A well-known example is
U238 + n1 → U239
184-54 + 0-1 → 184-55
Neutron absorption is a spontaneous process requiring nothing more than contact with the material atom, and the large kinetic energies commonly used with other bombarding particles are unnecessary. In many instances slow neutrons are actually more effective than fast neutrons, since they spend more time in the vicinity of the target atom. The source of the “raw material” for atom building will be discussed at length in a later section. At that time it will be shown that this building material is preferentially produced in the form of neutrons, and neutrons are therefore available in large numbers in those regions in which they are stable; that is, in regions of zero magnetic ionization. It will also be brought out in the same discussion that the primary units from which the neutrons are produced originate uniformly throughout space, and although the presence of matter has some bearing on the conversion into neutrons the greater part of this activity takes place where most of the primary units are produced; that is, in the vast expanse of inter-galactic and inter-stellar space. It follows that this open space is the primary atom-building region, the location in which most of the light elements are assembled.
A secondary atom-building process is simultaneously operating in the regions where the magnetic ionization is greater than zero. Here the neutron is outside the zone of stability and the equivalent stable particles, the neutrino and the positron, are formed instead. The positrons, although inherently stable, are short-lived as they are so easily absorbed into the rotating systems of the atoms. The neutrinos are normally magnetically charged as produced and they add to the constantly growing neutrino concentration which determines the magnetic temperature. Unlike the neutron, therefore, the neutrino-positron pair makes no immediate contribution to the mass of the system. Sooner or later, however, the continual additions to the neutrino population bring the magnetic temperature up to the next higher ionization level. Magnetic displacement is then transferred from neutrinos to atoms, increasing the rotational mass of the latter, until the equilibrium point as defined by equation 137 is attained. The atom building in these regions is therefore a delayed-action process rather than an immediate event comparable to the absorption of a neutron into the existing atomic system.
The relative abundance of each element in the original product is a question of probability. Conversion of the neutron to hydrogen is a relatively simple matter but anything further requires the making of the proper kind of contacts in a region in which the particle density is so low that contacts of any kind are few and far between. The great majority of the atoms therefore never get beyond the hydrogen stage. As would be expected from probability considerations, helium is in second place. Beyond this point the atomic rotation enters a stage of greater complexity and the individual characteristics of the elements affect the probabilities to some extent, but in relatively young matter we can expect to find a rather small proportion of heavy elements and a general trend toward a decrease in relative abundance as the atomic number increases.
Following this very early diffuse stage of the existence of matter comes a further long period of time spent in various stages of aggregation. Here neutrons are still plentiful as long as the magnetic ionization level remains at zero, and while the production of hydrogen is small compared to that occurring in open space, the building of heavier elements from the lighter ones goes on continuously. The proportion of heavy elements therefore increases with the age of the material aggregate. Although the relative abundance of the different elements is still determined by probability, the abundance curve is more irregular because the distribution of the total rotational displacement between the electric and magnetic rotations at the higher levels introduces some complexities. We have no satisfactory means of determining the relative proportions of the elements in the younger aggregates but we can get a good idea of the situation by examining the terrestrial abundances, which are representative of a somewhat later stage of development, as indicated by the unit magnetic ionization level.
Let us consider the 2B group of elements, for example. The first three of these elements, sodium, magnesium, and aluminum, are formed by successive additions of electric displacement to the 2-2 magnetic rotational base, and all three are among the moderately plentiful elements in the earth’s crust. Silicon, the next element, is likewise produced by a similar addition and the probability of its formation does not differ materially from that of each of the three preceding elements. Another such addition, however, would bring the displacement to 2-2-5, which is unstable, and in order to form the stable equivalent 3-2-(3) the magnetic displacement must be increased by one unit in one dimension. The probability of accomplishing this result is considerably less than that of adding an electric displacement unit and the step from silicon to phosphorus is consequently more difficult than those immediately preceding. The total amount of silicon in existence therefore builds up to the point where the lower probability of the next addition reaction is offset by the larger quantity available to participate in the reaction. As a result silicon is one of the most abundant of the post-helium elements.
The situation with respect to carbon, the equivalent element of the next lower group, is not clear, as the relative proportions in which the light elements are found under terrestrial conditions are not very significant in application to the universe as a whole, and the stars give conflicting testimony. At the midpoint of the next higher group is the iron-cobalt-nickel trio of elements, and iron, the predominant member of this closely related trio, conforms very definitely to the theoretical expectation, being even more abundant than silicon.
When we turn to the corresponding elements of the 3B group, ruthenium, rhodium, and palladium, we find a totally different condition. Instead of being relatively abundant, as would be expected from their position in the atomic series just ahead of another increase in the magnetic displacement, these elements are rare. This does not necessarily mean that the relative probability effect due to the magnetic displacement step is absent, as all of the neighboring elements are likewise rare. In fact, all elements beyond the iron-nickel group exist only in comparatively minute quantities. Estimates indicate that the combined amount of all of these elements in existence is less than one percent of the existing amount of iron.
It does not appear possible to explain this situation in terms of the probability concepts. A fairly substantial decrease in abundance compared to iron would be in order if the age of the local system were such as to put the peak of probability somewhere in the vicinity of iron, but this should still leave the ruthenium group among the relatively common elements. The nearly complete elimination of the heavy elements, including this group which should theoretically be quite plentiful, requires the existence of some much more powerful factor: either (1) an almost insurmountable obstacle to the formation of elements beyond the iron group, or (2) a process which destroys these elements after they are produced.
There is no indication of the existence of any serious obstacle which interferes with the formation of the heavier elements. Laboratory experiments indicate that neutron absorption and other growth processes are just as applicable to the heavy elements as to the light ones. The building-up of the very heavy elements is endothermic, but this should not be a serious obstacle, and in any event it does not apply below Group 4A and it therefore has no bearing on the scarcity of the 3B and lower division 3A elements. The peculiar distribution of abundances therefore seems to require the existence of a destructive process which prevents the accumulation of any substantial quantities of the heavy elements even though they are produced in normal amounts. In the next section it will be shown that an independent line of reasoning based on the existence of a limiting value of thermal energy also leads to the same conclusion.
The discovery of the mass-energy relation E = mc2 by Einstein was a very significant advance in physical theory and has already had some far-reaching practical applications. It is, of course, entirely in harmony with the principles upon which this work is based and has been incorporated into the new theoretical structure, but when we develop this relationship from the Fundamental Postulates instead of following Einstein’s derivation we arrive at a somewhat different concept of the physical meaning of the equation which affects its applicability to a considerable extent.
From these postulates we find that mass and energy differ only in dimensions; that is, energy is the reciprocal of one-dimensional velocity while mass is the reciprocal of three-dimensional velocity. This mass-energy relation does not mean that a quantity of energy always has a certain mass associated with it; on the contrary it indicates that reciprocal velocity exists either as mass or as absolute momentum, or as absolute energy, depending on the effective dimensions, not as all three or any two simultaneously. Mass is equivalent to energy only when and if it is transformed from the one condition to the other, and the mass-energy equation merely gives the mathematical equivalents in the event of such a conversion. In other words, an existing quantity of energy does not correspond to any existing mass but to the mass that would exist if the energy were actually converted into mass. For these reasons Einstein’s hypothesis of an increase in mass accompanying increased velocity cannot be accepted. The kinetic energy increment could increase the mass only if it were actually converted to mass by some appropriate process, and in that event it would cease to be kinetic energy; that is, the corresponding velocity would no longer exist. Actually this hypothesis of Einstein’s is inconsistent with his concept of the conversion of mass into energy, regardless of the point of view from which the question is approached. Mass cannot be a by-product of kinetic energy and also an entity that can be converted into kinetic energy; the two concepts are mutually exclusive.
This hypothesis was formulated as a means of accounting for the otherwise unexplained decrease in acceleration at very high velocities, but in the system now being developed from the Fundamental Postulates this phenomenon is found to be due to the vanishing of force as velocity approaches unity, rather than to any variation in mass. In the theoretical universe now being described the mass-energy relation is applicable solely to those situations in which mass disappears and energy appears, or vice versa. The familiar process of this kind is the interchange between mass and energy which takes place as a result of radioactivity or other similar atomic transformations. We will first examine the characteristics of this process, after which we will go on to a consideration of some additional conversion processes which have not hitherto been recognized, but which are necessary consequences of the principles developed in the preceding discussion and will play important parts in the cosmological theories that will be derived by a further extension of these same principles.
It is evident from the facts brought out in the examination of the atomic transformation phenomena that both rotational and vibrational primary mass are conserved in these reactions. In the radioactive disintegration Ra226 → Rn222 + He4, for example, the total primary mass of the original radium atom is 226. The primary mass of the residual radon atom, 222, and that of the ejected alpha particle, 4, likewise add up to 226. Any mass-energy conversion involved in these atomic transformation processes is therefore confined to the secondary mass.
The nature of the secondary mass has already been indicated in connection with the evaluation of the atomic mass unit. As stated in the Principle of Inversion, any motion of an atom within a space unit is equivalent to and in equilibrium with a similar motion of the space unit in which it is located. Normally the two motions are in opposite directions, since one is the inverse of the other. As we have seen, the normal effect of the induced rotational space motion is to produce a negative mass of 0.0049 units per natural mass unit, which reduces the size of the atomic mass unit accordingly. In hydrogen, where the force integral is less than unity, the space motion is in the same direction as the effective atomic rotation and the secondary mass is therefore positive. In this case, however, only one dimension of motion is effective and the mass of the hydrogen atom is 1.0025 on the natural scale or 1.0074 amu.
If four hydrogen atoms combine, the positive secondary mass of the hydrogen atoms is replaced by negative secondary mass in the product, a helium atom. Helium has 1½ units of secondary mass (0.0037 amu) more than the amu standard and the total reduction in mass is
(4 × 1.0074) - 4.0037 = 0.0259 amu.
The equivalent of this mass appears as kinetic energy of the products. Such reactions therefore constitute a means of energy production.
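The arithmetic of this conversion can be checked directly; the sketch below uses only the figures quoted above (hydrogen 1.0074 amu, helium 4.0037 amu on the scale of the text):

```python
# Mass deficit when four hydrogen atoms combine into one helium atom,
# using the atomic masses quoted in the text.
m_hydrogen = 1.0074  # amu, per hydrogen atom
m_helium = 4.0037    # amu, helium atom

deficit = 4 * m_hydrogen - m_helium
print(f"mass converted: {deficit:.4f} amu")  # 0.0259 amu
```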
Hydrogen is the only element with positive secondary mass but the negative mass can vary all the way from zero, which corresponds to a total of 1.0049 amu per unit of primary mass, to the full three-dimensional value, 0.0074, which reduces the net effective mass to 0.9975 per primary unit. The values applicable to the individual isotopes depend on the characteristics of the particular rotational system and since each primary unit is independent from this standpoint there is a considerable range of variation, especially among the unstable isotopes. As in the vibrational mass, however, there is a definite center of the zone of variation and the stable isotopes are generally found at or near this center. Like most physical relations of this kind the secondary mass curve has a negative initial level and since the secondary mass is itself negative with respect to the primary mass, the initial level is positive on the primary basis. In the lower atomic mass range this initial level decreases in linear relation with the magnetic rotational displacement and it can be represented as 12-a, where a is the effective number of magnetic units above the rotational datum, 1-0-0. Beginning with this positive initial level 12-a, each successive element in the range up to element 26 adds one secondary mass unit (-0.0025 amu) and the deviation from 1.00 m amu for an isotope of primary mass m and atomic number n in this range can be expressed as
mass deviation = 0.0025 [12 - (a + n) ] amu (138)
Table CVIII shows how the experimental mass deviation of the most abundant isotope of each of the elements 3 to 26 inclusive compares with the value calculated from equation 138. Hydrogen is excluded from the application of this equation as its secondary mass is positive. Helium is also excluded because the deviation thus computed exceeds the maximum possible deviation as indicated in the previous discussion. Beginning with element number three, lithium, the measured values are in general reasonably close to the calculated center of the zone of variation as far as element 26, iron. Beyond iron there is a greater degree of uncertainty, both in the experimental values and in the theoretical relation, but it appears that the deviation remains in the neighborhood of the iron value, with the usual plus or minus variations in the individual isotopes, for another range of approximately the same number of elements. After this it decreases and returns to positive values in the very heavy elements. The effect of this secondary mass pattern is to make both the growth process in the light elements and the decay process in the heavy elements exothermic.
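Equation 138 can be put in executable form as follows. Note that the value a = 4 used in the sample evaluation is an assumed figure for illustration only; the element-by-element values of a belong to Table CVIII, which is not reproduced here.

```python
# Equation 138: center of the secondary-mass zone of variation for
# elements 3-26, in amu.  a is the effective number of magnetic units
# above the rotational datum 1-0-0.
def mass_deviation(a: int, n: int) -> float:
    """Deviation from 1.00 m amu for an isotope of atomic number n."""
    return 0.0025 * (12 - (a + n))

# Illustrative evaluation only -- a = 4 is an assumed value, not one
# taken from Table CVIII:
print(f"n = 26, a = 4: {mass_deviation(4, 26):+.4f} amu")
```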
Many investigators have devoted considerable effort to the study and analysis of atomic transformations of this type which might possibly serve as a source of the energy generated in the sun and other stars. The general conclusion has been that the most likely reactions are those in which hydrogen is converted into helium, either directly or through a cycle of intermediate reactions. Hydrogen is the most abundant element in the stars and this process, if it is actually in operation, constitutes an energy source of sufficient magnitude to account for the observed energy production.
The existence of a fusion process of this kind is entirely compatible with the basic principles of this work, but the hypothesis that it is operative under the conditions prevailing in the stellar interiors and is the primary source of the energy of the stars is not consistent with the principles that have been developed in the preceding pages. A serious objection is that reactions of this kind are reversible and there is no adequate reason why the reaction between helium and the hydrogen isotope H1 should proceed preferentially in the direction H → He. The situation with respect to the H2 and H3 isotopes is entirely different. These isotopes are unstable under terrestrial or similar conditions and are therefore subject to reactions which convert them into stable isotopes. Such reactions take place spontaneously but can be speeded up by application of additional kinetic energy and if H2 or H3 are present in the stars in substantial quantities a process of conversion to He4 could be an important energy source. Available evidence indicates, however, that most of the hydrogen in the stars is in the H1 state, as would be expected from the probable level of magnetic ionization, and H1 is just as stable as helium.
At a very high temperature the chances of an atomic break-up and rearrangement are improved but this does not necessarily increase the proportion of helium in the final product; on the contrary, we have seen that a greater kinetic energy results in more fragmentation and it therefore favors the smaller unit rather than the larger. Furthermore, an increase in the amount of space displacement (thermal motion) is not conducive to building up time displacement (mass). The two principal processes which have been postulated as stellar energy sources begin with the reactions H1 + H1 → H2 and C12 + H1 → N13 respectively. These reactions involve combination of stable isotopes to form unstable isotopes and combination of smaller units to form larger units. In both of these respects the direction of the proposed reactions is in direct opposition to the normal probabilities under the prevailing conditions.
A second objection to the hypothetical fusion reaction is that it is a “dead end” process, and as such is open to criticism from both the theoretical and the observational standpoints. The Fundamental Postulates definitely require all basic physical processes to be cyclic and any one-way process such as the conversion of stellar hydrogen to helium violates this general principle. Also if this hypothesis were valid there should be some evidence of the existence of helium-rich structures, representing the later stages of the hypothetical stellar evolution. No such evidence is available. It is true that there are some peculiar structures, the white dwarf stars in particular, for which no satisfactory explanation has heretofore been found and which have therefore been postulated as the victims of hydrogen exhaustion and “collapse.” It should be understood, however, that this is pure speculation and there is no actual evidence that these white dwarf stars are rich in helium. There is, in fact, some collateral evidence that will be discussed in a later section which indicates that the white dwarfs contain much less helium than the average star, rather than more. At that time it will also be shown that the white dwarfs are not abnormal and that they are in the direct line of stellar evolution.
When the fusion process is thus ruled out as the source of stellar energy the question then arises as to what alternative energy generation process is operative under the existing conditions. Since the most distinctive physical condition within the stars is the very high temperature, this question reduces to the problem of determining what happens to matter under extreme temperature conditions. The answer to this problem is evident from the nature of the atoms of matter: if the temperature continues to rise the total space displacement, thermal energy and its equivalent, must eventually reach a destructive thermal limit.
There have been many instances in the preceding pages in which a limiting magnitude has been established for the particular quantity under consideration. The electric ionization of atoms, for instance, is limited to the equivalent of the net rotational displacement; that is, the element magnesium, which has 12 net effective electric rotational displacement units (equivalent basis) can take 12 units of electric vibrational displacement (ionization) but no more. Similarly we found that the maximum rotational base of the thermal vibration in the solid state is the primary magnetic rotation of the atom. Most of the limits thus far encountered have been of this type, which we may designate as the non-destructive limit. When such a limit is reached, further increase of this particular quantity is prevented, but there is no other effect.
We are now dealing with some physical phenomena which are subject to a different kind of limit: a destructive limit. The essential difference between the two stems from the fact that the phenomena to which the non-destructive limits apply are subsidiary properties, not the basic motions which are the essence of the unit under consideration. The electric rotation, for instance, is purely a supplement to the basic magnetic rotational motion of the atom, and reaching the electric ionization limit does not in any way imperil the existence of the atom itself. On the other hand, if the variable motion is in direct opposition to the basic motion of the system, the attainment of equality between the two motions has a deeper significance. When an oppositely directed velocity -a is superimposed on a rotational velocity +a, the net total is zero and there is no longer any rotational velocity at all. If this is the primary rotation of an atom or sub-material particle, or any full unit of that primary rotation, the existence of the rotating displacement unit automatically terminates and the displacement reverts to the linear basis (radiation).
A simple example is provided by the combination of a positron and an electron. The positron can combine in a normal manner with any other kind of a material or sub-material particle because the addition product still has an effective rotational displacement and therefore exists as a particle. When it combines with the electron, however, the resulting net rotational displacement is zero and the addition product is not a rotating system but a pair of oppositely directed photons. Since each particle entering into this reaction is only a single displacement unit, the destructive limit is reached by combination with a single unit of the opposite kind and the result is the complete destruction of both particles. In the more general case of the atomic rotational combinations, the basic rotation consists of several magnetic displacement units. Each of these is an independent entity and when the opposing displacement reaches equality with one of the basic units this unit is destroyed and the element is transformed into an element of a lower magnetic group, the neutralized displacement unit being converted into linear motion (energy).
The magnetic rotation is a two-dimensional motion with a time displacement, t2/s2. As indicated in the preceding discussion, any such n-dimensional motion is the equivalent of a specific amount of one-dimensional motion, t/s or energy. The thermal motion of the atom is an equivalent space displacement, the direct opposite of the displacement of the magnetic rotation. At the higher temperatures electric ionization also occurs, and since this ionization involves the addition of more space displacement, the total space displacement in opposition to the time displacement of the magnetic rotation is the sum of the thermal displacement and the electric ionization displacement. The thermal energy in fully dissociated gases is independent of the atomic mass but the maximum ionization level increases with atomic number, hence at extreme temperatures where all substances are completely ionized the heavier the atom the greater the total space displacement. This means that a heavier atom reaches the limiting value of the space displacement at a lower temperature. When the temperature of a star reaches the level which represents the destructive thermal limit for the heaviest atom present, one unit of the magnetic rotation of this atom is neutralized and the corresponding rotational displacement (mass) is converted into linear displacement (energy). As the rise in temperature continues one after another of the elements meets the same fate in the order of decreasing atomic number.
Here we have not only a source of practically unlimited energy but also just the sort of process which we found is necessary to account for the scarcity of heavy elements. This process of destruction of primary mass is, of course, purely theoretical. There is no direct experimental or observational evidence that such a neutralization of mass actually does take place, except to the extent that the observed neutralization of the electron and positron rotations can be extrapolated to apply to the atomic situation. It should be remembered, however, that all of the material in this presentation is theoretical; the specific objective of the work is to develop a theoretical universe from the two Fundamental Postulates, and all of the phenomena and relations previously described are theoretical deductions. The only difference is that it has usually been possible heretofore to verify each successive step by comparison with observation or measurement. We cannot verify the validity of this particular step by any direct method and we will have to develop the theory further before making the usual comparisons, but since the whole theoretical structure is a fully integrated unit a satisfactory correlation at a higher level should confirm the validity of the intervening steps.
When we turn to the second of the two destructive limits which should be considered at this time, the magnetic ionization limit, the gaps in the correlation between theory and observation are still greater. Here again, however, the theory as outlined consists of a series of straightforward deductions from principles whose validity has been established in the preceding pages, and wherever comparison with the results of observation or measurement can be made the correlation is satisfactory. The gaps are there only because we have no experimental knowledge at all in certain areas. In discussing the nature of the limiting value of the thermal energy we are dealing with a limit of which we have no direct observational evidence. In the case of the limiting magnetic ionization level it is not only the existence of the limit that cannot be verified experimentally at present; we have no actual evidence of the ionization level itself, except to the extent that the existence of isotopes can be accepted as confirmation. It is clear, however, that both magnetic ionization and a limit thereto are required by the Fundamental Postulates which define the theoretical universe that is being developed herein, and since (1) there are no observations that contradict these findings or the consequences thereof, and (2) the extension of these concepts in the preceding and subsequent pages leads to many conclusions which are fully confirmed by observation, the verification would appear to be as satisfactory as can be expected in the present state of experimental knowledge.
We have found that the accumulation of charged neutrinos within a material aggregate leads to the magnetic ionization of the atoms of matter, and we have further found that the increase in the neutrino population is cumulative, so that the magnetic ionization level increases with the age of the aggregate and ultimately reaches the destructive limit. In general, the magnetic ionization limit is the same kind of a phenomenon as the thermal energy limit. The points brought out in the discussion of the latter are therefore equally applicable to the magnetic limit and will not need to be repeated. There is one significant difference which should be pointed out. The magnetic ionization of the atoms is in time and its direction therefore coincides with that of the atomic rotation rather than opposing it. For this reason there is no level at which the displacements add up to zero in the manner of the space and time displacements at the thermal energy limit. As we have previously seen, however, there is an upper limit to the rotational displacement, which is in effect another physical zero point, and increasing magnetic ionization approaches this upper zero point rather than the mathematical zero. Attainment of this upper limit destroys the atomic rotation and terminates the existence of the particular element just as effectively as reaching the lower zero point, but there are some important differences in the details of the two processes which we shall consider in connection with some of the matters that will be examined later.
The role of radiation in the atomic transformation processes is generally confined to carrying off part of the excess energy released by the reaction. It is possible, however, for the radiation to supply the energy required for initiation of such a process and there is a general class of reactions such as
Be9 + γ → Be8 + n1
8-1 + γ → 8-0 + 0-1
Another rotational effect which can be produced by radiation is the creation of a positron-electron pair by two oppositely directed photons: the inverse of the neutralization process previously mentioned.
A more general result of the interaction between radiation and the rotational systems is the conversion of radiant energy into translational energy or vice versa. As the simplest of all space-time phenomena, radiation is one of the major constituents of the physical universe and every material and sub-material unit is subjected to a ceaseless bombardment by the omnipresent photons. Some of this radiation is reflected and merely undergoes a change in direction without losing its identity as radiation. Some is able to pass through certain material substances in much the same manner as through open space, in which case we say that these substances are transparent to the particular radiation. Another portion is absorbed and in this process the radiation is transformed into a different type of motion.
We will look first at the situation which results when the radiation strikes electrons within the material aggregate rather than colliding with atoms of matter. The electron already possesses some translatory velocity due to energy interchange with the thermal motion of the matter in which it is located. A portion of the energy absorbed from the radiation is utilized in bringing this velocity up to the ionizing level. On being ionized the electron acquires the ability to move freely in space and the balance of the radiant energy becomes kinetic energy of translation in the time-space region, which means that the newly charged electron is ejected from the material aggregate and moves off into space. This emission of electrons on exposure to radiation is known as the photoelectric effect.
If we call the energy of the photon E and the ionizing energy k, the maximum kinetic energy of the ejected electron is E - k. Energy, E, is t/s, but in the time-space region where velocity is below the unit value, the effective value of s in primary processes is unity, hence E = t. The unit value of s has a similar effect on frequency (velocity) s/t, reducing it to 1/t. The conversion factor which relates frequency to energy is therefore t divided by 1/t or t2. Since the interchange in the photoelectric effect is across the boundary between the time-space region and the time region it is also necessary to introduce the dimensional factor 3 and the regional ratio, 156.44. We then have
E = (3/156.44) t2v (139)
In cgs units this becomes
E = (3/156.44) × (0.1521×10-15)2 / (6.670×10-8) × v = 6.648×10-27 v ergs (140)
The coefficient of this equation is Planck’s Constant, commonly designated by the letter h. It is ordinarily considered to have the dimensions ergs × seconds, but these dimensions have no actual physical significance. In reality this is merely a conversion constant relating velocity s/t to energy t/s, and it has the dimensions t2/s2 since
s/t × t2/s2 = t/s.
These reversing dimensions t2/s2 then reduce numerically to t2/1 because of the unit value of s. The latest experimental values of the constant are in the neighborhood of 6.624×10-27. The amount of difference between this and the calculated value indicates the possibility of a secondary effect similar to those entering into the mass relationships, but this is one of the “fine structure” details which has not yet been investigated.
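The numerical evaluation can be sketched as follows, taking the conversion constant as 6.670×10-8, the value the arithmetic requires; the small residual difference from the printed coefficient 6.648×10-27 is rounding in the quoted constants:

```python
# Evaluation of the coefficient of equation 140 (Planck's constant
# in the present system), from the constants quoted in the text.
t_unit = 0.1521e-15     # natural unit of time, sec
conversion = 6.670e-8   # cgs conversion constant, as the arithmetic requires

h = (3 / 156.44) * t_unit**2 / conversion
print(f"h = {h:.3e} erg-sec")  # ~6.65e-27, vs. measured 6.624e-27
```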
Substituting the value hv for E in the expression E - k, we have the Einstein equation for the maximum kinetic energy of the photo-electrons.
Emax = hv - k (141)
If the absorption of energy from the radiation happens to take place under such conditions that none of it is lost to other electrons or to matter while the electron is escaping from the material aggregate, the kinetic energy will be the full amount indicated by equation 141. More often, however, there are contacts on the way out in which some of the energy is transferred to other units, and the actual kinetic energies therefore range from the maximum down to zero. It is, of course, also possible that the loss of energy on the way may be sufficient to reduce the total below the ionizing level, in which case the charge is given up and no photoelectron appears. In this case the entire energy of the photon becomes thermal energy of the material aggregate.
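A short numerical illustration of equation 141 follows. The 3.2 eV work function is the theoretical minimum quoted below for beryllium; the 5 eV photon energy is an arbitrary illustrative choice, not a value from the text.

```python
# Equation 141: maximum kinetic energy of a photoelectron, E_max = h*v - k.
h = 6.648e-27            # erg-sec, coefficient from equation 140
erg_per_ev = 1.602e-12   # ergs per electron-volt

photon_ev = 5.0          # assumed photon energy, eV (illustrative)
k_ev = 3.2               # work function of beryllium, eV (from the text)

v = photon_ev * erg_per_ev / h       # corresponding frequency, 1/sec
e_max = h * v / erg_per_ev - k_ev    # back to eV
print(f"E_max = {e_max:.1f} eV")     # 1.8 eV
```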
The term k, the energy required for ionization of the electron and escape from the material aggregate, is a characteristic of the substance in which the electron is situated and it is known as the work function. No detailed study of this quantity has been undertaken, but it is obvious from mere inspection that it is a simple function of the rotation in the dimension in which the electron escapes, and it can be expressed empirically as
k = 2.25 n½ electron-volts (142)
The full rotation, including the initial level, is effective in the magnetic dimensions but the value of n in the electric dimension is the displacement only. Normally the escape takes place in the minimum dimension, the path of least resistance, but the experimental conditions may be such as to compel the electrons to leave by way of one of the other dimensions, in which case the effective value of the work function is higher. The minimum theoretical value for beryllium, for example, is 3.2, which corresponds to the electric displacement 2, and the experimental results normally range from 3.10 to 3.30, but the value 3.92 has also been obtained. We might be inclined to dismiss this as an error except that it agrees with the theoretical value 3.9 for an escape in the primary magnetic dimension. Similarly, some of the results for iron agree with the secondary magnetic value, 3.9, while a number of others are reasonably close to the primary magnetic figure, 4.5. There are many other elements for which the experimental results are grouped around two of the theoretically possible values and it seems apparent that in some instances the experimental conditions must inhibit escape in the minimum dimension.
Table CIX shows how the observed work functions of the elements compare with the theoretical values derived from the atomic rotations. In this table the letters c and a refer to the electric and magnetic dimensions respectively. The entry of the rotation into the second space-time unit in the 4A and 4B groups reduces the value of n by one-half in some instances, and these modified values are identified in the table by the symbols a* and c*. It should also be noted that where the value 3 appears as the magnetic rotation of one of the higher group elements this is the inverse of the actual rotation, 5.
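Equation 142 reproduces the specific values just quoted for beryllium and iron, as the following sketch shows (n = 2, 3, 4 being the dimensional alternatives discussed above):

```python
# Equation 142: k = 2.25 * sqrt(n) electron-volts.
from math import sqrt

def work_function(n: int) -> float:
    return 2.25 * sqrt(n)

for n in (2, 3, 4):
    print(f"n = {n}: k = {work_function(n):.2f} eV")
# n = 2 gives 3.18 (quoted as 3.2), n = 3 gives 3.90, n = 4 gives 4.50
```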
If the radiation strikes an atom of matter rather than an electron and impinges in such a manner that it is absorbed instead of being reflected, the full amount of radiant energy becomes thermal energy. This action is easily visualized, but the nature of the reverse process, the emission of radiant energy by the vibrating atom, is not self-evident and requires some further explanation. Here again the Principle of Inversion is the controlling factor. From this principle we find that the thermal motion of the atoms of matter is in equilibrium with a similar vibratory motion of the space units in which the atoms are located. Since the positions of the material atoms are determined by the gravitational forces, the atomic thermal motion likewise occupies gravitational positions and cannot progress with space-time. The coexisting space motion, on the other hand, is not affected by gravitation and as space-time progresses it carries this vibrational motion of the space units along as radiation. In order to restore the equilibrium required by the Principle of Inversion, motion is then transferred from the atoms to the new space units with which they are now associated.
These space units in turn carry the vibratory motion forward as radiation and the whole process is repeated over and over again indefinitely.
This situation is quite similar to that which we encountered in our examination of electrolysis. We found that the ions in the electrolyte are unable to carry their charges into the anode in response to the differential forces which are operative in this direction, since matter cannot penetrate matter except under special conditions, but the associated space motion is not subject to this limitation and the space units leave the ions and move into the anode, taking with them the rotational motion of the ionic charges.
In the interior of a solid or liquid aggregate the radiation from any one atom is promptly reabsorbed by its neighbors and no net change in the total thermal energy of the aggregate results. The radiation from the outside surface is, however, lost to the surroundings and to the extent that this loss is not counterbalanced by incoming radiation from other sources the temperature of the aggregate falls.
The rate at which energy is radiated from a surface depends on two factors: the number of space units leaving the surface per unit of time, and the energy carried by each unit. The first of these factors is simply the velocity of the space-time progression. From the Principle of Equivalence we may deduce that it can be expressed as one unit of radiation per unit of area per unit of time, but the space-time progression is not effective in the direction of the basic linear frequency, 1/9 of the total, which reduces the effective rate to 8/9 unit of radiation per unit of area per unit of time. From equation 72 the atomic thermal energy is proportional to the fourth power of the temperature. The energy of the associated space unit is the same value multiplied by the reversing dimensions, t2/s2, or (0.3334×10-10)2 in cgs units. The radiation rate can then be expressed as
Erad = (8/9) × ((0.3334×10-10)2 × 5.001×10-4 ergs) / ((2.914×10-8 cm)2 × 0.1521×10-15 sec × (510.16 deg)4)
= 5.655×10-5 ergs / sec cm2 deg4 (143)
This is the Radiation Constant or Stefan-Boltzmann Constant.
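The factors quoted in equation 143 can be assembled numerically as a check; the result agrees with the printed value to within rounding of the quoted constants:

```python
# Equation 143: the radiation rate assembled from the factors
# given in the text (all values in cgs units as quoted there).
numerator = (8 / 9) * (0.3334e-10)**2 * 5.001e-4          # ergs
denominator = (2.914e-8)**2 * 0.1521e-15 * 510.16**4      # cm2 sec deg4

e_rad = numerator / denominator
print(f"{e_rad:.3e} ergs / sec cm2 deg4")  # ~5.65e-5, vs. the quoted 5.655e-5
```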
The frequency or vibrational velocity of the escaping space unit, the photon, is determined by the characteristics of the atomic vibrational motion from which the motion of the space unit was derived. At zero temperature the thermal vibration period is infinite and the equivalent thermal velocity 1/t2 is 1/∞ or zero. The addition of thermal energy, which is space displacement, is equivalent to reducing the time displacement. The vibration period and the corresponding equivalent thermal velocity therefore decrease with increasing temperature up to a limit of t = 1, at which point the molecule has reached the boundary of the time region and is ready to make the transition into the time-space region. Since it is the thermal velocity (with some modifications to be considered later) which is radiated, the distribution of radiated frequencies or spectrum emitted by time region structures comprises all values of 1/t2 from 1/∞ to a maximum which depends on the temperature, the absolute maximum being 1/1. Because of the small interval between these values of 1/t2 in the solid and the modifications to which the frequencies are subjected in escaping from the dense solid or liquid structures the actual distribution of the frequencies is essentially continuous, and we observe a continuous spectrum.
At the unit level, the boundary between the time region and the time-space region, there is a directional reversal and further additions of motion of the same nature, motion which is equivalent to thermal energy, go into the reverse velocity component of a compound velocity: velocity of a velocity (mass). The total equivalent thermal energy in the two regions is the sum of the regional components, but since the velocity in the time-space region is inversely directed, the resultant velocity (frequency) of the radiation is the difference between the time region and the time-space region velocity-energy components.
As we have found, velocity in the time region is in equilibrium with energy in the time-space region. The latter in turn is proportional to the square of the time-space region velocity. Where the time displacement is b, the velocity is 1/b and the energy per mass unit is equal to 1/b2. This is the time-space contribution to the radiation frequency and since it is in the opposite direction from the time region component 1/a2 the resultant is 1/a2 - 1/b2. In order to maintain a positive value of the resultant it is necessary that b exceed a by at least one unit, and the minimum value of b is therefore 2.
It will be noted that the velocity interval for the normal range of temperatures in the time-space (gas) region is relatively large; that is, the difference between 1/2² and 1/3² involves a reduction of about 55 percent. Furthermore, there is comparatively little interference on the way out of a gas aggregate. Instead of a continuous spectrum the gas therefore has a line spectrum: a regular succession of discrete frequencies resulting from the various possible values of the displacements a and b.
In the case of hydrogen there are no modifying factors and the frequencies can be obtained directly from the expression 1/a² - 1/b², utilizing the value of unit frequency previously employed, which we will designate R, in accordance with the usual practice.
vH = R(1/a² - 1/b²)    (144)
Placing a equal to 1 and assigning the successive values 2, 3, 4, etc. to the displacement b, we obtain the Lyman series of spectral lines. Although the time region displacement a can reach unit value before gas motion starts, this is not mandatory and other series based on higher values of a also occur, becoming less and less probable as a increases beyond 2. When a = 2 and b has the values 3, 4, 5, etc. the well-known Balmer series, the first spectral series to be identified, is the result. With a = 3 we obtain the Paschen series, a = 4 gives us the Brackett series, and so on.
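The named series can be illustrated numerically from equation 144. The sketch below takes R as the conventional measured Rydberg constant for hydrogen, expressed as a wavenumber (an assumed value supplied for illustration, not one derived in the text), and converts the first member of each series to a wavelength:

```python
# Illustrative sketch: hydrogen series lines from v = R(1/a^2 - 1/b^2).
# R is taken as the conventional Rydberg constant for hydrogen, written
# as a wavenumber (an assumed, measured value, not derived in the text).
R = 1.0968e7  # per metre

def wavenumber(a, b):
    """Wavenumber of the line with time region displacement a and
    time-space displacement b (b must exceed a)."""
    return R * (1.0 / a**2 - 1.0 / b**2)

def wavelength_nm(a, b):
    """Wavelength of the same line in nanometres."""
    return 1e9 / wavenumber(a, b)

# First member of each named series:
lyman_alpha = wavelength_nm(1, 2)    # ~121.6 nm (ultraviolet)
balmer_alpha = wavelength_nm(2, 3)   # ~656.5 nm (the visible H-alpha line)
paschen_alpha = wavelength_nm(3, 4)  # ~1875.6 nm (infrared)
```

With a = 2 and the successive values b = 3, 4, 5, … this reproduces the familiar visible Balmer lines.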
The frequencies for ionized helium can also be calculated from equation 144 by introducing the factor 4.
vHe = 4R(1/a² - 1/b²)    (145)
In this case the normal one unit of space in the expressions 1/a² and 1/b² has been increased to two by the addition of one unit of rotational space vibration. The velocity 1/a², which is actually (1/a)², now becomes (2/a)², or 4(1/a)². A similar change takes place in the b component. To generalize, we may say that ionization increases the spectral frequencies by the factor e², where e is the total ionization based on the normal state as unity. The generalized equation for the hydrogen type spectrum is therefore
v = Re²(1/a² - 1/b²)    (146)
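In natural units (R = 1) the generalized equation can be checked directly; the small sketch below merely verifies that an ionized-helium frequency (e = 2) is four times the hydrogen frequency with the same displacements a and b:

```python
def frequency(a, b, e=1, R=1.0):
    """Generalized hydrogen-type frequency v = R e^2 (1/a^2 - 1/b^2),
    with R set to 1 so the result is in natural (unit-frequency) terms."""
    return R * e**2 * (1.0 / a**2 - 1.0 / b**2)

# e = 1 recovers equation 144; e = 2 recovers the factor 4 of equation 145.
ratio = frequency(1, 2, e=2) / frequency(1, 2)
print(ratio)  # 4.0
```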
When we examine the velocity relations of other elements we find that this hydrogen type spectrum is not characteristic of the normal atom but represents a special case in which the effects of all motion other than thermal are eliminated by the absence of any rotational motion with a force integral exceeding unity. In the normal atom effective rotational motions do exist and the radiation frequencies are modified very substantially by these coexisting velocities.
In order to understand why this should be the case we need only to recognize that the velocity which detaches itself from the atom as radiation is necessarily the absolute velocity relative to space-time, not the thermal velocity alone. This is similar to the situation which exists when a projectile is fired from a rapidly moving airplane. The force of the explosion imparts a certain definite velocity to the projectile irrespective of the motion of the plane but the velocity relative to the earth’s surface, which in this case is analogous to the velocity relative to space-time, is not this explosion-generated velocity but the resultant of the latter plus or minus the effective component of the velocity of the plane. In order to determine the velocity which will be radiated we must likewise modify the purely thermal velocity by adding or subtracting the effective velocity components of the other motions of the atom.
In the earlier stages of this present project a considerable amount of work was done toward the development of theoretical methods of calculation of the individual values of the spectral terms, on the assumption that this would open up some avenues of approach to the solution of the general problems of structure. It ultimately became apparent that the spectra contain too much information; each individual term is a composite of the effects of all of the different motions in the particular atomic system, and in order to sort out the various components it is practically essential to determine the nature and general order of magnitude of each item in advance. Instead of using the spectral relationships as an aid in the study and analysis of the general questions of atomic structure, it became evident that these general structural principles would have to be used to calculate the spectral terms. For this reason no recent work has been done on the spectra, and since it has not appeared advisable to delay this presentation long enough to bring the previous work up to date, this material will be omitted.
It may be mentioned, however, that even a preliminary analysis is sufficient to indicate that the numerical values of the spectral terms conform to the general relationship that would be expected on theoretical grounds; that is, each term of the spectral combinations is one of the terms of equation 146 (the thermal motion) plus or minus the effective components of the other motions of the atom, including the rotation, the basic linear vibration, the rotational vibration, the secondary motion of the associated space unit, any electric or magnetic motion that may be present, etc. The splitting of the various terms under certain conditions is obviously due to the fact that the directions of these other motions are not necessarily fixed with reference to the direction of the thermal motion and the corresponding frequency increments may be either plus or minus.
In our original examination of the phenomenon of radiation we noted that the photons have no translatory motion of their own and are stationary with respect to space-time. They are, however, carried along by the progression of space-time itself and therefore have an apparent velocity equal to the space-time ratio, which in the absence of displacement is unity. This unit velocity, as we have seen, is the condition of rest in the physical universe: the datum from which all physical activity starts. Where time or space displacements are present so that the space-time ratio is no longer unity the apparent velocity of the radiation is modified accordingly. Since matter is a time displacement, the space-time ratio involved in the passage of radiation through matter or matter through radiation is not unity but a modified value resulting from the addition of the time displacement to the time component of the original unit velocity. Addition of more time to the ratio s/t decreases the numerical value and the apparent velocity of radiation in matter is therefore less than unity.
One of the important consequences of the velocity change is a bending of the path of the radiation in passing from space to matter or from one material medium to another. The amount of this bending or refraction is measured by the ratio of the sine of the angle of incidence to the sine of the angle of refraction, or the equivalent ratio of the velocities in the two media. It is called the index of refraction and is represented by the letter n.
It will be noted that on this basis the index of refraction relative to a vacuum is equal to the total time associated with unit space in the motion of the radiation. For present purposes we will be interested in the time displacement rather than in the total time and since the time per unit space in undisplaced space-time is unity, the displacement is n - 1. The displacement due to the presence of any specific atom or quantity of matter is independent of temperature but there is a temperature variation in the refractive index due to the accompanying change in density. We may eliminate this effect by dividing each displacement by the corresponding density, obtaining a temperature-independent quantity (n - 1)/d.
The refractive displacement is the sum of two components, one due to the motion of matter through radiation (the apparent translatory motion of the radiation) and the other due to the vibratory motion of the radiation through matter. In the first of these we have a simple motion at unit velocity in the time region. We have previously determined that the three-dimensional distribution of motion in the time region reduces the component parallel to one-dimensional time-space region motion to 1/8 of the total, and the vibratory nature of the motion of matter in the time region introduces an additional factor of ½. We therefore find the displacement on a time-space region basis to be 1/16 of the effective time region displacement units.
If the magnetic rotational displacement is unity the refractive displacement is also unity, but where the rotational displacement is n the radiation travels through only one of the n displacement units and the effective refractive displacement is 1/n. If we represent the average value of 1/n as kr, the refractive displacement due to the translatory motion is
(n-1)/d (translation) = kr/16    (147)
When the density is expressed in g/cm³ rather than in natural units, equation 147 must be multiplied by 10.53, the coefficient of the volume equation 53. We then have
(n-1)/d (translation) = 0.6583 kr    (148)
In the second refractive component, that due to the vibratory motion of the radiation, we are not dealing with unit velocity but with a lower velocity (frequency), and the refractive effect is reduced by the ratio v/1. This component is also modified by the geometric relationship between the path of the radiation and the structure of the material medium through which it passes. We will call this modifying factor the vibration factor, Fv. The vibrational refraction is then
(n-1)/d (vibration) = 0.6583 v Fv kr    (149)
Adding equations 148 and 149,
(n-1)/d = 0.6583 kr (1 + v Fv)    (150)
The wavelength most commonly used for refraction measurements is that of the sodium D line, 5893×10⁻⁸ cm. This is equivalent to 0.0774 natural units of frequency. Substituting this value for v in equation 150 we obtain
(n-1)/d = 0.6583 kr (1 + 0.0774 Fv)    (151)
Unless otherwise specified, the symbol n will refer to nD, the index for the sodium D line, wherever it is used in the subsequent paragraphs.
The value of Fv applicable to a number of the most common organic series is 0.75. For convenience in dealing with these compounds we may simplify equation 151 to
(n-1)/d = 0.6965 kr    (152)
This equation shows that evaluation of the refractive index for compounds of this class is merely a matter of determining the refraction constant kr. As mentioned in the discussion of diamagnetic susceptibility, the refraction constant is the reciprocal of the effective magnetic rotational displacement: the total displacement minus the initial level. The situation is, however, complicated to some extent by a variability in the initial levels, especially those of the two most common elements in these compounds: carbon and hydrogen. In Table CX the variable factor is shown in the column headed “Dev.,” the numerical values there listed being the total deviation from the normal initial levels of the component elements in the particular group of compounds, as shown in the group sub-heading. The deviations are expressed in 1/9 units and the figure as given indicates the number of hydrogen atoms (or equivalent rotational mass units) in which the initial level has been shifted 1/9 unit, upward unless otherwise indicated.
In the acids, for example, the rotational displacement of the oxygen atoms and the carbon atom in the CO group is 2, while that of the hydrogen atoms and the remaining carbon atoms is 1. The normal initial level is 2/9 in all cases, and the normal refraction factors of the individual mass units are therefore .389 for the displacement 2 atoms and 0.778 for those of displacement 1. All of the acids from acetic to enanthic inclusive have normal initial levels and the differences in the individual refraction factors are due entirely to a higher proportion of the .778 units as the size of the molecule increases. The normal initial level of the hydrogen in the corresponding hydrocarbons, however, is only 1/9 and when the chain becomes long enough to free some of the hydrocarbon groups at the positive end of the molecule from the influence of the acid radical at the negative end, these groups revert to their normal initial levels as hydrocarbons, beginning with the CH3 end group and moving inward. In caprylic acid the three hydrogen atoms in the end group have made the change, those in the adjoining CH2 group do likewise in pelargonic acid, and as the length of the molecule increases still further the hydrogen in additional CH2 units follows suit.
Table CX

O .389   CO .389   C .778   H .778

Compound        Dev.   kr    0.697 kr  Observed  120 kr  Observed
Methyl acetate  -3     .556  .387      .385      66.7    65.5

C .778   H .889
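The calculated columns of Table CX can be reproduced from equations 151 and 152. The sketch below is only a check of that arithmetic, borrowing the methyl acetate refraction constant kr = .556 from the table:

```python
def n_minus_1_over_d(kr, Fv=0.75):
    """Equation 151: (n-1)/d = 0.6583 kr (1 + 0.0774 Fv)."""
    return 0.6583 * kr * (1 + 0.0774 * Fv)

# With Fv = 0.75 the coefficient reduces to the 0.6965 of equation 152:
coefficient = 0.6583 * (1 + 0.0774 * 0.75)   # 0.6965 to four places

# Methyl acetate, kr = .556 (Table CX): calculated .387, observed .385.
value = n_minus_1_over_d(0.556)              # 0.387 to three places
```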
If the compound has side branches or more than one main branch (as in the ethers, diamines, etc.) the normal sequence of deviations, 3, 5, 7, … n, may be modified to some such order as 3, 6, 8, … n. The exact point in the series at which any particular change takes place varies to some extent between the related groups of compounds because of the geometric characteristics of the individual molecules.
In some cases small negative deviations from the normal initial levels are indicated. The explanation is obvious in such families as the paraffins, where the normal initial level of the hydrogen atoms is only 1/9 unit. The -3 deviation in 2,7-dimethyloctane, for example, simply means that three hydrogen atoms take the 2/9 initial level. The reason for the negative deviations in some of the esters and a few other compounds in which the normal initial levels are already at the theoretical 2/9 maximum is not entirely clear, but the carbon atoms in the negative components of these compounds are on the borderline between rotation one and rotation two, and the negative deviations are probably connected with the rotational state of the carbon atoms.
From equation 150 it is apparent that the magnitude of (n-1)/d varies with the frequency of the radiation, and that the difference between the values corresponding to any two specified frequencies is likewise variable because of differences in the rotational characteristics of the various compounds, as expressed by the factors kr and Fv. The variability in the refractive index resulting from a change in the frequency is the dispersion. It is generally measured as the difference between the refractive index for a wavelength of 6563×10⁻⁸ cm (the C line) and that for a wavelength of 4861×10⁻⁸ cm (the F line). The corresponding frequencies in natural units are 0.0695 and 0.0938 respectively, and the frequency difference F - C is 0.0243. Substituting this value in equation 149 and multiplying by 10⁴ to conform to the usual units for expressing dispersion we obtain the general dispersion equation,
F - C = 160 Fv kr    (153)
For the class of compounds thus far examined, those in which the value of Fv is constant at .75, equation 153 reduces to
F - C = 120 kr    (154)
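The dispersion column of Table CX follows the same pattern; this sketch evaluates equations 153 and 154, again borrowing the methyl acetate constant kr = .556 from the table:

```python
def dispersion(kr, Fv=0.75):
    """Equation 153: F - C = 160 Fv kr."""
    return 160 * Fv * kr

# The coefficient 160 comes from 0.6583 x 0.0243 x 10^4 (equation 149
# evaluated at the F - C frequency difference, scaled to the usual units):
coefficient = 0.6583 * 0.0243 * 1e4   # ~159.97, rounded to 160

# With Fv = 0.75 equation 153 becomes the 120 kr of equation 154.
# Methyl acetate, kr = .556: calculated 66.7 against the observed 65.5.
value = dispersion(0.556)             # ~66.72
```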
Dispersions have been calculated for all of the compounds of Table CX and these values are listed in the table, together with the corresponding experimental figures where the latter are available. The value of kr is so nearly the same for all of the paraffins that all of these compounds have dispersions in the neighborhood of 100. In the other organic families to which equation 154 is applicable the presence of elements of effective rotational displacement two lowers the refraction constant and reduces the dispersion accordingly. This effect is, of course, most noticeable in the lower members of each series and the dispersion rises toward the hydrocarbon values as the molecule lengthens.
Theoretically it should have been possible to work out all of the foregoing development of the relations between the various components of the physical universe directly from the Fundamental Postulates by mathematical and logical processes, without the necessity of checking the results against the actual properties of the existing universe at any stage of the development, and perhaps someone might have had the breadth of vision and the necessary infallibility to accomplish the task in this manner. As the work was actually performed, however, each additional point that was established merely set the stage for a limited advance into new territory, and a long period of checking against experimental results and reconciling the inevitable discrepancies was almost invariably required before the forward position was sufficiently well consolidated to support a new advance.
As indicated from time to time in the preceding pages there are a number of important physical properties and relationships which had to be omitted from this initial presentation because the detailed analyses of these subjects are still incomplete, and extending onward from the major relations covered in this work there is a never-ending proliferation of subsidiary phenomena. In all of these areas, however, the general nature of the answers is clearly indicated by the principles already developed, and the remaining task is that of working out the details. In another direction we face a different situation. Beyond the frontiers of our present-day knowledge lies an area in which definite correlations with observation and measurement cannot be made because the established facts are too few and their significance is too uncertain. As in the earlier stages of the development of the theories previously outlined, however, we can extend the known principles a reasonable distance into the unknown field with some degree of assurance that the conclusions reached therefrom will be substantially correct in their general aspects, although past experience suggests that accuracy in every detail is unlikely.
We may appropriately begin the theoretical exploration of this field by considering the age-old question as to whether space and time are finite or infinite. The Fundamental Postulates of this work unequivocally support the latter conclusion. There is nothing in these basic assumptions which would establish any kind of a finite limit on either space-time as a whole or space and time individually. Of course, it could be argued that the postulates may be deficient in this respect; that they should perhaps be enlarged to include such limitations. Such an argument, however, is irrelevant. In the preceding discussion it has been shown that a logical and mathematical development of the consequences of the two Fundamental Postulates correctly reproduces the existing universe insofar as it is accessible to observation. We are now attempting to determine what further information these same principles can give us if we make the plausible assumption that they are valid in the unknown regions of the universe as well as in the accessible regions, an assumption which is specifically included in the Postulates as stated. For this purpose it is essential that we maintain the principles in exactly the same form in which they were established as valid in the known region; if we alter them in any way we are no longer examining the effect of extending the range of application of principles of established validity, we are dealing with unsupported hypotheses. It is perfectly in order to make hypotheses and to determine the consequences thereof, but that does not accomplish the objective of this particular investigation.
An important point in connection with this question as to the existence of limitations on space and time is that on the basis of the Fundamental Postulates zero and infinity have equal standing. Zero space is equivalent to infinite time, and so on. The concept of zero is much easier for the human mind to accept than that of infinity, but when we postulate space and time as reciprocals the two concepts become one, so far as space-time and its derivatives are concerned, and we can no longer accept one and reject the other.
In addition to defining the physical universe as infinite, the Fundamental Postulates also define it as changeless, when considered as a whole. The myriad of subsidiary phenomena resulting from space and time displacements are, of course, constantly changing but the effect of the reciprocal postulate in combination with the probability postulate precludes any net change in the universe as a whole. There is no mechanism defined by the postulates whereby displacements can be created or extinguished and the total displacement therefore remains constant. Furthermore, the displacements in each direction from the neutral axis must stay in balance, since the two forms of a reciprocal expression are identical from a probability standpoint. It is, in fact, impossible to state which is the original expression and which is the reciprocal.
These conclusions reached from the Fundamental Postulates are in agreement with the so-called “perfect cosmological principle,” which states that the universe has essentially the same aspect from any point in space or any point in time. The validity of this principle so far as space is concerned has been fairly well established by astronomical observations. It is now possible to see far enough into space to eliminate the effect of local irregularities and to confirm the homogeneous nature of the universe from a space standpoint. At the observational limits we are seeing as far into time as into space, but not all observers are convinced that the cosmological principle is applicable in time, because there are so many physical processes that appear to be irreversible. We are accustomed to thinking of an “arrow of time” pointing in a fixed direction and such processes as the observed expansion of the universe and the continual increase in the entropy of the material system seem to confirm the one-way nature of the temporal processes, so that there are formidable obstacles in the way of accepting any conflicting ideas.
In this work we deduce from the Fundamental Postulates that the arrow of time does indeed point in a fixed direction in our part of the universe. The galaxies are actually receding from each other, the general processes of growth and decay are irreversible, and so on. But the Postulates also tell us that we see only half of what is happening. They require the existence of another half of the universe: a non-material sector which is in all respects the inverse of the material sector which we recognize. In that other half of the universe the arrow of time points in the opposite direction and all of the effects of the unidirectional progression of time in our material region are completely nullified in the long run by the oppositely directed progression in the non-material region.
The expansion of the material galaxies carries all of the matter in the universe outward toward infinite space. If this were the only process of its kind the common “explosion” theories of cosmology would have a very strong case, but we find from the Fundamental Postulates that there is a co-existing system of non-material galaxies, equal in all respects to the material system, which is likewise expanding and carrying all of its constituent parts outward toward infinite time. While the material half of the universe moves toward infinite space the non-material half moves toward zero space (infinite time) at the same rate and the net effect on the system as a whole is zero. In order to maintain the constant relationships within the two halves of the system it is, of course, necessary that some conversion process be operating as an interchange between the two. The nature of this process will be examined later.
Because of the permanence of the universe in its general aspects, all major physical processes are necessarily cyclic in character. Where some unidirectional process, such as the increase in entropy required by the Second Law of Thermodynamics, is effective in one area it represents only one phase of the cycle, and in some other area there must be an oppositely directed process which keeps the net balance unchanged. The “heat death” envisioned by the Second Law has no place in the universe defined by the Fundamental Postulates. Instead of a universe that is continually running down and will ultimately reach a dead level of uniformity in which there is no activity at all, the Fundamental Postulates lead to a universe which is forever changing in detail but will always remain the same as a whole. This is a universe of motion, and motion continually alters the relationships of the subsidiary units. It is a universe of mathematical law, and the mathematics of probability lead to a never-ending conflict between individual probability and group probability. The most probable state for the individual is the average. The most probable state for the group is a condition in which there are individual deviations from the average. Each individual tends toward the most probable value, the average, but is continually driven away from that average by the tendency of the group to conform to a probability distribution of individual values.
Let us now examine some of the more specific problems. Since the stars are the most prominent actors on the astronomical stage, where the drama of the universe is enacted, it is appropriate to begin with the question of the path of stellar evolution. We have already deduced from the Fundamental Postulates that all basic natural processes such as this are cyclic in character and we may therefore start our consideration at any phase of the cycle. For convenience we will select a starting point somewhere on the main sequence. Whether the stars move up or down the main sequence in their evolutionary course is not clear from observation since we have only what amounts to an instantaneous picture, and we must therefore resort to theoretical consideration. It has been established both theoretically and from observation that stellar temperature is a function of mass, and since this is a rather obvious result of generating energy by processes which are proportional to the cube of the diameter (the total mass) and dissipating it by processes which are proportional to the square of the diameter (the surface area) no detailed discussion of this point would seem necessary. If the existence of the stars is to be regarded as primarily devoted to expending their substance in producing radiation to be dissipated into the depths of space, there can be no escape from the conclusion that they were originally hot and massive units and are gradually moving down or off the main sequence toward eventual extinction. But in order to meet the cyclic requirement it would then be necessary to find some process whereby cold dwarf stars are reconverted into hot massive stars, and there is no apparent foundation on which any such process could be based.
In recent years astronomers have begun to appreciate that a downward course is not the only possibility, and it is now generally agreed that the stars within dense dust clouds are acquiring enough material by accretion from the surroundings to more than compensate for the loss of matter by radiation and are actually growing hotter and more massive. We thus recognize that the direction of evolution along the main sequence is not necessarily downward as formerly believed; the net movement is the resultant of two opposing factors, the loss of mass or its equivalent by radiation and the gain in mass due to accretion. The conclusions of this present work are that the amount of interstellar matter and potential matter is considerably greater than has heretofore been realized and that there is a substantial accretion even where nothing more than the general interstellar haze is present. Furthermore, the radiation losses are reduced very sharply as the temperature falls, since they vary as the fourth power of the temperature. It therefore appears that even in the regions where the accretion of matter is at a minimum, a star does not cool down indefinitely; it merely moves down the main sequence to an equilibrium point and remains there until it enters a denser zone. In the regions where the accretion is normal or above normal the star moves up the main sequence, becoming hotter and more massive.
The production of energy to take care of radiation losses and to cause the rise in temperature which is an essential feature of this evolutionary course is initially due to certain processes, to be discussed later, which are a direct result of the manner in which the star is formed. As the temperature rise continues, a point is ultimately reached which represents the destructive thermal limit for the heaviest element present. One of the magnetic displacement units of this element is then destroyed in the manner previously described and the rotational motion is converted into energy. The amount of energy thus released is very large, and this process makes a practically unlimited source of energy available to the main sequence stars. There is a small proportion of heavy elements in the stars as originally constituted, and a similar proportion in the material acquired by accretion from the surroundings. Inasmuch as the entire stellar structure is fluid, the heavy elements necessarily make their way to the center. Here they reach the destructive thermal limits, are converted into energy, and replenish the stellar energy supply which is constantly being depleted by radiation.
As the mass increases and the temperature rises, successively lighter elements are made available as stellar fuel. Since none of the heavy elements is present in more than a relatively minute quantity in a region of minimum accretion, the availability of an additional fuel supply due to the attainment of the destructive limit of one more element is not normally sufficient to cause any significant change in the energy balance of the star. The stars of the upper portion of the main sequence are subject to somewhat higher rates of accretion but they are able to absorb greater heat fluctuations, for reasons which will be developed later, and the main sequence stars are therefore relatively quiet and unspectacular as they gradually increase in mass and temperature and move upward along their evolutionary path.
When the temperature corresponding to the destructive limit of the iron-nickel group of elements is reached, a totally different situation prevails. These elements are not limited to small amounts; they are present in concentrations which represent an appreciable fraction of the total stellar mass. The sudden arrival of this large quantity of material at the destructive limit activates a potential source of far more energy than the star is able to dissipate through the normal radiation mechanism. The initial release of energy from this source therefore blows the whole star apart in a tremendous explosion. Because of the relatively large concentration of the nickel-iron elements in the central core of the star the explosion takes place as soon as the first portions of this material are converted into energy and the remainder is dispersed by the explosion-generated velocities. This carryover of material from one cycle to the next enables the iron group elements to continue building up as the over-all age of the system increases, whereas the heavier elements have to start all over again after each explosion.
This sequence of events is, of course, purely theoretical, but it is the result of a straightforward application of the principles developed from the Fundamental Postulates, and where not actually corroborated by observation it is at least consistent with the observational data. Some observers will no doubt contest the assertion that there is sufficient accretion of mass to cause the upward progression along the main sequence which is required by theory. It is evident, however, that any conclusion on this score based entirely on the results of observation cannot be more than an opinion, in the existing state of knowledge. The existence of some accretion of mass is incontestable; the only open question concerns the quantities. In this connection it is probably significant that within very recent years general astronomical opinion has moved a long way in the direction of recognizing the importance of interstellar dust and gas; from a concept of interstellar space as essentially empty to a realization of the fact that the total amount of interstellar matter is at least comparable to the amount of matter concentrated in the stars.
The chemical composition of the stars and the distribution of elements in the stellar interiors are also debatable subjects, but again the deductions that have been made from the previously established principles do not conflict with the actual observations; they merely conflict with some interpretations of these observations. While the gravitational segregation of the stellar material which puts a relatively high concentration of the heavier elements into the central core is not entirely in agreement with current astronomical thought, it should be emphasized that such a segregation is the normal result in a fluid medium subject to gravitational forces and a theory which requires the existence of normal conditions is never out of order where the true situation is unknown.
Furthermore, even though these conclusions which have been reached as to the amount of iron and heavier elements present in the stellar interiors are beyond the possibility of direct verification, it will be brought out in the subsequent discussion of the solar system that some strong evidence as to the internal constitution of the stars can be obtained from collateral sources. The spectroscopic information from the stars is only of limited value since these data only tell us what conditions prevail in the outer regions. Even from this restricted standpoint the evidence may actually be misleading since it is more than likely that the spectroscopic results are affected to a significant degree by the character of the material currently being picked up through the accretion process. The observed differences in the stellar spectra that can be attributed to variations in chemical composition are probably more indicative of the environments through which the stars happen to be moving at the moment than of the average composition of the stars themselves. The presence of substantial amounts of elements such as technetium, for example, in the outer regions of some stars presents a formidable problem if we are to regard this as an actual indication of the composition of the stars, but it is easily explained on the basis that the technetium has been derived from captured material and is on its way down to the central regions where it will add to the fuel supply. This element is stable wherever the magnetic ionization level is zero, as it usually is in the interstellar dust clouds, and relatively heavy concentrations could conceivably be produced in special areas which are left undisturbed for long periods of time.
Growing recognition of the importance of the capture of interstellar material has already begun to make an impression on astronomical thought. One of the current theories of the sun’s corona, for instance, is the “infall” theory, which attributes the corona to gas and dust particles being pulled in from the surroundings by the gravitational attraction of the sun. Similarly, the irregular fluctuations of the so-called “nebular variables” are explained as a result of variations in the rate of capture and digestion of material from the relatively dense dust clouds with which these stars are associated. Both of these theories are entirely consistent with the conclusions of this work.
The explosion which theoretically occurs at the destructive limit of the nickel-iron elements is consistent with observation as it can be identified with the observed phenomenon known as a supernova and the theoretical products of the explosion can be correlated with the observed residue of the supernova. So far as can be determined from the information now available the star that becomes a supernova is a hot, massive unit before the explosion, which agrees with the theoretical deduction that such an explosion occurs when a star reaches the upper end of the main sequence. As has been indicated, only a relatively small proportion of the mass of the star needs to be converted into energy in order to produce the explosion and the remainder, constituting the bulk of the original mass, is blown away from the original location at extremely high velocities. We therefore find the site of a relatively recent supernova explosion surrounded by a cloud of material moving rapidly outward. The Crab Nebula, which has been identified as the product of a supernova observed in 1054 A.D., is the typical example.
Inasmuch as this expansion takes place against the force of gravity and against some resistance from the interstellar material, it cannot continue indefinitely and at some time in the distant future the expansion of the Crab Nebula will cease. At this stage it will be merely a cloud of cold and very diffuse material occupying a tremendous expanse of space. Gravitational attraction between the particles will be small because of the huge distances involved, but nevertheless it will exist and once the expansion has ceased, a contraction will be initiated by the force of gravity. Another long interval must pass while this minute force does its work but ultimately the constituent particles will be pulled back to where the interior temperature of the mass can rise enough to produce radiation within the visible range and the star will have been reborn.
It is not to be expected that exactly the same mass will necessarily be reassembled into the reconstructed star. The force of the supernova explosion will no doubt give some fragments sufficient velocity to enable them to escape from the gravitational attraction of the remainder, but the cloud will be moving through interstellar matter and accretions from this source will more than offset the losses of the original material. The interstellar matter will also help to minimize the losses, as a part of the translatory velocity of the particles will be dissipated in collisions with this material. In the long interval that elapses between the explosion of the star and the birth of its successor the cloud of matter may also be altered substantially by encounters with other pre-stellar objects, but this does not change the nature of the final result. Eventually the diffuse cloud, whether modified or unmodified, will again condense into a star or be absorbed into existing stars.
At the stage when it first becomes visible a star is still extremely diffuse; in fact it has been said that such a star is nothing more than a red hot vacuum. But the work of gravity is not yet complete. The star still continues to contract, and as it does so it moves toward the main sequence, which we may regard simply as the location at which the gravitational forces are in equilibrium with the forces resisting further contraction. The evolutionary path of any star which has not yet reached the main sequence is thus determined by two separate factors: it is moving toward the main sequence to attain gravitational equilibrium and at the same time it is moving parallel to the main sequence to attain thermal equilibrium.
The contraction process due to the gravitational forces transforms potential energy into kinetic energy and is therefore one of the stellar energy sources during the period in which it is operative. The other major energy source in this early stage of the evolutionary cycle is radioactivity. Inasmuch as a large part of the matter which is assembled into the new star has been obtained by capture from the interstellar material, the magnetic temperature is lower than that of the exploding star and the unit magnetic ionization level is not regained until after the condensation into the new star is quite well advanced. We may, in fact, regard the attainment of the unit ionization level as the event which marks the dividing line between dust cloud and star, since the
immediate result of the ionization process is to make the heaviest elements radioactive, thereby activating a source of thermal energy within the cloud and causing the increase in temperature which distinguishes the infant star from a mere cloud of particles.
These newly formed stars, the red giants, are in the upper right section of the Hertzsprung-Russell diagram, Figure 42. Let us assume the existence of such a star at the point marked A. This star is more luminous than a main sequence star of the same surface temperature because it is radiating from a very much larger surface. As the gravitational contraction proceeds this extended surface becomes smaller and the emission decreases accordingly, moving the star downward on the H-R diagram. At the same time, however, the contraction and other energy-producing processes are increasing the stellar temperature and the star therefore moves toward the left of the diagram as well as downward. If the star is located in a region where the density of the interstellar material is relatively low the downward movement predominates and the evolutionary path is a curve such as AB. On reaching the vicinity of the main sequence in the neighborhood of point B the direction of further movement is determined by the location of the point C or C’ at which the star will reach thermal equilibrium on the main sequence. If the star is formed in a high density region or enters such a region during the early evolutionary stages, the rate of accretion may be rapid enough to relegate the attainment of gravitational equilibrium to a subordinate role and to move the star almost directly toward its ultimate destination at the upper end of the main sequence along a line such as AD. In either event we have now traced a full cycle and we are back to the main sequence where we began our examination of the evolutionary process.
Let us now return to a further consideration of the supernova explosion. At the very high temperatures prevailing in the interiors of the stars at the upper end of the main sequence the thermal velocities are approaching the unit level, and when these already high temperatures are still further increased by the processes which lead to the break-up of the star the velocities of many of the interior atoms rise above unity. When the explosion occurs the inner atoms are blown apart in time by these greater-than-unit velocities in the same manner as the outer atoms are blown apart in space by less-than-unit velocities to form the diffuse clouds of matter which eventually coalesce into the giant stars. In both cases the atoms which were in close contact in the hot massive star are widely separated in the product of the explosion, but in this second product the separation is in time rather than in space.
This does not change either the mass or the volumetric characteristics of the atoms of matter. But when we measure the density, m/V, of the giant stars we include in V, because of our method of measurement, not only the actual equilibrium volume of the atoms but also the empty three-dimensional space between them, and the density of the star calculated on this basis is something of a totally different order from the actual density of the matter of which it is composed. Similarly, where the atoms are separated by time rather than by space the volume obtained by our methods of measurement includes the effect of the empty three-dimensional time between the atoms, which reduces the equivalent space (the apparent volume), and again the density calculated in the usual manner has no resemblance to the actual density of the stellar material. In the giant stars the empty space between the atoms decreases the measured density by a factor which may be as high as 10⁵ or 10⁶. The time separation produces a similar effect in the opposite direction and the second product of the explosion is therefore an object of small apparent volume but extremely high apparent density: a white dwarf star.
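The arithmetic behind this distinction between actual and apparent density can be made explicit. The following sketch is my own illustration, not part of the original text; the function name and the particular dilution factors are hypothetical, chosen only to mirror the 10⁵ to 10⁶ range mentioned above:

```python
def apparent_density(mass_g, true_volume_cm3, dilution_factor):
    """Apparent density m/V when the measured volume is the true
    material volume multiplied by a dilution factor.

    dilution_factor > 1 models the giant-star case, where empty
    space between the atoms inflates the measured V; a factor < 1
    models the white dwarf case described in the text, where
    separation in time reduces the equivalent spatial volume.
    """
    return mass_g / (true_volume_cm3 * dilution_factor)

# One gram of material whose atoms actually occupy 1 cm^3:
normal = apparent_density(1.0, 1.0, 1.0)      # baseline density
giant = apparent_density(1.0, 1.0, 1.0e6)     # apparent density falls
dwarf = apparent_density(1.0, 1.0, 1.0e-5)    # apparent density rises
```

The same mass and the same true atomic volume yield wildly different measured densities, which is the point the passage is making: the giant and the white dwarf are inverse presentations of the same underlying material.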
When judged by terrestrial standards the calculated densities of these white dwarfs are nothing less than fantastic and the calculations were originally accepted with considerable reluctance and only after all conceivable alternatives had been ruled out for one reason or another. The indicated density of Sirius B, for instance, is in the neighborhood of 68,000 g/cm³ and other stars of the same type apparently have still greater densities. In the light of the relationships developed in this work, however, it is clear that this very high density is no more out of line than the very low density of the giant stars; each of these phenomena is simply the reverse of the other.
Gravitational forces in the white dwarf stars tend to draw the constituent atoms closer together in time just as the same forces in the giants tend to draw the atoms closer together in space. The white dwarfs therefore decrease in apparent density as they contract and they become more and more like the giants which are approaching the normal from the other side. This means that on the H-R diagram the white dwarfs are also moving toward the main sequence and once having attained that location, the volumetric normal, the giants and the white dwarfs are indistinguishable from the standpoint of the variables portrayed in the diagram. There is a very marked difference in composition since the white dwarfs were formed from the material in the central regions of the exploding star, whereas the giants were formed from the lighter material in the outer regions. We will consider this point in some detail shortly.
In Figure 42 the zone of formation of the white dwarfs is at the lower left of the diagram, directly opposite the zone of formation of the red giants. In this area the luminosity is low because the equivalent surface area is small, but the temperature is high because the thermal energy is concentrated in a relatively small equivalent volume. The normal evolutionary path is XY, the inverse of the normal path of the giant stars. As the star contracts in time, increasing the equivalent volume, the temperature drops accordingly but the luminosity increases because of the greater surface area available for radiation. In a region where the accretion rate is high the drop in temperature is minimized and the movement on the H-R diagram is nearly vertical, along a line such as XZ.
The general features of the binary and multiple star systems are readily explained by this dual evolutionary cycle. The seemingly incongruous associations of stars of very different types are seen to be perfectly normal developments. Combinations of giant and dwarf stars are not freaks or accidents; they are the natural initial products of the star formation process. As we will find later when we examine the quantitative relations, the vast distances which we observe between the star systems are a permanent feature of the stellar distribution and there is no interaction between systems other than the escape of some diffuse material from one region to another. Every system that has been through the explosion process therefore contains two components: an A component on or above the main sequence and a B component on or below the main sequence. Since the evolutionary path for both components is first toward the main sequence and then up along that line there are no associations of dissimilar stars in the upper (more advanced) portions of the main sequence. Many of these stars are binaries, but they are pairs of stars of the same or closely related types.
In the earlier stages the pairing varies with the evolutionary age of the system. Immediately after the explosion the A component is merely a cloud of dust and gas which appears as a nebulosity surrounding the white dwarf B component. Later the cloud develops into a pre-stellar aggregate and then into a giant infra-red star, and since these aggregates are invisible the white dwarf appears to be alone during this phase. When the giant gets into the high luminosity range this situation is likely to be reversed as this bright star then overpowers its relatively faint companion. Further progress finally brings the giant down to the main sequence. The development of the white dwarf is usually slower and there is normally a stage in which a main sequence star is paired with a white dwarf, as in Sirius and Procyon, before the mature status as a pair of main sequence stars is attained by the system. It is true that some of the double stars which have been reported by observers do not fit into the evolutionary picture. For example, Capella is said to be a pair of giants. Neither of these stars can qualify as the B component of a binary, hence on the basis of the theory that has been developed herein we must conclude that Capella is actually a multiple system rather than a double star and that it has two unseen white dwarf or faint main sequence components. The Algol type stars in which a main sequence star is accompanied by a sub-giant are similarly indicated as multiple systems, and in Algol itself at least one and possibly both of the theoretical B components have been identified. Further consideration will be given to the multiple systems when we take up the different stellar populations.
In the earlier discussion of the stellar energy generation process it was pointed out that the increase in energy output resulting from the attainment of the destructive limit of one additional element is not normally sufficient to disturb the energy equilibrium within a star which is located in a region of minimum accretion. Detailed calculation of the various factors involved in this energy balance is outside the scope of this work, but it is evident without calculation that at some point there is a minimum below which the thermal equilibrium will not be affected enough to cause any noticeable irregularity. Since the stars which follow the gravitational path AB are observed to be stable it can be deduced that the variations in energy release in these stars are below this minimum. It is also apparent, without the necessity of numerical calculation, that complete stability and a violent explosion are not the only alternatives when a new destructive limit is reached. An intermediate possibility is that the sudden release of additional energy from this source may be sufficient to produce a substantial change in the physical condition of the star without being adequate to blow it apart. We can determine the qualitative effects of such an energy release and when we find that these effects are actually recognizable in certain classes of stars which should theoretically be subject to greater rates of energy release than the stars on the gravitational path AB, it is in order to conclude that the observed effects are due to this cause.
The stars which can be expected to show effects of this kind are those whose normal supply of fuel in the form of heavy elements is being augmented by a relatively heavy inflow from the surroundings: the stars which are following the evolutionary path AD. Let us examine the result of reaching a new thermal destructive limit in one of these stars. Since the star is unable to dissipate the additional output of energy by the normal heat transmission processes, the suddenly released excess heat will cause a rapid expansion. After the expansion has accomplished its purpose inertia carries it beyond the equilibrium point and this cools the interior of the star, which in turn drops the temperature in the central regions below the recently attained destructive limit and shuts off the extra supply of energy, accentuating the cooling effect. Ultimately the cooling causes a contraction of the star, whereupon the temperature again rises, the destructive limit is once more reached, and the whole process is repeated. The evolution of a star along the path AD, where it experiences a substantial accretion of mass from the surroundings, is therefore likely to be characterized by a pulsation. Such a star is classified as an intrinsic variable.
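The cycle just described, release of energy at a threshold, expansion and overshoot, cooling and shut-off, then contraction and reheating, behaves like a relaxation oscillator. The following toy simulation is my own illustrative sketch, not a model from the text; all of the numerical values (rate, overshoot, starting temperature) are arbitrary assumptions chosen only to exhibit the qualitative behavior:

```python
def simulate_pulsation(steps=60, temp=0.95, limit=1.0,
                       rate=0.02, overshoot=0.08):
    """Return the core-temperature history (arbitrary units) of a toy
    star whose extra energy source switches on only at a threshold
    temperature (the "destructive limit") and whose expansion
    overshoots the equilibrium point before gravity restores it."""
    history = []
    expanding = False
    for _ in range(steps):
        if expanding:
            temp -= rate                 # expansion cools the interior
            if temp <= limit - overshoot:
                expanding = False        # energy source shut off; gravity wins
        else:
            temp += rate                 # gravitational contraction reheats
            if temp >= limit:
                expanding = True         # destructive limit reached again
        history.append(temp)
    return history

history = simulate_pulsation()
# The temperature repeatedly crosses the limit, producing a regular period.
```

The on-off threshold with overshoot is what makes the cycle self-sustaining: no steady equilibrium exists at the limit itself, so the star oscillates around it, which is the essential character of an intrinsic variable as described above.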
The length of the cycle or period of the variable star depends on the time required to restore the original conditions after the expansion takes place. Since the initial excess production of thermal energy which causes the expansion varies much less than the stellar temperature, the initial conditions are restored more rapidly in the hotter stars, and the period is therefore an inverse function of the temperature. The relatively new stars just entering the pulsation zone are long period variables, with periods ranging from 100 days to several years. More advanced stars with shorter periods that extend down to minutes are classified as Cepheids. Various subdivisions of both the Cepheid and long period classes are recognized, and there are also some other less common and less distinctive types of variables in the remaining sectors of the high density region.
Within a group of stars of the same temperature the period depends on the stellar volume, since the reaction of a more extended volume to any specific force of compression or expansion proceeds more slowly. Inasmuch as luminosity is a function of surface temperature and surface area, this means that the more luminous stars have the longer periods: the celebrated period-luminosity relation. The results of this present investigation suggest that this relation does not have the degree of accuracy in application to the entire Cepheid population that is usually assumed, since it is affected by both the stellar temperature and the rate of accretion, but it is approximately correct over a wide range of temperature and has therefore been a very valuable astronomical tool. Its deficiencies show up conspicuously at the two extremes; it is not applicable to the long period variables, and it has to be modified in application to the very short period cluster variables.
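The period-luminosity relation referred to here is conventionally written as a linear relation between absolute magnitude and the logarithm of the period. The sketch below is illustrative only: the coefficients are approximate modern values for classical Cepheids in the visual band, not figures taken from this text, which states only the qualitative rule that more luminous stars have longer periods:

```python
import math

def cepheid_absolute_magnitude(period_days, a=-2.81, b=-1.43):
    """Illustrative period-luminosity relation M = a*log10(P) + b.
    The coefficients a and b are assumed approximate values for
    classical Cepheids; they are not given in the source text."""
    return a * math.log10(period_days) + b

# Longer period implies a brighter star (more negative magnitude):
m_short = cepheid_absolute_magnitude(5.0)    # roughly -3.4
m_long = cepheid_absolute_magnitude(50.0)    # roughly -6.2
```

Because the relation ties an observable (the period) to an intrinsic luminosity, comparing the inferred absolute magnitude with the apparent magnitude yields a distance, which is why the text calls it a very valuable astronomical tool despite the deficiencies noted at the extremes of period.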
The region of the H-R diagram occupied by the variables is the triangular area between the gravitational path AB and the main sequence. The great majority of the stars in this zone are intrinsic variables; some observers even say that they are all variables. On the right of the variable region the irregularities in the rate of release of energy are too small to produce pulsation; along the main sequence the response of the system is too rapid and the period is negligible. As would be expected from the nature of the process which is responsible for the variability, the most prominent classes of variable stars are found in certain definite locations within this zone of instability. Each of these locations undoubtedly represents a stage at which the interior temperatures of the stars reach the destructive limit of an element or group of elements which is present in a higher concentration than the average heavy element. In Figure 43 we see that the region of the “classical” Cepheids, the best-known of the intrinsic variable stars, is a relatively narrow band running diagonally upward from left to right in the low temperature zone of the region of variability. The RR Lyrae stars, or cluster variables, the principal class of variable stars in Population II, are located on a downward extension of this band into the region of less luminosity and shorter period.
Inasmuch as the central temperature of a larger and more luminous star is higher than that of a smaller and less luminous star of the same surface temperature, it is apparent that the diagonal Cepheid band represents a zone of approximately equal central temperatures. The particular elements whose destructive limits are reached at this temperature cannot be positively identified without further investigation, but since the lead-mercury group is not only the first group of moderately abundant elements in the descending order of atomic mass but also the only such group in the upper half of the atomic series, we may at least tentatively correlate the destructive thermal limits of these elements 80 to 82 with the central temperature corresponding to the Cepheid band. It should be noted in this connection that lead is the heaviest element that is stable against radioactivity in a region of unit magnetic ionization and it therefore occupies a preferred position somewhat similar to that of iron.
The long period variables can be correlated with the elements above lead in the atomic series. Here the quantities of excess energy are smaller since these elements are relatively scarce, but each increment of energy has a greater effect on the stellar equilibrium because of the smaller heat storage capacity of these low temperature stars. This situation accentuates the effect of minor variations in the incoming flow of matter from the environment and as a result these long period variables are less regular than the Cepheids. On the other side of the Cepheid zone these relations are reversed. Because of the higher temperature and greater mass the heat storage capacity of each star is much greater and any variations, either in the rate of accretion of matter or in the abundance of the elements whose destructive limit is reached, are to a large extent smoothed out. In general, therefore, these stars are not separable into easily recognized groups on the order of the Cepheids.
Let us now turn to the opposite side of the main sequence. When we examine the stability situation in this area we find some important differences. The gravitational forces in the white dwarf stars are inverse; that is, they operate to move the atoms closer together in time rather than in space. At the location where these gravitational forces are the strongest, the center of the star, the compression in time is the greatest, and since compression in time is equivalent to expansion in space the center of a white dwarf star is the region of lowest density. The expansion due to the generation of thermal energy within these stars does not oppose the effect of gravitational compression as in the giant stars; it merely adds to the gravitational effects. The conflict of forces which is responsible for the pulsation effects in the giants is therefore absent in the white dwarfs.
Ultimately, however, the continued expansion in the interior of the white dwarf star eliminates the empty time between the atoms in this region and the thermal forces begin to build up a gas pressure. When this pressure is high enough the compressed gas breaks through the overlying material in the manner of a gas bubble forcing its way through a liquid, and the hot material makes its appearance at the surface of the star, increasing the luminosity by a factor which may be as high as 50,000. Within a short time the relatively small amount of ejected material cools by radiation and the star gradually returns to its original status. In this condition it is rather inconspicuous and the first observed events of this kind were thought to involve the formation of entirely new stars, as a result of which the inappropriate term nova has been applied to this phenomenon.
From the foregoing description it is apparent that the nova explosion is another periodic event. As soon as one gas bubble is ejected, the compressive and thermal forces in the interior begin working toward development of a successor. Since the gravitational forces within the star are gradually expanding it toward the gravitational normal represented by the main sequence (that is, they are drawing the constituent atoms closer together in time), the additional expansion required to cause the nova explosion is correspondingly reduced as the star grows older and this reduces the time interval between explosions. The first event of this kind may not occur for millions of years after the original formation of the white dwarf star, but as the star approaches closer to the main sequence the time interval decreases, and some novae have repeated in less than 100 years. Furthermore, there is a special kind of variable star which has all of the earmarks of a small scale nova. This stellar class, of which U Geminorum is the type star, follows the nova pattern in miniature with a very much shorter period, ranging from about a year downward. The U Geminorum stars are reported to be slightly under-luminous for their spectral type; that is, they are somewhat below the main sequence on the H-R diagram, which is just where they belong if they are nearing the end of the white dwarf stage. The long period novae lie still farther down on the H-R diagram and are reported to have densities in the neighborhood of 100 times the solar density. From this it would appear that such stars as Sirius B are still in the early white dwarf stage and have a long way to go before they reach the nova phase.
It is neither feasible nor appropriate to discuss all of the variations in stellar behavior in a general work of this kind, but some comments on the stars with extended atmospheres are in order since these stars furnish some additional information regarding the white dwarf branch of the evolutionary cycle. On the giant side of the main sequence the succession of events from supernova to red giant star is simple: there is first an expansion due to the translational velocity imparted to the stellar material by the explosion, and then a contraction due to the force of gravity. On the white dwarf side a similar process takes place, but since the expansion in this case is in time the entire action takes place in one small region of space and there are collateral effects in the surrounding space that have no parallel on the giant side of the main sequence.
When the explosion first occurs the density of the material expelled from the star is great enough to carry everything in the vicinity along with it, and we see only a rapidly expanding cloud of material such as that which constitutes the Crab Nebula. At this stage the inward-moving component is almost invisible as the radiation which it emits is mostly at extremely short wavelengths, and while the total amount is large because of the very high temperature the emission within the visible range is small. As the expansion progresses the density of the expanding cloud decreases and eventually the point is reached where it passes through the interstellar material rather than carrying that material with it. The interstellar gas and dust then resumes the gravitational flow toward the central star that was interrupted by the supernova explosion. The first material of this kind arriving at the surface of the star finds that surface at an extremely high temperature (calculations indicate temperatures on the order of 500,000° K) and the incoming material is heated to such a degree that it is ejected back into the surroundings. Since both the incoming and outgoing material are at a very low density one flow does not interfere with the other to any serious extent and the cold material continues to flow inward through the outward moving hot material.
The result of this process is a planetary nebula in which a central star of the white dwarf type is surrounded by a large expanding shell of very diffuse matter. As time goes on the surface of the central star gradually cools due to radiation and transfer of heat to the ejected material. Ultimately a point is reached at which the star is able to retain the incoming material and output to the nebula ceases. The shell then continues to expand and cool until it finally merges with the general interstellar medium, while the central star assumes the status of an ordinary white dwarf. From this description it can be seen that the planetaries are short-lived objects, in the astronomical sense, and the only reason why several hundred of them can be observed in our galaxy is that they occupy a definite place in the stellar evolutionary cycle and are therefore produced at a steady rate. It does not necessarily follow, however, that every white dwarf passes through the planetary stage. If the rate of expansion of the explosion products is slower, or if the rate of cooling of the outer surface of the white dwarf is faster, or if the density of the interstellar medium in the vicinity is less, the conditions which lead to the formation of the nebular shell either may not develop at all or may only result in the production of a light and transient nebulosity.
Similar ejection of material on a smaller scale is quite common in various classes of hot stars, and there are a great variety of stars with extended atmospheres which have apparently been produced by a process of this kind. Whether or not the ejection process in these stars is exclusively thermal is not yet certain but the high temperature is at least a major factor and practically all of these stars are in the very hot spectral classes O and B. An interesting group of this kind is the Wolf-Rayet class of stars. The outer regions of these stars are in a state of violent agitation and it is difficult to make accurate observations, as a result of which there is considerable difference of opinion as to the actual conditions, but the most general conclusion is that they are hot massive stars which are continuously ejecting matter. On this basis they are assigned to the spectral class W, which is above class O or at least on a level with the upper portion of class O.
The possibility has been suggested that the continuous ejection of mass by these stars may be an alternate and more peaceful method of eliminating excess mass when any kind of a stellar limit is reached. Such an explanation, however, is open to the objection that a process of this kind could not reduce the mass appreciably below the stability limit and any further accretion from the environment would promptly put the star back into the unstable condition. On this basis the Wolf-Rayet status once attained would be essentially permanent and the number of these stars in the older structures should be very large, which does not agree with observation. The general explanation of the ejection of material from hot stellar surfaces as developed in the foregoing discussion indicates that the Wolf-Rayet stars are simply those stars at the upper end of the main sequence which are near the maximum with respect to both of the variables which determine the amount of material ejected: the surface temperature and the rate of accretion from the surroundings. In other words, this class of star is a special type of incipient supernova. Some of the central stars of planetary nebulae are currently being classed as Wolf-Rayets but this is not a logical grouping as it combines stars of different evolutionary stages and widely different characteristics. The two types are quite similar in their ejection phenomena but the resemblance stops at this point. In almost all other respects the properties of these stars are widely divergent.
According to the foregoing theory the local star system, the group of stars in the immediate vicinity of the sun, should be composed principally of binary stars, if most of these stars are in the same age bracket, as the available evidence would indicate. A large number have actually been identified as binaries. Most of these recognized systems have main sequence stars in both positions but there are a few main sequence-white dwarf combinations. No giant-white dwarf systems are visible but this is probably due to the effect of the time factor on the number of stars in each part of the cycle, as the giant stage of stellar evolution is of short duration compared to the time spent in the pre-stellar and main sequence phases. It should be noted in this connection that this local system is representative only of a particular evolutionary stage, not of stellar systems in general, and the proportions in which the various types of stars occur in this local system are not indicative of the composition of the stellar population as a whole. The white dwarf, for instance, is an explosion product, a star of the second or later generation, and such stars are totally absent from the stellar systems which are composed of first generation stars: those which have not yet passed through the explosion phase of the cycle. It should not be assumed, therefore, that the high proportion of white dwarfs in the local system indicates a similar high proportion throughout space or even throughout the Galaxy.
In addition to the binaries we also observe a considerable number of stars in the local system which appear to be single. Some of these may actually be single stars which have drifted in from younger systems, but we have already noted that the A component of a double star is invisible during a portion of the early evolutionary stage and all we see under these conditions is a lone white dwarf. The white dwarfs are not dispersed in space and they do not participate in this retreat into obscurity, but they may become invisible for another reason: they may be too small to maintain the temperature required for radiation in the visible range. Inasmuch as velocities less than unity are normal in the material sector of the universe, a greater proportion of the mass of the parent star is normally dispersed in space (by velocities below unity) than in time (by velocities above unity). If substantially the same amount of material is reassembled into a binary star system, the giant member will have the greater mass. In Sirius, for example, the main sequence star, originally the giant, has more than twice the mass of the dwarf. A less violent explosion would result in a still smaller dwarf mass and it is not improbable that in many instances the mass of the dwarf component is below the minimum requirement for a star, in which case the final product is a single star with one or more relatively small and cool attendants: a planetary system.
Since this question of the origin of a planetary system is of considerable interest to the inhabitants of a planet, it will be desirable to examine the theoretical processes leading to the formation of such a system in more detail. When the supernova explosion occurs the material near the center of the star is obviously the part which acquires greater-than-unit velocity and disperses in time. The remainder of the stellar material is dispersed outward into space. In view of the segregation of heavy and light components which necessarily takes place in a fluid aggregate under the influence of gravitational forces the chemical composition of the two components must differ widely. Most of the lighter elements will have been concentrated in the outer portions of the star before the explosion, those heavier than the nickel-iron group will have been converted to energy, except for the stray atoms mixed in with other material, and the central portions of the star will contain a high concentration of the iron group elements. When the explosion occurs the outward moving material, which we may call Substance A, consists mainly of light elements with only a relatively small proportion of high density matter. Substance B, the inward-moving component, consists primarily of the iron group elements with some admixed lighter material.
In each of the two products of the stellar explosion the primary gravitational forces are directed radially toward the center of mass of the dispersed material. Secondary forces can be expected to develop by reason of local aggregation, but each aggregation as a whole is subject to the radial forces. Unless outside agencies intervene it is to be expected that any capture of one subsidiary aggregate by another will result in consolidation, the formation of a binary system being ruled out by the absence of non-radial motions. Ultimately the greater part of the matter in each of the two components will be collected into one unit. The two separate components then acquire orbital motion around each other, consolidation being unlikely in this case as neither unit will be moving directly toward the other unless by pure chance. The ultimate result is a system in which a mass composed principally of Substance B is moving in an orbit around a central star of Substance A. If the B component is of stellar size the system is a binary star; if it is smaller the product is a planetary system. Where interaction occurs before the consolidation process is complete some of the unconsolidated fragments may take up independent orbital positions in the final system, constituting additional planets or planetary satellites.
On this basis we may conclude that at the beginning of the formative period of the solar system a large mass of Substance A with some small subsidiary aggregates and considerable dispersed matter was approaching a smaller and less consolidated mass of Substance B, in which the subsidiary aggregates were relatively more numerous and much larger in proportion to the central mass than in the A component. When the combination of the two systems took place under the influence of the mutual gravitational attraction the major aggregates of the B component acquired orbital motion around the large central mass of the A component. In the process of assuming their positions these newly constituted planets encountered local aggregates of Substance A which had not yet been drawn into the central star and under appropriate conditions these aggregates were captured, becoming satellites of the planets. At the end of this phase all major units of both components had been incorporated into a stable system in which planets composed of Substance B were rotating around a star composed of Substance A, and smaller aggregates of Substance A were similarly in orbits as planetary satellites.
Smaller fragments are more subject to being pulled out of their normal paths by the gravitational forces of the larger masses which they may approach, and while orbital motion of these fragments is entirely possible the chances of being drawn into one of the larger masses increase as the size decreases. We may therefore deduce that during the latter part of the formative period all of the larger members of the system increased their masses substantially by accretion of fragments of Substance A in various sizes from planetesimals down to atoms and sub-material particles, with some smaller amounts of Substance B, also in assorted sizes. After the situation had stabilized we could expect to find a central star consisting primarily of Substance A, with a small inner core of Substance B derived from the heavy portions of the original Substance A mix and the accretions of Substance B. We could expect each planet to consist of a relatively large core of Substance B and an outer zone of Substance A, the surface layer of which would contain some minor amounts of Substance B acquired by capture of small fragments. The satellites, which have comparatively little opportunity to capture material from the surroundings because of their small masses and the proximity of their larger neighbors, should be composed of Substance A with only a small dilution of Substance B. It can also be deduced that after the formative period was complete further accretion took place at a slower rate from the remains of the original dispersed matter, from newly produced matter, and from matter entering the system out of interstellar space, but the general effect of such subsequent additions of material would not differ greatly from that of the accretions during the formative period and would not change the general nature of the result.
This is the theoretical picture as it can be drawn from the principles developed in the earlier pages. Now let us look at the physical evidence to see whether such a theory is tenable. The crucial issue is, of course, the existence of distinct substances A and B. Both the deduction as to the method of formation of the planetary systems and the underlying deduction as to the termination of the dense phase of the stellar cycle at a destructive limit would be seriously weakened if no evidence of a segregation of this kind could be found. Actually, however, there is no doubt on this score. Many of the fragments of matter currently being captured by the earth reach the surface in such a condition that they can be observed and analyzed. These meteorites definitely fall into two distinct classes, the irons and the stones, together with mixtures, the stony-irons. The approximate average chemical composition is as follows:
The composition of the iron meteorites is in full agreement with the hypothesis that these are fragments of pure Substance B. The stony meteorites have obviously been unable to retain any volatile constituents and when due allowance is made for this fact their composition is entirely consistent with the deduction that they represent Substance A. The existence of mixed structures, the stony-irons, is easily explained. The evidence from the meteorites therefore gives very strong support to those aspects of the theory which require the existence of two distinct substances A and B. There is no proof that the meteorites actually originated contemporaneously with the planets in the manner described, but this is immaterial so far as the present issue is concerned. The theoretical process that has been outlined is not peculiar to the solar system; it is applicable to any system reconstituted after a supernova explosion and the existence of distinct stony and iron meteorites is just as valid proof of the existence of distinct substances A and B whether the fragments have originated within the solar system or have drifted in from some other system which according to theory has originated in the same manner. The support given to the theory by the composition of the meteorites is all the more impressive because of the fact that the segregation of the fragmentary material into two distinct types on such a major scale has been very difficult to explain on the basis of previous theories.
Additional corroboration of the theoretical deductions is provided by the spectra of novae. Since these are stars of the white dwarf class they are composed of Substance B as originally formed. Within a relatively short time, however, the original star is covered by a layer of light material captured from the environment. This material is essentially the same as that in the outer regions of stars of other types and the composition of the stellar interior therefore is not revealed by the spectra obtained during the pre-nova and post-nova stages. When the nova explosion occurs, however, some of the Substance B from the interior of the star forces its way out as previously described and the radiation from this material can be observed along with the exterior spectrum. As would be expected from theoretical considerations the explosion spectra often show strong lines of highly ionized iron and nickel.
Another theoretical deduction that can be compared with the evidence from observation is the nature of the distribution of Substance A and Substance B in the planetary system. The sun has a relatively low density and we can undoubtedly say that it consists principally of Substance A as required by theory. Whether or not it actually contains the predicted small core of Substance B cannot be determined on the basis of the information now available. The planet that is most accessible to observation, the earth, definitely conforms to the theoretical requirement that it should consist of a relatively large core of Substance B with an overlying mantle of Substance A. The observed densities of the other inner planets, together with such other pertinent information as is available, likewise make it practically certain that they are similarly constituted.
The situation with respect to the major planets is less clearly defined. The densities of these planets are much lower than those of the earth and its neighbors, but we find that their outer portions are composed largely of very light elements, and this leaves the internal composition a wide open question. It seems, however, that there must have been some kind of a stable gravitational nucleus in each case to initiate the build-up of the light material and it is entirely possible that this original mass, which is now the core of the planet, is composed of Substance B. Jupiter has a total mass 317 times that of the earth and even if the core only represents a small fraction of the total it could still be many times as large as the earth’s core. This viewpoint as to the nature of the cores of these planets is further strengthened by observations which indicate that the outermost planet, Pluto, has a relatively high density and may actually have a metallic surface, which would classify it as pure Substance B. We may conclude that, although the observational data on the outer planets do not definitely confirm the theory that they have inner cores of Substance B, the observed properties are not inconsistent with this theory. Since it is highly probable that all of the planets have the same basic structure this lack of any definite conflict between theory and observation is very significant.
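The arithmetic behind the point about Jupiter's possible core can be checked in a few lines. The 317 earth-mass figure is from the text; the earth's core fraction used below is a standard modern estimate rather than a figure from this work, and the 5 per cent core fraction assumed for Jupiter is purely an illustrative assumption.

```python
# Back-of-envelope check: even a small Substance B core in Jupiter would be
# many times as large as the earth's core. Masses in units of the earth's mass.
jupiter_mass = 317.0          # from the text: 317 earth masses
earth_core_fraction = 0.325   # standard modern estimate, not from the text
jupiter_core_fraction = 0.05  # purely an illustrative assumption

jupiter_core = jupiter_mass * jupiter_core_fraction  # about 15.9 earth masses
earth_core = 1.0 * earth_core_fraction               # 0.325 earth masses

# Under these assumptions Jupiter's core exceeds the earth's core many times over.
assert jupiter_core / earth_core > 40
```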
The satellites present a similar picture. The verdict with respect to the distant satellites, like that applicable to the distant planets, is inconclusive. The available observational information is consistent with the theory that the inner cores of these objects are composed of Substance A, but it does not exclude other possibilities. The satellite we know the best, like the planet we know the best, gives us an unequivocal answer. The moon is definitely composed of materials similar to the stony meteorites and the earth’s crust; that is, it is practically pure Substance A, as it theoretically should be.
It is appropriate to point out that this theory of planetary origin derived by extension of the principles developed from the Fundamental Postulates is independent of the temperature limitations which have constituted such formidable obstacles to many of the previous efforts to account for the existing distribution of material. The fact that the primary segregation of Substance A from Substance B antedated the formation of the solar system explains the existence of distinct core and mantle compositions without the necessity of postulating either a liquid condition during the formative period or any highly speculative mechanism whereby solid iron can sink through solid rock.
This explanation of the process of formation of the system also accounts for the fact that nearly all of the constituent units have the same direction of rotation. The reason for the near-coincidence of the orbital planes of the planets is not as obvious. The original distribution of the masses of Substance B which are now the cores of the planets should have been roughly spherical and the separation of the planets perpendicular to the orbital plane of Jupiter should have been comparable to that in the plane of the orbit. It is probable that the shift of the orbits to their present locations has been due to the inter-planetary gravitational forces. Jupiter exerts a small but significant force component tending to rotate the orbits of the other planets into its own orbital plane and in the long period of time that has elapsed since the formation of the system even a small force could be quite effective.
Let us now turn to another of the major evolutionary problems: the galactic cycle. The use of the term “cycle” in this connection may seem to be putting the cart before the horse, since no evidence of any cyclic course of evolution has heretofore been recognized, but in a universe based on the Fundamental Postulates of this work a galactic cycle is mandatory. As brought out in the discussion of the permanence of the major features of the universe, half of this cycle is located in our material sector of the universe and the other half in the non-material sector.
The necessity for a means of interchange between the material and non-material sectors has already been pointed out. This, of course, involves the existence of some process whereby the rotational space displacements of the non-material universe can be converted into the rotational time displacements which we recognize as matter. The nature of this process will be discussed later, but it is evident that new matter or potential matter entering the material sector of the universe from the non-material sector as a result of such a process cannot have any preferential location in space, since the physical entities of the non-material sector are not localized in space. It will also be shown in the subsequent discussion that all of this new matter is produced in the form of individual atomic units. These newly produced atoms uniformly distributed throughout space come under the influence of gravitational forces as soon as they are formed and a process of aggregation begins. As one vast period of time follows another and gravitation continues its slow but unremitting action the aggregates grow larger, the atoms become particles, the particles become clouds, the clouds become stars, the stars gather in clusters, the clusters become galaxies, the galaxies become larger galaxies. In the meantime the space-time progression moves the galaxies outward away from each other in space and new aggregations form from new matter and remnants of the old in the areas left vacant by the larger units. In due course these new formations grow older and larger and follow in the paths of their predecessors, leaving new vacancies to be filled by still other aggregations originating in the same manner. Each generation has its period of development, comes to maturity, and finally reaches the point of reconversion into the non-material sector of the universe to start the second half of the cycle. 
In order to make certain that the basis for this theoretical picture is clear, let us look at the gravitational situation as defined by the Fundamental Postulates. Every location in the universe is moving outward from every other location at unit velocity because of the space-time progression resulting from the equivalence of the basic units of space and time. Simultaneously all material atoms are moving in the opposite direction, inward toward each other, because of their rotational motion. At the shorter distances the inward motion exceeds the outward motion and the atoms move closer together. As the distance increases, however, the rotational motion toward any specific location decreases according to the inverse square relation and at extreme distances the gravitational motion is reduced to the point where it is less than the oppositely-directed velocity of the space-time progression. Beyond the point of equality the net resultant motion is outward, increasing toward unity (the velocity of light) as the distance increases.
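The balance just described can be put into a short numerical sketch. The formulation below is not taken from the text: the unit outward progression, the inverse-square inward term, and the constants `k` and `mass` are illustrative assumptions chosen only to show how a gravitational limit arises at the distance where the two oppositely-directed motions cancel.

```python
# Illustrative sketch: net motion of matter at a given distance, taken as a
# constant unit outward progression plus an inward gravitational term falling
# off as the inverse square of the distance. The constant k and the mass are
# arbitrary values chosen for illustration only.

def net_outward_motion(distance, k=1.0, mass=100.0):
    """Positive result: net outward motion; negative: net inward motion."""
    progression = 1.0                       # unit outward velocity everywhere
    gravitation = k * mass / distance ** 2  # inward, inverse-square
    return progression - gravitation

# The gravitational limit is the distance at which the two motions balance:
# k * mass / d**2 = 1, so d = sqrt(k * mass) = 10 for these values.
limit = (1.0 * 100.0) ** 0.5

assert net_outward_motion(limit - 1) < 0  # inside the limit: net inward
assert net_outward_motion(limit + 1) > 0  # outside the limit: net outward
```

Beyond the limit the net outward motion approaches the unit value as the gravitational term becomes negligible, in line with the paragraph above.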
These motions control the large-scale aspects of the material universe. Within the range of effectiveness of the gravitational motion, or gravitational forces, if we wish to speak in terms of the force equivalent of the motion, all units of matter move inward toward each other and if given sufficient time must join. Various subsidiary motions may control the nature of the combinations; for instance, they may cause orbital motion rather than actual consolidation, but they cannot prevent combination other than temporarily. Within the effective gravitational range, therefore, the aggregates of matter are continually growing. At the same time the space-time progression is increasing the separation between each of these aggregates and all others which are beyond the gravitational limit. The net effect is therefore a process of aggregation and a separation of the aggregates: formation of galaxies and expansion of the universe, to use the familiar terminology.
Once more, as in the discussion of the stellar cycle, let us see how close an agreement we can find between the purely theoretical course of evolution, as derived from the Fundamental Postulates and described in the foregoing paragraphs, and the results of astronomical observations. Since we are postponing consideration of the transitions to and from the non-material sector of the universe, the question now confronting us is whether we can recognize a definite course of evolution in the galaxies and pre-galactic structures from diffuse matter to a final form of some kind.
According to the theoretical evolutionary outline which has been presented, the primary criterion of age in the galactic world is size. It must be realized, of course, that accidents of environment and other factors will affect this situation to some extent so that the principle does not necessarily apply in every individual case, but in general the ages of the various types of structures theoretically stand in the same order as their sizes. Turning from theory to observation, we find that the recognized giants among the galaxies are the spirals. There is, in fact, a rather definite lower limit below which the spiral structure does not appear at all. The other major class, the elliptical galaxies, is found all the way down to the limits imposed by the capabilities of the observational equipment but is not represented above the lower limit of the spirals, except by certain very large systems which have the shape of elliptical galaxies but are much different in other respects. The criterion of size therefore definitely places the elliptical galaxies as the younger type and the spirals as the older, as in Hubble’s original classification. It also follows on the basis of this criterion that small spirals are in general younger than larger spirals and small aggregations of the elliptical type are younger than larger elliptical galaxies.
Now let us ask what evolutionary sequence would be normal for matter subjected to the forces which exist in the galaxies. There has been a great deal of speculation as to the nature of the forces responsible for the spiral form, but the justification for such speculation is rather questionable in view of the fact that the forces which are definitely known to exist, the rotation and the gravitational attraction, are sufficient in themselves to account for the observed structure. Inasmuch as the individual units in the galaxy are independent and widely separated the aggregate has the general characteristics of a fluid. A spiral structure in a rotating fluid is not unusual; on the contrary a striated or laminar structure is almost always found in a rapidly moving heterogeneous fluid, whether the motion is rotational or translational. It is true that objections have been raised to this “coffee cup” explanation on the grounds that the spiral in the coffee cup is not an exact replica of the galactic spiral, but it must be remembered that the coffee cup lacks one of the forces that plays an important part in the galaxy: the gravitational attraction toward the center of mass. If the experiment is performed in such a manner that a force simulating gravity is introduced, say for instance by replacing the coffee cup by a bowl which has an outlet at the bottom center, the resulting structure on the surface of the water is practically a picture of the galactic spiral.
In this kind of rotational structure the spiral is the last stage, not an intermediate form. By proper adjustment of the rotational velocity and the rate of water outflow the original dispersed material on the water surface can be caused to pull in toward the center and assume a circular or elliptic shape before developing into a spiral, but the elliptic structure precedes the spiral if it appears at all. The spiral is the end product. It will be brought out later in the discussion that the manner in which the growth of the galaxy takes place has a tendency to accentuate the spiral structure, but the rotating fluid experiment shows that the spiral will develop in any event when the necessary velocity is attained. Furthermore, this spiral is dynamically stable. We frequently find the galactic spirals characterized in astronomical literature as unstable and inherently short-lived, but the experimental spiral does not support this view. From all indications the spiral structure could persist indefinitely if the rotational velocity remained constant.
However, the rotational velocity of the galaxies does not remain constant. During the early stages of galactic aggregation when the combining units are of the same general order of magnitude, it is to be expected that some rotation will develop because of non-central impacts. Once such a rotation is initiated a difference in the rate of accretion develops between the two opposite sides of the galaxy in the plane of rotation. This accretion rate is affected very materially by the velocity of the mass relative to the diffuse material through which the galaxy is moving. On one side the net velocity is the sum of the translational and tangential velocities; on the other side it is the difference. The impact of the incoming particles or aggregates is therefore asymmetric and the result is an increase in rotational velocity with the age of the structure. Here again there are individual deviations, but in general the rotational velocity is directly related to the size and age of the galaxy and it is therefore one of the criteria of age.
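The velocity argument in this paragraph reduces to a sum and a difference, as the minimal sketch below shows; the further assumption that the accretion impulse grows with relative velocity is added here only for illustration.

```python
# On the side of the galaxy turning into its direction of travel, incoming
# material is met at the sum of the translational and tangential velocities;
# on the opposite side, at their difference.

def side_velocities(v_translational, v_tangential):
    leading = v_translational + v_tangential
    trailing = v_translational - v_tangential
    return leading, trailing

leading, trailing = side_velocities(v_translational=5.0, v_tangential=2.0)
assert leading == 7.0 and trailing == 3.0

# If the accretion impulse grows with relative velocity (an illustrative
# assumption), the imbalance reinforces the rotation in its existing direction.
net_spin_up = leading - trailing
assert net_spin_up > 0
```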
Closely connected with the velocity is the shape of the rotating structure. The correlation in this case is so obvious that in actual practice the velocity is generally inferred from the shape rather than measured directly, although measurements have been made in some cases where conditions are favorable. Increased rotational velocity in the elliptical galaxies results in greater eccentricity. Beginning with the globular clusters, which are rotating very slowly and are spherical or nearly spherical, the elliptical units pass through all stages of eccentricity down to strongly lenticular shapes. At this point the spiral disk develops. The structure of the young spiral can be described as loose: the arms are thick and widely separated and the nucleus is rather inconspicuous. As the galaxy grows older and larger the nucleus becomes more prominent and the increased rotational velocity causes the arms to thin out and wind up more tightly. In the limiting condition the galaxy is practically all nucleus and the spiral arms are wound around this central mass so tightly that in effect they become part of it. These changes in appearance in the final stage account for some of the apparent deviations from the normal relation between size and age. There are a number of very large galaxies which are classified as elliptical, although they are greatly in excess of the size which normally results in the development of the spiral structure. The logical explanation is that these are not actually elliptical galaxies; they are the tightly wound, rapidly rotating, giant spirals which have reached the end of the road as galaxies and are ready to take the next step in the evolutionary cycle. Some particularly interesting inferences along this line can be drawn from the characteristics of the giant galaxy Messier 87, one of the well-known examples of this class, and this subject will receive further attention later.
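The winding of the arms can be pictured numerically. The sketch below assumes, purely for illustration, an angular velocity that falls off inversely with radius; an initially radial line of elements then shears into a trailing spiral, and the spiral winds tighter as rotation accumulates.

```python
# Differential rotation winding an initially radial line into a spiral.
# omega(r) = omega0 / r is an illustrative rotation law, not one from the text.

def wind_spiral(radii, time, omega0=1.0):
    """Return (radius, angle swept) pairs after the given rotation time."""
    return [(r, (omega0 / r) * time) for r in radii]

radii = [0.5 + 0.25 * i for i in range(10)]
points = wind_spiral(radii, time=5.0)

# Inner elements sweep through larger angles than outer ones, so the line is
# sheared into a trailing spiral; longer times wind it more tightly.
angles = [theta for _, theta in points]
assert all(angles[i] > angles[i + 1] for i in range(len(angles) - 1))
```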
At this point it may be appropriate to digress long enough to point out that if the correlation between size and shape is as close as is indicated by this preliminary examination of the theoretical relationships, it should have some useful applications in observational astronomy, particularly in the study of the more distant galaxies. Some complications are, of course, introduced by modifications of the basic structural pattern. The most common of these modifications, the barred spiral, will be given further consideration in another connection.
The fourth criterion of age applicable to the galaxies is that of relative abundance. In the evolutionary course as outlined, each unit of aggregation is growing at the expense of its environment. The smaller units are feeding on atoms or small particles, but the larger aggregations pull in not only the particles in the immediate vicinity but also any of the small aggregates which are within reach. As a result of this cannibalism the number of units of each kind should progressively decrease with age. When we examine the existing situation we find that the order of abundance is essentially in agreement with the age as determined by other criteria. The giant spirals, the senior members of the family of galaxies according to these criteria, are relatively rare, the smaller spirals are more common, the elliptical galaxies are abundant, and the globular clusters, which may be regarded as junior elliptical galaxies, exist in enormous numbers. It is true that the observed number of small elliptical galaxies, those in the range just above the globular clusters, is considerably lower than would be predicted from this sequence, but it is evident that this is a matter of observational selection. When the majority of galaxies are observed at such distances that only the spirals and the largest of the ellipticals are big enough to be visible it is not at all strange that the observed spirals are proportionately more numerous than is predicted by theory. The number of additional elliptical galaxies discovered within the Local Group in very recent years, increasing the already high ratio of elliptical to spiral in the region most accessible to observation, emphasizes the importance of this selection process.
A fifth criterion of galactic age is provided by the ages of the constituent stars. After a galaxy has reached the stage where the complete stellar cycle is represented the evaluation of galactic age becomes a matter of determining just how many times the constituent stars have been around the cycle: a somewhat complex problem. It is, however, relatively simple to distinguish between the galaxies which are old enough to have stars in all phases of the cycle and those in which the most advanced stars have not yet reached the upper portion of the main sequence, and this distinction is all that is required for present purposes. The initial product of condensation from the primitive material is, of course, identical with the product of condensation of a diffuse mass expelled from an exploding star; that is, it is a red giant. Under normal conditions this new star, irrespective of its origin, will follow one of the usual evolutionary paths: the lines AB or AD in Figure 42.
The smallest of the stellar aggregations in the line of galactic evolution, the globular clusters, are composed primarily of stars that are in the neighborhood of the initial evolutionary line AB. In some cases the line AD is also represented and frequently there are stars along the lower portions of the main sequence, but there are no representatives of the advanced types: the hot massive stars. We therefore conclude from this evidence that the globular clusters are relatively young structures, which agrees with the testimony from other sources. The next larger aggregates, the elliptical galaxies, are composed of stars of the same general type as those of the globular clusters, the so-called Population II. Here, however, a few blue giants are occasionally found—indications that the general age level is increasing. Then when we reach the spirals the full complement of advanced type (Population I) stars makes its appearance, confirming the status of these galaxies as the oldest inhabitants of the material system.
Another possible method of identifying the age of a galaxy or other material aggregate is a determination of the proportion of heavy elements in the matter of which it is composed. As indicated in the preceding discussion, the building up of heavy elements from the hydrogen and helium atoms which are the initial products in the formation of matter is a slow but continuous process. The elements heavier than the nickel-iron group are destroyed in the stellar cycle and it can be expected that the total amount of these elements will reach an equilibrium value and will not increase above this level, but the proportion of elements in the intermediate range should continue to increase indefinitely as the aggregate grows older. If the proportion of heavy elements in an aggregate can be measured, this measurement then serves as an indication of age. Obviously an accurate determination of this quantity presents some difficult problems, but some attempts in this direction have been made and it is interesting to note that the results of these initial efforts are entirely in accord with the ages of the various structures as inferred from other data. A recent evaluation finds the percentages of elements heavier than helium ranging from 0.3 in the globular clusters, theoretically the youngest stellar aggregation available, to 4.0 in the Population I stars and interstellar dust in the solar neighborhood, theoretically the oldest material within convenient observational range.
In the preceding paragraphs we have considered six different items which should theoretically serve as criteria of galactic age: (1) size, (2) rotational velocity, (3) shape, (4) relative abundance, (5) age of the constituent stars, and (6) proportion of heavy elements. All of these criteria are in agreement that the observed galaxies and sub-galaxies can be placed in a sequence which confirms the theoretical deduction that there is a definite evolutionary path in the material universe extending from dispersed atoms and sub-material particles through particles of matter, clouds of atoms and particles, stars, clusters of stars, elliptical galaxies and small spirals to the giant spiral galaxies which constitute the final stage of the material phase of the galactic cycle. It is possible, of course, that some of these units may have remained inactive from the evolutionary standpoint for long periods of time, perhaps because of a relative scarcity of galactic “food” in their particular regions of space, and such units may be chronologically older than some of the aggregations of a more advanced type. The capture of relatively large aggregates also necessarily results in a temporary divergence from the normal relationship between age and size. Such variations as these, however, are merely minor fluctuations in a well-defined evolutionary course.
Next we turn to a different kind of evidence which gives further support to the theoretical conclusions. In the preceding discussion it has been demonstrated that the deductions as to continual growth of the material aggregates by capture of matter from the surroundings are substantiated by the fact that the ages of the various types of galaxies, as indicated by several different criteria, are definitely correlated with their respective sizes. Now we will examine some direct evidence of captures of the kind required by theory. First we will consider evidence which indicates that certain captures are about to take place, then evidence of captures actually in progress, and finally evidence of captures that have taken place so recently that their traces are still visible.
The early history of the process of aggregation must be derived principally from theory since the observation of small non-luminous aggregates is possible only to a very limited extent (at least with the facilities now available). We deduce that the atoms which constitute the initial phase of matter combine to form particles, and this deduction is confirmed by evidence of the existence of dust particles in interstellar space. We further deduce that these particles gather together into dust clouds and that stars are formed from clouds of dust and gas when the first magnetic ionization level is reached and an adequate source of heat is thereby activated. At this point the aggregates become self-luminous and the task of the observer is greatly simplified, although the enormous distances which are involved still stand as formidable obstacles to complete knowledge. From the information gathered by observation two striking facts about the formation of the stars emerge. First, we find that the stars are separated by almost fantastic distances and that the most powerful gravitational forces in the universe, those in the central regions of the largest galaxies, are not able to reduce this separation by any significant amount. (From the standpoint of this discussion binary and multiple stars are regarded as stellar units, and the term “star” should be understood as including such systems.) The second of these rather surprising facts is that, although direct observation is possible only in very limited areas, we have sufficient observational information to show that single stars and relatively large groups (globular clusters) are abundant throughout space, but there is no indication of the existence of aggregations of intermediate size.
In order to throw some light on the situation which is responsible for these somewhat bizarre relationships, let us turn back to gravitational theory. We have found that the gravitational force exerted by mass m on unit mass at distance d is m/d². At the point where the gravitational force exerted on unit mass is unity in all effective dimensions, the gravitational and space-time forces are likewise in equilibrium in all dimensions. We have previously evaluated the inter-regional ratio of effective dimensions as 156.44, and we have found that a total of 3 × (156.44)³ three-dimensional units in the time region are required to produce one effective unit parallel to the time-space region forces. The ratio of the total gravitational force to the force exerted against a single one-dimensional rotational unit is therefore
3 × (156.44)³ × 3 × 156.44 = 5.391×10⁹
On this basis the equilibrium equation between the gravitational force and the unit force of the space-time progression is
1/(5.391×10⁹) × m/d₀² = 1

Solving for d₀, we obtain

d₀ = m^½ / 73420     (157)
At this distance d₀ the gravitational motion is equal to the space-time progression and there is no resultant motion in either direction. At distances less than d₀ there is a net inward velocity. Beyond d₀ the net velocity is outward. We thus find that for any specific mass there is a gravitational limit beyond which the net effective force reverses direction and the resultant motion is outward rather than inward.
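The arithmetic behind the force ratio and the coefficient of equation 157 can be checked directly. The following Python sketch, added here purely for verification, reproduces the figures quoted above from the stated inter-regional ratio of 156.44.

```python
import math

# Ratio of the total gravitational force to the force exerted against
# a single one-dimensional rotational unit: 3 x (156.44)^3 x 3 x 156.44
ratio = 3 * 156.44**3 * 3 * 156.44   # ≈ 5.391e9

# The coefficient in d0 = m^(1/2)/73420 is the square root of this ratio
coefficient = math.sqrt(ratio)       # ≈ 73420
```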
Here, then, is the explanation for both of the extraordinary characteristics of the stellar distribution. The stars are separated by tremendous distances because each star or pre-stellar cloud continually pulls in the material within its gravitational range and this prevents the accumulation of enough matter to form another star in this space. Formation of additional stars can take place only outside the gravitational limits and when such stars originate outside these limits they move outward from all previously existing stars. The immense region within the gravitational limit of each star is therefore reserved to that star alone.
The mass of the sun has been calculated as 2×10³³ g, which is equivalent to 1.205×10⁵⁷ natural units of mass. The corresponding number of natural units of space is the square root of this quantity, or 3.47×10²⁸, which amounts to 1.58×10²³ cm or 167,000 light years. Applying the coefficient of equation 157, we find that the gravitational limit of the sun is at 2.27 light years. The nearest star system, Alpha Centauri, is 4.2 light years distant, and the average separation of the stars in the vicinity of the sun is estimated at 2 parsecs or 6.5 light years. Sirius, the nearest star larger than the sun, has its gravitational limit at 3.5 light years, and the sun, 8.7 light years away, is well outside this limit. It is evident that this space distribution, in which the minimum distance is two-thirds of the average, requires some kind of barrier on the low side; it cannot be the result of pure chance. The existence of a gravitational limit just below the minimum stellar separation explains the highly abnormal distribution.
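The chain of conversions in this paragraph can be verified step by step. In the sketch below, the number of centimeters per natural unit of space is not taken from an outside source but is implied by the figures quoted above (3.47×10²⁸ units corresponding to 1.58×10²³ cm), so the calculation only checks the internal consistency of the quoted values.

```python
import math

m_sun_units = 1.205e57             # solar mass in natural units of mass
s_units = math.sqrt(m_sun_units)   # corresponding natural units of space, ≈ 3.47e28

cm_per_unit = 1.58e23 / 3.47e28    # cm per natural unit, implied by the text
cm_per_ly = 9.461e17               # centimeters per light year

separation_ly = s_units * cm_per_unit / cm_per_ly   # ≈ 167,000 light years
d0_ly = separation_ly / 73420                       # gravitational limit, ≈ 2.27 ly
```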
From the foregoing figures and the relation indicated by equation 157 it can also be seen why small clusters of stars are not formed under normal conditions. Let us consider, for example, a hypothetical cluster of ten stars in a region in which the stars of the general field are uniformly spaced at a density equal to that in the neighborhood of the sun. On calculating the gravitational limit of the cluster we find that even the closest of the field stars are outside this limit. Since the density of matter in the dust clouds from which the stars are formed is no greater and probably less than that assumed for purposes of this calculation, it is apparent that a cluster of this size not only could not grow but could not even be formed in the first place. We deduce, therefore, that where a large number of stars form contemporaneously from a dust cloud of vast proportions a relatively large star cluster is formed, but that all other stars are formed as individual units.
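One way to see why the hypothetical ten-star cluster fails is to compare its gravitational limit with the size of the region the ten stars would themselves occupy. The sketch below assumes, for illustration only, that the stars are spaced like the field stars near the sun (one per cube of side 6.5 light years); on that assumption the gravitational limit falls inside the cluster itself, so every field star lies beyond it.

```python
import math

# Gravitational limit of ten solar masses, from d0 = 2.27 * m^(1/2) ly
d0 = 2.27 * math.sqrt(10)          # ≈ 7.2 light years

# Radius of a sphere holding ten stars at the local field density
# (one star per cube of side 6.5 ly -- an illustrative assumption)
vol = 10 * 6.5**3
r_cluster = (3 * vol / (4 * math.pi)) ** (1 / 3)   # ≈ 8.7 light years

# d0 < r_cluster: the limit lies within the cluster, so no field star
# can be captured and the cluster cannot grow.
```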
Within the clusters the star density is greater than that in regions such as the one in which the solar system is located, but the nature of the force equilibrium in any aggregation of stars is such as to preclude any major increase in the density. Unlike the units of matter within the star, each of which exerts a force of attraction on all others, the individual stellar units within the cluster repel each other and the cluster is held together only by the gravitational attraction between the individual stars and the cluster as a whole. This limits the concentration toward the center and, except for the outer regions in which the density gradually drops to the near zero value of the surrounding space, it is probable that the density is nearly uniform throughout the cluster and does not increase appreciably with the cluster size. The average density of the globular clusters is estimated at one star equivalent to the sun per two cubic parsecs, which is about five times the density of the local star system. The absolute maximum, on the basis of the figures previously quoted, is 20 times the local density and the maximum density in the clusters must stay within this limit to keep the system stable. The observed average density indicates that this requirement is met by a substantial margin.
In the light of the points brought out in the foregoing discussion we may conclude that individual stars and clusters of the globular type are continually being formed throughout the vast expanse of inter-galactic space. Each of the individual stars is ultimately captured by one of the clusters or galaxies. The great majority of the clusters also come within the gravitational limit of one or another of the larger aggregates sooner or later and are absorbed, but a few manage to stay out of the way of their voracious larger neighbors long enough to develop into full-sized galaxies. It is not unlikely that the union of two large clusters is the event that marks the advance from cluster to galaxy status, since this not only provides the additional mass needed to speed up the capture of other clusters and smaller units, but also explains the origin of the increased rotational velocity which is characteristic of the galaxies.
Because of the continual pull exerted by the galaxies on all of the clusters within the galactic gravitational limits, we can expect to find each galaxy surrounded by a concentration of globular clusters moving gradually inward. Inasmuch as the original formation of the clusters took place practically uniformly throughout all of this space the concentration of clusters should theoretically continue to increase as the galaxy is approached, until the capture zone is reached. Furthermore, the number of clusters in the immediate vicinity of each galaxy should theoretically be a function of the gravitational force and the size of the region within the gravitational limits, both of which are directly related to the size of the galaxy. All of these theoretical conclusions are confirmed by observation. A few clusters have been found accompanying such small galaxies as the member of the Local Group located in Fornax; there are at least 3 or 4 in the Small Magellanic Cloud and about a dozen in the Large Cloud; our Milky Way System has at least 150 when allowance is made for those which we cannot see for one reason or another; the Andromeda spiral, M 31, has about 200; NGC 4594, the “Sombrero Hat,” is reported to have “several hundred” associated clusters; while the number surrounding M 87 is estimated to be about a thousand. These numbers of clusters are definitely in the same order as the galactic sizes indicated by the criteria previously established. The Fornax-Small Cloud-Large Cloud-Milky Way sequence is not open to question. M 31 and our own galaxy are probably close to the same size but the latest information indicates that M 31 is the larger, as the relative numbers of clusters would suggest. The dominant nucleus in NGC 4594 shows that this galaxy is still older and larger, while all of the characteristics of M 87 suggest that it has reached the upper limit of galactic size.
Here again, as in the case of stellar evolution, observation gives us only what amounts to an instantaneous picture and to support the theoretical deductions we must rely primarily on the fact that the positions of the clusters as observed are strictly in accordance with the requirements of the theory. It is worthy of note, however, that such information as is available about the motions of the clusters of our Galaxy is also entirely consistent with this theory. In the words of Struve, we know “that the orbits of the clusters tend to be almost rectilinear, that they move much as freely falling bodies attracted by the galactic center.” According to the theory that has been developed herein, this is just exactly what they are.
Capture of galaxies by larger galaxies is much less common than capture of globular clusters, simply because the clusters are very much more abundant. We may deduce, however, that there should be a few galaxies on the road to capture by each of the giant spirals, and this is confirmed by the observation that the nearer spirals (the only ones we can check) have “satellites,” which are nothing more than small galaxies that have come within the gravitational field of the larger units and are being pulled in to where they can be conveniently swallowed. The Andromeda spiral, for instance, has at least four satellites: the elliptical galaxies M 32, NGC 147, NGC 185, and NGC 205. The Milky Way galaxy is also accompanied by at least four fellow travelers: the two Magellanic Clouds and the elliptical galaxies in Sculptor and Fornax. The expression “at least” must be included in both cases as it is by no means certain that all of the small elliptical galaxies in the vicinity of these two spirals have been identified.
Some of these galactic satellites not only occupy the kind of positions required by theory, and to that extent support the theoretical conclusions, but also contribute evidence of the second class: indications that the process of capture is already under way. Let us look first at the irregular galaxies. This galactic classification was not given a separate place in the age-size-shape sequence previously established as it appears reasonably certain that these irregular aggregates, which constitute only a small percentage of the total number of observed galaxies, are merely galaxies belonging to the standard classes which have been distorted out of their normal shapes by special factors. The Large Magellanic Cloud, for instance, is big enough to be a spiral and it contains the high proportion of advanced type stars which is typical of the spirals. Why then is it irregular rather than spiral? The most logical conclusion is that the answer lies in the proximity of our own giant system; that the Cloud is in the process of being swallowed by our big spiral and that it has already been greatly modified by the gravitational forces which will eventually terminate its existence as an independent unit. We can deduce that the Large Cloud was actually a spiral at one time and that the “rudimentary” spiral structure which is recognized in this system is in reality a vestigial structure.
The Small Cloud has also been greatly distorted by the same gravitational forces and its present structure has no particular significance. From the size of this Cloud we may assume that it was a late elliptical or early spiral galaxy. The conclusion that it is younger than the Large Cloud reached on the basis of the relative sizes is supported by the fact that the Small Cloud is a mixture of Population I and Population II stars, whereas the stars of the Large Cloud belong almost entirely to the types assigned to Population I in Baade’s original classification.
The long arm of the Large Cloud which extends far out into space on the side opposite our Galaxy is a visible record of the recent history of the Cloud. It should be recognized that the gravitational attraction of the Galaxy is exerted on each component of the Cloud individually, not on the structure as a whole, since the Cloud is not an integral unit but an assembly of discrete units in which the cohesive and disruptive forces are in balance, a balance which is precarious at best in view of the repulsion between the individual units. The differential forces due to the greater distances to the far side of the Cloud were unimportant when the Cloud was far away but as it approached the Galaxy the force differential increased to significant levels. As the main body was speeded up by the increasing gravitational pull it was inevitable that some stragglers would fail to keep up with the faster pace, and once they had fallen behind the force differential became even greater. We would expect, therefore, to find a luminous trail along the recent path of the incoming Cloud: just the kind of a structure that we actually observe.
This is no isolated phenomenon. Small galaxies may be pulled into the larger units without leaving visible evidence behind, as the amount of material involved is too small to be detected at great distances, but when two of the large units, the spirals, approach each other we commonly see luminous trails of the same nature as the one that has just been discussed. Figure 44 is a diagram of the structural details which can be seen in photographs of the galaxies NGC 4038 and 4039. Here we see that one galaxy has come up from the lower right of the diagram and has been pulled around in a 90 degree bend. The other has moved down from the direction of the top center and has been pulled to the right and forward. When the action is complete there will be one giant spiral moving forward to its ultimate destiny, leaving the stray stars to be picked up by some other aggregation which will come along at a later time. Several thousand “bridges” which have developed from interaction between galaxies are reported to be visible in photographs taken with the 48 inch Schmidt telescope on Mt. Palomar. Some of these are trailing arms similar to those in Figure 44. Others are advance units which are rushing ahead of the main body. The greater velocity of these advance stars is also due to the gravitational differential between the different parts of the galaxy, but in this case the detached stars are the closest to the approaching galaxy and are therefore subject to the greatest gravitational force.
In order to produce effects of this kind it is, of course, necessary that the smaller unit be well within the effective gravitational limit of the larger. It will therefore be of interest to calculate the gravitational limit of our Galaxy, a typical large spiral, and to compare this distance with the observed separations between some of the objects which are presumably undergoing gravitational distortions. The galactic masses are usually expressed in terms of a unit equal to the solar mass and since we have already evaluated the gravitational limit for this mass we may express equation 157 in the convenient form
d₀ = 2.27 (m/m☉)^½ light years     (158)
The mass of our Galaxy is estimated all the way from 10¹¹ to 5×10¹¹ solar masses. The probable accuracy of these estimates will be discussed later, but if we accept an intermediate value for present purposes equation 158 gives us a gravitational limit of about a million light years. The distance to the Magellanic Clouds is variously estimated from about 150,000 to some 230,000 light years, but in any event it is apparent (1) that the Magellanic Clouds are well inside the gravitational limit of the Galaxy, and (2) that the diameters of the Clouds, approximately 20,000 and 30,000 light years, are large enough in proportion to the distance from the Galaxy to give rise to significant differentials in the effective gravitational forces. The calculation thus verifies the conclusion that the Magellanic Clouds are well on their way to capture by the Galaxy. The diameter of the Galaxy is about 100,000 light years and we may therefore generalize these findings for application to distant systems by observing that considerable deformation and loss of material from a large incoming unit are produced at any distance less than the equivalent of two diameters of the larger galaxy. There are many visual pairs of galaxies which show no indications of gravitational distortion although they appear to be within the two diameter range, but in these instances we must conclude that there is actually a radial separation which puts them beyond the effective distance.
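The figure of about a million light years follows directly from equation 158. In the sketch below, the choice of 2×10¹¹ solar masses as the "intermediate value" is an assumption for illustration; any mass in the quoted range gives a limit of roughly the same order.

```python
import math

def grav_limit_ly(solar_masses):
    # Equation 158: d0 = 2.27 * (m / m_sun)^(1/2) light years
    return 2.27 * math.sqrt(solar_masses)

# Intermediate galactic mass estimate (assumed here for illustration)
d0_galaxy = grav_limit_ly(2e11)
# ≈ 1.0e6 light years: the Magellanic Clouds, at roughly
# 150,000-230,000 light years, lie well inside this limit.
```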
Irregularities of one kind or another are relatively common in the very small galaxies but these are not usually harbingers of coming events like the gravitational distortions of the type experienced by the Magellanic Clouds. Instead they are relics of events that have already happened. Capture of a globular cluster by a small galaxy is a major step in the galactic course of evolution; consolidation with another small galaxy is a revolutionary development. Since the relatively great disturbance of the galactic structure due to either of these events is coupled with a slow return to normal because of the low rotational velocity, the structural irregularities persist for a longer time in the smaller galaxies and the number of small irregular units visible at any particular time is correspondingly large.
Although the general spiral structure of the larger galaxies is regained relatively soon after a major consolidation because of the high rotational velocity which speeds up the mixing process, there are variations in some of these structures which seem to be correlated with recent captures. We note, for instance, that a number of spirals have semi-detached masses or abnormal concentrations of mass within the spiral arms which are difficult to explain as products of the development of the spiral itself, but could easily be the results of captures. The outlying mass, NGC 5195, attached to one of the arms of M 51, for example, has the appearance of a recent acquisition. Similarly the lumpy distribution of matter in M 83 gives this galaxy the aspect of a recent mixture which has not been thoroughly stirred. A study of the structure of the so-called “barred” spirals also leads to the conclusion that these units are galactic unions which have not yet reached the normal form. The variable factor in this case appears to be the length of time required for consolidation of the central masses of the combining galaxies. If the original lines of motion of the two units intersect, the masses are undoubtedly intermixed quite thoroughly at the time of contact, but an actual intersection of this kind is not a requirement for consolidation. All that is necessary is that the directions of motion be such as to bring one galaxy well within the gravitational limit of the other at the closest point of approach. The gravitational force then takes care of the consolidation. Where the gap to be closed by gravitational action is relatively large, however, the rotational forces may establish the characteristic spiral form in the outer regions of the combined galaxies before the consolidation of the central masses is complete and in the interim the galactic structure is that of a normal spiral with a double center.
Figure 45 (a) shows the structure of the barred spiral galaxy NGC 1300. Here the two prominent arms terminate at the mass centers a and b, each of which is connected with the galactic center c by a bridge of dense material which forms the bar. On the basis of the conclusions reached in the preceding paragraph we may regard a and b as the original nuclei of Galaxies A and B, the two units whose consolidation produced NGC 1300. The gravitational forces between a and b are modifying the translational velocities of these masses in such a manner as to cause them to spiral in toward their common center of gravity, the new galactic nucleus, but this process is slowed considerably after the galaxy settles down to a steady rotation as only the excess velocity above the rotational velocity of the structure as a whole is effective in moving the mass centers a and b forward in their spiral paths. In the meantime the gravitational attraction of each mass pulls individual stars out of the other mass center and builds up the new galactic nucleus between the other two. As NGC 1300 continues on its evolutionary course we can expect it to gradually develop into a structure such as that in Figure 45 (b), which shows the arms of M 51. Figure 45 (c) indicates how M 51 would look if the central portions of the arms were removed. The structural similarity to NGC 1300 is obvious.
Another valuable source of information corroborating the theoretical deductions with respect to the capture process is provided by the globular clusters. These clusters are too small to affect the shape of the larger galaxies which may absorb them and they are also too small for the development of noticeable distortion effects within their own structures such as those which we see in the Magellanic Clouds. On the other hand the process of capture of these units is taking place practically on our doorstep and we are able to follow the clusters into the main body of the galaxy and to read their history in much greater detail than is possible in the case of the larger and more distant aggregates.
We see the globular clusters as a roughly spherical halo extending out to a distance of about 100,000 light years from the galactic center. There is no definite limit to this zone; the clusters gradually decrease in concentration until they reach the cluster density of inter-stellar space, and individual clusters have been located out as far as 500,000 light years. Since the visible diameter of the average cluster is in the neighborhood of 100 light years and the actual over-all dimensions are undoubtedly greater, there should be a substantial gravitational differential between the near and far sides of the cluster at distances within 100,000 light years. We can therefore deduce that the clusters are experiencing an increasing loss as they approach the Galaxy, both by acceleration of the closest stars and by retardation of the most distant. The effect of slow losses of this kind on the shape of a nearly spherical rotating aggregate is minor and the detached stars merge with the general field of stars which is present in the same zone as the clusters. The process of attrition is therefore unobservable from our location, but we can verify its existence by comparing the sizes of the clusters before and after losses of this kind have taken place. Studies which have been made on the clusters accessible to observation indicate that the average size of the units at 25,000 parsecs from the galactic center is 30 percent greater than the average size of those only 10,000 parsecs distant. From this it would appear that the cluster loses more than half of its mass by the time it reaches what may be regarded as the capture zone, the region in which the gravitational action is relatively rapid.
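The step from a 30 percent difference in size to a loss of more than half the mass deserves a word of arithmetic. The sketch below assumes that the quoted "size" is a linear dimension and that the cluster density is roughly uniform, so that mass scales as the cube of the linear size; both assumptions are made here only to illustrate the reasoning.

```python
# Clusters at 25,000 pc average 30 percent larger in linear size than
# those at 10,000 pc. With roughly uniform density, mass scales as the
# cube of the linear size (illustrative assumption):
mass_ratio = 1.3 ** 3                 # ≈ 2.2

# Fraction of the original mass remaining by 10,000 pc
surviving_fraction = 1 / mass_ratio   # ≈ 0.46, i.e. more than half lost
```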
In this capture zone the losses are still greater and by the time the cluster arrives in the vicinity of the galactic plane the remaining stars are numbered in the thousands instead of in the tens or hundreds of thousands. On entry into the rapidly rotating spiral disk still further disintegration occurs, and the original globular cluster becomes a number of separate galactic clusters, the largest of which has only a few hundred members. Since the gravitational attraction of this small group is not sufficient to offset the effect of the non-uniform rotational forces of the Galaxy, the galactic clusters slowly break up and the individual stars go their separate ways. In the meantime, however, the evolutionary development of the stars is speeded up by the greatly increased amount of “food” available in the galactic disk and the stars in the older galactic clusters are quite different from those in the units just making the transition from the globular to the galactic status.
This evolution of the constituent stars is the feature which enables us to identify the relative ages of the clusters and thereby to confirm the theoretical deductions as to the history of these units. The original globular clusters are relatively young aggregates and the spread between the oldest and youngest stars in each cluster, excluding strays from older systems that may have been picked up along the path, only represents a fraction of the total evolutionary cycle. After the cluster arrives in the immediate vicinity of the Galaxy it ceases to grow and there is no further increase in the age spread. The sector of the cycle on the H-R diagram occupied by the constituent stars then simply moves forward around the circle as the cluster grows older and passes through the various evolutionary stages.
Figure 46 is a series of cluster diagrams arranged in order of increasing age. As a means of facilitating identification of the position of each group with reference to the complete evolutionary cycle, the entire stellar cycle is shown in outline in each diagram and the sectors occupied by the stars of the particular group are filled in with heavy lines. We have already noted that the globular clusters are composed of very young stars in the early evolutionary region at the upper right of the H-R diagram. In Figure 46, diagram (a) shows the composition of a typical globular cluster, M 92. Here the most advanced stars have barely reached the main sequence, the youngest are still in the formation zone, and the great majority of the constituent stars are in the intermediate region on one of the paths AB or AD. Diagram (b) is a similar representation of the globular cluster M 13, which is in a slightly more advanced stage, a larger proportion of the stars having arrived in the lower section of the main sequence. The composition of the galactic cluster M 67, diagram (c), is very similar to that of M 13, indicating that M 67 is a very recent arrival in the galactic disk, a conclusion which is corroborated by the fact that this is one of the most populous of the known galactic clusters and one of the highest above the galactic plane (about 440 parsecs). In an older cluster, the Hyades (d), a few stars still remain on the contraction path AB but the majority have reached the main sequence. Next is a still older cluster, the Pleiades (e), in which the last stragglers have attained gravitational equilibrium and the entire body of stars has moved up along the main sequence.
Further development of the Pleiades cluster will bring the hottest stars in this group to the destructive limit at the top of the main sequence and will cause these stars to revert to the red giant status via the explosion route. In the double cluster h and χ Persei (f) we find that such a process has already begun. Here the main body of stars is in the region just below the upper limit but a number of red giants are also present. We can identify these giants as explosion products rather than new stars, since this explanation keeps all of the stars in the cluster in an unbroken sequence along the evolutionary path, whereas if they were young stars of cycle A they would be totally unrelated to the remainder of the cluster: a highly improbable situation.
The identification of still older clusters of stars is more difficult because the stars of the clusters separate in the course of time and there are some problems involved in recognizing these stellar associations when they are no longer compact groups. It appears probable, however, that the sun and its immediate neighbors constitute a group with a common origin and diagram (g) represents the stars of this Local Group. Here we have evidence that the group is well along in the second cycle. There are no giants among these stars but the presence of white dwarfs in such systems as Sirius and Procyon and the planets in the solar system shows that the group has been through the explosion phase. We may interpret the lack of red giants as indicating that the former giants such as Sirius have had time to get back to the main sequence while their slower white dwarf companions are still on the way. It is not certain that all of the nearby stars actually belong in this same age group, as some younger stars may also be present, but there are no obvious incongruities. Finally in diagram (h) we have the full complement of Population I stars as found in the spiral arms, an assortment which includes stars in all phases of the evolutionary cycle.
Thus far the terms Population I and Population II have been used in the customary manner to refer to the two general classes of stars first distinguished by Baade, and characterization of the stars of Figure 46 (h) as Population I follows this practice. As the diagram shows, however, classifying the stars of the spiral arms as Population I makes this category so broad that its usefulness is severely limited and it therefore seems appropriate to modify these classifications to bring them into line with the relations which have been developed in the foregoing pages. The general significance of the two designations will be retained but new definitions will be set up, based on position in the evolutionary cycle. In this revision the Population I designation will be applied to main sequence stars only, and all of the pre-main sequence stars will be assigned to Population II. These I and II classifications will then be subdivided according to the particular evolutionary cycle in which the stars are located, using the letter A to refer to the first cycle (the pre-explosion stage) and B, C, etc., to identify the subsequent cycles.
On this basis the early type first cycle stars of the globular clusters and elliptical galaxies, which were placed in Population II by Baade, will fall in Population II-A. The stars of the galactic clusters (except the very young systems such as M 67) and the other first generation main sequence stars of the spiral arms, which formed part of Baade’s Population I, will become Population I-A. In most spiral galaxies the stars of the nuclei resemble those of the globular clusters and were included in Population II in the original classification. From the facts that have been developed herein it is apparent that these are actually the oldest stars in the galaxies and they do not belong with the young stars of the clusters. They are similar to the latter in many respects only because they have gone all the way around the cycle and are back to the same position on the H-R diagram that is occupied by the young stars. Under the new definitions this position keeps the stars in Population II but since they are in the second cycle the classification is II-B. The second generation main sequence stars, the group to which the sun belongs, are Population I-B.
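The revised classification scheme can be condensed into a short sketch. The function name and encoding below are supplied here for illustration and are not part of the original text; the rule itself follows the definitions above: main sequence stars are Population I, all pre-main sequence stars are Population II, and the suffix letter identifies the evolutionary cycle (A for the first, B for the second, and so on).

```python
def classify(cycle, on_main_sequence):
    """Population label under the revised scheme: Population I for
    main sequence stars, Population II otherwise, with a letter
    suffix naming the evolutionary cycle (1 -> A, 2 -> B, ...)."""
    suffix = chr(ord('A') + cycle - 1)
    population = "I" if on_main_sequence else "II"
    return f"{population}-{suffix}"

print(classify(1, False))  # II-A: stars of the globular clusters
print(classify(1, True))   # I-A:  first generation main sequence stars
print(classify(2, False))  # II-B: stars of the galactic nuclei
print(classify(2, True))   # I-B:  the group to which the sun belongs
```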
Theoretically the stars of the galactic nucleus should continue moving around the cycle as they grow older, until the galaxy finally reaches the end of its life span, but detailed observation of the individual stars in this region is feasible only to a very limited degree with the facilities now available and it is difficult to determine just how far this cyclic course actually extends. We do observe, however, that the light from the nucleus of a galaxy does not always have the red color characteristic of the Class II populations. In a number of galaxies, perhaps as many as ten percent of the total, the light from the galactic center is reported to be as blue as that from the disk. This indicates that in these units a large proportion of the total light is coming from the most advanced members of Population I-B. The existence of I-B stars in relatively large numbers in other nuclei may then be inferred, since the presence of the upper main sequence stars of the second generation in some nuclei means that many slightly younger galaxies must contain lower main sequence stars of the same cycle. These early I-B stars are in the same spectral classes as the II-B group and cannot be distinguished by color. The same is true of the II-C stars, the class which follows the late type I-B stars that are responsible for the blue color in galactic nuclei where it appears. We can logically infer that at least some of these II-C stars are present but we cannot identify them in the nuclei with the facilities now available, and we cannot determine whether still older populations are present.
From the foregoing it can be seen that the characteristics of the composite light emitted by a galaxy or by one of its constituent parts constitute another means of identifying the age of the aggregate, supplementing the criteria previously discussed. The integrated light from the elliptical galaxies belongs to spectral type G. In the early spirals the emission rises to type F, or even A in some cases, because of the large number of stars which move up to the higher portions of the main sequence. As these stars pass through the explosion stage and revert to the II-B status, accumulating largely in the galactic nucleus, the light gradually shifts back toward the red and in the oldest spirals the color is very much like that of the elliptical galaxies. Summarizing this color cycle, we may say that the early structures are red, there is little change in the character of the light during the development of the elliptical galaxy, then a rapid shift toward the blue as the transition from elliptical to spiral takes place, and finally a slow return to red as the spiral ages. In order to lay the foundation for an explanation of these variations in the rapidity of change it will now be necessary to take up a consideration of the behavior of the interstellar dust and gas.
Since matter is continually forming throughout all space and is moving hither and yon under the influence of gravitation and other forces, there is a certain minimum amount of material subject to accretion in any environment in which a star may be located. Immediately after the formation of a star cluster by condensation of the denser aggregates of matter in a particular volume this thin diet of primitive material is all that is available for growth and the development of the structure is correspondingly slow. As time goes on the rate of action speeds up when material begins to arrive from the more distant regions which were not stripped of their substance by the initial condensation process. Furthermore, the increasing mass accelerates the rate of progress considerably as it not only extends the gravitational limit and puts additional material within reach but also makes the capture of larger aggregates feasible. As we have already noted, observation shows that the larger elliptical galaxies have reached the point where they are beginning to pull in globular clusters in addition to single stars and diffuse material.
We cannot see what is happening to the non-luminous material, but this matter is subject to the same gravitational forces as the luminous aggregates, and we can deduce that when the elliptical galaxies reach the size that permits them to start capturing globular clusters they simultaneously begin picking up pre-stellar clouds of similar size. The dust and gas clouds arrive too late in the elliptical stage of galactic evolution to have much effect on the properties of the elliptical units, although they are no doubt responsible for the development of the small representation of hot blue stars previously mentioned. But when the elliptical structure breaks up and spreads out to form the spiral, the stars of the galaxy are thoroughly mixed with the recent acquisitions of dust and gas and the stage is set for a period of rapid advance along the path of stellar evolution. This relatively fast progress is still further magnified when it is viewed from the standpoint of light emission since the hot stars at the upper end of the main sequence may be thousands of times as luminous as the average Population II star.
The identification of these conspicuous hot and luminous stars with the spiral arms was the step which led to the original concept of two distinct stellar populations, but the new information which has been developed herein makes it clear that the galactic arms actually contain a rather heterogeneous population and a more definite correlation between the various types of stars and the general stellar populations is in order. Population I as herein defined is composed entirely of stars of the main sequence, the most conspicuous being the blue giants at the top of the sequence. The various classes of hot and massive shell stars also belong in this group and we can include the supernovae, which mark the end of the dense phase of the stellar cycle. The Population II stars of all cycles on the minimum accretion branch are the red giants and sub-giants. The white dwarfs join this group after the first explosion; that is, in Class II-B and beyond.
The rapid accretion branch of the Population II-A stars is a group of variable stars sometimes called Type II Cepheids and including, in the order of increasing age and decreasing period, the stars of the RV Tauri, W Virginis, and RR Lyrae groups. The II-B variables, the corresponding stars of the next cycle, are similar but not identical and the groups which make up this class, listed in the same order as before, are the long period variables, the semi-regular variables, and the classical Cepheids. Since these are second generation stars they are binary or multiple systems and they are shifted upward on the H-R diagram relative to the corresponding II-A stars. According to recent determinations, the average difference in luminosity for stars of the same period is about 1½ magnitudes. Population II-B also includes a similar group of variables on the other side of the main sequence which is absent from the pre-explosion Population II-A. Here we have, also in the order of increasing age, the planetaries, the classical novae, the recurrent novae, and the dwarf novae of the U Geminorum and similar types. Population II-C and later variables no doubt extend the differences between the II-B and II-A classes still farther, but this point cannot be checked against observation because the available information regarding the third cycle stars is still quite incomplete. Table CXII is a summary of the stellar types included in each classification.
Table CXII

Population I (all cycles) .............. Main sequence stars

Population II
  Stable stars (all cycles) ............ Red giants and sub-giants
  Stable stars (II-B and later) ........ White dwarfs
  Variable stars (II-A) ................ RV Tauri, W Virginis, and RR Lyrae variables
  Variable stars (II-B) ................ Long period variables, semi-regular variables,
                                         classical Cepheids, planetaries, classical novae,
                                         recurrent novae, and dwarf novae
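The 1½-magnitude luminosity difference quoted above for II-B and II-A stars of the same period corresponds to roughly a fourfold difference in brightness, since each magnitude step is a factor of 10^0.4. A quick check (a modern illustration, not part of the original text):

```python
def luminosity_ratio(delta_mag):
    """Convert a magnitude difference to a luminosity ratio.
    Five magnitudes correspond to a factor of exactly 100,
    so one magnitude is a factor of 10**0.4."""
    return 10 ** (0.4 * delta_mag)

# The II-B variables are reported to average about 1.5 magnitudes
# brighter than the corresponding II-A stars of the same period.
ratio = luminosity_ratio(1.5)
print(round(ratio, 2))  # ~3.98, i.e. roughly four times as luminous
```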
From the nature of the growth processes as they have been described it is apparent that no aggregate consists entirely of a single stellar population, but the very young structures approach this condition quite closely since these young aggregates are formed from young stars and the only dilution by older material results from picking up an occasional stray such as one of the stars that are left behind on trails similar to those shown in Figure 44. The earlier globular clusters, under normal conditions, are therefore practically pure Population II-A and their H-R diagrams are similar to that of M 92, Figure 46 (a). The component stars are red giants, sub-giants, and variables of the RR Lyrae and other II-A groups. In the older globular clusters and the elliptical galaxies some of these same stars are present but a substantial number of stars have reached positions on the main sequence. On the basis of the classification which has been set up in this work both the older globular clusters and the elliptical galaxies will have to be regarded as being composed of mixed II-A and I-A populations. The earlier galactic clusters are in the same evolutionary stage as the elliptical galaxies and the H-R diagrams of M 67 and the Hyades, Figs. 46 (c) and (d), are to some extent representative of the phases through which the elliptical galaxies pass, although it should be remembered that the early end of the age distribution is not cut off in the growing galaxies as it is in the disintegrating clusters and the diagram for an elliptical galaxy in the same evolutionary stage as the Hyades would extend the sector occupied by the Hyades stars all the way back through the globular cluster sector to the original zone of star formation.
The rapid development in the early spiral stage eliminates most of the II-A units, except those in the incoming stream of captured material, and the stars of these early spirals are predominantly Population I-A. Further aging of these spirals then results in the appearance of second generation stars, beginning with Population II-B. The fact that the development of the spiral structure antedates the formation of the second generation stars results in a general distribution principle which has important implications for observational astronomy. With the qualification “except for strays from older systems” which will have to be understood as attached to all statements in this discussion of stellar populations, we may say that the second and later population stars, long period variables, classical Cepheids, white dwarfs, novae, etc., are confined exclusively to the stellar disks (including the nucleus). At the other extreme the early first generation stars (Population II-A) are distributed throughout all space, with the main sequence stars of the first generation (Population I-A) occupying an intermediate position.
In our own galactic system, for example, we find the typical Population II-A stars, red giants and RR Lyrae stars, in all of the observable region surrounding the Galaxy, both as individuals and in the globular clusters. On the other hand, the classical Cepheids and the novae, the most easily identified of the second generation stars, are strongly concentrated toward the galactic plane and these stars are not found in the globular clusters. A few long period variables have been reported in the globular clusters and among the high velocity stars which are outside the disk of the Galaxy, but the large degree of irregularity in these stars makes it rather difficult to classify them accurately and it seems likely that these apparently misplaced second generation stars are actually long period Type II variables (Population II-A). The distribution of the white dwarfs cannot be determined from observation as they are too faint to be seen at great distances, but we can at least say that there is no evidence which conflicts with the theoretical conclusions as to the evolution of these stars.
One of the very significant points brought out by the theoretical development is that the first cycle stars should be single units whereas those of the second and later cycles should be binary or multiple systems. The second part of this conclusion is given strong support by statistical studies of the stars in the local environment. These studies indicate that about two-thirds of the near-by stars with masses greater than that of the sun are binaries or multiple stars. As the stellar mass decreases this proportion falls off rapidly but the reason for this is clearly indicated in the previous discussion of the formation of planetary systems. We know that a planetary system can be formed in lieu of a binary star when the central mass is equal to that of the sun, and it is obvious that a smaller stellar mass is still more favorable for the appearance of a planet or system of planets rather than a star as the minor component of the post-explosion star system. The drop in the proportion of binary stars as the mass decreases is merely a reflection of the shift from visible stars to invisible stars or planets; it does not indicate any actual decrease in the number of two-component systems. The absence of binary or multiple units in the first cycle stars is more difficult to establish because of the relative inaccessibility of these stars, and the evidence thus far available is somewhat spotty. There are a number of reports of binary stars in the galactic clusters, where they should theoretically be absent except in Cycle B clusters and in the post-explosion members of the most advanced Cycle A units, such as the double cluster in Perseus. If any binary stars are actually present in the early type galactic clusters they are probably stars which have become mixed with the cluster stars during the entry of the cluster into the galactic disk.
It should be recognized, however, that the identification of some of these clusters as Cycle A structures is only tentative. It appears that the break-up of the clusters should proceed more rapidly than the evolution of the stars of which they are composed and for this reason the easily distinguished, homogeneous clusters are presumed to be relatively recent additions to the Galaxy. It is not impossible, however, that some of these clusters may have evolved quite rapidly and are already in the second cycle. We have already noted that the stellar evolution speeds up considerably in regions of high dust and gas concentration. A good illustration of the way in which the normal relationship between chronological age and evolutionary age can be modified by such an environment is provided by the globular clusters which are located in the Large Magellanic Cloud. Here the gravitational distortion of the galactic structure has resulted in an irregular distribution of the dust and gas clouds and some globular clusters have entered high density regions of this kind. As a result the evolution of the stars in these clusters has been much faster than normal and while the shape, size, and location of these clusters are those of normal globular clusters, the stars are similar to those of the galactic clusters: members of Population I-A. If the high percentages of binary stars reported by some observers for such clusters as Praesepe and the Hyades are confirmed it will be necessary to revise the tentative conclusions as to the evolutionary stage of these clusters and place them in Cycle B. There are also a large number of loose, heterogeneous clusters which quite definitely belong in the second cycle. One group of this kind which has been given extensive study is NGC 6231. Here we find a large proportion of Population II stars, indicating that this cluster is either considerably older or considerably younger than a main sequence cluster such as the Pleiades. 
Since the structure, or lack of structure, of the cluster indicates that it has undergone severe modification since entering the Galaxy we conclude that it is older and that the Population II stars belong to Class II-B. This conclusion is supported by evidence which indicates that the stars of the cluster are largely binaries.
As mentioned in the discussion of the spiral structure, the material of which a galaxy is composed is in such a physical condition that it has the general characteristics of a fluid. In such an aggregate the heavier material moves toward the center of gravity, displacing the lighter units, which concentrate in the outer regions (the galactic disk). The dust and gas clouds and the early type stars are therefore found mainly in the disk while the older and heavier stellar systems sink into the nucleus. The segregation process is very slow and irregular because of the effects of the galactic rotation and in spite of the general separation of the older material from the younger it can be expected that many of the older star systems will be found scattered through the predominantly Cycle B population of the spiral disk. The average mass of these systems is greater than the corresponding average of any of the earlier groups but in view of the large variation between individuals within any group this characteristic is not a positive means of identification. Multiple systems are more distinctive. From the points brought out in the discussion of the formation of planetary systems it can be seen that the ultimate result of a stellar explosion is a binary star or star and planet, probably with some additional small companions. While it is possible that one of these companions may be large enough to qualify as a star, the nature of the aggregation process is such as to make this quite unlikely, and in general we may regard a multiple star system as one which has passed through the explosion stage more than once.
It has been estimated that five percent of all visual binaries are members of multiple systems. In addition to these systems in which evidence of multiplicity has been detected by observation, there are also a substantial number of observed binaries which are associations of two type A stars or two type B stars and which, according to the binary star theories that have been developed, must have additional unseen components on the other side of the main sequence. The systems of the Algol type, for instance, consist of main sequence stars paired with sub-giants of somewhat smaller mass. The main sequence star cannot be the B component because it is the larger of the two units and the more advanced from an evolutionary standpoint, and the sub-giant cannot be the B component because it is above the main sequence. We must therefore deduce that these star systems have undergone a second set of explosions and that each of the observed stars is accompanied by a small B component. As mentioned earlier, at least one and possibly both of the additional components predicted by theory have been located in Algol itself and the theory merely requires that the other systems of the same kind be similarly constructed.
We have seen that the two stars of a binary system tend to approach equality of mass as they near the upper end of the main sequence. When one explodes the other should follow suit within a relatively short time, particularly since it will receive substantial amounts of matter and thermal energy from its disintegrating companion. The great majority of multiple systems should therefore contain even numbers of stars. The normal progression is from binaries to four-member systems such as Algol and then to six-member systems on the order of Castor. The latter may be regarded as one of the oldest star systems within our field of vision.
We have found thus far in our examination of the aggregation process that the primary units of matter, the atoms, respond to the gravitational forces by continually combining until they finally build up into units of the maximum size possible for simple aggregates. These secondary units, the stars, likewise gravitate into still larger aggregates, the galaxies. The question now arises, is this the end of the aggregation process or do the galaxies again combine into super-galactic aggregates? The existence of many definite groups of galaxies with anywhere from 10 to 1000 members would seem to provide an immediate answer to this question, but the true status of these groups or clusters of galaxies is not as clear as that of the stars and the galaxies. Each of the stars is a definite and tangible unit, constructed according to a specific pattern from subsidiary units which are systematically related to each other. The same can be said of the galaxies. It is by no means certain, however, that this statement can be applied to the clusters of galaxies; on the contrary, the information now available suggests that it cannot.
Let us then turn to a theoretical examination of the question. It is immediately apparent that the basic situation is very similar to that involved in the combination of stars into galaxies. All of the smaller units which are formed within the gravitational limit of a giant spiral, or are brought within it by the relatively rapid extension of the limit due to the growth of the galaxy, are ultimately consolidated with the spiral; those outside this limit are continually receding. The question then reduces to a matter of whether or not the galaxies can extend their gravitational influence still farther by the formation of super-galaxies in the same manner that the stars extend their gravitational limits by the formation of star clusters. The mathematical relations are similar and since we find that the minimum star cluster contains thousands of stars we must conclude that if there are any super-galaxies the small clusters of galaxies now recognized do not meet the requirements. On the other hand we know from observation that our Galaxy cannot be a member of a giant super-galaxy since all galaxies other than the few in our immediate vicinity are observed to be receding. (According to theory the members of the Local Group are also moving outward away from us but this movement is so slow that it is masked by the random motions of the galaxies.) Furthermore, the recession is observed to be uniform throughout the vast space accessible to present-day telescopes and it therefore follows that super-galaxies cannot exist anywhere in this region of space.
Another line of reasoning brings us to the conclusion that the situation which we find in the observable region is typical and that there are no super-galaxies. The Fundamental Postulates require all basic processes to be cyclical, and the formation of super-galaxies is therefore impossible unless a process also exists whereby their existence can be terminated. But there are no more destructive limits on which such a process could be based. The lower and upper destructive limits of matter are reached in the supernovae and the mature galaxies respectively, and there are no others. We must therefore conclude that the existence of super-galaxies is inconsistent with the postulates.
What, then, is the nature of the observed clusters? A clue to the answer to this question can be found by examination of the contents of these groups. In our Local Group, for example, we find three major spirals, in which the bulk of the mass is concentrated, and fifteen or twenty small units. A striking contrast is supplied by the Coma cluster which contains at least 800 units, but few, if any, spirals. When we take a second look at this situation, however, it becomes apparent that the difference between the two groups is merely a matter of age. The Coma cluster is a relatively young aggregation in which the individual units are numerous but small; the Local Group is an old system in which the greater part of the mass has gravitated into a few large galaxies. Each of these giants is equivalent to 100 or more of the elliptical galaxies of the Coma cluster and when we take this factor into consideration the two groups are seen to be associations of comparable size, differing only in age and the characteristics accompanying age. We have already deduced that new galaxies are formed in regions which have been left vacant by the outward motion of the previously existing galaxies. Presumably this process can and does take place on a single galaxy basis in many, if not most, instances but the galactic associations can easily be explained if we recognize that larger regions will on occasion be left open through chance, and still further irregularities in the size of these vacant regions will be introduced through the disappearance of the mature galaxies by means of a process which will be discussed later. When an extensive region is thus left vacant new galaxies begin to develop throughout all of this empty space and because these galaxies originate at approximately the same time they pass through the various stages of evolution together and we can recognize the same kind of age characteristics in each group as a whole that we normally see in the individual galaxies.
The very early groups, those whose largest aggregates are globular clusters or the loose irregular galaxies resulting from the union of two or three clusters, are invisible unless relatively close. As the growth process continues the regular elliptical form is developed and the groups arrive at the stage represented by the Coma cluster and the cluster in Corona Borealis, in which there are a large number of small elliptical and irregular galaxies spaced relatively close together. Here the characteristics of the group as a whole are identical with the characteristics of the individual elliptical galaxy. Almost all of the component stars belong to the first generation families, Populations II-A and I-A, the composite light from the cluster is red, and there is no evidence of dust accumulations. As the group ages it decreases in numbers because of the consolidation of units but it spreads out into more space. While these processes are taking place the other signs of maturity appear: spiral galaxies are formed and go through their evolutionary stages, stars of the hot massive types are developed, and so on. In the later stages the cluster is essentially nothing more than a region of approximately average concentration in the general field of galaxies.
A highly significant fact about these mature groups of galaxies is that the giant spirals into which most of the mass has been concentrated are in general well outside the gravitational limits of their nearest contemporaries. In the Local Group, for example, the gravitational limit of M 31 is in the neighborhood of one million light years, whereas the distance from the Milky Way is double this figure. The average distance between bright galaxies of all kinds has been estimated at 2.4 million light years. Even within the groups, therefore, the major units have a general outward motion, although this velocity is small and the direction of the net movement can be reversed in any individual case by the random motion of the galaxy.
Calculation of the velocities of recession is complicated by uncertainties as to the true masses of the galaxies and the inter-galactic distances, but we may utilize the best available information to arrive at some tentative figures for comparison with the values indicated by the spectral red shifts. As we have found previously, equilibrium between the gravitational force due to the atomic rotation and the force of the space-time progression is reached when the gravitational force has unit value in each effective time region dimension. At greater distances the gravitational force falls below the level of the space-time force, which means that from this point on the net resultant of the two forces is directed outward rather than inward. Gravitation does not actually reach zero as long as it amounts to the equivalent of unity in at least one time region dimension, but it vanishes on dropping below this unit level, since less than unit force does not exist. We may express the equilibrium at the limiting distance, d₁, by substituting unity for the expression 9 × (156.44)³ in equation 156, which gives us
m / (156.44 d₁²) = 1
The limiting distance beyond which all galaxies recede with the full velocity of light then becomes
d₁ = m^½ / 12.5
which can be expressed in terms of solar masses as
d₁ = 13350 (m/mₛ)^½ light years
The mass of the Galaxy is a difficult quantity to measure and the most recent determinations run all the way from 10¹¹ to 5.0×10¹¹ solar masses. If we accept the highest value for our tentative calculations, d₁ becomes 13350 × (5×10¹¹)^½ = 9440×10⁶ light years. Between d₀ and d₁ the decrease in gravitational velocity and the corresponding increase in the velocity of recession are linear. Disregarding the relatively short distance between the Galaxy and d₀, we may then calculate the distance from our Galaxy to any other galaxy of the same or smaller mass by converting the red shift in the spectrum of that galaxy to natural units and multiplying by 9440×10⁶ light years or 2900×10⁶ parsecs. In Table CXIII the distances thus obtained are compared with a few of the values calculated from observational data.
Table CXIII: Galactic distances (millions of parsecs)
In view of all the uncertainties that enter into these calculations (the uncertainty as to the true mass of the Galaxy, the confused state of the distance determinations since the overthrow of the previously accepted yardsticks, and the possibility that some factors may have been overlooked in the very considerable extension of theory upon which the calculations are based), the best that can be expected is to arrive at comparative values which are of the same general order of magnitude, and the amount of divergence between the figures in the last two columns of Table CXIII is not significant. The calculations lead to a value of 104 km/sec per million parsecs for Hubble’s constant, the relation between red shift and distance. The 1954 distances shown in Table CXIII correspond to a constant of about 150, some more recently published values fall between 80 and 90, and it has been suggested that the true figure may be as low as 55. Since the accepted value before 1952 was 540 km/sec per million parsecs it is apparent that this whole situation is rather fluid at present and no firm conclusions are warranted. The calculated value would be increased to 230 km/sec per million parsecs if the minimum estimate of 10¹¹ solar masses were used as the mass of the Galaxy, and it could just as easily be reduced below the 104 figure by an upward revision of the Galactic mass.
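As a quick arithmetic check on the last few paragraphs, the following sketch recomputes the limiting distance and the implied Hubble constant from the formula d₁ = 13350 (m/mₛ)^½ light years given in the text. The light-year-to-parsec conversion factor is the standard one; the function names are ours, and taking H = c/d₁ is simply the statement that the recession velocity reaches the velocity of light at the limiting distance.

```python
from math import sqrt

# Limiting distance d1 and the implied Hubble constant, using
# d1 = 13350 * (m/m_sun)^(1/2) light years, as stated in the text.
C_KM_S = 299792.5        # velocity of light, km/sec
LY_PER_PARSEC = 3.2616   # standard conversion factor

def limiting_distance_ly(mass_solar):
    """Distance at which the recession velocity reaches unity (c)."""
    return 13350 * sqrt(mass_solar)

def hubble_constant(mass_solar):
    """H in km/sec per million parsecs, taking H = c / d1."""
    d1_mpc = limiting_distance_ly(mass_solar) / LY_PER_PARSEC / 1.0e6
    return C_KM_S / d1_mpc

d1 = limiting_distance_ly(5.0e11)                # maximum mass estimate
print(f"d1 = {d1:.3e} light years")              # about 9.44e9
print(f"d1 = {d1 / LY_PER_PARSEC:.3e} parsecs")  # about 2.89e9
print(round(hubble_constant(5.0e11)))            # 104, as in the text
print(round(hubble_constant(1.0e11)))            # about 232 (text quotes 230)
```

The minimum-mass figure lands near 232 rather than exactly 230; small differences in the conversion constants used shift it by a unit or two.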
From the reciprocal relation between time and space it is apparent that the material universe with which we are familiar must be duplicated by a non-material universe identical in all respects except that space and time are interchanged. The points of contact between the two regions are relatively few and the non-material universe, or non-material sector of the universe, as it should be called, has a shadowy and elusive aspect from our viewpoint far over on the other side of the space-time axis, but we can recognize a limited number of areas in which it impinges on our theater of action in one way or another. We will now undertake an examination of those phenomena which involve an interchange between the two halves of the space-time structure.
Before beginning this examination it will be desirable to consider the question of nomenclature. Heretofore the term “non-material” has been adequate for the brief general references that have been made to the region beyond the dividing line, but for more extended and detailed consideration it will be convenient to have a general term which can be used in combination with the familiar names of the material universe to indicate the inverse phenomena. The expression “non-material” is not very suitable for this purpose since it results in such unacceptable names as non-material matter and it also leads to some ambiguities when used in connection with phenomena such as electricity. The prefix “anti,” which is currently being applied to some of these entities—the “anti-neutron,” etc.—is likewise objectionable, since this term implies that one quantity is the negative of the other, whereas the actual relationship is that of inversion. After consideration of various possibilities it has seemed that the adjective “cosmic” can be adapted to this service without any great violence either to the etymology of the word itself or to current usage. In the following pages, therefore, this term will apply to the inverse of the phenomena of the material sector of the universe. The analogue of matter on the opposite side of the neutral axis, for example, will be designated as cosmic matter, abbreviated c-matter.
In the discussion of the galactic cycle it was pointed out that the evolutionary course of the galaxies in the material sector of the universe constitutes only half of the complete cycle. When a giant spiral reaches the end of its career at the destructive limit of magnetic ionization the material of which it is composed must cross the neutral line and begin the other half of the cycle as cosmic matter. Let us now turn our attention to the process through which the interconversion takes place.
It is clear that this must be a catastrophic event: something that hurls the entire galaxy, or at least the greater part of it, across the boundary. No mere leakage of matter will suffice, since the younger galaxies are continually growing older and if the mature units are not removed in some manner the proportion of later type galaxies will continually increase, which contradicts the general principle that the universe is unchanging in its general aspects. This means that the galaxy must terminate its existence with a gigantic explosion.
While this is apparently an inescapable deduction from the principles previously established, it must be conceded that it seems rather incredible on first consideration. The explosion of a single star is a tremendous event; the concept of an explosion involving billions of stars seems fantastic, and certainly there is no evidence of any gigantic variety of super-nova with which the hypothetical explosion can be identified. But let us examine the nature of this theoretical galactic explosion in more detail.
The galaxy is practically unaffected by thermal variations. Any changes in temperature apply to the individual stars and the temperature limit is reached in the interiors of these separate stars, not in the galaxy as a whole. Furthermore, the distances between the stars are so great that the temperature crisis in the individual star is relieved by the super-nova explosion without any significant effect on the temperature of its neighbors. The situation with respect to the other vibrational variable, the magnetic temperature, is entirely different. It has already been brought out that the increase in magnetic temperature is cumulative and the oldest stars, concentrated at the galactic center, therefore reach the destructive limit of magnetic ionization simultaneously just as the heaviest atoms, concentrated in the center of the star, simultaneously reach the destructive thermal limit. In each case the ensuing explosion propels the excess thermal or magnetic energy outward and the magnetic explosion is thus propagated through the mass of the galaxy just as the thermal explosion is propagated through the entire mass of the star.
Although the two explosion processes are very similar in these and other respects there is one very significant difference which was specifically pointed out in the original discussion of the destructive limits. The magnetic destructive limit does not involve cancellation of the magnetic rotational time displacement by an oppositely directed space displacement in the manner of the neutralization that takes place at the thermal limit, but is a result of reaching the upper zero point, the maximum possible magnetic time displacement. In other words, the galaxy and the star approach the zero limit of magnetic displacement from opposite directions. Thus the explosion of the galaxy is not a magnified super-nova; it is an explosion of the inverse type: a cosmic explosion. In the ordinary explosion with which we are familiar a portion of the mass is converted into energy in a very short time, and this results in dispersal of the remainder of the aggregate over a large amount of space in a limited amount of time. In the cosmic explosion space and time are reversed. Here a portion of the mass is converted into energy in a very small space, and this results in the dispersal of the remainder of the aggregate over a large amount of time in a limited amount of space.
In looking for astronomical evidence of a cosmic explosion, then, we should not expect to see any spectacular phenomenon. The direct results of the explosion are totally invisible since the matter is now being dispersed into time at velocities greater than unity, so that no radiation of any kind can reach us. There are, however, some collateral effects which should be observable. As the explosion proceeds a steadily increasing portion of the galaxy is dispersed into time and is lost from view. There may be some difficulty in distinguishing a galaxy which is on the way down from one which is on the way up, but there should be some difference in appearance which we can learn to recognize.
Another possible means of identifying an exploding galaxy is a reaction in the observable region. When events of this nature take place at a regional boundary line it is logical to expect that some portion of the participating units will fail to acquire the necessary energy (or velocity) to proceed in the outward direction and will be dispersed backward. In the super-nova explosion, for instance, we found that one portion of the stellar mass was blown forward into space whereas another portion was dispersed backward into time. Similarly we can expect to find a stream of particles issuing from the center of an exploding galaxy: a small replica of the large stream which is being propelled across the boundary line into time. In the galaxy M 87, which we have already recognized as possessing some of the characteristics that would be expected in the last stage of galactic existence, we find just the kind of phenomenon which theory predicts, a jet issuing from the vicinity of the galactic center, and it would be in order to identify this galaxy, at least tentatively, as one which is now undergoing a cosmic explosion, or, strictly speaking, was undergoing such an explosion at the time the light now reaching us left the galaxy.
Of course, all this represents a very considerable extension of theory into the unexplored region. The extension is not entirely unsupported, however, as we can also observe cosmic explosions on a small scale. We have previously discussed the phenomenon of radioactivity, which was also found to be due to arrival at the destructive upper limit of magnetic displacement, and in view of the points which have been developed subsequently it is now evident that an explosion is initiated immediately when this limiting value is reached. Like the explosion of the galaxy, this is a cosmic explosion rather than an ordinary explosion, and since it takes place in a small space rather than in a short time it lacks the characteristics by which we are accustomed to identify an explosion.
When viewed from the standpoint of our ordinary experience radioactivity is a very strange process. A radioactive aggregate remains apparently quiescent for a finite interval of time, then for no apparent reason one atom out of millions suddenly disintegrates, whereupon all is quiet for a further interval until another atom succumbs. Just why these particular atoms are affected and why the action continues at a constant rate irrespective of any change in physical conditions, even when these changes are of such magnitude that they would have a profound influence on any ordinary process, are questions that have never been satisfactorily answered.
The nature of these answers is now apparent. Radioactive decay is not a succession of separate events as it seems to be; the decay of any one aggregate is a single event initiated when the magnetic ionization level of the aggregate reaches the critical point and continuing until no more of the radioactive material remains. Each atom in turn takes part in the action at a time which depends on the rate of propagation of the cosmic explosion, just as each atom of a dynamite charge remains unaffected until the explosion is propagated through the intervening space from the initial point. The essential difference is that the rate of propagation of an ordinary explosion is very rapid, whereas the inverse condition prevails in the cosmic explosion and the rate of propagation is very slow. Furthermore, the ordinary explosion is propagated in space and the portions of the aggregate which are closer to the initial point or points are affected before the more distant portions, but the cosmic explosion is propagated in time and there is no order of succession in space. The successive atomic disintegrations are continuous in time order; that is, the atoms closest in time to the initial point or points disintegrate first and the explosion gradually moves outward in time, but there is no space order and the disintegrations therefore appear at random throughout the volume of the aggregate. The half-life of the radioactive substance is merely a measure of the rate of propagation of the cosmic explosion.
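The contrast drawn above (an explosion ordered in time but random in space) can be illustrated with a toy simulation. The exponential event times and uniform random positions are illustrative assumptions of ours, not part of the text's argument; the point is only that event times can be strictly ordered while the corresponding positions show no spatial pattern.

```python
import random

# Toy model of decay that propagates in time order but not space order:
# each atom gets an event time and an unrelated random position, so the
# successive disintegrations appear scattered throughout the volume.
random.seed(42)

HALF_LIFE = 10.0                 # arbitrary units
RATE = 0.693147 / HALF_LIFE      # decay constant = ln(2) / half-life

atoms = [(random.expovariate(RATE),                              # event time
          (random.random(), random.random(), random.random()))  # position
         for _ in range(1000)]
atoms.sort()                     # the "explosion" proceeds in time order

# The first few events occur at scattered, unrelated positions:
for t, pos in atoms[:3]:
    print(f"t = {t:6.3f}  at {pos}")

# Roughly half the atoms should survive one half-life:
survivors = sum(1 for t, _ in atoms if t > HALF_LIFE)
print(f"{survivors} of 1000 atoms remain after one half-life")
```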
In the radioactive explosion the amount of material involved is small and the effects are rapidly dissipated. The velocities produced are therefore limited to values somewhat below unity and the explosion products remain in the time-space region. The galactic explosion, on the other hand, involves an enormous mass and the explosion is so violent that the greater part of the material of the galaxy is accelerated to velocities above unity and dispersed into the space-time region. It should be noted that this is not the same direction as that in which the super-nova explosion disperses the matter which becomes the white dwarf star. The explosion of the star takes place at the lower limit, the mathematical zero point, and the high velocities propel the material from the center of the star backward into the time region. The explosion of the galaxy takes place at the upper limit and the high velocities propel the galactic matter forward into the space-time region.
The importance of this point lies in the fact that the time region is inside the time-space region and the white dwarfs therefore occupy specific locations in space even though the individual atoms are separated by empty time. The space-time region, on the contrary, is entirely outside the time-space region and any matter which crosses the boundary leaves the material sector of the universe and no longer has a definite location in space. This matter therefore becomes subject to the non-material relationships and by the operation of cosmic forces is converted to cosmic matter, whereupon it begins to play its part in the cosmic galactic cycle. Under the influence of cosmic gravitation which moves the rotating atoms of cosmic matter toward each other in time, just as material gravitation acts on matter in space in our sector of the universe, the atoms of cosmic matter join together as cosmic particles, the cosmic particles gather into cosmic clouds, the cosmic clouds condense into cosmic stars, the cosmic stars form groups and clusters, these aggregations grow into cosmic galaxies, the cosmic galaxies go through the same processes of development as described for the material galaxies, and finally each mature cosmic galaxy explodes, dispersing its cosmic matter back into the time-space region.
At this point the action reenters the region accessible to observation from our position in the material sector, and we may resume the detailed examination of the course of events which was interrupted when the cosmic explosion transferred the matter of the material galaxy into the inaccessible cosmic sector. We will identify the cosmic matter dispersed into our sector of the universe by the explosion of the cosmic galaxies as the cosmic rays.
Because of the reciprocal relation between space and time the rotational combinations with net displacement in time which we identify as the chemical elements and sub-material particles are necessarily paralleled by an exactly similar series of combinations with net displacement in space. The element chlorine, for instance, is a linear space frequency rotating with magnetic time displacements three and two and an electric space displacement of one. Corresponding to chlorine is a cosmic element, c-chlorine, consisting of a linear time frequency rotating with magnetic space displacements three and two and an electric time displacement of one.
In the first rough draft of this material, written many years ago, the statement in the preceding paragraph was followed by these comments, “Just where in the universe such an element would be located and how we would go about recognizing it are not apparent and no further consideration will be given to these space-elements in this work.” At that time the cosmic rays were regarded as radiation of very short wave-length and their place in the system being developed from the Fundamental Postulates was obscure. Within a few more years, however, the primary rays were found to consist of high energy particles, and by the time the first revision of the text was undertaken it was apparent that the observed characteristics of these particles were for the most part identical with the theoretical characteristics of the hypothetical cosmic elements, within the rather limited accuracy of the experimental results. Subsequent refinement of observation and measurement has further clarified the situation and we now have a very substantial body of experimental knowledge which can be compared with the theoretical properties of the cosmic matter as they are developed from previously established principles.
In beginning the construction of a theoretical picture of these cosmic particles and their behavior we may deduce first from the nature of the process through which they originate that they should reach us without preferential direction, inasmuch as they are dispersed into space from a different sector of the universe. This is substantially in agreement with the observations. There are some directional characteristics in the incoming stream, but not more than can be ascribed to conditions affecting the particles after their entry into the local system.
Next we deduce that the primary particles should arrive with extremely high velocities, ranging from slightly less than unity (the velocity of light) to velocities in the cosmic range, greater than unity. In order to propel the particles into the material sector of the universe the explosion of the cosmic galaxy must give them cosmic energies somewhat greater than unity, which means that the particle energy in the material system is slightly less than unity. For a particle of unit mass the corresponding velocity is also slightly less than unity, but the masses of the higher cosmic elements are less than unity, which means that their velocities at the same energy level are higher and may exceed the unit level. All of this is consistent with the results of observation, which merely indicate that the velocities are extremely high without establishing any upper limit.
The primary stream of particles should theoretically contain the various cosmic elements in approximately the same proportions that the material elements are found to occur in the oldest regions of our local system. This agrees with the results of observation except for the fact that these results are currently being interpreted as indicating that the cosmic particles are material elements. It is doubtful, however, if the available experimental techniques are capable of distinguishing between the cosmic elements and the material elements under the conditions existing when the observations of the primary particles are made. The presence of multiple charges, for instance, has no significance in this respect since the cosmic elements have the same ability to acquire charges as the material elements. The peculiar behavior of the particles after entry into the local system should be sufficient evidence to demonstrate that these are foreigners and not merely fast moving material atoms.
As soon as the primary particles arrive at the point where interaction with the material system is possible, the process of absorbing them into the system begins. Several different steps are involved in the process and the order of succession of these steps is not necessarily fixed. The particular sequence of events and the intermediate products are therefore somewhat variable, but we may trace what may be regarded as the normal sequence and then indicate the nature of the occasional deviations from the normal that can be expected.
The first process to which the cosmic elements should theoretically be subjected is a sort of stripping action whereby all of the components of the cosmic atomic motion which are compatible with the material system are removed, to the extent that is practicable, and only the “foreign” motion is left. Since the translational velocity, the electrical charges, and the rotational displacement in the electric dimension are all capable of being utilized in the local system, the effect of this first process in the normal sequence is to eliminate, insofar as is possible in the short time available, everything but the magnetic rotational space displacement, an item which cannot be incorporated into the material structure until it has undergone some major changes. The product of such a stripping process is one of the members of the purely magnetic series of cosmic elements (the cosmic equivalent of the inert gas series), with a greatly reduced velocity and a minimum charge, if any. Since the lower cosmic elements constitute the largest proportion of the primary particles the principal secondary product, aside from the electrons and other particles which are stripped off and absorbed into the local system, is cosmic helium.
Although this process is purely theoretical, it is a direct consequence of the probability principles. The combination of motions which constitutes the cosmic atom is a very stable unit in the cosmic sector of the universe but it has an extremely low probability under terrestrial conditions, and as soon as there is an opportunity for interaction with the material system each encounter tends to cause changes which move the atomic system toward a state of greater probability. The very high translational velocity, for instance, is an improbable condition in the local environment. Each contact with other units therefore tends to reduce the velocity of the cosmic atom to a lower level, a state of greater probability.
The second phase of the absorption of the cosmic particles into the material system involves the conversion of cosmic rotation into material rotation by a change in the orientation of the rotation with respect to space-time; that is, by a change in the zero point. We have already found in our examination of other phenomena that any rotational time displacement t is the equivalent of an oppositely directed rotational space displacement k-t, where k is the opposite end of the space-time unit. We have also evaluated the space-time unit in magnetic rotation as the equivalent of four subsidiary units in each dimension. Any magnetic rotational displacement a in space (or time) is thus equivalent to a displacement 4-a in time (or space).
The conversion of space displacement a into time displacement 4-a does not involve any modification of the rotation itself; it is merely a change in direction with reference to the general framework of space-time. The situation is analogous to a change in the valence of a material element. The negative valence one iodine atom in NaI is identical with the positive valence seven iodine atom in IF₇ even though the chemical behavior of the two atoms shows little resemblance, and by suitable methods either valence can be shifted to the alternate value. This is possible because the only difference between the two is a matter of direction; one unit clockwise in an eight unit circle reaches exactly the same spot as seven units counterclockwise. Similarly any cosmic atom is the equivalent of some material atom or combination of atoms, and by suitable methods can be converted into the latter. Here again probability is the active agent in the normally occurring processes. In the cosmic environment the cosmic atom is a stable structure with a high inherent probability of existence; in the material environment it is an improbable structure and therefore unstable. The effect of this situation is to force prompt conversion into the material status when the material environment is reached.
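Both the valence analogy and the a to 4-a rule are plain modular arithmetic, and can be sketched in a few lines. The function name is ours, and the reading of argon's magnetic displacements as 3-2 is an inference from the chlorine example given earlier in the text.

```python
# Direction-reversal arithmetic behind the valence analogy and the
# a -> 4-a displacement rule.

# One unit clockwise in an eight-unit circle reaches the same spot as
# seven units counterclockwise (the NaI / IF7 valence analogy):
assert (-1) % 8 == 7

def cosmic_equivalent(a):
    """A magnetic rotational displacement a in space (or time) is
    equivalent to a displacement 4 - a in time (or space)."""
    return 4 - a

# Cosmic helium carries magnetic displacements 2-1; converting each
# dimension gives 2-3, i.e. the 3-2 displacements of argon, matching
# the helium-argon equivalence stated in the text:
print(tuple(cosmic_equivalent(a) for a in (2, 1)))  # -> (2, 3)
```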
In view of the interconvertibility of the 4-a displacement in one system and the a displacement in the other, we may set up the following table of equivalents for the purely magnetic elements (the inert gas series).
|Material System||Cosmic System|
On this basis it should be possible for any element in the list to be transformed into the equivalent structure in the other system. Cosmic helium, for instance, is equivalent to argon. This process, however, encounters an obstacle in that the two magnetic rotations are independent but must conform to the same space-time direction. It is therefore impossible for either rotation to convert from one system to the other unless the second rotation just happens to be ready to make the conversion simultaneously. Such a coincidence can occur but it has a relatively low probability and hence the conversion is normally accomplished by a more probable route.
Like the isotope of matter which is above or below the stability limits, the cosmic atom is outside the zone of stability in the material environment and it is therefore subject to the same type of losses from its system of motions. The most probable event in the short terrestrial sojourn of the cosmic particle, after the initial stripping, is therefore a loss of rotational displacement. The direction of greater stability is toward the cosmic equivalent of a lower time displacement; that is, a higher space displacement. The losses consequently take the form of ejection of time displacement, increasing the space displacement (the cosmic atomic number) of the residual cosmic atom.
The time displacement losses from a purely magnetic system are the equivalent of successive ejection of neutrons, and this is undoubtedly the actual process in locations where the magnetic ionization level is zero so that the neutron is stable. These neutrons are then immediately available for atom building and constitute one of the sources of the new matter which is continually being formed throughout space, as indicated in the preceding discussion. In the local system where the neutron, 1-1-0, is unstable the time displacement is ejected in the form of a pair of equivalent stable particles, a neutrino, 1-1-(1), and a positron, 1-0-1.
The difference between successive elements in the magnetic (inert gas) series is equivalent to two neutrons (or neutrinos plus positrons), since the neutron has effective rotational displacement in only one magnetic dimension. Emission of the equivalent of one neutron therefore takes the atom only as far as the midpoint of the following group. The second emission moves it up to the next place in the magnetic series. When the 3-3 space displacement (c-krypton) is reached, conversion to the 1-1 time displacement takes place and the cosmic krypton atom disintegrates into two neutrinos. Only one positron is emitted in this process as the other electric time displacement is absorbed in the split into two magnetic particles and the resulting conversion of rotational mass to neutron mass. This completes the transformation of the cosmic atom into sub-material particles, which now become available for atom building in the material system.
Theoretically the whole decay process, all the way from cosmic helium to neutrons or their equivalent, should take place by successive emission of neutrons or pairs of neutrinos and positrons until the conversion is complete, and presumably this is the actual course of events, but the intermediate products of this step process are of varying degrees of stability, and since even the most stable cosmic atom has an extremely short life in a material environment, the least stable is not much more than a dividing line between two simultaneous processes. It is to be expected that the order of stability will be in the direction of the path of decay; that is, a naturally occurring process normally tends toward more stable products. A possible exception is the last intermediate product between c-argon and c-krypton, which is so close to the final conversion level that it may be abnormally short-lived.
The mass of a cosmic element is the inverse of the mass of the corresponding material element, hence the rotational mass of an element of cosmic atomic number n is 1/n on the natural scale or 2/n on the atomic weight scale. For convenience the masses of the cosmic ray decay particles are usually expressed in terms of electron masses and on this basis the 1/n natural units are equivalent to
2/n × 1823 = 3646/n
electron masses. The rotational masses of the cosmic elements in the normal decay path are therefore as follows:
|Cosmic Element||Natural Units||Electron Masses|
If we make the assumption, as previously suggested, that c-cobalt, which is within one-half of a magnetic unit of the final conversion level, has an abnormally short life for this reason, the most common and longest lived of the intermediate products of the decay of the primary cosmic particles is c-argon, with rotational mass 203. We will identify these intermediate products as mesons and c-argon as the mu meson. This mu meson is reported to have a mass of about 206, is formed by the decay of a heavier and shorter-lived meson, and itself undergoes a double decay process (two positrons emitted) in which it is completely converted to neutrinos. All of this agrees with theory if we assume that the lifetime of c-cobalt is near zero.
The immediately preceding cosmic element in the decay order is c-silicon, with rotational mass 260. This we identify as the pi meson. This particle has a life of about 10⁻⁸ sec, as compared with the mu meson life of approximately 10⁻⁶ sec, and it decays to the mu meson. Unlike the mu meson, which is practically inert, it has a strong tendency toward interaction with the material elements. All of these properties of the observed pi meson are strictly in accordance with theory. The difference in the reaction tendencies of the pi and mu mesons is, of course, due to the fact that the pi meson (c-silicon) has an effective displacement in the electric dimension whereas the mu meson (c-argon) is a cosmic inert gas and has no electric displacement.
The measured mass of the pi meson is usually reported somewhere in the range from 265 to 275. In view of the experimental difficulties involved, these measurements are not entirely inconsistent with the theoretical value of 260 for the rotational mass of c-silicon but it is also possible that the greater mass is real. It has been emphasized in the preceding discussion that the values given for the masses of the cosmic elements refer to the rotational mass only. These elements, like the material elements, may have isotopes and the total mass applicable to a particular element may vary through a substantial range, just as in the material system.
In the primary stream of particles the isotopes, except c-H1, should normally be lighter than the parent atoms, since the cosmic isotopic weight will be above the cosmic atomic weight corresponding to the rotational mass, for the same reasons as in the material system. The material equivalent of this greater cosmic atomic weight is a smaller mass. On the other hand the intermediate products which are formed and exist in the material environment are subject to the same magnetic ionization forces as the material atoms and like those units will tend toward isotopic masses which are greater than the mass of the parent atom. The relative probability of the existence of heavier isotopes is in the same order as the probability in the material system, since it is the material environment that determines this probability. Isotopes of c-argon, the mu meson, which is the cosmic equivalent of helium, should be relatively rare, with those of c-silicon, the pi meson, somewhat more common. An isotope of mass 270, corresponding to a cosmic atomic weight of 27 for c-silicon, is therefore entirely in order.
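The isotope figure of 270 can be checked by restating the mass relation per unit of cosmic atomic weight rather than per atomic number. Since the rotational mass 3646/n electron masses corresponds to a cosmic atomic weight of 2n, a cosmic atomic weight A is equivalent to 7292/A electron masses. A sketch (the 7292/A form is simply the 3646/n relation rewritten in terms of A; it reproduces the figures in the text):

```python
# Mass in electron masses corresponding to a cosmic atomic weight A.
# Rotational mass 3646/n corresponds to cosmic atomic weight 2n,
# hence mass = 3646 / (A/2) = 7292/A.
def mass_from_cosmic_weight(A):
    return 7292 / A

# c-silicon rotational mass (A = 28) vs. the isotope of cosmic
# atomic weight 27; note that a *smaller* cosmic atomic weight
# corresponds to a *greater* material mass, as stated above.
print(mass_from_cosmic_weight(28))  # ~260, the pi meson
print(mass_from_cosmic_weight(27))  # ~270, the isotopic mass cited
```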
According to the theoretical decay scheme there should be two more mesons of still shorter life between cosmic helium and the pi meson. Particles with masses in this range (approximately 350 to 750) are reported from time to time but the significance of these results is still somewhat uncertain. From the decrease in life span in passing from the mu meson to the pi meson it may be deduced that the mean life of the hypothetical earlier mesons will be in the range from about 10⁻¹⁰ sec downward, and the detection of such particles obviously presents a difficult problem. The immediate predecessor of the pi meson, c-neon, is another cosmic inert gas, which complicates the problem of identifying it.
The experimental production of pi mesons now being reported from the particle accelerators should follow a similar chain of events, since the theoretical result of the conversion of kinetic energy (linear space displacement) into cosmic matter (rotational space displacement) is the cosmic neutron. This particle should be converted into c-helium practically instantaneously, and the decay should thereafter follow the cosmic ray pattern.
At the present time the experimental results are interpreted as indicating that the great majority of the primary cosmic ray particles have unit atomic weight. If this is correct it indicates that the “stripping” of the smaller atoms is already well advanced when the first observations are made, as the cosmic atom of unit mass on the material atomic weight scale is c-helium, whereas the original stream must be composed primarily of c-hydrogen. The stripping may be somewhat slower for the larger atoms and the first stage of magnetic decay in these structures may in some cases precede the electric decay, in which case different intermediate products will be formed. There is considerable evidence, for instance, of the existence of a meson with a mass in the neighborhood of 900, which corresponds to c-beryllium, and meson masses have been reported through practically the entire range from the pi meson to c-helium. Perhaps some of these values are in error, but there is an increasing amount of evidence of the existence of mesons other than the common mu and pi types, which is particularly significant in view of the large number of theoretically possible particles of this kind.
Many decay events of a complex nature have also been detected and studied in recent cosmic ray work. It is probably too early to attempt a definite identification of the particular cosmic particles involved in these events, but it should be pointed out that the cosmic elements are subject to the same kind of combining forces as those which are responsible for the great variety of chemical compounds in the material system and there is every reason to believe that the incoming stream of cosmic matter contains cosmic compounds as well as cosmic elements. Only the simpler types can be expected to survive long enough in the terrestrial environment to be recognized, but even so the number of different combinations that may be encountered is very large. It is definitely in order to suggest that in at least some of these more complex cosmic ray events we are observing the disintegration of cosmic chemical compounds.
Another contact between the material and non-material sectors of the universe occurs through the medium of radiation. Cosmic matter radiates its linear vibrational motion in the same manner and under the same conditions as ordinary matter, and there is just as much cosmic radiation in the universe as a whole as there is radiation from the material structures. Like the radiation with which we are familiar, the cosmic radiation covers the entire spectrum of wavelengths, but in the reverse order. The cosmic equivalent of wavelength a (in natural units) is a wavelength of 1/a. Cosmic x-rays and gamma rays are therefore in the long wave region whereas cosmic radio waves appear with short wavelengths.
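The inversion rule for cosmic radiation can be put in explicit form: a radiation of wavelength a natural units in one sector appears with wavelength 1/a natural units in the other, so wavelengths below one natural unit map into the long-wave region and conversely. A minimal sketch (the sample values are illustrative only, not drawn from the text):

```python
# Cosmic equivalent of a wavelength expressed in natural units:
# a wavelength of a natural units inverts to 1/a natural units.
def cosmic_wavelength(a):
    return 1 / a

# Short wavelengths (x-ray and gamma region, a << 1) invert into the
# long-wave region, and long wavelengths invert into the short-wave
# region, reversing the order of the spectrum as described above.
for a in (0.001, 0.1, 10.0, 1000.0):
    print(a, "->", cosmic_wavelength(a))
```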
In our material system mass is continually being converted into energy and the energy is not only being dissipated into space but is also degraded to lower frequencies as it moves outward. In the cosmic system the same sort of processes are operative and cosmic mass is undergoing the same gradual attrition. As has been emphasized previously, however, the Fundamental Postulates do not permit the existence of basic processes which operate only in one direction, and it therefore follows that both matter and cosmic matter must be reconstituted in some manner from radiation. Let us see if we can determine the nature of this reverse mechanism.
An atom of matter or a sub-material particle is a vibrating space displacement (photon) rotating with displacement in time. In order to produce such a particle we must therefore have (1) a photon with a space displacement, (2) a high c-energy photon or other source of sufficient time displacement to cause rotation of the first photon, and (3) the proper kind of contact between the two. Now let us ask where these ingredients are available. The answer is, everywhere in the cosmic sector of the universe. All cosmic matter is emitting cosmic thermal radiation consisting of photons with space displacement (frequencies in the x-ray region) while thermal and radio frequency radiation, which is high energy radiation from the cosmic standpoint, is continually entering from the material sector. Contact of these two types of photons in an appropriate manner, a requirement which probability will satisfy sooner or later in a region where both are present in quantity, produces the sub-material particles which ultimately form matter. The type of radiation normally produced by the destruction of matter in the material sector is therefore converted back into matter in the cosmic sector.
Similarly the cosmic thermal radiation produced by the destruction of cosmic matter in the cosmic stars is converted back into cosmic matter by interaction with the material thermal radiation in the material sector. The primary product is the cosmic neutron which, as in the very similar process in the particle accelerators, is promptly converted into cosmic helium and then follows the normal cosmic ray decay path. Although the cosmic matter thus produced is otherwise indistinguishable from that which constitutes the cosmic rays previously described, it lacks the high velocities of the latter and probably does not penetrate very far into the atmospheres of the stars or planets. Identification of these particles will therefore be difficult. It is possible, however, to recognize the products of the related reaction which takes place when the incoming cosmic radiation is intercepted by atoms of matter rather than by photons. A single cosmic photon is not able to produce a magnetic rotation of the atom because of the complex atomic rotational structure, and instead it imparts an electric rotational vibration, ionizing the atom. In the outer atmospheres of the stars and planets we can therefore expect to find appreciable amounts of highly ionized atoms of the various elements that are present in the incoming flow of interstellar matter.
This phenomenon is readily identified in the corona of the sun. The surface temperature of the sun is about 6000° K and it is evident that if conditions in the vicinity of the sun are normal there must be a temperature gradient in the outer regions, including the corona, from this 6000° level down to the temperature of interstellar space. The ionization level in the chromosphere, however, corresponds to the thermal ionization which would exist at a temperature of 20,000° to 30,000° K, and in order to explain the still stronger ionization in the corona on a thermal basis it would be necessary to assume a temperature in the neighborhood of one million degrees. The observed level of ionization is therefore inconsistent with a thermal origin unless a highly abnormal temperature situation exists in this region, and no convincing reason why conditions should be abnormal has ever been discovered. We are thus led to the conclusion that the ionization is not thermal and that it is a product of the cosmic radiation which, according to theory, should be causing just the kind of effect which we observe. In the light of this explanation the location of the maximum ionization in the outer regions of the corona is to be expected, since the matter in this zone is exposed to the maximum cosmic radiation. As this radiation travels inward it is gradually attenuated by contacts with the diffuse material in the intervening space and the degree of ionization of the material atoms is reduced accordingly.
The cosmic radiation of this type, originating from cosmic thermal and cosmic radio wavelength sources, is in the x-ray region and it is not received at the earth’s surface as it is cut off by the upper atmosphere. Even if it were accessible, however, it would be very difficult to interpret since the aggregations of cosmic matter are localized in time, not in space, and consequently we do not receive a continuous stream of radiation from a cosmic source as we do from a material source. The same comments apply to any other type of radiation from the cosmic aggregates. Such radiation if received at all is received by us as if it originated uniformly throughout space and whatever variations may exist are functions of time only.
We can, however, detect and identify radiation of the cosmic type originating from sources within the material sector of the universe. Inasmuch as the cosmic equivalent of visible light does not reach us, our reception of cosmic radiation is confined to the other principal type of radiation, the cosmic gamma rays, which we receive at radio wavelengths. These cosmic gamma rays originate from cosmic matter subjected to forces which cause atomic readjustments, just as the normal gamma rays originate from ordinary matter under the same conditions. Aside from the cosmic rays, the only appearance of cosmic matter in the material system is in connection with processes of extreme violence: galactic and super-nova explosions, inter-galactic collisions, etc. Objects which are undergoing or have recently (in the astronomical sense) undergone such processes are therefore the principal sources of the localized long wave radiations which are now being studied in the relatively new science of radio astronomy. Typical examples of the kinds of sources mentioned are the Crab Nebula (a super-nova), Messier 87 (an exploding galaxy) and Cygnus A (colliding galaxies).
Generation of long wave radiation by material systems is also possible, and no doubt many of the signals picked up by the radio telescopes emanate from such sources. The strong signals, however, are more likely to originate from cosmic sources, since the intensity peak for cosmic gamma radiation, as expressed by the cosmic equivalent of Wien’s Law, is actually in the radio region, whereas the equivalent peak for thermal radiation is at very much shorter wavelengths, and a strong radio signal from thermal sources would therefore require an extremely powerful emitter.
Here, for the time being, we come to a stopping point. The project is not complete and many interesting and important problems are still unsolved or only partly solved, but the task is of such a nature that it will never be complete; no matter how far we go there will always be unexplored territory ahead. The subject matter that has been covered herein should, however, be more than adequate to demonstrate that the theoretical universe that necessarily exists if the Fundamental Postulates are valid is identical with the physical universe in which we live and upon which we make our observations. No work of finite proportions could penetrate very far into the profusion of details and minor variations that characterizes almost every physical phenomenon, but we have explored the major relationships all the way from the force characteristics of the smallest atom to the ultimate fate of the largest galaxy.
At no point in this wide field of coverage have the Fundamental Postulates failed to give a straightforward answer. It has not been necessary to treat anything other than space-time itself as an unanalyzable quantity; no limitations on the scope of applicability of the basic laws and principles derived from the Postulates have been required; no mysterious forces or special behavior characteristics have had to be postulated to account for discrepancies. Perhaps some of the answers are incomplete or only partially valid, but if and where this is true the fault undoubtedly lies in imperfect interpretation, as the validity of the basic principles is supported by an overwhelming mass of evidence.
As indicated in the introductory paragraphs, the general policy upon which the original program for this investigation was based called for retracing those steps taken in recent years which have had the effect of divorcing physical theory from physical reality, and making a new start along a route more closely defined by established physical facts. The somewhat unexpected, but in retrospect quite logical, result of following this policy has been in essence a return to the mechanical model. There are some features of the new theoretical structure which may give rise to conceptual difficulties, at least on first acquaintance, but this theory reduces all physical phenomena to motion, and in general it should meet the specifications of those who agree with Kelvin in wanting an explanation of the physical universe which can be visualized: a demand that modern scientific opinion has been inclined to look upon as naive and even somewhat juvenile.
It will also be noted that the mathematical development of the basic relations is extremely simple. This is another result that was not anticipated. In fact, a considerable amount of time was wasted in the early stages of the investigation in attempting to overcome obstacles by direct mathematical assault. Invariably such attempts were totally unsuccessful, and when the true relationships were finally discovered, usually in a roundabout manner through the medium of advances made in some related field, it was always found that nothing more than a very simple mathematical treatment was required. After this same succession of events had been repeated several times it finally became clear that the uniformly negative results were not accidental; they involved an important fundamental principle. The physical relations being studied were basic and therefore simple. In a universe wherein the complex is built up from the simple, no complex physical mechanism can exist until the simple basic elements have been developed to the point where complex relationships have emerged. The mathematical expressions of these basic relations must then be equally simple, as the true mathematical representation of a physical phenomenon cannot be more complex than the phenomenon itself. The eventual recognition of this principle contributed immeasurably to the solution of problems subsequently encountered: any hypothesis which could not be represented in simple mathematical form was immediately set aside as unacceptable, and the available time and effort were channeled into a search for the unrecognized true relationship rather than dissipated in fruitless mathematical investigation of hypotheses based upon accepted ideas now seen to lack validity.
As a final word, it may be appropriate to say something about the general nature of the results. There are, no doubt, those who will feel that the development of new information indicating that many of the accepted scientific theories are erroneous shatters their belief in the permanence of scientific truth in general. Obviously if we are to discard today what we accepted as established facts yesterday, science will have lost its unique standing as a permanent and ever-growing systematic arrangement of knowledge. But this is not the true situation; the new information brought out in this work has not in any way disturbed the standing of any established facts or principles of science; it has merely demonstrated that some of the interpretations of and inferences based upon these established facts are in error.
It is true that there has been an unfortunate tendency in recent years to confuse fact and speculation and to elevate mere theories (relativity, for example) to a standing coordinate with or even superior to established facts. All too often we find statements of pure theory introduced by “It is now known that…”, “It is certain that…”, “Modern science has proved that…”, etc., when the introduction should be “We think…” or some equivalent. One of the major tasks involved in carrying out this present program of investigation has been to separate fact from assumption and inference. But this is no reflection on science; it merely means that some scientists, by no means all, have succumbed to a characteristically human but definitely unscientific tendency to accept presumably authoritative pronouncements without adequate analysis and critical appraisal. One of the particularly subtle arguments that has helped to confuse the issues and to blur the line between factual and non-factual material is the contention so often raised that the theory under consideration is able “in principle” to explain all details and to reproduce all experimental values, and that inability to achieve this result in actual practice is merely due to mathematical complexity beyond the capabilities of existing facilities. In reality, of course, this “in principle” argument is a means of evading the issue, not of meeting it. Unless and until a hypothesis can be tested against facts it remains a hypothesis, no matter what it can do in principle.
Whatever advances have been made in the present work have not been the result of using any different methods but have been achieved by a more rigorous application of the recognized scientific disciplines: a more critical examination of the validity of the inferences drawn from experimental results, a more careful separation of facts from assumptions, and a more ruthless policy of discarding theories which cannot show full quantitative agreement with observation. Scientific methods of investigation and critical evaluation based upon established facts as the ultimate authority are the most powerful tools ever devised by man for the advancement of knowledge, and our rate of progress toward a better understanding of the world in which we live will be largely dependent on the extent to which we take advantage of these available tools.
|Velocity of light||2.99790×10¹⁰ cm/sec|
|Rydberg constant (hydrogen)||1.09677×10⁵ cm⁻¹|
|Gravitational constant||6.670×10⁻⁸ dynes × cm² × g⁻²|
|Faraday constant||2.89356×10¹⁴ e.s.u. × g-equiv⁻¹|
|Gas constant||8.31436×10⁷ ergs × deg⁻¹ × mol⁻¹|
|Molar gas volume||22.4145×10³||22.4157×10³||cm³|
|Mass of hydrogen atom||1.67339×10⁻²⁴||1.67339×10⁻²⁴||g|
|Mass of unit atomic weight||1.66124×10⁻²⁴||1.65990×10⁻²⁴||g|
|Planck constant *||6.648×10⁻²⁷||6.62377×10⁻²⁷||erg sec|
|Radiation constant||5.65475×10⁻⁵||5.6699×10⁻⁵||ergs/(cm² × deg⁴ × sec)|
|Electronic charge **||4.80690×10⁻¹⁰||4.807×10⁻¹⁰||e.s.u.|
|Ratio H¹ to electron mass||1836.62||1837.1|
|Fine structure constant||137.481||137.043|
* Subject to adjustment—See text
** See text
|t/s||Electric charge||4.80690×10⁻¹⁰ e.s.u.|
|t/s²||Electric potential||3.11896×10⁸ volts|
|t/s³||Electric field intensity||6.84156×10¹³ volts/cm|
|t/s³||Electric flux density||23.1290 e.s.u./cm²|
|t²/s²||Magnetic charge||1.60342×10⁻²⁰ e.s.u.|
|t²/s²||Magnetic flux||4.74294 maxwells|
|t²/s³||Magnetic potential||1.04038×10⁶ gilberts|
|t²/s⁴||Magnetic induction||2.28212×10¹¹ gauss|
|t²/s⁴||Magnetic field intensity||2.28212×10¹¹ oersteds|
|t/s||(time-space region)||3.62074×10¹² degrees C|
|t/s||(time region)||510.16 degrees C|
Section VII extends the calculation of inter-atomic distances to chemical compounds. Part of this material was included in Section VI. The omitted portion completes the calculation of distances for all of the common inorganic binary compounds crystallizing in the normal structural forms, including both isotropic and anisotropic crystals. The normal distances between the atoms in complex crystals are also evaluated.
Section VIII takes up the subject of the specific volume of complex compounds in which the volume is not a direct function of the inter-atomic distances. An expression for the calculation of these volumes is derived from the general principles previously formulated, and this expression is applied to the calculation of specific volumes for a substantial number of typical inorganic compounds of this class. The applicability of the same expression to organic compounds is also indicated, but actual calculations are deferred to a later section.
Section IX develops from the inter-atomic force equation a general expression for the effect of compression on solid structures. It is shown that the inward force acting on the solid under equilibrium conditions is equivalent to an initial pressure, and the total effective pressure is the sum of this initial pressure and the applied external pressure. An equation for the calculation of the initial pressure is derived from the general pressure expression and initial pressures are calculated for the elements and a large number of compounds. A simple relation between the initial pressure and the initial compressibility is derived and initial compressibilities are calculated for the elements and many compounds. It is shown that these values agree with the experimental results within the probable margin of error. An additional equation is formulated from the general pressure expression to enable calculation of the relative volumes of solids under pressure. Values obtained from this equation are then compared with most of Bridgman’s data on solid compression, including practically all of his results on the elements and a large amount of the data on both organic and inorganic compounds.
Section X takes up the subject of valence. It is shown that the factors determining valence are entirely separate and distinct from those entering into the determination of the inter-atomic distance, and that the valence equilibrium is of a totally different nature from the inter-atomic force equilibrium. The various types of valences, their derivation, and their characteristics are described and explained, and the possible valences of each element are tabulated. The factors governing the relative stability of the alternate valence combinations are determined. The nature of radicals and their participation in the molecular structure are explained and the composition and characteristics of the common inorganic radicals are covered in detail.
Section XI extends the principles developed in the preceding section to the compounds of the organic division. It is shown that the accepted “bond” theory of organic structure is not a true representation of the nature of these compounds, and that they are in reality constructed in the same manner as the inorganic compounds, differing from the latter in some respects only because of the two-dimensional nature of most of the inter-atomic forces in the organic compounds. The effect of this factor on the characteristics of the organic radicals and the interior structural groups of the organic compounds is explained. A condensed, but substantially complete, discussion of the chain compounds then follows, indicating how the special structural features of the various classes of compounds of this type result from the operation of the general principles previously derived. It is pointed out that the new theory derived from the Fundamental Postulates not only accounts for the major structural features as well as the accepted “bond” theory does but also explains many facts on which the bond theory is silent; for example, the difference between the hydroxyl hydrogen (replaceable by Na) and the methyl hydrogen (replaceable by Cl), the reason why CO exists as a separate compound but CH2 does not, and so on.
Section XII is a similar detailed discussion of the ring compounds. The reason for the existence of the ring structure is explained, together with such other factors as the unusual stability of the benzene ring, the ability of the aromatic rings to utilize structural groups which do not appear in the chain compounds, the structural relationships in the condensed rings, etc. Each of the principal families of cyclic, aromatic, and heterocyclic compounds is discussed and the special features of these various groups are shown to result from the operation of the applicable general principles. A number of revisions of chemical nomenclature are suggested to conform with the new relationships which are established.
Section XIII introduces the property of heat. The nature and characteristics of thermal motion are derived from the Fundamental Postulates. The concept of temperature is defined and the conversion constants relating the natural and Centigrade scales are evaluated. Mathematical expressions are developed for the heat content of the solid and its derivative, the specific heat. The general specific heat pattern for the elements is derived from the latter equation and the nature of the possible variations is determined. Values are calculated for the specific heats of elements of different types and a number of diagrams are presented to show the correlation between these theoretical specific heat curves and the observed values. It is shown that the specific heats of the simple inorganic compounds follow the same pattern as those of the elements and additional similar graphs are included for these compounds. The concept of the thermal group is introduced and the specific heats of representative organic and complex inorganic compounds are calculated with the assistance of this concept. These calculated values are also compared with the experimental data in appropriate diagrams.
Section XIV applies the relationships of the preceding section to the inter-atomic force equation to determine the nature and characteristics of thermal expansion. An equation is developed for calculating the expansion of different substances and the expansions thus obtained for a number of elements are compared with experimental values.
Section XV examines the effect of a continued increase in thermal energy on the force system of the individual molecule and shows that at a particular thermal level, which varies with the nature of the substance, this system experiences a drastic change. The transition temperature is identified as the melting point and the new condition beyond the transition is identified as the liquid state. It is made clear that physical state is a property of the individual molecule and not, as generally assumed, a “state of aggregation.” In the vicinity of the melting point the liquid aggregate is a mixture of solid molecules and liquid molecules in proportions determined by probability considerations (not a mixture of solid and liquid aggregates, but a liquid which contains some solid molecules). The existence of both kinds of molecules in the aggregates in this and the similar region in the vicinity of the critical temperature has a major effect on the properties of the aggregates in these regions and much of the mathematical development in the next few sections is devoted to a determination of the magnitude of these effects. At this time the effect on the liquid specific heat is examined. A general liquid specific heat expression is derived and it is shown that modification of this expression as required by the presence of solid molecules results in a curve which reproduces the experimental results. The nature of the heats of fusion and transition is explained and the method of calculating the heat of fusion is indicated. In order to obtain some information needed in the subsequent development, some further attention is given to the property of mass and the concept of secondary mass is introduced and explained. The mass of the H¹ atom and the mass equivalent of unit atomic weight are calculated and from the latter figure Avogadro’s number is derived.
Section XVI establishes the relation of the low temperature volume (or density) of the liquid to the solid volume and derives a mathematical expression for computation of this liquid volume. A liquid equivalent of Avogadro’s Law is formulated on the basis of this expression. The liquid volume at these temperatures is shown to consist of two separate components: a constant initial component and a temperature-dependent component. Densities of approximately 700 organic compounds and 100 other substances (elements, fused salts, etc.) calculated on this basis are shown to be in agreement with experimental values. The nature and magnitude of the structural factors involved in these calculations are discussed.
Section XVII considers the transition from liquid to gas at the upper end of the liquid temperature range and produces further evidence supporting the theoretical conclusion that physical state is a property of the individual molecule. The general nature of the gaseous state is considered and the Gas Laws are derived from the Fundamental Postulates. The molar gas volume is computed from the basic conversion constants by means of the Gas Laws. Equations for the specific heats of gases are derived and their scope of application is indicated. The critical temperature is defined and an expression for calculation of the values applicable to different substances is formulated. Critical temperatures are calculated for approximately 200 elements and compounds and the results are shown to be in agreement with experimental values.
Section XVIII extends the liquid volume relationships to the higher temperatures. It is demonstrated that these high temperature volumes include a third component in addition to the two which make up the low temperature volume. An equation for the orthobaric volume is developed and it is shown that the volumes of approximately 50 elements and compounds computed over the range of temperatures from the boiling point to the critical temperature are in agreement with the measured values. The computations for water are extended down to the freezing point in order to illustrate the effect of the increasing proportion of solid molecules on the liquid volume. The probability relations applying to this situation are developed and from the probability values the percentage of solid molecules in liquid water is computed for each temperature. A composite solid-liquid volume is then obtained in each case and the resulting values are shown to agree with the measured volumes of the liquid aggregate.
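The composite solid-liquid volume described above can be sketched as a proportion-weighted combination of the two molecular volumes, the proportion being the calculated percentage of solid molecules. The weighting rule and the sample figures below are illustrative assumptions, not values from the text.

```python
# Composite solid-liquid volume as a weighted average of the specific
# volumes of the two kinds of molecules present in the aggregate.
# The fraction and the two volumes used here are illustrative only.
def composite_volume(solid_fraction, v_solid, v_liquid):
    """Specific volume of an aggregate with the given fraction of solid molecules."""
    return solid_fraction * v_solid + (1.0 - solid_fraction) * v_liquid

# e.g. 10% solid molecules in near-freezing water (hypothetical figures)
v = composite_volume(0.10, v_solid=1.090, v_liquid=1.000)  # cm^3/g
print(f"{v:.4f} cm^3/g")  # → 1.0090 cm^3/g
```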
Section XIX is a discussion of liquid compressibility. Further elaboration of the relationships previously developed indicates that the compressive forces act on each of the three volume components separately, and a mathematical expression is derived for each effect. An equation for calculating the initial pressure applicable to the liquid (which is not the same as the solid initial pressure) is also formulated, and the initial pressures for a large number of liquids are calculated. All of this information is then applied to a computation of the compressions of various liquids studied by Bridgman, and calculated values for 25 compounds at several different temperatures and over a wide range of pressures are shown to be in agreement with Bridgman’s results. Following these comparisons, which apply to liquids in which the solid component is still negligible at the highest pressure of observation, the discussion is extended to those liquids which begin the transition to the solid state within the experimental range. The effect of pressure on the probability relations is evaluated, and the proportion of solid molecules in the liquid aggregate is calculated for each individual temperature and pressure of observation, using the same methods as in the water calculations of Section XVIII. A good correlation with Bridgman’s results is shown on 16 different liquids over a wide range of temperatures and pressures. A very extensive tabulation of values for liquid water is included.
Section XX examines the corresponding situation on the other side of the melting point: the modification of the solid volume due to the presence of liquid molecules. The percentages of these liquid molecules in the solid aggregates under pressure and the resulting aggregate volumes are calculated by the methods of Section XIX. The tabulated comparisons of that section are then extended into the solid state up to Bridgman’s experimental pressure limit. This section also examines the volume relations in the liquid-gas transition zone. An expression for the compression of the critical volume component is derived and applied to the volumes calculated by the methods of Section XVIII to determine the volumes of the high-temperature liquid under pressure. Values for water and six hydrocarbons are shown to be in agreement with experimental results.
Section XXI is a discussion of surface tension. This phenomenon is explained as another manifestation of the same force that is responsible for the liquid initial pressure, and the initial pressure equation is modified for application to the calculation of the surface tensions. Values are computed for more than 100 substances over the normal liquid temperature range and it is shown that these values agree with the experimental results. The nature of the structural factors which determine the individual values is discussed.
Section XXII extends the application of the principles developed in connection with the discussion of the melting point in Section XV and shows that a similar change of state of the individual molecule takes place at the critical temperature. The process of evaporation at temperatures below the critical point is shown to result from the operation of the same probability principles. The general nature of the vapor state is explained. A mathematical expression for the specific heat of the vapor is developed and a number of curves based on this expression are compared with experimental data. The relation of vapor volume to liquid volume is discussed and a general equation for saturated vapor volume is derived. Volumes calculated from this equation for 16 compounds over the normal liquid temperature range are shown to agree with experimental results. An equation is derived for the critical volume and calculated values are compared with experimental data. It is shown that the factors determining the total heat of liquids and vapors are the same as those determining the volume, and the volume equations are modified to apply to total heat. The total heat of liquid water and saturated steam is calculated at 20° intervals all the way from the melting point to the critical temperature, and it is shown that the calculated values agree with the experimental results.
Section XXIII analyzes the results of superheating a vapor and develops an expression for calculating the superheated vapor volume. Because of the rather small amount of variation between substances and the large amount of tabular data required to cover the normal temperature and pressure range of each substance, the comparisons between calculated and experimental values are limited to five compounds at constant pressure over a range of temperatures and two more at constant temperature over a range of pressures, plus water vapor over a wide range of conditions. The relation of the superheated vapor volume to the volume of gases in the range above the critical volume is discussed and the superheated vapor equation is modified to apply to the volumes of real gases. Close agreement is shown between the experimental values and the volumes calculated from this equation for seven compounds. This comparison includes a very extensive tabulation of water volumes.
Section XXIV shows that the volumetric behavior of gases in the range below the critical volume is totally unlike that in the range covered in Section XXIII, and the condition existing below the critical volume and above the critical temperature is defined as a different state of matter: the condensed gas state. It is shown that the theoretical principles require condensed gases to follow volumetric relations analogous to those of the liquid, and values calculated on this basis for representative compounds are shown to agree with the observed volumes. As in the preceding sections, very extensive comparisons of water volumes are included, covering the entire range up to 2500 atm. and 1000° C at 50° intervals.
Section XXVI is a discussion of the phenomena originating from the presence of electrons in the material environment. A portion of this material is included with Section XXV. In the omitted portion a mathematical expression for the calculation of resistivities of conductors is derived from the basic principles and resistivities computed for the elements are compared with such experimental values as are available. The nature of superconductivity at low temperatures is explained. An equation is developed for the effect of compression on resistivity and the values calculated from this expression are shown to be in agreement with Bridgman’s results on 23 elements.
Section XXXIV is devoted to refraction. A portion of this material is included with Section XXXIII to show the general nature of the refraction phenomenon and the method of calculation of the refractivity where the factors involved are relatively simple. The omitted portions include a discussion of the more complex refraction patterns and numerical calculations of both refraction and dispersion for approximately 500 substances.